Q: How to create a custom component using JSF 1.2? I am new to the JSF world. Please give me a step-by-step explanation of how to create a JSF custom component. I searched the net but didn't find a proper answer; alternatively, a link that shows how to create a custom component would help.
Thanks
Vinod
A: Googling for "extend UIComponentELTag" or "extends UIComponentELTag" should yield enough hints.
This is one of my favourites: http://blogs.steeplesoft.com/2006/12/jsf-component-writing-check-list/
A: RichFaces includes the CDK (Component Development Kit), which can be used for component development.
Here is a link to the guide: RichFaces CDK Developer Guide
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/3255102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: When validation fails, associated record count still changes Here's a problem. I have my models:
class Collection < ActiveRecord::Base
  has_many :collection_ownerships
  has_many :items
  has_many :users, :through => :collection_ownerships
  validates :title, presence: true, length: { maximum: 25 }
  validates :description, length: { maximum: 100 }
end

class Item < ActiveRecord::Base
  belongs_to :collection
  has_many :item_ownerships, :dependent => :destroy
  accepts_nested_attributes_for :item_ownerships
  validates :name, :presence => true, length: { maximum: 50 }
  validates :collection, :presence => true
end

class ItemOwnership < ActiveRecord::Base
  belongs_to :item
  belongs_to :user
  validates :user, :presence => true
  validates :item, :presence => true
end
Here is my controller code:
before_filter(:except => :toggle_item_owned_state) do
  @collection = current_user.collections.find(params[:collection_id])
end

def new
  @item = @collection.items.new
  @item_ownership = @item.item_ownerships.build(:owned => true, :user => current_user, :item => @item)
end

def create
  @item = @collection.items.new(item_params)
  @item_ownership = @item.item_ownerships.build(:owned => false, :user => current_user, :item => @item)
  if @item.save
    redirect_to collection_items_path(@collection)
  else
    flash.now[:alert] = "There was a problem saving this item."
    render "new"
  end
end
I have a couple of controller tests:
describe 'POST#create' do
  context "with bad data" do
    it "should not create a new record for 'items'" do
      expect { post :create, :collection_id => batman_collection.id,
               :item => { :name => '',
                          :item_ownership_attributes => { :owned => '1' } }
      }.to change(Item, :count).by(0)
    end

    it "should not create new record 'item_ownerships'" do
      expect { post :create, :collection_id => batman_collection.id,
               :item => { :name => 'item_name',
                          :item_ownership_attributes => { :owned => '1' } }
      }.to change(ItemOwnership, :count).by(0)
    end
  end
end
When I run my tests, the second one fails:
1) ItemsController authenticated user POST#create with bad data should not create new record 'item_ownerships'
Failure/Error: expect { post :create, :collection_id => batman_collection.id,
expected #count to have changed by 0, but was changed by 1
# ./spec/controllers/items_controller_spec.rb:62:in `block (5 levels) in <top (required)>'
And ultimately this reflects in the view as well. Now, when I look in the db, I don't see the record created. I assume this is happening because somehow count is reflecting the number of objects in memory, not in the db.
How can I get a handle on this situation? The problem manifests itself in that when the form is submitted and validation fails for Item, multiple instances of ItemOwnership end up showing up on the form.
Thanks
A: I found the problem. My second test is wrong: it actually submits a valid request to the server, since 'name' is not blank.
I removed it and now the test passes.
A: By default, Rails will save records created from nested attributes even if validation of the parent record fails. That's why Item.count increases. Unlike collection#size, .count always queries the database.
What you need to do is add a validation on Item that tells Rails an Item is not valid unless the associated collection is valid.
class Item < ActiveRecord::Base
  belongs_to :collection
  has_many :item_ownerships, :dependent => :destroy
  accepts_nested_attributes_for :item_ownerships
  validates :name, :presence => true, length: { maximum: 50 }
  validates :collection, :presence => true
  validates_associated :collection #!
end
http://guides.rubyonrails.org/active_record_validations.html#validates-associated
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30509798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Best practice for keeping denormalized schema up to date? I'm creating a game with points for doing little things, so I have a schema as such:
create table points (
  id int,
  points int,
  reason varchar(10)
)
and to get the number of points a user has is trivial:
select sum(points) as total from points where id = ?
however, performance has become more and more of an issue as the points table expands. I want to do something like:
create table pointtotal (
  id int,
  totalpoints int
)
What is the best practice for keeping them in sync? Do I try to update pointtotal on every change? Do I run a daily script?
(Assume I have the right keys - they were left out for conciseness)
Edit:
Here are some characteristics that I left out but should be helpful:
Inserts/Updates to Points are not all that frequent
There are a large number of entries, and there are a large number of requests - the keys were pretty trivial, as you can see.
A: The best practice is to use a normalized database schema. Then the DBMS keeps it up to date, so you don't have to.
But I understand the tradeoff that makes a denormalized design attractive. In that case, the best practice is to update the total on every change. Investigate triggers. The advantage of this practice is that you can make the total keep in sync with the changes so you never have to think about whether it's out of date or not. If one change is committed, then the updated total is committed too.
However, this has some weaknesses with respect to concurrent changes. If you need to accommodate concurrent changes to the same totals, and you can tolerate the totals being "eventually consistent," then use periodic recalculation of the total, so you can be sure only one process at a time is changing the total.
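For illustration, the trigger idea above could look like this - a sketch only, using SQLite syntax (driven from Python) for brevity; MySQL's CREATE TRIGGER syntax differs in detail, and the table names simply follow the question:

```python
import sqlite3

# Sketch of a trigger-maintained rollup table. SQLite syntax is used here
# purely for brevity; MySQL trigger syntax differs in detail.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE points (id INT, points INT, reason VARCHAR(10));
CREATE TABLE pointtotal (id INT PRIMARY KEY, totalpoints INT);

CREATE TRIGGER points_after_insert AFTER INSERT ON points
BEGIN
    -- Make sure a total row exists, then add the new points to it.
    INSERT OR IGNORE INTO pointtotal (id, totalpoints) VALUES (NEW.id, 0);
    UPDATE pointtotal SET totalpoints = totalpoints + NEW.points
    WHERE id = NEW.id;
END;
""")

conn.execute("INSERT INTO points VALUES (1, 10, 'win')")
conn.execute("INSERT INTO points VALUES (1, 5, 'bonus')")
total = conn.execute(
    "SELECT totalpoints FROM pointtotal WHERE id = 1").fetchone()[0]
print(total)  # 15
```

Because the trigger runs in the same transaction as the INSERT, the total is committed together with the change, which is exactly the consistency property described above.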
Another good practice is to cache aggregate totals outside the database, e.g. memcached or in application variables, so you don't have to hit the database every time you need to display the value.
The query "select sum(points) as total from points where id = ?" should not take 2 seconds, even if you have a huge number of rows and a lot of requests.
If you have a covering index defined over (id, points) then the query can produce the result without reading data from the table at all; it can calculate the total by reading values from the index itself. Use EXPLAIN to analyze your query and look for the "Using index" note in the Extra column.
CREATE TABLE Points (
  id INT,
  points INT,
  reason VARCHAR(10),
  KEY id (id, points)
);
EXPLAIN SELECT SUM(points) AS total FROM Points WHERE id = 1;
+----+-------------+--------+------+---------------+------+---------+-------+------+--------------------------+
| id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
+----+-------------+--------+------+---------------+------+---------+-------+------+--------------------------+
| 1 | SIMPLE | points | ref | id | id | 5 | const | 9 | Using where; Using index |
+----+-------------+--------+------+---------------+------+---------+-------+------+--------------------------+
A: By all means keep the underlying table normalized. If you can deal with data potentially being one day old, run a script each night (you can schedule it) to do the roll-up and populate the new table. It's best to just re-create the rollup table each night from the source table to prevent any inconsistencies between the two.
That said, with the size of your records, you must either have a very slow server or a very large number of rows, because a record that small, with an indexed id field, should sum very quickly. However, I am of the mindset that if you can improve user response time by even a few seconds, there is no reason not to use rollup tables, even if DB purists object.
A: Have the extra totalpoints column on the same table, and create/update the value of totalpoints on every row creation/update.
If you need totalpoints for a certain record, you can then look up the value without computing it. For example, to get the latest value of totalpoints:
SELECT totalpoints FROM points ORDER BY id DESC LIMIT 1;
A: There is another approach: caching. Even if the value is cached for only a few seconds or minutes, that is a win on a frequently accessed value. And it's possible to dissociate the cache fetch from the cache update; that way, a reasonably current value is always returned in constant time. The tricky bit is having the fetch spawn a new process to do the update.
A: I'd suggest to create a layer that you use to access and modify the data. You can use these DB access functions to encapsulate the data maintenance in all tables to keep the redundant data in sync.
A: You could go either way in this case, because it's not very complicated.
I prefer, as a general rule, to allow the data to be temporarily inconsistent, by having just enough redundancy, and have a periodic process resolve the inconsistencies. However, there is no harm in having a trigger mechanism to encourage early execution of the periodic process.
I feel this way because relying on event-based notification-style code to keep things consistent can, in more complex cases, greatly complicate the code and make verification difficult.
A: You could also create another reporting schema and have it reload at fixed intervals via some process that does the calculations. This is not applicable to realtime information - but is very standard way of doing things.
A: Keeping Denormalized Values Correct
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/855304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Why aren't composed self-recursive functions with same data structure (same dims) on in and out inlined together with other recursions? In tutorial https://markkarpov.com/tutorial/ghc-optimization-and-fusion.html#fusion-without-rewrite-rules is a code example, which will not be optimized by fusion.
map0 :: (a -> b) -> [a] -> [b]
map0 _ [] = []
map0 f (x:xs) = f x : map0 f xs
foldr0 :: (a -> b -> b) -> b -> [a] -> b
foldr0 _ b [] = b
foldr0 f b (a:as) = foldr0 f (f a b) as
nofusion0 :: [Int] -> Int
nofusion0 = foldr0 (+) 0 . map0 sqr
sqr :: Int -> Int
sqr x = x * x
My question is: why isn't foldr0 (+) 0 . map0 sqr automatically inlined into one recursion, given that the compiler could easily prove that map0 doesn't change the dimension of the data structure, and deduce everything on a run from the top of the structure to the bottom? I still can't figure out why this kind of code can't be automatically optimized without rewrite rules.
Do you know any specific reason, why this optimization isn't used please?
Thanks.
A: I think the most promising approach that could optimize your example is called supercompilation. There is a paper about supercompilation for lazy functional languages: https://www.microsoft.com/en-us/research/publication/supercompilation-by-evaluation/.
In the future work section of the paper the authors state:
The major barriers to the use of supercompilation in practice are code bloat and compilation time.
There has been work on trying to integrate supercompilation into GHC, but it has not yet been successful. I don't know all the details, but there is a very technical GHC wiki page about it: https://gitlab.haskell.org/ghc/ghc/-/wikis/supercompilation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/71207242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Effects on Input Variable after Failed Input Stream I was working on the following code.
#include <iostream>

int main()
{
    std::cout << "Enter numbers separated by whitespace (use -1 to quit): ";
    int i = 0;
    while (i != -1) {
        std::cin >> i;
        std::cout << "You entered " << i << '\n';
    }
}
I know that using while (std::cin >> i) would have been better but I don't understand a specific occurrence.
If I provide invalid input, the loop becomes infinite because the input stream enters a failbit state. My question is: what happens to the input variable i? In my case, it becomes 0 regardless of the previously entered value. Why does it change to 0 after invalid input? Is this defined behaviour?
A: You get zero because your compiler implements the C++11 semantics. Writing zero on a failed extraction is new in that standard; the previous standard left the input value unchanged. C++11 requires the following:
If extraction fails, zero is written to value and failbit is set. If
extraction results in a value too large or too small to fit in
value, std::numeric_limits<T>::max() or std::numeric_limits<T>::min()
is written and failbit is set.
(source)
For gcc, the -std= flag controls which behavior you get; with -std=c++11 (or later) you get the new zero-on-failure behavior.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/17430495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Restlet ServerResource method parameters? This is probably a really stupid/simple question with such an obvious answer that it seems not worth stating in restlet documentation. Where and how (if at all) can Restlet pass parameters to methods in ServerResource classes?
Given this class:
public class FooServerResource extends ServerResource {
    @Get
    public String foo(String f) {
        return f;
    }
}
and a Router attachment router.attach("/foo", FooServerResource.class);
I know that if I use a Restlet client connector I could create a proxy for this class and invoke methods directly, but what if I am making calls to this ServerResource from some other non-java language, e.g. PHP?
A: You can access the query parameters from the resource reference. Typically, something like this:
@Get
public String foo() {
    Form queryParams = getReference().getQueryAsForm();
    String f = queryParams.getFirstValue("f");
    return f;
}
Generally speaking (and this also works for methods other than GET), you can access whatever is passed in the request (including the entity, when appropriate) using getRequest() within the ServerResource.
A: Hi, the question is about one year old, but I just started with Restlet and stumbled into the "same" problem. I am talking about the server, not the client (as Bruno noted, the original question mixes the server and client parts).
I think the question is not completely answered. If you, for instance, prefer to separate the Restlet resource from the semantic handling of the request (separating business logic from infrastructure), it is quite likely that you need some parameters, like an Observer, a callback, or something else. As far as I can see, no parameter can be passed into this instantiation process: the resource is instantiated by the Restlet engine per request, so I found no way to pass a parameter directly (is there one?).
Fortunately it is possible to access the Application object of the Restlet engine from within the resource class, and through it the class that creates the component, the server, etc.
In the resource class I have something like this:
protected Application initObjLinkage() {
    Context cx = this.getContext();
    Client cli = cx.getClientDispatcher();
    Application app = cli.getApplication();
    return app;
}
Subsequently you may use reflection and an interface to access a method in the Application class (still within the resource class); check the reflection documentation about this:
Method cbMethod = app.getClass().getMethod("getFoo", parameterTypes);
String str = (String) cbMethod.invoke(app, arguments);
In my application I use this mechanism to get access to an observer that supplies the received data to the business-logic classes. This approach is a standard for all my resource classes, which keeps them uniform, standardized and small.
So... I just hope this is helpful, and that there is {no/some} way to do it in a much simpler way :)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/3977448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: How do I get the color of a specific pixel from a picture box? I have a picture box in one of my programs which displays my images just fine. What's displayed consists of a chosen "BackColor" and some filled rectangles using a brush and some lines using a pen. I have no imported images. I need to retrieve the color value of a specified pixel on the picture box. I've tried the following:
Bitmap b = new Bitmap(pictureBox1.Image);
Color colour = b.GetPixel(X, Y);
But pictureBox1.Image always returns null. Does .Image only work with imported images? If not how can I get this to work? Are there any alternatives?
A: Yes you can, but should you?
Here is the change your code needs:
Bitmap b = new Bitmap(pictureBox1.ClientSize.Width, pictureBox1.ClientSize.Height);
pictureBox1.DrawToBitmap(b, pictureBox1.ClientRectangle);
Color colour = b.GetPixel(X, Y);
b.Dispose();
But there is really no way around giving the PictureBox a real Image to work with if you want to do real work with it, meaning if you want to use its features, e.g. its SizeMode.
Simply drawing on its background is just not the same. Here is a minimal code to get a real Bitmap assigned:
public Form1()
{
    InitializeComponent();
    pictureBox1.Image = new Bitmap(pictureBox1.ClientSize.Width,
                                   pictureBox1.ClientSize.Height);
    using (Graphics graphics = Graphics.FromImage(pictureBox1.Image))
    {
        graphics.FillRectangle(Brushes.CadetBlue, 0, 0, 99, 99);
        graphics.FillRectangle(Brushes.Beige, 66, 55, 66, 66);
        graphics.FillRectangle(Brushes.Orange, 33, 44, 55, 66);
    }
}
However if you really don't want to assign an Image you can make the PictureBox draw itself onto a real Bitmap. Note that you must draw the Rectangles etc in the Paint event for this to work! (Actually you must use the Paint event for other reasons as well!)
Now you can test either way e.g. with a Label and your mouse:
private void pictureBox1_MouseDown(object sender, MouseEventArgs e)
{
    if (pictureBox1.Image != null)
    {   // the 'real thing':
        Bitmap bmp = new Bitmap(pictureBox1.Image);
        Color colour = bmp.GetPixel(e.X, e.Y);
        label1.Text = colour.ToString();
        bmp.Dispose();
    }
    else
    {   // just the background:
        Bitmap bmp = new Bitmap(pictureBox1.ClientSize.Width, pictureBox1.ClientSize.Height);
        pictureBox1.DrawToBitmap(bmp, pictureBox1.ClientRectangle);
        Color colour = bmp.GetPixel(e.X, e.Y);
        label1.Text += "ARGB :" + colour.ToString();
        bmp.Dispose();
    }
}
private void pictureBox1_Paint(object sender, PaintEventArgs e)
{
    e.Graphics.FillRectangle(Brushes.DarkCyan, 0, 0, 99, 99);
    e.Graphics.FillRectangle(Brushes.DarkKhaki, 66, 55, 66, 66);
    e.Graphics.FillRectangle(Brushes.Wheat, 33, 44, 55, 66);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24354354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Django Ajax 'GET' Newbie here. I am trying to get some data from my database using Ajax.
In my views.py,
def getEvents(request):
    eventList = Events.objects.all()
    events = []
    for event in eventList:
        events.append({"name": event.name, "start": event.start, "end": event.end})
    return HttpResponse(events, content_type="application/json")
Note that Events is the model that I am trying to parse. After I collect the data from this model, I want to return it to my template using the following ajax code:
$.ajax({
    url: 'getEvents/',
    datatype: 'json',
    type: 'GET',
    sucess: function(data) {
        alert(data.name);
    }
});
In my urls.py:
url(r'^getEvents/', views.getEvents, name='getEvents'),
However, I think I am doing something wrong because it doesn't work. I have been stuck on this for hours... Any ideas?
EDIT:
Okay. When I append getEvents to the URL, I do see all the database objects together, but it seems my Ajax is not working. How do I parse this data? The data is in the form:
[{"start": "2017-02-06", "end": "2017-02-07", "name": "Capstone Meeting"},
{"start": "2017-02-07T0:00", "end": "2017-02-08", "name": "Capstone"},
{"start": "2017-01-31T0:00", "end": "2017-02-01", "name": "dasdsd"},
{"start": "2017-01-31", "end": "2017-02-01", "name": "hjb"}]
Here is what I have so far...
$.ajax({
    url: 'getEvents/',
    datatype: 'json',
    type: 'GET',
    sucess: function(data) {
        $.each(data, function(index, element) {
            $('body').append($('<div>', {
                text: element.name
            }));
        });
    }
});
A: One of your errors is caused by using an HttpResponse in your view instead of a JsonResponse. Here's how to fix that issue:
from django.http import JsonResponse

def getEvents(request):
    eventList = Events.objects.all()
    events = []
    for event in eventList:
        events.append({"name": event.name, "start": event.start, "end": event.end})
    return JsonResponse(events, safe=False)  # safe=False allows serializing a list
From the docs, the JsonResponse is
An HttpResponse subclass that helps to create a JSON-encoded response.
The reason that your regular HttpResponse didn't work is that you have to manually serialize the data to JSON when using an HttpResponse, e.g., something like:
import json

response_data = json.dumps(events)
return HttpResponse(response_data, content_type="application/json")
Otherwise, what happens is that Python's string representation of the events list is written out, which is not valid JSON.
A: First of all, there's a typo in your sucess callback; it should be success.
Secondly, a JSON response should be a dict object rather than a list. If you really want a JSON array response anyway, you have to pass safe=False when serializing the data, i.e. JsonResponse(events, safe=False); otherwise you'll get a TypeError like TypeError: In order to allow non-dict objects to be serialized set the safe parameter to False.
So the code sample should be:
def getEvents(request):
    eventList = Events.objects.all().values("name", "start", "end")
    return JsonResponse({"events": list(eventList)})  # list() so the queryset can be serialized
And for frontend:
$.ajax({
    url: 'getEvents/',
    datatype: 'json',
    type: 'GET',
    success: function(data) {
        $.each(data.events, function(index, element) {
            $('body').append($('<div>', {
                text: element.name
            }));
        });
    }
});
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/42059025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Influxdb howto update tag values I would like to do a query like this:
SELECT max(value) as value, "counter" = 'countergasconsumption'
INTO "home"."1month"."test"
FROM "telegraf"."autogen"."mqtt_consumer"
WHERE time > now() - 11d AND "topic" = '/utilitiesmonitor/countergasconsumption'
GROUP BY time(5m)
I already have data with the countergasconsumption tag (not the whole topic). I can't get it to work; I have tried different SELECT INTO variants, and the docs are not really helpful on this topic. The old data is one measurement with a tag for the type of counter; the new data comes in on separate topics via Telegraf.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/41764467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Segmentation fault when writing to array of strings I am trying to copy some of the arguments passed in argv to an array of strings. Here is my program.
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
int main (int argc, char * argv[]) {
    char **args = malloc(argc * sizeof(char *));
    for (int i = 0; i < argc - 1; ++i) {
        strcpy(*(args + i), argv[i + 1]);
    }
}
I am getting a segmentation fault in the for loop. Why is this happening?
A: You get the segmentation fault because you are using uninitialized pointers.
In other words: *(args+i) is uninitialized.
Let's look at your memory:
char **args = malloc(argc * sizeof(char *));
This gives you a local variable args that points to a dynamically allocated memory area consisting of argc pointers to char.
But those argc pointers to char are uninitialized (malloc doesn't initialize anything).
That is - the argc pointers to char may point anywhere.
So when you do
strcpy(*(args+i), argv[i+1]);
       ^^^^^^^^^
       read of an uninitialized pointer
you read the i'th uninitialized pointer and copy the string at argv[i+1] to that location. In other words, to a location that we can't know and that most likely doesn't belong to your program. This is likely to result in a seg fault.
So before you copy anything, you want those char pointers to point to some allocated chars - which means you need an additional malloc per string.
Now one problem is: how many chars do you need to malloc?
Well, you can't know until you know the length of the input strings. So you need to put the malloc inside the loop:
int main (int argc, char * argv[]) {
    char **args = malloc(argc * sizeof(char *));
    for (int i = 0; i < argc - 1; ++i) {
        *(args + i) = malloc(strlen(argv[i + 1]) + 1); // Allocate memory
        strcpy(*(args + i), argv[i + 1]);
    }
}
Note:

* You don't need to write sizeof(char), as it is always 1
* You have to add 1 to the string length of the input string in order to reserve memory for the string terminator
* You should always check that malloc doesn't return NULL
* Instead of *(args+i) you can use the more readable form args[i]
After a real execution, each args[i] then points to its own copy of the corresponding argument string.
A: The problem lies in this part of your code:
malloc(argc * sizeof(char *));
Don't think that sizeof(char*) would give you the size of a string.
What is the size of a pointer?
Solution:
int main(int argc, char* argv[])
{
    char* copyOfArgv;
    copyOfArgv = strdup(argv[1]);
}
strdup() - what does it do in C?
A: The problem is that you allocated memory for the pointers, not for the strings (the arguments from argv) themselves; hence the strcpy writes through invalid pointers, causing the segmentation fault.
It would've been better to just:
allocate memory for the pointers (as you have done) and then make those pointers point to the arguments passed. Example:
#include <string.h>
#include <stdlib.h>
#include <unistd.h>
int main (int argc, char * argv[]) {
    char **args = malloc(argc * sizeof(char *)); // allocate memory for the pointers only
    for (int i = 0; i < argc; i++) {
        args[i] = argv[i]; // make the pointers point at the arguments, including argv[0]
    }
    free(args);
    return 0;
}
For using strcpy, you would have to allocate memory for the argument strings themselves and then copy them.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/59906740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Overriding timeout for database connection in properties file I was wondering if there is a specific way to override the database connection timeout in the properties file of my Java web project? I am using Hibernate, Spring, and MySQL. I have tried several different properties and reduced the timeout to 1 millisecond, yet connections still complete and transactions are still processed properly.
These are the property fields I have used to no avail...
*
*spring.jpa.properties.javax.persistence.query.timeout=1
*spring.jdbc.template.query-timeout=1
*hibernate.c3p0.timeout=1
Is hibernate overriding this timeout value or am I just setting it improperly?
Thanks in advance!
A: Assuming that you're using Spring Boot you can try:
spring.transaction.defaultTimeout=1
This property sets defaultTimeout for transactions to 1 second.
(Looking at the source code of TransactionDefinition it seems that it is not possible to use anything more precise than seconds.)
See also: TransactionProperties
javax.persistence.query.timeout
This is a hint for Query. It is supposed to work if you use it like this:
entityManager.createQuery("select e from SampleEntity e")
    .setHint(QueryHints.SPEC_HINT_TIMEOUT, 1)
    .getResultList();
See also QueryHints
spring.jdbc.template.query-timeout
Remember that according to the JdbcTemplate#setQueryTimeout javadoc:
Any timeout specified here will be overridden by the remaining transaction timeout when executing within a transaction that has a timeout specified at the transaction level.
hibernate.c3p0.timeout
I suspect that this property specifies the timeout for obtaining a connection from the pool, not for query execution.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/53599657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Quill error "Invalid Quill container" on Vue.js when used with a Vuetify tab An error from the Quill plugin occurred when I placed the editor in a Vuetify tab container. The editor is created in the mounted hook.
The error in Console is
quill Invalid Quill container undefined
[Vue warn]: Error in mounted hook: "TypeError: Cannot read property
'on' of undefined"
Below is the vue file.
<template>
  <v-app class="panel" ref="panel">
    <v-tabs fixed-tabs v-model="tab">
      <v-tabs-slider></v-tabs-slider>
      <v-tab key="1" href="#tab1">
        Tab 1
      </v-tab>
      <v-tab key="2" href="#tab2">
        Tab 2
      </v-tab>
      <v-tabs-items v-model="tab">
        <v-tab-item key="1" value="tab1">
          <div class="formPanel" ref="formPanel">
            <div class="title-text" ref="title">Edit text in tab 1</div>
            <div ref="editor" v-html="value"></div>
          </div>
        </v-tab-item>
        <v-tab-item key="2" value="tab2">
          <v-card-text>This is tab 2</v-card-text>
        </v-tab-item>
      </v-tabs-items>
    </v-tabs>
  </v-app>
</template>
<script>
import Quill from 'quill';

export default {
  data: function () {
    return {
      tab: 'editor'
    };
  },
  mounted() {
    var toolbarOptions = [
      ['bold', 'italic', 'underline', 'strike'],
      [{ 'size': ['small', false, 'large', 'huge'] }],
    ];
    this.editor = new Quill(this.$refs.editor, {
      modules: { toolbar: toolbarOptions },
      placeholder: 'Edit text',
      theme: 'snow'
    });
  },
};
</script>
<style scoped>
</style>
A: This is probably due to the fact that
<div ref="editor" v-html="value"></div> is inside a child component's slot (v-tab-item), which is conditionally rendered.
That means the v-tab-item is mounted AFTER the parent's mounted() executes, so its content (including your refs) is not yet available.
If you can defer the initialization until the child has mounted, then you can access the ref, but getting that to work is a complex endeavor.
Instead, I would opt to define a component that handles the Quill initialization itself and can be nested in the tab.
ie:
<v-tab-item key="1" value="tab1">
<MyQuillComponent v-model="value"/>
</v-tab-item>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62555485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to use coalesce to take sum Can I use sum() in coalesce()?
I want to use it in a stored function in Postgres.
For example
case (COALESCE(t3.Count3,0)+ COALESCE(t2.Count2,0) >= t4.Count4::float)
then ( select (t4.Count4::float/((t3.Count3::float)+(t2.Count2::float))) * 100 as Count5 )
else ''0''
end as Count5
A: It's hard to tell what you're trying to do here because there are multiple issues in the code. Also, though your title mentions SUM, your code doesn't include it (you do have +, but that's not the same).
If this is part of a SELECT statement, then I'm guessing what you want is:
SELECT
CASE
WHEN COALESCE(t3.Count3,0)+ COALESCE(t2.Count2,0) >= t4.Count4::float
THEN 100 * t4.Count4::float/(COALESCE(t3.Count3::float,0)+COALESCE(t2.Count2::float,0))
ELSE 0
END as Count5
FROM MyTable
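To see the CASE/COALESCE pattern in action, here is a small self-contained demonstration - run against SQLite through Python purely for illustration, with made-up table and column names; CAST(... AS REAL) stands in for Postgres's ::float:

```python
import sqlite3

# Demonstration of the CASE/COALESCE pattern. Table and data are made up;
# CAST(... AS REAL) plays the role of Postgres's ::float cast.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE t (count2 INT, count3 INT, count4 INT);
INSERT INTO t VALUES (2, 2, 3);       -- 2 + 2 >= 3  -> 100 * 3 / 4 = 75.0
INSERT INTO t VALUES (NULL, NULL, 5); -- 0 + 0 <  5  -> 0
""")
rows = conn.execute("""
SELECT CASE
         WHEN COALESCE(count3, 0) + COALESCE(count2, 0) >= CAST(count4 AS REAL)
         THEN 100 * CAST(count4 AS REAL) / (COALESCE(count3, 0) + COALESCE(count2, 0))
         ELSE 0
       END AS count5
FROM t
""").fetchall()
print(rows)  # [(75.0,), (0,)]
```

Note that COALESCE in the denominator also protects against NULL there, though a row where both counts are NULL and count4 is 0 would still divide by zero.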
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/44238928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Count 1's between the intersection of two binary arrays Given two binary arrays, how can one quickly find the number of 1's occurring between intersecting bits for both arrays?
For example, consider a pair of bitsets (i1, i2) and their intersection (i12):
i1 = [0001001000100100];
i2 = [0101000100010110];
i12 = [0001000000000100]; // (i1 & i2)
The intersecting bits are in position 3 and 13, so count the number of 1's in positions 0-2 and 4-12:
x1 = [0, 2] // results i1
x2 = [1, 2] // results for i2
Now generalize this idea to 32-bit integers. Toy example:
int i1[2] = {2719269390, 302235938};
int i2[2] = {2315436042, 570885266};
Toy example in binary:
i1 = [10100010 00010100 11000010 00001110, 00010010 00000011 11000001 00100010]
i2 = [10001010 00000010 11000000 00001010, 00100010 00000111 00000100 10010010]
i1 & i2 = [10000010 00000000 11000000 00001010, 00000010 00000011 00000000 00000010]
Expected results:
x1 = [2, 2, 0, 1, 1, 1, 0, 0, 4];
x2 = [2, 1, 0, 0, 0, 1, 1, 0, 3];
I can see a "brute-force" approach using __builtin_clz() to determine leading zeros in i1 & i2, shifting i1 and i2 right by that number, doing __builtin_popcount() for the shifted i1 and i2 results, and repeating the procedure until no intersections remain. My instinct suggests there may be a more elegant approach, as this involves a few temporaries, many instructions, and at least two logical branches.
C/C++ implementations are welcome, but I would be satisfied enough with conceptual perspectives. Suffice it to say this approach intends to remove a critical bottleneck in a popular program.
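For a conceptual sketch of the segment counting (not the optimized C version), the idea can be written over bit strings in Python; the function name is illustrative:

```python
def counts_between_intersections(b1: str, b2: str):
    """Count 1-bits of each sequence in the segments between intersection bits.

    b1, b2: equal-length bit strings, position 0 = leftmost (as in the example).
    Returns (x1, x2): per-segment 1-counts up to the last intersecting bit.
    """
    # positions where both sequences have a 1 (i.e. the intersection i1 & i2)
    inter = [i for i, (a, b) in enumerate(zip(b1, b2)) if a == b == "1"]
    x1, x2 = [], []
    start = 0
    for p in inter:
        x1.append(b1[start:p].count("1"))
        x2.append(b2[start:p].count("1"))
        start = p + 1  # skip the intersection bit itself
    return x1, x2

print(counts_between_intersections("0001001000100100", "0101000100010110"))
# -> ([0, 2], [1, 2])
```

A word-oriented C version would replace the string scans with masks plus __builtin_popcount(), but the segment bookkeeping stays the same.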
A: std::popcount returns the number of active bits in a value. It is able to call intrinsic functions, if available.
constexpr auto get_popcount(auto list) {
decltype(list) result;
std::transform(list.begin(), list.end(), result.begin(), [](auto x) {
return std::popcount(x);
});
return result;
}
int main() {
constexpr auto i12 = std::array{
0b10000010u, 0b00000000u, 0b11000000u, 0b00001010u,
0b00000010u, 0b00000011u, 0b00000000u, 0b00000010u};
static_assert(get_popcount(i12) == std::array{ 2u, 0u, 2u, 2u, 1u, 2u, 0u, 1u });
}
A: Can't test this right now but (I1 XOR I2) AND I1 should leave only the bits that you want to count, so:
auto get_popcount(auto i1, auto i2) {
    std::vector<std::pair<int, int>> res;
    std::transform(i1.begin(), i1.end(), i2.begin(), std::back_inserter(res),
        [](auto x, auto y) {
            // (x ^ y) & x keeps the bits set only in x; likewise for y
            return std::make_pair(std::popcount((x ^ y) & x),
                                  std::popcount((x ^ y) & y));
        });
    return res;
}
Where the first item of the pair is the number of bits in I1 and the second in I2.
A: I'm dealing with a similar problem, where I have to count the
number of '1' bits from a run-time-determined range. Don't know how long your bit array is, but mine is millions to billions of bits.
For a very long bit arrays (say, millions of bits), you could build an auxiliary tree structure for each bit array. The root node counts all bits 0~N in the array, the left child node of root stores the partial sum of bits 0~N/2, the right child node store the partial sum of bits (N/2+1)~N, etc. etc. That way, when you want to count the number of '1' bits from an arbitrary range, you traverse the tree and add the partial sum values stored in appropriate nodes of the tree. The time complexity is O(log(N)). When your array is huge, this is going to be much more efficient than iterating over the entire array (complexity O(N)).
If your array is smaller, I would guess SSE/AVX instructions can result in higher throughput? Like Stack Danny has implied, intrinsic functions may already be the underlying implementation of popcount.
If you just use a for loop, GCC is able to vectorize your loop into SSE/AVX (e.g. if you got AVX2 on your CPU, add '-mavx2 -O3' to your compiler command). That works for summing elements of an integer array, but I never examined if looping on a bitarray can also result in vectorized code. https://gcc.gnu.org/projects/tree-ssa/vectorization.html
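As an illustration of the auxiliary-structure idea above, here is a minimal prefix-sum variant in Python (a flat cousin of the tree: O(1) per range query, but O(N) to rebuild after an update, whereas the tree gives O(log N) for both):

```python
def build_prefix_popcount(bits):
    """prefix[i] holds the number of 1-bits in bits[0:i]."""
    prefix = [0]
    for b in bits:
        prefix.append(prefix[-1] + b)
    return prefix

def range_popcount(prefix, lo, hi):
    """Number of 1-bits in bits[lo:hi], answered in O(1)."""
    return prefix[hi] - prefix[lo]

bits = [0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0]  # i1 from the question
prefix = build_prefix_popcount(bits)
print(range_popcount(prefix, 0, 3), range_popcount(prefix, 4, 13))  # -> 0 2
```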
A: Use the AND operator for i1 and i2 to get only the intersecting bits. Then in a loop, SHIFT right checking each bit using the AND operator until the variable is zero.
Assuming the arrays are in packs of 4 bytes (unsigned long), this should work:
size_t getIntersecting(unsigned long* i1, unsigned long* i2, size_t nPacks) {
size_t result = 0;
unsigned long bitsec;
for (size_t i = 0; i < nPacks; i++)
{
bitsec = i1[i] & i2[i];
while (bitsec != 0) {
if (bitsec & 1) {
result++;
}
bitsec >>= 1;
}
}
return result;
}
I'm not at my workstation to test it. Generally it should count intersecting bits.
EDIT:
After reexamination of your question, try something like this.
// Define the global vectors
vector<unsigned long> vector1;
vector<unsigned long> vector2;
// Define the function
void getIntersecting(unsigned long* pi1, unsigned long* pi2, size_t nPacks) {
// Initialize the counters to 0
unsigned long i1cnt = 0;
unsigned long i2cnt = 0;
// Local storage variables
unsigned long i1, i2;
// Loop through the pointers using nPacks as the counter
for (size_t i = 0; i < nPacks; i++) {
// Get values from the pointers
i1 = pi1[i];
i2 = pi2[i];
// Perform the bitwise operations
unsigned long i4 = i1 & i2;
// Loop for each bit in the 4-byte pack
for (int ibit = 0; ibit < 32; ibit++) {
// Check the highest bit of i4
if (!(i4 & 0x80000000)) {
// Check the highest bit of i1
if (i1 & 0x80000000) {
i1cnt++;
}
// Check the highest bit of i2
if (i2 & 0x80000000) {
i2cnt++;
}
}
else {
// Push the counters to the global vectors
vector1.push_back(i1cnt);
vector2.push_back(i2cnt);
// Reset counters
i1cnt = 0;
i2cnt = 0;
}
// Shift i4, i1, and i2 left by 1 bit
i4 = i4 << 1;
i1 = i1 << 1;
i2 = i2 << 1;
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/74642705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Want to make db layer generic and not tied to spring's JdbcDaoSupport Current I have a seperate maven module for my database access, all my DAO classes inherit from:
public class GenericDaoImpl<T> extends JdbcDaoSupport implements GenericDao<T> {
}
My maven module has a Spring dependency:
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-orm</artifactId>
</dependency>
So a typical Dao class looks like:
public class UserDaoImpl extends GenericDaoImpl<User> implements UserDao {
@Override
public void insert(User user) {
getJdbcTemplate().update("insert into users(...)...");
}
}
My Dao's get autowired with the dataSource bean.
Can I make this generic somehow so I can continue to use it in my spring MVC application, yet it will work if I need to use this library in a cron job service type environment? (without having to bring in spring's application context into the picture).
A: I'd start to observe the following:
*
*You have a GenericDao<T> which means that you can have different implementations.
*Currently, you have GenericDaoImpl<T> extends JdbcDaoSupport which means that you cannot use in the other environment you described without any application context unless you prepare all the object in a manual fashion.
So my suggestion is:
*
*Allow GenericDao<T> extends org.springframework.jdbc.core.JdbcOperations
*Develop an AbstractGenericDao<T> implements GenericDao<T> to bring the abstract general functionality you need.
*Develop a MyEnvGenericDao<T> extends AbstractGenericDao<T> that is responsible for providing a data source and the underlying implementations of the different methods you need; this could use a direct Hibernate/OpenJPA implementation or execute the queries directly.
*Develop a SpringGenericDao<T> extends JdbcDaoSupport implements GenericDao<T>, which already comes with a getJdbcTemplate() to perform the operations you need and delegates to the Spring JDBC template. In this scenario, you delegate the operations to JdbcDaoSupport.getJdbcTemplate().
Related to Maven, then you can actually have different modules for MyEnv and Spring implementations but both with one parent to have access to GenericDao<T> interface.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/9965304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: CSS media print shows additional data, how to keep it out of the print I am working on a Laravel project and use CSS @media print. It works, but it shows additional data in the Chrome print dialog, as I marked in the image below, and I don't want that data to display when I print. How do I avoid it?
this is the HTML code:
<div class="row">
<h3 style="margin-right: 14px;">مشخصات شاگرد</h3>
<div class="col-md-4" style="padding-left: 0px;" id="student_info">
<div class="list-group">
<a href="" class="list-group-item disabled student_details">نام</a>
<a href="" class="list-group-item student_details">تخلص</a>
<a href="" class="list-group-item disabled student_details">نام پدر</a>
<a href="" class="list-group-item student_details">جنسیت</a>
<a href="" class="list-group-item disabled student_details">سن</a>
<a href="" class="list-group-item student_details">تلیفون</a>
<a href="" class="list-group-item student_details disabled">آدرس</a>
<a href="" class="list-group-item student_details">ایمیل</a>
<a href="" class="list-group-item student_details disabled">حالت مدنی</a>
<a href="" class="list-group-item student_details">نمبر تذکره</a>
</div>
</div>
<div class="col-md-8" style="padding-right: 0px;" id="student_info_date">
<div class="list-group">
<a href="" class="list-group-item student_details disabled"> {{ $student->first_name }} </a>
<a href="" class="list-group-item student_details">{{ $student->last_name }}</a>
<a href="" class="list-group-item student_details disabled"> {{ $student->father_name }} </a>
<a href="" class="list-group-item student_details">{{ $student->gender }}</a>
<a href="" class="list-group-item student_details disabled"> {{ $student->age }} </a>
<a href="" class="list-group-item student_details">{{ $student->phone }}</a>
<a href="" class="list-group-item student_details disabled"> {{ $student->address }} </a>
<a href="" class="list-group-item student_details"> {{ $student->email_address }} </a>
<a href="" class="list-group-item student_details disabled"> {{ $student->marital_status }} </a>
<a href="" class="list-group-item student_details"> {{ $student->ssn_number }} </a>
</div>
</div>
</div>
this is my script code:
<script>
$(document).ready(function () {
$('#print').click(function () {
window.print()
})
})
</script>
this is the style for print:
<style>
@media print {
body * {
visibility: visible !important;
}
@page {
margin: 0;
}
#print,.fixed-navbar{
display: none;
}
#student{
margin-top: -70px;
}
#student_info
{
width: 15%;
}
#student_info_date,
#student_info_date{
margin-right:15% ;
margin-top: -535px;
width: 30%;
}
#student_family{
margin-right: 41%;
margin-top: -797px;
}
#student_family_info{
width: 25%;
}
#student_family_info_data{
width: 45%;
margin-right: 25%;
margin-top: -546px;
}
}
a.student_details{
max-height:100px;
min-height: 50px;
}
a.student_class_details{
min-height: 90px;
}
a.student_class_score_details{
min-height: 90px;
padding:28%;
}
</style>
A: Replace every <a> tag with a <span> tag.
<div class="row">
<h3 style="margin-right: 14px;">مشخصات شاگرد</h3>
<div class="col-md-4" style="padding-left: 0px;" id="student_info">
<div class="list-group">
<span href="" class="list-group-item disabled student_details">نام</span>
<span href="" class="list-group-item student_details">تخلص</span>
<span href="" class="list-group-item disabled student_details">نام پدر</span>
<span href="" class="list-group-item student_details">جنسیت</span>
<span href="" class="list-group-item disabled student_details">سن</span>
<span href="" class="list-group-item student_details">تلیفون</span>
<span href="" class="list-group-item student_details disabled">آدرس</span>
<span href="" class="list-group-item student_details">ایمیل</span>
<span href="" class="list-group-item student_details disabled">حالت مدنی</span>
<span href="" class="list-group-item student_details">نمبر تذکره</span>
</div>
</div>
<div class="col-md-8" style="padding-right: 0px;" id="student_info_date">
<div class="list-group">
<span href="" class="list-group-item student_details disabled"> {{ $student->first_name }} </span>
<span href="" class="list-group-item student_details">{{ $student->last_name }}</span>
<span href="" class="list-group-item student_details disabled"> {{ $student->father_name }} </span>
<span href="" class="list-group-item student_details">{{ $student->gender }}</span>
<span href="" class="list-group-item student_details disabled"> {{ $student->age }} </span>
<span href="" class="list-group-item student_details">{{ $student->phone }}</span>
<span href="" class="list-group-item student_details disabled"> {{ $student->address }} </span>
<span href="" class="list-group-item student_details"> {{ $student->email_address }} </span>
<span href="" class="list-group-item student_details disabled"> {{ $student->marital_status }} </span>
<span href="" class="list-group-item student_details"> {{ $student->ssn_number }} </span>
</div>
</div>
</div>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64365539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Regex string must contain certain substrings only and separated by whitespace So, I've been trying to come up with a regexp in PHP which can pick out substrings like "XX-035" (or alternatively "XX035" or "XX35") from a larger string of words and character sequences - that's the easy part - with the added proviso that the required substrings must be separated by one or more whitespace characters from other substrings in the main string.
In addition the substrings must also start with a particular two-letter group like "AB","CG" or "MS", etc., followed by zero or one dashes, and then 1 to 4 numerals (again, that part is easy). So, I have tried many different regex's, with and without \b word-boundaries, with and without whitespace \s, the latest of which is as follows:
/\b(\s+[^\/a-zA-Z](AB|CG|MS|MT|NA|OQ|TS){1}[\-]?\d{1,4})\b/i
but I just can't seem to crack the whitespace requirement. I've gone through many iterations in https://regex101.com/ and still haven't managed to get it down.
Obviously, I'm no expert when it comes to regular expressions, so any help would be appreciated here.
A: You may use
(?<!\S)(?:AB|CG|MS|MT|NA|OQ|TS)-?\d{1,4}(?!\S)
See the regex demo
Details
*
*(?<!\S) - the previous char should be a whitespace or start of string
*(?:AB|CG|MS|MT|NA|OQ|TS) - one of the 2-letter alternative
*-? - an optional hyphen
*\d{1,4} - one to four digits
*(?!\S) - the next char should be a whitespace or end of string.
PHP:
if (preg_match_all('~(?<!\S)(?:AB|CG|MS|MT|NA|OQ|TS)-?\d{1,4}(?!\S)~', $s, $matches)) {
print_r($matches[0]);
}
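The same pattern works unchanged in other PCRE-style engines; for example, a quick check of the whitespace-boundary logic in Python (the sample text is made up):

```python
import re

pattern = r"(?<!\S)(?:AB|CG|MS|MT|NA|OQ|TS)-?\d{1,4}(?!\S)"
text = "foo AB-035 bar CG35 xMS12 TS1234 OQ-9999x"

# xMS12 fails the (?<!\S) check, OQ-9999x fails the (?!\S) check
print(re.findall(pattern, text))  # -> ['AB-035', 'CG35', 'TS1234']
```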
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/52244094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Spring security jwt always redirecting to "/" after successful auth I have made a Rest API project with Spring Boot 2. I have used JWT for authentication. I can generate tokens fine. But when I send the generated token in a header with a request, it always redirects me to the "/" path instead of the requested path (in my case "/rest/hello").
This is my custom AuthenticationProvider
@Component
public class JwtAuthenticationProvider extends AbstractUserDetailsAuthenticationProvider {
@Autowired
private JwtValidator jwtValidator;
@Override
protected void additionalAuthenticationChecks(UserDetails userDetails, UsernamePasswordAuthenticationToken authentication) throws AuthenticationException {
}
@Override
protected UserDetails retrieveUser(String username, UsernamePasswordAuthenticationToken authentication) throws AuthenticationException {
JwtAuthenticationToken jwtAuthenticationToken = (JwtAuthenticationToken)authentication;
String token = jwtAuthenticationToken.getToken();
JwtUser jwtUser = jwtValidator.validate(token);
if (jwtUser == null) {
throw new RuntimeException("JWT Token not correct");
}
List<GrantedAuthority> grantedAuthorities = AuthorityUtils.commaSeparatedStringToAuthorityList(jwtUser.getRole());
return new JwtUserDetails(jwtUser.getUserName(), jwtUser.getId(), grantedAuthorities, token);
}
@Override
public boolean supports(Class<?> authentication) {
return JwtAuthenticationToken.class.isAssignableFrom(authentication);
}
}
This is my custom Filter
public class JwtAuthenticationTokenFilter extends AbstractAuthenticationProcessingFilter {
public JwtAuthenticationTokenFilter() {
super("/rest/**");
}
@Override
public Authentication attemptAuthentication(HttpServletRequest request, HttpServletResponse response) throws AuthenticationException, IOException, ServletException {
String header = request.getHeader("Authorization");
if (header == null || !header.startsWith("Token ")) {
throw new RuntimeException("JWT Token is missing");
}
String authenticationToken = header.substring(6);
JwtAuthenticationToken token = new JwtAuthenticationToken(authenticationToken);
return this.getAuthenticationManager().authenticate(token);
}
public void setAuthenticationManager(AuthenticationManager authenticationManager) {
super.setAuthenticationManager(authenticationManager);
}
public void setAuthenticationSuccessHandler(JwtSuccessHandler jwtSuccessHandler) {
}
@Override
protected void successfulAuthentication(HttpServletRequest request, HttpServletResponse response, FilterChain chain, Authentication authResult) throws IOException, ServletException {
super.successfulAuthentication(request, response, chain, authResult);
chain.doFilter(request, response);
}
}
This is my Security config
@Configuration
@EnableWebSecurity
@EnableGlobalMethodSecurity(prePostEnabled = true)
public class JwtSecurityConfig extends WebSecurityConfigurerAdapter {
@Autowired
private JwtAuthenticationProvider authenticationProvider;
@Autowired
private JwtAuthenticationEntryPoint entryPoint;
@Bean
public AuthenticationManager authenticationManager() {
return new ProviderManager(Collections.singletonList(authenticationProvider));
}
@Bean
public JwtAuthenticationTokenFilter authenticationTokenFilter(){
JwtAuthenticationTokenFilter filter = new JwtAuthenticationTokenFilter();
filter.setAuthenticationManager(authenticationManager());
filter.setAuthenticationSuccessHandler(new JwtSuccessHandler());
return filter;
}
@Override
protected void configure(HttpSecurity http) throws Exception {
http.csrf().disable().authorizeRequests().antMatchers("/rest/**").authenticated().and().exceptionHandling().authenticationEntryPoint(entryPoint)
.and().sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS);
http.addFilterBefore(authenticationTokenFilter(), UsernamePasswordAuthenticationFilter.class);
http.headers().cacheControl();
}
}
I have custom classes for user details and token as well.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/56207965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Count of an NSMutable Array of Objects that is in an NSMutable Array of Objects returns 0 I have an array of NSObjects (mapArray), and the object being focused on is at position 0 (MAIN_MAP). In each NSObject, there is a NSMutableArray called moonGateArray (has @property and @synthesize) to which I add another NSObject called Moongate. However, I keep getting zero as the count for moonGateArray. Any answers for why that is?
Code:
for (int i = 0; i < 8; i++) {
[[[mapArray objectAtIndex:MAIN_MAP] moonGateArray] addObject:[Moongate new]];
}
[[[[mapArray objectAtIndex:MAIN_MAP] moonGateArray] objectAtIndex:0] setPosition:9 :3];
[[[[mapArray objectAtIndex:MAIN_MAP] moonGateArray] objectAtIndex:1] setPosition:16 :8];
[[[[mapArray objectAtIndex:MAIN_MAP] moonGateArray] objectAtIndex:2] setPosition:11 :6];
[[[[mapArray objectAtIndex:MAIN_MAP] moonGateArray] objectAtIndex:3] setPosition:16 :13];
[[[[mapArray objectAtIndex:MAIN_MAP] moonGateArray] objectAtIndex:4] setPosition:13 :10];
[[[[mapArray objectAtIndex:MAIN_MAP] moonGateArray] objectAtIndex:5] setPosition:3 :1];
[[[[mapArray objectAtIndex:MAIN_MAP] moonGateArray] objectAtIndex:6] setPosition:3 :12];
[[[[mapArray objectAtIndex:MAIN_MAP] moonGateArray] objectAtIndex:7] setPosition:22 :17];
NSLog(@"%i",[[[mapArray objectAtIndex:MAIN_MAP] moonGateArray] count]);
A: You probably haven't initialized either mapArray or moonGateArray, which is why you are getting a 0 count. In Objective-C, calling count on a nil object does not raise an exception but instead returns 0.
hope that helps!
A: As for your count = 0, this is straightforward: you missed alloc/init-ing one of the arrays you used.
Are you creating a 2D array or a 3D array?
It is hard to follow your logic here.
Edit:
if your MAIN_MAP is always 0, why do you need an array? A simple object, or a dictionary instead of a 2D array, would have done. However, I can only guess at the entire logic.
And I would like to add one more thing about your naming conventions: suffix or prefix your variables with Array, Button, etc., or use plural names like "maps" and "moonGates"; a plural name signals that the variable is going to be an array.
A: Most likely you haven't properly initialized your arrays. Make your life easier and change your code a bit to make it easier to read and to debug. You don't get points for writing highly nested method calls; it only makes your life harder. Split the code up to make things easier.
Change this:
for (int i = 0; i < 8; i++) {
[[[mapArray objectAtIndex:MAIN_MAP] moonGateArray] addObject:[Moongate new]];
}
to this:
// replace "SomeMapClass" with your actual class name
SomeMapClass *mainMap = mapArray[MAIN_MAP];
NSMutableArray *moonArray = mainMap.moonGateArray;
for (int i = 0; i < 8; i++) {
[moonArray addObject:[Moongate new]];
}
Now you can run this code in the debugger or add NSLog statements and verify each step of the process.
Is mapArray properly initialized?
Is moonArray properly initialized?
Remember, simply defining a property doesn't actually create the object. You still need to initialize it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/13677127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: ReactNative nested render functions. Question regarding `this` between objects and functions Here is my confusion:
constructor(props) {
super(props);
}
hookNav(){
console.log("work");
}
renderItem({ item, index }) {
return (
<TouchableOpacity style={{ margin: 9 }} onPress={this.hookNav}>
<View
style={{
flex: 1,
minWidth: 170,
maxWidth: 223,
height: 280,
maxHeight: 280,
borderRadius: 10,
}}
>
<ImageBackground
source={{
uri: "https://picsum.photos/170/223",
}}
style={{ flex: 1 }}
imageStyle={{ borderRadius: 10 }}
>
<Text
style={{
color: "white",
position: "absolute",
bottom: 20,
right: 10,
fontWeight: "bold",
textShadowColor: "black",
textShadowOffset: { width: -1, height: 1 },
textShadowRadius: 10,
}}
>
Topic
</Text>
</ImageBackground>
</View>
</TouchableOpacity>
);
}
render(){
return (
<View style={{ marginBottom: 80, backgroundColor: "white" }}>
<Header
centerComponent={{ text: "test", style: { color: "orange" } }}
rightComponent={{ icon: "add", color: "orange" }}
/>
<FlatList
numColumns={2}
onEndReachedThreshold={0}
onEndReached={({ distanceFromEnd }) => {
console.debug("on end reached ", distanceFromEnd);
}}
contentContainerStyle={styles.list}
data={[
{ key: "a" },
{ key: "b" },
{ key: "c" },
{ key: "d" },
{ key: "e" },
{ key: "f" },
]}
renderItem={this.renderItem}
ListHeaderComponent={
<Text
style={{ padding: 10, fontWeight: "bold" }}
>
Your Topics
</Text>
}
/>
</View>
);
}
}
in this code above, when navigating to this page an error is immediately thrown that this.hookNav is undefined.
However, when I put the hookNav function inside of a state object, like so
constructor(props) {
super(props);
state = {
hookNav : function() {
console.log("work please");
}
}
}
and in onPress inside of renderItem
const { hookNav} = this.state;
return (
<TouchableOpacity style={{ margin: 9 }} onPress={hookNav}>
This works as intended.
It is my understanding that render already has access to the component’s state via this. If renderItem can access the state object, through this, why can it not access this.hookNav() directly? Why does the function need to be encapsulated in an object for this to work?
Thanks.
A: It's because the value of this inside any Javascript function depends on how that function was called.
In your non-working example, this.hookNav is inside of the renderItem method. So it will adopt the this of that method - which, as I just said, depends on how it is called. Further inspecting your code shows that the renderItem method isn't called directly by your component but passed as a prop (called renderItem) of the FlatList component. Now I don't know how that component is implemented (I've never used React Native), but almost certainly it calls that function at some point - and when it does, it can't be in the context of your component. This is because FlatList can't possibly know where that function prop is coming from, and when it calls it, it will be treated as an "ordinary" function, not in the context of any particular object. So its this context won't be your component instance, as intended, but will simply be the global object - which doesn't have any hookNav method.
This problem doesn't happen in your second example because you simply access the function as part of this.state - there is no passing of a method, like the renderItem in the previous example, which depends on an internal this reference. Note that this won't be the case if your hookNav refers to this inside it.
Such problems with this context are very common in React, but fortunately there are two general solutions which will always fix this:
*
*Bind methods in your component's constructor. If the constructor, in the first example, includes the statement this.renderItem = this.renderItem.bind(this);, then it would work fine. (Note that this is recommended in the official React docs.) It's the "best" solution technically (performance is slightly better than in the other option) - but does involve quite a bit of boilerplate if you have lots of methods which need such binding.
*Use arrow functions to define the methods instead. Instead of renderItem({ item, index }) {...}, write it as renderItem = ({ item, index }) => {...}. This works because arrow functions adopt the this of their lexical scope, which will be the class itself - so in other words this will always refer to the component instance, as you almost always want.
Don't worry if this is confusing - the behaviour of the this keyword in JS is a common stumbling block for beginners to JS, and often enough for those more experienced too. There are a number of good explanations online which demystify it, of which I can especially recommend this one.
A: You have to bind the method in to this of your class. One way to do so will be like below.
constructor(props){
super(props);
this.renderItem = this.renderItem.bind(this);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/62604230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: linux - Bash Terminal not allowing usage of scripts even if terminal is at current directory of script itself So, I had Fedora installed on my computer, since Windows gave me no choice last time.
Then, after I installed Fedora, I learned that Fedora was actually Linux - only modified - and thus I wanted to try out the programming language that it has - BASH.
And my search for tutorials went on, and I found one. And it told me to do something like:
#!/bin/bash
echo Hello World.
Then I wanted to try it out, so I saved it as testscript.sh, opened Terminal, and this happened.
[JRGarcia@localhost ~]$ ./testscript.sh
bash: ./testscript.sh : command not found
So I thought to myself: "That just ain't right!"
And then on went my rampant rage across my room, and everything I have is destroyed right now.
What do I have to do? I saw some videos of tutorials and that method worked fine for them. BTW, I saved the script in /home/JRGarcia, which is what Terminal uses as a starting directory.
A: If you have the x permission on the script and cannot execute it, it may be because you mounted the current partition with the option noexec. See explanation in manpage of mount
You can verify this by running the mount command without any arguments.
A: $ cat > testscript.sh
#!/bin/bash
echo Hello World.
^D
$ chmod +x testscript.sh
$ ./testscript.sh #=> Hello world.
Works fine.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30846732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: Error when adding ext.grid.panel into ext.form.panel I have this grid and form panels. I want to add a grid of data into the form panel and still have buttons from the form panel.
myGrid= Ext.create('Ext.grid.Panel',{
layout: {
type: 'fit'
},
id: 'gridId',
title: null,
store: myDataStore,
columns: [{
header: 'header 1',
dataIndex: 'index_1',
width: 120
}, {
header: 'header 2',
dataIndex: 'index_2',
width: 120
}, {
header: 'header 3',
dataIndex: 'index_3',
width: 120
}],
stripeRows: true
})
formPanel= Ext.create('Ext.form.Panel',{
id:'panelId',
items:[myGrid],
buttons: [{
// cancel, save, other buttons
}]
})
But I get this error
HierarchyRequestError: Node cannot be inserted at the specified point in the hierarchy
what do I do wrong?
A: You forgot semicolons after your Ext.create() config.
Seems to work just fine here. I made a fiddle
http://jsfiddle.net/WGgR8/1/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/13620804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is there an [embed] alternative for Flash only (not flex)? Ok we all know that you can embed assets with Flex using the [embed] code.
But is there anyway to achieve something similar to this when working with only Flash?
For example:
I need to create a class (that contains certain assets) that needs to be used in the Flash IDE, but I don't want to have to drop all the assets into the library for every Flash file that happens to use the class.
A: Are you looking for this? http://adobe.com/devnet/flash/articles/embed_metadata.html
A: Hope this will be helpful for you.
http://www.adobe.com/devnet/flex/articles/actionscript_blitting.html
http://blog.nightspade.com/2010/02/01/embedding-asset-at-compile-time-in-pure-as3-project/
http://www.bit-101.com/blog/?p=853
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8599669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to set a cron expression of current date + 14 days I am looking for a cron expression which will help in setting a cron job for the current date + 14 days. This is a repeated exercise which takes place whenever new code is deployed into the dev environment; the current date of the dev deployment is then used to schedule the trigger for prod after 14 days.
Example : This a BAMBOO CLI Command to set scheduled triggers in a Bamboo Plan.
--action addEnvironmentTrigger --deploymentProject "Deploy ZDEPLOY4774565-BASE" --environment "QA" --type "Scheduled" --description "scheduled trigger" --schedule "1 0 0 ? * *"
Now once the dev deployment is successfully completed, it will create a scheduled trigger for prod to be deployed after 14 days. So every time dev is deployed, I need to use the current date of dev.
A: The following cron expression will run every 14 days (note that */14 in the day-of-month field resets at the start of each month, so the gap is not always exactly 14 days):
0 5 */14 * * /your/command/
If you would like to run only once, 14 days from the current date, i.e. on 20th September 2018:
0 0 20 9 ? 2018 /command
WARNING: The year column is not supported in standard/default implementations of cron.
A: Run a cron-job every 14 days starting from day X
If you want to do this, have a look at this answer.
Run a cron-job only ones 14 days after day X
If you want to do this, there are a few options you have. Assume that day X is today (2018-09-06), then I give you the following options:
# Example of job definition:
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0 - 6) (Sunday=0 or 7)
# | | | | |
# * * * * * command to be executed
0 0 6 9 * [[ `date '+\%Y'` == "2018" ]] && sleep 1209600 && command1
0 0 20 9 * [[ `date '+\%Y'` == "2018" ]] && command2
0 0 * * * /path/to/daytestcmd 14 && command3
*
*The first option here simply checks if the year is correct. If so, it sleeps for 14 days and then executes command1.
*The second option computes the execution day itself and runs the command on that day. Again, you have to check manually for the correct year.
The third option is much more flexible for your purpose. The idea is to create a script called daytestcmd which does the check for you. The advantage of this script is that you only need to update one line in the script to change the test and never change the cronjob. The script would look something like:
#!/usr/bin/env bash
# Give reference day
reference_day="20180906"
# Give the number of days to await execution (defaults to zero)
delta_days="${1:-0}"
# compute execution day
execution_day=$(date -d "${reference_day} + $delta_days days" "+%F")
[[ $(date "+%F") == "$execution_day" ]]
So, the next time you want to wait 14 days, just update the script and change the variable reference_day.
note: the OP updated the question. The above script can also be adjusted where reference_day is obtained via a set of commands, for example to retrieve the day of dev deployment. Imagine get_dev_deployment_day is a command which returns that day, then you just need to update
reference_day=$(get_dev_deployment_day)
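The date arithmetic that the bash script delegates to GNU `date` can be sketched in Python as a sanity check (using the example reference day from above):

```python
from datetime import date, timedelta

# Reference day: the dev deployment day used in the example above
reference_day = date(2018, 9, 6)

# Execution day is the reference day plus the 14-day delta
execution_day = reference_day + timedelta(days=14)
print(execution_day.isoformat())  # 2018-09-20
```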
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/52200474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Custom css for paging in codeigniter I need a little help here,
I want to use CodeIgniter's pagination.
Here is my code for showing the pagination.
$this->load->library("pagination");
$config = array();
$key = $this->input->get_post('qfront');
$config["base_url"] = base_url() . "source/";
$config["total_rows"] = $this->fetch_count($key)->count;
$config["per_page"] = 10;
$choice = $config["total_rows"] / $config["per_page"];
$config["uri_segment"] = 2;
$config["num_links"] = round($choice);
$config['full_tag_open'] = '<div class="pagination"><ul>';
$config['full_tag_close'] = '</ul></div><!--pagination-->';
$config['first_link'] = '« First';
$config['first_tag_open'] = '<li class="prev page">';
$config['first_tag_close'] = '</li>';
$config['last_link'] = 'Last »';
$config['last_tag_open'] = '<li class="next page">';
$config['last_tag_close'] = '</li>';
$config['next_link'] = 'Next →';
$config['next_tag_open'] = '<li class="next page">';
$config['next_tag_close'] = '</li>';
$config['prev_link'] = '← Previous';
$config['prev_tag_open'] = '<li class="prev page">';
$config['prev_tag_close'] = '</li>';
$config['cur_tag_open'] = '<li class="active"><a href="">';
$config['cur_tag_close'] = '</a></li>';
$config['num_tag_open'] = '<li class="page">';
$config['num_tag_close'] = '</li>';
$this->pagination->initialize($config);
$page = ($this->uri->segment(2)) ? $this->uri->segment(2) : 0;
$data["results"] = $this->fetch_result($config["per_page"], $page, $key);
$data["links"] = $this->pagination->create_links();
$data["key"] = $key;
$this->load->view("page", $data);
in my view :
<?php echo $links; ?>
And what I got is
How can I make it look like <-prev 1 2 3 4 5 . . . 200 next-> ?
Is it possible? Or maybe with a little nicer CSS?
A: In the $config array passed into the pagination initialize() method, you can set the number of links to display on either side of the current page with num_links:
$config['num_links'] = 2;
From the CI user guide:
The number of "digit" links you would like before and after the selected page number. For example, the number 2 will place two digits on either side, as in the example links at the very top of this page.
That should get you closer to exactly what you want. I don't know the exact HTML generated by this library, so if you post it maybe someone can give you a CSS solution. You could also look into extending the CI pagination library to suit your needs (check out "Extending Native Libraries").
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/20573882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Clean map periodically I have a class A managing a map.
class A {
public:
A() {}
void addElem(uint8_t a, const B& b) {
std::lock_guard<std::mutex> lock(_mutex);
auto result = _map.emplace_hint(_map.end(), a, b);
_deque.push_back(std::make_pair(result, time(nullptr)));
}
void cleanMap() {
std::lock_guard<std::mutex> lock(_mutex);
_map.erase(_deque.front().first);
_deque.pop_front();
}
private:
std::map<uint8_t, B> _map;
std::deque<std::pair<std::map<uint8_t, B>::iterator, time_t>> _deque;
std::mutex _mutex;
};
As I add a lot of elements to my map, I want to periodically clean it by removing the elements that were inserted first.
if (difftime(time(nullptr), _deque.front().second) > EXPIRY) {
cleanMap();
}
The following code crashes at some point when I try to pop an element from the deque:
double free or corruption (fasttop): 0x00007fffdc000900 ***
Does the above code make sense? If yes, where could the error be? And if not, how can I periodically clean a map?
A: You have problems when you add elements with the same key.
When emplace_hint is called with a key which already exists, you push duplicated iterators onto the deque (emplace_hint returns the iterator to the already existing element, map::emplace_hint).
When the deque and map are cleaned, you call map::erase, but it accepts only valid and dereferenceable iterators. So when erase is called for a duplicated iterator (map::erase), the code crashes because that item was already deleted in a previous erase call.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/55019915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Uploadify onComplete not firing I am using the Yii framework with Uploadify. The upload works perfectly but the onComplete event does not get fired.
It seems that the main problem for most users is that the .php file does not return anything, so Flash does not know when the .php script has finished. But my script does echo "OK" at the end, so this is not the problem.
My Code:
$('#element').uploadify({
'uploader' : 'index.php?page=upload',
'swf' : 'javascript/uploadify.swf',
onComplete : function(event, queueID, fileObj, response, data) {
alert(event); alert(queueID); alert(fileObj); alert(response); alert(data);
},
});
My PHP Code looks like:
//Upload things
echo 'OK';
The upload works perfectly, but the onComplete event is not fired. I tried to remove all upload things, so that the php file looks like:
echo 'OK';
but that also does not work.
Thanks for your help!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/6722785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Jinja2 template if/else statement In a jinja2 template file, I'm attempting to find a string in a server hostname, then apply the settings I want based on what it finds. Here is what I have. This works; however, everything is getting the developmenthosts setting.
{% if 'dev' or 'tst' or 'test' or 'eng' in ansible_hostname %} {{ developmenthosts }} {% else %} {{ productionhosts }} {% endif %}
Any help would be great; I'm fairly new to using jinja2 templates.
A: Try the following: using regex_search, the if/else can be written in a shorter and cleaner format.
"{{ developmenthosts if ( ansible_hostname|regex_search('dev|tst|test|eng') ) else productionhosts }}"
Example:
---
- name: Sample playbook
connection: local
# gather_facts: false
hosts: localhost
vars:
developmenthosts: MYDEVHOST
productionhosts: MYPRODHOST
tasks:
- debug:
msg: "{{ ansible_hostname }}"
- debug:
msg: "{{ developmenthosts if ( ansible_hostname |regex_search('dev|tst|test|eng') ) else productionhosts }}"
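Why the original condition matched every host can also be seen outside Jinja2; plain Python parses the expression the same way (the hostname below is a made-up example):

```python
hostname = "prodbox01"  # hypothetical production hostname

# The original condition parses as: 'dev' or ('tst' or ('test' or ('eng' in hostname)))
# A non-empty string literal is always truthy, so the whole expression is 'dev'.
broken = 'dev' or 'tst' or 'test' or 'eng' in hostname
print(bool(broken))  # True, regardless of the hostname

# What was intended: test each substring against the hostname.
fixed = any(s in hostname for s in ('dev', 'tst', 'test', 'eng'))
print(fixed)  # False for this production host
```

This is why the regex_search approach in the answer works: it tests the hostname against all the alternatives at once.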
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/69378649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Upload ALAsset URL images to the server I am trying to access the iPhone's photo album images through ALAssetsLibrary and upload (send) the images to my server. I can successfully access the photo album and get the asset URL of each image via the following code:
- (void)viewDidLoad
{
[super viewDidLoad];
void (^assetEnumerator)(struct ALAsset *, NSUInteger, BOOL *) = ^(ALAsset *result, NSUInteger index, BOOL *stop) {
if(result != NULL) {
NSLog(@"See Asset: %@", result);
[assets addObject:result];
// Here storing the asset's image URL's in NSMutable array urlStoreArr
NSURL *url = [[result defaultRepresentation] url];
[urlStoreArr addObject:url];
}
};
void (^assetGroupEnumerator)(struct ALAssetsGroup *, BOOL *) = ^(ALAssetsGroup *group, BOOL *stop)
{
if(group != nil) {
[group enumerateAssetsUsingBlock:assetEnumerator];
}
[self.activity stopAnimating];
[self.activity setHidden:YES];
};
assets = [[NSMutableArray alloc] init];
library = [[ALAssetsLibrary alloc] init];
[library enumerateGroupsWithTypes:ALAssetsGroupAlbum
usingBlock:assetGroupEnumerator
failureBlock: ^(NSError *error) {
NSLog(@"Failure");
}];
urlStoreArr = [[NSMutableArray alloc] init];
}
-(void) UploadImagesToServer
{
for (int i=0; i<[urlStoreArr count]; i++)
{
// To get the each image URL here...
NSString *str = [urlStoreArr objectAtIndex:i];
NSLog(@"str: %@",str);
// Need to upload the images to my server..
}
}
I want to upload the images, which I got from the device assets' URLs, to my server. Could someone please advise how I can send the images to the server in this case?
Thank you!
A: You could use the code from the SO posts below for uploading images to a server.
How can I upload a photo to a server with the iPhone?
upload image from iphone to the server folder
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/5442653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to use socket.io and Express.js with IISNode I have set up a Windows 2012 R2 server with IIS, IISNode and the URL Rewrite module.
This all seems to work just fine; simple applications with Express also seem to work well.
But when I add socket.io to the application, I get stuck. I get the following error:
GET http://mywebsiteadress/socket.io/socket.io.js net::ERR_ABORTED
I use IIS version 8.5.96, express version 4.16.2 and socket.io version 2.0.4.
Server:
let app = require('express')(),
server = require('http').Server(app),
io = require('socket.io')(server);
server.listen(process.env.PORT);
app.get('/',(req,res)=>{
res.sendFile(__dirname + '/index.html');
})
io.on('connection',socket=>{
socket.emit('test','working');
});
EDIT:
I changed
io = require('socket.io')(server);
to:
io = require('socket.io')(server,{
path: '/SocketIO/socket.io'
});
And now I'm getting this error:
polling-xhr.js:264 GET http://mywebsiteadress/socket.io/?EIO=3&transport=polling&t=M6RHmTd 404 (Not Found)
i.create @ polling-xhr.js:264
i @ polling-xhr.js:165
o.request @ polling-xhr.js:92
o.doPoll @ polling-xhr.js:122
r.poll @ polling.js:118
r.doOpen @ polling.js:63
r.open @ transport.js:80
r.open @ socket.js:245
r @ socket.js:119
r @ socket.js:28
r.open.r.connect @ manager.js:226
r @ manager.js:69
r @ manager.js:37
r @ index.js:60
(anonymous) @ (index):12
Client:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<title>Socket IO Test Page</title>
</head>
<body>
<script src='socket.io/socket.io.js'></script>
<script>
let socket = io.connect()
socket.on('test',data => {
console.log(data);
})
</script>
</body>
</html>
Web.config:
<configuration>
<system.webServer>
<handlers>
<add name="iisnode" path="index.js" verb="*" modules="iisnode" />
</handlers>
<rewrite>
<rules>
<rule name="LogFile" patternSyntax="ECMAScript">
<match url="socket.io"/>
<action type="Rewrite" url="index.js"/>
</rule>
</rules>
</rewrite>
<webSocket enabled="false"/>
</system.webServer>
</configuration>
What am I doing wrong? I have searched the internet and this forum and tried a lot of things, but none seem to solve it for me.
I can't seem to find documentation on setting up socket.io with IISNode.
I hope someone can help me.
Kind regards
A: Looks like your rewrite rule is wrong. Did you forget to add ".+" after your url attribute?
<rule name="SocketIO" patternSyntax="ECMAScript">
<match url="socket.io.+"/>
<action type="Rewrite" url="server.js"/>
</rule>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/48810509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Select options won't display when parent div is shown after being hidden I have a select box that is nested inside three divs as such:
<div id="entry">
<div id="entryContent">
<div>
<div>
Name:
<select name="ddlName" id="ddlName">
<option value="Name1">Name1</option>
<option value="Name2">Name2</option>
<option value="Name3">Name3</option>
</select>
</div>
...
</div>
...
</div>
...
</div>
The CSS:
#entry
{
position:absolute;
width:527px;
height:364px;
left:69px;
top:214px;
z-index:2;
display:none;
}
The following jquery code is to show/hide the entry div based on clicking certain buttons:
$("document").ready(function() {
$("#addButton").live("click", function(event) {
$("#entry").show();
})
$("#closeEntry").live("click", function(event) {
$("#entry").hide();
})
})
The problem is that when I first click the add button, it shows the entry div and the select box works fine. After I click the close button and click the add button again, the select box will not let me select another option. It works fine in firefox but not ie7.
A: Found that it has something to do with the jQuery UI CSS file. When I exclude it, it works. When I added an effect to the hide() and show() functions, it worked properly.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/5312234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I break a number down into a percentage (0 - 100%)? Details inside I am using a JS progress bar that is set using a percentage: 0 to 100 (percent). I need the progress bar to reach 100% when 160,000 people have signed a certain form. I have the total number of signers set in a PHP variable but am lost on how to do the math to convert that into a percentage that fits within 0 - 100 (so that the progress bar actually reflects the goal of 160,000).
I may be missing something obvious here (I suck at anything number-related), so does anyone here have a clue as to how to do this?
A: Percentage calculation is basic mathematics:
$total = 160000;
$current = 12345;
$percentage = $current/$total * 100;
A: Just
percentage = number/160000 * 100
A: If N is the goal, and X is the number of people that have signed so far, then the percentage is (X/N)*100.
A: ...you can't convert a number to a percentage?
$percent = ($currentNumber / 160000) * 100;
If you don't want a float answer you can just cast or round it however you'd like.
A: 160,000/160,000 = 1 = 100%
160,000/2 = 0.5 = 50%
Given this, the calculation should be easy. The numerator is the number completed and the denominator is the "goal" -- the total to complete.
So once you are 80,000 you'd be 50% complete and your progress bar would render as such.
If you need to work with smaller numbers, divide everything by 100 or 1000 and you can reduce the size of the numbers in a relevant manner.
160,000 / 1000 = 160, so 160 would be your denominator. Then 50% would be 80/160. Does this make sense?
A: Do it Microsoft style:
current = 12345
percentage = 100*(1-exp(-current/100000))
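A minimal sketch of the same idea in Python, with the result clamped to the 0-100 range a progress bar expects (the function name and the clamping are illustrative additions, not from the answers above):

```python
def progress_percent(current, goal=160000):
    """Map a signature count onto a 0-100 progress-bar value."""
    pct = current / goal * 100
    # Clamp so over- or under-counts never break the bar
    return max(0.0, min(100.0, pct))

print(progress_percent(80000))   # 50.0
print(progress_percent(200000))  # 100.0 (capped at the goal)
```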
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2768503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Multitenancy and Partitioning Hi there,
I have to make my application SaaS-compliant. For achieving multi-tenancy, I was thinking of partitioning the data, with each partition belonging to a tenant. This partitioning will be done dynamically.
Has anybody done something like this?
What do you think the better approach would be?
I am using SQL 2005
Regards
DEE
A: There is a limit of 1000 partitions per partition scheme and you can only partition on a single field, so if you intend to multi-tenant beyond 1000 instances you are going to have to jump through a lot more hoops. You can extend the limit by using a partitioned view on top of multiple partitioned tables, but this increases the management overhead. You can use the DMVs and create your own automated system that generates new partitions per client / tenant and manages the problem but it will be specific to your application and not generic.
At present there is no automatic dynamic partitioning in SQL Server, it was mentioned at the PDC09 in relation to the SQL Azure future roadmap, but I did not hear of it for SQL Server.
Your alternative choices are a database or SQL instance per client. There are benefits to this approach in that you give yourself far more opportunity to scale out if the need arises, and if you start looking at a larger data centre, you can start balancing the SQL instances across a farm of servers, etc., none of which is easy if all the data automatically lives in a single database.
Other things to take into consideration:
Security: Whilst you have the data in a single database with partitioning, you have no data protection. You risk exposing one client's data to another very trivially with any single bug in the code.
Upgrading: If all the clients access the same database, then the upgrades will be an all or nothing approach - you will not be able to easily migrate some users to a new version whilst leaving the others as they were.
Backups: You can make each partition occupy a separate filegroup and try to manage the situation. Out of the box, so to speak, every client's backups are mingled together. If a single client asks for a rollback to a given date, you have to plan carefully in advance how that could be executed without affecting the other users of the system.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/1940272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Using pwelch to a set of signals: some questions (Matlab) I would like to use pwelch on a set of signals and I have some questions.
First, let's say that we have 32 (EEG) signals of 30 seconds duration. The sampling frequency is fs=256 samples/sec, and thus each signal has length 7680. I would like to use pwelch in order to estimate the power spectral density (PSD) of those signals.
Question 1:
Based on the pwelch's documentation,
pxx = pwelch(x) returns the power spectral density (PSD) estimate, pxx, of the input signal, x, found using Welch's overlapped segment averaging estimator. When x is a vector, it is treated as a single channel. When x is a matrix, the PSD is computed independently for each column and stored in the corresponding column of pxx.
However, if call pwelch as follows
% ch_signals: 7680x32; one channel signal per each column
[pxx,f] = pwelch(ch_signals);
the resulting pxx is of size 1025x1, not 1025x32 as I would expect, since the documentation states that if x is a matrix the PSD is computed independently for each column and stored in the corresponding column of pxx.
Question 2:
Let's say that I overcome this problem, and I compute the PSD of each signal independently (by applying pwelch to each column of ch_signals), I would like to know what is the best way of doing so. Granted that the signal is a 30-second signal in time with sampling frequency fs=256, how should I call pwelch (with what arguments?) such that the PSD is meaningful?
Question 3: If I need to split each of my 32 signals into windows and apply pwelch to each one of those windows, what would be the best approach? Let's say that I would like to split each of my 30-second signals into windows of 3 seconds with an overlap of 2 seconds. How should I call pwelch for each one of those windows?
A: Here is an example, just like your case.
The results show that the algorithm identifies the signal frequencies just right.
Each column of the matrix y is a sinusoid, to check how it works.
The windows are 3 seconds long with 2 seconds of overlap:
Fs = 256;
T = 1/Fs;
t = (0:30*Fs-1)*T;
y = sin(2 * pi * repmat(linspace(1,100,32)',1,length(t)).*repmat(t,32,1))';
for i = 1 : 32
[pxx(:,i), freq] = pwelch(y(:,i),3*Fs,2*Fs,[],Fs); %#ok
end
plot(freq,pxx);
xlabel('Frequency (Hz)');
ylabel('Spectral Density (Hz^{-1})');
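For Question 3, the window bookkeeping behind `pwelch(y(:,i), 3*Fs, 2*Fs, ...)` can be sketched in plain Python as a sanity check; only the segmentation is shown here, not the full PSD estimate:

```python
fs = 256                      # samples per second
n = 30 * fs                   # 7680 samples per 30-second channel

win = 3 * fs                  # 3-second window -> 768 samples
overlap = 2 * fs              # 2-second overlap -> 512 samples
step = win - overlap          # windows advance by 1 second (256 samples)

starts = list(range(0, n - win + 1, step))
print(len(starts))            # 28 windows fit into one 30-second signal
print(starts[-1] + win == n)  # True: the last window ends exactly at the end
```

So Welch's method averages 28 overlapping 3-second segments per channel with these parameters.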
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/35528353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Logging for Custom Code Written in Spark We are writing Scala code for Apache Spark and running the process in Yarn mode (yarn-client mode) in Cloudera 5.5. The Spark version is 1.5.
I need to do logging for this code and want to move the logs into a specific directory, outside of the noise in the Spark logs.
We are using plain log4j. We don't have time for a logging trait for now.
I have changed the default log4j file in ${spark_home} like this:
# Set everything to be logged to the console
log4j.rootLogger=ERROR, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
# Spark Logs
log4j.appender.aAppender=org.apache.log4j.RollingFileAppender
log4j.appender.aAppender.File=/var/log/aLogger.log
log4j.appender.aAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.aAppender.layout.ConversionPattern=[%p] %d %c %M - %m%n
log4j.appender.aAppender.maxFileSize=50MB
log4j.appender.aAppender.maxBackupIndex=5
log4j.appender.aAppender.encoding=UTF-8
# My custom logging goes to another file
log4j.logger.aLogger=ERROR, aAppender
This is my Code in scala
This is inside main method
val logger =LogManager.getLogger("aLogger")
val jobName = "AData"
logger.warn("jobName ::" +jobName +" Started at ::" +
+Calendar.getInstance().getTime())
val sc = new SparkContext(conf)
val sqlContext = new org.apache.spark.sql.hive.HiveContext(sc)
val sourceDF = sqlContext.read
.format("com.databricks.spark.avro")
.load(pathToFiles)
finalCSMDF.map { row =>
//To remove Serilizable Exception
var loggers = org.apache.log4j.LogManager.getLogger("aLogger")
loggers.error("Test Log")
row
}
logger.warn("jobName ::" +jobName +" completed at ::" +
+Calendar.getInstance().getTime())
My problem here is that whenever I run this in yarn mode, I find the lines like jobName :: AData Started at ... in /var/log/aLogger.log, but not the loggers.error("Test Log") call which is inside the closure. Basically, everything logged in the driver ends up in the file, but nothing logged inside the executors' closures does.
Please help
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/36609689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Migrating from mongodb to firebase My app currently uses mongodb and I'm looking forward to migrating it to firebase instead.
How easy is it to do and are there things I have to watch out for.
A: Doesn't Firebase store all its data in MongoDB?
Update May 2016
Apparently a page where Firebase was mentioned in MongoDB's site was removed (http://www.mongodb.org/about/production-deployments/)
After some search on their site I've found another page in their blog
https://www.mongodb.com/post/45270275688/mongodbs-growing-ecosystem (mirror)
where they say:
it’s great to see so many companies building on MongoDB. Here are just a few:
*
*Modulus. A Node.js platform as a service (PaaS) offering, Modulus includes MongoDB as its default data store. This follows related offerings from Meteor and Firebase.
An alternative to MongoDB would be RethinkDB and recently the team behind RethinkDB released Horizon, an open-source backend platform on NodeJS and that is kind of a locally hosted Firebase. Here's a nice talk about Horizon.
A: Given that both MongoDB and Firebase are non-relational in nature, most of your data should map to Firebase cleanly. The Firebase REST endpoints support regular JSON, so getting your data in (and, back out if you choose) should also be easy. The main areas you need to keep watch for are:
*
*The Firebase API is realtime/asynchronous in nature; specifically, when clients are reading data. Migrating your backend request/response code to the client and using this approach will probably be the biggest area with regard to level of effort.
*There will also be a disparity in the feature set that MongoDB and Firebase provide; notable areas include Mongo's support for doing things like MapReduce, Cursors, and free-text queries (Firebase doesn't currently support these areas).
The other thing to keep in mind is that Firebase isn't an all-or-nothing type of undertaking. Apps can definitely take advantage of the realtime, scaling, and platform features piecemeal.
A: Not specifically answering the question, but if you find Firebase lacking a few features which you are used to with Mongo,
I found a node package which will allow you to run both, with Firebase as the master DB.
Firebase
*
*Security/Authentication
*Sockets
MongoDB
*
*querying
*indexing
*aggregation
https://www.npmjs.org/package/mongofb
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/16043822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15"
}
|
Q: AngularJS : $watch vs. $observe when newValue === oldValue If this is already explained or discussed somewhere, I am very sorry, but I couldn't find this exact problem discussed anywhere.
So I have an angular directive with one data binding 'myvar' (= or @ makes no difference). The value from the data binding is used in the directive: scope.myvarStr = scope.myvar + 'somestring'. Then I bind myvarStr in the template.
Because scope.myvarStr must be modified when scope.myvar changes, I used $watch('myvar', function(...)) to watch the value and update scope.myVarStr when needed. In the watch function I put the classic if (newValue === oldValue) return;
The problems started the very first time $watch fired and the two values were equal; then the view was not updated. From a console.log(scope.myvar) on the first line of the link function I could easily see that scope.myvar was undefined (or '', depending on the binding type) to begin with, and that the value had changed to something else by the time I did a console.log in the $watch.
I googled for an hour or so, and found this: https://github.com/angular/angular.js/issues/11565
However, this issue wasn't discussed anywhere else, so I looked googled more and came across $observe AngularJS : Difference between the $observe and $watch methods
When I changed from $watch to $observe, all my problems went away and I can still use if(newValue === oldValue) return;.
(function(directives) {
'use strict';
directives.directive('someDir', [function() {
return {
restrict: 'E',
scope: {
myvar: '=' //or @ didn't matter at the time...
},
template: '<p>{{myvarStr}}</p>',
link: function(scope, el, attrs) {
function toString() {
if (scope.myvar < 1000) {
scope.myvarStr = scope.myvar;
} else {
scope.myvarStr = scope.myvar/1000 + 'k';
}
}
toString();
scope.$watch('myvar', function(newValue, oldValue) {
console.log("changed", newValue, oldValue)
if (newValue == oldValue) return;
toString();
},true);
// When changing to $observe it works :)
//attrs.$observe('myvar', function(newValue, oldValue) {
// console.log("changed", newValue, oldValue)
// if (newValue == oldValue) return;
// toString();
//},true);
}
};
}]);
}(angular.module('myApp.directives')));
Suggestion: As far as I understand, this issue occurs with $watch because the scope value is never changed. It takes a while for the directive to pick up the value; until then the binding is just an empty string or something. When the value is detected, the $watch fires, but the actual scope value has not changed (or, as explained in the first link, the first watch fires when the value 'appears' in the directive).
A: I don't quite understand your suggestion/explanation, but I feel like things are much simpler than you make it appear.
You don't need the newValue === oldValue test, because your watch-action is idempotent and cheap. But even if you do, it only means you need to initialize the value yourself (e.g. by calling toString() manually), which you seem to be doing and thus your directive should work as expected. (In fact, I couldn't reproduce the problem you mention with your code.)
Anyway, here is a (much simpler) working version:
.directive('test', function testDirective() {
return {
restrict: 'E',
template: '<p>{{ strVal }}</p>',
scope: {
val: '='
},
link: function testPostLink(scope) {
scope.$watch('val', function prettify(newVal) {
scope.strVal = (!newVal || (newVal < 1000)) ?
newVal :
(newVal / 1000) + 'K';
});
}
};
})
BTW, since you are only trying to "format" some value for display, it seems like a filter is more appropriate (and clearer, imo) than a $watch (see the above demo):
.filter('prettyNum', prettyNumFilter)
.directive('test', testDirective)
function prettyNumFilter() {
return function prettyNum(input) {
return (!input || (input < 1000)) ? input : (input / 1000) + 'K';
};
}
function testDirective() {
return {
template: '<p>{{ val | prettyNum }}</p>',
scope: {val: '='}
};
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30356803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Responsive dataTable is not working with Bootstrap 4.3.1 I have an issue with the responsive DataTables plugin and Bootstrap 4.3.1 in this specific scenario:
I have an extra large modal which displays a table; this table has 3 collapsed columns with some contact information. This works as expected:
However, on a mobile display the table doesn't collapse the other columns as I would expect, and the table is not wrapped inside the modal:
I used the following example at DataTables.net and loaded basically the same files, with the exception of Bootstrap, which is version 4.3.1 and not 4.1.3.
My HTML:
<div class="modal fade bd-example-modal-xl" tabindex="-1" role="dialog" aria-labelledby="myExtraLargeModalLabel" id="modalOpciones" aria-hidden="true">
<div class="modal-dialog modal-xl">
<div class="modal-content">
<div class="modal-header">
<h4 class="modal-title">Opciones:</h4>
<button type="button" class="close" data-dismiss="modal">×</button>
</div>
<div class="modal-body">
<table id="tablaDirectorio" cellspacing="0" class="table table-striped table-bordered dt-responsive " style="width:100%">
<thead>
<tr>
<th>id</th>
<th>Nombre</th>
<th>Apellido</th>
<th>Departamento</th>
<th>Puesto</th>
<th>Fecha Creado</th>
<th>Extension</th>
<th class="none">Telefono</th>
<th class="none">Celular</th>
<th class="none">Email</th>
</tr>
</thead>
<tfoot>
<tr>
<th>id</th>
<th>Nombre</th>
<th>Apellido</th>
<th>Departamento</th>
<th>Puesto</th>
<th>Fecha Creado</th>
<th>Extension</th>
<th>Telefono</th>
<th>Celular</th>
<th>Email</th>
</tr>
</tfoot>
</table>
</div>
<div class="modal-footer">
<button type="button" class="btn btn-outline-dark" data-dismiss="modal">Cerrar</button>
</div>
</div>
</div>
</div>
My Javascript:
$('#opciones-list').on('click', 'a' ,(e) => {
$('#modalOpciones').modal('show');
var tablaEditar=$('#tablaDirectorio').DataTable({
"destroy": true,
"responsive":/*{
"details": {
renderer: function ( api, rowIdx, columns ) {
var data = $.map( columns, function ( col, i ) {
return col.hidden ?
'<tr data-dt-row="'+col.rowIndex+'" data-dt-column="'+col.columnIndex+'">'+
'<td>'+col.title+':'+'</td> '+
'<td>'+col.data+'</td>'+
'</tr>' :
'';
} ).join('');
return data ?$('<table/>').append( data ) :false;
}
}
}*/true,
"autoWidth": false,
"ajax": {
"url": 'controller/controller.php',
"method": 'POST',
data:{accion:'getTabla'}
},
"columns": [
{"data": "directorio"},
{"data": "nombres"},
{"data": "apellidos"},
{"data": "departamento_nombre"},
{"data": "puesto"},
{"data": "fechacreado"},
{"data": "Extension"},
{"data": "Telefono"},
{"data": "Celular"},
{"data": "Email"}
],
"language":idioma_spanol,
"columnDefs": [
{"className": "dt-center", "targets": "_all"}
]
});
})
Hope someone can help me with this one!
Thanks in advance
A: CAUSE
Your table is hidden initially which prevents jQuery DataTables from initializing the table correctly.
SOLUTION
Use the code below to recalculate the column widths of all visible tables once the modal becomes visible, using a combination of the columns.adjust() and responsive.recalc() API methods.
$('#modal').on('shown.bs.modal', function(e){
$($.fn.dataTable.tables(true)).DataTable()
.columns.adjust()
.responsive.recalc();
});
You can also adjust single table by using table ID, for example:
$('#modal').on('shown.bs.modal', function(e){
$('#example').DataTable()
.columns.adjust()
.responsive.recalc();
});
LINKS
*
*jQuery DataTables: Column width issues with Bootstrap tabs - Responsive extension – Incorrect breakpoints.
*Datatable jquery - table header width not aligned with body width
A: Try wrapping the table inside a div and then giving that div the style overflow: auto;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58291901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: ImageMagick code needs to be edited, please check it out Please check my ImageMagick code, where I am trying to add a "size" variable, but I can't work out where exactly the size variable needs to be added, in the same way as "color" and "string"...
$animation = new Imagick();
$animation->setFormat( "gif" );
$color = new ImagickPixel( "blue" );
$color->setColor( "red" );
$string = "kothi!";
$draw = new ImagickDraw();
$draw->setFont( "arial.ttf" );
for ( $i =0; $i <= 0; $i++ )
{
$part = substr( $string,100, $i);
$animation->newImage( 100, 50, $color);
$animation->annotateImage( $draw, 100, 100, 100, $part );
$animation->setImageDelay( 30 );
}
$draw->setFont( "arial.ttf" );
$animation->newImage( 100, 50, $color);
$animation->annotateImage( $draw, 10, 10, 0, $string );
$animation->setImageDelay( 120 );
header( "Content-Type: image/gif" );
echo $animation->getImagesBlob();
A: bool Imagick::setSize ( int $columns , int $rows )
Sets the size of the Imagick object. Set it before you read a raw image format such as RGB, GRAY, or CMYK.
--php.net
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8701755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to pass a variable from Stimulus JS to a Rails view file? The problem I am facing:
The order of things I want to do is:
*
*Open modal from @posts partial
*Enter the posting deadline in the form on modal
*POST the contents of the form to rails action that updates the posting flag to true
When the modal is opened in step 1 above, it loses track of the partial's post record information, so I'm getting the partial's post.id with the Stimulus getter, but I'm having trouble embedding that id in the URL specified in the form.
Development Env
ruby 3.1.2
rails 7.0.3
importmap-rails 1.1.2
tailwindcss-rails 2.0.10
stimulus-rails 1.0.4
Related source code
_post.html.erb
<button data-action="click->modal#open" data-modal-id-param=<%= post.id %> class="flex justify-center ml-1 py-2 px-4">
</button>
modal_controller.js
import { Controller } from "@hotwired/stimulus"
export default class extends Controller {
static targets = ["modal", "form"]
open(event) {
event.preventDefault();
this.modalTarget.showModal();
this.modalTarget.addEventListener('click', (e) => this.backdropClick(e));
this.idValue = event.params
console.log(this.idValue)
}
backdropClick(event) {
event.target === this.modalTarget && this.close(event)
}
close(event) {
event.preventDefault();
this.modalTarget.close();
if (this.hasFormTarget) {
this.formTarget.reset();
}
}
}
_publish_limit_form.html.erb
<dialog class="bg-white shadow-xl w-4/5 md:w-2/5 backdrop:bg-gray-700 backdrop:bg-opacity-50 open:animate-fade-in open:backdrop:animate-fade-in" data-modal2-target="modal">
<button type="button" data-action="modal#close" class="float-right">
<svg xmlns="http://www.w3.org/2000/svg" class="h-6 w-6" fill="none" viewBox="0 0 24 24" stroke="currentColor">
<path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M6 18L18 6M6 6l12 12" />
</svg>
</button>
<%= form_with url: start_to_publish_admin_post_path(????), method: :patch, data: { modal2_target: 'form2'} do |f| %>
// <- Above, I want to embed the id in the specified url by extracting post.id from stimulus. advice please! ! ->
<div class="flex px-4 py-5 flex-wrap">
<p class= "text-lg leading-6 font-base text-gray-800"><%= t('.please_set_publish_limit')%></p>
<div class="w-full"></div>
<%= f.datetime_select :publish_limit, { min: Time.zone.now, minute_step: 30, discard_year: true, discard_second: true, start_minite: 30, start_second: 0 }, { class: "leading-none rounded mt-2" } %>
<span class="ml-2 text-lg leading-6 font-base text-gray-800 pt-3"><%= t('.until') %></span>
</div>
<%= f.hidden_field :publish_status, value: 'is_publishing' %>
<div class="flex items-center justify-center mt-2">
<%= f.submit 'OK', class: "mt-3 font-bold leading-normal text-white py-4 px-10 bg-blue-700 rounded hover:bg-blue-600 focus:ring-2 focus:ring-offset-2 focus:ring-blue-700 focus:outline-none" %>
</div>
<% end %>
</dialog>
Any advice on what to do would be greatly appreciated.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73795076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I have the serial monitor of the Arduino show a string a value but only the value is changing? I already have my code to work, I'm just trying to make it look nicer now. Just to give a really short summary on what my device does: Smart parking system that detects cars going in and out of a parking lot and displays how many vacant spots are available or not at the entrance. Right now the output looks like this:
Vacant Spots: 1
Vacant Spots: 1
Vacant Spots: 1
Vacant Spots: 1
Vacant Spots: 0
.
.
.
.
.
This is the case when a car is entering, so it decrements by 1, and it increments when a car leaves, since there's an added vacant spot available. What I'm trying to do is have the output look like this:
"
Vacant Spots: 1
"
The only thing I want to change is the numerical value. I don't want a continuous stream of
" Vacant Spots: 1 "s to show at the LCD display for the parking user to see. Is there a way to just clear the serial monitor after the loop ends without having it output a new value below it continuously? I've provided the code for the program. I have 3 xbees (1 coordinator and 2 routers). The two routers dont have code on them and are just sending data to the coordinator. The coordinator is where it receives the data. This is the code for the coordinator:
int readValue = 0;
void setup()
{
Serial.begin(9600);
}
int vslots = 1;
void loop()
{
if(Serial.available()>21)
{
if(Serial.read() == 0x7E)
{
for(int i=0;i<19;i++)
{
byte discard = Serial.read(); //Next 18 bytes, it's discarded
}
readValue = Serial.read();
bool flagTrue = false;
bool flagFalse = false;
if((readValue == 0) && flagTrue == false ) //EXIT
{
flagTrue = true;
flagFalse = false;
Serial.print("Vacant Spots: ");
Serial.println(vslots);
}
else if((readValue == 16 && flagFalse == false) && vslots >= 1)
//DECREMENT (CAR ENTERING)
{
flagTrue = false;
flagFalse = true;
vslots -= 1;
Serial.print("Vacant Spots: ");
Serial.println(vslots);
}
if((readValue == 18) && flagTrue == false )
{
flagTrue = true;
flagFalse = false;
Serial.print("Vacant Spots: ");
Serial.println(vslots);
}
else if((readValue == 19 && flagFalse == false) && vslots <= 10)
//INCREMENT (CAR EXITING)
{
flagTrue = false;
flagFalse = true;
vslots += 1;
Serial.print("Vacant Spots: ");
Serial.println(vslots);
}
}
}
}
A: First, you need to understand that the Arduino serial monitor is not like real terminal software: it does not support commands sent over UART. So to achieve what you want, you will need real terminal software such as PuTTY, which allows you to make changes by sending character bytes over the UART connection.
//ADD these lines before printing on a serial terminal
Serial.write(27); // 27 is an ESC command for terminal
Serial.print("[2J"); // this will clear screen
Serial.write(27); // 27 is an ESC command for terminal
Serial.print("[H"); // move cursor to home
//now print what you want...
Serial.print("Vacant Spots: ");
Serial.println(vslots);
For more info and methods you can check this post here, but having tried everything, I suggest going with PuTTY.
A: The ln suffix in println stands for new line. That means println will "print" the value and a newline.
So as an initial solution you could change from using println to plain print:
Serial.print("Vacant Spots: ");
Serial.print(vslots); // Note: not println
Then we have to solve the problem of going back to the beginning of the line to overwrite the old text. This is typically done with the carriage-return character '\r'. If we add it to the beginning of the printout:
Serial.print("\rVacant Spots: "); // Note initial cariage-return
Serial.print(vslots); // Note: not println
Then each printout should go back to the beginning of the line, and overwrite the previous content on that line.
To make sure that each line is equally long, you could use e.g. sprintf to format the string first, and then write it to the serial port:
char buffer[32];
// - for left justification,
// 5 for width of value (padded with spaces)
sprintf(buffer, "\rVacant Spots: %-5d", vslots);
Serial.print(buffer);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64905860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Can I develop a desktop app in VS2008 that can also run on Mac? What approach? Can I develop a desktop app in VS2008 that can also run on Mac? What approach?
That is, if I am developing an application (thick client) that runs on a Windows XP/Vista PC, is there an approach to do this such that I could also run it on a Mac? (e.g. silverlight?)
The kind of things my winforms type app needs includes:
*
*HttpWebRequest / HTTP calls
*Access to underlying Sqlite database (via ADO.net)
*Windows / Dialogs
*System Tray Presence
I'm currently working in VS2008 and using WinForms, so I'm not really across WPF / Silverlight etc.
Thanks
A: I would suggest using Mono for this project, since you're already in Windows Forms. You'll have difficulty with the database requirement using Silverlight, and WPF will not work at all.
A: Agreed on the Mono front. I would definitely check out Mono Develop - it's the cross platform Mono IDE. It's been released for Mac so that may help you port your app (or even develop it!)
A: I would suggest looking into Silverlight 4.0. Beta version is already available. I expect to see release at Mix 2010 (which is March 15-17th).
From what you requested it already supports
*
*HttpWebRequest / HTTP calls
*Windows / Dialogs (kind of)
*System Tray Presence
As for access to underlying Sqlite database (via ADO.net) this can be achieved through RIA Services.
Silverlight supports out of browser mode, and runs both on Window and Mac. IDE is VS 2010, and (probably) Eclipse.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2293248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Custom Calendar for sales data aggregated by weeks and months I am struggling with creating a Custom Calendar that would allow me to use time intelligence functions on data that is already aggregated by weeks and months. The original table with transactions contains over 20M rows, so for performance and space-saving reasons the grouping is necessary, and I perform it while querying the database in SQL Server. I want to create:
*
*Month vs Same Month Last Year
*Week vs Same Week Last Year
*Year-To-Date vs Last Year Year-To-Date (by Months)
*Year-To-Date vs Last Year Year-To-Date (by Weeks)
My idea was to assign some dummy dates to each row of data, and then create a custom calendar that would have each of these dummy dates along with other details. I just cannot figure out the key: how to create a dummy date from a year number, month number and week number (please keep in mind that, for example, week 5 of 2020 is partially in January and partially in February).
Is it possible? Maybe I have to create 2 separate Calendars, one for Weeks and one for Months?
A: Add calculated table:
Calendar =
GENERATE (
CALENDAR (
DATE ( 2016, 1, 1 ),
DATE ( 2020, 12, 31 )
),
VAR VarDates = [Date]
VAR VarDay = DAY ( VarDates )
VAR VarMonth = MONTH ( VarDates )
VAR VarYear = YEAR ( VarDates )
VAR YM_text = FORMAT ( [Date], "yyyy-MM" )
VAR Y_week = YEAR ( VarDates ) & "." & WEEKNUM(VarDates)
RETURN
ROW (
"day" , VarDay,
"month" , VarMonth,
"year" , VarYear,
"YM_text" , YM_text,
"Y_week" , Y_week
)
)
You can customize it. In the relations pane, connect the Date field of your FactTable (which is by week, as you say; to be precise, this table can have missing dates and duplicate dates) to the Date field of the Calendar table (which has one unique Date per day). Then, in all your visuals and measures, always use the Date field of the Calendar table. It is good practice to hide the Date field in your FactTable.
More explanations here https://stackoverflow.com/a/54980662/1903793
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58917742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: the result of weka j48 classifyInstance is not correct I already built a tree to classify instances. In my tree, there are 14 attributes. Each attribute is discretized with supervised discretization. When I created a new instance, I put the values into this instance and classified it with my tree, and I found the result was wrong. So I debugged my program, and I found that the values of the instance are not divided into the intervals correctly. For example:
the instance value 0.26879699248120303 is divided into '(-inf-0]'.
Why?
A: Problem solved. I didn't discretize the instance that was to be tested, so Weka didn't know the format of my instance. Add the following code:
discretize.input(instance);//discretize is a filter
instance = discretize.output();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/29838267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Unit test custom IActionResult.ExecuteResultAsync(ActionContext) in ASP.NET Core 2.0 Having the below class
/// <summary>
/// An unsuccessful HTTP result.
/// </summary>
public class UnsuccessfulActionResult : IActionResult, IHttpStatusCodeResult, IErrors
{
/// <inheritdoc />
/// <summary>
/// The HTTP status code.
/// </summary>
public HttpStatusCode StatusCode { get; }
/// <summary>
/// The corresponding error messages, if any.
/// </summary>
public IReadOnlyCollection<string> Errors { get; }
/// <summary>
/// The request Transaction ID.
/// </summary>
public string TransactionId { get; }
/// <summary>
/// Default ctor.
/// </summary>
/// <param name="transactionId">The request Transaction ID.</param>
/// <param name="errors">The corresponding error messages, if any.</param>
/// <param name="httpStatusCode"></param>
public UnsuccessfulActionResult(
string transactionId,
HttpStatusCode httpStatusCode,
IReadOnlyCollection<string> errors = null)
{
TransactionId = transactionId;
StatusCode = httpStatusCode;
Errors = errors ?? new List<string>();
}
/// <summary>
/// Convenient method to create an unsuccessful response model.
/// </summary>
/// <param name="statusCode"></param>
/// <returns></returns>
protected UnsuccessfulResponseModel CreateResponseModel(int statusCode)
{
return new UnsuccessfulResponseModel(TransactionId, statusCode, Errors);
}
/// <inheritdoc />
public virtual Task ExecuteResultAsync(ActionContext context)
{
var objectResult = new ObjectResult(CreateResponseModel((int)StatusCode))
{
StatusCode = (int)StatusCode
};
return objectResult.ExecuteResultAsync(context);
}
}
I need to test this class to make sure that the ActionContext has the response provided by the overridden ExecuteResultAsync.
I have just started going deep into ASP.NET Core 2.0 and am missing some parts; in Web API 2 I would just check the HttpResponseMessage returned by the equivalent method of IHttpActionResult.
Task<HttpResponseMessage> ExecuteAsync(
CancellationToken cancellationToken
)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/47840057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: I have an error while building an Angular application Please help me solve this error. I have seen some solutions on Stack Overflow, but they are not working for me.
*
*Here is the error code
ERROR in ./src/app/remedies/Remedy.model.ts
Module build failed: Error: /Volumes/Prince/MyApp/Angular/health/src/app/remedies/Remedy.model.ts is missing from the TypeScript compilation. Please make sure it is in your tsconfig via the 'files' or 'include' property.
at AngularCompilerPlugin.getCompiledFile (/Volumes/Prince/MyApp/Angular/health/node_modules/@ngtools/webpack/src/angular_compiler_plugin.js:674:23)
at plugin.done.then (/Volumes/Prince/MyApp/Angular/health/node_modules/@ngtools/webpack/src/loader.js:467:39)
at <anonymous>
*
*Links that I have tried, but which did not work:
*
*Module build failed: Error: index.ts is missing from the TypeScript compilation
*main-aot is missing from the typeScript compilation
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/49605497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to use GIT in web development team I am trying to implement GIT in my company for web development.
Our current workflow is:
*
*Implementing and testing features on a test server in our environment
*Copy/Paste the files on the production server.
Now I am trying to figure out how I would do this with GIT.
My idea is that the developers should push changes to the local (test) server, and when the project is finished the production server will pull the updates.
My question is: how will the test server see the Git files?
Do you have any recommendations for this approach?
A: I don't think it is a good idea to push to the test server, since a shared repository is usually a bare repository.
It would be best to keep the shared repository somewhere on a third server.
Everyone can push to this server, and you can maintain several branches there if you wish.
The test server pulls the changes before running the tests, or, for a more complete test, it clones the entire repository fresh each time before testing.
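The layout described above can be sketched with plain Git commands. This is only a sketch: the temp directories here stand in for the central server (which you would normally reach over SSH), so the commands can run anywhere.

```shell
set -e

# The shared bare repository; in practice this lives on a third server.
central=$(mktemp -d)/central.git
git init --bare "$central"

# A developer clones the shared repo, commits, and pushes:
dev=$(mktemp -d)/dev
git clone "$central" "$dev"
cd "$dev"
git config user.email dev@example.com
git config user.name dev
echo "hello" > index.html
git add index.html
git commit -m "first page"
git push origin HEAD

# The test server clones fresh before each test run (or pulls into an existing clone):
testsrv=$(mktemp -d)/test-server
git clone "$central" "$testsrv"
ls "$testsrv"
```

The production server would do the same final clone/pull step once a release is ready.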
A: There are really plenty of approaches to this problem; you should consider reading more about "auto deployment", "Git + deploy", etc.
Personally I'm using special script to export all the files between specific tags in GIT repository and upload them directly on the server.
A: Take a look at the Git workflow models:
https://www.atlassian.com/git/tutorials/comparing-workflows/
http://nvie.com/posts/a-successful-git-branching-model/
You simply have to figure out which workflow suits you best and adopt it.
If you choose to use the nvie git workflow, you can customize it to your own needs by modifying the scripts.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/34261763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Comparing field values using reflection I am trying to compare the field values of two different objects in a generic way. I have a function (seen below) that takes in two Objects, gets their fields, compares the fields in a loop, and adds the fields to a list if they are not the same. Is this the proper way to do this?
public void compareFields(Object qa, Object qa4) throws FieldsNotEqualException
{
Field[] qaFields = qa.getClass().getFields();
Field[] qa4Fields = qa4.getClass().getFields();
for(Field f:qaFields)
{
for(Field f4:qa4Fields)
{
if(f4.equals(f))
{
found = true;
break;
}
else
{
continue;
}
}
}
if(!found)
{
report.add(/*some_formatted_string*/) //some global list
throw new FieldsNotEqualException();
}
}
I was googling and I saw that C# has a PropertyInfo class; does Java have anything like that?
Also, is there a way to do something like f.getFieldValue()? I know there is no method like this, but maybe there is another way?
A: Libraries like commons-beanutils can help you if you want to compare bean properties (values returned by getters) instead of comparing field values.
However, if you want to stick with plain reflection, you should:
*
*Use Class.getDeclaredFields() instead of Class.getFields(), as the latter only returns the public fields.
*Since fields only depend on their class, you should cache the result and keep the fields in a static / instance variable rather than invoking getDeclaredFields() for each comparison.
*Once you have an object of that class (say o), in order to get the value of some field f for that particular object, you need to call: f.get(o).
A: You might check out org.apache.commons.lang.builder.EqualsBuilder, which will save you a lot of this hassle if you want to do a field-by-field comparison.
org.apache.commons.lang.builder.EqualsBuilder.reflectionEquals(Object, Object)
If you're wanting to compare fields yourself, check out java.lang.Class.getDeclaredFields() which will give you all the fields including non-public fields.
To compare the values of the fields, use f.get(qa).equals(f.get(qa4)). Currently, you are actually comparing the field instances and not their values.
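A minimal sketch of that idea follows. The Point class here is just an illustrative stand-in for your own objects, and differingFields is a hypothetical helper name, not part of any library:

```java
import java.lang.reflect.Field;
import java.util.ArrayList;
import java.util.List;

public class FieldComparer {

    // Illustrative class standing in for the objects being compared.
    public static class Point {
        public int x;
        public int y;
        public Point(int x, int y) { this.x = x; this.y = y; }
    }

    // Returns the names of all declared fields whose values differ between a and b.
    public static List<String> differingFields(Object a, Object b) {
        List<String> diff = new ArrayList<>();
        try {
            for (Field f : a.getClass().getDeclaredFields()) {
                f.setAccessible(true);                 // include non-public fields
                Object va = f.get(a);
                Object vb = f.get(b);                  // compare values, not Field objects
                boolean equal = (va == null) ? vb == null : va.equals(vb);
                if (!equal) {
                    diff.add(f.getName());
                }
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        return diff;
    }

    public static void main(String[] args) {
        System.out.println(differingFields(new Point(1, 2), new Point(1, 3))); // [y]
    }
}
```

Note that getDeclaredFields() only returns the fields declared directly on the class; walking up getSuperclass() would be needed to include inherited fields.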
A: // If you want to compare only some fields, not all fields, then use this.
public boolean compareObject(Object object) throws NoSuchFieldException, SecurityException, IllegalArgumentException, IllegalAccessException
{
String[] compareFields = { "fieldx", "fieldy","fieldz", "field15",
"field19"}; // list of all we need
for(String s : compareFields) {
Field field = DcrAttribute.class.getDeclaredField(s); // get a list of all fields for this class
field.setAccessible(true);
if(!field.get(this).equals(field.get(object))){ //if values are not equal
return true;
}
}
return false;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/13496026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12"
}
|
Q: Usability: How do I provide & easily deploy a (preferably node.js + MongoDB based) server backend for my users? I'm currently planning an application (brainstorming, more or less) designed to be used in small organizations. The app will require synchronization with a backend server, e.g. for user management and some advanced, centralized functionality. This server has to be hosted locally and should be able to run on Linux, Mac and Windows. I haven't decided how I'm going to realize this, mainly because I simply don't know which approach would be the smartest.
Technically speaking, a very interesting approach seemed to be node.js + mongoose, connecting to a local MongoDB. But this is where I'm struggling: how do I ensure that it's easy and convenient for an organization's IT to set this up?
Installing node.js + MongoDB is tedious work and far from standardized and easy. I don't have the resources to provide a detailed walkthrough for every major OS and configuration, or to take over the setup myself. Ideally, the local administrator should run some sort of setup on the machine used as a server (a "regular" PC running 24/7 should suffice) and have the system up and running, similar to the way some games provide executables for hosting small game servers for a couple of friends (Minecraft, for instance).
I also thought about Java EE, though I haven't dug into any details here. I'm unsure whether this is really an option.
Many people suggest to outsource the backend (BaaS), e.g. to parse.com or similar services. This is not an option, since it's mandatory that the backend will be hosted locally.
I'm sorry if this question is too unspecific, but unfortunately, I really don't know where to start.
A: I can give you advice both from the sysadmin's side and the developer's side.
Sysadmin
Setting up node.js is not a big task. Setting up MongoDB correctly is. But that is not your business as an application vendor, especially not when you are a one-man-show FOSS project, as I assume. It is an administrator's task to set up a database, so let him do it. Just tell them what you need, maybe point out security concerns, and any capable sysadmin will do his job and set up the environment.
There are some things you underestimate, however.
Applications, especially useful ones, tend to get used. MongoDB has many benefits, but being polite about resources isn't exactly one of them. So running on a surplus PC may work in a software development or visual effects company, where every workstation has plenty of memory, but in an accountancy firm your application will run out of resources quite fast. Do not make promises like "will run on your surplus desktop" until you are absolutely, positively sure about it because you have done extensive load tests to prove it. Any sensible sysadmin will monitor the application anyway and scale resources up when necessary. But when you make such promises and break them, you lose the single most important factor for software: the users' trust. Once you lose it, it is very hard to get back.
Developer
You really have to decide whether MongoDB is the right tool for the job. As soon as you have relations between your documents, in which a change in one document has to be reflected in others, you have to be really careful. Ask yourself whether your decision rests on a rational, educated basis. I have seen projects implemented with NoSQL databases which would have been way better off with a relational database, just because NoSQL is some sort of everybody's darling.
It is a LONG way from node.js to Java EE. The concepts of Java EE are not necessarily easy to grasp, especially if you have little experience in application development in general and in Java.
The Problem
Without knowing anything about the application, it is very hard to make a suggestion or give you advice. Why exactly does the MongoDB have to be local? Can't it be done with a VPC? Is it a web app, desktop app or server app? Can the source code be disclosed or not? How many concurrent users per installation can be expected? Do you want a modular or monolithic app? What are your communication needs? What is your experience with programming languages? It is all about what you want to accomplish and which services you want to provide with the app.
A: Simple and to the point: Chef (chef solo for vagrant) + Vagrant.
Vagrant provides a uniform environment that can be as close to production as you want, and Chef provides provisioning for those environments.
This repository is very close to what you want: https://github.com/TryGhost/Ghost-Vagrant
There are hundreds of thousands of chef recipes to install and configure pretty much anything in the market.
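To make this concrete, a provisioning setup might look like the following minimal Vagrantfile sketch. The box name, port, and package list are assumptions for illustration, and a Chef cookbook (via chef-solo) could replace the inline shell provisioner:

```ruby
# Hypothetical Vagrantfile sketch: one box with node.js and MongoDB,
# provisioned with a plain shell script. Box name, port and packages
# are assumptions; adapt them to your stack.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"                       # assumed base box
  config.vm.network "forwarded_port", guest: 3000, host: 3000
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y nodejs npm mongodb
  SHELL
end
```

With something like this checked into the repository, an administrator only needs Vagrant installed and a single `vagrant up` to get a reproducible environment.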
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/22898861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Sending a FILE to an external REST API from Django I'm trying to send a file from a Django form to a REST API service. If I send only the text it works, but I need to send a file, and I have tried everything I found but it doesn't work.
This is my form:
class Publicacion(forms.Form):
publicacion = forms.CharField(label=False, widget=forms.Textarea(attrs={'rows': '3', 'cols': '40'}))
imagen = forms.FileField(required=False)
This is the html of that form:
<form name="publicion" enctype="multipart/form-data" id="publicacion" method="POST">{% csrf_token %}
<div class="post post_form" style="padding:0;">
{{ formpublicar|crispy }}
<button class="post_form_extra"></button>
<input value=" " type="submit" class="post_form_submit" name="publicar"/>
</div>
</form>
and this is my views.py method:
def sesionactiva(request):
if 'token' in request.session:
token = request.session['token']
crearpublicacion = Publicacion(request.POST or None, request.FILES or None)
if 'publicar' in request.POST and request.POST['publicar']:
if crearpublicacion.is_valid():
publicacion_data = crearpublicacion.cleaned_data
publicaciontexto = publicacion_data.get("publicacion")
imgpublicacion = request.FILES['imagen']
apipublicar = 'http://localhost/apiSocial/publicacion/createPublication'
payloadpublicacion = {'token': token, 'texto': publicaciontexto, 'imagen': imgpublicacion}
responsepublicacion = requests.post(apipublicar, data=payloadpublicacion)
crearpublicacion = Publicacion()
A: You should use requests like this, passing the file in the files argument:
requests.post(url, files=files, data=data)
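For instance, the view could split the payload like this. This is only a sketch: build_publication_request is a hypothetical helper, and the field names token, texto and imagen come from the question's code:

```python
def build_publication_request(token, texto, imagen=None):
    """Split the form payload for requests.post: plain fields go in `data`,
    while the uploaded file object goes in `files`, so requests builds a
    multipart/form-data body instead of URL-encoding the file."""
    data = {"token": token, "texto": texto}
    files = {"imagen": imagen} if imagen is not None else None
    return data, files

# In the view, the call would then look like (network call not executed here):
# data, files = build_publication_request(token, publicaciontexto,
#                                         request.FILES.get("imagen"))
# responsepublicacion = requests.post(apipublicar, data=data, files=files)
```

The key point is that passing the file inside data (as the question does) serializes it as text, while files triggers a proper multipart upload.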
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/42965871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: how to remove uiview with animation I added a UIView with some animation by using the following code:
CGRect a = CGRectMake(10,10,295,460);
UIView *customView = [[UIView alloc]initWithFrame:a];
customView.tag=10;
[self.tableView addSubview:customView];
[UIView animateWithDuration:1.0
animations:
^{
customView.transform = CGAffineTransformMakeScale(1.0f, 1.0f);
}
];
When I try to remove the UIView with animation, the animation does not seem to work. The view just gets removed without any animation.
Here is what I have to remove the UIView with animation:
UIView *removeView=[self.tableView viewWithTag:10] ;
removeView .transform = CGAffineTransformMakeScale(1.0f, 1.0f);
[removeView removeFromSuperview];
[UIView animateWithDuration:1.5
animations:
^{
removeView .transform = CGAffineTransformMakeScale(0.0f, 0.0f);
}
];
I tried placing the [UIView animateWithDuration:1.5 ...] block both before and after [removeView removeFromSuperview], but neither gave me the animation. Kindly help me out with this.
A: [UIView animateWithDuration:1.5 animations:^ {
removeView.transform = CGAffineTransformMakeScale(0.0f, 0.0f);
} completion:^(BOOL finished) {
[removeView removeFromSuperview];
}];
A: Some changes:
UIView *removeView=[self.tableView viewWithTag:10] ;
removeView .transform = CGAffineTransformMakeScale(1.0f, 1.0f);
[removeView performSelector:@selector(removeFromSuperview) withObject:nil afterDelay:1.5];
[UIView animateWithDuration:1.5
animations:
^{
removeView .transform = CGAffineTransformMakeScale(0.01f, 0.01f);
}
];
Try setting your final transform to something small but nonzero.
A: Try waiting to remove the view until the animation is over:
[removeView performSelector:@selector(removeFromSuperview) withObject:nil afterDelay:2.0];
A: Try this one, it worked for me:
[UIView beginAnimations:nil context:NULL];
[UIView setAnimationDuration:1.5f];
removeView.transform = CGAffineTransformMakeTranslation(removeView.frame.origin.x,1000.0f + (removeView.frame.size.height/2));
aview.alpha = 0;
[UIView commitAnimations];
A: Refer to the following sample from Apple.
[UIView transitionWithView:containerView
duration:0.2
options:UIViewAnimationOptionTransitionFlipFromLeft
animations:^{ [fromView removeFromSuperview]; [containerView addSubview:toView]; }
completion:NULL];
The above code creates a flip transition for the specified container view. At the appropriate point in the transition, one subview is removed and another is added to the container view. This makes it look as if a new view was flipped into place with the new subview, but really it is just the same view animated back into place with a new configuration.
It may be the animation type that you need to manage.
Refer:-
https://developer.apple.com/documentation/uikit/uiview/1622574-transition
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/29211563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: Express Routes params to define react page components I am learning Express.js. This is my case: I have 2 URLs
/security/1
/security/2
The response to that request will be the following; depending on the response, "/security/{id}" will return a Web page that will show and enable a certain number of "elements".
app.get('/security/:id', (req, res) => {
setTimeout(
() =>
res.send({
access: req.params.id === '1' ? ['EDIT', 'DELETE', 'VIEW', 'ADD'] : ['VIEW'],
}),
Math.floor(Math.random() * 2000 + 100)
);
});
My question is: now that I have flagged the elements based on the request parameter, how can I render my React page (for example index.js) flagging these elements?
I would appreciate any example or code answer helping me to figure it out the approach
A: I did not fully understand your question, but you can make a request from your frontend React page and, based on the result, render whatever you want.
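For example, the frontend can request /security/:id and derive simple render flags from the returned access array. A small hypothetical helper (the flag names below are illustrative, not from the question):

```javascript
// Turn the server's `access` array into boolean render flags.
// A React component could then render an Edit button only when
// deriveFlags(access).canEdit is true.
function deriveFlags(access) {
  return {
    canEdit: access.includes('EDIT'),
    canDelete: access.includes('DELETE'),
    canView: access.includes('VIEW'),
    canAdd: access.includes('ADD'),
  };
}
```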
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/67606267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I read in a series of numbers using "Textscan" in MATLAB if the file is mostly text? I have a text file that has a string of 3 numbers that I need to read into MATLAB.
For Example:
#######################
#
#
# Text Text Text
#
#
#######################
Blah blah blah = ####
Blah blah blah = ####
Blah blah blah = ####
Blah blah blah = ####
Blah blah blah = ####
Blah blah blah = ####
I_NEED_THIS_STRING = 1234.5 6789.0 1234.5 !Comment blah blah blah
I need to read in those 3 numbers into an array.
PLEASE HELP.
Thanks
A: If most of the file is irrelevant to your application, I suggest preprocessing with your favorite scripting language or command line tool to find the relevant lines and use textscan() on that.
e.g., from a shell prompt:
grep ^I_NEED_THIS_STRING infile > outfile
in matlab:
fid = fopen('outfile');
C = textscan(fid, 'I_NEED_THIS_STRING = %f %f %f')
fclose(fid)
See the textscan documentation for more details.
A: An alternative is to use IMPORTDATA to read the entire file into a cell array of strings (with one line per cell), then use STRMATCH to find the cell that contains the string 'I_NEED_THIS_STRING', then use SSCANF to extract the 3 values from that cell:
>> data = importdata('mostly_useless_text.txt','\n'); %# Load the data
>> index = strmatch('I_NEED_THIS_STRING',data); %# Find the index of the cell
%# containing the string
>> values = sscanf(data{index},'I_NEED_THIS_STRING = %f %f %f') %# Read values
values =
1.0e+003 *
1.2345
6.7890
1.2345
If the file potentially has a lot of useless text before or after the line you are interested in, then you may use up a lot of memory in MATLAB by loading it all into a variable. You can avoid this by loading and parsing one line at a time using a loop and the function FGETS:
fid = fopen('mostly_useless_text.txt','r'); %# Open the file
newLine = fgets(fid); %# Get the first line
while newLine ~= -1 %# While EOF hasn't been reached
if strmatch('I_NEED_THIS_STRING',newLine) %# Test for a match
values = sscanf(newLine,'I_NEED_THIS_STRING = %f %f %f'); %# Read values
break %# Exit the loop
end
newLine = fgets(fid); %# Get the next line
end
fclose(fid); %# Close the file
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/3620079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Finding text width in jquery I am trying to calculate text width in px.
I take a string from the HTML (this string may contain some special characters as well; in this example just a hyphen), make this string the HTML content of a div, and calculate the string's width.
But I am getting strange results.
If there is a better way to get text width, cross-browser, please let me know.
I have made a fiddle:
http://jsfiddle.net/YaPcP/39/
Thank you!
A: <div> elements are set to span the width of their parent element, so changing the font size will have no effect on its actual width. Changing your <div> to a <span> should give you what you're looking for.
A: Add float: left; to #holder
#holder, which is the test container for width, defaults to width: auto;. In other words, it spans 100% of the browser window and gives you the mixed results.
You might consider adding white-space: nowrap; as well to #holder should it ever exceed the width of its container.
Updated fiddle
A: It may look nasty but gives you the width:
http://jsfiddle.net/adaz/YaPcP/43/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/9404536",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is there any way to get smart sheet rows based on column value I want to use the Smartsheet API to read sheet data. My sheet has a changedtm column. I want to fetch the rows where that value is greater than a certain date. Is there any way to do this using the Smartsheet API?
A: You can't do this kind of filtering via the Smartsheet API directly. One approach would be to do a GET Sheet request and use the columnIds query string parameter to provide the id number of the specific column you are looking to review. Then you would get back a response that only contains data for that specific column. The filtering based on the date values would then have to be done by you in your code.
If you need other data in the rows, I would just do the GET Sheet request to get all of the data in the sheet and do the filtering of the response in your code.
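Since the date comparison has to happen client-side, that filtering step could be sketched in Python roughly like this (the row/cell shape mirrors a GET Sheet JSON response, but treat the exact field layout as an assumption):

```python
from datetime import datetime, timezone

def rows_changed_after(rows, column_id, cutoff):
    """Keep rows whose cell in `column_id` holds a date after `cutoff`.

    `rows` is the "rows" list from a GET Sheet response; each matching
    cell is assumed to carry an ISO-8601 timestamp string in "value".
    """
    kept = []
    for row in rows:
        for cell in row.get("cells", []):
            if cell.get("columnId") == column_id and cell.get("value"):
                if datetime.fromisoformat(cell["value"]) > cutoff:
                    kept.append(row)
                break  # a row has at most one cell per column
    return kept
```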
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58550123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How does node.js handle simultaneous http requests? I am learning node.js, and I am not managing to find a direct answer to this question. How does node.js deal with HTTP incoming requests, if they come in virtually at the same time? Let's say that one HTTP request comes in at a given time. As a result, the value of a global variable might change. However, at virtually the same time, another request comes in. In order to service the new request, the value of that one global variable is needed, but the code for the first request is still executing. How does node react to this?
A: Node.js processes the requests one after the other. There is only one thread.
However, if you for example query the database for some information and pass a callback, while the query is executed, node.js can process new requests. Once the database query is completed, node.js calls the callback and finishes processing the first request.
EDIT:
Simple server example:
var http = require('http');
var numresponses = 0;
http.createServer(function (req, res) {
    res.writeHead(200, {'Content-Type': 'text/plain'});
    res.end('This is response #' + (++numresponses));
}).listen(80);
This server will always print out the number of the request. Even if two requests happen simultaneously, node will choose one to process first, and the two responses will have different numbers.
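To make the single-thread point concrete, here is a toy sketch (plain functions, no real HTTP) showing that each handler runs to completion before the next one touches the shared variable:

```javascript
// Toy model of Node's request handling: handlers run one at a time,
// so reads and writes to shared state never interleave mid-handler.
let counter = 0;

function handleRequest(name) {
  const before = counter; // read the shared variable
  counter = before + 1;   // write it back, uninterrupted
  return name + ' saw ' + before;
}

// Two "simultaneous" requests are still processed strictly in order:
const log = ['req1', 'req2'].map(handleRequest);
// log is ['req1 saw 0', 'req2 saw 1'] and counter is 2
```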
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24934935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: LSL Wrapper for pygame Newbie to Python and gaming. Is there an LSL (lab streaming layer) wrapper for pygame? I want to create a game using EEG signals to create a brain computer interface application. Any help will be deeply appreciated. thanks! :)
A: There is an LSL module for Python called pylsl. You should be able to incorporate this into your game loop.
The following code was adapted from this example:
from pylsl import StreamInlet, resolve_stream
import pygame
# first resolve an EEG stream on the lab network
streams = resolve_stream('type', 'EEG')
# create a new inlet to read from the stream
inlet = StreamInlet(streams[0])
# Pygame setup
pygame.init()
size = width, height = 320, 240
screen = pygame.display.set_mode(size)
samples = []
while True:
    # Get a new sample from the inlet stream (you can also omit the
    # timestamp part if you're not interested in it)
    sample, timestamp = inlet.pull_sample()
    samples.append(sample)

    # TODO: interpolate streamed samples and update game objects accordingly.
    # You'll probably need to keep a few samples to interpolate this data.

    # TODO: draw game
    ...
    pygame.display.flip()
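For the interpolation TODO in the loop, one possible (purely illustrative) approach is a moving average over the last few samples, smoothing jittery EEG values before they drive game objects:

```python
from collections import deque

def make_smoother(window=5):
    """Return a callable that averages each channel over the last
    `window` samples (a cheap stand-in for real interpolation)."""
    history = deque(maxlen=window)

    def smooth(sample):
        history.append(sample)
        # Average each channel across the retained samples
        return [
            sum(s[ch] for s in history) / len(history)
            for ch in range(len(sample))
        ]

    return smooth
```

Inside the game loop you would call smooth(sample) on every pulled sample and use the smoothed per-channel values to update your game objects.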
A: Adding another possibility: engineers from the Foundation Campus Biotech Geneva have created a library using LSL for an online paradigm using EEG, called NeuroDecode.
It relies on pylsl and offers a higher-level implementation.
On my fork version, I dropped the (old) decoding functionalities in favor of improvements to the low-level functionalities, e.g. signal acquisition/visualization.
https://github.com/mscheltienne/NeuroDecode
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/38787395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: com.google.cloud.datastore.DatastoreException: Unauthenticated. objectify java I have set up objectify v6 and when I try to save an entity I get this exception
com.google.cloud.datastore.DatastoreException: Unauthenticated.
Caused by:
com.google.datastore.v1.client.DatastoreException: Unauthenticated., code=UNAUTHENTICATED
NOTE: I am only trying to save this entity on localhost.
Here is my code to save entity using objectify:
Subscriber subscriber = new Subscriber(phone);
ofy().save().entity(subscriber).now();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/49422740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4"
}
|
Q: SQLPlus Decode if value is not null I have a SQL*Plus script, and if a variable is not null then I want to execute a script and pass the value. If it is null then I want to execute a different script. I have the following:
col scr new_value script
set term off
select decode('&partitions', 'true', 'CreateTablesPartitions', 'CreateTables') scr from dual;
set term on
@@&script &partitions
This checks whether the variable partitions is true and then executes CreateTablesPartitions. How do I check if the variable partitions is null?
A: just add null and the value to decode to in your decode string.
select decode('&partitions', 'true', 'CreateTablesPartitions',
null, 'itsnull', 'CreateTables') scr
from dual;
So if it's null, the result will be itsnull.
A: You can just include null as a recognised value in your decode:
col scr new_value script
set term off
set verify off
select decode('&partitions', null, 'CreateTables',
'CreateTablesPartitions') as scr from dual;
set term on
@@&script &partitions
... which will run CreateTables if the entered partitions value is null.
But, because you have termout off, you won't see the prompt for the value. You might want to use positional variables (&1 etc.) depending on how you're intending to call this, but assuming you do want to be prompted at run-time, you can either leave termout on and add noprint to the column command (col scr new_value script noprint), which will give some blank lines in the output; or set partitions earlier. You can't use define though because that won't like a null value.
The cleanest approach may be to use accept with its own prompt:
accept partitions prompt "Enter partitions: "
col scr new_value script noprint
set verify off
set term off
select decode('&partitions', null, 'CreateTables',
'CreateTablesPartitions') as scr from dual;
set term on
@@&script &partitions
With simple dummy scripts to call, e.g. CreateTables.sql:
prompt In CreateTables
... and CreateTablesPartitions:
prompt In CreateTablesPartitions with passed value "&1"
... this gives:
Enter partitions:
In CreateTables
... and:
Enter partitions: Test
In CreateTablesPartitions with passed value "Test"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/14136547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Eliminate zeroed values in an array with dataweave function In this array of a list of devices, you would need to delete from the array when any item was zeroed out
[
{
"valueTotal": "6.50"
},
{
"bread": "001",
"value": "3.00"
},
{
"milk": "002",
"value": "3.50"
},
{
"coffe": "003",
"value": "0.00"
}
]
A: Assuming you intend to remove the items in the input whose "value" field is 0 and then recompute the totalValue, here is a quick one I have come up with (it could be improved).
%dw 2.0
output application/json
//filter the items whose value is zero
var filteredPayload= ((payload [-1 to 1] map (item1, index1) ->
{
(if (item1.value as Number != 0) (item1) else null)
}) filter ($ != {}))
// get the totalValue from the filteredPayload
var totalFilteredPayload = filteredPayload reduce ($.value + $$.value)
---
// simply add both the arrays
filteredPayload ++ [{ "valueTotal": totalFilteredPayload as String }]
A: If by "zeroed out" you mean value = 0, it is just a basic filter operation:
payload filter ($.valueTotal? or ($.value as Number != 0))
The condition $.valueTotal? is to make the object with valueTotal pass the check. And the other one is for the value itself.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73583505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Background Image Quality Android Development help please.
Can someone tell me how I might increase the quality of background images in my app? I have been using Photoshop, where the images look great, then I save for the web as a png... then add it to my app... Then on the phone the image looks a little blurred.
What's the best way to increase my image quality?
A: I would make sure you're saving it at an adequate resolution. I'm willing to bet that "save for web" reduces the resolution to 72 dpi, which may not be enough for an android handset. In photoshop, try bumping the resolution of the final png to something like 300 dpi and see if that makes a difference. From there you can experiment with different resolutions to figure out the smallest value you can use and still have a crisp image. Alternatively, you could just look for the documented resolution requirements.
A: The apparent quality of your images may also depend on the type of device you are displaying the images on. For example, if your image is saved as 72px x 72px in your image editor, then displayed with a size defined using 72 scaled pixels (sp) in android on a high pixel-density device, the OS will stretch the image before display. As such, the pixel density of the display device can affect the apparent image quality.
You can provide different resolution images for different pixel densities by using the hdpi, mdpi and ldpi folders for drawables. See these links for more info:
*Screens support
*Icon design
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/4797957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Cast arraylist in recyclerview firebase I have an array of data which I am retrieving from Firebase. I am using a RecyclerView to display the data, but my adapter is not working correctly. I tried adding the ArrayList in the adapter, but this is not working.
It says the adapter is not attached, and I get a blank activity.
Any help on this ?
Here are my details.
Model Class
public class Order {
private String ProductId;
private String ProductName;
private String Quantity;
public Order() {
}
public String getProductId() {
return ProductId;
}
public void setProductId(String productId) {
ProductId = productId;
}
public String getProductName() {
return ProductName;
}
public void setProductName(String productName) {
ProductName = productName;
}
public String getQuantity() {
return Quantity;
}
public void setQuantity(String quantity) {
Quantity = quantity;
}
public Order(String productId, String productName, String quantity) {
ProductId = productId;
ProductName = productName;
Quantity = quantity;
}
}
Adapter
public class AllOrdersAdapter extends RecyclerView.Adapter<AllOrdersViewHolder> {
List<Order> myfoods;
public AllOrdersAdapter(List<Order> myfoods) {
this.myfoods = myfoods;
}
@NonNull
@Override
public AllOrdersViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
View itemView = LayoutInflater.from(parent.getContext())
.inflate(R.layout.allorders_layout,parent,false);
return new AllOrdersViewHolder(itemView);
}
@Override
public void onBindViewHolder(@NonNull AllOrdersViewHolder holder, int position) {
holder.foodname.setText(myfoods.get(position).getProductName());
holder.foodquantity.setText(myfoods.get(position).getQuantity());
holder.foodId.setText(myfoods.get(position).getProductId());
}
@Override
public int getItemCount() {
return myfoods.size();
}
}
Test Class
public class Test extends AppCompatActivity {
FirebaseDatabase db;
DatabaseReference requests;
RecyclerView lstFoods;
RecyclerView.LayoutManager layoutManager;
TextView food_id,food_quan,food_name;
// List foods = new ArrayList<>();
// RecyclerView.Adapter<AllOrder> adapter;
// List<String> myOrders = new ArrayList<String>();
// ArrayList<String> foods=new ArrayList<>();
List<String> myfoods = new ArrayList<String>();
AllOrdersAdapter adapter;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_test);
//firebase
db = FirebaseDatabase.getInstance();
requests= db.getReference().child("Requests");
lstFoods = (RecyclerView)findViewById(R.id.lstAllFoods);
lstFoods.setHasFixedSize(true);
layoutManager = new LinearLayoutManager(this);
lstFoods.setLayoutManager(layoutManager);
loadOrderss();
}
private void loadOrderss() {
requests.addValueEventListener(new ValueEventListener() {
@Override
public void onDataChange(@NonNull DataSnapshot dataSnapshot) {
for (DataSnapshot postSnapshot : dataSnapshot.getChildren()) {
if (postSnapshot.getValue() != null) {
// List ingredients = new ArrayList<>();
for (DataSnapshot ing : postSnapshot.child("foods").getChildren()) {
// String data = String.valueOf(postSnapshot.getValue(Order.class));
myfoods.add(ing.child("quantity").getValue(String.class));
myfoods.add(ing.child("productName").getValue(String.class));
myfoods.add(ing.child("productId").getValue(String.class));
// myfoods.add(String.valueOf(Order.class));
System.out.println("Gained data: " + ing.child("productName").getValue(String.class));
}
}
}
adapter = new AllOrdersAdapter((ArrayList<String>) myfoods);
lstFoods.setAdapter(adapter);
adapter.notifyDataSetChanged();
}
@Override
public void onCancelled(@NonNull DatabaseError databaseError) {
}
});
}
A: There seem to be a couple of things wrong with the code. As it is posted I would be surprised if it compiles.
In your Adapter you have:
List<Order> myfoods;
and
public AllOrdersAdapter(List<Order> myfoods) {
this.myfoods = myfoods;
}
but in your activity code you pass:
adapter = new AllOrdersAdapter((ArrayList<String>) myfoods);
one is an ArrayList of String, the other of Order!
You also need to change your adapter class to something like:
public class AllOrdersAdapter extends RecyclerView.Adapter<AllOrdersAdapter.AllOrdersViewHolder> {
private static final String TAG = AllOrdersAdapter.class.getSimpleName();
private ArrayList<Order> mData;
public class AllOrdersViewHolder extends RecyclerView.ViewHolder {
public TextView mTvFoodname;
public TextView mTvFoodQuantity;
public TextView mTvFoodId;
public AllOrdersViewHolder(View v){
super(v);
// TODO: You need to assign the appropriate View Id's instead of the placeholders ????
mTvFoodQuantity = v.findViewById(R.id.????);
mTvFoodname = v.findViewById(R.id.????);
mTvFoodId = v.findViewById(R.id.????);
}
}
public AllOrdersAdapter(ArrayList<Order> data){
this.mData = data;
}
@Override
public AllOrdersViewHolder onCreateViewHolder(ViewGroup parent, int viewType){
View itemView = LayoutInflater.from(parent.getContext()).inflate(R.layout.business_list_card_view, parent, false);
return new AllOrdersViewHolder(itemView);
}
@Override
public void onBindViewHolder(final AllOrdersViewHolder holder, final int position){
//TODO: You need to decide whether you want to pass a string or order object
Order data = mData.get(position);
final String name = data.getProductName();
final String quantity = data.getQuantity();
final String id = data.getProductId();
holder.mTvFoodname.setText(name);
holder.mTvFoodQuantity.setText(quantity );
holder.mTvFoodId.setText(id);
}
@Override
public int getItemCount(){
return mData.size();
}
}
Note: Since I cannot know whether an ArrayList of String or of Order should be used, the parameters in either the Activity or Adapter will need to be changed. Also, how you assign the data to the RecyclerView in the onBindViewHolder method will be affected.
You should also follow the advice given by Frank.
EDIT
Change your onDataChange() method to this:
@Override
public void onDataChange(@NonNull DataSnapshot dataSnapshot) {
for (DataSnapshot postSnapshot : dataSnapshot.getChildren()) {
if (postSnapshot.getValue() != null) {
List ingredients = new ArrayList<>();
for (DataSnapshot ing : postSnapshot.child("foods").getChildren()) {
String name = ing.child("productName").getValue(String.class);
String quantity = ing.child("quantity").getValue(String.class);
String productId = ing.child("productId").getValue(String.class);
// Using your overloaded class constructor to populate the Order data
Order order = new Order(productId, name, quantity);
// here we are adding the order to the ArrayList
myfoods.add(order);
Log.e(TAG, "Gained data: " + name);
}
}
}
adapter.notifyDataSetChanged();
}
In your Activity you will need to change the ArrayList class variable "myfoods" to this:
ArrayList<Order> myfoods = new ArrayList<>();
and in your onCreate() method you can now change:
adapter = new AllOrdersAdapter((ArrayList<String>) myfoods);
to simply this:
adapter = new AllOrdersAdapter(myfoods);
Also notice that I have made some changes in my original code above.
A: You'll want to create the adapter, and attach it to the view, straight in onCreate:
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_test);
//firebase
db = FirebaseDatabase.getInstance();
requests= db.getReference().child("Requests");
lstFoods = (RecyclerView)findViewById(R.id.lstAllFoods);
lstFoods.setHasFixedSize(true);
layoutManager = new LinearLayoutManager(this);
lstFoods.setLayoutManager(layoutManager);
adapter = new AllOrdersAdapter((ArrayList<String>) myfoods);
lstFoods.setAdapter(adapter);
loadOrders();
}
This also means you should declare myfoods as a ArrayList<String>, which saves you from having to downcast it. Something like:
ArrayList<String> myfoods = new ArrayList<String>();
Now in loadOrders you simple add the items to the list, and then notify the adapter that its data has changed (so that it repaints the view):
private void loadOrders() {
requests.child("foods").addValueEventListener(new ValueEventListener() {
@Override
public void onDataChange(@NonNull DataSnapshot dataSnapshot) {
for (DataSnapshot postSnapshot : dataSnapshot.getChildren()) {
for (DataSnapshot ing: postSnapshot.getChildren()) {
myfoods.add(ing.child("quantity").getValue(String.class));
myfoods.add(ing.child("productName").getValue(String.class));
myfoods.add(ing.child("productId").getValue(String.class));
}
}
adapter.notifyDataSetChanged();
}
@Override
public void onCancelled(@NonNull DatabaseError databaseError) {
throw databaseError.toException(); // don't ignore errors
}
});
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/52349755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to set the default icon for apps in registry using Exe? I created the installer using NSIS. Everything is working fine, but the default icon is not being set for the particular files.
WriteRegStr HKCR "Myapp\DefaultIcon" "" "$INSTDIR\Myexe,1"
I need to set the default icon using my exe, not an icon file.
A: You need to provide the full path. If you want the first icon in your .exe then you don't need a icon index:
RequestExecutionLevel Admin
Section
WriteRegStr HKCR ".foo" "" "foofile" ; .foo file extension
WriteRegStr HKCR "foofile\DefaultIcon" "" "$INSTDIR\MyApp.exe"
SectionEnd
Other icons need a icon index:
WriteRegStr HKCR "MyProgId\DefaultIcon" "" "$INSTDIR\MyApp.exe,1"
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/57214781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to exclude a Java Object Property as part of JSON String? I have a Java POJO. There are a few properties along with a List<>. While converting this object to a JSON String, I want to exclude the list property.
So what annotation should I use for that?
public class StudentResultSummary {
private String totMarks;
private String avgMarks;
private List<StudentResult> resultList = new ArrayList<StudentResult>();
}
Convert to JSON:
StudentResultSummary resultSummary = new StudentResultSummary();
Json json = new Json();
String jsonString = json.encode(resultSummary);
How can I make sure the field resultList is not included as part of the JSON response?
A: From Chris Seline's answer:
Any fields you don't want serialized in general you should use the
"transient" modifier, and this also applies to json serializers (at
least it does to a few that I have used, including gson).
If you don't want name to show up in the serialized json give it a
transient keyword, eg:
private transient String name;
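As a rough illustration of why that works, here is a toy reflection-based serializer (not Gson itself, operating on a deliberately simplified POJO) that skips transient fields the same way real JSON libraries honoring transient do:

```java
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;

class Summary {
    String totMarks = "95";
    transient String scratch = "not serialized"; // skipped below
}

class TransientDemo {
    // Walk the declared fields and skip any carrying the
    // `transient` modifier, mimicking Gson-style behavior.
    static String toJson(Object o) {
        StringBuilder sb = new StringBuilder("{");
        try {
            for (Field f : o.getClass().getDeclaredFields()) {
                if (Modifier.isTransient(f.getModifiers())) continue;
                f.setAccessible(true);
                if (sb.length() > 1) sb.append(',');
                sb.append('"').append(f.getName()).append("\":\"")
                  .append(f.get(o)).append('"');
            }
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
        return sb.append('}').toString();
    }

    public static void main(String[] args) {
        System.out.println(toJson(new Summary())); // {"totMarks":"95"}
    }
}
```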
A: If you don't want that column in the response JSON you can use @JsonIgnore,
and if you don't want that field in the table you should use @Transient:
@Transient
private String password_key_type;
@JsonIgnore
public int getUser_id()
{
return user_id;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/33893481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Converting spectral data for given Observer/Illuminant to another Observer/Illuminant I'm working on a simple Measuring Software for HunterLab (Color) instruments (EZ line) (screenshot here) and I hope someone can help out here.
They deliver spectral data from 400nm...700nm by 10nm using a D65 light source and 10° Observer.
I have the observer functions for ASTM D65, which work great, and I can reproduce any value from the instrument 1:1, as long as I measure in D65, 10° (converting to XYZ and then CIELab using tristimulus references for the perfect reflecting diffuser).
That was done mostly using algorithms from brucelindbloom.com and easyrgb.com, both have some great information!
Now I want to add the ability to convert the spectral data to another observer or another illuminant (or both). But I can't wrap my head around how to do that.
I guess some directions would be enough, but I don't know if I would need even more references for that (references for illuminants by wavelength?) or if it's done by some other means.
A: OK, here is the answer :)
Spectral data from most spectrophotometers is already corrected, in so far as the hardware illuminant and angle don't matter.
What you do is just use the observer functions for every single angle/illuminant, as written in ASTM E308, to convert the spectral data to XYZ instead of only using the table which corresponds to the hardware illuminant/angle.
That's a lot of reference values, but it works perfectly.
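In code terms, that conversion is just a weighted sum over wavelengths, with one weight table per tristimulus component for each illuminant/observer pair. A toy Python sketch (the weight values in the test are made up, not real ASTM E308 tables):

```python
def spectral_to_xyz(reflectance, weights):
    """Weighted-sum tristimulus calculation in the style of ASTM E308.

    reflectance: {wavelength_nm: reflectance factor}
    weights: {"X": {...}, "Y": {...}, "Z": {...}}, one table per
    component for the chosen illuminant/observer pair.
    """
    return tuple(
        sum(reflectance[wl] * weights[c][wl] for wl in reflectance)
        for c in ("X", "Y", "Z")
    )
```

Switching illuminant or observer then just means swapping in the matching weight tables; the measured spectral data stays the same.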
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/16340265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: PageMethods and UpdatePanel I have a page hierarchy like the following.
I want to execute a PageMethod when I click the 'SAVE' button, so I coded it like the following:
On Button Click I called
OnClientClick="return btnSaveAS_Clicked()"
Called the following on PageLoad of the inner user control
private void RegisterJavaScript()
{
StringBuilder jScript = new StringBuilder();
jScript.Append("<script type='text/javascript'>");
jScript.Append(@"function btnSaveAS_Clicked() {
var txtConditionName = document.getElementById('" + txtConditionName.ClientID + @"').value;
PageMethods.Combine('hello','world', OnSuccess);
function onSuccess(result)
{
alert(result);
}
}");
jScript.Append("</script>");
Page.ClientScript.RegisterStartupScript(this.GetType(), "conditions_key", jScript.ToString());
}
Coded page method as
[WebMethod]
public static string Combine(string s1, string s2) {
return s1 + "," + s2;
}
But it gives the following error...
A: You cannot define page methods in ascx pages. You have to define them in your web form. If you want to have a page method defined in your user control, you'd have to define a forwarding page method in your aspx page like below (source):
in user control:
[WebMethod]
[ScriptMethod(UseHttpGet = true)]
public static string MyUserControlPageMethod()
{
return "Hello from MyUserControlPageMethod";
}
in aspx.cs page:
[WebMethod]
[ScriptMethod]
public static string ForwardingToUserControlPageMethod()
{
return WebUserControl.MyUserControlPageMethod();
}
and in aspx page:
function CallUserControlPageMethod()
{
PageMethods.ForwardingToUserControlPageMethod(callbackFunction);
}
Alternatively, you could use ASMX services and jquery ajax methods (jQuery.ajax, jQuery.get, jQuery.post) to call your methods asynchronously (sample).
Another option would be defining http handlers and call them via jQuery as well (tutorial).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/6825461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6"
}
|
Q: In Google BigQuery, how to get storage size of a partition in time-partitioned table? I can query for the storage size of a table in BigQuery using SELECT size_bytes FROM dataset.__TABLES__ WHERE table_id='mytable', but this only works for finding the total size of a table. How do I get the size of a specific partition of a time-partitioned table? For example, I want to find out how much data is stored in mytable$20180701.
I know I can for example copy that partition to a non-partitioned table and use the method above, but I feel this can't be the right method.
A: You can use dryRun for this. Or, in the UI, just type SELECT * FROM mytable$20180701 and see in the Validator how many bytes will be processed; this is the size of the partition.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/51129082",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: cv has no member CAP_PROP_POS_FRAMES I'm trying to run a bit of code to add trackbars onto some video. It's from the Learning OpenCV Second Edition book, but I can't compile my code; it gives the error "namespace cv has no member CAP_PROP_POS_FRAMES".
Here's the first bit of the code
#include <opencv2\highgui\highgui.hpp>
#include <opencv2\imgproc\imgproc.hpp>
#include <iostream>
#include <fstream>
using namespace std;
int g_slider_position = 0;
int g_run = 1, g_dontset = 0; //start out in a single step mode
cv::VideoCapture g_cap;
void onTrackbarSlide(int pos, void *) {
g_cap.set(cv::CAP_PROP_POS_FRAMES, pos);
if(!g_dontset)
g_run = 1;
g_dontset = 0;
}
A: It's CV_CAP_PROP_POS_FRAMES (note the CV_ prefix), and it should be brought in by highgui.hpp. It's an unnamed enum in the global namespace.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/26744549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Is it possible to sync an azure repo with MWAA (Amazon Managed Workflows for Apache Airflow)? I have set up a private MWAA instance in AWS. It has set up a bucket that stores DAGs in S3.
I've created a private repository in Azure DevOps and have set up a role that can access this bucket.
With Azure-Pipelines is it possible to sync the entire repository to control the DAGs created/modified in that S3 bucket?
I've seen it's possible to create artefacts and push them to the S3 bucket, but what if a dag is deleted? The DAG will still persist in the S3 Bucket and will still be available in MWAA.
Any guidance will be appreciated.
A: If you just want to sync the entire repository to the S3 bucket, you can use the Amazon S3 Upload task in your Azure pipeline.
I'm not sure if that will fully address your problem, though.
If there is any misunderstanding, please feel free to add comments related to your issue.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/74868686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Xcode skips if else statement My if-else statement checks whether some text fields are empty and, if so, pops an alert. However, even though it goes through everything, execution moves on to other functions.
There is an if statement which checks the value of a segmented control and accordingly checks some text fields.
@IBAction func calc(_ sender: Any) {
// Check if dilution text field is empty
let dilutiontext = self.dilution.text
if (dilutiontext?.isEmpty ?? true) {
Alert.showAlert(on: self, with: "Empty Fields", message: "Dilution field is empty")
}
if choose.selectedSegmentIndex == 0 {
if (self.number1.text?.isEmpty) ?? true || self.number2.text?.isEmpty ?? true || self.number3.text?.isEmpty ?? true || self.number4.text?.isEmpty ?? true {
Alert.showAlert(on: self, with: "Empty Fields", message: "Number 1-4 fields should not be empty")
} else {
performSegue(withIdentifier: "turner", sender: self)
}
} else {
if (self.number1.text?.isEmpty) ?? true || self.number2.text?.isEmpty ?? true || self.number3.text?.isEmpty ?? true || self.number4.text?.isEmpty ?? true || self.number5.text?.isEmpty ?? true || self.number6.text?.isEmpty ?? true || self.number7.text?.isEmpty ?? true || self.number8.text?.isEmpty ?? true {
Alert.showAlert(on: self, with: "Empty Fields", message: "Number 1-8 fields should not be empty")
} else {
performSegue(withIdentifier: "turner", sender: self)
}
}
}
I have another file alert.swift which controls the alerts:
import Foundation
import UIKit
struct Alert {
public static func showAlert(on vc: UIViewController, with title: String, message: String) {
let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
vc.present(alert, animated: true)
}
}
EDIT:
Previously self.dilution.text?.isEmpty and now let dilutiontext = self.dilution.text with dilutiontext?.isEmpty
I commented out the prepare for segue function and surprisingly the alerts started working. I still need that function and the alerts working though. Here is the function:
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
var vc = segue.destination as! SecondViewController
if choose.selectedSegmentIndex == 0 {
vc.n1 = Int(number1.text!)!
vc.n2 = Int(number2.text!)!
vc.n3 = Int(number3.text!)!
vc.n4 = Int(number4.text!)!
vc.dil = Int(dilution.text!)!
vc.cn = Int(choose.selectedSegmentIndex)
} else {
vc.n1 = Int(number1.text!)!
vc.n2 = Int(number2.text!)!
vc.n3 = Int(number3.text!)!
vc.n4 = Int(number4.text!)!
vc.n5 = Int(number5.text!)!
vc.n6 = Int(number6.text!)!
vc.n7 = Int(number7.text!)!
vc.n8 = Int(number8.text!)!
vc.cn = Int(choose.selectedSegmentIndex)
vc.dil = Int(dilution.text!)!
}
}
When I run it, instead of showing the alerts (which check if a text field is empty) it continues to the segue function and displays Unexpectedly found nil while unwrapping an Optional value, which is expected
A: Obviously neither the "if" nor the "else if" condition is true then. Add
let dilutiontext = self.dilution.text
let celltext = self.cell.text
then set a breakpoint and examine the values.
A: Clearly the alerts were skipped if one of the if conditions in the segue function was true. So if there was something that would initially make the statements false and then after going through the alerts it would make them true, the problem would be solved.
Therefore I made two more functions each for the if and if else statements in the segue func.
func option1() -> Bool {
if (self.number1.text?.isEmpty) ?? true || self.number2.text?.isEmpty ?? true || self.number3.text?.isEmpty ?? true || self.number4.text?.isEmpty ?? true || self.dilution.text?.isEmpty ?? true || !(self.number5.text?.isEmpty ?? true) || !(self.number6.text?.isEmpty ?? true) || !(self.number7.text?.isEmpty ?? true) || !(self.number8.text?.isEmpty ?? true) {
return false
} else {
return true
}
}
func option2() -> Bool {
if (self.number1.text?.isEmpty) ?? true || self.number2.text?.isEmpty ?? true || self.number3.text?.isEmpty ?? true || self.number4.text?.isEmpty ?? true || self.number5.text?.isEmpty ?? true || self.number6.text?.isEmpty ?? true || self.number7.text?.isEmpty ?? true || self.number8.text?.isEmpty ?? true || self.dilution.text?.isEmpty ?? true {
return false
} else {
return true
}
}
which checked if all the conditions were true and if so returned true, so that the program could move on to the segue func.
The segue would check whether the conditions were true; if not, it would go through the alerts (thereby making option1() or option2() return true) and so the if conditions in the segue func would be true for the program to continue.
override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
var vc = segue.destination as! SecondViewController
if option1() == true {
vc.n1 = Int(number1.text!)!
vc.n2 = Int(number2.text!)!
vc.n3 = Int(number3.text!)!
vc.n4 = Int(number4.text!)!
vc.dil = Int(dilution.text!)!
vc.cn = Int(choose.selectedSegmentIndex)
} else if option2() == true {
vc.n1 = Int(number1.text!)!
vc.n2 = Int(number2.text!)!
vc.n3 = Int(number3.text!)!
vc.n4 = Int(number4.text!)!
vc.n5 = Int(number5.text!)!
vc.n6 = Int(number6.text!)!
vc.n7 = Int(number7.text!)!
vc.n8 = Int(number8.text!)!
vc.cn = Int(choose.selectedSegmentIndex)
vc.dil = Int(dilution.text!)!
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/63283258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Codebehind is not calling through ajax In the code below I have a dropdown; when I select a value in the dropdown, it should call the method in the code-behind and perform the operation. In my case it is not reaching the code-behind. My aim is to make the dropdown dependent. Please help me solve the issue.
Code:
<asp:DropDownList ID="cbField" runat="server" onfocus="setFocus()" CausesValidation="true">
</asp:DropDownList>
Ajax:
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js" type="text/javascript"></script>
<script type ="text/javascript">
$(document).ready(function () {
$('#<%=cbField.ClientID %>').click(function () {
$.ajax({
type: "POST",
url: "GmasField.ascx/cbField_Dependent",
data: "{}",
contentType: "application/json; charset=utf-8",
dataType: "json",
async: true,
cache: false,
success: function (msg) {
$('#myDiv').text(msg.d);
}
})
return false;
});
});
</script>
A: You didn't mention it but have you added the WebMethodAttribute to the method you have defined on the code behind?
You seem to have a method in the code-behind called cbField_Dependent()
So just add the attribute like the first line below I think (note that page methods called from jQuery this way must also be static).
[WebMethod]
public static void cbField_Dependent() {
    //
}
A: Follow this article; it shows a step-by-step method to call a web method.
http://www.c-sharpcorner.com/UploadFile/63e78b/call-an-Asp-Net-C-Sharp-method-web-method-using-jquery/
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/21644845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Trouble clearing spinner data android I have a problem with my spinners in my Android app. I have two spinners implemented. Both load contents from JSON. The first one looks like this:
// Code for First spinner
loc = json.getJSONArray("location");
for(int i = 0; i < loc.length(); i++){
JSONObject c = loc.getJSONObject(i);
//put json object in a variable
String l = c.getString("stock_location");
locationList.add(l);
// Set Spinner Adapter
location.setAdapter(new ArrayAdapter<String>(OfferedDiesel.this,android.R.layout.simple_spinner_dropdown_item,locationList));
// Spinner on item click listener
location.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() {
@Override
public void onItemSelected(AdapterView<?> arg0,View arg1, int position, long arg3) {
place = locationList.get(position);
selected = place;
new LoadProduct().execute();
product_title.setVisibility(View.VISIBLE);
product.setVisibility(View.VISIBLE);
}
@Override
public void onNothingSelected(AdapterView<?> arg0) {
place = null;
}
});
}
//Following is the LoadProduct
private class LoadProduct extends AsyncTask<String, String, JSONObject> {
@Override
protected void onPreExecute() {
super.onPreExecute();
pDialog = new ProgressDialog(OfferedDiesel.this);
pDialog.setMessage("Loading Product ...");
pDialog.setIndeterminate(false);
pDialog.setCancelable(true);
}
@Override
protected JSONObject doInBackground(String... args) {
UserFunctions userFunction = new UserFunctions();
JSONObject json = userFunction.getProductFromLocation(selected);
return json;
}
protected void onPostExecute(JSONObject json) {
if(pDialog.isShowing()){
pDialog.dismiss();
}
// Locate the spinner in activity_main.xml
/*LOcate site node*/
try{
if(json.has("error_msg")){
String err = json.getString("error_msg");
alert.showAlertDialog(OfferedDiesel.this, "ALert !!", err, null);
}else{
prod = json.getJSONArray("product");
for(int i = 0; i < prod.length(); i++){
JSONObject d = prod.getJSONObject(i);
//put json object in a variable
String k = d.getString("stock_name");
productList.add(k);
arrayAdapter = new ArrayAdapter<String>(OfferedDiesel.this,android.R.layout.simple_spinner_dropdown_item,productList);
// Set Spinner Adapter
product.setAdapter(arrayAdapter);
// Spinner on item click listener
product.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() {
@Override
public void onItemSelected(AdapterView<?> arg0,View arg1, int position, long arg3) {
good = locationList.get(position);
//selected = place;
}
@Override
public void onNothingSelected(AdapterView<?> arg0) {
place = null;
}
});
}
}
}catch (JSONException e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
}
}/*End of Spinner Populate*/
Here, whenever I select a new item in the above spinner it executes new LoadProduct().execute(); and on my second spinner I get the data, but the data are appended. For example, if by default '1' is selected on my first spinner, then on the second spinner I get "First"; when I select '2' (a change on the first spinner), "Second" is appended to "First", and hence I get 2 items, "First" and "Second", instead of just one, i.e. "Second". I now need to clear the spinner list before loading the new one. For both spinners I am using JSON to load data.
Thanks in advance.
A: I'm going to take a few guesses here about what you are trying to do, because I'm not 100% sure I understood... Here's what I think you are trying to do:
*
*There are two ListViews.
*When you click on the first one, it sets up text in the second one to load.
*It does so via an AsyncTask called LoadProduct.
*selected is used in the LoadProduct to determine what to load.
*You aren't getting quite the results you expect.
Okay, so if I'm right here, I would suggest a few changes.
*
*Pass in the selected value to the AsyncTask.
*Clear the previous adapter from it's values.
*Add the new values.
The AsyncTask should be able to take the input, no problem.
A: You just need to clear the array list using the clear() method before adding the new data to the array; this will definitely help you.
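The pitfall this answer fixes can be shown with any growable list: reusing the same backing list across loads without clearing it accumulates stale entries (a minimal plain-Python sketch; the names are illustrative stand-ins for the question's productList and onPostExecute):

```python
product_list = []

def load_products(items):
    # Hypothetical stand-in for onPostExecute repopulating the spinner's list.
    product_list.extend(items)

load_products(["First"])    # first spinner selection loads its data
load_products(["Second"])   # second selection, WITHOUT clearing first
stale = list(product_list)  # stale + new entries mixed together

product_list.clear()        # clear before repopulating, as the answer says
load_products(["Second"])
fresh = list(product_list)  # only the latest data remains
```

After clearing, the adapter only ever sees the data for the current selection, so no notifyDataSetChanged call shows leftovers from a previous load.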
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/22262687",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Read Word ActiveX radio button into vb.net form I'm working on some code to read values in a Word document into a windows form using vb.net. The word document is designed so that the data to be read in is all contained within content controls. Here is a sample of my code:
Private Sub ImportWordButton_Click(sender As Object, e As EventArgs) Handles ImportWordButton.Click
Dim oWord As Microsoft.Office.Interop.Word.Application
Dim oDoc As Microsoft.Office.Interop.Word.Document
Dim oCC As Microsoft.Office.Interop.Word.ContentControl
oWord = CreateObject("Word.Application")
oDoc = oWord.Documents.Open("C:\Temp\PIFFormTest2.docx", [ReadOnly]:=False)
For Each oCC In oDoc.ContentControls
Select Case oCC.Tag
Case "PIFNo"
NumberBox.Text = oCC.Range.Text
Case "PIFTitle"
TitleBox.Text = oCC.Range.Text
Case "Initiator"
InitiatorBox.Text = oCC.Range.Text
Case "PTHealthSafety"
HealthSafeCheckBox.Checked = oCC.Checked
Case "PTRegEnviro"
RegEnvCheckBox.Checked = oCC.Checked
... it goes on. Some of the content in the Word file is captured with ActiveX radio buttons rather than content controls. I can't seem to find the correct object for referring to the radio buttons. I've spent significant time searching the web. Any help is appreciated.
A: I'm working on this too, and it's a nightmare.
For Each f As Field In oDoc.Fields 'notice fields not content controls
Console.WriteLine(f.OLEFormat.Object.Name) 'notice properties, not methods...
Next
Here's the MSDN reference
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31055821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Entity framework relationship issue 'Unable to determine the principal end...' I have two classes, like below
Class One
{
ID (PK),
Property 2;
}
Class Two
{
ID (PK),
One_ID (FK),
Nullable_One_ID (FK)
}
While saving I am getting error ' Unable to determine the principal end of the 'x' relationship. Multiple added entities may have the same primary key.'
I tried many combinations WithOutPrincipal and WithOutDependant etc. But no luck, please guide me to the right relationship.
A: Your question is a bit unclear, and you might want to provide more detail, but I suspect that you want only one foreign key property on your class two. Depending on how you're creating these objects, this may also be happening because you're trying to reference an id that's 0, because the object has not yet been saved to the database, so it has not been assigned an id.
A: If you are trying to have a 1-to-0..1 relationship between class One and class Two, then the primary key and foreign key need to be the same in class Two, and it should be the primary key of class One
Class One { ID (PK), Property 2; }
Class Two { One_ID (PK, FK), Nullable_One_ID (FK) }
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/26739130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Maven trying to deploy the same artifact twice I'm using Maven to build my project, but when I run the command mvn clean package deploy, it tries to deploy the artifact twice. I have the build-helper-maven-plugin plugin configured to attach an ear file that I create using a custom plugin.
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<version>1.9.1</version>
<executions>
<execution>
<id>attach-artifacts</id>
<phase>package</phase>
<goals>
<goal>attach-artifact</goal>
</goals>
<configuration>
<artifacts>
<artifact>
<file>${project.build.directory}/${project.artifactId}-${project.version}.ear</file>
<type>ear</type>
</artifact>
</artifacts>
</configuration>
</execution>
</executions>
</plugin>
When I disable build-helper-maven-plugin, the remaining artifact (only the pom) is uploaded only once.
What should I do to let Maven deploy the extra ear file only once?
Erates
EDIT
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>my.group.id</groupId>
<artifactId>my.artifact.id</artifactId>
<version>1.0.0-SNAPSHOT</version>
<packaging>pom</packaging>
<name>My Project</name>
<properties>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>
<scm>
<!-- Config -->
</scm>
<distributionManagement>
<repository>
<!-- Config -->
</repository>
<snapshotRepository>
<!-- Config -->
</snapshotRepository>
</distributionManagement>
<dependencies>
<!-- My Dependencies here -->
</dependencies>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-dependency-plugin</artifactId>
<version>2.9</version>
<executions>
<execution>
<id>copy-dependencies</id>
<phase>package</phase>
<goals>
<goal>copy-dependencies</goal>
</goals>
<configuration>
<outputDirectory>${project.build.directory}</outputDirectory>
<overWriteReleases>false</overWriteReleases>
<overWriteSnapshots>false</overWriteSnapshots>
<overWriteIfNewer>true</overWriteIfNewer>
<includeGroupIds>my.group.ids.that.need.to.be.included</includeGroupIds>
</configuration>
</execution>
</executions>
</plugin>
<plugin>
<groupId>my.group.id</groupId>
<artifactId>my.custom.plugin</artifactId>
<version>1.0.1</version>
<configuration>
<params>
<!-- My params -->
</params>
</configuration>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>my-custom-goal</goal>
</goals>
</execution>
</executions>
</plugin>
<!-- Release Plugin -->
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-release-plugin</artifactId>
<version>2.4</version>
<configuration>
<goals>clean package deploy</goals>
<tagBase>https://my.tagbase</tagBase>
</configuration>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>build-helper-maven-plugin</artifactId>
<version>1.9.1</version>
<executions>
<execution>
<id>attach-artifacts</id>
<phase>package</phase>
<goals>
<goal>attach-artifact</goal>
</goals>
<configuration>
<artifacts>
<artifact>
<file>${project.build.directory}/${project.artifactId}-${project.version}.ear</file>
<type>ear</type>
</artifact>
</artifacts>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
<modules>
<!-- My Modules -->
</modules>
</project>
A: First, you are using modules and trying to do weird things in your parent pom (dependency-plugin, build-helper, etc.). In a parent there should never be an execution like you have in your pom. You should put the appropriate configuration/execution within the appropriate modules, because this definition will be inherited by all children.
Would you like to create an ear file? Then you should use packaging ear and your ear file will simply be deployed by using mvn deploy.
Furthermore, you seem to misunderstand the lifecycle, because if you call:
mvn clean package deploy
this can be reduced to:
mvn clean deploy
because the package lifecycle is part of deploy, so I recommend reading the lifecycle documentation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30391676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: SQL Query Help: Probably Involves Group By My software runs on a few databases so I need to be generic (notably, I think a "distinct on" solution might work, but that isn't standard).
Say I have two tables defined as:
Table A: id, time (pk: id)
Table B: id(fk), key, value (pk: id, key)
Where the id of Table B is a foreign key to Table A and the primary keys are as specified.
I need the latest (in time) value of the requested keys. So, say I have data like:
id | key | value
1 | A | v1
1 | B | v2
1 | C | v3
2 | A | v4
2 | C | v5
3 | B | v6
3 | D | v7
Where time is increasing for the id, and I want values for keys A and C, I would expect a result like:
key | value
A | v4
C | v5
Or for keys D and A:
key | value
D | v7
A | v4
Ideally, I'd only get n rows back for n key requests. Is something like this possible with a single query?
A: Here is it working with SQL Server 2008 R2:
Schema
CREATE TABLE B (
ind integer,
[key] varchar(8),
value varchar(16)
);
INSERT INTO B VALUES (1, 'A', 'v1');
INSERT INTO B VALUES (1, 'B', 'v2');
INSERT INTO B VALUES (1, 'C', 'v3');
INSERT INTO B VALUES (2, 'A', 'v4');
INSERT INTO B VALUES (2, 'C', 'v5');
INSERT INTO B VALUES (3, 'B', 'v6');
INSERT INTO B VALUES (3, 'D', 'v7');
INSERT INTO B VALUES (3, 'A', 'v12');
INSERT INTO B VALUES (3, 'C', 'v17');
INSERT INTO B VALUES (3, 'C', 'v101');
Query
select [key], max(CAST(SUBSTRING(value, 2, LEN(value)) as INT)) from B
where [key] in ('A', 'C')
group by [key]
If you want to keep the v as a prefix, do this:
select [key], 'v' + CAST(max(CAST(SUBSTRING(value, 2, LEN(value)) as INT)) as VARCHAR) as Value
from B
where [key] in ('A', 'C')
group by [key]
Output
| KEY | VALUE |
------------------
| A | v12 |
| C | v101 |
The SQL Fiddle to play with: http://sqlfiddle.com/#!3/9de72/1
A: I'm assuming that no ID values will have the same TIME value.
I believe the following is about as generic as you can get with the option of running on as many databases as possible. But the trade off for platform flexibility is less than optimal performance.
The sub-query with alias t identifies the latest time available for each key.
select a.id,
a.time,
b.key,
b.value
from tableB as b
inner join tableA as a
on a.id=b.id
inner join (
select b2.key,
max(a2.time) time
from tableB as b2
inner join tableA as a2 on a2.id=b2.id
group by b2.key
) as t on a.time = t.time and b.key = t.key
where b.key in ('D','A')
The WHERE clause could be moved inside the sub-query and it might perform slightly better on a database that has a primitive optimizer. But putting the WHERE clause in the outer query makes it easier to maintain, and also leaves open the possibility of creating a view without the WHERE clause. The WHERE clause can then be added to a select from the view.
A query using ROW_NUMBER() would be more efficient, or even better yet a query using something like Oracle's KEEP LAST. But those features would make the query more restrictive to certain platforms.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/12101817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Chained creational OOP design pattern problem I am creating a class that is responsible for validating a configuration. This class calls other classes that validate said config by creating new instances in the form of a chain. At first glance, the code structure looks horrible, but it works. Anyway, I think it's not the best way to handle this logic.
I leave here a simplified version of the code in TypeScript, but I also leave it in Python and Java for reference only:
class Validator {
private _notValidatedConfig: NotValidatedConfig
constructor(notValidatedConfig: NotValidatedConfig) {
this._notValidatedConfig = notValidatedConfig
}
validateConfig(): ValidatedConfig {
return (
new Phase4Validation(
new Phase3Validation(
new Phase2Validation(
new Phase1Validation(
this._notValidatedConfig
).validate()
).validate()
).validate()
).validate()
)
}
// Alternative
validateConfig2(): ValidatedConfig {
const validatedPhase1Config: ValidatedPhase1Config = new Phase1Validation(this._notValidatedConfig).validate()
const validatedPhase2Config: ValidatedPhase2Config = new Phase2Validation(validatedPhase1Config).validate()
const validatedPhase3Config: ValidatedPhase3Config = new Phase3Validation(validatedPhase2Config).validate()
const validatedPhase4Config: ValidatedPhase4Config = new Phase4Validation(validatedPhase3Config).validate()
return validatedPhase4Config;
}
}
*
*Python
*Java Disclaimer: I don't have any experience with Java, so maybe there are some syntax errors.
The "alternative" is the same code, but not directly chained, instead, for every validation, it's creating a new variable.
I think the "alternative" is more readable but performs worse.
What do you think about this code? What would you change? How would you face this problem, or with what design pattern or framework? (The programming language doesn't matter for this question.)
A: I would create a base class Validation and just create derived classes from it if it is necessary to add new validation:
public abstract class Validation
{
public Validation(string config)
{
}
public abstract string Validate();
}
and its concrete implementations:
public class Phase1Validation : Validation
{
public Phase1Validation(string config) : base(config)
{}
public override string Validate()
{
if (true)
return null;
return "There are some errors Phase1Validation";
}
}
public class Phase2Validation : Validation
{
public Phase2Validation(string config) : base(config)
{
}
public override string Validate()
{
if (true)
return null;
return "There are some errors in Phase2Validation";
}
}
and then just create a list of validators and iterate through them to find errors:
public string Validate()
{
List<Validation> validations = new List<Validation>()
{
new Phase1Validation("config 1"),
new Phase2Validation("config 2")
};
foreach (Validation validation in validations)
{
string error = validation.Validate();
if (!string.IsNullOrEmpty(error))
return error;
}
return null; // it means that there are no errors
}
UPDATE:
I've little bit edited my classes to fit your new question requirements:
*
*validations should be ordered. Added Order property
*get config from previous validation and send it to the next validation
It can be seen that this approach allows you to avoid writing nested calls like this:
new Phase4Validation(
new Phase3Validation(
new Phase2Validation(...).validate()
).validate()
).validate()
So you can add new classes without editing the validation classes, and it helps to keep the Open/Closed Principle of SOLID.
So the code looks like this:
Abstractions:
public abstract class Validation
{
// Order to handle your validations
public int Order { get; set; }
// Your config file
public string Config { get; set; }
public Validation(int order)
{
Order = order;
}
// "virtual" means that method can be overriden
public virtual string Validate(string config)
{
Config = config;
if (true)
return null;
return "There are some errors Phase1Validation";
}
}
And its concrete implementations:
public class Phase1Validation : Validation
{
public Phase1Validation(int order) : base(order)
{
}
}
public class Phase2Validation : Validation
{
public Phase2Validation(int order) : base(order)
{
}
}
And method to validate:
string Validate()
{
List<Validation> validations = new List<Validation>()
{
new Phase1Validation(1),
new Phase2Validation(2)
};
validations = validations.OrderBy(v => v.Order).ToList();
string config = "";
foreach (Validation validation in validations)
{
string error = validation.Validate(config);
config = validation.Config;
if (!string.IsNullOrEmpty(error))
return error;
}
return null; // it means that there are no errors
}
A: I leave here my own answer, but I'm not going to select it as correct because I think there exist better answers (besides the fact that I am not very convinced of this implementation).
A kind of Decorator design pattern allowed me to do chain validation with greater use of the dependency injection approach.
I leave here the code but only for Python (I have reduced the number of phases from 4 to 2 to simplify the example).
from __future__ import annotations
import abc
from typing import cast
from typing import Any
from typing import TypedDict
NotValidatedConfig = dict
ValidatedConfig = TypedDict("ValidatedConfig", {"foo": Any, "bar": Any})
class InvalidConfig(Exception):
...
# This class is abstract.
class ValidationHandler(abc.ABC):
_handler: ValidationHandler | None
def __init__(self, handler: ValidationHandler = None):
self._handler = handler
# This method is abstract.
@abc.abstractmethod
def _validate(self, not_validated_config: NotValidatedConfig):
...
def _chain_validation(self, not_validated_config: NotValidatedConfig):
if self._handler:
self._handler._chain_validation(not_validated_config)
self._validate(not_validated_config)
def get_validated_config(self, not_validated_config: NotValidatedConfig) -> ValidatedConfig:
self._chain_validation(not_validated_config)
# Here we convert (in a forced way) the type `NotValidatedConfig` to
# `ValidatedConfig`.
# We do this because we already run all the validations chain.
# Force a type is not a good way to deal with a problem, and this is
# the main downside of this implementation (but it works anyway).
return cast(ValidatedConfig, not_validated_config)
class Phase1Validation(ValidationHandler):
def _validate(self, not_validated_config: NotValidatedConfig):
if "foo" not in not_validated_config:
raise InvalidConfig('Config miss "foo" attr')
class Phase2Validation(ValidationHandler):
def _validate(self, not_validated_config: NotValidatedConfig):
if not isinstance(not_validated_config["foo"], str):
raise InvalidConfig('"foo" must be an string')
class Validator:
_validation_handler: ValidationHandler
def __init__(self, validation_handler: ValidationHandler):
self._validation_handler = validation_handler
def validate_config(self, not_validated_config: NotValidatedConfig) -> ValidatedConfig:
return self._validation_handler.get_validated_config(not_validated_config)
if __name__ == "__main__":
# "Pure Dependency Injection"
validator = Validator((Phase2Validation(Phase1Validation())))
validator.validate_config({"foo": 1, "bar": 1})
What is the problem with this approach?: the lightweight way in which the types are concatenated. In the original example, the Phase1Validation generates a ValidatedPhase1Config, which is safely used by the Phase2Validation. With this implementation, each decorator receives the same data type to validate, and this creates safety issues (in terms of typing). The Phase1Validation gets NotValidatedConfig, but the Phase2Validation can't use that type to do the validation, they need the Phase1Validation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72596535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Issues with persistent volume on DigitalOcean Kubernetes cluster Just created a managed 2-node Kubernetes (ver. 1.22.8) cluster on DigitalOcean (DO).
After installing WordPress using Bitnami Helm chart, and then installing a WP plugin, the site became unreachable.
Looking into DO K8s dashboard in the deployment section, the wordpress deployment shows the following error:
0/2 nodes are available: 2 pod has unbound immediate PersistentVolumeClaims.
AttachVolume.Attach failed for volume "pvc-c859847e-f250-4e71-9ed3-63c92cc01f50" : rpc error: code = DeadlineExceeded desc = context deadline exceeded
MountVolume.MountDevice failed for volume "pvc-c859847e-f250-4e71-9ed3-63c92cc01f50" : rpc error: code = Internal desc = formatting disk failed: exit status 1 cmd: 'mkfs.ext4 -F /dev/disk/by-id/scsi-0DO_Volume_pvc-c859847e-f250-4e71-9ed3-63c92cc01f50' output: "mke2fs 1.45.5 (07-Jan-2020)\nThe file /dev/disk/by-id/scsi-0DO_Volume_pvc-c859847e-f250-4e71-9ed3-63c92cc01f50 does not exist and no size was specified.\n"
Readiness probe failed: HTTP probe failed with statuscode: 404
As I'm quite new to K8s, I don't know much how to troubleshoot this.
Any help would be much appreciated.
UPDATE
I was able to reproduce the error and found what triggers it.
WordPress Bitnami charts installs several WP plugins by default. As soon as I try to delete them, the error shows up and the persistent volume gets corrupted...
Is this maybe a bug, or is it standard behavior?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72059425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: ArrayList duplicate element show-up in list view I'm developing a Twitter-like app and my feed is based on a ListView + adapter. By default you get the first 20 elements of your feed, and when scrolling down and reaching the end of the list, the fragment which contains the view sends a request to Twitter to get the remaining feeds. As soon as we get them, I update the list handled by the adapter and send the data-set-changed notification. The overall process works, but I got a bug at the end of the list.
When reaching the end of the list and getting the new list, the last tweet is duplicated, which means that my feed shows the same tweet twice.
For example, you have this feed: tweetA, tweetB, tweetC. When reaching tweetC, we ask for the old tweets and, once received, I use a method called setData to give the new list to the adapter and request the notifyDataSetChanged. But when you get the list and update, the listview shows:
tweetA, tweetB, tweetC, tweetC, tweet D, tweetF
I was thinking it's coming from the new list received and not the listview and thinking that I can just remove the first element of the new list but the issue remain, in my case (when removing the first element), tweet D disappear and show tweetF
here is the code:
Feedfragment :
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container,
Bundle savedInstanceState) {
Log.d(TAG, "inside onCreateView");
View view = inflater.inflate(R.layout.twitter_swip_list,container, false);
mlistview = (ListView) view.findViewById(R.id.listview) ;
TwitterTimeLines twitt = new TwitterTimeLines(mActivity,TwitterHomeActivity.getTwitterHandler());
twitt.getTimeline(handler); return view;
}
handler is used to get back the result from twitter
private final Handler handler = new Handler() {
@Override
public void handleMessage(android.os.Message msg) {
ArrayList<Status> status = (ArrayList<Status>) msg.obj ;
mStatus = status;
mTwitterFeedAdapter.setData (mStatus);
mTwitterFeedAdapter.notifyDataSetChanged();
}
}
} ;
Adapter :
public View getView(final int position, View convertView, ViewGroup parent) {
Log.d(TAG,"position"+position) ;
mPosition = position;
mViewHolder = new TweetViewHolder(mActivity);
mInflater = (LayoutInflater) mContext.getSystemService(Context.LAYOUT_INFLATER_SERVICE) ;
if(convertView == null) {
convertView = mInflater.inflate(R.layout.twitter_feed,null) ;
mViewHolder.userProfilePicture = (ImageView) convertView.findViewById(R.id.profileImg) ;
mViewHolder.userRealName = (TextView) convertView.findViewById(R.id.userName) ;
....
convertView.setTag(mViewHolder) ;
}
else {
mViewHolder = (TweetViewHolder) convertView.getTag() ;
}
/* detect end of the list and request old tweets */
if(position >= mStatus.size() - 1 ){
mTweetFeedFragment.populateOldTimeLine();
}
/* both classes are used to extract data and put it in an
easy-to-use format */
mTweetMediaInfo = new TweetMediaInfo(mStatus.get(position));
mTweetInfo = new TweetInfo(mStatus.get(position));
showTweetContent();
return convertView ;
}
public void setData(ArrayList<Status> status){
mStatus = status;
}
Any idea ?
A: You can use a LinkedHashSet like this to remove the duplicates while preserving order:
list = new ArrayList<String>(new LinkedHashSet<String>(list))
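As an illustrative aside (the question is Java, but the idea is language-agnostic), the same order-preserving de-duplication can be sketched in Python, assuming the items are hashable and compare equal when they are duplicates:

```python
tweets = ["tweetA", "tweetB", "tweetC", "tweetC", "tweetD", "tweetF"]

# dict.fromkeys keeps only the first occurrence of each key and, since
# Python 3.7, preserves insertion order - so the duplicated "tweetC"
# is dropped without reshuffling the feed.
deduped = list(dict.fromkeys(tweets))

print(deduped)
# → ['tweetA', 'tweetB', 'tweetC', 'tweetD', 'tweetF']
```

Note, though, that de-duplicating like this hides the symptom; the real fix is to stop appending the duplicate in the first place.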
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/38725952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Resetting score when restarting the game I am making a VR game with only one level, which is in the main scene. The other scene is an "end" scene, on which the Game Over text and score are visible along with a Restart button (which reloads the main scene) and an Exit button.
My problem is: I'm using the ScoreManager script given below. I want this score in the end scene too, and that part works since I'm using PlayerPrefs.
But the main problem is that when clicking Restart in the end scene, the game reloads the main scene, yet the score still has the value from the previous game. I want it to be reset to zero.
using UnityEngine;
using UnityEngine.UI;
using System.Collections;
namespace CompleteProject
{
public class ScoreManager : MonoBehaviour
{
public static int score ; // The player's score.
Text text; // Reference to the Text component.
void Awake()
{
// Set up the reference.
text = GetComponent<Text>();
score = 0;
score = PlayerPrefs.GetInt("Score");
}
void Update ()
{
// Set the displayed text to be the word "Score" followed by the score value.
text.text = "Score: " + score;
PlayerPrefs.SetInt("Score", score);
}
}
}
I have also used DeleteKey(string) to delete the score, but nothing happened.
A: You said you tried DeleteKey but it didn't work. Your code does not have the DeleteKey function anywhere. If you don't know how to use that function, the code below will show you how to use it. If you actually know how to use it but it's not working, as mentioned in your question, then call PlayerPrefs.Save() after it. This should delete the key and persist the change right away.
To reset the score after each game, put the code in the OnDisable() function.
void OnDisable()
{
PlayerPrefs.DeleteKey("Score");
PlayerPrefs.Save();
}
To reset it when game begins, get the current score like you did in the Awake() function then change the function above to OnEnable().
A: The problem is that you still get the score from the previous session, so you need to reset the saved value back to zero by using the line:
PlayerPrefs.SetInt("Score", 0);
public static int score ;
Text text;
void Start(){
PlayerPrefs.SetInt("Score", 0);
// Set up the reference.
text = GetComponent<Text>();
score = 0;
score = PlayerPrefs.GetInt("Score",0);
}
void Update ()
{
// Set the displayed text to be the word "Score" followed by the score value.
text.text = "Score: " + score;
PlayerPrefs.SetInt("Score", score);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/39155642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: scroll event on Hostlistener I have defined template
@Component({
selector: 'name',
directives: [ ... ],
templateUrl: 'name.html'
})
and class
export class ProductGridComponent implements OnInit {
@HostListener('scroll', ['$event'])
onScroll(e) {
alert(window.pageYOffset)
}
products = [];
}
But it does not show anything; however, when I replace scroll and onScroll with click and onClick, it indeed shows the alert.
Why does it not work with scroll? Does Angular 2 have some other implementation for it?
Thanks
A: You can create a custom directive like bellow
import { Directive, HostListener } from '@angular/core';
@Directive({
selector: '[scroller]'
})
export class ScrollerDirective {
@HostListener('scroll') scrolling(){
console.log('scrolling');
}
@HostListener('click') clicking(){
console.log('clicking...');
}
constructor() { }
}
And then assuming you have a html template like
<div class="full-width" scroller>
<div class="double-width"></div>
</div>
use the following css
.full-width{
width: 200px;
height: 200px;
overflow: auto;
}
.double-width{
width:400px;
height: 100%;
}
the console will print "scrolling" when running the app if you move the horizontal scroll bar.
The key here is that the scroller directive should be on the parent, not on the child div.
A: @HostListener('document:wheel', ['$event.target'])
public onWheel(targetElement) {
console.log()
}
A: Set (scroll)="yourFunction($event)" within the template at the corresponding element.
A: You could do it by the following code as well:
import { HostListener} from "@angular/core";
@HostListener("window:scroll", [])
onWindowScroll() {
//we'll do some stuff here when the window is scrolled
}
A: For those who were still not managed to figure this out with any of above solutions, here's a running code that is tested on Angular 12
@HostListener('document:wheel', ['$event.target'])
onScroll(): void {
console.log("Scrolling");
}
A: In angular 10 and above, when listening to window events do the following:
import { HostListener } from '@angular/core';
@HostListener('window:scroll', ['$event']) onScroll($event: Event): void {
if($event) {
console.log($event));
}
}
A: Assuming you want to display the host scroll (and not the windows one) and that you are using angular +2.1
@HostListener('scroll', ['$event']) private onScroll($event:Event):void {
console.log($event.srcElement.scrollLeft, $event.srcElement.scrollTop);
};
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/38748572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Multiple if blocks or single if/else block? Which will typically have better running time, multiple if blocks or a single if/else block?
if (statement1) {
/* do something */
return result;
}
if (statement2) {
/* do something */
return result;
}
if (statement3) {
/* do something */
return result;
}
Versus:
if (statement1) {
/* do something */
return result;
} else if (statement2) {
/* do something */
return result;
} else if (statement3) {
/* do something */
return result;
}
I've always used the first style when the logical statements weren't in any way related, and the second if they were.
EDIT
To clarify why this thought popped into my head, I am programming a prime checker and have an if block that looks something like this:
if(n<2) return 0;
if(n==2) return 1;
if(n%2==0) return 0;
...
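For context, a complete version of such a prime checker (a sketch in Python, not the asker's original language) shows why the early returns make the two styles behave identically here:

```python
def is_prime(n):
    # Early returns: once one of these fires, none of the later
    # conditions is ever evaluated, so plain ifs act like if/else.
    if n < 2:
        return False
    if n == 2:
        return True
    if n % 2 == 0:
        return False
    # Only odd divisors up to sqrt(n) remain to be checked.
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

print([x for x in range(20) if is_prime(x)])
# → [2, 3, 5, 7, 11, 13, 17, 19]
```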
A:
Which will typically have better running time, multiple if blocks or a single if/else block?
This is largely irrelevant as the semantics are different.
Now, if the goal is comparing the case of
if (a) { .. }
else if (b) { .. }
else { .. }
with
if (a) { return }
if (b) { return }
return
where no statements follow the conditional then they are the same and the compiler (in the case of a language like C) can optimize them equivalently.
A: Always go for if/else if... when you know only one of the conditions is to be executed. Writing multiple ifs makes the program check each and every condition even after the first condition is met, which has a performance overhead. Multiple ifs can be used when you want to check and perform multiple operations based on several independent conditions.
A: The if/else approach is faster, because it will skip evaluating the following conditions after one test succeeds.
However, the two forms are only equivalent if the conditions are mutually exclusive. If you just mindlessly convert one form into the other, you will introduce bugs. Make sure that you get the logic right.
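The evaluation difference is easy to observe with a small Python sketch that counts how often each condition is actually tested:

```python
calls = []

def check(name, result):
    """Record that this condition was evaluated, then return its value."""
    calls.append(name)
    return result

# Independent ifs: every condition is evaluated, even after one succeeds.
calls.clear()
if check("a", True):
    pass
if check("b", True):
    pass
assert calls == ["a", "b"]

# if/elif: evaluation stops at the first condition that succeeds.
calls.clear()
if check("a", True):
    pass
elif check("b", True):
    pass
assert calls == ["a"]
```

(With a `return` in every branch, as in the question's prime checker, the later conditions are skipped either way, which is why the two forms are equivalent there.)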
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/23687634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Attach Angular component to document body I would like to render an Angular component (an overlay) as a direct child of the documents body element. Today this can be done as follows:
constructor(
private applicationRef: ApplicationRef,
private componentFactoryResolver: ComponentFactoryResolver,
@Inject(DOCUMENT) private document: Document,
private injector: Injector
) {}
public renderOverlayInBody(): void {
const overlayRef = this.componentFactoryResolver
.resolveComponentFactory(OverlayComponent)
.create(this.injector);
this.applicationRef.attachView(overlayRef.hostView);
const overlayRoot = (overlayRef.hostView as EmbeddedViewRef<unknown>)
.rootNodes[0] as HTMLElement;
// This component can be appended to any place in the DOM,
// in particular directly to the document body.
this.document.body.appendChild(overlayRoot);
}
Demo in Stackblitz
Unfortunately, ComponentFactoryResolver has been deprecated in Angular 13 and may be removed in Angular 16. The suggested replacement is ViewContainerRef.createComponent:
constructor(private viewContainerRef: ViewContainerRef) {}
public ngOnInit(): void {
// This component can only be appended to the current one,
// in particular not directly to the document body.
this.viewContainerRef.createComponent(OverlayComponent);
}
While this is much simpler to read, it doesn't allow to render components as direct children of the documents body element. Is there any way to do that, which doesn't rely on currently deprecated code?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73734581",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why are Chinese characters not shown in Smartface App Studio IDE? Given the following code,
function Page1_Self_OnShow() {
//Comment following block for removing navigationbar/actionbar sample
//Copy this code block to every page onShow
header.init(this);
header.setTitle("Page1中国文字");
header.setRightItem("RItem");
header.setLeftItem();
this.statusBar.transparent = true; /**/
}
why are the Chinese characters not shown in the Smartface App Studio IDE?
A: There is a problem with the display of these kinds of characters in the Smartface desktop IDE, but it actually works fine. The visual problem is a reported bug and will be fixed in an upcoming version of Smartface.
For now, you can show Chinese characters on a device without any problem.
I copied your code (the below code):
function Page1_Self_OnShow() {
//Comment following block for removing navigationbar/actionbar sample
//Copy this code block to every page onShow
header.init(this);
header.setTitle("Page1中国文字");
header.setRightItem("RItem");
header.setLeftItem();
this.statusBar.transparent = true;
/**/
}
And this code block looks wrong in the IDE (this is the bug):
It should be setTitle("Page1中国文字"); but it is missing the last " character.
As I said before, this is only a visual bug.
When I run this app on a device (using the device emulator or after publishing), it appears as in the screenshot below:
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/34203244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Java not found while installing WebSphere Application Server I am trying to install WebSphere Application Server (32-bit) on Ubuntu 14.04.3 (64-bit) using the IBM Installation Manager with a silent install and a response file.
The commands I am using are:
sudo ./IBMIM --launcher.ini silent-install.ini -input ibm_im_response_file.xml -acceptLicense
In the logs I see the following:
WARNING: /opt/IBM/WebSphere/AppServer/bin/iscdeploy.sh: 44:
/opt/IBM/WebSphere/AppServer/bin/iscdeploy.sh:
/opt/IBM/WebSphere/AppServer/java/bin/java: not found
This file exists, but it's a link to /opt/IBM/WebSphere/AppServer/java/jre/java.
EDIT: same happens when using the GUI
EDIT2: I tried doing the same under Ubuntu 12.04 and it worked. Apparently there is a problem with the 32-bit version related to the ia32-libs package, which is no longer available in 14.04.
A: If you are trying to install WebSphere Application Server through Installation Manager, then you need to select the IBM SDK provided with WebSphere Application Server.
If you don't choose the IBM SDK, you will need to enter the Java path in the next steps.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/33804485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Delete all files and folders in multiple directories but leave the directories Well, I like this piece of code; it seems to work well, but I can't seem to add any more directories to it:
DirectoryInfo dir = new DirectoryInfo(@"C:\temp");
foreach(FileInfo files in dir.GetFiles())
{
files.Delete();
}
foreach (DirectoryInfo dirs in dir.GetDirectories())
{
dirs.Delete(true);
}
I would also like to include special folders such as History and Cookies. How would I go about doing that? (I would like to include at least 4-5 different folders.)
A: Perhaps something like this would help. I did not test it.
public void DeleteDirectoryFolders(DirectoryInfo dirInfo){
foreach (DirectoryInfo dirs in dirInfo.GetDirectories())
{
dirs.Delete(true);
}
}
public void DeleteDirectoryFiles(DirectoryInfo dirInfo) {
foreach(FileInfo files in dirInfo.GetFiles())
{
files.Delete();
}
}
public void DeleteDirectoryFilesAndFolders(string dirName) {
DirectoryInfo dir = new DirectoryInfo(dirName);
DeleteDirectoryFiles(dir);
DeleteDirectoryFolders(dir);
}
public void main() {
List<string> DirectoriesToDelete = new List<string>();
DirectoriesToDelete.Add(@"c:\temp");
DirectoriesToDelete.Add(@"c:\temp1");
DirectoriesToDelete.Add(@"c:\temp2");
DirectoriesToDelete.Add(@"c:\temp3");
foreach (string dirName in DirectoriesToDelete) {
DeleteDirectoryFilesAndFolders(dirName);
}
}
A: Here's a recursive function that will delete all files in a given directory and navigate down the directory structure. A pattern string can be supplied to only work with files of a given extension, as per your comment to another answer.
Action<string,string> fileDeleter = null;
fileDeleter = (directoryPath, pattern) =>
{
string[] files;
if (!string.IsNullOrEmpty(pattern))
files = Directory.GetFiles(directoryPath, pattern);
else
files = Directory.GetFiles(directoryPath);
foreach (string file in files)
{
File.Delete(file);
}
string[] directories = Directory.GetDirectories(directoryPath);
foreach (string dir in directories)
fileDeleter(dir, pattern);
};
string path = @"C:\some_folder\";
fileDeleter(path, "*.bmp");
Directories are otherwise left alone, and this can obviously be used with an array or list of strings to work with multiple initial directory paths.
Here is the same code rewritten as a standard function, also with the recursion as a parameter option.
public void DeleteFilesFromDirectory(string directoryPath, string pattern, bool includeSubdirectories)
{
string[] files;
if (!string.IsNullOrEmpty(pattern))
files = Directory.GetFiles(directoryPath, pattern);
else
files = Directory.GetFiles(directoryPath);
foreach (string file in files)
{
File.Delete(file);
}
if (includeSubdirectories)
{
string[] directories = Directory.GetDirectories(directoryPath);
foreach (string dir in directories)
DeleteFilesFromDirectory(dir, pattern, includeSubdirectories);
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2954708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Array acting strange in Lua I am a noob in Lua. I have two arrays
First one:
levels={
-- 1
{
{9,9,9,9,9,9,9,9,9},
{9,9,9,9,9,9,9,9,9},
{9,9,1,0,9,0,3,9,9},
{9,9,9,9,9,9,9,9,9},
{9,9,9,9,9,9,9,9,9}
}
,
-- 2
{
{9,9,9,9,9},
{9,9,9,9,9},
{9,9,1,9,9},
{9,9,0,9,9},
{9,9,0,9,9},
{9,9,0,9,9},
{9,9,0,9,9},
{9,9,3,9,9},
{9,9,9,9,9},
{9,9,9,9,9}
}
,
-- 3
{
{9,9,9,9,9,9,9,9,9,9},
{9,9,9,9,9,9,9,9,9,9},
{9,9,0,9,0,9,9,9,9,9},
{9,9,1,0,0,9,0,3,9,9},
{9,9,9,9,9,9,9,9,9,9},
{9,9,9,9,9,9,9,9,9,9}
}
}
And The second I declare it like this:
playingLevel=levels[1]
The problem is that after I change playingLevel values, the levels array also changes the same way. I want to change only playingLevel.
A: Table values are references in Lua. When you do
playingLevel=levels[1]
you are not copying the table value at levels[1] into playingLevel; you are getting a reference to the actual data at levels[1], so changing an array value through playingLevel is essentially the same as writing levels[1][some_index] = new_value.
If you want a copy of the data, you will need a function that creates the copy for you (either a shallow or a deep copy, depending on your use case),
so your code would look like playingLevel = copyTable(levels[1]) instead, where copyTable is your custom implementation of a function that knows how to create a copy of the target table.
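The same reference-versus-copy distinction exists in Python lists, so a quick Python sketch (illustrative only, since the question is Lua) may help build the intuition:

```python
import copy

levels = [[9, 9, 1], [9, 0, 3]]

alias = levels[0]            # a reference to the same inner list
alias[0] = 5
assert levels[0][0] == 5     # the original changed too

playing = copy.deepcopy(levels[0])   # an independent copy
playing[1] = 7
assert levels[0][1] == 9     # the original is untouched
print("reference mutates the source; deepcopy does not")
```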
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/38721565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How do I export TFS 2010 comments into a changelog file? I want to export TFS 2010 comments from a date range, or continually (whichever works), into a changelog.txt file or similar.
I have looked all over the web trying to find examples or documentation on how to do this, but cannot find anything.
Also, Microsoft's website seems to just redirect me to TFS 2012.
A: You can use the following command to output the "History" between a date range, but this will get you a lot more than the comments.
tf history "$/Project/Main" /format:detailed /noprompt /recursive /v:D"13 Jun 2013 00:00"~D"01 Jun 2013 00:00"
You could use the Brief format, but this is limited in its width and will truncate longer comments.
Once you have your "Log" you will have to parse it yourself. TFS does not have a format like git does.
You could create a console App that reads the history from Console.In.ReadToEnd() and then parses it into just comments and just pipe the results of your tf history into it.
You could also query the TFS API for this information using the VersionControlServer.QueryHistory Method, and just get the comments and output those.
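A hedged sketch of that parsing step in Python: the SAMPLE below is an assumption about what the detailed format looks like (changeset blocks separated by dashed lines, with an indented Comment: section), so check it against your actual tf history output before relying on it:

```python
import re

# Illustrative sample; the real `tf history /format:detailed` layout
# may differ between TFS versions.
SAMPLE = """\
-------------------------------------
Changeset: 1234
User: alice
Date: 13 June 2013

Comment:
  Fixed the login bug.

Items:
  edit $/Project/Main/login.cs
-------------------------------------
Changeset: 1233
User: bob
Date: 12 June 2013

Comment:
  Updated the changelog.

Items:
  add $/Project/Main/CHANGELOG.txt
"""

def extract_comments(history_text):
    """Pull the indented Comment: blocks out of detailed history output."""
    comments = []
    for block in re.split(r"-{5,}", history_text):
        match = re.search(r"Comment:\n((?:[ \t]+.*\n?)+)", block)
        if match:
            lines = [line.strip() for line in match.group(1).splitlines()]
            comments.append(" ".join(lines))
    return comments

print(extract_comments(SAMPLE))
# → ['Fixed the login bug.', 'Updated the changelog.']
```

Piping the real history output into a script like this would give you one comment per changeset, ready to write to changelog.txt.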
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/17084901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Detect edge crossing in vis.js Is there a way to detect whether edges cross each other in a vis.js graph-2d?
I am trying to lay out a directed graph with the Sugiyama algorithm, but it's a little bit tricky. Is there an available JavaScript implementation/tutorial of that algorithm or of edge-crossing detection?
I have seen many papers with pseudo-code and very specific implementations, but they aren't very helpful for me.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/37387855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Python - Removing paragraph breaks in input So I have written a program (however ugly) that counts the number of words and the instances of each unique word in a given input.
My problem is that I want to use it for song lyrics, but most lyric sets come with multiple paragraph breaks.
My question is: how can I take a user input of lyrics with paragraph breaks and reduce the input down to a single string?
This is my code so far:
Song = {}
lines = []
while True:
line = input("")
if line:
lines.append(line)
else:
break
string = '\n'.join(lines)
def string_cleaner(string):
string = string.lower()
newString = ''
validLetters = " abcdefghijklmnopqrstuvwxyz"
newString = ''.join([char for char in string if char in validLetters])
return newString
def song_splitter(string):
string = string_cleaner(string)
words = string.split()
for word in words:
if word in Song:
Song[word] += 1
else:
Song[word] = 1
Expected input:
Well, my heart went "boom"
When I crossed that room
And I held her hand in mine
Whoah, we danced through the night
And we held each other tight
And before too long I fell in love with her
Now I'll never dance with another
(Whooh)
Since I saw her standing there
Oh since I saw her standing there
Oh since I saw her standing there
Desired output:
This song has 328 words.
39 of which are unique.
This song is 11% unique words.
('i', 6)
('her', 4)
('standing', 3)
.... etc
A: The following example code extracts all the words (English alphabet only) from every line and processes them (counting the number of words and tracking the instances of each unique word).
import re
MESSAGE = 'Please input a new line: '
TEST_LINE = '''
Well, my heart went "boom"
When I crossed that room
And I held her hand in mine
Whoah, we danced through the night
And we held each other tight
And before too long I fell in love with her
Now I'll never dance with another
(Whooh)
Since I saw her standing there
Oh since I saw her standing there well well
Oh since I saw her standing there
'''
prog = re.compile(r'\w+')
class UniqueWordCounter():
def __init__(self):
self.data = {}
def add(self, word):
if word:
count = self.data.get(word)
if count:
count += 1
else:
count = 1
self.data[word] = count
# instances of each unique word
set_of_words = UniqueWordCounter()
# counts the number of words
count_of_words = 0
def handle_line(line):
line = line.lower()
words = map(lambda mo: mo.group(0), prog.finditer(line))
for word in words:
global count_of_words
count_of_words += 1
set_of_words.add(word)
def run():
line = input(MESSAGE)
if not line:
line = TEST_LINE
while line:
'''
Loop continues as long as `line` is not empty
'''
handle_line(line)
line = input(MESSAGE)
count_of_unique_words = len(set_of_words.data.keys())
unique_percentage = count_of_unique_words / count_of_words
print('-------------------------')
print('This song has {} words.'.format(count_of_words))
print('{} of which are unique.'.format(count_of_unique_words))
print('This song is {:.2%} unique words.'.format(unique_percentage))
items = sorted(set_of_words.data.items(), key = lambda tup: tup[1], reverse=True)
items = ["('{}', {})".format(k, v) for k, v in items]
print('\n'.join(items[:3]))
print('...')
run()
If you want to handle lyrics in other languages, you should check out this link.
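For reference, the standard library's collections.Counter condenses the counting logic considerably; a minimal sketch (using a short excerpt of the lyrics, not the asker's full input):

```python
import re
from collections import Counter

lyrics = """
Well, my heart went "boom"
When I crossed that room
Oh since I saw her standing there
Oh since I saw her standing there
"""

# Lowercase everything and keep only letter/apostrophe runs as words.
words = re.findall(r"[a-z']+", lyrics.lower())
counts = Counter(words)

total = len(words)
unique = len(counts)
print(f"This song has {total} words.")
print(f"{unique} of which are unique.")
print(f"This song is {unique / total:.0%} unique words.")
for word, count in counts.most_common(3):
    print((word, count))
```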
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/52013541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Split python list objects separated by u' ' I have a python list object containing elements as follows:
[u'Sentence 1 blabla.'],
[u'Sentence 2 blabla.'],
...
I want to extract the contents inside the ' ' quotes in the list and store each as a separate line in a text file. Please help me with the same.
A: u'blablabla' is Unicode. You can convert it into string using str(unicode)
Example:
a = [[u'qweqwe'],[u'asdasd']]
str(a[0][0]) will be string qweqwe.
Now you can write it into file as usual.
Try this example for clarity:
a = [[u'qweqwe'],[u'asdasd']]
print type(a[0][0])
print type(str(a[0][0]))
Output:
<type 'unicode'>
<type 'str'>
A: Firstly, given that you have a list of lists, I am going to assume that you want to convert the given list from Unicode to UTF-8 strings. For this, let's have a function convertList which takes a list of lists as an input.
So your initial value for l would be
[[u'Sentence 1 blabla.'],
[u'Sentence 2 blabla.'],
...]
Notice that this is a list of lists. Now for each list item, loop through the items within that list and convert them all to UTF8.
def convertList(l):
newlist = []
for x in l:
templ = [item.encode('UTF8') for item in x]
newlist.append(templ)
return newlist
l = [[u'Sentence 1 blabla.'], [u'Sentence 2 blabla.']]
l = convertList(l) # This updates the object you have
# Now do the filewrite operations here.
f = open('myfile.txt','w')
for iList in l:
for items in iList:
f.write(items+'\n')
f.close()
This will write to the file myfile.txt as follows
Sentence 1 blabla.
Sentence 2 blabla.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/41974508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: Scientific number format in R Considering the code below.
x <- c(3423, 123412121, 4567121)
format(x, scientific = TRUE)
[1] "3.423000e+03" "1.234121e+08" "4.567121e+06"
The results use a different exponent each time, like e+03, e+08, e+06.
Is there a way to get the results with a fixed exponent? Say, all output in e+03?
Thank You.
A: With options(scipen=999) you get the full number without e+03 and so on; maybe there is a way to get what you want by adjusting options(scipen=...).
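R aside, the underlying idea (scale the mantissa so every value shares one exponent) is simple to sketch; here in Python, since I'm not aware of a built-in option for a fixed exponent:

```python
def fixed_exponent(x, exp=3, digits=6):
    """Format x as <mantissa>e+<exp> with a caller-chosen fixed exponent."""
    mantissa = x / 10 ** exp
    return f"{mantissa:.{digits}f}e+{exp:02d}"

values = [3423, 123412121, 4567121]
print([fixed_exponent(v) for v in values])
# → ['3.423000e+03', '123412.121000e+03', '4567.121000e+03']
```

The same scaling trick can be written as an R function over a numeric vector using sprintf.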
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/60855658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to log a particular address from an STM32 NUCLEO-F334R8 with an inbuilt ST-LINK in real time using SWD & openOCD without halting the processor? I am trying to learn how to debug an MCU non-intrusively using SWD & openOCD.
while (1)
{
my_count++;
HAL_GPIO_TogglePin(LD2_GPIO_Port,LD2_Pin);
HAL_Delay(750);
}
The code running on my MCU has a free-running counter "my_count". I want to sample/trace the data stored at the address holding "my_count" in real time:
I was doing it this way:
while(1){// generic algorithm no specific language
mdw 0x00000000200000ac; //openOCD command to read from an address
}
0x200000ac is the address of the variable my_count from the .map file.
But, this method is very slow and experiences data drops at high frequencies.
Is there any other way to trace the data at high frequencies without experiencing data drops?
A: I did some napkin math, and I have an idea that may work.
As per Reference Manual, page 948, the max baud rate for UART of STM32F334 is 9Mbit/s.
If we want to send memory at the specific address, it will be 32 bits. 1 bit takes 1/9Mbps or 1.111*10^(-7)s, multiply that by 32 bits, that makes it 3.555 microseconds. Obviously, as I said, it's purely napkin math. There are start and stop bits involved. But we have a lot of wiggle room. You can easily fit 64 bits into transmission too.
Now, I've checked with the internet, it seems the ST-Link based on STM32F103 can have max baud rate of 4.5Mbps. A bummer, but we simply need to double our timings. 3.55*2 = 7.1us for 32-bit and 14.2us for 64-bit transmission. Even given there is some start and stop bit overhead, we still seem to fit into our 25us time budget.
So the suggestion is the following:
You have a timer set to 25us period that fires an interrupt, that activates DMA UART transmission. That way your MCU actually has very little overhead since DMA will autonomously handle the transmission, while your MCU can do whatever it wants in the meantime. Entering and exiting the timer ISR will be in fact the greatest part of the overhead caused by this, since in the ISR you will literally flip a pair of bits to tell DMA to send stuff over UART @ 4.5Mbps.
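The napkin math above is easy to double-check with a few lines of Python:

```python
baud_stlink = 4.5e6          # assumed ST-Link max UART baud rate, bits/s
bit_time = 1 / baud_stlink   # seconds per bit (ignoring start/stop bits)

t32 = 32 * bit_time          # time to ship one 32-bit word
t64 = 64 * bit_time          # time to ship a 64-bit payload

print(f"32-bit word: {t32 * 1e6:.2f} us")   # ~7.11 us
print(f"64-bit word: {t64 * 1e6:.2f} us")   # ~14.22 us

budget = 25e-6               # the 25 us sampling period
assert t64 < budget          # even 64 bits fit within the budget
```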
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/72695890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: matrix manipulation - why is a normal loop producing different results? My question is related to this question. I was trying to remodel the list comprehension provided by @shuttle87 in their answer to that question as a regular old-fashioned loop. Here is what my code snippet looks like:
matrix = [[2,0,2],[0,2,0],[2,0,2]]
sqd = []
for i in matrix:
for e in i:
sqd.append(e*e)
print(sqd)
My problem is that my code returns a flat list, i.e. [4, 0, 4, 0, 4, 0, 4, 0, 4], instead of a matrix, i.e. [[4, 0, 4], [0, 4, 0], [4, 0, 4]]. What can I possibly be doing wrong?
Disclaimer: I am aware there are wonderful Python libraries that can do this, e.g. numpy. I like understanding things through intuition, hence this question... so forgive my naivety.
A: You have a single list sqd that you are appending scalar values to, so it will always just be a 1-dimensional list. If you want a list of lists (i.e. 2-dimensional matrix), you need to append lists to sqd, not scalar values:
matrix = [[2,0,2],[0,2,0],[2,0,2]]
sqd = []
for i in matrix:
row = [] # create a new list for each row
for e in i:
row.append(e*e) # append scalar to the row list
sqd.append(row) # append row to matrix list
print(sqd)
A: Because you append numbers to sqd inside the inner for e in i loop. Instead, you need to append those numbers to a temp list, then append that list to sqd.
matrix = [[2,0,2],[0,2,0],[2,0,2]]
sqd = []
for i in matrix:
row = []
for e in i:
row.append(e*e)
sqd.append(row)
print(sqd)
Or, as a list-comprehension:
matrix = [[2,0,2],[0,2,0],[2,0,2]]
sqd = [[e * e for e in row] for row in matrix]
print(sqd)
A: You have two for loops here. Your outer loop goes through the rows of the matrix.
Your inner loop goes through the elements of each row.
Your inner loop runs through an entire row before the outer loop moves on to the next row.
Now that you understand that flow, notice that your list "sqd" has only one operation being performed on it. That append happens on every pass of the inner loop, so each pass grows the flat list by one element.
To create the matrix you wish to see, you will need some more work between your inner and outer loops.
I would recommend making a new list for every iteration of your outer loop. This new list is appended to by the inner loop, and once the inner loop completes, you add this new temp list to "sqd".
{
"language": "en",
"url": "https://stackoverflow.com/questions/67116372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Fetch request returning status cors and readable stream I am making a fetch call to my backend; here is the call.
const response = await fetch(apiUrl + '/recipes', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Token token=${user.token}`
},
body: JSON.stringify({
recipe: {
recipeId: uri
}
})
})
Then this is the route I send the call to on my backend
router.post('/recipes', requireToken, (req, res) => {
req.body.recipe.owner = req.user.id
Recipe.create(req.body.recipe)
.then(recipe => {
console.log('Recipe object saved is', recipe)
res.status(201).json({ recipe: recipe.toObject() })
})
.catch(err => handle(err, res))
})
When I do this, the correct object logs right before it sends back. Here is an example of what gets logged
{ __v: 0,
updatedAt: 2018-10-19T15:47:16.809Z,
createdAt: 2018-10-19T15:47:16.809Z,
recipeId: 'http://www.edamam.com/ontologies/edamam.owl#recipe_7dae4a3b1f6e5670be3c2df5562e4782',
owner: 5bc9fc6a3682194cdb8d6fa5,
_id: 5bc9fc843682194cdb8d6fa7 }
However, when I console.log what I get back on the front end, I get this.
Response {type: "cors", url: "http://localhost:4741/recipes", redirected: false, status: 201, ok: true, …}
body: ReadableStream
bodyUsed: true
headers: Headers {}
ok: true
redirected: false
status: 201
statusText: "Created"
type: "cors"
url: "http://localhost:4741/recipes"
__proto__: Response
The call does record the action in my database, so the information gets saved and the correct object is logged right before it should be sent back; however, the proper information never reaches the front end.
Thank you ahead of time for any response.
A: Since you're using fetch to make the request, the response is encapsulated in the Response object, and to access the parsed body you have to call the async method json(). Just like the following:
const Response = await fetch(apiUrl + '/recipes', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Authorization': `Token token=${user.token}`
},
body: JSON.stringify({
recipe: {
recipeId: uri
}
})
});
const json = await Response.json();
console.log(json);
You can play with another example in your chrome console.
(async () => {
const Response = await fetch('https://jsonplaceholder.typicode.com/todos/1');
const res = await Response.json();
console.log(res);
})();
UPDATE
Another way to do that is:
(async () => {
const response = await (
await fetch('https://jsonplaceholder.typicode.com/todos/1')
).json();
console.log(response);
})();
I hope this helps.
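Note also the bodyUsed: true in the asker's log: that is the telltale sign the body stream was already consumed somewhere, since a Response body can only be read once. A small local sketch of that one-shot behavior (no network needed; the payload shape is illustrative, and this assumes a runtime with the WHATWG Response global, e.g. Node 18+):

```javascript
// Build a Response locally to demonstrate the one-shot body stream.
const res = new Response(JSON.stringify({ recipe: { _id: "abc" } }), {
  status: 201,
  headers: { "Content-Type": "application/json" },
});

res.json().then((data) => {
  console.log(data.recipe._id); // the parsed payload is available here
});

console.log(res.bodyUsed); // true: json() has consumed the body stream
```

Calling res.json() a second time would reject, which is why the parsed result should be stored in a variable rather than re-read from the Response.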
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/52897253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Combining Two Near Identical MySQL Queries New to this community... I have to say I love this resource though, it's fantastic. :)
(Before I go further, I want to point out that I have already achieved an undesirable yet functional solution to this problem using multiple queries and then combining what I get using a PHP array, but I'm weak with MySQL and what I've done simply isn't elegant or efficient, and I'd very much appreciate any assistance you can give me to actually help me improve it. My project uses dozens and dozens of SQL queries and I'd like to learn to write them properly.)
I'm essentially trying to merge two near identical MySQL recordsets, using a UNION statement or something along those lines. I've been looking around for ages trying to find a solution to this, and at this point it remains beyond me. So, here's my first question:
My data structure is relatively simple, but I suspect the way in which I want to use it is perhaps a little less commonplace. I have a series of users regularly transmitting data from their mobile devices to a PHP server which processes them and stores their contents in a MySQL database. This is where it gets a little unorthodox, perhaps -- Users are effectively links in a closed chain. At all times, every user has one other user serving as their link, or, I prefer, 'Target', while they themselves serve as the Target for someone else. This model can be thought of in a linear fashion: A targets B, B targets C, C targets D, D targets A.
The data's stored in two tables:
*
*Users: Contains data about users (e.g. UserID, Name, Age, etc etc... TargetID)
*Packages: Contains data about all user updates (e.g. UserID, Timestamp, etc etc...)
Upon receiving a request from a user, I want to:
*
*Firstly, pull the most recent update from the Packages table, along with biographical information from the Users table pertaining to the Target of our User.
*Secondly, pull the most recent update from the Packages table, along with biographical information from the Users table pertaining to whoever has our User as their Target.
This might be a little confusing, so to clarify:
Using the aforementioned model, if I get a request from B, I want the most recent Package from A and the most recent Package from C returned in a single recordset, along with related biographical information. I'm not actually interested in B!
The best I have managed to achieve at this point involves THREE queries:
*
*Initially, checking the Users table for the correct TargetID for our User.
SELECT TargetID FROM Users WHERE Users.UserID = [VALUE SUBMITTED IN USER REQUEST];
*Querying both tables to produce Result 1.
SELECT Users.UserID, Users.Name, Users.TargetID, Packages.Timestamp, Packages.Latitude, Packages.Longitude, Packages.Message FROM Users, Packages WHERE Packages.UserID = Users.UserID AND Users.UserID = [VALUE OBTAINED FROM QUERY 1] ORDER BY Packages.Timestamp DESC LIMIT 0,1;
*Querying both tables to produce Result 2.
SELECT Users.UserID, Users.Name, Users.TargetID, Packages.Timestamp, Packages.Latitude, Packages.Longitude, Packages.Message FROM Users, Packages WHERE Packages.UserID = Users.UserID AND Users.TargetID = [VALUE SUBMITTED IN USER REQUEST] ORDER BY Packages.Timestamp DESC LIMIT 0,1;
Then I've merged Results 1 & 2 in a PHP array, and fired the contents of this array back to the User submitting the request. In any situation involving many such requests, I have a hard time believing this approach is efficient, let alone ideal. Despite the fact Queries 2 & 3 are virtually identical, and I'm sure a simple OR condition within a WHERE clause should be able to combine them, I cannot successfully do so without returning all Packages for both the users I'm interested in, or alternatively, returning exactly one record, discarding the second (I clearly apply the time constraint incorrectly). For example, MySQL complains I incorrectly use UNION and ORDER BY statements together at times. I'm just getting the syntax all wrong every time I try to modify it.
I am really unhappy with my current solution, because looking at my project at this point it is the single MySQL task my server must execute more often than all others. I do realise this question is rather verbose, but I think it is always better to explain things thoroughly so it can be understood the first time. Thanks in advance for any guidance or direction you can give me. Cheers.
A: If I understand correctly, something like:
select
Users.UserID,
TargetedPackages.UserID as TargetedID,
TargetedPackages.TimeStamp as TargetedTimestamp,
TargeterPackages.UserID as TargeterID,
TargeterPackages.Timestamp as TargeterTimestamp
from
Users
INNER JOIN Users as Targeted
on Targeted.UserID = Users.TargetID
INNER JOIN Users as Targeter
on Targeter.TargetId = Users.UserID
LEFT JOIN Packages as TargetedPackages
on Users.TargetID = TargetedPackages.UserID AND TargetedPackages.Timestamp = (SELECT Max(Timestamp) FROM Packages where Packages.UserID = Users.TargetID)
LEFT JOIN Packages as TargeterPackages
on Targeter.UserID = TargeterPackages.UserID AND TargeterPackages.Timestamp = (SELECT Max(Timestamp) FROM Packages where Packages.UserID = Targeter.UserID)
where Users.UserID = 2
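As an aside, on the UNION/ORDER BY error the asker ran into: MySQL accepts a per-query ORDER BY and LIMIT inside a UNION only when each SELECT is wrapped in parentheses. This is just a syntax sketch built from the question's own two near-identical queries (placeholders kept exactly as in the original), not a replacement for the join-based answer above:

```sql
(SELECT Users.UserID, Users.Name, Users.TargetID,
        Packages.Timestamp, Packages.Latitude, Packages.Longitude, Packages.Message
   FROM Users, Packages
  WHERE Packages.UserID = Users.UserID
    AND Users.UserID = [VALUE OBTAINED FROM QUERY 1]
  ORDER BY Packages.Timestamp DESC LIMIT 1)
UNION ALL
(SELECT Users.UserID, Users.Name, Users.TargetID,
        Packages.Timestamp, Packages.Latitude, Packages.Longitude, Packages.Message
   FROM Users, Packages
  WHERE Packages.UserID = Users.UserID
    AND Users.TargetID = [VALUE SUBMITTED IN USER REQUEST]
  ORDER BY Packages.Timestamp DESC LIMIT 1);
```

UNION ALL is used here instead of plain UNION to skip the duplicate-elimination pass, since each parenthesized query already returns at most one row.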
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/11302020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Managing tables in MS Word by Java application I am using XDocReport and Velocity to fill simple tables in docx files. Now, I would like to create table with merged fields.
Is it possible to do this in XDocReport? If not, how can I do it?
A: If I understand your need, you wish to set a mergefield and replace this mergefield with a table?
If that is the case, you can use HTML text styling. You design your docx template like this:
${htmlTable}
You mark that the htmlTable field uses HTML syntax:
FieldsMetadata metadata = report.createFieldsMetadata();
metadata.addFieldAsTextStyling("htmlTable", SyntaxKind.Html);
You put in the context, the HTML table :
context.put("htmlTable", "<table><tr><td>A</td><td>B</td></tr></table>");
But today it is still very basic: you cannot manage border, width, height, etc. for HTML tables. See issue 302.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/25527144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: Why in Presto does it show the cardinality function as a "Python function"? This cardinality function doesn't exist in Python though...
https://prestodb.io/docs/current/search.html?q=cardinality+function
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70468683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to get around calling Wait inside of a task I have code something like this:
var myTask = requiredTask.ContinueWith(_=>
{
var otherTasks = from item in otherObjects select item.DoSomethingAsync();
Task.WaitAll(otherTasks.ToArray()); // WaitAll takes a Task[], not an IEnumerable<Task>
// do my real work
});
My understanding is that the call to WaitAll is going to block and hold up a thread in the thread pool while the (IO bound) subtasks are completing. My question is:
*
*Is my assumption about tying up a thread pool thread correct?
*If so what is the best method to avoid doing this?
Note that this is for a library that needs to support .NET4/Windows XP so using await is not an option.
A: If you include the Microsoft.Bcl.Async assembly and build with a compatible IDE (VS2012+), you can use async/await with .NET 4. Then you can await Task.WhenAll, e.g.
var myTask = await requiredTask;
var otherTasks = from item in otherObjects select item.DoSomethingAsync();
await Task.WhenAll(otherTasks);
// do my real work
Since Task.WhenAll was added in .NET 4.5, not in the Microsoft.Bcl.Async assembly, this is how you'd do it in .NET 4:
var myTask = await requiredTask;
var otherTasks = (from item in otherObjects select item.DoSomethingAsync()).ToList();
foreach (var otherTask in otherTasks)
await otherTask;
I also threw in a ToList() so that if you use otherTasks later (e.g. to get results), the expression will not be reevaluated.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19529300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Add new column to WooCommerce admin products list with discount percentage on sale products I am trying to display the percentage discount of simple products that are on sale in an additional column in the backend.
I have used the code below
add_filter( 'manage_edit-product_columns', 'discount_column', 20 );
function discount_column( $col_th ) {
return wp_parse_args( array( 'discount' => 'Discount' ), $col_th );
}
add_action( 'manage_posts_custom_column', 'discount_col' );
function discount_col( $column_id ) {
if( $column_id == 'discount' )
$saleprice = get_post_meta( get_the_ID(), '_sale_price', true );
$regularprice = get_post_meta( get_the_ID(), '_regular_price', true );
if ($saleprice > 0) {
$discountperc = ($regularprice -$saleprice) /$regularprice * 100;
echo (round($discountperc,2)). '%';
}
}
But I am getting multiple (same) errors:
Undefined variable: saleprice
Can someone walk me through how to do that?
A: UPDATE 06/21: now also works for variable products.
It is not necessary to get the postmeta data via get_post_meta because you can access the product object via the $postid.
Once you have the product object, you have access to all kinds of product information.
*
*WooCommerce: Get Product Info (ID, SKU, $) From $product Object
So you get:
// Column header
function filter_manage_edit_product_columns( $columns ) {
// Add column
$columns['discount'] = __( 'Discount', 'woocommerce' );
return $columns;
}
add_filter( 'manage_edit-product_columns', 'filter_manage_edit_product_columns', 10, 1 );
// Column content
function action_manage_product_posts_custom_column( $column, $postid ) {
// Compare
if ( $column == 'discount' ) {
// Get product object
$product = wc_get_product( $postid );
// Is a WC product
if ( is_a( $product, 'WC_Product' ) ) {
// Product is on sale
if ( $product->is_on_sale() ) {
// Output
echo '<ul>';
// Simple products
if ( $product->is_type( 'simple' ) ) {
// Get regular price
$regular_price = $product->get_regular_price();
// Get sale price
$sale_price = $product->get_sale_price();
// Calculate discount percentage
$discount_percentage = ( ( $sale_price - $regular_price ) / $regular_price ) * 100;
// Output
echo '<li>' . abs( number_format( $discount_percentage, 2, '.', '') ) . '%' . '</li>';
// Variable products
} elseif ( $product->is_type( 'variable' ) ) {
foreach( $product->get_visible_children() as $variation_id ) {
// Get product
$variation = wc_get_product( $variation_id );
// Get regular price
$regular_price = $variation->get_regular_price();
// Get sale price
$sale_price = $variation->get_sale_price();
// NOT empty
if ( ! empty ( $sale_price ) ) {
// Get name
$name = $variation->get_name();
// Calculate discount percentage
$discount_percentage = ( ( $sale_price - $regular_price ) / $regular_price ) * 100;
// Output
echo '<li>' . $name . '</li>';
echo '<li>' . abs( number_format( $discount_percentage, 2, '.', '') ) . '%' . '</li>';
}
}
}
// Output
echo '</ul>';
}
}
}
}
add_action( 'manage_product_posts_custom_column', 'action_manage_product_posts_custom_column', 10, 2 );
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64532772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|