Q: jsPDF: addHTML method not working I created an HTML table and tried to generate a PDF from it using jsPDF.
I have used following code:
var pdf = new jsPDF('p','pt','a4');
pdf.addHTML(document.getElementById('pdfTable'),function() {
console.log('pdf generated');
});
pdf.save('pdfTable.pdf');
I am getting a blank PDF file. Am I doing anything wrong?
Here is a plunker for this.
Working demo of jsPDF with addHTML function.
A: I made a working plunker for this.
Update:
I have updated the plunker and made it work with the addHTML() function:
var pdf = new jsPDF('p','pt','a4');
//var source = document.getElementById('table-container').innerHTML;
console.log(document.getElementById('table-container'));
var margins = {
top: 25,
bottom: 60,
left: 20,
width: 522
};
// all coords and widths are in the jsPDF instance's declared units
// ('pt' in this case)
pdf.text(20, 20, 'Hello world.');
pdf.addHTML(document.body, margins.left, margins.top, {}, function() {
pdf.save('test.pdf');
});
You can see plunker for more details.
A: Change this in app.js:
pdf.addHTML(document.getElementById('pdfTable'),function() {
});
pdf.save('pdfTable.pdf');
for this one:
pdf.addHTML(document.getElementById('pdfTable'),function() {
pdf.save('pdfTable.pdf');
});
A: I took out everything but these:
<script src="//mrrio.github.io/jsPDF/dist/jspdf.debug.js"></script>
<script src="//html2canvas.hertzen.com/build/html2canvas.js"></script>
then I used this:
$(document).ready(function() {
$("#pdfDiv").click(function() {
var pdf = new jsPDF('p','pt','letter');
var specialElementHandlers = {
'#rentalListCan': function (element, renderer) {
return true;
}
};
pdf.addHTML($('#rentalListCan').first(), function() {
pdf.save("rentals.pdf");
});
});
});
I can print the table and it looks great in Chrome, Safari, or Firefox; however, I only get the first page if my table is very large.
A: In app.js change:
pdf.addHTML(document.getElementById('pdfTable'),function() {
});
pdf.save('pdfTable.pdf');
to
pdf.addHTML(document.getElementById('pdfTable'),function() {
pdf.save('pdfTable.pdf');
});
If you see a black background, you can add "background-color: white;" to style.css.
A: Instead of using getElementById(), you should use querySelector():
var pdf = new jsPDF('p','pt','a4');
pdf.addHTML(document.querySelector('#pdfTable'), function () {
console.log('pdf generated');
});
pdf.save('pdfTable.pdf');
A: I have tested this in your plunker.
Just update your app.js:
pdf.addHTML(document.getElementById('pdfTable'), function() {
pdf.save('pdfTable.pdf');
});
Update your CSS file:
#pdfTable{
background-color:#fff;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/27047365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: How to return all the rows in tracker_sessions table using antonioribeiro/tracker I am using antonioribeiro/tracker for my website analytics.
Everything is working as intended, but I want to count all the rows in the tracker_sessions table. When I do
$sessions = Tracker::sessions();
return count($sessions);
It returns only the number of rows created today.
How would I return all the rows in tracker_sessions table?
A: Looking at the documentation, you can do this to get more sessions:
$sessions = Tracker::sessions(60 * 24 * 365 ); // get sessions (visits) from the past 365 days
...or any number of minutes you want.
You can also use Query Builder to count the table directly:
use Illuminate\Support\Facades\DB;
$sessions = DB::table('tracker_sessions')->count();
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/52733523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: In Visual Studio (2010), how do I get the little method information help box to pop up again? I know that Ctrl+Space brings up IntelliSense for code completion, but how do I get this little box to pop up, which is the one that provides method and parameter info:
At the moment, if I lose it, I backspace the comma, then it pops up when I re-enter the comma. I'm sure there's a keyboard shortcut I'm missing though.
A: You can bring it back up with
Ctrl + Shift + Space
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7213854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: What is Java Regex to check if a String is a chain of numbers delimited by "_" (java)? I want to check if a String is a chain of numbers delimited by "_":
Examples:
200_121_545 : return true;
455 : return true; // it doesn't need _ to be valid, a number is ok
ff_78 : return false;
2212 _55 : return false; // no space in the string
121212@44 : return false;
The number of numbers is not known in advance, e.g. it could be 21212_545 or 45_545_78
A: If you want each number to have at least one non-zero digit, you may try something like this (updated with the help of Tim's comments):
^0*[1-9]\\d*(_0*[1-9]\\d*)*$
The [1-9] makes sure there is (at least) one non-zero digit in each group, whereas the 0* before it allows any number of leading zeros. After that one non-zero digit, any digits, including (but not only) zero, are allowed: \\d*.
Update 2: Added anchors as suggested by Daniël to make sure the whole string is matched.
A: As it stands you will need to define your regexp of what you consider a number and repeat it. As far as I can tell you would like to have something like
s.matches("0[1-9][0-9]*|[1-9][0-9]*")
to match a single number string. As regexps cannot be factored you will need to repeat yourself for the _, as in
s.matches("(0[1-9][0-9]*|[1-9][0-9]*)(_(0[1-9][0-9]*|[1-9][0-9]*))*")
that should do the trick as long as you do not want 0 as an answer and you want to avoid double leading zeros. In all it is a horrible expression though.
A: Try this:
s.matches("\\d*[1-9](_\\d+)*")
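The answers above differ on whether leading zeros should be rejected; the question's examples never rule them out. If leading zeros are acceptable, a much simpler pattern works, since String.matches anchors the whole string. The sketch below is mine, not from any answer, and the class and method names are made up for illustration:

```java
// Checks whether a string is one or more runs of digits separated by '_'.
// matches() implicitly anchors the pattern to the whole string, so no ^/$ needed.
public class UnderscoreNumbers {
    static boolean isNumberChain(String s) {
        return s.matches("\\d+(_\\d+)*");
    }

    public static void main(String[] args) {
        System.out.println(isNumberChain("200_121_545")); // true
        System.out.println(isNumberChain("455"));         // true
        System.out.println(isNumberChain("ff_78"));       // false
        System.out.println(isNumberChain("2212 _55"));    // false (space not allowed)
        System.out.println(isNumberChain("121212@44"));   // false
    }
}
```

If leading zeros must be forbidden, replace `\\d+` with `0|[1-9]\\d*` (wrapped in a group) in both positions.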
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/23239224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|
Q: The function table.setColumnLayout(columnSettings); is not working for me I've written a function to create tables with different column setups and different data based on the parameters I send it. That all works fine.
My problem is that I can't get the setColumnLayout function to work when I want to reconfigure the existing table with new data and column settings. I know that the column settings are correct because they work for table creation.
The code table.clearData(); in the top section works normally.
I'm also unable to get the destroy function to work. As far as I could tell that is for Tabulator 3.0.
I am using Mac Chrome 76 and Tabulator 4.3.
Any help letting me know what I'm doing wrong would be appreciated.
Thanks, Mike
function createTable (tabledata,SelectedColumnSettings){
$("#clear").click(function(){
table.clearData();
var columnSettings = [
{title:"id", field:"id", visible:false},
{title:"Company", field:"Company Name", sorter:"string"},
{title:"Name", field:"Name", sorter:"string"},
{title:"Word Count Rate", field:"Word Count Rate", sorter:"number", align:"center"},
{title:"Hourly Rate", field:"Hourly Rate", sorter:"number", align:"center"},
{title:"Resourced", field:"Resourced", sorter:"number", align:"center"},
{title:"Language Source", field:"Language Source", sorter:"string"},
{title:"Profile Picture", field:"Profile Picture", align:"center"},
{title:"Completed Projects", field:"Completed Projects", sorter:"number", align:"center"}];
// var columnSettings=getColumnSettings("Contacts");
table.setColumnLayout(columnSettings);
});
var columnSettings=getColumnSettings(SelectedColumnSettings); // get the settings for the selected list
var table = new Tabulator("#example-table", {
height:600, // set height of table to enable virtual DOM
resizableColumns:false, // this option takes a boolean value (default = true)
data:tabledata, //load initial data into table
layout:"fitDataFill", //fit columns to width of table (optional)
columns: columnSettings, //Define Table Columns. Sets columns for the different lists projects, contacts....
});
}; // end create table
I'm not getting any errors.
A: setColumnLayout needs to be done in conjunction with getColumnLayout. Not sure where SelectedColumnSettings is coming from. Also, getColumnSettings is a property of a table, and I don't see it being called that way:
var columnSettings=getColumnSettings(SelectedColumnSettings);
A: Use setColumns:
var columnSettings = [
{title:"id", field:"id", visible:false},
{title:"Company", field:"Company Name", sorter:"string"},
{title:"Name", field:"Name", sorter:"string"},
{title:"Word Count Rate", field:"Word Count Rate", sorter:"number", align:"center"},
{title:"Hourly Rate", field:"Hourly Rate", sorter:"number", align:"center"},
{title:"Resourced", field:"Resourced", sorter:"number", align:"center"},
{title:"Language Source", field:"Language Source", sorter:"string"},
{title:"Profile Picture", field:"Profile Picture", align:"center"},
{title:"Completed Projects", field:"Completed Projects", sorter:"number", align:"center"}];
// var columnSettings=getColumnSettings("Contacts");
table.setColumns(columnSettings);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/57455602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Strange behavior with var in Chrome I'm getting some very strange behavior in Chrome v57.0.2987.133 using JavaScript:
Here is my code:
var firstName = "Bob";
var lastName = "Smith";
var age = 29;
function happyBirthdayLocal(first, last, a) {
a = a + 1;
message = "Happy Birthday to " + first + " " +
last + " who just turned " + a;
console.log(message);
}
happyBirthdayLocal(firstName, lastName, age);
As expected, because I did not use var in front of message in the function, message acts like a global variable, and I can access window.message in the developer console.
However, if I insert a line accessing message above the call to the function, like so:
var firstName = "Bob";
var lastName = "Smith";
var age = 29;
function happyBirthdayLocal(first, last, a) {
a = a + 1;
message = "Happy Birthday to " + first + " " +
last + " who just turned " + a;
console.log(message);
}
console.log(message);
happyBirthdayLocal(firstName, lastName, age);
(no other changes!), I now get an uncaught ReferenceError, and window.message is no longer defined.
My sense is that "failing to declare variables will very likely lead to unexpected results" is at play here, and it seems Chrome is in an in-between place regarding defaulting to global vs. local scope for undeclared variables, depending on other uses in the program. Can anyone confirm this?
I get similar results in Firefox. I've tried searching for specific information regarding what the default behavior is for undeclared variables in both Firefox and Chrome and didn't find anything more enlightening other than to potentially expect strange behavior.
(I'm a bit surprised this hasn't broken a bunch of people's code actually. Or maybe it has. My (mis)use of the variable here--not declaring it--is in the context of teaching about global vs. local scope; I wouldn't on purpose forget to declare a variable (and would lint my code anyway) so this is the first time I've noticed this behavior.)
Thank you in advance for any insight.
A: Step through it in order.
First, due to hoisting, the variables firstName, lastName, age are declared, and the function happyBirthdayLocal is also declared.
Then, firstName, lastName, age are all assigned their values.
Next you call console.log(message);. Uh-oh, message hasn't been defined yet. That doesn't happen until happyBirthdayLocal is actually called for the first time, which would be on the following line.
Due to the error, which is fatal, the function is never called, and message remains undefined.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43262209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Elasticsearch port 9300 django I'm facing a strange issue, using Django Haystack and ElasticSearch so I can't rebuild_index.
ElasticSearch is properly running on the machine :
$ curl -X GET 'http://localhost:9200'
{
"status" : 200,
"name" : "Ziggy Pig",
"cluster_name" : "elasticsearch",
"version" : {
"number" : "1.7.2",
"build_hash" : "e43676b1385b8125d647f593f7202acbd816e8ec",
"build_timestamp" : "2015-09-14T09:49:53Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
But this is the log of ElasticSearch :
[2017-11-29 18:25:22,723][INFO ][node] [Ziggy Pig] initialized
[2017-11-29 18:25:22,724][INFO ][node] [Ziggy Pig] starting ...
[2017-11-29 18:25:22,913][INFO ][transport] [Ziggy Pig] bound_address
{inet[/127.0.0.1:9300]}, publish_address {inet[/10.142.0.2:9300]}
[2017-11-29 18:25:22,937][INFO ][discovery] [Ziggy Pig] .
elasticsearch/HWEvbIkAR3mFwcGeHIa7Cg
[2017-11-29 18:25:26,710][INFO ][cluster.service] [Ziggy Pig]
new_master [Ziggy Pig][HWEvbIkAR3mFwcGeHIa7Cg][stagelighted]
[inet[/10.142.0.2:9300]], reason: zen-disco-join(elected_as_master)
[2017-11-29 18:25:26,734][INFO ][http] [Ziggy Pig] bound_address
{inet[/127.0.0.1:9200]}, publish_address {inet[/10.142.0.2:9200]}
[2017-11-29 18:25:26,734][INFO ][node] [Ziggy Pig] started
[2017-11-29 18:25:26,762][INFO ][gateway] [Ziggy Pig] recovered [1]
indices into cluster_state
[2017-11-29 18:26:22,946][WARN ][cluster.service] [Ziggy Pig] failed to
reconnect to node [Ziggy Pig][HWEvbIkAR3mFwcGeHIa7Cg]
[stagelighted][inet[/10.142.0.2:9300]]
org.elasticsearch.transport.ConnectTransportException: [Ziggy Pig][inet[/10.142.0.2:9300]] connect_timeout[30s]
at org.elasticsearch.transport.netty.NettyTransport.connectToChannels(NettyTransport.java:825)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:758)
at org.elasticsearch.transport.netty.NettyTransport.connectToNode(NettyTransport.java:731)
at org.elasticsearch.transport.TransportService.connectToNode(TransportService.java:216)
at org.elasticsearch.cluster.service.InternalClusterService$ReconnectToNodes.run(InternalClusterService.java:584)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused: /10.142.0.2:9300
at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.connect(NioClientBoss.java:152)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.processSelectedKeys(NioClientBoss.java:105)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.process(NioClientBoss.java:79)
at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.elasticsearch.common.netty.channel.socket.nio.NioClientBoss.run(NioClientBoss.java:42)
at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
I installed Java and Elasticsearch with this tutorial.
My app is running in Google Compute Engine.
If someone can help me, I'd appreciate it.
My settings:
ELASTICSEARCH_INDEX_SETTINGS = {
'settings': {
"analysis": {
"analyzer": {
"synonym_analyzer" : {
"type": "custom",
"tokenizer" : "standard",
"filter" : ["synonym"]
},
"ngram_analyzer": {
"type": "custom",
"tokenizer": "lowercase",
"filter": ["haystack_ngram", "synonym"]
},
"edgengram_analyzer": {
"type": "custom",
"tokenizer": "lowercase",
"filter": ["haystack_edgengram"]
}
},
"tokenizer": {
"haystack_ngram_tokenizer": {
"type": "nGram",
"min_gram": 3,
"max_gram": 15,
},
"haystack_edgengram_tokenizer": {
"type": "edgeNGram",
"min_gram": 2,
"max_gram": 15,
"side": "front"
}
},
"filter": {
"haystack_ngram": {
"type": "nGram",
"min_gram": 3,
"max_gram": 15
},
"haystack_edgengram": {
"type": "edgeNGram",
"min_gram": 2,
"max_gram": 15
},
"synonym" : {
"type" : "synonym",
"ignore_case": "true",
"synonyms_path" : "synonyms.txt"
}
}
}
}
}
HAYSTACK_CONNECTIONS = {
'default': {
'ENGINE': 'elasticstack.backends.ConfigurableElasticSearchEngine',
'URL': 'http://127.0.0.1:9200/',
'INDEX_NAME': 'haystack',
},
}
HAYSTACK_SEARCH_RESULTS_PER_PAGE = 100
A: I think you have a typo in your settings.py file: You're trying to connect to port 9300 while elasticsearch is running on port 9200:
Caused by: java.net.ConnectException: Connection refused: /10.142.0.2:9300
Can you post the relevant parts of your settings.py file if that doesn't solve the issue?
EDIT
Looking through a few relevant posts, it seems as though nodes communicate with each other via port 9300 as well as port 9200. Does your ping work on port 9300 as well? If not, that may also need to be opened up.
Possibly related: https://discuss.elastic.co/t/elasticsearch-port-9200-or-9300/72080
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/47560039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to improve performance with MonetDB on OSX? I am using monetdb on a 16GB Macbook Pro with OSX 10.10.4 Yosemite.
I execute queries with SQLWorkbenchJ (configured with a minimum of 2048M RAM).
I find the performance overall erratic:
*performance is acceptable / good with small size tables (<100K rows)
*abysmal with tables with many rows: a query with a join of two tables (8670 rows and 242K rows) and a simple sum took 1H 20m!!
My 16GB of memory notwithstanding, in one run I never saw MSERVER5 using more than 35MB of RAM, 450MB in another. On the other hand the time is consumed swapping data onto disk (according to Activity Monitor over 160GB of data!).
There are a number of performance-related issues that I would like to understand better:
*I have the impression that MonetDB struggles with understanding how much RAM to use / is available in OSX. How can I "force" MonetDB to use more RAM?
*I use MonetDB through R. The MonetDB.R driver converts all the character fields into CLOB. I wonder if CLOBs create memory allocation issues?
*I find difficult to explain the many GBs of writes (as mentioned >150GB!!) even for index creation or temporary results. On the other hand when I create the DB and load the tables overall the DB is <50MB. Should I create an artificial integer key and set it as index?
*I join 2 tables on a timestamp field (e.g. "2015/01/01 01:00") that again is seen as a text CLOB by MonetDb / MonetDb.R. Should I just convert it to integer before saving it to MonetDb?
*I have configured each table with a primary key, using a field of type integer. MonetDB (as a typical columnar database) doesn't need the user to specify an index. Is there any other way to improve performance?
Any recommendation is welcome.
For clarity the two tables I join have the following layout:
Calendar # classic calendar table with one entry per hour in a year = 8760 rows
Fields: datetime, date, month, weekbyhour, monthbyday, yearbyweek, yearbymonth # all fields are CLOBs as mentioned
Activity # around 200K rows
Fields: company, department, subdepartment, function, subfunction, activityname, activityunits, datetime, duration # all CLOBs except activityunits; datetime refers to when the activity has occurred
I have tried various types of join syntax, but an example would be (`*` used for brevity):
select * from Activity as a, Calendar as b where a.datetime=b.datetime
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31552094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Accidentally ran php artisan clear - compiled services and packages files removed Just as I was getting ready to run:
php artisan cache:clear
php artisan view:clear
php artisan route:clear
...for debugging purposes, I ended up running php artisan clear instead. I got back the following output:
Compiled services and packages files removed!
What have I just done and how do I undo it? My application still seems to work fine, but if things are broken I'd rather roll things back now rather than get further down the line before I eventually realise that they are. If things aren't broken, I'd still like to know what I just did.
A: A preliminary answer before someone who can do more than read through the help commands comes along:
Although:
php artisan clear
...isn't listed in artisan's documentation at all (or at least not in php artisan list); based on its output, it seems to be an alias of:
php artisan clear-compiled
This seems to be confirmed by php artisan help clear-compiled:
Description:
Remove the compiled class file
Usage:
clear-compiled
Although that help command doesn't really clear up much, like what exactly compiled class files are or what relation they have to any of the other clear commands, I'm guessing the command does something similar to:
php artisan view:clear
...which does the following according to its own help command:
Description:
Clear all compiled view files
Usage:
view:clear
Based on the above, I can only guess that running php artisan clear is about as harmless as running php artisan view:clear, in that both are designed to clear away compiled files that, when cleared, are simply recompiled again on the next build.
How exactly php artisan clear differs from php artisan view:clear and why it needs its own separate command and a much more memorable alias than any of the other clear commands is still... unclear.
BONUS
To anyone else that arrived here because they find themselves constantly running some variation of the above commands for debugging and troubleshooting, I just learnt about the php artisan optimize:clear command, which runs all of these commands at once:
php artisan view:clear
php artisan cache:clear
php artisan route:clear
php artisan config:clear
php artisan clear
and returns the following output:
Compiled views cleared!
Application cache cleared!
Route cache cleared!
Configuration cache cleared!
Compiled services and packages files removed!
Caches cleared successfully!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70883721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Get property value and convert class type to boolean I have the following piece of code:
ClassName class = new ClassName();
var getValue = GetPrivateProperty<BaseClass>(class, "BoolProperty");
public static T GetPrivateProperty<T>(object obj, string name)
{
BindingFlags flags = BindingFlags.Instance | BindingFlags.NonPublic;
PropertyInfo field = typeof(T).GetProperty(name, flags);
return (T)field.GetValue(obj, null);
}
Now I get an InvalidCastException in the return statement, saying it cannot convert an object of type System.Boolean to the type ClassName.
BaseClass has the property. ClassName inherits from BaseClass. I have to access all properties from the ClassName class. Since this property is private, I have to access it directly over BaseClass. This works, but it crashes because the property's return type is boolean.
Thanks!
A: You got the property of type T and the return value should also be of type T? I don't believe that.
Maybe this will help:
var getValue = GetPrivateProperty<bool>(class, "BoolProperty");
public static T GetPrivateProperty<T>(object obj, string name)
{
BindingFlags flags = BindingFlags.Instance | BindingFlags.NonPublic;
PropertyInfo field = null;
var objType = obj.GetType();
while (objType != null && field == null)
{
field = objType.GetProperty(name, flags);
objType = objType.BaseType;
}
return (T)field.GetValue(obj, null);
}
Please see the changes of <BaseClass> to <bool> and typeof(T).GetProperty to obj.GetType().GetProperty.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/33648308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Does GRPC have a Channel that can be used for testing? I'd like to create a simple test but with RPC within the same JVM.
What I was thinking was something like
@Test
void foo() {
Channel channel = new SomeMagicalObject(new MySystemImpl());
SystemStub stub = SystemGrpc.newStub(channel);
var ret = stub.doSomething(...)
assertThat(ret).isTrue();
}
I was wondering if there's something already built that implements Channel that takes in a GRPC server implementation. That way I avoid having to run Netty and manage ports etc. Like this...
private static Server server;
private static Channel channel;
@BeforeAll
@SneakyThrows
static void setupServer() {
server = ServerBuilder.forPort(0)
.addService(new MySystemImpl())
.build()
.start();
channel = ManagedChannelBuilder.forAddress("localhost", server.getPort())
.usePlaintext()
.build();
}
@AfterAll
@SneakyThrows
static void teardownServer() {
server
.shutdown()
.awaitTermination();
}
A: The InProcess transport is what you want. It is a full-fledged transport, but uses method calls and some tricks to avoid message serialization. It is ideally suited to testing, but is also production-worthy. When used with directExecutor(), tests can be deterministic. InProcess transport is used in the examples. See HelloWorldServerTest for one usage.
private static Server server;
private static Channel channel;
@BeforeAll
@SneakyThrows
static void setupServer() {
// This code, except for the server.start(),
// could be moved to the declarations above.
server = InProcessServerBuilder.forName("test-name")
.directExecutor()
.addService(new MySystemImpl())
.build()
.start();
channel = InProcessChannelBuilder.forName("test-name")
.directExecutor()
.build();
}
@AfterAll
@SneakyThrows
static void teardownServer() {
server.shutdownNow();
channel.shutdownNow();
// Useful if you are worried about test
// code continuing to run after the test is
// considered complete. For a static usage,
// probably not necessary.
server.awaitTermination(1, SECONDS);
channel.awaitTermination(1, SECONDS);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/71059894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Unroll / splat arguments in common lisp Say I have a list of arguments:
> (setf format-args `(t "it's ~a" 1))
(T "it's ~a" 1)
How can I then "splat" or "unroll" this into a series of arguments rather than a single list argument, for supplying to the format function?
i.e I would like the following function call to take place:
> (format t "it's ~a" 1)
For reference, I would write the following in python or ruby:
format(*format-args)
I'm sure it can be done, but perhaps I'm thinking about it wrong. It also doesn't help that the name for this operation doesn't seem to be terribly well agreed upon...
A: Oops! I should have remembered how javascript does it.
Turns out you use the apply function, as in:
(apply #'format format-args)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/2345999",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10"
}
|
Q: Drupal trial account development I am creating a website where you can create an account with your name and email. When this is done, you get a 30 day trial. From this point, you can 'upgrade' your account by supplying more information.
When you do not update your information after 30 days, your account is suspended.
Can anyone give me some tips how to do this ?
So:
- Create profile with email and name (easy), indicator is stored in db that you are trial user.
- When you log in, you can extend your profile with extra information. indicator that you are full user.
A: You can always write your own module to do it, but my recommendation is using the Rules module, and using several user roles.
*Any new user gets a "trial" role when he registers.
*Create the needed fields in the user profile.
*Create a rule which will change the user's role in case the field is filled (the rule is triggered whenever the user profile is updated).
*Create a rule with cron that executes once a day, to suspend the user account, and probably to send him a notification before doing so.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/8064561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Swift: Modifying an Array using a variable to specify the array name I am working on an app that parses XML responses from a web service. This web service will return several instances of an object of many different types using object stacking. I just want to take these objects and populate them into an array. Right now I have a switch that is repeating several different lines of similar code based on "objectStack.last" type. This is what it looks like right now in the parserDidEndElement function of the XML parser:
switch(objectStack.last){
case _ as apple:
apples.removeAll()
for apple in objectStack as! [apple]{
apples.append(apple.name.value)
objectType = "apple"
}
case _ as orange:
oranges.removeAll()
for orange in objectStack as! [orange]{
oranges.append(orange.name.value)
}
objectType = "orange"
case _ as bananas:
bananas.removeAll()
for banana in objectStack as! [banana]{
bananas.append(banana.name.value)
}
objectType = "banana"
default:
break
The problem is that the webservice will be returning a large number of different types, and I would rather not have to build a switch case for each. For the purposes of this discussion, let's say I have created an empty array for each type just by adding an "s" to the type name (i.e. apples, oranges and bananas). Is there a way for me to manipulate the arrays using a variable? Something like this:
objectStack.last + "s".removeAll()
for objectStack.last in objectStack.last + "s" as! [objectStack.last]{
objectStack.last + "s".append(objectStack.last.name.value)
objectType = objectStack.last
}
Obviously this second code block is not valid code, I'm just hoping it helps to illustrate what I am trying to accomplish.
Thanks in advance for your help!
A: I ended up using inheritance to fix my issue. Although I have objects of many types, I created a "baseObject" class that contains the fields I needed for this to be successful. I was then able to use the following:
for obj in objectStack{
let obj2 = obj as! baseObject
fieldArrays[objectType]!.append(obj2.name.value)
}
I also used the solution here to use a dictionary to populate the correct array using the "didSet" event on the dictionary.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/42172739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Haskell Unit Testing integrated in Leksah I'm writing some Haskell code in the Leksah IDE. As I edit the code, Leksah does background compilation and runs unit tests after the background compilation completes.
I see in the "console" frame the following:
Building UNFI-EIC-0.0.1...
Preprocessing test suite 'test-UNFI-EIC' for UNFI-EIC-0.0.1...
Preprocessing executable 'UNFI-EIC' for UNFI-EIC-0.0.1...
-----------------------------------------
Running 1 test suites...
Test suite test-UNFI-EIC: RUNNING...
test-UNFI-EIC: Prelude.head: empty list
Test suite test-UNFI-EIC: FAIL
Test suite logged to: dist/test/UNFI-EIC-0.0.1-test-UNFI-EIC.log
0 of 1 test suites (0 of 1 test cases) passed.
Where are the default test cases that failed? How do I add relevant unit tests to them? There is nothing obvious in the GUI menus...
How can I edit the test suite for the package that is integrated in Leksah?
A: You can edit the unit test suite by finding the test-suite stanza in the .cabal file of the project.
To do this, go to your project directory, open *.cabal in a text editor, and search for a line starting with test-suite. It will be of the form test-suite ExampleTests, where ExampleTests is the name of the test suite; its main-is field names the main module of the suite.
Simply add tests to that module using the testing framework of your choice. Leksah will run these tests automatically through the IDE GUI.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/21637776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Image preprocessing before feeding into Deep learning algorithm I am developing an image segmentation algorithm. I used Deeplabv3+ to train a model, which performs fine. The question is whether I can improve its performance: are there any preprocessing methods worth applying before feeding in the image?
I want to detect the charging connector, as depicted in the image:
I can detect it without any problem. But just looking for some improvements if exist.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73545812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Pandas: Find date of last edit for each element in Dataframe I hope to express myself correctly in the following, as it seems to be a complicated one.
My department regularly creates daily snapshots at random times for our current project portfolio website. Below, I have filtered the entire table for all snapshots with project_id = 1 (filtering here to make it more understandable, as there are many projects). Moreover, I have reduced the number of columns for this example.
df_table
project_id project_name region style effect representative lazy timestamp
1 PullPressure EU A-B-C Pull Martin DCA 10/01/20
1 PullPressure EU A-B-C Pull Martin DCA 09/05/20
1 PushPressure EU A-B-C Push Martin 08/20/20
1 PressurePush EU A-B-C Push Martin 04/06/20
1 PressurePush US A-B-C Push Johnsson 12/31/19
1 PressurePush US A-B-C Push Johnsson 10/15/19
My goal is to find out when the last change for any columns of project_id (or in general any key_column) has occurred, i.e. when was each cell for a given id last edited?
My goal is to achieve something like this:
df_table_new:
project_id project_name region style effect representative lazy timestamp
1 08/20/20 04/06/20 10/15/19 09/05/20 04/06/20 09/05/20 10/01/20
1 08/20/20 04/06/20 10/15/19 09/05/20 04/06/20 09/05/20 09/05/20
1 08/20/20 04/06/20 10/15/19 10/15/19 04/06/20 10/15/19 08/20/20
1 10/15/19 04/06/20 10/15/19 10/15/19 04/06/20 10/15/19 04/06/20
1 10/15/19 10/15/19 10/15/19 10/15/19 10/15/19 10/15/19 12/31/19
1 10/15/19 10/15/19 10/15/19 10/15/19 10/15/19 10/15/19 10/15/19
Please let me know in case anything is unclear!
edit: having null values inside the column results in NaT values like this:
lazy
09/05/20
09/05/20
NaT
NaT
NaT
NaT
Instead of NaT, these fields should refer to the oldest available timestamp in the timestamp column, which is 10/15/19.
edit2: Solved with the solution of @jezrael by adding the respective elements to the function. Well appreciated!
A: Use GroupBy.transform with GroupBy.last for each column generated by Index.difference:
df['timestamp'] = pd.to_datetime(df['timestamp'], format='%m/%d/%y')
for c in df.columns.difference(['project_id','timestamp']):
df[c] = df.groupby(['project_id',c], sort=False)['timestamp'].transform('last')
print (df)
project_id project_name region style effect representative \
0 1 2020-09-05 2020-04-06 2019-10-15 2020-09-05 2020-04-06
1 1 2020-09-05 2020-04-06 2019-10-15 2020-09-05 2020-04-06
2 1 2020-08-20 2020-04-06 2019-10-15 2019-10-15 2020-04-06
3 1 2019-10-15 2020-04-06 2019-10-15 2019-10-15 2020-04-06
4 1 2019-10-15 2019-10-15 2019-10-15 2019-10-15 2019-10-15
5 1 2019-10-15 2019-10-15 2019-10-15 2019-10-15 2019-10-15
timestamp
0 2020-10-01
1 2020-09-05
2 2020-08-20
3 2020-04-06
4 2019-12-31
5 2019-10-15
If need original format add Series.dt.strftime:
df['timestamp'] = pd.to_datetime(df['timestamp'], format='%m/%d/%y')
for c in df.columns.difference(['project_id','timestamp']):
df[c] = (df.groupby(['project_id',c], sort=False)['timestamp'].transform('last')
.dt.strftime('%m/%d/%y'))
print (df)
project_id project_name region style effect representative \
0 1 09/05/20 04/06/20 10/15/19 09/05/20 04/06/20
1 1 09/05/20 04/06/20 10/15/19 09/05/20 04/06/20
2 1 08/20/20 04/06/20 10/15/19 10/15/19 04/06/20
3 1 10/15/19 04/06/20 10/15/19 10/15/19 04/06/20
4 1 10/15/19 10/15/19 10/15/19 10/15/19 10/15/19
5 1 10/15/19 10/15/19 10/15/19 10/15/19 10/15/19
timestamp
0 2020-10-01
1 2020-09-05
2 2020-08-20
3 2020-04-06
4 2019-12-31
5 2019-10-15
EDIT: Add fillna by minimal timestamp:
df['timestamp'] = pd.to_datetime(df['timestamp'], format='%m/%d/%y')
min1 = df['timestamp'].min()
for c in df.columns.difference(['project_id','timestamp']):
df[c] = df.groupby(['project_id',c], sort=False)['timestamp'].transform('last').fillna(min1)
print (df)
project_id project_name region style effect representative \
0 1 2020-09-05 2020-04-06 2019-10-15 2020-09-05 2020-04-06
1 1 2020-09-05 2020-04-06 2019-10-15 2020-09-05 2020-04-06
2 1 2020-08-20 2020-04-06 2019-10-15 2019-10-15 2020-04-06
3 1 2019-10-15 2020-04-06 2019-10-15 2019-10-15 2020-04-06
4 1 2019-10-15 2019-10-15 2019-10-15 2019-10-15 2019-10-15
5 1 2019-10-15 2019-10-15 2019-10-15 2019-10-15 2019-10-15
lazy timestamp
0 2020-09-05 2020-10-01
1 2020-09-05 2020-09-05
2 2019-10-15 2020-08-20
3 2019-10-15 2020-04-06
4 2019-10-15 2019-12-31
5 2019-10-15 2019-10-15
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/64607926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Can you insert jquery into a php file? I have the following .php file:
<div class="excerpt">
<?php get_the_excerpt();?>
</div>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
<script>
$(document).ready(function() {
$(".excerpt").hide();
});
</script>
I want to hide the div class "excerpt" through jQuery. Unfortunately, the code doesn't do anything and I do not understand where I went wrong. Is it even possible to use jQuery in php?
A: The $ identifier usually isn't enabled by default in a CMS like WordPress.
You have to use the jQuery identifier instead.
To make the $ identifier work, you can try this:
(function($){
// inside this scope, $ can be used
$(document).ready(function(){
// other scripts
});
})(jQuery);
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/59266911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Notepad++ NppExec run in cmd How can I make my NppExec commands run in cmd, as opposed to the built-in console, which doesn't work with things like "Press any key to continue" (it only reacts when you press Enter)?
A: You want to take a look at cmd /? output and http://ss64.com/nt/. Using start along with cmd /c will give an external window and using cmd /k will keep it in the nppexec console. One thing I don't like about the /k option is that it really isn't true as 'any key' doesn't do the trick and Enter needs to be used.
Test it out with cmd /k pause and start cmd /c pause.
Notice that with the start option the window closes, so any history goes away too. If that's important, then substitute /k for the /c and use exit when done.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/12847699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Cannot get same cursor for all objects and events in a window I have searched and tested a lot and have not found a way to force a window and all children use the same cursor for all events.
In the example code below I want the DIAMOND_CROSS cursor used even when hovering over the Gtk.Paned handle. As it is now it switches to the horizontal sizing arrow.
import gi
gi.require_version('Gdk', '3.0')
gi.require_version('Gtk', '3.0')
from gi.repository import Gdk, Gtk
class MainWindow(Gtk.ApplicationWindow):
def __init__(self):
Gtk.Window.__init__(self)
self.connect("realize", self.on_realize)
self.connect('delete_event', Gtk.main_quit)
self.set_default_size(800, 600)
button1 = Gtk.Button('Button 1')
button2 = Gtk.Button('Button 2')
paned = Gtk.Paned()
paned.set_position(400)
paned.add1(button1)
paned.add2(button2)
self.add(paned)
self.show_all()
def on_realize(self, widget):
cursor = Gdk.Cursor(Gdk.CursorType.DIAMOND_CROSS)
self.get_window().set_cursor(cursor)
if __name__ == '__main__':
win = MainWindow()
Gtk.main()
A: I just checked the source of the paned object which has the following code in the gtk_paned_state_flags_changed function:
if (gtk_widget_is_sensitive (widget))
cursor = gdk_cursor_new_from_name (gtk_widget_get_display (widget),
priv->orientation == GTK_ORIENTATION_HORIZONTAL
? "col-resize" : "row-resize");
else
cursor = NULL;
(https://git.gnome.org/browse/gtk+/tree/gtk/gtkpaned.c?h=3.18.6#n1866)
Thus in theory you could overwrite the do_state_flags_changed function of the paned widget (and all other widget that have behavior like this) to achieve the result you want.
But as you ask for a way to set the cursor for all objects in one go the answer is simply: No, that is not possible.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/35084229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How to properly dispose of a pthread mutex? I wrote a class to wrap a mutex. In the destructor, I call pthread_mutex_destroy and sometimes it returns EBUSY because some other thread has not released it. My question is, what is the best way to handle the destruction of a mutex? Should I wait for it to be free?
Here is what I have so far, which may not be the best solution:
Mutex::~Mutex()
{
int rc = pthread_mutex_destroy( &_mutex );
while ( rc == EBUSY )
{
lock(); // Call to pthread_mutex_lock
unlock(); // Call to pthread_mutex_unlock
// Attempt destroy again
rc = pthread_mutex_destroy( &_mutex );
}
}
A:
My question is, what is the best way to handle the destruction of a mutex? Should I wait for it to be free?
You should not destroy resources while they are being used because that often leads to undefined behaviour.
The correct course of action is:
*
*Tell the other thread to release the resources and wait till it does, or
*Tell the other thread to terminate politely and wait till it has done so, or
*Pass the ownership of the resource to another thread and let it manage the resource's lifetime as it pleases, or
*Use shared resources, so that they stay alive while their users are still around.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24410030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: VueJS data() not working I am trying to make a VueJS app but I am failing even with the simplest examples.
I am using Laravel 5.3 with pre-built support for VueJS (version 1, I tried version 2 as well).
Here is my Example.vue component
<template>
<div class="profile">
{{ name }}
</div>
</template>
<script>
export default {
data () {
return {
name: 'John Doe'
}
}
}
</script>
And here is the main code
Vue.component('example', require('./components/Example.vue'));
const app = new Vue({
el: '#app'
});
This is the error that shows up everytime in console:
[Vue warn]: Property or method "name" is not defined on the instance but referenced during render. Make sure to declare reactive data properties in the data option. (found in component )
Any ideas whats wrong?
Thanks
A: In your script tags instead of export default use:
module.exports = {
data() {
return { counter: 1 }
}
}
This should work for you
A: Call the component inside your template
Vue.component('example', {
template: `<div class="profile">{{ name }}</div>`,
data () {
return {
name: 'John Doe'
}
}
})
const app = new Vue({
el: '#app'
})
<script src="https://unpkg.com/vue/dist/vue.js"></script>
<div id="app"><example></example></div>
A: The problem is that you are trying to load the component 'example' from that file but you didn't give a name to it. You should use:
<script>
export default {
name: 'example',
data() {
return {
name: 'John Doe'
}
}
}
</script>
Or load the component the following way (not sure if the extension .vue is needed):
require('./example').default();
If you are using Babel you can also load the components without giving them a name using this syntax:
import Example from './example'
Also checkout this post to get some more info in case you use Babel
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/39932110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Programmatically Push VC on Table Row Tap I am attempting to handle a row tap without using a storyboard segue to push to a new View Controller; however, when tapped it only displays information from the first item of the index path, no matter which row is tapped. Here is my code:
- (void)tableView:(UITableView *)theTableView didSelectRowAtIndexPath:(NSIndexPath *)indexPath
{
[self.tableView deselectRowAtIndexPath:[self.tableView indexPathForSelectedRow] animated:YES];
NSInteger row = [[self tableView].indexPathForSelectedRow row];
NSDictionary *insta = [self.instaPics objectAtIndex:row];
UIStoryboard* sb = [UIStoryboard storyboardWithName:@"Main"
bundle:nil];
InstaPicDetailViewController* vc = [sb instantiateViewControllerWithIdentifier:@"InstagramDetail"];
vc.detailItem = insta;
[self.navigationController pushViewController:vc animated:YES];
}
A: Move your deselectRowAtIndexPath message call below the assignment of your row variable. Once the row has been deselected, indexPathForSelectedRow returns nil, so [nil row] evaluates to 0 and you always get the first item:
NSInteger row = [[self tableView].indexPathForSelectedRow row];
[self.tableView deselectRowAtIndexPath:[self.tableView indexPathForSelectedRow] animated:YES];
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/28637523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to delete an entry from a Record in typescript, based on the Id I have a Main.tsx file with a Record, personList, with PersonId as key and PersonInfo as value. I want to delete a particular entry from personList, based on the Id provided. Below is my code:
interface PersonInfo{
FirstName:string;
SecondName:string;
Age:string;
}
const [personList,setPersonList] = useState<Record<string,PersonInfo>>({});
//For inserting entry
const Create = () => {
setPersonList((oldList)=>{
return {
...oldList,[PersonId]:PersonDescription //PersonDescription is of type PersonInfo
}
});
};
const Delete = () => {
const newPersonList :Record<string,PersonInfo>=
personList.filter()//Here i want to delete the entry based on personId
setPersonList(newPersonList);
};
A: What about using the delete keyword?
delete personList[personId];
A: Since you're already assigning the keys of your object to the personId you can just do:
const Delete = (id: string): void => {
const filteredPersonList: Record<string,PersonInfo> = {}
for(let key in personList){
if(key !== id){
filteredPersonList[key] = personList[key]
}
}
setPersonList(filteredPersonList)
}
A: Based on the context (i.e. updating React state) and your code sample, what you're actually trying to do is create a modified copy of the record without mutating the original.
const deletePerson = (personId: string): void => {
// first create a shallow copy of the old record.
const newList = {...personList};
// now delete the entry from the new record
delete newList[personId];
setPersonList(newList)
}
A: ESLint complains about @ehutchllew's answer:
40:7 error for..in loops iterate over the entire prototype chain, which is virtually never what you want. Use Object.{keys,values,entries}, and iterate over the resulting array no-restricted-syntax
40:7 error The body of a for-in should be wrapped in an if statement to filter unwanted properties from the prototype guard-for-in
So using the suggested Object.{keys,values,entries}, it becomes:
const Delete = (id: string): void => {
const filteredPersonList: Record<string, PersonInfo> = {};
Object.keys(personList).forEach((key) => {
if (key !== id) {
filteredPersonList[key] = personList[key];
}
});
setPersonList(filteredPersonList);
};
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61439693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7"
}
|
Q: Session and page Redirection Not working on hosted server? My PHP website clearly works on localhost. When hosted on the server, page redirection doesn't work (tested by removing all echo statements). When using JavaScript redirection instead, it works, but the session variables are missing on the next page.
I then tested echo session_id(); on both pages and it shows different ids. How can I solve this issue?
I handle paths on the server using the set_include_path() method.
<!DOCTYPE html>
<?php
session_start();
ob_start();
set_include_path(".:/home/user/data");
echo session_id();
//echo ini_get('session.cookie_domain');
if($id == 'Login failed'){ header("Location: login.php?msg=Invalid Username or Password"); }
?>
A: You can not use session_start() or header() after content has been sent to the browser (<!DOCTYPE html> in your case).
Here, even if you are using ob_start() to buffer the output, what came before has not been buffered and is sent to the browser, which prevents header() and session_start() from working.
From the PHP documentation :
To use cookie-based sessions, session_start() must be called before outputting anything to the browser.
Remember that header() must be called before any actual output is sent, either by normal HTML tags, blank lines in a file, or from PHP.
The fact that it works on your local computer but not on your Web hosting provider's server is most likely due to differences between your configuration of PHP or of your Web server. For instance, output buffering may be enabled by default on your local installation (output_buffering = 4096 for instance), and be disabled on your Web hosting (output_buffering = Off).
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/21060322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Dynamically adding files to a input I'm using this plugin: jQuery HTML5 uploader (see the demo), inside a form. I want to send uploaded files with the POST method when user submit the form.
Maybe I can create an input[type="file"], make it hidden, and dynamically add the files uploaded with the plugin inside the input? Is that possible?
If I can't do that, how can I upload files to the server with this plugin only when the user clicks the submit button? I know that the plugin is already doing that with AJAX, but I want to upload only if the user clicks a button.
Maybe I should create a variable named, for example, files and, when the user clicks the submit button, use AJAX myself to send the files and the rest of the input data?
Something like this:
jQuery( "#dropbox" ).html5Uploader({
onClientLoadStart: function( event, file ) {
files[] = file;
[...]
}
});
[...]
jQuery("#button").on("click", function() {
jQuery.ajax({
data: files
});
[...]
});
A: You can't do that for security reasons. Just imagine the possibilities. I enter a file by JavaScript in a hidden field. You press submit. I get your file. Any file. Terrifying, right?
A:
You cannot select a file in the file uploader programmatically, because of browser security.
Only the user can do this, not the script.
A: This script already uploads the files to the server. What you need is a way to get back the file names/ids from the server and attach them to a hidden form field, so that when someone submits the form you will know which files they uploaded.
You'd do this by listening to the onSuccess event.
A: For security reasons you cannot dynamically change the value of an input field of type file. If that were possible, a malicious user could set the value to, say, "c:\maliciousfile" and harm your system.
A: Looks like your question is two-part:
*
*How to dynamically add files to a form using JavaScript
1a. (I'll throw in a bonus - previewing images)
*How to perform an Ajax file upload
1 Dynamically adding files to a form
What you want to do here create an <input type="file"> field and change the form value by Javascript. You can hide the input using CSS display: none;.
As far as I know, this can only be done using the FileReader API and drag and drop, and the API is limited.
You can replace and remove all files from an input but you can't append or remove individual files.
The basic drag/drop file upload looks like this:
const dropZone = document.getElementById('dropzone');
const fileInput = document.getElementById('file-input');
dropZone.addEventListener('dragover', event => {
// we must preventDefault() to let the drop event fire
event.preventDefault();
});
dropZone.addEventListener('drop', event => {
event.preventDefault();
// drag/drop files are in event.dataTransfer
let files = event.dataTransfer.files;
fileInput.files = files;
console.log(`added ${files.length} files`);
});
.upload {
border: 1px solid #eee;
border-radius: 10px;
padding: 20px
}
.hidden {
opacity: 0.5;
}
<div id="dropzone" class="upload">Drop images here</div>
<input type="file" id="file-input" class="hidden" />
1a Previewing files
Working with Javascript opens up some fun options for form interactions, such as previewing uploaded files. Take for example an image uploader which can preview multiple drag-and-dropped images.
const imageTypeFilter = /image.*/;
const dropZone = document.getElementById('dropzone');
const fileInput = document.getElementById('file-input');
const previewContainer = document.getElementById('preview-container');
function generateImagePreview(file) {
const fileReader = new FileReader();
fileReader.addEventListener('load', event => {
// generate an image preview
let image = document.createElement('img');
image.classList.add('preview-image');
const srcAttr = document.createAttribute('src');
srcAttr.value = fileReader.result;
image.setAttributeNode(srcAttr);
previewContainer.append(image);
}, false);
// open and read the file
fileReader.readAsDataURL(file);
}
function processFiles(files) {
previewContainer.innerHTML = "";
([...files]).forEach((file, index) => {
if (!file.type.match(imageTypeFilter)) {
console.error("Incorrect file type:");
console.log(file);
} else {
generateImagePreview(file);
}
});
}
dropZone.addEventListener('dragover', event => {
// required to fire the drop event
event.preventDefault();
});
dropZone.addEventListener('drop', event => {
event.preventDefault();
const files = event.dataTransfer.files;
fileInput.files = files;
processFiles(files);
});
fileInput.addEventListener('change', event => {
processFiles(event.target.files);
});
.upload {
border: 1px solid #eee;
border-radius: 10px;
padding: 20px
}
.preview-image {
max-width: 100px;
max-height: 100px;
}
.hidden {
opacity: 0.5;
}
<div id="dropzone" class="upload">
Drag images here
</div>
<input id="file-input" type="file" multiple accept="image/jpeg,image/png,image/gif" class="hidden" />
<div id="preview-container"></div>
2 Perform an AJAX upload when the user clicks the submit button.
In this case, you want to override the form's default submit event, which I believe would look something like this:
const submitUrl = 'https://httpbin.org/post';
const form = document.getElementById('ajax-form');
form.addEventListener('submit', event => {
// this line prevents the default submit
event.preventDefault();
// convert the form into POST data
const serializedFormData = new FormData(event.target);
// use your favorite AJAX library
axios.post(
submitUrl,
serializedFormData
).then(response => {
console.log("success!");
}).catch(error => {
console.log("falied");
console.log(error.response);
});
});
<script src="https://cdn.jsdelivr.net/npm/axios/dist/axios.min.js"></script>
<form id="ajax-form" method="post" enctype="multipart/form-data">
<input type="text" name="name"/>
<input name="file" type="file" />
<button type="submit">Submit</button>
</form>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/21883749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to compare if an item in a list has the same value AND position as an item in another list? Python 2.7 eg.
l1 = [a,b,c,d]
l2 = [e,b,f,g]
A method that would return true when it sees that b is in both l1 and l2, and in position [1] in both lists. Preferably something that I can use in a for loop so that I can compare all the items in the list.
Many thanks :)
A: Try this code:
if 'b' in l1 and 'b' in l2: # Separated both statements to prevent ValueErrors
if l1.index('b') == l2.index('b'):
print 'b is in both lists and same position!'
Unlike Volatility's code, this works regardless of the lengths of the lists.
The index() function gets the position of an element in a list. For example, if there was:
>>> mylist = ['hai', 'hello', 'hey']
>>> print mylist.index('hello')
1
A: You can do:
def has_equal_element(list1, list2):
return any(e1 == e2 for e1, e2 in zip(list1, list2))
This function will return True when at least one element has the same value and position as in the other list. This function also works when the lists differ in length, you'll need to adjust the function if that's not desired.
A: Assuming the lists are the same length, you could use the zip function
for i, j in zip(l1, l2):
if i == j:
print '{0} and {1} are equal and in the same position'.format(i, j)
What the zip function does is something like this:
l1 = [1, 2, 3]
l2 = [2, 3, 4]
print zip(l1, l2)
# [(1, 2), (2, 3), (3, 4)]
If you want a function that returns True or False given an input, you could do this
def some_func(your_input, l1, l2):
return (your_input,)*2 in zip(l1, l2)
(your_input,) is a one-tuple containing your_input, and multiplying it by two makes it (your_input, your_input) - which is what you want to test for.
Or if you want the return True if any satisfy the condition
def some_func(l1, l2):
return any(i == j for i, j in zip(l1, l2))
The any function basically checks if any of the elements of a list (or in this case a generator) are True in a boolean context, so in this case it returns true if two lists satisfy your condition.
A: If you actually want a method to compare one position in two lists you can use the following:
def compare_pos(l1, l2, pos):
try:
return l1[pos] == l2[pos]
except IndexError:
return False
l1 = [0, 1, 2, 3]
l2 = [0, 2, 2, 4]
for i, _ in enumerate(l1):
print i, compare_pos(l1, l2, i)
# Output:
# 0 True
# 1 False
# 2 True
# 3 False
If you want to test whether two lists have all the same elements in the same positions you can just check for equality:
print l1 == l2
A: I'd get common elements from both lists:
l1 = [a, b, c, d]
l2 = [e, b, f, g]
common_elements = [(i, v) for i,v in enumerate(l1) if l2[i] == v]
This will create a list of tuples: (index, value) and then you can just check if your desired value or index is in the list.
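As a concrete, runnable sketch of the ideas above (Python 3 syntax; the guard i < len(l2) is added here so the enumerate version also survives lists of different lengths):

```python
l1 = ['a', 'b', 'c', 'd']
l2 = ['e', 'b', 'f', 'g']

# True if any position holds the same value in both lists
same_anywhere = any(x == y for x, y in zip(l1, l2))

# All (index, value) pairs that match in both lists
common_elements = [(i, v) for i, v in enumerate(l1)
                   if i < len(l2) and l2[i] == v]

print(same_anywhere)    # -> True
print(common_elements)  # -> [(1, 'b')]
```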
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/14542948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Disadvantages of using multiple
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/67209780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Is there a way to crack or bypass the VBA project password with Excel 2011 Mac edition? There are several good sources (referenced below) that demonstrate how to bypass VBA project passwords for Windows versions of Excel. But, none of these work for Excel 2011 for Mac.
Is there a way to achieve the same result with Excel 2011 for Mac?
References:
Is there a way to crack the password on an Excel VBA Project?
https://superuser.com/questions/807926/how-to-bypass-the-vba-project-password-from-excel
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/37029166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How can I change a color channel in an image using numpy? I have an image where some color channels in the pixels have a value of zero (e.g. 255, 146, 0). I want to be able to change any value equal to zero in the array to a different value, but I do not know how to access these values. Any help with this?
This is the image array:
[[[ 76 163 168]
[109 166 168]
[173 172 167]
...,
[ 83 182 144]
[ 78 172 134]
[ 82 150 131]]
[[ 51 151 168]
[ 99 157 171]
[173 195 159]
...,
[ 56 165 144]
[ 25 198 125]
[ 35 185 121]]
[[ 76 163 121]
[112 147 120]
[175 151 118]
...,
[ 57 162 159]
[ 36 185 132]
[ 32 194 97]]
...,
[[ 78 189 126]
[ 68 173 129]
[ 58 171 150]
...,
[ 41 188 163]
[ 34 176 126]
[ 35 176 102]]
[[131 155 161]
[101 141 161]
[ 42 151 177]
...,
[ 56 178 122]
[ 45 192 114]
[ 46 184 112]]
[[130 157 185]
[ 83 141 185]
[ 42 158 185]
...,
[ 63 187 88]
[ 45 194 102]
[ 45 184 129]]]
A: Use masking -
img[(img==zero_val).all(-1)] = new_val
, where zero_val is the zero color and new_val is the new color to be assigned at those places where we have zero colored pixels.
Sample run -
# Random image array
In [112]: img = np.random.randint(0,255,(4,5,3))
# Define sample zero valued and new valued arrays
In [113]: zero_val = [255,146,0]
...: new_val = [255,255,255]
...:
# Set two random points/pixels to be zero valued
In [114]: img[0,2] = zero_val
In [115]: img[2,3] = zero_val
# Use proposed approach
In [116]: img[(img==zero_val).all(-1)] = new_val
# Verify that new values have been assigned
In [117]: img[0,2]
Out[117]: array([255, 255, 255])
In [118]: img[2,3]
Out[118]: array([255, 255, 255])
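The masking above replaces whole pixels whose full colour matches. If you instead want to replace zeros in one particular channel only, you can index that channel directly. A small sketch, where the channel index 2 and the replacement value 128 are arbitrary choices for illustration:

```python
import numpy as np

img = np.array([[[255, 146,   0],
                 [ 10,  20,  30]],
                [[  0,  50,   0],
                 [  5,   0,   7]]])

# Replace every zero in channel 2 with 128; img[..., 2] is a
# view into img, so assigning through it modifies img in place.
channel = img[..., 2]
channel[channel == 0] = 128

print(img[0, 0])  # -> [255 146 128]
```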
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43645586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: NASM: move from memory to memory via registers I am having difficulties moving data from one memory location to another in .bss. My implementation somewhat works; however, when moving, some weird characters appear in the first couple of bytes and half of my string is missing, while the other half is fine.
This is the value I get when I print out message == �F@elcome to the new life
I need all the help I can get; what am I missing? I've gone over my code a hundred times.
section .data
hello: db "Hello, Welcome to the new life! Lets begin the journey.",10
hello_len: equ $ - hello
section .bss
message: resb 255
section .text
mov rdi, hello
mov rsi, message
msg_into_message:
cmp byte [rdi], 10 ; hello ends with a new line char
je end_count
mov al, byte [rdi]
mov byte [rsi], al
inc rsi
inc rdi
jmp msg_into_message
end_count:
mov [message], rsi
ret
; Prints message
mov rsi, message
mov rdx, hello_len
call pre_print
syscall
A: Okay, first things first, the s and d in rsi and rdi stand for source and destination. It may work the other way (as you have it) but you'll upset a lot of CDO people like myself(a) :-)
But, for your actual problem, look here:
end_count:
mov [message], rsi
I assume that's meant to copy the final newline byte (10, i.e. 0x0A) into the destination, but there are two problems:
*
*message is the start of the buffer, not the position where the byte should go.
*You're copying the multi-byte rsi variable into there, not the byte you need.
Those two points mean that you're putting some weird value into the first few bytes, as your symptoms suggest.
Perhaps a better way to do it would be as follows:
mov rsi, hello ; as Gordon Moore intended :-)
mov rdi, message
put_str_into_message:
mov al, byte [rsi] ; get byte, increment src ptr.
inc rsi
mov byte [rdi], al ; put byte, increment dst ptr.
inc rdi
cmp al, 10 ; continue until newline.
jne put_str_into_message
ret
For completeness, if you didn't want the newline copied (though this is pretty much what you have now, just with the errant buffer-damaging mov taken away) (b):
put_str_into_message:
mov al, byte [rsi] ; get byte.
cmp al, 10 ; stop before newline.
je stop_str
mov byte [rdi], al ; put byte, increment pointers.
inc rsi
inc rdi
jmp put_str_into_message
stop_str:
ret
(a) CDO is obsessive-compulsive disorder, but with the letters arranged correctly :-)
(b) Or the don't-copy-newline loop can be done more efficiently, while still having a single branch at the bottom.
Looping one byte at a time is still very inefficient (x86-64 has SSE2 which lets you copy and check 16 bytes at a time). Since you have the length as an assemble-time constant hello_len, you could use that to efficiently copy in wide chunks (possibly needing special handling at the end if your buffer size is not a multiple of 16), or with rep movsb.
But this demonstrates an efficient loop structure, avoiding the false dependency of merging a new AL into the bottom of RAX, allowing out-of-order exec to run ahead and "see" the loop exit branch earlier.
strcpy_newline_end:
movzx eax, byte [rsi] ; get byte (without false dependency).
cmp al, 10
je copy_done ; if the first byte is already a newline, skip the loop.
copy_loop: ; do:
mov [rdi], al ; put byte.
inc rsi ; increment both pointers.
inc rdi
movzx eax, byte [rsi] ; get next byte.
cmp al, 10
jne copy_loop ; until you get a newline.
; After falling out of the loop (or jumping here from the top)
; we've loaded but *not* stored the terminating newline
copy_done:
ret
You should also be aware there are other tricks you can use, to save instructions inside the loop, such as addressing one string relative to the other (with an indexed addressing mode for the load, only incrementing one pointer).
However, we won't cover them in detail here as it risks making the answer more complex than it needs to be.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/58576125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Why does average damping magically speed up the convergence of fixed-point calculators? I'm reading through SICP, and the authors brush over the technique of average damping in computing the fixed points of functions. I understand that it's necessary in certain cases, ie square roots in order to damp out the oscillation of the function y = x/y however, I don't understand why it magically aids the convergence of the fixed point calculating function. Help?
edit
Obviously, I've thought this through somewhat. I can't seem to wrap my head around why averaging a function with itself would speed up convergence when applied repeatedly.
A: While I can't answer your question on a mathematical basis, I'll try on an intuitive one:
fixpoint techniques need a "flat" function graph around their ..well.. fixpoint. This means: if you picture your fixpoint function on an X-Y chart, you'll see that the function crosses the diagonal (+x,+y) exactly at the true result. In one step of your fixpoint algorithm you guess an X value, which needs to be within the interval around the intersection point where the first derivative is between (-1..+1), and take the Y value. The Y that you took will be closer to the intersection point, because starting from the intersection it is reachable by following a path whose slope is smaller than +/-1, in contrast to the previous X value you used, which in this sense has the exact slope -1. It is immediately clear now that the smaller the slope, the more progress you make towards the intersection point (the true function value) when using the Y as the new X. The best interpolation function is trivially a constant, which has slope 0, giving you the true value in the first step.
Sorry to all mathematicians.
A: It only speeds up those functions whose repeated applications "hop around" the fixpoint. Intuitively, it's like adding a brake to a pendulum - it'll stop sooner with the brake.
But not every function has this property. Consider f(x)=x/2. This function will converge sooner without the average damping (log base 2 steps vs log base (4/3) steps), because it approaches the fixpoint from one side.
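To make both behaviors concrete, here is a small Python sketch (not from SICP, which uses Scheme; `fixed_point` is a hypothetical helper written for this illustration):

```python
def fixed_point(f, guess, tol=1e-10, max_steps=1000):
    """Iterate f until two successive values agree to within tol."""
    for step in range(1, max_steps + 1):
        nxt = f(guess)
        if abs(nxt - guess) < tol:
            return nxt, step
        guess = nxt
    return guess, max_steps

x = 2.0
# The undamped map y -> x/y just hops between 1.0 and 2.0 forever.
# The average-damped map y -> (y + x/y) / 2 converges quickly
# (it is Newton's method for sqrt in disguise).
root, steps = fixed_point(lambda y: (y + x / y) / 2, 1.0)
print(root, steps)

# For f(x) = x/2 damping hurts: the averaged map is (x + x/2)/2 = 3x/4,
# which shrinks toward the fixpoint 0 more slowly than x/2 does.
_, plain_steps = fixed_point(lambda v: v / 2, 1.0)
_, damped_steps = fixed_point(lambda v: 3 * v / 4, 1.0)
print(plain_steps, damped_steps)  # the damped version needs more steps
```

Running this shows the damped sqrt map settling in a handful of steps, while damping the one-sided map x/2 only slows it down.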
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/3860929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9"
}
|
Q: Java Null Object that is used in Hash Tables and Comparables I have an object that represents an UNKNOWN value, or "Null object".
As in SQL, this object should never be equal to anything, including another UNKNOWN, so (UNKNOWN == UNKNOWN) -> false.
The object is used, however, in hashtables and the type is Comparable, so I created a class as follows:
public class Tag implements Comparable<Tag> {
final static Tag UNKNOWN = new Tag("UNKNOWN");
String id;
public Tag(String id) {
this.id = id;
}
public int hashCode(){
return id.hashCode();
}
public String toString(){
return id;
}
public boolean equals(Object o){
if (this == UNKNOWN || o == UNKNOWN || o == null || !(o instanceof Tag))
return false;
return this.id.equals(((Tag)o).id);
}
public int compareTo(Tag o){
if (this == UNKNOWN)
return -1;
if (o == UNKNOWN || o == null)
return 1;
return this.id.compareTo(o.id);
}
}
But now compareTo() seems "inconsistent"?
Is there a better way to implement compareTo()?
A: The documentation for compareTo mentions this situation:
It is strongly recommended, but not strictly required that
(x.compareTo(y)==0) == (x.equals(y))
Generally speaking, any class that implements the Comparable interface and violates this condition should clearly indicate this fact. The recommended language is "Note: this class has a natural ordering that is inconsistent with equals."
Therefore, if you want your object to be Comparable and yet still not allow two UNKNOWN objects to be equal via the equals method, you must make your compareTo "Inconsistent with equals."
An appropriate implementation would be:
public int compareTo(Tag t) {
return this.id.compareTo(t.id);
}
Otherwise, you could make it explicit that UNKNOWN values in particular are not Comparable:
public static boolean isUnknown(Tag t) {
return t == UNKNOWN || (t != null && "UNKNOWN".equals(t.id));
}
public int compareTo(Tag t) {
if (isUnknown(this) || isUnknown(t)) {
throw new IllegalStateException("UNKNOWN is not Comparable");
}
return this.id.compareTo(t.id);
}
A: You're correct that your compareTo() method is now inconsistent. It violates several of the requirements for this method. The compareTo() method must provide a total order over the values in the domain. In particular, as mentioned in the comments, a.compareTo(b) < 0 must imply that b.compareTo(a) > 0. Also, a.compareTo(a) == 0 must be true for every value.
If your compareTo() method doesn't fulfil these requirements, then various pieces of the API will break. For example, if you sort a list containing an UNKNOWN value, then you might get the dreaded "Comparison method violates its general contract!" exception.
How does this square with the SQL requirement that null values aren't equal to each other?
For SQL, the answer is that it bends its own rules somewhat. There is a section in the Wikipedia article you cited that covers the behavior of things like grouping and sorting in the presence of null. While null values aren't considered equal to each other, they are also considered "not distinct" from each other, which allows GROUP BY to group them together. (I detect some specification weasel wording here.) For sorting, SQL requires ORDER BY clauses to have additional NULLS FIRST or NULLS LAST in order for sorting with nulls to proceed.
So how does Java deal with IEEE 754 NaN which has similar properties? The result of any comparison operator applied to NaN is false. In particular, NaN == NaN is false. This would seem to make it impossible to sort floating point values, or to use them as keys in maps. It turns out that Java has its own set of special cases. If you look at the specifications for Double.compareTo() and Double.equals(), they have special cases that cover exactly these situations. Specifically,
Double.NaN == Double.NaN // false
Double.valueOf(Double.NaN).equals(Double.NaN) // true!
Also, Double.compareTo() is specified so that it considers NaN equal to itself (it is consistent with equals) and NaN is considered larger than every other double value including POSITIVE_INFINITY.
There is also a utility method Double.compare(double, double) that compares two primitive double values using these same semantics.
These special cases let Java sorting, maps, and so forth work perfectly well with Double values, even though this violates IEEE 754. (But note that primitive double values do conform to IEEE 754.)
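As a quick illustration of those special cases (a standalone sketch, separate from the Tag class):

```java
import java.util.Arrays;

public class NaNOrdering {
    public static void main(String[] args) {
        double[] values = { Double.NaN, 1.0, Double.POSITIVE_INFINITY, -3.5 };
        // Arrays.sort(double[]) uses the Double.compare total order,
        // so NaN sorts after POSITIVE_INFINITY instead of breaking the sort.
        Arrays.sort(values);
        System.out.println(Arrays.toString(values));   // [-3.5, 1.0, Infinity, NaN]

        System.out.println(Double.NaN == Double.NaN);                       // false
        System.out.println(Double.valueOf(Double.NaN).equals(Double.NaN)); // true
        System.out.println(Double.compare(Double.NaN, Double.NaN));        // 0
    }
}
```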
How should this apply to your Tag class and its UNKNOWN value? I don't think you need to follow SQL's rules for null here. If you're using Tag instances in Java data structures and with Java class libraries, you'd better make it conform to the requirements of the compareTo() and equals() methods. I'd suggest making UNKNOWN equal to itself, to have compareTo() be consistent with equals, and to define some canonical sort order for UNKNOWN values. Usually this means sorting it higher than or lower than every other value. Doing this isn't terribly difficult, but it can be subtle. You need to pay attention to all the rules of compareTo().
The equals() method might look something like this. Fairly conventional:
public boolean equals(Object obj) {
if (this == obj) {
return true;
}
return obj instanceof Tag && id.equals(((Tag)obj).id);
}
Once you have this, then you'd write compareTo() in a way that relies on equals(). (That's how you get the consistency.) Then, special-case the unknown values on the left or right-hand sides, and finally delegate to comparison of the id field:
public int compareTo(Tag o) {
if (this.equals(o)) {
return 0;
}
if (this.equals(UNKNOWN)) {
return -1;
}
if (o.equals(UNKNOWN)) {
return 1;
}
return id.compareTo(o.id);
}
I'd recommend implementing equals(), so that you can do things like filter UNKNOWN values out of a stream, store them in collections, and so forth. Once you've done that, there's no reason not to make compareTo consistent with equals. I wouldn't throw any exceptions here, since that would just make the standard libraries hard to use.
A: The simple answer is: you shouldn't.
You have contradictory requirements here. Either your tag objects have an implicit order (that is what Comparable expresses) OR you can have such "special" values that are not equal to anything, not even themselves.
As the other excellent answer and the comments point out: yes, you can somehow get there; for example by simply allowing for a.compare(b) < 0 and b.compare(a) < 0 at the same time; or by throwing an exception.
But I would simply be really careful about this. You are breaking a well established contract. And the fact that some javadoc says: "breaking the contract is OK" is not the point - breaking that contract means that all the people working on this project have to understand this detail.
Coming from there: you could go forward and simply throw an exception within compareTo() if a or b are UNKNOWN; by doing so you make at least clear that one shouldn't try to sort() a List<Tag> for example. But hey, wait - how would you find out that UNKNOWN is present in your list? Because, you know, UNKNOWN.equals(UNKNOWN) returns false; and contains() is using equals.
In essence: while technically possible, this approach causes breakages wherever you go. Meaning: the fact that SQL supports this concept doesn't mean that you should force something similar into your java code. As said: this idea is very much "off standards"; and is prone to surprise anybody looking at it. Aka "unexpected behavior" aka bugs.
A: A couple seconds of critical thinking:
There is already a null in Java and you can not use it as a key for a reason.
If you try to use a key that is not equal to anything else, including itself, you can NEVER retrieve the value associated with that key!
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43795877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Getting a Method in the Main C# static void bubble(int [] mas)
{
int temp;
for (int i = 0; i < mas.Length; i++)
{
for (int j = 0; j < mas.Length - 1; j++)
if (mas[j] > mas[j + 1])
{
temp = mas[j + 1];
mas[j + 1] = mas[j];
mas[j] = temp;
}
}
foreach (var element in mas)
{
Console.WriteLine("Sorted elements are: {0}", element);
}
}
static void Main(string[] args)
{
int[] mas = new int[5];
Console.WriteLine("Please enter the elements of the array: ");
for (int i = 0; i < 5; i++)
{
mas[i] = Convert.ToInt32(Console.ReadLine());
}
bubble();
}
I need to be using methods to solve some tasks for Uni project and the problem I am having is that I want to create an array in the Main, use it in different methods and call the results of those methods back to the Main. When I do that and try and call the "bubble" back in to the Main it is telling me that there is no argument given that corresponds to the formal given parameter. Is there an easy way to fix that so I can fix that and proceed in creating the other methods analogically.
Thanks in Advance
A: The reason you are getting the error is because the function bubble requires an int[] as a parameter.
Currently you have bubble() which in its current state is "parameterless"
Replace it with bubble(mas);
A: You need to pass the parameter in your call to bubble in main:
bubble(mas);
A: change your code like this :
static int[] bubble(int [] mas)
{
int temp;
for (int i = 0; i < mas.Length; i++)
{
for (int j = 0; j < mas.Length - 1; j++)
if (mas[j] > mas[j + 1])
{
temp = mas[j + 1];
mas[j + 1] = mas[j];
mas[j] = temp;
}
}
foreach (var element in mas)
{
Console.WriteLine("Sorted elements are: {0}", element);
}
return mas;
}
static void Main(string[] args)
{
int[] mas = new int[5];
Console.WriteLine("Please enter the elements of the array: ");
for (int i = 0; i < 5; i++)
{
mas[i] = Convert.ToInt32(Console.ReadLine());
}
bubble(mas);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/33853238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: how to detect if user has an app in Cocoa How can I detect if user has an app, like echofon or twitter for mac or if user has pages, or textmate? Any suggestions?
A: Use NSWorkspace's fullPathForApplication: to get an application's bundle path. If that method returns nil, the app is not installed. For example:
NSString *path = [[NSWorkspace sharedWorkspace] fullPathForApplication:@"Twitter"];
BOOL isTwitterInstalled = (nil != path);
URLForApplicationWithBundleIdentifier is another method you may use.
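For the bundle-identifier variant, a sketch (the identifier string here is a guess; check the target app's Info.plist for the real one):

NSURL *appURL = [[NSWorkspace sharedWorkspace]
    URLForApplicationWithBundleIdentifier:@"com.twitter.twitter-mac"];
BOOL isInstalled = (appURL != nil);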
A: I have never tried the code in the above answer, but the following works for me:
if ( [[UIApplication sharedApplication] canOpenURL:[NSURL URLWithString:@"app-scheme://"]] ) {
NSLog(@"This app is installed.");
} else {
NSLog(@"This app is not installed.");
}
This method requires the app to have a URL scheme, though. I don't know about the one above. (Note that UIApplication and canOpenURL: are iOS APIs; for a Mac app the NSWorkspace approach is the one to use.)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7866273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Execution order of simple function I am a bit new to javascript, i was just trying the below snippet:
_getUniqueID = (function () {
var i = 1;
return function () {
return i++;
};
}());
s = _getUniqueID();
console.log(s); // 1
console.log(_getUniqueID()); // 2
I was under the impression that i would have to do s() to get 1 as the result and i was thinking that _getUniqueID() returns a function rather than execute the funtion inside it. Can somebody explain the exact execution of this function please ?
A: What you're seeing here is a combination of Javascript's notion of closure combined with the pattern of an immediately invoked function expression.
I'll try to illustrate what's happening as briefly as possible:
_getUniqueID = (function () {
var i = 1;
return function () {
return i++;
};
}()); <-- The () after the closing } invokes this function immediately.
_getUniqueID is assigned the return value of this immediately invoked function expression. What gets returned from the IIFE is a function with a closure that includes that variable i. i becomes something like a private field owned by the function that returns i++ whenever it's invoked.
s = _getUniqueID();
Here the returned function (the one with the body return i++;) gets invoked and s is assigned the return value of 1.
Hope that helps. If you're new to Javascript, you should read the book "JavaScript: The Good Parts". It will explain all of this in more detail.
A: _getUniqueID = (function () {
var i = 1;
return function () {
return i++;
};
}());
s = _getUniqueID();
console.log(s); // 1
console.log(_getUniqueID()); // 2
*
*when you do (), it calls the outer function immediately,
a- creates i, which the returned inner function captures in its closure.
b- assigns the returned inner function to _getUniqueID
*you do s = _getUniqueID();,
a - it assigns s with return value of function in _getUniqueID that is 1 and makes i as 2
*when you do _getUniqueID() again it will call the return function again
a- return 2 as the value and
b makes value of i as 3.
A: This is a pattern used in Javascript to encapsulate variables. The following functions equivalently:
var i = 1;
function increment() {
return i ++;
}
function getUniqueId() {
return increment();
}
But to avoid polluting the global scope with 3 names (i, increment and getUniqueId), you need to understand the following steps to refactor the above. What happens first is that the increment() function is declared locally, so it can make use of the local scope of the getUniqueId() function:
function getUniqueId() {
var i = 0;
var increment = function() {
return i ++;
};
return increment();
}
Now the increment function can be anonymized:
function getUniqueId() {
var i = 0;
return function() {
return i ++;
}();
}
Now the outer function declaration is rewritten as a local variable declaration, which, again, avoids polluting the global scope:
var getUniqueId = function() {
var i = 0;
return (function() {
return i ++;
})();
}
You need the parentheses to have the function declaration act as an inline expression that the call operator (()) can operate on.
As the execution order of the inner and the outer function now no longer makes a difference (i.e. getting the inner generator function and calling it, or generating the number and returning that), you can rewrite the above as
var getUniqueId = (function() {
var i = 0;
return function() {
return i ++;
};
})();
The pattern is more or less modeled after Crockford's private pattern
A: _getUniqueID = (function () {
var i = 1;
return function () {
return i++;
};
}());
console.log(_getUniqueID()); // 1 , this surprised me initially , I was expecting a function definition to be printed or rather _getUniqueID()() to be called in this fashion for 1 to be printed
So the above snippet of code was really confusing me because I wasn't understanding that the above script works in the following manner: by the time the IIFE executes, _getUniqueID is essentially just the following (with i living on in the closure, starting at 1):
_getUniqueID = function () {
return i++; // i is the closure variable the IIFE created
};
and hence,
_getUniqueID() // prints 1.
prints 1.
Note: please note that I understand how closures and IIFEs work.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31909178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: CouchDB Views - emit Keys as JSON & filter based on any attribute Consider following Employee document structure
{
"_id":...,
"rev":...,
"type":"Employee",
"fName":...,
"lName":...,
"designation":...,
"department":...,
"reportingTo":...,
"isActive":..,
more attributes
more attributes
}
And following map function in a View named "Employee"
function(doc) {
if (doc.type=="Employee") {
emit({
"EID":doc._id,
"FirstName":doc.fName,
"LastName":doc.lName,
"Designation":doc.designation,
"Department":doc.department,
"ReportingTo":doc.reportingTo,
"Active":doc.isActive
},
null
);
}
};
I want to query this view based on any combination & order of emitted attributes ( a query may include few random attributes may be like duck typing ). Is it possible? If so kindly let me know some samples or links.
Thanks
A: I've run into the same problem a few times; you can, but you'll have to index each field by itself (not all in one hash like you've done). But you could throw the whole thing in the value for emit. It can be fairly inefficient, but gets the job done. (See this link: View Snippets)
I.e.:
function(doc) {
if (doc.type=="Employee") {
emit(["EID", doc._id], doc);
emit(["FirstName", doc.fName], doc);
emit(["LastName", doc.lName], doc);
emit(["Designation", doc.designation], doc);
emit(["Department", doc.department], doc);
emit(["ReportingTo", doc.reportingTo], doc);
emit(["Active", doc.isActive], doc);
}
}
This puts all "EID" things in the same part of the tree, etc., I'm not sure if that is good or bad for you.
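With that layout, each field can then be queried through the standard view parameters. A sketch with curl (the database name, design doc "emp", and view "by_field" are hypothetical; adjust to your setup):

```shell
# All rows whose LastName key is "Smith":
curl -sG 'http://localhost:5984/employees/_design/emp/_view/by_field' \
     --data-urlencode 'key=["LastName","Smith"]'

# Every row indexed under Department, via a range scan:
curl -sG 'http://localhost:5984/employees/_design/emp/_view/by_field' \
     --data-urlencode 'startkey=["Department"]' \
     --data-urlencode 'endkey=["Department",{}]'
```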
If you start needing a lot of functionality with [field name:]value searches, it's probably worth moving towards a Lucene-CouchDB setup. Several exist, but are a little immature.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/15834963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Uncaught PDOException: could not find driver ... Windows XAMPP MySQL PHP Version => 7.4.4 I've walked through all the StackOverflow answers and none were useful for my case.
I am on Windows using XAMPP with MySQL my PHP Version is 7.4.4, in my php.ini the extensions are uncommented and I am posting them in here
extension=bz2
extension=curl
;extension=ffi
;extension=ftp
extension=fileinfo
extension=gd2
extension=gettext
;extension=gmp
;extension=intl
;extension=imap
;extension=ldap
extension=mbstring
extension=exif ; Must be after mbstring as it depends on it
extension=mysqli
;extension=oci8_12c ; Use with Oracle Database 12c Instant Client
;extension=odbc
;extension=openssl
;extension=pdo_firebird
extension=pdo_mysql
;extension=pdo_oci
;extension=pdo_odbc
;extension=pdo_pgsql
extension=pdo_sqlite
;extension=pgsql
;extension=shmop
extension=php_pdo.dll
extension=php_pdo.dll
extension=php_odbc.dll
extension=php_pdo_odbc.dll
I have done all that all the previous answer said to do, such as uncomment pdo_mysql (which was already uncommented) and I have added a few rows that I am mentioning here again:
extension=php_pdo.dll
extension=php_pdo.dll
extension=php_odbc.dll
extension=php_pdo_odbc.dll
Which were not in the php.ini file but I have added them.
I have looked all around the web and it seems that to get this connection working the only thing to do is to uncomment those few lines; as an extra I added the .dll references above because I couldn't get it to work.
the code which is failing is the following
$dbPassword = "admin";
$dbUser = "admin";
$dbServer = "localhost";
$dbName = "PHP";
$connection = new PDO('mysql:host='.$dbServer.';dbname='.$dbName,$dbUser,$dbPassword);
print_r($connection);
Does anyone have an idea on how to get this driver set properly on windows?
A: The issue was that I had two php.ini files: one from XAMPP and another inside a folder left over from a previous installation.
Because the PATH variable was pointing to the old one, PHP wasn't picking up my changes.
Fixed by uncommenting the extensions in the correct php.ini.
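When this kind of mismatch is suspected, PHP itself can tell you which php.ini is loaded. A couple of quick checks (these inspect the CLI; the Apache/XAMPP SAPI may load a different file, which phpinfo() in the browser will reveal):

```shell
# List the configuration files the PHP CLI actually reads.
php --ini

# Print just the loaded php.ini path.
php -r 'echo php_ini_loaded_file(), PHP_EOL;'

# Confirm the PDO MySQL driver is registered.
php -r 'print_r(PDO::getAvailableDrivers());'
```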
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61158974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: AWS Auto Scaling Group As a newbie to AWS concepts, I am trying to understand ASG concept.
a) Does it work per region or per Availability Zones?
b) Do I need to have an ASG for each Availability zone, or it can scale up and down the instances from different AZs on a shared region?
A: You can have an autoscale group work across availability zones, but not across regions.
Though it is possible to have an ASG in a single AZ, you would certainly want the ASG to be in multiple AZ's, otherwise you have no real fault tolerance in the case where an AZ has problems.
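For illustration, creating such a multi-AZ group with the AWS CLI might look like this (the names and AZs are placeholders, and a real call needs an existing launch template or launch configuration):

```shell
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template "LaunchTemplateName=web-template,Version=1" \
  --min-size 2 \
  --max-size 6 \
  --availability-zones us-east-1a us-east-1b
```

Both AZs must belong to the same region; there is no cross-region form of this call.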
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/46728768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Opencart check if specific product is in stock I want to be able to check to see if a specific product is in stock or enabled or disabled. I would then like to put this block of code on the checkout page. More specifically it will go in Gift_Wrap.xml. I have a product called Gift Box and it goes out of stock quite frequently. How can I have it check if that specific product is enabled and then display x y or z.
<?php if ($product_id['GIFTBOX']) { ?>
if($this->data['status']=$this->language->get('enable'){
//do this
}
else if{
//do this
}
<?php } ?>
Not sure if I am on the right track or not. What do I need to call before this to get the data options of that product? I'm new to OpenCart and PHP; thanks for your help.
A: Go to controller/product/product.php and run something like this to check whether the product is in stock:
<?php
foreach ($products as $product) {
if ($product['stock'] > 0 && $product['product_id'] == $product_id['GIFTBOX']) {
// the gift box is enabled and in stock -- do this
}
}
?>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31328773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How do I apply an Ansible with_items loop to the included tasks? The documentation for import_tasks mentions
Any loops, conditionals and most other keywords will be applied to the included tasks, not to this statement itself.
This is exactly what I want. Unfortunately, when I attempt to make import_tasks work with a loop
- import_tasks: msg.yml
with_items:
- 1
- 2
- 3
I get the message
ERROR! You cannot use loops on 'import_tasks' statements. You should use 'include_tasks' instead.
I don't want the include_tasks behaviour, as this applies the loop to the included file, and duplicates the tasks. I specifically want to run the first task for each loop variable (as one task with the standard with_items output), then the second, and so on. How can I get this behaviour?
Specifically, consider the following:
Suppose I have the following files:
playbook.yml
---
- hosts: 192.168.33.100
gather_facts: no
tasks:
- include_tasks: msg.yml
with_items:
- 1
- 2
msg.yml
---
- name: Message 1
debug:
msg: "Message 1: {{ item }}"
- name: Message 2
debug:
msg: "Message 2: {{ item }}"
I would like the printed messages to be
Message 1: 1
Message 1: 2
Message 2: 1
Message 2: 2
However, with import_tasks I get an error, and with include_tasks I get
Message 1: 1
Message 2: 1
Message 1: 2
Message 2: 2
A: You can add a with_items loop taking a list to every task in the imported file, and call import_tasks with a variable which you pass to the inner with_items loop. This moves the handling of the loops to the imported file, and requires duplication of the loop on all tasks.
Given your example, this would change the files to:
playbook.yml
---
- hosts: 192.168.33.100
gather_facts: no
tasks:
- import_tasks: msg.yml
vars:
messages:
- 1
- 2
msg.yml
---
- name: Message 1
debug:
msg: "Message 1: {{ item }}"
with_items:
- "{{ messages }}"
- name: Message 2
debug:
msg: "Message 2: {{ item }}"
with_items:
- "{{ messages }}"
A: It is not possible. include/import statements operate with task files as a whole.
So with loops you'll have:
Task 1 with Item 1
Task 2 with Item 1
Task 3 with Item 1
Task 1 with Item 2
Task 2 with Item 2
Task 3 with Item 2
Task 1 with Item 3
Task 2 with Item 3
Task 3 with Item 3
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/47352361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14"
}
|
Q: AIDL server usage in Android Currently, I have implemented AIDL in 2 apps. App1 works as a server and App2 works as a client. The service is added to App1 and App2 connects to that service. Now, I want App1 (the server app) to run purely as a background service, without any UI. What should I do? I tried installing App1 without any UI; it installed properly, but the service didn't start.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/73316972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Mapbox map not showing up (react-map-gl) I was using this website [http://vis.academy/#/building-a-geospatial-app/1-starting-with-a-map] to learn how to visualize data using react and mapbox.gl but I ran into some problems.
I'm running the app on localhost:8080, and all I see is a blank white screen. Even though I've matched my code to vis academy's, I don't have a map.
Here is my code for the app.js (everything else was not changed, you can clone the code from their website).
import React, { Component } from 'react';
import MapGL from 'react-map-gl';
import {MapStylePicker} from './controls';
export default class App extends Component {
state = {
style: 'mapbox://styles/light-v9',
viewport: {
width: window.innerWidth,
height: window.innerHeight,
longitude: -74,
latitude: 40.7,
zoom: 11,
maxZoom: 16
}
};
onStyleChange = (style) => {
this.setState({style});
}
_onViewportChange = (viewport) => {
this.setState({
viewport: {...this.state.viewport, ...viewport}
});
}
componentDidMount() {
window.addEventListener('resize', this._resize);
this._resize();
}
componentWillUnmount() {
window.removeEventListener('resize', this._resize);
}
_resize = () => {
this._onViewportChange({
width: window.innerWidth,
height: window.innerHeight
});
}
render() {
<MapStylePicker onStyleChange={this.onStyleChange} currentStyle={this.state.style}/>
return (
<div>
<MapGL
{...this.state.viewport}
mapStyle={this.state.style}
// Callback for viewport changes (below)
onViewportChange={viewport => this._onViewportChange(viewport)}
></MapGL>
</div>
);
}
}
For the mapbox token, I did what the guide told me to do, which was to go to the ~/.bash_profile file and add export MAPBOX_TOKEN="py........" and then I updated it in the terminal using source ~/.bash_profile
Further into the guide, it had two more things that exported the mapbox token so here is my code for ~/.bash_profile
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/opt/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
eval "$__conda_setup"
else
if [ -f "/opt/anaconda3/etc/profile.d/conda.sh" ]; then
. "/opt/anaconda3/etc/profile.d/conda.sh"
else
export PATH="/opt/anaconda3/bin:$PATH"
fi
fi
unset __conda_setup
# <<< conda initialize <<<
export MapboxAccessToken="pk.eyJ1IjoidmVyYTFrIiwiYSI6ImNrZjZ5bWZ3YzBhNDEycWxiZHBycDFrZnoifQ.hVFCurngFTtg2R0ivt-zyg"
export MAPBOX_TOKEN="pk.eyJ1IjoidmVyYTFrIiwiYSI6ImNrZjQ3aWJoczA4ZGQydm43cXFjcW5peTkifQ.sYJ99dX7B8QyPgV_-TszTA"
export mapboxApiAccessToken="pk.eyJ1IjoidmVyYTFrIiwiYSI6ImNrZjQ3aWJoczA4ZGQydm43cXFjcW5peTkifQ.sYJ99dX7B8QyPgV_-TszTA"
Thank you so much for your time! I am super confused (btw i am on a mac)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/63947709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: bash while read loop fails with \ as line termination Input file:
$ cat list
dog \
cat \
apple \
script:
while read line;do echo $line;done < list;
The Problem:
The above script prints nothing. Whereas using 'read -r' in place of read prints all lines.
My Question is..
Shouldn't the while loop run at least once in the above case(without -r) and print the first line of output ? Why the condition check fails on the first line itself ?
A: Since all newlines are escaped, and read reads till an unescaped newline, it reaches end-of-file without getting one. And, as help read says:
Exit Status:
The return code is zero, unless end-of-file is encountered, read times out
(in which case it's greater than 128), a variable assignment error occurs,
or an invalid file descriptor is supplied as the argument to -u.
Note that line still contains the lines read so far:
$ while read line; do echo "$line"; done < list; echo $line
dog cat apple
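A self-contained way to reproduce both behaviors (a sketch; the temp-file path is arbitrary):

```shell
#!/usr/bin/env bash
# Build a file in which every newline is escaped by a trailing backslash.
printf 'dog \\\ncat \\\napple \\\n' > /tmp/list.$$

count=0
while read line; do count=$((count+1)); done < /tmp/list.$$
echo "without -r: $count iterations"   # 0 -- read joined all lines, then hit EOF

count=0
while read -r line; do count=$((count+1)); done < /tmp/list.$$
echo "with -r:    $count iterations"   # 3 -- backslashes kept literally

rm -f /tmp/list.$$
```

Without -r the loop body never runs at all, because read's only return is the end-of-file failure status.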
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/42554165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Can't delete record from SQL Server CE I have an ASP.NET MVC application. I use EF 5.0 to persist data in a SQL Server CE database. I have installed the SqlServerCe.Entity NuGet package. I can successfully process Create, Read and Update operations on the database records using Linq and my model objects. For some reason, however, I cannot delete any records from the database. Objects are erased at the model level but the changes are not persisted to the database. What might be causing that?
This is my approach to deleting records:
var files = context.Files.ToList();
files.RemoveAll(x => x.Name == "test");
context.SaveChanges();
And this is my connection string:
<add name="DefaultConnection" connectionString="Data Source=C:\SLG PROJECTS\BACKUPS CHECK SYSTEM\Data\aspnet.sdf" providerName="System.Data.SqlServerCe.4.0" />
A: Try this:
context.Files.Where(x => x.Name == "test").ToList()
.ForEach( f => context.Files.Remove(f));
context.SaveChanges();
RemoveAll only operates on the in-memory list; it is not translated to SQL, so no DELETE is executed in SQL Server.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19356708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Activating chrome language flags when activating from protractor (selenium) I'm writing end to end tests with Protractor for an angular website.
We have to support certain languages so I would like to init chrome using the --lang flag and start it with some other language. I searched the web and couldn't find any example to how it can be done.
My only lead was some article I saw and understood that I need to add to Protractor config file the "capabilities" section and there I can define the "args" property.
Then tried to tinker with it but no luck.
Any help will be most welcome.
Thanks,
Alon
A: How to set Browser language and/or Accept-Language header
exports.config = {
capabilities: {
browserName: 'chrome',
chromeOptions: {
// How to set browser language (menus & so on)
args: [ 'lang=fr-FR' ],
// How to set Accept-Language header
prefs: {
intl: { accept_languages: "fr-FR" },
},
},
},
};
More examples:
intl: { accept_languages: "es-AR" }
intl: { accept_languages: "de-DE,de" }
A: As it turns out - the answer was in the docs all along.
This is how you do it for Chrome, and I guess it's similar for other browsers:
inside the protractor.conf.js (for Spanish):
capabilities: {
browserName: 'chrome',
version: '',
platform: 'ANY',
'chromeOptions': {
'args': ['lang=es-ES']}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/26872481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Opencart optimus theme errors Hello guys, I have some problems with my OpenCart theme.
https://melani-shop.com
I get too many errors; can you help me?
Screenshot http://prntscr.com/erggqi
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43167074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: My Dev-C++ is returning a non-static warning. What's the problem with the code? I'm working on this code and can't find a way to compile it. I tried online compilers, and each one of them gives a different reason for the error; my Dev-C++ says:
[Warning] non-static data member initializers only available with -std=c++11 or -std=gnu++11
I can't find an error in the code either.
#include <stdio.h>
#include <stdlib.h>
struct uninter
{
char nome[5] = {'L','U','C','A','S'};
int RU = 2613496;
}; struct uninter aluno;
int main() {
int i;
printf("\n Nome do Aluno: ");
for (i = 0; i < 5; i++ ){
printf( "%c\n", aluno.nome[i]);
}
printf("\n RU do aluno: %d \n ", aluno.RU);
system("pause");
return 0;
}
[Warning] non-static data member initializers only available with -std=c++11 or -std=gnu++11
A: The warning message you are receiving is because you are compiling your code with a C++ compiler, probably with -std=c++98 or -ansi, or otherwise implicitly using the 1998 standard.
You are trying to create a default initializer for a member of a struct, which is a feature not added to C++ until the 2011 standard. To compile a C++ program with this feature without the compiler warning you about this, you need to pass in the -std=c++11 or -std=gnu++11 flags to the compiler command as the warning states.
If this is expected to be C code rather than C++ code, default initializers for structs are simply not a part of the language. You can initialize its member variables upon declaration of an object of that struct's type.
An example of how you might do this with a C compiler:
// definition of the struct
struct uninter
{
char nome[5];
int RU;
};
// declaration of an instance of an object of type struct uninter
struct uninter x = {{'L','U','C','A','S'}, 2613496};
// alternative declaration if using C99 standard with designated initializers
struct uninter y = {
.nome={'L','U','C','A','S'},
.RU= 2613496
};
A: In general, you cannot initialize a struct's contents in C in the definition of the struct. I suggest something similar to:
struct uninter
{
char nome[5];
int RU;
};
struct uninter aluno = {.nome = "LUCAS", .RU = 2613496};
A: The problem with the code is that it is using non-static data member initializers:
struct uninter
{
char nome[5] = {'L','U','C','A','S'};
int RU = 2613496;
}; struct uninter aluno;
... which is a C++11 feature, and therefore isn't portable unless you are using a C++11 (or later) compiler. (It might still compile under older compilers, if they have that feature enabled as a compiler-specific extension, but they are politely warning you that you can't expect it to compile everywhere)
If you don't want your program to require C++11 or later to compile, the easiest thing to do would be to rewrite it so that the struct's member variables are initialized via a different mechanism. For example (assuming your c tag is intentional) you could have an init-method do it for you:
struct uninter
{
char nome[5+1]; // +1 for the NUL/terminator byte!
int RU;
}; struct uninter aluno;
void Init_uninter(uninter * u)
{
strcpy(u->nome, "LUCAS");
u->RU = 2613496;
}
[...]
int main() {
int i;
Init_uninter(&aluno);
[...]
... or if you actually intended to specify/use a pre-C++11 version of C++, a default-constructor would do the trick a bit more gracefully:
struct uninter
{
uninter()
{
strcpy(nome, "LUCAS");
RU = 2613496;
}
char nome[5+1]; // +1 for the NUL terminator byte!
int RU;
}; struct uninter aluno;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/57861831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why has the magic method '__get' been called twice? I have seen some code like the below, and it is strange that the __get method has been called twice. Why?
class Foo {
private $bar;
function __get($name){
echo "__get is called!";
return $this->$name;
}
function __unset($name){
unset($this->$name);
}
}
$foo = new Foo;
unset($foo->bar);
echo $foo->bar;
Attention: unset($foo->bar) will not call the __get.
A: For me, it looks like a bug. Put some debugging code (the following) and see the result:
<?php
class Foo {
private $bar;
function __get($name){
echo "__get(".$name.") is called!\n";
debug_print_backtrace();
$x = $this->$name;
return $x;
}
function __unset($name){
unset($this->$name);
echo "Value of ". $name ." After unsetting is \n";
echo $this->$name;
echo "\n";
}
}
echo "Before\n";
$foo = new Foo;
echo "After1\n";
unset($foo->bar);
echo "After2\n";
echo $foo->bar;
echo "After3\n";
echo $foo->not_found;
?>
The result is:
Before
After1
Value of bar After unsetting is
__get(bar) is called!
#0 Foo->__get(bar) called at [E:\temp\t1.php:17]
#1 Foo->__unset(bar) called at [E:\temp\t1.php:24]
PHP Notice: Undefined property: Foo::$bar in E:\temp\t1.php on line 9
After2
__get(bar) is called!
#0 Foo->__get(bar) called at [E:\temp\t1.php:26]
__get(bar) is called!
#0 Foo->__get(bar) called at [E:\temp\t1.php:9]
#1 Foo->__get(bar) called at [E:\temp\t1.php:26]
PHP Notice: Undefined property: Foo::$bar in E:\temp\t1.php on line 9
After3
__get(not_found) is called!
#0 Foo->__get(not_found) called at [E:\temp\t1.php:28]
PHP Notice: Undefined property: Foo::$not_found in E:\temp\t1.php on line 9
A: __get is invoked in
1)
return $this->$name;
2)
echo $foo->bar;
in the code:
class Foo {
private $bar;
function __get($name){
echo "__get is called!";
return $this->$name; *** here ***
}
function __unset($name){
unset($this->$name);
}
}
$foo = new Foo;
unset($foo->bar);
echo $foo->bar; *** and here ***
__get() is utilized for reading data from inaccessible properties.
So, unsetting the variable makes both $foo->bar and $this->bar inaccessible. However, if the unset is removed, then $foo->bar is inaccessible but $this->bar is accessible.
However, I don't know how PHP avoids calling the function recursively. Maybe PHP is smart, or the variable is set again at some point.
A: The magic __get function is called every time you try to read an inaccessible property. If you look at your code, you do this exactly two times: once in the unset function and once in the echo statement.
unset($foo->bar);
echo $foo->bar;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7567603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Calling click function from external js file I have an onClick function in an external file, navigation.js, which is included at the bottom of the page. This is navigation.js:
( function() {
function openNav() {
document.getElementById("mySidenav").style.width = "250px";
}
function closeNav() {
document.getElementById("mySidenav").style.width = "0";
}
var open = document.getElementById("openNav");
var close = document.getElementById("closeNav");
open.addEventListener("click", openNav, false);
close.addEventListener("click", closeNav, false);
} )();
And this is the part of the WordPress page where the click event should occur:
<div id="mySidenav" class="sidenav">
<a href="javascript:void(0)" class="closebtn" id="closeNav">×</a>
<?php
wp_nav_menu( array(
'theme_location' => 'menu-2',
'menu_id' => 'primary-menu-bottom',
'walker' => new My_Walker(),
) );
?>
</div>
<span class="open-menu" style="font-size:30px;cursor:pointer" id="openNav">☰ open</span>
That is actually a hamburger menu when the screen is resized. The problem is that the function is not working. Can I call onClick events from external js files, or must I put the code on the same page where the event should occur?
A: The problem probably lies in how you load your script, as there seem to be nothing wrong with it. You have to ensure that when the code is run, all of the required DOM elements are already rendered by the browser.
You can use the window.onload handler (or its .addEventListener() equivalent) to call your initialization script after the page has been fully loaded:
window.onload = function() {
function openNav() {
document.getElementById("mySidenav").style.width = "250px";
}
function closeNav() {
document.getElementById("mySidenav").style.width = "0";
}
var open = document.getElementById("openNav");
var close = document.getElementById("closeNav");
open.addEventListener("click", openNav, false);
close.addEventListener("click", closeNav, false);
};
You can see this code piece in action here: https://jsfiddle.net/sLLLtb3x/
(Please note that I've switched how the JavaScript code is loaded, setting it to "no wrap - in head", to simulate the described loading behavior.)
As for the question of using functions in "external scripts" - in general, yes, you can do that, but in your case, your functions are wrapped in Immediately Invoked Functional Expression, which prevents your functions from leaking to the global scope. You can either manually assign them to some global variable, or you could remove the IIFE (keep in mind that you still need the onload behavior).
But as I've said already, your code example is fine, as you are assigning the event handler within the code piece itself (which is totally fine).
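If other scripts or inline onclick attributes do need to reach these functions, one option mentioned above is to expose them through a single global name instead of removing the IIFE entirely. A minimal sketch, where navControls is a name I made up:

```javascript
var navControls = (function () {
  function openNav() {
    document.getElementById("mySidenav").style.width = "250px";
  }
  function closeNav() {
    document.getElementById("mySidenav").style.width = "0";
  }
  // Return only what other scripts should be able to call;
  // everything else stays private to the IIFE.
  return { openNav: openNav, closeNav: closeNav };
})();

// Elsewhere, e.g. onclick="navControls.openNav()", the functions
// are now reachable through that single global name.
```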
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/45653073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: JSF can't resolve bean in xhtml file using Spring Boot I have a simple JSF xhtml page containing:
<h:body>
<h:outputText value='#{someBean == null ? "null" : "not null"}'/>
</h:body>
my faces-config contains:
<application>
<el-resolver>org.springframework.web.jsf.el.SpringBeanFacesELResolver</el-resolver>
</application>
and i have a java configuration bean like
@Bean
public ServletRegistrationBean jsfServletRegistration (ServletContext servletContext) {
servletContext.setInitParameter("com.sun.faces.forceLoadConfiguration", Boolean.TRUE.toString());
ServletRegistrationBean srb = new ServletRegistrationBean();
srb.setServlet(new FacesServlet());
srb.setUrlMappings(Collections.singletonList("*.xhtml"));
srb.setLoadOnStartup(1);
return srb;
}
and a bean like:
@Component
@ViewScoped
@ManagedBean
public class SomeBean {
public String message() {
return "some";
}
}
Here, when I go to the page, I get the string "null" as the response...
Note: I can access my pages using the browser.
Why can't I access someBean inside the xhtml file?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/65088769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: RestKit - Map to a simple instance variable I am quite new to RestKit. I know that the object mapping of RestKit is very powerful. However, in some cases, I just want to map to a simple variable. For example, take a look at the following response:
{
"response": 400,
"result": {
"error_message": "Invalid session token"
}
}
I just want to know the value of "response" or "error_message". It's quite wasteful to create two classes, "response" and "result", since these classes have only a few fields.
Any recommendation is welcome.
A: You can create a single class with response and message properties, then use mappings:
@"response" : @"response",
@"result.error_message" : @"message"
Or you can just map into a dictionary for error responses and then use the keypath to access the message.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24304926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How should I mark a method as "obsolete" in JS? I am refactoring a rather large JS file that contains many unrelated methods into something that will regroup the methods together according to their usage, and renaming some of them as needed (to prevent misleading names).
However, most of the web pages that actually use this code are spread across different code branches, preventing me from doing a simple find&replace. I could do it in all the different branches, but that requires doing maintenance in 30+ branches at the same time, or probably forgetting to perform the renaming once the change is merged in the other branches (by me or other team members).
If this was C#, I could just mark the method with [Obsolete] and it would flag the necessary changes as needed, so I am looking for something somewhat equivalent. I will still provide functionality with the old interface for a while by just redirecting the calls to the new methods, but I'd like to "force" people to switch to the new interface as they work on the pages for other reasons.
Is there any other way to do something similar, besides adding a debugger;
statement and a verbose comment to every method so that it breaks when developing but not in production?
A: There are a couple of things you can do in a transition period.
*
*Add the @deprecated JSDoc flag.
*Add a console warning message that indicates that the function is deprecated.
A sample:
/**
* @deprecated Since version 1.0. Will be deleted in version 3.0. Use bar instead.
*/
function foo() {
console.warn("Calling deprecated function!");
bar();
}
A: Here's what we've found for Visual Studio 2013 : http://msdn.microsoft.com/en-us/library/vstudio/dn387587.aspx
It's not tested yet as we haven't made the switch, but it looks promising.
In the meantime, I am inserting a flag at page load depending on context such as :
<%
#if DEBUG
Response.Write("<script type=\"text/javascript\"> Flags.Debug = true; </script>");
#endif
%>
and then I call a method that throws an error if the flag is true, or redirect to the new call if it is in release configuration.
A: function obsolete(oldFunc, newFunc) {
const wrapper = function() {
console.warn(`WARNING! Obsolete function called. Function ${oldFunc.name} has been deprecated, please use the new ${newFunc.name} function instead!`)
newFunc.apply(this, arguments)
}
wrapper.prototype = newFunc.prototype
return wrapper
}
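For example, the wrapper could be used to phase an old name out. It is restated here, with a small tweak so the return value is also forwarded, so that the snippet runs on its own; oldSum and newSum are made-up names:

```javascript
function obsolete(oldFunc, newFunc) {
  const wrapper = function () {
    console.warn(
      `WARNING! Obsolete function called. Function ${oldFunc.name} has been deprecated, please use the new ${newFunc.name} function instead!`
    );
    return newFunc.apply(this, arguments); // also forward the return value
  };
  wrapper.prototype = newFunc.prototype;
  return wrapper;
}

function newSum(a, b) { return a + b; }
function oldSum() {} // kept only so the warning can mention its name

// Existing callers keep calling the old name; the work is done by newSum.
const deprecatedSum = obsolete(oldSum, newSum);
console.log(deprecatedSum(2, 3)); // 5
```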
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19412660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "82"
}
|
Q: Rename project built with Gradle I have a Java project built with Gradle that is using a package com.example.package1. I would like to change the package structure to com.example.package2 and name the corresponding directories accordingly.
Is there some refactoring (Eclipse-like) tool that can do that for me? Or which files might I need to look into? e.g. build.gradle, settings.gradle.
A: When you rename the directories in your project to fix the Gradle build, you should modify the following files where you will need to change the old directory name to the new.
*
*.project
*.classpath
*build.gradle
*settings.gradle
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/17243826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to disable weak cipher suites in java application server for ssl I have a custom Java application server running. I am seeing that there are some weak cipher suites supported by the server, for example some 112-bit ciphers. I want to disable those. Where can I do that? I also want to enable TLSv1.2. The following is the code that initializes the socket.
KeyStore ks = KeyStore.getInstance("JKS");
ks.load(new FileInputStream(sslstore), sslstorepass.toCharArray());
KeyManagerFactory kmf = KeyManagerFactory.getInstance("SunX509");
kmf.init(ks, sslcertpass.toCharArray());
SSLContext sc = SSLContext.getInstance("TLS");
sc.init(kmf.getKeyManagers(), null, new SecureRandom());
SSLServerSocketFactory ssf = sc.getServerSocketFactory();
serverSocket = ssf.createServerSocket(port);
System.out.println("Socket initialized");
A: Java allows cipher suites to be removed/excluded from use in the security policy file called java.security that's located in your JRE: $PATH/[JRE]/lib/security. The jdk.tls.disabledAlgorithms property in the policy file controls TLS cipher selection. The jdk.certpath.disabledAlgorithms property controls the algorithms you will come across in SSL certificates. Oracle has more information about this here.
In the security policy file, if you entered the following: jdk.tls.disabledAlgorithms=MD5, SHA1, DSA, RSA keySize < 4096 It would make it, so that MD5, SHA1, DSA are never allowed, and RSA is allowed only if the key is at least 4096 bits.
To solve the TLS version issue, just put SSLContext sc = SSLContext.getInstance("TLSv1.2"); It will support all the versions.
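Besides the java.security policy file, protocol versions and cipher suites can also be restricted per socket in code. A sketch (the class name and the filtered substrings are illustrative only, not an exhaustive weak-cipher list):

```java
import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLServerSocket;
import javax.net.ssl.SSLServerSocketFactory;
import java.util.ArrayList;
import java.util.List;

public class CipherRestriction {

    // Drop suites whose names contain substrings associated with weak crypto.
    // The substring list here is just an example, not exhaustive.
    static String[] filterStrong(String[] suites) {
        List<String> strong = new ArrayList<>();
        for (String suite : suites) {
            if (!suite.contains("3DES") && !suite.contains("RC4")
                    && !suite.contains("_NULL_") && !suite.contains("EXPORT")) {
                strong.add(suite);
            }
        }
        return strong.toArray(new String[0]);
    }

    public static void main(String[] args) throws Exception {
        SSLContext sc = SSLContext.getInstance("TLSv1.2");
        sc.init(null, null, new java.security.SecureRandom());
        SSLServerSocketFactory ssf = sc.getServerSocketFactory();
        try (SSLServerSocket serverSocket =
                 (SSLServerSocket) ssf.createServerSocket(0)) { // 0 = any free port
            // Allow only TLSv1.2 handshakes on this socket
            serverSocket.setEnabledProtocols(new String[] {"TLSv1.2"});
            // Keep only the suites that pass the filter above
            serverSocket.setEnabledCipherSuites(
                filterStrong(serverSocket.getEnabledCipherSuites()));
            System.out.println("Protocols: "
                + String.join(",", serverSocket.getEnabledProtocols()));
        }
    }
}
```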
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/36597380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Error occurred during initialization of VM Unable to load native library: Can't find dependent libraries [Compiling C file, JNI] I am implementing a "reverse" JNI to call Java from C following this tutorial: How to Call Java Functions from C Using JNI. Whenever I compile my file, "CTest.cpp", and run "CTest.exe" in Windows Command Prompt or Visual Studio, I always receive the following error:
Error occurred during initialization of VM Unable to load native library: Can't find dependent libraries
I know there are similar questions out there: I've seen them and applied the recommendations (such as linking my jvm.dll and jvm.lib paths in the PATH environment variable). I also added the jvm.dll location inside my file properties via Visual Studio. I really need help; I have been stuck on this for a few days now. Thank you for your time!
CTest.cpp is below:
#include "StdAfx.h"
#include <stdio.h>
#include <jni.h>
#include <string.h>
#define PATH_SEPARATOR ';' /* define it to be ':' on Solaris */
#define USER_CLASSPATH "." /* where Prog.class is */
struct ControlDetail
{
int ID;
char Name[100];
char IP[100];
int Port;
};
struct WorkOrder
{
char sumSerialId[20];
char accessNumber[18];
char actionType[4];
char effectiveDate[24];
char fetchFlag[2];
char reason[456];
char accessSource[100];
};
JNIEnv* create_vm(JavaVM ** jvm) {
JNIEnv *env;
JavaVMInitArgs vm_args;
JavaVMOption options;
options.optionString = "-Djava.class.path=C:\\Users\\My Name\\Downloads\\src_CJNIJava\\Java Src\\TestStruct"; //Path to the java source code
vm_args.version = JNI_VERSION_1_8; //JDK version. This indicates version 1.8
vm_args.nOptions = 1;
vm_args.options = &options;
vm_args.ignoreUnrecognized = 0;
int ret = JNI_CreateJavaVM(jvm, (void**)&env, &vm_args);
if (ret < 0)
printf("\nUnable to Launch JVM\n");
return env;
}
int main(int argc, char* argv[])
{
JNIEnv *env;
JavaVM * jvm;
env = create_vm(&jvm);
if (env == NULL)
return 1;
struct ControlDetail ctrlDetail;
ctrlDetail.ID = 11;
strcpy(ctrlDetail.Name, "HR-HW");
strcpy(ctrlDetail.IP, "10.32.164.133");
ctrlDetail.Port = 9099;
printf("Struct Created in C has values:\nID:%d\nName:%s\n IP:%s\nPort:%d\n", ctrlDetail.ID, ctrlDetail.Name, ctrlDetail.IP, ctrlDetail.Port);
/********************************************************/
struct WorkOrder WO[2];
strcpy(WO[0].sumSerialId, "2000");
strcpy(WO[0].accessNumber, "2878430");
strcpy(WO[0].actionType, "04");
strcpy(WO[0].effectiveDate, "25-12-2007 12:20:30 PM");
strcpy(WO[0].fetchFlag, "0");
strcpy(WO[0].reason, "Executed Successfully");
strcpy(WO[0].accessSource, "PMS");
strcpy(WO[1].sumSerialId, "1000");
strcpy(WO[1].accessNumber, "2878000");
strcpy(WO[1].actionType, "T4");
strcpy(WO[1].effectiveDate, "25-12-2007 11:20:30 PM");
strcpy(WO[1].fetchFlag, "0");
strcpy(WO[1].reason, "");
strcpy(WO[1].accessSource, "RMS");
jclass clsH = NULL;
jclass clsC = NULL;
jclass clsW = NULL;
jclass clsR = NULL;
jmethodID midMain = NULL;
jmethodID midCalling = NULL;
jmethodID midDispStruct = NULL;
jmethodID midDispStructArr = NULL;
jmethodID midRetObjFunc = NULL;
jmethodID midCtrlDetConst = NULL;
jmethodID midWoConst = NULL;
jobject jobjDet = NULL;
jobject jobjRetData = NULL;
jobjectArray jobjWOArr = NULL;
//Obtaining Classes
clsH = env->FindClass("HelloWorld");
clsC = env->FindClass("ControlDetail");
clsW = env->FindClass("WorkOrder");
//Obtaining Method IDs
if (clsH != NULL)
{
midMain = env->GetStaticMethodID(clsH, "main", "([Ljava/lang/String;)V");
midCalling = env->GetStaticMethodID(clsH, "TestCall", "(Ljava/lang/String;)V");
midDispStruct = env->GetStaticMethodID(clsH, "DisplayStruct", "(LControlDetail;)I");
midDispStructArr = env->GetStaticMethodID(clsH, "DisplayStructArray", "([LWorkOrder;)V");
midRetObjFunc = env->GetStaticMethodID(clsH, "ReturnObjFunc", "()Ljava/lang/Object;");
}
else
{
printf("\nUnable to find the requested class\n");
}
if (clsC != NULL)
{
//Get constructor ID for ControlDetail
midCtrlDetConst = env->GetMethodID(clsC, "<init>", "(ILjava/lang/String;Ljava/lang/String;I)V");
}
else
{
printf("\nUnable to find the requested class\n");
}
if (clsW != NULL)
{
//Get Constructor ID for WorkOrder
midWoConst = env->GetMethodID(clsW, "<init>", "(Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;Ljava/lang/String;)V");
}
else
{
printf("\nUnable to find the requested class\n");
}
/************************************************************************/
/* Now we will call the functions using the their method IDs */
/************************************************************************/
if (midMain != NULL)
env->CallStaticVoidMethod(clsH, midMain, NULL); //Calling the main method.
if (midCalling != NULL)
{
jstring StringArg = env->NewStringUTF("\nTestCall:Called from the C Program\n");
//Calling another static method and passing string type parameter
env->CallStaticVoidMethod(clsH, midCalling, StringArg);
}
printf("\nGoing to Call DisplayStruct\n");
if (midDispStruct != NULL)
{
if (clsC != NULL && midCtrlDetConst != NULL)
{
jstring StringArgName = env->NewStringUTF(ctrlDetail.Name);
jstring StringArgIP = env->NewStringUTF(ctrlDetail.IP);
//Creating the Object of ControlDetail.
jobjDet = env->NewObject(clsC, midCtrlDetConst, (jint)ctrlDetail.ID, StringArgName, StringArgIP, (jint)ctrlDetail.Port);
}
if (jobjDet != NULL && midDispStruct != NULL)
env->CallStaticIntMethod(clsH, midDispStruct, jobjDet); //Calling the method and passing ControlDetail Object as parameter
}
//Calling a function from java and passing Structure array to it.
printf("\n\nGoing to call DisplayStructArray From C\n\n");
if (midDispStructArr != NULL)
{
if (clsW != NULL && midWoConst != NULL)
{
//Creating the Object Array that will contain 2 structures.
jobjWOArr = (jobjectArray)env->NewObjectArray(2, clsW, env->NewObject(clsW, midWoConst, env->NewStringUTF(""), env->NewStringUTF(""), env->NewStringUTF(""),
env->NewStringUTF(""), env->NewStringUTF(""), env->NewStringUTF(""), env->NewStringUTF("")));
//Initializing the Array
for (int i = 0; i<2; i++)
{
env->SetObjectArrayElement(jobjWOArr, i, env->NewObject(clsW, midWoConst, env->NewStringUTF(WO[i].sumSerialId),
env->NewStringUTF(WO[i].accessNumber),
env->NewStringUTF(WO[i].actionType),
env->NewStringUTF(WO[i].effectiveDate),
env->NewStringUTF(WO[i].fetchFlag),
env->NewStringUTF(WO[i].reason),
env->NewStringUTF(WO[i].accessSource)));
}
}
//Calling the Static method and passing the Structure array to it.
if (jobjWOArr != NULL && midDispStructArr != NULL)
env->CallStaticVoidMethod(clsH, midDispStructArr, jobjWOArr);
}
//Calling a Static function that return an Object
if (midRetObjFunc != NULL)
{
//Calling the function and storing the return object into jobject type variable
//Returned object is basically a structure having two fields (string and integer)
jobjRetData = (jobject)env->CallStaticObjectMethod(clsH, midRetObjFunc, NULL);
//Get the class of object
clsR = env->GetObjectClass(jobjRetData);
//Obtaining the Fields data from the returned object
jint nRet = env->GetIntField(jobjRetData, env->GetFieldID(clsR, "returnValue", "I"));
jstring jstrLog = (jstring)env->GetObjectField(jobjRetData, env->GetFieldID(clsR, "Log", "Ljava/lang/String;"));
const char *pLog = env->GetStringUTFChars(jstrLog, 0);
printf("\n\nValues Returned from Object are:\nreturnValue=%d\nLog=%s", nRet, pLog);
//After using the String type data release it.
env->ReleaseStringUTFChars(jstrLog, pLog);
}
//Release resources.
int n = jvm->DestroyJavaVM();
return 0;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31466611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: How to create a recursive queue? I am trying to implement a recursive queue using the trait Element<T> as a container for all the functions of the different nodes of the queue and two structs implementing it called Node<T> and End.
The Node<T> struct is supposed to handle all the functionality within the queue itself and the End struct's purpose is to deal with the case of the last node.
I've got the following code:
trait Element<T> {
fn append_item(self, item: T) -> Node<T>;
}
struct Node<T> {
data: T,
successor: Box<dyn Element<T>>
}
impl<T> Element<T> for Node<T> {
fn append_item(mut self, item: T) -> Node<T> {
self.successor = Box::new(self.successor.append_item(item));
self
}
}
struct End;
impl<T> Element<T> for End {
fn append_item(self, item: T) -> Node<T> {
Node { data: item, successor: Box::new(self) }
}
}
The problem is, that I get two errors:
*
*Cannot move a value of type dyn Element<T>
*The parameter type T may not live long enough
both on the same line in Node::append_item.
Now, I get why the first error occurs (because the size of dyn Element<T> cannot be statically determined) but I don't know how to work around it and I have no idea why the second error occurs.
A: error[E0161]: cannot move a value of type `dyn Element<T>`
--> src/lib.rs:12:35
|
12 | self.successor = Box::new(self.successor.append_item(item));
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the size of `dyn Element<T>` cannot be statically determined
The issue here is that fn append_item(self, item: T) takes self by value, but in this case self has type dyn Element<T>, which is unsized and can therefore never be passed by value.
The easiest solution here is to just take self: Box<Self> instead. This means that the method is called on a Box<Self> instead of directly on the (unsized) value. This will work perfectly fine with trait objects like dyn Element<T>, and it does not require the type to be sized.
(Additionally, in the updated code below, I've changed the return type to Box<Node<T>> for convenience, but this is not required per se, just a bit more convenient.)
trait Element<T> {
fn append_item(self: Box<Self>, item: T) -> Box<Node<T>>;
}
impl<T> Element<T> for Node<T> {
fn append_item(mut self: Box<Self>, item: T) -> Box<Node<T>> {
self.successor = self.successor.append_item(item);
self
}
}
impl<T> Element<T> for End {
fn append_item(self: Box<Self>, item: T) -> Box<Node<T>> {
Box::new(Node { data: item, successor: self })
}
}
[Playground link]
error[E0310]: the parameter type `T` may not live long enough
--> src/lib.rs:12:26
|
12 | self.successor = Box::new(self.successor.append_item(item));
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ...so that the type `T` will meet its required lifetime bounds
|
The issue here is that dyn Trait has an implicit lifetime. In reality, it's dyn '_ + Trait. What exactly that lifetime is depends on the context, and you can read the exact rules in the Rust reference, but if neither the containing type nor the trait itself has any references, then the lifetime will always be 'static.
This is the case with Box<dyn Element<T>>, which really is Box<dyn 'static + Element<T>>. In other words, whatever type is contained by the Box must have a 'static lifetime. For this to be the case for Node<T>, it must also be that T has a 'static lifetime.
This is easy to fix, though: just follow the compiler's suggestion to add a T: 'static bound:
impl<T: 'static> Element<T> for Node<T> {
// ^^^^^^^^^
// ...
}
[Playground link]
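A short usage sketch of the Box<Self> version above (definitions restated so the snippet compiles on its own):

```rust
trait Element<T> {
    fn append_item(self: Box<Self>, item: T) -> Box<Node<T>>;
}

struct Node<T> {
    data: T,
    successor: Box<dyn Element<T>>,
}

struct End;

impl<T: 'static> Element<T> for Node<T> {
    fn append_item(mut self: Box<Self>, item: T) -> Box<Node<T>> {
        // Recurse down the queue; End eventually creates the new node.
        self.successor = self.successor.append_item(item);
        self
    }
}

impl<T: 'static> Element<T> for End {
    fn append_item(self: Box<Self>, item: T) -> Box<Node<T>> {
        Box::new(Node { data: item, successor: self })
    }
}

fn main() {
    // Start from the empty queue and append three items;
    // the first item appended stays at the head.
    let queue = Box::new(End).append_item(1).append_item(2).append_item(3);
    println!("head = {}", queue.data); // head = 1
}
```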
A: Frxstrem already elaborated how to get around the need to move by using Box<Self> as receiver. If you need more flexibility in the parameter type (i.e. be able to hold on to references as well as owned types) instead of requiring T: 'static you can introduce a named lifetime:
trait Element<'a, T: 'a> {
fn append_item(self: Box<Self>, item: T) -> Node<'a, T>;
}
struct Node<'a, T: 'a> {
data: T,
successor: Box<dyn Element<'a, T> + 'a>,
}
impl<'a, T: 'a> Element<'a, T> for Node<'a, T> {
fn append_item(self: Box<Self>, new_data: T) -> Node<'a, T> {
Node {
successor: Box::new(self.successor.append_item(new_data)),
..*self
}
}
}
struct End;
impl<'a, T: 'a> Element<'a, T> for End {
fn append_item(self: Box<Self>, data: T) -> Node<'a, T> {
Node {
data,
successor: self,
}
}
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/75377008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: angular 4 router auth-guard not invoked on same route Hi, I configured this routing:
const routes: Routes = [
{
path: '',
component: LayoutComponent,
canActivateChild: [AuthGuardService],
children: [
{
path: '',
pathMatch: 'full',
redirectTo: '/buchbestellungen',
canActivate: [AuthGuardService]
},
{
path: 'buchbestellungen',
component: BuchbestellungenComponent,
canActivate: [AuthGuardService]
},
{
path: 'buchbestellungen2',
component: Buchbestellungen2Component,
canActivate: [AuthGuardService]
},
]
},
{path: 'login', component: LoginComponent},
{path: '**', component: NotFoundComponent}
];
I don't want to have to care about the name of the login page, nor whether it is the target page after logout in the future. So after logout is done, the logout component redirects to the start page.
For now, there is no start-page component (later, there may be a dashboard), just a redirectTo to the "buchbestellungen" route if you are authenticated. If you are not logged in, visiting the start page redirects to /buchbestellungen and the auth-guard service redirects to the login route.
This works perferct from every route (e.g. buchbestellungen2), but not, when i log out while i'm on /buchbestellungen. This redirects to "/" and then back to buchbestellungen. But the auth-guard is not invoked (i have a Console.log() in the canActivate() method, but it's also not invoked).
Example 1:
buchbestellungen2 -> logout() -> redirects to / -> redirects to /buchbestellungen and, intercepted by the auth guard, I end up on the /login route
Example 2:
buchbestellungen -> logout() -> redirects to / -> redirects to /buchbestellungen (no auth-guard and no redirect to /login)
Is it possible that the router doesn't invoke the auth guard because I am coming from /buchbestellungen and this route is already authenticated? Is there any way to force the router to invoke the guard?
Thanks
A: Angular by default reuses a component if the route doesn't change (and a redirect doesn't count as a route change). Apart from implementing a custom RouteReuseStrategy (which seems like overkill here), the only idea I have is creating some kind of LogoutComponent attached to a /logout path. That component would redirect the user back to the root during ngOnInit, or it could be guarded by the AuthGuard, which would perform the redirection.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/48277269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Preservation of exception cause when redelivering failed activemq jms messages processed by Mule ESB I have built several Mule processes that consume messages from JMS queues (ActiveMQ). Whenever a Mule component throws an exception, the transaction that consumes the messages rolls back and the message gets redelivered to the original queue. After a few tries, it is sent to a dead letter queue (DLQ.queueName).
We have this working OK, but we are missing the exception thrown, either the first one or the last one, we don't care (it'll probably be the same). This is something that can be done on other brokers (like WebLogic JMS), but I've been struggling with this for a while to no avail.
Does anybody know if this is something that can be configured, or do I need to build a specific Mule exception handler or policy for ActiveMQ?
TIA,
Martin
A: That exception is lost in ActiveMQ at the moment (don't know about Mule) but it is reported to the log as error.
It would make a good enhancement, remembering the string form of the exception in the ActiveMQConsumer and passing it back to the broker with the poison Ack that forces it to go to
the DLQ. In that way, it could be remembered as a message property in the resulting DLQ message.
How would you like to handle the exception, have it reported to a connection exception listener or have it recorded in the DLQ message?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/5069733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: HTTP ERROR 500 after deploy on web service I deployed my app on Azure Web Service.
The app works locally but when I deploy to Azure I get internal server error (500).
I already clicked on "Allow Azure services and resources to access this server" on Set Server Firewall.
Most likely causes:
IIS received the request; however, an internal error occurred during the processing of the request. The root cause of this error depends on which module handles the request and what was happening in the worker process when this error occurred.
IIS was not able to access the web.config file for the Web site or application. This can occur if the NTFS permissions are set incorrectly.
IIS was not able to process configuration for the Web site or application.
The authenticated user does not have permission to use this DLL.
The request is mapped to a managed handler but the .NET Extensibility Feature is not installed.
Then I went to the Kudu console at https://sitename.scm.azurewebsites.net and found the web.config file, which looks like this:
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<location path="." inheritInChildApplications="false">
<system.webServer>
<handlers>
<add name="aspNetCore" path="*" verb="*" modules="AspNetCoreModuleV2" resourceType="Unspecified" />
</handlers>
<aspNetCore processPath="dotnet" arguments=".\CRM_BackEnd.dll" stdoutLogEnabled="false" stdoutLogFile=".\logs\stdout" hostingModel="inprocess" />
</system.webServer>
</location>
</configuration>
So how can I resolve this?
My Program.cs is
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var defaultConnectionString = string.Empty;
builder.Services.AddDbContext<CRMDBContext>(option =>
{
option.UseSqlServer(builder.Configuration.GetConnectionString("BE_CRM"));
});
builder.Services.Configure<AppSettings>(builder.Configuration.GetSection("AppSettings"));
builder.Services.Configure<PublitioApi>(builder.Configuration.GetSection("PublitioApi"));
builder.Services.Configure<SendGirdApi>(builder.Configuration.GetSection("SendGridApi"));
builder.Services.AddDependenceInjection();
builder.Services.AddSwagger();
builder.Services.AddSwaggerGenNewtonsoftSupport();
builder.Services.AddControllers().AddJsonOptions(options =>
options.JsonSerializerOptions.Converters.Add(new JsonStringEnumConverter()));
var app = builder.Build();
// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
app.UseDeveloperExceptionPage();
//app.UseSwagger();
//app.UseSwaggerUI();
}
else
{
app.Use(async (context, next) =>
{
await next();
if(context.Response.StatusCode == 404 && !Path.HasExtension(context.Request.Path.Value))
{
context.Request.Path = "/swagger/index.html";
await next();
}
});
app.UseHsts();
}
app.UseSwagger(c =>
{
c.RouteTemplate = "MyBackEnd/swagger/{documentName}/swagger.json";
});
app.UseSwaggerUI(c =>
{
c.SwaggerEndpoint("/MyBackEnd/swagger/v1/swagger.json","CRM_BackEnd v1");
});
app.UseHttpsRedirection();
app.UseCors(builder =>
{
builder
.AllowAnyOrigin()
.AllowAnyMethod()
.AllowAnyHeader();
});
app.UseAuthentication();
app.UseJwt();
app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
app.Run();
And .csproj is
<Project Sdk="Microsoft.NET.Sdk.Web">
<PropertyGroup>
<TargetFramework>net6.0</TargetFramework>
<Nullable>disable</Nullable>
<ImplicitUsings>enable</ImplicitUsings>
<UserSecretsId>1798ad59-2539-4d78-bdc2-fd2f9983b3e9</UserSecretsId>
</PropertyGroup>
<ItemGroup>
<PackageReference Include="AutoMapper.Extensions.Microsoft.DependencyInjection" Version="12.0.0" />
<PackageReference Include="BCrypt.Net-Next" Version="4.0.3" />
<PackageReference Include="Firebase.Auth" Version="1.0.0" />
<PackageReference Include="FirebaseAdmin" Version="2.3.0" />
<PackageReference Include="FirebaseAuthentication.net" Version="3.7.2" />
<PackageReference Include="FirebaseStorage.net" Version="1.0.3" />
<PackageReference Include="Microsoft.AspNetCore.Http.Abstractions" Version="2.2.0" />
<PackageReference Include="Microsoft.EntityFrameworkCore" Version="6.0.9" />
<PackageReference Include="Microsoft.EntityFrameworkCore.SqlServer" Version="6.0.9" />
<PackageReference Include="Microsoft.EntityFrameworkCore.Tools" Version="6.0.9">
<PrivateAssets>all</PrivateAssets>
<IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
</PackageReference>
<PackageReference Include="Newtonsoft.Json" Version="13.0.1" />
<PackageReference Include="Otp.NET" Version="1.2.2" />
<PackageReference Include="Publitio-NET" Version="1.0.1" />
<PackageReference Include="SendGrid" Version="9.28.1" />
<PackageReference Include="Swashbuckle.AspNetCore" Version="6.2.3" />
<PackageReference Include="Swashbuckle.AspNetCore.Newtonsoft" Version="6.4.0" />
<PackageReference Include="System.IdentityModel.Tokens.Jwt" Version="6.25.0" />
<PackageReference Include="System.Security.Cryptography.ProtectedData" Version="6.0.0" />
</ItemGroup>
<ItemGroup>
<Content Update="firebasesdk.json">
<CopyToOutputDirectory>Always</CopyToOutputDirectory>
</Content>
</ItemGroup>
<ProjectExtensions><VisualStudio><UserProperties appsettings_1json__JsonSchema="" /></VisualStudio></ProjectExtensions>
</Project>
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/74442333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: how to make a sass mixin loop from array value(s)? I'm looking for some advice on my issue.
I have this mixin which works great..
@mixin theme-change($theme,$attr,$value) {
.theme-#{$theme} & {
#{$attr} : $value;
}
}
which I then output like this..
@include theme-change('media','background-color', #242424);
But I need to extend this mixin to handle multiple attributes, and all my attempts either result in compilation issues or just don't work.
See my closest attempt below (from this you can see what I'm trying to do):
@mixin theme-change($theme,$array) {
.theme-#{$theme} & {
@each $attr in $array {
#{$attr};
}
}
}
This is how I'm outputting it..
@include theme-change('media',(
'color: #ffffff',
'border-color: #000000'
));
The above obviously fails, but you can see what I'm trying to do.
I'm not sure if the array should be written in this format, but then obviously I need to declare the variables in the mixin...
@include theme-change('media',(
'color' : #ffffff,
'border-color' : #000000
));
Any ideas would be great thanks.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/26778659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: php for loop skips by twos I have a function in PHP that builds an HTML string for a table, and inside it I have a for loop:
for ($counter = 0; $counter<10; $counter++){
$htmlString .= code for table row here;
}
The for loop only makes 5 rows, and I even printed out the value of $counter and it says: 0, 2, 4, 6, 8.
I have never seen this before and I can't think of any way to fix it other than changing the loop from 10 to 20, but I'd rather know what is causing this so I can fix it for good.
Here is my full code for the function; I pass in a row from a SQL DB:
function BuildNetworkString($Query){
$NetworkHtml = "<table style='width:100%;'><tr>";
$counter = 0;
if($Query['FacebookID'] != '')
{
$NetworkHtml .= "<td style='width:10%; height:80px; text-align:center;'><a href='http://facebook.com/profile.php?id=" . $Query['FacebookID'] . "' class='black' title='Facebook - " . $Query['FacebookName'] . "'><img width='50px' src='Images/facebook.png'></a></td>";
$counter+=1;
}
for ($counter; $counter<10; $counter++)
{
$NetworkHtml .= "<td style='width:10%; height:80px>  ; $counter</td>";
}
$NetworkHtml .= "</tr></table>";
return $NetworkHtml;
}
A: Try changing:
"<td style='width:10%; height:80px>  ; $counter</td>"
to:
"<td style='width:10%; height:80px'> $counter</td>"
The style attribute is missing its closing quote, so the browser keeps reading the attribute value until the next apostrophe, which is the opening quote of the following cell's style attribute. Each pair of cells collapses into one, which is why only every other counter value appears.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/7865318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to eliminate the repetition of same rows I have a table in my database which looks like this (there can be duplicate tuples in the table):
+-------------+---------+--------+
| ProductName | Status | Branch |
+-------------+---------+--------+
| P1 | dead | 1 |
| P1 | dead | 2 |
| P1 | dead | 2 |
| P2 | expired | 1 |
+-------------+---------+--------+
I want to show the result as follows (the Branch attribute is dynamic):
+-------------+---------+---------+
| ProductName | Branch 1| Branch 2|
+-------------+---------+---------+
| P1          | dead    | dead    |
| P2          | expired | OK      |
+-------------+---------+---------+
After some help I came up with the following solution:
SET @sql = NULL;
SELECT
GROUP_CONCAT(DISTINCT
CONCAT(
'GROUP_CONCAT(case when branch = ''',
branch,
''' then status ELSE NULL end) AS ',
CONCAT('Branch',branch)
)
) INTO @sql
FROM Table1;
SET @sql = CONCAT('SELECT productName, ', @sql, '
FROM Table1
GROUP BY productName');
PREPARE stmt FROM @sql;
EXECUTE stmt;
The result shown is like:
+-------------+---------+-----------+
| productName | Branch1 | Branch2 |
+-------------+---------+-----------+
| p1 | dead | dead,dead |
| p2 | expired | (null) |
+-------------+---------+-----------+
SQL Fiddle. What I want now is to show the product status as "OK" instead of null when the status is null, and also not to concatenate the status when a product is repeated in the table. I tried a lot, but couldn't sort it out. Thanks in advance.
A: This is a bit tricky because there will be repetition if statuses are the same, so the COALESCE has to be on the outside of the GROUP_CONCAT:
SET @sql = NULL;
SELECT
GROUP_CONCAT(DISTINCT
CONCAT(
'COALESCE(GROUP_CONCAT(DISTINCT case when branch = ''',
branch,
''' then status end),''OK'') AS ',
CONCAT('Branch',branch)
)
) INTO @sql
FROM Table1;
SET @sql = CONCAT('SELECT productName, ', @sql, '
FROM Table1
GROUP BY productName');
PREPARE stmt FROM @sql;
EXECUTE stmt;
Link
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31670303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Force code to execute in order? I am seeing a strange problem in my C# code. I have something like this:
public static class ErrorHandler {
public static int ErrorIgnoreCount = 0;
public static void IncrementIgnoreCount() {
ErrorIgnoreCount++;
}
public static void DecrementIgnoreCount() {
ErrorIgnoreCount--;
}
public static void DoHandleError() {
// actual error handling code here
}
public static void HandleError() {
if (ErrorIgnoreCount == 0) {
DoHandleError();
}
}
}
public class SomeClass {
public void DoSomething() {
ErrorHandler.IncrementIgnoreCount();
CodeThatIsSupposedToGenerateErrors(); // some method; not shown
ErrorHandler.DecrementIgnoreCount();
}
}
The problem is that the compiler often decides that the order of the three calls in the DoSomething() method is not important. For example, the decrement may happen before the increment. The result is that when the code that is supposed to generate errors is run, the error handling code fires, which I don't want.
How can I prevent that?
A: Add tracing or logging to your code in the IncrementIgnoreCount, DecrementIgnoreCount and HandleError functions.
That will help you see the real call order.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/42694478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Pandas error TypeError: bad operand type for unary ~: 'float' I have inherited this piece of code
dummy_data1 = {
'id': ['1', '2', '3', '4', '5'],
'Feature1': ['A', 'C', 'E', 'G', 'I'],
'Feature2': ['Mouse', 'dog', 'house and parrot', '23', np.NaN],
'dates': ['12/12/2020','12/12/2020','12/12/2020','12/12/2020','12/12/2020']}
df1 = pd.DataFrame(dummy_data1, columns = ['id', 'Feature1', 'Feature2', 'dates'])
df1 = df1.assign(
Feature2=lambda df: df.Feature2.where(
~df.Feature2.str.isnumeric(),
pd.to_numeric(df.Feature2, errors="coerce").astype("Int64"),
)
)
print(df1)
I know that this is because of the np.NaN value. What does the code do? My understanding is that it tries to convert the string to an int if it is numeric. Also, please tell me how to overcome this issue.
A: You can try via pd.to_numeric() and then fill NaN's:
df['Feature2']=pd.to_numeric(df['Feature2'], errors="coerce").fillna(df['Feature2'])
OR
go with the where() condition by filling those NaN's with fillna() in your condition ~df.Feature2.str.isnumeric():
df['Feature2']=df['Feature2'].where(~df.Feature2.str.isnumeric().fillna(True),
pd.to_numeric(df.Feature2, errors="coerce").astype("Int64")
)
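As a quick check, the first approach can be run end to end on the question's sample data (column names taken from the question):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'id': ['1', '2', '3', '4', '5'],
    'Feature2': ['Mouse', 'dog', 'house and parrot', '23', np.nan],
})

# to_numeric turns every non-numeric string into NaN; fillna then restores
# the original string for those rows, so only '23' ends up numeric.
df['Feature2'] = pd.to_numeric(df['Feature2'], errors='coerce').fillna(df['Feature2'])

print(df['Feature2'].tolist())
```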
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/68603734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: MySQL - Filter one last item I am trying to count the different majors involved in various program courses. I have it mostly done, but noticed that I am counting duplicate usernames if they are registered for more than one course. Can you help me get the DISTINCT tag somewhere in this query?
SELECT a.`major_desc` AS `major`,
(SELECT COUNT(a.`major_desc`) FROM all_students WHERE major_desc=`major` LIMIT 1) AS `count`
FROM all_students AS a
INNER JOIN all_course_reg AS b ON a.username=b.username
INNER JOIN courses AS c ON b.`crn`=c.`crn`
GROUP BY `major`
EDIT: sqlfiddle -> http://sqlfiddle.com/#!2/81bb0/3/0
A: It seems to me that this is what you're after - I don't know what all that other stuff is for...
SELECT major_desc, COUNT(*) cnt
FROM all_students
GROUP
BY major_desc;
A: Here we go. Thanks to my boy 3mHz.
SELECT a.major_desc,
(select count(a.username)) as test
FROM all_students AS a
WHERE EXISTS (select * from all_course_reg
inner join courses on courses.crn=all_course_reg.crn
where all_course_reg.username=a.username)
GROUP BY a.major_desc
So we now get the right count for each major if the course is offered within the set of program courses.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/28610227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Update columns for the first user inserted I'm trying to create a trigger and a function to update some columns (roles and is_verified) for the first user created. Here is my function:
CREATE OR REPLACE FUNCTION public.first_user()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
DECLARE
begin
if(select count(*) from public.user = 1) then
update new set new.is_verified = true and new.roles = ["ROLE_USER", "ROLE_ADMIN"]
end if;
return new;
end;
$function$
;
and my trigger:
create trigger first_user
before insert
on
public.user for each row execute function first_user()
I'm working in DBeaver, and DBeaver won't persist my function because of a syntax error near the "=". Any idea?
A: Quite a few things are wrong in your trigger function. Here it is revised without changing your business logic.
However, this will affect the second user, not the first. You probably want to compare the count to 0; the condition would then be if not exists (select from public.user) then
CREATE OR REPLACE FUNCTION public.first_user()
RETURNS trigger LANGUAGE plpgsql AS
$function$
begin
if ((select count(*) from public.user) = 1) then
-- probably if not exists (select from public.user) then
new.is_verified := true;
new.roles := array['ROLE_USER', 'ROLE_ADMIN'];
end if;
return new;
end;
$function$;
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/74667951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to check a particular part of url using php I have a url in a variable.
<?php
$a='www.example.com';
?>
I have another variable, that has like this
<?php
$b='example.com';
?>
How can I check that $b and $a are the same? I mean even if the URL in $b is like
'example.com/test','example.com/test.html','www.example.com/example.html'
I need to check that $b is equal to $a in these cases. If it is like example.net or example.org, as the domain name changes, it should return false.
I checked with strpos and strcmp, but I didn't find them to be the correct way to compare URLs. What function can I use to check that $b in this case is similar to $a?
A: You can use parse_url to do the heavy lifting and then split the hostname by the dot, checking if the last two elements are the same:
$url1 = parse_url($url1);
$url2 = parse_url($url2);
$host_parts1 = explode(".", $url1["host"]);
$host_parts2 = explode(".", $url2["host"]);
if ($host_parts1[count($host_parts1)-1] == $host_parts2[count($host_parts2)-1] &&
    $host_parts1[count($host_parts1)-2] == $host_parts2[count($host_parts2)-2]) {
echo "match";
} else {
echo "no match";
}
A: You could use parse_url for parsing the URL and get the root domain, like so:
* Add http:// to the URL if it does not already exist
* Get the hostname part of the URL using the PHP_URL_HOST constant
* explode the URL by a dot (.)
* Get the last two chunks of the array using array_slice
* Implode the result array to get the root domain
A little function I made (which is a modified version of my own answer here):
function getRootDomain($url)
{
if (!preg_match("~^(?:f|ht)tps?://~i", $url)) {
$url = "http://" . $url;
}
$domain = implode('.', array_slice(explode('.', parse_url($url, PHP_URL_HOST)), -2));
return $domain;
}
Test cases:
$a = 'http://example.com';
$urls = array(
'example.com/test',
'example.com/test.html',
'www.example.com/example.html',
'example.net/foobar',
'example.org/bar'
);
foreach ($urls as $url) {
if(getRootDomain($url) == getRootDomain($a)) {
echo "Root domain is the same\n";
}
else {
echo "Not same\n";
}
}
Output:
Root domain is the same
Root domain is the same
Root domain is the same
Not same
Not same
Note: This solution isn't foolproof and could fail for URLs like example.co.uk, so you might want to add additional checks to make sure that doesn't happen.
Demo!
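For comparison, here is the same last-two-labels idea sketched in Python (a translation added for reference, not part of the original answer); it has the same example.co.uk caveat as the PHP version:

```python
from urllib.parse import urlparse

def root_domain(url):
    # Mirror the PHP helper: add a scheme if one is missing, so that
    # urlparse puts the host into .hostname rather than .path.
    if "://" not in url:
        url = "http://" + url
    host = urlparse(url).hostname or ""
    # Keep only the last two labels, e.g. "www.example.com" -> "example.com".
    return ".".join(host.split(".")[-2:])

print(root_domain("www.example.com/example.html"))  # example.com
print(root_domain("example.net/foobar"))            # example.net
```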
A: I think that this answer can help: Searching partial strings PHP
Since these URLs are just strings anyway
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/19068193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Describe file content in swagger I'm working on documenting our existing APIs in Swagger using the OpenAPI 3.0.0 specification. I would like to understand the procedure for documenting the contents of a file that is to be uploaded. For example:
Say the file that is to be uploaded is response.json, and it has the following content:
{
  "name": "xyz",
  "age": 50
}
I would like to know the procedure for documenting the above JSON so that clients using our API know to provide the file in the aforesaid JSON format.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/60937349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Save File on Filesystem using werkzeug I am trying to save uploaded file on my system. at specific path but i am getting this error in windows. can someone tell me where i am doing mistake?
system: windows 8.1
python version: 2.7
Here is my code:
# -*- coding: utf-8 -*-
from werkzeug.serving import run_simple
from werkzeug.wrappers import BaseRequest, BaseResponse
import os
def view_file(req):
if not 'file' in req.files:
return BaseResponse('no file uploaded')
f = req.files['file']
s = "C:\Users\admin\Desktop\test"
f.save(s, f.filename)
return BaseResponse('File Saved!')
def upload_file(req):
return BaseResponse('''
<h1>Upload File</h1>
<form action="" method="post" enctype="multipart/form-data">
<input type="file" name="file">
<input type="submit" value="Upload">
</form>
''', mimetype='text/html')
def application(environ, start_response):
req = BaseRequest(environ)
if req.method == 'POST':
resp = view_file(req)
else:
resp = upload_file(req)
return resp(environ, start_response)
if __name__ == '__main__':
run_simple('localhost', 5000, application, use_debugger=True)
here is traceback:
Traceback (most recent call last):
File "C:\Users\admin\Desktop\test.py", line 30, in application
resp = view_file(req)
File "C:\Users\admin\Desktop\test.py", line 13, in view_file
f.save(s, f.filename)
File "C:\Python27\lib\site-packages\werkzeug\datastructures.py", line 2703, in
save
dst = open(dst, 'wb')
IOError: [Errno 22] invalid mode ('wb') or filename: 'C:\\Users\x07dmin\\Desktop
\test'
A: I don't know why you are all mixing things up; just keep it easy and clean.
Make two files called 'app.py' and 'index.html', and make a folder in your app root directory called "upload".
index.html
<h1>Upload File</h1>
<form action="/uploader" method="post" enctype="multipart/form-data">
<input type="file" name="file">
<input type="submit" value="Upload">
</form>
app.py
from flask import Flask, request,render_template
from werkzeug import secure_filename
import os
app = Flask(__name__)
uploads_dir = "upload"
@app.route("/")
def index():
return render_template('index.html')
@app.route('/uploader', methods = ['GET', 'POST'])
def uploader():
if request.method == 'POST':
input = request.files['file']
input.save(os.path.join(uploads_dir, secure_filename(input.filename)))
#print(input)
return "<h2>Successfully uploaded</h2>"
if __name__ == '__main__':
app.run()
You can also redirect to other page after saving the file:
return render_template(otherpage.html)
A: It looks like the \a is being interpreted as a control character.
You should write the path like this:
s = "C:\\Users\\admin\\Desktop\\test"
If you're calling .save() you must also close the file: http://werkzeug.pocoo.org/docs/0.11/datastructures/#werkzeug.datastructures.FileStorage.save
Also note that the second positional argument of .save() is a buffer size, not a filename, so join the directory and the filename yourself:
f.save(os.path.join(s, f.filename))
f.close()
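The path handling on its own can be sketched with just the standard library (the directory is the one from the question; os.path.basename is only a rough stand-in here for werkzeug's stricter secure_filename):

```python
import os

def safe_destination(upload_dir, client_filename):
    # Normalize backslashes so a Windows-style client path is split too,
    # then keep only the basename: a crafted name like "..\\..\\evil.py"
    # cannot escape the upload directory.
    name = os.path.basename(client_filename.replace("\\", "/"))
    return os.path.join(upload_dir, name)

# Raw string: without the r prefix, "\a" in the path becomes the bell
# character \x07, which is exactly the bug visible in the traceback above.
dest = safe_destination(r"C:\Users\admin\Desktop\test", "..\\..\\report.pdf")
print(dest)
```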
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/43616258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Using Api keys with AWS API Gateway I want to use API keys for authorization and for grouping users accessing the APIs in API Gateway. The requests will be sent from a web page using JavaScript calls.
* Is there any way to encrypt the API keys?
* Let's say I am able to encrypt them: will it be beneficial at all? Because someone can still see the encrypted API keys and use them, and it will still work, because I will be decrypting them somewhere anyway.
* Is there any better way?
A: You cannot protect your API keys for authorization when your API calls are initiated from the client (i.e., JavaScript). As you said, there is no point in encrypting them either. You'll need an authorization provider that can return the API key as part of the response.
API Gateway allows you to have a custom authorizer for your API. See Output from an Amazon API Gateway Custom Authorizer.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/49551755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: 404 depending on query string Parameters or strictly if route exists Say I had a route such as /Item/Create/ which creates a new Item but requires a mandatory parameter called GroupId. It would have to be called via /Item/Create?GroupId=xxx.
If the given GroupId doesn't exist, can I return a 404, or is it wrong to return a 404 based on query string parameters?
I know it would be alright to return a 404 if my route itself was /Item/Create/{GroupId} and the GroupId was not found.
A: It's not "wrong" per se: status 404 means "Resource not found", and you can't find a resource that hasn't been specified. Status 400 (Bad Request), however, might be more appropriate. It really comes down to the intended meaning of the error code and your interpretation of the error.
A full list of status codes can be found in section 10 of RFC 2616. The 4xx (error) codes start in section 10.4.
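A tiny sketch of that reading of the codes (a hypothetical helper, not tied to any framework; the names are invented for illustration):

```python
def create_item_status(group_id, group_exists):
    """Pick a status for /Item/Create?GroupId=... under the reading above."""
    if group_id is None:
        return 400  # Bad Request: the mandatory parameter was not specified
    if not group_exists(group_id):
        return 404  # Not Found: the referenced group does not exist
    return 201      # Created

known_groups = {"g1", "g2"}
print(create_item_status(None, known_groups.__contains__))  # 400
print(create_item_status("g9", known_groups.__contains__))  # 404
print(create_item_status("g1", known_groups.__contains__))  # 201
```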
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/15703062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Using 'multiprocessing' libary in Python 3 for PostgreSQL queries I'm trying to write a Python script to read query data from my database into pandas dataframes.
I have simplified the code significantly to test the effectiveness of the multiprocessing library for parallelizing the queries, since running a single query that includes all of the information I want to collect takes several minutes.
However, using Pool from multiprocessing is not the most effective approach (in fact, there was no difference in performance when running the script). Is there a more effective way to run queries concurrently against PostgreSQL?
Any advice would be awesome!
import psycopg2
import pandas as pd
import sqlalchemy as sa
from multiprocessing import Pool
engine = sa.create_engine("<database info>")
def run_query(query):
print(query)
data_frame = pd.read_sql_query(query, engine)
if __name__ == '__main__':
pool = Pool(processes=len(queries))
pool.map(run_query, queries)
A: I don't know if it is the most efficient option, but you can use a producer/worker scheme. Basically, you define a multiprocessing queue; the producer process puts items into the queue, and a worker listens to the queue and starts working as soon as something is put into it.
Here is a good example:
http://danielhnyk.cz/python-producers-queue-consumed-by-workers/
The problem with multiprocessing is that you have to take care of shared data, and the time to schedule the processes must also be taken into account, which makes multiprocessing in Python not very useful for small tasks. However, if you run such tasks very often, or you create the processes once and just hand them tasks as they arrive, you get a benefit.
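Since the heavy lifting happens inside PostgreSQL, the Python side is mostly waiting on a socket (which releases the GIL), so a thread pool is usually a simpler fit than processes here: no pickling, and one engine can be shared between threads, whereas with processes the engine from the question would have to be created inside each worker. A sketch using the thread-backed pool, where time.sleep stands in for the database round-trip:

```python
import time
from multiprocessing.dummy import Pool as ThreadPool  # same Pool API, backed by threads

def run_query(query):
    # Stand-in for pd.read_sql_query(query, engine): the thread just
    # blocks as it would while waiting for PostgreSQL to respond.
    time.sleep(0.2)
    return f"rows for {query!r}"

queries = ["SELECT 1", "SELECT 2", "SELECT 3", "SELECT 4"]

start = time.perf_counter()
with ThreadPool(len(queries)) as pool:
    results = pool.map(run_query, queries)
elapsed = time.perf_counter() - start

print(results)
print(f"elapsed: {elapsed:.2f}s")  # close to one 0.2s wait, not four
```

Swapping ThreadPool back for multiprocessing.Pool changes nothing in the calling code, which makes it easy to measure both against the real queries.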
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/50820366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: I'm making an ASCII rpg in c++ where my character moves using arrow keys. How do i make my character the unicode smiley? I am using the '@' symbol to act as my character right now, and I want to switch it to the Unicode ☺ smiley. Below is an example of an ASCII map where my character (currently '@') starts at the top left. The code works like this: when I hit an arrow key, the map clears my '@' character and recreates it in the adjacent spot in the chosen direction. Basically, instead of my character being a '@', I want it to be a '☺':
char map[10][23] = {
"########## #########",
"#@ # # #",
"# # ##### #",
"# # # #",
"# ######### #",
"# #",
"######################" };
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/46248365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How To Populate PDF With Values From Model My Django project generates a view in the form of a PDF file, which uses an HTML file for the layout. I have a separate view which simply displays a dropdown list of all the "Reference IDs" in my "Orders" model. My goal is to choose a "Reference ID" from the dropdown, click submit, and have the PDF generate, populating certain values with the data that corresponds to that reference ID in the Orders model.
For example, I'd like to
What I have is listed below - any help with this problem would be greatly appreciated!
Below shows code for the view generating the dropdown:
VIEWS.PY
def reference_view(request):
query_results = Orders.objects.all()
reference_list = DropDownMenuReferences()
context = {
'query_results': query_results,
'reference_list': reference_list
}
return render(request, 'proforma_select.html', context)
FORMS.PY
class DropDownMenuReferences(forms.Form):
Reference_IDs = forms.ModelChoiceField(queryset=Orders.objects.values_list('reference', flat=True).distinct(),
empty_label=None)
Proforma_select.html
{% extends 'base.html' %}
{% block body %}
<div class="container">
<br>
<form method=POST action="">
{{ reference_list }}
<button type="submit" class="btn btn-primary" name="button">Add Order</button>
</form>
</div>
{% endblock %}
Below shows code for the PDF which is generated:
VIEWS.PY
def generate_view(request, *args, **kwargs):
template = get_template('invoice.html')
context = {
"invoice_id": 123,
"ultimate_consignee": "john cooper",
}
html = template.render(context)
pdf = render_to_pdf('invoice.html', context)
return HttpResponse(pdf, content_type='application/pdf')
INVOICE.HTML (only way I could find online to do this was with HTML 4)
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
<html>
<head>
<title>Proforma Invoice</title>
<style type="text/css">
body {
font-weight: 200;
font-size: 14px;
}
.header {
font-size: 20px;
font-weight: 100;
text-align: center;
color: #007cae;
}
.title {
font-size: 22px;
font-weight: 100;
/* text-align: right;*/
padding: 10px 20px 0px 20px;
}
.title span {
color: #007cae;
}
.details {
padding: 10px 20px 0px 20px;
text-align: left !important;
/*margin-left: 40%;*/
}
.hrItem {
border: none;
height: 1px;
/* Set the hr color */
color: #333; /* old IE */
background-color: #fff; /* Modern Browsers */
}
.column {
float: left;
width: 50%;
}
</style>
</head>
<body>
<div class='header'>
<p class='title'>Proforma Invoice # {{ invoice_id }}</p>
<p class='title'>Customer: {{ ultimate_consignee }}</p>
</div>
</body>
</html>
Below is the MODELS.PY for Orders Model which the dropdown menu reference IDs is created from and where I would like to pull other fields into the PDF:
MODELS.PY
class Orders(models.Model):
reference = models.CharField(max_length=50, blank=False)
ultimate_consignee = models.CharField(max_length=500)
ship_to = models.CharField(max_length=500)
vessel = models.CharField(max_length=100)
booking_no = models.CharField(max_length=50, blank=True)
POL = models.CharField(max_length=50)
DOL = models.DateField()
COO = models.CharField(max_length=50)
POE = models.CharField(max_length=50)
ETA = models.DateField()
pickup_no = models.CharField(max_length=50)
terms = models.CharField(max_length=1000)
sales_contact = models.CharField(max_length=100)
trucking_co = models.CharField(max_length=100)
loading_loc = models.CharField(max_length=100)
inspector = models.CharField(max_length=50)
total_cases = models.IntegerField()
total_fob = models.DecimalField(max_digits=10, decimal_places=2)
freight_forwarder = models.CharField(max_length=100)
commodity = models.CharField(max_length=200)
is_airshipment = models.BooleanField(default=False)
credit = models.DecimalField(max_digits=10, decimal_places=2)
def __str__(self):
return self.reference
As you can see - I am current hardcoding in values to the View which then appear on the PDF - but I'd like these to pull from the Orders Model.
i.e. something like WHERE Reference ID = 100, Ultimate Consignee = John Smith
A: You have to improve your code with a couple of changes
First
Your form is not pointing to any action (maybe you are using JS to activate the view that generates the PDF), and the easiest way to deliver the form data to the view is to make the form's action attribute point to the target view.
Let's say your view has a URL named generate_pdf (see Naming URL patterns); then in your template you could do:
<form method="POST" action="{% url 'generate_pdf' %}">
{{ reference_list }}
<button type="submit" class="btn btn-primary" name="button">Add Order</button>
</form>
Second
... with the form pointing to your view, you can read Reference_IDs from the POST data in order to obtain the Order instance and use it:
def generate_view(request, *args, **kwargs):
context = {}
if request.method == 'POST':
order_reference = request.POST.get('Reference_IDs')
if order_reference is not None:
order = Orders.objects.get(reference=order_reference)
template = get_template('invoice.html')
context.update({
"ultimate_consignee": order.ultimate_consignee,
# ...
})
html = template.render(context)
pdf = render_to_pdf('invoice.html', context)
return HttpResponse(pdf, content_type='application/pdf')
Good luck!
Note: I've not tested this, it is just an effort to show you one of the approaches you can follow to achieve what you want.
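For completeness, the render_to_pdf helper used in the question isn't shown; it is commonly implemented with the xhtml2pdf library. A minimal sketch (untested; assumes xhtml2pdf is installed — the helper name and signature simply mirror the question's usage):

```python
# Sketch of a typical render_to_pdf helper built on xhtml2pdf.
# Assumption: this mirrors the helper the question imports; adjust to taste.
from io import BytesIO

from django.http import HttpResponse
from django.template.loader import get_template
from xhtml2pdf import pisa


def render_to_pdf(template_src, context):
    template = get_template(template_src)
    html = template.render(context)
    result = BytesIO()
    # pisa converts the rendered HTML into PDF bytes
    status = pisa.CreatePDF(html, dest=result)
    if status.err:
        return None
    return HttpResponse(result.getvalue(), content_type='application/pdf')
```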
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/56121633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Keras - Compile model when using theano backend (K.function) to calculate loss I am building an NN in Keras and I am using Theano backend.
Code:
cost = ((pred - target)**2).mean()
opt = RMSprop(lr = self.lr, rho = self.rho, epsilon = self.rms_epsilon)
#model -> keras nn
params = self.model.trainable_weights
updates = opt.get_updates(params, [], cost)
self.model.compile(optimizer = opt,loss='mse')
self._train = K.function([S, NS, A, R, T], cost, updates=updates)
If I compile this model using the above-mentioned parameters and save it, will the loaded model preserve the training configuration (loss,optim.)?
Since I use the K.function(...) method to calculate cost(loss), will this affect the model's configuration in any way?
So far I have not been able to figure out a way to check whether the resumed model's config is the same as the saved one.
So, is there a way to print out the network configuration to check if it has been properly restored i.e. with the exact same config and param it was saved with?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/41315365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Entity saving to database wrong date format codeigniter 4 My entity looks like this:
class Request extends Entity
{
protected $casts = [
'id' => 'integer',
'added_at' => 'datetime',
'deadline' => 'datetime',
'completed' => 'integer'
];
}
When saving, the model generates the date fields in 'Y/m/d' format for the SQL query; however, my database cannot parse this. How can I force it to generate dates in 'Y-m-d' format when calling $myModel->insert($myEntity)?
A: Entities have the option of setters. A setter performs the validation — or, in your case, the conversion — whenever you save an entity. Let's say you want to change the format of deadline; in that case you have to define a setter for deadline as follows:
public function setDeadline(string $dateString)
{
$this->attributes['deadline'] = $dateString;
return $this;
}
In the line $this->attributes['deadline'] = $dateString; you can use a library like Carbon to format $dateString before reassigning deadline. Reference link:
https://codeigniter4.github.io/userguide/models/entities.html
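For the formatting itself you don't strictly need Carbon; a minimal sketch using plain PHP (assuming the incoming value is anything strtotime() can parse):

```php
public function setDeadline(string $dateString)
{
    // Normalize whatever came in to the Y-m-d format the database expects.
    $this->attributes['deadline'] = date('Y-m-d', strtotime($dateString));

    return $this;
}
```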
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61127876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Python - How to start Y axis at 0 import pandas as pd
import matplotlib.pyplot as plt
pd.options.display.max_rows = 100
pd.set_option('display.max_columns', None)
western_counties = pd.read_csv('2018_governor_results_western_ny_dem.csv')
df = pd.DataFrame(western_counties)
X = list(df.iloc[:, 0])
Y = list(df.iloc[:, 1])
plt.bar(X, Y, color='g')
plt.title('Region 1 - Western New York')
plt.xlabel('x')
plt.ylabel('y')
plt.ylim(ymin=0)
plt.show()
I am getting the following chart. As you can see the first "bar" starts at its value (3254). I want this to actually show the green bar, which means Y axis needs to be set to zero. Any help is appreciated.
A: Matplotlib doc says to use ylim(bottom=0) instead of ylim(ymin=0)
https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.ylim.html#
you could also just say plt.ylim([0,162772])
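Putting the suggestion together, a minimal runnable sketch (the data here is made up; the Agg backend lets it run without a display):

```python
import matplotlib
matplotlib.use("Agg")                     # headless backend, no display needed
import matplotlib.pyplot as plt

counties = ["Erie", "Niagara", "Monroe"]  # placeholder data
votes = [3254, 92000, 160000]

plt.bar(counties, votes, color="g")
plt.ylim(bottom=0)                        # current spelling; ymin= is the old name
plt.savefig("region1.png")
```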
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/70526814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: DJANGO importing fields to model from another model I have the following models and I'm trying to get the vector in Especie.zonas to be a field in the model Zona.
i.e. Especie.zonas is a vector of Zonas (model Zona) and I want it to have a OneToOne relationship with the model EspecieZona
Models.py
class Zona(models.Model):
codigo = models.CharField(max_length=120)
area = models.CharField(max_length=120)
especies = models.ManyToManyField("Especie", blank=True)
def __str__(self):
return self.codigo
def get_especies(self):
return self.especies.all().values_list('nome', flat=True)
class Especie(models.Model):
nome = models.CharField(max_length=120)
nome_latino = models.CharField(max_length=120)
data_insercao = models.DateTimeField(auto_now_add=True)
actualizacao = models.DateTimeField(auto_now=True)
zonas = models.ManyToManyField("Zona",blank=True )
def get_zonas(self):
return self.zonas.all().values_list('codigo', flat=True)
def __str__(self):
return self.nome
class EspecieZona(models.Model):
idEspecie = models.OneToOneField("Especie")
here_is_my_problem = models.Especie.zonas()
idZona = models.OneToOneField("Especie.zonas")
fechado = models.BooleanField()
def __str__(self):
return str(self.idEspecie)+' em '+str(self.idZona)
Thanks in advance!
A: I'm not entirely sure of your intent. Relationships are established between models (so you can't OneToOne to a class's field). As I understand it, EspecieZona relates one Especie instance to one Zona instance, but you would also like to easily get all the Zona instances related to some Especie instance that you are accessing through EspecieZona.
It seems to me you should drop the here_is_my_problem field entirely and use foreign keys to Especie and Zona in EspecieZona, instead of OneToOne fields. With the foreign keys in place you can declare a ManyToMany field from Zona to Especie (or vice versa — you declare the MtM field on only one side of the relationship, whichever makes more sense to you) and use EspecieZona as the 'through' model for that relationship (probably with a unique_together constraint on Especie and Zona). That way you can establish a relationship like this:
"Each species can be in multiple zones, and each zone can have multiple species (described by the MtM relationship). Additionally, for each zone and species found in it, each species might or might not have been recorded(?) in that zone (described in the 'through' model, EspecieZona — though I'm not sure I understand the meaning of fechado here. In any case, a 'through' model lets you describe whatever details about the relationship between two models)."
Then, in order to retrieve Especie.zonas from an instance of EspecieZona you can do someSpecificEspecieZona.especie.zonas.all() (assuming zonas is declared as a MtM field in Especie), which gets you all the zones related to that species (as I understand your models).
I suggest reading through the documentation of relationship fields https://docs.djangoproject.com/en/1.11/ref/models/fields/#module-django.db.models.fields.related so you get the details about how it all works, like how to use a ManyToManyField.through, and perhaps about reverse relationships if you need them.
(also, maybe that was just a mistake when copypasting your code here, but the indentation looks wrong. Different models should be at the same level of indentation there)
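A sketch of that layout (untested and trimmed to the relevant fields; on Django 2.0+ the on_delete argument shown here is required, on 1.11 it is optional):

```python
# Sketch: EspecieZona as the 'through' model of the Zona <-> Especie
# many-to-many relationship, with FKs instead of OneToOne fields.
from django.db import models

class Zona(models.Model):
    codigo = models.CharField(max_length=120)

class Especie(models.Model):
    nome = models.CharField(max_length=120)
    zonas = models.ManyToManyField(Zona, through="EspecieZona", blank=True)

class EspecieZona(models.Model):
    especie = models.ForeignKey(Especie, on_delete=models.CASCADE)
    zona = models.ForeignKey(Zona, on_delete=models.CASCADE)
    fechado = models.BooleanField(default=False)

    class Meta:
        unique_together = ("especie", "zona")
```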
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/47392733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to run bash file for (different directory) as input automatically I have a bash file which takes 5 inputs.
Input1 = file1
Input2 = file2
Input3 = directory1
Input4 = func
Input5 = 50
input 4 and 5 are always the same, never change.
file1 and file 2 are located inside directory1
directory1 is located inside a code directory
/code/directory1/file1
/code/directory1/file2
and there are many directories with the same structure directory(1-70) inside the code folder
/code/directory1/*
/code/directory2/*
/code/directory3/*
...
/code/directory70/*
In order to run the bash file, I have to run the command from terminal 70 times :<
Is there a way to automatically run all these folders at once?
UPDATE: the directories (1-70) each have a different name, e.g. bug1, test, 4-A and so on. Even the files are different, e.g. bug1.c, hash.c
/code/bug1/bug1.c
/code/bug1/hash.c
A: Try this:
for dirs in $(ls -F /code | grep '/')
do
eval files=( "$(ls -1 ${dirs})" )
<ShellScript>.sh "${dirs}${files[0]}" "${dirs}${files[1]}" "${dirs%/}" func 50
done
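A: Parsing the output of ls is fragile (it breaks on spaces and other special characters in filenames); a glob-based sketch, with the worker script path standing in for your actual script:

```bash
#!/usr/bin/env bash
# run_all ROOT WORKER: invoke WORKER once per subdirectory of ROOT,
# passing the two files inside it plus the fixed arguments func and 50.
run_all() {
    local root=$1 worker=$2 dir files
    for dir in "$root"/*/; do
        files=( "$dir"* )                 # glob the files; no ls parsing
        "$worker" "${files[0]}" "${files[1]}" "${dir%/}" func 50
    done
}

# e.g.: run_all /code /path/to/your_script.sh
```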
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/57161280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Generic template template parameters Fed up with "uniform initialisation" not being very uniform, I decided to write a generic construct() wrapper which uses aggregate initialisation if a type is an aggregate, and direct initialisation otherwise:
template <class T, class... Args,
std::enable_if_t<std::is_aggregate_v<T>, int> = 0>
constexpr auto construct(Args&&... args)
-> decltype(T{std::forward<Args>(args)...})
{
return T{std::forward<Args>(args)...};
}
template <class T, class... Args>
constexpr auto construct(Args&&... args)
-> decltype(T(std::forward<Args>(args)...))
{
return T(std::forward<Args>(args)...);
}
This works well enough:
template <class T, class U>
struct my_pair { T first; U second; };
auto p = construct<my_pair<int, float>>(1, 3.14f);
auto v = construct<std::vector<int>>(5, 0);
I'd like to extend this to use template argument deduction for constructors. So I added another pair of overloads:
template <template <class...> class T, // <-- (1)
class... Args,
class A = decltype(T{std::declval<Args>()...}),
std::enable_if_t<std::is_aggregate_v<A>, int> = 0>
constexpr auto construct(Args&&... args)
-> decltype(T{std::forward<Args>(args)...})
{
return T{std::forward<Args>(args)...};
}
template <template <class...> class T, // <-- (1)
class... Args>
constexpr auto construct(Args&&... args)
-> decltype(T(std::forward<Args>(args)...))
{
return T(std::forward<Args>(args)...);
}
Perhaps surprisingly (at least to me), this works for simple cases:
// deduction guide for my_pair
template <class T, class U> my_pair(T&&, U&&) -> my_pair<T, U>;
auto p = construct<my_pair>(1, 3.14f); // my_pair<int, float>
auto v = construct<std::vector>(5, 0); // vector of 5 ints
Unfortunately however this fails when trying to call
auto a = construct<std::array>(1, 2, 3); // No matching call to construct()
because std::array has a non-type template parameter, so it doesn't match the template <class...> class T template template parameter at (1).
So my question is, is there a way to formulate the parameter at (1) such that it can accept any class template name, regardless of the kind (type or non-type) of its template parameters?
A: Unfortunately, there is no proper way of doing this without code repetition. The newly added "auto as template parameter" in C++17 only supports non-type template parameters.
The only way I can think this could work is by using a code generator to generate a fixed amount of permutations of auto and class. E.g.
template <
template <class, auto...> class T,
class... Args>
constexpr auto construct(Args&&... args) // ...
template <
template <class, auto, class...> class T,
class... Args>
constexpr auto construct(Args&&... args) // ...
template <
template <auto, class, auto...> class T,
class... Args>
constexpr auto construct(Args&&... args) // ...
template <
template <auto, class, auto, class...> class T,
class... Args>
constexpr auto construct(Args&&... args) // ...
template <
template <auto, class, auto, class, auto...> class T,
class... Args>
constexpr auto construct(Args&&... args) // ...
// and so on...
live example on wandbox
Sounds like you have a good idea for a proposal...
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/46812945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Implement messaging into website I have a website where I have registered users in the system. When user logs into his profile I have section "Messages".
What is the best way to implement messaging into the website? I don't want to use plug-ins with live chat because of the pop-up boxes. I just want when one user will send a message to another, to have notification for new message, and to be able to start conversation between two sides in the real time.
I have an idea how to implement this with AJAX, but I don't think it's a good idea to check for new messages every few seconds. Is there a better way to listen for new messages and to implement a messaging feature on the site?
A: A websocket is what you are looking for; however it is subject to some browser limitations and libraries may fall back to polling with Ajax if the browser doesn't support it.
Here is some reading for you so you can ask a more specific question in the future:
*
*http://en.wikipedia.org/wiki/WebSocket (general info)
*http://socket.io/ (NodeJS and browser client library)
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/26185743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How does the shell expand *.c? I encounter a question while I am reading a textbook - Unix System Programming
How big is the argument array passed as the second argument to execvp
when you execute execcmd of Program 3.5 with the following command
line?
execcmd ls -l *.c
Answer: The answer depends on the number of .c files in the current
directory because the shell expands *.c before passing the command
line to execcmd.
Program 3.5:
#include <errno.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>
#include "restart.h"
int main(int argc, char *argv[]) {
pid_t childpid;
if (argc < 2){ /* check for valid number of command-line arguments */
fprintf (stderr, "Usage: %s command arg1 arg2 ...\n", argv[0]);
return 1;
}
childpid = fork();
if (childpid == -1) {
perror("Failed to fork");
return 1;
}
if (childpid == 0) {
execvp(argv[1], &argv[1]);
perror("Child failed to execvp the command");
return 1;
}
if (childpid != r_wait(NULL)) {
perror("Parent failed to wait");
return 1;
}
return 0;
}
Why is the size of argument array passed depending on the number of .c files in the current directory? Isn't it the argument array just something like
argv[0] = "execcmd";
argv[1] = "ls";
argv[2] = "-l";
argv[3] = "*.c";
argv[4] = NULL;
Update: Find a link explains pretty well about the shell expansion. May useful for someone who see this post later also do not understand shell expansion.
Description about shell expansion
A: No, because the shell does the wild card expansion.
It finds the files in the directory that match the expression — you can run, for instance, echo *.c to see what the shell would match. It then passes every filename matching *.c to the exec call, or, if none match, the literal string *.c, which is likely to result in an error message about the file not being found.
Having the shell do the expansion is more powerful: the same filename wildcarding is immediately available to all programs, like cat, echo, ls, cc.
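A quick demonstration that the shell, not the program, performs the expansion — each matching filename becomes a separate argument, and echo stands in for any program:

```bash
# Make a scratch directory with two .c files and one non-matching file.
demo=$(mktemp -d) && cd "$demo"
touch a.c b.c notes.txt

echo *.c      # prints: a.c b.c   (two arguments; notes.txt excluded)
echo "*.c"    # prints: *.c       (quoting suppresses the expansion)
```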
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/35191985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Nested Params in Spring Controllers is asking for Nesting in JSP file Path. How to deal with that? So this is my controller code:
package org.inventorymanagement.controller;
import org.inventorymanagement.databaseservice.BorrowBookDatabaseService;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.servlet.ModelAndView;
@Controller
public class BorrowPageController {
@RequestMapping(value = "/borrow/{bookName}/{issuerName}", method = RequestMethod.GET)
public ModelAndView searchForBookName(@PathVariable("bookName") String bookName, @PathVariable("issuerName") String issuerName) {
ModelAndView mav = new ModelAndView("borrow.jsp");
String rs = (String) new BorrowBookDatabaseService().borrowBookFromBookTable(bookName, issuerName);
return mav;
}
}
Here is the error which I am seeing on my webpage:
JSP file [/borrow/ABC/borrow.jsp] not found
ABC is my bookName.
Any suggestions on how I can overcome it?
I tried changing the path of the JSP to sit inside an ABC folder and it works fine. But since ABC is generic, how many folders would I have to make?
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/75262296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Advantages of Maven Tycho over Eclipse PDE Tools for exporting RCP In the light of some recent issues regarding JavaFX and exporting an Eclipse RCP application, I'm considering abandoning the Eclipse PDE exporter, and switching to a Tycho build.
*
*Which approach is simpler, and which is over-complicated? Do I need to constantly tweak the Tycho build configuration?
*Current project is already using ant scripts to build a core EAR. Should that be built with Maven to be consistent?
*I'm aware of some issues between Maven and Eclipse Plug-ins. Is there anything critical I should be worried about?
I'm in heavy R&D mode.
Also, I don't consider this to be opinion-based, rather a topic asking for strong arguments in favour of each one of these two.
A: I wouldn't say one of them is much simpler then the other. Most often Tycho is preferred since Maven is de facto an industry standard for builds. As such Maven skills are more common. This also allows you to build an RCP product like any other application using Maven.
In Maven/Tycho both the pom.xml and the OSGi manifest include dependency information, so there's a bit of redundancy. The idea is to have one of these files be the master version. If you choose the OSGi files to be the master, the resulting approach is called manifest-first. Choose otherwise and you end up with POM-first.
I'm using manifest-first and my POM files for plugins look like this:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<parent>
<groupId>aerie-sdk</groupId>
<artifactId>aerie-sdk</artifactId>
<version>3.0.5-SNAPSHOT</version>
<relativePath>../pom.xml</relativePath>
</parent>
<groupId>aerie-sdk</groupId>
<artifactId>com.example.aerie.ui</artifactId>
<version>2.5.5-SNAPSHOT</version>
<packaging>eclipse-plugin</packaging>
</project>
I only need to bump versions in POM files to match the ones in manifest files.
Answers:
*
*I'd say the one you're more familiar with is simpler. If you're new to Maven but have some experience with PDE build then Maven is harder. For Maven POM-first there's a need to tweak POM files quite often. If you choose manifest-first then POMs are not modified that frequently (and changes are simpler - mostly version changes).
*You can run Ant from Maven, no need to convert.
*Not really. If you hit any bumps there's a lot of information on Eclipse community forums, stackoverflow or similar sites.
A: I'm not sure your comparison makes sense at all. The Eclipse PDE exporter is an interactive tool, and as such cannot be used for an automated build. So if you need an automated build, you will have to move to something else, e.g. Tycho.
Tycho is a Maven build extension which aims to make it easy to set up an automated build for projects developed with the Eclipse PDE. It re-uses the configuration files that you already have (MANIFEST.MF, feature.xml, *.product), so the additional configuration files needed for Tycho (pom.xml) are pretty minimal and rarely need to be updated.
The only piece of information which is redundant in the pom.xml file is the artifact version. To support you when updating artifact versions, there is a command-line tool: the tycho-versions-plugin.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/25005404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: UserForm ComboBox with searchable list to not select first item with keyDown Background
I have a UserForm with a ComboBox (ComboBox1) that includes a searchable list (thanks to the code below). However, when I use the down arrow key, it just selects the first item and discards the rest of the filtered list (as if it basically treats the first item as the searched text).
I have found a very similar question on stack overflow but the solution did not fully work for me.
Here is the link: How use the combobox keydown event without selecting the listed item
The code that I have for my UserForm is below:
Dim a()
Private Sub CommandButton2_Click()
End Sub
Private Sub UserForm_Initialize()
a = [Liste].Value
Me.ComboBox1.List = a
End Sub
Private Sub ComboBox1_KeyDown(ByVal KeyCode As MSForms.ReturnInteger, ByVal Shift As Integer)
Dim Abort As Boolean
Select Case KeyCode
Case 38 'Up
If ComboBox1.ListIndex <= 0 Then KeyCode = 0 'ignore "Up" press if already on the first selection, or if not on a selection
Abort = True
If Not KeyCode = 0 Then ' If on a selection past the first entry
KeyCode = 0
'Manually choose next entry, cancel key press
ComboBox1.ListIndex = ComboBox1.ListIndex - 1
End If
Me.ComboBox1.DropDown
Case 40 'Down
If ComboBox1.ListIndex = ComboBox1.ListCount - 1 Then KeyCode = 0
' This method was from the discussion I linked, prevents "falling off the bottom of the list"
Abort = True
If Not KeyCode = 0 Then ' If on a selection before the last entry
KeyCode = 0
'Manually choose next entry, cancel key press
ComboBox1.ListIndex = ComboBox1.ListIndex + 1
End If
Me.ComboBox1.DropDown
End Select
Abort = False
End Sub
Private Sub ComboBox1_Change()
If Abort Then Exit Sub ' Stop Event code if flag set
Abort = True
' sets the flag until finished with commands to prevent changes made by code triggering the event multiple times
Set d1 = CreateObject("Scripting.Dictionary")
tmp = UCase(Me.ComboBox1) & "*"
For Each c In a
If UCase(c) Like tmp Then d1(c) = ""
Next c
Me.ComboBox1.List = d1.keys
Me.ComboBox1.DropDown
Abort = False
End Sub
Private Sub CommandButton1_Click()
ActiveCell = Me.ComboBox1
Unload Me
End Sub
Private Sub cmdClose_Click()
Unload Me
End Sub
The particular section that is apparently trying to stop the comboBox from selecting the first item is this:
Private Sub ComboBox1_KeyDown(ByVal KeyCode As MSForms.ReturnInteger, ByVal Shift As Integer)
Dim Abort As Boolean
Select Case KeyCode
Case 38 'Up
If ComboBox1.ListIndex <= 0 Then KeyCode = 0 'ignore "Up" press if already on the first selection, or if not on a selection
Abort = True
If Not KeyCode = 0 Then ' If on a selection past the first entry
KeyCode = 0
'Manually choose next entry, cancel key press
ComboBox1.ListIndex = ComboBox1.ListIndex - 1
End If
Me.ComboBox1.DropDown
Case 40 'Down
If ComboBox1.ListIndex = ComboBox1.ListCount - 1 Then KeyCode = 0
' This method was from the discussion I linked, prevents "falling off the bottom of the list"
Abort = True
If Not KeyCode = 0 Then ' If on a selection before the last entry
KeyCode = 0
'Manually choose next entry, cancel key press
ComboBox1.ListIndex = ComboBox1.ListIndex + 1
End If
Me.ComboBox1.DropDown
End Select
Abort = False
End Sub
However, when I enter something (Let's say I enter "App" to find "Applesauce" and the dropdown list shows both "Apple" and "Applesauce") and click the down button, it just goes to the first item and does not move anymore. The list does not disappear but it just gets stuck on the first item.
Does anyone have any idea what I need to change in my code to make it work? (Or any other ideas on how to make it work in general)?
GIF for clarification:
KeyDown not moving beyond first item
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/56245385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to compute this double integral in R? I want to compute the integral of the following density function:
Using the packages "rmutil" and "psych" in R, I tried:
X=c(8,1,2,3)
Y=c(5,2,4,6)
correlation=cov(X,Y)/(SD(X)*SD(Y))
bvtnorm <- function(x, y, mu_x = mean(X), mu_y = mean(Y), sigma_x = SD(X), sigma_y = SD(Y), rho = correlation) {
force(x)
force(y)
function(x, y)
1 / (2 * pi * sigma_x * sigma_y * sqrt(1 - rho ^ 2)) *
exp(- 1 / (2 * (1 - rho ^ 2)) * ((x - mu_x) / sigma_x) ^ 2 +
((y - mu_y) / sigma_y) ^ 2 - 2 * rho * (x - mu_x) * (y - mu_y) /
(sigma_x * sigma_y))
}
f2 <- bvtnorm(x, y)
print("sum_double_integral :")
integral_1=int2(f2, a=c(-Inf,-Inf), b=c(Inf,Inf)) # should normaly give 1
print(integral_1) # gives Nan
The problem :
This integral should give 1, but it gives NaN.
I don't know how I can fix the problem; I tried to force() the x and y variables without success.
A: You were missing a pair of parentheses. The corrected code looks like:
library(rmutil)
X=c(8,1,2,3)
Y=c(5,2,4,6)
correlation=cor(X,Y)
bvtnorm <- function(x, y, mu_x = mean(X), mu_y = mean(Y), sigma_x = sd(X), sigma_y = sd(Y), rho = correlation) {
function(x, y)
1 / (2 * pi * sigma_x * sigma_y * sqrt(1 - rho ^ 2)) *
exp(- 1 / (2 * (1 - rho ^ 2)) * (((x - mu_x) / sigma_x) ^ 2 +
((y - mu_y) / sigma_y) ^ 2 - 2 * rho * (x - mu_x) * (y - mu_y) /
(sigma_x * sigma_y)))
}
f2 <- bvtnorm(x, y)
print("sum_double_integral :")
integral_1=int2(f2, a=c(-Inf,-Inf), b=c(Inf,Inf)) # should normaly give 1
print(integral_1) # prints 1.000047
This was hard to spot. From a debugging point of view, I found it helpful to first integrate over a finite domain. Trying it with things like first [-1,1] and then [-2,2] (on both axes) showed that the integrals were blowing up rather than converging. After that, I looked at the grouping even more carefully.
I also cleaned up the code a bit. I dropped SD in favor of the built-in sd, since I don't see the motivation in importing the package psych just to make the code less readable (less flippantly, dropping psych from the question makes it easier for others to reproduce — there is no good reason to include a package which isn't being used in any essential way). I also dropped the force() calls, which were doing nothing, and used the built-in function cor for calculating the correlation.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/61444796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: Centering MenuItems in second ActionBar? So I am trying to center my icons like this image from Google Developer:
My second row is actually just below the top row (which is also actionbar tabs without a title or icon enabled in the activity). So, right now, as I've added in my three buttons (just text, no icons) they stack on the right side of the bar, and I can't seem to find anything in the docs about centering. Any ideas?
Relevant code:
<?xml version="1.0" encoding="utf-8"?>
<menu xmlns:android="http://schemas.android.com/apk/res/android">
<item android:id="@+id/menu_demos"
android:title="Chart +/-"
android:showAsAction="always|withText" />
<item android:id="@+id/menu_data"
android:title="Datas"
android:showAsAction="always|withText" />
<item android:id="@+id/menu_reset"
android:title="Reset"
android:showAsAction="always|withText" />
</menu>
manifest line:
<activity android:name="polling.Chart" android:label="Chart"
android:icon="@drawable/chart512" android:windowSoftInputMode="stateHidden"/>
Main Activity:
bar = getSupportActionBar();
bar.setNavigationMode(ActionBar.NAVIGATION_MODE_TABS);
bar.setDisplayShowTitleEnabled(false);
bar.setDisplayShowHomeEnabled(false);
public boolean onCreateOptionsMenu(Menu menu) {
MenuInflater inflater = getSupportMenuInflater();
inflater.inflate(R.menu.chartsmenu, menu);
return true;
}
A: Use the split action bar option. All items which do not fit in the top action bar will be shown in a second action bar at the bottom of your screen. The remaining items will be centered within the second action bar.
protected void onCreate(final Bundle savedInstanceState) {
getSherlock().setUiOptions(ActivityInfo.UIOPTION_SPLIT_ACTION_BAR_WHEN_NARROW);
super.onCreate(savedInstanceState);
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/10790350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Running an interactive shellscript using java I am writing Java code to call an interactive shell script, using ProcessBuilder. I know that to pass parameters to this shell script I have to read its input stream to check its output and use its output stream to pass commands to it. My question is: how would I know, using the InputStream, that it's prompting for values?
My code :
ProcessBuilder pb2=new ProcessBuilder("/home/abhijeet/sample1.sh","--ip="+formobj.getUpFile().getFileName(),"--seqs="+seqs);
script_exec = pb2.start();
OutputStream in = script_exec.getOutputStream();
InputStreamReader rd=new InputStreamReader(script_exec.getInputStream());
pb2.redirectError();
BufferedReader reader1 =new BufferedReader(new InputStreamReader(script_exec.getInputStream()));
StringBuffer out=new StringBuffer();
String output_line = "";
while ((output_line = reader1.readLine())!= null)
{
out = out.append(output_line + "\n");
System.out.println("val of output_line"+output_line);
// ---> I need code here to check whether the script is prompting for input, so I can pass it values using the output stream
}
Is there any way to know directly that script is waiting for an input from user?
A: Read the process output stream and check for the input prompt. If you know the input prompt, write the values to the process input stream.
Otherwise you have no way to check.
ProcessBuilder pb = new ProcessBuilder(command);
pb.redirectErrorStream(true);
Process p = pb.start();
BufferedReader b1 = new BufferedReader(new InputStreamReader(p.getInputStream()));
BufferedWriter w1 = new BufferedWriter(new OutputStreamWriter(p.getOutputStream()));
String line = "";
String outp = "";
while ((line = b1.readLine()) != null) {
if (line.equals("PLEASE INPUT THE VALUE:")) {
// write the value, end the line, and flush so the child process actually receives it
w1.write("42");
w1.newLine();
w1.flush();
}
outp += line + "\n";
}
...
UPD: for your code it should be something like that
ProcessBuilder pb2=new ProcessBuilder("/home/abhijeet/sample1.sh","--ip="+formobj.getUpFile().getFileName(),"--seqs="+seqs);
script_exec = pb2.start();
OutputStream in = script_exec.getOutputStream();
InputStreamReader rd=new InputStreamReader(script_exec.getInputStream());
pb2.redirectError();
BufferedReader reader1 =new BufferedReader(new InputStreamReader(script_exec.getInputStream()));
StringBuffer out=new StringBuffer();
String output_line = "";
while ((output_line = reader1.readLine())!= null)
{
out = out.append(output_line + "\n");
System.out.println("val of output_line"+output_line);
// ---> check whether the script is prompting for input, then pass it a value via the output stream
if (output_line.equals("PLEASE INPUT THE VALUE:")) {
// OutputStream has no write(String); send bytes and flush
in.write("42\n".getBytes());
in.flush();
}
}
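Putting the pieces together, here is a minimal self-contained sketch. The real script is replaced by an inline `sh -c` command purely for illustration, and the prompt string is the hypothetical one from above. Note the flush() call: without it, the buffered reply never reaches the child process, which then blocks forever on its `read`.

```java
import java.io.*;

public class PromptDemo {
    public static void main(String[] args) throws IOException {
        // Stand-in for /home/abhijeet/sample1.sh: prints a prompt, reads a value.
        ProcessBuilder pb = new ProcessBuilder("sh", "-c",
                "echo 'PLEASE INPUT THE VALUE:'; read v; echo \"got $v\"");
        pb.redirectErrorStream(true);
        Process p = pb.start();
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(p.getInputStream()));
        BufferedWriter writer = new BufferedWriter(
                new OutputStreamWriter(p.getOutputStream()));
        String line;
        while ((line = reader.readLine()) != null) {
            System.out.println(line);
            if (line.equals("PLEASE INPUT THE VALUE:")) {
                writer.write("42");
                writer.newLine();
                writer.flush(); // the child blocks on `read` until this arrives
            }
        }
    }
}
```

This only works when you know the exact prompt text in advance; if the script's prompt does not end with a newline, readLine() will not return it and you would have to read character by character instead.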
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/24487514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How can I save a variable from a database and use it in C#? I am changing a program and I need some help because I don't know C#. I change things with:
strSQL = "UPDATE materials SET ";
strSQL = strSQL + "Dscr = 'concrete', ";
strSQL = strSQL + "width=50 ";
strSQL = strSQL + " WHERE ID=385";
objCmd = new OleDbCommand(strSQL, db_def.conn);
objCmd.ExecuteNonQuery();
There is a case where I need to find an ID, store it and then use it again. So I use select
OleDbCommand cmd = new OleDbCommand("SELECT ID FROM materials WHERE Type=1", db_def.conn);
OleDbDataReader reader = cmd.ExecuteReader();
if (reader.HasRows)
{
reader.Read();
var result = reader.GetInt32(0);
}
strSQL = "UPDATE materials SET ";
strSQL = strSQL + "Dscr = 'concrete', ";
strSQL = strSQL + "width=50 ";
strSQL = strSQL + " WHERE ID=result";
objCmd = new OleDbCommand(strSQL, db_def.conn);
objCmd.ExecuteNonQuery();
But I get an error:
No value given for one or more required parameters.
A: Although the code you're working on is less than ideal, I'll just provide the fix you need at this point:
Change your code to :
OleDbCommand cmd = new OleDbCommand("SELECT ID FROM materials WHERE Type=1", db_def.conn);
OleDbDataReader reader = cmd.ExecuteReader();
int result=0;
if (reader.HasRows)
{
reader.Read();
result = reader.GetInt32(0);
}
strSQL = "UPDATE materials SET ";
strSQL = strSQL + "Dscr = 'concrete', ";
strSQL = strSQL + "width=50 ";
strSQL = strSQL + " WHERE ID=" + result;
objCmd = new OleDbCommand(strSQL, db_def.conn);
objCmd.ExecuteNonQuery();
You need to insert the value of the result variable into the SQL string yourself; the database does not know the value of 'result' in its own context.
EDIT: the result variable was declared inside the if statement, so it was not in scope further down for use in the query.
A: You can try this and tell me if it works:
OleDbCommand cmd = new OleDbCommand("SELECT ID FROM materials WHERE Type=1", db_def.conn);
OleDbDataReader reader = cmd.ExecuteReader();
int result=-1 ;
if (reader.HasRows)
{
reader.Read();
result = reader.GetInt32(0);
}
if (result != -1)
{
strSQL = "UPDATE materials SET ";
strSQL = strSQL + "Dscr = 'concrete', ";
strSQL = strSQL + "width=50 ";
strSQL = strSQL + " WHERE ID="+result;
objCmd = new OleDbCommand(strSQL, db_def.conn);
objCmd.ExecuteNonQuery();
}
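As a side note, building the WHERE clause by string concatenation invites SQL injection and type-conversion errors. A parameterized command is a sketch of the safer route, reusing the same `db_def.conn` connection from the question (OleDb uses positional `?` placeholders, so the parameter name is cosmetic):

```csharp
// Hedged sketch: same UPDATE as above, but passing `result` as a parameter
// instead of concatenating it into the SQL text.
string strSQL = "UPDATE materials SET Dscr = 'concrete', width = 50 WHERE ID = ?";
using (OleDbCommand objCmd = new OleDbCommand(strSQL, db_def.conn))
{
    objCmd.Parameters.AddWithValue("@id", result); // name is ignored by OleDb; order matters
    objCmd.ExecuteNonQuery();
}
```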
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/30186974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-2"
}
|
Q: Unable to Import after Pip Install I've just installed a package via pip:
Cloning https://github.com/felix001/commontools.git to /tmp/pip-LMwfnO-build
Installing collected packages: commontools
Running setup.py install for commontools
Successfully installed commontools-1.0
When I try to import commontools, it doesn't import.
The path looks ok though,
>>> import pprint
>>> import sys
>>> pprint.pprint(sys.path)
['',
'/Library/Python/2.7/site-packages/pip-7.1.2-py2.7.egg',
'/Users/felix001',
'/usr/local/lib/python2.7/site-packages',
'/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python27.zip',
'/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7',
'/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-darwin',
'/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac',
'/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/plat-mac/lib-scriptpackages',
'/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk',
'/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-old',
'/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-dynload',
'/usr/local/Cellar/python/2.7.10_2/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages',
'/Library/Python/2.7/site-packages']
>>>
MacBook-Pro:~ felix001$ ls -l /Library/Python/2.7/site-packages
total 392
-rw-r--r-- 1 root wheel 119 9 Sep 2014 README
drwxr-xr-x 7 root wheel 238 18 Dec 21:39 commontools-1.0-py2.7.egg-info
-rw-r--r-- 1 root wheel 207 3 Nov 20:20 easy-install.pth
drwxr-xr-x 4 root wheel 136 3 Nov 20:20 pip-7.1.2-py2.7.egg
drwxr-xr-x 60 root wheel 2040 23 Oct 2014 scapy
-rw-r--r-- 1 root wheel 261 18 Oct 2014 scapy-2.1.0-py2.7.egg-info
drwxr-xr-x 7 root wheel 238 11 Dec 22:18 threader-1.0-py2.7.egg-info
drwxr-xr-x 6 root wheel 204 20 Jul 21:49 vboxapi
-rw-r--r-- 1 root wheel 241 20 Jul 21:49 vboxapi-1.0-py2.7.egg-info
drwxr-xr-x 10 root wheel 340 3 Nov 20:21 virtualenv-13.1.2.dist-info
-rw-r--r-- 1 root wheel 99421 3 Nov 20:21 virtualenv.py
-rw-r--r-- 1 root wheel 83209 3 Nov 20:21 virtualenv.pyc
drwxr-xr-x 8 root wheel 272 3 Nov 20:21 virtualenv_support
Here is the package details.
MacBook-Pro:~ felix001$ pip show commontools
---
Metadata-Version: 1.0
Name: commontools
Version: 1.0
Summary: commontools
Home-page: UNKNOWN
Author: ====
Author-email: ====
License: UNKNOWN
Location: /Library/Python/2.7/site-packages
Requires:
EDIT :
The error is shown below when I try to import,
MacBook-Pro:~$ which python
/usr/local/bin/python
MacBook-Pro:~$ python
Python 2.7.10 (default, Sep 23 2015, 04:34:14)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> from commontools import *
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named commontools
EDIT2:
So I removed all instances of Python and installed a new version using brew. Same issue.
New Python path:
MacBook-Pro:~ felix001$ which python
/usr/local/bin/python
Now if I install anything via pip, it works fine. But if I install my own git repos, it fails. Here's one of them - https://github.com/felix001/commontools
Here's an example:
MacBook-Pro:~ felix001$ pip -v install netaddr
Collecting netaddr
Getting page https://pypi.python.org/simple/netaddr/
1 location(s) to search for versions of netaddr:
* https://pypi.python.org/simple/netaddr/
Getting page https://pypi.python.org/simple/netaddr/
Analyzing links from page https://pypi.python.org/simple/netaddr/
Skipping link https://pypi.python.org/packages/2.4/n/netaddr/netaddr-0.3.1.win32.exe#md5=ba0615105ab8193f4115e2a5767a121c (from https://pypi.python.org/simple/netaddr/); unsupported archive format: .exe
Skipping link https://pypi.python.org/packages/2.4/n/netaddr/netaddr-0.4.win32.exe#md5=77c28e03c14e081b3339118fad7f1bce (from https://pypi.python.org/simple/netaddr/); unsupported archive format: .exe
Skipping link https://pypi.python.org/packages/2.4/n/netaddr/netaddr-0.7.x/CHANGELOG (from https://pypi.python.org/simple/netaddr/); not a file
Skipping link http://packages.python.org/netaddr/ (from https://pypi.python.org/simple/netaddr/); not a file
Skipping link https://github.com/drkjam/netaddr/blob/rel-0.7.x/CHANGELOG (from https://pypi.python.org/simple/netaddr/); not a file
Skipping link https://pythonhosted.org/netaddr/ (from https://pypi.python.org/simple/netaddr/); not a file
Using version 0.7.18 (newest of versions: 0.7.18, 0.7.18, 0.7.18, 0.7.17, 0.7.17, 0.7.17, 0.7.16, 0.7.16, 0.7.16, 0.7.15, 0.7.15, 0.7.15, 0.7.14, 0.7.14, 0.7.14, 0.7.13, 0.7.13, 0.7.13, 0.7.12, 0.7.12, 0.7.11, 0.7.11, 0.7.10, 0.7.10, 0.7.3, 0.7.3, 0.7.2, 0.7.2, 0.7.1, 0.7.1, 0.7, 0.7, 0.6.4, 0.6.4, 0.6.3, 0.6.3, 0.6.2, 0.6.2, 0.6.1, 0.6.1, 0.6, 0.6, 0.5.2, 0.5.2, 0.5.1, 0.5.1, 0.5, 0.5, 0.4, 0.4, 0.3.1, 0.3.1)
Using cached netaddr-0.7.18-py2.py3-none-any.whl
Downloading from URL https://pypi.python.org/packages/any/n/netaddr/netaddr-0.7.18-py2.py3-none-any.whl#md5=f18d38b78469bb440560544f5559e649 (from https://pypi.python.org/simple/netaddr/)
Installing collected packages: netaddr
Successfully installed netaddr-0.7.18
Cleaning up...
MacBook-Pro:~ felix001$ python
Python 2.7.10 (default, Sep 23 2015, 04:34:14)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import netaddr
>>>
>>>
>>>
MacBook-Pro:~ felix001$ pip -vvv install git+https://github.com/felix001/commontools
Collecting git+https://github.com/felix001/commontools
Cloning https://github.com/felix001/commontools to /var/folders/13/780bwzl13p5118ldv4qdqgvw0000gn/T/pip-76GaiR-build
Running command git clone -q https://github.com/felix001/commontools /var/folders/13/780bwzl13p5118ldv4qdqgvw0000gn/T/pip-76GaiR-build
Running setup.py (path:/var/folders/13/780bwzl13p5118ldv4qdqgvw0000gn/T/pip-76GaiR-build/setup.py) egg_info for package from git+https://github.com/felix001/commontools
Running command python setup.py egg_info
running egg_info
creating pip-egg-info/commontools.egg-info
writing pip-egg-info/commontools.egg-info/PKG-INFO
writing top-level names to pip-egg-info/commontools.egg-info/top_level.txt
writing dependency_links to pip-egg-info/commontools.egg-info/dependency_links.txt
writing manifest file 'pip-egg-info/commontools.egg-info/SOURCES.txt'
warning: manifest_maker: standard file '-c' not found
reading manifest file 'pip-egg-info/commontools.egg-info/SOURCES.txt'
writing manifest file 'pip-egg-info/commontools.egg-info/SOURCES.txt'
Source in /var/folders/13/780bwzl13p5118ldv4qdqgvw0000gn/T/pip-76GaiR-build has version 1.0, which satisfies requirement commontools==1.0 from git+https://github.com/felix001/commontools
Installing collected packages: commontools
Running setup.py install for commontools
Running command /usr/local/opt/python/bin/python2.7 -c "import setuptools, tokenize;__file__='/var/folders/13/780bwzl13p5118ldv4qdqgvw0000gn/T/pip-76GaiR-build/setup.py';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /var/folders/13/780bwzl13p5118ldv4qdqgvw0000gn/T/pip-hU9LyU-record/install-record.txt --single-version-externally-managed --compile
running install
running build
running install_egg_info
running egg_info
creating commontools.egg-info
writing commontools.egg-info/PKG-INFO
writing top-level names to commontools.egg-info/top_level.txt
writing dependency_links to commontools.egg-info/dependency_links.txt
writing manifest file 'commontools.egg-info/SOURCES.txt'
warning: manifest_maker: standard file '-c' not found
reading manifest file 'commontools.egg-info/SOURCES.txt'
writing manifest file 'commontools.egg-info/SOURCES.txt'
Copying commontools.egg-info to /usr/local/lib/python2.7/site-packages/commontools-1.0-py2.7.egg-info
running install_scripts
writing list of installed files to '/var/folders/13/780bwzl13p5118ldv4qdqgvw0000gn/T/pip-hU9LyU-record/install-record.txt'
Removing source in /var/folders/13/780bwzl13p5118ldv4qdqgvw0000gn/T/pip-76GaiR-build
Successfully installed commontools-1.0
Cleaning up...
MacBook-Pro:~ felix001$ python
Python 2.7.10 (default, Sep 23 2015, 04:34:14)
[GCC 4.2.1 Compatible Apple LLVM 7.0.0 (clang-700.0.72)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import commontools
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named commontools
>>>
Any ideas?
A: This is not a problem with pip; this is a problem with the commontools package.
Here is its setup.py:
from setuptools import setup
setup(
name="commontools",
version="1.0",
author="xxxxxxxxxxxx",
author_email="xxxxxxxxxxxx",
description="commontools",
)
It does not even follow the minimal viable example for packaging or the basic use of setuptools.
TL;DR:
It is not including the source files, which is what the packages keyword of setup() controls.
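For reference, a minimal fix to that setup.py would be a sketch along these lines (assuming the repository actually contains a `commontools` package directory with an `__init__.py`; if the code is a single top-level module instead, use `py_modules=["commontools"]`):

```python
# setup.py - hedged sketch of the missing piece
from setuptools import setup, find_packages

setup(
    name="commontools",
    version="1.0",
    description="commontools",
    # Without `packages` (or `py_modules`), pip installs only the metadata:
    # the .egg-info shows up in site-packages, but no importable code does,
    # which is exactly the symptom described above.
    packages=find_packages(),
)
```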
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/34364449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2"
}
|
Q: Getting a "Unexpected field at makeError" when uploading a form My form has an input for a file (in my case an image) and for some text. When I hit the submit button, the stated error occurs. I have had this issue for two days now; I tried to work out where the problem was, but my efforts were in vain, so I have decided to yield and seek assistance.
Here is my controller for the image:
// using this to generate a random name for an image
var possible = 'abcdefghijklmnopqrstuvwxyz0123456789',
imgUrl = '';
// this is just a loop to create a 10 character random name
for(var i = 0; i < 10; i++) {
imgUrl += possible.charAt(Math.floor(Math.random() *
possible.length));
}
/* I saw an answer to a similar question where the correct answer author
said path will always refer to folder where the input folder resides */
var tempPath = req.file.path,
ext = path.extname(req.file.path).toLowerCase(),
targetPath = './app/controller/store/' + imgUrl + ext;
// Check if image is of the correct format
if (ext === '.png' || ext === '.jpg' || ext === '.jpeg' || ext === '.gif')
{
fs.rename(tempPath, targetPath, function(err) {
if (err) throw err;
res.redirect('/posts/'+ imgUrl);
});
} else {
fs.unlink(tempPath, function (err) {
if (err) throw err;
res.json(500, {error: 'Only image files are allowed.'});
});
var post = new Post({
content: req.body.content,
author: req.user,
filename: imgUrl + ext
});
post.save(function(err) {
if(err) {
return res.status(400).send({
message: getErrorMessage(err)
});
} else res.json(post);
});
}
And here is the form:
<form method="post" action="/posts" enctype="multipart/form-data">
<textarea name="content"></textarea>
<input type="file" name="file" id="file">
<input type="submit" value="Post">
</form>
I did the usual stuff in my configuration file:
app.use(multer({dest: './app/controller/store'}).single('photo'));
I would truly appreciate it if you steer me towards the right path, thanks in advance.
A: .single('parameter') means the input field's name must be 'parameter'.
In your case:
app.use(multer({dest: './app/controller/store'}).single('photo'));
You passed 'photo' into the single() function.
So your form's file input should be named accordingly; change it to:
..
..
<input type="file" name="photo">
..
..
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/42593719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Unable to deserialize the execution context after upgrading to Spring Boot 2.0 for a Spring Cloud Data Flow task I updated my Spring Boot (Task) application from 1.5 to 2.0. Now when I run it I get a deserialization error.
2018-10-29 15:03:00.346 ERROR 713 --- [ main] o.s.boot.SpringApplication : Application run failed
java.lang.IllegalStateException: Failed to execute CommandLineRunner
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:795) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.callRunners(SpringApplication.java:776) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:315) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1242) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.run(SpringApplication.java:1230) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at com.mediaiq.miq.batch.MiqBatchApplication.main(MiqBatchApplication.java:42) [classes!/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_191]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_191]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_191]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_191]
at org.springframework.boot.loader.MainMethodRunner.run(MainMethodRunner.java:48) [aiq-miq-batch-2.18-SNAPSHOT.jar:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:87) [aiq-miq-batch-2.18-SNAPSHOT.jar:na]
at org.springframework.boot.loader.Launcher.launch(Launcher.java:50) [aiq-miq-batch-2.18-SNAPSHOT.jar:na]
at org.springframework.boot.loader.JarLauncher.main(JarLauncher.java:51) [aiq-miq-batch-2.18-SNAPSHOT.jar:na]
Caused by: java.lang.IllegalArgumentException: Unable to deserialize the execution context
at org.springframework.batch.core.repository.dao.JdbcExecutionContextDao$ExecutionContextRowMapper.mapRow(JdbcExecutionContextDao.java:325) ~[spring-batch-core-4.0.1.RELEASE.jar!/:4.0.1.RELEASE]
at org.springframework.batch.core.repository.dao.JdbcExecutionContextDao$ExecutionContextRowMapper.mapRow(JdbcExecutionContextDao.java:309) ~[spring-batch-core-4.0.1.RELEASE.jar!/:4.0.1.RELEASE]
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:93) ~[spring-jdbc-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.jdbc.core.RowMapperResultSetExtractor.extractData(RowMapperResultSetExtractor.java:60) ~[spring-jdbc-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate$1.doInPreparedStatement(JdbcTemplate.java:667) ~[spring-jdbc-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:605) ~[spring-jdbc-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:657) ~[spring-jdbc-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:688) ~[spring-jdbc-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:700) ~[spring-jdbc-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:756) ~[spring-jdbc-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.batch.core.repository.dao.JdbcExecutionContextDao.getExecutionContext(JdbcExecutionContextDao.java:112) ~[spring-batch-core-4.0.1.RELEASE.jar!/:4.0.1.RELEASE]
at org.springframework.batch.core.explore.support.SimpleJobExplorer.getJobExecutionDependencies(SimpleJobExplorer.java:202) ~[spring-batch-core-4.0.1.RELEASE.jar!/:4.0.1.RELEASE]
at org.springframework.batch.core.explore.support.SimpleJobExplorer.getJobExecutions(SimpleJobExplorer.java:83) ~[spring-batch-core-4.0.1.RELEASE.jar!/:4.0.1.RELEASE]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_191]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_191]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_191]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_191]
at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:343) ~[spring-aop-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:197) ~[spring-aop-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) ~[spring-aop-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.batch.core.configuration.annotation.SimpleBatchConfiguration$PassthruAdvice.invoke(SimpleBatchConfiguration.java:127) ~[spring-batch-core-4.0.1.RELEASE.jar!/:4.0.1.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:212) ~[spring-aop-5.0.10.RELEASE.jar!/:5.0.10.RELEASE]
at com.sun.proxy.$Proxy185.getJobExecutions(Unknown Source) ~[na:na]
at org.springframework.batch.core.JobParametersBuilder.getNextJobParameters(JobParametersBuilder.java:264) ~[spring-batch-core-4.0.1.RELEASE.jar!/:4.0.1.RELEASE]
at org.springframework.boot.autoconfigure.batch.JobLauncherCommandLineRunner.execute(JobLauncherCommandLineRunner.java:162) ~[spring-boot-autoconfigure-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.autoconfigure.batch.JobLauncherCommandLineRunner.executeLocalJobs(JobLauncherCommandLineRunner.java:179) ~[spring-boot-autoconfigure-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.autoconfigure.batch.JobLauncherCommandLineRunner.launchJobFromProperties(JobLauncherCommandLineRunner.java:134) ~[spring-boot-autoconfigure-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.autoconfigure.batch.JobLauncherCommandLineRunner.run(JobLauncherCommandLineRunner.java:128) ~[spring-boot-autoconfigure-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
at org.springframework.boot.SpringApplication.callRunner(SpringApplication.java:792) [spring-boot-2.0.6.RELEASE.jar!/:2.0.6.RELEASE]
... 13 common frames omitted
Caused by: com.fasterxml.jackson.databind.exc.InvalidTypeIdException: Could not resolve type id '' as a subtype of [simple type, class java.lang.Object]: no such class found
at [Source: (ByteArrayInputStream); line: 1, column: 11] (through reference chain: java.util.HashMap["map"])
at com.fasterxml.jackson.databind.exc.InvalidTypeIdException.from(InvalidTypeIdException.java:43) ~[jackson-databind-2.9.7.jar!/:2.9.7]
at com.fasterxml.jackson.databind.DeserializationContext.invalidTypeIdException(DeserializationContext.java:1635) ~[jackson-databind-2.9.7.jar!/:2.9.7]
at com.fasterxml.jackson.databind.DeserializationContext.handleUnknownTypeId(DeserializationContext.java:1187) ~[jackson-databind-2.9.7.jar!/:2.9.7]
at com.fasterxml.jackson.databind.jsontype.impl.ClassNameIdResolver._typeFromId(ClassNameIdResolver.java:53) ~[jackson-databind-2.9.7.jar!/:2.9.7]
at com.fasterxml.jackson.databind.jsontype.impl.ClassNameIdResolver.typeFromId(ClassNameIdResolver.java:44) ~[jackson-databind-2.9.7.jar!/:2.9.7]
at com.fasterxml.jackson.databind.jsontype.impl.TypeDeserializerBase._findDeserializer(TypeDeserializerBase.java:156) ~[jackson-databind-2.9.7.jar!/:2.9.7]
at com.fasterxml.jackson.databind.jsontype.impl.AsArrayTypeDeserializer._deserialize(AsArrayTypeDeserializer.java:97) ~[jackson-databind-2.9.7.jar!/:2.9.7]
at com.fasterxml.jackson.databind.jsontype.impl.AsArrayTypeDeserializer.deserializeTypedFromAny(AsArrayTypeDeserializer.java:71) ~[jackson-databind-2.9.7.jar!/:2.9.7]
at com.fasterxml.jackson.databind.deser.std.UntypedObjectDeserializer$Vanilla.deserializeWithType(UntypedObjectDeserializer.java:712) ~[jackson-databind-2.9.7.jar!/:2.9.7]
at com.fasterxml.jackson.databind.deser.std.MapDeserializer._readAndBindStringKeyMap(MapDeserializer.java:529) ~[jackson-databind-2.9.7.jar!/:2.9.7]
at com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:364) ~[jackson-databind-2.9.7.jar!/:2.9.7]
at com.fasterxml.jackson.databind.deser.std.MapDeserializer.deserialize(MapDeserializer.java:29) ~[jackson-databind-2.9.7.jar!/:2.9.7]
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4013) ~[jackson-databind-2.9.7.jar!/:2.9.7]
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3077) ~[jackson-databind-2.9.7.jar!/:2.9.7]
at org.springframework.batch.core.repository.dao.Jackson2ExecutionContextStringSerializer.deserialize(Jackson2ExecutionContextStringSerializer.java:70) ~[spring-batch-core-4.0.1.RELEASE.jar!/:4.0.1.RELEASE]
at org.springframework.batch.core.repository.dao.Jackson2ExecutionContextStringSerializer.deserialize(Jackson2ExecutionContextStringSerializer.java:50) ~[spring-batch-core-4.0.1.RELEASE.jar!/:4.0.1.RELEASE]
at org.springframework.batch.core.repository.dao.JdbcExecutionContextDao$ExecutionContextRowMapper.mapRow(JdbcExecutionContextDao.java:322) ~[spring-batch-core-4.0.1.RELEASE.jar!/:4.0.1.RELEASE]
One person had suggested that :
This error happens when the execution context of your job is serialized with version 3 (using XStream by default) and then deserialized with version 4 (using Jackson by default). So either downgrade Spring Batch to version 3 or configure your job repository to use the XStreamExecutionContextStringSerializer.
In your case, you have already defined a bean of type BatchConfigurer, so you can override the createJobRepository method and configure the XStream serializer. For example:
@Bean
BatchConfigurer configurer(@Qualifier("dataSource") DataSource dataSource, PlatformTransactionManager transactionManager) {
return new DefaultBatchConfigurer(dataSource) {
@Override
protected JobRepository createJobRepository() throws Exception {
JobRepositoryFactoryBean factory = new JobRepositoryFactoryBean();
factory.setDataSource(dataSource);
factory.setTransactionManager(transactionManager);
factory.setSerializer(new XStreamExecutionContextStringSerializer());
factory.afterPropertiesSet();
return factory.getObject();
}
};
}
I added the above bean to my main class but still got the error.
A: In the batch_job_execution_context table, you may have records that were created by Spring Batch version 3. When you start a new execution with Spring Batch version 4, it reads those previous records and tries to deserialize them with the new serializer; this is why you are getting the error.
Spring Batch version 3 records look like this in the batch_job_execution_context table:
{"map":[{"entry":{"string":["key","value"]}}]}
Spring Batch version 4 records look like this in the batch_job_execution_context table:
{"key":"value"}
Remove all the records created by version 3. This will fix the issue.
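If deleting everything is too aggressive (for example, you want to keep version-4 history), a narrower sketch is to target only the XStream-era rows, which are recognizable by the `{"map":` prefix shown above. Table and column names here follow the default Spring Batch schema (including the step-level context table); adjust them if your schema uses a prefix. Back up the tables before deleting.

```sql
-- Count how many contexts still use the version-3 (XStream) layout.
SELECT COUNT(*) FROM BATCH_JOB_EXECUTION_CONTEXT
WHERE SHORT_CONTEXT LIKE '{"map":%';

-- Remove only those rows.
DELETE FROM BATCH_JOB_EXECUTION_CONTEXT
WHERE SHORT_CONTEXT LIKE '{"map":%';

-- The step-level table has the same issue.
DELETE FROM BATCH_STEP_EXECUTION_CONTEXT
WHERE SHORT_CONTEXT LIKE '{"map":%';
```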
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/53042640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1"
}
|
Q: How can I integrate Keycloak with Kafka? I have configured a 3-node Kafka cluster. Now we want to set up security for Kafka with Keycloak. Please let me know the ways to do this.
Question 1: How to implement security for kafka broker to kafka broker with keycloak?
Question 2: How to implement security for kafka client to kafka broker with keycloak?
Note: We had already Keycloak setup.
A: You can configure Kafka to use the OAUTHBEARER SASL mechanism, which is implemented in the latest Kafka release. You can find more info on how to configure it here.
You can also get more information about the feature from the Kafka docs.
You need to implement org.apache.kafka.common.security.auth.AuthenticateCallbackHandler to obtain a token from Keycloak and to validate tokens against Keycloak.
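As a hedged sketch of what the broker side typically looks like with SASL/OAUTHBEARER, the relevant server.properties entries are along these lines. The listener name, port, and handler class names below are placeholders, not values from the question; the callback handler classes are the ones you would implement yourself against Keycloak's token endpoint and JWKS:

```properties
# Broker-to-broker and client-to-broker auth via SASL/OAUTHBEARER (sketch).
listeners=SASL_SSL://:9093
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=OAUTHBEARER
sasl.enabled.mechanisms=OAUTHBEARER

# Custom callback handlers that fetch/validate tokens against Keycloak
# (class names are hypothetical - you implement AuthenticateCallbackHandler).
listener.name.sasl_ssl.oauthbearer.sasl.login.callback.handler.class=com.example.KeycloakLoginCallbackHandler
listener.name.sasl_ssl.oauthbearer.sasl.server.callback.handler.class=com.example.KeycloakValidatorCallbackHandler
```

Clients use the same mechanism with their own login callback handler configured via sasl.login.callback.handler.class in the client properties.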
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/53516762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5"
}
|
Q: Pandas - get the mean of one column using descending N rows of another column I have this dataframe:
team opponent home_dummy round points
0 Athlético-PR Flamengo 0 13 22.91
1 Athlético-PR Atlético-GO 0 17 23.6
2 Athlético-PR Fortaleza 1 20 28.58
3 Athlético-PR Fortaleza 0 1 75.71
4 Athlético-PR Ceará 1 14 42.22
5 Athlético-PR Coritiba 1 10 52.91
6 Athlético-PR Goiás 1 2 39.82
7 Athlético-PR Goiás 0 21 65.13
8 Athlético-PR Internacional 0 15 43.09
9 Athlético-PR Grêmio 1 18 15.38
10 Athlético-PR Sport 0 19 13.09
11 Athlético-PR Santos 1 22 65.45
12 Athlético-PR Santos 0 3 28.04
13 Athlético-PR Palmeiras 1 4 -7.31
14 Athlético-PR Palmeiras 0 23 11.02
15 Athlético-PR Vasco 0 8 15.93
16 Athlético-PR Fluminense 1 5 9.16
17 Athlético-PR Bahia 1 12 59.78
18 Athlético-PR Corinthians 1 16 18.22
19 Athlético-PR Botafogo 1 9 29.35
20 Athlético-PR Bragantino 1 7 20.07
.......
The dataframe above has another 19 teams, other than 'Athlético-PR'.
How do I group this dataframe, getting, for each team:
*
*Mean points for the last N rounds, say N=6, which would get the mean of rounds 23, 22, 21, 20, 19, 18.
*Mean points for the last N rounds passing 'home_dummy' as condition, which would get the mean of either rounds 23, 21, 19, 17, 15, 13 or rounds 22, 20, 18, 16, 14, 12.
ending up with:
team mean_total mean_home_0 mean_home_1
0 Athlético-PR mean x mean y mean z
...
A: I think you can do two separate groupbys:
df = df.sort_values(['team','round'])
out = (df.groupby(['team','home_dummy']).tail(6)
.groupby(['team','home_dummy'])['points'].mean()
.unstack('home_dummy')
.add_prefix('mean_home_')
)
out['mean_total'] = df.groupby('team').tail(6).groupby('team')['points'].mean()
Output:
home_dummy mean_home_0 mean_home_1 mean_total
team
Athlético-PR 29.806667 38.271667 33.108333
Another option is to write a UDF so as to reduce the two groupbys to one:
def last6mean(x):
return x.tail(6).mean()
out = (df.groupby(['team','home_dummy'])['points']
.apply(last6mean)
.unstack('home_dummy')
.add_prefix('mean_home_')
)
out['mean_total'] = df.groupby('team')['points'].apply(last6mean)
A: You can create two dataframes, filtering for the 6 largest rounds with groupby, then compute the means and merge them together:
First make sure they are numeric:
df['round'] = df['round'].astype(int)
df['points'] = df['points'].astype(float)
OR
df['round'] = pd.to_numeric(df['round'], errors='coerce')
df['points'] = pd.to_numeric(df['points'], errors='coerce')
Then, you can run the following:
df1 = (df.loc[df.index.isin(df.groupby('team')['round'].nlargest(6).reset_index().iloc[:,1]),
['team', 'points']].groupby('team')['points'].mean().rename('mean_total').reset_index())
df2 = (df.loc[df.index.isin(df.groupby(['team','home_dummy'])['round'].nlargest(6).reset_index().iloc[:,2]),
['team', 'home_dummy', 'points']].groupby(['team','home_dummy'])['points'].mean()
.unstack(1).add_prefix('mean_home_').reset_index())
df1.merge(df2, on='team')
Out[1]:
team mean_total mean_home_0 mean_home_1
0 Athlético-PR 33.108333 29.806667 38.271667
You could also create a function to make cleaner:
def f(df, cols):
return df.loc[df.index.isin(df.groupby(cols)['round'].nlargest(6).reset_index().iloc[:, len(cols)]),
cols + ['points']].groupby(cols)['points'].mean()
(f(df, ['team']).rename('mean_total').reset_index().merge(
f(df, ['team', 'home_dummy']).unstack(1).add_prefix('mean_home_').reset_index()))
Out[2]:
team mean_total mean_home_0 mean_home_1
0 Athlético-PR 33.108333 29.806667 38.271667
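As a self-contained illustration of the helper, here it is run on invented data (two teams, rounds 1–8, with team B scoring double):

```python
import pandas as pd

# Invented data: two teams, rounds 1..8, alternating away/home.
df = pd.DataFrame({
    'team': ['A'] * 8 + ['B'] * 8,
    'round': list(range(1, 9)) * 2,
    'home_dummy': [0, 1] * 8,
    'points': [float(r) for r in range(1, 9)] + [float(2 * r) for r in range(1, 9)],
})

def f(df, cols):
    # Keep only rows belonging to each group's 6 largest rounds,
    # then average points per group.
    idx = df.groupby(cols)['round'].nlargest(6).reset_index().iloc[:, len(cols)]
    return df.loc[df.index.isin(idx), cols + ['points']].groupby(cols)['points'].mean()

result = (f(df, ['team']).rename('mean_total').reset_index()
          .merge(f(df, ['team', 'home_dummy']).unstack(1)
                 .add_prefix('mean_home_').reset_index()))
print(result)
```

Each (team, home_dummy) group has only 4 rows here, so nlargest(6) keeps them all; the per-team mean_total is taken over rounds 3–8.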
A: I know there are already awesome answers here, but I'd just like to share one using pivot_table:
If your DataFrame looks something like this
team home_dummy round points
0 A 0 1 3
1 A 0 2 4
2 A 1 3 7
3 A 1 4 8
4 B 0 1 3
5 B 1 2 6
6 B 1 3 9
7 B 0 4 12
8 C 0 1 5
9 C 1 2 5
10 C 1 3 5
11 C 0 4 5
Then you could do something like:
N = 2
df['points'] = df['points'].astype(float)
result = df.pivot_table(index='team',
columns='home_dummy',
values='points',
aggfunc=lambda x: x.tail(N).mean())
result.rename({i: f"mean_home_{i}" for i in result.columns}, axis=1, inplace=True)
result.columns.name = None
result['mean_total'] = df.groupby('team').tail(N).groupby('team')['points'].mean()
Which will give you the following dataframe
mean_home_0 mean_home_1 mean_total
team
A           3.5          7.5         7.5
B           7.5          7.5        10.5
C           5.0          5.0         5.0
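For reference, the snippet above can be run end-to-end as a self-contained script (the sample data re-typed from the table):

```python
import pandas as pd

# The answer's sample data, re-typed here for a self-contained check.
df = pd.DataFrame({
    'team': ['A'] * 4 + ['B'] * 4 + ['C'] * 4,
    'home_dummy': [0, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 0],
    'round': [1, 2, 3, 4] * 3,
    'points': [3, 4, 7, 8, 3, 6, 9, 12, 5, 5, 5, 5],
})

N = 2
df['points'] = df['points'].astype(float)

# Per-cell mean of the last N rows, per (team, home_dummy).
result = df.pivot_table(index='team', columns='home_dummy', values='points',
                        aggfunc=lambda x: x.tail(N).mean())
result.rename({i: f"mean_home_{i}" for i in result.columns}, axis=1, inplace=True)
result.columns.name = None
result['mean_total'] = df.groupby('team').tail(N).groupby('team')['points'].mean()
print(result)
```

Note that mean_total here is the mean of each team's last N rows overall (for team A, points 7 and 8), not the mean of the two home/away cell means.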
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/65115116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3"
}
|
Q: Problem importing a React Native component I'm having an error when importing a component; React Native tells me that the file path is wrong, but it's not!
My main component where I'm trying to import a component and my file structure:
I'm in pages/Main/index and want to import a file from components/Header/index. I'm sure about the path; I think this is caused by some React Native project config, something like that.
(I'm a Windows user and tried using /index in the path too.)
The error:
A: Try passing '../components/Header'
Please let me know if it works. Thanks
A: Your '../components/Header' path is right.
There is no styles.js in your Header component folder, but you are importing it in index.js, which is in the Header folder. The error above is about the path to styles.js inside the Header folder's index.js.
Read the error carefully: it mentions the original path and the target path.
Hope you get it.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/60332314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: Why is a UITableView not getting any user interaction when its view controller is added to a UITabBarController as a subview? I'm adding a left side menu programmatically to a UITabBarController. To do this I insert a view controller's view at index 0 as a subview of the tab bar controller. When I press the menu icon, the tab view first moves right and the menu's UITableView is shown, but I cannot interact with the UITableView. Where did I go wrong?
The complete UITabBarController subclass file:
import UIKit
class MainTabBarController: UITabBarController {
// MARK: Properties
var menuController: MenuController!
var centerController: UIViewController!
var isExpanded = false
// MARK: Initialization
override func viewDidLoad() {
super.viewDidLoad()
configureHomeController()
}
override var preferredStatusBarStyle: UIStatusBarStyle {
return .lightContent
}
override var preferredStatusBarUpdateAnimation: UIStatusBarAnimation {
return .slide
}
override var prefersStatusBarHidden: Bool {
return isExpanded
}
// MARK: Handlers
func configureHomeController() {
let navigationController = self.viewControllers![0] as! UINavigationController
let homeController = navigationController.viewControllers[0] as! FirstViewController
homeController.homeControllerDelegate = self
centerController = homeController
}
//Configuring Side Menu Controller
func configureMenuController() {
if menuController == nil {
// add menu controller here
menuController = MenuController()
menuController.homeControllerDelegate = self
// Add Child View Controller
addChild(menuController)
// Add Child View as Subview
view.insertSubview(menuController.view, at: 0)
// Configure Child View
menuController.view.frame = view.bounds
menuController.view.autoresizingMask = [.flexibleWidth, .flexibleHeight]
// Notify Child View Controller
menuController.didMove(toParent: self)
}
}
func animateSideMenuOpeningAndClosing(shouldExpand: Bool, menuItem: MenuItem?) {
if shouldExpand {
// Show Menu
UIView.animate(withDuration: 0.5, delay: 0, usingSpringWithDamping: 0.8, initialSpringVelocity: 0, options: .curveEaseInOut, animations: {
self.centerController.view.frame.origin.x = self.centerController.view.frame.width - 80
}, completion: nil)
} else {
// Hide Menu
UIView.animate(withDuration: 0.5, delay: 0, usingSpringWithDamping: 0.8, initialSpringVelocity: 0,options: .curveEaseInOut, animations: {
self.centerController.view.frame.origin.x = 0
}) { (_) in
guard let menuItem = menuItem else {return}
self.didSelectMenuItem(menuItem: menuItem)
}
}
animateStatusBar ()
}
func didSelectMenuItem(menuItem: MenuItem) {
switch menuItem {
case .Dashboard:
print("Show Dashboard")
case .Profile:
print("Show Profile")
case .Notifications:
print("Show Notifications")
case .Contacts:
let controller = SettingsController()
controller.username = "YAMIN"
present(UINavigationController(rootViewController: controller), animated: true, completion: nil)
}
}
func animateStatusBar () {
UIView.animate(withDuration: 0.5, delay: 0, usingSpringWithDamping: 0.8, initialSpringVelocity: 0,options: .curveEaseInOut, animations: {
self.setNeedsStatusBarAppearanceUpdate()
}, completion: nil)
}
}
extension MainTabBarController: HomeControllerDelegate {
func handleMenuToogle(forMenuItem menuItem: MenuItem?) {
print("Pressed")
if !isExpanded {
configureMenuController()
}
isExpanded = !isExpanded
animateSideMenuOpeningAndClosing(shouldExpand: isExpanded, menuItem: menuItem)
}
}
A: You insert the subview at index 0, so I think there are other views higher in the subview hierarchy that swallow the touches.
You can inspect your subviews with the "Debug View Hierarchy" button.
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/55605477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: How to make some layers invisible with ActionScript? I have a map and I can use on (release) and on (rollOver), but I can't use an onLoad function. I want some cities to be invisible on load. My code is below;
Here is my city code:
on (rollOver) { // this code is on every city.
y = new String(_name);
a = y.slice(1, 3);
_parent.rbtxt(a);
}
on (rollOut) {
_parent.rbalon(a);
}
on (release) {
_parent.rpress(a);
}
Here is my actionscript codes:
ilad="a,Adana,Adıyaman,Afyon,Ağrı"; //.. more city
ilurl="a,adana,adıyaman,afyon,ağrı"; //.. more city
passivearray="b,adana,afyon"; // these cities must be passive.
function rbtxt(a) {
var Register_1_ = a;
var Register_2_ = this;
balon._visible = true;
arbtxt = ilad.split(",");
balon.txt.text = arbtxt[Register_1_];
Register_2_[("x" + Register_1_)].play();
balon._x = Register_2_[("x" + Register_1_)]._x;
balon._y = (Register_2_[("x" + Register_1_)]._y - Register_2_[("x" + Register_1_)]._height / 2) - 25;
}
function rbalon(a) {
balon._visible = false;
this[("x" + a)].gotoAndStop(1);
}
The array of cities that should be passive is passivearray.
How can I make some cities invisible?
A: At the beginning of your script, you can use
movieclipName._visible=false;
Then you modify the same property to reverse that.
A: Have you tried:
on(load){
this._visible = false;
}
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/31281041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: C# printing through PDF drivers, print-to-file option will output PS instead of PDF After struggling the whole day, I identified the issue, but that didn't solve my problem.
On short:
I need to open a PDF, convert it to BW (grayscale), search for some words, and insert some notes near the found words. At first look it seems easy, but I discovered how hard PDF files are to process (they have no concept of "words" and so on).
Now, the first task, converting to grayscale, just drove me crazy. I didn't find a working solution, either commercial or free. I came up with this approach:
*
*open the PDF
*print with windows drivers, some free PDF printers
This is quite ugly since it forces the C# users to install such 3rd-party software, but that will do for the moment. I tested FreePDF, CutePDF and PDFCreator. All of them work "stand-alone" as expected.
Now I tried to print from C#. Obviously, I don't want the print dialog, just to select the BW option and print (i.e. convert).
The following code just uses a PDF library, shown for clarity only.
Aspose.Pdf.Facades.PdfViewer viewer = new Aspose.Pdf.Facades.PdfViewer();
viewer.BindPdf(txtPDF.Text);
viewer.PrintAsGrayscale = true;
//viewer.RenderingOptions = new RenderingOptions { UseNewImagingEngine = true };
//Set attributes for printing
//viewer.AutoResize = true; //Print the file with adjusted size
//viewer.AutoRotate = true; //Print the file with adjusted rotation
viewer.PrintPageDialog = true; //Do not produce the page number dialog when printing
////PrinterJob printJob = PrinterJob.getPrinterJob();
//Create objects for printer and page settings and PrintDocument
System.Drawing.Printing.PrinterSettings ps = new System.Drawing.Printing.PrinterSettings();
System.Drawing.Printing.PageSettings pgs = new System.Drawing.Printing.PageSettings();
//System.Drawing.Printing.PrintDocument prtdoc = new System.Drawing.Printing.PrintDocument();
//prtdoc.PrinterSettings = ps;
//Set printer name
//ps.PrinterName = prtdoc.PrinterSettings.PrinterName;
ps.PrinterName = "CutePDF Writer";
ps.PrintToFile = true;
ps.PrintFileName = @"test.pdf";
//
//ps.
//Set PageSize (if required)
//pgs.PaperSize = new System.Drawing.Printing.PaperSize("A4", 827, 1169);
//Set PageMargins (if required)
//pgs.Margins = new System.Drawing.Printing.Margins(0, 0, 0, 0);
//Print document using printer and page settings
viewer.PrintDocumentWithSettings(ps);
//viewer.PrintDocument();
//Close the PDF file after printing
What I discovered, and what seems to be little explained, is that if you set
ps.PrintToFile = true;
then no matter which C# PDF library or PDF printer driver you use, Windows just skips the PDF driver and, instead of PDF files, outputs PS (PostScript) ones, which obviously will not be recognized by Adobe Reader.
Now the question (and I am positive that others who want to print PDFs from C# have run into this) is how to print to CutePDF, for example, and still suppress any filename dialog?
In other words, how to print silently with a programmatically selected filename from a C# application, or somehow convince "print to file" to go through the PDF driver rather than Windows' default PS driver.
Thanks very much for any hints.
A: I solved the grayscale conversion with a commercial component per the post below, and I also posted my complete solution there, in case anyone else struggles like me.
Converting PDF to Grayscale pdf using ABC PDF
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/44678404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "0"
}
|
Q: PHP: Uncaught Error: Cannot use object of type mysqli_result as array I have a simple section in which I am displaying data from the database.
If the user clicks e.g. Construction and selects e.g. Algeria, I display [251, 211,712]; if a user clicks e.g. Power and selects e.g. Egypt, I display [406, 228,559]; etc.
Now, if the user clicks the All available industries button and selects e.g. Algeria, I want to display [251+203+130, 211,712+179,877+154,946], computed in a simple way like this in SQL:
SELECT sum(SumofNoOfProjects) as sum_projects, sum(SumofTotalBudgetValue) as sum_value FROM `meed` WHERE Countries = 'Algeria'
Which gives me this: [611, 546535]
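The aggregate itself can be sketched against an in-memory SQLite table (schema mirrored from the question; the row values here are invented, not the real data):

```python
import sqlite3

# In-memory sketch of the SUM query; the real code runs against MySQL.
con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE meed (Countries TEXT, Sector TEXT, "
            "SumofNoOfProjects INTEGER, SumofTotalBudgetValue REAL)")
con.executemany("INSERT INTO meed VALUES (?, ?, ?, ?)", [
    ('Algeria', 'Construction', 10, 100.0),
    ('Algeria', 'Power',        20, 200.0),
    ('Algeria', 'Oil',          30, 300.0),
    ('Egypt',   'Power',         5,  50.0),
])

# One country -> one row of per-country totals.
cur = con.execute(
    "SELECT sum(SumofNoOfProjects) AS sum_projects, "
    "sum(SumofTotalBudgetValue) AS sum_value "
    "FROM meed WHERE Countries = ?", ('Algeria',))
sum_projects, sum_value = cur.fetchone()
print(sum_projects, sum_value)
```

The database does the addition, so the PHP side only needs to fetch a single row.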
Here is my solution
HTML
<div id="interactive-layers">
<div buttonid="43" class="video-btns">
<span class="label">Construction</span></div>
<div buttonid="44" class="video-btns">
<span class="label">Power</span></div>
<div buttonid="45" class="video-btns">
<span class="label">Oil</span></div>
<div buttonid="103" class="video-btns">
<span class="label">All available industries</span>
</div>
</div>
Here is the JS AJAX:
$("#interactive-layers").on("click", ".video-btns", function(e){
if( $(e.target).find("span.label").html()=="Confirm" ) {
var selectedCountries = [];
$('.video-btns .selected').each(function () {
selectedCountries.push( $(this).parent().find("span.label").html() ) ;
});
if( selectedCountries.length>0 ) {
if(selectedCountries.indexOf("All available countries")>-1) {
selectedCountries = [];
}
} else {
return;
}
var ajaxurl = "";
if(selectedCountries.length>0) {
ajaxurl = "data.php";
} else {
ajaxurl = "dataall.php";
}
$.ajax({
url: ajaxurl,
type: 'POST',
data: {
countries: selectedCountries.join(","),
sector: selectedSector
},
success: function(result){
console.log(result);
result = JSON.parse(result);
$(".video-btns").each(function () {
var getBtn = $(this).attr('buttonid');
if (getBtn == 106) {
var totalProjects = $("<span class='totalprojects'>"+ result[0] + "</span>");
$(this).append(totalProjects)
}else if(getBtn ==107){
var resultBudget = result[1]
var totalBudgets = $("<span class='totalbudget'>"+ '$m' +" " + resultBudget +"</span>");
$(this).append( totalBudgets)
}
});
return;
}
});
}
});
UPDATE: Here is the updated data.php
<?php
$selectedSectorByUser = $_POST['sector'];
$countries = explode(",", $_POST['countries']);
echo '$countries';
$conn = mysqli_connect("localhost", "root", "", "meedadb");
if (mysqli_connect_errno()) {
echo "Failed to connect to MySQL: " . mysqli_connect_error();
}
$result = mysqli_query($conn, "SELECT * FROM meed");
$data = array();
$wynik = [];
$totalProjects = 0;
$totalBudget = 0;
while ($row = mysqli_fetch_array($result))
{
if($row['Sector']==$selectedSectorByUser && in_array($row['Countries'],$countries ) ) {
$totalProjects+= $row['SumofNoOfProjects'];
$totalBudget+= $row['SumofTotalBudgetValue'];
}elseif($selectedSectorByUser =="All available industries"){
$result = mysqli_query($conn,
"SELECT sum(SumofNoOfProjects) as 'SumofNoOfProjects, sum(SumofTotalBudgetValue) as SumofTotalBudgetValue
FROM `meed`
WHERE Countries = '$countries'");
while( $row=mysqli_fetch_array($result,MYSQLI_ASSOC)) {
echo json_encode([ $row['SumofNoOfProjects,'], $row['SumofTotalBudgetValue '] ] );
exit;
}
exit;
}
}
echo json_encode([ $totalProjects, $totalBudget ] );
exit();
?>
Now, when the user clicks the All available industries button and selects a country, I get the following error:
Fatal error: Uncaught Error: Cannot use object of type mysqli_result as array in C:\custom-xammp\htdocs\editor\data.php:23
What do I need to change to get what I want? Any help or suggestion will be appreciated.
A: You should fetch a row (at least):
if (mysqli_connect_errno()) {
echo "Failed to connect to MySQL: " . mysqli_connect_error();
}
$result = mysqli_query($conn,
"SELECT sum(SumofNoOfProjects) as sum_projects, sum(SumofTotalBudgetValue) as sum_value
FROM `meed`
WHERE Countries = '$countries'");
while( $row = mysqli_fetch_array($result, MYSQLI_ASSOC) ) {
echo json_encode([ $row['sum_projects'], $row['sum_value'] ] );
exit;
}
For multiple countries:
Assuming your $_POST['countries'] contains "'Egypt','Algerie'",
then you could use a query such as
"SELECT sum(SumofNoOfProjects) as sum_projects, sum(SumofTotalBudgetValue) as sum_value
FROM `meed`
WHERE Countries IN (" . $_POST['countries'] . ");"
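The answer above splices the country list straight into the SQL string. The same multi-country aggregate can be sketched with one placeholder per country instead (in-memory SQLite; table and values invented) — the shape of the IN clause carries over to a mysqli prepared statement:

```python
import sqlite3

# Multi-country sum with placeholders rather than string concatenation.
con = sqlite3.connect(':memory:')
con.execute("CREATE TABLE meed (Countries TEXT, SumofNoOfProjects INTEGER, "
            "SumofTotalBudgetValue REAL)")
con.executemany("INSERT INTO meed VALUES (?, ?, ?)", [
    ('Egypt',   10, 100.0),
    ('Algeria', 20, 200.0),
    ('Morocco', 40, 400.0),
])

countries = ['Egypt', 'Algeria']
placeholders = ','.join('?' * len(countries))  # "?,?"
cur = con.execute(
    f"SELECT sum(SumofNoOfProjects), sum(SumofTotalBudgetValue) "
    f"FROM meed WHERE Countries IN ({placeholders})", countries)
total_projects, total_value = cur.fetchone()
print(total_projects, total_value)
```

Binding values this way also sidesteps SQL injection, which the concatenated version is exposed to.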
|
{
"language": "en",
"url": "https://stackoverflow.com/questions/57134867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "-1"
}
|