doc_23525500
Is there any other way of doing this?
Platforms: both Android and iOS.
doc_23525501
For handling push notifications in the background, I follow the steps below:
*
*In Capabilities, enable Push Notifications.
*In Capabilities -> Background Modes, enable Remote notifications.
*In didFinishLaunchingWithOptions, request all notification permissions for iOS 10.
*For push notifications, use UNUserNotificationCenter.
*When the app is in the foreground, push notifications work fine and the method below is called:
userNotificationCenter:(UNUserNotificationCenter *)center willPresentNotification:(UNNotification *)notification withCompletionHandler:(void (^)(UNNotificationPresentationOptions options))completionHandler
But my problem is that when the app is in the background, no method is called. If anyone has an idea or solution for handling push notifications in the background on iOS 10, please help me.
Thanks.
A: willPresentNotification is called when the app is in the foreground. Have a look at the documentation:
- (void)userNotificationCenter:(UNUserNotificationCenter *)center
       willPresentNotification:(UNNotification *)notification
         withCompletionHandler:(void (^)(UNNotificationPresentationOptions options))completionHandler {
    // The method will be called on the delegate only if the application is in the foreground.
    // If the method is not implemented or the handler is not called in a timely manner then the notification will not be presented.
    // The application can choose to have the notification presented as a sound, badge, alert and/or in the notification list.
    // This decision should be based on whether the information in the notification is otherwise visible to the user.
}
- (void)userNotificationCenter:(UNUserNotificationCenter *)center
didReceiveNotificationResponse:(UNNotificationResponse *)response
         withCompletionHandler:(void (^)(void))completionHandler {
    // The method will be called on the delegate when the user responded to the notification by opening the application,
    // dismissing the notification or choosing a UNNotificationAction.
    // The delegate must be set before the application returns from applicationDidFinishLaunching:.
}
Check in didReceiveNotificationResponse; there you will get what you need.
ALSO, if you need to fetch any data or do any processing, enable Background fetch in Background Modes and use the method below:
- (void)application:(UIApplication *)application didReceiveRemoteNotification:(NSDictionary *)userInfo fetchCompletionHandler:(nonnull void (^)(UIBackgroundFetchResult))completionHandler {
    completionHandler(UIBackgroundFetchResultNewData);
}
Handling APNs on the basis of application state:
if (application.applicationState == UIApplicationStateInactive)
{
    // App is transitioning from background to foreground (user tapped the
    // notification); do whatever you need on tap here.
}
else if (application.applicationState == UIApplicationStateActive)
{
    // App is currently active; you can update badge counts here.
}
else if (application.applicationState == UIApplicationStateBackground)
{
    // App is in background; if the content-available key of your notification
    // is set to 1, poll your backend to retrieve data and update your interface here.
}
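For reference, the iOS 10 permission setup the question refers to ("give all permission in didFinishLaunchingWithOptions") usually looks something like the following sketch using the UserNotifications framework; error handling is omitted and the delegate assignment assumes the app delegate adopts UNUserNotificationCenterDelegate:

```objectivec
#import <UserNotifications/UserNotifications.h>

// In application:didFinishLaunchingWithOptions:
UNUserNotificationCenter *center = [UNUserNotificationCenter currentNotificationCenter];
center.delegate = self; // must be set before the app finishes launching
[center requestAuthorizationWithOptions:(UNAuthorizationOptionAlert |
                                         UNAuthorizationOptionSound |
                                         UNAuthorizationOptionBadge)
                      completionHandler:^(BOOL granted, NSError * _Nullable error) {
    if (granted) {
        dispatch_async(dispatch_get_main_queue(), ^{
            [[UIApplication sharedApplication] registerForRemoteNotifications];
        });
    }
}];
```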
A: When a push notification containing the alert key is received iOS will present the notification to the user. If the user interacts with the alert your application will be launched into the foreground mode.
Silent notifications are push notifications that contain the content-available key. These notifications are not presented to the user and can't contain the alert, sound, or badge keys. When iOS delivers a silent notification your application is launched into the background mode. The application can perform a very limited amount of work in the background in response to the silent notification.
A silent notification looks like this:
{
"aps" : {
"content-available" : 1
},
"com.yourcompany.something.something" : "something"
}
When it is delivered the application delegate method application:didReceiveRemoteNotification:fetchCompletionHandler: is called. Your implementation of that method has 30 seconds to call the block passed in the fetchCompletionHandler parameter.
- (void)application:(UIApplication *)application didReceiveRemoteNotification:(NSDictionary *)userInfo fetchCompletionHandler:(void (^)(UIBackgroundFetchResult result))completionHandler {
    completionHandler(UIBackgroundFetchResultNoData);
}
A: This was the solution for me:
{
"aps" : {
"content-available" : 1
},
"acme1" : "bar",
"acme2" : 42
}
What is important is to set content-available to 1.
doc_23525502
$query = mysql_query("SELECT m.*, i.*
    FROM members AS m
    INNER JOIN information AS i
        ON i.m_id = m.m_id AND i.secret_code = '$code' AND i.password = '$password'")
    or die(mysql_error());
I run this query on every page. I put it in the header.php file, which is included in index.php. I do everything through index.php by including files, like this: if the page is index.php?id=gym, I include the gym folder with all its files.
What would you suggest? Rewrite the whole system and insert into each file a query that pulls out only the necessary information? Use sessions? But if I buy, for example, two items, how can I know when to fetch how much money the member has? Thank you very much.
A: If you've got two tables and there are about 60-70 rows, it's all going to be in memory extremely easily. Is it actually causing you any pain at all? I can't see how this would be a problem, unless you're either getting a vast number of hits, or you're running your database and site on a tiny, tiny machine.
A: Some information could be stored in the session. Non-volatile things like user name, player/character name, etc... That's not going to change very often. But things like current score, gold/credits available, etc.. which change frequently and should be kept consistent would most likely have to be pulled from the database each time. Consider what happens if they open two browser windows and do some shopping in each. Both load up saying you've got 500 gold available and both could potentially purchase something for 499 gold at the same time.
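For the two-windows case described above, the usual guard is to make the purchase a single conditional UPDATE, so the balance check and the deduction happen atomically in the database. A sketch (the table, column, and literal values here are illustrative, not from the original schema):

```sql
-- Deduct 499 gold only if the member can still afford it; with two concurrent
-- purchases, exactly one UPDATE will match and report an affected row.
UPDATE members
SET gold = gold - 499
WHERE m_id = 42 AND gold >= 499;
-- Then check the affected-row count: 1 = purchase succeeded, 0 = insufficient gold.
```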
But, it comes down to what's faster, really. Do some benchmarking and see if pulling 60-70 bits of data from the database for each hit is faster/slow than pulling a mix of database/session data.
A: A single query per page is actually really good - a more complicated site/application will include multiple requests. If the information isn't always required, you could include right at the beginning of every page:
$require_lookup=false;
Or true - depending on whether the info needs to be loaded this time. This spares the lookup on the pages where it isn't needed (presumably some pages are outside the game - the help page, for example).
Storing that data in sessions, assuming they're file based (you can get database base sessions too, which will have you right back where you started) will simply move the memory constraints from the database to the file-system. Ultimately if your application is going to become popular, you'll be separating the database server from the PHP server - maybe even using multiple PHP servers - which will mean a file based session system will no longer work.
Stick with what you've got - it'll work fine for now.
doc_23525503
Superclass androidx.core.app.f of androidx.activity.ComponentActivity is declared final
According to the Google issue tracker this is new; maybe someone has a solution to this problem.
A: I added
-keep class androidx.core.app.** { *; }
to the ProGuard rules to solve the issue.
Edit: the minus sign (-) needs to be at the beginning of the line.
A: On 27.07.2022 Google team members posted that the bug has been fixed, and on 09.08.2022 they added some explanation that you can find in this link.
In two words: in pre-launch testing, the app-crawler APK and the app APK generated different "keep rules" for shrinking, which causes "no such method" or "superclass is declared final" errors.
A: Did you upgrade some libraries in your build.gradle? Coroutines, maybe?
We have this exception reported too, and it has the same stack trace as an older exception we had before, so it is possibly related to the Kotlin coroutines library version as well:
java.lang.VerifyError: Verifier rejected class. The code works fine in debug mode but throws this error in release mode.
A: Try updating implementation 'com.google.android.gms' in build.gradle to the latest version.
A: I had the same issue with Flutter. I removed Firebase from build.gradle:
// implementation platform('com.google.firebase:firebase-bom:29.0.4')
Downgrading Firebase to 29.0.2 also works.
A: I added this line to gradle.properties:
android.enableR8.fullMode=true
I hope it helps you too.
A: For builds made before August 2022:
If your APK/App Bundle was created before August 2022, the issue might be related to an internal issue at Google.
See this issue tracker: https://issuetracker.google.com/issues/237785592?pli=1
For me it was enough to build a new release on my machine (increasing the version number by at least 1!) and upload it. Afterwards, the problem was gone.
doc_23525504
A: You can limit the delete functionality of a List/Form depending on the EditMode state, by using deleteDisabled(_:).
The following is a short example demonstrating deleting which only works in edit mode:
struct ContentView: View {
    @State private var data = Array(1 ... 10)

    var body: some View {
        NavigationView {
            Form {
                DataRows(data: $data)
            }
            .navigationTitle("Delete Test")
            .toolbar {
                EditButton()
            }
        }
    }
}

struct DataRows: View {
    @Environment(\.editMode) private var editMode
    @Binding private var data: [Int]

    init(data: Binding<[Int]>) {
        _data = data
    }

    var body: some View {
        ForEach(data, id: \.self) { item in
            Text("Item: \(item)")
        }
        .onMove { indices, newOffset in
            data.move(fromOffsets: indices, toOffset: newOffset)
        }
        .onDelete { indexSet in
            data.remove(atOffsets: indexSet)
        }
        .deleteDisabled(editMode?.wrappedValue != .active)
    }
}
doc_23525505
I think system("pause") is Visual Studio / Windows specific, and getchar() or similar calls that wait for user input require unnecessary input just to exit a program running under, say, gcc.
Any ideas?
-- Edit --
I also tried hitting Ctrl+F5, but it doesn't always work. So I'm looking for an alternative command (if there is any) or setup that can pause the console screen in Visual Studio without causing any discrepancy in other C++ compilers.
A: This problem only occurs when you launch a console program from a GUI. So there is a very simple cross-platform workaround -- run console programs from a console. If you want to make a program that runs well from a GUI, make a GUI program.
The other suggested workarounds are awful. Both getchar() and system("pause") interfere with any attempt to use the program as a filter or to redirect its input and output. It doesn't make sense to break a program so that it works "correctly" when used incorrectly.
A: You can use this method:
after the last line of your program (before return 0;), use, for example, a (cin >> x;) command.
The program will then wait for you to enter new data, and you can see your answer while debugging.
Good luck with this trick!
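A slightly more robust variant of the same trick, wrapped in a helper. This is a sketch, not Visual Studio specific: unlike system("pause") it needs no shell, and it first discards any input left over from earlier std::cin >> reads:

```cpp
#include <iostream>
#include <limits>

// Portable "press Enter to exit" helper. Returns immediately when input
// reaches EOF (e.g. when the program's input is piped or redirected),
// so it does not break the program when used as a filter.
void wait_for_enter() {
    std::cout << "Press Enter to exit...";
    std::cin.ignore(std::numeric_limits<std::streamsize>::max(), '\n');
}
```

Call it as the last statement of main(), before return 0;.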
doc_23525506
This is the data class file that I have.
public class SU2018LAB6_DriverCandidate_Wayne {
private char[] keySet = {
'A','C','B','B','D','B','C','D','A','B',
'C','A','B','C','A','B','A','C','A','D',
'B','C','A','D','B'
};
//the answer key to be graded off of
private char[] answerSet;
//the answer key that is inputted by the user
private String lastName;
private String firstName;
private String socialNumber;
private String phone;
private String address;
//the getters and setter made by Eclipse
public char[] getKeySet() {
return keySet;
}
public void setKeySet(char[] keySet) {
this.keySet = keySet;
}
public char[] getAnswerSet() {
return answerSet;
}
public void setAnswerSet(char[] answerSet) {
this.answerSet = answerSet;
}
public String getLastName() {
return lastName;
}
public void setLastName(String lastName) {
this.lastName = lastName;
}
public String getFirstName() {
return firstName;
}
public void setFirstName(String firstName) {
this.firstName = firstName;
}
public String getSocialNumber() {
return socialNumber;
}
public void setSocialNumber(String socialNumber) {
this.socialNumber = socialNumber;
}
public String getPhone() {
return phone;
}
public void setPhone(String phone) {
this.phone = phone;
}
public String getAddress() {
return address;
}
public void setAddress(String address) {
this.address = address;
}
}
The driver class of the project:
public static void main(String[] args) {
SU2018LAB6_DriverCandidate_Wayne Driver = new SU2018LAB6_DriverCandidate_Wayne();
Scanner keyboard = new Scanner(System.in);
int test = 1;
int i;
int score;
while (test == 1) {
System.out.println("Welcome to the Online Driving Test");
System.out.println("To begin, enter your last name");
Driver.setLastName(keyboard.nextLine());
System.out.println("Enter your first name");
Driver.setFirstName(keyboard.nextLine());
System.out.println("Enter your SS number");
Driver.setSocialNumber(keyboard.nextLine());
System.out.println("Enter your phone number");
Driver.setPhone(keyboard.nextLine());
System.out.println("Enter your address");
Driver.setAddress(keyboard.nextLine());
//for (i = 0; i < Driver.getKeySet().length; i++) {
// System.out.println(Driver.getKeySet()[i]);
//}
System.out.println("Driver License Test");
System.out.println("There are 25 multiple choice questions");
System.out.println("You have to get at least 20 questions correct to pass");
System.out.println("---------------------------------");
// This is the area that I am having trouble with
for (i = 0; i < Driver.getKeySet().length; i++) {
System.out.println("Question " + (i + 1) + ": ");
Driver.setAnswerSet(keyboard.next().charAt(0));
}
for (i = 0; i < Driver.getKeySet().length; i++) {
System.out.println(Driver.getAnswerSet()[i]);
}
}
}
A: The error "The method setAnswerSet(char[]) in the type SU2018LAB6_DriverCandidate_Wayne is not applicable for the arguments (char)" is telling you that the setAnswerSet method expects an array of characters, not a single character. The method lets you overwrite the reference to the array with a new one you pass in.
You have two options: Create an array filled with answers in SU2018LAB6_GradingDLTest_Wayne and "set" it as the new answerSet, or initialize answerSet in SU2018LAB6_DriverCandidate_Wayne and "get" it so you can start filling in the answers.
As it's currently written you can't just get the answerSet and update it because the array hasn't been initialized and is just a null reference.
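A minimal sketch of the second option - initialize answerSet up front, then fill one slot per question. The class and variable names here are illustrative, not from the assignment:

```java
public class AnswerSetSketch {
    // Grade an answer sheet against the key; one point per matching slot.
    static int score(char[] keySet, char[] answerSet) {
        int score = 0;
        for (int i = 0; i < keySet.length; i++) {
            if (answerSet[i] == keySet[i]) score++;
        }
        return score;
    }

    public static void main(String[] args) {
        char[] keySet = {'A', 'C', 'B'};            // shortened key for illustration
        char[] answerSet = new char[keySet.length]; // initialized up front, never null
        // In the real driver each slot would come from keyboard.next().charAt(0):
        answerSet[0] = 'A';
        answerSet[1] = 'C';
        answerSet[2] = 'D';
        System.out.println("Score: " + score(keySet, answerSet) + "/" + keySet.length);
    }
}
```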
doc_23525507
I have elements being shown and hidden on click. They all have the same ID, just different classes.
How is it possible to achieve this?
This is what my html looks like:
<div id="spotparent">
<div id="spot" class="one" style="position: absolute; z-index: 103; width: 25px; height:25px;">
<div id="ball"></div>
<div id="pulse"></div>
<div id="contentspot" class="one">
<img src="img/line-boost.png">
</div>
</div>
<div id="spot" class="two" style="position: absolute; z-index: 103; width: 25px; height: 25px;">
<div id="ball"></div>
<div id="pulse"></div>
<div id="contentspot" class="two">
<img src="img/line-tpu.png">
</div>
</div>
<div id="spot" class="three" style="position: absolute; z-index: 103; width: 25px; height:25px;">
<div id="ball"></div>
<div id="pulse"></div>
<div id="contentspot" class="three">
<img src="img/line-wrap.png">
</div>
</div>
<div id="spot" class="four" style="position: absolute; z-index: 103; width: 25px; height:25px;">
<div id="ball"></div>
<div id="pulse"></div>
<div id="contentspot" class="four">
<img src="img/line-support.png">
</div>
</div>
<div id="spot" class="five" style="position: absolute; z-index: 103; width: 25px; height:25px;">
<div id="ball"></div>
<div id="pulse"></div>
<div id="contentspot" class="five">
<img src="img/line.png">
</div>
</div>
<div id="spot" class="six" style="position: absolute; z-index: 103; width: 25px; height:25px;">
<div id="ball"></div>
<div id="pulse"></div>
<div id="contentspot" class="six">
<img src="img/line-knit.png">
</div>
</div>
<div id="spot" class="seven" style="position: absolute; z-index: 103; width: 25px; height:25px;">
<div id="ball"></div>
<div id="pulse"></div>
<div id="contentspot" class="seven">
<img src="img/line-signature.png">
</div>
</div>
</div>
The point is that on click of a spot element, the corresponding 'contentspot' is shown (by toggling a class that changes the display from none to block).
For this I'm using the following jQuery:
$('#spot.one').click(function() {
// if($('.hide').hasClass('test')){
$('#contentspot.one').toggleClass('showcontent');
});
$('#spot.two').click(function() {
// if($('.hide').hasClass('test')){
$('#contentspot.two').toggleClass('showcontent');
});
$('#spot.three').click(function() {
// if($('.hide').hasClass('test')){
$('#contentspot.three').toggleClass('showcontent');
});
$('#spot.four').click(function() {
// if($('.hide').hasClass('test')){
$('#contentspot.four').toggleClass('showcontent');
});
$('#spot.five').click(function() {
// if($('.hide').hasClass('test')){
$('#contentspot.five').toggleClass('showcontent');
});
$('#spot.six').click(function() {
// if($('.hide').hasClass('test')){
$('#contentspot.six').toggleClass('showcontent');
});
$('#spot.seven').click(function() {
// if($('.hide').hasClass('test')){
$('#contentspot.seven').toggleClass('showcontent');
});
So it gets repeated all the time.
I'm sure it's possible to write all this code in two lines; if anybody can explain the process to me so I can understand it, that would be fantastic!
Thank you for your time.
A: First, you should take @empiric's advice and use an id at most once per page. id should be a unique attribute, not reused. I would recommend something like this, using the class attribute for similar elements:
<div id="spots">
<div class="spot" style="position: absolute; z-index: 103; width: 25px; height:25px;">
<div class="ball"></div>
<div class="pulse"></div>
<div class="content">
<img src="img/line-boost.png">
</div>
</div>
<!-- more spots... -->
</div>
Then you can write your jQuery function to run any time an element with class="spot" is clicked
<script>
$('#spots .spot').on('click', function() {
$(this) //this refers to the clicked element, our .spot
.find('.content') //select child elements with class 'content'
.toggleClass('showcontent');
});
</script>
doc_23525508
It works fine in PC browsers, but it works badly in mobile browsers (especially Firefox).
The problem is that the fixed-position element scrolls with the page and then awkwardly snaps back into position once scrolling is complete.
Here's a demonstration (notice the "Top" block at the bottom right corner, which is a fixed-position element):
http://imgur.com/a/94A3v
How can I solve this problem?
A: For mobile browsers, "fixed" is discussed at length here:
http://bradfrost.com/blog/mobile/fixed-position/
You could use jQuery Mobile, as discussed here:
http://demos.jquerymobile.com/1.2.1/docs/toolbars/bars-fixed.html
You'd end up with something like
<div data-role="header" data-position="fixed">
<h1>Fixed Header!</h1>
</div>
doc_23525509
For example, suppose the final item should read "John Smith". Currently, as the user types J-O-H-N, a list containing John appears and they can select John as needed. As they move on to typing S-M-I-T-H, I've handled the Populating event to pass only the final word of the .Text property to the web service, and they get a list that includes Smith. So far, so good. However, when "Smith" is selected from the dropdown, the contents "John" are REPLACED by "Smith", leaving you with simply "Smith", not "John Smith" as we'd like.
I've attempted to deal with this by writing custom handlers for the DropDownClosing and/or SelectionChanged events. Neither of these appears to be the correct event to handle.
Can someone direct me where I might go to manage this behaviour?
Thanks
A: Seeing as you're already attaching to the Populating event and presumably kicking off a request to the server for the data, why not just append "John " to all the items in the ItemsSource before you give it back? Then when you match, it'll already be there.
doc_23525510
Error in .Call("R_igraph_write_graph_graphml", graph, file, as.logical(prefixAttr), :
At foreign-graphml.c:1236 : Forbidden control character 0x08 found in igraph_i_xml_escape, Invalid value
I am using the basic syntax for writing igraph objects to file in graphml format:
write.graph(myGraphObject,"graph_object_to_file.graphml",format="graphml")
I have tried converting all the character vector attributes of the graph to UTF-8 using the iconv function, however it has not worked so far.
Any ideas much appreciated.
A: Find the character attribute that contains a character with character code 0x08, and fix it. That character stands for Backspace in the ASCII table, so I'm pretty sure that this is not meant to be there. Also, that character is disallowed in XML 1.0 anyway, so you won't be able to save it into an XML 1.0 file.
Converting to UTF-8 won't work because the UTF-8 equivalent of 0x08 is also 0x08.
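One way to locate and strip the offending characters is to clean every character attribute before writing. A sketch using igraph's classic attribute accessors (this only handles vertex attributes; edge and graph attributes would need the analogous loop):

```r
# Remove ASCII control characters (including 0x08) from all character
# vertex attributes before calling write.graph().
for (attr in list.vertex.attributes(myGraphObject)) {
  vals <- get.vertex.attribute(myGraphObject, attr)
  if (is.character(vals)) {
    myGraphObject <- set.vertex.attribute(myGraphObject, attr,
                                          value = gsub("[[:cntrl:]]", "", vals))
  }
}
```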
doc_23525511
If a user dials my Asterisk gateway, I would like to check their number and return some wave file.
When the user dials "1", I would like to run some procedure in my app and return another wave file, and so on.
I use http://www.asteriskwin32.com/
Is there any way to communicate with web services and the like?
A: You can extend your dialplan using AGI. My Asterisk runs on Linux, and I use a Python AGI library. I do something similar to what you want: my AGI program connects to a CRM web service and checks whether the caller is known. Some callers (for example, our staff) hear a different voice menu than "usual" callers.
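A rough sketch of what such an AGI script can look like, assuming the pyst library; the extension numbers, sound-file names, and the CRM lookup are all illustrative:

```python
def menu_file_for(caller_id, known_callers):
    """Pick the voice-menu sound file based on the caller's number."""
    return "staff-menu" if caller_id in known_callers else "main-menu"

def main():
    # Imported here so the routing logic above is testable without pyst installed.
    from asterisk.agi import AGI
    agi = AGI()                                   # reads the AGI environment from stdin
    caller = agi.env.get("agi_callerid", "")
    known = {"1001", "1002"}                      # in practice: fetched from the CRM web service
    agi.answer()
    agi.stream_file(menu_file_for(caller, known)) # play the chosen wave file

if __name__ == "__main__":
    main()
```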
doc_23525512
Sub DynamicForm_NewForm_SmartHUB_ProjectsActivated(ByVal sender As Object, ByVal e As EventArgs)
strProjectTypeFolderName = '(form in which the sender is located).combobox_ProjectType.selecteditem
End Sub
A: You'd first need to cast the sender as Control at least. You can then access the Parent property, although that property is type Control and may not be a form, if the sender is in a Panel, GroupBox or some other container. You ought to call the FindForm method instead. It will return a Form reference and it will also get the containing form no matter how deeply the sender is nested.
If you have Option Strict On, which you probably don't but definitely should, even a Form reference won't be sufficient. The Form class has no combobox_ProjectType field, so you'd need to cast it to its actual type - Form1 or whatever - in order to access that field without using late binding.
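Putting that together, the handler could look something like this (a sketch; Form1 stands in for the actual form type, which is an assumption here):

```vb
Private Sub DynamicForm_NewForm_SmartHUB_ProjectsActivated(ByVal sender As Object, ByVal e As EventArgs)
    ' Cast the sender to Control, then let FindForm walk up the parent chain -
    ' this works even if the sender sits inside a Panel or GroupBox.
    Dim frm = TryCast(DirectCast(sender, Control).FindForm(), Form1) ' Form1 is a placeholder
    If frm IsNot Nothing Then
        strProjectTypeFolderName = CStr(frm.combobox_ProjectType.SelectedItem)
    End If
End Sub
```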
doc_23525513
Thanks
A: There's a nice analogy of this described here:
http://daverecycles.tumblr.com/post/3104767110/explain-event-driven-web-servers-to-your-grandma
Relevant text from the above link:
Explain "Event-Driven" Web Servers to Your Grandma
You've heard the terms event-driven, event-based, or evented when it comes to web servers. Node.js is based on evented I/O. nginx is an asynchronous event-driven web server.
But what does the term mean? Here's an easy way to think about it.
Let's think of a web server as a pizza shop that needs to take orders over the phone (requests for web pages).
Traditional Web Server
The pizza shop hires operators (processes/threads) to take orders over the phone. Each operator has one phone line. After the operator is done taking the order, the operator keeps the customer on the line until the pizza (response web page) is done baking and then tells them it's ready to pick up.
So the pizza shop needs to hire as many operators as the number of pizzas that may be baked at once in order to serve all customers who call in.
Event-Driven Web Server
The pizza shop only hires one operator, but has trained the operator to hang up after taking the order, and call the customer back when the pizza is ready to be picked up.
Now one operator can serve many customers.
A: A web server needs to handle concurrent connections. There are many ways to do this, some of them are:
*
*A process per connection.
*A process per connection, and have a pool of processes ready to use.
*A thread per connection.
*A thread per connection, and have a pool of threads ready to use.
*A single process, handle every event (accepted connection, data available to read, can write to client, ...) on a callback.
*Some combination of the above.
*...
At the end, the distinction ends up being in how you store each connection state (explicitly in a context structure, implicitly in the stack, implicitly in a continuation, ...) and how you schedule between connections (let the OS scheduler do it, let the OS polling primitives do it, ...).
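The single-process callback model (option 5 in the list) can be sketched in a few lines with Python's selectors module - one selector polls every socket and dispatches readiness events to callbacks, so no thread or process is spawned per connection. A minimal echo server as illustration:

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server_sock):
    # Readiness callback for the listening socket: register the new client.
    conn, _ = server_sock.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, read)

def read(conn):
    # Readiness callback for a client socket: echo data back, or clean up on close.
    data = conn.recv(1024)
    if data:
        conn.sendall(data)
    else:
        sel.unregister(conn)
        conn.close()

def serve(server_sock):
    server_sock.setblocking(False)
    sel.register(server_sock, selectors.EVENT_READ, accept)
    while True:
        for key, _ in sel.select():
            key.data(key.fileobj)  # invoke the callback registered for this socket
```

Each connection's state lives in the selector's registry (the registered socket plus its callback), which is the "explicit context, OS polling primitives" corner of the design space described above.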
A: The event-driven approach aims at solving the C10K problem. It turns the traditional 'push model' into a 'pull model' to create non-blocking, evented I/O. Simply put, the event-driven architecture avoids spawning additional threads and the overhead of thread context switching, and usually ends up with better performance and lower resource consumption.
Here is an overview from a Rails developer, which also includes an analogy:
http://odysseyonrails.com/articles/8
doc_23525514
def remove( self, p ) :
    tmp = p.prev
    p.prev.next = p.next
    p.prev = tmp
Why is there a tmp = p.prev and p.prev = tmp? What is the purpose of those extra lines? And finally, why are no nodes deleted with "del"? Shouldn't there be a "del p" at the end of the code?
Thank you!
A: First, if this is the entire function, it's wrong, which is probably part of the reason you're having trouble understanding it.
To remove a node from a doubly linked list, you need to do three things:
*
*Make the previous node point ahead at the next node, instead of the p node.
*Make the next node point back at the previous node, instead of the current one.
*Delete the current node.
Because Python is garbage-collected,1 step 3 happens automatically.2
And step 1 is taken care of by p.prev.next = p.next.
But step 2 doesn't happen anywhere. p.next.prev is still pointing at p, instead of at p.prev. This means that if you walk the list forward, p won't be part of itβbut if you walk backward, it will. So p isn't actually removed.
Meanwhile, tmp = p.prev followed by p.prev = tmp doesn't do anything useful.3 And, whatever it was trying to do, you almost never need tmp like that in Python; you can swap values with just x, y = y, x instead of tmp = x; x = y; y = tmp.
So, what you actually want here is:
def remove(self, p):
    p.prev.next = p.next
    p.next.prev = p.prev
1. CPython, the reference interpreter for Python that you're probably using, does garbage collection through automated reference counting, with a cycle breaker that runs occasionally. Whether this counts as "real" garbage collection or not is a nice holy-war argument, but it doesn't matter here.
2. You just need to drop all references to the object, and it becomes garbage and gets cleaned up automatically. So, you need to remove the references from the next and previous nodes to p, which you're already doing. And you need to let p go away, but you don't need del p for thatβit's a local variable; it goes away when you return from the function. After that, it's up to the caller of remove; if they don't keep any references to the node, the node is garbage.
3. It could do something useful, if we temporarily assigned a different value to p.prev and wanted to restore it at the end of the function. But that isn't happening here, and I don't think whoever wrote this code intended anything like that; I think they were trying to do some kind of swap.
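A quick way to convince yourself that both links get updated is to run the corrected version on a three-node list (Node here is a minimal stand-in class, and remove is written as a free function for brevity):

```python
class Node:
    """Minimal doubly linked list node for demonstration."""
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

def remove(p):
    # Corrected removal: fix both the forward and the backward link.
    p.prev.next = p.next
    p.next.prev = p.prev

# Build a <-> b <-> c, then remove the middle node.
a, b, c = Node("a"), Node("b"), Node("c")
a.next, b.prev = b, a
b.next, c.prev = c, b
remove(b)
print(a.next.value, c.prev.value)  # both directions now skip b
```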
doc_23525515
Consider
namespace A
{
    public interface ICollectionFactory
    {
        ICollection<T> GetCollection<T>();
    }
}

using A;
namespace B
{
    public class MongoCollectionFactory : ICollectionFactory
    {
        public MongoCollectionFactory(string host, string db, int port)
        {
            // < init readonly fields >
        }

        public ICollection<T> GetCollection<T>() { ... }
    }
}

using A;
namespace C
{
    public class AService(ICollectionFactory collectionFactory)
    {
        // ...
    }
}
I can avoid having to reference B from C by using the various techniques within the DI libs that scan and load available assemblies and making B.dll available at runtime - easy. BUT how can I provide the constructor arguments for MongoCollectionFactory without these details leaking into C? Furthermore, C requires multiple instances of ICollectionFactory that connect to different databases, and these instances need to be bound to the correct services in C.
I have looked a Ninject and LightInject so far. I am happy to use any mature container that works on Mono and has at least reasonable performance.
EDIT
I have created another assembly; the composition root 'D' which references everything else. This and the entry point assembly are the only ones to reference the DI container. D also has the configuration for the system (connection details, endpoints etc). I'm satisfied with this solution although I can see D becoming a monster as the system grows.
A: If your C-Service needs two different databases it needs two parameters in its constructor.
Example: if the service has to copy data from a source-ICollectionFactory to a destination-ICollectionFactory then the constructor would look like this.
public class AService(ICollectionFactory source, ICollectionFactory destination)
{ ... }
Usually you need a separate module "D" that is responsible for wiring up all dependencies and knows the database-specific settings.
A: What you're missing here is the concept of the Composition Root. In other words, what you're missing is a start-up assembly that references all other assemblies and wires everything together.
doc_23525516
Here is more info on the Tag: http://img42.com/gw07d+
The Tag ID is read correctly but the data in the tag is not.
onCreate Method:
// initialize NFC
nfcAdapter = NfcAdapter.getDefaultAdapter(this);
nfcPendingIntent = PendingIntent.getActivity(this, 0, new Intent(this, this.getClass()).addFlags(Intent.FLAG_ACTIVITY_SINGLE_TOP), 0);
onNewIntent method:
if (NfcAdapter.ACTION_TAG_DISCOVERED.equals(intent.getAction()) || NfcAdapter.ACTION_TECH_DISCOVERED.equals(intent.getAction())) {
currentTag = intent.getParcelableExtra(NfcAdapter.EXTRA_TAG);
byte[] id = currentTag.getId();
Tag_data_TextDisplay.setText("TagId:" + Common.getHexString(id));
for (String tech : currentTag.getTechList()) {
if (tech.equals(NfcV.class.getName())) {
NfcV nfcvTag = NfcV.get(currentTag);
try {
nfcvTag.connect();
txtType.setText("Hello NFC!");
} catch (IOException e) {
Toast.makeText(getApplicationContext(), "Could not open a connection!", Toast.LENGTH_SHORT).show();
return;
}
try {
byte[] cmd = new byte[]{
(byte) 0x00, // Flags
(byte) 0x23, // Command: Read multiple blocks
(byte) 0x00, // First block (offset)
(byte) 0x04 // Number of blocks
};
byte[] userdata = nfcvTag.transceive(cmd);
userdata = Arrays.copyOfRange(userdata, 0, 32);
txtWrite.setText("DATA:" + Common.getHexString(userdata));
} catch (IOException e) {
Toast.makeText(getApplicationContext(), "An error occurred while reading!", Toast.LENGTH_SHORT).show();
return;
}
}
}
}
userdata contains a single byte with value 0x02 ({ 0x02 }) right after the transceive method finishes.
A: So you receive a value of { 0x02 } from the transceive method. As found in this thread this may happen when you use unaddressed commands. Hence, you should always send addressed commands through NfcV (as this seems to be supported across all NFC chipsets on Android devices). In your case, you could use something like this to generate an addressed READ MULTIPLE BLOCKS command:
int offset = 0; // offset of first block to read
int blocks = 1; // number of blocks to read
byte[] cmd = new byte[]{
(byte)0x60, // flags: addressed (= UID field present)
(byte)0x23, // command: READ MULTIPLE BLOCKS
(byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, (byte)0x00, // placeholder for tag UID
(byte)(offset & 0x0ff), // first block number
(byte)((blocks - 1) & 0x0ff) // number of blocks (-1 as 0x00 means one block)
};
System.arraycopy(id, 0, cmd, 2, 8);
byte[] response = nfcvTag.transceive(cmd);
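A side note that may help once the read succeeds (an assumption based on the ISO 15693 response format, not something stated in the answer): the first byte of the transceive response is the flags byte, so the actual block data starts at offset 1. A minimal sketch; the helper class and sample bytes are mine:

```java
import java.util.Arrays;

// Sketch: strip the leading ISO 15693 flags byte from a transceive response.
class Iso15693Util {
    static byte[] stripFlags(byte[] response) {
        // Bit 0 of the flags byte is the error flag in ISO 15693 responses.
        if ((response[0] & 0x01) != 0) {
            throw new IllegalStateException("Tag returned an error response");
        }
        return Arrays.copyOfRange(response, 1, response.length);
    }

    public static void main(String[] args) {
        byte[] response = { 0x00, 0x11, 0x22, 0x33, 0x44 }; // flags + one 4-byte block
        byte[] data = stripFlags(response);
        System.out.println(data.length); // 4 data bytes remain
    }
}
```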
| |
doc_23525517
|
dependencies {
compile 'com.facebook.android:facebook-android-sdk:4.16.1'
}
And I changed my minSdkVersion from 14 to 15, because the documentation said "Select API 15: Android 4.0.3 or higher and create your new project".
defaultConfig {
applicationId "com.myapp.myapplication"
minSdkVersion 15
targetSdkVersion 23
multiDexEnabled true
}
After following the steps I encountered this error: Manifest merger failed with multiple errors. How do I solve this?
*Update
I forgot to add my newly created class, which extends Application, to the Manifest. Here is my class:
public class MyApplication extends Application {
@Override
public void onCreate() {
super.onCreate();
// Initialize the SDK before executing any other operations,
FacebookSdk.sdkInitialize(getApplicationContext());
AppEventsLogger.activateApp(this);
}
}
and so I added it in my Manifest
<application
android:allowBackup="true"
android:icon="@drawable/ic_launcher"
android:label="@string/app_name"
android:theme="@style/AppTheme"
android:name="com.myapp.myapplication.MyApplication">
changing android:name="android.support.multidex.MultiDexApplication" to com.myapp.myapplication.MyApplication
and encountered this
Error:Execution failed for task ':app:processDebugManifest'.> Manifest merger failed : Attribute activity#com.facebook.FacebookActivity@theme value=(@android:style/Theme.Translucent.NoTitleBar) from AndroidManifest.xml:388:13-72
is also present at [com.facebook.android:facebook-android-sdk:4.16.1] AndroidManifest.xml:32:13-63 value=(@style/com_facebook_activity_theme).
Suggestion: add 'tools:replace="android:theme"' to <activity> element at AndroidManifest.xml:385:13-389:48 to override.
A: In Manifest, I re-arrange the position to
<meta-data android:name="com.facebook.sdk.ApplicationId"
android:value="@string/facebook_app_id" />
<activity android:name="com.facebook.FacebookActivity"
android:configChanges=
"keyboard|keyboardHidden|screenLayout|screenSize|orientation"
android:label="@string/app_name"
android:screenOrientation="portrait"/>
<provider android:authorities="com.facebook.app.FacebookContentProvider"
android:name="com.facebook.FacebookContentProvider"
android:exported="true" />
A: Upgrade your Facebook sdk to
implementation 'com.facebook.android:facebook-android-sdk:5.2.0'
and add tools:replace="android:theme" to your Facebook activity tag like this
<activity android:name="com.facebook.FacebookActivity"
    android:label="@string/app_name"
    tools:replace="android:theme"/>
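Note (my addition, not part of the original answer): tools:replace only has an effect if the tools namespace is declared on the manifest's root element, for example (the package name here is illustrative):

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:tools="http://schemas.android.com/tools"
    package="com.myapp.myapplication">
```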
| |
doc_23525518
|
let connection = FBSDKGraphRequestConnection()
connection.add(FBSDKGraphRequest.init(graphPath: "/\(pageId)?fields=access_token", parameters: nil)) { (httpResponse, result, error) in
if let error = error {
// print("Error: \(error)")
} else {
let result = result as! [String:Any]
self.pageAccessToken = result["access_token"] as! String
}
}
connection.start()
let params = [
"access_token":self.pageAccessToken,
"caption":self.itemName.text! + " - $ " + self.itemcost + " " + self.width + "w x " + self.depth + "d x " + self.height + "h",
"url":imageURL,
"appId":FBSDKAccessToken.current().appID,
"uid":FBSDKAccessToken.current().userID]
FBSDKGraphRequest.init(graphPath: "\(pageId)/photos", parameters: params, httpMethod: "POST").start(completionHandler: { (connection, result, error) -> Void in
if let error = error {
print("Error: \(error)")
} else {
print("Success")
print(result)
}
| |
doc_23525519
|
=COUNTIF(C3:C46, TRUE) and =COUNTIF(C3:C46, FALSE)
but I want both in the same cell, separated by a "/"
A: add & "/" & between the formulas and it will work :)
=COUNTIF(C3:C46, TRUE) & "/" & COUNTIF(C3:C46, FALSE)
| |
doc_23525520
|
print('Choose two primary colors to get their secondary color.')
print('Choose the number 1 for red, 2 for blue and 3 for yellow.')
red = 1
blue = 2
yellow = 3
def main():
prime_1 = input('Enter your first primary color: ')
prime_2 = input('Enter your second primary color: ')
if prime_1 == red and prime_2 == blue:
print('Your secondary color is purple!')
elif prime_1 == yellow and prime_2 == red:
print('Your secondary color is orange!')
elif prime_1 == blue and prime_2 == yellow:
print('Your secondary color is green!')
else:
print('That is not a primary color!')
main()
A: input returns a string, but the values in the variables red, blue, and yellow are integers. Integers and strings are not equal:
>>> '5' == 5
False
You can work around this by either making your red, blue, and yellow variables strings:
red = '1'
blue = '2'
yellow = '3'
Or converting the user's input to an integer before comparing:
prime_1 = int(input('Enter your first primary color: '))
prime_2 = int(input('Enter your second primary color: '))
If you decide to go with the approach of converting the user's input to an integer before comparing, note that this has another failure mode: if they enter a string that's a valid integer but an invalid color like 4, your error message will be output; but if they enter a string that's not a valid integer, like red, it will raise a ValueError exception and crash your program rather than triggering your error logic. You could catch that using a try block or two:
try:
prime_1 = int(input('Enter your first primary color: '))
except ValueError:
prime_1 = None
try:
prime_2 = int(input('Enter your second primary color: '))
except ValueError:
prime_2 = None
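Putting the two fixes together, here is a runnable sketch; the helper names mix and read_color are mine, and unlike the original it accepts the two colors in either order:

```python
RED, BLUE, YELLOW = 1, 2, 3

def mix(prime_1, prime_2):
    """Return the secondary color for two primary colors, or None."""
    pair = frozenset((prime_1, prime_2))
    secondary = {
        frozenset((RED, BLUE)): 'purple',
        frozenset((YELLOW, RED)): 'orange',
        frozenset((BLUE, YELLOW)): 'green',
    }
    return secondary.get(pair)

def read_color(prompt):
    """Read user input, returning an int or None on invalid input."""
    try:
        return int(input(prompt))
    except ValueError:
        return None

print(mix(1, 2))  # purple
```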
| |
doc_23525521
|
kubectl-argo-rollouts -n ddash5 set image detector detector=starry-academy-177207/detector:deepak-detector-8
I was expecting this to update the pod, but it created a new one.
NAME READY STATUS RESTARTS AGE
detector-5d96bc8456-h2x7p 1/1 Running 0 35m
detector-68f89d8b45-j465j 0/1 Running 0 35m
Even if I delete detector-5d96bc8456-h2x7p, the pod gets recreated with the older image,
and detector-68f89d8b45-j465j stays in the 0/1 state.
I am new to kube, Can someone give me insights to this?
Thanks!!!
Deepak
A: You are using Argo Rollouts, where a rolling update replaces existing pod instances with new ones. The new Pods are scheduled on Nodes with available resources; that is why new pods are created instead of the existing ones being updated in place.
Instead, you can use the kubectl set image command, which updates the images of an existing deployment without recreating it. Use the following command:
kubectl set image deployment/<deployment-name> <container-name>=<image>:<tag>
In your case:
kubectl set image deployment/detector detector=starry-academy-177207/detector:deepak-detector-8
This will update the existing deployment; try it and let me know if it works. You can also have a look at the ArgoCD Image Updater.
| |
doc_23525522
|
project(weather-station-hardware-driver)
message(STATUS "Selected C Compiler: ${CMAKE_C_COMPILER}")
message(STATUS "Selected C++ Compiler: ${CMAKE_CXX_COMPILER}")
set(CMAKE_ARCHIVE_OUTPUT_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/bin)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/bin)
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_CURRENT_SOURCE_DIR}/bin)
set(CMAKE_CXX_STANDARD 20)
include(FetchContent)
FetchContent_Declare(json URL https://github.com/nlohmann/json/releases/download/v3.11.1/json.tar.xz)
FetchContent_Declare(bmp2 GIT_REPOSITORY https://github.com/BoschSensortec/BMP2-Sensor-API.git)
FetchContent_MakeAvailable(json bmp2)
add_library(wsh SHARED src/library.cpp src/bmp280.cpp src/bmp280.hpp)
set_target_properties(wsh PROPERTIES LINKER_LANGUAGE CXX)
target_link_libraries(wsh PUBLIC i2c)
target_include_directories(wsh PUBLIC ${bmp2_SOURCE_DIR})
add_executable(main src/main.cpp)
target_link_libraries(main PRIVATE wsh nlohmann_json::nlohmann_json)
The result is the following error message:
: && /usr/bin/c++ -g CMakeFiles/main.dir/src/main.cpp.o -o ../bin/main -Wl,-rpath,/home/fabro122/workspace/intellij/weather-station-webserver/native/hardware-driver/bin ../bin/libwsh.so -li2c && :
/usr/bin/ld: ../bin/libwsh.so: undefined reference to `bmp2_get_config'
/usr/bin/ld: ../bin/libwsh.so: undefined reference to `bmp2_init'
where bmp2_get_config and bmp2_init are c functions from the source files in ${bmp2_SOURCE_DIR}.
I already tried to add the following line:
target_link_directories(wsh PUBLIC ${bmp2_SOURCE_DIR})
And to include the headers from the library as C code (which should be unnecessary, because the bmp280 source file includes a "cpp guard" anyway):
extern "C" {
#include <bmp2.h>
}
And also to compile directly to the executable -> without compiling and linking my own shared library first.
Nothing helped, and I couldn't find any other tips on Stack Overflow, so here I am, hoping for help. Thanks in advance!
Here are the other relevant source files (excluding the .hpp files):
main.cpp
#include <iostream>
#include <nlohmann/json.hpp>
#include "library.hpp"
using namespace std;
using namespace wsh;
namespace wsh {
// The non-intrusive macro is used to avoid the shared library depending on nlohmann/json.hpp
NLOHMANN_DEFINE_TYPE_NON_INTRUSIVE(Measurement, temperature, pressure, humidity, airQuality, ambientLight)
}
int main() {
Measurement data = doMeasurement();
nlohmann::json j = data;
cout << j << endl;
return 0;
}
library.cpp
#include "library.hpp"
#include "bmp280.hpp"
wsh::Measurement wsh::doMeasurement() {
auto allData = Measurement();
bmp280 bmp;
return allData;
}
bmp280.cpp
#include "bmp280.hpp"
#include <bmp2.h>
namespace wsh {
BMP2_INTF_RET_TYPE i2c_read(uint8_t reg_addr, uint8_t* reg_data, uint32_t length, void* intf_ptr) {
return 0;
}
BMP2_INTF_RET_TYPE i2c_write(uint8_t reg_addr, const uint8_t* reg_data, uint32_t length, void* intf_ptr) {
return 0;
}
bmp280::bmp280() {
// device and address are defined in .hpp
device.chip_id = address;
device.intf = BMP2_I2C_INTF;
device.read = i2c_read;
device.write = i2c_write;
bmp2_init(&device);
// configure
bmp2_config conf{};
bmp2_get_config(&conf, &device);
}
}
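A: A guess at the cause (not verified against that exact repository): target_include_directories(wsh PUBLIC ${bmp2_SOURCE_DIR}) only makes the headers visible; nothing ever compiles or links bmp2.c, so the definitions of bmp2_init and bmp2_get_config never make it into libwsh.so. If FetchContent_MakeAvailable does not produce a linkable target for bmp2 (the fetched repository may not ship a usable CMakeLists.txt), one sketch is to compile the fetched C source into your own library; the path assumes bmp2.c sits at the root of the fetched repository:

```cmake
# Sketch: compile the fetched Bosch source into the wsh library so the
# bmp2_* symbols are actually defined, instead of only adding the headers.
add_library(wsh SHARED
    src/library.cpp
    src/bmp280.cpp
    ${bmp2_SOURCE_DIR}/bmp2.c)
target_include_directories(wsh PUBLIC ${bmp2_SOURCE_DIR})
target_link_libraries(wsh PUBLIC i2c)
```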
| |
doc_23525523
|
now = Time.now
more_time = (24*365*24*60*60)
puts "more_time.class = #{more_time.class}"
later = now + more_time
now = Time.now
more_time = (25*365*24*60*60)
puts "more_time.class = #{more_time.class}"
later = now + more_time
Produces:
more_time.class = Fixnum
more_time.class = Fixnum
ruby_time.rb:11:in `+': time + 788400000.000000 out of Time range (RangeError) from ruby_time.rb:11
Am I running into a year 2038 problem? I don't have this issue with 64-bit ruby 1.9.2-p290.
A: 32 bit Ruby uses 32 bits to represent the time, therefore it has a valid range from 13 Dec 1901 20:45:54 UTC to 19 Jan 2038 03:14:07 UTC, as these are the minimum/maximum signed integer values representable with 32 bits, with time 0 being unix epoch time (1 Jan 1970 00:00:00 UTC).
64 bit Ruby uses 64 bits to represent the time, therefore it has a valid range of basically anything.
To fix this, you could look into using the DateTime class, which is not limited to 32 bits.
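A small illustration of the DateTime route (note that DateTime's + operates in days, not seconds):

```ruby
require 'date'

# DateTime is not backed by a 32-bit time_t, so dates past 2038 are fine.
epoch = DateTime.new(1970, 1, 1)
far_future = epoch + (100 * 365)  # + is in days for DateTime
puts far_future.year              # 2069, well past the 32-bit limit
```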
| |
doc_23525524
|
For a reason I don't know, it is rendering/showing/outputting two codex-editor instances on the page! Both of them are working correctly and independently.
Screenshot (I added borders to show the area)
Codes
index.jsx
import EditorJS from "@editorjs/editorjs";
import css from "./removeme.module.scss"; // just to see the area
export default function index(params) {
const editor = new EditorJS({
holder: "editorjs",
});
return (
<>
<h1>New Note</h1>
<div className={css["editor-area"]} id="editorjs" />
</>
);
}
_app.js
function SafeHydrate({ children }) {
return <div suppressHydrationWarning>{typeof window === "undefined" ? null : children}</div>;
}
function MyApp({ Component, pageProps }) {
return (
<SafeHydrate>
<Body>
<Sidebar items={SidebarLinks} />
<Page>
<Component {...pageProps} />
</Page>
</Body>
</SafeHydrate>
);
}
export default MyApp;
What I tried
*
*Commenting out <SafeHydrate> and rendering the page normally: no luck. (I added SafeHydrate so I can use the window API and also disable SSR)
*Removing my custom CSS (the borders): no luck.
*Removing id="editorjs" from <div>: nothing shows up.
Additional Notes
The page localhost:3001/notes/editor is only accessible if visited from sidebar menu (SPA-like). I mean, it shows 'window is not defined' if opened directly.
What's causing the problem?
A: Using useEffect solves the issue. Code in the component body runs on every render (including any extra renders React performs), so the editor was being constructed more than once; an effect with an empty dependency array runs only once, after the initial render:
export default function index(params) {
useEffect(() => {
const editor = new EditorJS({
holder: "editorjs",
});
}, []);
return (
<>
<h1>New Note</h1>
<div className={css["editor-area"]} id="editorjs" />
</>
);
}
| |
doc_23525525
|
I'd like to create a complex query with more than one criterion using the SailsJS "Find Where" blueprint route. However, I am unable to use the equals comparator and the and condition successfully. I couldn't find adequate documentation on how to implement the Find Where route, so I worked through the source code and came up with the following scenarios.
Question
Using the SailsJS Find Where Blueprint Route, how does one implement:
*
*the equality comparison
*the and condition
Success Scenarios
The following scenarios will return the appropriate response:
http://localhost:1337/api/user?name=fred
http://localhost:1337/api/user?where={"name":{"startsWith":"fred"}}
http://localhost:1337/api/user?where={"name":{"endsWith":"fred"}}
http://localhost:1337/api/user?where={"name":{"contains":"fred"}}
http://localhost:1337/api/user?where={"name":{"like":"fred"}}
http://localhost:1337/api/user?where={"or":[{"name":{"startsWith":"fred"}}]}
http://localhost:1337/api/user?where={"or":[{"name":{"startsWith":"fred"}},{"path":{"endsWith":"fred"}}]}
Failure Scenario
The following scenarios return an empty response:
http://localhost:1337/api/user?where={"name":{"equals":"fred"}}
http://localhost:1337/api/user?where={"name":{"=":"fred"}}
http://localhost:1337/api/user?where={"name":{"equal":"fred"}}
http://localhost:1337/api/user?where={"and":[{"name":{"startsWith":"fred"}}]}
http://localhost:1337/api/user?where={"and":[{"name":{"startsWith":"fred"}},{"path":{"endsWith":"fred"}}]}
A: To use "and" queries, you use the query-string syntax and chain criteria together with the ampersand character. For more advanced queries like "or" or complex operators, it's best to write a controller action. If you decide to stick with blueprints, you can use most valid Waterline queries encoded as JSON.
For simple queries you use the query string method. The following would build an "and" query for name and school.
http://localhost:1337/api/user?name=fred&school=foo
To chain together more advanced operators the following should work:
http://localhost:1337/api/user?where={"name":{"startsWith":"fred"},"path":{"endsWith":"fred"}}
| |
doc_23525526
|
awk '{gsub ( "[:\\']","" ) ; print $0 }'
and
awk '{gsub ( "[:\']","" ) ; print $0 }'
and
awk '{gsub ( "[:']","" ) ; print $0 }'
none of them worked; they return the error Unmatched ".. When I put
awk '{gsub ( "[:_]","" ) ; print $0 }'
then it works and removes all : and _ chars. How can I get rid of the ' char?
A: You could use:
*
*Octal code for the single quote:
[:\47]
*The single quote inside double quotes, but in that case special
characters will be expanded by the shell:
% print a\': | awk "sub(/[:']/, x)"
a
*Use a dynamic regexp, but there are performance implications related
to this approach:
% print a\': | awk -vrx="[:\\\']" 'sub(rx, x)'
a
A: With bash you cannot insert a single quote inside a literal surrounded with single quotes. Use '"'"' for example.
First ' closes the current literal, then "'" concatenates it with a literal containing only a single quote, and ' reopens a string literal, which will be also concatenated.
What you want is:
awk '{gsub ( "[:'"'"']","" ) ; print $0; }'
ssapkota's alternative is also good ('\'').
A: I don't know why you are restricting yourself to awk; anyway, you've got many answers from other users. You can also use sed to get rid of the : and ' characters. Note that the same quoting problem applies — you cannot escape a single quote inside a single-quoted string — so put the sed script in double quotes:
sed "s/[:']//g"
This will also serve your purpose. Simple and less complex.
A: This also works:
awk '{gsub("\x27",""); print}'
A: tr is made for this purpose
echo test\'\'\'\':::string | tr -d \':
teststring
$ echo test\'\'\'\':::string | awk '{gsub(/[:\47]*/,"");print $0}'
teststring
A: This works:
awk '{gsub( "[:'\'']","" ); print}'
A: simplest
awk '{gsub(/\047|:/,"")};1'
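A quick way to sanity-check any of these variants from a shell (the sample string is mine). \47 is the octal escape for the single quote inside an awk string, so the shell's single-quoted awk program never needs to contain one:

```shell
# Remove every ':' and "'" from the input; "[:\47]" becomes the regex [:']
printf "a:b'c\n" | awk '{ gsub("[:\47]", ""); print }'
# prints: abc
```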
| |
doc_23525527
|
New branch FEATURE has been created. There are a lot of developer commits.
I merge FEATURE branch into DEVELOP branch using:
git checkout develop
git merge --no-ff --no-commit feature
git add .
git commit -m 'message'
git push
git log
I can see all the commits from the FEATURE branch.
What did I do wrong?
A: You didn't squash your commits together before you merged, so you will still see all commits. And you merged with --no-ff which I think will make it always create a new commit for the merge, and the man page seems to support this.
You seem to have a misunderstanding about what the --no-ff and --ff flags are for. Reading the above man page might help, but my take on it is this.
Assuming that no changes have been made on the main branch you took you current branch from since you took the branch and started making new commits, then when you come to merge the changes back into the main branch you don't need to do a merge at all. You could just pretend that you made the changes on the main branch all along (as nothing has changed there which would require a merge).
This is what the --ff switch gives you and is the default option. It doesn't remove or rewrite any commits; it simply moves the main branch pointer forward to include them (a "fast-forward").
The --no-ff switch forces git to create a new commit anyway, even though it is not technically needed.
The --ff-only switch forces git to abort the merge if it can't just replay the commits (ie if any additional commits have been made on the main branch since you took the current branch). Again this doesn't remove any of the commits, it just forces history to be linear as no merges can be done.
If you want to make your feature branch commits look like fewer commits you can rebase your changes (although there are other options, I believe that using the --squash option with your commit might work, but I've not used this so check it out and make sure you understand it before you do it on your production code).
What I do, and there are many different possibilities with git, is rebase my feature branch onto the master branch before I do the merge.
git checkout feature
git rebase master -i
this line launches an interactive editor which allows you to choose which commits you want to keep and which you want to 'squash' together. Here I choose which commits to merge together (to get rid of superfluous 'forgot this file' or 'tried this and it didn't work' commits), grouping the commits into logical chunks that define the units of work. Often this is a single commit per feature, but not always. You can also choose to rename any group of commits at this point. The man page for rebase shows you the options.
git checkout develop
git merge feature --ff-only
I merge the feature branch into the target branch with --ff-only so that history remains linear. If this fails, you check out feature again and repeat the process.
As I said there are many ways to skin a cat in git, this is what works for me, your mileage may vary.
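The --squash route mentioned above can be sketched in a throwaway repo like this (all names and paths are illustrative; assumes git is on PATH):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email demo@example.com
git config user.name  demo
main=$(git symbolic-ref --short HEAD)      # master or main, depending on git version

echo base > file.txt && git add file.txt && git commit -qm "base"
git checkout -qb feature
echo one >> file.txt && git commit -qam "wip 1"
echo two >> file.txt && git commit -qam "wip 2"

git checkout -q "$main"
git merge --squash feature > /dev/null     # stages the combined changes, no commit yet
git commit -qm "feature as a single commit"
git log --oneline                          # only "base" and the squashed commit
```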
A: Git Shows the Entire History of a Branch by Default
The original poster says
I can see all the commits from the FEATURE branch.
What did I do wrong?
This is expected behavior in Git. When you merge a branch B into another branch A, the history of B becomes a part of the history of A, so of course B's commits would also show up in the history of A.
By default, Git shows the entire history of branches, including commits that have been merged into them.
This is true whether you do a fast-forward merge or not. The difference here between a fast-forward merge and a merge done with --no-ff is that --no-ff will force the creation of a merge-commit, even when a simple fast-forward (without a merge commit) would have been possible.
Either way, however, you would still normally see the commit's of all of the branches that have been merged into the current one.
Hiding the History of Merged Commits
If you don't want to see the history of merged commits, you can hide them from the log output by passing in the --first-parent option
$ git log --oneline --graph --first-parent
* e156455 Git 2.0 (HEAD, tag: v2.0.0, https/master)
* 4a28f16 Update draft release notes to 2.0
* 8ced8e4 Git 2.0-rc4 (tag: v2.0.0-rc4)
* 3054c66 RelNotes/2.0.0.txt: Fix several grammar issu
* b2c851a Revert "Merge branch 'jc/graduate-remote-hg-
* 00a5b79 Merge branch 'jc/graduate-remote-hg-bzr' (ea
* df43b41 Merge branch 'rh/prompt-pcmode-avoid-eval-on
* 7dde48e Merge branch 'lt/request-pull'
* 5714722 Merge branch 'jl/use-vsatisfy-correctly-for-
* c29bf4a Merge git://github.com/git-l10n/git-po
* 3fc2aea Merge branch 'kb/fast-hashmap'
* 6308767 Merge branch 'fc/prompt-zsh-read-from-file'
Compare that with the output of viewing the whole history for a branch
$ git log --oneline --graph
* e156455 Git 2.0 (HEAD, tag: v2.0.0, https/master)
* 4a28f16 Update draft release notes to 2.0
* 8ced8e4 Git 2.0-rc4 (tag: v2.0.0-rc4)
* 3054c66 RelNotes/2.0.0.txt: Fix several grammar issues, notably a lack
* b2c851a Revert "Merge branch 'jc/graduate-remote-hg-bzr' (early part)"
* 00a5b79 Merge branch 'jc/graduate-remote-hg-bzr' (early part)
|\
| * 896ba14 remote-helpers: point at their upstream repositories
| * 0311086 contrib: remote-helpers: add move warnings (v2.0)
| * 10e1fee Revert "Merge branch 'fc/transport-helper-sync-error-fix'"
* | df43b41 Merge branch 'rh/prompt-pcmode-avoid-eval-on-refname'
|\ \
| * | 1e4119c git-prompt.sh: don't assume the shell expands the value of
* | | 7dde48e Merge branch 'lt/request-pull'
|\ \ \
| * | | d952cbb request-pull: resurrect for-linus -> tags/for-linus DWIM
* | | | 5714722 Merge branch 'jl/use-vsatisfy-correctly-for-2.0'
|\ \ \ \
| * | | | b3f0c5c git-gui: tolerate major version changes when comparing
* | | | | c29bf4a Merge git://github.com/git-l10n/git-po
|\ \ \ \ \
| * | | | | a6e8883 fr: a lot of good fixups
* | | | | | 3fc2aea Merge branch 'kb/fast-hashmap'
|\ \ \ \ \ \
| |/ / / / /
|/| | | | |
| * | | | | c2538fd Documentation/technical/api-hashmap: remove source hi
* | | | | | 6308767 Merge branch 'fc/prompt-zsh-read-from-file'
Keeping a Simpler History by Avoiding Merge Commits
When Git users talk about maintaining a "clean history", they're often referring to avoiding the creation of merge commits. This is often the case when frequently updating a feature branch with new changes from the master branch. People want to avoid merge commits in these cases because they don't add much additional informational value when you're looking at the log for the feature branch, and end up being "clutter", "junk", and "noise", making it harder to see the more important changes that have been made.
To avoid that kind of situation, Git users will often rebase the feature branch on top of an upstream branch like master, which will update the feature branch (just like a merge would), but without creating a merge commit. It's in this way that Git users will "keep a clean history", by avoiding the creation of unnecessary merge commits, and thus maintaining a simpler, easier to understand commit history.
See Also
For more information on fast-forward merges vs non-fast-forward merges, see
*
*Why does git fast-forward merges by default?
*Git fast forward VS no fast forward merge
| |
doc_23525528
|
In particular, I have a string val myStr = "Shall we meet at, let's say, 8:45 AM?". I would like to tokenize it and retain the delimiters (all except whitespace). If my delimiters were only characters, e.g. ., :, ? etc., I could do:
val strArr = myStr.split("((\\s+)|(?=[,.;:?])|(?<=\\b[,.;:?]))")
which yields
[Shall, we, meet, at, ,, let's, say, ,, 8, :, 45, AM, ?]
However, I wish to make the time signature \\d+:\\d+ a delimiter, and would still like to retain it. So, what I'd like is
[Shall, we, meet, at, ,, let's, say, ,, 8:45, AM, ?]
Note:
*
*Adding the disjunct (?=(\\d+:\\d+)) in the expression of the split statement is not helping
*outside of the time signature, : is a delimiter in itself
How could I make this happen?
A: I suggest matching all your tokens, not splitting a string, because that way you may control what you get in a better way:
\b\d{1,2}:\d{2}\b|[,.;:?]+|(?:(?!\b\d{1,2}:\d{2}\b)[^\s,.;:?])+
See the regex demo.
We start matching the most specific patterns and the last one is the most generic one.
Details
*
*\b\d{1,2}:\d{2}\b - 1 to 2 digits, :, 2 digits enclosed with word boundaries
*| - or
*[,.;:?]+ - 1 or more ,, ., ;, :, ? chars
*| - or
*(?:(?!\b\d{1,2}:\d{2}\b)[^\s,.;:?])+ - matches any char that is not our delimiter char or whitespace ([^\s,.;:?]) that is not a starting point for the time string.
Consider this snippet:
val str = "Shall we meet at, let's say, 8:45 AM?"
var rx = """\b\d{1,2}:\d{2}\b|[,.;:?]+|(?:(?!\b\d{1,2}:\d{2}\b)[^\s,.;:?])+""".r
rx findAllIn str foreach println
Output:
Shall
we
meet
at
,
let's
say
,
8:45
AM
?
A: import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
* StringPatternTokenizer is similar to java.util.StringTokenizer,
* but it uses a regex string as the token separator.
* See inside method #testCase for detail usage.
*/
public class StringPatternTokenizer {
Pattern pattern;
public StringPatternTokenizer(String regex) {
this.pattern = Pattern.compile(regex);
}
public void getTokens(String str, NextToken nextToken) {
Matcher matcher = pattern.matcher(str);
int index = 0;
Result result = null;
while (matcher.find()) {
if (matcher.start() > index) {
result = nextToken.visit(null, str.substring(index, matcher.start()));
}
if (result != Result.STOP) {
index = matcher.end();
result = nextToken.visit(matcher, null);
}
if (result == Result.STOP) {
return;
}
}
if (index < str.length()) {
nextToken.visit(null, str.substring(index));
}
}
enum Result {
CONTINUE,
STOP,
}
public interface NextToken {
Result visit(Matcher matcher, String str);
}
/***********************************/
/***** test cases FOR IT ***********/
/***********************************/
public void testCase() {
// as a test, it tries access tokenizer result for each part,
// then replace variable parts by given values.
// And finally, we collect the result target string as output.
String strSource = "My name is {{NAME}}, nice to meet you.";
String strTarget = "My name is TokenTst, nice to meet you.";
// separator pattern for: variable names in two curly brackets
String variableRegex = "\\{\\{([A-Za-z]+)\\}\\}";
// variable values
org.json.JSONObject data = new org.json.JSONObject(
java.util.Collections.singletonMap("NAME", "TokenTst")
);
StringBuilder sb = new StringBuilder();
new StringPatternTokenizer(variableRegex)
.getTokens(strSource, (matcher, str) -> {
sb.append(matcher == null ? str
: data.optString(matcher.group(1), ""));
return StringPatternTokenizer.Result.CONTINUE;
});
// check the result as expected
org.junit.Assert.assertEquals(strTarget, sb.toString());
}
}
| |
doc_23525529
|
<AuthProvider>
<SelectedBranchProvider>
<SelectedEventProvider>
<DrawerFormProvider>
<RightDrawerNavigator />
</DrawerFormProvider>
</SelectedEventProvider>
</SelectedBranchProvider>
</AuthProvider>
A: The danger with context isn't memory issues, it's unneeded re-renders. The way Context is implemented defeats some of React's diffing, so a change in Context can cause a re-render in any component that uses the same Context, even if the value that changes is unused in the component.
If you have a small app, or only rarely change what's in your context, you'll probably never hit these limitations. Good use cases for context are things that would require many re-renders and rarely change, like user language or app theme.
The docs for Context have a caveats section that goes over this: https://reactjs.org/docs/context.html#caveats
Here's a blog post with a good example of how this can cause re-renders: https://blog.logrocket.com/pitfalls-of-overusing-react-context/
| |
doc_23525530
|
Sample input:
122545;bmwx3;new;red,black,white,pink
I want the final output to be like this:
INSERT INTO myTable VALUES ("122545", "bmwx3", "new", "red");
INSERT INTO myTable VALUES ("122545", "bmwx3", "new", "black");
INSERT INTO myTable VALUES ("122545", "bmwx3", "new", "white");
INSERT INTO myTable VALUES ("122545", "bmwx3", "new", "pink");
The 4th element is a "sub-csv" with an unknown number of entries, but always in that format (no " characters).
Ideally I would like to do this in notepad++ using regex, if not possible I will have to cook up a script.
I think that first I need to make this:
122545;bmwx3;new;red,black,white,pink
Look like this:
122545;bmwx3;new;red
122545;bmwx3;new;black
122545;bmwx3;new;white
122545;bmwx3;new;pink
My problem is that I don't know how to match the sub-csv. Is it even possible to do this in pure regex (no programming needed)?
A: Certainly not the simplest way, but it works:
Find what: ^([^,]+;)(.+),([^,]+)$
Replace with: $1$2\n$1$3
And click on Replace all as many times as needed!
A: If the 122545;bmwx3;new; part is not fixed
In three steps:
*
*Get to red,black,white,pink#LIMIT#122545;bmwx3;new;: replace (.*;)([^;]*) with \2#LIMIT#\1
*Create the 122545;bmwx3;new;red strings: replace
(\w+)(?:,|(?=#LIMIT#))(?=.*#LIMIT#(.*))
with \2\1\n (see demo)
*Remove the #LIMIT#... lines: replace ^#LIMIT#.* with an empty string
If the 122545;bmwx3;new; part is fixed
@hjpotter's idea seems pretty cool, you just need to replace , with
\n122545;bmwx3;new;
What's left
Replace
^(\w*);(\w*);(\w*);(\w*)$
with
INSERT INTO myTable VALUES ("\1", "\2", "\3", "\4")
You're good to go !
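Since the question mentions possibly cooking up a script, here is a minimal Python sketch of the whole transformation (the function name is mine):

```python
def expand(line):
    """Expand one 'id;model;state;color1,color2,...' line into INSERT statements."""
    ident, model, state, colors = line.split(";")
    return [
        f'INSERT INTO myTable VALUES ("{ident}", "{model}", "{state}", "{color}");'
        for color in colors.split(",")
    ]

for stmt in expand("122545;bmwx3;new;red,black,white,pink"):
    print(stmt)
```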
| |
doc_23525531
|
Currently these are the scheduled times.
I PRESUME 9AM every day, as I've checked everywhere and the syntax appears correct.
0 9 * * * bash /home/user/Desktop/CRON/OAK3/dw_3704255.sh
I PRESUME 1:30PM every day, as I've checked everywhere and the syntax appears correct.
30 13 * * * bash /home/user/Desktop/CRON/OAK3/dw_3704278.sh
I PRESUME 6PM every day, as I've checked everywhere and the syntax appears correct.
0 18 * * * bash /home/user/Desktop/CRON/OAK3/dw_3704286.sh
I PRESUME 10PM every day, as I've checked everywhere and the syntax appears correct.
0 22 * * * bash /home/user/Desktop/CRON/OAK3/dw_3704294.sh
Now, I've tried changing the leading zeros to 00, but the same result occurred. I recently changed back to a single zero, but I believe that's how I originally had it.
I may just need a sanity check from an outside perspective, because it appears right, but any insight would be appreciated. Thank you!
A: I'd assume the jobs will start running, but won't complete for some reason, making it look like they didn't start at all. This is often caused by an environment variable that's set in .profile - cron jobs won't execute .profile and won't have access to these variables.
I'd put a statement like
exec > /tmp/dw_3704255.log 2>&1
set -x
at the start of your dw_3704255.sh script; then you can check if the file appears at the time it should, and check its contents for a trace as well.
Also, I'd replace bash with /bin/bash to protect against weird PATH settings in the cron process, but I wouldn't assume this to be the cause of your current problem.
A: Please check the cron log
grep CRON /var/log/syslog
| |
doc_23525532
|
OSError: No space left on device
When I run this
(python3) ubuntu@ip-172-30-1-208:~$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 30G 0 30G 0% /dev
tmpfs 6.0G 8.9M 6.0G 1% /run
/dev/xvda1 93G 93G 0 100% /
tmpfs 30G 0 30G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 30G 0 30G 0% /sys/fs/cgroup
/dev/loop0 92M 92M 0 100% /snap/core/8689
/dev/loop1 90M 90M 0 100% /snap/core/8039
/dev/loop2 18M 18M 0 100% /snap/amazon-ssm-agent/1480
/dev/loop4 18M 18M 0 100% /snap/amazon-ssm-agent/1566
tmpfs 6.0G 24K 6.0G 1% /run/user/1000
It looks like all my hard disk, 93G is taken up by xvda1. Am I reading this right?
A: The root file system is full.
Usually, this happens on servers when the logging goes awry.
My tip is: log in, become root, then cd /var/log and run du -smc *
This may well take a while, but it will show you where the big logs are.
Note that deleting an in-use logfile will not usually free up disk space.
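If a process still holds the log open, truncating the file in place does release the space. A sketch (the mktemp file merely stands in for a real oversized log):

```shell
# A throwaway file stands in for the oversized log (path is an example).
log=$(mktemp)
head -c 1048576 /dev/zero > "$log"   # simulate a 1 MiB log

# Empty the file in place: the writing process keeps its open handle,
# and the disk blocks are actually released.
truncate -s 0 "$log"

# Equivalent shell-builtin form:
: > "$log"

wc -c < "$log"   # now 0 bytes
```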
A: This happened because the root filesystem is full.
You can find which files are using the most space with the following command:
sudo find / -type f -size +10M -exec ls -lh {} \;
After that, delete any large files you no longer need with the rm command:
rm -f <path-to-large-file>
| |
doc_23525533
|
if (isRed(h.left))h = rotateRight(h);
I just can't find a good example to help me to get the usage of this code.
Can anybody help me understand why the code should be there (a tiny example would be even better)?
A: Page 7 of the PDF contains the full function. Basically what it is doing: if the left link is "red" (meaning it was added to force the tree to be an LLRBT), then rotate the left child node into its place.
A
/ \
B C
If I'm deleting A, I'd rotate B into its place:
B
\
C
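Since the question asks for a tiny example, here is a hedged sketch of isRed/rotateRight in the style of Sedgewick's left-leaning red-black trees. The Node fields and class names are assumptions for illustration, not taken from the PDF:

```java
// Minimal LLRB sketch: each node stores the color of the link from its parent.
class Node {
    int key;
    Node left, right;
    boolean red;          // true = the link from the parent is red
    Node(int key, boolean red) { this.key = key; this.red = red; }
}

class Llrb {
    static boolean isRed(Node x) { return x != null && x.red; }

    // Rotate the red left link of h to the right: h.left takes h's place.
    static Node rotateRight(Node h) {
        Node x = h.left;
        h.left = x.right;
        x.right = h;
        x.red = h.red;    // x inherits h's link color
        h.red = true;     // the link from x down to h becomes red
        return x;
    }
}
```

So `if (isRed(h.left)) h = rotateRight(h);` simply promotes the red left child into h's position, keeping the tree left-leaning during deletion.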
| |
doc_23525534
|
warn No apps connected. Sending "devMenu" to all React Native apps failed. Make sure your app is running in the simulator or on a phone connected via USB.
This also happens when I try to reload it through the CLI. The app itself runs fine: it builds and is able to make requests to my API, but these features are not working, and the hot reload that happened when I saved a file in my project also gives me the error:
Could not reload the application after an edit.
This started happening last week, after Expo SDK 45 was released (though I'm not sure if it was immediately after that). I have tried upgrading my packages, reinstalling the Expo CLI, deleting my project folder and cloning it again, running on Android and iOS, and also running the adb reverse tcp command, unfortunately to no avail.
Has anyone been through this kind of problem with an expo project? Have any solutions been found? And is it related to the new SDK update?
Thank you for the help in advance
A: I'm almost positive it's a bug with Expo 45 - been running into this since upgrading to it.
A: Update the Expo CLI to the latest version, e.g.:
npm install -g expo-cli
This fixes the reload issue on SDK 45.
| |
doc_23525535
|
Then I found that the CONFIRMATION alert is very similar. All I need to change is the buttons' text and their actions: clicking Retry should go back to the game scene, and clicking Quit should go back to the main scene.
Now I know how to set an action for a button in the alert box, but I have a new question: I have no idea how to call the EventHandler<ActionEvent> in the statement.
Here is my code for two alert box
(source code from https://code.makery.ch/blog/javafx-dialogs-official/)
Alert right = new Alert(AlertType.CONFIRMATION);
right.setTitle("Checking Result");
right.setHeaderText(null);
right.setContentText("Your answer is correct. Would you like to start again");
ButtonType restart = new ButtonType("Restart");
ButtonType quit = new ButtonType("Quit");
right.getButtonTypes().setAll(restart, quit);
Alert wrong = new Alert(AlertType.CONFIRMATION);
wrong.setTitle("Checking Result");
wrong.setHeaderText(null);
wrong.setContentText("Your answer is incorrect. Would you like to try again");
ButtonType retry = new ButtonType("Retry");
wrong.getButtonTypes().setAll(retry, quit);
the code for actions
Optional<ButtonType> result = right.showAndWait();
if (result.isPresent() && result.get() == quit) {
stage.setScene(main_frame);
}else if(result.isPresent() && result.get() ==
restart) {// call the actionevent clears}
Optional<ButtonType> result = wrong.showAndWait();
if (result.isPresent() && result.get() == quit) {
stage.setScene(main_frame);
}else if(result.isPresent() && result.get() ==
retry) {// call the actionevent clears}
The code for eventhandler
final EventHandler<ActionEvent> clears = new EventHandler<ActionEvent>() {
@Override
public void handle(final ActionEvent event) {
for (int i = 0; i < 9; i++) {
for (int j = 0; j < 9; j++) {
if (digging_array[i][j] == 1) {
sudoku[i][j].setText(Integer.toString(final_Array[i][j]));
} else {
sudoku[i][j].setText("");
}
}
}
}
};
A: In the linked tutorial, there is an example on how to set custom actions (I shortened it a bit):
Alert alert = new Alert(AlertType.CONFIRMATION);
alert.setTitle("Confirmation Dialog with Custom Actions");
ButtonType buttonTypeOne = new ButtonType("One");
ButtonType buttonTypeCancel = new ButtonType("Cancel", ButtonData.CANCEL_CLOSE);
alert.getButtonTypes().setAll(buttonTypeOne, buttonTypeCancel);
Optional<ButtonType> result = alert.showAndWait();
if (result.get() == buttonTypeOne){
// ... user chose "One"
} else {
// ... user chose CANCEL or closed the dialog
}
You can get the result (what the user clicked) via result.get() and check, which button was pressed (buttonTypeOne, buttonTypeCancel, ...).
When the user presses "One", you can now do something in the first body of the if statement.
In your code you are missing the showAndWait() call. If for example the user was right, you should do:
Optional<ButtonType> rightResult = right.showAndWait();
if (rightResult.isPresent()) {
if (rightResult.get() == restart) { //because "restart" is the variable name for your custom button type
// some action, method call, ...
} else { // In this case "quit"
}
}
Note, this is probably not the most elegant way (double if-statement) to do it. @Others feel free to edit my answer and put in a better way to do it.
A: You did change the button type correctly regarding the right alert. Your last line does not change the buttons for the wrong alert. Replacing right with wrong will target the correct alert and thus change its buttons.
Checking which button was pressed can be done in multiple ways. Extract from the official documentation (https://docs.oracle.com/javase/8/javafx/api/javafx/scene/control/Alert.html):
Option 1: The 'traditional' approach
Optional<ButtonType> result = alert.showAndWait();
if (result.isPresent() && result.get() == ButtonType.OK) {
formatSystem();
}
Option 2: The traditional + Optional approach
alert.showAndWait().ifPresent(response -> {
if (response == ButtonType.OK) {
formatSystem();
}
});
Option 3: The fully lambda approach
alert.showAndWait()
.filter(response -> response == ButtonType.OK)
.ifPresent(response -> formatSystem());
Instead of using ButtonType.OK you need to use your custom buttons (restart, retry, quit).
EDIT
In your example you have to modify the code like this:
void clear() {
for (int i = 0; i < 9; i++) {
for (int j = 0; j < 9; j++) {
if (digging_array[i][j] == 1) {
sudoku[i][j].setText(Integer.toString(final_Array[i][j]));
} else {
sudoku[i][j].setText("");
}
}
}
}
Optional<ButtonType> result = right.showAndWait();
if (result.isPresent() && result.get() == quit) {
stage.setScene(main_frame);
} else if(result.isPresent() && result.get() == restart) {
clear();
}
Optional<ButtonType> result = wrong.showAndWait();
if (result.isPresent() && result.get() == quit) {
stage.setScene(main_frame);
} else if(result.isPresent() && result.get() == retry) {
clear();
}
| |
doc_23525536
|
This is the main .php:
<!DOCTYPE html>
<html>
<head>
<title>Hola Mundo con AJAX</title>
<script>
function loadXMLDoc()
{
var xmlhttp;
if (window.XMLHttpRequest)
{
// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{
// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange = function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
document.getElementById("myDiv").innerHTML=xmlhttp.responseText;
}
}
xmlhttp.open("GET","carga.php",true);
xmlhttp.send();
}
</script>
</head>
<body>
<table>
<tr>
<td bgcolor="#D6D6D6">field 1</td>
<td bgcolor="#D6D6D6">field 2</td>
<td bgcolor="#D6D6D6">field 3</td>
</tr>
<tr>
<div id="myDiv"><td>1</td>
<td>2</td>
<td>3</td></div>
</tr>
</table>
<button onclick="loadXMLDoc()">Cambio</button>
</body>
</html>
This is carga.php:
<?php
echo "<td>4</td>
<td>5</td>
<td>6</td>";
?>
A: Try this:
<!DOCTYPE html>
<html>
<head>
<title>Hola Mundo con AJAX</title>
<script>
function loadXMLDoc()
{
var xmlhttp;
if (window.XMLHttpRequest)
{
// code for IE7+, Firefox, Chrome, Opera, Safari
xmlhttp=new XMLHttpRequest();
}
else
{
// code for IE6, IE5
xmlhttp=new ActiveXObject("Microsoft.XMLHTTP");
}
xmlhttp.onreadystatechange = function()
{
if (xmlhttp.readyState==4 && xmlhttp.status==200)
{
document.getElementById("myTable").tBodies[0].innerHTML+=xmlhttp.responseText;
}
}
xmlhttp.open("GET","carga.php",true);
xmlhttp.send();
}
</script>
</head>
<body>
<table id="myTable">
<tbody>
<tr>
<td bgcolor="#D6D6D6">field 1</td>
<td bgcolor="#D6D6D6">field 2</td>
<td bgcolor="#D6D6D6">field 3</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>3</td>
</tr>
</tbody>
</table>
<button onclick="loadXMLDoc()">Cambio</button>
</body>
</html>
| |
doc_23525537
|
I have several entities, like customer, car, repair, repair team member, employee, service, tire storage, personal file, sales.
Now I have to do following queries:
*
*List every car whose vehicle identification number contains the combination "123". (Result column: VehicleID)
Here I would say SELECT * FROM Vehicle WHERE VIN = '%123%';
*A list of every sales in 2021. (Result column: document number)
Here I don't know how to connect both entities (service, sales) and filter by year.
*The total revenue generated from sales in 2020
*An overview of all the sales to the customer with the customer ID "4711" (Result column: Date of sale, product number, price)
What about -> SELECT * FROM Verkauf WHERE Verkauf.Dienstleistung_D_ID IN ( SELECT D_ID FROM Dienstleistung WHERE Kunde_K_ID == "K4711" )
*Overview of all customers who have been sold something by an employee with employment year before 2010. The output should be sorted by last name and first name of the customer. (Result columns: Customer number, first name, last name, personnel number, last name of employee, year of employment).
I hope you can help me, since I am not able to set up the database in SQLite, MySQL, etc., so I cannot test my queries myself.
ER Diagram
My ER diagram is in German, I hope you understand the structure.
Reparatur - repair,
Fahrzeug - car,
Kunde - customer ,
Dienstleistung - service,
Verkauf - sales ,
Reifenlagerung - tire storage,
Reparaturteammitglied - repair team member,
Mitarbeiter - employee,
Personalakte - personal file,
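One note on the first attempt above: = tests exact equality, so VIN = '%123%' matches nothing; substring matching needs LIKE with % wildcards. A runnable sketch in SQLite, with an assumed minimal schema (table and column names taken from the question and ER diagram):

```python
import sqlite3

# In-memory database with an assumed minimal subset of the schema:
# table Fahrzeug, columns VehicleID and VIN.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Fahrzeug (VehicleID INTEGER PRIMARY KEY, VIN TEXT)")
conn.executemany(
    "INSERT INTO Fahrzeug (VIN) VALUES (?)",
    [("ABC123XYZ",), ("NOMATCH",), ("9912377",)],
)

# LIKE '%123%' matches any VIN containing the substring "123".
rows = conn.execute(
    "SELECT VehicleID FROM Fahrzeug WHERE VIN LIKE '%123%'"
).fetchall()
print(rows)  # [(1,), (3,)]
```

The same LIKE pattern carries over to MySQL and most other SQL dialects.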
| |
doc_23525538
|
*
*packages/app1 has is-even in its package.json
*packages/app2 does not have is-even in its package.json
*packages/app2 tries to use is-even
Then currently I don't get any warning from something like eslint-plugin-import, when preferably I would like an error, because if I publish app2, any user that tries to install it from npm will receive errors since it does not properly specify that it needs is-even as a dependency.
Reproducible case here with a minimal monorepo https://github.com/cmdcolin/yarn_workspaces_eslint_plugin_import
A: This was fixed by adding
extends:
- eslint:recommended
- plugin:import/recommended
rules:
import/no-extraneous-dependencies: error
This makes it detect the error properly, e.g. this message is expected and good now
yarn run v1.22.15
$ eslint .
/home/cdiesh/test/packages/app2/src/index.js
1:1 error 'is-even' should be listed in the project's dependencies. Run 'npm i -S is-even' to add it import/no-extraneous-dependencies
β 1 problem (1 error, 0 warnings)
| |
doc_23525539
|
The code is supposed to create a plot map of Haiti but I got an error as below:
Traceback (most recent call last):
File "Haiti.py", line 74, in <module>
x, y = m(cat_data.LONGITUDE, cat_data.LATITUDE)
File "/usr/local/lib/python2.7/site-packages/mpl_toolkits/basemap/__init__.py", line 1148, in __call__
xout,yout = self.projtran(x,y,inverse=inverse)
File "/usr/local/lib/python2.7/site-packages/mpl_toolkits/basemap/proj.py", line 286, in __call__
outx,outy = self._proj4(x, y, inverse=inverse)
File "/usr/local/lib/python2.7/site-packages/mpl_toolkits/basemap/pyproj.py", line 388, in __call__
_proj.Proj._fwd(self, inx, iny, radians=radians, errcheck=errcheck)
File "_proj.pyx", line 122, in _proj.Proj._fwd (src/_proj.c:1571)
RuntimeError
I checked that the mpl_toolkits.basemap and proj modules were installed okay on my machine. Basemap was installed from source as instructed and proj was installed by Homebrew, and they look fine to me.
If you have basemap and proj installed, does this code run successfully? If not, do you think it's a module installation issue, the code itself, or something else?
Haiti.csv file can be downloaded from https://github.com/pydata/pydata-book/raw/master/ch08/Haiti.csv
import pandas as pd
import numpy as np
from pandas import DataFrame
data = pd.read_csv('Haiti.csv')
data = data[(data.LATITUDE > 18) & (data.LATITUDE < 20) &
(data.LONGITUDE > -75) & (data.LONGITUDE < -70)
& data.CATEGORY.notnull()]
def to_cat_list(catstr):
stripped = (x.strip() for x in catstr.split(','))
return [x for x in stripped if x]
def get_all_categories(cat_series):
cat_sets = (set(to_cat_list(x)) for x in cat_series)
return sorted(set.union(*cat_sets))
def get_english(cat):
code, names = cat.split('.')
if '|' in names:
names = names.split(' | ')[1]
return code, names.strip()
all_cats = get_all_categories(data.CATEGORY)
english_mapping = dict(get_english(x) for x in all_cats)
def get_code(seq):
return [x.split('.')[0] for x in seq if x]
all_codes = get_code(all_cats)
code_index = pd.Index(np.unique(all_codes))
dummy_frame = DataFrame(np.zeros((len(data), len(code_index))),
index=data.index, columns=code_index)
for row, cat in zip(data.index, data.CATEGORY):
codes = get_code(to_cat_list(cat))
dummy_frame.ix[row, codes] = 1
data = data.join(dummy_frame.add_prefix('category_'))
from mpl_toolkits.basemap import Basemap
import matplotlib.pyplot as plt
def basic_haiti_map(ax=None, lllat=17.25, urlat=20.25, lllon=-75, urlon=-71):
# create polar stereographic Basemap instance.
m = Basemap(ax=ax, projection='stere',
lon_0=(urlon + lllon) / 2,
lat_0=(urlat + lllat) / 2,
llcrnrlat=lllat, urcrnrlat=urlat,
llcrnrlon=lllon, urcrnrlon=urlon,
resolution='f')
# draw coastlines, state and country boundaries, edge of map. m.drawcoastlines()
m.drawstates()
m.drawcountries()
return m
fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(12, 10))
fig.subplots_adjust(hspace=0.05, wspace=0.05)
to_plot = ['2a', '1', '3c', '7a']
lllat=17.25; urlat=20.25; lllon=-75; urlon=-71
for code, ax in zip(to_plot, axes.flat):
m = basic_haiti_map(ax, lllat=lllat, urlat=urlat,
lllon=lllon, urlon=urlon)
cat_data = data[data['category_%s' % code] == 1]
# compute map proj coordinates.
print cat_data.LONGITUDE, cat_data.LATITUDE
x, y = m(cat_data.LONGITUDE, cat_data.LATITUDE)
m.plot(x, y, 'k.', alpha=0.5)
ax.set_title('%s: %s' % (code, english_mapping[code]))
A: This is resolved by changing m(cat_data.LONGITUDE, cat_data.LATITUDE) to m(cat_data.LONGITUDE.values, cat_data.LATITUDE.values), thanks to Alex Messina's finding.
After a little further study of my own, I found that since pandas v0.13.0 (released 31 Dec 2013), Series data from a DataFrame (now derived from NDFrame) must be passed with .values to a Cython-typed function like basemap/proj, as below.
Quote from github commit log of pandas:
+.. warning::
+
+ In 0.13.0 since ``Series`` has internaly been refactored to no longer sub-class ``ndarray``
+ but instead subclass ``NDFrame``, you can **not pass** a ``Series`` directly as a ``ndarray`` typed parameter
+ to a cython function. Instead pass the actual ``ndarray`` using the ``.values`` attribute of the Series.
+
+ Prior to 0.13.0
+
+ .. code-block:: python
+
+ apply_integrate_f(df['a'], df['b'], df['N'])
+
+ Use ``.values`` to get the underlying ``ndarray``
+
+ .. code-block:: python
+
+ apply_integrate_f(df['a'].values, df['b'].values, df['N'].values)
You can find the corrected version of the example code here.
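A minimal illustration of the difference, assuming pandas >= 0.13 (the toy coordinates merely stand in for the Haiti data):

```python
import numpy as np
import pandas as pd

# Since pandas 0.13.0, Series subclasses NDFrame rather than ndarray,
# so a Series can no longer be passed where a Cython function expects
# a typed ndarray.
lon = pd.Series([-72.3, -73.1, -74.0])

# .values exposes the underlying ndarray -- the form basemap needs:
# x, y = m(cat_data.LONGITUDE.values, cat_data.LATITUDE.values)
lon_arr = lon.values
print(isinstance(lon, np.ndarray), isinstance(lon_arr, np.ndarray))  # False True
```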
| |
doc_23525540
|
I have a page with two tables, but I only want to find the <th>'s of a table, with a particular class name flexi-fruit.
For example:
<table class="flexi-fruit">
<colgroup>
<col>
<col span="2">
</colgroup>
<tbody>
<tr class="header-row">
<th>
</th>
<th> Standard </th>
<th> Exotic </th>
</tr>
<tr>
<td>
Blah
</td>
<td>
Blah
</td>
<td>
Blah
</td>
</tr>
</tbody>
</table>
So I want to find all the <th>s, in the table with the classname flexi-fruit. (And then do something).
Something like this: document.querySelectorAll('th.flexi-fruit'), which obviously doesn't work.
Thanks.
A: You just had a wrong selector. document.querySelectorAll('table.flexi-fruit th').
This should work. Your selector would match every th which has the class flexi-fruit.
A: Your CSS selector order is wrong; select the table by class first, then its descendant th elements:
document.querySelectorAll('table.flexi-fruit th')
| |
doc_23525541
|
The message example is below:
production.ALERT: Http analyzer failed {"reason":"Maximum count of dump files reached. Recording stopped."}
| |
doc_23525542
|
So what I'm doing is:
*
*fire FullPage scroll to section
*find instance of iScroll in that section and fire it's scrollTo() event
*reBuild() FullPage
The scroll positions of both plugins change correctly, but the problem seems to be that FullPage doesn't register iScroll's scrollTo() event and behaves like the active section is scrolled to the top, so basically scrolling up gets you to previous section and scrolling down gets you under the content eventually.
document.querySelector('.button').addEventListener('click', e => {
fullpage_api.moveTo(3)
fullpage_api.getActiveSection().item.querySelector('.fp-scrollable').scrollTo(0, 1000, 1000)
fullpage_api.reBuild()
})
Here is a simplified version of my code with the bug reproduced: https://jsfiddle.net/5bojtrmd/13/
After clicking the button, you can't get to the section 3 title anymore, and you can scroll to the red zone, which shouldn't be visible.
A: A few things:
*
*You need to use a negative value for the iscroll scrollTo y position.
*You should call iScroll's refresh function rather than the one from fullPage.js.
Code here:
document.querySelector('.button').addEventListener('click', e => {
fullpage_api.moveTo(3)
var instance = fullpage_api.getActiveSection().item.querySelector('.fp-scrollable').fp_iscrollInstance;
instance.scrollTo(0, -1050, 1000);
setTimeout(function () {
instance.refresh();
}, 1000 + 150);
});
Reproduction online.
| |
doc_23525543
|
At any rate, if I omit the test validating the unique constraint, all is well with the delete. However, if I add the SerialNumberUniqueConstraint_Test method, it fails and my Item instance is not null, as the delete never occurs. If I move SerialNumberUniqueConstraint_Test ahead of the other tests, the subsequent tests fail with the same UniqueFieldValueConstraintViolationException as well. What am I doing incorrectly?
[TestMethod]
[ExpectedException( typeof( UniqueFieldValueConstraintViolationException ) )]
public void SerialNumberUniqueConstraint_Test()
{
using( var logic = new ItemLogic() )
{
logic.Save( CreateItem() );
}
}
[TestMethod]
public void DeleteItem_Test()
{
Item item = null;
using( var logic = new ItemLogic() )
{
logic.Delete( SerialNumber );
}
using( var logic = new ItemLogic() )
{
item = logic.Retrieve( SerialNumber );
}
Assert.IsNull( item );
}
private Item CreateItem()
{
return new Item { Name = "My item", Make = "make", Model = "model", SerialNumber = "1234" };
}
public Item Save( Item item )
{
Db4oDatabase.Database.Store( item );
Db4oDatabase.Database.Commit();
return this.Retrieve( item.SerialNumber );
}
public Item Retrieve( string serialNumber )
{
Item item = (from i in Db4oDatabase.Database.AsQueryable<Item>()
where i.SerialNumber == serialNumber
select i).FirstOrDefault();
return item;
}
public void Delete( string serialNumber )
{
Db4oDatabase.Database.Delete( this.Retrieve( serialNumber ) );
}
A: The data class' Save method now utilizes a try/catch on the Commit() operation and performs a Rollback should the UniqueFieldValueConstraintViolationException occur. Additionally, I have made the DeleteItem_Test independent as recommended by Bob Horn.
public Item Save( Item item )
{
Db4oDatabase.Database.Store( item );
try
{
Db4oDatabase.Database.Commit();
}
catch( UniqueFieldValueConstraintViolationException )
{
Db4oDatabase.Database.Rollback();
throw;
}
return this.Retrieve( item.SerialNumber );
}
[TestMethod]
public void DeleteItem_Test()
{
string serialNumber = "DeleteItem_Test";
Item item = new Item
{
Name = "Washer",
Make = "Samsung",
Model = "Model No",
SerialNumber = serialNumber,
PurchasePrice = 2500m
};
using( var logic = new ItemLogic() )
{
item = logic.Save( item );
Assert.IsNotNull( item, TestResources.DevMessage_IntermediateOperationFailed, "Save", serialNumber );
logic.Delete( item );
item = logic.Retrieve( serialNumber );
}
Assert.IsNull( item );
}
| |
doc_23525544
|
I have to prepend a TaskID to every commit message.
Current state:
pick 7c2dbd5 Message1
pick d57eb65 Message2
...
pick d57eb65 MessageN
Target state
pick 7c2dbd5 [TaskID] Message1
pick d57eb65 [TaskID] Message2
...
pick d57eb65 [TaskID] MessageN
Ideally, I'd like to perform this operation automatically without leaving the editor.
A: pick 7c2dbd5 Message1
x git commit --amend -m "[TaskID] Message1"
pick d57eb65 Message2
x git commit --amend -m "[TaskID] Message2"
...
pick d57eb65 MessageN
x git commit --amend -m "[TaskID] MessageN"
x $command or exec $command runs the command after the previous action is done.
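Since the prefix is identical on every line, the todo list doesn't even need editing: passing --exec on the command line appends an exec after each pick. A self-contained sketch (the throwaway repo only sets the stage; "[TASK-42]" is a placeholder task ID):

```shell
#!/bin/sh
set -e

# --- demo setup: a throwaway repo with two commits to rewrite ---
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name "You"
echo 1 > file; git add file; git commit -qm "base"
echo 2 > file; git commit -qam "Message1"
echo 3 > file; git commit -qam "Message2"

# --exec appends an exec after every pick (it implies -i, but with an
# unedited todo no editor opens). At each step, $(git log -1 --pretty=%B)
# expands to the full message of the commit just picked.
git rebase HEAD~2 --exec \
  'git commit --amend -m "[TASK-42] $(git log -1 --pretty=%B)"'

git log --pretty=%s -2   # both subjects now carry the prefix
```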
| |
doc_23525545
|
public static IEnumerable<MetaTable> GetMetaTables()
{
using (var connection = new SqlConnection(ConnectionString))
using (var context = new SchemaDataContext(connection))
return context.Mapping.GetTables().ToList();
}
What am I doing wrong!?
A:
var model = new AttributeMappingSource().GetModel(typeof({YourDataContext}));
return model.GetTables().ToList();
Edit to my original solution:
using (SqlConnection connection = new SqlConnection(connectionString))
{
connection.Open();
DataTable table = connection.GetSchema("Tables");
// displaying data:
foreach (System.Data.DataRow row in table.Rows)
{
foreach (System.Data.DataColumn col in table.Columns)
{
Console.WriteLine("{0} = {1}", col.ColumnName, row[col]);
}
}
}
A: The problem is that you don't have any entities in SchemaDataContext, so there are no mappings at all. But if you don't have entities in SchemaDataContext anyway, you really shouldn't use an ORM.
Here is a simpler solution, without Linq to SQL:
public static IEnumerable<string> GetTables()
{
using (var connection = new SqlConnection(ConnectionString))
{
connection.Open();
foreach (System.Data.DataRow row in connection.GetSchema("Tables").Rows)
{
yield return (string)row[2]; // column 2 is TABLE_NAME
}
}
}
| |
doc_23525546
|
A: Here is what you want to achieve, expressed in the simplest terms:
I want something to happen only on recognised filetypes
which you can then express with some pseudocode:
if Vim assigned a filetype to this buffer
do something
endif
Now, what better way to know if the current buffer has a filetype than ask Vim?
if &filetype != ""
update
endif
which is not pseudocode anymore and only needs to be inlined:
if &filetype != "" | update | endif
to be used in your autocommand:
augroup MyAutoSave
autocmd!
autocmd CursorHold,CursorHoldI * if &filetype != "" | update | endif
augroup END
| |
doc_23525547
|
I'm working on a project where the program reads one file (that's approx 50 lines or so) and I need to have it match data on the third column to data on the first column of a separate line. I opened up a new project because it was getting too complex for such an easy task.
Here's an example file that is closely relevant to the actual file I'm working with:
a1,b1,c1,d1
**c4**,b2,c2,d2
a3,b3,c3,d3
a4,b4,**c4**,d4
a5,b5,c4,d5
I promise this isn't for a school project, this is something that I need to figure out for work purposes.
Here is what I have, and I know it's just not going to work because it's only reading line by line for comparison. How do I get the program to read the current array value in the foreach command against the entire file that I caught in streamreader?
static void Main(string[] args)
{
StreamReader sr = new StreamReader("directories.txt");
string sline = sr.ReadLine();
string[] sarray = sline.Split(',');
string col3 = sarray[2];
string col1 = sarray[0];
foreach(string a in sarray)
{
// ?!?!?!!!
// I know this won't work because I'm comparing the same line being read.
// How in the world can I make this program read col3 of the current line being read against the entire file that was read earlier?
if (col3 == col1)
{
Directory.CreateDirectory("DRIVE:\\Location\\" + a.ToString());
}
}
}
Thank you ahead of time!
A: Since your file is small you can go with the simplest path...
var lines = File.ReadLines(filename)
.Select(line => line.Split(','))
.ToList();
var result = from a in lines
from b in lines
where a[0] == b[2]
select new { a, b };
foreach(var x in result)
{
Console.WriteLine(string.Join(",", x.a) + " - " + string.Join(",", x.b));
}
| |
doc_23525548
|
i am using simple mail function:
$to .= 'email-Id';
$subject = " Test Subject";
$headers = 'MIME-Version: 1.0' . "\r\n";
$headers .= 'Content-type: text/html; charset=iso-8859-1' . "\r\n";
$headers .= 'To: '.$to.'' . "\r\n";
$headers .= 'From: '.$name. '<'.$email.'>' . "\r\n";
echo $message='email text here';
@mail($to, $subject, $message, $headers);
A: I had never done PHP, but the following guide was step by step and incredibly easy to get working.
http://www.windowsazure.com/en-us/Documentation/Articles/store-sendgrid-php-how-to-send-email/
Hope it helps someone.
A: To send emails using PHP you have a few options:
Option 1: Use SMTP
You'll need to modify your php.ini configuration file (http://php.net/manual/en/ref.mail.php) and set the SMTP value to an external SMTP server you can use. SMTP servers are not part of the Windows Azure features at the moment.
[mail function]
SMTP = mail.mycompany.com
Option 2: Use sendmail
You'll need to modify your php.ini configuration file (http://php.net/manual/en/ref.mail.php) and set the sendmail_path value to the sendmail executable.
[mail function]
sendmail_path = "C:\wamp\sendmail\sendmail.exe -t"
Since sendmail doesn't exist in Windows, you'll need to use the fake sendmail for windows: http://glob.com.au/sendmail/
Option 3: Use a mail/smtp service
You could use a service like SendGrid to send your emails (they have an offer for Azure users: http://sendgrid.com/azure.html). They'll take care of sending out the email, you'll just need to call the REST api:
$sendgrid = new SendGrid('username', 'password');
$mail = new SendGridMail();
$mail->addTo('foo@bar.com')->
setFrom('me@bar.com')->
setSubject('Subject goes here')->
setText('Hello World!')->
setHtml('<strong>Hello World!</strong>');
$sendgrid->smtp->send($mail);
A: email-Id ?? what is this?
I'm guessing it is the email address of the recipient.
Your headers do not require the To: as the to address is specified in the first parameter.
Unless you know the recipient's name and want them to see the email addressed to Some Name <address> rather than just the bare address, you do not need it.
Also you have an error in it: missing <> before and after the email address.
P.S. Emails sent through PHP's mail() function have one of the highest rates of ending up in SPAM, especially if you do not have Domain Keys and SPF set in your DNS for this.
If using Cpanel please refer to Email Authentication section of your Email group in Cpanel.
A: I was having the same trouble, but this solution works perfectly for me.
Just follow these steps:
*
*Enable 2-step verification on your Gmail account.
*Go to app passwords, select app = Other, type AzureWebsite, generate a password, and keep that password.
*Replace the password in your code:
$mail->Password = 'new password';
*I hope this will work for you too.
A: Updated information @ Nov-2017:
Full Blog post:
https://blogs.msdn.microsoft.com/mast/2017/11/15/enhanced-azure-security-for-sending-emails-november-2017-update/
Recommended Method of Sending E-mail
"Microsoft recommends that Azure customers employ authenticated SMTP relay services (typically connected via TCP port 587 or 443, but often support other ports too)....."
| |
doc_23525549
|
Possible Duplicate:
Sizeof an array in the C programming language?
I have an array of char and I want to process it in a function, I tried code like this:
void process(char *word);
int main(){
char *word = new char [5];
/*here we make this word
....
*/
process(word);
puts(word);
}
void process(char *word){
int sizeOfWord = sizeof(word)-1;
/* here is cycle that process the word, I need it lenght to know how long cycle must be
.....
*/
}
But I can't get the length of the array with sizeof. Why not? And how can I get it?
A: You can't. With a pointer, there is no way to know the size.
What you should do is, pass the length of word to your process also.
You should know that there is a difference between arrays and pointers. Arrays are indeed a number of elements and therefore sizeof of the array gives its size (in bytes). A pointer on the other hand is just an address. It may not even point to an array. Since the sizeof operator is computed at compile time (except for variable length arrays), it cannot know what you mean.
Think of this example:
void process(char *word);
int main(){
char *word = new char [5];
char *word2 = new char [10];
process(word);
process(word2);
/* ... */
}
void process(char *word){
int sizeOfWord = sizeof(word)-1; // what should this be?
/* ... */
}
Now, knowing that sizeof in this case is computed at compile time, what value do you think it should get? 5? 10?
Side note: It looks like you are using this array as a string. In that case, you can easily get its length with strlen.
A: You should use strlen() instead.
EDIT:
This assumes your word is a valid C string with \0 at the end.
Anyway, you are using C++, so you should use a container like std::string.
A: If your array is always char* or const char* and contains null terminated strings, you can use strlen, or even more easily use the std::string class and its appropriate methods.
If it is not a null terminated string (eg. its a plain byte array, or your question is about arrays in general and the char was just an example), use std::vector<char> or std::array<char, 5> and call their .size() methods.
A: No, it won't work. sizeof a pointer returns only the size of the pointer type, not that of the array. When you pass an array as parameter to a function, the array decays into a pointer, which means that the compiler won't know anymore what exactly it points to, and if it is an array, how long it can be. The only ways for you to know it inside the function is
*
*pass the length as an additional parameter, or
*add an extra terminator element to the array.
Note that arrays in C and C++ don't store their size in any way, so it is the programmer who must keep it in mind and program accordingly. In the code above, you know that the actual size of word is 5, but the compiler (and the runtime) has no way of knowing it. If you declared it as an array, i.e.
char word[5];
the compiler would keep track of its size, and sizeof would return this size (in bytes) while in the scope of this declaration. This allows one to apply a well known idiom to get the actual size of an array:
int size = sizeof(word) / sizeof(char);
But again, this works only as long as the compiler knows that word is actually an array, so it is not possible inside function calls where word is passed as a (pointer) parameter.
A: Since arrays "decay" into pointers to their first elements in code like this, you can't. The pointer doesn't "know" how large a block of memory it points at. You need to pass the size separately, e.g.
void process(char *word, size_t length);
A: There is only one way to know the length of an array in C/C++: already knowing it.
You will need to pass the length of the array as another parameter.
If you are talking specifically about strings, search for the null character in the array and then use its index + 1 as the length (which is basically what the strlen() function does).
Otherwise, perhaps you could just use std::string instead and then get the length property of it?
A: You either have to pass the length of the array to the process function, or use the null (\0) character to mark the end.
Assuming the latter you can use
void process(char *word)
{
while (*word)
{
//do stuff with the character *word
++word;
}
}
A: On Windows there is _msize function to get the allocated memory size on heap in bytes. In VS2005 and VS2008 it worked perfectly with operator new, because it used malloc to allocate the memory. I have to note:
It is not standard way, so you should NOT use it!
Object * addr = new Object[xxx];
size_t number_of_objects = _msize(addr) / sizeof(Object);
A: The other answers have explained in detail why you cannot get the size of a classical C-array, but I want to point out the following.
Your example is missing a
delete [] word;
at the end of main. Otherwise you have a memory leak.
That shows why the preferred way of working with arrays in C++ is using std::vector or -- in case of a fixed length -- the new std::array and in case of character strings std::string. The compiler will manage the underlying storage for you. All of these classes provide a size function.
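As a quick illustration of that point, each of these classes reports its own length (names below are illustrative only):

```cpp
#include <array>
#include <string>
#include <vector>

// Each standard container knows its own length, unlike a raw char*.
inline bool containers_know_their_size() {
    std::vector<char> v{'w', 'o', 'r', 'd'};   // dynamic length: 4
    std::array<char, 5> a{};                   // fixed length: 5
    std::string s = "word";                    // string length: 4
    return v.size() == 4 && a.size() == 5 && s.size() == 4;
}
```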
| |
doc_23525550
|
Automatically scrolling cursor into view after selection change
this will be disabled in the next version
set editor.$blockScrolling = Infinity to disable this message
Where to set that editor.$blockScrolling variable to remove those warnings?
var aces = el.find('textarea.code.json');
var aceInit = function() {
//window.ace.$blockScrolling = Infinity; // no effect
//$.ace.$blockScrolling = Infinity; // no effect
//window.jQueryAce.AceDecorator.$blockScrolling = Infinity; // no effect
//window.jQueryAce.BaseEditor.$blockScrolling = Infinity; // no effect
//window.jQueryAce.TextAreaEditor.$blockScrolling = Infinity; // no effect
aces.ace({theme: 'eclipse', lang: 'json'}).each(function (idx, editor) {
var el = $(editor);
var editor = el.data('ace').editor;
//editor.$blockScrolling = Infinity; // no effect
var ace = editor.ace;
//ace.$blockScrolling = Infinity; // no effect, even though this is the correct one
ace.setReadOnly(el.prop('disabled'));
ace.setOption("maxLines", 10);
ace.setOption("minLines", 2);
});
}; // this function called when ace.js, mode-json.js and jquery-ace.js loaded
A: The correct one would be:
var el = $(the_element);
var editor = el.data('ace').editor;
editor.$blockScrolling = Infinity;
| |
doc_23525551
|
I've passed the statement test for assetlinks.json.
I've also checked the log of SingleHostAsyncVerifier, but I couldn't figure out why it returned false.
2019-09-05 11:13:58.390 31360-31652/?
I/SingleHostAsyncVerifier: Verification result: checking for a statement with source a: # bpkr@ad19471e
v: 21
, relation delegate_permission/common.handle_all_urls, and target b <
a: "com.jh.testproject"
b: # bpkp@f2f5b17d
v: 125
>
v: 127
--> false.
Now, when I try to open my app by link, the chooser still shows up.
A: Since your test passed, check whether the file is served with the correct content type. You should add the header Content-Type: application/json for .well-known/assetlinks.json
Nginx configuration
location = /.well-known/assetlinks.json {
default_type 'application/json';
}
You can find more details about universal link configuration including Amazon CloudFront from here Setting Up Universal Links
A: I agree with Chathura Wijesinghe.
In case you are using Apache, here is the configuration to set the header content-type to application/json.
Apache 2.4.51 configuration
Create a file at /.well-known/.htaccess :
# BEGIN SET ASSETLINKS HEADER
<IfModule mod_headers.c>
Header always set content-type "application/json"
</IfModule>
# END SET ASSETLINKS HEADER
| |
doc_23525552
|
POST https://vortex.data.microsoft.com/collect/v1 HTTP/1.1
Also some GET over https. I think only CONNECT is allowed over https, am I wrong? If I am wrong, how do I deal with these requests? (I just dropped these requests in my app.)
Another thing, maybe unrelated, is that all these requests are related to Microsoft, judging from the log.
A: There isn't any problem handling any HTTP Method with HTTPS within a proxy.
All the requests with https://-protocol will be automatically received and sent to port 443 if not indicated otherwise.
Independently if you have a server where you deployed a HAProxy, NGINX, Apache Web Server or that you literally wrote a proxy like this one in JavaScript, only thing you have to do is to literally proxy the requests to the destination server address.
Regarding the encryption, precisely HTTPS ensures that there are no eavesdroppers between the client and the actual target, so the Proxy would act as initial target and then this would transparently intercept the connection.
*
*Client starts HTTPS session to Proxy
*Proxy intercepts and returns its certificate, signed by a CA trusted by the client.
*Proxy starts HTTPS session to Target
*Target returns its certificate, signed by a CA trusted by the Proxy.
*Proxy streams content, decrypting and re-encrypting it with its certificate.
Basically it's a concatenation of two HTTPS sessions, one between the client and the proxy and another between the proxy and the final destination.
| |
doc_23525553
|
foreach($cities as $city){
echo $city.", "; }
The output is: Madrid, Berlin, Array
As there is 1 city from Spain, 1 from Germany and 2 from the USA (in an array), it outputs Array. What do I do so that PHP detects it is an array and echoes the parts of the array?
Thank you in advance.
A: echo will not output an array's contents. In fact, it will only output the word Array.
If you want to print an array use print_r
A: To detect if a value is an array, use is_array
E.g.
if(is_array($city)) {
foreach($city as $sub_city) { echo 'Sub-city: ' . $sub_city; }
}
A: you can use a combination of the ternary operator and implode()
foreach($cities as $city)
{
echo is_array($city) ? implode(', ', $city) : $city;
}
| |
doc_23525554
|
How do I update the string to keep just the first 10 chars?
Please keep in mind I have thousands of entries where 'ThisPartOfTheStringIsRandom' is different in every case.
A: The LEFT function is a string function that returns the left part of a string with a specified length.
UPDATE TableA
SET YourColumn = LEFT(YourColumn,10)
| |
doc_23525555
|
Originally I had a free Dyno from Heroku. When my website was inactive for 30 mins it would sleep, consequently resetting my database.
I then upgraded to a Hobby Dyno. This solved the issue of the dyno sleeping. Although it still has a daily restart causing my database to reset. Is there a way to stop this from happening?
A: Assuming you start the application with python main.py, then your main file should be referenced as __main__ according to Python import rules. So in your blueprint, when you import the socketio object you should do it like this:
from __main__ import socketio
| |
doc_23525556
|
import os
import telebot
from telebot import types
bot = telebot.TeleBot("5918393858:AAFyk-FNWiVPpYHv7u9WojgsvYqzAyGt4LE")
API_KEY = os.getenv('API_KEY')
@bot.message_handler(content_types=['text'])
def get_user_text(message):
if message.text == ("Hello").lower():
photo = open('adel.jpg', 'rb')
bot.send_photo(message.chat.id, photo)
elif message.text == "id":
bot.send_message(message.chat.id, f"your ID: {message.from_user.id}", parse_mode='html')
elif message.text == "How are you?":
bot.send_message(message.chat.id,'I am fine , and you ?' )
else:
bot.send_message(message.chat.id," I don't understand any fucking shit ", parse_mode='html')
Hi everyone, so I'm trying to create a Telegram bot with Python, but as you can see ---> if message.text == ("Hello").lower():
I tried typing .lower() after the ("Hello") message so that the bot understands both upper- and lowercase, but it doesn't work.
What should I do?
A: To compare two strings case insensitive, you can convert both sides to lowercase:
if message.text.lower() == "Hello".lower():
If you also want to handle non Ascii strings, you should consider the casefold function:
if message.text.casefold() == "Hello".casefold():
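A standalone sketch of the comparison (no telebot needed; the helper name is made up for illustration):

```python
def matches(text: str, keyword: str) -> bool:
    # casefold() also handles non-ASCII case pairs that lower() misses,
    # e.g. the German sharp s: "straße".casefold() == "strasse"
    return text.casefold() == keyword.casefold()
```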
| |
doc_23525557
|
As you can see below, the left_join produces NA values for the new columns Incep.Price and DayCounter. Why does this happen, and how can this be resolved?
Update: Thanks to @akrun, using left_join(Avanza.XML, checkpoint, by = c('Firm' = 'Firm')) solves the issue and the columns are joined correctly.
However, the warning message is still the same; could someone explain this behaviour? Why must one in this case explicitly specify the join columns, or otherwise produce NA values?
> head(Avanza.XML)
Firm Gain.Month.1 Last.Price Vol.Month.1
1 Stockwik Förvaltning 131.25 0.074 131264420
2 Novestra 37.14 7.200 605330
3 Bactiguard Holding 29.55 14.250 2815572
4 MSC Group B 20.87 3.070 671855
5 NeuroVive Pharmaceutical 18.07 9.800 3280944
6 Shelton Petroleum B 16.21 3.800 2135798
> head(checkpoint)
Firm Gain.Month.1 Last.Price Vol.Month.1 Incep.Price DayCounter
1 Stockwik Förvaltning 87.50 0.06 91270090 0.032000 2016-01-25
2 Novestra 38.10 7.25 604683 5.249819 2016-01-25
3 Bactiguard Holding 29.09 14.20 2784161 11.000077 2016-01-25
4 MSC Group B 27.56 3.24 657699 2.539981 2016-01-25
5 Shelton Petroleum B 19.27 3.90 1985305 3.269892 2016-01-25
6 NeuroVive Pharmaceutical 16.87 9.70 3220303 8.299820 2016-01-25
> head(left_join(Avanza.XML, checkpoint))
Joining by: c("Firm", "Gain.Month.1", "Last.Price", "Vol.Month.1")
Firm Gain.Month.1 Last.Price Vol.Month.1 Incep.Price DayCounter
1 Stockwik Förvaltning 131.25 0.074 131264420 NA <NA>
2 Novestra 37.14 7.200 605330 NA <NA>
3 Bactiguard Holding 29.55 14.250 2815572 NA <NA>
4 MSC Group B 20.87 3.070 671855 NA <NA>
5 NeuroVive Pharmaceutical 18.07 9.800 3280944 NA <NA>
6 Shelton Petroleum B 16.21 3.800 2135798 NA <NA>
Warning message:
In left_join_impl(x, y, by$x, by$y) :
joining factors with different levels, coercing to character vector
A: There are two problems.
*
*Not specifying the by argument in left_join: In this case, by default all the columns are used as the variables to join by. If we look at the columns - "Gain.Month.1", "Last.Price", "Vol.Month.1" - all numeric class and do not have a matching value in each of the datasets. So, it is better to join by "Firm"
left_join(Avanza.XML, checkpoint, by = "Firm")
*The "Firm" column class - factor: We get warning when there is difference in the levels of the factor column (if it is the variable that we join by). In order to remove the warning, we can either convert the "Firm" column in both datasets to character class
Avanza.XML$Firm <- as.character(Avanza.XML$Firm)
checkpoint$Firm <- as.character(checkpoint$Firm)
Or if we still want to keep the columns as factor, then change the levels in the "Firm" to include all the levels in both the datasets
lvls <- sort(unique(c(levels(Avanza.XML$Firm),
levels(checkpoint$Firm))))
Avanza.XML$Firm <- factor(Avanza.XML$Firm, levels=lvls)
checkpoint$Firm <- factor(checkpoint$Firm, levels=lvls)
and then do the left_join.
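The failure mode can be sketched in plain Python (an illustration of the join semantics, not dplyr itself): when every column participates in the join key, a row matches only if all of those values are equal, so the differing numeric columns produce missing values.

```python
def left_join(left, right, by):
    # Keep every left row; attach the right-hand column when all `by` keys match.
    out = []
    for l in left:
        match = next((r for r in right
                      if all(l[k] == r[k] for k in by)), None)
        row = dict(l)
        row["Incep.Price"] = match["Incep.Price"] if match else None  # None ~ NA
        out.append(row)
    return out

left = [{"Firm": "Novestra", "Gain.Month.1": 37.14}]
right = [{"Firm": "Novestra", "Gain.Month.1": 38.10, "Incep.Price": 5.25}]
```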
| |
doc_23525558
|
The URL follows the pattern: /v1/users/{uuid}
I came up with the following regex which matches /v1/users/ followed by a valid UUID: ^/v1/users/[0-9a-fA-F]{8}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{12}$
When testing it against /v1/users/29fe7f14-f04b-4607-9ece-9bd0525f7b38 on an online regex tester it succesfully matches.
However, it isn't matching inside the policy:
is_accessing_user_resource if {
re_match(
"^/v1/users/[0-9a-fA-F]{8}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{12}$",
input.path,
)
}
I'm testing it with the following input.json
{
"body": null,
"claims": {},
"method": "GET",
"path": "/v1/users/29fe7f14-f04b-4607-9ece-9bd0525f7b38"
}
Is there any adaptation needed when using regex in rego?
A: By reading the documentation regarding raw strings:
The other type of string declaration is a raw string declaration. These are made of characters surrounded by backticks (`), with the exception that raw strings may not contain backticks themselves. Raw strings are what they sound like: escape sequences are not interpreted, but instead taken as the literal text inside the backticks. For example, the raw string `hello\there` will be the text "hello\there", not "hello" and "here" separated by a tab. Raw strings are particularly useful when constructing regular expressions for matching, as it eliminates the need to double escape special characters."
(from the documentation)
A simple example is a regex to match a valid Rego variable. With a regular string, the regex is "[a-zA-Z_]\w*", but with raw strings, it becomes `[a-zA-Z_]\w*`.
In this case, my regex was failing to match because it contained unescaped backslashes.
It was solved by changing the quotation marks to backticks, like so:
re_match(
`^/v1/users/[0-9a-fA-F]{8}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{4}\b-[0-9a-fA-F]{12}$`,
input.path,
)
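The same pitfall exists in Python and makes a convenient sanity check (assuming Rego raw strings behave as the quoted documentation says): in a regular string \b is a backspace character, so the regex engine never sees a backslash; a raw string preserves it. The \b tokens from the question are omitted below since the literal hyphens already delimit each group.

```python
import re

# In a regular string, "\b" is the backspace character \x08;
# in a raw string, r"\b" stays as backslash + b for the regex engine.
uuid_path = re.compile(
    r"^/v1/users/[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}"
    r"-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$"
)
```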
| |
doc_23525559
|
I've looked at the documentation, but I still can't figure out:
*
*Whether I'm supposed to call ListView_SetItem instead of ListView_InsertItem to add an item, after calling ListView_SetItemCount.
*Why neither of the above approaches seems to accelerate adding a large number of items (around a few hundred) to a list view. :(
Any ideas?
A: It is the same idea as vector::capacity(). It is not going to make a noticeable improvement on a few hundred items.
| |
doc_23525560
|
A: The add_user script doesn't come with the default installation, but it is available as a Perl script.
Download this script to your script path in FreeSWITCH, and then you can create a new user with add_user 10001
Reference
A: FreeSWITCH doesn't have any built-in command for creating a user. But the task of creating a default user is very simple: you can simply copy a default XML file from the /usr/local/freeswitch/conf/directory/default directory, rename it, and change the configuration as per your requirement. That's it; now you are ready with the new user.
You can even write your own FreeSWITCH command that creates a new user.
| |
doc_23525561
|
Dajaxice.debater.upvote(
upvote_js,//Don't worry about this, it's just dealing with AJAX
{
'username' : 'bob',//bob is the user being upvoted by
}
)
You see the problem is the webpage is using the username directly. It is bad design, since the user can easily open a Chrome console, change the usernames, and fake a request that upvotes, say, alice instead of bob. I have investigated a couple of websites and see they are using a long string that looks like a hash to represent users. I am wondering how this is usually implemented. In more general form, I am looking for an algorithm that:
*
*Gives an unique id for every user
*Even if this id is given exposed to user or any third party, they will have no idea which user this id represents without using a long time or a lot of memory to compute(reverse the hash).
A: You can generate an encrypted form of the string (a hash, by definition, is not reversible).
The drawback to this is if someone has a list of all your users, they can then calculate the algorithm you used and defeat your encryption.
A commonly used reversible transformation is base64 (note that base64 is an encoding, not encryption, so it offers obfuscation only, not real secrecy):
>>> import base64
>>> secret = base64.b64encode('bob')
>>> secret
'Ym9i'
>>> print base64.b64decode('Ym9i')
bob
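If the goal is an opaque identifier that a third party cannot reverse, a keyed hash is a more suitable sketch than base64. The secret and helper name below are made-up examples:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # hypothetical key, kept off the client

def user_token(username: str) -> str:
    # Stable per-user token; without SECRET, recovering the username
    # requires brute-forcing candidate names.
    return hmac.new(SECRET, username.encode(), hashlib.sha256).hexdigest()
```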
A: Sounds more like this is a design issue, rather than a coding problem.
Even if you hashed the usernames somehow, there would always be a way to get the hash by finding an instance of that hash on a webpage somewhere.
You could implement a system such that each user may only upvote another user once, but there's nothing to stop someone from creating multiple accounts to bypass the limitation.
One common solution is to keep track of which IP addresses have upvoted a particular user, and disallow multiple upvotes for the same user from the same IP address.
| |
doc_23525562
|
I'm trying to compile a module via closure compiler like so:
java -jar compiler.jar --js=node_modules/object-id/index.js --js=node_modules/each-csv/index.js --js=node_modules/matches-selector/index.js --js=index.js --process_common_js_modules --common_js_entry_module=index.js
All I get is an error:
ERROR - required entry point "module$each_csv" never provided
ERROR - required entry point "module$matches_selector" never provided
ERROR - required entry point "module$object_id" never provided
3 error(s), 0 warning(s)
What is the proper way to use processing of commonjs modules in closure compiler?
A: The best solution I've come up with is a non-wrapping CommonJS merger - UncommonJS.
It merges all the dependable modules in the proper order from the passed entry file into a single one-scoped bundle, resolving global vars interference, replacing module.exports and require declarations and removing duplicates, so that result is ~27% better compressable via closure compiler than a browserify bundle.
| |
doc_23525563
|
I've tried using the code below, but it didn't work.
printw("Input the name: ");
echo();
getstr(name);
noecho();
printw(" name | score \n");
printw("----------------------------\n");
for(int i=0;i<rlen;i++){
if(cur->name==name){
flag=1;
printw("%-16s| %d\n", cur->name, cur->score);
}
cur=cur->next;
}
if(flag==0) printw("\nsearch failure: no name in the list\n");
break;
For example, if there was a player whose name is aaaa and he scored 1000, if I input aaaa, It should print
aaaa | 1000
But instead it prints the search failure message.
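A likely cause, sketched here as an assumption: cur->name == name compares two char pointers (their addresses), which differ even when the characters match, so the branch never runs. strcmp compares the contents; replacing the == test in the loop with a helper like this should fix it:

```c
#include <string.h>

/* Returns nonzero when the two C strings hold the same characters.
 * Use names_equal(cur->name, name) instead of cur->name == name. */
int names_equal(const char *a, const char *b) {
    return strcmp(a, b) == 0;
}
```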
| |
doc_23525564
|
I accidentally double-clicked the textbox in my application, and a textbox event handler function was created. I deleted that function because I don't need it, but when I run my application there was a:
Error 1 'Bond_Yield_Calculator.BaseForm' does not contain a definition for 'textBox5_TextChanged' and no extension method 'textBox5_TextChanged' accepting a first argument of type 'Bond_Yield_Calculator.BaseForm' could be found (are you missing a using directive or an assembly reference?) C:\Users\Alex Chan\documents\visual studio 2010\Projects\Bond Yield Calculator\Bond Yield Calculator\BaseForm.Designer.cs 139 77 Bond Yield Calculator
error. How do I fix this?
A: Double-click on the error and it will jump you to that location in code. Delete the line of code that created the event handler.
If this is winforms, it will look something like
Bond_Yield_Calculator.BaseForm.TextBoxChanged += new EventHandler(textBox5_TextChanged);
It'll probably be in a file named
xxxxx.designer.cs (Where xxxxxx is the name of a form or control in your project.)
Alternatively, if it's ASP.NET you will see inside the tag something that looks like
OnTextChanged="textBox5_TextChanged"
Remove that.
A: The method should have the following signature:
protected void textBox5_TextChanged(object sender, EventArgs e) {
}
See the definition on MSDN.
A: Deleting the event handler in the code behind does not remove the event completely. You'll have to remove it from the designer.
Open up the properties window in Visual Studio (CTRL+W,P).
Click the textbox in the designer (only once!), then click the lightning bolt button in the properties window. Find the text changed event listed, and delete the text in the box.
| |
doc_23525565
|
try {
retrieveID();
String sqlStm = "INSERT INTO Job (employerID,title,description,type,salary,benefits,vacancies,closing,requirement,placement,applyTo,status,posted,location) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?)";
pst = conn.prepareStatement(sqlStm);
pst.setInt(1,id);
pst.setString(2,txtTitle.getText());
pst.setString(3,areaDescription.getText());
pst.setString(4,comboType.getSelectedItem().toString());
pst.setString(5,txtSalary.getText());
pst.setString(6,areaBenefits.getText());
pst.setString(7,txtVac.getText());
Date close;
close = txtDate.getDate();
pst.setString(8,sdf.format(close));
pst.setString(9,areaReq.getText());
pst.setString(10,comboPlace.getSelectedItem().toString());
pst.setString(11,txtWeb.getText());
pst.setString(12,comboStatus.getSelectedItem().toString());
Date now = new Date();
pst.setString(13,sdf.format(now));
pst.setString(14,txtLoc.getText());
pst.executeUpdate();
JOptionPane.showMessageDialog(null,"You have successfully added a job");
//empty all JTextfields
//switch to another
I am trying to empty the set of JTextFields in the JPanel, but instead of emptying them one by one, can I just refresh the panel? If so, how do you do this? I tried repaint() and revalidate(), but these don't work; perhaps I am wrong here.
I would also like to switch the JTabbedPane to another pane, but this doesn't work when I try it with this...
JTabbedPane sourceTabbedPane = (JTabbedPane) evt.getSource();
sourceTabbedPane.setSelectedIndex(0);
Can someone show example code for how to do this?
A: You could loop through all components that are contained in the panel, and if they are text components, clear their value. The code would be something like this:
private void clearTextFields(Container container)
{
int count = container.getComponentCount();
for (int i = 0; i < count; i++)
{
Component component = container.getComponent(i);
if (component instanceof Container) {
clearTextFields((Container) component);
}
else if (component instanceof JTextComponent) {
((JTextComponent) component).setText("");
}
}
}
This method works recursively and takes care of the case when your panel contains another panel which contains the text fields.
A: Keep all your JTextFields in a container, and iterate over that container to empty them.
So, somewhere:
ArrayList<JTextField> textFields = new ArrayList<JTextField>();
After all fields are actually created (with JTextField txtTitle = new JTextField() or similar):
textFields.add(txtTitle);
textFields.add(areaDescription);
// ... add all others here
And finally, when you need to clear them all:
for (JTextField tf : textFields) {
tf.setText("");
}
| |
doc_23525566
|
link:
https://github.com/airbnb/react-native-maps
and I just run on terminal
npm install react-native-maps --save
react-native link
react-native run-ios
and ran it, but got an error on the terminal like this:
The following build commands failed:
CompileC /Users/preechawanaraksakul/Documents/jobpro/ios/build/Build/Intermediates/AirMaps.build/Debug-iphonesimulator/AirMaps.build/Objects-normal/x86_64/AIRMapCallout.o AirMaps/AIRMapCallout.m normal x86_64 objective-c com.apple.compilers.llvm.clang.1_0.compiler
(1 failure)
I use react native version 0.38.0
because I need to use library motion manager
When I upgrade to > 0.40 and release on an iPhone, I get an error like this:
IOS - React Native - Unhandled JS Exception: SyntaxError
this is my screenshot from simulator, so it has only box(no display google map)
and this way can solve my problem to run motion sensor
But someone told me to upgrade to > 0.40 to use react-native-maps.
I need help solving this problem without upgrading to > 0.40.
Thank you for your support.
A: Actually, as per Airbnb documentation you should not use react-native link command while using react-native-maps.
Please refer to: this site
If you have already linked, kindly follow these instructions
| |
doc_23525567
|
I have several thousand extracted html files and want to extract the content between the p tags in all files.
Here is the relevant code:
for line in text:
soup = bs(line, 'html.parser')
autor = soup.find_all('p').text
s = autor.replace('\\n', '')
l.append(s)
I want to use find_all().text to extract the text between all p tags, but I am getting this error:
ResultSet object has no attribute 'text'. You're probably treating a list of items like a single item. Did you call find_all() when you meant to call find()?
If I am using just find().text
autor = soup.find('p').text
I just get the first p tag of every file.
Can somebody help?
A: Text naturally separated by new lines:
paragraph_text = '\n\n'.join(p.text for p in soup.find_all('p'))
Or, e.g., if you want to connect the paragraphs by spaces:
paragraph_text = ' '.join(p.text for p in soup.find_all('p'))
A list of all the texts in <p>:
paragraphs = [p.text for p in soup.find_all('p')]
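The same collect-then-join pattern can be checked without BeautifulSoup, using only the standard library's html.parser (the class and function names below are made up for illustration):

```python
from html.parser import HTMLParser

class PTextCollector(HTMLParser):
    """Collects the text content of every <p> element."""
    def __init__(self):
        super().__init__()
        self._in_p = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self._in_p = True
            self.paragraphs.append("")

    def handle_endtag(self, tag):
        if tag == "p":
            self._in_p = False

    def handle_data(self, data):
        if self._in_p:
            self.paragraphs[-1] += data

def p_texts(html):
    collector = PTextCollector()
    collector.feed(html)
    return collector.paragraphs
```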
| |
doc_23525568
|
This is my attempt which gives a compile error cannot convert lambda expression to type <Main.Globals.Node> because it is not a delegate type:
// get IV value where Node BookID=4
var val = Globals.BookLL.Where(B => B.BookID == 4).Select(B => B.IV).Single();
// can update first node using this method
Globals.BookLL.First.Value.IV = 999;
// can update IV by traversing list
LinkedListNode<Globals.Node> Current = Globals.BookLL.First;
while (Current != null)
{
if(Current.Value.BookID==4)
{
Current.Value.IV = 444;
}
Current = Current.Next;
}
// how can you update IV using linq?
Globals.BookLL.Find(B => B.BookID == 4).Value.IV = 999; // cannot convert lambda expression to type <Main.Globals.Node> because it is not a delegate type
Thanks for any help.
A: If you use
Globals.Node val = Globals.BookLL.Single(B => B.BookID == 4);
to determine the item, you can use Find()
Globals.BookLL.Find(val).Value.IV = 999;
to change its value
A: Globals.BookLL.Single(b => b.BookId == 4).IV = 44;
You can use single if BookIds are unique.
A: Globals.BookLL.Find(B => B.BookID == 4)
LinkedList<T>.Find expects a Globals.Node value, not a predicate, which is why the lambda does not compile. Use LINQ's First() instead; it enumerates the stored values directly, so no .Value is needed:
Globals.BookLL.First(B => B.BookID == 4).IV = 999;
| |
doc_23525569
|
Where do you recommend to store the data?
A: You can store it in KeyChain.
In Mac OS X, keychain files are stored in ~/Library/Keychains/,
/Library/Keychains/, and /Network/Library/Keychains/, and the Keychain
Access GUI application is located in the Utilities folder in the
Applications folder. It is free, open source software released under
the terms of the APSL. The command line equivalent of Keychain Access
is /usr/bin/security. The keychain file(s) stores a variety of data
fields including a title, URL, notes and password. Only the passwords
and Secure Notes are encrypted, with Triple DES.
The default keychain file is the login keychain, typically unlocked on
login by the user's login password, although the password for this
keychain can instead be different from a user's login password, adding
security at the expense of some convenience.[5] The Keychain Access
application does not permit setting an empty password on a keychain.
The keychain may be set to be automatically "locked" if the computer
has been idle for a time,[6] and can be locked manually from the
Keychain Access application. When locked, the password has to be
re-entered next time the keychain is accessed, to unlock it.
Overwriting the file in ~/Library/Keychains/ with a new one (e.g. as
part of a restore operation) also causes the keychain to lock and a
password is required at next access.
Keychain Access is a Mac OS X application that allows the user to
access the Keychain and configure its contents, including passwords
for websites, web forms, FTP servers, SSH accounts, network shares,
wireless networks, groupware applications, encrypted disk images, etc.
It unlocks, locks, and displays passwords saved by the system which
are dynamically linked to the user's login password, as well as
managing root certificates, keys, and secure notes. Its graphical user
interface displays various keychains, with there usually being at
least two; the login keychain and the system keychain. It also
includes the Keychain first aid utility that can repair problems with
Keychains. Various events can cause problems with Keychains, and
sometimes the only solution to solving a problem is to delete the
Keychain, which also deletes any passwords stored in the Keychain, and
create a new one. It is usually found in the Utilities folder in under
Applications in OS X. As an ancillary application to OS X, it is
subject to updates via Software Update and thus should not be moved
out of the Utilities folder. There is also an included command-line
tool to access the keychain, called "security".
Extract from Wikipedia.
Note: With root password, you can view nearly every saved password on the computer.
| |
doc_23525570
|
I have an understanding of what goes into measuring running times of code. It's run multiple times to get an average running time to account for differences per run and also to get times when the cache was utilized better.
In an attempt to measure running times for someone, I came up with this code after multiple revisions.
In the end I ended up with this code which yielded the results I intended to capture without giving misleading numbers:
// implementation C
static void Test<T>(string testName, Func<T> test, int iterations = 1000000)
{
Console.WriteLine(testName);
Console.WriteLine("Iterations: {0}", iterations);
var results = Enumerable.Repeat(0, iterations).Select(i => new System.Diagnostics.Stopwatch()).ToList();
var timer = System.Diagnostics.Stopwatch.StartNew();
for (int i = 0; i < results.Count; i++)
{
results[i].Start();
test();
results[i].Stop();
}
timer.Stop();
Console.WriteLine("Time(ms): {0,3}/{1,10}/{2,8} ({3,10})", results.Min(t => t.ElapsedMilliseconds), results.Average(t => t.ElapsedMilliseconds), results.Max(t => t.ElapsedMilliseconds), timer.ElapsedMilliseconds);
Console.WriteLine("Ticks: {0,3}/{1,10}/{2,8} ({3,10})", results.Min(t => t.ElapsedTicks), results.Average(t => t.ElapsedTicks), results.Max(t => t.ElapsedTicks), timer.ElapsedTicks);
Console.WriteLine();
}
Of all the code I've seen that measures running times, they were usually in the form:
// approach 1 pseudocode
start timer;
loop N times:
run testing code (directly or via function);
stop timer;
report results;
This was good in my mind since with the numbers, I have the total running time and can easily work out the average running time and would have good cache locality.
But one set of values that I thought were important to have were minimum and maximum iteration running time. This could not be calculated using the above form. So when I wrote my testing code, I wrote them in this form:
// approach 2 pseudocode
loop N times:
start timer;
run testing code (directly or via function);
stop timer;
store results;
report results;
This is good because I could then find the minimum and maximum as well as average times, the numbers I was interested in. I then realized that this could potentially skew results, since the cache could be affected because the loop wasn't very tight, giving me less than optimal results.
The way I wrote the test code (using LINQ) added additional overheads which I knew about but ignored since I was just measuring the running code, not the overheads. Here was my first version:
// implementation A
static void Test<T>(string testName, Func<T> test, int iterations = 1000000)
{
Console.WriteLine(testName);
var results = Enumerable.Repeat(0, iterations).Select(i =>
{
var timer = System.Diagnostics.Stopwatch.StartNew();
test();
timer.Stop();
return timer;
}).ToList();
Console.WriteLine("Time(ms): {0,3}/{1,10}/{2,8}", results.Min(t => t.ElapsedMilliseconds), results.Average(t => t.ElapsedMilliseconds), results.Max(t => t.ElapsedMilliseconds));
Console.WriteLine("Ticks: {0,3}/{1,10}/{2,8}", results.Min(t => t.ElapsedTicks), results.Average(t => t.ElapsedTicks), results.Max(t => t.ElapsedTicks));
Console.WriteLine();
}
Here I thought this was fine since I'm only measuring the times it took to run the test function. The overheads associated with LINQ are not included in the running times. To reduce the overhead of creating timer objects within the loop, I made the modification.
// implementation B
static void Test<T>(string testName, Func<T> test, int iterations = 1000000)
{
Console.WriteLine(testName);
Console.WriteLine("Iterations: {0}", iterations);
var results = Enumerable.Repeat(0, iterations).Select(i => new System.Diagnostics.Stopwatch()).ToList();
results.ForEach(t =>
{
t.Start();
test();
t.Stop();
});
Console.WriteLine("Time(ms): {0,3}/{1,10}/{2,8} ({3,10})", results.Min(t => t.ElapsedMilliseconds), results.Average(t => t.ElapsedMilliseconds), results.Max(t => t.ElapsedMilliseconds), results.Sum(t => t.ElapsedMilliseconds));
Console.WriteLine("Ticks: {0,3}/{1,10}/{2,8} ({3,10})", results.Min(t => t.ElapsedTicks), results.Average(t => t.ElapsedTicks), results.Max(t => t.ElapsedTicks), results.Sum(t => t.ElapsedTicks));
Console.WriteLine();
}
This improved overall times but caused a minor problem. I added the total running time in the report by adding each iteration's times but gave misleading numbers since the times were short and didn't reflect the actual running time (which was usually much longer). I needed to measure the time of the entire loop now so I moved away from LINQ and ended up with the code I have now at the top. This hybrid gets the times I think are important with minimal overhead AFAIK. (starting and stopping the timer just queries the high resolution timer) Also any context switching occurring is unimportant to me as it's part of normal execution anyway.
At one point, I forced the thread to yield within the loop to make sure it is given the chance to do so at a convenient time (if the test code is CPU bound and doesn't block at all). I'm not too concerned about other running processes changing the cache for the worse, since I would be running these tests alone anyway. However, I came to the conclusion that for this particular case it was unnecessary, though I might incorporate it in the final version if it proves beneficial in general, perhaps as an alternate algorithm for certain code.
Now my questions:
*
*Did I make some right choices? Some wrong ones?
*Did I make wrong assumptions about the goals in my thought process?
*Would the minimum or maximum running times really be useful information to have or is it a lost cause?
*If so, which approach would be better in general? The time running in a loop (approach 1)? Or the time running just the code in question (approach 2)?
*Would my hybrid approach be ok to use in general?
*Should I yield (for reasons explained in the last paragraph) or is that more harm to the times than necessary?
*Is there a more preferred way to do this that I did not mention?
Just to be clear, I'm not looking for an all-purpose, use anywhere, accurate timer. I just want to know of an algorithm that I should use when I want a quick to implement, reasonably accurate timer to measure code when a library or other 3rd party tools is not available.
I'm inclined to write all my test code in this form should there be no objections:
// final implementation
static void Test<T>(string testName, Func<T> test, int iterations = 1000000)
{
// print header
var results = Enumerable.Repeat(0, iterations).Select(i => new System.Diagnostics.Stopwatch()).ToList();
for (int i = 0; i < 100; i++) // warm up the cache
{
test();
}
var timer = System.Diagnostics.Stopwatch.StartNew(); // time whole process
for (int i = 0; i < results.Count; i++)
{
results[i].Start(); // time individual process
test();
results[i].Stop();
}
timer.Stop();
// report results
}
For the bounty, I would ideally like to have all the above questions answered. I'm hoping for a good explanation of whether the thoughts that influenced the code here are well justified (and possibly thoughts on how to improve it if suboptimal), or, if I was wrong on a point, an explanation of why it's wrong and/or unnecessary and, if applicable, a better alternative.
To summarize the important questions and my thoughts for the decisions made:
*
*Is getting the running time of each individual iteration generally a good thing to have?
With the times for each individual iteration, I can calculate additional statistical information like the minimum and maximum running times as well as the standard deviation. So I can see whether factors such as caching or other unknowns may be skewing the results. This led to my "hybrid" version.
*Is having a small loop of runs before the actual timing starts good too?
From my response to Sam Saffron's thought on the loop, this is to increase the likelihood that constantly accessed memory will be cached. That way I'm measuring the times only for when everything is cached, rather than some of the cases where memory access isn't cached.
*Would a forced Thread.Yield() within the loop help or hurt the timings of CPU bound test cases?
If the process was CPU bound, the OS scheduler would lower the priority of this task potentially increasing times due to lack of time on the CPU. If it is not CPU bound, I would omit the yielding.
Based on the answers here, I'll be writing my test functions using the final implementation without the individual timings for the general case. If I would like to have other statistical data, I would reintroduce it back into the test function as well as apply the other things mentioned here.
A: My first thought is that a loop as simple as
for (int i = 0; i < x; i++)
{
timer.Start();
test();
timer.Stop();
}
is kinda silly compared to:
timer.Start();
for (int i = 0; i < x; i++)
test();
timer.Stop();
the reason is that (1) this kind of "for" loop has a very tiny overhead, so small that it's not worth worrying about even if test() only takes a microsecond, and (2) timer.Start() and timer.Stop() have their own overhead, which is likely to affect the results more than the for loop. That said, I took a peek at Stopwatch in Reflector and noticed that Start() and Stop() are fairly cheap (calling Elapsed* properties is likely more expensive, considering the math involved.)
Make sure the IsHighResolution property of Stopwatch is true. If it's false, Stopwatch uses DateTime.UtcNow, which I believe is only updated every 15-16 ms.
1. Is getting the running time of each individual iteration generally a good thing to have?
It is not usually necessary to measure the runtime of each individual iteration, but it is useful to find out how much the performance varies between different iterations. To this end, you can compute the min/max (or k outliers) and standard deviation. Only the "median" statistic requires you to record every iteration.
If you find that the standard deviation is large, you might then have reason to record every iteration, in order to explore why the time keeps changing.
Some people have written small frameworks to help you do performance benchmarks. For example, CodeTimers. If you are testing something that is so tiny and simple that the overhead of the benchmark library matters, consider running the operation in a for-loop inside the lambda that the benchmark library calls. If the operation is so tiny that the overhead of a for-loop matters (e.g. measuring the speed of multiplication), then use manual loop unrolling. But if you use loop unrolling, remember that most real-world apps don't use manual loop unrolling, so your benchmark results may overstate the real-world performance.
For myself I wrote a little class for gathering min, max, mean, and standard deviation, which could be used for benchmarks or other statistics:
// A lightweight class to help you compute the minimum, maximum, average
// and standard deviation of a set of values. Call Clear(), then Add(each
// value); you can compute the average and standard deviation at any time by
// calling Avg() and StdDeviation().
class Statistic
{
public double Min;
public double Max;
public double Count;
public double SumTotal;
public double SumOfSquares;
public void Clear()
{
SumOfSquares = Min = Max = Count = SumTotal = 0;
}
public void Add(double nextValue)
{
Debug.Assert(!double.IsNaN(nextValue));
if (Count > 0)
{
if (Min > nextValue)
Min = nextValue;
if (Max < nextValue)
Max = nextValue;
SumTotal += nextValue;
SumOfSquares += nextValue * nextValue;
Count++;
}
else
{
Min = Max = SumTotal = nextValue;
SumOfSquares = nextValue * nextValue;
Count = 1;
}
}
public double Avg()
{
return SumTotal / Count;
}
public double Variance()
{
return (SumOfSquares * Count - SumTotal * SumTotal) / (Count * (Count - 1));
}
public double StdDeviation()
{
return Math.Sqrt(Variance());
}
public Statistic Clone()
{
return (Statistic)MemberwiseClone();
}
};
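The same bookkeeping is straightforward to port to other languages; here is a Python sketch of the class (names mine), which also makes it easy to sanity-check the sum-of-squares variance formula against a library implementation:

```python
import math

class Statistic:
    """Running min/max/mean/stddev, mirroring the C# class above."""
    def __init__(self):
        self.clear()

    def clear(self):
        self.min = self.max = self.count = self.sum_total = self.sum_of_squares = 0.0

    def add(self, v):
        if self.count > 0:
            self.min = min(self.min, v)
            self.max = max(self.max, v)
            self.sum_total += v
            self.sum_of_squares += v * v
            self.count += 1
        else:
            self.min = self.max = self.sum_total = v
            self.sum_of_squares = v * v
            self.count = 1

    def avg(self):
        return self.sum_total / self.count

    def variance(self):
        # Sample variance via the sum-of-squares identity, as in the C# code.
        return (self.sum_of_squares * self.count - self.sum_total ** 2) / (self.count * (self.count - 1))

    def std_deviation(self):
        return math.sqrt(self.variance())
```

Note that the sum-of-squares identity can lose precision for large values with small variance; for benchmark timings it is usually fine.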
2. Is having a small loop of runs before the actual timing starts good too?
Which iterations you measure depends on whether you care most about startup time, steady-state time or total runtime. In general, it may be useful to record one or more runs separately as "startup" runs. You can expect the first iteration (and sometimes more than one) to run more slowly. As an extreme example, my GoInterfaces library consistently takes about 140 milliseconds to produce its first output, then it does 9 more in about 15 ms.
Depending on what the benchmark measures, you may find that if you run the benchmark right after rebooting, the first iteration (or first few iterations) will run very slowly. Then, if you run the benchmark a second time, the first iteration will be faster.
3. Would a forced Thread.Yield() within the loop help or hurt the timings of CPU bound test cases?
I'm not sure. It may clear the processor caches (L1, L2, TLB), which would not only slow down your benchmark overall but lower the measured speeds. Your results will be more "artificial", not reflecting as well what you would get in the real world. Perhaps a better approach is to avoid running other tasks at the same time as your benchmark.
A: Regardless of the mechanism for timing your function (and the answers here seems fine) there is a very simple trick to eradicate the overhead of the benchmarking-code itself, i.e. the overhead of the loop, timer-readings, and method-call:
Simply call your benchmarking code with an empty Func<T> first, i.e.
void EmptyFunc<T>() {}
This will give you a baseline of the timing-overhead, which you can essentially subtract from the latter measurements of your actual benchmarked function.
By "essentially" I mean that there is always room for variation when timing some code, due to garbage collection and thread and process scheduling. A pragmatic approach would be, e.g., to benchmark the empty function, find the average overhead (total time divided by iterations) and then subtract that number from each timing result of the real benchmarked function, but don't let it go below 0, which wouldn't make sense.
You will, of course, have to re-arrange your benchmarking code a bit. Ideally you'll want to use the exact same code to benchmark the empty function and real benchmarked function, so I suggest you move the timing-loop into another function or at least keep the two loops completely alike. In summary
*
*benchmark the empty function
*calculate the average overhead from the result
*benchmark the real test-function
*subtract the average overhead from those test results
*you're done
By doing this the actual timing mechanism suddenly becomes a lot less important.
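The procedure is language-agnostic; a sketch in Python (function names are mine, not from the original code):

```python
import time

def bench(func, iterations=100_000):
    """Time `iterations` calls of func; return average seconds per call."""
    start = time.perf_counter()
    for _ in range(iterations):
        func()
    return (time.perf_counter() - start) / iterations

def bench_corrected(func, iterations=100_000):
    """Benchmark func and subtract the measured empty-call overhead, clamped at zero."""
    overhead = bench(lambda: None, iterations)  # baseline: loop + call overhead only
    raw = bench(func, iterations)               # same loop, real work
    return max(raw - overhead, 0.0)             # never report a negative time
```

The key point is that both measurements go through the exact same loop, so the subtraction cancels the harness cost rather than the workload.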
A: I think your first code sample seems like the best approach.
Your first code sample is small, clean and simple and doesn't use any major abstractions during the test loop which may introduce hidden overhead.
Use of the Stopwatch class is a good thing as it simplifies the code one normally has to write to get high-resolution timings.
One thing you might consider is providing the option to iterate the test for a smaller number of times untimed before entering the timing loop to warm up any caches, buffers, connections, handles, sockets, threadpool threads etc. that the test routine may exercise.
HTH.
A: I tend to agree with @Sam Saffron about using one Stopwatch rather than one per iteration. In your example you are performing 1000000 iterations by default. I don't know what the cost of creating a single Stopwatch is, but you are creating 1000000 of them. Conceivably, that in and of itself could affect your test results. I reworked your "final implementation" a little bit to allow the measurement of each iteration without creating 1000000 Stopwatches. Granted, since I am saving the result of each iteration, I am allocating 1000000 longs, but at first glance it seems like that would have less overall effect than allocating that many Stopwatches. I haven't compared my version to your version to see if mine would yield different results.
static void Test2<T>(string testName, Func<T> test, int iterations = 1000000)
{
long [] results = new long [iterations];
// print header
for (int i = 0; i < 100; i++) // warm up the cache
{
test();
}
var timer = System.Diagnostics.Stopwatch.StartNew(); // time whole process
long start;
for (int i = 0; i < results.Length; i++)
{
start = Stopwatch.GetTimestamp();
test();
results[i] = Stopwatch.GetTimestamp() - start;
}
timer.Stop();
double ticksPerMillisecond = Stopwatch.Frequency / 1000.0;
Console.WriteLine("Time(ms): {0,3}/{1,10}/{2,8} ({3,10})", results.Min(t => t / ticksPerMillisecond), results.Average(t => t / ticksPerMillisecond), results.Max(t => t / ticksPerMillisecond), results.Sum(t => t / ticksPerMillisecond));
Console.WriteLine("Ticks: {0,3}/{1,10}/{2,8} ({3,10})", results.Min(), results.Average(), results.Max(), results.Sum());
Console.WriteLine();
}
I am using the Stopwatch's static GetTimestamp method twice in each iteration. The delta between will be the amount of time spent in the iteration. Using Stopwatch.Frequency, we can convert the delta values to milliseconds.
Using Timestamp and Frequency to calculate performance is not necessarily as clear as just using a Stopwatch instance directly. But, using a different stopwatch for each iteration is probably not as clear as using a single stopwatch to measure the whole thing.
I don't know that my idea is any better or any worse than yours, but it is slightly different ;-)
I also agree about the warmup loop. Depending on what your test is doing, there could be some fixed startup costs that you don't want to affect the overall results. The startup loop should eliminate that.
There is probably a point at which keeping each individual timing result is counterproductive due to the cost of storage necessary to hold the whole array of values (or timers). For less memory, but more processing time, you could simply sum the deltas, computing the min and max as you go. That has the potential of throwing off your results, but if you are primarily concerned with the statistics generated based on the individual iteration measurements, then you can just do the min and max calculation outside of the time delta check:
static void Test2<T>(string testName, Func<T> test, int iterations = 1000000)
{
//long [] results = new long [iterations];
long min = long.MaxValue;
long max = long.MinValue;
// print header
for (int i = 0; i < 100; i++) // warm up the cache
{
test();
}
var timer = System.Diagnostics.Stopwatch.StartNew(); // time whole process
long start;
long delta;
long sum = 0;
for (int i = 0; i < iterations; i++)
{
start = Stopwatch.GetTimestamp();
test();
delta = Stopwatch.GetTimestamp() - start;
if (delta < min) min = delta;
if (delta > max) max = delta;
sum += delta;
}
timer.Stop();
double ticksPerMillisecond = Stopwatch.Frequency / 1000.0;
Console.WriteLine("Time(ms): {0,3}/{1,10}/{2,8} ({3,10})", min / ticksPerMillisecond, sum / ticksPerMillisecond / iterations, max / ticksPerMillisecond, sum);
Console.WriteLine("Ticks: {0,3}/{1,10}/{2,8} ({3,10})", min, sum / iterations, max, sum);
Console.WriteLine();
}
Looks pretty old school without the Linq operations, but it still gets the job done.
A: The logic in Approach 2 feels 'righter' to me, but I'm just a CS student.
I came across this link that you might find of interest:
http://www.yoda.arachsys.com/csharp/benchmark.html
A: Depending on the running time of the code you're testing, it's quite difficult to measure the individual runs. If the runtime of the code you're testing is multiple seconds, your approach of timing the specific run will most likely not be a problem. If it's in the vicinity of milliseconds, your results will probably vary too much. If you e.g. have a context switch or a read from the swap file at the wrong moment, the runtime of that run will be disproportionate to the average runtime.
A: I had a similar question here.
I much prefer the concept of using a single stopwatch, especially if you are micro-benchmarking. Your code is not accounting for the GC, which can affect performance.
I think forcing a GC collection is pretty important prior to running test runs; also, I am not sure what the point is of 100 warmup runs.
A: I would lean toward the last, but I'd consider whether the overhead of starting and stopping a timer could be greater than that of looping itself.
One thing to consider though, is whether the effect of CPU cache misses is actually a fair thing to try to counter?
Taking advantage of CPU caches is something where one approach may beat another, but in real world cases there might be a cache-miss with each call so this advantage becomes inconsequential. In this case the approach that made less good use of the cache could become that which has better real-world performance.
An array-based or singly-linked-list-based queue would be an example; the former almost always having greater performance when cache-lines don't get refilled in between calls, but suffering on resize-operations more than the latter. Hence the latter can win in real-world cases (all the more so as they are easier to write in a lock-free form) even though they will almost always lose in the rapid iterations of timing tests.
For this reason it can also be worth trying some iterations with something to actually force the cache to be flushed. Can't think what the best way to do that would be right now, so I might come back and add to this if I do.
| |
doc_23525571
|
Here is an image of the site.
But I don't know how to show the country you click on in the scrollbar.
Here the app.js code:
import React, { Component } from 'react';
import './App.css';
import NavBar from '../Components/NavBar';
import SideBar from './SideBar';
import CountryList from '../Components/SideBarComponents/CountryList';
import Scroll from '../Components/SideBarComponents/Scroll';
import Main from './Main';
import SearchCountry from '../Components/MainComponents/SearchCountry';
import SearchedCountry from '../Components/MainComponents/SearchedCountry';
import Datas from '../Components/MainComponents/Datas';
class App extends Component {
constructor() {
super();
this.state = {
nations: [],
searchField: '',
button: false
}
}
onSearchChange = (event) => {
this.setState({searchField: event.target.value});
console.log(this.state.searchField)
}
onClickChange = () => {
this.setState(prevsState => ({
button: true
}))
}
render() {
const {nations, searchField, button, searchMemory} = this.state;
const searchedNation = nations.filter(nation => {
if(button) {
return nation.name.toLowerCase().includes(searchField.toLowerCase())
}
});
return (
<div>
<div>
<NavBar/>
</div>
<Main>
<div className='backgr-img'>
<SearchCountry searchChange={this.onSearchChange} clickChange={this.onClickChange}/>
<SearchedCountry nations={searchedNation}/>
</div>
<Datas nations={searchedNation}/>
</Main>
<SideBar>
<Scroll className='scroll'>
<CountryList nations={nations} clickFunc/>
</Scroll>
</SideBar>
</div>
);
}
componentDidMount() {
fetch('https://restcountries.eu/rest/v2/all')
.then(response => response.json())
.then(x => this.setState({nations: x}));
}
componentDidUpdate() {
this.state.button = false;
}
}
export default App;
The countryList:
import React from 'react';
import Images from './Images';
const CountryList = ({nations, clickFunc}) => {
return (
<div className='container' style={{display: 'grid', gridTemplateColumns: 'repeat(auto-fill, minmax(115px, 3fr))'}}>
{
nations.map((country, i) => {
return (
<Images
key={country.numericCode}
name={country.name}
flag={country.flag}
clickChange={clickFunc}
/>
);
})
}
</div>
)
}
export default CountryList;
And the images.js:
import React from 'react';
import './images.css'
const Images = ({name, capital, region, population, flag, numericCode, clickChange}) => {
return (
<div className='hover bg-navy pa2 ma1 tc w10' onClick={clickChange = () => name}>
<img alt='flag' src={flag} />
<div>
<h6 className='ma0 white'>{name}</h6>
{capital}
{region}
{population}
{numericCode}
</div>
</div>
);
}
export default Images;
I had thought of using the onClick event on the single nation, which would return the name of the clicked nation. After that I would have put the name into searchField and set button to true in order to run the searchedNation filter.
Thanks in advance to anyone who answers.
A: To keep the actual structure, you can try using onClickChange in Images:
onClickChange = (newName = null) => {
if(newName) {
this.setState(prevsState => ({
searchField: newName
}))
}
// old code continues
this.setState(prevsState => ({
button: true
}))
}
then in onClick of Images you call:
onClick={() => {clickChange(name)}}
Or you could use React hooks as well (but this will require some refactoring), since you'll need to change a property from a parent component.
With that, you can use the useState hook to change the value from the parent component (from Images to App):
const [searchField, setSearchField] = useState('');
Then you pass setSearchField to images as props and changes the searchField value when Images is clicked:
onClick={() => {
clickChange()
setSearchField(name)
}}
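Framework aside, the click-to-search flow can be sketched as plain state logic (a sketch with hypothetical names, not the actual component):

```javascript
// Minimal stand-in for the App component's state and handlers.
function createApp() {
  const state = { searchField: '', button: false };

  // Mirrors the modified onClickChange: optionally accepts the clicked name.
  function onClickChange(newName = null) {
    if (newName) state.searchField = newName;
    state.button = true;
  }

  // Mirrors the searchedNation filter in render().
  function searchedNations(nations) {
    return state.button
      ? nations.filter(n => n.name.toLowerCase().includes(state.searchField.toLowerCase()))
      : [];
  }

  return { state, onClickChange, searchedNations };
}
```

Clicking a flag then corresponds to `onClickChange('Italy')`, after which the filter returns only the matching nation.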
| |
doc_23525572
|
A: Please see my answer here: How to find corners on a Image using OpenCv
As for step 7: cvApproxPoly returns a CvSeq*. This link explains it well. As shown here, a CvSeq struct contains a total member that contains the number of elements in the sequence. In the case of a true quadrilateral, total should equal 4. If the quadrilateral is a square (or rectangle), angles between adjacent vertices should be ~90 degrees.
A: findContours will give you the outline points
| |
doc_23525573
|
This code works fine:
// Launch parameters
$bash = './linux_app';
// Temporary file
$temp = sys::temp($bash);
// Update start.sh file
$ssh->setfile($temp, $tarif['install'].$server['uid'].'/start.sh', 0500);
// Launch line
$ssh->set('cd '.$tarif['install'].$server['uid'].';' // change to game server directory
.'rm *.pid;' // Deleting *.pid files
.'chown server'.$server['uid'].':1000 start.sh;' // Update owner of file start.sh
.'sudo -u server'.$server['uid'].' screen -dmS s_'.$server['uid'].' '.$taskset.' sh -c "./start.sh"'); // Start the app
But when trying to rewrite the script to work with windows application using wine, the script does not run "start.sh":
// Launch parameters
$bash = 'xvfb-run --auto-servernum wine ./windows_app.exe -batchmode ';
// Temporary file
$temp = sys::temp($bash);
// Update start.sh file
$ssh->setfile($temp, $tarif['install'].$server['uid'].'/start.sh', 0755);
///////library/games/legacy/////////////
$ssh->set('cd /servers/'.$server['uid'].';' // change to game server directory
.'rm *.pid;' // Deleting *.pid files
.'chown server'.$server['uid'].':1000 start.sh;' // Update owner of file start.sh
.'sudo -u server'.$server['uid'].' screen -dmS s_'.$server['uid'].' '.$taskset.' sh -c "./start.sh"'); // Start the app
Although if I manually write "Launch parameters" into the "start.sh" file and run it manually, everything works.
I tried running without screen.
I also tried launching from screen, substituting the "Launch parameters" string in place of "start.sh".
Please tell me, what am I doing wrong?
| |
doc_23525574
|
The default sort is alphabetical, but I'd like to sort the output in the same order as defined in the policy.
I saw some switches for the opa command-line tool, e.g. --profile-sort. So I tried to put profile-sort = "line" in the policy, but it didn't work.
Any ideas on how to make it sort by "line"?
A: The output from the policy evaluated by the Playground is an object, and those are almost always "unsorted" in that the order of the keys shouldn't matter. If you're using OPA in a more realistic context, you'd be free to sort the result however you want after receiving the decision from OPA.
Also note that Rego is not an imperative language. There are no guarantees that the order in which you've added rules to a policy will be the order in which OPA evaluates them.
| |
doc_23525575
|
Model:
information_request_issued_date = models.DateField(verbose_name='Date Information Request Issued', null=True, blank=True)
Form class:
class InformationRequestForm(forms.ModelForm):
class Meta:
model = DevelopmentAssessment
fields = ('information_request_issued_date',)
def __init__(self, *args, **kwargs):
super(InformationRequestForm, self).__init__(*args, **kwargs)
self.fields['information_request_issued_date'] = forms.DateField(('%d/%m/%Y',), widget=forms.DateTimeInput(format='%d/%m/%Y', attrs={'class': 'date'}))
If I don't have the self.fields declaration in the form class, the verbose_name works fine.
Any ideas?
A: Maybe because it's now a regular form field and thus doesn't have an attribute named verbose_name. Instead, it now has a label attribute.
Try this:
self.fields['information_request_issued_date'].label = 'Date Information Request Issued'
| |
doc_23525576
|
func makeIncrementer(forIncrement amount: Int) -> () -> Int {
var runningTotal = 0
func incrementer() -> Int {
print("something")
runningTotal += amount
return runningTotal
}
print("running total is: \(runningTotal)")
return incrementer
}
let incrementByTen = makeIncrementer(forIncrement: 10)
incrementByTen()
incrementByTen()
incrementByTen()
incrementByTen()
incrementByTen()
incrementByTen()
when I run the code I get:
running total is: 0
something
something
something
something
something
something
Why isn't the "running total is:" print executed every time I call incrementByTen()? - thanks
A: *
*Executing the line let incrementByTen = makeIncrementer(forIncrement: 10)
creates the function incrementer() by capturing runningTotal and amount, prints running total is: 0, and returns
func incrementer() -> Int {
print("something")
runningTotal += 10
return runningTotal
}
It does not execute incrementer().
After that the variable incrementByTen contains the entire incrementer() function
*Executing the line incrementByTen()
executes only the function incrementer(), prints something, does the math and returns the incremented value.
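The same capture behavior can be reproduced in JavaScript (a sketch in a different language, for illustration only), which makes the two phases easy to see: the outer function body, including its print, runs exactly once, while the returned closure runs on every call.

```javascript
// Runs ONCE per call to makeIncrementer: prints "running total is: 0"
// and returns the inner function, which captures runningTotal and amount.
function makeIncrementer(amount) {
  let runningTotal = 0;
  function incrementer() {
    runningTotal += amount; // uses the captured variables
    return runningTotal;
  }
  console.log(`running total is: ${runningTotal}`);
  return incrementer;
}

const incrementByTen = makeIncrementer(10); // the print happens here, once
```

Every subsequent `incrementByTen()` executes only the inner function's body.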
| |
doc_23525577
|
class Person {
constructor(firstName, lastName) {
this.firstName = firstName;
this.lastName = lastName;
}
get fullName() {
return [this.firstName, this.lastName].join(" ");
}
}
you can access the getter after instantiating a new object
const person = new Person("Jane", "Doe");
console.log(person.fullName); // "Jane Doe"
but this won't work after copying the object using the spread operator
const personCopy = { ...person };
console.log(personCopy.fullName); // undefined
I think this is somewhat confusing syntax.
A: The spread operator only
copies own enumerable properties from a provided object onto a new object.
While the property defined using the get syntax
will be defined on the prototype of the object.
A: The spread operator creates a new object using Object as the constructor. So, in your case, personCopy is not the instance of class Person and as a result of this, its __proto__ is not Person.prototype and therefore the getter won't work.
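If you do need a copy that keeps the getter, one option is to recreate the object with the same prototype (and property descriptors) instead of spreading:

```javascript
class Person {
  constructor(firstName, lastName) {
    this.firstName = firstName;
    this.lastName = lastName;
  }
  get fullName() {
    return [this.firstName, this.lastName].join(' ');
  }
}

const person = new Person('Jane', 'Doe');

// Spread copies only own enumerable properties onto a plain object,
// so the prototype getter is lost.
const plainCopy = { ...person };

// Recreating the object with the same prototype keeps the getter.
const protoCopy = Object.create(
  Object.getPrototypeOf(person),
  Object.getOwnPropertyDescriptors(person)
);
```

Here `protoCopy` is a real `Person` instance, so `protoCopy.fullName` works, while `plainCopy.fullName` is `undefined`.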
| |
doc_23525578
|
describe('Login screen tests', function () {
var ptor = protractor.getInstance();
beforeEach(function(){
console.log('In before Each method');
ptor.get('http://staging-machine/login/#/');
});
it('Blank Username & Password test', function(done) {
ptor.findElement(protractor.By.id("submit")).click();
var message = ptor.findElement(protractor.By.repeater('message in messages'));
message.then(function(message){
message.getText().then(function(text) {
console.log("Message shown:"+text);
expect(message.getText()).toContain('Username or Password can\'t be blank');
done();
});
});
});
});
I tried to google around and found that there might be some issue with Jasmine, but I am still unable to resolve this, because the error seems to be really unexpected. Any help would be appreciated.
A: Are you sure you're getting undefined is not a function on the line done()?
I think your problem is here: ptor.findElement(protractor.By.repeater('message in messages')), because by then you're clearly on an Angular page, so, regarding webdriver's findElement for a repeater: you should not be doing that.
Anyway, I would do 2 things:
*
*Upgrade Protractor to latest
*Rewrite the whole test like below since calling done() here is not required at all.
Rewrite:
describe('Login screen tests', function () {
// Page Objects. TODO: Extract to separate module file.
var submitBtnElm = $('#submit');
var messagesRepElms = element.all(by.repeater('message in messages'));
describe('Blank Username & Password test', function() {
// Moved login get out of beforeEach since you need to get it once
it('Opens an Angular login page', function() {
browser.get('http://staging-machine/login/#/');
});
it('Clicks submit btn without entering required fields', function() {
submitBtnElm.click();
});
it('Should trigger validation errors', function() {
expect(messagesRepElms.first().isPresent()).toBeTruthy();
expect(messagesRepElms.first().getText()).
toContain('Username or Password can\'t be blank');
});
});
});
| |
doc_23525579
|
ASPNETCOMPILER : error ASPCONFIG: Could not load file or assembly 'Duke.Owin.VkontakteMiddleware, Version=1.2.0.0, Culture=neutral, PublicKeyToken=null' or one of its dependencies. The parameter is incorrect. (Exception from HRESULT: 0x80070057 (E_INVALIDARG))
This error does not appear when I build the solution in Visual Studio (2013). Trying to fix the problem, I've already done the following:
*
*All existing references in the projects of my solution are configured to Copy local = true
*Before each build I clear (and remove) all bin and obj folders of my projects.
*I cleared Temporary Asp .Net files multiple times and before each build.
*All assemblies, which compiler says it can't find, are located in packages folder.
*Tried to restore NuGet packages before each build:
*
*Downloaded NuGet
*Enabled NuGet package restore by adding EnableNuGetPackageRestore=true environment variable
*Added <Exec Command='$(ToolsDirectory)\nuget restore "$(Solution)"'/> command to my build project.
*Tried manually define references in build project:
<ItemGroup><Reference Include="Duke.Owin.VkontakteMiddleware"><HintPath>$(PackagesFolder)\Duke.Owin.VkontakteMiddleware.1.2.0.0\lib\net45\Duke.Owin.VkontakteMiddleware.dll</HintPath></Reference></ItemGroup>
*As it is suggested here, I manually edited the project file that is failing to build and added <Private>True</Private> in each Reference node that isn't found by compiler.
*Tried manually copy missed dlls: <Copy SourceFiles="$(PackagesFolder)\Duke.Owin.VkontakteMiddleware.1.2.0.0\lib\net45\Duke.Owin.VkontakteMiddleware.dll" DestinationFolder="$(OutputDirectory)\bin" SkipUnchangedFiles="true" />
But, after all of the above, still no luck. Are there any more solutions?
UPDATE:
If I add the assembly to the GAC it actually helps, BUT some of my assemblies are packages installed from NuGet and are not strongly named, so I can't add all of them to the GAC. And, of course, registering all assemblies in the GAC is not a desirable solution.
A: See Common Issues With Automatic Package Restore, which reads:
For a custom build .proj, a pre build <Exec> action to restore nuget packages is required. This is not added automatically.
Hope that helps.
| |
doc_23525580
|
*
*https://xxx.domain1.com (the front-end)
*https://yyy.domain1.com (the back-end, only called from the front-end)
Both run under nginx and they run correctly on my development Ubuntu 20.04.1 machine.
Now I'm moving them to AWS: I created a Linux machine with the same OS and transferred both applications.
So now I have
*
*https://xxx.domain2.com (the front-end)
*https://yyy.domain2.com (the back-end, only called from the front-end)
The second server will be always called only by the first one. It should be considered hidden.
I run them, but when accessing the front-end for the login, I received the following error:
OPTIONS https://xxx.domain2.com/login CORS Missing Allow Origin
Now, in the server https://yyy.domain2.com I always specified
const router = express();
router.use(cors())
and the full nginx config file is as follow
server {
server_name xxx.domain2.com;
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://localhost:3000;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/xxx.domain2.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/xxx.domain2.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
server_name yyy.domain2.com;
add_header Access-Control-Allow-Origin "xxx.domain2.com";
location / {
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_pass http://localhost:3001;
}
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/xxx.domain2.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/xxx.domain2.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}
server {
if ($host = xxx.domain2.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
listen [::]:80;
server_name xxx.domain2.com;
return 404; # managed by Certbot
}
server {
if ($host = yyy.domain2.com) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80;
listen [::]:80;
server_name yyy.domain2.com ;
return 404; # managed by Certbot
}
Please note that I added the line
add_header Access-Control-Allow-Origin "xxx.domain2.com";
that I don't have on my development server.
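One detail worth noting (a sketch, not something I have verified on this setup): the value of Access-Control-Allow-Origin must be a full origin including the scheme, so the line would presumably need to be:

```nginx
# Sketch only: CORS origins include the scheme, so the header value
# would need the full origin rather than the bare host name.
add_header Access-Control-Allow-Origin "https://xxx.domain2.com";
```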
====================== FIRST ADDENDUM ==========================
This is the offending piece of client code. The error occurs in response to the const res = await axios.put(cfLogin, { 'cf': cf }); request below.
const handleSubmitCf = async (e) => {
e.preventDefault();
setSudo(false)
try {
const res = await axios.put(cfLogin, { 'cf': cf });
if (res.status === 200 || res.status === 201)
{
nextPhase();
setResponse(res.data.data1);
}
setErrore('');
}
catch (error) { setErrore(error.response.data); };
}
| |
doc_23525581
|
In App.js (React):
class ShowAll extends Component {
constructor(props){
super(props);
this.state = {
Data: [],
}
}
componentDidMount(){
Request.get('/budget').then((res)=>{
let DataString = JSON.stringify(res.body);
this.setState({
Data: DataString
}, function(){
console.log(DataString);
})
}).catch((err)=> console.log(err));
}
render(){
return(
<div>
{
this.state.Data.map(function(dynamicData, key){
<div>{dynamicData[0]._id}</div> // doesn't render anything and throws an error message saying TypeError: Cannot read property '_id' of undefined
})
}
</div>
)
}
}
**EDIT 1**
The api data structure is
[{
"_id":"lul",
"_creator":"5a8f8ecdd67afa6494805bef",
"firstItem":"hero",
"secondItem":"30",
"thirdItem":"3",
"__v":0,
"tBudget":9,
"thirdPrice":3,
"secondPrice":3,
"firstPrice":3
}]
A: Oh. Your original post was a completely different problem.
Two problems. First:
You're mapping, so you don't need to index the mapped value
this.state.Data.map(function(dynamicData, n) {
// dynamicData _is_ the nth element in the array, you don't need dynamicData[x]
dynamicData._id === "lul"
})
Second, you're not returning anything from your map callback:
map(function(...) {
return (<div>...</div>)
})
A: Based on the server response, you don't need to access the item 0 when rendering each one of them
class ShowAll extends Component {
constructor(props){
super(props);
this.state = {
Data: [],
}
}
componentDidMount(){
Request.get('/budget').then((res)=>{
this.setState({
Data: res.body, // Assuming res.body is already an array
}, () => {
console.log(this.state.Data); // the updated state (DataString no longer exists here)
})
}).catch((err)=> console.log(err));
}
render(){
return(
<div>
{
this.state.Data.map((dynamicData, key) => // Using an arrow function
<div>{dynamicData._id}</div> // Don't access the item 0
)
}
</div>
)
}
}
The previous code assumes res.body is already an array. If that's not the case and you are actually getting a string or something else, you need to parse the response and make sure you assign an array to the state.
| |
doc_23525582
|
Parser.h
#ifndef PARSER_H
#define PARSER_H
#include <fstream>
#include <cstdlib>
#include "Heap.h"
#include "Word.h"
enum FILESTREAM_ERRORS{OPEN_ERROR};
class Parser
{
public:
Parser();
void SetFile(const char *filename);
void LoadHeap();
//private:
int word, line, paragraph;
ifstream fin;
Heap<Word> *wordheap;
void LoadWord();
};
Parser::Parser()
{
word = line = paragraph = 0;
}
void Parser::SetFile(const char* filename)
{
fin.open(filename);
if(fin.fail())
throw OPEN_ERROR;
}
void Parser::LoadWord()
{
QString wordstring;
char c;
fin.get(c);
if (c == '\n')
{
char p = fin.peek();
if (p == '\n')
{
fin.get(c);
paragraph++;
}
line++;
fin.get(c);
}
while (isblank(c))
fin.get(c);
while (!isblank(c))
wordstring.append(c);
word++;
cout << qPrintable(wordstring) << endl;
}
#endif // PARSER_H
A: Check that your file exists and that you can effectively read it.
Activate std::ios exceptions to get more information on the error.
Another possible cause of failure is that the fstream already has a file associated with it. Check it with is_open before attempting the open operation.
Update from Drew Dormann's comment:
If you specify a relative path, i.e. a path that is not absolute (absolute paths typically start with a drive letter on Windows or a slash on *nix), you should consider the value of the current working directory (CWD). This value often defaults to the directory of your executable, but some OSes or IDEs may change it, and of course your program can change it as well.
So if you use a relative path, also check the CWD (you can get it with getcwd on *nix), or for testing purposes use an absolute path.
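To make the failure mode visible, one option (a sketch, not the asker's code) is to turn on stream exceptions and guard against re-opening:

```cpp
#include <fstream>
#include <iostream>

// Sketch: report *why* an open failed instead of silently setting failbit,
// and refuse to open a stream that already has a file attached.
bool open_with_diagnostics(std::ifstream &fin, const char *filename)
{
    if (fin.is_open())      // an already-open fstream cannot open another file
        return false;
    fin.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    try {
        fin.open(filename); // relative paths resolve against the CWD
        return true;
    } catch (const std::ios_base::failure &e) {
        std::cerr << "open failed: " << e.what() << '\n';
        return false;
    }
}
```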
| |
doc_23525583
|
function add() {
<? addToDakavebebi($_POST['answer'], $_POST['number']);?>
}
<form method="post">
<tr>
<select id='number'>
<option value="">select</option>
</select>
<select name="answer">
<option value="">select</option>
<option value="1">YES</option>
<option value="2">NO</option>
</select>
<input type="submit" value="SAVE" onclick="add()">
</tr>
</form>
It fills the box correctly on the page:
<script>
var ddlItems = document.getElementById("number"),
itemArray = ["a", "b", "c"];
for (var i = 0; i < itemArray.length; i++) {
var opt = itemArray[i];
var el = document.createElement("option");
el.textContent = opt;
el.value = opt;
ddlItems.appendChild(el);
}
</script>
A: Add the name "number" to the field. At the moment your server cannot read that data:
<select id='number' name='number'>
<option value="">select</option>
</select>
| |
doc_23525584
|
<td align="left">
<div class="topicos">
<a href="/_t_1593901" title="Welcome2">
<span class="titulo">
Hello World!
</span>
</a><br>
</div>
</td>
and this is the topic format if it has more then one page:
<td align="left">
<div class="topicos">
<a href="_t_1594517" title="Welcome">
<span class="titulo">
Hello World!
</span>
</a><br>
</div>
<span class="quickPaging">
[<img src="http://forum.imguol.com//forum/themes/jogos/images/clear.gif" class="master-sprite sprite-icon-minipost" alt="Ir à página" title="Ir à página">
Ir à página:
<a href="/_t_1594517?&page=1">1</a>,
<a href="/_t_1594517?&page=2">2</a>,
<a href="/_t_1594517?&page=3">3</a>,
<a href="/_t_1594517?&page=4">4</a>,
<a href="/_t_1594517?&page=5">5</a>
]</span>
</td>
I want to get the id (_t_1594517) of those topics with 5 or more pages. How can I do that? This is what I was trying, but I got lost; I didn't understand the DOMDocument documentation very well. I'm new to programming and PHP:
<?php
$html = new DOMDocument();
$url = "http://website.com/forum/?page=";
$page = "1";
while($page <= 10)
{
$html->loadHTML($url + $page);
foreach($html->getElementsByTagName('td') as $td)
{
if($td->hasAttributes())
{
if($td->getAttribute('align') == "left")
{
$div = $td->getElementsByTagName('div');
if($div->hasAttributes())
{
if($td->getAttribute('class') == "topicos")
{
$a = $td->getElementsByTagName('a');
{
if($a->hasAttributes())
{
/*$return['link'][] =*/ echo $a->getElementById('href')->tagName;
}
}
}
}
}
}
}
}
?>
A: I think xpath can help you:
If $with_links holds the HTML content with the 5 links, then
$doc = new DOMDocument();
$doc->loadHTML($with_links);
$xpath = new DOMXPath($doc);
$quick_paging_links = $xpath->query('//span[@class="quickPaging"]/a[contains(@href,"_t_")]/@href');
if($quick_paging_links->length>4)
{
$first_href = $quick_paging_links->item(0)->value;
$id = substr($first_href, 1, strpos($first_href, '?')-1);
echo 'Topic with id '.$id.' has '.$quick_paging_links->length." links.\n";
}
will produce the output:
Topic with id _t_1594517 has 5 links.
| |
doc_23525585
|
I have 3 tables, User, Team and Players.
User can have many teams and one team can have many players. This would all work if I created the User first, then a Team for it, and then added Players to it, because all of this is a one-to-many relationship.
However, what I will always have is Team and its Players.
Only when people actually register on my webpage and enter the id of their team does a User need to be created and connected with the Team that has that id.
This is my code for now, but it's not working because of the reasons mentioned.
class User(UserMixin, db.Model):
__tablename__ = 'user'
user_id = db.Column(db.Integer,primary_key=True)
username = db.Column(db.String(64), nullable=False)
email = db.Column(db.String(64), nullable=False)
password_hash = db.Column(db.String(128), nullable = False)
token = db.Column(db.String(64), unique=True, nullable = True)
secret_token = db.Column(db.String(64),unique=True, nullable = True)
teams = db.relationship("Team", backref="user", lazy=True)
def add_team(self,team_id,team_name,user_name,youthteam_id,user_id):
t = Team(team_id = team_id,team_name = team_name,user_name = user_name,youthteam_id = youthteam_id,user_id = self.user_id)
db.session.add(t)
db.session.commit()
def __repr__(self):
return '<User {}>'.format(self.username)
def set_password(self, password):
self.password_hash = generate_password_hash(password)
def check_password(self, password):
return check_password_hash(self.password_hash, password)
class Team(db.Model):
__tablename__ = "team"
team_id = db.Column(db.Integer, primary_key=True)
team_name = db.Column(db.String, nullable=False)
user_name = db.Column(db.String, nullable=False)
youthteam_id = db.Column(db.Integer, nullable=False)
players = db.relationship("Player", backref="team", lazy=True)
user_id = db.Column(db.Integer, db.ForeignKey("user.user_id"), nullable=False)
def add_player(self,youthplayer_id, youth_name, youth_surname, age, promoted_in, speciality, last_rating,last_pos):
p = Player(youthplayer_id = youthplayer_id,youth_name = youth_name,youth_surname = youth_surname,age = age,promoted_in = promoted_in,speciality = speciality,last_rating= last_rating,last_pos = last_pos,team_id = self.team_id)
db.session.add(p)
db.session.commit()
class Player(db.Model):
__tablename__ = "player"
youthplayer_id = db.Column(db.Integer, primary_key=True)
youth_name = db.Column(db.String, nullable=False)
youth_surname = db.Column(db.String, nullable=False)
age = db.Column(db.Numeric, nullable=False)
promoted_in = db.Column(db.Integer, nullable=False)
speciality = db.Column(db.String, nullable=False)
last_rating = db.Column(db.Numeric, nullable=False)
last_pos = db.Column(db.String, nullable=False)
team_id = db.Column(db.Integer, db.ForeignKey("team.team_id"), nullable=False)
So what I would like to achieve is this,
team Alpha exists in the database and player1 and player2 are connected to it. When an actual person registers on my page he enters his email, username and password. Through OAuth I get his user_id from another page. I then create a user in my users table, and through that user_id he is connected with his team.
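One way to sketch that flow (assumptions on my part: plain SQLAlchemy rather than Flask-SQLAlchemy, and a nullable user_id so the Team can exist before its owner registers):

```python
# Sketch: a nullable foreign key lets a Team row exist with no User,
# and the link is filled in later when the owner registers.
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    user_id = Column(Integer, primary_key=True)
    teams = relationship("Team", backref="user")

class Team(Base):
    __tablename__ = "team"
    team_id = Column(Integer, primary_key=True)
    team_name = Column(String, nullable=False)
    # nullable=True: the team may be created before any user claims it
    user_id = Column(Integer, ForeignKey("user.user_id"), nullable=True)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Team(team_id=1, team_name="Alpha"))  # no owner yet
    session.commit()

    # later, when the owner registers with this team_id:
    owner = User(user_id=42)
    session.add(owner)
    session.get(Team, 1).user_id = owner.user_id
    session.commit()
```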
| |
doc_23525586
|
A: You cannot include your .cc file directly from Tcl; instead you have to add your code to the available *.cc code. For example, promiscuous operations are already available in dsr.cc, so you can put your new code in dsr.cc and use "make" to recompile your ns2. Then, from your Tcl script, you can call DSR as a routing protocol.
| |
doc_23525587
|
String myString = "Hello Boom, Ho3w are || You ? Are ^ you ," fr45ee now ?";
I tried the below to split the string into
"Hello,Boom,Ho3w,are,You,Are,you,fr45ee,now" - where each comma indicates a separate element of the String array. My code is
String[] temp = data.split("\\s+\\^,?\"'\\|+");
But it is not working. Hope you can help. Thanks.
A: Your example won't compile as you have an unescaped double quote in your myString variable.
However, assuming it is escaped...
// | escaped " here
String myString = "Hello Boom, Ho3w are || You ? Are ^ you ,\" fr45ee now ?";
// | printing array
// | | splitting "myString"...
// | | | on 1 or more non-word
// | | | characters
System.out.println(Arrays.toString(myString.split("\\W+")));
Output
[Hello, Boom, Ho3w, are, You, Are, you, fr45ee, now]
A: I believe this should work:
String myString = "Hello Boom, Ho3w are || You ? Are ^ you ,\" fr45ee now ?";
String[] arr = myString.split("\\W+");
//=> [Hello, Boom, Ho3w, are, You, Are, you, fr45ee, now]
A: If you are literally only trying to split on the characters in your original regex, you should factor out the + and put everything into a character class.
public class Split {
public static void main(String[] args) {
String myString = "Hello Boom, Ho3w are || You ? Are ^ you ,\" fr45ee now ?";
String[] temp = myString.split("[\\s\\^,?\"'\\|]+");
for (int i = 0; i < temp.length; i++)
System.out.println(temp[i]);
}
}
Output:
Hello
Boom
Ho3w
are
You
Are
you
fr45ee
now
| |
doc_23525588
|
Having to build all of your test cases by hand seems really tedious. The BOOST_PARAM_TEST_CASE mechanism is pretty darn useful, but it only works if you have a test init function, which in turn requires you to be using manual test case construction.
Is there any documentation on how to hook into the automated system yourself so you can provide your own tests that auto-register themselves?
I'm using boost 1.46 right now.
A: The solution provided by @Omnifarious works, but requires a C++11 compiler.
Adapting that solution for a C++03 compiler:
#include <boost/test/unit_test_suite.hpp>
#include <boost/test/parameterized_test.hpp>
#define BOOST_FIXTURE_PARAM_TEST_CASE( test_name, F, P, mbegin, mend ) \
struct test_name : public F \
{ \
typedef P param_t; \
void test_method(const param_t &); \
}; \
\
void BOOST_AUTO_TC_INVOKER( test_name )(const test_name::param_t &param) \
{ \
test_name t; \
t.test_method(param); \
} \
\
BOOST_AUTO_TU_REGISTRAR( test_name )( \
boost::unit_test::make_test_case( \
&BOOST_AUTO_TC_INVOKER( test_name ), #test_name, \
(mbegin), (mend))); \
\
void test_name::test_method(const param_t &param) \
// *******
#define BOOST_AUTO_PARAM_TEST_CASE( test_name, param_type, mbegin, mend ) \
BOOST_FIXTURE_PARAM_TEST_CASE( test_name, \
BOOST_AUTO_TEST_CASE_FIXTURE, \
param_type, \
mbegin, mend)
This solution is slightly different in usage. Since there is no decltype in C++03, the type of the parameter object cannot be automatically deduced. We must pass it in as a parameter to BOOST_AUTO_PARAM_TEST_CASE:
class FooTestParam
{
public:
std::string mS;
FooTestParam (int n)
{
std::stringstream ss;
ss << n;
mS = ss.str();
}
};
FooTestParam fooParams [] =
{
FooTestParam (42),
FooTestParam (314)
};
BOOST_AUTO_PARAM_TEST_CASE (TestFoo, FooTestParam, fooParams, fooParams + 2)
{
const std::string testVal = param.mS;
}
BOOST_AUTO_TEST_CASE (TestAddressField)
{
const uint32_t raw = 0x0100007f; // 127.0.0.1
const uint8_t expected[4] = {127, 0, 0, 1};
const Mdi::AddressField& field = *reinterpret_cast <const Mdi::AddressField*> (&raw);
for (size_t i = 0; i < 4; ++i)
BOOST_CHECK_EQUAL (field[i], expected[i]);
}
A: You can easily mix manual and automated test unit registration. Implement your own init function (like in example 20 on this page) and inside init function you can perform registration for parameterized test cases. Boost.Test will merge them both into single test tree.
A: Since Boost 1.59 internal implementation details have changed and Omnifarious's solution doesn't compile.
The reason is a change in the signature of the boost::unit_test::make_test_case function: it now takes 2 additional arguments, __FILE__ and __LINE__.
Fixed solution:
#if BOOST_VERSION > 105800
#define MY_BOOST_TEST_ADD_ARGS __FILE__, __LINE__,
#define MY_BOOST_TEST_DEFAULT_DEC_COLLECTOR ,boost::unit_test::decorator::collector::instance()
#else
#define MY_BOOST_TEST_ADD_ARGS
#define MY_BOOST_TEST_DEFAULT_DEC_COLLECTOR
#endif
#define BOOST_FIXTURE_PARAM_TEST_CASE( test_name, F, mbegin, mend ) \
struct test_name : public F { \
typedef ::std::remove_const< ::std::remove_reference< decltype(*(mbegin)) >::type>::type param_t; \
void test_method(const param_t &); \
}; \
\
void BOOST_AUTO_TC_INVOKER( test_name )(const test_name::param_t &param) \
{ \
test_name t; \
t.test_method(param); \
} \
\
BOOST_AUTO_TU_REGISTRAR( test_name )( \
boost::unit_test::make_test_case( \
&BOOST_AUTO_TC_INVOKER( test_name ), #test_name, \
MY_BOOST_TEST_ADD_ARGS \
(mbegin), (mend)) \
MY_BOOST_TEST_DEFAULT_DEC_COLLECTOR); \
\
void test_name::test_method(const param_t &param) \
#define BOOST_AUTO_PARAM_TEST_CASE( test_name, mbegin, mend ) \
BOOST_FIXTURE_PARAM_TEST_CASE( test_name, \
BOOST_AUTO_TEST_CASE_FIXTURE, \
mbegin, mend)
A: Starting with Boost version 1.59, this is being handled by data-driven test cases:
#define BOOST_TEST_MODULE MainTest
#include <boost/test/included/unit_test.hpp>
#include <boost/test/data/test_case.hpp>
#include <boost/array.hpp>
static const boost::array< int, 4 > DATA{ 1, 3, 4, 5 };
BOOST_DATA_TEST_CASE( Foo, DATA )
{
BOOST_TEST( sample % 2 );
}
This functionality requires C++11 support from compiler and library, and does not work inside a BOOST_AUTO_TEST_SUITE.
If you have to support both old and new versions of Boost in your source, and / or pre-C++11 compilers, check out And-y's answer.
A: I wrote my own support for this since there really didn't seem to be any good support. This requires the C++11 decltype feature and the ::std::remove_const and ::std::remove_reference library methods to work.
The macro definitions are a modified versions of the BOOST_FIXTURE_TEST_CASE and BOOST_AUTO_TEST_CASE macros.
You use this by declaring your function thus:
BOOST_AUTO_PARAM_TEST_CASE(name, begin, end)
{
BOOST_CHECK_LT(param, 5); // The function will have an argument named 'param'.
}
Here is the header that defines the BOOST_AUTO_PARAM_TEST_CASE macro:
#include <boost/test/unit_test_suite.hpp>
#include <boost/test/parameterized_test.hpp>
#include <type_traits>
#define BOOST_FIXTURE_PARAM_TEST_CASE( test_name, F, mbegin, mend ) \
struct test_name : public F { \
typedef ::std::remove_const< ::std::remove_reference< decltype(*(mbegin)) >::type>::type param_t; \
void test_method(const param_t &); \
}; \
\
void BOOST_AUTO_TC_INVOKER( test_name )(const test_name::param_t &param) \
{ \
test_name t; \
t.test_method(param); \
} \
\
BOOST_AUTO_TU_REGISTRAR( test_name )( \
boost::unit_test::make_test_case( \
&BOOST_AUTO_TC_INVOKER( test_name ), #test_name, \
(mbegin), (mend))); \
\
void test_name::test_method(const param_t &param) \
// *******
#define BOOST_AUTO_PARAM_TEST_CASE( test_name, mbegin, mend ) \
BOOST_FIXTURE_PARAM_TEST_CASE( test_name, \
BOOST_AUTO_TEST_CASE_FIXTURE, \
mbegin, mend)
A: I took Omnifarious' header file and modified it such that the parameter is passed to the constructor of the test fixture rather than to the test method. This requires the test fixture's constructor declaration to take a single argument with the parameter's type. I found this to be super handy--much thanks for the initial question and answer!
#include <boost/test/unit_test_suite.hpp>
#include <boost/test/parameterized_test.hpp>
#include <type_traits>
#define BOOST_FIXTURE_PARAM_TEST_CASE( test_name, F, mbegin, mend ) \
struct test_name : public F { \
typedef ::std::remove_const< ::std::remove_reference< decltype(*(mbegin)) >::type>::type param_t; \
test_name(const param_t &param) : F(param) {} \
void test_method(void); \
}; \
\
void BOOST_AUTO_TC_INVOKER( test_name )(const test_name::param_t &param)\
{ \
test_name t(param); \
t.test_method(); \
} \
\
BOOST_AUTO_TU_REGISTRAR( test_name )( \
boost::unit_test::make_test_case( \
&BOOST_AUTO_TC_INVOKER( test_name ), #test_name, \
(mbegin), (mend))); \
\
void test_name::test_method(void) \
// *******
#define BOOST_AUTO_PARAM_TEST_CASE( test_name, mbegin, mend ) \
BOOST_FIXTURE_PARAM_TEST_CASE( test_name, \
BOOST_AUTO_TEST_CASE_FIXTURE, \
mbegin, mend)
| |
doc_23525589
|
In addition to this I get a warning on the line below (sending 'ViewController *const __strong' to parameter of incompatible type 'id'):
[output setSampleBufferDelegate:self queue:queue];
Code implemented by me:
-(void)setupCaptureSession
{
NSError *error = nil;
AVCaptureSession *session = [[AVCaptureSession alloc] init];
self.captureSession = session;
self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *Input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!Input) {
}
[[self captureSession] addInput:Input];
AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
[[self captureSession] addOutput:output];
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
output.videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA], (id)kCVPixelBufferPixelFormatTypeKey, nil];
AVCaptureConnection *connection = [output connectionWithMediaType:AVMediaTypeVideo];
[connection setVideoMinFrameDuration:CMTimeMake(1, 15)];
[session startRunning];
[self setSession:session];
}
-(UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedLast);
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
UIImage *image = [UIImage imageWithCGImage:quartzImage];
CGImageRelease(quartzImage);
return image;
}
////////////////////
-(void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
// Here I do my processing with image and display a count. The counter updates for a few seconds,
// the log shows a memory warning, then it works again, and then the application gets terminated.
}
-(void)setSession:(AVCaptureSession *)session
{
self.captureSession = session;
}
| |
doc_23525590
|
The first API provider has a single URL which we call, it returns an XML response, and that has all the data we need from them. Whenever there are troubles with it, curl() always reports sensible errors; request timeout, errors 503/505, etc.
With the second API provider, we need to call 4 different URLs to get all the data we need.
For each URL, a curl() request is made, its response parsed (json_decode()), values are saved to an array and the next URL is called.
Every now and then, the second API provider randomly returns
Failed connect to our.api.provider:443; Operation now in progress
I am stumped because wherever this error is mentioned, it is consistently returned every time curl() runs and gets fixed by either adding proxy settings to curl() options or telling the firewall to stop blocking the request.
We do not have a proxy, and if it were the firewall it would be failing 100% of the time, not 0.01% of the time.
curl() options:
CURLOPT_RETURNTRANSFER => true,
CURLOPT_TIMEOUT => 20,
CURLOPT_HTTPHEADER => [$api_access_token],
I have already asked the API provider whether they have any experience with this error from their other clients, and will update the post with their response. I am assuming the problem is on our side, though.
Here is the curl() response when it failed:
url => our.api.provider
content_type =>
http_code => 0
header_size => 0
request_size => 0
filetime => -1
ssl_verify_result => 0
redirect_count => 0
total_time => 18.566207
namelookup_time => 8.518884
connect_time => 0
pretransfer_time => 0
size_upload => 0
size_download => 0
speed_download => 0
speed_upload => 0
download_content_length => -1
upload_content_length => -1
starttransfer_time => 0
redirect_time => 0
redirect_url =>
primary_ip => our.ip.add.ress
primary_port => 443
local_ip =>
local_port => 0
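As a stop-gap while investigating, a retry loop is one option (a sketch only; the function name and back-off values are made up, and this does not explain the underlying error):

```php
<?php
// Sketch: retry a rare transient connect failure a few times
// with a short back-off before giving up.
function curl_get_with_retry(string $url, array $headers, int $tries = 3) {
    for ($i = 0; $i < $tries; $i++) {
        $ch = curl_init($url);
        curl_setopt_array($ch, [
            CURLOPT_RETURNTRANSFER => true,
            CURLOPT_TIMEOUT        => 20,
            CURLOPT_HTTPHEADER     => $headers,
        ]);
        $body  = curl_exec($ch);
        $errno = curl_errno($ch);
        curl_close($ch);
        if ($errno === 0) {
            return $body;          // success
        }
        usleep(250000 * ($i + 1)); // brief back-off before retrying
    }
    return false;                  // all attempts failed
}
```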
| |
doc_23525591
|
./setup --path /configdir/openam --log /configdir/openam/log --acceptLicense /configdir/openam/debug -v
ssoadm fails with the following error message:
Cannot locate system configuration. Directory Server may be down or configuration directory is invalid.
A: There are two potential issues if you get that error message:
*
*The config directory itself is invalid/incorrect, so make sure you are passing the correct config dir to --path (or enter it when prompted)
*If you have SSL enabled, then the certificate needs to be added to your jdk trust store at: $JAVA_HOME/jre/lib/security/cacerts store.
Alternatively, you can also modify ./setup as in:
$JAVA_HOME/bin/java -D"load.config=yes" -D"javax.net.ssl.trustStore=/ssocerts/opensso.jks" -D"javax.net.ssl.trustStorePassword=changeit"
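For the trust-store route, the import might look something like this (a sketch; the alias and certificate file path are placeholders):

```shell
# Sketch: import the directory server's certificate into the JDK trust store.
# The alias and certificate file are placeholders for your own values.
keytool -import -trustcacerts \
  -alias opensso \
  -file /ssocerts/opensso.crt \
  -keystore $JAVA_HOME/jre/lib/security/cacerts \
  -storepass changeit
```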
| |
doc_23525592
|
A: I've used this before - https://github.com/hugoduncan/clj-ssh
The easiest way to get the code is to use Leiningen or Cake. Add [clj-ssh "0.2.0"] to your dependencies, and you should be good to go.
| |
doc_23525593
|
Also: if the string has a letter that's not any of these, then it shouldn't go into the if.
I have tried with || and other symbols but still can't get it.
It doesn't have to be a regex; I'm just trying to find a way to solve it.
String message = "AIDDDAAIDAAA";
if(message.matches("(A|D|I)")){
System.out.println("Matches");
}
A: You can include all the characters you are interested in within square brackets. To match one or more occurrences of the characters in the square brackets, append + to them. The message string must be made up entirely of these characters for it to be considered a match.
Try this.
String message = "ADAIAIAIAIAIADDDAI";
if (message.matches("[ADI]+")) {
System.out.println("Matches");
}
A: Since you're asking about "words", I guess A, D and I stand for words with more than one letter and that that is the reason why you're not using the character class [ADI]. You just have to add a + because the message consists of more than one word.
String message = "AIDDDAAIDAAA";
if (message.matches("(A|D|I)+")) {
System.out.println("Matches");
}
scigs answer works as well.
A: You could use string replace to do something similar:
String message = "AIDDDAAIDAAA";
message = message.replace("A","")
.replace("I","")
.replace("D","");
if (message.equals("")) {
//do your thing
}
A: You could just check each char composing the String:
String message = "AIDDDAAIDAAA";
boolean matches = true;
for (int i = 0; i < message.length(); i++){
if (message.charAt(i)!='A' && message.charAt(i)!='I' && message.charAt(i)!='D'){
matches = false;
break;
}
}
if (matches) System.out.println("Matches");
| |
doc_23525594
|
[__ob__: Observer]
0:
name: "alata"
status: "4"
__ob__: Observer {value: {β¦}, dep: Dep, vmCount: 0}
get name: Ζ reactiveGetter()
I converted it :
var obj = JSON.parse(JSON.stringify(this.option))
console.log(obj)
And the result is an empty array :
[]
length: 0
__proto__: Array(0)
Please help me get name and status data inside it. Thanks
| |
doc_23525595
|
<None Include="C:\foo.bar" />
different from
<Content Include="C:\foo.bar" />
?
A: In my situation, my MSBuild file had an ItemGroup for image resources that appeared as follows:
<ItemGroup>
<Content Include="Resources\image001.png" />
<Content Include="Resources\image002.png" />
<Content Include="Resources\image003.png" />
<Content Include="Resources\image004.png" />
<None Include="Resources\image005.png" />
<None Include="Resources\image006.png" />
<None Include="Resources\image007.png" />
</ItemGroup>
While my project was building fine, this left me wondering why I had a mix of Content and None item type elements in my ItemGroup. This MSDN article (for Visual Studio 2010) gave me the guidance I was looking for:
Note that when the resource editor adds an image, it sets Build
Action to None, because the .resx file references the image
file. At build time, the image is pulled into the .resources file
created out of the .resx file. The image can then easily be accessed
by way of the strongly-typed class auto-generated for the .resx file.
Therefore, you should not change this setting to Embedded
Resource, because doing this would include the image two times in
the assembly.
Resolution: With this guidance, using a text editor, I changed the Content item type elements to None.
Also, for an overview of MSBuild items, see this MSDN article.
A: Content files are not included in a build, but are included in a publish.
None files are not included in a build or publish, unless they are configured that way by you. For instance, a "Copy to Output Directory" setting of "Always" or "Newer", will cause them to be included in both a build and publish.
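That opt-in can be sketched in the project file like this (a hypothetical file name; the element names are the usual MSBuild ones):

```xml
<ItemGroup>
  <!-- A None item that is still copied to the output on build/publish -->
  <None Include="settings.sample.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>
```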
A: One difference is how they get published; "None" items don't get included in a publish, "Content" items do; for example, on the "Application Files" dialog on the Publish tab.
A: I have a project that contains no compilable items (it stores html and javascript for jasmine unit tests).
One day my solution (that contained said project) stopped compiling saying "The target "Build" does not exist in the project".
I added an import to bring in the compiler, which worked fine on my machine but failed using msbuild on the build server.
<Import Project="$(MSBuildToolsPath)\Microsoft.CSharp.targets" />
I then changed a line from
<None Include="SpecRunner.html" />
to
<Content Include="SpecRunner.html" />
and it worked on the build server as well.
A: You need None in a template project file to include files you define in the .vstemplate otherwise they are lost in the creation & translation process. They get left behind in the temp folder it uses to build everything and then deleted shortly after.
A: In my case, .pubxml is one of the files in the None list. It is not used when building the solution and is not a static file for the web project, but it holds the configuration used to publish the site to Azure.
As per Microsoft article these are the major types we see among .csproj file tags:
None - The file is not included in the project output group and is not
compiled in the build process. An example is a text file that contains
documentation, such as a Readme file.
Compile - The file is compiled into the build output. This setting is
used for code files.
Content - The file is not compiled, but is included in the Content
output group. For example, this setting is the default value for an
.htm or other kind of Web file.
Embedded Resource - This file is embedded in the main project build
output as a DLL or executable. It is typically used for resource
files.
A: The MSDN article on the build action property explains the differences.
None - The file is not included in the project output group and is not compiled in the build process. An example is a text file that contains documentation, such as a Readme file.
Content - The file is not compiled, but is included in the Content output group. For example, this setting is the default value for an .htm or other kind of Web file.
A: I am not 100% sure (I read the MSDN description of the Build Action property), but just copying that answer from MSDN to Stack Overflow does not answer the question completely for me.
The difference between None and Content only has an effect on Web projects. For a command-line project, WinForms project, unit-test project (in my case), etc., None and Content behave the same.
MSDN: are "project output group" and "Content output group" terms only used in a Web project?
| |
doc_23525596
|
I got this error when clicking on the Save Changes button:
Error!Payload validation error: 'Object didn't pass validation for
format absolute-https-uri-or-empty: https://localhost:4200/en/signin'
on property initiate_login_uri (Initiate login uri, must be https).
How to solve this error?
A: Change localhost in your Application Login URI to 127.0.0.1,
so your URI will look like this: https://127.0.0.1:4200/en/signin
source: Application Login URI field
doc_23525597
The query I have is (and this is the short version):
SELECT
SQL_CALC_FOUND_ROWS i.idItems RowCount,
i.* Items,
# Create a JSON formatted field
CONCAT('{',GROUP_CONCAT('"',Attributes.key, '":"', CONVERT(Attributes.value,CHAR),'"'),'}') as Attributes,
IF (te.Key IS NULL,tp.Key,te.Key) as Type,
tc.Value Color,
l.* Location,
c.Name,
c.Mobile,
c.Mail
FROM
(SELECT ItemID, ats.Key, ats.Value FROM attributeStrings as ats
UNION ALL
SELECT ItemID, ati.Key, ati.Value FROM attributeIntegers as ati
) Attributes
JOIN Items i ON
i.idItems = Attributes.ItemID
AND CheckIn >= DATE_SUB('2011-02-16 00:00:00',INTERVAL 90 DAY)
AND CheckIn <= DATE_ADD('2011-02-16 23:59:59',INTERVAL 90 DAY)
AND Checkout IS NULL
LEFT JOIN Customers c ON c.idCustomers = i.CustomerID
LEFT JOIN attributeintegers atli ON atli.itemid = i.idItems AND atli.key = 'Location'
LEFT JOIN locations l ON l.id = atli.value
LEFT JOIN attributestrings atts ON atts.itemid = i.idItems AND atts.key = 'Type' LEFT
JOIN Lists tp ON tp.value = atts.value
LEFT JOIN attributestrings attes ON attes.itemid = i.idItems AND attes.key = 'Tech' LEFT
JOIN Lists te ON te.value = attes.value
LEFT JOIN attributeintegers atci ON atci.itemid = i.idItems AND atci.key = 'Color' LEFT
JOIN Strings tc ON tc.StringID = atci.value
GROUP BY Attributes.ItemID
ORDER BY CheckIn DESC
Now I need to get this statement in here somewhere
MATCH(attributestrings.Value) AGAINST("Nokia" IN BOOLEAN MODE)
As you can see, there is a table called attributestrings and it has 3 columns: ItemID, Key and Value. I need to search the Value column for the words in AGAINST() and only show results matching this and the other criteria, such as the date and Checkout above.
I tried to add the statement after the AND Checkout IS NULL like this:
AND Checkout IS NULL
AND MATCH(Attributes.Value) AGAINST("Nokia" IN BOOLEAN MODE)
I had to use Attributes.Value instead of attributestrings because the table wasn't found otherwise. The result was that the concatenated Attributes column contained only the value "Nokia", even when there was more to concatenate.
I hope someone is willing to take on this challenge...
Thank you.
[EDIT]
I tried to put the WHERE in before the GROUP BY as Tim Fultz suggested, but I get the error
Unknown column 'attributestrings.Value' in 'Where clause'
LEFT JOIN attributeintegers atci ON atci.itemid = i.idItems AND atci.key = 'Color' LEFT JOIN Strings tc ON tc.StringID = atci.value
WHERE MATCH(attributestrings.Value) AGAINST("Nokia Sony" IN BOOLEAN MODE)
GROUP BY Attributes.ItemID
A: Typically this is put in the WHERE clause:
WHERE MATCH(attributestrings.Value) AGAINST("Nokia" IN BOOLEAN MODE)
A: I think I came up with a solution...
I added another:
LEFT JOIN attributestrings fts ON fts.itemid = i.idItems AND MATCH(fts.Value) AGAINST("Nokia Sony" IN BOOLEAN MODE)
Just before the GROUP BY..., and then included fts.Value ftsv in the main SELECT statement. Now I could insert a HAVING ftsv IS NOT NULL between GROUP BY... and ORDER BY...
This gave me the result I wanted but the query starts to get a bit slow now...
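To make the shape of that workaround explicit, here is a condensed sketch (not the full query) using the table and column names from the question - the extra LEFT JOIN performs the full-text match, rows without a match get a NULL alias value, and HAVING filters those out after grouping:

```sql
-- Sketch only: assumes a FULLTEXT index on attributestrings.Value (MyISAM/InnoDB).
SELECT
    i.*,
    fts.Value AS ftsv
FROM Items i
LEFT JOIN attributestrings fts
    ON fts.ItemID = i.idItems
    AND MATCH(fts.Value) AGAINST('Nokia Sony' IN BOOLEAN MODE)
GROUP BY i.idItems
HAVING ftsv IS NOT NULL   -- keeps only items whose attributes matched the search terms
ORDER BY i.CheckIn DESC;
```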
A: There is a problem in your query: when you assign a table an alias with AS, you must use that alias everywhere in the query. For example, you aliased attributeStrings as ats, so use ats everywhere and this will work.
doc_23525598
Running my Flutter app in debug mode, I keep receiving the following error message:
════════ Exception caught by rendering library ═════════════════════════════════
The method '_greaterThanFromInteger' was called on null.
Receiver: null
Tried calling: _greaterThanFromInteger(7)
The relevant error-causing widget was
ListView
lib/drawers/drawer_left.dart:96
════════════════════════════════════════════════════════════════════════════════
In addition, the error message says the following:
The relevant error-causing widget was
ListView
When the exception was thrown, this was the stack
#0 Object.noSuchMethod (dart:core-patch/object_patch.dart:53:5)
#1 int.> (dart:core-patch/integers.dart:103:18)
#2 RenderSliverFixedExtentBoxAdaptor._calculateTrailingGarbage
#3 RenderSliverFixedExtentBoxAdaptor.performLayout
#4 RenderObject.layout
...
The following RenderObject was being processed when the exception was fired: RenderSliverFixedExtentList#bbb75 relayoutBoundary=up16 NEEDS-LAYOUT
RenderObject: RenderSliverFixedExtentList#bbb75 relayoutBoundary=up16 NEEDS-LAYOUT
needs compositing
parentData: paintOffset=Offset(0.0, 0.0) (can use size)
constraints: SliverConstraints(AxisDirection.down, GrowthDirection.forward, ScrollDirection.idle, scrollOffset: 0.0, remainingPaintExtent: Infinity, crossAxisExtent: 304.0, crossAxisDirection: AxisDirection.right, viewportMainAxisExtent: Infinity, remainingCacheExtent: Infinity cacheOrigin: 0.0 )
geometry: SliverGeometry(scrollExtent: 480.0, paintExtent: 480.0, maxPaintExtent: 480.0, cacheExtent: 480.0)
scrollExtent: 480.0
paintExtent: 480.0
maxPaintExtent: 480.0
cacheExtent: 480.0
currently live children: 0 to 7
child with index 0: RenderIndexedSemantics#f15ec
needs compositing
parentData: index=0; layoutOffset=0.0
constraints: BoxConstraints(w=304.0, h=60.0)
semantic boundary
size: Size(304.0, 60.0)
index: 0
child: RenderRepaintBoundary#da636
needs compositing
parentData: <none> (can use size)
constraints: BoxConstraints(w=304.0, h=60.0)
layer: OffsetLayer#3f6f3
offset: Offset(0.0, 0.0)
size: Size(304.0, 60.0)
metrics: 0.0% useful (1 bad vs 0 good)
diagnosis: insufficient data to draw conclusion (less than five repaints)
child: RenderConstrainedBox#1b8cb
parentData: <none> (can use size)
constraints: BoxConstraints(w=304.0, h=60.0)
size: Size(304.0, 60.0)
additionalConstraints: BoxConstraints(0.0<=w<=Infinity, h=60.0)
child: RenderPadding#7a95f
parentData: <none> (can use size)
constraints: BoxConstraints(w=304.0, h=60.0)
size: Size(304.0, 60.0)
padding: EdgeInsets.zero
textDirection: ltr
(the dump continues with children at indices 1, 2, 3, 4, 5 and 7 - index 6 is not listed - each repeating the same structure at layoutOffsets 60.0, 120.0, 180.0, 240.0, 300.0 and 420.0)
════════════════════════════════════════════════════════════════════════════════
I read here that this happens when variables are not initialised,
but I do not see where that would be the case in my code.
Below is my code - can somebody please point it out for me? I have been looking into this for two hours without success...
Here is the entire widget.
The error seems to come from the ListView - but where exactly?
import 'package:corona_test/screens/user_state.dart';
import 'package:flutter/material.dart';
import 'package:corona_test/app_locations.dart';
import 'package:url_launcher/url_launcher.dart';
class DrawerLeft extends StatefulWidget {
DrawerLeft({Key key}) : super(key: key);
@override
_DrawerLeftState createState() => _DrawerLeftState();
}
class _DrawerLeftState extends State<DrawerLeft> {
final int nrOfSettingsBullets = 8;
int screenSizeType = 0;
@override
Widget build(BuildContext context) {
// first define settings titles (order matters)
List<String> settings = [];
settings
.add(AppLocalizations.of(context).translate('My Personal Situation'));
settings.add(
'E-Mail'); // AppLocalizations.of(context).translate('Revert to Original Icons'));
settings.add(
'Test Dialog'); // AppLocalizations.of(context).translate('App help videos 1'));
settings.add(
't.b.d'); // AppLocalizations.of(context).translate('Facebook Like'));
settings.add(
't.b.d'); // AppLocalizations.of(context).translate('Our website'));
settings
.add('t.b.d'); // AppLocalizations.of(context).translate('Contact'));
settings
.add('t.b.d'); // AppLocalizations.of(context).translate('Rate App'));
settings
.add('t.b.d'); //AppLocalizations.of(context).translate('Impressum'));
final double screenHeight = MediaQuery.of(context).size.longestSide;
if (screenHeight >= 1000) {
screenSizeType = 8;
} else if (screenHeight >= 896) {
// iPhone XSmax/XR
screenSizeType = 7;
} else if (screenHeight >= 812) {
// iPhone XS
screenSizeType = 6;
} else if (screenHeight >= 800) {
// Android Samsung Galaxy S7 5.1"
screenSizeType = 5;
} else if (screenHeight >= 736) {
// iPhone 6S Plus
screenSizeType = 4;
} else if (screenHeight >= 690) {
// Android Samsung Galaxy S9 5.8"
screenSizeType = 3;
} else if (screenHeight >= 683) {
// Android Nexus 5X, Pixel 2
screenSizeType = 2;
} else if (screenHeight >= 667) {
// iPhone 6S
screenSizeType = 1;
} else if (screenHeight >= 568) {
// iPhone 5S
screenSizeType = 0;
} else {
screenSizeType = 0;
}
return ListView(
physics: const NeverScrollableScrollPhysics(),
padding: EdgeInsets.zero,
children: <Widget>[
Container(
height: _drawerHeaderHeight() ?? 80.0,
child: DrawerHeader(
child: Row(
mainAxisAlignment: MainAxisAlignment.start,
crossAxisAlignment: CrossAxisAlignment.center,
children: <Widget>[
SizedBox(
width: _spacerXTitle() ?? 12.0,
),
Text(AppLocalizations.of(context).translate('Settings'),
style: TextStyle(
fontSize: 21.0,
color: Colors.white,
fontWeight: FontWeight.w600),
),
],
),
decoration: BoxDecoration(
color: Colors.blue,
),
),
),
ListView.builder(
physics: const NeverScrollableScrollPhysics(),
shrinkWrap: true, // this way you don't need Expanded()
itemCount: this.nrOfSettingsBullets,
itemExtent: _tileHeightSettings() ?? 60.0,
itemBuilder: (BuildContext ctxt, int index) {
return Container(
padding: EdgeInsets.fromLTRB(0.0, 0.0, 0.0, 0.0),
height: _tileHeightSettings() ?? 60.0,
child: ListTile(
title: Container(
alignment: AlignmentDirectional.centerStart,
height: _itemHeightSetting() ?? 50.0,
decoration: BoxDecoration(
color: Color.fromRGBO(0x00, 0x99, 0xCC, 0.2),
borderRadius: BorderRadius.all(Radius.circular(14.0)),
border: Border.all(
color: Color.fromRGBO(0x2d, 0x32, 0x7d, 0.2)),
),
child: Row(
mainAxisAlignment: MainAxisAlignment.start,
crossAxisAlignment: CrossAxisAlignment.start,
children: <Widget>[
SizedBox(
width: 12.0,
),
Text(
settings[index],
style: TextStyle(
fontSize: 20.0,
color: Color.fromRGBO(0x2d, 0x32, 0x7d, 1.0),
fontWeight: FontWeight.w600),
),
],
),
),
onTap: () async {
switch (index) {
case 0:
Navigator.of(context).pop();
_navigateToUserState(context);
break;
case 1:
_sendMail();
break;
case 2:
_showDialog(
'Wollen Sie wirklich TEST TEST ?',
'Alle Einstellungen werden TEST TEST',
// AppLocalizations.of(context).translate(
// 'Do you really want to revert to Original Icons ?'),
// AppLocalizations.of(context).translate(
// 'All privately added Image Icons will be deleted.'),
);
break;
case 3:
//_launchFacebookURL();
break;
case 4:
//_launchChunderURL();
break;
case 5:
break;
case 6:
//_sendReview();
break;
case 7:
break;
default:
break;
}
},
),
);
},
),
SizedBox(
height: _aboveCancelSpaceLeft() ?? 16.0,
),
MaterialButton(
height: 70.0,
minWidth: 70.0,
color: Colors.blue,
textColor: Colors.white,
child: Text(
'Cancel', // AppLocalizations.of(context).translate('Cancel'),
style: TextStyle(
fontSize: 21.0,
color: Colors.white,
fontWeight: FontWeight.w600),
),
onPressed: () {
Navigator.of(context).pop();
},
splashColor: Color.fromRGBO(0x00, 0x99, 0xCC, 1.0),
)
],
);
}
// user defined function
void _showDialog(String title, String content) {
// flutter defined function
showDialog(
context: context,
builder: (BuildContext context) {
// return object of type Dialog
return AlertDialog(
title: Text(title),
content: Text(content),
actions: <Widget>[
FlatButton(
child: Text(
'No', //AppLocalizations.of(context).translate('No'),
style: TextStyle(
fontSize: 21.0,
color: Color.fromRGBO(0x00, 0x99, 0xCC, 1.0),
fontWeight: FontWeight.w600),
),
onPressed: () {
Navigator.of(context).pop();
},
),
FlatButton(
child: Text(
'Yes', // AppLocalizations.of(context).translate('Yes'),
style: TextStyle(
fontSize: 21.0,
color: Color.fromRGBO(0x00, 0x99, 0xCC, 1.0),
fontWeight: FontWeight.w600),
),
onPressed: () async {
// revert to Original Icons
// Directory directory = await LocationiKK.dbDirectory();
// DBHelperLocations databaseHelperLocations =
// DBHelperLocations();
// databaseHelperLocations.resetLocationDB(directory);
// await _loadLocationsFromSQLDBIntoContainer();
Future.delayed(
const Duration(milliseconds: 800),
() {
Navigator.of(context).pop();
},
);
},
),
],
);
},
);
}
double _tileHeightSettings() {
switch (screenSizeType) {
case 8:
return 62.0;
break;
case 7:
return 62.0;
break;
case 6:
return 57.0;
break;
case 5:
return 57.0;
break;
case 4:
return 57.0;
break;
case 3:
return 57.0;
break;
case 2:
return 56.0;
break;
case 1:
return 53.0;
break;
case 0:
return 46.0;
break;
default:
return 55.0;
break;
}
}
double _drawerHeaderHeight() {
switch (screenSizeType) {
case 8:
return 80.0;
break;
case 7:
return 80.0;
break;
case 6:
return 80.0;
break;
case 5:
return 80.0;
break;
case 4:
return 80.0;
break;
case 3:
return 80.0;
break;
case 2:
return 70.0;
break;
case 1:
return 80.0;
break;
case 0:
return 70.0;
break;
default:
return 105.0;
break;
}
}
double _itemHeightSetting() {
switch (screenSizeType) {
case 8:
return 52.0;
break;
case 7:
return 52.0;
break;
case 6:
return 50.0;
break;
case 5:
return 50.0;
break;
case 4:
return 50.0;
break;
case 3:
return 50.0;
break;
case 2:
return 48.0;
break;
case 1:
return 46.0;
break;
case 0:
return 40.0;
break;
default:
return 50.0;
break;
}
}
double _aboveCancelSpaceLeft() {
switch (screenSizeType) {
case 8:
return 16.0;
break;
case 7:
return 16.0;
break;
case 6:
return 16.0;
break;
case 5:
return 16.0;
break;
case 4:
return 16.0;
break;
case 3:
return 16.0;
break;
case 2:
return 5.0;
break;
case 1:
return 16.0;
break;
case 0:
return 16.0;
break;
default:
return 16.0;
break;
}
}
double _spacerXTitle() {
switch (screenSizeType) {
case 8:
return 12.0;
break;
case 7:
return 12.0;
break;
case 6:
return 12.0;
break;
case 5:
return 12.0;
break;
case 4:
return 12.0;
break;
case 3:
return 12.0;
break;
case 2:
return 12.0;
break;
case 1:
return 12.0;
break;
case 0:
return 12.0;
break;
default:
return 22.0;
break;
}
}
_sendMail() async {
const url =
'mailto:corona@ideenkaffee.ch?subject=Feedback%20Corona%20Control&body=';
if (await canLaunch(url)) {
await launch(url);
} else {
throw 'Could not launch $url';
}
}
Future<String> _navigateToUserState(BuildContext context) async {
return await Navigator.push(
context, MaterialPageRoute(builder: (context) => UserState()));
}
}
A: You can copy, paste and run the full code below.
It works fine after removing this line: itemExtent: _tileHeightSettings() ?? 60.0,
working demo
full code
import 'package:flutter/material.dart';
import 'package:url_launcher/url_launcher.dart';
class DrawerLeft extends StatefulWidget {
DrawerLeft({Key key}) : super(key: key);
@override
_DrawerLeftState createState() => _DrawerLeftState();
}
class _DrawerLeftState extends State<DrawerLeft> {
final int nrOfSettingsBullets = 8;
int screenSizeType = 0;
@override
Widget build(BuildContext context) {
// first define settings titles (order matters)
List<String> settings = [];
settings.add('My Personal Situation');
settings.add(
'E-Mail'); // AppLocalizations.of(context).translate('Revert to Original Icons'));
settings.add(
'Test Dialog'); // AppLocalizations.of(context).translate('App help videos 1'));
settings.add(
't.b.d'); // AppLocalizations.of(context).translate('Facebook Like'));
settings.add(
't.b.d'); // AppLocalizations.of(context).translate('Our website'));
settings
.add('t.b.d'); // AppLocalizations.of(context).translate('Contact'));
settings
.add('t.b.d'); // AppLocalizations.of(context).translate('Rate App'));
settings
.add('t.b.d'); //AppLocalizations.of(context).translate('Impressum'));
final double screenHeight = MediaQuery.of(context).size.longestSide;
if (screenHeight >= 1000) {
screenSizeType = 8;
} else if (screenHeight >= 896) {
// iPhone XSmax/XR
screenSizeType = 7;
} else if (screenHeight >= 812) {
// iPhone XS
screenSizeType = 6;
} else if (screenHeight >= 800) {
// Android Samsung Galaxy S7 5.1"
screenSizeType = 5;
} else if (screenHeight >= 736) {
// iPhone 6S Plus
screenSizeType = 4;
} else if (screenHeight >= 690) {
// Android Samsung Galaxy S9 5.8"
screenSizeType = 3;
} else if (screenHeight >= 683) {
// Android Nexus 5X, Pixel 2
screenSizeType = 2;
} else if (screenHeight >= 667) {
// iPhone 6S
screenSizeType = 1;
} else if (screenHeight >= 568) {
// iPhone 5S
screenSizeType = 0;
} else {
screenSizeType = 0;
}
return ListView(
physics: const NeverScrollableScrollPhysics(),
padding: EdgeInsets.zero,
children: <Widget>[
Container(
height: _drawerHeaderHeight() ?? 80.0,
child: DrawerHeader(
child: Row(
mainAxisAlignment: MainAxisAlignment.start,
crossAxisAlignment: CrossAxisAlignment.center,
children: <Widget>[
SizedBox(
width: _spacerXTitle() ?? 12.0,
),
Text(
'Settings',
style: TextStyle(
fontSize: 21.0,
color: Colors.white,
fontWeight: FontWeight.w600),
),
],
),
decoration: BoxDecoration(
color: Colors.blue,
),
),
),
ListView.builder(
physics: const NeverScrollableScrollPhysics(),
shrinkWrap: true, // this way you don't need Expanded()
itemCount: this.nrOfSettingsBullets,
//itemExtent: _tileHeightSettings() ?? 60.0,
itemBuilder: (BuildContext ctxt, int index) {
return Container(
padding: EdgeInsets.fromLTRB(0.0, 0.0, 0.0, 0.0),
height: _tileHeightSettings() ?? 60.0,
child: ListTile(
title: Container(
alignment: AlignmentDirectional.centerStart,
height: _itemHeightSetting() ?? 50.0,
decoration: BoxDecoration(
color: Color.fromRGBO(0x00, 0x99, 0xCC, 0.2),
borderRadius: BorderRadius.all(Radius.circular(14.0)),
border: Border.all(
color: Color.fromRGBO(0x2d, 0x32, 0x7d, 0.2)),
),
child: Row(
mainAxisAlignment: MainAxisAlignment.start,
crossAxisAlignment: CrossAxisAlignment.start,
children: <Widget>[
SizedBox(
width: 12.0,
),
Text(
settings[index],
style: TextStyle(
fontSize: 20.0,
color: Color.fromRGBO(0x2d, 0x32, 0x7d, 1.0),
fontWeight: FontWeight.w600),
),
],
),
),
onTap: () async {
switch (index) {
case 0:
Navigator.of(context).pop();
_navigateToUserState(context);
break;
case 1:
_sendMail();
break;
case 2:
_showDialog(
'Wollen Sie wirklich TEST TEST ?',
'Alle Einstellungen werden TEST TEST',
// AppLocalizations.of(context).translate(
// 'Do you really want to revert to Original Icons ?'),
// AppLocalizations.of(context).translate(
// 'All privately added Image Icons will be deleted.'),
);
break;
case 3:
//_launchFacebookURL();
break;
case 4:
//_launchChunderURL();
break;
case 5:
break;
case 6:
//_sendReview();
break;
case 7:
break;
default:
break;
}
},
),
);
},
),
SizedBox(
height: _aboveCancelSpaceLeft() ?? 16.0,
),
MaterialButton(
height: 70.0,
minWidth: 70.0,
color: Colors.blue,
textColor: Colors.white,
child: Text(
'Cancel', // AppLocalizations.of(context).translate('Cancel'),
style: TextStyle(
fontSize: 21.0,
color: Colors.white,
fontWeight: FontWeight.w600),
),
onPressed: () {
Navigator.of(context).pop();
},
splashColor: Color.fromRGBO(0x00, 0x99, 0xCC, 1.0),
)
],
);
}
// user defined function
void _showDialog(String title, String content) {
// flutter defined function
showDialog(
context: context,
builder: (BuildContext context) {
// return object of type Dialog
return AlertDialog(
title: Text(title),
content: Text(content),
actions: <Widget>[
FlatButton(
child: Text(
'No', //AppLocalizations.of(context).translate('No'),
style: TextStyle(
fontSize: 21.0,
color: Color.fromRGBO(0x00, 0x99, 0xCC, 1.0),
fontWeight: FontWeight.w600),
),
onPressed: () {
Navigator.of(context).pop();
},
),
FlatButton(
child: Text(
'Yes', // AppLocalizations.of(context).translate('Yes'),
style: TextStyle(
fontSize: 21.0,
color: Color.fromRGBO(0x00, 0x99, 0xCC, 1.0),
fontWeight: FontWeight.w600),
),
onPressed: () async {
// revert to Original Icons
// Directory directory = await LocationiKK.dbDirectory();
// DBHelperLocations databaseHelperLocations =
// DBHelperLocations();
// databaseHelperLocations.resetLocationDB(directory);
// await _loadLocationsFromSQLDBIntoContainer();
Future.delayed(
const Duration(milliseconds: 800),
() {
Navigator.of(context).pop();
},
);
},
),
],
);
},
);
}
double _tileHeightSettings() {
print('screenSizeType ${screenSizeType}');
switch (screenSizeType) {
case 8:
return 62.0;
break;
case 7:
return 62.0;
break;
case 6:
return 57.0;
break;
case 5:
return 57.0;
break;
case 4:
return 57.0;
break;
case 3:
return 57.0;
break;
case 2:
return 56.0;
break;
case 1:
return 53.0;
break;
case 0:
return 46.0;
break;
default:
return 55.0;
break;
}
}
double _drawerHeaderHeight() {
switch (screenSizeType) {
case 8:
return 80.0;
break;
case 7:
return 80.0;
break;
case 6:
return 80.0;
break;
case 5:
return 80.0;
break;
case 4:
return 80.0;
break;
case 3:
return 80.0;
break;
case 2:
return 70.0;
break;
case 1:
return 80.0;
break;
case 0:
return 70.0;
break;
default:
return 105.0;
break;
}
}
double _itemHeightSetting() {
switch (screenSizeType) {
case 8:
return 52.0;
break;
case 7:
return 52.0;
break;
case 6:
return 50.0;
break;
case 5:
return 50.0;
break;
case 4:
return 50.0;
break;
case 3:
return 50.0;
break;
case 2:
return 48.0;
break;
case 1:
return 46.0;
break;
case 0:
return 40.0;
break;
default:
return 50.0;
break;
}
}
double _aboveCancelSpaceLeft() {
switch (screenSizeType) {
case 8:
return 16.0;
break;
case 7:
return 16.0;
break;
case 6:
return 16.0;
break;
case 5:
return 16.0;
break;
case 4:
return 16.0;
break;
case 3:
return 16.0;
break;
case 2:
return 5.0;
break;
case 1:
return 16.0;
break;
case 0:
return 16.0;
break;
default:
return 16.0;
break;
}
}
double _spacerXTitle() {
switch (screenSizeType) {
case 8:
return 12.0;
break;
case 7:
return 12.0;
break;
case 6:
return 12.0;
break;
case 5:
return 12.0;
break;
case 4:
return 12.0;
break;
case 3:
return 12.0;
break;
case 2:
return 12.0;
break;
case 1:
return 12.0;
break;
case 0:
return 12.0;
break;
default:
return 22.0;
break;
}
}
_sendMail() async {
const url =
'mailto:corona@ideenkaffee.ch?subject=Feedback%20Corona%20Control&body=';
if (await canLaunch(url)) {
await launch(url);
} else {
throw 'Could not launch $url';
}
}
Future<String> _navigateToUserState(BuildContext context) async {
return await Navigator.push(
context, MaterialPageRoute(builder: (context) => UserState()));
}
}
class UserState extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Container();
}
}
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: MyHomePage(title: "test"),
);
}
}
class MyHomePage extends StatefulWidget {
MyHomePage({Key key, this.title}) : super(key: key);
final String title;
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
_counter++;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
drawer: DrawerLeft(),
appBar: AppBar(
title: Text(widget.title),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Text(
'You have pushed the button this many times:',
),
Text(
'$_counter',
style: Theme.of(context).textTheme.headline4,
),
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: Icon(Icons.add),
),
);
}
}
doc_23525599
I want to identify the significant, prominent n-grams in a text. For that use case, what is the usual recommendation (rule of thumb) for the confidence and support parameters?
A: When you ask DBpedia Spotlight to annotate text (finding entities/topics), it searches for n-grams that have URIs on DBpedia (n-grams that are Wikipedia titles). Those n-grams are called DBpedia resources.
Support: this is the Resource Prominence parameter, it helps you to ignore unimportant or uninformative resources. When you set a value X to it, this means resources that have a number of Wikipedia in-links smaller than X will be ignored and not returned to you.
Confidence: this is the Disambiguation Confidence parameter, it is a threshold which takes a value between 0 and 1. When you set a high value to it, you get better and more trustworthy annotations but you risk losing some correct ones.
Choosing values of those (or any other) parameters depends on your use case.
Examples:
*
*If you have a test set or gold standard for the type of n-grams you are interested in, you can tune your choices until the results are good enough as measured against your gold standard.
*If you only care about retrieving the top-N n-grams to infer the topic of the text, you can tune your parameters toward high values to get a few (mostly correct) n-grams, and sort them by Confidence.
*If you want to get as many n-grams as possible and your task won't be affected or biased by mistakes, you can set low values.
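As a concrete sketch, assuming the public DBpedia Spotlight REST endpoint, confidence and support are passed as query parameters to the annotate service. The threshold values below are illustrative, not recommendations:

```shell
# Build an annotate request for the public DBpedia Spotlight endpoint (bash).
# confidence and support are the two parameters discussed above;
# the values 0.5 and 20 are illustrative only.
BASE="https://api.dbpedia-spotlight.org/en/annotate"
TEXT="Berlin is the capital of Germany."
URL="${BASE}?confidence=0.5&support=20&text=${TEXT// /%20}"
echo "$URL"
# Fetch JSON annotations (requires network access):
#   curl -H "Accept: application/json" "$URL"
```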