doc_23535300
Right now I can add an "arch curve" which doesn't rotate the letters, just stretches them and moves them to fit the path, but it doesn't give the smooth curved-text look that CustomInk is using. A: You could create a template using SVG (created in Inkscape or any SVG editor) and then manipulate the XML using PHP. Finally, to get dynamic output, render the modified SVG file using inkscape on the command line to get either bitmap or PDF output. The results will be of excellent quality.
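The template-and-substitute step the answer describes can be sketched in a few shell commands. The template file, placeholder name, and curve id below are all hypothetical; in practice the placeholder text would sit on a textPath element that follows your arch curve, and PHP would do the substitution instead of sed:

```shell
# Hypothetical minimal template with a placeholder on a textPath.
cat > template.svg <<'EOF'
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink">
  <path id="arch" d="M 10 80 Q 95 10 180 80" fill="none"/>
  <text><textPath xlink:href="#arch">TEXT_PLACEHOLDER</textPath></text>
</svg>
EOF

# PHP (or sed, as here) swaps the user's text into the XML.
sed 's/TEXT_PLACEHOLDER/Hello World/' template.svg > output.svg

# Then render to bitmap or PDF with Inkscape on the command line, e.g.:
# inkscape output.svg --export-type=png --export-filename=output.png
```

Because the text follows the path in the SVG model itself, the renderer rotates each glyph along the curve, which is the effect the question is after.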
doc_23535301
It uses a global keyboard hook to capture key presses and play back wav files using NAudio. However playback lags on some computers and plays a few seconds after the key has been pressed. Could this be an HDD/SSD or CPU speed issue or is it a programming issue? What can be done to solve it? Tried on 4 computers, 2 lagged, 2 did not. * *My SSD/i7 - did not lag. *My HDD/Core2Duo - did not lag. *Friend's SSD/i7 - lagged. *Friend's HDD/i7 - lagged. Program Info https://github.com/MattMcManis/Ink Source https://github.com/MattMcManis/Ink/tree/master/source/Ink Download https://github.com/MattMcManis/Ink/releases App.xaml.cs Start the Keyboard Listener. // Application Startup // private void Application_Startup(object sender, StartupEventArgs e) { th = new Thread(() => RunKeyListener()); th.IsBackground = true; th.Start(); th.Join(); } // Keyboard Listener // private void RunKeyListener() { KListener.KeyDown += new RawKeyEventHandler(KListener_KeyDown); } // Key Down // void KListener_KeyDown(object sender, RawKeyEventArgs args) { Sound.KeyPressed(args); } MainWindow.xaml.cs KeyboardListener Class is in here. https://gist.github.com/Ciantic/471698 Sound.cs private static string wavKeyChar = "Sounds\\character.wav"; private static string wavKeyNum = "Sounds\\number.wav"; public static WaveFileReader wav = null; public static WaveOutEvent output = null; // Key Pressed // public static void KeyPressed(RawKeyEventArgs args) { // Letters if (args.Key >= Key.A && args.Key <= Key.Z) { PlaySound(wavKeyChar); } // Numbers else if (args.Key >= Key.D0 && args.Key <= Key.D9) { PlaySound(wavKeyNum); } } // Play Sound // public static void PlaySound(string sound) { wav = new WaveFileReader(sound); output = new WaveOutEvent(); output.NumberOfBuffers = 3; output.DesiredLatency = 500; output.Init(wav); output.Play(); } A: Try to show a MessageBox or something, to understand if the delayed event is the sound itself or the keypress event. 
If the MessageBox shows before the sound is played, then it's not a problem with the keyboard hook library.
doc_23535302
import urllib.request URL=urllib.request.urlretrieve("https://firebasestorage.googleapis.com/v0/b/cameraviewer-32936.appspot.com/o/images%2F49567?alt=media&token=1eded9d0-b9f0-48bf-b869-37756b31a94a") The URL object has a few key/value pairs. One of them was 'str' with a value of: 'C:\Users\DELL\AppData\Local\Temp\tmp4ki4w7we' The above value represents the path of the image fetched onto my device. But how can I open the image? I don't know where to go from here; I'm quite confused. A: It is pretty simple using the Python Imaging Library (PIL). PIL is a free and open-source library. How to Install PIL If you don't have Pillow installed, first open a terminal ( CTRL + ALT + T ) and install it using the command: sudo pip install Pillow Displaying an Image After you have successfully installed it, use the code below to show your downloaded image: from PIL import Image import urllib.request URL=urllib.request.urlretrieve("https://firebasestorage.googleapis.com/v0/b/cameraviewer-32936.appspot.com/o/images%2F49567?alt=media&token=1eded9d0-b9f0-48bf-b869-37756b31a94a") img = Image.open(URL[0]) img.show()
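For reference, urlretrieve actually returns a plain 2-tuple (local_path, headers) rather than key/value pairs, which is why Image.open(URL[0]) works. A small stdlib-only sketch, using a local file:// URL as an assumed stand-in for the Firebase URL so it runs offline:

```python
import pathlib
import tempfile
import urllib.request

# Create a local file to stand in for the remote image.
src = pathlib.Path(tempfile.mkdtemp()) / "source.bin"
src.write_bytes(b"fake image bytes")

# urlretrieve returns (local_path, headers); index 0 is a path on disk,
# e.g. something like C:\Users\...\Temp\tmp4ki4w7we for an http URL.
local_path, headers = urllib.request.urlretrieve(src.as_uri())

# The path can then be handed to any file-reading API, such as Image.open().
with open(local_path, "rb") as f:
    data = f.read()
print(len(data))
```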
doc_23535303
private ArrayList mainList; public void AddTest(int number) { Test t = new Test(); mainList.add(t); mainList.add(number); } As can be seen, we add an integer and something of class Test. In Rascal we create an object flow graph which consists of the following: OFG: { <|java+class:///java/util/ArrayList/this|,|java+constructor:///java/util/ArrayList/ArrayList()|>, <|java+variable:///test1/Main/AddTest(int)/t|,|java+field:///test1/Main/mainList|>, <|java+class:///test1/Main/this|,|java+constructor:///test1/Main/Main()|>, <|java+parameter:///test1/Main/AddTest(int)/scope(number)/scope(0)/number|,|java+field:///test1/Main/mainList|>, <|java+class:///test1/Test/this|,|java+constructor:///test1/Test/Test()|>, <|java+class:///test1/Test/this|,|java+field:///test1/Main/mainList|> } As can be seen in the OFG, an integer and a Test get added to the mainList. Using this knowledge we want to indicate that the ArrayList should contain type Object, thus private ArrayList mainList -> private ArrayList<Object> mainList For this we need a constraint solver which finds the lowest type or generalization. Therefore we want to augment the solve function of the following propagation method rel[loc,&T] propagate(OFG g, rel[loc,&T] gen, rel[loc,&T] kill, bool back) { rel[loc,&T] IN = { }; rel[loc,&T] OUT = gen + (IN - kill); gi = g<to,from>; set[loc] pred(loc n) = gi[n]; set[loc] succ(loc n) = g[n]; solve (IN, OUT) { IN = { <n,\o> | n <- carrier(g), p <- (back ? pred(n) : succ(n)), \o <- OUT[p] }; OUT = gen + (IN - kill); } return OUT; } However, we find it difficult to get started on this in Rascal. We have experience with IBM ILOG, so constraint programming is not new to us.
A: One idea: you could write another function or group of functions which relate type parameter positions to possible or necessary types: * *a many-to-many rel[loc typeparameter, TypeSymbol bound] could encode of which types the type parameter should at least be a subtype, according to the objects which flow into it in your flow analysis. *then an algorithm which computes a tight upper bound based on the alternatives would combine several supertypes for the same typeparameter and compute the least type which includes them all. This algorithm would make the rel[loc typeparameter, TypeSymbol bound] smaller and smaller until only one solution remains for every type parameter. * *You could use the extends and implements relations in the M3 model to find out about common super types, but you should also build in some knowledge about the Java type system, such as the fact that java.lang.Object is the top type for both classes and interfaces in Java, that classes have single inheritance, and that interfaces have multiple inheritance. TypeSymbol can be found in lang::java::m3::TypeSymbol
doc_23535304
I wanted to add a modification to the tags hash, but the modification had to be done at compile time since the value was dependent on a value stored in an input json. My first attempt was to do this: tags = node['attribute']['tags'] tags['new_key'] = json_value However, this resulted in a spec error that indicated I should use node.default, or the equivalent attribute assignment function. So I tried that: tags = node['attribute']['tags'] node.normal['attribute']['tags']['new_key'] = json_value While I did not have a spec error, the new key/value was not sticking. At this point I reached my "throw stuff at a wall" phase and used the hash.merge function, which I used to think was functionally identical to hash['new_key'] for a single key/value pair addition: tags = node['attribute']['tags'] tags.merge({ 'new_key' => 'json_value' }) This ultimately worked, but I do not understand why. What functional difference is there between the two methods that causes one to be seen as a modification of the original chef attribute, but not the other? A: The issue is you can't use node['foo'] like that. That accesses the merged view of all attribute levels. If you then want to set things, it wouldn't know where to put them. So you need to lead off by telling it where to put the data: tags = node.normal['attribute']['tags'] tags['new_key'] = json_value Or just: node.normal['attribute']['tags']['new_key'] = json_value Beware of setting things at the normal level, though: it is not reset at the start of each run, which is probably what you want here, but it does mean that even if you remove the recipe code doing the set, the value will still be in place on any node that already ran it. If you want to actually remove things, you have to do it explicitly.
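The behaviour can be mimicked in plain Ruby. This is only an illustrative sketch, not Chef's actual implementation: the merged attribute view (what node['attribute'] gives you) is effectively read-only, while a concrete precedence level like node.normal is an ordinary writable hash.

```ruby
# Two hypothetical precedence levels, like Chef's default and normal.
default_level = { 'attribute' => { 'tags' => { 'env' => 'prod' } } }
normal_level  = { 'attribute' => { 'tags' => {} } }

# node['attribute']['tags'] is analogous to this frozen, merged view.
merged = default_level['attribute']['tags'].merge(normal_level['attribute']['tags']).freeze

frozen_write_failed = begin
  merged['new_key'] = 'json_value'  # like tags['new_key'] = json_value
  false
rescue FrozenError, RuntimeError    # FrozenError on Ruby >= 2.5
  true
end
puts frozen_write_failed

# Writing through a specific level (node.normal[...]) targets a real,
# mutable hash, so the value sticks:
normal_level['attribute']['tags']['new_key'] = 'json_value'
puts normal_level['attribute']['tags']['new_key']
```

This also hints at why the merge call "worked": it returned a brand-new hash rather than modifying the merged view, so no error was raised, even though nothing was written back to the node.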
doc_23535305
This works quite fine if there is only one WPF window. Since that WPF application can open up more sub-windows (which are undocked windows), those windows are not closed when I dispose the ElementHost control. Is there an easy way to close that WPF window and all child windows from the WinForms side? I have tried Application.OpenForms but the sub WPF windows do not show up (makes sense somehow ;-)). One remark: I do own the WPF code, so I could implement something on the WPF side, but I really would like to stay on the WinForms side. Also I would like to consider situations where the WPF window code might be "stuck" and is not able to react and close itself. That's why I'd like to kill the windows from "outside". A: So I made up my mind and followed cdkMoose's recommendation to let this be handled by the WPF part. It's probably a good idea to let the clean-up be done by the one who has the knowledge about what has to be done. Thanks though!
doc_23535306
When I click it, it shows me what I need, i.e. the variable per se (which I typed in the code, as it should be): How can I disable these yellow previews? I just want to see the code I type, to avoid any confusion.
doc_23535307
EventHandler<WindowEvent> h; h = (WindowEvent event) -> { event.consume(); controller.end(); }; I'm seeing this type of EventHandler for the first time. What it is supposed to do is tell the controller to close the program (end() simply calls Platform.exit()) when the X at the top is clicked. This is because this GUI has multiple windows and all windows should be closed when the main window is closed. What I don't get is why the EventHandler simply waits for a seemingly random WindowEvent. Doesn't it have to be specified which WindowEvent it handles?
doc_23535308
I have referred this article, and tried implementing the same. I am looking to send a Full Screen Intent notification. Notifier.java public class Notifier extends Service { @Override public void onCreate() { super.onCreate(); Context context = this; Intent fullScreenIntent = new Intent(context, CamView.class); PendingIntent fullScreenPendingIntent = PendingIntent.getActivity(context, 0, fullScreenIntent, PendingIntent.FLAG_UPDATE_CURRENT); CharSequence name = getString(R.string.channel_name); String description = getString(R.string.channel_description); int importance = NotificationManager.IMPORTANCE_HIGH; NotificationChannel doorBellChannel = new NotificationChannel(getString(R.string.channel_name), name, importance); doorBellChannel.setDescription(description); NotificationManager notificationManager = getSystemService(NotificationManager.class); notificationManager.createNotificationChannel(doorBellChannel); NotificationCompat.Builder notificationBuilder = new NotificationCompat.Builder(context, String.valueOf(R.string.channel_name)) //.setChannelId(String.valueOf(R.string.channel_name)) .setSmallIcon(R.drawable.ic_launcher_foreground) .setContentTitle("Incoming call") .setContentText("(919) 555-1234") .setPriority(NotificationCompat.PRIORITY_MAX) .setCategory(NotificationCompat.CATEGORY_CALL) .setChannelId(String.valueOf(R.string.channel_name)) .setFullScreenIntent(fullScreenPendingIntent, true) .setAutoCancel(true); Notification incomingCallNotification = notificationBuilder.build(); Log.d(TAG, "onCreate: Here 1"); int notificationId = createID(); startForeground(notificationId, incomingCallNotification); Log.d(TAG, "onCreate: Here 2"); } @Override public int onStartCommand(Intent intent, int flags, int startId) { return super.onStartCommand(intent, flags, startId); } public int createID(){ Date now = new Date(); return Integer.parseInt(new SimpleDateFormat("ddHHmmss", Locale.US).format(now)); } @Nullable @Override public IBinder onBind(Intent intent) { return 
null; } } I have these 2 lines in the AndroidManifest.xml: <uses-permission android:name="android.permission.USE_FULL_SCREEN_INTENT" /> <uses-permission android:name="android.permission.FOREGROUND_SERVICE"/> When the service is called, I get the following error: 2021-06-11 17:17:27.213 9432-9432/? D/Oscillator: onCreate: Here 1 2021-06-11 17:17:27.231 9432-9432/? D/Oscillator: onCreate: Here 2 2021-06-11 17:17:27.240 9432-9432/? D/AndroidRuntime: Shutting down VM 2021-06-11 17:17:27.242 9432-9432/? E/AndroidRuntime: FATAL EXCEPTION: main Process: com.example.smartlock, PID: 9432 android.app.RemoteServiceException: Bad notification for startForeground at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1946) at android.os.Handler.dispatchMessage(Handler.java:107) at android.os.Looper.loop(Looper.java:214) at android.app.ActivityThread.main(ActivityThread.java:7397) at java.lang.reflect.Method.invoke(Native Method) at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:492) at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:935) 2021-06-11 17:17:27.260 9432-9432/? I/Process: Sending signal. PID: 9432 SIG: 9 A: I think, the problem is here. String.valueOf(R.string.channel_name) just gives a string containing the integer ID. As a result, due to unknown/unregistered channel, you got bad notification error. Hence, use the actual String. NotificationCompat.Builder notificationBuilder = /* Do not use String.valueOf() */ new NotificationCompat.Builder(context, getString(R.string.channel_name)) .setSmallIcon(R.drawable.ic_launcher_foreground) .setContentTitle("Incoming call") .setContentText("(919) 555-1234") .setPriority(NotificationCompat.PRIORITY_MAX) .setCategory(NotificationCompat.CATEGORY_CALL) .setChannelId(name) /* Do not use String.valueOf() */ .setFullScreenIntent(fullScreenPendingIntent, true) .setAutoCancel(true);
doc_23535309
# Save some codes threshold_count = 250 count_diag = Counter(df['code']) small_codes_itens = [k for k, count in count_diag.items() if count < threshold_count] # Only codes with less than 250 small_diagcodes = df['code'][df['code'].isin(small_codes_itens)].str.slice(start=0, stop=3, step=1) small_diagcodes = small_diagcodes[~small_diagcodes.str.contains("[a-zA-Z]").fillna(False)] small_diagcodes.fillna(value='1500', inplace=True) small_diagcodes = small_diagcodes.astype(int) ranges = [(1, 140), (140, 240), (240, 280), (280, 290), (290, 320), (320, 390), (390, 460), (460, 520), (520, 580), (580, 630), (630, 680), (680, 710), (710, 740), (740, 760), (760, 780), (780, 800), (800, 1000)] # Re-code in terms of integer for num, cat_range in enumerate(ranges): small_diagcodes = np.where(small_diagcodes.between(cat_range[0],cat_range[1]), num, small_diagcodes) However, I have an error and I cannot correct it. I only know that it is in the 'for' part, and I also know that the problem is that isinstance(small_diagcodes, pd.Series) is True before the loop and False after the loop. It is as if it converts instantly from a Series to an array. AttributeError: 'numpy.ndarray' object has no attribute 'between' Can anyone help me, please? For example, by replacing the loop with something else? A: Instead of the between statement try: np.logical_and(small_diagcodes>cat_range[0], small_diagcodes<cat_range[1])
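One way to make the loop robust is to leave pandas space deliberately: convert to a NumPy array once, mask with np.logical_and as the answer suggests, and wrap the result back into a Series at the end. The data below is a made-up stand-in for small_diagcodes, and only a few of the ranges are used:

```python
import numpy as np
import pandas as pd

# Hypothetical stand-in for small_diagcodes after the cleaning steps above.
small_diagcodes = pd.Series([5, 150, 300, 810])
ranges = [(1, 140), (140, 240), (240, 280), (800, 1000)]

codes = small_diagcodes.to_numpy()   # ndarray from here on, so no .between
recoded = codes.copy()
for num, (lo, hi) in enumerate(ranges):
    # Mask on the original codes so already-recoded values (0, 1, 2, ...)
    # cannot accidentally match a later range.
    mask = np.logical_and(codes >= lo, codes <= hi)
    recoded = np.where(mask, num, recoded)

# Back to a Series, preserving the original index.
recoded = pd.Series(recoded, index=small_diagcodes.index)
print(recoded.tolist())
```

pandas also has pd.cut for exactly this kind of range-to-category binning, which may end up simpler than a hand-rolled loop.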
doc_23535310
to instantiate a table and the corresponding mapped class. It is related to the question posted here: Dynamic Class Creation in SQLAlchemy. So far I have the following: table = Table(tbl, metadata, *(Column(col, ctype, primary_key=pk, index=idx) for col, ctype, pk, idx in zip(attrs, types, primary_keys, indexes)) ) This creates the table object. Now I need to create the corresponding class. mydict={'__tablename__':tbl} cls = type(cls_name, (Base,), mydict) This gives me the following error: ArgumentError: Mapper Mapper|persons_with_coord|t_persons_w_coord could not assemble any primary key columns for mapped table My question is: how do I specify the primary keys as part of the class creation? And after the class is created, do I need to call mapper as follows: mapper(cls, table) A: The mydict mapping has to either include a table object or specify that the table needs to be loaded from the database. This should work mydict={'__tablename__':tbl, '__table__': table} This should also work mydict={'__tablename__':tbl, '__table_args__': ({'autoload':True},)} The primary keys are already specified when you create the table. It shouldn't be necessary to specify them again when creating a declarative; however, if it is required (say, for a complex key), it should be specified in __table_args__. The mapper doesn't need to be called on a declarative class. However, the newly created declarative doesn't have a metadata object. Adding the metadata will make the dynamically created declarative behave more like a regular declarative, so I would recommend putting this line after class creation cls.metadata = cls.__table__.metadata so, the full edits could be: mydict={'__tablename__':tbl, '__table__': table} cls = type(cls_name, (Base,), mydict) cls.metadata = cls.__table__.metadata
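A minimal end-to-end sketch of the approach. The table name and columns here are made up, and the declarative_base import location varies between SQLAlchemy versions, hence the fallback import:

```python
from sqlalchemy import Column, Integer, String, Table

try:
    from sqlalchemy.orm import declarative_base  # SQLAlchemy >= 1.4
except ImportError:
    from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

# Build the Table with its primary key up front, as in the question.
table = Table(
    "persons",
    Base.metadata,
    Column("id", Integer, primary_key=True),
    Column("name", String, index=True),
)

# Handing the ready-made Table to the class via __table__ means the
# primary key comes along with it; no separate mapper() call is needed.
cls = type("Person", (Base,), {"__table__": table})

print(list(cls.__table__.primary_key.columns.keys()))
```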
doc_23535311
Now, I know that the filesystem library going into C++17 is based on Boost::Filesystem; but - are they similar enough for me to use the Boost library and then seamlessly switch to the standard version at a later time, without changing more than, say, a using statement? Or are there (minor/significant) differences between the two? I know that for the case of variant, the Boost and the standard library versions differ quite a bit. A: Caveat: This answer does not reflect several last-minute changes before C++17 was finalized. See @DavisHerring's answer. The Boost filesystem inserter and extractor use & as the escape character for " and &. The standard will use std::quoted (which uses \ by default) to escape ", which in turn uses \\ to escape \; see this reference. Demo It is likely the only difference between them. The reason for that difference can be found at N3399 A: There are a number of differences. Some were, I believe, Boost changes that were never propagated. For example, there is no path.filename_is_dot() query (as discussed below, it would be less useful in std::filesystem anyway). There was also a good bit of late-breaking news on this front: * *Support for non-POSIX-like filesystems: * *Specify whether a string is OS-native or POSIX-like (or let the implementation decide, which is (still) the default) *An implementation may define additional file types (beyond regular, directory, socket, etc.)
*An implementation may define file_size for a directory or device file *filename(), normalization, and relative/absolute conversions redefined (examples for POSIX): * *path("foo/.").lexically_normal()=="foo/" (is the opposite in Boost) *path("foo/").filename()=="" (is path(".") in Boost) *remove_filename() leaves the trailing slash and is thus idempotent (it assigns parent_path() in Boost) *path(".profile").extension()=="" (is the whole name in Boost) *path decompositions and combinations can preserve things like alternate data stream names that are normally invisible *path("foo")/"/bar"=="/bar" (is path("foo/bar") in Boost), which allows composing relative file names with others (absolute or relative) and replaces Boost's absolute() *Boost's system_complete() (which takes only one argument) is renamed to absolute() *canonical() thus takes only one argument (fixed in a DR) *lexically_relative() handles .. and root elements correctly *permissions() takes more arguments (Boost combines them into a bitmask) Note that Boost.Filesystem v4 is under development and is supposed to be C++17-compatible (but therefore incompatible in many respects with v3).
doc_23535312
app.post('/addsession', (req, res) => { pathJoiner = require("path"); process.env.GOOGLE_APPLICATION_CREDENTIALS = pathJoiner.join(__dirname, "/config/AgentKeyFile.json"); createSessionEntityType(req.body.path, res); }); function createSessionEntityType(sessionPath, res) { const dialogflow = require('dialogflow'); // Instantiates clients const sessionEntityTypesClient = new dialogflow.SessionEntityTypesClient(); const entitiesArr = [{ "value": "Test Name", "synonyms": ["Test Name", "Test"] }]; const createSessionEntityTypeRequest = { parent: sessionPath, session_entity_type: { name: sessionPath + "/entityTypes/Friends-Name", entity_override_mode: "ENTITY_OVERRIDE_MODE_OVERRIDE", entities: entitiesArr }, }; sessionEntityTypesClient .createSessionEntityType(createSessionEntityTypeRequest) .then(responses => { console.log("Entity type created: " + responses); res.setHeader('Content-Type', 'application/json'); res.send(JSON.stringify(responses.body)); }) } However, when I run this code off Heroku server, I get the following error: UnhandledPromiseRejectionWarning: Error: 3 INVALID_ARGUMENT: Name '' does not match patterns 'projects/{projectId=*}/agent/environments/{environmentId=*}/users/{userId=*} /sessions/{sessionId=*}/entityTypes/{entityTypeName=*},projects/ {projectId=*}/agent/sessions/{sessionId=*}/entityTypes/{entityTypeName=*}' I am unsure why it keeps saying the name parameter is empty. I know I'm missing something but can't figure out what. 
A: I changed my code to the following and the issue vanished const dialogflow = require('dialogflow'); const projectId = "projId"; // Instantiates clients const sessionEntityTypesClient = new dialogflow.SessionEntityTypesClient(); const sessionPath = sessionEntityTypesClient.sessionPath(projectId, sessionId); const sessionEntityTypePath = sessionEntityTypesClient.sessionEntityTypePath(projectId, sessionId, "Entity-Name"); const entitiesArr = [{ "value": "Test Name", "synonyms": ["Test Name", "Test"] }]; const createSessionEntityTypeRequest = { parent: sessionPath, sessionEntityType: { name: sessionEntityTypePath, entityOverrideMode: "ENTITY_OVERRIDE_MODE_OVERRIDE", entities: entitiesArr }, }; sessionEntityTypesClient .createSessionEntityType(createSessionEntityTypeRequest) .then(responses => { console.log("Entity type created: " + responses); res.setHeader('Content-Type', 'application/json'); res.send(JSON.stringify(responses[0])); })
doc_23535313
return fetch("https://ide.geeksforgeeks.org/main.php",{ method: "POST", headers: { Accept : "application/json", "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8" }, body: data }) .then(response => response.json()) .catch(err => console.log(err)) } I'm sending a POST request to G4G's compiler API. It's working fine in Postman but not working in my React app. A: When making a call from a website there are security considerations to be taken into account. One of them is CORS. In a nutshell, the browser asks the server if it allows HTTP calls (actually, only calls that are considered non-simple). Postman works because it doesn't do this check. G4G needs to respond to OPTIONS requests with a proper CORS response that allows your host to call them. A: Note that Access-Control-Allow-Origin: * is a response header: the server has to send it, and adding it to the request from your app has no effect. Let me know if it's working A: Add this on the backend in the .htaccess file <IfModule mod_headers.c> Header set Access-Control-Allow-Origin "*" Header set Access-Control-Allow-Methods "POST,GET,OPTIONS,DELETE,PUT" Header set Access-Control-Allow-Headers "*" </IfModule>
doc_23535314
A: Yes, you can use S3 as storage for your training datasets. Refer to the diagram in this link, which describes how everything works together: https://docs.aws.amazon.com/sagemaker/latest/dg/how-it-works-training.html You may also want to check out the following blogs, which detail File mode and Pipe mode, two mechanisms for transferring training data: * *https://aws.amazon.com/blogs/machine-learning/accelerate-model-training-using-faster-pipe-mode-on-amazon-sagemaker/ In File mode, the training data is downloaded first to an encrypted EBS volume attached to the training instance prior to commencing the training. However, in Pipe mode the input data is streamed directly to the training algorithm while it is running. *https://aws.amazon.com/blogs/machine-learning/using-pipe-input-mode-for-amazon-sagemaker-algorithms/ With Pipe input mode, your data is fed on-the-fly into the algorithm container without involving any disk I/O. This approach shortens the lengthy download process and dramatically reduces startup time. It also offers generally better read throughput than File input mode. This is because your data is fetched from Amazon S3 by a highly optimized multi-threaded background process. It also allows you to train on datasets that are much larger than the 16 TB Amazon Elastic Block Store (EBS) volume size limit. The blog also contains Python code snippets using Pipe input mode for reference.
doc_23535315
I programmed a Stack: struct Node<T>{ data:T, next:Option<Box<Node<T>>> } pub struct Stack<T>{ first:Option<Box<Node<T>>> } impl<T> Stack<T>{ pub fn new() -> Self{ Self{first:None} } pub fn push(&mut self, element:T){ let old = self.first.take(); self.first = Some(Box::new(Node{data:element, next:old})); } pub fn pop(&mut self) -> Option<T>{ match self.first.take(){ None => None, Some(node) =>{ self.first = node.next; Some(node.data) } } } pub fn iter(self) -> StackIterator<T>{ StackIterator{ curr : self.first } } } pub struct StackIterator<T>{ curr : Option<Box<Node<T>>> } impl<T> Iterator for StackIterator<T>{ type Item = T; fn next (&mut self) -> Option<T>{ match self.curr.take(){ None => None, Some(node) => { self.curr = node.next; Some(node.data) } } } } With a Stack Iterator, which is created calling the iter() Method on a Stack. The Problem: I had to make this iter() method consuming its Stack, therefore a stack is only iteratable once. How can I implement this method without consuming the Stack and without implementing the copy or clone trait? A: How can I implement this method without consuming the Stack and without implementing the copy or clone trait? Have StackIterator borrow the stack instead, and the iterator return references to the items. Something along the lines of impl<T> Stack<T>{ pub fn iter(&self) -> StackIterator<T>{ StackIterator{ curr : &self.first } } } pub struct StackIterator<'stack, T: 'stack>{ curr : &'stack Option<Box<Node<T>>> } impl<'s, T: 's> Iterator for StackIterator<'s, T>{ type Item = &'s T; fn next (&mut self) -> Option<&'s T>{ match self.curr.as_ref().take() { None => None, Some(node) => { self.curr = &node.next; Some(&node.data) } } } } (I didn't actually test this code so it's possible it doesn't work) That's essentially what std::iter::Iter does (though it's implemented as a way lower level). 
That said, learning Rust by implementing linked lists probably isn't the best idea in the world: linked lists are degenerate graphs, and the borrow checker is not on very friendly terms with graphs.
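For reference, here is a compiling version of the borrowing iterator sketched in the answer (pop is omitted for brevity). Using Option::as_deref keeps the iterator state as a plain Option<&Node<T>>, which avoids the nested-reference juggling:

```rust
struct Node<T> {
    data: T,
    next: Option<Box<Node<T>>>,
}

pub struct Stack<T> {
    first: Option<Box<Node<T>>>,
}

impl<T> Stack<T> {
    pub fn new() -> Self {
        Self { first: None }
    }

    pub fn push(&mut self, element: T) {
        let old = self.first.take();
        self.first = Some(Box::new(Node { data: element, next: old }));
    }

    // Borrows the stack, so it can be iterated any number of times.
    pub fn iter(&self) -> StackIterator<'_, T> {
        StackIterator { curr: self.first.as_deref() }
    }
}

pub struct StackIterator<'s, T> {
    curr: Option<&'s Node<T>>,
}

impl<'s, T> Iterator for StackIterator<'s, T> {
    type Item = &'s T;

    fn next(&mut self) -> Option<&'s T> {
        let node = self.curr?;
        self.curr = node.next.as_deref();
        Some(&node.data)
    }
}

fn main() {
    let mut s = Stack::new();
    s.push(1);
    s.push(2);
    // The stack can now be iterated more than once.
    let first: Vec<&i32> = s.iter().collect();
    let second: Vec<&i32> = s.iter().collect();
    assert_eq!(first, vec![&2, &1]);
    assert_eq!(second, vec![&2, &1]);
    println!("ok");
}
```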
doc_23535316
A: The Google Maps API for Android has a VisibleRegion object, which contains attributes such as the coordinates of the four visible corners. You need to check whether your Marker is included in these bounds when the camera moves. Fortunately, the Google developers did it for you: map.getProjection().getVisibleRegion().latLngBounds.contains(marker.getPosition()) (map.getBounds().contains(marker.getPosition()) is the equivalent in the Maps JavaScript API).
doc_23535317
I'm working on a project to produce a new USB device. Let's assume that this device is a webcam. One of the main features of this device is that it should have a very smart API so that programmers can get wide access to hardware parts. For example, controlling the camera lens manually with a slider, the same applies for the flash intensity and capturing frame rate. As far as I know, all the device functionality should be made available and documented by the device driver before working on the API. Unfortunately, I've been asked (as a C/C++ developer) to start designing the API as a method to guide the production process by its final deliverable functionality. So is there any work for a developer to do before having the device driver? Also, could you please provide me a sample code (pseudo code) of how an API makes use of device driver to perform some functionality? A: Designing an API usually just means writing a C-language header file with the names of the methods your library provides, along with their arguments, return types, and any necessary documentation. So, yes, you can certainly start writing that file before you have a device driver. Since you have two separate questions, I think you should have posted them separately on this site. But anyway, the answer to your second question depends heavily on what operating system you are using. In Windows, you would probably use DeviceIoControl and in Linux you would probably use ioctl (or just read and write).
doc_23535318
$("table").tablesorter({ theme: 'blue', widgets: ["zebra", "filter", "scroller" ] }); But my table starts out null or empty and I fill in the data afterwards, so I have to use Update. $("table").trigger("updateAll") There's my problem: I can't get Scroller and Filter to work at the same time, just one or the other. Can someone help me? (Sorry if my English is bad) Example: http://jsfiddle.net/s4ACj/5/ A: There are two issues. * *The filter widget does not initialize on an empty table. *The scroller widget needs a lot of bug fixes (which I have not had time to do) * *including adding the filter row if it was not present on initialization. *completely removing the scroller widget when updating the table *etc. In order to work around this issue, try changing your append code to this (updated demo): $("#append").click(function () { // add some html var html = "<tr><td>Aaron</td><td>Johnson Sr</td><td>Atlanta</td><td>GA</td></tr>", // scroller makes a clone of the table before the original $table = $('table.update:last'); // append new html to table body $table.find("tbody").append(html); // remove scroller widget completely $table.closest('.tablesorter-scroller').find('.tablesorter-scroller-header').remove(); $table .unwrap() .find('.tablesorter-filter-row').removeClass('hideme').end() .find('thead').show().css('visibility', 'visible'); $table[0].config.isScrolling = false; // let the plugin know that we made an update, then the plugin will // automatically sort the table based on the header settings $("table.update").trigger("update"); return false; }); I'll try to patch up, bug fix and probably completely rewrite the scroller widget when I have time.
doc_23535319
I could run these tasks by going into the Gradle task list -> Reporting, and there I could see them. Some time ago, Android Studio removed these tasks, but we could re-enable them by going to Experimental features and unchecking the Do not build Gradle task list during Gradle sync option, as mentioned here. But the above-mentioned option has been removed in the latest version of Android Studio, Electric Eel, and I can't seem to find out how to get those tasks back. Any help would be appreciated.
doc_23535320
A: There is. Use: <meta name="viewport" content="initial-scale=1.0, width=900"> If you want to prevent zooming entirely, use this instead: <meta name="viewport" content="initial-scale=1.0, width=900, maximum-scale=1.0, user-scalable=no"> A: I am using this code; it works fine in all browsers except the Opera Android browser. In Opera my site floats, but the width is wrapped. <meta content='width=device-width,initial-scale=1.0,minimum-scale=1.0,maximum-scale=1.0,user-scalable=no' name='viewport'/>
doc_23535321
[features] foo = [] If this feature is enabled I want to print "FOO": fn main() { #[cfg(feature = "foo")] println!("FOO"); } I can then compile (and run) the code like this: cargo run --features foo However, I'd prefer to use the shorthand that I see in the docs. Something like this: fn main() { #[cfg(foo)] println!("FOO"); } However, when using the same cargo run command as earlier, the print statement is compiled out and nothing is printed. What am I missing? A: According to the docs, #[cfg(foo)] is a configuration option name, and can be set like so: rustc --cfg foo main.rs. For how to set it with cargo, see this question. #[cfg(feature = "foo")] is a configuration option key-value pair.
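The difference can be seen with plain rustc, which is where bare --cfg flags come from; cargo only sets feature = "..." cfg values for you, which is why the key-value form is the one that responds to cargo run --features foo:

```shell
cat > main.rs <<'EOF'
fn main() {
    #[cfg(foo)]
    println!("FOO");
}
EOF

# Bare cfg names are set with rustc's --cfg flag, not with cargo features.
rustc --cfg foo main.rs -o main_foo
./main_foo      # prints FOO

rustc main.rs -o main_plain
./main_plain    # prints nothing: the statement was compiled out
```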
doc_23535322
app.js: angular.module('app', ['ionic', 'app.controllers', 'app.routes', 'app.services', 'app.directives']) .run(function($ionicPlatform,$rootScope) { $ionicPlatform.ready(function() { // Hide the accessory bar by default (remove this to show the accessory bar above the keyboard // for form inputs) if (window.cordova && window.cordova.plugins && window.cordova.plugins.Keyboard) { cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true); cordova.plugins.Keyboard.disableScroll(true); } if (window.StatusBar) { // org.apache.cordova.statusbar required StatusBar.styleDefault(); } pushNotification = window.plugins.pushNotification; pushNotification.register( onNotification, errorHandler, { 'badge': 'true', 'sound': 'true', 'alert': 'true', 'ecb': 'onNotification', 'senderID': '999999999999', } ); }); }) window.onNotification = function(e){ switch(e.event){ case 'registered': if(e.regid.length > 0){ var device_token = e.regid; alert('registered :'+device_token); $rootScope.devicetoken = device_token; } break; case 'message': alert('msg received: ' + e.message); break; case 'error': alert('error occured'); break; } }; window.errorHandler = function(error){ alert('an error occured'); } I am getting device_token and it shows in the alert, but it is not stored on $rootScope for use in the controller. Controller.js: angular.module('app.controllers', []) .controller('onWalletWelcomesCtrl', function($scope, $ionicModal,User,$ionicLoading,$rootScope) { $ionicModal.fromTemplateUrl('signup-modal.html', { id: '1', // We need to use an ID to identify the modal that is firing the event! scope: $scope, backdropClickToClose: false, animation: 'slide-in-up' }).then(function(modal) { $scope.oModal1 = modal; }); $scope.proceed = function(){ alert($rootScope.devicetoken); $ionicLoading.show({template: '<ion-spinner icon="android"></ion-spinner>'}); } }) I am getting undefined when alerting in the proceed function. How should I use $rootScope in window.onNotification?
My main intention is to pass the devicetoken to the controller. Please let me know the best practice for sharing the variables. angular.module('app', ['ionic', 'app.controllers', 'app.routes', 'app.services', 'app.directives']) .run(function($ionicPlatform,$rootScope) { $ionicPlatform.ready(function() { if (window.cordova && window.cordova.plugins && window.cordova.plugins.Keyboard) { cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true); cordova.plugins.Keyboard.disableScroll(true); } if (window.StatusBar) { // org.apache.cordova.statusbar required StatusBar.styleDefault(); } pushNotification = window.plugins.pushNotification; pushNotification.register( onNotification, errorHandler, { 'badge': 'true', 'sound': 'true', 'alert': 'true', 'ecb': 'onNotification', 'senderID': '9999999999', } ); }); }) window.onNotification = function(e){ switch(e.event){ case 'registered': if(e.regid.length > 0){ var device_token = e.regid; alert('registered :'+device_token); $rootScope.devicetoken = "hi"; $scope.$apply(); } break; case 'message': alert('msg received: ' + e.message); break; case 'error': alert('error occured'); break; } }; window.errorHandler = function(error){ alert('an error occured'); } Still I am getting undefined while alerting in the controller. A: You need to move your window.onNotification = function(e){..} declaration inside of your .run(function($ionicPlatform,$rootScope) {...} block. $rootScope will be undefined in your onNotification handler where you currently have it placed - you need to declare your onNotification handler inside of the run() block so that it has access to the rootScope object while it is being defined. Also, because you will be updating the rootScope with an event handler that lives outside of the Angular lifecycle (Angular doesn't know about it), you need to trigger a new digest cycle manually.
In your notification handler, you need to wrap your $rootScope.devicetoken = device_token; line in a $rootScope.$apply(), so that it looks like this: $rootScope.$apply(function(){ $rootScope.devicetoken = device_token; }); A: You are using $rootScope in window.onNotification, which is outside the Angular context, so you need to tell Angular to run an update: add $rootScope.$apply(); after updating $rootScope. A: Try to define $rootScope.devicetoken in the run scope: .run(function($ionicPlatform,$rootScope) { $rootScope.devicetoken = ''; and make sure that window.onNotification is executed before $scope.proceed.
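Putting the first answer's fix together as runnable code (the `window` and `$rootScope` objects are stubbed here purely for illustration — in the app they come from the browser and from Angular):

```javascript
// Stand-ins for the browser and Angular objects (illustration only).
const window = {};
const $rootScope = {
  devicetoken: '',
  // Angular's real $apply runs the function and then a digest cycle;
  // this stub just runs the function.
  $apply: (fn) => fn(),
};

// The fix: declare the handler inside run() so it closes over $rootScope,
// and wrap the write in $rootScope.$apply() to trigger a digest.
function run($rootScope) {
  window.onNotification = function (e) {
    if (e.event === 'registered' && e.regid.length > 0) {
      $rootScope.$apply(function () {
        $rootScope.devicetoken = e.regid;
      });
    }
  };
}

run($rootScope);
window.onNotification({ event: 'registered', regid: 'abc123' });
console.log($rootScope.devicetoken); // 'abc123'
```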
doc_23535323
{ fieldLabel:"PIN/Password", actionText:"Edit", fieldValue:"****", dialog:new MyAccount.DialogBox({ id:"win_editPIN", name:"editPIN", headerContent:"Edit Password:", updateURL:"/uiapi/myaccount/setAccountPIN", items:[{ id:"txt_currentPIN", name: "currentPIN", fieldLabel: "Current Password", validationEvent:"blur", allowBlank: false, maxLength:20, inputType:"password" },{ id:"txt_newPIN", name: "newPIN", fieldLabel: "New Password", vtype:"confirmPassword", validationEvent:"blur", allowBlank: false, maxLength:20, inputType:"password" },{ id:"txt_confirmPIN", fieldLabel: "Confirm Password", vtype:"confirmPassword", validationEvent:"blur", initialPin:"txt_newPIN", allowBlank: false, maxLength:20, inputType:"password" }], validateForm:function() { var formPanel = Ext.getCmp("win_editPIN").formPanel.getForm(); // Save the fields we are going to insert values into var pass1 = formPanel.findField("txt_newPIN"); var pass2 = formPanel.findField("txt_confirmPIN"); if (pass1 != pass2) return {success:false, errorMessage:"Passwords do not match!"} } }) A: Thanks Nghia, your answer got me halfway there, allowing me to select the fields. The rest was making a custom validator option, here is the code, just showing the last item for brevity. 
{ id:"txt_confirmPIN", name: "newPIN_confirm", fieldLabel: "Confirm Password", validationEvent:"blur", initialPin:"txt_newPIN", allowBlank: false, maxLength:20, inputType:"password", // This custom validator option expects a return value of boolean true if it // validates, and a string with an error message if it doesn't validator: function() { var formPanel = Ext.getCmp("win_editPIN").formPanel.getForm(); // Grab the two password values to compare var pass1 = Ext.getCmp('txt_newPIN').getValue(); var pass2 = Ext.getCmp('txt_confirmPIN').getValue(); console.log("pass 1 = " + pass1 + "--pass 2 = " + pass2); if (pass1 == pass2) return true; else return "Passwords do not match!"; } } This validator option expects a return value of true if it validates, and a string with an error message if it doesn't. A: You need to pass the field name instead of the field id when using the findField() method. var pass1 = formPanel.findField("newPIN"); or simply just get its value directly: var pass1 = Ext.getCmp('txt_newPIN').getValue();
doc_23535324
import java.text.NumberFormat; import java.util.Scanner; public class Main { public static void main(String[] args) { final byte MONTHS_IN_YEAR = 12; final byte PERCENT = 100; int principal = 0; float monthlyInterest = 0; int numberOfPayments = 0; Scanner scanner = new Scanner(System.in); System.out.println("Enter a number between 1,000 and 1,000,000."); while(true) { System.out.print("Principal: "); principal = scanner.nextInt(); if (principal >= 1000 && principal <= 1000_000) break; System.out.println("Enter a valid input"); } while(true) { System.out.print("Annual Interest Rate: "); float annualInterest = scanner.nextFloat(); if (annualInterest > 0 && annualInterest <= 15) { monthlyInterest = annualInterest / PERCENT / MONTHS_IN_YEAR; break; } System.out.println("Enter a value between 1 and 15"); } while(true) { System.out.print("Period (Years): "); byte years = scanner.nextByte(); if (years > 0 && years <= 30) { numberOfPayments = years * MONTHS_IN_YEAR; break; } System.out.println("Enter a value between 1 and 30"); } double mortgage = principal * (monthlyInterest * Math.pow(1 + monthlyInterest, numberOfPayments)) / (Math.pow(1 + monthlyInterest, numberOfPayments) - 1); String mortgageFormatted = NumberFormat.getCurrencyInstance().format(mortgage); System.out.println("Mortgage: " + mortgageFormatted); } } Output #1 (when the user inputs abc for Period): Enter a number between 1,000 and 1,000,000.
Principal: 600 Enter a valid input Principal: 2000 Annual Interest Rate: 0 Enter a value between 1 and 15 Annual Interest Rate: 3.5 Period (Years): abc Exception in thread "main" java.util.InputMismatchException at java.base/java.util.Scanner.throwFor(Scanner.java:939) at java.base/java.util.Scanner.next(Scanner.java:1594) at java.base/java.util.Scanner.nextByte(Scanner.java:2002) at java.base/java.util.Scanner.nextByte(Scanner.java:1956) at com.company.Main.main(Main.java:39) Process finished with exit code 1 Output #2 (years is a byte; when the user inputs 300, instead of crashing the program should ask the user to enter a valid number): Enter a number between 1,000 and 1,000,000. Principal: 1000000 Annual Interest Rate: 3.9 Period (Years): 300 Exception in thread "main" java.util.InputMismatchException: Value out of range. Value:"300" Radix:10 at java.base/java.util.Scanner.nextByte(Scanner.java:2008) at java.base/java.util.Scanner.nextByte(Scanner.java:1956) at com.company.Main.main(Main.java:39) Process finished with exit code 1 A: You should use try/catch, like this: while(true) { System.out.print("Period (Years): "); try { byte years = scanner.nextByte(); if (years > 0 && years <= 30) { numberOfPayments = years * MONTHS_IN_YEAR; break; } System.out.println("Enter a value between 1 and 30"); } catch (Exception e) { System.out.println("Value must be numeric"); scanner.nextLine(); // discard the bad token, otherwise nextByte() keeps failing on it } }
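Alternatively, Scanner.hasNextByte() lets you reject bad tokens before they ever throw. A minimal sketch (the class and method names here are mine, and the input is simulated with a string):

```java
import java.util.Scanner;

public class ReadByteDemo {
    // Keeps reading until a byte in [min, max] appears; non-numeric and
    // out-of-range tokens are discarded instead of raising an exception.
    static byte readByteInRange(Scanner sc, byte min, byte max) {
        while (true) {
            if (sc.hasNextByte()) {
                byte v = sc.nextByte();
                if (v >= min && v <= max) {
                    return v;
                }
                System.out.println("Enter a value between " + min + " and " + max);
            } else {
                // next() consumes the offending token so the loop can move on
                System.out.println("Not a valid number: " + sc.next());
            }
        }
    }

    public static void main(String[] args) {
        // "abc" fails hasNextByte(), "300" overflows byte, "25" is accepted.
        Scanner sc = new Scanner("abc 300 25");
        System.out.println(readByteInRange(sc, (byte) 1, (byte) 30));
    }
}
```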
doc_23535325
Blur code is pretty simple: <filter id='bf1'> <feGaussianBlur stdDeviation='0 50' /> </filter> DEMO (bug presents in desktop Safari only) Source code If I scroll down a bit, the artifact stays put. In Chrome and Firefox everything works just fine. Help me please to get rid of these transparency bars. EDITED: I found that Safari splits the image into 512x512 squares. Demo Source code
doc_23535326
#define with(var) for(int i##__LINE__=0;i##__LINE__<1;)for(var;i##__LINE__<1;++i##__LINE__) Sample usage: #include <cstdio> #include "FileClass.hpp" #include "with.hpp" int main(){ with(FileClass file("test.txt")){ printf("%s\n",file.readlines().c_str());} return 0;} The idea is that a doubly-nested for loop has an outer obfuscated iteration variable which is incremented once in the inner loop to break it. This causes the following code to be executed once with var in its scope. Are there any downsides to this? If I obfuscate the iteration variable enough, there would be almost no chance of having a name clash, it uses only standard preprocessor features in a way that doesn't seem to have any possibility of backfiring, and it's very easy to understand. It almost seems too good to be true - is there any reason this isn't used everywhere? A: is there any reason this isn't used everywhere? Yes, C++ is not Python, and if I understood your code correctly, this does exactly the same: { FileClass file("test.txt"); printf("%s\n", file.readlines().c_str()); } So, what are the downsides? Unnatural syntax, usage of the preprocessor for code obfuscation, achieving the same thing as above with much more boilerplate code, and unidiomatic use of C++. Enough? C++ has the very important concept of value types and scope-based deterministic destruction of stack variables. This leads to very important idioms like SBRM (scope-bound resource management, also called RAII). A: It's similar in spirit to the macros used in the original Bourne shell, which was written in C. They were intended to provide a syntax similar to Algol 68, which apparently was Bourne's preferred language. 
A small sample from http://minnie.tuhs.org/cgi-bin/utree.pl?file=V7/usr/src/cmd/sh/mac.h : #define IF if( #define THEN ){ #define ELSE } else { #define ELIF } else if ( #define FI ;} The result tends to be code that's difficult to read either for C programmers (who have to familiarize themselves with your macros as well as the syntax of C itself), or for Algol 68, or in your case Python programmers. If I read a C++ program that uses your with() macro, I can't really understand what it's doing without (a) realizing that with() is a macro (macros are conventionally given all-caps names), (b) tracking down the macro definition, and (c) deciphering the rather odd C code that results from expanding the macro. That's assuming I don't fall into the trap of thinking that it's a compiler-specific extension, or that C has a with statement that I didn't know about. Or, if I happen to understand Python with statements, then I still need to (a) realize that your with() macro is intended to mimic a Python with statement, and (b) trust you to get it right. Years ago, I thought that this: #define ever ;; ... for (ever) { ... } was very clever. I still do think it's clever; but I no longer think that cleverness is such a good thing. A: It almost seems too good to be true - is there any reason this isn't used everywhere? Sure, that is a great structure that works well for you, but what about the rest of the people on the team? What about the future you - in six months - that can't remember the cute macro that you wrote? In short, syntactic gymnastics like this are great exercises at home, but are terrible in a collaborative environment, or even for code that you alone will maintain in the future. Stick to the best practices and your code will be much easier to maintain and understand. That isn't to say that you shouldn't do this kind of thing at home. Do what you enjoy, keep your brain limber. But don't use your warm-up exercises in production! A: Is this really "with"? 
The big advantage of "with" in Python is that it works with context managers, which take care of automatically closing/releasing/unlocking/unallocating the variable in the "with" statement. In C++, there are standard methods for doing this. For instance, auto_ptr will take care of auto-deleting pointers that were allocated with new. Learn these standard idioms before reinventing them yourself. A: The obvious downside to this is that no one will be able to read your spaghetti bastardized-Python-in-C++ code. So good luck maintaining that code.
doc_23535327
onChange={(e) => data.motto = (e.target as any).value} How do I correctly define the typings for the class, so I wouldn't have to hack my way around the type system with any? export interface InputProps extends React.HTMLProps<Input> { ... } export class Input extends React.Component<InputProps, {}> { } If I put target: { value: string }; I get: ERROR in [default] /react-onsenui.d.ts:87:18 Interface 'InputProps' incorrectly extends interface 'HTMLProps<Input>'. Types of property 'target' are incompatible. Type '{ value: string; }' is not assignable to type 'string'. A: When using a child component, we check the type like this. Parent component: export default () => { const onChangeHandler = (e: React.ChangeEvent<HTMLInputElement>): void => { console.log(e.currentTarget.value) } return ( <div> <Input onChange={onChangeHandler} /> </div> ); } Child component: type Props = { onChange: (e: React.ChangeEvent<HTMLInputElement>) => void } export const Input: React.FC<Props> = ({ onChange }) => ( <input type="text" onChange={onChange} /> ) A: An alternative that has not been mentioned yet is to type the onChange function instead of the props that it receives. Using React.ChangeEventHandler: const stateChange: React.ChangeEventHandler<HTMLInputElement> = (event) => { console.log(event.target.value); }; A: You can do the following: import { ChangeEvent } from 'react'; const onChange = (e: ChangeEvent<HTMLInputElement>) => { const newValue = e.target.value; } A: Generally event handlers should use e.currentTarget.value, e.g.: const onChange = (e: React.FormEvent<HTMLInputElement>) => { const newValue = e.currentTarget.value; } You can read why it is so here (Revert "Make SyntheticEvent.target generic, not SyntheticEvent.currentTarget."). UPD: As mentioned by @roger-gusmao, ChangeEvent is more suitable for typing form events. const onChange = (e: React.ChangeEvent<HTMLInputElement>) => { const newValue = e.target.value; } A: Here is a way with ES6 object destructuring, tested with TS 3.3.
This example is for a text input. name: string = ''; private updateName({ target }: { target: HTMLInputElement }) { this.name = target.value; } A: This works for me; it is also framework-agnostic. const handler = (evt: Event) => { console.log((evt.target as HTMLInputElement).value) } A: This is when you're working with a FileList Object: onChange={(event: React.ChangeEvent<HTMLInputElement>): void => { const fileListObj: FileList | null = event.target.files; if (Object.keys(fileListObj as Object).length > 3) { alert('Only three images pleaseeeee :)'); } else { // Do something } return; }} A: Thanks @haind. Yes, HTMLInputElement worked for the input field. //Example var elem = e.currentTarget as HTMLInputElement; elem.setAttribute('my-attribute','my value'); elem.value='5'; The HTMLInputElement interface inherits from HTMLElement, which inherits from EventTarget at the root level. Therefore we can assert with the as operator to use a specific interface according to the context; in this case we are using HTMLInputElement for an input field, and other interfaces can be HTMLButtonElement, HTMLImageElement etc. For more reference you can check the other available interfaces here * *Web API interfaces by Mozilla *Interfaces in External Node Modules by Microsoft A: You don't need to type it if you do this: <input onChange={(event) => { setValue(event.target.value) }} /> If you set the new value with an arrow function directly in the HTML tag, TypeScript will infer the type of event by default. A: const handleChange = ( e: ChangeEvent<HTMLInputElement> ) => { const { name, value } = e.target; this.setState({ ...currentState, [name]: value }); }; You can apply this to every input element in the form component. A: ChangeEvent<HTMLInputElement> is the type for the change event in TypeScript.
This is how it is done- import { ChangeEvent } from 'react'; const handleInputChange = (event: ChangeEvent<HTMLInputElement>) => { setValue(event.target.value); }; A: the correct way to use in TypeScript is handleChange(e: React.ChangeEvent<HTMLInputElement>) { // No longer need to cast to any - hooray for react! this.setState({temperature: e.target.value}); } render() { ... <input value={temperature} onChange={this.handleChange} /> ... ); } Follow the complete class, it's better to understand: import * as React from "react"; const scaleNames = { c: 'Celsius', f: 'Fahrenheit' }; interface TemperatureState { temperature: string; } interface TemperatureProps { scale: string; } class TemperatureInput extends React.Component<TemperatureProps, TemperatureState> { constructor(props: TemperatureProps) { super(props); this.handleChange = this.handleChange.bind(this); this.state = {temperature: ''}; } // handleChange(e: { target: { value: string; }; }) { // this.setState({temperature: e.target.value}); // } handleChange(e: React.ChangeEvent<HTMLInputElement>) { // No longer need to cast to any - hooray for react! 
this.setState({temperature: e.target.value}); } render() { const temperature = this.state.temperature; const scale = this.props.scale; return ( <fieldset> <legend>Enter temperature in {scaleNames[scale]}:</legend> <input value={temperature} onChange={this.handleChange} /> </fieldset> ); } } export default TemperatureInput; A: We can also type the onChange event handler (in a functional component) as follows: const handleChange = ( e: React.ChangeEvent<HTMLTextAreaElement | HTMLInputElement> ) => { const name = e.target.name; const value = e.target.value; }; A: as HTMLInputElement works for me A: The target you tried to add in InputProps is not the same target you wanted, which is in React.FormEvent. So, the solution I could come up with was extending the event-related types to add your target type, as: interface MyEventTarget extends EventTarget { value: string } interface MyFormEvent<T> extends React.FormEvent<T> { target: MyEventTarget } interface InputProps extends React.HTMLProps<Input> { onChange?: React.EventHandler<MyFormEvent<Input>>; } Once you have those classes, you can use your input component as <Input onChange={e => alert(e.target.value)} /> without compile errors. In fact, you can also use the first two interfaces above for your other components. A: I use something like this: import { ChangeEvent, useState } from 'react'; export const InputChange = () => { const [state, setState] = useState({ value: '' }); const handleChange = (event: ChangeEvent<{ value: string }>) => { setState({ value: event?.currentTarget?.value }); } return ( <div> <input onChange={handleChange} /> <p>{state?.value}</p> </div> ); } A: function handle_change( evt: React.ChangeEvent<HTMLInputElement> ): string { evt.persist(); // This is needed so you can actually get the currentTarget const inputValue = evt.currentTarget.value; return inputValue } And make sure you have "lib": ["dom"] in your tsconfig.
A: A simple answer for converting the string to a number: <input type="text" value={incrementAmount} onChange={(e) => { setIncrementAmount(+e.target.value); }} /> A: import { NativeSyntheticEvent, TextInputChangeEventData,} from 'react-native'; // In JavaScript const onChangeTextPassword = (text : any) => { setPassword(text); } // In TypeScript use this const onChangeTextEmail = ({ nativeEvent: { text },}: NativeSyntheticEvent<TextInputChangeEventData>) => { console.log("________ onChangeTextEmail _________ "+ text); setEmailId(text); }; <TextInput style={{ width: '100%', borderBottomWidth: 1, borderBottomColor: 'grey', height: 40, }} autoCapitalize="none" returnKeyType="next" maxLength={50} secureTextEntry={false} onChange={onChangeTextEmail} value={emailId} defaultValue={emailId} /> A: const event = { target: { value: 'testing' } as HTMLInputElement }; handleChangeFunc(event as ChangeEvent<HTMLInputElement>); This works for me.
doc_23535328
./configure --prefix=/home/riscv --with-arch=rv32i --with-abi=ilp32e The ilp32e ABI specifies soft float for RV32E. This generates a working compiler that works fine on my simple C source code. If I disassemble the created application then it does indeed stick to the RV32E specification. It only generates assembly for my code that uses the first 16 registers. I use static linking and it pulls in the expected set of soft float routines such as __divdi3 and __mulsi3. Unfortunately the pulled-in routines use all 32 registers and not the restricted lower 16 for RV32E. Hence, not very useful! I cannot find where this statically linked code is coming from. Is it compiled from C source and therefore being compiled without the RV32E restriction? Or maybe it was written as hand-coded assembly that has been written only for the full RV32I instead of RV32E? I tried to grep around the source but have had no luck finding anything like the actual code that is statically linked. Any ideas? EDIT: Just checked in more detail, and the compiler is not generating code using just the first 16 registers. It turns out that with a simple test routine it manages to use only the first 16, but more complex code does use others as well. Maybe RV32E is not implemented yet? A: The configure.ac file contains this code: AS_IF([test "x$with_abi" == xdefault], [AS_CASE([$with_arch], [*rv64g* | *rv64*d*], [with_abi=lp64d], [*rv64*f*], [with_abi=lp64f], [*rv64*], [with_abi=lp64], [*rv32g* | *rv32*d*], [with_abi=ilp32d], [*rv32*f*], [with_abi=ilp32f], [*rv32*], [with_abi=ilp32], [AC_MSG_ERROR([Unknown arch])] )]) Which seems to map your input of rv32i to the ABI ilp32, ignoring the e. So yes, it seems support for the ...e ABIs is not fully implemented yet.
doc_23535329
<?php $con = mysql_connect('localhost', 'xxxx', 'xxxx'); mysql_select_db("test_site", $con); $query = mysql_query("SELECT * FROM test"); echo $query; mysql_close($con); ?> My entire script stops right after the first line($con = ...). I tried adding echo "TEST"; right after that line and it didn't show. When I try to print text before that line, it works fine. Even if I add something like: if (!$con) { die('Could not connect: ' . mysql_error()); } No error shows. I'm assuming that means the connection was fine... so why does my script stop? When I do the same thing through my terminal I get * from test, just like I need. Not with PHP. Help? =] thanks edit: nvm, found a way to make it work. A: You need to install the php5 mysql modules. This will install the modules and set them up in PHP on Ubuntu. sudo apt-get install php5-mysql A: mysql_query returns a resource that cannot be converted to string for echo. Try var_dump($query) instead. A: You've got MySQL running on your server right? As long as you're passing the right host/user/pass it shouldn't just be flat out failing...
doc_23535330
What I have tried: Generate N random numbers, divide all of them by their sum and multiply by the desired constant. This seems to work, but the result does not follow the rule that the numbers should be within [a:b]. Generate N-1 random numbers, add 0 and the desired constant C, and sort them. Then calculate the difference between each two consecutive numbers; the differences are the result. This again sums to C but has the same problem as the last method (the range can be bigger than [a:b]). I also tried to generate random numbers and always keep track of min and max in a way that the desired sum and range are kept, and came up with this code: bool generate(function<int(int,int)> randomGenerator,int min,int max,int len,int sum,std::vector<int> &output){ /** * Not possible to produce such a sequence */ if(min*len > sum) return false; if(max*len < sum) return false; int curSum = 0; int left = sum - curSum; int leftIndexes = len-1; int curMax = left - leftIndexes*min; int curMin = left - leftIndexes*max; for(int i=0;i<len;i++){ int num = randomGenerator((curMin< min)?min:curMin,(curMax>max)?max:curMax); output.push_back(num); curSum += num; left = sum - curSum; leftIndexes--; curMax = left - leftIndexes*min; curMin = left - leftIndexes*max; } return true; } This seems to work, but the results are sometimes very skewed and I don't think it's following the original distribution (e.g. uniform). E.g.: //10 numbers within [1:10] which sum to 50: generate(uniform,1,10,10,50,output); //result: 2,7,2,5,2,10,5,8,4,5 => sum=50 //This looks reasonable for uniform, but let's change to //10 numbers within [1:25] which sum to 50: generate(uniform,1,25,10,50,output); //result: 24,12,6,2,1,1,1,1,1,1 => sum= 50 Notice how many ones exist in the output. This might sound reasonable because the range is larger, but they really don't look like a uniform distribution. I am not sure if it is even possible to achieve what I want; maybe the constraints make the problem unsolvable.
A: Let's try to simplify the problem. By subtracting the lower bound, we can reduce it to finding N numbers in [0,b-a] such that their sum is C-Na. Renaming the parameters, we can look for N numbers in [0,m] whose sum is S. Now the problem is akin to partitioning a segment of length S into N distinct sub-segments of length [0,m]. I think the problem is simply not solvable. If S=1, N=1000 and m anything above 0, the only possible repartition is one 1 and 999 zeroes, which is nothing like a random spread. There is a correlation between N, m and S, and even picking random values will not make it disappear. For the most uniform repartition, the length of the sub-segments will follow a Gaussian curve with a mean value of S/N. If you tweak your random numbers differently, you will end up with whatever bias, but in the end you will never have both a uniform [a,b] repartition and a total length of C, unless the length of your [a,b] interval happens to be 2C/N-a. A: In case you want the sample to follow a uniform distribution, the problem reduces to generating N random numbers with sum = 1. This, in turn, is a special case of the Dirichlet distribution but can also be computed more easily using the Exponential distribution. Here is how: * *Take a uniform sample v1 … vN with all vi between 0 and 1. *For all i, 1<=i<=N, define ui := -ln vi (notice that ui > 0). *Normalize the ui as pi := ui/s where s is the sum u1+...+uN. The p1..pN are uniformly distributed (in the simplex of dim N-1) and their sum is 1. You can now multiply these pi by the constant C you want and translate them by summing some other constant A like this qi := A + pi*C. EDIT 3 In order to address some issues raised in the comments, let me add the following: * *To ensure that the final random sequence falls in the interval [a,b] choose the constants A and C above as A := a and C := b-a, i.e., take qi = a + pi*(b-a). Since pi is in the range (0,1) all qi will be in the range [a,b].
*One cannot take the (negative) logarithm -ln(vi) if vi happens to be 0 because ln() is not defined at 0. The probability of such an event is extremely low. However, in order to ensure that no error is signaled, the generation of v1 ... vN in item 1 above must treat any occurrence of 0 in a special way: consider -ln(0) as +infinity (remember: ln(x) -> -infinity when x->0). Thus the sum s = +infinity, which means that pi = 1 and all other pj = 0. Without this convention the sequence (0...1...0) would never be generated (many thanks to @Severin Pappadeux for this interesting remark.) *As explained in the 4th comment attached to the question by @Neil Slater, it is logically impossible to fulfill all the requirements of the original framing. Therefore any solution must relax the constraints to a proper subset of the original ones. Other comments by @Behrooz seem to confirm that this would suffice in this case. EDIT 2 One more issue has been raised in the comments: why does rescaling a uniform sample not suffice? In other words, why should I bother to take negative logarithms? The reason is that if we just rescale then the resulting sample won't distribute uniformly across the segment (0,1) (or [a,b] for the final sample.) To visualize this let's think in 2D, i.e., let's consider the case N=2. A uniform sample (v1,v2) corresponds to a random point in the square with origin (0,0) and corner (1,1). Now, when we normalize such a point dividing it by the sum s=v1+v2 what we are doing is projecting the point onto the diagonal as shown in the picture (keep in mind that the diagonal is the line x + y = 1): But given that green lines, which are closer to the principal diagonal from (0,0) to (1,1), are longer than orange ones, which are closer to the axes x and y, the projections tend to accumulate more around the center of the projection line (in blue), where the scaled sample lives. This shows that a simple scaling won't produce a uniform sample on the depicted diagonal.
On the other hand, it can be proven mathematically that the negative logarithms do produce the desired uniformity. So, instead of copypasting a mathematical proof I would invite everyone to implement both algorithms and check that the resulting plots behave as this answer describes. (Note: here is a blog post on this interesting subject with an application to the Oil & Gas industry) A: For my answer I'll assume that we have a uniform distribution. Since we have a uniform distribution, every tuple of C has the same probability to occur. For example for a = 2, b = 2, C = 12, N = 5 we have 15 possible tuples. From them 10 start with 2, 4 start with 3 and 1 starts with 4. This gives the idea of selecting a random number from 1 to 15 in order to choose the first element. From 1 to 10 we select 2, from 11 to 14 we select 3 and for 15 we select 4. Then we continue recursively. #include <time.h> #include <random> std::default_random_engine generator(time(0)); int a = 2, b = 4, n = 5, c = 12, numbers[5]; // Calculate how many combinations of n numbers have sum c int calc_combinations(int n, int c) { if (n == 1) return (c >= a) && (c <= b); int sum = 0; for (int i = a; i <= b; i++) sum += calc_combinations(n - 1, c - i); return sum; } // Chooses a random array of n elements having sum c void choose(int n, int c, int *numbers) { if (n == 1) { numbers[0] = c; return; } int combinations = calc_combinations(n, c); std::uniform_int_distribution<int> distribution(0, combinations - 1); int s = distribution(generator); int sum = 0; for (int i = a; i <= b; i++) { if ((sum += calc_combinations(n - 1, c - i)) > s) { numbers[0] = i; choose(n - 1, c - i, numbers + 1); return; } } } int main() { choose(n, c, numbers); } Possible outcome: 2 2 3 2 3 This algorithm won't scale well for large N because of overflows in the calculation of combinations (unless we use a big integer library), the time needed for this calculation and the need for arbitrarily large random numbers. 
A: Well, for n = 10000, can't we have one small number in there that is not random? Maybe generate the sequence until sum > C - max is reached, and then just put in one final number to make up the total. 1 in 10000 is just a very small amount of noise in the system. A: Although this is an old topic, I think I have an idea. Suppose we want N random numbers whose sum is C, with each number between a and b. To solve the problem, we create N holes and prepare C balls. On each pass we ask each hole: "Do you want another ball?". If no, we pass to the next hole; else, we put a ball into the hole. Each hole has a cap value: b-a. If some hole reaches the cap value, we always pass to the next hole. Example: 3 random numbers between 0 and 2 whose sum is 5. simulation result: 1st run: -+- 2nd run: ++- 3rd run: --- 4th run: +*+ final:221 -:refuse ball +:accept ball *:full pass
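Taking up the earlier answer's invitation to implement both normalization schemes, here is a minimal sketch (plain Python, standard library only; the 10% tail threshold and trial count are arbitrary choices for illustration). For N = 2, a uniform sample on the diagonal means the first coordinate is uniform on (0,1), so about 20% of draws should land in the outer tenths:

```python
import math
import random

def rescaled(n, rng):
    # naive scheme: draw n uniforms and divide by their sum
    v = [rng.random() for _ in range(n)]
    s = sum(v)
    return [x / s for x in v]

def neg_log(n, rng):
    # negative logarithms of uniforms are exponential draws;
    # normalizing those yields a uniform point on the simplex.
    # Using 1.0 - rng.random() keeps the argument in (0, 1],
    # sidestepping the ln(0) corner case discussed above.
    v = [-math.log(1.0 - rng.random()) for _ in range(n)]
    s = sum(v)
    return [x / s for x in v]

def tail_mass(sampler, trials=20000, seed=7):
    # fraction of samples whose first coordinate lands in the
    # outer tenths of (0, 1); uniformity predicts about 0.20
    hits = 0
    rng = random.Random(seed)
    for _ in range(trials):
        p = sampler(2, rng)[0]
        if p < 0.1 or p > 0.9:
            hits += 1
    return hits / trials

print(tail_mass(rescaled))  # noticeably below 0.20: mass piles up near the center
print(tail_mass(neg_log))   # close to 0.20: uniform along the diagonal
```

For N = 2 the exact tail mass under plain rescaling works out to 1/9 ≈ 0.11, consistent with the projection picture: rescaled points crowd the middle of the diagonal, while the negative-log samples spread uniformly.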
doc_23535331
Sample data frame PL <- c(rep("PL1", 4), rep("PL2", 4), rep("PL3", 4), rep("PL4", 4)) CNT <- sample(seq(1:50), 16) YEAR <- rep(c("2015", "2016", "2017", "2018"), 4) df <- data.frame(PL, YEAR, CNT) Plot PL <- c(rep("PL1", 4), rep("PL2", 4), rep("PL3", 4), rep("PL4", 4)) CNT <- sample(seq(1:50), 16) YEAR <- rep(c("2015", "2016", "2017", "2018"), 4) df <- data.frame(PL, YEAR, CNT) # plot library(ggplot2) library(treemapify) treeMapPlot <- ggplot(df, aes(area = CNT, fill = CNT, label=PL, subgroup=YEAR)) + geom_treemap() + geom_treemap_subgroup_border(colour = "white") + geom_treemap_text(fontface = "italic", colour = "white", place = "centre", grow = F, reflow = T) + geom_treemap_subgroup_text(place = "centre", grow = T, alpha = 0.5, colour = "#FAFAFA", min.size = 0) treeMapPlot If I change the fill in aes I can get this, but I lose the gradient. I need to keep these colors, but show the tiles with a gradient, meaning tiles with a smaller CNT are lighter and tiles with a larger CNT are darker: treeMapPlot <- ggplot(df, aes(area = CNT, fill = YEAR, label = PL, subgroup = YEAR)) A: It's not the most beautiful solution, but mapping count to alpha simulates a light-to-dark gradient for each color. Add aes(alpha = CNT) inside geom_treemap, and scale alpha however you want. library(ggplot2) library(treemapify) PL <- c(rep("PL1",4),rep("PL2",4),rep("PL3",4),rep("PL4",4)) CNT <- sample(seq(1:50),16) YEAR <- rep(c("2015","2016","2017","2018"),4) df <- data.frame(PL,YEAR,CNT) ggplot(df, aes(area = CNT, fill = YEAR, label=PL, subgroup=YEAR)) + # change this line geom_treemap(aes(alpha = CNT)) + geom_treemap_subgroup_border(colour="white") + geom_treemap_text(fontface = "italic", colour = "white", place = "centre", grow = F, reflow=T) + geom_treemap_subgroup_text(place = "centre", grow = T, alpha = 0.5, colour = "#FAFAFA", min.size = 0) + scale_alpha_continuous(range = c(0.2, 1)) Created on 2018-05-03 by the reprex package (v0.2.0). 
Edit to add: Based on this post on hacking faux-gradients by putting an alpha-scaled layer on top of a layer with a darker fill. Here I've used two geom_treemaps, one with fill = "black", and one with the alpha scaling. Still leaves something to be desired. ggplot(df, aes(area = CNT, fill = YEAR, label=PL, subgroup=YEAR)) + geom_treemap(fill = "black") + geom_treemap(aes(alpha = CNT)) + geom_treemap_subgroup_border(colour="white") + geom_treemap_text(fontface = "italic", colour = "white", place = "centre", grow = F, reflow=T) + geom_treemap_subgroup_text(place = "centre", grow = T, alpha = 0.5, colour = "#FAFAFA", min.size = 0) + scale_alpha_continuous(range = c(0.4, 1)) Created on 2018-05-03 by the reprex package (v0.2.0). A: One option is to calculate the colors separately for each cell and then just plot them directly. This doesn't give you a legend, but arguably a legend isn't that useful anyways. (You'd need 4 separate legends, and those could be made and added to the plot if needed.) library(ggplot2) library(treemapify) set.seed(342) PL <- c(rep("PL1", 4), rep("PL2", 4), rep("PL3", 4), rep("PL4", 4)) CNT <- sample(seq(1:50), 16) YEAR <- rep(c("2015", "2016", "2017", "2018"), 4) df <- data.frame(PL, YEAR, CNT) # code to add colors to data frame follows # first the additional packages needed library(dplyr) library(colorspace) # install via: install.packages("colorspace", repos = "http://R-Forge.R-project.org") library(scales) # I'll use 4 palettes from the colorspace package, one for each year palette <- rep(c("Teal", "Red-Yellow", "Greens", "Purples"), 4) # We add the palette names and then calculate the colors for each # data point. 
Two notes: # - we scale the colors to the maximum CNT in each year # - we're calculating 6 colors but use only 5 to make the gradient; # this removes the lightest color df2 <- mutate(df, palette = palette) %>% group_by(palette) %>% mutate( max_CNT = max(CNT), color = gradient_n_pal(sequential_hcl(6, palette = palette)[1:5])(CNT/max_CNT)) ggplot(df2, aes(area = CNT, fill = color, label=PL, subgroup=YEAR)) + geom_treemap() + geom_treemap_subgroup_border(colour="white") + geom_treemap_text(fontface = "italic", colour = "white", place = "centre", grow = F, reflow=T) + geom_treemap_subgroup_text(place = "centre", grow = T, alpha = 0.5, colour = "#FAFAFA", min.size = 0) + scale_fill_identity() It's also possible to generate color scales dynamically if you don't know ahead of time how many cases there will be: library(ggplot2) library(treemapify) set.seed(341) PL <- c(rep("PL1", 6), rep("PL2", 6), rep("PL3", 6), rep("PL4", 6)) CNT <- sample(seq(1:50), 24) YEAR <- rep(c("2013", "2014", "2015", "2016", "2017", "2018"), 4) df <- data.frame(PL, YEAR, CNT) # code to add colors to data frame follows # first the additional packages needed library(dplyr) library(colorspace) # install via: install.packages("colorspace", repos = "http://R-Forge.R-project.org") library(scales) # number of palettes needed n <- length(unique(YEAR)) # now calculate the colors for each data point df2 <- df %>% mutate(index = as.numeric(factor(YEAR))- 1) %>% group_by(index) %>% mutate( max_CNT = max(CNT), color = gradient_n_pal( sequential_hcl( 6, h = 360 * index[1]/n, c = c(45, 20), l = c(30, 80), power = .5) )(CNT/max_CNT) ) ggplot(df2, aes(area = CNT, fill = color, label=PL, subgroup=YEAR)) + geom_treemap() + geom_treemap_subgroup_border(colour="white") + geom_treemap_text(fontface = "italic", colour = "white", place = "centre", grow = F, reflow=T) + geom_treemap_subgroup_text(place = "centre", grow = T, alpha = 0.5, colour = "#FAFAFA", min.size = 0) + scale_fill_identity() Finally, you can manually 
define the hues of the color scales: library(ggplot2) library(treemapify) set.seed(341) PL <- c(rep("PL1", 6), rep("PL2", 6), rep("PL3", 6), rep("PL4", 6)) CNT <- sample(seq(1:50), 24) YEAR <- rep(c("2013", "2014", "2015", "2016", "2017", "2018"), 4) df <- data.frame(PL, YEAR, CNT) # code to add colors to data frame follows # first the additional packages needed library(dplyr) library(colorspace) # install via: install.packages("colorspace", repos = "http://R-Forge.R-project.org") library(scales) # each color scale is defined by a hue, a number between 0 and 360 hues <- c(300, 50, 250, 100, 200, 150) # now calculate the colors for each data point df2 <- df %>% mutate(index = as.numeric(factor(YEAR))) %>% group_by(index) %>% mutate( max_CNT = max(CNT), color = gradient_n_pal( sequential_hcl( 6, h = hues[index[1]], c = c(45, 20), l = c(30, 80), power = .5) )(CNT/max_CNT) ) ggplot(df2, aes(area = CNT, fill = color, label=PL, subgroup=YEAR)) + geom_treemap() + geom_treemap_subgroup_border(colour="white") + geom_treemap_text(fontface = "italic", colour = "white", place = "centre", grow = F, reflow=T) + geom_treemap_subgroup_text(place = "centre", grow = T, alpha = 0.5, colour = "#FAFAFA", min.size = 0) + scale_fill_identity()
doc_23535332
Basically my application is running on https://example.com/login. I have this DNS on Route53. Now I want to display an "Under maintenance" page at the same URL. So I created a static HTML page and hosted it in S3. Now if I hit example.com I can access the static page, but when I hit https://example.com/login or http://example.com/login I don't see the static page. Now I have 2 questions: * *Can I redirect example.com/login to example.com, so that my static page is visible? *Can I redirect HTTPS to HTTP, or https://example.com/login to example.com? I guess for HTTPS I have to use CloudFront, but I'm still checking if there is any other way. A: Even if it's possible, you shouldn't do it. Just use CloudFront with Route53 and ACM and host everything on HTTPS. Here's an article on how to do that, but you can find a lot of others. The steps you need: * *request a new certificate on ACM (make sure you use the us-east-1 region). Select domain validation, then add the CNAME record to the domain *create a new CloudFront distribution, add the S3 bucket as the origin, select "redirect HTTP to HTTPS", then add the alternate domain name as your domain (example.com) and select the ACM certificate *add an A and an AAAA record in the Route53 hosted zone, make them an ALIAS to the distribution *wait a few minutes and it should work Browsers mark plain-HTTP connections as "Not secure", and a login form is especially something you want to serve over an encrypted connection. You need to set up CloudFront once, and you can add new files to the S3 bucket.
doc_23535333
So in my database I have RecordingsTable ->id ->Name ->Path ->FileName then my Designation table, which is where I store the call recordings assigned to a user. DesignationTable ->id ->User_id ->Recording_id I already made the function so that the user can only see and play the recordings assigned to him/her. My problem now is that the user can also share a recording with someone else. I already did that: what I do is load the assigned recordings for the user, and in his/her dashboard there's a public link for the recording, say <a href="http://localhost/callrec/public/recording/{!! $value->recordID !!}">See Public Link</a> as you can see I'm using the Blade template engine. That $value->recordID is my recording ID, which is a resource, so let's say that link points to http://localhost/callrec/public/recording/1 Then that link is public and the user can share it. But there's a risk: when he/she shares this, the id in the link can be altered, say to http://localhost/callrec/public/recording/4 and if that id exists it can be accessed, which shouldn't happen, because the user only shared id = 1. How should I approach problems like this? Any ideas and suggestions? Thanks! A: If you use the ID in the URL, then as you noticed it's easy to guess other possible IDs, change the URL and access other recordings. So what you need to do is to share links containing a value that users won't be able to guess. One example would be a hash of the recording ID computed with some secret value, e.g. your APP_KEY value. What you need to do is: * *Add a string hash column to your recording table *When a recording is created, calculate the hash and save it with the recording: $recording = Recording::create($attributes); $recording->hash = base64_encode(Hash::make($recording->recordID . Config::get('APP_KEY'))); $recording->save(); *Use that hash in the URLs <a href="http://localhost/callrec/public/recording/{!! 
$value->hash!!}"> See Public Link </a> This way your links will be publicly available, but guessing a hash of another recording will be more or less as hard as guessing passwords in your application as the same logic is applied. Just make sure you keep your APP_KEY safe.
doc_23535334
So: library(tseries) library(zoo) ticker<-c('AAPL', 'MSFT', 'GOOG') nShares<-length(ticker) start<-'2015-01-01' end<-'2015-09-01' prices <- function() { y=get.hist.quote(instrument = ticker[i], start = start, end = end, quote = "AdjClose", retclass = "zoo") dimnames(y)[[2]] <- as.character(ticker[i]) print (y) } for (i in 1:nShares){ prices() } What I get as a result is a single column with the time series of all 3 shares stacked. I would like to have them in 3 columns, as: Date AAPL MSFT GOOG 2015-xx-xx xx.xx xx.xx xx.xx How can I do that? A: Modifying your function a bit: prices <- function(ticker, start, end) { y=get.hist.quote(instrument = ticker, start = start, end = end, quote = "AdjClose", retclass = "zoo") dimnames(y)[[2]] <- as.character(ticker) # print (y) y } You can achieve it in 1 line: zoo_group <- do.call(cbind, lapply(ticker, prices, start=start, end=end)) head(zoo_group) AAPL MSFT GOOG 2015-01-02 107.9586 45.82758 524.8124 2015-01-05 104.9172 45.40616 513.8723 2015-01-06 104.9271 44.73971 501.9623 2015-01-07 106.3984 45.30815 501.1023 2015-01-08 110.4864 46.64103 502.6823 2015-01-09 110.6049 46.24900 496.1723 A: You could also try the new package tidyquant, which makes getting stock prices for many stock symbols very simple: library(tidyquant) ticker <- c('AAPL', 'MSFT', 'GOOG') ticker %>% tq_get(get = "stock.prices", from = "2015-01-01", to = "2015-09-01")
doc_23535335
Case in example here: NC='\033[31;0m\' # no colors or formatting RED='\033[0;31;1m\' # print text in bold red PUR='\033[0;35;1m\' # print text in bold purple YEL='\033[0;33;1m\' # print text in bold Yellow GRA='\033[0;37;1m\' # print text in bold Gray echo -e "This ${YEL}Message${NC} has color\nwith ${RED}new${NC} lines. Output: -e This Message has color. with new lines and if I also happen to have another command to be run in the script, I also get this even though it shouldn't. For example, when running screen at the start of the script: \033[31;0m\033[0;31;1mscreen: not found EDIT** To detail more of what I'm trying to do: #!/bin/sh ## ## Nexus Miner script ## NC='\033[31;0m' RED='\033[0;31;1m' PUR='\033[0;35;1m' YEL='\033[0;33;1m' GRA='\033[0;37;1m' ADDR="<wallet-address-goes-here>" NAME="<worker-name-goes-here>" URL="nexusminingpool.com" ## check if user is root, if true, then exit with status 1, else run program. if [ `whoami` = root ]; then echo -e "\n${PUR}You don't need ROOT for this!${NC}\n" 1>&2 exit 1; ## running Nexus Miner with Screen so you can run it in background ## and call it up at any time to check mining progress of Nexus CPU Miner. 
else echo -e "\n\n${YEL}Normal User?${NC} ${GRA}[OK]${NC}\n${GRA}Session running with${NC} ${RED}SCREEN${NC}${GRA}, Press${NC} ${RED}^C + A${NC} ${GRA}then${NC} ${RED}^C + D${NC} ${GRA}to run in background.${NC}\n${GRA}to resume enter `${NC}${RED}screen -r${NC}${GRA}` from Terminal.${NC}\n\n" echo -e "${PUR}Nexus CPU Miner${NC}\n${GRA}Wallet Address:${NC} $ADDR\n${GRA}Worker Name:${NC} $NAME\n${GRA}CPU Threads:${NC} (Default: 2)\n\n" ## call strings for wallet address and worker name varibles followe by thread numbers (default: 2) ## run nexus cpu miner within screen so it can run in the background `screen /home/user/PrimePoolMiner/nexus_cpuminer $URL 9549 $ADDR $NAME 2` fi A: Remove the trailing backslashes and add a closing quote: NC='\033[31;0m' # no colors or formatting RED='\033[0;31;1m' # print text in bold red PUR='\033[0;35;1m' # print text in bold purple YEL='\033[0;33;1m' # print text in bold Yellow GRA='\033[0;37;1m' # print text in bold Gray echo -e "This ${YEL}Message${NC} has color\nwith ${RED}new${NC} lines." And it works as expected in bash. If you save this script into a file, run it like bash <file>. Try type -a echo to see what it is. The first line of output should be echo is a shell builtin: $ type -a echo echo is a shell builtin echo is /usr/bin/echo echo is /bin/echo A: You wrote, "I've gotten this script I've created in Bash", but you haven't told us exactly what you mean by that. UPDATE : The question has been updated. The script's first line is #!/bin/sh. Read on for an explanation and solution. I can reproduce the problem on my system by including your code in a script starting with #!/bin/sh I can correct the problem by changing that first line to #!/bin/bash /bin/sh on my system happens to be a symlink to dash. The dash shell has echo as a builtin command, but it doesn't support the -e option. The #! 
line determines which shell, and hence which echo, you get. There are numerous implementations of the echo command: most shells provide it as a builtin command (with various features depending on the shell), and there's likely to be an external command /bin/echo with its own subtly different behavior. If you want consistent behavior for printing anything other than a simple line of text, I suggest using the printf command. See https://unix.stackexchange.com/questions/65803/why-is-printf-better-than-echo (cited by Josh Lee in a comment). The #! line, known as a shebang, controls what shell is used to execute your script. The interactive shell you execute the script from is irrelevant. (It pretty much has to be that way; otherwise scripts would behave differently for different users.) In the absence of a #! line, a script will (probably) be executed with /bin/sh, but there's not much point in not being explicit.
doc_23535336
Each user has at most 5 documents. {uid: 1, ad_id: 1} {uid: 1, ad_id: 2} {uid: 1, ad_id: 3} {uid: 1, ad_id: 4} {uid: 1, ad_id: 5} {uid: 2, ad_id: 6} {uid: 2, ad_id: 7} {uid: 2, ad_id: 8} {uid: 2, ad_id: 9} {uid: 2, ad_id: 10} Now we have a new doc {uid: 1, ad_id: 11} Because the max number of documents is 5, we delete the oldest one. The collection becomes this: {uid: 1, ad_id: 11} {uid: 1, ad_id: 1} {uid: 1, ad_id: 2} {uid: 1, ad_id: 3} {uid: 1, ad_id: 4} {uid: 2, ad_id: 6} {uid: 2, ad_id: 7} {uid: 2, ad_id: 8} {uid: 2, ad_id: 9} {uid: 2, ad_id: 10} Currently I check the count in code. Is there any index with which Mongo could do this? Thanks. A: I don't think so; you'll have to control it in your application code. You could limit it to just one record with a unique index, but not to N documents. Besides, new saves would then fail instead of overwriting the previous ones.
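As the answer says, the cap has to be enforced in application code. Purely as an illustration of that insert-then-evict logic, here is a sketch in plain Python, with a list of dicts standing in for the collection and a hypothetical seq field standing in for insertion order (against real MongoDB you would sort on _id or a timestamp field and delete the oldest matching document):

```python
from operator import itemgetter

MAX_PER_USER = 5

def insert_capped(collection, doc, seq):
    # 'collection' is a plain list of dicts standing in for a
    # MongoDB collection; 'seq' plays the role of insertion order
    # (in real MongoDB, the ObjectId or a created_at field).
    doc = dict(doc, seq=seq)
    collection.append(doc)
    mine = [d for d in collection if d["uid"] == doc["uid"]]
    if len(mine) > MAX_PER_USER:
        # evict the oldest document belonging to this user
        oldest = min(mine, key=itemgetter("seq"))
        collection.remove(oldest)

coll = []
for i in range(1, 7):  # six inserts for uid 1
    insert_capped(coll, {"uid": 1, "ad_id": i}, seq=i)

print([d["ad_id"] for d in coll])  # [2, 3, 4, 5, 6] -- ad_id 1 was evicted
```

Note the check-then-delete is not atomic; with concurrent writers you would briefly exceed the cap, which is usually acceptable for this kind of history list.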
doc_23535337
After resetting the data provider, my last "sorted" column header remains on display along with the sorting arrow. Is there a way to force the columns to "reset" all sorting indicators, etc.? I reset the data provider by: ... if(grid != null){ grid.invalidate(); dataView.items = _newData; grid.setSortColumn('', false); } The only solution I've found so far is telling the grid to set a '' column as its sorting column. It just seems odd to have to do this when the data grid knows that its data provider changed. I'm on: SDK 1.15.0, Polymer1.0 1.0.0-rc.17
doc_23535338
kurento_utils.WebRtcPeer.WebRtcPeerSendonly(options, function (error: any) { if ( error ) { return on_error(error); } let webRTC_peer = this; // kurento_utils binds 'this' to the callback // ^^^^ error TS2683: 'this' implicitly has type 'any' because it does not have a type annotation. webRTC_peer.generateOffer((error: string | undefined, offer: string) => { ... TypeScript is (understandably) unhappy with this situation, at least when the noImplicitThis flag is enabled. I have no idea how to properly type annotate this. A: You can add a this parameter to the function. This parameter will not be emitted to JavaScript, and is just for the benefit of the compiler so it can correctly type this within the function: kurento_utils.WebRtcPeer.WebRtcPeerSendonly(options, function (this: WebRtcPeer, error: any) { let webRTC_peer = this; // this will be of type WebRtcPeer } Or if you also control WebRtcPeerSendonly, you could specify that the function it takes as a parameter will have a certain this passed to it, and type inference will work for this just as for any other parameter: class WebRtcPeer { WebRtcPeerSendonly(options: any, fn: (this: WebRtcPeer, error: any) => void) { } } new WebRtcPeer().WebRtcPeerSendonly({}, function(error){ let webRTC_peer = this; // this will be of type WebRtcPeer })
doc_23535339
Schema of DF 1 - root |-- employee: struct (nullable = true) | |-- name: string (nullable = true) | |-- id: string (nullable = true) | |-- salary: long (nullable = true) | |-- dept: string (nullable = true) |--.... Schema of DF 2- root |-- employee: struct (nullable = true) | |-- name: string (nullable = true) | |-- id: string (nullable = true) | |-- salary: long (nullable = true) | |-- dept: string (nullable = true) | |-- phone: string (nullable = false) How can I add the phone field to the employee struct in DF1? Note: not all employees of DF1 are in DF2, so if an employee is not present in DF2, the phone field should be set to 000. A: import org.apache.spark.sql.SparkSession import org.apache.spark.sql.functions.struct case class C1(name: String, id: String, salary: Long, dept: String) case class C2( name: String, id: String, salary: Long, dept: String, phone: String ) case class E1(employee: C1) case class E2(employee: C2) import spark.implicits._ val empl1DF = Seq( E1(C1("n1", "1", 1, "1")), E1(C1("n2", "2", 2, "2")), E1(C1("n5", "5", 5, "5")) ).toDF() val empl2DF = Seq( E2(C2("n1", "1", 1, "1", "1111")), E2(C2("n2", "2", 2, "2", "22222")), E2(C2("n3", "3", 3, "3", "3333")) ).toDF() empl1DF.printSchema() // root // |-- employee: struct (nullable = true) // | |-- name: string (nullable = true) // | |-- id: string (nullable = true) // | |-- salary: long (nullable = false) // | |-- dept: string (nullable = true) empl1DF.show(false) // +-------------+ // |employee | // +-------------+ // |[n1, 1, 1, 1]| // |[n2, 2, 2, 2]| // |[n5, 5, 5, 5]| // +-------------+ empl2DF.printSchema() // root // |-- employee: struct (nullable = true) // | |-- name: string (nullable = true) // | |-- id: string (nullable = true) // | |-- salary: long (nullable = false) // | |-- dept: string (nullable = true) // | |-- phone: string (nullable = true) empl2DF.show(false) // +--------------------+ // |employee | // +--------------------+ // |[n1, 1, 1, 1, 1111] | // |[n2, 2, 2, 2, 22222]| // |[n3, 3, 3, 
3, 3333] | // +--------------------+ val df1 = empl1DF .join( empl2DF, empl1DF.col("employee.id") === empl2DF.col("employee.id"), "left" ) .select( empl1DF.col("employee.name"), empl1DF.col("employee.id"), empl1DF.col("employee.salary"), empl1DF.col("employee.dept"), empl2DF.col("employee.phone") ) val resDF = df1.na .fill("000", Seq("phone")) .select( struct( col("name"), col("id"), col("salary"), col("dept"), col("phone") ).as("employee") ) resDF.printSchema() // root // |-- employee: struct (nullable = false) // | |-- name: string (nullable = true) // | |-- id: string (nullable = true) // | |-- salary: long (nullable = true) // | |-- dept: string (nullable = true) // | |-- phone: string (nullable = true) resDF.show(false) // +--------------------+ // |employee | // +--------------------+ // |[n1, 1, 1, 1, 1111] | // |[n2, 2, 2, 2, 22222]| // |[n5, 5, 5, 5, 000] | // +--------------------+
doc_23535340
First I create a pandas dataframe as follows: # dependencies import folium import pandas as pd from google.colab import drive drive.mount('/content/drive/') # create dummy data df = {'Lat': [22.50, 63.21, -13.21, 33.46], 'Lon': [43.91, -22.22, 77.11, 22.11], 'Color': ['red', 'yellow', 'orange', 'blue'] } # create dataframe data = pd.DataFrame(df) I then create a world map with a zoom factor of 2: world = folium.Map( zoom_start=2 ) I can plot the locations by iterating over the dataframe rows as follows: x = data[['Lat', 'Lon', 'Color']].copy() for index, row in x.iterrows(): folium.Marker([row['Lat'], row['Lon']], popup=row['Color'], icon=folium.Icon(color="red", icon="info-sign") ).add_to(world) world This produces the following graphic: In order to use a custom icon I need to use folium.features.CustomIcon and state the image path as a location on my Google Drive where the image is stored. pushpin = folium.features.CustomIcon('/content/drive/My Drive/Colab Notebooks/pushpin.png', icon_size=(30,30)) I can use this on the map in one stated location as follows: world = folium.Map( zoom_start=2 ) folium.Marker([40.743720, -73.822030], icon=pushpin).add_to(world) world Which produces the following graphic However, when I try to use the custom icon in the iteration, it does not seem to work and only plots the first coordinate pair with the default marker. pushpin = folium.features.CustomIcon('/content/drive/My Drive/Colab Notebooks/pushpin.png', icon_size=(30,30)) world = folium.Map( zoom_start=2 ) x = data[['Lat', 'Lon', 'Color']].copy() for index, row in x.iterrows(): folium.Marker([row['Lat'], row['Lon']], icon=pushpin, popup=row['Color'], ).add_to(world) world As pictured: My expectation is for all 4 positions to be plotted with the pushpin marker. Any help greatly appreciated. 
A: It seems the only way to do this is to place the custom icon call within the for loop so it initialises for each iteration, e.g.: world = folium.Map( zoom_start=2 ) x = data[['Lat', 'Lon', 'Color']].copy() for index, row in x.iterrows(): pushpin = folium.features.CustomIcon('/content/drive/My Drive/Colab Notebooks/pushpin.png', icon_size=(30,30)) folium.Marker([row['Lat'], row['Lon']], icon=pushpin, popup=row['Color'], ).add_to(world) world This produces the following graphic:
doc_23535341
This is the code: function onEdit(e) { Logger.log('-> START'); var row = e.range.getRow(); var col = e.range.getColumn(); //sheetConfig.getRange('G3').setValue(new Date()); //sheetConfig.getRange('G4').setValue(currentUser); if(col == esitoColumn) { if(sheetCL.getRange(row, col).getValue() != '') { Logger.log('Post check valore cella editata'); if(sheetCL.getRange(row, configDataColumn).getValue() == 'HOUR_LINE') massiveEdit(row); else { Logger.log('Post check valore cella di configurazione'); sheetCL.getRange(row, userColumn).setValue(currentUser); Logger.log('Post set username'); } } else sheetCL.getRange(row, userColumn).clearContent(); } Logger.log('-> END'); } These are a couple of executions (look at the timestamps): [18-07-16 18:11:23:363 CEST] -> START [18-07-16 18:11:23:461 CEST] Post check valore cella editata [18-07-16 18:11:23:462 CEST] Post check valore cella di configurazione [18-07-16 18:11:23:464 CEST] Post set username [18-07-16 18:11:23:464 CEST] -> END [18-07-16 18:11:40:343 CEST] -> START [18-07-16 18:11:40:437 CEST] Post check valore cella editata [18-07-16 18:11:40:439 CEST] Post check valore cella di configurazione [18-07-16 18:11:40:441 CEST] Post set username [18-07-16 18:11:40:442 CEST] -> END As you can see, the total execution time needed by the function is around 100 milliseconds. The real amount of time needed to visualize the username in the proper cell is about 1.5 seconds (15x the function execution time). What can I do to solve the problem? Thanks in advance!
doc_23535342
How can I get both in the best way, with the best optimization and performance? I tried writing this code using a temp table. Is there another way? SELECT TOP 1 id, Code, Name, PostId INTO #User FROM Users WHERE UseName = 'myUser' AND Password = 'myPassword' SELECT * FROM #User SELECT PermetionId FROM UserPostAccess WHERE Id = (SELECT TOP (1) PostId FROM #User) or SELECT TOP 1 id, Code, Name, PostId FROM Users WHERE UseName = 'myUser' AND Password = 'myPassword' SELECT * FROM #User SELECT PermetionId FROM UserPostAccess WHERE Id = (SELECT TOP (1) PostId FROM Users WHERE UseName = 'myUser' AND Password = 'myPassword') A: You can get all the data together like below: SELECT U.*, P.PermetionId INTO #User FROM ( SELECT TOP 1 U.id, U.Code, U.Name, U.PostId FROM Users U WHERE UseName = 'myUser' AND Password = 'myPassword' )U LEFT JOIN UserPostAccess P ON P.ID=U.PostId SELECT * FROM #User A: If your query already returns only one record, there's no need to pull it into a #temp table; just do the query and JOIN to the post access directly: SELECT TOP 1 u.id, u.Code, u.Name, u.PostId, pa.PermetionId FROM Users u JOIN UserPostAccess pa on u.PostId = pa.id WHERE u.UseName = 'myUser' AND u.Password = 'myPassword'
doc_23535343
The Crystal Report does not show the correct data; it always returns all the records in the database. I'm using the following to get the data from the database: public static List<Package> GetPalletReport(int Id) { using (ProjectEntities db = new ProjectEntities()) { List<Package> packages = new List<Package>(); Package _p = db.Packages.Include(p => p.Alloy) .Include(p => p.Temper) .Include(p => p.PaintType1) .Include(p => p.BackCoat) .SingleOrDefault(p => p.PackageId == Id); for(int i=0; i<_p.NumberOfLabels; i++) { packages.Add(_p); } return packages; } } and I load it into the Crystal Report as follows: public ActionResult PrintReport(int Id) { ProjectEntities db = new ProjectEntities(); ReportDocument rd = new ReportDocument(); rd.Load(Path.Combine(Server.MapPath("~/Reports"), "PalletReport.rpt")); var test = ReportingManager.GetPalletReport(Id); rd.SetDataSource(test); Response.Buffer = false; Response.ClearContent(); Response.ClearHeaders(); try { Random n = new Random(100); int x = n.Next(); Stream stram = rd.ExportToStream(CrystalDecisions.Shared.ExportFormatType.PortableDocFormat); stram.Seek(0, SeekOrigin.Begin); return File(stram, "application/pdf", "Document" + x + ".pdf"); } catch (Exception f) { } return View(); }
doc_23535344
I'm having problems dealing with OAuth 2.0, specifically with getting the access token. I'm using this code right now: # this block creates the access_token; it always refreshes the access_token and returns it to the main routine SCOPES = 'https://www.googleapis.com/auth/drive' creds = None if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', SCOPES) # This block can be a problem for a server. You always need to run this block on a desktop to obtain the keys with authentication. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( 'credentials.json', SCOPES) creds = flow.run_local_server(port=0) # Save the credentials for next time with open('token.json', 'w') as token: token.write(creds.to_json()) a_file = open('token.json', "r") tokens = json.load(a_file) access_token = tokens['token'] This code has some problems: * *The first time it runs, it requires manual confirmation (a browser tab opens and then you have to click some stuff). *When the first access token expires, it doesn't work anymore (so the token is not refreshed). Is there a way to do it automatically (without the need for a human)? And how do I update the access token using this? EDIT: Since it was pointed out, here's the usual content of token.json: {"token": "*******", "refresh_token": "*****", "token_uri": "********", "client_id": "*********************", "client_secret": "****************", "scopes": "https://www.googleapis.com/auth/drive", "expiry": "2021-09-17T15:45:39Z"} A: Part one When the first access token expires, it doesn't work anymore (so the token is not refreshed). First thing: please check token.json and verify that there is a refresh token being stored. Or you can have a look at the code below. def initialize_drive(): """Initializes the drive service object. Returns: drive, an authorized drive service object. 
""" # Parse command-line arguments. parser = argparse.ArgumentParser( formatter_class=argparse.RawDescriptionHelpFormatter, parents=[tools.argparser]) flags = parser.parse_args([]) # Set up a Flow object to be used if we need to authenticate. flow = client.flow_from_clientsecrets( CLIENT_SECRETS_PATH, scope=SCOPES, message=tools.message_if_missing(CLIENT_SECRETS_PATH)) # Prepare credentials, and authorize HTTP object with them. # If the credentials don't exist or are invalid run through the native client # flow. The Storage object will ensure that if successful the good # credentials will get written back to a file. storage = file.Storage('drive.dat') credentials = storage.get() if credentials is None or credentials.invalid: credentials = tools.run_flow(flow, storage, flags) http = credentials.authorize(http=httplib2.Http()) # Build the service object. drive = build('drive', 'v3', http=http) return drive Part two Is there a way to do it automatically (without the need for a human)? And how do I update the access token using this? Yes, you can use a service account. Service accounts are like dummy users. A service account has its own Google Drive account, so you can upload and download from that. You can also share a directory on your personal drive account with the service account, like you would with any other user. Then the service account would have permission to upload and download from that. As it is pre-authorized, there will be no need for a user to consent to its access. Note: service accounts should only be used on a drive account that you, the developer, own. If you are accessing your users' drive accounts then you should go with Oauth2. def initialize_drive(): """Initializes a Drive API v3 service object. Returns: An authorized Google Drive API v3 service object. """ credentials = ServiceAccountCredentials.from_json_keyfile_name( KEY_FILE_LOCATION, SCOPES) # Build the service object. 
drive = build('drive', 'v3', credentials=credentials) return drive Links * *How to create a service account (remember to enable the Drive API under Library). *Lots of info on service accounts Should you be using a service account in 2021 *Share a folder with a service account. Note: the video is C# but this spot shows how to share the folder with the service account.
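As a quick way to see why the original flow stops working, the expiry field in the token.json shown above can be checked directly before deciding whether creds.refresh() is needed. A minimal sketch (the helper name and the hard-coded sample value are illustrative additions, not part of the original code):

```python
import json
from datetime import datetime, timezone

def token_expired(token_info):
    """Return True when the stored access token is past its expiry.

    Assumes the token.json layout shown in the question, where
    'expiry' is an ISO-8601 UTC timestamp such as '2021-09-17T15:45:39Z'.
    """
    expiry = datetime.strptime(token_info["expiry"], "%Y-%m-%dT%H:%M:%SZ")
    expiry = expiry.replace(tzinfo=timezone.utc)
    return datetime.now(timezone.utc) >= expiry

# Example with a redacted token.json like the one in the question:
tokens = json.loads('{"token": "xxx", "expiry": "2021-09-17T15:45:39Z"}')
print(token_expired(tokens))  # True, since that timestamp is in the past
```

When this returns True and a refresh_token is present, creds.refresh(Request()) can run unattended; the browser consent tab only has to open once, on the very first run.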
doc_23535345
Example inputs: Hello {{first-name}}, how are you? The event {{event-name-address}} Example outputs: Hello {{first_name}}, how are you? The event {{event_name_address}} This is the regex I tried to do: {{.+(-).+}}, and this is the preg_replace PHP function I tried to use: $template = preg_replace("{{.+(-).+}}", "$1_", $template); This doesn't seem to work. What am I doing wrong? Thanks! A: Curly brackets should be escaped, they are regex metacharacters. Then, use two groups for the parts before and after the -: (\{\{[^}]+)-([^}]+\}\}) and replace with ${1}_$2. [^}] is used to say "any symbol, but not }". UPD: thanks to @CarySwoveland for help with clarifying. A: If double-braces are matched and not nested you can substitute matches of (?<=[^{])\-(?=[^{}]+}}) with '_'. This assumes the hyphen between matched pairs of double-braces must be preceded by a character other than '{' and must be followed by a character other than '}'.1 Note this permits matching multiple hyphens within the same matched pair of double-braces. This relies on the assumption that if a hyphen is followed, after intervening characters that are neither open nor closed braces, by a pair of closing double-braces, that hyphen must be preceded by a pair of open double-braces, with intervening characters that are neither open nor closed braces. Demo The regular expression can be broken down as follows: (?<= # begin a positive lookbehind [^{] # match a character other than '{' ) # end positive lookbehind \- # match '-' (?= # begin a positive lookahead [^{}]+ # match one or more characters other than '{' and '}' }} # match '}}' ) # end positive lookahead 1. If this is not a requirement the expression can be simplified to \-(?=[^{}]*}}). A: You can use preg_replace('~(?:\G(?!^)|{{)[^{}-]*\K-(?=[^{}]*}})~', '_', $template) See the regex demo. 
Details: * *(?:\G(?!^)|{{) - either the end of the previous match or {{ *[^{}-]* - zero or more chars other than {, } and a - char *\K - match reset operator that omits the text matched so far from the memory buffer *- - a hyphen *(?=[^{}]*}}) - a location immediately followed by zero or more chars other than { and } and then }}.
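If the lookarounds and \G/\K constructs feel opaque, the same result can be had by matching each whole placeholder and rewriting it in a callback, which is what preg_replace_callback does in PHP. A sketch of the idea (shown in Python purely for illustration; the function name is made up):

```python
import re

def underscore_placeholders(template):
    # Match each {{...}} placeholder and replace hyphens only inside it;
    # text outside the double braces is left untouched.
    return re.sub(r"\{\{[^{}]+\}\}",
                  lambda m: m.group(0).replace("-", "_"),
                  template)

print(underscore_placeholders("The event {{event-name-address}}"))
# The event {{event_name_address}}
```

Like the \G/\K solution, this handles any number of hyphens per placeholder in a single pass.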
doc_23535346
[INPUT] Name tail Path /var/log/* Only files directly under /var/log/ are handled, but files in sub-directories are not handled. I've also tried using the ** syntax, but Fluent Bit doesn't support this. Is there a way to upload an entire directory, with its sub-directories, with Fluent Bit?
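One commonly used workaround (a sketch only; verify against your Fluent Bit version, since the tail input has no recursive glob) is to enumerate each directory level explicitly. The tail input's Path accepts several comma-separated patterns:

```
[INPUT]
    Name    tail
    Path    /var/log/*,/var/log/*/*,/var/log/*/*/*
```

Each additional /* covers one more level of nesting, so the maximum depth has to be spelled out up front.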
doc_23535347
Basically the idea would be to: 1. Create a self-hosted OWIN server serving static/already defined controllers (web APIs) -> this part is ok *At a later time, I want to dynamically generate a new controller and add it somehow to the server so that clients can send requests to it. -> is there a way to do that? I know I can dynamically build a controller and add it to the server BEFORE it is initialized to serve existing web APIs (using CustomAssemblyResolver). *Now an existing controller may need to be updated. I would like to re-generate an existing controller and update the server to use the new definition (maybe a parameter change, the names of APIs changed, etc.) Is there any way to do that? Can we recycle a controller without stopping all the controllers? If somehow this can be supported, does it mean the service will not be available for some time (until the update is done)? *Ideally it would work like a web service hosted in IIS. If the web service definition changes between 2 requests, the 1st request goes to the old definition and the 2nd request is transparently directed to the new definition. There is no interruption of service. Any ideas? Thanks in advance A: Found the solution for it. In case somebody else is looking for this, I need to override the DefaultHttpControllerSelector. Here is a very nice article on the subject: link So basically for my use case mentioned above, I need to create a new AppDomain, start my service in it, and load my assemblies dynamically at runtime. I finally need to override the DefaultHttpControllerSelector to catch the request. When the request arrives, I then have control over which controller I want to use. There I can update the controller dynamically by loading a new assembly, etc. The main thing to be careful about is that this is executed for each request, so it could easily impact performance and memory. So I will implement my own caching of controllers.
doc_23535348
for (int i = 0; i < n; i++) { for (int j = 0; j < 6; j++) { cin >> Array[i][j]; } } } int main() { int n; cin >> n; float** Array = new float*[n]; Insert(Array,n); return 0; } The code above is my bare-bones attempt at passing and inserting values into a dynamically allocated array. The code compiles and lets me input the value n, but inputting the very first number into the array results in this exception: 0xC0000005: Access violation writing location 0xCDCDCDCD. I believe there's a problem with the way I'm inserting into the array but can't quite figure it out. Also, I've read about 0xCDCDCDCD and that I'm trying to write into non-existent memory or whatnot, but can't figure it out. Also, the j value is supposed to be less than 6 for a reason.
doc_23535349
#include<stdio.h> #define TOTAL_ELEMENTS (sizeof(array) / sizeof(array[0])) int array[] = {23,34,12,17,204,99,16}; int main() { int d; for(d=-1;d <= (TOTAL_ELEMENTS-2);d++) printf("%d\n",array[d+1]);//printing the array return 0; }//looks simple but no result What's going wrong? Why am I not getting any output? A: In the comparison d <= (TOTAL_ELEMENTS-2) TOTAL_ELEMENTS has type size_t so d is converted to unsigned. For, say, sizeof(size_t)==4, this makes the test 0xffffffff <= 5, which fails, so the loop body never runs. If you really want to start your loop counter from -1, d <= (int)(TOTAL_ELEMENTS-2) would work
doc_23535350
You can see the table in this picture. What I need to do is: if the time (column C) was between 21:00 and 3:00, the value in column G has to be 0 and should be added to the number that's 144 rows below it. Otherwise, if the time was between 7:00 and 21:00, do nothing. Thank you in advance, I hope you have a great day. A: Per my understanding of the question (some clarification is needed), I would recommend reading the post: Excel compare time or apply condition on timestamps to understand how to handle timestamps in Excel. Following the steps indicated in the referred post, this is how it can be done. The decimal representations of the hours (just format as a Number to obtain the numeric representation of the hour) we are interested in are: * *7:00 -> 0.29 *21:00 -> 0.87 Standardizing the timestamp to date 1/0/1900 as follows: =IF(A2="","", MOD(A2,1)) Then we can do the comparison to identify the shift in the Shift column: =IF(A2="","", IF(AND(B2>=0.29, B2<0.87), "Day Shift", "Night Shift")) Created a helper column for easier identification, but all can be done without helper columns. Then finally we can do the calculation: =LET(offsetValue, OFFSET(D2,-144,0), IF(D2="", "", IF(ISNUMBER(offsetValue), D2+offsetValue, "Non Valid Qty"))) If there is no previous information then Non Valid Qty is returned, otherwise the Qty from 144 rows before is added to the current Qty value. Used the LET function for better readability. Here is a screenshot: Here is a public link to a sample Excel file.
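The boundary fractions used in the formulas follow from Excel storing timestamps in units of days, so MOD(A2,1) is simply hours/24. That arithmetic can be sanity-checked outside Excel; a small illustration (Python used only as a calculator here, and the function names are made up):

```python
def day_fraction(hour, minute=0):
    # Excel stores timestamps as days, so MOD(A2, 1) yields the time of
    # day as a fraction of 24 hours: 7:00 -> 7/24, 21:00 -> 21/24 = 0.875.
    return (hour * 60 + minute) / (24 * 60)

def shift_for(hour, minute=0):
    # Mirrors IF(AND(B2>=0.29, B2<0.87), "Day Shift", "Night Shift"),
    # using the exact fractions instead of the rounded constants.
    f = day_fraction(hour, minute)
    return "Day Shift" if day_fraction(7) <= f < day_fraction(21) else "Night Shift"

print(shift_for(8))    # Day Shift
print(shift_for(22))   # Night Shift
```

Note that 21/24 is exactly 0.875; the 0.87 in the spreadsheet formula is a two-decimal truncation, which moves the evening boundary earlier by roughly seven minutes.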
doc_23535351
A: By staking, I assume you mean liquidity mining. You can plug in quarry -- https://github.com/QuarryProtocol/quarry and use their already existing program to create a new liquidity pool
doc_23535352
When I do, I get the following exception in my console: 22:18:05,283 ERROR [org.springframework.web.servlet.DispatcherServlet] (MSC service thread 1-7) Context initialization failed: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping#0': Invocation of init method failed; nested exception is java.lang.VerifyError: (class: org/springframework/web/servlet/mvc/condition/ProducesRequestCondition, method: compareMatchingMediaTypes signature: (Lorg/springframework/web/servlet/mvc/condition/ProducesRequestCondition;ILorg/springframework/web/servlet/mvc/condition/ProducesRequestCondition;I)I) Incompatible argument to function at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1553) [spring-beans-4.0.6.RELEASE.jar:4.0.6.RELEASE] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:539) [spring-beans-4.0.6.RELEASE.jar:4.0.6.RELEASE] Edit The pom.xml <?xml version="1.0" encoding="UTF-8"?> <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <groupId>com.myCompany</groupId> <artifactId>myPackage</artifactId> <name>myPackage</name> <packaging>war</packaging> <version>1.0.0-BUILD-SNAPSHOT</version> <properties> <java-version>1.7</java-version> <org.springframework-version>4.0.6.RELEASE</org.springframework-version> <org.aspectj-version>1.6.10</org.aspectj-version> <org.slf4j-version>1.6.6</org.slf4j-version> <jackson.version>1.9.11</jackson.version> </properties> <dependencies> <!-- Spring --> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> 
<version>${org.springframework-version}</version> <exclusions> <!-- Exclude Commons Logging in favor of SLF4j --> <exclusion> <groupId>commons-logging</groupId> <artifactId>commons-logging</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-webmvc</artifactId> <version>${org.springframework-version}</version> </dependency> <!-- AspectJ --> <dependency> <groupId>org.aspectj</groupId> <artifactId>aspectjrt</artifactId> <version>${org.aspectj-version}</version> </dependency> <!-- Logging --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-api</artifactId> <version>${org.slf4j-version}</version> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>jcl-over-slf4j</artifactId> <version>${org.slf4j-version}</version> <scope>runtime</scope> </dependency> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-log4j12</artifactId> <version>${org.slf4j-version}</version> <scope>runtime</scope> </dependency> <dependency> <groupId>log4j</groupId> <artifactId>log4j</artifactId> <version>1.2.15</version> <exclusions> <exclusion> <groupId>javax.mail</groupId> <artifactId>mail</artifactId> </exclusion> <exclusion> <groupId>javax.jms</groupId> <artifactId>jms</artifactId> </exclusion> <exclusion> <groupId>com.sun.jdmk</groupId> <artifactId>jmxtools</artifactId> </exclusion> <exclusion> <groupId>com.sun.jmx</groupId> <artifactId>jmxri</artifactId> </exclusion> </exclusions> <scope>runtime</scope> </dependency> <!-- @Inject --> <dependency> <groupId>javax.inject</groupId> <artifactId>javax.inject</artifactId> <version>1</version> </dependency> <!-- Servlet --> <dependency> <groupId>javax.servlet</groupId> <artifactId>servlet-api</artifactId> <version>2.5</version> <scope>provided</scope> </dependency> <dependency> <groupId>javax.servlet.jsp</groupId> <artifactId>jsp-api</artifactId> <version>2.1</version> <scope>provided</scope> </dependency> <dependency> 
<groupId>javax.servlet</groupId> <artifactId>jstl</artifactId> <version>1.2</version> </dependency> <!-- Test --> <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.7</version> <scope>test</scope> </dependency> <!-- Jackson JSON Mapper --> <!-- <dependency> <groupId>org.codehaus.jackson</groupId> <artifactId>jackson-mapper-asl</artifactId> <version>${jackson.version}</version> </dependency> --> <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-core</artifactId> <version>2.2.3</version> </dependency> <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-databind</artifactId> <version>2.2.3</version> </dependency> <dependency> <groupId>com.fasterxml.jackson.core</groupId> <artifactId>jackson-annotations</artifactId> <version>2.2.3</version> </dependency> <!-- Postgres --> <dependency> <groupId>org.postgresql</groupId> <artifactId>postgresql</artifactId> <version>9.2-1002-jdbc4</version> </dependency> </dependencies> <build> <plugins> <plugin> <artifactId>maven-eclipse-plugin</artifactId> <version>2.9</version> <configuration> <additionalProjectnatures> <projectnature>org.springframework.ide.eclipse.core.springnature</projectnature> </additionalProjectnatures> <additionalBuildcommands> <buildcommand>org.springframework.ide.eclipse.core.springbuilder</buildcommand> </additionalBuildcommands> <downloadSources>true</downloadSources> <downloadJavadocs>true</downloadJavadocs> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-compiler-plugin</artifactId> <version>2.5.1</version> <configuration> <source>1.7</source> <target>1.7</target> <compilerArgument>-Xlint:all</compilerArgument> <showWarnings>true</showWarnings> <showDeprecation>true</showDeprecation> </configuration> </plugin> <plugin> <groupId>org.codehaus.mojo</groupId> <artifactId>exec-maven-plugin</artifactId> <version>1.2.1</version> <configuration> 
<mainClass>org.test.int1.Main</mainClass> </configuration> </plugin> </plugins> </build> </project> And, my servlet-context.xml: <?xml version="1.0" encoding="UTF-8"?> <beans:beans xmlns="http://www.springframework.org/schema/mvc" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:beans="http://www.springframework.org/schema/beans" xmlns:context="http://www.springframework.org/schema/context" xmlns:mvc="http://www.springframework.org/schema/mvc" xsi:schemaLocation="http://www.springframework.org/schema/mvc http://www.springframework.org/schema/mvc/spring-mvc.xsd http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context.xsd"> <!-- DispatcherServlet Context: defines this servlet's request-processing infrastructure --> <!-- Enables the Spring MVC @Controller programming model --> <annotation-driven /> <!-- Handles HTTP GET requests for /resources/** by efficiently serving up static resources in the ${webappRoot}/resources directory --> <resources mapping="/resources/**" location="/resources/" /> <!-- Resolves views selected for rendering by @Controllers to .jsp resources in the /WEB-INF/views directory --> <beans:bean class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <beans:property name="prefix" value="/WEB-INF/views/" /> <beans:property name="suffix" value=".jsp" /> </beans:bean> <context:component-scan base-package="com.myCompany.myPackage" /> </beans:beans> And, my web.xml: <?xml version="1.0" encoding="UTF-8"?> <web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"> <!-- The definition of the Root Spring Container shared by all Servlets and Filters --> <context-param> 
<param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/spring/root-context.xml</param-value> </context-param> <!-- Creates the Spring Container shared by all Servlets and Filters --> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <listener> <listener-class>com.myCompany.myPackage.BackgroundServletContextListener</listener-class> </listener> <!-- Processes application requests --> <servlet> <servlet-name>appServlet</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <init-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/spring/appServlet/servlet-context.xml</param-value> </init-param> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>appServlet</servlet-name> <url-pattern>/</url-pattern> </servlet-mapping> </web-app> A: As @shazin suggested, this was a versioning problem. It may have helped to show more of my stack trace, which is below, and clearly shows an old jar in the middle of the trace: spring-web-3.2.2.RELEASE.jar:3.2.2.RELEASE. I found the offending jar in my deployment directory (wildfly in my case). Doing an add/remove -> remove then add/remove -> add solved the problem. I now know more. That is a good thing. 
Caused by: java.lang.VerifyError: (class: org/springframework/web/servlet/mvc/condition/ProducesRequestCondition, method: compareMatchingMediaTypes signature: (Lorg/springframework/web/servlet/mvc/condition/ProducesRequestCondition;ILorg/springframework/web/servlet/mvc/condition/ProducesRequestCondition;I)I) Incompatible argument to function at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping.createRequestMappingInfo(RequestMappingHandlerMapping.java:242) [spring-webmvc-4.0.6.RELEASE.jar:4.0.6.RELEASE] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping.getMappingForMethod(RequestMappingHandlerMapping.java:191) [spring-webmvc-4.0.6.RELEASE.jar:4.0.6.RELEASE] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping.getMappingForMethod(RequestMappingHandlerMapping.java:51) [spring-webmvc-4.0.6.RELEASE.jar:4.0.6.RELEASE] at org.springframework.web.servlet.handler.AbstractHandlerMethodMapping$1.matches(AbstractHandlerMethodMapping.java:152) [spring-webmvc-4.0.6.RELEASE.jar:4.0.6.RELEASE] at org.springframework.web.method.HandlerMethodSelector$1.doWith(HandlerMethodSelector.java:62) [spring-web-3.2.2.RELEASE.jar:3.2.2.RELEASE] at org.springframework.util.ReflectionUtils.doWithMethods(ReflectionUtils.java:495) [spring-core-4.0.6.RELEASE.jar:4.0.6.RELEASE] at org.springframework.web.method.HandlerMethodSelector.selectMethods(HandlerMethodSelector.java:58) [spring-web-3.2.2.RELEASE.jar:3.2.2.RELEASE] at org.springframework.web.servlet.handler.AbstractHandlerMethodMapping.detectHandlerMethods(AbstractHandlerMethodMapping.java:149) [spring-webmvc-4.0.6.RELEASE.jar:4.0.6.RELEASE] A: You can use Maven "Bill Of Materials" Dependency in the pom so that all the dependencies are compatible and up. 
<dependencyManagement> <dependencies> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-framework-bom</artifactId> <version>4.0.6.RELEASE</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> And configure each dependency as below without specifying the version. <dependencies> <dependency> <groupId>org.springframework</groupId> <artifactId>spring-context</artifactId> </dependency> ... ... </dependencies>
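Beyond pinning versions with the BOM, this class of VerifyError can usually be caught before deployment by inspecting the resolved dependency tree for mixed Spring versions (a suggestion beyond the original answers; run it from the project root):

```
mvn dependency:tree -Dincludes=org.springframework
```

Any artifact still resolving to 3.2.2.RELEASE alongside the 4.0.6.RELEASE ones is a candidate for an exclusion, or a stale copy left in the deployment directory.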
doc_23535353
Another question is that this SpeechSynthesis API supports Android and iOS devices, but when I looked at some events, such as the 'soundstart' event, they don't support Safari Mobile. What is their relationship? I got really confused. And the SpeechRecognition API is only supported in the Chrome browser, but don't I need to use some event like soundstart? Thank you for your help in advance. I really appreciate it. <p id="msg" align="center"></p> <script> var utterance = new SpeechSynthesisUtterance("Hello"); //window.speechSynthesis.speak(utterance); var supportMsg = document.getElementById('msg'); if ('speechSynthesis' in window) { supportMsg.innerHTML = 'Your browser <strong>supports</strong> speech synthesis.'; console.log("Hi"); utterance.onstart = function(event) { console.log('Hhhh') }; var playList = ["1_hello", "2_how_old", "3_what_did_you_make"]; var dir = "sound/"; var extention = ".wav"; audio.src = dir + playList[audioIndex] + extention; audio.load(); var audioIndex = 0; setTimeout(function(){audio.play();}, 1000); window.speechSynthesis.speak(utterance); } else { supportMsg.innerHTML = 'Sorry your browser <strong>does not support</strong> speech synthesis.<br>Try this in <a href="https://www.google.co.uk/intl/en/chrome/browser/canary.html">Chrome Canary</a>.'; } //window.speechSynthesis(utterance); </script> <div class="container"> <button id="runProgram" onclick='utterance.onstart();' class="runProgram-button">Start</button> </div> A: Does this work for you? function playAudio() { var msg = new SpeechSynthesisUtterance('Help me with this code please?'); msg.pitch = 0; msg.rate = .6; window.speechSynthesis.speak(msg); var msg = new SpeechSynthesisUtterance(); var voices = window.speechSynthesis.getVoices(); msg.voice = voices[10]; // Note: some voices don't support altering params msg.voiceURI = 'native'; msg.volume = 1; // 0 to 1 msg.rate = 1.2; // 0.1 to 10 msg.pitch = 2; //0 to 2 msg.text = 'Sure. 
This code plays "Hello World" for you'; msg.lang = 'en-US'; msg.onend = function(e) { var msg1 = new SpeechSynthesisUtterance('Now code plays audios for you'); msg1.voice = speechSynthesis.getVoices().filter(function(voice) { return voice.name == 'Whisper'; })[0]; msg1.pitch = 2; //0 to 2 msg1.rate= 1.2; //0 to 2 // start your audio loop. speechSynthesis.speak(msg1); console.log('Finished in ' + e.elapsedTime + ' seconds.'); }; speechSynthesis.speak(msg); } <div class="container"> <button id="runProgram" onclick='playAudio();' class="runProgram-button">Start</button> </div>
doc_23535354
I am using DOORS 9.2. Here I want to export only the "Object Heading" and "Object Text" of the current open module to Excel. I don't have any idea how to start; can anyone help me with an example? Your help is highly appreciated... A: Is using DXL a strict requirement, or is the real requirement to export the Heading and Text attributes to Excel? Because you can do that without using DXL. You need to create/modify a view (temporary or persistent) in the current open module with exactly the attributes you want. Object Heading and Object Text in this case. Remove from the view any other attributes, which are all the default attributes, and then add the new attributes. The best way for you to do this is probably using the Edit->Columns... menu item which opens an Edit Columns Dialog. Once you have the view, then it is a simple matter of going File->Export->Microsoft Office->Excel... I am using 9.3.0.3, but from what I remember 9.2 really is not that much different. A: If you need this to be a script because you plan to do it often then the following code will output a csv file (which by default should open with Excel) with the heading and text of each object in the document. Object o Module m = current Stream outfile = write("SomeFilePathHere.csv") for o in m do { outfile << o."Object Heading" "," o."Object Text" "\n" } close outfile Otherwise James's response is accurate for avoiding scripting.
doc_23535355
This seems to work using find() but I would think there is a cleaner solution using findOne() and maybe sort()? Can anyone help out with a better way of writing this, please? $mongo = new Mongo(); $db = $mongo->mydb; $collection = $db->user; $cursor = $collection->find(); $i=0; foreach ($cursor as $obj){ if ($i==3) echo $obj["_id"];//echoes the 3rd entry id $i++; } The solution provided here is not applicable to my problem, which is why I'm asking this question. A: Skip() does not use an index effectively so putting an index on any field within the collection would be pointless. Since you wish to skip() to the nth document, if the value of skip() is low (depends on your system but normally under 100K rows on a full collection scan) then it might be OK. The problem is that normally it is not. Mongo, even with an index, will be required to scan the entire result set before being able to skip over it which will induce a full collection scan no matter what in your query. If you were to do this often and in random ways it might be better to use an incrementing ID, combining another collection with findAndModify to produce an accurate document number ( http://www.mongodb.org/display/DOCS/How+to+Make+an+Auto+Incrementing+Field ). This however introduces problems: you must maintain this ID, especially when deletes occur. One method around this is to mark documents as deleted instead of actually deleting them. When you query for the exact document you omit deletes and limit() by one allowing you to get the next closest document to that position like so: $cur = $db->collection->find(array('ai_id' => array('$gt' => 403454), 'deleted' => 0))->limit(1); $cursor->next(); $doc = $cursor->current(); A: Use skip of (n-1), from your code above: $cursor = $collection->find(); $cursor->skip(3); $cursor->next(); $doc = $cursor->current(); But I think probably the best way is to keep a counter in your document that you can use as an index.
doc_23535356
Error: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'WHERE company='ABC' AND branch='26' AND owner IS NULL' at line 1 $sql="SELECT * FROM spr ORDER BY id WHERE company='$_SESSION[company]' AND branch='$_SESSION[branch]' AND owner IS NULL"; I can't see what is going wrong with my query. Somebody help please... A: The ORDER BY clause must come after the WHERE clause. A: Try this, it's the ORDER BY clause which should be at the end of the statement $sql="SELECT * FROM spr WHERE company='".$_SESSION[company]."' AND branch='".$_SESSION[branch]."' AND owner IS NULL ORDER BY id"; A: $sql="SELECT * FROM spr WHERE company='$_SESSION[company]' AND branch='$_SESSION[branch]' AND owner IS NULL ORDER BY id"; A: You need braces {} around array values and object properties when using double quotes in PHP. $sql="SELECT * FROM spr WHERE company='{$_SESSION[company]}' AND branch='{$_SESSION[branch]}' AND owner IS NULL ORDER BY id "; And your ORDER BY clause needs to come at the end.
doc_23535357
Example: www.example.com/?bla_bla should redirect to www.example.com/ www.example.com/test/?bla_bla_evil_querystring should redirect to www.example.com/test/ www.example.com/test.html?bla_bla should redirect to www.example.com/test.html I am looking for a site-wide solution to redirect any URL with query strings to the same URL without query strings. Thank you! A: The easiest way to do this would be with mod_rewrite. RewriteEngine on RewriteCond %{QUERY_STRING} . RewriteRule ^ %{REQUEST_URI} [R,QSD,L] This rule matches if the query string is at least 1 character long. The QSD flag will discard the query string. The R flag will cause a redirect. After you've tested that this rule works as expected, replace the R flag with R=301 to make the redirect permanent. On versions where the QSD flag is not yet implemented, you can use the following code instead. Please note that this will leave a trailing ? behind your URLs, but there is nothing I can do about that. Consider upgrading your Apache to the latest version. RewriteRule ^ %{REQUEST_URI}? [R,L]
doc_23535358
Now, I would like to validate whether a parsed JSON document complies with a JSON schema, which is itself parsed from a file. There is a JSON schema module for Jackson (https://github.com/FasterXML/jackson-module-jsonSchema). However, it appears to me that its primary focus is on creating a JSON schema file from within Java. What is a good way to validate JSON against a JSON schema in Java? - preferably using Jackson, but I am also open to other solutions. A: As far as I know Jackson can only produce schemas for given types, but not do validation. There is json-schema-validator but it is no longer maintained. A: 1.) Add Dependency pom.xml :- <dependency> <groupId>com.github.fge</groupId> <artifactId>json-schema-validator</artifactId> <version>2.2.6</version> </dependency> 2.) NoSqlEntity is metadata for an entity that can reside in a NoSQL database. NoSqlEntity is initialized with a schema file. public static final NoSqlEntity entity = new NoSqlEntity("PAYOUT_ENTITY", "DB_","/schema/payout_entity.json"); public class NoSqlEntity { private static final Map<String, NoSqlEntity> STORE = new HashMap<>(); private final AtomicLong seq = new AtomicLong(System.currentTimeMillis()); private IdentityGenerator identityGenerator; private String entity; private String collectionName; private String jsonSchema; private String idColumn = "id"; private String database; public NoSqlEntity(String entity, String idColumn, String collectionPrefix, String jsonSchema) { this.entity = entity; this.idColumn = idColumn; this.collectionName = collectionPrefix + "_" + entity; this.jsonSchema = jsonSchema; STORE.put(entity, this); } public NoSqlEntity(String collectionName, String jsonSchema) { this.collectionName = collectionName; this.jsonSchema = jsonSchema; } public static NoSqlEntity valueOf(String entityType) { return STORE.get(entityType); } public boolean isNotNullSchema() { return jsonSchema != null; } ... // Other Getter/Setter properties and methods. } 3.) 
Sample format of the validation schema file payout_entity.json: { "properties": { "txId": {"type":"string"} }, "required": ["txId","currency"] } 4.) JsonSchemaManager validates the incoming JSON against the schema and caches the schema as well. public class JsonSchemaManager { private final static Logger LOGGER = LoggerFactory.getLogger(JsonSchemaManager.class); protected final static String LS = StandardSystemProperty.LINE_SEPARATOR.value(); private final JsonValidator validator = JsonSchemaFactory.byDefault().getValidator(); private final Map<NoSqlEntity, JsonNode> schemaMap = new HashMap<>(); public JsonNode load(NoSqlEntity noSqlEntity) throws IOException { final JsonNode schema = JsonLoader.fromURL(this.getClass().getResource(noSqlEntity.getJsonSchema())); schemaMap.put(noSqlEntity, schema); return schema; } public void validateSchema(NoSqlEntity noSqlEntity, JsonNode toBeValidated, Consumer<ProcessingReport> consumer) { try { JsonNode schema = schemaMap.get(noSqlEntity); if (schema == null) { schema = load(noSqlEntity); } final ProcessingReport report = validator.validate(schema, toBeValidated); if (!report.isSuccess()) { consumer.accept(report); } } catch (IOException ex) { //NOSONAR throw new InvalidRequestException(ex.toString()); } catch (ProcessingException ex) { //NOSONAR throw new InvalidRequestException(ex.toString()); } } public synchronized boolean synchronizedCheck(NoSqlEntity noSqlEntity, JsonNode toBeValidated, Consumer<Map<String, Object>> messageConsumers) { boolean flags = CommonUtils.unchecked(() -> { validateSchema(noSqlEntity, toBeValidated, report -> { report.forEach(processingMessage -> messageConsumers.accept(JsonConverter.jsonAsMapObject(processingMessage.asJson()))); }); return true; }, ex -> { throw new RuntimeException(ex.toString()); //NOSONAR }); return flags; } } 5.) NoSqlRepository, which persists metadata into the NoSQL DB. 
@Component public class NoSqlRepository { private static final Logger LOGGER = LoggerFactory.getLogger(NoSqlRepository.class); private final DocumentFormat documentFormat = DocumentFormat.JSON; private static final String SEPARATOR = ","; private static final ThreadLocal<MyLocalVariable> THREAD_LOCAL_VARIABLES = ThreadLocal.withInitial(() -> new MyLocalVariable()); static class MyLocalVariable { private JsonSchemaManager schemaManager = new JsonSchemaManager(); private BasicBSONDecoder bsonDecoder = new BasicBSONDecoder(); public JsonSchemaManager getSchemaManager() { return schemaManager; } public BasicBSONDecoder getBsonDecoder() { return bsonDecoder; } } private void checkSchemaIfAny(NoSqlEntity noSqlEntity, JsonNode entity) { if (noSqlEntity.isNotNullSchema()) { THREAD_LOCAL_VARIABLES.get().getSchemaManager().check(noSqlEntity, entity); } } public String saveEntity(NoSqlEntity noSqlEntity, JsonNode entity){ // Before persisting payload into noSQL, validate payload against schema. this.checkSchemaIfAny(noSqlEntity,entity); } // Other CRUD methods here... } A: As described here, the feature development of the Jackson validator has been stopped. However, I found networknt's json schema validator very much alive and interesting as of now. You can refer to these contents for a quick start. 
Maven dependency:

<dependency>
    <groupId>com.networknt</groupId>
    <artifactId>json-schema-validator</artifactId>
    <version>1.0.49</version>
</dependency>

Code snippet:

String jsonOrder = SampleUtil.getSampleJsonOrder(); // replace or write your own
System.out.println(jsonOrder);

// Read the JSON schema from the classpath
JsonSchemaFactory factory = JsonSchemaFactory.getInstance(SpecVersion.VersionFlag.V7);
InputStream is = Thread.currentThread().getContextClassLoader()
        .getResourceAsStream("order-schema.json");
JsonSchema schema = factory.getSchema(is);

// Read the JSON to validate
try {
    JsonNode node = mapper.readTree(jsonOrder);
    Set<ValidationMessage> errors = schema.validate(node);
    System.out.println("Errors in first json object: " + errors);
} catch (IOException e) {
    e.printStackTrace();
}

// Test with an invalid JSON (missing the required foId)
String emptyFoIdOrder = "{\"gtins\":[\"1\",\"2\",\"3\",\"4\"],\"storeId\":121,\"deliveryAddress\":\"Any street, some house - PIN 2021\"}";
try {
    JsonNode node = mapper.readTree(emptyFoIdOrder);
    Set<ValidationMessage> errors = schema.validate(node);
    System.out.println("Errors in second json object: " + errors);
} catch (IOException e) {
    e.printStackTrace();
}

Sample JSON schema used:

{
  "$schema": "http://json-schema.org/draft-07/schema",
  "$id": "http://example.com/example.json",
  "type": "object",
  "title": "The root schema",
  "description": "The root schema comprises the entire JSON document.",
  "default": {},
  "examples": [
    {
      "foId": "9876",
      "gtins": ["1", "2", "3", "4"],
      "storeId": 121,
      "deliveryAddress": "Any street, some house - PIN 2021"
    }
  ],
  "required": ["foId", "gtins", "storeId", "deliveryAddress"],
  "properties": {
    "foId": {
      "$id": "#/properties/foId",
      "type": "string",
      "title": "The foId schema",
      "description": "An explanation about the purpose of this instance.",
      "default": "",
      "examples": ["9876"]
    },
    "gtins": {
      "$id": "#/properties/gtins",
      "type": "array",
      "title": "The gtins schema",
      "description": "An explanation about the purpose of this instance.",
      "default": [],
"examples": [ [ "1", "2" ] ], "additionalItems": true, "items": { "$id": "#/properties/gtins/items", "anyOf": [ { "$id": "#/properties/gtins/items/anyOf/0", "type": "string", "title": "The first anyOf schema", "description": "An explanation about the purpose of this instance.", "default": "", "examples": [ "1", "2" ] } ] } }, "storeId": { "$id": "#/properties/storeId", "type": "integer", "title": "The storeId schema", "description": "An explanation about the purpose of this instance.", "default": 0, "examples": [ 121 ] }, "deliveryAddress": { "$id": "#/properties/deliveryAddress", "type": "string", "title": "The deliveryAddress schema", "description": "An explanation about the purpose of this instance.", "default": "", "examples": [ "Any streeam, some house - PIN 2021" ] } }, "additionalProperties": true } A: Just stumbled about https://github.com/leadpony/justify another implementation of a validator for json schema, also more recent draft versions. (7,6,4)
doc_23535359
The point is: sometimes a test fails because Selenium does not find one of the elements, but in reality Selenium clicked the element before it was actually displayed on the page. Before the click I check isElementVisible and isElementPresent, but it doesn't help. I also put a Thread.sleep before every click.

This is my piece of code for waitAndClick:

public void waitAndClick(String locator) throws Exception {
    long ts = System.currentTimeMillis();
    for (int second = 0;; second++) {
        if (selenium.isElementPresent(locator) == true && selenium.isVisible(locator) == true) {
            //System.out.println("click true");
            Thread.sleep(250);
            selenium.click(locator);
            //selenium.fireEvent(locator, "click");
            break;
        } else {
            //System.out.println("click false");
            Thread.sleep(100);
        }
        if (System.currentTimeMillis() - ts > 20000) {
            throw new Exception("WaitAndClick for " + locator + " out of 20 seconds");
        }
    }
}
doc_23535360
To do this I'm using the following function, which first builds the objects:

sorted: function(){
    var pages = this.selectedKKS['pages'];
    var list;
    try {
        list = [];
        Object.keys(pages).forEach(function(key){
            console.log(key + " is the key")
            var obj = {};
            obj.title = key;
            obj.page = pages[key]
            list.push(obj)
        });
    } catch(e){
        console.log(e);
    }
    var sorted = list.sort(function(a, b){
        console.log('a.page is ' + a.page + ' and b.page is ' + b.page);
        return a.page > b.page;
    });
    return sorted;
}

Just to make sure that I'm actually comparing pages as I intend, here's the console log:

a.page is 84 and b.page is 28
a.page is 84 and b.page is 46
a.page is 28 and b.page is 46
a.page is 84 and b.page is 35
a.page is 46 and b.page is 35
a.page is 28 and b.page is 35
a.page is 84 and b.page is 14
a.page is 46 and b.page is 14
a.page is 35 and b.page is 14
a.page is 28 and b.page is 14
a.page is 84 and b.page is 5
a.page is 46 and b.page is 5
a.page is 84 and b.page is 8
a.page is 5 and b.page is 8

I'm looping over this computed property in my template, and since it's being sorted incorrectly it's giving me an undesirable result:

<f7-list>
    <f7-list-item v-for="val in sorted" @click="openpdf(selectedKKS.url, val.page)">
        <f7-col><span style="color: black">{{ val.title }}</span></f7-col>
        <f7-col>{{ val.page }}</f7-col>
    </f7-list-item>
</f7-list>

Any ideas as to what could be going wrong?

A: Because the values are strings, they're sorted in lexical (alphabetical) instead of numerical order.
Change your sort function as follows:

list.sort(function(a, b){
    return Number(a.page) > Number(b.page);
});

Or better still:

list.sort(function(a, b){
    return Number(a.page) - Number(b.page);
});

As noted in the comments, it's better to do the number conversion during object creation to avoid having to repeatedly carry it out for each comparison.

A: Try '-' instead of '>' inside the sort function:

var sorted = list.sort(function(a, b){
    console.log('a.page is ' + a.page + ' and b.page is ' + b.page);
    return a.page - b.page;
});

Hope it helps!!

A: The callback function for sort must return an integer: a value < 0 for less-than, 0 for equal, and a value > 0 for greater-than.
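The lexical-vs-numeric difference the answers describe can be seen in a minimal, self-contained sketch (the sample values mirror the page numbers from the question's console log):

```javascript
// The question's pages ("84", "28", …) are strings, so the default sort is lexical.
const pages = ["84", "28", "46", "35", "14", "5", "8"];

// Default sort compares strings: "5" sorts after "46" because "5" > "4".
const lexical = [...pages].sort();

// Converting once up front, then using a subtraction comparator, sorts numerically.
const numeric = pages.map(Number).sort((a, b) => a - b);

console.log(lexical); // → [ '14', '28', '35', '46', '5', '8', '84' ]
console.log(numeric); // → [ 5, 8, 14, 28, 35, 46, 84 ]
```

Doing the Number() conversion in the map step, rather than inside the comparator, also avoids converting the same value over and over during the sort.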
doc_23535361
I know Firebase uses https, but looking around, it seems Firebase does not yet make encryption at rest available. Is there a way around this to use Firebase and still make an administrator unable to read the data from the Firebase Forge, for instance? Thank you. A: If you encrypt all data that you store in Firebase with a key that is only known to the client, it will not be readable by anyone but that client. Update (20160528): As of a few months ago all data for the Firebase Database is also encrypted at rest.
doc_23535362
One possible solution is to try to rewrite Swashbuckle and rewrite the RabbitMQ receiving part to look like Web API controllers, but I'm not sure how much work that would take and I would like to avoid that route. Or maybe I'm doing it the wrong way, but the main idea is to have a queue that helps with performance problems, since at some moments there might be too many messages, and when message processing fails the message will remain in the queue until the problem is fixed. RabbitMQ looks good enough for that, but the protocol-description part is missing here.

A: Swagger is for RESTful APIs. If you like the richer messaging semantics of RabbitMQ you can add something like:

*protobuf (possibly with gRPC)
*thrift (we're using this with ZeroMQ)
doc_23535363
Issue here is I am able to store data successfully but not able to retrieve it from another activity. What else do I need to change in my code to retrieve data from another activity?

EditText ed1, ed2, ed3;
Button b1;
public static final String MyPREFERENCES = "MyPrefs";
public static final String Name = "nameKey";
public static final String Phone = "phoneKey";
public static final String Email = "emailKey";
SharedPreferences sharedpreferences;

Main Activity

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);
    ed1 = (EditText) findViewById(R.id.editText);
    ed2 = (EditText) findViewById(R.id.editText2);
    ed3 = (EditText) findViewById(R.id.editText3);
    b1 = (Button) findViewById(R.id.button);
    sharedpreferences = getSharedPreferences(MyPREFERENCES, Context.MODE_PRIVATE);
    b1.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View v) {
            String n = ed1.getText().toString();
            String ph = ed2.getText().toString();
            String e = ed3.getText().toString();
            SharedPreferences.Editor editor = sharedpreferences.edit();
            editor.putString(Name, n);
            editor.putString(Phone, ph);
            editor.putString(Email, e);
            editor.commit();
            Toast.makeText(MainActivity.this, "Thanks", Toast.LENGTH_LONG).show();
        }
    });
}

@Override
public boolean onCreateOptionsMenu(Menu menu) {
    // Inflate the menu; this adds items to the action bar if it is present.
    getMenuInflater().inflate(R.menu.menu_main, menu);
    return true;
}

@Override
public boolean onOptionsItemSelected(MenuItem item) {
    // Handle action bar item clicks here. The action bar will
    // automatically handle clicks on the Home/Up button, so long
    // as you specify a parent activity in AndroidManifest.xml.
    int id = item.getItemId();

    //noinspection SimplifiableIfStatement
    if (id == R.id.action_settings) {
        return true;
    }
    return super.onOptionsItemSelected(item);
}

public void Second_layout(View view) {
    Intent i = new Intent(MainActivity.this, retrive.class);
    startActivity(i);
}

retrieve.java

@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_retrive);
    sharedpreferences = getSharedPreferences(MyPREFERENCES, Context.MODE_PRIVATE);
    TextView tv = (TextView) findViewById(R.id.textView11);
    sharedpreferences = getSharedPreferences("MyPREFERENCES", 0);
    String userString = sharedpreferences.getString("Name", "Nothing Found");
    tv.setText(userString);
}

A: You are using the wrong keys in the getSharedPreferences() and getString() methods. Also, the mode must be the same. You're supposed to do this:

sharedpreferences = getSharedPreferences(MainActivity.MyPREFERENCES, Context.MODE_PRIVATE);
String userString = sharedpreferences.getString(MainActivity.Name, "Nothing Found");
tv.setText(userString);

A: Two things: make sure that the shared preferences name ("MyPrefs" here) is the same in both places, and check that you store and retrieve in the same manner, as follows:

Storing:

public static void setUserInfo(Context context, String userInfo){
    SharedPreferences.Editor editor = context.getSharedPreferences(USER_PREFERENCES, Context.MODE_PRIVATE).edit();
    editor.putString(USER_INFO, userInfo);
    editor.apply();
}

Fetching:

public static String getUserInfo(Context context){
    SharedPreferences prefs = context.getSharedPreferences(USER_PREFERENCES, Context.MODE_PRIVATE);
    return prefs.getString(USER_INFO, "");
}

Comment below if you have any doubt.
doc_23535364
All of these blocked threads have almost the same stacktrace as the blocking thread (+- 1 last frame) - trying to load a VAADIN resource file from the application JAR file. Does this mean that the thread hanged on reading a static file from a JAR? And others are waiting for one thread to finish reading it? Anybody have any idea why it happened and how can we prevent this? Java version: 1.8.0_131 Jetty version: 9.2.z-SNAPSHOT "qtp489279267-42356" #42356 prio=5 os_prio=0 tid=0x00007fe7e4054800 nid=0x6f37 waiting for monitor entry [0x00007fe776831000] java.lang.Thread.State: BLOCKED (on object monitor) at java.util.zip.ZipFile$ZipEntryIterator.hasNext(ZipFile.java:492) - waiting to lock <0x0000000700002e80> (a sun.net.www.protocol.jar.URLJarFile) at java.util.zip.ZipFile$ZipEntryIterator.hasMoreElements(ZipFile.java:488) at java.util.jar.JarFile$JarEntryIterator.hasNext(JarFile.java:253) at java.util.jar.JarFile$JarEntryIterator.hasMoreElements(JarFile.java:262) at org.eclipse.jetty.util.resource.JarFileResource.exists(JarFileResource.java:191) at org.eclipse.jetty.webapp.WebAppContext.getResource(WebAppContext.java:372) at org.eclipse.jetty.webapp.WebAppContext$Context.getResource(WebAppContext.java:1459) at com.vaadin.terminal.gwt.server.AbstractApplicationServlet.serveStaticResourcesInVAADIN(AbstractApplicationServlet.java:1276) at com.vaadin.terminal.gwt.server.AbstractApplicationServlet.serveStaticResources(AbstractApplicationServlet.java:1246) at com.vaadin.terminal.gwt.server.AbstractApplicationServlet.service(AbstractApplicationServlet.java:423) at example.ApplicationServlet.service(ApplicationServlet.java:37) at javax.servlet.http.HttpServlet.service(HttpServlet.java:790) at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:808) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1669) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at 
org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346) at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:186) at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160) at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346) at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346) at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:262) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:577) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:223) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127) at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:110) at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97) at org.eclipse.jetty.server.Server.handle(Server.java:499) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257) at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635) at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555) at java.lang.Thread.run(Thread.java:748) A: You probably guessed it right, you seem to be facing contention. We could look at it from two related perspectives: * *Lock contention *It may not apply to the present use case, but could as well be a problem in a production system under load: I/O contention. The symptom we have thus far from your stacktrace is what seems to be lock contention. It seems you are sharing the same files (in the sense of Java ZipFile objects) across multiple readers reading at the same time. This brings the idea of caching - though it may be utterly wrong to think about it depending on your use case, e.g. if you read files of size over one gigabyte. So, it would be useful to know * *The total number of files you can read; *The median and maximal sizes of these files; *The average TTL of these files, i.e. how long they are needed for once loaded in memory; If we have a reasonable amount of data to cache over any given period of time (i.e. we won't face a spike in demand from clients which would make us need to fetch 1 TB of new data in few seconds) and manage a "relatively small" ad-hoc software system, a straightforward solution could be to cache.
doc_23535365
The format I use is dd/mm/yyyy with 24-hour time HH:MM:SS.

var strt_date = 31/03/2014 23:02:01;
var end_date = 01/04/2014 05:02:05;
if(Date.parse(strt_date) < Date.parse(end_date)) {
    alert("End datetime cannot be less than start datetime");
    return false;
}

A: See the following answer: Compare two dates with JavaScript. Essentially you create two date objects and you can compare them.

var start_date = new Date('31/03/2014 23:02:01');
var end_date = new Date('01/04/2014 05:02:05');

if (end_date < start_date) {
    alert("End datetime cannot be less than start datetime");
    return false;
}

(From reading the linked answer, using the Date.getTime() method for comparison purposes may be faster than actually comparing the date objects.)

A: Your timestamps are not quoted as strings, which is throwing a syntax error; add single quotes to them:

var strt_date = '31/03/2014 23:02:01';
var end_date = '01/04/2014 05:02:05';

if((new Date(strt_date)).getTime() > (new Date(end_date)).getTime()) {
    alert("End datetime cannot be less than start datetime");
    return false;
}

Using .getTime() will compare as numbers, so you can determine if the start date has a greater number than the end date.

DEMO

A: Try to use the following format: Date.parse("YEAR-MONTH-DAYTHOURS:MINUTES:SECONDS")

var strt_date = "2014-03-31T23:02:01";
var end_date = "2014-04-01T05:02:05";

if(Date.parse(strt_date) > Date.parse(end_date)) {
    alert("End datetime cannot be less than start datetime");
    return false;
}
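Note that dd/mm/yyyy is not a format Date.parse is required to understand (engines often read it as US-style mm/dd or reject it), so a safer route is to split the components yourself. A minimal sketch of that idea, using the question's timestamps; the parseDMY helper name is illustrative:

```javascript
// Parse "dd/mm/yyyy HH:MM:SS" explicitly instead of relying on Date.parse.
function parseDMY(s) {
  const [datePart, timePart] = s.split(' ');
  const [d, m, y] = datePart.split('/').map(Number);
  const [hh, mm, ss] = timePart.split(':').map(Number);
  return new Date(y, m - 1, d, hh, mm, ss); // months are 0-based
}

const start = parseDMY('31/03/2014 23:02:01');
const end = parseDMY('01/04/2014 05:02:05');

// Date objects compare via their millisecond values.
if (end < start) {
  console.log('End datetime cannot be less than start datetime');
} else {
  console.log('OK'); // prints "OK": 1 April follows 31 March
}
```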
doc_23535366
The time to come.
Normal, common, or expected.
A special set of clothes worn by all the members of a particular group or organization.
Already made use of, as in a used car.
Bing
A circle of light shown around or above the head of a holy person.
The god of thunder.
An act that is against the law.
Long dress worn by women.
Odd behaviour.

This is the code I have used to produce the output for the words matching these definitions, but scanf doesn't like spaces, so can someone edit this code to output the definitions above? Thanks. I should have said this earlier, but the output should be one sentence at a time.

#include <stdio.h>
#include <conio.h>
#include <stdlib.h>

FILE *fp;

int main(void)
{
    struct store {
        char id[128];
    } stock[10];

    int printnum;
    int allrec = 0;
    int startrec = 0;

    fp = fopen("Test 24 Definitions.txt", "r");
    printf("i");
    fscanf(fp, "%s", stock[startrec].id);
    while (!feof(fp))
    {
        printf("%s", stock[startrec].id);
        printf(" \n");
        getch();
        startrec = startrec + 1;
        allrec = startrec;
        fscanf(fp, "%s", stock[startrec].id);
    }
    fclose(fp);
    printf("\n\n\n\n");
    int i;
    for (i = 0; i < allrec; i++)
    {
        printf("%s\n", stock[i].id);
        getch();
    }
}

Sample code with fgets would be appreciated.

A: This might help you understand:

#include <stdio.h>
#include <stdlib.h>

FILE *fp;

int main(void)
{
    struct store {
        char id[128];
    } stock[10];

    int printnum;
    int allrec = 0;
    int startrec = 0;

    fp = fopen("Test 24 Definitions.txt", "r");
    while (!feof(fp))
    {
        fscanf(fp, "%[^\t]", stock[startrec].id);
        printf("%s", stock[startrec].id);
    }
    fclose(fp);
    return 0;
}

A: Get the size of the file using ftell. Then read the file content using fgets. Don't use feof to find the end of the file. "while( !feof( file ) )" is always wrong. Try this code.
#include <stdio.h>
#include <string.h>
#include <stdlib.h>

#define MAX_LINE_LENGH 255

int main(void)
{
    char id[MAX_LINE_LENGH];
    int size;
    FILE *fp;

    fp = fopen("test.txt", "r");
    fseek(fp, 0, SEEK_END);
    size = ftell(fp);
    fseek(fp, 0, SEEK_SET);
    while (size > 0)
    {
        fgets(id, MAX_LINE_LENGH, fp);
        printf("%s", id); /* copy this id to any char array if you want */
        size = size - strlen(id);
    }
    fclose(fp);
    printf("\n");
}

A: Some remarks before the code:

*I removed conio - not used or needed
*I removed allrec since you can just use startrec, which I renamed to rec
*There's no need to use getc() (not getch(), that's from libcurses); fgets reads till the newline, including that one
*I check if the file is actually opened or you're gonna risk reading from NULL, resulting in a segfault
*In the printing for-loop, I use getchar() instead of getch()

#include <stdio.h>
#include <stdlib.h>

FILE *fp;

int main(void)
{
    struct store {
        char id[128];
    } stock[10];

    int rec = 0;
    char *res;  /* fgets returns a char*, not an int */

    fp = fopen("text.txt", "r");
    if (!fp) {
        printf("failed to open file\n");
        return 1;
    }

    while (!feof(fp))
    {
        res = fgets(stock[rec].id, sizeof stock[rec].id, fp);
        if (!res) {
            break;
        }
        printf("%s", stock[rec].id);
        rec++;
    }
    fclose(fp);

    printf("\n\n\n\n");
    int i;
    for (i = 0; i < rec; i++)
    {
        printf("%s", stock[i].id);
        getchar();
    }
}

A: To read a line of text using fscanf() rather than words, for this code in 2 places use:

fscanf(fp, "%127[^\n]%*c", stock[startrec].id);

"%127[^\n]": without skipping leading white-space, read up to 127 chars, except do not read in a '\n'. Store the result, with an appended '\0', in stock[startrec].id.
"%*c": without skipping leading white-space, read any 1 char. This is either the '\n' that stopped the preceding directive, or we are now in an EOF condition. '*' means do not save the result.

Or better yet... use fgets(), trimming the typical trailing \n as needed.
fgets(stock[startrec].id, sizeof stock[startrec].id, fp);

Suggest checking the results of fscanf() and fgets() and dropping feof():

printf("i");
if (fp != NULL) {
    int cnt;
    while ((cnt = fscanf(fp, "%s", stock[startrec].id)) != EOF) {
        if (cnt < 1) Handle_NothingWasRead();
        printf("%s", stock[startrec].id);
        printf(" \n");
        getch();
        startrec = startrec + 1;
        allrec = startrec;
    }
}
doc_23535367
Installation from the link does not work: `install.packages('Rcartogram', repos = 'http://www.omegahat.org/R', type = 'source')` Installing package into ‘C:/Users/Milena/Documents/R/win-library/3.2’ (as `lib` is unspecified) Warning in install.packages : package ‘Rcartogram’ is not available (for R version 3.2.0) Neither from the zip file: install.packages("C:/Users/Milena/Downloads/Rcartogram_0.2-2.tar.gz", repos = NULL, type = "source") Installing package into ‘C:/Users/Milena/Documents/R/win-library/3.2’ (as lib is unspecified) * installing source package Rcartogram ... ********************************************** WARNING: this package has a configure script It probably needs manual configuration ********************************************** ** libs *** arch - i386 Warning: running command 'make -f "Makevars" -f "C:/PROGRA~1/R/R-3.2.0/etc/i386/Makeconf" -f "C:/PROGRA~1/R/R-3.2.0/share/make/winshlib.mk" SHLIB="Rcartogram.dll" OBJECTS="Rcart.o cart.o"' had status 127 ERROR: compilation failed for package 'Rcartogram' * removing 'C:/Users/Milena/Documents/R/win-library/3.2/Rcartogram' Warning in install.packages : running command '"C:/PROGRA~1/R/R-3.2.0/bin/x64/R" CMD INSTALL -l "C:\Users\Milena\Documents\R\win-library\3.2" "C:/Users/Milena/Downloads/Rcartogram_0.2-2.tar.gz"' had status 1 Warning in install.packages : installation of package ‘C:/Users/Milena/Downloads/Rcartogram_0.2-2.tar.gz’ had non-zero exit status How can I solve this problem? I am working on Windows machine. Thank you to everyone who took the time to look up this question. 
Here is my sessionInfo: R version 3.2.0 (2015-04-16) Platform: x86_64-w64-mingw32/x64 (64-bit) Running under: Windows 8 x64 (build 9200) locale: [1] LC_COLLATE=English_United Kingdom.1252 LC_CTYPE=English_United Kingdom.1252 LC_MONETARY=English_United Kingdom.1252 LC_NUMERIC=C [5] LC_TIME=English_United Kingdom.1252 attached base packages: [1] stats graphics grDevices utils datasets methods base other attached packages: [1] fftw_1.0-3 loaded via a namespace (and not attached): [1] tools_3.2.0 A: Installing Rcartogram on Windows Rcartogram is an R package (by Duncan Temple Lang) whose main purpose is to provide a R wrapper for a some C code (written by Mark Newman) which actually does the work of constructing a cartogram (aka morphing a map). The C code written by Mark Newman makes use of the FFTW (fastest Fourier Transform in the West) compiled library. The link in your question by Truc Viet Le describes how to install Rcartogram on a Unix system. There are a few extra tricks and traps involved in getting Rcartogram onto a Windows system, even though at its heart, it is pretty much the same process. To install Rcartogram on a Windows system you need first to install all the pre-requisites, namely: * *the Rtools suite of programs (which you need to be able to install any R package which involves C code onto a Windows machine), and *The FFTW library (which the Rcartogram code uses). You then need to tell R where to find the FFTW library when you are first installing Rcartogram, and you will almost certainly need to let R know where to find the FFTW library whenever you load Rcartogram, eg via library(Rcartogram) in an R session. I found I also needed to make a few very small changes to the Rcartogram R code (mostly to bring it into line with changes to R syntax since it was written) in order to get it to install happily, and run correctly on Windows. So the full answer involves several steps. 
Step 1 : Install the Rtools suite I suspect you need to install Rtools in order to get past the status 127 error. The official instructions on how to do that are here http://cran.r-project.org/doc/manuals/R-admin.html#The-Windows-toolset. There are user friendly explanations of how to install Rtools into a Windows system elsewhere on the web --- see for example https://github.com/stan-dev/rstan/wiki/Install-Rtools-for-Windows . (The official instructions tell you how to install lots of other stuff as well, that you need if you want to build R itself from source on Windows, or produce package documentation using LaTeX, but you don't need all that stuff to get Rcartogram working). A bit longer answer: I can now replicate your status 127 error ---by removing the references to directories where Rtools lives from my PATH. When I do that the Windows cmd shell (where you might type R CMD INSTALL … can’t find the Rtools executables and that results in the 127 error message. Likewise trying to run install.packages() from within R also fails the same way, since under the hood install.packages calls the Windows cmd shell. Why do you need Rtools? Well Rcartogram is a package which includes C code, as well as pure R code. In fact it is mostly C code – from Mark Newman. Installing from source for a package which includes C code requires a C compiler. In fact it is best (almost essential) that it is the same C compiler that R itself was built from. That is what Rtools mostly is – an installable on Windows version of a C compiler. Running a C compiler in Windows needs a few extra shell commands (aka small programs) as well and that is what the rest of Rtools is. Most of the (open source) C community seem to work in a Unix (or variant thereof) world and those extra commands (and indeed the C compiler itself) are part of the “standard” system in Unix. It’s only those of us who work in Windows who need to install Rtools, which is a port of the necessary tools from Unix to Windows. 
Step 2 : Install the FFTW library Initially I got the FFTW library from here http://www.fftw.org/ . There are two versions, a 32 bit version and a 64 bit version. On a Windows 64 bit machine you need both versions. (Aside, well there may be a way you can get away with only one, by setting flags when you install Rcartogram, but I haven't tested that route myself yet). Unzip the 32 bit version into a sub-directory /i386, and the 64 bit version into a subdirectory /x64. In my case (see below), I made these as subdirectories of "C:/msys/fftwtest". (Aside these subdirectories are conventions that R uses - you could theoretically put them elsewhere, but why make extra complications!). One trap that stumped me for quite a while was that these libraries are dynamic libraries (ie .dll s) and so - later on - I needed to make sure that when I installed them on my pc I put them in locations that were on my PATH (or alternatively I altered my PATH by adding in the locations - aka directories - where they were installed) since otherwise I got very unhelpful error messages in the final checks that R does before it finishes installing a package. Both the 32 bit and 64 bit (sub) directories should be included in your PATH. Step 3 : Tell R where to find the FFTW library The trick to telling R (on a Windows machine) where to find the FFTW libraries when it is trying to install Rcartogram is to add a src/Makevars.win file into the src subdirectory of the Rcartogram package. That means you will have to unzip and untar the Rcartogram tar.gz file before you can make this change. (Aside : I use 7zip to uncompress these types of files on my machine). My src/Makevars.win file (it is a text file) has 2 lines, PKG_CPPFLAGS=-I"C:\msys\fftwtest\x64\include" -DNOPROGRESS PKG_LIBS=-L"C:\msys\fftwtest\x64\lib" -L"C:\msys\fftwtest\i386\lib" -lfftw3 -lm The file names in quotes are where I put my versions of the FFTW library. 
(These aren't exactly the ones I downloaded, along the way I learnt how to compile FFTW from source, and made my own copies, but explaining how to do that is a looong story so I won't attempt it here). The directory mentioned in PKG_CPPFLAGS line is the one containing a header file called fftw3.h that the C pre-processor needs. It doesn't matter whether you point at the 32 bit (\i386 subdirectory) or the 64 bit (\x64 subdirectory) - the fftw3.h file is a C source file and is the same no matter what architecture R is installing for. The 2 directories mentioned in PKG_LIBS line are the ones where files called libfftw3.something can be found, and that the linker needs when it is putting everything together at the end of the compilation step. something might be ".dll" (in which case the subdirectory might be \bin instead of \lib), or it might be ".a" or ".la" (which is what R looks for when it uses the static FFTW libraries which I created once I had learnt how to compile FFTW from source). 2 directories are needed because R by default tries to install both 32 bit and 64 bit versions of Rcartogram on Windows machines. If you supply FFTW library files in .dll format, then these are the exact same libraries that must be on your PATH (because when you try to do library(Rcartogram) R needs to find the FFTW dll libraries again while it is loading the installed Rcartogram package) (Aside, that's why in the end I compiled my own static FFTW libraries, so I didn't have to mess with my PATH variable in the Windows environment). 
If you are using the downloaded binaries from the link above, the fftw3.h and the libfftw3.dll files are all in the same (sub) directory, and the libfftw3.dll file is actually called libfftw3-3.dll, so in this case your src/Makevars.win file would need to be:

PKG_CPPFLAGS=-I"main libfftw directory\x64" -DNOPROGRESS
PKG_LIBS=-L"main libfftw directory\x64" -L"main libfftw directory\i386" -lfftw3-3 -lm

The key differences from my src/Makevars.win are:

*inserting the name of your main libfftw directory - ie the parent directory under which you created the /i386 and /x64 subdirectories when you unzipped the downloaded FFTW binaries
*the deletion of the \include and \lib sub-sub-directories, and
*changing -lfftw3 to -lfftw3-3

(note also that there must be a space in front of each - (minus) sign at the start of the -L and -l flags).

What is the Makevars.win file doing? It is telling the R install process the flags that it will need when it tries to preprocess, compile and link the C code in Rcartogram's src subdirectory. The value of PKG_CPPFLAGS is a set of flags for the C pre-processor, and the value of PKG_LIBS is a set of flags for the link step.

*The -I flag says 'try looking in the following directory when the C pre-processor is looking for include files', so in the example above it says to look in "main libfftw directory\x64". The include file it seeks is fftw3.h (that filename is buried in the C code inside Rcartogram)
*The -L flag says 'try looking in the following directory when the linker is looking for files from any library that you expect to use', so -L"main libfftw directory\x64" says try looking in the "main libfftw directory\x64" directory.
You can (and need to) have more than 1 directory on that search path - the linker just keeps looking till it finds what it is looking for (or runs out of places to look and gives an error message), and *The -l flag gives the name of the library file that the linker should look for, but not verbatim - instead the name is constructed from what you enter, following a (slightly crazy to me) convention from the unix world. Because the file name of the library always begins with "lib", the first part of the convention is that you leave "lib" out of the name you put in the flag. The file name of the library can have several different extensions (eg ".dll" or ".a"), so the second part of the convention is that you leave the file extension out of the -l flag value as well, and let the linker sort out what it wants. So -lfftw3 says look for a file called either libfftw3.dll or one called libfftw3.a (there may be other possible extensions as well, I'm not sure). The downloaded dlls are actually called libfftw3-3.dll (unlike the ones I compiled myself, which are called libfftw3.a), hence the need to change the -l flag to -lfftw3-3 NB If you are using the downloaded FFTW library which uses .dlls, make sure you have put them on your PATH (see the last para of step 2) as well. Step 4 : Small fixes to the Rcartogram C code There were two other small changes I found I had to make to the Rcartogram code itself to get things running. First, in the file R/cart.R there are two lines, both of which use the .Call( ) function.
I needed to add one more argument (namely PACKAGE = "Rcartogram") to the .Call function, so for example tmp = .Call("R_makecartogram", popEls, x, y, dim, as.numeric(blur)) became tmp = .Call("R_makecartogram", popEls, x, y, dim, as.numeric(blur), PACKAGE = "Rcartogram") Likewise, further down cart.R the altered .Call became .Call("R_predict", object, as.numeric(x), as.numeric(y), ans, dim(object$x), PACKAGE = "Rcartogram") Second, again in R/cart.R, I had to change tmp = rep(as.numeric(NA), length(x)) ans = list(x = tmp, y = tmp) to # Avoid problems with the same vector (tmp) being sent to C twice due to R's # copy-on-modify rules tmp_x = rep(as.numeric(NA), length(x)) tmp_y = rep(as.numeric(NA), length(y)) ans = list(x = tmp_x, y = tmp_y) This one took me a lot of work to find, but without it, the demo for Rcartogram gave the wrong results (even though it ran OK). Step 5 : Actually install Rcartogram You should now be able to install Rcartogram. Either * *by opening a cmd window, changing directory (cd to) the location where the unzipped and modified Rcartogram package source code lives, and typing R CMD INSTALL --preclean . or *by starting an R session, setting the working directory to wherever the Rcartogram source is and typing install.packages(".", repos = NULL, type = 'source', INSTALL_opts = "--preclean") The . works because you have cded to directory where the Rcartogram source code lives. The --preclean flag tells R to tidy up any leftover intermediate files from earlier (failed) attempts to compile the C code in Rcartogram before it begins. If you get this far and are still having trouble, there is also a --debug flag that can be added as well. It gives more detail about why the install is failing. Step 6 : Enjoy morphing maps I am just getting started actually using Rcartogram myself (it took me a while to get this far!), but you may want to check out the getcartr --- R package. That package uses Rcartogram, and it seems pretty neat! 
And the installation instructions given on the github website worked first time for me (I do already have devtools installed and working though). Hope this helps (and congratulations to anyone who has read this far) Update May 2017 I haven't worked on this for a couple of years now (so no guarantees it will still work), but after I wrote the post above, I created a forked copy of RCartogram at https://github.com/Geoff99/Rcartogram/tree/WindowsInstall. See the WindowsInstall branch which includes * *a heavily commented src/Makevars.win meant to make it easier to install RCartogram on Windows, and *an even more comprehensive tutorial than the above post in vignettes/README.WindowsInstall.Tutorial.Rmd . See the following link https://github.com/Geoff99/Rcartogram/blob/WindowsInstall/vignettes/README.WindowsInstall.Tutorial.Rmd (To use the tutorial, you need to use the WindowsInstall branch of the forked repository!) A: To install Rcartogram, you need to download the package from the website http://www.omegahat.org/Rcartogram/ and install it from source. Open your Terminal (in Windows, it's called Command Prompt), change directory to where the downloaded file is and type: R CMD INSTALL Rcartogram_0.2-2.tar.gz That command installs an R package from source. You will need a working C compiler for the purpose. From your error messages, it looks like you have some problems with your C compiler. Make sure it works (or that you have one). Check out this question: C compiler for Windows?
doc_23535368
How to convert my asp.net project to a wsp file and deploy it to a sharepoint environment. A: You cannot automatically convert an asp.net project to a wsp file! You need to adjust and change your asp.net code manually so it can be deployed to SharePoint, and there is a lot to consider depending on your asp.net project's functionality and structure - wsp (full trust code with c#) vs add-in (clientside code with JavaScript and REST). Start here at MSDN: SharePoint general development If you use the new SharePoint Framework (SPFx) to develop your client-side webparts, then you don't need Visual Studio installed on a SharePoint server. You don't even need a SharePoint server. Using SPFx you can develop in Notepad or any other code editor. Note that SPFx is currently (Dec 13 2016) in preview. Start here at Office Dev Center: Overview of SharePoint Framework (SPFx)
doc_23535369
string firsttext = firsttextbox.Text.ToLower(); string name = firsttext.Replace(" - ", " "); But this fails to replace the string in firsttext's space hyphen space pattern with a single space. So when I try to use this text for example: Leasing‐Other it just returns this into string name: Leasing‐Other however it should actually be returning this: LeasingOther A: You have spaces in your searching pattern. Use string firsttext = firsttextbox.Text.ToLower(); string name = firsttext.Replace("-", " "); That will work. If your data are inconsistent, resolve all cases by replacing all variants. string name = firsttext.Replace("-", " ").Replace(" -", " ").Replace("- ", " ").Replace(" - ", " ");
doc_23535370
I have a few cores. At the moment, custom properties for each core are defined in the my_core_x/core.properties file. However, all custom properties are the same for all cores. So, I have multiple identical core.properties files. Is it possible to define properties somewhere else, in one place only? EDIT: I want to use these custom properties in solrconfig.xml like this: ${my.custom.property} A: You can add custom properties through the regular -D syntax when starting Solr / the JVM. From Configuring solrconfig.xml: Any JVM System properties, usually specified using the -D flag when starting the JVM, can be used as variables in any XML configuration file in Solr. For example, in the sample solrconfig.xml files, you will see this value which defines the locking type to use: <lockType>${solr.lock.type:native}</lockType> Which means the lock type defaults to "native" but when starting Solr, you could override this using a JVM system property by launching Solr with: bin/solr start -Dsolr.lock.type=none
doc_23535371
test.php <?php echo "seconds passed since 01-01-1970 00:00 GMT is ".time(); ?> index.php <?php $test=require("test.php"); echo "the content of test.php is:<hr>".$test; ?> Like file_get_contents() but then it should still execute the PHP code. Is this possible? A: If your included file returned a variable... include.php <?php return 'abc'; ...then you can assign it to a variable like so... $abc = include 'include.php'; Otherwise, use output buffering. ob_start(); include 'include.php'; $buffer = ob_get_clean(); A: I've also had this issue once, try something like <?php function requireToVar($file){ ob_start(); require($file); return ob_get_clean(); } $test=requireToVar($test); ?> A: You can write in the included file: <?php return 'seconds etc.'; And in the file from which you are including: <?php $text = include('file.php'); // just assigns value returned in file A: In PHP/7 you can use a self-invoking anonymous function to accomplish simple encapsulation and prevent the global scope from being polluted with random global variables: return (function () { // Local variables (not exported) $current_time = time(); $reference_time = '01-01-1970 00:00'; return "seconds passed since $reference_time GMT is $current_time"; })(); An alternative syntax for PHP/5.3+ would be: return call_user_func(function(){ // Local variables (not exported) $current_time = time(); $reference_time = '01-01-1970 00:00'; return "seconds passed since $reference_time GMT is $current_time"; }); You can then choose the variable name as usual: $banner = require 'test.php'; A: Use shell_exec("php test.php"). It returns the output of the execution. A: require/include does not return the contents of the file. You'll have to make separate calls to achieve what you're trying to do. EDIT Using echo will not let you do what you want.
But returning the contents of the file will get the job done as stated in the manual - http://www.php.net/manual/en/function.include.php A: I think eval(file_get_contents('include.php')) will help you. Remember that other ways to execute, like shell_exec, could be disabled on your hosting. A: It is possible only if the required or included php file returns something (array, object, string, int, variable, etc.) $var = require '/dir/file.php'; But if it isn't a php file and you would like to eval the contents of this file, you can: <?php function get_file($path){ return eval(trim(str_replace(array('<?php', '?>'), '', file_get_contents($path)))); } $var = get_file('/dir/file.php'); A: Or maybe something like this in file include.php: <?php //include.php file $return .= "value1 "; $return .= time(); in some other php file (it doesn't matter what the content of this file is): <?php // other.php file function() { $return = "Values are: "; include "path_to_file/include.php"; return $return; } The return will look like this, for example: Values are: value1, 145635165 The point is that the content of the included file has the same scope as the content of the function in the example I have provided above.
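The buffer-and-capture idea behind ob_start() / ob_get_clean() exists in most languages. For comparison only, here is a rough Python equivalent (the helper name capture_output is made up for this sketch; it is not part of any of the answers above):

```python
import io
from contextlib import redirect_stdout

def capture_output(fn):
    """Run fn, collecting everything it prints, and return that text -
    the analogue of wrapping an include between ob_start() and ob_get_clean()."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        fn()
    return buf.getvalue()
```

With this helper, capture_output(lambda: print("ping")) hands back the printed text as a string instead of writing it to the terminal, exactly the behaviour the output-buffering answers rely on.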
doc_23535372
When a user logs into the php/codeigniter app, the email address is stored in session data. onLoad, I want to prepop a field with the email address of the user that logs in using jquery. I am using this to output the code on the page for testing: <?php $userEmail = $this->session->userdata('USER_EMAIL'); echo $userEmail; ?> A: The best way to do it would be to just use php like so: <input type="text" name="userEmail" value="<?php echo htmlentities($userEmail); ?>" /> But if you want to use jQuery, just do something like this: $("#userEmailId").val("<?php echo htmlentities($userEmail); ?>"); For clarity's sake, session data only exists server-side. If you want to access it client-side (in jquery), it needs to be in the actual page somewhere, or in a cookie.
doc_23535373
For example I have the following piece of program :- # Error handling i=int(eval(input("Enter an integer: " ))) print(i) Now if the user enters a string the following error occurs : Enter an integer: helllo Traceback (most recent call last): File "C:/Users/Gaurav's PC/Python/Error Management.py", line 2, in <module> i=int(input("Enter an integer: " )) ValueError: invalid literal for int() with base 10: 'helllo' HERE, I want Python to re-run line 2 until the user enters a valid input or cancels the operation, and once a correct input is passed it should continue from line 3. How can I do that? I have tried a bit using a try and except statement but there are many possible errors and I cannot find a way to run that line again without re-writing it in the except block, and that only works for one error, or for as many times as I copy the same code into the except block. A: You can put it in a while loop and check to see if an input is an integer or not: def is_integer(s): try: int(s) return True except ValueError: return False while True: i = input("Enter an integer: " ) if is_integer(i): print(i) break A: Try : while True: try: i = int(input("Enter number : ")) break except: continue Or you can use pass instead of continue.
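Combining the two answers, the retry loop can be packaged as a small helper that also supports the "cancel the operation" part of the question. The input_fn parameter is not in the original answers; it is only there so the loop can be exercised without a live terminal:

```python
def ask_int(prompt, input_fn=input):
    """Keep asking until the user types a valid integer, or 'cancel'.

    Returns the integer, or None if the user cancelled.
    """
    while True:
        raw = input_fn(prompt).strip()
        if raw.lower() == "cancel":
            return None
        try:
            return int(raw)
        except ValueError:
            print(f"'{raw}' is not a valid integer, try again.")
```

Calling ask_int("Enter an integer: ") loops on bad input such as 'helllo' and only returns once int() succeeds, so execution naturally continues from the next line of the program.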
doc_23535374
c=$(date +"%x") targets="www.example.com" docker build -t amass https://github.com/OWASP/Amass.git docker run amass --passive -d $targets > $c.txt The error is as follows: ./main.sh: 13: ./main.sh: cannot create 12/29/2018.txt: Directory nonexistent Running the same commands from a terminal works directly. How can I fix this? A: In your situation, it is too dangerous to use the %x option of date, which stands for: %x locale's date representation (e.g., 12/31/99) You wouldn't control anything, and may get different behaviour between your testing computer and the docker container if the locale is different. Anyway, using a date format with slashes '/', which are going to be interpreted as directory separators, will lead to issues. For both reasons, you should define the format of your date. For instance: #!/bin/bash c=$(date +'%Y-%m-%d-%H-%M-%S') targets="www.example.com" docker build -t amass https://github.com/OWASP/Amass.git docker run amass --passive -d $targets > $c.txt You should add as much information (hour, minute, second ...) to your date as suits how often you may run your script; otherwise, the output of a previous run will be overwritten.
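The same point holds in any language; a small Python sketch (illustrative only, the helper name is made up) shows why a fixed, slash-free format is safe as a file name while the locale-dependent %x is not:

```python
from datetime import datetime

def timestamped_name(now=None):
    """Return a file name like 2018-12-29-13-05-09.txt.

    The fixed %Y-%m-%d-%H-%M-%S format contains only digits and dashes,
    unlike the locale-dependent %x, which can expand to 12/29/2018 and
    be misread as a directory path.
    """
    now = now or datetime.now()
    return now.strftime("%Y-%m-%d-%H-%M-%S") + ".txt"
```

Because the result contains no '/', it can be used directly after a shell redirection without the "Directory nonexistent" failure from the question.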
doc_23535375
Say for example I have: A = [1 2 3 4] [5 6 7 8] [9 10 11 12] and B = [0] [2] [1] the resultant matrix should be C = [1 2 3 4] [NaN NaN 7 8] [NaN 10 11 12] I am trying to avoid using for loops because the matrix I'm dealing with is large and this function will be repetitive. Is there an elegant pythonic way to implement this? A: Check out this code. The logic of the first method is to build a condition matrix for np.where, which is done as follows: import numpy as np A = np.array([[1, 2, 3, 4],[5, 6, 7, 8],[9, 10, 11, 12]], dtype=float) B = np.array([[0], [2], [1]]) B = np.array(list(map(lambda i: [False]*i[0]+[True]*(4-i[0]), B))) A = np.where(B, A, np.nan) print(A) Method-2: using basic pythonic code import numpy as np A = np.array([[1, 2, 3, 4],[5, 6, 7, 8],[9, 10, 11, 12]], dtype=float) B = np.array([[0], [2], [1]]) for i,j in enumerate(A): j[:B[i][0]] = np.nan print(A) A: Your arrays - note that A is float, so it can hold np.nan: In [348]: A = np.arange(1,13).reshape(3,4).astype(float); B = np.array([[0],[2],[1]]) In [349]: A Out[349]: array([[ 1., 2., 3., 4.], [ 5., 6., 7., 8.], [ 9., 10., 11., 12.]]) In [350]: B Out[350]: array([[0], [2], [1]]) A boolean mask where we want to change values: In [351]: np.arange(4)<B Out[351]: array([[False, False, False, False], [ True, True, False, False], [ True, False, False, False]]) apply it: In [352]: A[np.arange(4)<B] = np.nan In [353]: A Out[353]: array([[ 1., 2., 3., 4.], [nan, nan, 7., 8.], [nan, 10., 11., 12.]])
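Both answers rely on the same broadcasting trick; condensed into a self-contained script (shapes noted in the comments):

```python
import numpy as np

A = np.arange(1, 13).reshape(3, 4).astype(float)  # (3, 4) matrix of 1..12
B = np.array([[0], [2], [1]])                     # per-row count of cells to blank

# np.arange(4) has shape (4,) and B has shape (3, 1); broadcasting the
# comparison gives a (3, 4) boolean mask, True where the column index is
# smaller than that row's threshold.
mask = np.arange(A.shape[1]) < B
A[mask] = np.nan
print(A)
```

No Python-level loop runs over the rows, so the same three lines scale to a large matrix.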
doc_23535376
gjb2() { printf "\n\n" printf "What is the id of the patient getting GJB2 analysis : "; read id printf "Enter variant(s): "; IFS="," read -a variant [ -z "$id" ] && printf "\n No ID supplied. Leaving match function." && sleep 2 && return [ "$id" = "end" ] && printf "\n Leaving match function." && sleep 2 && return for ((i=0; i<${#variant[@]}; i++)) do printf "NM_004004.5:%s\n" ${variant[$i]} >> c:/Users/cmccabe/Desktop/Python27/$id.txt done add2text ${id}.txt add2out gjb2name } command to combine: add2out() { cd 'C:\Users\cmccabe\Desktop\annovar' printf "NM_004004.5:%s\n" ${variant[$i]} > C:/Users/cmccabe/Desktop/annovar/out.txt >> out.txt } Thank you :).
doc_23535377
Imp: I'm testing the contracts (including the Chainlink contracts) locally on hardhat. I have added a file test/VRFCoordinatorV2Mock.sol which simply imports the VRFV2 Mock contract: import "@chainlink/contracts/src/v0.8/mocks/VRFCoordinatorV2Mock.sol"; Below is my NFT.sol file: // SPDX-License-Identifier: MIT pragma solidity ^0.8.9; import "@openzeppelin/contracts/token/ERC721/ERC721.sol"; import "@openzeppelin/contracts/access/Ownable.sol"; import "@openzeppelin/contracts/utils/math/SafeMath.sol"; import '@chainlink/contracts/src/v0.8/interfaces/VRFCoordinatorV2Interface.sol'; import '@chainlink/contracts/src/v0.8/VRFConsumerBaseV2.sol'; import "hardhat/console.sol"; contract NFT is ERC721, Ownable, VRFConsumerBaseV2 { using SafeMath for uint256; event RequestSent(uint256 requestId, uint32 numWords); event RequestFulfilled(uint256 requestId, uint256[] randomWords); struct NFTData { uint256 mintedOn; uint256 initialPower; } struct NFTInfo { uint256 cardId; address userAddress; uint256 initialPower; } VRFCoordinatorV2Interface public coordinator; uint64 private _subscriptionId; bytes32 public keyHash; uint32 public callbackGasLimit = 100000; uint16 requestConfirmations = 3; uint32 numWords = 1; mapping(uint256 => NFTInfo) private _requestIdToNFTInfo; mapping(uint256 => NFTData) private _nfts; constructor(string memory _name, string memory _symbol, address _coordinator, uint64 subscriptionId, bytes32 _keyHash) ERC721(_name, _symbol) VRFConsumerBaseV2(_coordinator) { coordinator = VRFCoordinatorV2Interface(_coordinator); _subscriptionId = subscriptionId; keyHash = _keyHash; } function mintNFT(uint256 cardId, address userAddress, uint256 _initialPower) external onlyOwner returns(uint256) { console.log("requestRandom: "); uint256 requestId = coordinator.requestRandomWords( keyHash, _subscriptionId, requestConfirmations, callbackGasLimit, numWords ); _requestIdToNFTInfo[requestId] = NFTInfo(cardId, userAddress, _initialPower); emit RequestSent(requestId, numWords); 
console.log("Emit"); return requestId; } function fulfillRandomWords(uint256 _requestId, uint256[] memory _randomWords) internal override { console.log("fulfill"); NFTInfo memory requestNFTInfo = _requestIdToNFTInfo[_requestId]; require(requestNFTInfo.userAddress != address(0), "Request not found"); _safeMint(requestNFTInfo.userAddress, requestNFTInfo.cardId); console.log("_safeMint"); _nfts[requestNFTInfo.cardId] = NFTData(block.timestamp, requestNFTInfo.initialPower); emit RequestFulfilled(_requestId, _randomWords); } } Below is my nft.test.js file: const { expect } = require("chai"); describe("Contract deployment", () => { let nft, vrfCoordinatorV2Mock, hardhatVrfCoordinatorV2Mock, owner; before(async () => { nftFactory = await ethers.getContractFactory("NFT"); vrfCoordinatorV2Mock = await ethers.getContractFactory("VRFCoordinatorV2Mock"); }); beforeEach(async () => { [owner] = await ethers.getSigners(); hardhatVrfCoordinatorV2Mock = await vrfCoordinatorV2Mock.deploy(0, 0); await hardhatVrfCoordinatorV2Mock.deployed(); await hardhatVrfCoordinatorV2Mock.createSubscription(); await hardhatVrfCoordinatorV2Mock.fundSubscription(1, 100000); nft = await nftFactory.deploy("NFT", "NFT", hardhatVrfCoordinatorV2Mock.address, 1, "0x79d3d8832d904592c0bf9818b621522c988bb8b0c05cdc3b15aea1b6e8db0c15"); await nft.deployed(); }); describe("NFT", () => { it("Mint NFT", async () => { await hardhatVrfCoordinatorV2Mock.addConsumer(1, nft.address); await nft.mintNFT(1, owner.address, 100); // here "fulfill" isn't being logged }); }); }); Is it even possible to test the VRF locally on hardhat or am I missing something? Any help is highly appreciated. Also any suggestions on correctly testing my NFT contract locally with the VRF are also welcome. A: It is not possible to test this locally unless you create an Oracle application. The way VRF works is you request randomness from the VRF contract.
Chainlink has a separate service running somewhere (not on a blockchain) which monitors the randomness requests. Chainlink then submits a transaction to the blockchain which calls your fulfillRandomWords function.
doc_23535378
I know how to remove the reviews just fine with add_filter( 'woocommerce_product_tabs', 'woo_remove_product_tabs', 98 ); function woo_remove_product_tabs( $tabs ) { unset( $tabs['reviews'] ); // Removes reviews return $tabs; } Now I want to add that back somewhere different (outside of the tab area) A: Well, it depends on where you want to output it. After you decide where you want to output it, use the comments_template() function. For example, if you want to output it after the product summary section, you could do something like this: add_action( 'woocommerce_after_single_product_summary', 'your_theme_review_replacing_reviews_position', 21 ); function your_theme_review_replacing_reviews_position() { comments_template(); } add_filter( 'woocommerce_product_tabs', 'woo_remove_product_tabs', 98 ); function woo_remove_product_tabs( $tabs ) { unset( $tabs['reviews'] ); return $tabs; } Or you could hook it somewhere else, like all the way down the page using woocommerce_after_single_product, like so: add_action( 'woocommerce_after_single_product', 'your_theme_review_replacing_reviews_position'); function your_theme_review_replacing_reviews_position() { comments_template(); } add_filter( 'woocommerce_product_tabs', 'woo_remove_product_tabs', 98 ); function woo_remove_product_tabs( $tabs ) { unset( $tabs['reviews'] ); return $tabs; } Both examples are tested and work. Let me know if you were able to get it to work!
doc_23535379
We could do this per caches_action method by creating a custom cache_path using the controller variable for is_mobile?, but we'd prefer to do it globally somehow. Any suggestions? I imagine this would require monkey-patching ActionController::Caching but I can't figure out where it generates the "views/" prefix. A: I'm sorry, I'm a Rails newbie, so I don't really understand your question, but if I have it right, is this what you mean? This is on my routes.rb: scope "/administrator" do resources :users end I changed my users_path 'prefix' to administrator. Sorry if wrong :D A: I actually ended up figuring this out myself. Basically ActionController::Base uses a function called fragment_cache_key to generate the cache key for a specific fragment (which is what ActionCaching uses deep down). So you basically override that method and include your own logic for how to generate the prefix. This is how my method override looks: # Monkey patch fragment_cache_key def fragment_cache_key(key) ActiveSupport::Cache.expand_cache_key(key.is_a?(Hash) ? url_for(key).split("://").last : key, mobile_device? ? "views-mobile" : "views") end Where mobile_device? is my own function that figures out whether the user is requesting the mobile or desktop version of the site.
doc_23535380
I have a UIPageViewController and when VoiceOver is on, the backward 3-finger scroll is working, but when I try a forward 3-finger swipe it just says "page 1 of 2" and doesn't scroll or swipe. The UIPageViewController has UIViewControllers in it, which show HTML content. If I set my UIViewController.isAccessibilityElement = true then it scrolls to the next one, but obviously VoiceOver doesn't get into reading the content of the view controller. EDIT: I have also tried overriding accessibilityScroll(_ direction: UIAccessibilityScrollDirection) -> Bool. What I'm noticing is that it's being called only for backward scrolls; when I try forward scrolling (swipe for next view controller) it is not even called. Any help will be useful.
doc_23535381
When there is a click on a row (link), I set location to that URL like this: window.location=mytable.rows[temp_no].getElementsByTagName("a")[0]; And in one of those links, a video player starts to play a file in the link and I want it to keep playing when I go back to the previous page so that I can listen to the music when browsing other links. I go to the previous page with: window.location.href=".."; This destroys everything, i.e. the video player, naturally. I can't popup a new window or open the video player in a new window since this application works on devices which have a single browser window. Any solutions ? A: Of course it does. Changing the location causes the full page to be unloaded and the new one to be loaded. If you do not want this behaviour you'll have to use AJAX to reload only parts of your site. Opening the video in a popup window would be another solution, but new windows are usually annoying, so provide the user e.g. with an "open video in new window" link. Edit: In this case - assuming the TV browsers have sane JavaScript engines - use AJAX. Another "solution" would be adding an onbeforeunload event to request confirmation from the user before he navigates away from the page. Without being able to use a new window or AJAX it is impossible, unless you use frames and just load another page in a different frame. A: Use window.open on your videos in a different window so the parent window can navigate wherever. Keep in mind that you'll have to disable any pop-up blocker. ** UPDATE ** If you need everything in the same window, consider using some iframe to view other pages. The advantage of iframes is that they have their own CSS styles and JavaScript sandbox, so any page viewed within an iframe does not (generally) affect its parent container. Of course, there are ways to communicate between an iframe and its parent and vice versa. But this is out of the scope of the question.
doc_23535382
if (rank == 0) { /* Send Ping, Receive Pong */ dest = 2; source = 2; rc = MPI_Send(pingmsg, strlen(pingmsg)+1, MPI_CHAR, dest, tag, MPI_COMM_WORLD); rc = MPI_Recv(buff, strlen(pongmsg)+1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat); printf("Rank0 Sent: %s & Received: %s\n", pingmsg, buff); } else if (rank == 2) { /* Receive Ping, Send Pong */ dest = 0; source = 0; rc = MPI_Recv(buff, strlen(pingmsg)+1, MPI_CHAR, source, tag, MPI_COMM_WORLD, &Stat); printf("Rank1 received: %s & Sending: %s\n", buff, pongmsg); rc = MPI_Send(pongmsg, strlen(pongmsg)+1, MPI_CHAR, dest, tag, MPI_COMM_WORLD); } I run this program on a 3 node environment. However, the system displays: Fatal error in MPI_Send: Other MPI error, error stack: MPI_Send(173)..............: MPI_Send(buf=0xbffffb90, count=10, MPI_CHAR, dest=2, tag=1, MPI_COMM_WORLD) failed MPID_nem_tcp_connpoll(1811): Communication error with rank 2: Unknown error 4294967295 I'm wondering why I can send a message from the rank-0 node to the rank-1 node, but an error occurs when the destination is changed to the rank-2 node? Thanks. A: Actually, have you checked whether strlen(pingmsg) is the same in both MPI_SEND and MPI_RECV? The amount of data sent using MPI_SEND should be less than or equal to the amount of data to be received by MPI_RECV, or else it will lead to an error.
doc_23535383
private void parseJSon(String data) throws JSONException { if (data == null) return; List<Route> routes = new ArrayList<Route>(); JSONObject jsonData = new JSONObject(data); JSONArray jsonRoutes = jsonData.getJSONArray("routes"); long totalDistance = 0; int totalSeconds = 0; JSONObject jsonDistance = null; JSONObject jsonDuration = null; for (int i = 0; i < jsonRoutes.length(); i++) { JSONObject jsonRoute = jsonRoutes.getJSONObject(i); Route route = new Route(); JSONObject overview_polylineJson = jsonRoute.getJSONObject("overview_polyline"); JSONArray jsonLegs = jsonRoute.getJSONArray("legs"); for (int j = 0; j < jsonLegs.length(); j++) { jsonDistance = ((JSONObject) jsonLegs.get(j)).getJSONObject("distance"); totalDistance = totalDistance + Long.parseLong(jsonDistance.getString("value")); td = String.valueOf(totalDistance); /** Getting duration from the json data */ jsonDuration = ((JSONObject) jsonLegs.get(j)).getJSONObject("duration"); totalSeconds = totalSeconds + Integer.parseInt(jsonDuration.getString("value")); ts = String.valueOf(totalSeconds); double dist = totalDistance / 1000.0; Log.d("distance", "Calculated distance:" + dist); int days = totalSeconds / 86400; int hours = (totalSeconds - days * 86400) / 3600; int minutes = (totalSeconds - days * 86400 - hours * 3600) / 60; int seconds = totalSeconds - days * 86400 - hours * 3600 - minutes * 60; Log.d("duration", days + " days " + hours + " hours " + minutes + " mins" + seconds + " seconds"); } route.distance = td; route.duration = ts; route.points = decodePolyLine(overview_polylineJson.getString("points")); routes.add(route); } listener.onDirectionFinderSuccess(routes); } how can I do it? A: td = String.valueOf(totalDistance * 0.001); ts = String.valueOf(totalSeconds / 60); A: As we know, 1 km = 1000 meters and 1 min = 60 seconds. So, if your result is coming in meters and seconds, then you can easily convert it into km and minutes with a simple calculation.
Td = total_distance * 0.001 Ts = total_duration / 60 A: For time: long min = TimeUnit.SECONDS.toMinutes(seconds); For Distance: double km = (double) mtr / 1000; A: try this to convert meters to kilometers: public static String convertMeterToKilometer(int totalDistance) { double ff = totalDistance / 1000.0; BigDecimal bd = BigDecimal.valueOf(ff); bd = bd.setScale(2, RoundingMode.HALF_UP); return String.valueOf(bd.doubleValue()); }
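Language aside, the arithmetic in these answers is the same everywhere; a Python sketch of the two conversions (for illustration only):

```python
def meters_to_km(meters):
    """1 km = 1000 m, so divide by 1000 (equivalently, multiply by 0.001)."""
    return meters / 1000.0

def split_duration(total_seconds):
    """Break a duration in seconds into (days, hours, minutes, seconds),
    mirroring the /86400, /3600, /60 arithmetic in the question's code."""
    days, rem = divmod(total_seconds, 86400)
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    return days, hours, minutes, seconds
```

divmod does the divide-and-take-remainder step in one call, which keeps each unit's leftover flowing into the next smaller unit.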
doc_23535384
Using Wildfly (JBoss 9.0.2), play latest version (activator). Have placed my war files for service and web (play project) in Wildfly's standalone --> deployments folder. The application is working fine. The issue is I cannot debug the application. Have created a new debug configuration under Remote Java Application with host as localhost, port as 8787; as source I am adding the two projects, web and service. As the war files are placed in wildfly, which runs at port 8787, I have mentioned this port number in the debug configuration. It was working fine a few days back; I was able to debug my play as well as service side code with the above debug configuration. But now (since the past two days) debug points are not working at all. Tried adding fork in run := false in build.sbt, it didn't help. How should I debug this application with wildfly and activator? I am just running the jboss server and deploying the application locally. This is the way it works, I do not want to run separate play and service applications. Totally at my wits' end with this. Please suggest something. Thanks !! A: Start wildfly with the --debug option in command line.
doc_23535385
mywebsite/620x439/9122600a but if you want to have better resolution you have to pass an argument with it, so the link looks like this: mywebsite/620x439/9122600a?wersja=720p The problem is that my program won't know if the video from this link has a better resolution; it has to figure it out itself. I had an idea to make a request to this URI and check the server response. I do it this way: Uri targetUri = new Uri(videoPageURLHD); var request = (HttpWebRequest)WebRequest.Create(targetUri); var response = (HttpWebResponse)request.GetResponse(); if (response.StatusCode == HttpStatusCode.OK) { //do something } else { //do something else } The problem is that no matter whether the video file with that resolution exists or not, the server always responds with HTTP 200, even though in the debug console in the web browser I can see that it failed to load the resource. The question is: is it possible to get this info about failing to load a resource using request/Webclient/whatever? As an output I need a simple true/false answer to the question of whether this video has a better resolution or not. A: is it possible to get this info about failing to load resource using request/Webclient No. HttpWebRequest and WebClient only load the html, nothing else. They don't render the page or validate the html syntax, let alone load JavaScript and video. You are getting a 200 because the html download was successful. You could however try to figure out the URL for the video and try to download it using WebClient if it's not blocked by the server.
doc_23535386
import win32com.client as win32 excel = win32.gencache.EnsureDispatch('Excel.Application') wb = excel.Workbooks.Open(r'C:\...\.xlsx') ws = wb.Worksheets('sheet1') ws.Cells(1,1).AddComment = "comment" --> object has no attribute 'AddComment' Do you know how to add a new comment to Excel using win32? Thank you! A: AddComment is a method, not a property. ws = wb.Worksheets('sheet1') ws.Cells(1,1).AddComment("comment") Just read the documentation on MSDN.
doc_23535387
Assume one has a table like this COL1 FLAG aaa 1 aaa 0 aaa 1 bbb 0 I need to write a query to get the following output: COL1_VALUE FLAGGED TOTAL aaa 2 3 bbb 0 1 where the FLAGGED column contains the total count of the 'aaa' rows for which FLAG=1, and the TOTAL column is the total number of rows containing 'aaa'; in other words, find how many rows containing 'aaa' are flagged in relation to the total number of rows containing 'aaa'. Is it possible with a single query? (i.e. without using temp tables etc.) (MSSQL2008) A: SELECT COL1 AS COL1_VALUE, COUNT(CASE WHEN FLAG = 1 THEN 1 END) AS FLAGGED, COUNT(*) AS TOTAL FROM YourTable GROUP BY COL1 A: SELECT COL1, SUM(FLAG) AS FLAGGED, Count(*) AS TOTAL from tbl GROUP BY COL1 A: SELECT Tab.COL1 AS COL1_VALUE, SUM(CASE WHEN Tab.FLAG = 1 THEN 1 ELSE 0 END) AS FLAGGED, COUNT(*) AS TOTAL FROM Tab GROUP BY Tab.COL1
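The answers all use the same conditional-aggregation idea. As an illustration, the pattern can be reproduced with Python's built-in sqlite3 module (table and column names taken from the question; the SQL follows the CASE-based form from the answers):

```python
import sqlite3

# In-memory database with the sample data from the question
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE YourTable (COL1 TEXT, FLAG INTEGER)")
conn.executemany(
    "INSERT INTO YourTable VALUES (?, ?)",
    [("aaa", 1), ("aaa", 0), ("aaa", 1), ("bbb", 0)],
)

# Conditional aggregation: count flagged rows and total rows per group
rows = conn.execute(
    """
    SELECT COL1 AS COL1_VALUE,
           SUM(CASE WHEN FLAG = 1 THEN 1 ELSE 0 END) AS FLAGGED,
           COUNT(*) AS TOTAL
    FROM YourTable
    GROUP BY COL1
    ORDER BY COL1
    """
).fetchall()

print(rows)  # [('aaa', 2, 3), ('bbb', 0, 1)]
```

The single GROUP BY pass produces both counts at once, so no temp table is needed.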
doc_23535388
SQL Query: Cast(Round(Column_Name, 2, 1) AS Decimal (18,2)) As New_Column gives me the value 1234.23; I understand it is truncating my last digit up to 2 decimals. Snowflake Query: Cast(Round(Column_Name),2) AS Decimal (18,2) As New_Column gives me 1234.24, as we can use only 2 values. "Cast(Round(Column_Name),1.6) AS Decimal (18,2) As New_Column" gives me 1234.23 as the expected result. Can you please explain the relation between (,2,1) in SQL and (,1.6) in Snowflake? And are we allowed to use decimal values? Is there any option to pass 3 values in Snowflake as we do in SQL? A: For Snowflake: ROUND accepts two arguments, and the second one is the "scale expression", which indicates the number of digits the output should include after the decimal point. It should evaluate to an integer from -38 to +38. So you should not enter 1.6. Although it works now, someday it may be fixed and produce an error. https://docs.snowflake.com/en/sql-reference/functions/round.html For Microsoft SQL Server: The third parameter indicates the function: When the function is omitted or has a value of 0 (default), numeric_expression is rounded. When a value other than 0 is specified, numeric_expression is truncated. https://learn.microsoft.com/en-us/sql/t-sql/functions/round-transact-sql?view=sql-server-ver15 So when you use "Round(Column_Name, 2, 1)", you truncate the rest of the digits instead of rounding the number. If you want to do the same thing, you can use the TRUNCATE function in Snowflake: https://docs.snowflake.com/en/sql-reference/functions/trunc.html I suspect that you store your value as a floating-point number. Floating point numbers are approximate values. A floating point number might not round as expected. https://docs.snowflake.com/en/sql-reference/data-types-numeric.html#float-float4-float8
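To make the rounding-versus-truncation distinction concrete, here is a small Python sketch (not SQL Server or Snowflake itself) using the decimal module; 1234.2399 stands in for the column value:

```python
from decimal import Decimal, ROUND_HALF_UP, ROUND_DOWN

value = Decimal("1234.2399")

# Equivalent of ROUND(value, 2): the second decimal digit is rounded up
rounded = value.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Equivalent of TRUNC(value, 2) in Snowflake / ROUND(value, 2, 1) in
# SQL Server: everything after the second decimal digit is dropped
truncated = value.quantize(Decimal("0.01"), rounding=ROUND_DOWN)

print(rounded)    # 1234.24
print(truncated)  # 1234.23
```

This mirrors the two results described above: rounding gives 1234.24, truncation gives 1234.23.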
doc_23535389
I want to inspect dropdown options so I can edit css.
doc_23535390
Failed to execute goal org.apache.maven.plugins:maven-compiler-plugin:3.8.1:testCompile (default-testCompile) on project MarkitWireCheck: Compilation failure No compiler is provided in this environment. Perhaps you are running on a JRE rather than a JDK? What I did was: * *Open the project in cmd: cd C:\Users\MIRABR\eclipse-workspace\MarkitWireCheck *Execute Maven: mvn test -Dtest=MKTWireComparisonTest#compareMKTWireIRS#compareMKTWireIRS (in this case executing just one test from this class) This is my config: JDK used: C:\Program Files\AdoptOpenJDK\jdk-11.0.10.9-hotspot Execution Environment How can I solve this problem?
doc_23535391
var movies = [ { Name: "The Red Violin", ReleaseYear: "1998" }, { Name: "Eyes Wide Shut", ReleaseYear: "1999" }, { Name: "The Inheritance", ReleaseYear: "1976" } ]; var markup = "<li><b>${Name}</b> (${ReleaseYear})</li>"; /* Compile the markup as a named template */ $.template( "movieTemplate", markup ); /* Render the template with the movies data and insert the rendered HTML under the "movieList" element */ $.tmpl( "movieTemplate", movies ) .appendTo( "#movieList" ); Now I am trying to access only the Name field. How can I access the Name field from the json file? $(document).ready(function(){ $.getJSON('dat.js', function(data) { $( "#movieTemplate" ).tmpl( data[i].name).appendTo( "#movieList" ); }); }); I want to access the field but an error shows that data[0] is undefined. A: Use jQuery.getJSON() to load the json file content: var markup = "<li><b>${Name}</b> (${ReleaseYear})</li>"; /* Compile the markup as a named template */ $.template( "movieTemplate", markup ); $.getJSON('movies.json', function(data) { // 'data' is the json loaded and parsed from the file $.tmpl( "movieTemplate", data ) .appendTo( "#movieList" ); });
doc_23535392
Please help me, sorry for my English. The methods below show the configuration and the Intent where I call a PayPal service View.OnClickListener pagarPaypal = new View.OnClickListener() { @Override public void onClick(View v) { //Initialize Paypal onBuyPressed("0.00", "USD"); } }; Configuration methods public PayPalConfiguration initConfigPaypal() { PayPalConfiguration payPalConfiguration = new PayPalConfiguration(); payPalConfiguration.acceptCreditCards(true); payPalConfiguration.environment(PayPalConfiguration.ENVIRONMENT_SANDBOX); payPalConfiguration.merchantName("MERCHANT_NAME"); payPalConfiguration.clientId("CLIENT_ID"); payPalConfiguration.merchantPrivacyPolicyUri(Uri.parse("https://www.paypal.com/webapps/mpp/ua/privacy-full")); payPalConfiguration.merchantUserAgreementUri(Uri.parse("https://www.paypal.com/webapps/mpp/ua/useragreement-full")); payPalConfiguration.languageOrLocale("es_MX"); return payPalConfiguration; } public void onBuyPressed(String cantidad, String typeMoney) { PayPalPayment payPalPayment = new PayPalPayment(new BigDecimal(cantidad), typeMoney, "Sample Item", PayPalPayment.PAYMENT_INTENT_SALE); Intent intent = new Intent(getActivity(), PayPalService.class); intent.putExtra(PayPalService.EXTRA_PAYPAL_CONFIGURATION, initConfigPaypal()); intent.putExtra(PaymentActivity.EXTRA_PAYMENT, payPalPayment); getActivity().startService(intent); } A: In your intent you are using PayPalService.class instead of PaymentActivity.class. PayPalService.class does not have any View. Try this: // Import import com.paypal.android.sdk.payments.PaymentActivity; ............ .................
public void onBuyPressed(String cantidad, String typeMoney) { PayPalPayment payPalPayment = new PayPalPayment(new BigDecimal(cantidad), typeMoney, "Sample Item", PayPalPayment.PAYMENT_INTENT_SALE); Intent intent = new Intent(getActivity(), PaymentActivity.class); intent.putExtra(PayPalService.EXTRA_PAYPAL_CONFIGURATION, initConfigPaypal()); intent.putExtra(PaymentActivity.EXTRA_PAYMENT, payPalPayment); getActivity().startActivityForResult(intent, YOUR_REQUEST_CODE); } Here is a very nice article Android Integrating PayPal using PHP, MySQL Here is the PayPal SDK class description Hope this will help~
doc_23535393
To do so, I set the project.json as follows: "frameworks": { "netstandard1.1": { "imports": "dnxcore50" } } I want that library to use a full .NET library (let's call it OtherLib). I thought it could be possible as long as the .NET version of OtherLib would be compatible with the netstandard version of my library. But it appears not... Here is the error: Package OtherLib X.Y.Z is not compatible with netstandard1.1 (.NETStandard,Version=v1.1). Package OtherLib X.Y.Z supports: - net40 (.NETFramework,Version=v4.0) - net45 (.NETFramework,Version=v4.5) Here is my full project.json: { "version": "1.0.0-*", "dependencies": { "NETStandard.Library": "1.6.0", "OtherLib": "X.Y.Z" }, "frameworks": { "netstandard1.1": { "imports": "dnxcore50" } } } I suspect there is some tricky stuff to do in it to get it working, or maybe it is simply not possible? Thanks in advance. (Excuse me for my english, I am not a native speaker) A: Try to modify your project.json by changing "netstandard1.1" to "net45": { "version": "1.0.0-*", "dependencies": { "NETStandard.Library": "1.6.0", "OtherLib": "X.Y.Z" }, "frameworks": { "net45": { "imports": "dnxcore50" } } } A: I thought it could be possible as long as the .NET version of OtherLib would be compatible with the netstandard version of my library. No, it's not possible. Versions of .Net Framework are supersets of versions of .Net Standard. Since OtherLib is a .Net Framework library, you can't depend on it in a .Net Standard library. You will either have to limit your library to run only on .Net Framework, or you will have to remove the dependency on OtherLib. (Possibly by making two versions of your library: one for .Net Standard that does not depend on OtherLib and one for .Net Framework that does.)
doc_23535394
Local machine: Windows 10. I also have windows 7 machines here, but not really using them. Windows Server 2012 r2 (not in the same physical location). This is not a production server, it's just a server I use for hosting various scripts. I am currently just accessing the windows server with the IP address. However, if it some how makes this easier, I can assign it a static IP link it to a subdomain. Obviously using the real IP of the server, but for the sake of this post, I'm using 1.1.1.1 I've set up git on the windows server. I followed: https://www.server-world.info/en/note?os=Windows_Server_2012&p=openssh I just deleted all the repositories I was testing with, so I could go through it step by step here in hopes someone can point out where I've gone wrong. I created a user on the server, named 'kannkor', and assigned it a password. I am currently RDC (remote desktop connection) into the server. So I can do anything, verify from there if I need. I have putty open, and the connection type is "ssh", the host name is the IP of the server, on port 22. It asks me: login as: I type in "kannkor" It then asks: kannkor@1.1.1.1's password: I type it in. It takes me to: kannkor@computername C:\Users\kannkor> I'd like the repositories to be on the d drive. I can change directories: I create a new folder: d: mkdir repos cd repos From the RDC, I can verify the repos folder is now created under D: Going through a mental checklist, this means my username/password/permissions to that drive/folder are set. At this stage, I feel like I've followed 100 walkthroughs, and they all end up the same. So for sake of argument, I'm going to follow this one: http://thelucid.com/2008/12/02/git-setting-up-a-remote-repository-and-doing-an-initial-push/ On the local machine I open a git bash and type: ssh kannkor@1.1.1.1 It asks me for my password, I type it in. I do the following (following the walkthrough). 
d: cd repos I'm now at: D:\repos> Maybe this is where I've gone wrong, by changing the drive/directory... But it must be possible.. Continuing with the walkthrough: mkdir my_project.git cd my_project.git git init --bare -> Initialized empty Git repository in D:/repos/my_project.git/ I did the git update-server-info (I've tried it with and without; it had no impact on the final error). On RDC, I can see it created the folder my_project.git and it has a few files/folders: hooks, info, objects etc. Not touching it, just noting it. On the local machine I type exit, to exit the ssh session. As before, I want these saved on the D drive. To avoid confusion, I'm going to call the parent directory repositories. I'm currently in: /d/repositories mkdir my_project cd my_project git init -> Initialized empty Git repository in D:/repositories/my_project/.git/ (changed git add * to git add --all) git add --all git commit -m "my initial commit message" >On branch master Initial commit nothing to commit git remote add origin kannkor@1.1.1.1:d/repos/my_project.git git push -u origin master error: src refspec master does not match any. error: failed to push some refs to 'kannkor@1.1.1.1:d/repos/my_project.git' I believe this is because the initial commit didn't have anything. Still on the local machine, I navigate to: d:\repositories\my_project\ I create a file, placeholder.txt, add a single line of text, then save it. Back to git bash: git add --all git commit -m "my initial commit message" [master (root-commit) ac54490] my initial commit message 1 file changed, 1 insertion(+) create mode 100644 placeholder.txt Much better for the local commit. I try the push again. git push -u origin master kannkor@1.1.1.1's password: fatal: ''d/repos/my_project.git'' does not appear to be a git repository fatal: Could not read from remote repository. Please make sure you have the correct access rights and the repository exists. This is about where I have gotten stuck.
My assumption is that the problem is this line: git remote add origin kannkor@1.1.1.1:d/repos/my_project.git I've tried many variations, including: git remote add origin kannkor@1.1.1.1:d:/repos/my_project.git With \ instead of /. Adding slashes to the end of it. A few more variations I attempted all failed. fatal: ''D:\repos\my_project.git'' does not appear to be a git repository fatal: ''D:/repos/my_project.git'' does not appear to be a git repository I also tried this, using the scp syntax: https://stackoverflow.com/a/20987150 which ended with the same results. Any advice would be appreciated. A: d/repos/my_project.git does not look like a valid path. /d/repos/my_project.git would. Try: git remote set-url origin kannkor@1.1.1.1:/d/repos/my_project.git git push -u origin master A: A little preliminary, but I do believe I have the answer/solution (workaround). https://github.com/PowerShell/Win32-OpenSSH/wiki/Setting-up-a-Git-server-on-Windows-using-Git-for-Windows-and-Win32_OpenSSH In short, there is a known bug (around since 2017) where, when the default shell of the server is cmd.exe, it does not consume the single quotes. That's why the errors would come back with odd-looking doubled single quotes. fatal: ''D:\repos\my_project.git'' does not appear to be a git repository The workaround is to change the default shell of the server from cmd.exe to powershell. Details on doing this can be found here. I'm copy/pasting the content from the link above in case that link ever goes dead. Before configuring DefaultShell, ensure the following prerequisites are met: the OpenSSH installation path is in the system PATH. If not already present, amend the system PATH and restart the sshd service. Follow these steps: On the server side, configure the default ssh shell in the windows registry.
Computer\HKEY_LOCAL_MACHINE\SOFTWARE\OpenSSH\DefaultShell - full path of the shell executable Computer\HKEY_LOCAL_MACHINE\SOFTWARE\OpenSSH\DefaultShellCommandOption (optional) - switch that the configured default shell requires to execute a command, immediately exit and return to the calling process. By default this is -c. Powershell cmdlets to set powershell as the default shell New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell -Value "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" -PropertyType String -Force New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShellCommandOption -Value "/c" -PropertyType String -Force To confirm you've done the above correctly: when you ssh into the server, you should be at a powershell prompt instead of a cmd prompt. And finally, the correct syntax for the commands was indeed this: kannkor@1.1.1.1:D:/repos/my_project.git To put it all together: git remote add origin kannkor@1.1.1.1:D:/repos/my_project.git git push origin master kannkor@1.1.1.1's password: Everything up-to-date
doc_23535395
Here is the docker-compose: test: container_name: test volumes: - C:\test\:\test\ build: . When I hook into the docker image I can see that the folder is created in the root folder. Now I need to write the correct path to that folder into the application settings. Before it was something like this: "test": { "Path": "C:\\test\\" } But I don't know how to get the absolute path of my folder from inside docker, so my app can understand where to search for it. Thank you EDIT: it looks like the problem was on my side: the way I defined the volumes created a folder with the name "\test\" ... doing C:\test:/test/ instead did the trick A: If you’re building your own Docker image, you control the filesystem layout entirely. For Linux-based images it’s common to follow the FHS standard (I think the standard MySQL image stores its data in /var/lib/mysql) but it’s also common enough to just store data in subdirectories of the root directory (/data or /config or what not). If you have a setup like this, your image should pick a path. If the only thing in the configuration is the location of that directory, it’s fine to hard-code it in the image. However you document your image (even if it’s just a standard docker-compose.yml file) mention that you have this fixed path; it doesn’t need to match the host path (if any) on any particular system.
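Based on the edit above, a corrected compose file would look roughly like this sketch (host path before the colon, container path after it); the container path /test is then what belongs in the application settings, e.g. "Path": "/test/":

```yaml
test:
  container_name: test
  build: .
  volumes:
    # host path before the colon, container path inside the container after it
    - C:\test:/test
```

The application running inside the container only ever sees the container-side path, so the host path never needs to appear in its settings.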
doc_23535396
I have added the Role field in the CategoryCrudController class and a category_role table in the DB, and set up relations in the Category and Role models. The relation data is now stored in the table, although the checkbox remains unchecked! $this->crud->addField( [ 'label' => 'Roles', 'type' => 'checklist', 'name' => 'roles', 'entity' => 'roles', 'attribute' => 'name', 'model' => "Backpack\PermissionManager\app\Models\Role", 'pivot' => true, ] ); Added role field Now I want to allow the users with the granted role to use CRUD on the category's articles. I know I can use hasRole and denyAccess() in CrudController's setup function, but it doesn't do what I want. public function setup(){ if (backpack_user()->hasRole('certainCategoryRole')) { $this->crud->denyAccess([ 'delete','edit','write']); } } I need to grant article-operation access to users based on their category's assigned role. Can anyone help? Thanx
doc_23535397
I have a Symfony2 form with a builder as you'd expect. Example: $builder->add('home_team', 'choice', [$options]) ->add('away_team', 'choice', [$more_options]) ->add('timestamp', 'datetime_picker', [$usual_stuff]); Now, I have some single-field validations, like NotNull etc., but I need one validator that protects against near-duplicate entries. Specifically, I want to ensure that no one saves a row where there is an existing home_team, away_team, as well as a same-day timestamp. Question When I make a UniqueMatchValidator class, with a validate() function, how do I apply this validator to the entire form, rather than just to a field? Clarification I'm familiar with applying validators to single fields, but applying them to a set of fields, or indeed an entire form, is what I'm wondering about. My thoughts I was thinking I could apply the validator to a single field, but then I would need to know the values of the other ones. Would this be a possible solution?
doc_23535398
remaining keys = <All Keys> - <Set of Keys> Background of design: Our IoT Java consumer applications are running in kubernetes pods with multiple replicas. The number of appliances is huge (in the millions), so we use a Redis hash to store each appliance's metadata. e.g. the data structure in Redis is sample Hash - DEVICE|APC2 (i.e. for 2nd series appliance) sample Key - APC278ER89A1 (i.e. Device_ID) sample Value - Cooking|Claimed|Online (i.e. Device's metadata) For performance improvement we also have an in-memory JVM cache (i.e. a HashMap) to reduce lookup time in Redis. Whenever we receive a message from an appliance we check by device ID whether the in-memory JVM cache contains the appliance's metadata; if it does not, we fetch it from Redis and put it in the in-memory cache, to refer to it next time. There is a configurable value for the maximum amount of data held in the in-memory cache (e.g. 2000); if the maximum is reached, an eviction thread removes entries from the in-memory JVM cache. Problem: Whenever we need to look up a specific appliance ID in the data set present in the in-memory JVM cache and it is not found, we need to scan all the data in Redis again (including data already cached in the in-memory JVM cache) for that hash. What are the options to scan in Redis only within the data set that is not already scanned/present in the in-memory JVM cache for that pod? A: Problem: Whenever we need to look up a specific appliance ID in the data set present in the in-memory JVM cache and it is not found, we need to scan all the data in Redis again (including data already cached in the in-memory JVM cache) for that hash. You're not utilizing the Redis hash correctly; you do not have to scan all the keys using HGETALL or similar operations. You can do an HGET on the device ID to find the metadata attached to that device ID. If you want to introspect more than one device ID at the same time, you should use the HMGET command. All well-known Java drivers support these commands.
EDIT: HashMap: id1, id2, id3 Redis Map: id1, id2, id3, id4, id5, id6, id7 Search id1 => already in the HashMap, do not search in Redis Search id6 => not in the HashMap, search in Redis using HGET and update the HashMap Updated Map: HashMap: id1, id2, id3, id6 Search id1, id2, id3 => all are in the HashMap, so do not search Redis Search id1, id2, id6, id7, id8 => id7, id8 are not in the HashMap, so search for these using HMGET Updated Map: HashMap: id1, id2, id3, id6, id7 In this example, I have fetched and cached only the required keys, instead of pulling all keys with HGETALL.
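The lookup pattern in the answer above can be sketched as follows. This is an illustration only: `redis_hash` is a plain dict standing in for the Redis hash, `hmget` mimics the HMGET command (missing fields come back as None), and the device IDs are the sample values from the question.

```python
# Plain-dict stand-in for the Redis hash DEVICE|APC2
redis_hash = {
    "APC278ER89A1": "Cooking|Claimed|Online",
    "APC278ER89A2": "Idle|Claimed|Offline",
}

# In-memory JVM-style cache (here just a dict), pre-warmed with one entry
local_cache = {"APC278ER89A1": "Cooking|Claimed|Online"}

def hmget(hash_map, fields):
    """Mimics Redis HMGET: one round trip for many fields."""
    return [hash_map.get(f) for f in fields]

def lookup(device_ids):
    """Serve hits from the local cache; batch-fetch only the misses."""
    missing = [d for d in device_ids if d not in local_cache]
    if missing:
        for device_id, meta in zip(missing, hmget(redis_hash, missing)):
            if meta is not None:
                local_cache[device_id] = meta
    return {d: local_cache.get(d) for d in device_ids}

result = lookup(["APC278ER89A1", "APC278ER89A2", "UNKNOWN"])
print(result)
```

Only the keys absent from the local cache go over the wire, which is the point of preferring HGET/HMGET over a full HGETALL scan.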
doc_23535399
I threw together a Fiddle that shows the current format in which I am receiving data. Controller method public JsonResult GetDeferredAccountDetailsByAccount(int id) { var details = _deferredAccountDetailsService.GetDeferredAccountDetailsByAccount(id); return Json(details, JsonRequestBehavior.AllowGet); } This returns : ..And in the browser : In the Fiddle I linked, simply wrapping the object literal in [] allows Knockout to interpret the object just fine, but without it, it fails. Is there something I am doing incorrectly, or a reason why I'm not receiving JSON? Do I need to return an ICollection or something for it to be interpreted as JSON? I looked around but couldn't really find anything. A: You expect an Array, but you are returning a literal object from the controller. And you are binding a collection using knockout, but accounts is a literal. That's why everything works when you put [] around the JSON. You should either push every property from the JSON into an Array instead of _map, or fix the _map function to bind the property to an Array!