query_id: stringlengths 4 to 64
query_authorID: stringlengths 6 to 40
query_text: stringlengths 66 to 72.1k
candidate_id: stringlengths 5 to 64
candidate_authorID: stringlengths 6 to 40
candidate_text: stringlengths 9 to 101k
18f03127c07036452207b89f5ef53822c72d08d9aa924754c889751eda3def9d
['efacc30922fa4a9d8363a3a83d223046']
I had the same issue. While describing one of the pods with UnexpectedAdmissionError I saw the following:

Update plugin resources failed due to failed to write deviceplugin checkpoint file "kubelet_internal_checkpoint": write /var/lib/kubelet/device-plugins/.525608957: no space left on device, which is unexpected.

When describing the node:

OutOfDisk Unknown Tue, 30 Jun 2020 14:07:<PHONE_NUMBER> Tue, 30 Jun 2020 14:12:05 -0400 NodeStatusUnknown Kubelet stopped posting node status.

I resolved this by rebooting the node.
263b9a090ef4ede788d9b4a15f7c535d5998f9904ec85aebbe4ef59b4e4c63e9
['efacc30922fa4a9d8363a3a83d223046']
I'm trying to create a dynamic GitLab pipeline based on its own execution progress. For example, I have 2 environments, and deployment to each of them will be enabled/disabled based on the execution of the script in before_script. It doesn't work for me; it seems that a pipeline variable's value can't be changed after the pipeline has started. Any suggestions? (please see my gitlab-ci.yml below)

variables:
  RELEASE: limited

stages:
  - build
  - deploy

before_script:
  - export RELEASE=${check-release-type-dynamically.sh}

build1:
  stage: build
  script:
    - echo "Do your build here"

## DEPLOYMENT
deploy_production_ga:
  stage: update_prod_env
  script:
    - echo "deploy environment for all customers"
  allow_failure: false
  only:
    - branches
  only:
    variables:
      - $RELEASE == "general_availability"

deploy_production_limited:
  stage: update_prod_env
  script:
    - echo "deploy environment for limited customers"
  allow_failure: false
  only:
    - branches
  only:
    variables:
      - $RELEASE == "limited"
b8182c7d147e806a128e01e74432350e7f7839206c105b051196301331ca95f5
['efae30a2fa2a4b6785a6538201f927c8']
@Polygnome Actually, we have had the concept of _one icon to one function_ from the start, and in the weekly meetings we hold we review this icon usage. We thought that it would be a bad thing for users, but asking this question showed me lots of points of view. I will discuss it in our next meeting.
2cb0cedc32ab9e089caa9979a014119c969f8dd33e49b0328c74e7a167eb332f
['efae30a2fa2a4b6785a6538201f927c8']
I would hope so, because while the circuit is in motion it shows that, for a sizable amount of time, a valid transmission took place. But yes, I have checked with a multimeter, and the resistances are within ~7 ohms of the given values, even the 1.1M ohm resistance.
47b694f28e63d87a816a24adf70a78c5f71bda614fa7dcf3200770a8d22a0801
['efb31890994a4366979e60ae3747cb9a']
As I understand it, WSL stands for WMI SPI Layer. After going through the code, WSL seems to be a wrapper layer that just marshals/unmarshals the commands/data/events over an SPI interface, and the actual processing is done in the firmware. Does anyone have references to this firmware/driver? What does WMI stand for? Does WMI/WSL run on QC4004 chips only?
682c8a87979b62edd5c77807370a6871bf7ea160079794181242bab2b955351b
['efb31890994a4366979e60ae3747cb9a']
I'm using the IAR toolchain to compile a few source files and then link the generated .o files. However, I'm running into linking errors like these:

Error[Li005]: no definition for "main" [referenced from cmain.o(rt7M_tl.a)]
Error[Lc036]: no block or place matches the pattern "ro code section .intvec in vector_table_M.o(rt7M_tl.a)"

As I understand it, the ILINK linker is trying to link the object files as an executable image and in the process is adding dependencies from the standard libraries [i.e. looking for main() and the interrupt vector table]. What I'm looking for: How do I configure the linker to not add these system-library dependencies like main/start/interrupt-vector-table etc.? How do I configure the linker to output a non-executable image from a bunch of object files, if that is at all possible? You can think of this non-executable image as a sort of configuration-table image which will be put in persistent memory to be read/written by the main application image.
50916a13f2c546c7092266de655b10f19366687184c64290f4d718f47cc2b521
['efc995090a21474eafbf609cfb639f29']
I know that it's not really an answer, but I think that it's necessary to point out the following fact, so I put it as an answer and I hope that it will be fine with the community. I'm not an expert in stopping times, but I don't think that it's true that $$\bigcup_{s<t}\{X_s\leq x\}=\bigcup_{\substack{s<t\\ s\in \mathbb Q}}\{X_s\leq x\}.$$ Take $X_s=|s-\sqrt 2|$. Then if $t>\sqrt 2$, $$\bigcup_{s\leq t}\{X_s\leq 0\}=\{X_{\sqrt 2}=0\}$$ but $$\bigcup_{\substack{s\leq t\\s\in \mathbb Q}}\{X_s\leq 0\}=\varnothing .$$ Nevertheless, this can be fixed by considering $\{X_{\tau}<x\}$ instead of $\{X_\tau\leq x\}$ (since $\{(-\infty ,a)\mid a\in\mathbb R\}$ is a $\pi$-system generating the Borel sets).
061f563b52362f901df963f141523859323f11c61205ff538b7fbb06d2e43162
['efc995090a21474eafbf609cfb639f29']
The curve defined implicitly by the equation $$xy^3+x^3y=4$$ has no horizontal tangent. In the solution they did as follows: $$xy^3+x^3y=4\implies y^3+3xy^2y'+3x^2y+x^3y'=0\implies y'=-\frac{3x^2y+y^3}{3xy^2+x^3}.$$ Question: why can they say that $y=y(x)$? It looks a bit strange to me. In other words, why can I say that $$xy^3+x^3y=4\implies \exists y\in \mathcal C^1(\mathbb R): xy(x)^3+x^3y(x)=4\ \ ?$$
525590bad7d8ec45d4269c0227b2c92abde9f5bc8d5462b6ede98ae3b198e03e
['efdccf7b336b499eadbce78d4409803e']
I have actually found a webkit that worked!

img.homepg { width: 920px; }

@media screen and (max-width: 910px) {
  img.homepg { width: 670px; }
}

img {
  transition: all .2s linear;
  -o-transition: all .2s linear;
  -moz-transition: all .2s linear;
  -webkit-transition: all .2s linear;
}

It was used to change text size when stretching a browser page, but by changing text to an img tag it works wonders!
36a7b1e2dafa7c693c4ed972baa04360af448f663a7e3084b19ceb3e89cdd42e
['efdccf7b336b499eadbce78d4409803e']
I had a similar problem for an iPad app and I simply used this:

img.homepg { width: 920px; }

@media screen and (max-width: 910px) {
  img.homepg { width: 670px; }
}

img {
  transition: all .2s linear;
  -o-transition: all .2s linear;
  -moz-transition: all .2s linear;
  -webkit-transition: all .2s linear;
}

with .homepg being the img class, 920 being the largest image width, 910 being the max screen width that triggers the image resize (aka when the orientation is changed), and 670 being the min img width!
55e3d5a270d0fbe07616be96dc24f193a88b8f22dd0d662a00af561766319ab9
['efe09dc5b5da4d28892939f4cbfaa9d3']
I have a main HTML file where I embed Web Components. When debugging, I noticed that my Custom Elements are not marked when I hover over them. I'm also not able to set the size of the custom HTML element from an outside CSS. Here is my code:

view.html

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8"/>
  <link href="view.css" rel="stylesheet" />
  <script defer src="widget-uhr.js"></script>
</head>
<body>
  <widget-uhr></widget-uhr>
  <widget-uhr></widget-uhr>
  <widget-uhr></widget-uhr>
  <button>View Button</button>
</body>
</html>

widget-uhr.html

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8"/>
  <link href="widget-uhr.css" rel="stylesheet" />
</head>
<body>
  <div id="time"></div>
  <button>Widget Button1</button>
  <button>Widget Button2</button>
  <button>Widget Button3</button>
</body>
</html>

view.css

body { display: block; }
button {
  min-width: 250px;
  min-height: 100px;
  background-color: aqua;
  font-size: 32px;
}

widget-uhr.css

body { color: green; background-color: gray; }
div { color: blue; background-color: gray; }
button { background-color: red; }

widget-uhr.js

fetch("widget-uhr.html")
  .then(stream => stream.text())
  .then(text =>
    customElements.define("widget-uhr", class extends HTMLElement {
      constructor() {
        super()
        this.attachShadow({ mode: 'open' }).innerHTML = text
      }
    })
  )

Is the reason maybe because I inject my HTML from the widget via fetch?
89981ed62373bc167327d0022a87718de02352f3b6feac94eb37d1907016bb37
['efe09dc5b5da4d28892939f4cbfaa9d3']
I have a C# application with two different views. Each view has its own ViewModel. The ViewModels access the same Model. The Views need the data from the same Model in a different format. The ViewModels handle the formatting and validation. Both ViewModels should be able to communicate with each other in some way. For example, if ViewModel1 updates something in the Model, ViewModel2 should also update its View. The ViewModels don't have to know each other; they should just get synced when one side changes something. I found some old posts about the Mediator pattern. Is this still the way to go? I think the Observer pattern would not work here. The only alternative I could think of was to create an interface on both ViewModels which lets them talk to each other.
a1dd148d65b7d4c51bc787f22b39b4194df252524638c3cfa5c8420211dfd5b2
['efe351349a08442d9923b0c6f92bbd64']
I've found the best way to handle this is by simply deleting the .nuget folder and re-enabling solution-wide package restore. As said above, you could add a self-update command to your build, but that will not update the targets or config files if there are changes between versions (or remove the reference from your solution). Perhaps it's not that big of a deal, but this is the surefire way to make sure you have the latest exe and configuration files. And at the end of the day, updating is only an issue if you need access to a new command or there is eventually a breaking change in a new release.
174012b589ee3b79b7a4bb552c0ac13498fae0baad8f4e232cf9d11162080853
['efe351349a08442d9923b0c6f92bbd64']
I found the answer and updated my question above with the solution. The solution itself was more or less present in the Using the Web API Dependency Resolver article; I just had to keep tweaking it for Ninject. Both answers helped me quickly narrow this down, so thanks to @Remo and <PERSON>.
17b13d0f0e5ea88409263431b1902e381ec5479cdbc290bafaa9639ced969295
['efedf42f76164e989f4b9b491ad51a58']
You need to mention the encoding type as well while sending the request. Since your parameters data is JSON, add the encoding parameter as encoding: JSONEncoding.default. For example, I have used:

manager.request(url, method: httpMethod, parameters: parameters, encoding: JSONEncoding.default, headers: configuredHeader)
    .validate(statusCode: 200..<300)
    .responseJSON(completionHandler: { [weak self] response in
        self?.handleNetworkResponse(response: response, networkCompletionHandler: networkCompletionHandler)
    })
aed3cbf83ac3551a3806b008e96a014489e6e3cf02e28b1c20f8c26c4184f58b
['efedf42f76164e989f4b9b491ad51a58']
It will show 1.3.2 since you are referring to the bundle infoDictionary, which has the value 1.3.2, and not to the iTunes version. But I still have a question: did Apple really accept your version 1.3.2, and is it live? It may even fail to validate, since the versions will not match.
28601471cd396e66ffe4bd86139a0940e0bee15ddf08851329a00aff1338d893
['efeec49bb7fc47538920210ee41068be']
I found a weird bug that only occurs in Internet Explorer (6 through 9). Take for example the URL http://www.spiegel.de/#any_anchor_value, which I open in any Internet Explorer (from what I can tell it works with any URL). As soon as the page finishes loading, that anchor value is attached to the title of the browser window (in this case even twice…). When I inspect the DOM of this page, it even appears in the title tag. This works on any website and in any version of Internet Explorer from 6 through 9. I can't seem to find much information on this, nor have I ever heard of it or even noticed it. What's the reason for this? What am I missing?
759cf4016d89e67e193ed07f7ead83abe3072af5e866db84419e8d11e6c2a140
['efeec49bb7fc47538920210ee41068be']
For a given dm-x with major M, minor m, there is a corresponding /sys/dev/block/M:m/dm/uuid file. If the content of the uuid file starts with part, it is safe to assume it is a partition. The corresponding whole device is found in /sys/dev/block/M:m/slaves/. For instance:

[centos@try ~]$ cat /sys/dev/block/253:0/dm/uuid
mpath-353333330000007d0
[centos@try ~]$ cat /sys/dev/block/253:1/dm/uuid
part1-mpath-353333330000007d0
[centos@try ~]$ ls -l /sys/dev/block/253:1/slaves
total 0
lrwxrwxrwx. 1 root root 0 15 août 22:06 dm-0 -> ../../dm-0
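For scripting this check, the "does the uuid start with part" test above can be wrapped in a small helper. This is a hedged Python sketch that only encodes the rule stated in this answer; the sysfs paths themselves depend on the runtime environment and are not touched here, and the function name is made up for illustration.

```python
def dm_is_partition(uuid_text: str) -> bool:
    """A device-mapper node is a partition when the content of
    /sys/dev/block/M:m/dm/uuid starts with 'part' (e.g.
    'part1-mpath-353333330000007d0'); the whole multipath device's
    uuid starts with 'mpath-' instead."""
    return uuid_text.strip().startswith("part")

# The two uuid values shown in the terminal session above:
whole = "mpath-353333330000007d0"
part = "part1-mpath-353333330000007d0"
```

In a real script you would read the uuid text from /sys/dev/block/M:m/dm/uuid before passing it in.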
477a37c3317df0eb92948ac8bf5f9b2b8c7184c8baa0c35368dbfdc0a06c89c4
['eff8d567025f40f5a69c6834694de809']
When dealing with files in Java, my preferred option is to go with Apache VFS, as I can then treat them as any other POJO. Obviously, that's a lot of work when you are already stuck with the File API. Another option is to forget Mockito entirely and write those files on the system. I usually avoid that, as it sometimes makes it harder to have tests run in parallel on some systems. For this specific situation, my solution is generally to provide a special class, say FileBuilder, that can instantiate new Files:

public class FileBuilder {
    public java.io.File newFile(String pathname) {
        return new java.io.File(pathname);
    }
}

I then mock this class before passing it to MyClass, and instrument it as appropriate:

@Test(expected = Exception.class)
public void should_fail_when_file1_is_bigger_than_file2() {
    File mockFile1 = file(2L);
    File mockFile2 = file(1L);
    FileBuilder mockFileBuilder = mock(FileBuilder.class);
    when(mockFileBuilder.newFile("file1")).thenReturn(mockFile1);
    when(mockFileBuilder.newFile("file2")).thenReturn(mockFile2);
    new MyClass(mockFileBuilder).myMethodSpaceCheck();
}

private static File file(long length) {
    File mockFile = mock(File.class);
    when(mockFile.length()).thenReturn(length);
    return mockFile;
}

(your example mentions File.size(); I assumed you meant File.length())

The actual implementation of MyClass would look like this:

public class MyClass {
    private String file1;
    private String file2;
    private final FileBuilder fileBuilder;

    public MyClass() {
        this(new FileBuilder());
    }

    @VisibleForTesting
    MyClass(FileBuilder fileBuilder) {
        this.fileBuilder = fileBuilder;
    }

    public void myMethodSpaceCheck() {
        //...
    }
}
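The same builder-seam idea can be sketched outside Java. Below is a minimal Python analogue using unittest.mock, where an injected size_of callable plays the role of the FileBuilder; the names SpaceChecker and size_of are invented for this sketch and are not part of the answer's code.

```python
from unittest import mock


class SpaceChecker:
    """Hypothetical analogue of MyClass: the file-size lookup is
    injected, so tests never need to touch the filesystem."""

    def __init__(self, size_of):
        self.size_of = size_of  # callable: path -> size in bytes

    def check(self, file1, file2):
        if self.size_of(file1) > self.size_of(file2):
            raise ValueError("file1 is bigger than file2")


# In a test, replace the lookup with a Mock that returns canned sizes,
# just like when(...).thenReturn(...) does in Mockito.
fake_sizes = mock.Mock(side_effect=lambda p: {"file1": 2, "file2": 1}[p])
checker = SpaceChecker(size_of=fake_sizes)
```

The constructor seam mirrors the @VisibleForTesting constructor above: production code passes a real lookup, tests pass the mock.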
3aa61dec85b848f0098bf4514ab9531245b74507b94a9d02f6f64a926652cc87
['eff8d567025f40f5a69c6834694de809']
Code Quality is unpopular? Let me dispute that. Conferences such as Agile 2009 have a plethora of presentations on Continuous Integration and on testing techniques and tools. Technical conferences such as Devoxx and Jazoon also have their fair share of those subjects. There is even a whole conference dedicated to Continuous Integration & Testing (CITCON, which takes place 3 times a year on 3 continents). In fact, my personal feeling is that those talks are so common that they are on the verge of being totally boring to me. And in my experience as a consultant, consulting on code quality techniques & tools is actually quite easy to sell (though not very highly paid). That said, though I think that Code Quality is a popular subject to discuss, I would rather agree with the fact that developers do not (in general) do good, or enough, tests. I have a reasonably simple explanation for that. Essentially, it boils down to the fact that those techniques are still reasonably new (TDD is 15 years old, CI less than 10) and they have to compete with 1) managers, and 2) developers whose ways "have worked well enough so far" (whatever that means). In the words of <PERSON>, modern Code Quality techniques are still early in the adoption curve. It will take time until the entire industry adopts them. The good news, however, is that I now meet developers fresh from university who have been taught TDD and are truly interested in it. That is a recent development. Once enough of those have arrived on the market, the industry will have no choice but to change.
3e5b07adfa90d686bcd5e16646b287b55616507fb5f2ba237c30e191f83222ef
['f03a4a942910486ab36dece180785a89']
I am using a particle system to evenly distribute points on a sphere. That works great. I then place instances of a given geometry on those points. This part also works. I would now like to rotate those geometries to match the surface angle of the sphere. Here is the function so far:

function placeGeometryAtPlatonicPoints(points) {
    var len = points.length - 1,
        x, y, z,
        geometry,
        // subgroup: a group to apply the rotations to
        subgroup = new THREE.Object3D(),
        mesh,
        material = new THREE.MeshLambertMaterial({ color: 0x0992299 }),
        r, theta, varphi;

    // I wait to append this group because I am waiting for the
    // particles to settle into their positions
    scene.add(group);

    for (len; len >= 0; len -= 1) {
        // Geometry could be any geometry; I am just using a cube to test.
        geometry = new THREE.CubeGeometry(25, 25, 25, 1, 1, 1);

        x = points[len].x;
        y = points[len].y;
        z = points[len].z;

        // Move the geometry to the point on the sphere.
        geometry.applyMatrix(new THREE.Matrix4().makeTranslation(x, y, z));

        mesh = new THREE.Mesh(geometry, material);
        subgroup.add(mesh);

        // This next portion is just some guesswork about
        // polar and azimuth coordinates
        r = Math.sqrt(Math.pow(x, 2) + Math.pow(y, 2) + Math.pow(z, 2));
        theta = Math.acos(z / r);
        varphi = Math.atan(y / x);

        theta = theta * (180 / Math.PI);
        varphi = varphi * (180 / Math.PI);

        console.log({ theta: theta, varphi: varphi });

        // This would be the implementation of the rotation degrees.
        subgroup.rotation.x = 0;
        subgroup.rotation.y = 0;
        subgroup.rotation.z = 0;

        group.add(subgroup);
    }
}

I am new to Three.js, so if there is a better way to do all of this please let me know. Here is my WIP. You will have to give it a second to place the particles correctly before rendering the geometry. I could speed this up but I like the animation : )
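One pitfall in the "guesswork" portion of the function above is Math.atan(y / x), which loses the quadrant of the azimuth and divides by zero when x is 0; a two-argument arctangent avoids both. Here is a hedged sketch of the polar/azimuth computation, written in Python rather than JavaScript (the function name is made up; Math.atan2 is the JavaScript equivalent).

```python
import math


def spherical_angles(x, y, z):
    """Polar angle theta (measured from +z) and azimuth phi, in degrees.
    atan2(y, x) keeps the correct quadrant, unlike atan(y / x)."""
    r = math.sqrt(x * x + y * y + z * z)
    theta = math.degrees(math.acos(z / r))
    phi = math.degrees(math.atan2(y, x))
    return theta, phi
```

For a point on the negative x axis, atan(y / x) would report an azimuth of 0 degrees, while atan2 correctly reports 180.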
7c5073358ee4bf4aa08623f62b23f82c770b12cd0f5d080cba46d3c172dc652f
['f03a4a942910486ab36dece180785a89']
A more reusable option:

function simulate_event(eventName, element) {
    // You could set this into the prototype as a method.
    var event;
    if (document.createEvent) {
        event = document.createEvent("HTMLEvents");
        event.initEvent(eventName, true, true);
    } else {
        event = document.createEventObject();
        event.eventType = eventName;
    }
    event.eventName = eventName;
    if (document.createEvent) {
        element.dispatchEvent(event);
    } else {
        element.fireEvent("on" + event.eventName, event);
    }
}
8e228f1b0e026097f022f2bfdc889f69501aa9589ba479fc67ec12ef2903a723
['f03a7d1001c943fb9bb7822904fae021']
Awesome! I can finally shift my logo. But I don't understand what the two `[0pt]` stand for. Is it hiding the depth and height of the image? What is that? Now I see that my logo is just a bit to the right; is there a way to correct that? I tried using the `[0pt]` but it didn't work.
76712bfe34139cf0863a8c1e700999ad825de61eaeb706df534ae434f09452bf
['f03a7d1001c943fb9bb7822904fae021']
<PERSON> Thank you for the suggestion. I've made several successful rewrites, and I know how they work. The main issue is, I think, that Magento does not pass the correct store view. I also tried:

location /ens/ {
    fastcgi_param MAGE_RUN_TYPE store;
    fastcgi_param MAGE_RUN_CODE en;
    rewrite /ens/(.*)?$ /en/$1 last;
    try_files $uri $uri/ index.php?$args;
}

But that also refers to the base store view and not the 'en' store view.
72012a21e780d67a21589f9b3c8c6d32431d278c08db67482f17977fdb746ead
['f03ba6e5467345cb9e667a530e29fd6e']
Many articles suggest that using public key authentication is more secure than using a password. References: http://www.linuxquestions.org/questions/linux-security-4/is-ssh-keys-authentication-more-secure-than-password-authentication-934866/ and http://www.spy-hill.net/~myers/help/PublicKey.html. I beg to differ. The problem with public key authentication is that it converts the authentication mode from "something you know" to "something you have in your home folder". Consider the attack propagation: if a single machine is compromised, you are better off containing the attack within that machine when using password-based authentication rather than a public key, with which every machine is compromised. My opinion is that if scripts that store the password are not used on the server, having password authentication and account lockout against brute force is much stronger than public key. When scripts are present:

1. Ensure that scripts run as the least privileged user possible, and use public key authentication for the scripts to access other machines.
2. If scripts require root privileges to run, still use the same unprivileged account and use a local executable with suid (accessible only to the script) to run the privileged command.

Any thoughts?
24a55d7aeeda1cb035510b64ad220facb279362917b37377023c1d984029139e
['f03ba6e5467345cb9e667a530e29fd6e']
Agree with <PERSON> and <PERSON>. However, using a passphrase doesn't help the scripts, since it would then need to be hard-coded in the scripts again. <PERSON> takes time to retrieve passwords; it is not like a compromise with public key authentication, which gives access to other machines almost immediately. Of course, each of these methods must have additional security components (e.g. system monitoring, key management software for public key).
4e53139352f542118c401a3486f732c029af1f8fa9c9757b95debac067d3d2d7
['f04814ef6e2c484bbe641fb121697eca']
I have a table in BigQuery with a size of 1 GB. I create a view from this table with partitioning on the created_at (timestamp) column. The view is useful for me, but I want to write a query using the created_at column. When I use this column, does the query run over the whole data of the view or only over the partitioned values? I want to limit usage of the table to something like 500 MB. Is that possible with views, by using the partitioning column in the where clause?
29aca0a60e9535c5cf6184714441a806550a602a4e5057103fe24d232822b611
['f04814ef6e2c484bbe641fb121697eca']
I want to show a page in different languages. I know how to use message properties for different languages, but this doesn't solve my problem, because my HTML page has a lot of dynamic variables and I want to include the page with a language option. How can I solve this problem easily in Thymeleaf?
3807954ae695d5c8c1ed441d4cc01c5f759ffc46eb497d0e8e80873545866811
['f052ad6e0be54df296b6de625ead2a05']
WooCommerce product reviews are not working on the single product page. I'm using the Flatsome theme. I already enabled "Enable product reviews" in the WooCommerce product settings, I already allowed comments in the WordPress discussion settings, and I already enabled reviews on the single product page and added some comments on some products, e.g.: http://edsfze.xyz/elexon/product/el-<PHONE_NUMBER>-multi-functional-rechargeable-lamp/ I also tried the WooCommerce advanced reviews plugin. But still no luck getting reviews or comments on products. Here is the link to the setup: http://edsfze.xyz/elexon/
85101a6f1bd45726ac55d6cc580075e031b98cadded3ce6971888d55f8a43f1c
['f052ad6e0be54df296b6de625ead2a05']
Error solved by HostGator, my hosting provider, when I escalated the issue: The browser 500 error that you were experiencing was due to many of the PHP modules not loading. When a suPHP directive is active in the .htaccess file of the document root or any of its parent folders, it will prevent the default .ini files for EasyApache4 versions of PHP from being loaded. I have corrected this issue by commenting out the suPHP directive in the home directory .htaccess.
fe9e640a917f56a12d949a7e12733bfd59b4cbf5472d93abb6742ce3479ece7d
['f064e6e829a64f1d9846b4accf67d118']
Old post, but I had to do it today. Highlight the bullet text by dragging from the first letter of the first word to the end of the last word only. If you go past the last letter you will see a half-empty space highlighted; drag the cursor back to the end of the last letter if you also highlight the half space. Copy the line (unfortunately, only one line at a time). That half-empty space (at the end of each bullet) is the hidden formatting of the bullet. If you copy it (by double or triple clicking, by default), you also copy the bullet or list number.
299eeec69583b2cc28b229275e40220c46c05a92f8da803ab5d5cb1f4d262ea6
['f064e6e829a64f1d9846b4accf67d118']
I am trying to learn integrating FB Connect with a website, and I am facing an issue, as follows. After submitting the Facebook information from http://www.mydomain.com/exp/login.php I call process_data.php in the redirect URI like this:

<fb:registration fields="[
    {'name':'name'},
    {'name':'email'},
    {'name':'who', 'description':'additional info', 'type':'text'}]"
    redirect-uri="http://www.mydomain.com/exp/process_data.php">
</fb:registration>

Then, in process_data.php, I do the following:

if ($_REQUEST) {
    $response = parse_signed_request($_REQUEST['signed_request'], $appSect);
    if ($response) {
        $_SESSION['facebook_data'] = $response;
        header('Location: ' . $redirect_url); // http://www.mydomain.com/exp/home.php
    } else {
        echo '$_REQUEST is empty';
    }
}

And finally, in home.php:

<?php
session_start();
if (isset($_SESSION['facebook_data'])) {
    // do something with data
}
?>

but it never hits this code in home.php. Does it have anything to do with the hosting, or am I doing something wrong? Thanks in advance, <PERSON>
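The parse_signed_request helper itself is not shown in the post. For reference, Facebook's signed_request is two base64url parts joined by a dot: an HMAC-SHA256 signature and a JSON payload. Below is a hedged Python sketch of that general format, not the poster's PHP code; the function and variable names are illustrative.

```python
import base64
import hashlib
import hmac
import json


def b64url_decode(s: str) -> bytes:
    # Re-add the '=' padding that base64url strings usually drop.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def parse_signed_request(signed_request: str, app_secret: str):
    """Verify the HMAC-SHA256 signature over the payload part and
    return the decoded JSON payload, or None when it does not match."""
    sig_b64, payload_b64 = signed_request.split(".", 1)
    expected = hmac.new(app_secret.encode(), payload_b64.encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(b64url_decode(sig_b64), expected):
        return None
    return json.loads(b64url_decode(payload_b64))


# Round-trip a sample request built with a made-up secret.
secret = "app-secret"
payload_b64 = base64.urlsafe_b64encode(
    json.dumps({"user_id": "1"}).encode()).decode().rstrip("=")
sig_b64 = base64.urlsafe_b64encode(
    hmac.new(secret.encode(), payload_b64.encode(),
             hashlib.sha256).digest()).decode().rstrip("=")
parsed = parse_signed_request(sig_b64 + "." + payload_b64, secret)
```

If the function returns None with a correct secret, the request was tampered with or the wrong app secret is configured.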
06385b3ce04902a909a8a2d911933dde72d3b420a55e01f3e2df78e30eb51b15
['f0932f5c099b423eb00b42982ec242aa']
If I recall correctly, prior to 2.8 I could click anywhere to change my position. Now I have to click the bar with timecodes at the top. Can I change my preferences to do that again? I am not sure about the proper terminology, so I made this image to explain myself better (feel free to edit my question with the proper terminology).
8e28e207e18012bed48caab340f532efa91eb79166b2cc9e1f88134f76ce0c38
['f0932f5c099b423eb00b42982ec242aa']
I see, it would be a kind of reversal of the default behaviour, and (renaming result -> block) a request for the blocking. I am not sure which of the solutions I like more (yours or Nizam's), but both read quite nicely. And I learned a couple of tricks. Thanks. (I'm sorry I cannot accept both.)
ce228f518710a66bfc660647013cf57a22023aa665187477244648e996eb7de7
['f093e48df50d486997b7d1ac06060ab4']
I am trying to modify an ARM template that I have which deploys some VMs and defines some autoscale rules (you can see the full template at https://gist.github.com/jinky32/d80e0ab2137236ff262484193f93c946; it is based on the template at https://github.com/gbowerman/azure-myriad/tree/master/vmss-ubuntu-scale). I am trying to add some load balancer rules so that traffic is spread across the new VMs as they are generated in response to the autoscale rules that are defined. When I run this template via the Azure CLI I get no errors in the terminal, but the deployment fails. Digging into the error events I see two:

statusCode: BadRequest
serviceRequestId: ef42ec66-600e-4fb9-b4e2-dc2c06dda79c
statusMessage: {"error":{"code":"InvalidRequestFormat","message":"Cannot parse the request.","details":[{"code":"InvalidJsonReferenceFormat","message":"Reference Id cc2bepool is not formatted correctly. The Id is expected to reference resources of type loadBalancers/backendAddressPools. Path properties.loadBalancingRules[0].properties.backendAddressPool."}]}}
responseBody: {"error":{"code":"InvalidRequestFormat","message":"Cannot parse the request.","details":[{"code":"InvalidJsonReferenceFormat","message":"Reference Id cc2bepool is not formatted correctly. The Id is expected to reference resources of type loadBalancers/backendAddressPools. Path properties.loadBalancingRules[0].properties.backendAddressPool."}]}}

and

statusCode: BadRequest
statusMessage: {"error":{"code":"InvalidRequestFormat","message":"Cannot parse the request.","details":[{"code":"InvalidJsonReferenceFormat","message":"Reference Id cc2bepool is not formatted correctly. The Id is expected to reference resources of type loadBalancers/backendAddressPools. Path properties.loadBalancingRules[0].properties.backendAddressPool."}]}}

I've put some of the relevant variables below and have also included my load balancer object, but I believe that the issue is related to how I am referencing backendAddressPool:

"loadBalancingRules": [
    {
        "name": "LBRule",
        "properties": {
            "frontendIPConfiguration": {
                "id": "[variables('frontEndIPConfigID')]"
            },
            "backendAddressPool": {
                "id": "[variables('bePoolName')]"
            },

but I'm confused because I refer to it the same way elsewhere. Any advice on how to do this correctly is much appreciated.

"variables": {
    ....
    "loadBalancerName": "[concat(parameters('vmssName'), 'lb')]",
    "lbProbeID": "[concat(variables('lbID'),'/probes/tcpProbe')]",
    "publicIPAddressID": "[resourceId('Microsoft.Network/publicIPAddresses',variables('publicIPAddressName'))]",
    "lbID": "[resourceId('Microsoft.Network/loadBalancers',variables('loadBalancerName'))]",
    "natPoolName": "[concat(parameters('vmssName'), 'natpool')]",
    "bePoolName": "[concat(parameters('vmssName'), 'bepool')]",
    ....
}
.....
{
    "type": "Microsoft.Network/loadBalancers",
    "name": "[variables('loadBalancerName')]",
    "location": "[variables('location')]",
    "apiVersion": "[variables('networkApiVersion')]",
    "dependsOn": [
        "[concat('Microsoft.Network/publicIPAddresses/', variables('publicIPAddressName'))]"
    ],
    "properties": {
        "frontendIPConfigurations": [
            ....
        ],
        "backendAddressPools": [
            {
                "name": "[variables('bePoolName')]"
            }
        ],
        "inboundNatPools": [
            {
                "name": "[variables('natPoolName')]",
                ...
            },
            {
                "name": "natpooltileserver",
                ....
            },
            {
                "name": "natpool2",
                ....
            }
        ],
        "loadBalancingRules": [
            {
                "name": "LBRule",
                "properties": {
                    "frontendIPConfiguration": {
                        "id": "[variables('frontEndIPConfigID')]"
                    },
                    "backendAddressPool": {
                        "id": "[variables('bePoolName')]"
                    },
                    "protocol": "tcp",
                    "frontendPort": 80,
                    "backendPort": 80,
                    "enableFloatingIP": false,
                    "idleTimeoutInMinutes": 5,
                    "probe": {
                        "id": "[variables('lbProbeID')]"
                    }
                }
            }
        ],
        "probes": [
            {
                "name": "tcpProbe",
                "properties": {
                    "protocol": "tcp",
                    "port": 80,
                    "intervalInSeconds": 5,
                    "numberOfProbes": 2
                }
            }
        ]
    }
},
eeb9790c151f4a439d16d7d446210c58e7bef0eb06587e737d9f93598c5906bd
['f093e48df50d486997b7d1ac06060ab4']
OK, so this was me being a newb really. The schema is now defined:

var statementSchema = new Schema({
    date: { type: Date, required: true },
    name: { type: String, required: true },
    amount: { type: Number, required: true }
});

and I insert using:

var statements = require('../data/convertcsv.json');
var Statement = require('../models/statement');
var mongoose = require('mongoose');
mongoose.connect('localhost:27017/statement');
var parseDate = require('../helpers/parseDate');

var done = 0;
for (var i = 0; i < statements.length; i++) {
    var newStatement = new Statement();
    // helper function breaks string apart, reassembles and returns a Date
    newStatement.date = parseDate.stringToDate(statements[i].FIELD1);
    newStatement.name = statements[i].FIELD2;
    newStatement.amount = Number(statements[i].FIELD3);
    newStatement.save(function(err, result) {
        done++;
        if (done === statements.length) {
            exit();
        }
    });
}

function exit() {
    mongoose.disconnect();
}
4381a930779467fb21b2993d032bf4ebf5621eed3516fd2456a0aeaab0cba3eb
['f0d123ab4bfe41ec82255a168f86ed8b']
I am trying to obtain blocks of text (250 characters either side) of each occurrence of a word within a dataset. When I call the same code logic on a toy example:

import re

list_one = ['as', 'the', 'word']
text = 'This is sample text to test if this pythonic '\
       'program can serve as an indexing platform for '\
       'finding words in a paragraph. It can give '\
       'values as to where the word is located with the '\
       'different examples as stated'

# find all occurrences of each word in the above text
for i in list_one:
    find_the_word = re.finditer(i, text)
    for match in find_the_word:
        print('start {}, end {}, search string \'{}\''.
              format(match.start(), match.end(), match.group()))

the code is able to detect the position of each occurrence of every item of the list with no issues. However, when I try to apply the same logic to a DataFrame using the 'apply' method, it returns the error TypeError: unhashable type: 'list'. Code:

import re
import pandas as pd

def find_text_blocks(text, unique_items):
    '''
    This function doesn't work as intended.
    '''
    empty_list = []
    for i in unique_items:
        find_the_word = re.finditer(i, text)
        for match in find_the_word:
            pos_all = match.start()
            x = slice(pos_all - 350, pos_all + 350)
            text_slice = text[x]
            empty_list.append(text_slice)
    return empty_list

dataset['text_blocks'] = dataset['text'].apply(find_text_blocks, unique_items = dataset['unique_terms'])

Each row of the dataset['unique_terms'] column contains a list, whilst each row of the dataset['text'] column contains a string. Any guidance on how to return a list of strings within each row of dataset['text_blocks'] is appreciated. Thanks in advance :)
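One likely cause of the TypeError in the question above: apply(..., unique_items=dataset['unique_terms']) forwards the whole column as one keyword argument instead of pairing each row's term list with its own text, and re.finditer raises "unhashable type: 'list'" when handed a list as its pattern. A hedged sketch of a row-wise apply that pairs the columns correctly (column names follow the post; the toy frame and re.escape are my additions):

```python
import re
import pandas as pd


def find_text_blocks(text, unique_items):
    """Collect a window of up to 250 characters either side of each
    occurrence of each term."""
    blocks = []
    for term in unique_items:
        for match in re.finditer(re.escape(term), text):
            start = max(match.start() - 250, 0)
            blocks.append(text[start:match.end() + 250])
    return blocks


# Toy frame shaped like the post's dataset: one string per 'text' row,
# one list of terms per 'unique_terms' row.
df = pd.DataFrame({
    "text": ["This is sample text to test"],
    "unique_terms": [["sample", "test"]],
})

# axis=1 hands each row to the lambda, so every text is paired with its
# own term list instead of the whole 'unique_terms' column.
df["text_blocks"] = df.apply(
    lambda row: find_text_blocks(row["text"], row["unique_terms"]), axis=1
)
```

The same lambda-per-row shape works unchanged on the real dataset.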
ba48b34ba113c755d79655d9682753014224544ef1b84ede1762e6462c554b2b
['f0d123ab4bfe41ec82255a168f86ed8b']
I am working with weather data and am trying to calculate the number of daylight mins which correspond to the hourly observations within my timeseries. London = pd.read_csv(root_dir + 'London.csv', usecols=['date_time','London_sunrise','London_sunset'], parse_dates=['date_time']) London.set_index(London['date_time'], inplace =True) London['London_sunrise'] = pd.to_datetime(London['London_sunrise']).dt.strftime('%H:%M') London['London_sunset'] = pd.to_datetime(London['London_sunset']).dt.strftime('%H:%M') London['time'] = pd.to_datetime(London['date_time']).dt.strftime('%H:%M') London['London_sun_mins'] = np.where(London['time']>=London['London_sunrise'], '60', '0') London.head(6) Dataframe: date_time time London_sunrise London_sunset London_sun_mins 2019-05-21 00:00:00 00:00 05:01 20:54 0 2019-05-21 01:00:00 01:00 05:01 20:54 0 2019-05-21 02:00:00 02:00 05:01 20:54 0 2019-05-21 03:00:00 03:00 05:01 20:54 0 2019-05-21 04:00:00 04:00 05:01 20:54 0 2019-05-21 05:00:00 05:00 05:01 20:54 0 2019-05-21 06:00:00 06:00 05:01 20:54 60 I have tried conditional arguments to generate the number of sunlight mins per hour, i.e. 60 if a full sunlight hour, 0 if night. When I try to use a timedelta to generate the difference between sunrise and time, i.e. 05:00 and 05:01, the anticipated output (59) isn't returned.
A simple: London['London_sun_mins'] = np.where(London['time']>=London['London_sunrise'], '60', '0') Gets close to the required output, however, when I try to extend to: London['London_sun_mins'] = np.where(London['time']>=London['London_sunrise'], London['time'] - London['London_sunrise'], '0') The following error is returned: unsupported operand type(s) for -: 'str' and 'str' Also, when extending to encompass both sunrise and sunset: London['sunlightmins'] = London[(London['London_sunrise'] >= London['date_time'] & London['London_sunset'] <= London['date_time'])] London['London_sun_mins'] = np.where(np.logical_and(np.greater_equal(London['time'],London['London_sunrise']),np.less_equal(London['time'],London['London_sunset']))) The same error is returned. All help in reaching the anticipated output is appreciated!
793e4b1c8c62db4da459249b6c38588fb4110a7f92f1d2a44b0874e16279d38e
['f0e8f53dac9d44deb743645147ec8c15']
I want to use angular-ui's ui.select to populate a multiple dropdown field. I'd like to pass the selected objects to the ng-model and have it mapped to my options, containing objects of the same structure, but not from the same source: <div ng-repeat="training in trainings"> <form class="form-horizontal" role="form"> <ui-select multiple ng-model="training.skills" theme="select2" ng-disabled="disabled"> <ui-select-match placeholder="Wähle Skills...">{{$item.name}}</ui-select-match> <ui-select-choices group-by="skillTypeGrp" repeat="skill.id as skill in skills | filter: $select.search"> <span>{{skill.name}}</span> </ui-select-choices> </ui-select> </form> </div> The trainings list from ng-repeat with an example training looks like this: [{"description": "", "skills": [{"type": "tech", "name": "C", "id": 194}], "id": 1, "name": "Test"}] My skills list from ui-select-choices contains the same data structure as in training.skills: [{"type": "tech", "name": "C#", "id": 194}, {"type": "tech", "name": "Java", "id": 197}, ...] Unfortunately this doesn't work. The ui-select empties my training.skills, showing me a blank select field. I understand that angularjs is not able to map those objects if they do not originate from the same array, as stated in this blog post. It suggests using track by to tell ui.select which property to use to map the two lists of objects, but if I change the ui-select-choices line to: <ui-select-choices group-by="skillTypeGrp" repeat="skill.id as skill in skills track by skill.id | filter: $select.search"> nothing changes. Is there any way to get this working without replacing my objects with their ids, like the single property binding example suggests?
9083e27fd1bec4f4fb3a58e2294074f68d9b8ebc25f2016cf225da71598850e1
['f0e8f53dac9d44deb743645147ec8c15']
Tiled is referencing its tileset image source in the .tmx file. So the tileset grass-tiles-2-small.png you used is not embedded and has to be loaded, too. Fortunately this is done automatically by resolving the image dependencies for you. So additionally to map.tmx you have to make sure that all used image files are reachable by copying them to your assets folder. It has to look like this: assets/tileset 2/grass-tiles-2-small.png. Refresh your Eclipse project after doing so, just in case it didn't notice. If you already did that, another source of the problem could be the whitespace in your directory name. Try to rename tileset 2 to tileset_2 and create a new .tmx file.
aeaf9051bd3239c36c34d959056f875175a3b45fb226aad3f98561228e5e29b9
['f0e9dc42855b412cb1e2b87ed2f715fe']
Using the .NET HttpClient I log in to the JasperReports server: HttpResponseMessage loginResponse = loginClient.PostAsync("http://localhost:8080/jasperserver/rest/login", formContent).Result; IEnumerable<string> jaspsessid = loginResponse.Headers.GetValues("Set-Cookie"); I pass the session id above to the next request: HttpClient httpClient = new HttpClient(); httpClient.DefaultRequestHeaders.Add("Cookie", jaspsessid); httpClient.DefaultRequestHeaders.Accept.Add(new System.Net.Http.Headers.MediaTypeWithQualityHeaderValue("application/json")); StringContent requestContent = constructJasperRequestJson(reportParameters); HttpResponseMessage generateReportRequestResponse = new HttpResponseMessage(); generateReportRequestResponse = httpClient.PostAsync(AppConstant.JASPER_SERVER_BASE_URI + AppConstant.JASPER_SERVER_REPORT_EXECUTION_URI, requestContent).Result; In the second request I am getting 401 Unauthorized. If anyone knows the issue, please help me.
c0788193c539f03246290362c68b45b173d19f8b6d3d84708bc51f319d10eefe
['f0e9dc42855b412cb1e2b87ed2f715fe']
I use the code below to add options to a dropdown //HTML CODE select(name='folist', id='folist'). //JS Code for(var i=0;i<data.foNameArray.length;i++){ var combo = document.getElementById("folist"); option = document.createElement("option"); option.text = data.foNameArray[i]; option.value =data.foIdArray[i]; try { combo.add(option, null); //Standard }catch(error) { combo.add(option); // IE only } } It works perfectly. Now my question is how to mark an option as selected while adding it, like selected="selected".
dea9c5ddcf96148d6a2c1abca1252c5bc2c7af27a23db4284274b3ebc7a6ff61
['f0f3a1f8561a47fc8534267fe7756f66']
Yes, but now look back to your comment and to the listing (definitely not complete) of things they are still missing, and consider whether this does not fit the close vote with reason _Too broad_... I love answering and helping - in my free time and for free - but teaching somebody else how to fish is beyond what SO is meant for...
a10d578063b175ec4b8cc1ddac247016d7f7a7913ceb81b35abff32ebb919509
['f0f3a1f8561a47fc8534267fe7756f66']
When there is a newcomer to SO who is also new to the technology he is asking about, I always try to direct him to other sources where he can get some more general knowledge. E.g. in the case of OpenCart, which I mention above, I often direct them to an SO Q&A regarding "_How to become a Guru in OpenCart_" or to other tutorials found on the internet - but sadly this is not possible in every case. Honestly, I do not care whether the asker understands the circumstances of the down-vote or closing, or feels offended, since in a few cases they were even ignorant or rude or generally impolite.
ddd7c1dfe81b3e1f4b076d8e730da690cfd01705b6b635b49f9d7c444ac13cca
['f106de1f7b0149748c30731f1b60ba06']
To speed up the development process, I'm using a file zze.rb, which I edit in my IDE and reload in the pry console with the short command zze (zz - an uncommon beginning for variable/method names; e - execute). This way I can stop the execution process at any point of my application with binding.pry and can execute code many times, without waiting for a full Rails environment restart each time. I have this code in my .pryrc file: Pry.config.commands.command "zze", 'Execute all from .pry_exec/zze.rb' do Dir['.pry_exec/autoload/**/*.rb'] .delete_if {|file| File.basename(file) =~ /^_/} # ignore files starting with underscore '_' .each { |f| load(f) } file_name = File.absolute_path '.pry_exec/zze.rb' code = File.open(file_name, 'r') {|f| f.read} eval(code, @target, file_name) end and have the folder <myproject>/.pry_exec/: <myproject>/.pry_exec/ # ignored by VCS autoload/ # this folder loads automatically on each `zze` call in the console some_class1.rb some_code2.rb zze.rb # this file loads automatically on each `zze` call in the console. Sometimes there are cases when it would be good to use binding.pry inside <myproject>/zze.rb or <myproject>/autoload/some_class1.rb but it does not work. It is just ignored. I also tried to rewrite the zze code this way: # ~/.pryrc def zze(name = nil) Dir['.pry_exec/autoload/**/*.rb'] .delete_if {|file| File.basename(file) =~ /^_/} # ignore files starting with underscore '_' .each { |f| load(f) } load ".pry_exec/#{name || 'zze'}.rb" # instance_eval(File.read(".pry_exec/#{name || 'zze'}.rb")) end but it also ignores binding.pry inside <myproject>/zze.rb or <myproject>/autoload/some_class1.rb Also, when zze is a method rather than a pry command, the code from zze.rb executes in another scope, and I do not have access to local variables defined in zze.rb from my main pry context. I haven't found a short way to fix that yet.
66a3e79c778c27116b69cda9b4d6450d49ecb910f6a34ffe68340f93ae705829
['f106de1f7b0149748c30731f1b60ba06']
There is a task to make a GUI table that is built based on data from N joined tables in PostgreSQL. This GUI table implies sorting and filtering with full-text search capability. I want to use Elasticsearch for this purpose. I prepared this data structure for Elasticsearch: { did_user_read: true, view_info: { total: 1, users: [ { name: 'John Smith', read_at: '2020-02-04 11:00:01', is_current_user: false }, { name: 'Samuel Jackson', read_at: '2020-02-04 11:00:01', is_current_user: true } ] }, is_favorite: true, has_attachments: true, from: { short_name: 'You', full_name: 'Chuck Norris', email: 'ch.norris@example.com', is_current_user: true }, subject: 'The secret of the appearance of navel lints', received_at: '2020-02-04 11:00:01' } Please advise how to index this structure correctly so that you can filter and search by nested objects and by nested arrays of objects? For example, I want to get all the records with these criteria: is_favorite IS false AND FULL_TEXT_SEARCH("sam <PERSON>") BY FIELDS users.name, -- inside of array(!) from.full_name, from.short_name AND users.is_current_user IS NOT false AND ORDER BY received_at DESC
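For what it's worth, a hedged sketch of a mapping (field names follow the sample structure above; the exact types would need verifying for your data) that indexes the users array as the nested type, so a per-user condition on name and is_current_user matches within one array element rather than across elements:

```json
{
  "mappings": {
    "properties": {
      "is_favorite":     { "type": "boolean" },
      "has_attachments": { "type": "boolean" },
      "subject":         { "type": "text" },
      "received_at":     { "type": "date", "format": "yyyy-MM-dd HH:mm:ss" },
      "from": {
        "properties": {
          "short_name": { "type": "text" },
          "full_name":  { "type": "text" }
        }
      },
      "view_info": {
        "properties": {
          "users": {
            "type": "nested",
            "properties": {
              "name":            { "type": "text" },
              "read_at":         { "type": "date", "format": "yyyy-MM-dd HH:mm:ss" },
              "is_current_user": { "type": "boolean" }
            }
          }
        }
      }
    }
  }
}
```

A nested field is then queried with a nested query clause, and the full-text part of the criteria would be a multi_match over view_info.users.name (inside the nested clause) plus from.full_name and from.short_name.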
499debd5317f60a163cb445704cc219760209f23e59f043ec9680a677012f4c5
['f129523455514d1bb7f3b01a7d745874']
I can see many answers to similar questions, but I can't seem to get them to work for me. I have some xml files with some sibling element nodes having the same tag name. I want to merge these nodes using XSLT. Any help would be deeply appreciated. Input: <?xml version="1.0"?> <Screen> <Shapes> <Triangle id="tri1"> <color>red</color> <size>large</size> </Triangle> </Shapes> <Shapes> <Rectangle id="rec1"> <color>blue</color> <size>medium</size> </Rectangle> </Shapes> <Shapes> <Circle id="cir1"> <color>green</color> <size>small</size> </Circle> </Shapes> <Shapes> <Square id="sqr1"> <color>yellow</color> <size>large</size> </Square> </Shapes> <Device> <Name>peg</Name> <type>X11</type> </Device> <Utilities> <Software>QT</Software> <Platform>Linux</Platform> </Utilities> </Screen> I want to merge all "Shapes" nodes. Required Output <?xml version="1.0"?> <Screen> <Shapes> <Triangle id="tri1"> <color>red</color> <size>large</size> </Triangle> <Rectangle id="rec1"> <color>blue</color> <size>medium</size> </Rectangle> <Circle id="cir1"> <color>green</color> <size>small</size> </Circle> <Square id="sqr1"> <color>yellow</color> <size>large</size> </Square> </Shapes> <Device> <Name>peg</Name> <type>X11</type> </Device> <Utilities> <Software>QT</Software> <Platform>Linux</Platform> </Utilities> </Screen> The XSLT I tried was: <?xml version="1.0" encoding="utf-8"?> <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0"> <xsl:output indent="yes" /> <xsl:template match="Shapes"> <xsl:if test="not(preceding-sibling::*[local-name() = 'Shapes'])"> <Shapes> <xsl:apply-templates select="node() | @*" /> <xsl:apply-templates select="following-sibling::*[local-name() = 'Shapes']" /> </Shapes> </xsl:if> <xsl:if test="preceding-sibling::*[local-name() = 'Shapes']"> <xsl:apply-templates select="node() | @*" /> </xsl:if> </xsl:template> <xsl:template match="node() | @*"> <xsl:copy> <xsl:apply-templates select="node() | @*" /> </xsl:copy>
</xsl:template> </xsl:stylesheet> But the output I got was ( :( ) <Screen> <Shapes> <Triangle id="tri1"> <color>red</color> <size>large</size> </Triangle> <Rectangle id="rec1"> <color>blue</color> <size>medium</size> </Rectangle> <Circle id="cir1"> <color>green</color> <size>small</size> </Circle> <Square id="sqr1"> <color>yellow</color> <size>large</size> </Square> </Shapes> <Rectangle id="rec1"> <color>blue</color> <size>medium</size> </Rectangle> <Circle id="cir1"> <color>green</color> <size>small</size> </Circle> <Square id="sqr1"> <color>yellow</color> <size>large</size> </Square> <Device> <Name>peg</Name> <type>X11</type> </Device> <Utilities> <Software>QT</Software> <Platform>Linux</Platform> </Utilities> </Screen> Is there a simple XSLT code I can use, or is there any modification in my xslt I can apply to get the output?
1d358e9943afaec95c2d3028a8af1e3b8af1beb0248ef2b369e57a07822371cc
['f129523455514d1bb7f3b01a7d745874']
I have a few XML files, and some users have added extra spaces in the middle (like in an element tag or text node), and it is getting really hard to compare multiple versions of the files. Example (xml file): <?xml version="1.0"?> <catalog> <book id="bk101"> <author><PERSON>, Matthew</author > <title>XML Developer's Guide </title> <genre>Computer</genre> <price>44.95</price> <publish_date>2000-10-01</publish_date> <description>An in-depth look at creating applications with XML.</description> </book> <book id="bk102" > <author>Ralls, Kim</author> <title>Midnight Rain</title> <genre>Fantasy</genre> <price>5.95</price> <publish_date>2000-12-16</publish_date> <description>A former architect battles corporate zombies, an evil sorceress, and her own childhood to become queen of the world.</description> </book> </catalog> As you can see in the example code above, the element tag of author and the text node of title in the first book element have extra spaces. Similarly, the element tag of the second book element has extra spaces. I want a regular expression to search for these types of white spaces (more than 1 adjacent whitespace), but I don't want the leading white spaces. If I don't leave the leading whitespaces (at the start of the lines) and replace these with a single space, the indentation will be lost. There are some ways I can handle this (like first removing all double+ spaces and then doing an xmllint --format on the file), but it would be helpful if someone can give me a regexp for spaces in the middle of lines. I tried combinations of ^, \s and ^\s, but I cannot seem to get the solution. So if someone can suggest something, it would be really helpful. (The multiple spaces in text nodes are incorrect values as per our project's design, so removing them will not cause any adverse effect.)
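For what it's worth, one candidate pattern (an illustration on a made-up snippet, not necessarily the final answer) uses a lookbehind so that runs of two or more spaces/tabs only match when they follow a non-whitespace character, which leaves leading indentation alone:

```python
import re

# Made-up sample: extra spaces inside tags and text, plus real indentation.
xml = "<author>Niven,   Larry</author >\n    <title>Ringworld   </title>"

# (?<=\S) requires a non-space character right before the run, so runs at
# the start of a line (indentation, which follows a newline) never match.
cleaned = re.sub(r"(?<=\S)[ \t]{2,}", " ", xml)
print(cleaned)
```

Note it deliberately matches only spaces and tabs, not \s, so newlines are never swallowed; single stray spaces (like the one in </author >) are left for a later xmllint --format pass.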
4da8ecc714d254bd7f29126a4601d546215e45888355b44914211b1ac8f8c25f
['f12f415aa45b4a4abc99177f7e32a4c9']
I have a class named Party that includes a private variable named players which is of type vector<string>. class Party { vector <string> players; public: Party (string party_name, string boss) {}; ~Party() {}; vector<string> getNames() { return players; }; void setNames (const vector<string> &new_players) { players=new_players; } } I want to write a friend function which will show whether the variable P (also a class, with a private "Name" variable) is part of the Party. void part_of_party (Party &party, P name) { bool found=false; for (int i=0; ( found==false && i<party.name.size() ); ++i) { if ( (party.name[i]).compare(name.getName()) == 0) { found==true; }; } if (found==true) { /// } else { //// } } The compiler doesn't show any errors, but no message is printed on the screen (as it is supposed to be). Do you have any idea? Thank you.
a8fd3dd1729b142788c09262d1219081d1da7e23ea652e84b0833d677284d065
['f12f415aa45b4a4abc99177f7e32a4c9']
So, I have a member function (show_players_names and show_players_levels) of a class which prints on the screen the value of a private variable. Although the compiler doesn't show any error, I do not see the names and the levels on the screen. Do you have any idea? Here's the code: #include <iostream> #include <vector> #include <array> #include <string> using namespace std; class Party { private: string boss; vector<string> players; vector<int> players_level; public: string party_name; Party (string party_name, string boss) { cout << party_name << " " << boss << endl; }; ~Party() { cout << "Party delete" << endl; }; vector<string> getNames() { return players; }; vector<int> getLevels() { return players_level; }; vector<string> setNames (const vector<string> &new_players) { players=new_players; } vector<int> setLevels (const vector<int> &new_players_level) { players_level=new_players_level; }; void show_players_names() { for(size_t i=0; i<players.size(); i++ ) cout << players[i] << ' '; cout << endl; }; void show_players_levels() { for(size_t i=0; i<players_level.size(); ++i ) cout << players_level[i] <<' '; cout << endl; }; }; int main () { Party party1("Witchers","Vesemir"); party1.setNames({"Gerald","Eskel","Lambert"}); party1.setLevels( {50,45,49} ); party1.show_players_names(); party1.show_players_levels(); return 0; }
34e685e51bec0f5dac4b25be7fdfee4814c7ac687cddc40a8fd38fb217551f8d
['f13e16a2865e41139aa13446d990ee26']
On Ubuntu 19.10 I am using Docker (version 19.03.6). Laptop specs: Memory 15.5 GiB; Intel® Core™ i5-8300H CPU @ 2.30GHz × 8; GeForce GTX 1050/PCIe/SSE2. Docker runs very slowly. For a colleague on a Mac, restarting the project's containers works 3 times faster, although it should be the other way around. Running a migration that transfers values from one table to another for 21 thousand records took 4 hours. For my colleague it took 8 minutes. Please suggest which direction I should "dig" in?
157ed35072c8404d4dcffc6b929dd9f45fdb1c7ed5f09c18c733b0bd2c16ee29
['f13e16a2865e41139aa13446d990ee26']
Please tell me whether I am doing this correctly. The site has GTM and Google Analytics connected. In GTM I created a tag (Track type="Event", Category="MarketoSubmit18", Action="MarketoSubmit18") for tracking a Marketo form. Now, to track the conversion, do I also need to create a new goal in Google Analytics under Admin->Goals with type 'Event' and category 'MarketoSubmit18'? As I understand it, GTM is for managing tags. But how do I apply the created tag in Google Analytics and see the conversion? Thanks in advance.
b9b9bfd9462797a58a71cad5d1766b8ade7a629bdf8d2652425540eed7be1787
['f13f7cfee12a47cc881df25e2ff488e8']
I do something like this: >>>import datetime >>>datetime.datetime(2012,05,22,05,03,41) datetime.datetime(2012, 5, 22, 5, 3, 41) >>> datetime.datetime(2012,05,22,07,03,41) datetime.datetime(2012,05,22,07,03,41) >>> datetime.datetime(2012,05,22,9,03,41) datetime.datetime(2012, 5, 22, 9, 3, 41) >>> datetime.datetime(2012,05,22,09,03,41) SyntaxError: invalid token Why do I get a SyntaxError? How can I fix it?
75d62120a42bc55f4818fee7225429a18f7c22b4a080daed67b5eba711ab98b7
['f13f7cfee12a47cc881df25e2ff488e8']
I do: class Window(QtGui.QWidget): def __init__(self): QtGui.QWidget.__init__(self) self.modelTree = QtGui.QTreeView() self.model = QtGui.QStandardItemModel() self.addItems(self.model, data) self.modelTree.setModel(self.model) self.modelTree.connect(self.modelTree, QtCore.SIGNAL('clicked(QModelIndex)'), self.treefunction) def treefunction(self, index): print index.model().itemFromIndex(index).text() '''if item.checkState(column) == QtCore.Qt.Checked: print "checked", item.text(column) if item.checkState(column) == QtCore.Qt.Unchecked: print "NOT checked",item.text(column)''' def addItems(self, parent,elements): column = 0 clients_item = QtGui.QTreeWidgetItem(parent, ['Serwis']) clients_item.setData(column, QtCore.Qt.UserRole, 'serwis 111') clients_item.setExpanded(True) item_1 = QtGui.QTreeWidgetItem(clients_item, ['Wartswa1']) item_1.setData(column, QtCore.Qt.UserRole, 'Wasrtwas 1') item_1.setCheckState(column, QtCore.Qt.Unchecked) item_2 = QtGui.QTreeWidgetItem(clients_item, ['Wartswa2']) item_2.setData(column, QtCore.Qt.UserRole, 'Wasrtwas 2') item_2.setCheckState(column, QtCore.Qt.Unchecked) I want to get info on whether a row is checked or unchecked. I found an example of this, but for QTreeWidget. I'm looking for a way to do this with QTreeView. How can I rewrite treefunction?
1ff395cc87ee341d7fe363a860743a24b47fea0e30c0691bbfb92ce94ff0d6a9
['f144b03df1714485b677ba555471a79f']
You can skip the first N rows by passing the optional min_row argument. Note that this uses a 1-based index, so min_row=2 starts on the second row and min_row=5 skips the first four rows. You would be using something like this: for index, row in enumerate(ws.iter_rows(min_row=5)): Full iter_rows documentation.
4265df1cf6077961af181a7be768d19ad7a8b85b9a853076c747764cc079e8dd
['f144b03df1714485b677ba555471a79f']
Your intuition was correct, you need to combine the loops. The first loop goes through each row and saves the parID, Lline, and keep, overwriting the last value in each of those variables. After the loop, they only have the values from the last row because that was the only row to not have another one come along after it and overwrite the values. You can solve this by combining the actions into a single loop. maxRow = ws.max_row + 1 for row in range(2, maxRow): parID = ws['A' + str(row)].value Lline = ws['B' + str(row)].value Vect = ws['C' + str(row)].value print parID, Lline, Vect trash, keep = Vect.split("C") ws.cell(row=row, column=3).value = keep ws.cell(row=row, column=1).value = parID ws.cell(row=row, column=2).value = Lline
9843a62b8277526a925c610ae2a04cb3d1005d44882a378ff8e9c0714c62cace
['f149ba0f72174a2a87e1d55a79dd9144']
The goal is to generate events on every participating node when a state is changed that includes the business action that caused the change. In our case, Business Action maps to the Transaction command and provides the business intent or what the user is doing in business terms. So in our case, where we are modelling the lifecycle of a loan, an action might be to "Close" the loan. We model Event at a state level as follows: Each Event encapsulates a Transaction Command and is uniquely identified by a (TxnHash, OutputIndex) and a created/consumed status. We would prefer a polling mechanism to generate events on demand, but an async approach to generate events on ledger changes would be acceptable. Either way our challenge is in getting the Command from the Transaction. We considered querying the States using the Vault Query API vaultQueryBy() for the polling solution (or vaultTrackBy() for the async Observable Stream solution). We were able to create a flow that gets the txn for a state. This had to be done in a flow, as Corda deprecated the function that would have allowed us to do this in our Spring Boot client. In the client we use vaultQueryBy() to get a list of States. Then we call a flow that iterates over the states, gets txHash from each StateRef and then calls serviceHub.validatedTransactions.getTransaction(txHash) to get signedTransaction from which we can ultimately retrieve the Command. Is this the best or recommended approach? Alternatively, we have also thought of generating events of the Transaction by querying for transactions and then building the Event for each input and output state in the transaction. If we go this route, what's the best way to query transactions from the vault? Is there an Observable Stream-based option? 
I assume this mapping of states to command is a common requirement for observers of the ledger because it is standard to drive contract logic off the transaction command and quite natural to have the command map to the user intent. What is the best way to generate events that encapsulate the transaction command for each state created or consumed on the ledger?
832a8a109ce1ceb05ff15c01aa7476ba0dfc40c8e88141e0ed6e2292c6e62fff
['f149ba0f72174a2a87e1d55a79dd9144']
Just to finish up on my comment: While the approach met the requirements, the problem with this solution is that we have to add and maintain our own code across all relevant states to capture transaction-level information that is already tracked by the platform. I would think a better solution would be for the platform to provide consumers access to transaction-level information (selectively perhaps) just as it does for states. After all, the transaction is, in part, a business/functional construct that is meaningful at the client application level. For example, if I am "transferring" a loan, that may be a complex business transaction that involves many input and output states and may be an important construct/notion for the client application to manage.
b207ee5af1e7410f79262ac8cfdb1152d52eaaecd3b6b1707b26aad26b2010b9
['f1569deaa71b4fd0b0948aeeaf00778b']
I'm trying to retrieve data from the "extra" key: Bundle[ {from=1058706545539, extra={ "ty":"msg", "d":"sec":<PHONE_NUMBER>,"usec":763000}, "iL":"86777e87a574c3f068f6525e", "tU":"7e0a9dbbd1d6ee1795d64fdf", "iP":"4f26e5f78d042e2224688ed7", "iM":"dd83db95e764b103b4fec99e"}, message=Oi , android.support.content.wakelockid=1, collapse_key=do_not_collapse }] If it was JSON I'd use JSONObject, but I don't know how to retrieve the whole "extra" into a HashMap structure, so that I can use something like this: String ty = extra.getString("ty"); I receive this bundle from a push notification.
2e40dfa33fc28371805c39b1bb208ce41a64fc9a59cd511e4ae536ed6ce5f6c4
['f1569deaa71b4fd0b0948aeeaf00778b']
I tried to setChecked(true): RadioButton rbOk = new RadioButton(this); rbOk.setLayoutParams(ParamWCWC); rbOk.setText("OK"); if(situacao.equals("ok")){ rbOk.setChecked(true); }; It shows OK, but there are 3 RadioButtons and only one is checked; after this one is checked, I can do nothing to uncheck it, even if I check another in the same RadioGroup, and then the selection becomes duplicated.
389d6db0042b33b06879ccaf0b8236b4780745023d0c0919a3d841651d6964a8
['f1686c8d02724fe892bdb164119080c0']
I cannot login via the GUI with a new user account I created in Ubuntu <IP_ADDRESS> I've read a lot of other threads that are similar to this problem, but nothing I've tried has worked. The user has been added with a custom uid and gid - both 789. The new user doesn't have a .Xauthority file in its home dir. I attempted to remove the user and add it again with adduser but without luck. The original user created during installation of Ubuntu 16.04.5 is able to login to the GUI without any problems. I have tried uninstalling, reinstalling, and restarting the service lightdm without any luck. I should also add that the new user has these permissions on its home dir, same as the other functioning user. drwxr-xr-x. Also I forgot to mention that on the login screen, the new users name is not displayed at all as an option, only the original user during installation and "guest" are available.
7b93489ccca32fc2a52947de52e14a8260c3ae3a5fb729972c77b5a6e3a90d6c
['f1686c8d02724fe892bdb164119080c0']
I would like to ask a question: I am facing a problem with English text-to-speech (TTS). I used the System.Speech.Synthesis namespace from the .NET Framework for my English TTS in C#. At first I can convert text into a wav file without problems, but after saving to the wav file I can't make it speak anymore in that Windows Form; it can speak as long as I haven't saved to a wav file, but after saving a file it can't speak anymore. I wrote the following code for that program. To save text to a wave file: SaveFileDialog sfd = new SaveFileDialog(); sfd.Filter = "All files (*.*)|*.*|wav files (*.wav)|*.wav"; sfd.Title = "Save to a wave file"; sfd.FilterIndex = 2; sfd.RestoreDirectory = true; if (sfd.ShowDialog() == DialogResult.OK) { FileStream fs = new FileStream(sfd.FileName, FileMode.Create, FileAccess.ReadWrite); voiceMe.SetOutputToWaveStream(fs); voiceMe.Speak(txtSpeakText.Text); fs.Close(); } For text to speech: voiceMe.Volume = VolumeMe.Value; voiceMe.Rate = RateMe.Value; voiceMe.SpeakAsync(txtSpeakText.Text); That's it. If you don't understand my question, please tell me and I will rephrase it. If you can solve this problem, please tell me. Thank you for your time.
a2c78a97311b7689e1a8630bbd1f8a178dd8efb3f5b2ddf97ab3f411ed1d5164
['f17bfcdea9df45c1920fbecbb1ef3b1e']
I got a problem when I try to select a property that is a class. When I get all the properties (without $select), it works correctly. But if I use $select with the class property, there is no error but that property is not returned. GetByIds?$select=ItemId,Pick Pick is not returned, only ItemId. public class ItemNotification : Entity { public ItemNotificationSetting Pick { get; set; } public ItemNotificationSetting Receive { get; set; } public string ItemId { get; set; } } public class ItemNotificationSetting { public bool IsEmail { get; set; } public bool IsNotification { get; set; } } Below is my builder: modelBuilder.EntitySet<ItemNotification>("ItemNotifications"); Thanks all in advance.
0ae1c025a5de54a089b98626ec3bba9b2a545747ac103272c0bd55a1c301c6fb
['f17bfcdea9df45c1920fbecbb1ef3b1e']
I need a way to also rotate the plot image at the bottom of a 3D column chart so that it aligns with the rotation of the whole chart when I drag the chart. Right now the plot image stays static while the chart is being dragged. Here is the example: options3d: { enabled: true, alpha: 20, beta: 30, depth: 200, viewDistance: 5, frame: { bottom: { size: 1, color: 'url(#frameBg)' } } }, http://jsfiddle.net/akoLv4z9/ Thanks in advance.
2caf5f372d4009a1a70a466b6ef9cddb17651dc307294afaa970bce456d2907b
['f1916fb63d0f4b9988d4e7cc661f08d0']
I understand you are using two LBs in GKE: one using "externalTrafficPolicy: Local", which has your nginx as backend, and another one which receives the requests sent by the nginx and is set with "externalTrafficPolicy: Cluster". Am I right? If this is the case and both LBs are network load balancers, you should try setting both policies to "Local" to preserve the IP along the whole path (including the second LB).
bde80c7fa1c47c42f33fef02651507746e9e8f756b57ebb142a25f331910fb2e
['f1916fb63d0f4b9988d4e7cc661f08d0']
If you are using network load balancing (with target pools), the load balancer keeps the IP. What happens is that Kubernetes replaces the source IP with the cluster/node IPs. Kubernetes has a feature to preserve the client source IP. You can check the docs for how to preserve the client source IP in services with Type=LoadBalancer (network load balancing).
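As an illustration (names here are hypothetical and would need adapting), that feature boils down to setting the policy on the LoadBalancer Service itself:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nginx            # hypothetical name
spec:
  type: LoadBalancer
  # "Local" makes kube-proxy deliver traffic only to pods on the receiving
  # node without SNAT, so the pod sees the real client source IP.
  externalTrafficPolicy: Local
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

The trade-off noted in the docs is potentially uneven traffic spreading, since nodes without a matching pod drop the traffic.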
e041db6f27f2fd9404a0b42afa20a640ba28fca1b1ae941f0c823be7bc9e3b61
['f19ee58cee6a479a899b848cef239ef6']
I'm working on MS SQL Server 2016, Hibernate 5.3.7.Final and Spring Boot 2.2.0. For me, adding this line to the properties worked (without jtds!): spring.datasource.url=jdbc:sqlserver://servername;databasename=your_db_name;integratedSecurity=true You may also need these properties: spring.datasource.driverClassName=com.microsoft.sqlserver.jdbc.SQLServerDriver spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.SQLServer2012Dialect And if you encounter a problem with "no sqljdbc_auth in java.library.path", you may refer to this answer: no sqljdbc_auth.
f54aa3033273e9bb348e2cb527783a672953070878cc65bb6c50efb983edb830
['f19ee58cee6a479a899b848cef239ef6']
I am taking care of extracting a microservice out of a monolith written in Java, with the help of Spring Boot. We are planning to divide the whole monolith into a few smaller microservices. I have to enable communication between the monolith and the new microservice, as the monolith needs entities from the new microservice (it has its own database) to perform certain actions. I thought of exposing REST endpoints, but then I would have to duplicate entities. Is that acceptable? If so, should the REST controllers in the monolith which retrieve entities from the microservice be placed in the same layer as the repositories? This solution would increase coupling, which should be avoided; are there any other approaches? I'll be grateful for any responses, as well as articles which in your opinion may help here. Thank you in advance.
2007648cf26353f4a456a724ede1fc3c0fcad7817a58b49017954a3030ffa85f
['f1a63dc15e6e443ca7cf83ed24fab32a']
I am learning python and have come up with a way to calculate values row by row, but I am sure there is a more elegant (and quicker) solution. Here is simple example: df = pd.DataFrame(np.random.rand(10,3), columns=list('abc')) df.head() a b c 0 0.207455 <PHONE_NUMBER> <PHONE_NUMBER> 1 <PHONE_NUMBER> <PHONE_NUMBER> 2 <PHONE_NUMBER> <PHONE_NUMBER> <PHONE_NUMBER> 3 <PHONE_NUMBER> <PHONE_NUMBER> 4 <PHONE_NUMBER> <PHONE_NUMBER> df['d']='' df['e']='' for i in range(1,len(df)): df['d'][i]= sqrt((df['a'][i]-df['b'][i])**2+(df['a'][i-1]-df['b'][i-1])**2) df['e'][i]= (df['c'][i]-df['c'][i-1])*1609 df.head() a b c d e 0 0.207455 <PHONE_NUMBER> <PHONE_NUMBER> 1 <PHONE_NUMBER> <PHONE_NUMBER> 0.141986 <PHONE_NUMBER> -501.015 2 <PHONE_NUMBER> <PHONE_NUMBER> <PHONE_NUMBER> <PHONE_NUMBER> 3 <PHONE_NUMBER> <PHONE_NUMBER> <PHONE_NUMBER> -33.1396 4 <PHONE_NUMBER> <PHONE_NUMBER> <PHONE_NUMBER> <PHONE_NUMBER> Is there a better way to do this? I am working with some large datasets and it takes a while to run it this way.
46b75e562acb53985e82e27e5430d459a8224fc8ac48f063dec3f5f4ac86c156
['f1a63dc15e6e443ca7cf83ed24fab32a']
I am using geopy distance.distance function to calculate distance between each latitude and longitude points in a gpx file like this: lat lon alt time 0 44.565335 -123.312517 85.314 2020-09-07 14:00:01 1 44.565336 -123.312528 85.311 2020-09-07 14:00:02 2 44.565335 -123.312551 85.302 2020-09-07 14:00:03 3 44.565332 -123.312591 85.287 2020-09-07 14:00:04 4 44.565331 -123.312637 85.270 2020-09-07 14:00:05 I am using this code which creates new columns for lat and lon where the row is shifted down and then I can use apply to calculate the distance for each. This works, but I am wondering if there is a way to do it without creating additional columns for the shifted data. def calcDistance(row): return distance.distance((row.lat_shift,row.lon_shift),(row.lat,row.lon)).miles GPS_df['lat_shift']=GPS_df['lat'].shift() GPS_df['lon_shift']=GPS_df['lon'].shift() GPS_df['lat_shift'][0]=GPS_df['lat'][0] GPS_df['lon_shift'][0]=GPS_df['lon'][0] GPS_df['dist']= GPS_df.apply(calcDistance,axis=1)
258fd3c1ea8878779089e63f57ce6736993e175c342044ddb28ed911ceeb9dcd
['f1cc6df9143a43919d7bbd1a318ae3f8']
Currently using Xcode 6; while generating a map view in iOS 8.1 using MapKit.h and the CoreLocation framework, on clicking the button, instead of showing the latitude and longitude values in labels it's showing a crash error: unrecognized selector sent to instance 0x797a4e20 in GPSController.m - (IBAction)tap:(id)sender { NSLog(@"button clicked"); locationManager.delegate = self; locationManager.desiredAccuracy = kCLLocationAccuracyBest; [locationManager startUpdatingLocation]; [locationManager startMonitoringSignificantLocationChanges]; } which should fetch the value in this: - (void)locationManager:(CLLocationManager *)manager didUpdateLocations:(NSArray *)locations { NSLog(@"location info object=%@", [locations lastObject]); CLLocation* location = [locations lastObject]; NSLog(@"latitude %+.6f, longitude %+.6f\n", location.coordinate.latitude, location.coordinate.longitude); self.lat.text = [NSString stringWithFormat:@"%+.6f", location.coordinate.latitude]; self.longi.text = [NSString stringWithFormat:@"%+.6f", location.coordinate.longitude]; [self performSegueWithIdentifier:@"Map" sender:self]; }
b674a96b92c4841e97788945fc31b63722da7edf2b8e013964e47da3ba44ea5d
['f1cc6df9143a43919d7bbd1a318ae3f8']
function change() { var select = document.getElementById("slct"); var divv = document.getElementById("container"); var value = select.value; for (i = 0; i <value; i++) { toAppend += "<input type='textbox' >"; } divv.innerHTML=toAppend; return; } I have this code and I am calling it from a dropdown menu <select id="slct" onchange="change();"> <option value="0"> select value </option> <option value="1"> 1 </option> <option value="2"> 2 </option> <option value="3"> 3 </option> but it's not showing anything
10f3044ac381a6198b1a2aa28d89bfd1094625caedea4324d8ed01850ffa3502
['f1e798133a634701a7c49a21f2d33c2c']
I've been trying to get a file from my res folder within my project so that I can use it as a BufferedImage, but to no avail. Running what I've done above throws an IllegalArgumentException (ImageIO.read(Unknown Source)). This shouldn't happen unless getResourceAsStream is returning a null value, which means that I'm not getting the file from my res folder properly. So overall, I'm a little lost on how to tackle this issue. Thanks for your time.
6a34e36263f7199450d617ce269ec87faa0eee9d0b7396e428de1374d79edfed
['f1e798133a634701a7c49a21f2d33c2c']
I'm attempting to get the number of lines in all files in my directory. I've tried using some kind of variable I set to 0 and looping through all the files and getting their line number. > $i=0;ls | foreach{$i += [int](get-content $_ | measure-object -line)};$i However, every time I try adding it to the variable I set to 0, it shoots out an odd error: Cannot convert the "Microsoft.PowerShell.Commands.TextMeasureInfo" value of type "Microsoft.PowerShell.Commands.TextMeasureInfo" to type "System.Int32". At line:1 char:19 + $i=0;ls | foreach{$i += [int](get-content $_ | measure-object -line)};$i + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : InvalidArgument: (:) [], RuntimeException + FullyQualifiedErrorId : ConvertToFinalInvalidCastException Why does this operation not work and how could it be resolved? Is there a quicker way of obtaining the number of lines in all files in a directory?
8f4fce62afbf0ddde68e4d9fcbc211939ab88b8a67d03eaca9ab4dff287ef76f
['f1ef1e6ce1664e96ac09abfbff49a091']
Finally figured it out! I needed to use KSCrashReportFilterAppleFmt to convert the file to a .crash file, which I could then load into Xcode, and it happily symbolicated it for me. The code I used to convert is below, if anyone else finds this useful. NSString *srcFilePath = @"crash-report.json"; NSString *destFilePath = @"crash-report.crash"; NSData *myJSON = [NSData dataWithContentsOfFile:srcFilePath]; NSError* localError = nil; NSDictionary *parsedJSON = [NSJSONSerialization JSONObjectWithData:myJSON options:0 error:&localError]; if(localError != nil) { return ; } id filter = [KSCrashReportFilterAppleFmt filterWithReportStyle:KSAppleReportStyleSymbolicatedSideBySide]; NSArray *reports = @[parsedJSON]; [filter filterReports:reports onCompletion:^(NSArray *filteredReports, BOOL completed, NSError *error) { if(error != nil) { return; } if(completed) { NSString *contents = [filteredReports objectAtIndex:0]; [contents writeToFile:destFilePath atomically:YES encoding:NSStringEncodingConversionAllowLossy error:nil]; } }];
86c5909ccef3b04bf6def353761e9b35c1e6590afefd5017407b73b857708868
['f1ef1e6ce1664e96ac09abfbff49a091']
I'm posting to see if anyone has a solution or can provide some guidance on modelling some data in order to be used in azure search. The problem domain I am currently using DocumentDB to model some data which I would like to search. My document, which I shall call "Entity A" at the moment looks something like: { _id, //key - Guid name, //searchable - String description, //searchable - String tags: [ "T1", "T2", ...] //facet - Collection(String) locations: [ { coordinate, //filter - GeoLocation (lat & long) startDateTime, //filter - DateTimeOffset endDateTime //filter - DateTimeOffset }, ... ] ... }, ... Relationships: tags 0...n Entity A & locations 0...n Entity A Flattening Entity A and setting up a simple index and query for name, description and facet for tags is fine and working great. The problem lies in trying to add locations to index. Effectively what I want to search (in natural language) is: For a given term, find all the Entity As near a coordinate that overlap with x start date and y end date From what I can find online - flattening the locations will only work if they become strings. https://blogs.msdn.microsoft.com/kaevans/2015/03/09/indexing-documentdb-with-azure-seach/ https://learn.microsoft.com/en-us/azure/search/search-howto-index-json-blobs This seems to lose the power of being able to perform geodistance, and date range queries. Current Thoughts Split the Entity A document into two collections The new Entity A document: { _id, //key - Guid name, //searchable - String description, //searchable - String tags: [ "T1", "T2", ...] //facet - Collection(String) ... }, and multiple location entities { _id, documentId, //relates to Document._id coordinate, startDate, endDate } Questions: Is it better to have two indices - one for the new Entity A and one for the locations and then join the results? 
I think this is the Multitenant Search https://learn.microsoft.com/en-us/azure/search/search-modeling-multitenant-saas-applications Does anyone know of any examples that implement this? Pros Think it will work Cons Would require two search hits for each query and then merging the results (this may or may not be ideal). OR Is it better to fully "invert" the Entity A and location entities, i.e. something like { _id, documentDBId, //relates to Document._id coordinate, startDate, endDate, name, description, tags: [] ... } Pros Pretty flat already so should be easy to index and query One search hit and no merging Cons For name, description, tags, etc it would require multiple updates if these changed. Would get multiple results for the same "Entity A" if the date spanned multiple start and end dates OR Is there another option? Thanks, and I'm happy to clarify if needed
15f42ab7f970f0aa78ce9b07e9dba44f364e9f7cec3acb6d9292f6e2d9d521ed
['f1f3f8348cc64a18a5651098426ca072']
Not only is it haram to put photos without hijab, it is also haram or makrooh to make photos with hijab, due to the ruling on making imagery. Some scholars have ruled photographs as separate from the prohibition of making images, suggesting it is a capture of images instead. Regardless, even they have said it is better to not take photographs, so as to stay away from doubtful matters, as the Prophet said: عَنِ النُّعْمَانِ بْنِ بَشِيرٍ ـ رضى الله عنه ـ قَالَ قَالَ النَّبِيُّ صلى الله عليه وسلم ‏ "‏ الْحَلاَلُ بَيِّنٌ، وَالْحَرَامُ بَيِّنٌ وَبَيْنَهُمَا أُمُورٌ مُشْتَبِهَةٌ، فَمَنْ تَرَكَ مَا شُبِّهَ عَلَيْهِ مِنَ الإِثْمِ كَانَ لِمَا اسْتَبَانَ أَتْرَكَ، وَمَنِ اجْتَرَأَ عَلَى مَا يَشُكُّ فِيهِ مِنَ الإِثْمِ أَوْشَكَ أَنْ يُوَاقِعَ مَا اسْتَبَانَ، وَالْمَعَاصِي حِمَى اللَّهِ، مَنْ يَرْتَعْ حَوْلَ الْحِمَى يُوشِكْ أَنْ يُوَاقِعَهُ ‏ Narrated <PERSON>: The Prophet (ﷺ) said "Both legal and illegal things are obvious, and in between them are (suspicious) doubtful matters. So whoever forsakes those doubtful things lest he may commit a sin, will definitely avoid what is clearly illegal; and whoever indulges in these (suspicious) doubtful things bravely, is likely to commit what is clearly illegal. Sins are Allah's Hima (i.e. private pasture) and whoever pastures (his sheep) near it, is likely to get in it at any moment." (Bukhari) For extensive discussion on further proofs, see: https://islamqa.info/en/search?q=photography In addition, I will also say that many girls on facebook and other places are treating the hijab as a fashion statement, and are utilizing different hijab "fashions" in order to draw attention to themselves. This is known to anyone who has the faintest awareness of the different popular social media platforms. If you abide by the hadith above, this situation would not even arise as with or without hijab, all of it is at the very least makrooh (condemned/disliked).
4ef11bf380b373df50dc81263242aa15fdd425d8c5ad5b44ac7b6843206743d8
['f1f3f8348cc64a18a5651098426ca072']
Essentially, yes, it is possible. It amounts to a remote install of FC. Defrag Repartition to have a small partition in which you unpack the .iso Install grub (that's the tricky one; you'll have to use grub4dos. The PuppyLinux wiki has a good walkthrough, although they do it slightly differently than I suggest) Boot from the new partition Install Fedora over the XP partition Boot into Fedora and remove the partition in which you've unpacked the .iso Resize the Fedora partitions appropriately to recover the lost space. But it remains easier to grab your disk and get a friend to install it for you on a separate machine. For more info on remote, headless installs, have a look at the depenguinator or the Arch Linux wiki. You have the advantage of having access to the screen/kbd, and so don't need to set up ssh.
cb7918e372bd7d3b35035776a84c71a8c6c9c8e79ebefe241006fa459e549e5e
['f220f9ec23854efe94e4c0e80477c192']
Maybe the ClientEvents-OnLoad event of the controls on the form will help? It's not very beautiful, but it seems to work. First, make a script on the page <script type="text/javascript"> var textBoxFromFormClientObject; function onTextBoxLoad(sender) { textBoxFromFormClientObject = sender; } </script> Then in the aspx, for each control on the edit form that you need to get on the client, add an event like this <rad:RadTextBox ID="txSomeTextBoxe" runat="server" ClientEvents-OnLoad="onTextBoxLoad"/> So, when you press the edit or insert button on the grid and the form is shown, the global variable in the javascript will be set, and you can use it to manipulate the textbox. The only problem is that you need an event like this for each control on the form, if you want to manipulate all controls =(
b0b03e67a94e975edc97c51df359f0252fc5aca24607cb45b7eb1da91ae7054a
['f220f9ec23854efe94e4c0e80477c192']
Both - 1 and 2. And also one without params for full-screen =) Sometimes you have separate x, y, width, height; sometimes you already have a rectangle. Variant 3 is, IMHO, very rare =) But the user can always convert values to a rectangle, or break a rectangle apart into values. I think this is a philosophical question. So make them all, or pick a random one )))
90708e0a58015e16e1d07ed5212ed744fd269d92e4939d01c79d72d52301c46f
['f224f4ab5c094fd0bc7f0567a6e59cad']
How do I add a custom Done button to the iOS decimal pad that I am using for my app? I've tried using a screen touch to dismiss it, but found that inefficient. So if you can write the code in Swift it would mean the world to me, and thank you in advance. There is help for this problem, but all of it is in Objective-C, which I don't know.
83f37c9e225c79a99fd151f153033d5924186a6f991238fc41d80ddc37f4ec6c
['f224f4ab5c094fd0bc7f0567a6e59cad']
Can someone please help me with my problem? The problem is that I want the first initial value that I enter into the textfield to be only a number from 1 to 9. I am using the decimal pad, so I also don't want my initial value to be a decimal point, but after the first value (only a number from 1 to 9) users are allowed to enter only one decimal point. I have done the only-one-decimal thingy but cannot figure out how to let users not enter zero or a decimal point as their initial/starting value. Please help me, thank you in advance. I asked this before and many said it is available online, but I cannot find it. If you upload the code it would mean the world to me.
66516951bc89760d7f63fd229539fc5f485ac2369a1032b3418dc4982f8c8dac
['f2584bb0995a4a50a7698f0984bbf20f']
JavaScript has its own native map function (it didn't for a long time, thus the jQuery shim), and it's very similar to jQuery's. In fact, both Array.prototype.map() and Array.prototype.forEach() are very similar, with similar interfaces, just begin your invocation with the name of the array. So instead of jQuery $.map(data, notification => { return notification.template }), it's data.map(notification => notification.template) or similar. And the only difference between native map() and forEach() is that forEach() applies the function to each item in the array, while map() goes one step further and returns a new array of resulting values if invoked correctly. https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/map https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach
346c0448bc92bf1af1c7aa229c87a09e62411306b0216ce19264dc3de5aa7f3a
['f2584bb0995a4a50a7698f0984bbf20f']
const modal = document.querySelector('.modal') This line will always return the first occurrence of class 'modal', thus modal 1, assuming it comes first in the markup. If you want toggleModal() to operate on another div, you will have to pass the ID of the div as an argument to toggleModal(). https://developer.mozilla.org/en-US/docs/Web/API/Document/querySelector
891c33138aacef442fff9c30aa658583562ece371767b94dc1297dc99efc7d85
['f27113afb1a34988b4d84a1d9b2f112f']
I am an experienced developer but new to Raspberry Pis and python. I have connected a Pi to a motion sensor and picked up an infinite loop script that checks whether the Pi sees something "now" or not and returns 0 or 1. I modified the script to write that 0 or 1 to a MySQL record when it changes. Script below. This worked fine for a short time. Now something odd has happened. When it is run, it switches from 0 to 1 seemingly randomly, until I take out the writes to the MySQL DB. So if I remove the lines "dbcur.execute(sql)" and "dbconnection.commit()" from the code it works perfectly (mostly 0, and then 1 for the short time it sees something), but when I add them back in, it returns to randomly and quickly switching between 0 and 1 regardless of the sensor. This is driving me mad and I've tried reinstalling the whole OS on the Pi again and installing everything, but it still does it! Help! import mysql.connector import RPi.GPIO as GPIO import time import datetime from core import Core core=Core() GPIO.setmode(GPIO.BCM) PIR_PIN = core.gpio GPIO.setup(PIR_PIN, GPIO.IN) try: print "PIR Module Test (CTRL+C to exit)" time.sleep(2) print "Ready" lastval=999; cursensorlogid=-1 dbconnection = core.getDB() dbcur = dbconnection.cursor() while True: nowval=GPIO.input(PIR_PIN) if(nowval!=lastval): lastval=nowval logstart=datetime.datetime.now()-datetime.timedelta(seconds=core.logseconds) sql = "delete from sensorlog where `start`<'{}';" sql = sql.format(logstart.strftime('%Y-%m-%d %H:%M:%S')) sql = "insert into sensorlog (`val`,`start`,`end`,`init`) values ({},'{}','{}',{});" sql = sql.format(nowval,datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'),datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S'),1 if lastval==999 else 0) dbcur.execute(sql) dbconnection.commit() print "Changed to "+str(nowval)+" - "+time.strftime('%Y-%m-%d %H:%M:%S') time.sleep(0.2) except KeyboardInterrupt: print "Quit" GPIO.cleanup()
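One thing a careful reader may notice in the script above: the DELETE statement is built and then immediately overwritten by the INSERT before dbcur.execute(sql) runs, so old rows are never pruned. A minimal sketch of executing both statements, using the stdlib sqlite3 module in place of mysql.connector so it is self-contained, and parameter placeholders instead of string formatting (the table and column names mirror the question; the sqlite3 swap is my assumption for illustration only):

```python
import datetime
import sqlite3

conn = sqlite3.connect(':memory:')
cur = conn.cursor()
# "end" is quoted because it is a reserved word in SQL.
cur.execute('CREATE TABLE sensorlog (val INTEGER, start TEXT, "end" TEXT, init INTEGER)')

def log_change(val, init, log_window_seconds=60):
    now = datetime.datetime.now()
    cutoff = (now - datetime.timedelta(seconds=log_window_seconds)).strftime('%Y-%m-%d %H:%M:%S')
    stamp = now.strftime('%Y-%m-%d %H:%M:%S')
    # Run the prune AND the insert -- two separate execute() calls,
    # so neither statement clobbers the other.
    cur.execute('DELETE FROM sensorlog WHERE start < ?', (cutoff,))
    cur.execute('INSERT INTO sensorlog (val, start, "end", init) VALUES (?, ?, ?, ?)',
                (val, stamp, stamp, init))
    conn.commit()
```

Placeholders also avoid building SQL with format(), which is fragile and injection-prone; mysql.connector supports the same style with %s markers.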
bcb579c7a2fb36412caa463f7423a64a9f054670b317623a5d6666a8fe980a7a
['f27113afb1a34988b4d84a1d9b2f112f']
I'm looking to migrate a system that uses an ODBC driver to talk to MySQL and change it to talk to MongoDB. By using mongosqld, BI Connector and a DRDL schema I can access the data and my "select * from ..." queries all work fine. However, I keep getting an error when I try to UPDATE the data. Is this a limitation of the driver or am I doing something wrong? The error I get is: [MySQL][ODBC 8.0(w) Driver][mysqld-5.7.12 mongosqld v2.14.0]parse sql 'update `User` set name='Simon Sawyer' where `userid`='3ecdf4a51478644780782b20'' error: unexpected UPDATE at position 8 near update
c166ad5aedb8c6d08d8e8ac60aadf520f3cd32ea73d035aa5c11d85033bc4f04
['f273a3444be84f5c951074a8fc62a8b0']
I am building an app using Vue JS, and I'm implementing it with SSR. When I open the source code, Vue renders an attribute: <div id="page" data-server-rendered="true"> How do I remove the attribute data-server-rendered="true"? I want my final code to look like <div id="page"> Is there a way to remove the code? Thank you for the help
62c3e537fbdf1b79744a85352373514a901ced08d09c99ab5a6d4c6251619afd
['f273a3444be84f5c951074a8fc62a8b0']
I'm trying to solve a programming question, about a term called "FiPrima". The "FiPrima" number is the sum of the prime numbers up to and including the intended prime term. INPUT FORMAT The first line is an integer number n. It is then followed by an integer number x, n times. OUTPUT FORMAT Output n rows. Each row must contain the xth "FiPrima" number of each line. INPUT EXAMPLE 5 1 2 3 4 5 OUTPUT EXAMPLE 2 5 10 17 28 EXPLANATION The first 5 prime numbers in order are 2, 3, 5, 7 and 11. So: The 1st FiPrima number is 2 (2) The 2nd FiPrima number is 5 (2 + 3) The 3rd FiPrima number is 10 (2 + 3 + 5) The 4th FiPrima number is 17 (2 + 3 + 5 + 7) The 5th FiPrima number is 28 (2 + 3 + 5 + 7 + 11) CONSTRAINTS 1 ≤ n ≤ 100 1 ≤ x ≤ 100 Can anyone create the code?
98a2aca00290c49773a0e6c0fa991dd6fd28d1fb118f892eff3bad36ba56947f
['f2808bf0467f40f0bc2ae040d65f70ab']
Well, first consider just the space curve $\gamma$, whose image lies in the intersection of a plane and a sphere of radius $R$. We can write down $\gamma$ using coordinates on the plane and with arc-length parameter $s$ as: $x(s) = R\cos(s/R)$, $y(s) = R\sin(s/R)$. So $||\gamma''(s)||^2 = x''(s)^2 + y''(s)^2 = 1/R^2$. In other words, $\kappa(s) = 1/R$. The other two conditions I added were just to make $\gamma$ and $a$ tangent to each other at $s_{0}$.
bcbfaa80e89bb64cc675b09d1811ad16573fde380be0c1e81efab1321a61b9ed
['f2808bf0467f40f0bc2ae040d65f70ab']
Your proof should probably go along the following lines: First, write out what $f^r$ means: $\ f^r = \underbrace{f...f}_{r \text{ times}}$. So for example \begin{equation} f^{r}f^{s} = (\underbrace{f...f}_{r \text{ times}})(\underbrace{f...f}_{s \text{ times}}) = \underbrace{f...f}_{r+s \text{ times}} = f^{r+s}.\end{equation}
f6b8b667e867596e76af639ad770c248781c94d5a288a465f6cb4720fa41694b
['f28e316a4cdc4fed86709604b3c979f8']
I don't know what your format is there. It's definitely not JSON. Regular expression is probably your only shot: $block = ''; // The text you pasted goes here. foreach (['kFj394', 'J883Dd'] as $id) { $nameSearchPattern = '/"' . $id . '"[\n\t\s\w{}"]*?name"\s+"(.+)"/'; $groupSearchPattern = '/"' . $id . '"[\n\t\s\w{}"]*?group"\s+"(.+)"/'; preg_match($nameSearchPattern, $block, $nameMatches); preg_match($groupSearchPattern, $block, $groupMatches); $name = $nameMatches[1]; $group = $groupMatches[1]; echo "The name for $id is $name\n"; echo "The group for $id is $group\n"; }
df14e6de6dce1197edd1ad4d64214877d4ac4debcc5964528b132f9f99ec80e3
['f28e316a4cdc4fed86709604b3c979f8']
You have two problems: You are mixing PHP syntax with JavaScript syntax here: $.post("action.php?module=cart&act=del&id=$_GET[id]"); If you view the source of this page, you will find that the end of this line has not been parsed into anything useful. As other users have pointed out, you are sending this request via POST, not GET, but you are passing this parameter in a querystring so it should still be there.But, because of the problem above, you will find that $id does not contain the ID, but a string containing "$_GET[id]". However, since you don't appear to be POSTing any parameters, better to just send this as a GET in my opinion. I assume you want something like this: $.get("action.php?module=cart&act=del&id=<?php echo $_GET['id']; ?>"); That way, assuming that the page on which this is being run is being parsed as PHP and assuming that id is being set to something in the querystring, that value will be parsed and added into the JavaScript. Good luck!
5c0ebe072840cd5d6c62141c4fb0224da635b48ffbf5f93663f551614b54e281
['f2b59abff4f146ce9ac6eda28696d226']
We want to find the gradient of the policy "return" $V$ w.r.t. the parameters of the policy, $\theta$. Here the return $V$ can be written as "how good an action is, $Q$, $\times$ the probability of taking that action, $\pi$". Consider the policy gradient (by the product rule), $\nabla_\theta V = \sum_a \left( Q \nabla_\theta \pi + \pi \nabla_\theta Q \right)$ The first term tells us to adjust the action probability proportionally to how good it is. To me it reads "if an action yields good results, take it more often". That is, move the peak of $\pi$ to match the peak of $Q$. This is a reasonable thing to do. But of course, since $Q$ cannot directly guide us toward its peak, it is up to our $\pi$ to luckily stumble upon the high peak of $Q$. This emphasizes the importance of the exploratory nature of $\pi$. The second term is the reverse: move the peak of $Q$ to match the peak of $\pi$. This is a much harder task because $Q$ is a function of both the action and the policy, $Q_{\pi_\theta}(s, a)$. We clearly don't have this in a differentiable form, i.e. we don't have a universal $Q$ function over the space of all possible $\pi$. We now have a partial gradient from the first term, but we have yet to estimate the second term. It turns out the second term can be recursively written solely in the form of the first term, but with subsequent actions and states. $$ \nabla_\theta V_0 = \sum Q_0 \nabla_\theta \pi_0 + \sum Q_1 \nabla_\theta \pi_1 + \sum Q_2 \nabla_\theta \pi_2 + \dots $$ That is, to get a good policy, i.e. the policy gradient, we only need to move the peaks of $\pi$ to match the peaks of $Q$, not only for the first (state, action) but also for all subsequent (state, action)'s. This yields the same result as if we differentiated through $Q$.
e5d45975eb75c0fd42a8254cda62ab6012a7f180d48f605f6df3b1d64420ad4b
['f2b59abff4f146ce9ac6eda28696d226']
Yes, there are ways to 'exploit' buffer overflows. Sometimes the code may need to be executed via a separate script, and in theory you could assemble a virus from multiple images that contained code hidden within the picture using steganography, but there are easier ways. Basically, many computer systems expected images to comply with the exact specification for the type, and they failed to correctly range-check the formats/parameters being passed. By 'engineering' an image so that externally it looks like it complies but internally it does not, it was possible to trigger stack corruption/buffer overflows that would allow code hidden in an image to be executed under the authority of the user. But note that this does not ONLY apply to images; it can apply to ANY file. Take a look at the recent RTF exploit in MS Word.
3ff7d5fb8cf68c0f021ba03954bc7654030b30aceba90d97f3854a8bc3c3b0af
['f2c65bec473c4cb4a40e5804df1209dc']
Since your data is in raw format, you can look if the "function" field is automatically extracted by Splunk. If yes, you can simply search for index="index_1" function="delete" else, you can search for index="index_1" "function" "delete" as is, and Splunk will search for function and delete in your raw event.
19f5038485b12dd8da81b3b8ed3f01523f72435096ce7f2f0793238bb486f7d2
['f2c65bec473c4cb4a40e5804df1209dc']
If the total number of IP addresses is 600 and you wish to exclude 519, you can instead use index=indexer action=Null IP IN (<IP_ADDRESS> , <IP_ADDRESS> ... so on) Also, if you have specific ranges of IP addresses you don't want, you can exclude or include them based on a regex or wildcards. Although the above answer too is useful if you really have a huge list of IP addresses.
a9d21a86dd3015adf81c157fa77995eb31a27b8085187522bc319bb19d5d9acc
['f2ccb4330c60440885acccf9152f5ba1']
Just transcribe the links then. That is sufficient if you bother to click them and see. Then click the links in the ul list above. Several are exactly on-topic here. A few others show overt and petty rep-system abuse from some highly respected members here. This is a significant problem here, <PERSON>. Your comments are defensive.
566dc0080dbfad907e68439c5b54e7382794192cade5d321c3bc8a793d2177d4
['f2ccb4330c60440885acccf9152f5ba1']
This screenshot shows that IE Compatibility View can be overridden by a hosted frame. The technique the frame uses here is to (1) inject an X-UA-Compatible header into the Host head, (2) via document.write(), and then (3) reload the Host page. It has the effect of boosting all frames in the page to the chosen level. (online demo) Prior to putting googledrive into Compatibility View, you'll notice the demo menu has no effect. It is unable to override any mode IE9 or higher, as they are "locked". IE8, IE7, and IE5 modes (and thus CV) are "unlocked" and can be overridden. That's the basis of the trick. On the other hand, this following demo's Host page contains an X-UA-Compatible IE=5 (aka Quirks) header to start with. So the frame is able to override the Host mode even without putting googledrive into Compatibility View. (online demo (Host XUA=IE5)) The concept here is derived from this MS-Connect thread, which discusses IE modes in iframes.
a5d2f78def48c46b123705ca6417dad11e2e888f12063281f211507c320c0916
['f2d9f3ebea794a4a94a9a189265c1621']
I'm using python 2.7 and trying to read the entries of a CSV file. I made a separate version of the original CSV that only has the first 10 rows of data and with the following code, it works the way I would like it to, where I can just edit the indexing of Z in genfromtxt's "usecols" field to read a specific range of columns in my CSV. import numpy as np import array Z = array.array('i', (i for i in range(0, 40))) with open('data/training_edit.csv','r') as f: data = np.genfromtxt(f, dtype=float, delimiter=',', names=True, usecols=(Z[0:32])) print(data) But when I use this code with my original CSV (250,000 rows x 33 columns) I get this kind of output and I don't know why: Traceback (most recent call last): File "/home/user/PycharmProjects/H-B2/Read.py", line 74, in <module> data = np.genfromtxt(f, dtype=float, delimiter=',', names=True,usecols=(Z[0:32])) File "/usr/lib/python2.7/dist-packages/numpy/lib/npyio.py", line 1667, in genfromtxt raise ValueError(errmsg) ValueError: Some errors were detected ! . . . Line #249991 (got 1 columns instead of 32) Line #249992 (got 1 columns instead of 32) Line #249993 (got 1 columns instead of 32) Line #249994 (got 1 columns instead of 32) Line #249995 (got 1 columns instead of 32) Line #249996 (got 1 columns instead of 32) Line #249997 (got 1 columns instead of 32) Line #249998 (got 1 columns instead of 32) Line #249999 (got 1 columns instead of 32) Line #250000 (got 1 columns instead of 32) Process finished with exit code 1 (I added the dots just to shorten the real output but you hopefully get the point)
187cc769129c93433bb1af700d21e8914850bb74ab825f18616513b1d22c2900
['f2d9f3ebea794a4a94a9a189265c1621']
I have some fairly large .graphml files (~7GB) and I would like to run some algorithms on these files using NetworkX. Whenever I try to read these graphml files with: print "Reading in the Data...\n" G = nx.read_graphml('%s' % path_string) plt.title('%s Network' % name_string) nx.draw(G) plt.show() I get the following output: /usr/bin/python2.7 /home/user/PycharmProjects/G_Project/Graph.py Reading in the Data... Process finished with exit code 139 I'm assuming this happens because my computer runs out of memory when trying to open the file, but I was wondering, is there a way to work with large .graphml files and still use NetworkX? I've gotten pretty use to NetworkX and find it useful, so if there is some sort of workaround for large graphml files I'd appreciate it.
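One stdlib-only way to keep memory bounded (my suggestion, not from the question) is to stream the file with xml.etree.ElementTree.iterparse and clear elements as they complete; the resulting edge stream can then be fed into a graph incrementally instead of calling nx.read_graphml on the whole 7GB file:

```python
import xml.etree.ElementTree as ET

GRAPHML_NS = '{http://graphml.graphdrawing.org/xmlns}'

def iter_edges(path):
    # Yield (source, target) pairs one at a time; elem.clear() keeps the
    # in-memory tree from growing with the file.
    for _event, elem in ET.iterparse(path, events=('end',)):
        if elem.tag == GRAPHML_NS + 'edge':
            yield elem.get('source'), elem.get('target')
        elem.clear()
```

For example, G = nx.Graph() followed by G.add_edges_from(iter_edges(path)) builds the graph without the full XML tree in memory, though rendering a graph of that size with nx.draw is unlikely to produce anything readable in any case.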
700e892ce623562e85b7f77eb5203955fc95b48caf254e544533bf2de66c56bc
['f30ff506a778422fa1df398001abe7d1']
Here's an elementary solution: Consider expanding everything out to get a sum of elements of the form $\pm \frac1i \cdot \frac1j$. Fix $i$ and $j$ for $i\leq j$, and consider all occurrences of $\pm\frac1i \cdot \frac1j$. If $i<j$, then this occurs twice in each of the first $i$ original terms, each with sign $(-1)^{j-i}$, so they sum up to $(-1)^{j-i}\cdot\frac2j$. If $i=j$, then this occurs once in each of the first $j$ original terms, each with sign $+1$, so they sum up to $\frac1j$. Now let's fix $j$ and aggregate all $\pm\frac1i \cdot \frac1j$ with $i\leq j$. This value is $\displaystyle\sum\limits_{i=1}^{j-1} (-1)^{j-i}\cdot\frac2j + \frac1j$, which is easily seen to be just $(-1)^{j-1}\cdot\frac1j$. Finally, we sum over all $j$ to get $\displaystyle\sum\limits_{j=1}^\infty (-1)^{j-1} \cdot \frac 1j$, which is your $b_0$.
fbc4e57c94f2bc5d5f448f61cc92b0ac01f75be8e7b225cc952f9a77e99c2e1c
['f30ff506a778422fa1df398001abe7d1']
Did you replace $n$ with $\frac 1n$ and take the limit $n\to 0$? Working with $\infty$ can be tricky due to undefined values like $\infty-\infty$. If you play with $\frac1n$ then you get $\displaystyle\lim\limits_{n\to0} \frac{\sqrt{1+n}+\sqrt{1-n}-2}{n^{7/16}}$, which should become $0$ after 2 rounds of L'Hôpital's.
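A quick numeric check of the rewritten limit (not a proof, just a sanity check): the numerator behaves like $-n^2/4$ near $0$, which vanishes far faster than $n^{7/16}$, so the ratio should shrink as $n\to 0^+$:

```python
from math import sqrt

def f(n):
    # (sqrt(1+n) + sqrt(1-n) - 2) / n^(7/16), for n > 0
    return (sqrt(1 + n) + sqrt(1 - n) - 2) / n ** (7 / 16)

for n in (1e-2, 1e-4, 1e-6):
    print(n, f(n))  # the magnitudes shrink toward 0
```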
805fc2c610d2897c6d6842a4c7a62de7a25a7a3057a9920e2e245cfbb3b068ab
['f310e68bc8d343c287e788061e913aa9']
For an assignment I need to write a basic HANGMAN game. It all works except this part... The game is supposed to print an underscore ("_") for every letter in the mystery word; then, as the user guesses correct letters, they are filled in. E.g. assuming the word was "word" User guesses "W" W _ _ _ User guesses "D" W _ _ D However, in many cases some underscores will go missing once the user has made a few guesses so it will end up looking like: W _ D instead of: W _ _ D I can't work out which part of my code is making this happen. Any help would be appreciated! Cheers! Here is my code: import random choice = None list = ["HANGMAN", "ASSIGNEMENT", "PYTHON", "SCHOOL", "PROGRAMMING", "CODING", "CHALLENGE"] while choice != "0": print(''' ****************** Welcome to Hangman ****************** Please select a menu option: 0 - Exit 1 - Enter a new list of words 2 - Play Game ''') choice= input("Enter you choice: ") if choice == "0": print("Exiting the program...") elif choice =="1": list = [] x = 0 while x != 5: word = str(input("Enter a new word to put in the list: ")) list.append(word) word = word.upper() x += 1 elif choice == "2": word = random.choice(list) word = word.upper() hidden_word = " _ " * len(word) lives = 6 guessed = [] while lives != 0 and hidden_word != word: print("\n******************************") print("The word is") print(hidden_word) print("\nThere are", len(word), "letters in this word") print("So far the letters you have guessed are: ") print(' '.join(guessed)) print("\n You have", lives,"lives remaining") guess = input("\n Guess a letter: \n") guess = guess.upper() if len(guess) > 1: guess = input("\n You can only guess one letter at a time!\n Try again: ") guess = guess.upper() while guess in guessed: print("\n You have already guessed that letter!") guess = input("\n Please take another guess: ") guess = guess.upper() guessed.append(guess) if guess in word: print("*******************************") 
print("Well done!", guess.upper(),"is in the word") word_so_far = "" for i in range (len(word)): if guess == str(word[i]): word_so_far += guess else: word_so_far += hidden_word[i] hidden_word = word_so_far else: print("************************") print("Sorry, but", guess, "is not in the word") lives -= 1 if lives == 0: print("GAME OVER! You ahve no lives left") else: print("\n CONGRATULATIONS! You have guessed the word") print("The word was", word) print("\nThank you for playing Hangman") else: choice = input("\n That is not a valid option! Please try again!\n Choice: ")
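For what it's worth, the missing underscores most likely come from mixing two widths: `hidden_word = " _ " * len(word)` stores three characters per letter, but the rebuild loop reads `hidden_word[i]` one character per letter. A hedged sketch of the fix (restructured helper functions, not the original assignment code) is to keep the guess state at one character per letter and add the spaces only when printing:

```python
def reveal(word, guessed):
    """One character per letter: the letter if guessed, else an underscore."""
    return "".join(c if c in guessed else "_" for c in word)

def display(hidden):
    """Insert the spacing only for display, never for state."""
    return " ".join(hidden)

state = reveal("WORD", {"W", "D"})
print(state)           # W__D
print(display(state))  # W _ _ D
```

With this split, the win test also becomes simply `state == word`.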
f8cb36efa2314b65dfd72fe2d75feeb46d8063286a6fd19b697cc2e971dbd75c
['f310e68bc8d343c287e788061e913aa9']
How can I get my python code to print (a), (b), (c) etc... before it prints out strings, each on a new line... effectively numbering them for x in range(10): print(LIST[x]) At the moment this just prints: LIST VALUE 1 LIST VALUE 2 LIST VALUE 3 and so on until 10... How can I get it to print: (a) LIST VALUE 1 (b) LIST VALUE 2 (c) LIST VALUE 3 and so on until 10... The reason I want this is because it is part of a larger project which is a multiple choice quiz so it would be like: "Whats 2 + 2?" choose (a), (b) or (c) Thanks!
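One way is to enumerate the choices and derive the letter from the index with chr()/ord() — index 0 maps to "a", 1 to "b", and so on (the list values here are placeholders):

```python
choices = ["LIST VALUE 1", "LIST VALUE 2", "LIST VALUE 3"]

for i, item in enumerate(choices):
    label = chr(ord("a") + i)  # 0 -> 'a', 1 -> 'b', ...
    print("({}) {}".format(label, item))
# (a) LIST VALUE 1
# (b) LIST VALUE 2
# (c) LIST VALUE 3
```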
99179f3ba000c40defd8b81bf1c7976683712dc69966daa02355d1fc8e612aaf
['f31d4314003c4a34ab71c63e16600132']
Thank you for all the help! After many attempts, I noticed the problem was on my server side. The two sensor readings were sent from one device ~10 ms apart, and the messages were sent in Indication mode, not Notification. I had not noticed it before, because I had only sent 1 sensor reading every 2 seconds, but the problem appeared when I tried to send 2 messages almost simultaneously. After switching to Notification, the server could send messages rapidly (no acknowledgement needed). I know this error was not caused by pydbus in the end, but by my own mistake. I hope that if someone runs into a similar problem, they check the sender (server) side as well.
968a0b113be8f4ebe3fbc06a38dba19717ab8e9edcded13bb22422be7208d5ec
['f31d4314003c4a34ab71c63e16600132']
I have a Raspberry Pi 4 with the latest BlueZ (5.54) stack. My goal is to make a Python script which collects different sensor data (Air Quality and Temperature) via Bluetooth Mesh. I have been googling many articles and forums but could not decide which one is the best solution. I tried to find a suitable Python library like PyBluez, but as far as I know it is not under active development (no mesh). Another solution is to use the DBus API. https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/test/test-mesh https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/doc/mesh-api.txt
0f79ed05b61a813c48164c201959e9af8a4776004fae8605b223d51338ceb990
['f32ba390e5bb4c75bf1b1c5fc4c93030']
I'm using antd to develop a form. But I couldn't get the value from my own defined select component. For other antd component like Input, there's no problem getting back the data. Code: https://codesandbox.io/s/form-hooks-select-coler In this form, the Input work properly, but the Select doesn't work. Could anyone help? Many thanks!
6f20ad6de7b92ec69bcb226a22f6c22152971cba5d255c133134758c9590dbc9
['f32ba390e5bb4c75bf1b1c5fc4c93030']
I'm setting up a Django-React application, with authentication through third party CAS. The process of CAS authentication looks like the following: The web application redirects the user's browser to CAS's login URL with a "service" parameter, for instance https://cas.com/login?service=http://myapp.com. Once the user has been authenticated by CAS, CAS redirects the authenticated user back to the application, and it will append a parameter named "ticket" to the redirected URL. Its ticket value is the one-time identification of a "service ticket". For instance, http://myapp.com/?ticket=abcdefg. The application can then connect to the CAS "serviceValidate" endpoint to validate the one-time service ticket. For instance, https://cas.com/serviceValidate?service=http://myapp.com&ticket=abcdefg. In response, CAS shall return an XML containing the authenticated user id, for instance, <cas:serviceResponse> <cas:authenticationSuccess> <cas:user>johnd</cas:user> </cas:authenticationSuccess> </cas:serviceResponse> I've done some research and found it could be implemented in mainly two ways: Serve react as part of Django's static content. Standalone react single page application (SPA) through JWT. I've tried the first approach and it works, but the problem is that every time I want to test the authentication with React, I need to build the static files first and put them in Django, which is kind of slow. So I would like to try the second approach. My question is: is there any best practice I could follow for the standalone approach? If I were to implement JWT, is it safe to store the access token in localStorage or a cookie? Many Thanks!
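A sketch of the ticket-validation step on the Django side: after CAS redirects back with ?ticket=..., the backend calls serviceValidate and pulls the user id out of the XML response. The XML below is the sample from the post, with the standard CAS namespace declaration added; in real code the string would come from an HTTP GET to the CAS server (omitted here, and the helper name is made up):

```python
import xml.etree.ElementTree as ET

CAS_NS = {"cas": "http://www.yale.edu/tp/cas"}

def extract_user(xml_text):
    """Return the authenticated user id, or None on authentication failure."""
    root = ET.fromstring(xml_text)
    user = root.find("cas:authenticationSuccess/cas:user", CAS_NS)
    return user.text if user is not None else None

sample = (
    '<cas:serviceResponse xmlns:cas="http://www.yale.edu/tp/cas">'
    "<cas:authenticationSuccess><cas:user>johnd</cas:user>"
    "</cas:authenticationSuccess></cas:serviceResponse>"
)
print(extract_user(sample))  # johnd
```

On success the backend would then mint its JWT for the SPA; on failure (CAS returns cas:authenticationFailure) the helper returns None.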
7c0cd34f46b7331cd134a4f9b73e99bbad381d54ce9b1bd627a9384eb00f387f
['f332cef3c01d47daa3ab48bee6ce0158']
In <PERSON>, I'm able to access the contents of a record instance like so: model Unnamed1 record Example parameter Real x = 5; end Example; Example ex; Real test; equation test = ex.x; end Unnamed1; However, I'd like to access the contents of the record without declaring an instance of the record, like so: model Unnamed1 record Example parameter Real x = 5; end Example; Real test; equation test = Example().x; end Unnamed1; ...but this doesn't work. Is there some way to achieve what I'm trying to do?
a2bdf8e21010a9dd17634d7063a3ba22489ca7e872107d484606eaf06e7e57fc
['f332cef3c01d47daa3ab48bee6ce0158']
I'm trying to get temperature differences quantities to report the correct result when displayed in non-absolute temperature scales. See the following example: model tempDiffTest Modelica.Blocks.Interfaces.RealOutput test1(quantity="ThermodynamicTemperature", unit="K") = 1 annotation(absoluteValue=false); Real test2(quantity="ThermodynamicTemperature", unit="K") = 2 annotation(absoluteValue=false); Modelica.SIunits.TemperatureDifference test3 = 3; end tempDiffTest; Note that type TemperatureDifference = Real ( final quantity="ThermodynamicTemperature", final unit="K") annotation(absoluteValue=false); which is what drove the modifications I made to the test1 and test2 variables. Now, the expectation is that when I display my results in degrees celsius they should be 1, 2, and 3 for test1, test2, and test3, respectively. The actual results are shown below from <PERSON>: Therefore, only test3 was apparently successful (note that none of the results were successful in OpenModelica). Now, my question is how do I achieve what I'm after for test1 and test2?
513fe767969ff87698791c35a15ddc516b2edb6f6bff44f556470a824163e5d2
['f333a035c83242ce8eb5bc8edc9a180a']
I have an Express TypeScript app that is the backend for my webapp. The general root structure of the app is - client - tsconfig.json - React code - server - application.ts - tsconfig.json I need to compile the server files so that I can run ts-node app.ts. The problem is running tsc compiles all of the files in the root directory and not just the server directory. I know I could use exclude in the tsconfig, but in app.ts I have type definitions that are imported from the front-end. Is there any way to only compile the files in the server directory while being able to keep the imports?
c9920240d4f2018d4c1a7907a9284b01ff71e25cae2cdb5adfbd430fede6e6c7
['f333a035c83242ce8eb5bc8edc9a180a']
I have a type type DiceNumber = 1 | 2 | 3 | 4 | 5 | 6 for part of a react state. I want to set the state to a random number between 1 and 6, so I try this const dice = Math.floor(Math.random() * 6) + 1; but I get Type 'number' is not assignable to type 'DiceNumber'. I understand why, but is there a way around this? Can I guarantee random numbers to fit the type?
dacb3f8a86996b9424c4d71101e7f85ec8c8eb5d6004b0829ea4e4c927348beb
['f3387f2a1d4545c1b79e991a3248f297']
You are putting the values for the same key in the map. So the initial value was getting overridden by the new value. Try the code below package com.operators; import java.util.ArrayList; import java.util.HashMap; import java.util.List; import java.util.Map; import java.util.Scanner; import java.util.function.BinaryOperator; public class TotalCost { public static void main(String[] args) { Scanner scan = new Scanner(System.in); double mealCost = scan.nextDouble(); // original meal price int tipPercent = scan.nextInt(); // tip percentage int taxPercent = scan.nextInt(); // tax percentage scan.close(); Map<Double,Double> map = new HashMap<>(); map.put(mealCost, (double)taxPercent + (double)tipPercent); BinaryOperator<Double> opPercent = (t1,t2) -> (t1*t2)/100; BinaryOperator<Double> opSum = (t1,t2) -> (t1+t2); calculation(opPercent,map); } public static void calculation(BinaryOperator<Double> opPercent , Map<Double,Double> map) { List<Double> biList = new ArrayList<>(); map.forEach((s1,s2)-> biList.add(opPercent.apply(s1, s2))); } }
a662d01c414a0655184b2a376dd093dad276901466784b796bf9a878a4638ad5
['f3387f2a1d4545c1b79e991a3248f297']
Use isolate scope within the directive. $scope.control.moveNeedle = function () { $scope.ScoreRotateNeedle(); } return { scope: {control: '='}, restrict: 'E', templateUrl: 'svgmeter.html', link: link, controller: 'delightMeterController' }; Add this code to your controller $scope.ctrl = {}; in HTML use <div ng-app="delightMeterApp" ng-controller="delightMeterController"> <delight-meter ng-model="delightScore" control = "ctrl"></delight-meter> <input id="Text1" type="text" ng-model="delightScore" ng-change="ctrl.rotateNeedle()" /> </div>
5945c20006e2391a66a7431c3152d3dc27395c457550992dc7af3b584aceb9aa
['f3448018da924661b909a3e0905bdbde']
I have a problem uploading a small application that searches the phone's contacts. I have uploaded other apps to Google Play without problems, but this particular one gives me this error, and I have no way of finding out what the error is. Is there any tool that tells you the exact reason it is not compatible? Google Play Console tells me supported devices: 15,204. The application is not paid. In the 2nd version I set: android:supportsRtl="false", but it didn't work. In the 3rd version I added the uses-feature to the manifest, but that didn't work either. I don't know what else to modify, but the Google console should have some tool to analyze these problems... ***** this is the manifest ************ <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.castalia.telefono"> <uses-permission android:name="android.permission.READ_CONTACTS" /> <uses-permission android:name="android.permission.WRITE_CONTACTS" /> <uses-permission android:name="android.permission.CALL_PHONE" /> <uses-feature android:name="android.hardware.telephony" android:required="false" /> <application android:allowBackup="true" android:icon="@drawable/icon_telefono" android:label="@string/app_name" android:roundIcon="@drawable/icon_telefono" android:supportsRtl="false" android:theme="@style/AppTheme"> <activity android:name=".MainActivity" android:label="@string/app_name" android:theme="@style/AppTheme.NoActionBar"> <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> <receiver android:name=".WidgetControl" android:label="Agenda Castalia"> <intent-filter> <action android:name="android.appwidget.action.APPWIDGET_UPDATE" /> </intent-filter> <intent-filter> <action android:name="com.castalia.telefono.ACTUALIZAR_WIDGET"/> <action android:name="com.castalia.telefono.CLICK_WIDGET"/> </intent-filter> <intent-filter> <action android:name="android.intent.action.USER_PRESENT"/> </intent-filter> <meta-data android:name="android.appwidget.provider" android:resource="@xml/rc_widget_wprovider" /> </receiver> </application> </manifest> ****** build.gradle (app) ***** apply plugin: 'com.android.application' android { compileSdkVersion 29 buildToolsVersion "29.0.3" defaultConfig { applicationId "com.castalia.telefono" minSdkVersion 14 targetSdkVersion 29 versionCode 3 versionName "3.20.03gp" testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner" } buildTypes { release { minifyEnabled false proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro' } } } Here is the direct link to the application on Google Play: https://play.google.com/store/apps/details?id=com.castalia.telefono
369650361f30bd4e23250ac2d64c01e0024fbdc4791b888ba75e3bfc3e70d153
['f3448018da924661b909a3e0905bdbde']
For some specific cases, yes, we've got Java XML libraries. Here it's at the system scope. As I said, if I can make it work with sed there is no need to use xmlstarlet or another tool. In my case, I only edit the content of a tag; I don't need to check inner or outer tags, so it's nothing 'difficult'. So I personally think that sed is enough. In conclusion, yes, an XML parser is best, but here sed might be OK. If our needs increase in complexity (XML or HTML complexity, I mean) I will do my best to get an XML parser installed on our servers.
add2df3c95d68834a2dada18337f35a53724d8014e7cb02e08177284f7677a83
['f359a12978cc4f419e843fa232ee7aa9']
I have a very large CSV file (~10mil rows) with 2 numeric columns representing ids. The requirement is: given the first id, return very fast the second id. I need to get the CSV to behave like a map structure and it has to be in memory. I couldn't find a way to expose awk variables back to the shell so I thought of using bash associative arrays. The problem is that loading the csv into an associative array gets very slow/stuck after ~8 mil rows. I've been trying to eliminate the causes of slowdown that I could think of: file reading/IO, associative array limitations. So, I have a couple of functions that read the file into an associative array, but all of them have the same slowness problem. Here is the test data loadSplittedFilesViaMultipleArrays -> assumes the original file was split into smaller files (1 mil rows) and uses a while read loop to build 4 associative arrays (max 3 mil records each) loadSingleFileViaReadarray -> uses readarray to read the original file into a temp array and then goes through that to build the associative array loadSingleFileViaWhileRead -> uses a while read loop to build the associative array But I can't seem to figure it out. Maybe this way of doing it is completely wrong... Can anyone pitch in with some suggestions?
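One hedged alternative, since bash associative arrays tend to degrade at millions of keys: load the two-column CSV into a Python dict, which handles ~10 million pairs comfortably in ordinary RAM. The csv text below is inline sample data; a real run would pass `open(path)` instead:

```python
# Alternative sketch: a Python dict as the id -> id map instead of a bash
# associative array. Lookups are O(1) and loading is a single pass.
import csv
import io

def load_id_map(lines):
    return {row[0]: row[1] for row in csv.reader(lines)}

sample = io.StringIO("1,100\n2,200\n3,300\n")
id_map = load_id_map(sample)
print(id_map["2"])  # 200
```

This trades the all-bash constraint for load speed; if staying in the shell is a hard requirement, it doesn't answer the question, but it may be worth measuring.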
10fb50327abadf9e3ac50e616b48753ff292e8e0e130bdac69b18519d3e10e2e
['f359a12978cc4f419e843fa232ee7aa9']
I've figured this one out myself. It turns out that the message "Malformed PAC logon info" is actually correct. The code failed when it was trying to get the "Resource groups data". Initially I thought that the PAC_LOGON_INFO structure has changed since the last jaaslounge implementation was written (somewhere in 2010). I thought that because the MS-PAC specification does not mention it at all. Actually, the problem is coming from a completely different place: the KDC. It's running on a Win Server 2012, a version in which Microsoft enabled resource SID compression by default. There you have it: if you turn off resource SID compression on the KDC, everything will start working (no need to touch anything else, i.e. the version of jaaslounge, or to patch Java with an unlimited JCE policy).
8f94dc607d8a5d6dc463843cf8df409da6d111f4c6b54ac11ecb0de7e7ab5abb
['f37237f8922e468c8538e8a023c24526']
I've solved the problem with this method: + (int)getSplitIndexWithString:(NSString *)str frame:(CGRect)frame andFont:(UIFont *)font { int length = 1; int lastSpace = 1; NSString *cutText = [str substringToIndex:length]; CGSize textSize = [cutText sizeWithFont:font constrainedToSize:CGSizeMake(frame.size.width, frame.size.height + 500)]; while (textSize.height <= frame.size.height) { NSRange range = NSMakeRange (length, 1); if ([[str substringWithRange:range] isEqualToString:@" "]) { lastSpace = length; } length++; cutText = [str substringToIndex:length]; textSize = [cutText sizeWithFont:font constrainedToSize:CGSizeMake(frame.size.width, frame.size.height + 500)]; } return lastSpace; }
20086add72597fc2e401a5f613d34ea67891b7dcab7c3792f57c5f7def66fa57
['f37237f8922e468c8538e8a023c24526']
I have two UILabels, one above the other. The top one has a fixed size (2 lines), and the bottom one can expand (0 lines). The text which I am using for the labels can be short and sometimes it can be very long. How can I calculate the first UILabel's max string length without cutting a word in the middle? //Here is the code which creates the 2 labels. titleView = [[UILabel alloc] initWithFrame:CGRectMake(70, 0, 200, 50)]; titleView.numberOfLines = 2; titleView2 = [[UILabel alloc] initWithFrame:CGRectMake(20, 50, 250, 100)]; titleView2.numberOfLines = 0;
3dfa41b146b1ac1593d172df78198bb6e2b6b97db84ad6e4883ce17d3d7ce97e
['f37344ee10fc4776b02a1f9d79c0a459']
I'm creating a room availability system with a calendar, where every day has its own price. Room Type | 01 | 02 | 03 | 04 | 05 | ... | 31 | ----------------------------------------------------------- Single | $100 | $100 | $200 | $200 | $200 | ... | $300 | ----------------------------------------------------------- Double | $150 | $150 | $200 | $250 | $350 | ... | $320 | The best thing that I came across is: RoomsAvailable -------------- # ID room_type from_date to_date price But although this table is flexible with periods, it cannot represent a different price for every single day. Thanks in advance.
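One way to reconcile the compact range table with per-day prices is to keep the (room_type, from_date, to_date, price) rows and expand them into a per-day lookup in application code: a day with its own price just becomes a one-day range. A sketch under that assumption (the rows below are invented, and overlapping ranges are assumed not to occur):

```python
# Expand (room_type, from_date, to_date, price) rows into per-day prices.
from datetime import date, timedelta

rows = [
    ("Single", date(2024, 1, 1), date(2024, 1, 2), 100),
    ("Single", date(2024, 1, 3), date(2024, 1, 5), 200),
]

def daily_prices(rows):
    prices = {}
    for room, start, end, price in rows:
        day = start
        while day <= end:          # inclusive range, one entry per day
            prices[(room, day)] = price
            day += timedelta(days=1)
    return prices

prices = daily_prices(rows)
print(prices[("Single", date(2024, 1, 4))])  # 200
```

The table stays small when prices are stable over stretches of days, yet still supports a distinct price for any individual day.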
46071815fbd0f42f38eded5b835a328049381ba6c69860f9222aeeaea92787fa
['f37344ee10fc4776b02a1f9d79c0a459']
It is a bash script that basically checks if the directory `/media/sharedfolder` exists, and if it doesn't, runs `mount -a`. I simply added it (with `mv`) to that folder (`/etc/network/if-up.d/`) and set its mode to "executable" for all users (with `chmod 755 script.sh`).
1de024c0557e2206e6a9a8256c4eaf7c3a4c54708a46bc680f99fba54a9118fb
['f37994cb742b43e6999e41f4910a51f1']
I'm using the table2excel.js plugin to download an HTML table to Excel, in JavaScript. When I download the table, I get the message: "Excel cannot open the file 'Test.xlsx' because the file format or file extension is not valid. Verify that the file has not been corrupted and that the file extension matches the format of the file." When I manually change the name of the downloaded file to 'Test.xls', I'm able to open the file fine (with a small warning that the file format and extension don't match). Here is the link to table2excel.js: https://github.com/rainabba/jquery-table2excel/blob/377b933ae6b04f4c1826acc24a2bb0a049933f8b/dist/jquery.table2excel.js Some of the things I've tried: 1. I changed e.uri from "e.uri = data:application/vnd.ms-excel;base64," to "e.uri = "data:application/msexcel;base64," 2. I changed . . . 'return ( settings.filename ? settings.filename : "table2excel") + ".xlsx"'; to . . . 'return ( settings.filename ? settings.filename : "table2excel") + ".xls"'; (when making this change, the file was saved as Test.xls.xlsx and still had the same problem opening the .xlsx file). How can I get the file to save as .xls rather than .xlsx? Or is there a way to make this work while still saving the file as .xlsx (presumably by matching the format with the .xlsx extension)? Note: if it matters, the file Test.xlsx is "plain vanilla" with just text in a bunch of cells. There's no fancy formatting, characters, etc. The text is all alphanumeric, with just a few special characters such as ",.#'/@". Many thanks for any ideas!
11131dcddf9c79efd9e6846213f8711915bf06f47b068ded1be6c25d488ad2bd
['f37994cb742b43e6999e41f4910a51f1']
I see in previous posts that someone has suggested the following to find the selected tab: var numberOfSelectedTab = $("#tabs").tabs("option", "active") However, when I do this, I get an error in the console indicating that the tab is not initialized. How can I initialize the tab? (Note: I only call this after $(document).ready, so I don't know why it's not initialized.) Any ideas either on how to initialize it, or on another way to get the currently selected tab? (I also saw recommendations to use "ui.newTab.index()", but what if no tab has been selected yet by the user?)
451f20db22c6fdf2b2bb56d918d59c0eb1fa305e471211c975d305ad661ccbae
['f38a9e235edb4136b8bb1bd52c659d2e']
Developer A is working on a feature in his local repository. Developer A is unfinished with his work; however, he needs to transfer his work to Developer B so Developer B may continue the feature in his own local repository and later push to master. Working with Git in Microsoft Visual Studio, is there a workflow that would allow this?
a2b41b414cdacdc3270ddfe023234a538973f9b1b9be0ed9aa65f5fcba85ded6
['f38a9e235edb4136b8bb1bd52c659d2e']
Roughly, this is what my code looks like: template<typename K, typename V> class A{ private: size_t num_; public: A(initializer_list< something<K,V> > smthng); friend ostream& operator<<(ostream &out, const A &as){ size_t number = num_; }; }; template<typename K, typename V> A<K,V>::A(initializer_list< something<K,V> > smthng){ size_t sz = 5; num_ = sz; } For some reason my code will always give "error: invalid use of non-static data member" as an error when I attempt to compile. Obviously the code above isn't what I have exactly, but this is the only error I'm getting. I thought the benefit of using the friend function was that you can access private members, but I can't seem to do so.
aaeea87a8d0b952c5709df698ed4efce0e63c0c48b01edcf579e99edf669f247
['f39647060e714dea9bd44cd2246b192b']
Here is what I am trying to achieve. I want to use the READ operation from the CRUD I developed, which means that I have to invoke an HTTP request (using RequestJS for example), and then use the response coming from the READ to do some other things. READ operation router.get('/api/example', function(request, response, next) { //MongoDB Code to fetch a certain doc exampleModel.find(request.query.key) .then(function(doc) {response.status(200).json(doc)}) }); So as you see, I respond to the user with the JSON. What I want to do is to use this route and get the JSON response to use in another function. A middleware, in a sense. Other function function useRead(){ //Make an HTTP Request using localhost:3000/api/example?key=123 useJSON(doc) } I understand that I could use RequestJS to get the response. But the problem is, does it make sense to make a request to localhost:3000/api/example?key=123? And also, when I deploy the application on Heroku for example, that URL would not make sense and I think it would crash the application, as it should use www.myDomain.com instead of localhost:3000. So how do I solve this problem? I tried to be as concise as possible; sorry if some things are not clear.
fe0da8b96ff5158b644b28ca7b5275ef21107383e2ab9a0c6c5fd9403000d152
['f39647060e714dea9bd44cd2246b192b']
Lately, I have been playing around in Java's Networking and Threading capabilities, just as a background I am trying to develop a multi-threaded chatting application. The problem is that when a client sends a message that uses a wrong format ex : "UserID#Message" instead of "UserID*Message" an exception is thrown, the thread Halts completely, the client needs to close his session and re-open it again to re-establish a connection with the server instead of resuming after the error, and I can't resume it. Here is the Server Code: package AdvanceThreading.Server; import java.io.DataInputStream; import java.io.DataOutputStream; import java.io.IOException; import java.net.ServerSocket; import java.net.Socket; import java.util.ArrayList; public class Server { private ServerSocket serverSocket; private Socket socket; private DataInputStream messageFromClientToHandler; private DataOutputStream messageFromHandlerToRecipient; private int port; private static int userID; private ArrayList<ClientHandler> clientList; public Server(int port) { this.port = port; this.userID = 0; clientList = new ArrayList<>(); this.initialize(); } public void initialize() { try { serverSocket = new ServerSocket(port); } catch (IOException e) { e.printStackTrace(); } while (true) { try { socket = serverSocket.accept(); messageFromClientToHandler = new DataInputStream(socket.getInputStream()); messageFromHandlerToRecipient = new DataOutputStream(socket.getOutputStream()); String userName = messageFromClientToHandler.readUTF(); messageFromHandlerToRecipient.writeInt(userID); ClientHandler client = new ClientHandler(userID, userName, this, socket, messageFromClientToHandler, messageFromHandlerToRecipient); clientList.add(client); //Should set client.setUncaughtExceptionHandler(); client.start(); userID = userID + 1; } catch (IOException e) { e.printStackTrace(); } } } public ServerSocket getServerSocket() { return serverSocket; } public Socket getSocket() { return socket; } public DataInputStream 
getMessageFromClientToHandler() { return messageFromClientToHandler; } public DataOutputStream getMessageFromHandlerToRecipient() { return messageFromHandlerToRecipient; } public int getPort() { return port; } public static int getUserID() { return userID; } public ArrayList<ClientHandler> getClientList() { return clientList; } public static void main(String[] args) { Server server = new Server(5050); } } Client Handler Code: package AdvanceThreading.Server; import java.io.DataInputStream; import java.io.DataOutputStream; import java.io.IOException; import java.net.Socket; import java.util.StringTokenizer; public class ClientHandler extends Thread { private int clientID; private String clientName; private boolean loggedIn; private Socket socket; private Server server; private DataInputStream messageFromClientToHandler; private DataOutputStream messageFromHandlerToRecipient; public ClientHandler(int clientID, String clientName, Server server, Socket socket, DataInputStream messageFromClientToHandler, DataOutputStream messageFromHandlerToRecipient) { this.clientID = clientID; this.clientName = clientName; this.loggedIn = true; this.server = server; this.socket = socket; this.messageFromClientToHandler = messageFromClientToHandler; this.messageFromHandlerToRecipient = messageFromHandlerToRecipient; } @Override public void run() { String receivedFromClient; StringTokenizer tokenize = null; while (true) { try { receivedFromClient = messageFromClientToHandler.readUTF(); System.out.println(receivedFromClient); tokenize = new StringTokenizer(receivedFromClient, "#"); String recipient = tokenize.nextToken(); int recipientID = Integer.parseInt(tokenize.nextToken()); String message = tokenize.nextToken(); //Need to handle exception of client not found. 
for (ClientHandler client : server.getClientList()) { if (client.getClientID() == recipientID && client.isLoggedIn()) { client.getMessageFromHandlerToRecipient().writeUTF(this.clientName + ": " + message); break; } } } catch (IOException e) { while (tokenize != null && tokenize.hasMoreTokens()) tokenize.nextToken(); e.printStackTrace(); } } } public String getClientName() { return clientName; } public void setClientName(String clientName) { this.clientName = clientName; } public boolean isLoggedIn() { return loggedIn; } public void setLoggedIn(boolean loggedIn) { this.loggedIn = loggedIn; } public int getClientID() { return clientID; } public Socket getSocket() { return socket; } public Server getServer() { return server; } public DataInputStream getMessageFromClientToHandler() { return messageFromClientToHandler; } public DataOutputStream getMessageFromHandlerToRecipient() { return messageFromHandlerToRecipient; } } The Main thing i want to do that if a client using a terminal (Ex : CMD) does an error i want to catch the error handle it, and resume his session on the same terminal so that he doesn't have to reconnect to the server again.
34417de7a87860c990285845b1cc6f4db979513526d2bde64c596b32a4206866
['f39a5214ea694691b63111331d3c93b0']
//-Joint creation. auto fixedjointleft = PhysicsJointDistance::construct(upperbody, leftbody, upperbody->getPosition(),leftbody->getPosition() ); auto fixedjointright = PhysicsJointDistance::construct(upperbody, rightbody, upperbody->getPosition(), rightbody->getPosition()); auto jointgear = PhysicsJointGear::construct(leftball->getPhysicsBody(), rightball->getPhysicsBody(), 200.0,4.0); //--Adding into PhysicsWorld physics_world->addJoint(jointgear); physics_world->addJoint(fixedjointleft); physics_world->addJoint(fixedjointright); CCLOG(" distance = %f", fixedjointleft->getDistance());
cc8a4f7e6f67bf7623343ebe48a0d0609bed74aec4eeecbe740a74958f3f4878
['f39a5214ea694691b63111331d3c93b0']
VideoPlayer, EditBox and WebView components are appended to cc.game.container, so these components are added as the last child and will appear on top. By following these steps you can change the order of appearance. If you want to put something like a video player behind the canvas and other components above it, first you will have to make your canvas transparent. You can do this by setting the flag in the engine file "/engine/cocos2d/core/platform/CCMacro.js": change the flag ENABLE_TRANSPARENT_CANVAS to true. You need to change the z-order of the canvas and video component classes at the start of the script responsible for the video player. start: function () { cc.director.setClearColor(new cc.Color(0, 0, 0, 0)) let videoElement = document.getElementsByClassName('cocosVideo')[0]; videoElement.style.zIndex = 2; let gameCanvas = document.getElementsByClassName('gameCanvas')[0]; gameCanvas.style.position = 'relative'; gameCanvas.style.zIndex = 4; }, Finally, here is the complete script to play a video over the canvas. cc.Class({ extends: cc.Component, properties: { video:{ default: null, type: cc.VideoPlayer } }, // use this for initialization onLoad: function () { this.videoPlayer = this.node.getComponent(cc.VideoPlayer); this.videoPlayer.node.on('ready-to-play', this.callback, this); }, start: function () { cc.director.setClearColor(new cc.Color(0, 0, 0, 0)) let videoElement = document.getElementsByClassName('cocosVideo')[0]; videoElement.style.zIndex = 2; let gameCanvas = document.getElementsByClassName('gameCanvas')[0]; gameCanvas.style.position = 'relative'; gameCanvas.style.zIndex = 4; }, callback () { console.log("video ready to play") this.videoPlayer.play(); }, // called every frame update: function (dt) { }, });
2e014d55f731428815ff500b0558507b7e5301bb59eaa5ff70ad97ee92361de4
['f3a382ab9bfc4335acc1e622f63b6909']
Windows 8.1 is installed on the HDD; Ubuntu is on a USB drive. So now if I boot Windows, the USB becomes a simple USB drive for Windows (instead of a system drive). Also note that when I booted into Windows I did not copy any data onto the USB, so I think the data on the drive should not be corrupted.
4fec758ac95cdf7cf036f74606b9c71e7d5ddf93f7216b9d166969f31ae8ff49
['f3a382ab9bfc4335acc1e622f63b6909']
Personally, to get started with your first 2D game, don't start straight away with a game engine or 2D framework. Here's a game that I developed with just bitmap manipulation logic and a Java thread to run the game: Parachute Penguins https://play.google.com/store/apps/details?id=com.positivesthinking.parachutepenguinsfree Create a Java thread that serves as a game loop. Make use of SurfaceView, manipulate bitmaps, and with onClickListeners you can achieve a simple 2D game. Go for game engines and frameworks once you are comfortable with it.
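The game-loop idea described above can be sketched in plain Java (no Android classes; the class name, step size, and per-frame cost here are illustrative assumptions, not taken from Parachute Penguins). On a device, the render step would lock the SurfaceView's Canvas, draw the bitmaps, and unlock, while update advances the game state in fixed steps:

```java
// Minimal fixed-timestep game loop sketch (plain Java, no Android classes).
public class GameLoop {
    static final double STEP_MS = 16.0; // ~60 updates per second
    int updates = 0;
    int frames = 0;

    // advance game state by one fixed step
    void update() { updates++; }

    // draw current state (on Android: lock the SurfaceView canvas, draw bitmaps, unlock)
    void render() { frames++; }

    // run the loop for a given amount of simulated time
    void run(double totalMs) {
        double accumulator = 0.0;
        double elapsed = 0.0;
        while (elapsed < totalMs) {
            double frameMs = 4.0; // pretend each frame of work takes 4 ms
            elapsed += frameMs;
            accumulator += frameMs;
            while (accumulator >= STEP_MS) { // catch up in fixed steps
                update();
                accumulator -= STEP_MS;
            }
            render();
        }
    }
}
```

The accumulator decouples drawing speed from game speed: however many frames get rendered, 160 ms of simulated time always produces 160/16 = 10 updates, which keeps the game running at the same pace on fast and slow devices.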
acf112447a42d837586710cd65238d3f854c706c2d54e9022113d8f0f751c722
['f3aca19b888d4ed5b5439f5fb7d45b41']
It seems that your problem is in your activity_settings.xml: it says you are (supposedly) creating a view with the id android.R.id.list, and it is not a ListView. Check your activity_settings.xml and you will probably find the problem; otherwise post your XML so we can see if there is something wrong.
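For reference, a minimal sketch of what that view could look like (assuming activity_settings.xml is the layout of a ListActivity; @android:id/list is the XML form of android.R.id.list):

```xml
<!-- activity_settings.xml: the view with id android.R.id.list must be a ListView -->
<ListView xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@android:id/list"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```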
a6ceb88da18078c5729eebce5fc40f8a847634b678536921a84d65eacbefbe73
['f3aca19b888d4ed5b5439f5fb7d45b41']
That error almost drove me to insanity when I struggled with it. I (think I) solved the problem by making minor changes in my code; hope it helps you too. If your HttpURLConnection request is a POST request, remove this line: conn.setDoOutput(true); Before getting the outputStream, add a call to conn.connect(). Finally, I removed the http.keepAlive setting altogether. No more random EOFs so far (since I changed it last week).
e731b97067cf987d3868499413e3af1c6d79a982fd0e8333c0be865bb9b16b73
['f3aeaaaae6774c668e6f083945293dfb']
Here are my models:

var LocationSchema = new Schema({
    events: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Event' }]
})

var EventSchema = new Schema({
    title: String,
    location: { type: mongoose.Schema.Types.ObjectId, ref: 'Location' }
})

I would like to query, from the Location model, a field inside the Event model. The following doesn't work:

findOne({events: {$elemMatch: {title: 'test'}}})

I'm not even sure it's possible to do it ...
8dca34c14925882fe49d980ba29dac77b6a1ec0f520d7d94f51ecf626267f452
['f3aeaaaae6774c668e6f083945293dfb']
I'm a beginner with React, and I would like to add an object into an array. Here is my code:

const initialState = {
    messages: [],
};

export default (state = initialState, action) => {
    switch (action.type) {
        case 'ADD_MESSAGE':
            return {
                messages: update(state.messages, {$push: [{text: action.text}]})
            };
        default:
            return state
    }
}

And in my component:

<ul>{this.props.chat.messages.map((message) => {
    return <li>{message.text}</li>
})
}

And I get the error: Encountered two children with the same key, [object Object]. Child keys must be unique; when two children share a key, only the first child will be used.

Thank you for your help.
74f9d544da2df4c7d7f64e4de2a8a89950e0d8aeb626bdebe6b04b823791e808
['f3b6dfe7373e48fea24adcb2e685311e']
For some objects from the catboost library (like the Python code export model - https://tech.yandex.com/catboost/doc/dg/concepts/python-reference_catboostclassifier_save_model-docpage/), predictions (https://tech.yandex.com/catboost/doc/dg/concepts/python-reference_apply_catboost_model-docpage/) will only give a so-called raw score per record (the parameter value is called "RawFormulaVal"). Other API functions also allow the result of a prediction to be a probability for the target class (https://tech.yandex.com/catboost/doc/dg/concepts/python-reference_catboostclassifier_predict-docpage/) - the parameter value is called "Probability". I would like to know how the raw score is related to probabilities (in the case of binary classification) and whether it can be transformed into one using the Python API (https://tech.yandex.com/catboost/doc/dg/concepts/python-quickstart-docpage/)?
c914a50a3fe98a4ebf5dc7b08121decf51859f25d96013270b19dadcff4bfd7b
['f3b6dfe7373e48fea24adcb2e685311e']
Please check that you have correctly set up the environment variables - but be aware: the snippets in answer 1 and from the Deep Water page (https://github.com/h2oai/deepwater#pre-release-downloads) contain a spelling error in the second line. Correctly it should read as:

export CUDA_PATH=/usr/local/cuda
export LD_LIBRARY_PATH=$CUDA_PATH/lib64:$LD_LIBRARY_PATH
16858fb99051851d4ab26aa2d0622721d9678273a091456a443aa38ba9d000e0
['f3c9d3b802044e1586bff0496a392ca7']
I solved this by closing the session and loading the neural network model again. My answers are:

(1) Exit the with ... block, or call sess.close().

(2) Load the neural network model (and trained weights) like:

gd = tf.GraphDef.FromString(open(checkpoint + '_frozen.pb', 'rb').read())
inp, predictions = tf.import_graph_def(gd, return_elements=['input:0', 'MobilenetV2/Predictions/Reshape_1:0'])

(3) When you print out the model you may see a TensorFlow object like <VSR.Backend.TF.Framework.Trainer.VSR object at 0x000001E5DA53C898>

(4) I heard about tf.reset_default_graph() and tf.keras.backend.clear_session() from here, but I never made that code work.
a59ea57c7048f45996697b2cfa9ab14de54a67b6ecd8dd843981f31fdbd74028
['f3c9d3b802044e1586bff0496a392ca7']
I had a similar case with a file close function. In my case, I solved it by embedding the close call in the other function's body instead of giving it its own function. I was also suspicious of (1) the name of the file being duplicated, and (2) Windows scheduling (file I/O wasn't completed before the next task/thread was started. Windows scheduling and multi-threading happen behind the curtain, so it is hard to verify, but I had a similar issue when I tried to save a lot of data in ASCII in a loop; saving in binary solved it in that case.) My environment - IDE: Visual Studio 2015, OS: Windows 7, language: C++.
63bf52ae0911d96f86b8c140e44d17d01873ff3a6807b8e893294cba78dcc33d
['f3caf12589d140939de9922ef28aa162']
I have to filter values (include and exclude) out of lists which contain strings, based on the values I am getting in the variable requestedData. In the variable requestedData we have data such as ['student', 'teacher'], with

include = ['student']
exclude = ['dentist+teacher']

Basically, + in exclude means AND; it means that the user is both a dentist and a teacher. The same data with + can appear in include. In the variable sections I am trying to get data which is present in both requestedData and include, and we remove it if it is part of exclude. But before checking the exclude values, we need to parse dentist+teacher to remove the + and check dentist and teacher individually. So in the above example, since we don't have both dentist and teacher in requestedData, only student will be returned to sections, as it is in both include and requestedData. The same thing needs to be done for include if it has data with +. In the solution below I am able to check data without +. I was wondering how I can parse + and check for the AND condition.

val sections = this.data
    .map { prof ->
        prof.variations.filter { career ->
            (career.exclude.isEmpty() || career.exclude.intersect(requestedData).isEmpty()) &&
            (career.include.isEmpty() || career.include.intersect(requestedData).isNotEmpty())
        }
    }
    .filter { it.isNotEmpty() }
    .map { Section(it) }
8d6428a0bf651bc21a900cf3bc78c77ae50d679326165ba378fdf47952b49046
['f3caf12589d140939de9922ef28aa162']
Trying to create a SparkConf using PySpark but getting an error.

Code:

from pyspark.python.pyspark.shell import spark
from pyspark import SparkConf, SparkContext
from pyspark.shell import sqlContext
from pyspark.sql import SparkSession

conf = SparkConf().setAppName("Test-1 ETL").setMaster("local[*]").set("spark.driver.host", "localhost").set("spark.sql.execution.arrow.pyspark.enabled", "true")
sc = SparkContext(conf=conf)

Error:

org.apache.spark.SparkException: Invalid Spark URL: spark://HeartbeatReceiver@xxxx_LPT-324:51380

I have also set SPARK_LOCAL_HOSTNAME=localhost. Can anyone please help me?
c3fb99de02394e8e6b871207fa8e8891022a5f2c7619886daf2bff3d927231f5
['f3d8b521f5fa4bf1a31cd7e6c65bd63c']
Greetings! I have a menu. One item of this menu has a nested list. How do I make it so that, when hovering over the elements of this nested list, the parent menu item stays active (that is, its color stays the same as when hovering over that item itself)? In other words, I want to achieve the same result as in the photo.
070c6c56436d591d39a3e51bbe5a8aa4954ce626e6c9fd5c7d768338e6fa9636
['f3d8b521f5fa4bf1a31cd7e6c65bd63c']
The distributive law solution l : MN -> NM is enough to guarantee monadicity of NM. To see this you need a unit and a mult; I'll focus on the mult (the unit is unit_N unit_M):

NMNM - l -> NNMM - mult_N mult_M -> NM

This does not guarantee that MN is a monad. The crucial observation, however, comes into play when you have distributive law solutions

l1 : ML -> LM
l2 : NL -> LN
l3 : NM -> MN

Thus LM, LN and MN are monads. The question arises as to whether LMN is a monad (either by (MN)L -> L(MN) or by N(LM) -> (LM)N). We have enough structure to make these maps. However, as <PERSON> observes, we need a hexagonal condition (that amounts to a presentation of the Yang-Baxter equation) to guarantee monadicity of either construction. In fact, with the hexagonal condition, the two different monads coincide.
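The hexagonal condition mentioned above can be written out explicitly; as a sketch (spelling out, from the maps l1 : ML -> LM, l2 : NL -> LN, l3 : NM -> MN, the shape of the Yang-Baxter compatibility), it asks that the two ways of rearranging NML into LMN, one adjacent swap at a time, agree:

```latex
(L\,l_3)\circ(l_2\,M)\circ(N\,l_1)
\;=\;
(l_1\,N)\circ(M\,l_2)\circ(l_3\,L)
\;:\; NML \longrightarrow LMN
```

The left side swaps ML first, then NL, then NM; the right side swaps NM first, then NL, then ML. Each side is a composite of the three pairwise laws applied under or over the remaining functor, which is what makes the condition a hexagon.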
24d7f5dee582310e04345735f9f6fedd3a3bbbc8faa6833418be0a6981cf3ed8
['f3e992aa4077467eb49d3e8cd3b0f96e']
Currently GDB prints only trivial arguments in a backtrace (only scalars); something like below:

(gdb) bt 1
(gdb) function1(this=this@entry=0xfff6c20, x1=-1, x2=3, x3=...

and so on. x3 here could be an array/STL vector, and by default GDB does not display it. I am using a lot of STL vectors and Blitz arrays in my code. I have routines in my .gdbinit file to display STL vectors, and subroutines in C++ where I can make use of the call functionality in GDB, which can display the array contents. To manually print the vector/array contents, I would use:

(gdb) printVector vector_name -> this is a routine in my .gdbinit
(gdb) call printBlitzArray(array_name) -> this is a routine inside my executable itself.

How can we make GDB display the non-trivial arguments of a function like below?

void myFunc(int x1, int x2, std::vector<int> x3, blitz::Array<bool, 1> x4)

I got to know that using set print frame-arguments all can display some of the non-trivial arguments. But how to really print arguments where GDB may not have native support for printing them? The intent is to automatically print all the arguments at the start of the function (at least whichever we can). I can write a GDB script and add prints individually for each vector/array, but doing this for every function would be very time consuming, since I have a large number of functions. This would help a lot to accelerate my debug. Any suggestion is highly appreciated. Thanks a lot in advance!
9f994caedb831db7ab53aa56b621aa8c1c2fc766f52ff1d05631dfa805958a99
['f3e992aa4077467eb49d3e8cd3b0f96e']
I have a template function which prints std::vector types to a file. Is it possible to detect the type of the vector in this function and change some prints? Say, for example, I would like to know if it is a "complex" type vector and print results in a different format - "x+iy".

Code snippet for reference:

template < typename myVec >
void VectorPrint2File(const std::vector<myVec>& v, const char* str, std::ofstream& fileptr)
{
    fileptr << str << std::endl;
    fileptr << "vector size: " << v.size() << std::endl;
    for (int i = 0; i < v.size(); ++i) {
        fileptr << v[i];
        if (i != v.size() - 1)
            fileptr << "\n";
    }
}

Thanks in advance!
7b05beb06b33b0cae425dfb70b2d5076e8058fa41edb3c4ec068f4f7ad9cdba6
['f3ebba0478a24204876f9d73bcf6bfde']
You could possibly recover the drive by using BOOTREC. You will need a bootable Windows install disk or flash drive.

Once booted into the Windows installation, click "OK" or whatever the button is to go to the next screen
Click "Repair computer" or similar. This option will be in the bottom left of the pop-up installation window.
Click "Advanced options"
Click "Command Prompt"

Now we should see our Command Prompt window come up, so we can attempt the recovery. Type all of these commands without the quotations; the quotations are just to delimit the command.

Type "Bootrec.exe /ScanOS", hit Enter
Type "Bootrec.exe /RebuildBcd", hit Enter
Type "Bootrec.exe /FixMBR", hit Enter
Type "Bootrec.exe /FixBoot", hit Enter

Hopefully this works for you! I've done the same in my early Windows OS days.
f4bbc8d67c2885840d9b0151d458ee2e080e3b2f8b9aeea01272df56188263e4
['f3ebba0478a24204876f9d73bcf6bfde']
There could be a simpler way to try:

Open Windows Settings
Click "Devices"
Make sure it's on "Bluetooth & other devices"
Click on the device you want to remove
Click "Remove device"

Now go back into your Device Manager and make sure all the drivers are uninstalled. If they remain, then uninstall them. Then restart the PC without installing any other drivers yet! If it restarts and the drivers are gone, then the issue is fixed! I'm suspecting that it is reinstalling the drivers because it still sees the device as being attached to the PC in the Settings menu, but doesn't have a driver for that device. Windows automatically downloads and installs drivers for devices that are attached to it. This causes the drivers to reappear and cause the issue you're dealing with. Hope this fixes your issue! ~TJ
53d1a75cc02bf3540b351541821ca1b24ef2557fb4a3bf60c45e13ef0c264fd2
['f3f45c0dffb04584ab5772f49e441a79']
Yes, you do have an index problem. The issue is that when you delete something from the list, say at position i=10, the element at position 11 moves to position 10 while i moves on to 11, so you will skip that element. The second issue is that after you remove from the second list at index k, you should break out of the inner loop: the element at i is gone, so continuing would compare against a stale index (after i-- it can even become -1 and throw IndexOutOfBoundsException). So here is my answer, which worked for me after trying your code:

for (int i = 0; i < fromTagList.size(); i++) {
    for (int k = 0; k < fromImageList.size(); k++) {
        if (fromTagList.get(i).getImageURL().equals(fromImageList.get(k).getImageURL())) {
            fromTagList.remove(i);
            fromImageList.remove(k);
            i--;
            break; // the element at i is gone, so leave the inner loop
        }
    }
}
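A safer sketch of the same cross-list removal, using plain strings in place of the image-URL objects (the class and getter names from the question are not used here). Iterator.remove() sidesteps the index bookkeeping entirely; note this variant removes every occurrence of a shared value, not just the first match:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Iterator;
import java.util.List;
import java.util.Set;

public class RemoveCommon {
    // Remove from both lists every value that appears in both of them.
    static void removeCommon(List<String> tags, List<String> images) {
        Set<String> common = new HashSet<>(tags);
        common.retainAll(new HashSet<>(images)); // values present in both lists
        // Iterator.remove() is safe during iteration; no manual i-- needed
        for (Iterator<String> it = tags.iterator(); it.hasNext(); ) {
            if (common.contains(it.next())) it.remove();
        }
        for (Iterator<String> it = images.iterator(); it.hasNext(); ) {
            if (common.contains(it.next())) it.remove();
        }
    }
}
```

With tags = [a, b, c] and images = [b, d], only b appears in both, so it is removed from each list and the other elements are left alone.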
10503d27371e7216de173ce5b311046dff5732035fb9ad45dc9cc939ac18ea14
['f3f45c0dffb04584ab5772f49e441a79']
Not at all - you can do this via ProGuard. In build.gradle you can enable ProGuard:

release {
    minifyEnabled true
    proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
}

Then modify the proguard-rules.pro file, which should live under your standard Android app directory:

-assumenosideeffects class android.util.Log {
    public static *** v(...);
    public static *** d(...);
    public static *** i(...);
    public static *** w(...);
    public static *** e(...);
}

I hope this answer helps you.
8622977abd0e9d90f990a42b6bb8f82ee9d2069dba0e4ec55ea8c515b2d8e5d7
['f3fd143072a04197b1d3a53f59621383']
It seems that the DataGridCellsPanel calculates a wrong height for the DataGrid. By replacing the DataGridCellsPanel with a StackPanel, the DataGrid calculates the right value for the height and no more space is wasted.

Solution in XAML:

<wpf:DataGrid ItemsSource="{Binding DataGridDS, ElementName=mainWindow}" VirtualizingStackPanel.IsVirtualizing="False">
    <wpf:DataGrid.ItemsPanel>
        <ItemsPanelTemplate>
            <StackPanel />
        </ItemsPanelTemplate>
    </wpf:DataGrid.ItemsPanel>
    <wpf:DataGrid.Columns>
        <wpf:DataGridTextColumn Binding="{Binding Col1}" Width="*" />
        <wpf:DataGridTextColumn Binding="{Binding Col2}" Width="100" />
        <wpf:DataGridTextColumn Binding="{Binding Col3}" Width="100" />
    </wpf:DataGrid.Columns>
</wpf:DataGrid>

Solution in code-behind: I created a custom control that extends DataGrid and has the following code in the constructor:

var stackPanelFactory = new FrameworkElementFactory(typeof(StackPanel));
var template = new ItemsPanelTemplate(stackPanelFactory);
this.ItemsPanel = template;
69e2f8648bf1ef081c05565cdc88b684c95da0045fe75f079054c5d9b0df69a9
['f3fd143072a04197b1d3a53f59621383']
I also had a problem in this direction: closing tabs would cause memory leaks. I checked it with a profiler and it turned out that ActiveContent would still keep a reference, preventing the garbage collector from kicking in.

My code for closing the tab:

dc            // DocumentContent, I want to close it
documentPane  // DocumentPane, containing the dc
documentPane.Items.Remove(dc);

This did the job of closing the tab, but I learned that I need to call dc.Close(); before removing the content from the documentPane if I want ActiveContent to be set to null and let the GC do its job.

Note: I use version 1.2 of AvalonDock; this may have changed in newer versions.
a357b24a2018110c24aa42f22261bf399941eb5a8115054669c27fb1d71f7fe0
['f404aee205c6423fa9d151ccdebef46e']
!insertmacro MUI_PAGE_WELCOME
!define MUI_LICENSEPAGE_CHECKBOX
!insertmacro MUI_PAGE_LICENSE "license.txt"
!insertmacro MUI_PAGE_COMPONENTS
; Directory page
!insertmacro MUI_PAGE_DIRECTORY
;Confirmation Page
Page custom Confirmationpage
; Instfiles page
!insertmacro MUI_PAGE_INSTFILES
; Finish page
!insertmacro MUI_PAGE_FINISH

This is what I have in my setup.nsi file at the beginning. At the end of the installation I am prompting the user to install other software. If the user chooses to install the second software, the initial software (which was installing) should go silent and disappear as soon as it finishes installing. Here I tried to set

SetAutoClose true

But it ignores my SetAutoClose setting and brings up the finish page, prompting the user to hit Finish (which I do not want). Can anyone help me with this?
ad33f7c59bbc0dd085834aae6edaedfded58c8a689d7625bfe8f978a24fd521c
['f404aee205c6423fa9d151ccdebef46e']
Section userSoftware
    MessageBox MB_YESNO|MB_ICONQUESTION "Insert user software DVD in to drive and click Yes to install User Software or click No to Proceed" /SD IDNO IDYES yes IDNO no
    yes:
        AutoCloseWindow true
        SetRebootFlag false
        Call installUserSoftware
    no:
        ;do nothing
SectionEnd

Section: "userSoftware" Error: command AutoCloseWindow not valid in Section

This is the error I am getting with AutoCloseWindow. All I am trying to do is: after installing the server software, if the user selects to install the client software, the installation of the server software should disappear without asking the user to hit the Finish button. With the code above I am just testing how AutoCloseWindow or SetAutoClose works, but all I have is an error!!
556fa0526689a7846bf5656785e647795750ff8bf54c9a746a44563490c25b09
['f41d88c33de6473489d499ee254f6ba1']
I've got just one question: is it good to have all parameters defined with default values in a function? I think it is bad practice, but I'm having a little argument with my colleague. So which is better/nicer:

public function getTestByUser($int_user_id, $limit, $offset)

or

public function getTestByUser($int_user_id = 0, $limit = 0, $offset = null)

And why do you think so? Thanks in advance.
d86e5ffceea0215cc9507fe25388790a33943011317f0cc5a1f6d291bd136164
['f41d88c33de6473489d499ee254f6ba1']
Check your source code! You have a div before the opening html tag, as well as some other issues. Also, why do you have your CSS files located inside the noscript tag? Remove the HTML errors and remove the noscript tag around the CSS files, and everything should work fine...
24b1b110877139d90d2750d0ef30b02373fe0c6329025cbbd42a9790b50640f4
['f41d992a2af5442fb12cb2fe8abd762f']
I would like to use the PyPy Python JIT Implementation using Python 3. However I can only seem to install it using Python 2. Is there even an experimental implementation of PyPy for Python 3 I can try out? Are there plans to port it to Python 3? Or do I need to keep using Python 2 for it? I've gotten quite accustomed to Python 3 and I would like to use it as much as possible.
041a0692a9d71cd81e9cf9f28505515b975343a4e299706c0489b4062ae7c420
['f41d992a2af5442fb12cb2fe8abd762f']
I think we are both correct; indeed the matrix of $\Delta$ is diagonal in the subspace with only electrons from valley K and holes from the time-reversed valley K'. The basis in the BdG should read $(\Psi_\text{A+},\Psi_\text{B+},\Psi_\text{A-},\Psi_\text{B-},\Psi_\text{A+}^{\dagger},\Psi_\text{B+}^\dagger,\Psi_\text{A-}^\dagger,\Psi_\text{B-}^\dagger)$
0175b31033a4900790c865d90072f6a47ea8fcb3d8135ec1415644f392016c71
['f428e14806834ed594d6d8eeb3247aca']
I am trying to use the BRIEF descriptor in OpenCV 3.1 for Android. In order to achieve that, OpenCV has to be built from source with _contrib. So I compiled it without errors and could also see BRIEF.cpp.o being built in the command window. But when I try to use it, my Android app crashes, throwing

OpenCV Error: Bad argument (Specified descriptor extractor type is not supported.) in static cv::javaDescriptorExtractor* cv::javaDescriptorExtractor::create(int), file /home/maksim/workspace/android-pack/opencv/modules/features2d/misc/java/src/cpp/features2d_manual.hpp, line 374

So I checked features2d_manual.hpp. Line 374 is the default branch of a switch-case block:

CV_WRAP static javaDescriptorExtractor* create( int extractorType )
{
    //String name;
    if (extractorType > OPPONENTEXTRACTOR)
    {
        //name = "Opponent";
        extractorType -= OPPONENTEXTRACTOR;
    }
    Ptr<DescriptorExtractor> de;
    switch(extractorType)
    {
    //case SIFT:
    //    name = name + "SIFT";
    //    break;
    //case SURF:
    //    name = name + "SURF";
    //    break;
    case ORB:
        de = ORB::create();
        break;
    //case BRIEF:
    //    name = name + "BRIEF";
    //    break;
    case BRISK:
        de = BRISK::create();
        break;
    //case FREAK:
    //    name = name + "FREAK";
    //    break;
    case AKAZE:
        de = AKAZE::create();
        break;
    default: //**this is line 374**
        CV_Error( Error::StsBadArg, "Specified descriptor extractor type is not supported." );
        break;
    }
    return new javaDescriptorExtractor(de);

So the error clearly comes up because case BRIEF is commented out. So I modified it like this:

#include "opencv2/xfeatures2d.hpp"
...
case BRIEF:
    de = xfeatures2d::BriefDescriptorExtractor::create();
    break;
...
default:
    CV_Error( Error::StsBadArg, "---TEST--- Specified descriptor extractor type is not supported." );
    break;
}

After rebuilding in a fresh directory and using the new build, the exact same error persists. Not even "---TEST---" is included in the message.
So I am wondering why my changes do not have any effect. I am also wondering why the file path is /home/maksim/workspace/android-pack/opencv/modules/features2d/misc/java/src/cpp/features2d_manual.hpp — this directory doesn't even exist on my system, and googling it showed that /home/maksim/ is part of a lot of different error messages on Android. The actual path before building is: C:\Users\JJG-CD\Desktop\Build_Workspace\opencv-3.1.0\modules\features2d\misc\java\src\cpp\features2d_manual.hpp I hope somebody can explain to me what the problem is and eventually give me a hint how to solve it.
fcd2f209be3382a2b574924f3dc5e999bb4b84d16da311608de0230e08f7a9e5
['f428e14806834ed594d6d8eeb3247aca']
I had given up already but found the solution by chance. The reason my own built libraries were not being used is that those libraries are usually provided by the OpenCV Manager app. To get rid of OpenCV Manager and use my own libraries, I just needed to initialize OpenCV statically:

static {
    if (!OpenCVLoader.initDebug()) {
        // Handle initialization error
    }
}

Further details can be found here
84eb53ed18bf8f9a201430a60a706fbe925163e2ef59327c63d490ce3ecf13b0
['f42ca2bb58d548deba6177b36e43f0d7']
I am using libGDX to make an iOS game with RoboVM. I am using the latest versions of libGDX and RoboVM, and my Eclipse is up to date. Recently I have been trying to add Google Analytics thanks to the RoboVM bindings. I have manually imported the analytics project in Eclipse with File -> Import -> Gradle project. Everything works fine; I can import and use the classes in my iOS project. But then if I right-click my iOS project -> Gradle -> Refresh All, the build is successful but it removes the analytics project from the Java build path. As a result, when I try to compile my iOS project from a terminal with a command line, it doesn't compile, since it doesn't find the analytics classes. I am using "./gradlew -Probovm.device.name=myiPhone launchIOSDevice --stacktrace". I guess there is a setting or property in Gradle or RoboVM that I should change; does anybody have an idea?
2496e8c330c5f4c70a9625e6c48ebadab2240beff8398efbfe197eddbd70bbc3
['f42ca2bb58d548deba6177b36e43f0d7']
Google will change its policy on the 1st of Nov 2020 : subscription "Hold" will need to be enabled https://android-developers.googleblog.com/2020/06/new-features-to-acquire-and-retain-subscribers.html At the moment, here is how I query if a user has purchased my subscription or not (there is only 1 subscription in my app) and grant him privileges accordingly: private void queryPurchase() { Purchase.PurchasesResult purchasesResult = mBillingClient.queryPurchases(BillingClient.SkuType.SUBS); if (purchasesResult != null) { if (purchasesResult.getPurchasesList() != null) { if (purchasesResult.getPurchasesList().size() > 0) { String purchaseToken = purchasesResult.getPurchasesList().get(0).getPurchaseToken(); if (purchasesResult.getPurchasesList().get(0).toString().contains("productId\":\"" + "myID")) { //grant user subscription's privileges } } else { //do not grant user subscription's privilege } } } } My questions are : Will this method still properly detect whether or not a subscription is on hold? Do I need to add anything in terms of UI/messaging specifically related to a Hold status?
23919395ad50a4e4bd9aca57314ec4dd4c3b9b9aef2f4b43d271bbff775fe3c5
['f42cada178484ec28308e9f76f40d5f2']
<?php // Your code here! $ar[0] = array('name' => 'arr1', 'data' => array ( '0' => array ( 'name' => 'A', 'age' => 5, 'color' => 'green' ), '1' => array ( 'name' => 'B', 'age' => 4, 'color' => 'green' ), '2' => array ( 'name' => 'C', 'age' => 10, 'color' => 'Red' ), '3' => array ( 'name' => 'F', 'age' => 1, 'color' => 'green' ) ) ); $ar[1] = array ( 'name' => 'arr2', 'data' => array ( '0' => array ( 'name' => 'cc', 'age' => 8, 'color' => 'yellow' ), '1' => array ( 'name' => 'Y', 'age' => 20, 'color' => 'green' ), '2' => array ( 'name' => 'Y', 'age' => 9, 'color' => 'green' ) ) ); $green = array(); foreach($ar as $k1=>$a1){ foreach($a1['data'] as $k2=>$a2){ if($a2['color']=='green') { array_push($green,$a2['age']); } } } rsort($green); $green = array_splice($green,0,4); foreach($ar as $k1=>$a1){ foreach($a1['data'] as $k2=>$a2){ if($a2['color']=='green') { if(!in_array($a2['age'], $green)){ unset($ar[$k1]['data'][$k2]); } } } } print_r($ar); ?>
72c1539003528130f1cb35726dc233ae9a09dbf1ab4e4658b40baa986ec2df35
['f42cada178484ec28308e9f76f40d5f2']
$content = "<p>[Fm] [Gm] [Dm]</p> <p>لورم ایپسوم ، لورم ایپسوم</p> <p>[A] [Asus4] [Bb]</p> <p>لورم ایپسوم ، لورم ایپسوم</p>"; preg_match_all("/<p>.*<\/p>/", $content , $nodes); $final = []; foreach ($nodes[0] as $node ) { if(!empty($node)) { preg_match_all("/\[[^\]]*\]/", $node , $matches); if(!empty($matches[0])) { $final = array_merge($final, array_reverse($matches[0])); } } } print_r($final);
64a1ea3176bf0fcf8745bdc868d0df0b9b165de28383f06030370e2f4b8f665d
['f4311dcefbd941748ee87b64e24528a2']
I am exploring the security capabilities of Kafka 0.9.1 but am unable to use them successfully. I have set the below configuration in my server.properties:

allow.everyone.if.no.acl.found=false
super.users=User:root;User:kafka

I created an ACL using the below command:

./kafka-acls.sh --authorizer-properties zookeeper.connect= --add --allow-principal User:imit --allow-host --topic imit --producer --consumer --group imit-consumer-group

and I see the below response for it:

Current ACLs for resource Topic:imit:
User:imit has Allow permission for operations: Describe from hosts:
User:imit has Allow permission for operations: Read from hosts:
User:imit has Allow permission for operations: Write from hosts:

Note: Values mentioned in <> are replaced with some dummy values in the question and used correctly while creating the ACL

I have the following observations:

a) Though I define the rule for the imit topic to allow access only for a particular user from a given host, I can still write to the topic from any host using any user account.
b) I am unable to read messages from the topic from any host or any user account (even using the one for which I have defined the rules).

I am running Kafka on RHEL 6.7 and all the users are local. I would appreciate it if someone can tell me whether I am missing any configuration parameters or commands to manage authorization, or if Kafka is behaving in a weird way. Also, where can I find authorization-related logs in Kafka?

Thanks & Regards,
<PERSON>
0706c9e73c06517c9d7e806f62e575e8807354e6cd7e2a14ead41013ffa6e345
['f4311dcefbd941748ee87b64e24528a2']
I am trying to run a Spark 1.6 job (written in Java) on a Kerberized cluster. In the job I am trying to read data from a Hive table which uses HBase for its storage.

SparkConf conf = new SparkConf();
JavaSparkContext context = new JavaSparkContext(conf);
HiveContext hiveContext = new HiveContext(context.sc());
hiveContext.setConf("hive.exec.dynamic.partition.mode", "nonstrict");
hiveContext.setConf("spark.sql.hive.convertMetastoreOrc", "false");
hiveContext.setConf("spark.sql.caseSensitive","false");
DataFrame df = hiveContext.sql(task.getQuery());
df.show(100);

I am using the below spark-submit command to run the job on YARN:

spark-submit --master yarn --deploy-mode cluster --class <Main class name> --num-executors 2 --executor-cores 1 --executor-memory 1g --driver-memory 1g --jars application.json,/usr/hdp/current/hbase-client/lib/guava-12.0.1.jar,/usr/hdp/current/hbase-client/lib/hbase-client.jar,/usr/hdp/current/hbase-client/lib/hbase-common.jar,/usr/hdp/current/hbase-client/lib/hbase-protocol.jar,/usr/hdp/current/hbase-client/lib/hbase-server.jar,/usr/hdp/current/hive-client/lib/hive-hbase-handler.jar,/usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar,/etc/hbase/conf/hbase-site.xml,/usr/hdp/current/spark-client/conf/hive-site.xml data-refiner-1.0.jar

I have already performed a kinit before running the job. The job is able to communicate with the Hive metastore and parse the query.
17/04/05 06:15:23 INFO ParseDriver: Parsing command: SELECT * FROM <db_name>.<table_name> 17/04/05 06:15:24 INFO ParseDriver: Parse Completed But when trying to communicate with HBase to get data it is failing with below exception: 17/04/05 06:15:26 WARN AbstractRpcClient: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] 17/04/05 06:15:26 ERROR AbstractRpcClient: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'. javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)] at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179) at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:611) at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:156) at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:737) at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:734) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:734) at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:887) at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:856) at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1199) at 
org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:213) at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:287) at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.execService(ClientProtos.java:32765) at org.apache.hadoop.hbase.protobuf.ProtobufUtil.execService(ProtobufUtil.java:1627) at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:104) at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel$1.call(RegionCoprocessorRpcChannel.java:94) at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126) at org.apache.hadoop.hbase.ipc.RegionCoprocessorRpcChannel.callExecService(RegionCoprocessorRpcChannel.java:107) at org.apache.hadoop.hbase.ipc.CoprocessorRpcChannel.callBlockingMethod(CoprocessorRpcChannel.java:73) at org.apache.hadoop.hbase.protobuf.generated.AuthenticationProtos$AuthenticationService$BlockingStub.getAuthenticationToken(AuthenticationProtos.java:4512) at org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:86) at org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:111) at org.apache.hadoop.hbase.security.token.TokenUtil$1.run(TokenUtil.java:108) at java.security.AccessController.doPrivileged(Native Method) at javax.security.auth.Subject.doAs(Subject.java:422) at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724) at org.apache.hadoop.hbase.security.User$SecureHadoopUser.runAs(User.java:313) at org.apache.hadoop.hbase.security.token.TokenUtil.obtainToken(TokenUtil.java:108) at org.apache.hadoop.hbase.security.token.TokenUtil.addTokenForJob(TokenUtil.java:329) at org.apache.hadoop.hive.hbase.HBaseStorageHandler.addHBaseDelegationToken(HBaseStorageHandler.java:496) at 
org.apache.hadoop.hive.hbase.HBaseStorageHandler.configureTableJobProperties(HBaseStorageHandler.java:441) at org.apache.hadoop.hive.hbase.HBaseStorageHandler.configureInputJobProperties(HBaseStorageHandler.java:342) at org.apache.spark.sql.hive.HiveTableUtil$.configureJobPropertiesForStorageHandler(TableReader.scala:304) at org.apache.spark.sql.hive.HadoopTableReader$.initializeLocalJobConfFunc(TableReader.scala:323) at org.apache.spark.sql.hive.HadoopTableReader$anonfun$12.apply(TableReader.scala:276) at org.apache.spark.sql.hive.HadoopTableReader$anonfun$12.apply(TableReader.scala:276) at org.apache.spark.rdd.HadoopRDD$anonfun$getJobConf$6.apply(HadoopRDD.scala:174) at org.apache.spark.rdd.HadoopRDD$anonfun$getJobConf$6.apply(HadoopRDD.scala:174) at scala.Option.map(Option.scala:145) at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:174) at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:195) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:242) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:240) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:240) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:242) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:240) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:240) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:242) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:240) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:240) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:242) at 
org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:240) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:240) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:242) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:240) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:240) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:242) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:240) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:240) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:242) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:240) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:240) at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:242) at org.apache.spark.rdd.RDD$anonfun$partitions$2.apply(RDD.scala:240) at scala.Option.getOrElse(Option.scala:120) at org.apache.spark.rdd.RDD.partitions(RDD.scala:240) at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:190) at org.apache.spark.sql.execution.Limit.executeCollect(basicOperators.scala:165) at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174) at org.apache.spark.sql.DataFrame$anonfun$org$apache$spark$sql$DataFrame$execute$1$1.apply(DataFrame.scala:1499) at org.apache.spark.sql.DataFrame$anonfun$org$apache$spark$sql$DataFrame$execute$1$1.apply(DataFrame.scala:1499) at 
org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56) at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086) at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$execute$1(DataFrame.scala:1498) at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$collect(DataFrame.scala:1505) at org.apache.spark.sql.DataFrame$anonfun$head$1.apply(DataFrame.scala:1375) at org.apache.spark.sql.DataFrame$anonfun$head$1.apply(DataFrame.scala:1374) at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099) at org.apache.spark.sql.DataFrame.head(DataFrame.scala:1374) at org.apache.spark.sql.DataFrame.take(DataFrame.scala:1456) at org.apache.spark.sql.DataFrame.showString(DataFrame.scala:170) at org.apache.spark.sql.DataFrame.show(DataFrame.scala:350) at org.apache.spark.sql.DataFrame.show(DataFrame.scala:311) at com.hpe.eap.batch.EAPDataRefinerMain.main(EAPDataRefinerMain.java:88) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:497) The job runs fine when we query a normal Hive table and also on non-kerberized cluster. Kindly suggest if we need to modify any configuration parameter/code changes to resolve the issue.
f2f6624e64b7cce0c6aa7eb72c43b6a08c4f4eb39af6bec5ae077ec847764604
['f4337c6ed0d74575b9dade5a26b78b5c']
This must be obvious, but I just don't get it. I'm trying to write a function where the values of a UIDatePicker and a UIPickerView (for seconds) need to be checked for nil (in case one of them is still spinning). So I had: let datePicker = view.viewWithTag(9) as! UIDatePicker if datePicker.date != nil { } For which I get: Binary operator '!=' cannot be applied to operands of type 'NSDate' and 'NilLiteralConvertible'.
5156c388638e31eb1bd63939cde77577f8db47d880a86dd01eb0bb7b9c0450a3
['f4337c6ed0d74575b9dade5a26b78b5c']
I have two viewcontrollers that use view.translatesAutoresizingMaskIntoConstraints = true view.autoresizingMask = .FlexibleHeight Everything is working fine, when the in-call status bar is toggled the view is smoothly resizing its height. But when the in-call status bar is visible and the second view controller is presented via self.presentViewController(vc, animated: true, completion: nil) with modalTransitionStyle = .FlipHorizontal the presenting view jumps 20 points down and at the end of the transition the presented view jumps 20 points up (to the correct position). How could I prevent this from happening?
3a48018f55be0f86b0c8ce9e1e659a17573dd17ccd99893b6481271cf2e93b4f
['f465f91e1ca0406fb4dd30be71bb8e91']
I'm just using a simple slideToggle function of jQuery. It works properly in my HTML file. But whenever I move the code into an ASP.NET WebForms project, the script doesn't work. I can't achieve the slide effect. Is this a known issue? It's probably a common mistake made by developers, though. What could be the problem? Any possibilities? (I didn't copy & paste any code because I thought this isn't a specific situation, but I can show the code if you want)
9033c2da77b3d5b9506eeadd47f2e6fea43ec85bf3ff04ff848014f953a29dc4
['f465f91e1ca0406fb4dd30be71bb8e91']
I have a parameter like params[:time_range]. In my controller, I want to use this time-range parameter to specify a particular range in my query, something like this: # params[:time_range] = "week" time = params[:time_range] query = Article.where(created_at: 1.#{time}.ago) Of course it doesn't work as it is right now. Is there a way to convert the params[:time_range] string into something usable as month, day or week? I tried to use to_sym, but that week thing is not a symbol. When I try to find its class with 1.week I get Fixnum. Does anyone know a way to work this out?
bb41d69fe18763ce63531f77cad7832caf9f2ab42e422a26c3dd0c26a1628a4f
['f4666924dbc9453f9e78bd0417b5e9a5']
I got it; it was supposed to be like this: public Complex naive2(Polynomial poly, Complex x) { Complex p = new Complex(); for (int i = poly.getCoef().length - 1; i >= 0; i--) { p = p.add(poly.getCoef()[i].multiply(expo(x, i))); multiplyCountNaive2++; } return p; } private Complex expo(Complex a, int b) { if (b == 0) { return new Complex(1, 0); } else if (b == 1) { return a; } if (b % 2 == 0) { return expo(a.multiply(a), b / 2); } else { return a.multiply(expo(a.multiply(a), (b - 1) / 2)); } }
5324441047ca51181b6d9262d35906c14d283e5d60ba8e1214409e8621b68762
['f4666924dbc9453f9e78bd0417b5e9a5']
Guys, I am doing polynomial evaluation using algorithms such as Naive, Horner and FFT, and there is one statement in my question that states: Run a variation of the naïve algorithm, where the exponentiation is performed by repeated squaring, a decrease-by-half algorithm I do not understand it. My current naive algorithm is: public Complex naive(Polynomial poly, Complex x) { Complex p = new Complex(); for (int i = poly.getCoef().length - 1; i >= 0; i--) { Complex power = new Complex(1, 0); for (int j = 1; j <= i; j++) { power = power.multiply(x); } p = p.add(poly.getCoef()[i].multiply(power)); multiplyCountNaive++; } return p; } Kindly explain what needs to be modified. Thank you
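In case it helps, "exponentiation by repeated squaring" means replacing the inner loop (which performs i multiplications) with a decrease-by-half scheme that needs only O(log i) multiplications. A rough Python sketch of the idea (plain integers here instead of the Complex class from the question):

```python
def power_by_squaring(x, n):
    """Compute x**n with O(log n) multiplications.

    Decrease by half: if n is even, x**n = (x*x)**(n//2);
    if n is odd,  x**n = x * (x*x)**((n-1)//2).
    """
    if n == 0:
        return 1
    if n % 2 == 0:
        return power_by_squaring(x * x, n // 2)
    return x * power_by_squaring(x * x, (n - 1) // 2)
```

The same structure translates to the Java code by replacing the inner for-loop that builds power with a helper applying this recursion to x and exponent i.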
e4a23bc737a6446203428f55addcada5e3c0c4e155ca8057a108f457f4c22bda
['f469330c987f41edbdc10b70344aab34']
Finally I'm able to install grub and now I can run Kali. Using your excellent answer I did a workaround. Remember your earlier answer `$ sudo mount /dev/sda1 /mnt/boot/efi`? I inserted this command between the 5th and the 6th command. If I had carefully examined your previous edit I would've solved my problem earlier.
6374a374b1426e57a9efe8486e56eb0080e697ad396fac45fe534a1be6fefb32
['f469330c987f41edbdc10b70344aab34']
I'm not sure whether I have misunderstood your question, but yes, you need to send a POST request to the POST /wp/v2/users API. That should do the trick; I don't think the programming language is an issue here. I believe you should decide how the post-submission flow should behave.
45e0bbe14e25262f033abb31cab14fbce14ea2cbb80b22a687130bc46afd07a2
['f469dca6de4945209e4ed5951fb73022']
I've collected bus arrival times at my local bus stop from the past month - so I have every time my bus (a specific bus number) shows up at my bus stop for each day of the week (Monday, Tuesday, etc.). I am struggling to determine the best and clearest way to display this data. Eventually I want to use a clustering algorithm to help understand the most likely time the bus will show up, so on a Monday I know whether the bus is more likely to show up at 7:45 am or 7:48 am. I believe 8 charts will be best - one chart for each day of the week, and then one final chart that shows the average regardless of day of week. What would be my best chart type for clearly visualising this data?
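Not an answer on chart types, but before a full clustering algorithm, a plain per-weekday frequency count already surfaces the "most likely time"; a stdlib-only sketch with made-up sample data:

```python
from collections import Counter, defaultdict

# (weekday, "HH:MM") arrival observations -- hypothetical sample data
arrivals = [
    ("Mon", "07:45"), ("Mon", "07:48"), ("Mon", "07:45"),
    ("Tue", "07:46"), ("Tue", "07:46"), ("Tue", "07:51"),
]

by_day = defaultdict(Counter)
for day, time in arrivals:
    by_day[day][time] += 1

# the modal (most frequent) arrival time per weekday
most_likely = {day: counts.most_common(1)[0][0] for day, counts in by_day.items()}
```

The per-day counts are also exactly what a histogram (one per weekday, as in the proposed 8-chart layout) would plot.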
ddad3562cbf800e4faa589e35b570cf4a688700d1ad776efbc3bb66a9f3ccb84
['f469dca6de4945209e4ed5951fb73022']
I'll start with the second question and then the first: If I remember correctly, the electric field in this particular scenario in the dielectric ball does come out to be uniform, but I have no trivial way to explain why that is so. Instead, I suggest you solve <PERSON>'s equation in and out of the sphere with the correct boundary conditions (this is how I solved it at the time...). The vacuum isn't polarized, $P$ is the polarization of the ball, not the vacuum.
bc80f12a36533cd559dc8abca46e24013ca14b8aeed865e26fc1365f589019c5
['f4720811ca4b466d82a5d5d4f4d964d7']
I'm having trouble configuring my log4j2.xml file to include Hibernate logging. Here is my xml file: <?xml version="1.0" encoding="UTF-8"?> <Configuration status="TRACE"> <Properties> <Property name="logDir" value="${env:MY_APP}/logs"/> </Properties> <Appenders> <RollingFile name="MY_APP" fileName="${logDir}/my_app.log" bufferedIO="true" filePattern="${logDir}/my_app-%d{yyyy-MM-dd}-%i.log.gz"> <PatternLayout pattern="[%t] %d %-5p %x %m%n"/> <Policies> <OnStartupTriggeringPolicy/> <TimeBasedTriggeringPolicy/> </Policies> <DefaultRolloverStrategy> <Delete basePath="${logDir}" maxDepth="1"> <IfFileName glob="my_app-*.log.gz" /> <IfLastModified age="31d" /> </Delete> </DefaultRolloverStrategy> </RollingFile> </Appenders> <Loggers> <Root level="trace"> <AppenderRef ref="MY_APP" level="debug"/> </Root> <Logger name="com.company" level="info"/> <Logger name="com.companyName" level="info"/> <Logger name="com.companyName.myApp" level="debug"/> <Logger name="org.hibernate" level="info"/> </Loggers> </Configuration> And on startup of my jar I get the following printed to the console: log4j:WARN No appenders could be found for logger(org.hibernate.type.BasicTypeRegistry). log4j:WARN Please initialize the log4j system properly. log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info. I seem to get logs from my application into my RollingFile but I can't understand why this Hibernate logger isn't using the same appender. Any help much appreciated!
4c970a876d7f66caafaed3e07e61f05de92f2060cc6eb416925c3277969fea13
['f4720811ca4b466d82a5d5d4f4d964d7']
I'm trying to achieve something similar to the attached image, where the circle is animated depending on your level progress and then a label is attached to both the end of the animated path to show the gained experience and also the undrawn part of the circle to show the remaining experience. I have the circle animating as wanted but am having trouble coming up with a solution to the labels to appear in the right spot. I've tried setting the position of the label to the path.currentPoint but that always seems to be the start of the drawn path and not the end. Any pointers on how to achieve this would be great!
35e1bdaa01fc33c3e2ff2917dc27d2b156afde6d90eae55a69cc29a6dab443b7
['f475199edb8d47e296bfae1e65247b74']
I have a document named "posts"; it is like this: { "_id" : ObjectId("5afc22290c06a67f081fa463"), "title" : "Cool", "description" : "this is amazing" } And I have put an index on title and description: db.posts.createIndex( { title: "text", description: "text" } ) The problem is that when I search and type, for example, "amaz", it returns the data with "this is amazing" above, while it should return data only when I type "amazing" db.posts.find({ $text: { $search: 'amaz' } }, (err, results) => { return res.json(results); });
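If this is MongoDB's $text search, the behaviour is most likely stemming: the text index stores stemmed word forms, and an English stemmer reduces "amazing" to the stem "amaz", which the search term "amaz" also maps to, hence the match. A toy illustration of the idea in Python (a crude suffix-stripper standing in for a real stemmer, not MongoDB's actual implementation):

```python
def crude_stem(word):
    """Toy stand-in for a stemmer: strip a few common English suffixes."""
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[: len(word) - len(suffix)]
    return word
```

So the stored word and the query reduce to the same token. If the goal is exact-word matching instead, a plain regex/substring query on the field is one direction to look rather than $text.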
dba13f6c7fa5d57e0fe9743dc144a79339294f4379b1e63afcf353a81867f9d7
['f475199edb8d47e296bfae1e65247b74']
I am trying to deploy this React project https://github.com/tahnik/react-expressjs and use an Apache server for the static files. The example that I know works is with Angular: with Angular we just run ng build --prod and this creates a dist folder where there is an index.html. On Apache, we just serve the dist folder. But here we use React with webpack, which does not produce a dist folder with an index.html, so I don't know how to do that with your project. PS: Sorry for my English, it's not my native language. Thanks
ba453b971ac897833fbe187bb8c73bf4545dbd93f9adf43c2a3014a870f93514
['f47ceb5c44af4cf8a0d517ddb997acdd']
I want to add a grid in a canvas class, which is supposed to look like a background image (actually a table) of this canvas (see below). I could add a div container below the canvas element, style it like a table and shift it into the background of the canvas. But I was wondering if I can add a grid like http://www.linker.ch/graf/sudoku/sudoku_vorlage.GIF directly via CSS by referencing only the canvas element. For example with ::before / ::after? <canvas class="sigma-mouse" width="964" height="602px" style="position: absolute; width: 964px; height: 602px;"></canvas>
6ed2c6bb9377686ad3d9fd9e42ffa8eefc1b48b5540f5b41adab26ddfac19c4b
['f47ceb5c44af4cf8a0d517ddb997acdd']
What is the best way to visualize relationships of contents (like a graph) in Drupal? Is there any tutorial or good instruction? I just found https://www.drupal.org/node/1392374 but it doesn't work; I just got a grey graphic instead of the graph. Thank you very much in advance for your help!
d8a672fb84b42ee0d45a96beb11fa50dab5cbc41fad0aa60aeef2ffdd6247758
['f4836d04e42d4995987b445088da55b3']
<PERSON> Yes, the random directory does not exist. But I don't know why: `/tmp` is `1777`, which means every user should have permission to create a directory under `/tmp`. According to the [source code](https://github.com/pulseaudio/pulseaudio/blob/master/src/pulsecore/core-util.c#L1791) there must be something wrong with `mkdir(fn,m)`. I tried to recompile pulseaudio with `-O0 -g` so I can attach `gdb` to it. However, packaging a Debian package ... hmmmm, hard to say.
5ce96959601d61b5bcd4b8d74220878ac324dbeab842c58541116440ab4c31e5
['f4836d04e42d4995987b445088da55b3']
First, observe that $\lfloor \log N \rfloor < 2\lceil \log N \rceil$ holds for all $N \geq 2$. Let $n_0 = 2$, $c_1=2$, $f(N)=2\lfloor \log N \rfloor$, $g(N) = 2\lceil \log N \rceil$. So $$f(N) \leq c_1 \cdot g(N)$$ holds for all $N \geq n_0$. Next, observe that $\lfloor \log N \rfloor \geq \frac{1}{2}\lceil \log N \rceil$ holds for all $N \geq 2$ (with equality only at $N = 3$, so the strict inequality fails there, but $\geq$ is all we need). Let $c_2 = \frac{1}{2}$. Thus, $$f(N) \geq c_2 \cdot g(N)$$ Therefore, $$f(N) \in \Theta(g(N))$$ As for the second question, the statement is clearly incorrect, so I will not prove it here.
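A quick numeric spot-check of both inequalities (base-2 logs; this is not a substitute for the proof):

```python
from math import ceil, floor, log2

def inequalities_hold(N):
    """Check floor(log N) < 2*ceil(log N) and floor(log N) >= (1/2)*ceil(log N)."""
    lo, hi = floor(log2(N)), ceil(log2(N))
    return lo < 2 * hi and lo >= 0.5 * hi

# spot-check a large range of N
assert all(inequalities_hold(N) for N in range(2, 10_000))
```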
26dcff2c706f33827af669b4d21c06575eefeac4ddabead041e49b5a702eba8c
['f48b61db223c454f81186a6aa5416eb8']
The type of problem you are talking about is called a regression problem. In such problems, you would have a single output neuron with a linear activation (or no activation). You would use MSE or MAE to train your network. If your problem is a time series (where you are using previous values to predict the current/next value) then you could try doing multi-variate time series forecasting using LSTMs. If your problem is not a time series, then you could just use a vanilla feed-forward neural network. This article explains the concepts of data correlation really well and you might find it useful in deciding what type of neural network to use based on the type of data and output you have.
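As a minimal concrete version of "single output neuron, linear activation, trained with MSE", here is a dependency-free sketch: one linear unit fit by batch gradient descent on toy data (the learning rate and epoch count are arbitrary choices):

```python
def fit_linear_unit(xs, ys, lr=0.01, epochs=2000):
    """Fit y ~ w*x + b by gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # gradients of MSE = (1/n) * sum((w*x + b - y)^2)
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

On data generated from y = 2x + 1 this recovers w close to 2 and b close to 1; a real multi-feature or LSTM model is the same loop with more parameters and a framework computing the gradients.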
b3defbf0ab02401ef4a0ab0b8885d365228c315036425cdb6798a49c4c78fe11
['f48b61db223c454f81186a6aa5416eb8']
This kind of versioning issue is really common. python3 and pip3 might be referring to different versions/installations of Python. This is why it is best to use virtual environments, as they ensure that everything in the virtual environment is using the same Python installation. Here is what I would suggest you do: 1) First, use the python3 installation you have to install virtualenv or any other virtual env manager. I am going to assume you are using virtualenv 2) Then you have to ensure that you use the pip that corresponds to your python3 installation to install virtualenv: python3 -m pip install virtualenv 3) Now use virtualenv to create a new environment. Since virtualenv is installed with python3, in the environment, the python should also be python3 4) Activate the environment and use python --version and pip --version to check the version of python and pip in the environment Every time you are working on a new project, you should ideally create a new environment to prevent versioning issues.
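A stdlib check that makes step 4 concrete: ask the interpreter itself where it lives and where its packages go, which is exactly what gets confusing when python3 and pip3 point at different installations:

```python
import sys
import sysconfig

def interpreter_info():
    """Report the running interpreter and where pure-Python packages install to."""
    return {
        "executable": sys.executable,
        "version": "%d.%d.%d" % sys.version_info[:3],
        "site_packages": sysconfig.get_paths()["purelib"],
    }
```

Running this with python3 versus inside an activated virtualenv shows the executable and site-packages paths switch to the environment's own copies; the python3 -m pip form in step 2 works because -m guarantees pip runs under that exact interpreter.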
6c1ea7aaaa76ff4d7e2c3b2e1019c1f9cf89dad08668f9bdb5793db736ee8e2c
['f4a2acbf0c5f45239bde4b2d66b94a95']
The problem is that you are defining columns 0 to 16 as non-sortable: "aoColumnDefs": [ { "bSortable": false, "aTargets": [ 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16 ] } ] aoColumnDefs: This array allows you to target a specific column, multiple columns, or all columns, using the aTargets property of each object in the array bSortable Enable or disable sorting on this column. Removing that property will make it work. Here is a working example
1cf423c4c54abda20e6f889ee3515139f7fb280c3d99d8d4aaa402007e775fe0
['f4a2acbf0c5f45239bde4b2d66b94a95']
The problem is that you are looking for an element among the li elements, but where you have to look is among the children of those li elements: console.log($(".events ol li").children().index($("a.selected"))); console.log($(".events ol li").children().index($("a.notSelected"))); <script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script> <section class="cd-horizontal-timeline loaded" id="timelineSection"> <div class="timeline"> <div class="events-wrapper"> <div class="events" style="width: 300px; transform: translatex(-120px);"> <ol> <li><a href="url" ></a></li> <li><a class="notSelected" href="url" ></a></li> <li><a class="selected" href="url" ></a></li> </ol> <span class="filling-line" aria-hidden="true"></span> </div> </div> <ul class="cd-timeline-navigation"> <li> <a class="prev" href="#0">Prev</a> </li> <li> <a class="next" href="#0">Next</a> </li> <button id="closeTimeline" type="button">Chiudi</button> </ul> </div> </section>
4f522dd53aefc4c45d25f5ec0aae0d543d7609935016923fd487921c7e1f95d6
['f4b37cd48b6e4c9991ecaad8e8385672']
I have an SSRS report containing 'State' and 'County' as parameters. In one scenario, the state CA (California) is selected, and all the counties belonging to CA are selected in the County filter. Now the user selects a few other states; the County filter is refreshed with a new list corresponding to the states, but the entries are un-selected/unchecked. Only CA counties are checked/selected in the drop-down list. Can I have a fix to select all counties every time a refresh of states happens? Thanks, <PERSON>
6c689878faa842ff0f91b15ce73f16bc8035b1ddc2fe4e4411acab8ee35ce8c4
['f4b37cd48b6e4c9991ecaad8e8385672']
I have an "Automatic" startup-mode service, say "BPSAuto". I need to keep this service stopped for a certain time and later restart it. I used a bat file and scheduled tasks to stop and start it. I used --> net start BPSAuto --- to start and --> net stop BPSAuto --- to stop. Now my problem is that once I stop the service it starts again automatically. Maybe this is because it is in "Automatic" mode. Please help me to stop it completely and later restart it. Thanks in advance...
d78e50bb3d10fb18f2ad871e9e321e19a31c134a811626b42426f6ddf7354cce
['f4b5bb00de3d4959a247f4a40f9bcbcc']
The problem with this traditional folder structure is that you have to place e.g. models and views that belong together into different folders. That may create a nightmare. I personally find it hard and time-consuming to navigate huge directory structures. Instead, I found it helpful to follow advice from <PERSON> Blog and put all components that belong to one module into a corresponding folder. That way you also avoid huge directories and rather have more directories with meaningful names. Now, in your question the word "any" is confusing. If you mean being able to call any single function, then certainly no, only those you really need. Otherwise, yes :) (see any source on MVC).
de86ce8095fb5e5d0bd44ec88821287431735e68deb7ab767be262a1f580b956
['f4b5bb00de3d4959a247f4a40f9bcbcc']
Why not create an abstraction of your interface that can be specialised to each of the frameworks? I would write my own caching Controller with the methods I need, which would encapsulate both caches. Then I can forget about them and only talk to my Controller. Any problems with that solution?
f66f75c86bf93bc5fcdf0adec982d96bafc27deb851ba08a7d3cbc77b5432fe5
['f4be7bac20284965b42c44920a9bb408']
It is now recommended to use SimpleNamespace from the types module. It does the same thing as the accepted answer, except that it will be faster and has a few more builtins such as equality and repr. from types import SimpleNamespace sn = SimpleNamespace() sn.a = 'test' sn.a # output 'test'
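A small runnable illustration of those extras (Python 3.3+), since they are easy to miss:

```python
from types import SimpleNamespace

a = SimpleNamespace(x=1, y=2)
b = SimpleNamespace(x=1, y=2)

# Structural equality and a readable repr come built in
namespaces_equal = (a == b)
readable = repr(a)  # something like namespace(x=1, y=2)
```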
8977227d56ce4f81c513cd4b2f2bcfb61b65e3a8110cd343b928d3f1e583cf92
['f4be7bac20284965b42c44920a9bb408']
Defining a property with a getter function but without a setter can be very useful in certain scenarios. Let's say you have a model as below in django; a model is essentially a database table with entries called fields. The property hostname is computed from one or more fields in the model from the database. This circumvents needing another entry in the database table that has to be changed every time the relevant fields are changed. The true benefit of using a property is calling object.hostname vs. object.hostname(). The former is resolved on the object automatically, so when we go to a place like a jinja template we can use object.hostname, but calling object.hostname() will raise an error. The example below is a virtualmachine model with a name field and an example of the jinja code where we passed a virtualmachine object. # PYTHON CODE class VirtualMachine(models.Model): name = models.CharField(max_length=128, unique=True) @property def hostname(self): return "{}-{}.{}".format( gethostname().split('.')[0], self.name, settings.EFFICIENT_DOMAIN ) # JINJA CODE ...start HTML... Name: {{ object.name }} # passes Hostname: {{ object.hostname }} # passes Hostname: {{ object.hostname() }} # fails ...end HTML...
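The call-site mechanics described above can be checked standalone, without django; the Host class here is a made-up stand-in for the model:

```python
class Host:
    def __init__(self, name):
        self.name = name

    @property
    def hostname(self):
        # computed on attribute access; no parentheses at the call site
        return "node-{}.example.com".format(self.name)

h = Host("web1")
resolved = h.hostname        # plain attribute access works
try:
    h.hostname()             # the property already returned a str, so this call fails
    call_raised = False
except TypeError:
    call_raised = True
```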
518b774623cf2fc0b62b7f93a9f7c94abc985418b4f0f99f2b73c665393fea8d
['f4c1e620f3664a669cfb7d93fcb25181']
The stack trace reveals where the problem is: ... Caused by: java.lang.NullPointerException at ass2.session.UserProjectFacade.join(UserProjectFacade.java:37) Check line 37 of your UserProjectFacade class and you'll get the answer. By the way, in this line .. if (invitedLeader == user) { .. are you sure you want to compare references and not their primary keys?
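The reference-vs-value distinction flagged in the last sentence is the same pitfall in any language; in Python terms (is roughly plays the role of Java's ==, while == plays the role of equals()):

```python
a = [1, 2]
b = [1, 2]

same_object = a is b    # identity: do both names point at the same object?
same_value = a == b     # value equality: do the objects compare equal?
```

For entities, that usually means comparing primary keys (or relying on a proper equals()/hashCode()) rather than reference identity.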
3cfd7753281c9351797c4663b740a819d214318f5444679ebbfc74e197cf05f1
['f4c1e620f3664a669cfb7d93fcb25181']
There are a few things to consider: check whether the JNDI name java:global/RecieverApp/Controller!bean.ControllerRemote exists; there was a nice JNDI browser in Glassfish 2.x, but they didn't put it in GF 3 (it should be in GF 4), though you still have the good old command line: asadmin list-jndi-entries check whether your CallerRemote interfaces are in the same packages in both applications there is no need to perform both injection (@EJB) and JNDI lookup; if your class Talker is container-managed (e.g. a bean, servlet, etc.) then the @EJB annotation will suffice, otherwise use only the lookup
5c177398e22be4e223927c8e539374ebc27411f19acd9dc91c43343a18503157
['f4c3e33b42684b25837bd9635bd0eed9']
This may be very naive, since I just started trying to learn <PERSON> flow; but I couldn't really find any answer after looking for a while in all the textbooks and lecture notes I found online... If $(M,g_t)$ is a solution of the <PERSON> flow (normalized or not, I don't care), and $i\colon N\hookrightarrow (M,g_0)$ is a submanifold (with the induced metric), what is known about what happens to $(N,i^*g_t)$ in terms of its intrinsic/extrinsic geometry? This is somewhat vague, so, to be more precise: under what conditions a totally geodesic (resp. minimal) submanifold remains totally geodesic (resp. minimal)? What evolution equation is satisfied by the second fundamental form $B^t_{\xi^t}(X,Y)=g_t(\nabla^t_X Y,\xi^t)$ of $N\subset (M,g_t)$, or shape operator, in the codimension $1$ case? Note that here almost everything depends on $t$: the connection $\nabla^t$, the normal field $\xi^t$ and obviously the metric $g_t$. I tried to take the $t$ derivative using formulas for each of the objects (e.g., the ones found in <PERSON>'s notes), but it got incredibly messy very fast -- and there was nothing I could really read off the formulas. I then did some examples, but the only ones I could do all the computations for were somewhat trivial. I would be interested in any intuition/results related to the above, it could be for hypersurfaces (instead of general submanifolds), only in low dimensions, etc...
eb6963d3b750c6bd914f27d9f0473c8346dbcb5744419955efb5f8ef62b81e56
['f4c3e33b42684b25837bd9635bd0eed9']
@Boaz and @Vitali: An isometry of a semi-Riemannian (or pseudo-Riemannian) manifold $(M,g)$ is a diffeomorphism $\phi$ that preserves the metric tensor, i.e., the pull-back of $g$ by $\phi$ coincides with $g$: $\phi^*(g)=g$. I have never seen any other definitions in the literature. Although there is a notion of distance in Lorentzian manifolds (see e.g. the book of <PERSON> or the one of <PERSON>, <PERSON> and <PERSON>), there are very few results analogous to the Riemannian case.
283053e363fcf94308707006db7607f14ffd68e065ac618d3a5e51cb11f7cfc7
['f4ce6ece85014212b6b5a21e2df7c3ae']
I am new to the ACE framework, and I'm looking to explore socket programming using ACE. I found the doxygen documentation - http://www.dre.vanderbilt.edu/Doxygen/5.7.6/html/ace/a00614.html#aced00dccf394509a056ce4bccaf40b24 - and it is no doubt helpful, but I was looking for some advanced code examples to get a better understanding. I would appreciate it if anyone can help me with this.
9ef2a9eee62ab9f195b54f7c8ff17816c6f9fb9baedf38eaf3f8f885cdbda5d2
['f4ce6ece85014212b6b5a21e2df7c3ae']
I get the idea that if locking and unlocking a mutex is an atomic operation, it can protect the critical section of code in the case of a single-processor architecture. Any thread which was scheduled first would be able to "lock" the mutex in a single machine-code operation. But how are mutexes any good when the threads are running on multiple cores, where different threads could be running simultaneously on different "cores"? I can't seem to grasp how a multithreaded program would work without any deadlock or race condition on multiple cores.
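A short experiment that at least exercises the semantics (Python threads interleave rather than run truly in parallel, but the mutual-exclusion contract is identical): the read-modify-write on the shared counter is the critical section, and the lock serializes it. On real multicore hardware the same guarantee is built from atomic read-modify-write instructions (test-and-set / compare-and-swap) plus cache coherence, so two cores can attempt the lock at the same instant but only one can win it:

```python
import threading

counter = 0
lock = threading.Lock()

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # acquire; blocks if another thread holds it
            counter += 1    # critical section: read-modify-write
        # release happens automatically on leaving the with-block

threads = [threading.Thread(target=add_many, args=(50_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock the final value can come up short because two threads read the same old value; with it, the count is exact.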