Hi Michael,

On Wednesday, 2012-04-18 at 19:21 +0300, Michael Snoyman wrote:

> I'm quite a novice at rewrite rules; can anyone recommend an approach
> to get my rule to fire first?

I'm not an expert on rewrite rules either, but from some experimentation and from reading the -dverbose-core2core output (which is not a very nice presentation, unfortunately), I think that one reason why your rules won't fire is that yieldMany is inlined too early. This change, which delays its inlining until phase 1, fixes that:

diff --git a/conduit/Data/Conduit/Internal.hs b/conduit/Data/Conduit/Internal.hs
index bf2de63..8050c2c 100644
--- a/conduit/Data/Conduit/Internal.hs
+++ b/conduit/Data/Conduit/Internal.hs
@@ -353,7 +353,7 @@ yieldMany =
   where
     go [] = Done Nothing ()
     go (o:os) = HaveOutput (go os) (return ()) o
-{-# INLINE yieldMany #-}
+{-# INLINE [1] yieldMany #-}
 {-# RULES "yield/bind" forall o (p :: Pipe i o m r). yield o >> p = yieldBind o p

It might be hard to actually match on [1..1000], as that is replaced very early by the specific instance method, which then takes part in the foldr/build rewrite machinery. But maybe, instead of specializing enumFromTo, you already get good and more general results by hooking into that? Judging from the code, you are already trying to do so, as you have a yieldMany/build rule that fires with the above change:

$ cat Test.hs
module Test where

import Data.Conduit
import qualified Data.Conduit.List as CL

x :: Pipe i Integer IO ()
x = mapM_ yield [1..1000]

$ ghc -O -fforce-recomp -ddump-rule-firings Test.hs
[1 of 1] Compiling Test             ( Test.hs, Test.o )
Rule fired: Class op enumFromTo
Rule fired: mapM_ yield
Rule fired: yieldMany/build

Oh, and as you can see, you don't have to export the functions occurring in the rules, as you did with yieldMany and yieldBuild.

I don't know conduit well, but you should check whether this also affects you: if conduits are constructed as in stream fusion, the build rule might not be of any use.
http://www.haskell.org/pipermail/haskell-cafe/2012-April/100793.html
From: Jeremy Siek (jsiek_at_[hidden])
Date: 2001-09-03 15:01:23

On Mon, 3 Sep 2001 williamkempf_at_[hidden] wrote:
willia> --- In boost_at_y..., Jeremy Siek <jsiek_at_c...> wrote:
willia> >
willia> > in condition.html:
willia> >
willia> > Template parameter "Pr" should be replaced with "Predicate"
willia> > and linked to the definition:
willia> >
willia> Do we make external links like this in Boost documentation? I'm
willia> willing to make the change, just want to make sure it's appropriate.

In Boost we try to keep external links to a minimum, but encourage use of and links to components/concepts in the C++ standard. The SGI STL web site is a good location to point to for concepts such as Predicate that are used in the standard. Therefore Boost encourages this kind of external linkage. There are a few concepts on which the SGI STL docs and the C++ standard differ, so I've created C++-standard-compliant definitions in the libs/utility directory, notably for the Assignable concept.

willia> > Move xlock.hpp to a detail/ directory, or document it and
willia> > make it "public".
willia>
willia> Currently it is documented, but it's also in the detail namespace (I
willia> expected others to make use of the templates). I guess I've left
willia> these types in limbo, partially a detail item and partially public
willia> concepts. A decision has to be made as to which direction it should
willia> really go.

willia> > in scoped_lock.html (and the other lock docs too)
willia> >
willia> > Header, just use mutex.hpp here, xlock.hpp is an implementation
willia> > detail, right?
willia>
willia> We can't use just mutex.hpp, we'd have to use recursive_mutex.hpp as
willia> well. However, the type isn't defined in either of these, but is
willia> defined in xlock.hpp. Even if it's fully turned into a detail I
willia> don't think this should be changed here.

Yes, others will use the stuff in xlock.hpp, but the question is *who* those others are. Are they your ordinary users? No, they are library authors. Therefore I encourage the move of xlock.hpp to detail/. Think about the student trying to learn Boost.Thread for the first time... the teacher will have to keep answering questions like "what's this scoped_lock class template for?" with the reply "Oh, just ignore that". I encourage removing the documentation of the stuff in xlock.hpp from the main documentation. Perhaps you could add an "Implementation notes" page that refers to it.

Cheers,
Jeremy
----------------------------------------------------------------------
https://lists.boost.org/Archives/boost/2001/09/16911.php
What I gotta do: Write a program that inputs a telephone number as a string in the form (555) 555-5555. The program should use the function strtok() to extract the area code as a token, the first three digits of the phone number as a token, and the last four digits as a token. The seven digits of the phone number should be concatenated into one string. The program should convert the area-code string to an int and the phone-number string to a long, and both the area code and the phone number should be printed.

Output so far:

Code:
Enter a phone number in the form (555) 555-5555: (555)555-5555
The integer area code is 555
The long integer phone number is 555555
press any key to continue...

It's only a bit of it; I don't know how to set it up with gets(), atoi(), strcpy(), and strcat().

Code:
#include <stdio.h>
#include <string.h>

int main( void )
{
    char string[ 20 ];   /* was 10: too small to hold "(555)555-5555" plus '\0' */
    char *tokenPtr;

    printf( "Enter a phone number in the form (555) 555-5555: \n" );
    scanf( "%19s", string );   /* bound the read so the buffer cannot overflow */

    tokenPtr = strtok( string, "() -" );

    while ( tokenPtr != NULL ) {
        printf( "\nThe integer area code is %s\n", tokenPtr );
        tokenPtr = strtok( NULL, "() -" );
    }

    return 0;
}
http://cboard.cprogramming.com/c-programming/67025-strtok-phone-number-printable-thread.html
I’ve been hard at work finishing the Controllers chapter for San Gria and wanted to come up with a fun and practical codec example. Our main example app, Hubbub, is a Twitter clone, and as I was working on the UI I was thinking “I wonder if we could add a TinyUrl facility? Actually, how about a TinyUrlCodec? Could probably do it in a one-liner, right?”

class TinyUrlCodec {
    static encode = { fullUrl ->
        new URL("${fullUrl.encodeAsURL()}").text
    }
}

Which makes the controller implementation a snack!

def tinyurl = {
    def tiny = params.fullUrl.encodeAsTinyUrl()
    log.debug "Encoded ${params.fullUrl} to ${tiny}"
    render tiny
}

But you’re keen to see it in action, right? I’m going to need a little Ajax sauce to integrate the remoteForm with the dynamic adding to the textarea…

<g:form ...>
    TinyUrl: <g:textField ... />
    <g:submitToRemote ... />
</g:form>

With some matching JavaScript goodness to catch the XmlHttpRequest…

<g:javascript>
    function addTinyUrl(e) {
        var tinyUrl = e.responseText
        $("postContent").value += tinyUrl
        updateCounter()
    }
</g:javascript>

Anyway, it could probably do with a little more robust error handling, but it’ll be fine for a Codec example.

The Controllers chapter is almost done, which leaves only the Views chapter to get done next week. Grails in Action MEAP subscribers will receive a bunch of chapters next week (I think there are 6 going out, including stuff about JMS, Quartz, plugins, build scripts, Maven and all that, developer lifecycle and other tidbits). The final chapters for the book will be coming out in January, and it’s off to final edits in February before it heads off to the printer. If Sven and I can get a BOF off the ground, I’ll be giving away copies at JavaOne!
http://blogs.bytecode.com.au/glen/2008/12/06/a-cute-tinyurl-codec.html
#ifdef __ppc__
#if __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 95)
/*
 * The syntax of the asm() is different in the older compilers like the 2.7
 * compiler in MacOS X Server so this is not turned on for that compiler.
 */
static inline int
strcmp(const char *in_s1, const char *in_s2)
{
    int result, temp;
    register const char *s1 = in_s1 - 1;
    register const char *s2 = in_s2 - 1;

    asm("1:lbzu %0,1(%1)\n"
        "\tcmpwi cr1,%0,0\n"
        "\tlbzu %3,1(%2)\n"
        "\tsubf. %0,%3,%0\n"
        "\tbeq- cr1,2f\n"
        "\tbeq+ 1b\n2:"
        /* outputs: */  : "=&r" (result), "+b" (s1), "+b" (s2), "=r" (temp)
        /* inputs: */   :
        /* clobbers: */ : "cr0", "cr1", "memory");

    return(result);
}
/*
   "=&r" (result) means: 'result' is written on (the '='), it's any GP
   register (the 'r'), and it must not be the same as any of the input
   registers (the '&').

   "+b" (s1) means: 's1' is read from and written to (the '+'), and it
   must be a base GP register (i.e., not R0.)

   "=r" (temp) means: 'temp' is any GP reg and it's only written to.

   "memory" in the 'clobbers' section means that gcc will make sure that
   anything that should be in memory IS there before calling this routine.
*/
#endif /* __GNUC__ > 2 || (__GNUC__ == 2 && __GNUC_MINOR__ >= 95) */
#endif /* __ppc__ */
http://opensource.apple.com/source/cctools/cctools-667.6.0/dyld/inline_strcmp.h
Greetings,

We're having some trouble with a custom i.MX6Q board (based on the SabreSD reference). It appears that our DDR3 is close to its timing margins, so I'd like to slow the DDR3 clock rate down to 396MHz. I tried plugging that value into the DDR3 programming spreadsheet and replacing the values in flash_header.S with the updated spreadsheet values. However, when u-boot comes up, it's still showing 528MHz for the ddr clock. I also noticed that the mx6q pll2 is set to 528MHz. Is there something else I need to do, like change pll2 somehow?

Thanks, FM

Hi,

My custom hardware is based on the i.MX6Q processor and 800MHz Alliance DDR3 memory AS4C256M16D3A-12BCN. The design closely follows the Nitrogen6_MAX design. I set tCL = 6 and tCWL = 8 for the calibration and ddr init script. Some boards work with these values, and for some boards I had to change tCWL = 7 for better performance; otherwise I get segmentation faults.

1) How can this happen with the same ddr layout and boards from the same production line? What can we conclude from this board-to-board fine tuning of the tCL and tCWL values?

I'm new to custom hardware design, and I'm going to scale up my custom hardware, which is functioning well on a few boards. But I'm still having trouble that is really difficult to figure out, as some boards work really well and some do not (out of 50 units of production). When I do memory calibration the results really look the same. Although many boards have similar calibration results, some work really well and some do not (as an example, take 10 boards: only 4 work really well). Could you please go through the following calibration results for a few boards? Please follow the pastebin links.

Processor - i.MX6Q
Memory - MT41K256M16TW-107 IT:P

Of the 3 boards above, Board 1 and Board 2 work really well, and Board 3 gets kernel panics while running the same application as the other boards.
Can you help me figure out any clues from these results for the 3 boards above? I did a production run of 50 units 6 months ago and only 30 worked properly, but that was with the Alliance memory AS4C256M16D3A-12BCN. So is this an issue with the design? If it is an issue with the ddr layout or the whole design, why do some boards work really well?

2) Could this be an issue on the manufacturing side? Then how could it happen within the same production run, with some boards working and some not?

I don't have much experience with mass production, but I would like to move forward after learning from and correcting these issues. I would be thankful for a prompt reply.

Many thanks and regards, Peter.

Sounds like at least some people have been having issues with the DDR3 track-length constraints. I am currently laying out an i.MX6Q DDR3 4GB design, and there seem to be inconsistencies between the Hardware Development Guide (IMX6DQ6SDLHDG Rev 1, 06/2013) and the Sabre reference board (LAY-27392_C.brd). The Hardware Development Guide says (Table 3-3) that the address lines cannot be longer than SD_CLK, yet the layout for the reference design has the address lines 0.09 inches longer than the ddr clock SD_CLK. Anyone think I am wrong? If I am not wrong, which is correct: the HDG or the reference design?

Since I've had no luck with these solutions, let me ask a different question. How low can I set the DDR speed with the 528 MHz sourced periph_clk? Can I drop the DDR clk to 500, 475, or 452 somehow without changing the input to periph_clk? If so, how would I go about this?

Thanks, FM

Hello Frank, although this is an old topic, it is still marked "not answered". Were you able to slow down the DDR3 clock? We are experiencing similar problems. I believe the answer to your last question is a minimum of 400MHz. It is defined by the DDR3 chip and depends on the CAS latency setup: Freescale uses a CAS latency of 8 clocks in their script, which is valid from 2.5ns down to 1.875ns (400MHz to 533MHz) for a 1600MHz DDR3 device.
regards, Borut

Thanks! I never thought that calibration values would converge... We'll try it out.

Best regards, Borut

Hi, Frank

I think the mmdc_axi_ch0 is sourced from periph_clk, and periph_clk comes from pll2, which is 528M. So to get 396MHz for the DDR3 you may need to switch the clock source of periph_clk from pll2 to pfd396M; check the CCM chapter for the clock tree, and switch this clock before the DCD table that configures the MMDC. Also, the DDR scripts for 528M and 396M are different, so if it still doesn't work you may need to check the script. But first of all, just make the clock frequency right: it must be 396M.

Hi Huang,

Thanks for the reply. I've already adjusted the script in flash_header.S and that did not change the clock; that's why I'm asking the question. The real problem we're having is as follows. We have 25 boards that use an i.MX6Q @ 1.2 GHz, and only a couple of them boot up reliably. We had built 6 boards previously using a 1 GHz i.MX6D that ran fine. Both processors are TO1.2 lots. We upgraded to the 4.0.0 release as well. We ran DDR3 stress tests on some of the boards and discovered that failures appear as the DDR3 testing gets to 528 MHz. This behavior feels like there is some issue with timing margins or something between the processor and the DDR3 memory, but we have not been able to isolate it as yet. We were going to try slowing the memory down on the 25 boards to see if we get better results. Do you have any other advice?

I noticed another thread on these boards where someone else was having similar problems: Tuning DDR3 configurations on iMX6Q board. That thread was marked as "assumed answered", but it was not clear from it what the solution was.

Thanks, FM

Hi FM, I created the thread Tuning DDR3 configurations on iMX6Q board; the issue is not solved yet. I am thinking of reducing the DDR speed to 500MHz or lower, as the stress test looks OK at a lower speed (500MHz).
Any luck on slowing the DDR speed in Android? Thanks. PK

Hi, Frank

I think we need to try 396M first. For the boards that can boot up reliably, did they run at a 528MHz or 396MHz DDR freq? You said that you have adjusted the script to 396M but that it did not change the clock; that is why I need you to try adding the clock change first. To set the pre_periph_clk select to pfd396M, you may need to add one DCD item in flash_header.S, DCD(0x20c4018, 0x60324), in front of the MMDC and IOMUX settings. Then you also need to add this length to the DCD length parameter. See below:

/* CCM_BASE_ADDR = 0x020C4000 */
/* DDR clk to 400MHz */
ldr r0, =CCM_BASE_ADDR
ldr r1, =0x00060324
str r1, [r0, #0x018]

[Anson] Change this to DCD(0x20c4018, 0x60324); please check the RM to see whether this 0x60324 is right to select PFD396 as pre_periph_clk's parent.

dcd_hdr: .word 0x40F003D2 /* Tag=0xD2, Len=125*8 + 4 + 4, Ver=0x40 */
write_dcd_cmd: .word 0x04EC03CC /* Tag=0xCC, Len=125*8 + 4, Param=0x04 */

[Anson] As the DCD count is increased, the values above need to be changed to 126*8 + 4 + 4 and 126*8 + 4.

Please try it first to see whether we can make more boards boot up reliably @396M; if it helps, then we can focus on the DDR timing adjustment.

OK, I tried adding this DCD item (per the Anson comment) at the beginning of my flash_header.S script. The board does not boot at all now. Regarding the 0x60324 value, I checked the RM, and the 0x6 prefix does appear to set the pre_periph_clk to 396 MHz as you suggested. I did note, however, that the default value for this register is 0x22324, so your 0x6 prefix is setting different values for pre_periph_clk_sel as well as gpu2d_clk_sel, vpu_axi_clk_sel, and periph_clk2_sel. Also, the default value for this register means that 00 (the 528 MHz clk) is chosen for pre_periph_clk, but the explanation for bits 18-19 states that 01 should be the default value, which would yield a 396 MHz clock by default. Very confusing. Maybe there's something going on in the ROM here?
A couple of things to note about my flash_header.S script. We are using 4 GB of 64-bit DDR3 in 8 256MB Micron devices. I had created a script for our dual-core boards and things were working fine, so in this script the places marked with // 396 MHz comments are the ones I had to change to switch to the 396 MHz speed according to the spreadsheet. I added the line you recommended as DCD item 1, which meant renumbering all the other DCD items, and I also bumped up the Len part of the dcd_hdr and write_dcd_cmd labels as directed. Here is what my flash_header.S script looks like:

#else /* i.MX6Q */
//dcd_hdr: .word 0x40a002D2 /* Tag=0xD2, Len=83*8 + 4 + 4, Ver=0x40 */
//write_dcd_cmd: .word 0x049c02CC /* Tag=0xCC, Len=83*8 + 4, Param=0x04 */
//dcd_hdr: .word 0x40c802d2 /* Tag=0xD2, Len=88*8 + 4 + 4, Ver=0x40 */
//write_dcd_cmd: .word 0x04c402cc /* Tag=0xCC, Len=88*8 + 4, Param=0x04 */
dcd_hdr: .word 0x40d002d2 /* Tag=0xD2, Len=89*8 + 4 + 4, Ver=0x40 */
write_dcd_cmd: .word 0x04cc02cc /* Tag=0xCC, Len=89*8 + 4, Param=0x04 */
/* DCD */
// <added>
// Change pre_periph_clk to pfd396M
MXC_DCD_ITEM(1, 0x020c4018, 0x00060324)
// </added>
MXC_DCD_ITEM(2, IOMUXC_BASE_ADDR + 0x798, 0x000C0000)
MXC_DCD_ITEM(3, IOMUXC_BASE_ADDR + 0x758, 0x00000000)
MXC_DCD_ITEM(4, IOMUXC_BASE_ADDR + 0x588, 0x00000030)
MXC_DCD_ITEM(5, IOMUXC_BASE_ADDR + 0x594, 0x00000030)
MXC_DCD_ITEM(6, IOMUXC_BASE_ADDR + 0x56c, 0x00000030)
MXC_DCD_ITEM(7, IOMUXC_BASE_ADDR + 0x578, 0x00000030)
MXC_DCD_ITEM(8, IOMUXC_BASE_ADDR + 0x74c, 0x00000030)
MXC_DCD_ITEM(9, IOMUXC_BASE_ADDR + 0x57c, 0x00000030)
MXC_DCD_ITEM(10, IOMUXC_BASE_ADDR + 0x58c, 0x00000000)
MXC_DCD_ITEM(11, IOMUXC_BASE_ADDR + 0x59c, 0x00000030)
MXC_DCD_ITEM(12, IOMUXC_BASE_ADDR + 0x5a0, 0x00000030)
MXC_DCD_ITEM(13, IOMUXC_BASE_ADDR + 0x78c, 0x00000030)
MXC_DCD_ITEM(14, IOMUXC_BASE_ADDR + 0x750, 0x00020000)
MXC_DCD_ITEM(15, IOMUXC_BASE_ADDR + 0x5a8, 0x00000030)
MXC_DCD_ITEM(16, IOMUXC_BASE_ADDR + 0x5b0, 0x00000030)
MXC_DCD_ITEM(17, IOMUXC_BASE_ADDR + 0x524, 0x00000030)
MXC_DCD_ITEM(18, IOMUXC_BASE_ADDR + 0x51c, 0x00000030)
MXC_DCD_ITEM(19, IOMUXC_BASE_ADDR + 0x518, 0x00000030)
MXC_DCD_ITEM(20, IOMUXC_BASE_ADDR + 0x50c, 0x00000030)
MXC_DCD_ITEM(21, IOMUXC_BASE_ADDR + 0x5b8, 0x00000030)
MXC_DCD_ITEM(22, IOMUXC_BASE_ADDR + 0x5c0, 0x00000030)
MXC_DCD_ITEM(23, IOMUXC_BASE_ADDR + 0x774, 0x00020000)
MXC_DCD_ITEM(24, IOMUXC_BASE_ADDR + 0x784, 0x00000030)
MXC_DCD_ITEM(25, IOMUXC_BASE_ADDR + 0x788, 0x00000030)
MXC_DCD_ITEM(26, IOMUXC_BASE_ADDR + 0x794, 0x00000030)
MXC_DCD_ITEM(27, IOMUXC_BASE_ADDR + 0x79c, 0x00000030)
MXC_DCD_ITEM(28, IOMUXC_BASE_ADDR + 0x7a0, 0x00000030)
MXC_DCD_ITEM(29, IOMUXC_BASE_ADDR + 0x7a4, 0x00000030)
MXC_DCD_ITEM(30, IOMUXC_BASE_ADDR + 0x7a8, 0x00000030)
MXC_DCD_ITEM(31, IOMUXC_BASE_ADDR + 0x748, 0x00000030)
MXC_DCD_ITEM(32, IOMUXC_BASE_ADDR + 0x5ac, 0x00000030)
MXC_DCD_ITEM(33, IOMUXC_BASE_ADDR + 0x5b4, 0x00000030)
MXC_DCD_ITEM(34, IOMUXC_BASE_ADDR + 0x528, 0x00000030)
MXC_DCD_ITEM(35, IOMUXC_BASE_ADDR + 0x520, 0x00000030)
MXC_DCD_ITEM(36, IOMUXC_BASE_ADDR + 0x514, 0x00000030)
MXC_DCD_ITEM(37, IOMUXC_BASE_ADDR + 0x510, 0x00000030)
MXC_DCD_ITEM(38, IOMUXC_BASE_ADDR + 0x5bc, 0x00000030)
MXC_DCD_ITEM(39, IOMUXC_BASE_ADDR + 0x5c4, 0x00000030)
MXC_DCD_ITEM(40, MMDC_P0_BASE_ADDR + 0x800, 0xA1390003)
MXC_DCD_ITEM(41, MMDC_P0_BASE_ADDR + 0x80c, 0x001F001F)
MXC_DCD_ITEM(42, MMDC_P0_BASE_ADDR + 0x810, 0x001F001F)
MXC_DCD_ITEM(43, MMDC_P1_BASE_ADDR + 0x80c, 0x001F001F)
MXC_DCD_ITEM(44, MMDC_P1_BASE_ADDR + 0x810, 0x001F001F)
//MXC_DCD_ITEM(45, MMDC_P0_BASE_ADDR + 0x83c, 0x4333033F)
MXC_DCD_ITEM(45, MMDC_P0_BASE_ADDR + 0x83c, 0x43270338)
//MXC_DCD_ITEM(46, MMDC_P0_BASE_ADDR + 0x840, 0x032C031D)
MXC_DCD_ITEM(46, MMDC_P0_BASE_ADDR + 0x840, 0x03200314)
//MXC_DCD_ITEM(47, MMDC_P1_BASE_ADDR + 0x83c, 0x43200332)
MXC_DCD_ITEM(47, MMDC_P1_BASE_ADDR + 0x83c, 0x431a032f)
//MXC_DCD_ITEM(48, MMDC_P1_BASE_ADDR + 0x840, 0x031A026A)
MXC_DCD_ITEM(48, MMDC_P1_BASE_ADDR + 0x840, 0x03200263)
//MXC_DCD_ITEM(49, MMDC_P0_BASE_ADDR + 0x848, 0x4D464746)
MXC_DCD_ITEM(49, MMDC_P0_BASE_ADDR + 0x848, 0x4b434748)
//MXC_DCD_ITEM(50, MMDC_P1_BASE_ADDR + 0x848, 0x47453F4D)
MXC_DCD_ITEM(50, MMDC_P1_BASE_ADDR + 0x848, 0x4445404c)
//MXC_DCD_ITEM(51, MMDC_P0_BASE_ADDR + 0x850, 0x3E434440)
MXC_DCD_ITEM(51, MMDC_P0_BASE_ADDR + 0x850, 0x38444542)
//MXC_DCD_ITEM(52, MMDC_P1_BASE_ADDR + 0x850, 0x47384839)
MXC_DCD_ITEM(52, MMDC_P1_BASE_ADDR + 0x850, 0x4935493a)
MXC_DCD_ITEM(53, MMDC_P0_BASE_ADDR + 0x81c, 0x33333333)
MXC_DCD_ITEM(54, MMDC_P0_BASE_ADDR + 0x820, 0x33333333)
MXC_DCD_ITEM(55, MMDC_P0_BASE_ADDR + 0x824, 0x33333333)
MXC_DCD_ITEM(56, MMDC_P0_BASE_ADDR + 0x828, 0x33333333)
MXC_DCD_ITEM(57, MMDC_P1_BASE_ADDR + 0x81c, 0x33333333)
MXC_DCD_ITEM(58, MMDC_P1_BASE_ADDR + 0x820, 0x33333333)
MXC_DCD_ITEM(59, MMDC_P1_BASE_ADDR + 0x824, 0x33333333)
MXC_DCD_ITEM(60, MMDC_P1_BASE_ADDR + 0x828, 0x33333333)
MXC_DCD_ITEM(61, MMDC_P0_BASE_ADDR + 0x8b8, 0x00000800)
MXC_DCD_ITEM(62, MMDC_P1_BASE_ADDR + 0x8b8, 0x00000800)
//MXC_DCD_ITEM(63, MMDC_P0_BASE_ADDR + 0x004, 0x00020036)
MXC_DCD_ITEM(63, MMDC_P0_BASE_ADDR + 0x004, 0x00020024) // 396 MHz
//MXC_DCD_ITEM(64, MMDC_P0_BASE_ADDR + 0x008, 0x09444040)
MXC_DCD_ITEM(64, MMDC_P0_BASE_ADDR + 0x008, 0x00444040) // 396 MHz
//MXC_DCD_ITEM(65, MMDC_P0_BASE_ADDR + 0x00c, 0x555A7975)
MXC_DCD_ITEM(65, MMDC_P0_BASE_ADDR + 0x00c, 0x3f435313) // 396 MHz
//MXC_DCD_ITEM(66, MMDC_P0_BASE_ADDR + 0x010, 0xFF538F64)
MXC_DCD_ITEM(66, MMDC_P0_BASE_ADDR + 0x010, 0xb66e8b64) // 396 MHz
//MXC_DCD_ITEM(67, MMDC_P0_BASE_ADDR + 0x014, 0x01FF00DB)
MXC_DCD_ITEM(67, MMDC_P0_BASE_ADDR + 0x014, 0x01ff0092) // 396 MHz
//MXC_DCD_ITEM(68, MMDC_P0_BASE_ADDR + 0x018, 0x00001740)
//MXC_DCD_ITEM(68, MMDC_P0_BASE_ADDR + 0x018, 0x000f11c0)
MXC_DCD_ITEM(68, MMDC_P0_BASE_ADDR + 0x018, 0x00001740) // 396 MHz
MXC_DCD_ITEM(69, MMDC_P0_BASE_ADDR + 0x01c, 0x00008000)
MXC_DCD_ITEM(70, MMDC_P0_BASE_ADDR + 0x02c, 0x000026D2)
//MXC_DCD_ITEM(71, MMDC_P0_BASE_ADDR + 0x030, 0x005A1023)
MXC_DCD_ITEM(71, MMDC_P0_BASE_ADDR + 0x030, 0x00431023) // 396 MHz
//MXC_DCD_ITEM(72, MMDC_P0_BASE_ADDR + 0x040, 0x00000027)
//MXC_DCD_ITEM(72, MMDC_P0_BASE_ADDR + 0x040, 0x0000003f)
MXC_DCD_ITEM(72, MMDC_P0_BASE_ADDR + 0x040, 0x00000047) // 396 MHz
//MXC_DCD_ITEM(73, MMDC_P0_BASE_ADDR + 0x000, 0x831A0000)
MXC_DCD_ITEM(73, MMDC_P0_BASE_ADDR + 0x000, 0xc41a0000)
MXC_DCD_ITEM(74, MMDC_P0_BASE_ADDR + 0x01c, 0x04088032)
MXC_DCD_ITEM(75, MMDC_P0_BASE_ADDR + 0x01c, 0x00008033)
MXC_DCD_ITEM(76, MMDC_P0_BASE_ADDR + 0x01c, 0x00048031)
//MXC_DCD_ITEM(77, MMDC_P0_BASE_ADDR + 0x01c, 0x09408030)
MXC_DCD_ITEM(77, MMDC_P0_BASE_ADDR + 0x01c, 0x05208030) // 396 MHz
MXC_DCD_ITEM(78, MMDC_P0_BASE_ADDR + 0x01c, 0x04008040)
// <added>
MXC_DCD_ITEM(79, MMDC_P0_BASE_ADDR + 0x01c, 0x0408803a)
MXC_DCD_ITEM(80, MMDC_P0_BASE_ADDR + 0x01c, 0x0000803b)
MXC_DCD_ITEM(81, MMDC_P0_BASE_ADDR + 0x01c, 0x00048039)
//MXC_DCD_ITEM(82, MMDC_P0_BASE_ADDR + 0x01c, 0x09408038)
MXC_DCD_ITEM(82, MMDC_P0_BASE_ADDR + 0x01c, 0x05208038) // 396 MHz
MXC_DCD_ITEM(83, MMDC_P0_BASE_ADDR + 0x01c, 0x04008048)
// </added>
MXC_DCD_ITEM(84, MMDC_P0_BASE_ADDR + 0x020, 0x00005800)
MXC_DCD_ITEM(85, MMDC_P0_BASE_ADDR + 0x818, 0x00011117)
MXC_DCD_ITEM(86, MMDC_P1_BASE_ADDR + 0x818, 0x00011117)
//MXC_DCD_ITEM(87, MMDC_P0_BASE_ADDR + 0x004, 0x00025576)
MXC_DCD_ITEM(87, MMDC_P0_BASE_ADDR + 0x004, 0x00025564) // 396 MHz
MXC_DCD_ITEM(88, MMDC_P0_BASE_ADDR + 0x404, 0x00011006)
MXC_DCD_ITEM(89, MMDC_P0_BASE_ADDR + 0x01c, 0x00000000)
#endif

Hi, Frank

Below (sorry, I do not know how to attach a file) is the flash_header.S for 400MHz DDR using plugin mode. This is a debug version from when I was trying 400MHz for DDR3; you can use it as a reference, but I am not sure whether it works on your platform.
#include <config.h>
#include <asm/arch/mx6.h>

#ifdef CONFIG_FLASH_HEADER
#ifndef CONFIG_FLASH_HEADER_OFFSET
# error "Must define the offset of flash header"
#endif

.section ".text.flasheader", "x"
b _start
.org CONFIG_FLASH_HEADER_OFFSET

/* First IVT to copy the plugin that initializes the system into OCRAM */
ivt_header:      .long 0x402000D1 /* Tag=0xD1, Len=0x0020, Ver=0x40 */
app_code_jump_v: .long 0x00907458 /* Plugin entry point, address after the second IVT table */
reserv1:         .long 0x0
dcd_ptr:         .long 0x0
boot_data_ptr:   .long 0x00907420
self_ptr:        .long 0x00907400
app_code_csf:    .long 0x0
reserv2:         .long 0x0
boot_data:       .long 0x00907000
image_len:       .long 16*1024 /* plugin can be up to 16KB in size */
plugin:          .long 0x1 /* Enable plugin flag */

/* Second IVT to give entry point into the bootloader copied to DDR */
ivt2_header:      .long 0x402000D1 /* Tag=0xD1, Len=0x0020, Ver=0x40 */
app2_code_jump_v: .long _start /* Entry point for uboot */
reserv3:          .long 0x0
dcd2_ptr:         .long 0x0
boot_data2_ptr:   .long boot_data2
self_ptr2:        .long ivt2_header
app_code_csf2:    .long 0x0
reserv4:          .long 0x0
boot_data2:       .long TEXT_BASE
image_len2:       .long _end_of_copy - TEXT_BASE + CONFIG_FLASH_HEADER_OFFSET
plugin2:          .long 0x0

/* Here starts the plugin code */
plugin_start:
    /* Save the return address and the function arguments */
    push {r0-r4, lr}

    /*
     * Note: The DDR settings provided below are specific to Freescale development
     * boards and are the latest settings at the time of release. However, it is
     * recommended to contact your Freescale representative in case there are any
     * improvements to these settings.
     */

    /* Init the DDR according to the init script */
    ldr r0, =CCM_BASE_ADDR
    /* select 400MHz for pre_periph_clk_sel */
    ldr r1, =0x00060324
    str r1, [r0,#0x18]

    /* 64-bit DDR3 */
    /* IOMUX setting */
    ldr r0, =IOMUXC_BASE_ADDR
    mov r1, #0x30
    str r1, [r0,#0x5a8]
    str r1, [r0,#0x5b0]
    str r1, [r0,#0x524]
    str r1, [r0,#0x51c]
    str r1, [r0,#0x518]
    str r1, [r0,#0x50c]
    str r1, [r0,#0x5b8]
    str r1, [r0,#0x5c0]
    ldr r1, =0x00020030
    str r1, [r0,#0x5ac]
    str r1, [r0,#0x5b4]
    str r1, [r0,#0x528]
    str r1, [r0,#0x520]
    str r1, [r0,#0x514]
    str r1, [r0,#0x510]
    str r1, [r0,#0x5bc]
    str r1, [r0,#0x5c4]
    str r1, [r0,#0x56c]
    str r1, [r0,#0x578]
    str r1, [r0,#0x588]
    str r1, [r0,#0x594]
    str r1, [r0,#0x57c]
    ldr r1, =0x00003000
    str r1, [r0,#0x590]
    str r1, [r0,#0x598]
    mov r1, #0x00
    str r1, [r0,#0x58c]
    ldr r1, =0x00003030
    str r1, [r0,#0x59c]
    str r1, [r0,#0x5a0]
    ldr r1, =0x00000030
    str r1, [r0,#0x784]
    str r1, [r0,#0x788]
    str r1, [r0,#0x794]
    str r1, [r0,#0x79c]
    str r1, [r0,#0x7a0]
    str r1, [r0,#0x7a4]
    str r1, [r0,#0x7a8]
    str r1, [r0,#0x748]
    str r1, [r0,#0x74c]
    mov r1, #0x00020000
    str r1, [r0,#0x750]
    mov r1, #0x00000000
    str r1, [r0,#0x758]
    mov r1, #0x00020000
    str r1, [r0,#0x774]
    mov r1, #0x30
    str r1, [r0,#0x78c]
    mov r1, #0x000c0000
    str r1, [r0,#0x798]

    /* Initialize 2GB DDR3 - Micron MT41J128M */
    ldr r0, =MMDC_P0_BASE_ADDR
    ldr r2, =MMDC_P1_BASE_ADDR
    ldr r1, =0x02020207
    str r1, [r0,#0x83c]
    ldr r1, =0x02020201
    str r1, [r0,#0x840]
    ldr r1, =0x02020207
    str r1, [r2,#0x83c]
    ldr r1, =0x02150203
    str r1, [r2,#0x840]
    ldr r1, =0x3E35353B
    str r1, [r0,#0x848]
    ldr r1, =0x3A393541
    str r1, [r2,#0x848]
    ldr r1, =0x41424744
    str r1, [r0,#0x850]
    ldr r1, =0x4937483B
    str r1, [r2,#0x850]
    ldr r1, =0x33333333
    str r1, [r0,#0x81c]
    str r1, [r0,#0x820]
    str r1, [r0,#0x824]
    str r1, [r0,#0x828]
    str r1, [r2,#0x81c]
    str r1, [r2,#0x820]
    str r1, [r2,#0x824]
    str r1, [r2,#0x828]
    ldr r1, =0x00081740
    str r1, [r0,#0x18]
    ldr r1, =0x00008000
    str r1, [r0,#0x1c]
    ldr r1, =0x555b99a4
    str r1, [r0,#0x0c]
    ldr r1, =0xfe730e64
    str r1, [r0,#0x10]
    ldr r1, =0x01ff00db
    str r1, [r0,#0x14]
    ldr r1, =0x000026d2
    str r1, [r0,#0x2c]
    ldr r1, =0x005b0e21
    str r1, [r0,#0x30]
    ldr r1, =0x1b334000
    str r1, [r0,#0x08]
    ldr r1, =0x0003002d
    str r1, [r0,#0x04]
    ldr r1, =0x00000027
    str r1, [r0,#0x40]
    ldr r1, =0xc31a0000
    str r1, [r0,#0x00]
    ldr r1, =0x00000800
    str r1, [r0,#0x8b8]
    ldr r1, =0x04088032
    str r1, [r0,#0x1c]
    ldr r1, =0x0408803a
    str r1, [r0,#0x1c]
    ldr r1, =0x00008033
    str r1, [r0,#0x1c]
    ldr r1, =0x0000803b
    str r1, [r0,#0x1c]
    ldr r1, =0x00428031
    str r1, [r0,#0x1c]
    ldr r1, =0x00428039
    str r1, [r0,#0x1c]
    ldr r1, =0x09308030
    str r1, [r0,#0x1c]
    ldr r1, =0x09308038
    str r1, [r0,#0x1c]
    ldr r1, =0x04008040
    str r1, [r0,#0x1c]
    ldr r1, =0x04008048
    str r1, [r0,#0x1c]
    ldr r1, =0xa5380003
    str r1, [r0,#0x800]
    ldr r1, =0x00005800
    str r1, [r0,#0x20]
    ldr r1, =0x00022221
    str r1, [r0,#0x818]
    ldr r1, =0x00022221
    str r1, [r2,#0x818]
    ldr r1, =0x00000000
    str r1, [r0,#0x1c]

    /********************
       The following is to fill in those arguments for this ROM function:
           pu_irom_hwcnfg_setup(void **start, size_t *bytes, const void *boot_data)
       This function is used to copy data from the storage media into DDR.
       start     - Initial (possibly partial) image load address on entry.
                   Final image load address on exit.
       bytes     - Initial (possibly partial) image size on entry.
                   Final image size on exit.
       boot_data - Initial @ref ivt Boot Data load address.
    */
    adr r0, DDR_DEST_ADDR
    adr r1, COPY_SIZE
    adr r2, BOOT_DATA

    /*
     * check the _pu_irom_api_table for the address
     */
before_calling_rom___pu_irom_hwcnfg_setup:
    mov r4, #0x2000
    add r4, r4, #0xed
    blx r4 /* This address might change in future ROM versions */
after_calling_rom___pu_irom_hwcnfg_setup:

    /* To return to ROM from the plugin, we need to fill in these arguments.
     * Here is what we need to do:
     * Construct the parameters for this function before returning to ROM:
     *     plugin_download(void **start, size_t *bytes, UINT32 *ivt_offset)
     */
    pop {r0-r4, lr}
    ldr r5, DDR_DEST_ADDR
    str r5, [r0]
    ldr r5, COPY_SIZE
    str r5, [r1]
    mov r5, #0x400 /* Point to the second IVT table at offset 0x42C */
    add r5, r5, #0x2C
    str r5, [r2]
    mov r0, #1
    bx lr /* return back to ROM code */

DDR_DEST_ADDR: .word TEXT_BASE
COPY_SIZE:     .word _end_of_copy - TEXT_BASE + CONFIG_FLASH_HEADER_OFFSET
BOOT_DATA:     .word TEXT_BASE
               .word _end_of_copy - TEXT_BASE + CONFIG_FLASH_HEADER_OFFSET
               .word 0
#endif

Hi Huang,

I'm confused. The only difference between this code and what you posted before is the expanded headers, and I don't think those are the problem. You are still using the same code to redirect the periph clk, which I already told you doesn't work. Can you comment on the items I raised about the use of 0x60324 to accomplish that? Thanks, FM

Also, I was seeing the same problems. Results: 480MHz operation is good, but 400MHz failed. In u-boot, stepping 528MHz --> 480MHz --> 400MHz with the clk command succeeds. The failure at 400MHz is thought to be caused by something that happens in the BOOTROM.
<DCD Values for 480MHz Clock>

# 480MHZ
#CBCDR, 480MHz
MXC_DCD_ITEM(39, CCM_BASE_ADDR + 0x14, 0x02018d00)
#CBCMR, 480MHz
MXC_DCD_ITEM(40, CCM_BASE_ADDR + 0x18, 0x00020324)
MXC_DCD_ITEM(41, MMDC_P0_BASE_ADDR + 0x800, 0xA1390003)
MXC_DCD_ITEM(42, MMDC_P0_BASE_ADDR + 0x80c, 0x001F001F)
MXC_DCD_ITEM(43, MMDC_P0_BASE_ADDR + 0x810, 0x001F001F)
MXC_DCD_ITEM(44, MMDC_P1_BASE_ADDR + 0x80c, 0x001F001F)
MXC_DCD_ITEM(45, MMDC_P1_BASE_ADDR + 0x810, 0x001F001F)
MXC_DCD_ITEM(46, MMDC_P0_BASE_ADDR + 0x83c, 0x4333033F)
MXC_DCD_ITEM(47, MMDC_P0_BASE_ADDR + 0x840, 0x032C031D)
MXC_DCD_ITEM(48, MMDC_P1_BASE_ADDR + 0x83c, 0x43200332)
MXC_DCD_ITEM(49, MMDC_P1_BASE_ADDR + 0x840, 0x031A026A)
MXC_DCD_ITEM(50, MMDC_P0_BASE_ADDR + 0x848, 0x4D464746)
MXC_DCD_ITEM(51, MMDC_P1_BASE_ADDR + 0x848, 0x47453F4D)
MXC_DCD_ITEM(52, MMDC_P0_BASE_ADDR + 0x850, 0x3E434440)
MXC_DCD_ITEM(53, MMDC_P1_BASE_ADDR + 0x850, 0x47384839)
MXC_DCD_ITEM(54, MMDC_P0_BASE_ADDR + 0x81c, 0x33333333)
MXC_DCD_ITEM(55, MMDC_P0_BASE_ADDR + 0x820, 0x33333333)
MXC_DCD_ITEM(56, MMDC_P0_BASE_ADDR + 0x824, 0x33333333)
MXC_DCD_ITEM(57, MMDC_P0_BASE_ADDR + 0x828, 0x33333333)
MXC_DCD_ITEM(58, MMDC_P1_BASE_ADDR + 0x81c, 0x33333333)
MXC_DCD_ITEM(59, MMDC_P1_BASE_ADDR + 0x820, 0x33333333)
MXC_DCD_ITEM(60, MMDC_P1_BASE_ADDR + 0x824, 0x33333333)
MXC_DCD_ITEM(61, MMDC_P1_BASE_ADDR + 0x828, 0x33333333)
MXC_DCD_ITEM(62, MMDC_P0_BASE_ADDR + 0x8b8, 0x00000800)
MXC_DCD_ITEM(63, MMDC_P1_BASE_ADDR + 0x8b8, 0x00000800)
#MMDC init
MXC_DCD_ITEM(64, MMDC_P0_BASE_ADDR + 0x004, 0x0002002D)
MXC_DCD_ITEM(65, MMDC_P0_BASE_ADDR + 0x008, 0x00333040)
MXC_DCD_ITEM(66, MMDC_P0_BASE_ADDR + 0x00c, 0x3F4352F3)
MXC_DCD_ITEM(67, MMDC_P0_BASE_ADDR + 0x010, 0xB66D8B63)
MXC_DCD_ITEM(68, MMDC_P0_BASE_ADDR + 0x014, 0x01FF00DB)
#MDMISC
MXC_DCD_ITEM(69, MMDC_P0_BASE_ADDR + 0x018, 0x00001740)
MXC_DCD_ITEM(70, MMDC_P0_BASE_ADDR + 0x01c, 0x00008000)
MXC_DCD_ITEM(71, MMDC_P0_BASE_ADDR + 0x02c, 0x000026D2)
MXC_DCD_ITEM(72, MMDC_P0_BASE_ADDR + 0x030, 0x00431023)
MXC_DCD_ITEM(73, MMDC_P0_BASE_ADDR + 0x040, 0x00000027)
MXC_DCD_ITEM(74, MMDC_P0_BASE_ADDR + 0x000, 0x831A0000)
# Initialize 2GB DDR3 - Micron MT41J128M
MXC_DCD_ITEM(75, MMDC_P0_BASE_ADDR + 0x01c, 0x02008032)
MXC_DCD_ITEM(76, MMDC_P0_BASE_ADDR + 0x01c, 0x00008033)
MXC_DCD_ITEM(77, MMDC_P0_BASE_ADDR + 0x01c, 0x00048031)
MXC_DCD_ITEM(78, MMDC_P0_BASE_ADDR + 0x01c, 0x05208030)
MXC_DCD_ITEM(79, MMDC_P0_BASE_ADDR + 0x01c, 0x04008040)
MXC_DCD_ITEM(80, MMDC_P0_BASE_ADDR + 0x020, 0x00007800)
MXC_DCD_ITEM(81, MMDC_P0_BASE_ADDR + 0x818, 0x00022227)
MXC_DCD_ITEM(82, MMDC_P1_BASE_ADDR + 0x818, 0x00022227)
MXC_DCD_ITEM(83, MMDC_P0_BASE_ADDR + 0x004, 0x0002556D)
MXC_DCD_ITEM(84, MMDC_P0_BASE_ADDR + 0x404, 0x00011006)
MXC_DCD_ITEM(85, MMDC_P0_BASE_ADDR + 0x01c, 0x00000000)

How did you use the clk command? I was under the impression that this was not implemented in the mx6q u-boot. Thanks, FM

Hi, FM

Here is the clk command usage. (Stepping 528 --> 480 --> 400 is OK, but going directly down to 400 failed.)

MX6Q U-Boot > help clk
clk - Clock sub system
Usage:
clk - Setup/Display clock
clk - Display all clocks
clk core <core clock in MHz> - Setup/Display core clock
clk periph <peripheral clock in MHz> - Setup/Display peripheral clock
clk ddr <DDR clock in MHz> - Setup/Display DDR clock
clk nfc <NFC clk in MHz> - Setup/Display NFC clock
Example:
clk - Show various clocks
clk core 665 - Set core clock to 665MHz
clk periph 600 - Set peripheral clock to 600MHz
clk ddr 166 - Set DDR clock to 166MHz

1. Down to 480MHz:

MX6Q U-Boot > clk periph 480
source=pll3 CBCMR=00020324 CBCDR=02018
              : 60000000Hz
ipg per clock : 60000000Hz
uart clock    : 80000000Hz
cspi clock    : 60000000Hz
ahb clock     : 120000000Hz
axi clock     : 240000000Hz
emi_slow clock: 120000000Hz
ddr clock     : 480000000Hz
usdhc1 clock  : 198000000Hz
usdhc2 clock  : 198000000Hz
usdhc3 clock  : 198000000Hz
usdhc4 clock  : 198000000Hz
nfc clock     : 11000000Hz

2. Down to 400MHz:
MX6Q U-Boot > clk periph 400
source=pll2_pfd_400m
CBCMR=00060324
CBCDR=00018 : 49500000Hz
ipg per clock : 49500000Hz
uart clock : 80000000Hz
cspi clock : 60000000Hz
ahb clock : 99000000Hz
axi clock : 198000000Hz
emi_slow clock: 99000000Hz
ddr clock : 396000000Hz
usdhc1 clock : 198000000Hz
usdhc2 clock : 198000000Hz
usdhc3 clock : 198000000Hz
usdhc4 clock : 198000000Hz
nfc clock : 11000000Hz

Thanks for this. I was using 'clk ddr ...' and that did not change the ddr clk. Interesting that 'clk periph ...' changes the ddr clk when there is a specific subcommand to set the ddr clk. When I do 'clk periph 480' it seems to take. When I do 'clk periph 400' the board hangs. I do not get the source, CBCMR and CBCDR output. Did you modify your code to do that?
Thanks, FM

Yes, I modified the source code, as below.
u-boot: u-boot-2009.08
file: cpu/arm_cortexa8/mx6/generic.c

static int config_periph_clk(u32 ref, u32 freq)
{
    u32 cbcdr = readl(CCM_BASE_ADDR + CLKCTL_CBCDR);
    u32 cbcmr = readl(CCM_BASE_ADDR + CLKCTL_CBCMR);
    u32 pll2_freq = __decode_pll(BUS_PLL2, CONFIG_MX6_HCLK_FREQ);
    u32 pll3_freq = __decode_pll(USBOTG_PLL3, CONFIG_MX6_HCLK_FREQ);

    if (freq >= pll2_freq) {
        printf("source=pll2\n");
        /* PLL2 */
        writel(cbcmr & ~MXC_CCM_CBCMR_PRE_PERIPH_CLK_SEL_MASK,
               CCM_BASE_ADDR + CLKCTL_CBCMR);
        writel(cbcdr & ~MXC_CCM_CBCDR_PERIPH_CLK_SEL,
               CCM_BASE_ADDR + CLKCTL_CBCDR);
    } else if (freq < pll2_freq && freq >= pll3_freq) {
        printf("source=pll3\n");
        /* PLL3 */
        writel(cbcmr & ~MXC_CCM_CBCMR_PERIPH_CLK2_SEL_MASK,
               CCM_BASE_ADDR + CLKCTL_CBCMR);
        writel(cbcdr | MXC_CCM_CBCDR_PERIPH_CLK_SEL,
               CCM_BASE_ADDR + CLKCTL_CBCDR);
    } else if (freq < pll3_freq && freq >= PLL2_PFD2_FREQ) {
        printf("source=pll2_pfd_400m\n");
        /* 400M PLL2 PFD */
        cbcmr = (cbcmr & ~MXC_CCM_CBCMR_PRE_PERIPH_CLK_SEL_MASK) |
_FREQ && freq >= PLL2_PFD0_FREQ) {
        printf("source=pll2_pfd_325m\n");
        /* 352M PLL2 PFD */
        cbcmr = (cbcmr & ~MXC_CCM_CBCMR_PRE_PERIPH_CLK_SEL_MASK) |
_DIV_FREQ) {
        printf("source=pll2_pfd_200m\n");
        /* 200M PLL2 PFD */
        cbcmr = (cbcmr & ~MXC_CCM_CBCMR_PRE_PERIPH_CLK_SEL_MASK) | (3 <<
    {
        printf("Frequency requested not within range [%d-%d] MHz\n",
               PLL2_PFD2_DIV_FREQ / SZ_DEC_1M, pll2_freq / SZ_DEC_1M);
        return -1;
    }
    puts("\n"); // bcchae
    printf("CBCMR=%08lx\n", readl(CCM_BASE_ADDR + CLKCTL_CBCMR));
    printf("CBCDR=%08lx\n", readl(CCM_BASE_ADDR + CLKCTL_CBCDR));
    return 0;
}

Hi byungchul, I have a couple of questions for you.

Hi, FM
The answers:
OS: Linux 3.0.35, no Android
Board: SabreSD
As the cpu clock & pll1 are different, don't mind these values. I modified the init cpu clock for fast booting. OS booting failed at the 400MHz clock, but the memory test is good in u-boot.
https://community.nxp.com/t5/i-MX-Processors/Slowing-DDR3-clock-for-i-MX6Q/m-p/282524/highlight/true
DFS-N: Client failback should be enabled on the following namespace
Published: April 27, 2010. Updated: June 30, 2010.
Client computers could experience slower response times if they fail over to a remote folder target and do not fail back to the local folder target when it comes back online. Use DFS Management to enable client failback on the namespace, using one of the following procedures.
- To enable client failback for a namespace root by using the Windows interface:
Click Start, point to Administrative Tools, and then click DFS Management.
In the console tree, under the Namespaces node, right-click a namespace, and then click Properties.
On the Referrals tab, select the Clients fail back to preferred targets check box.
- To enable client failback for a namespace root by using a command line:
To open an elevated Command Prompt window, click Start, point to All Programs, click Accessories, right-click Command Prompt, and then click Run as administrator.
Type: Dfsutil property targetfailback enable \\namespace
For failback to work, client computers must meet the requirements that are listed in the following topic: Review DFS Namespaces Client Requirements.
https://technet.microsoft.com/en-us/library/ff633410(v=ws.10).aspx
14 December 2011 13:44 [Source: ICIS news]
DUBAI (ICIS)--Saudi Arabia-based Petro Rabigh plans to switch one of its linear low density polyethylene (LLDPE) plants at Rabigh to easy-processing polyethylene (EPPE) production in 2013, a company source said on Wednesday.
The Sumitomo Chemical and Saudi Aramco joint venture complex currently has two LLDPE plants at Rabigh, each with a 300,000 tonne/year capacity, the company source said.
Petro Rabigh’s EPPE plant will have the capacity to produce 200,000-300,000 tonnes/year of material, he said. EPPE resin can substitute low density PE (LDPE) in many applications, he added.
He was speaking on the sidelines of the 6th GPCA (Gulf Petrochemicals & Chemicals Association) Forum, which is being held in Dubai.
The GPCA Forum has the theme: “Moving Downstream – Creating Value and Sustainable Growth.”
For more on the 6th GPCA Forum visit ICIS.
http://www.icis.com/Articles/2011/12/14/9516595/gpca-11-petro-rabigh-to-switch-lldpe-unit-to-eppe-in-2013.html
csKeyValuePair Class Reference
A Key Value pair. More...
#include <cstool/keyval.h>

- Query whether this pair is an "editor-only" pair. They're marked as such in world files and are normally not kept in memory. Implements iKeyValuePair. Definition at line 75 of file keyval.h.
- Get the key string of the pair. Implements iKeyValuePair.
- Get a value string from the pair. Implements iKeyValuePair.
- Get the 'value' string of the pair. This is the same as calling 'GetValue ("value")'. Implements iKeyValuePair.
- Get a list of the names of values in the pair. Implements iKeyValuePair.
- Implements iKeyValuePair. Definition at line 65 of file keyval.h.
- Set the key string of the pair. Implements iKeyValuePair.
- Set a value string of the pair. Implements iKeyValuePair.
- Set the value string of the pair. This is the same as calling 'SetValue ("value", value)'. Implements iKeyValuePair.

The documentation for this class was generated from the following file: cstool/keyval.h
Generated for Crystal Space 2.0 by doxygen 1.6.1
http://www.crystalspace3d.org/docs/online/new0/classcsKeyValuePair.html
Isn't ignoring an unknown signature once worse than marking as trusted any signature that comes by? Shouldn't this signature be in archlinux-keyring?

Search Criteria
Package Details: liquidwar6 0.6.3902-1
Dependencies (9)
- curl (curl-http2-git, curl-git, curl-ssh)
- glu
- gtk2 (gtk2-patched-gdkwin-nullcheck, gtk2-patched-filechooser-icon-view,.

gna commented on 2013-07-17 12:22
After downgrading libpng to 1.5.9:
In file included from main.c:27:0:
lib/liquidwar6.h:33:22: fatal error: libguile.h: not found
#include <libguile.h>
^
compilation terminated.
make[3]: *** [main.o] Error 1

gna commented on 2013-07-17 12:13
Build fails: Liquid War 6 needs libpng 1.2 1.3 1.4 or 1.5 ()

Zann commented on 2012-03-05 04:52
This time updated Release also, sorry!

Zann commented on 2012-03-04 02:18
Updated

matse commented on 2012-02-23 19:00
Package doesn't build with pacman 4 anymore, please remove the brackets "(" ")" in the "install=" line.

Zann commented on 2011-12-20 23:20
Updated

Zann commented on 2011-10-04 10:56
Updated

Zann commented on 2011-08-09 21:41
Hi, tuxitop. I've disabled mod-http in 0.0.10beta

tuxitop commented on 2011-08-02 23:39
it produces a curl/types.h error. a patch is needed to remove the "#include <curl/types.h>" from src/lib/cli/mod-http/mod-http-internal.h

Author key - DE3F2BCDFD409E94 - is revoked.
https://aur.archlinux.org/packages/liquidwar6/?comments=all
07 May 2009 21:19 [Source: ICIS news]
SAO PAULO (ICIS news)--"The project is part of a $200m investment plan that aims to boost output capacity of acrylates, methacrylates, sulphates and styrene monomers produced by Unigel," said Unigel strategic planning director Fabio Terzian on the sidelines of the Brasilplast trade show.
Unigel will increase capacity for various chemicals to more than 1m tonnes/year combined from 590,000 tonnes/year of chemicals, Terzian said.
Production capacity at Unigel's ammonium sulphate plant in Candeias,
Capacity at Unigel's styrene monomer units in the states of Bahia and
Unigel's capacities of acrylates and methacrylates produced in
Unigel added it will start production of hydrogen cyanide (HCN) and acetone cyanohydrin (ACH) at its Candeias unit by September. The materials will be used as feedstocks for its methacrylates products, but it did not offer volume details.
Unigel's plants in
Brasilplast continues through Friday.
http://www.icis.com/Articles/2009/05/07/9214395/unigel-to-double-brazils-output-capacity-in-2009.html
High Accuracy Remote Data Logging Using Multimeter/Arduino/pfodApp

Introduction: High Accuracy Remote Data Logging Using Multimeter/Arduino/pfodApp
Updated 26th April 2017: Revised circuit and board for use with 4000ZC USB meters.
No Android coding required.
This instructable shows you how to access a wide range of high accuracy measurements from your Arduino and also send them remotely for logging and plotting. For High Speed Data Logging (2000 samples/sec) see this instructable, Remote High Speed Data Logging using Arduino/GL AR150/Android/pfodApp.
The AtoD converter built into the Arduino has poor accuracy, typically +/-10%, and very limited range, typically 0 to 5V DC volts only. Using a simple circuit and library, you can feed your Arduino with high accuracy auto-ranging measurements from a multimeter with an optically isolated RS232 connection. Having the measurements available to your sketch lets you control outputs based on the values. This tutorial also covers sending the measurement remotely, via WiFi, Bluetooth, Bluetooth Low Energy or SMS, to an Android mobile for display, logging and plotting using pfodApp.
This instructable uses an Arduino Mega2560 5V board, which you can pair with a wide variety of communication shields: Ethernet, WiFi, Bluetooth V2 (classic), Bluetooth LE or SMS. The interface hardware and library presented here can also be used with 3.3V Arduino compatible boards. As well as the Mega2560 you can use a wide variety of other boards, such as an UNO with an Ethernet shield, an ESP8266 based board (stand-alone), a board with integrated Bluetooth Low Energy like the Arduino 101, or boards that connect to the communication sub-system using SPI such as the RedBear BLE shield and Adafruit's Bluefruit SPI boards. pfodDesignerV2 supports all of these board combinations and will generate the code for them. The limiting condition is that you need to have a free Hardware Serial to connect to this Multimeter RS232 shield.
The circuit and code presented here work with a number of multimeters. A readily available, inexpensive one is the Tekpower TP4000ZC, also known as the Digitek DT-4000ZC. Multimeters that work with this circuit and library include: Digitek DT-4000ZC, Digitech QM1538, Digitech QM1537, Digitek DT-9062, Digitek INO2513, Digitech QM1462, PeakTech 3330, Tenma 72-7745, Uni-Trend UT30A, Uni-Trend UT30E, Uni-Trend UT60E, Voltcraft VC 820, Voltcraft VC 840.

Step 1: This tutorial has two parts:
The first part covers the hardware interface to the multimeter and the code library using an Arduino Mega. If you only want to get the measurement into your Arduino, this is all you need. The second part covers sending the measurement to a remote Android mobile for display, logging and plotting. In this example we will use a Bluetooth shield and generate the basic sketch using pfodDesignerV2, but you can also generate code for WiFi, Ethernet, Bluetooth Low Energy and SMS connections using pfodDesignerV2. The multimeter library is then added to the basic sketch to complete the code. No Android coding is required to display, log and plot the reading. Everything is controlled from your Arduino code.
This project is also available on-line at
For a remote head-up display of the multimeter, see this instructable, Arduino Data Glasses For My Multimeter by Alain.

Step 2: The Multimeter
The multimeters used in this tutorial are the inexpensive (~US$40) Tekpower TP4000ZC (also known as the Digitek DT-4000ZC) and the older Digitech QM1538, which is no longer sold. Both these meters are visually the same and use the same RS232 encoding of the measurement.
Here are the specs for the Tekpower TP4000ZC:
DC Voltage: 400mV/4/40/400V ±0.5%+5, 600V ±0.8%
AC Voltage: 4/40/400V ±0.8%+5, 400mV/600V ±1.2%+5
DC Current: 400/4000μA ±2.0%+5, 40/400mA ±1.5%+5, 4/10A ±2%+5
AC Current: 400/4000μA ±2.5%+3, 40/400mA ±2%+5, 4/10A ±2.5%+5
Resistance: 400Ω/4/40/400kΩ/4MΩ ±1%+5, 40MΩ ±2%+5
Capacitance: 40nF ±3.5%+10, 400nF/4/40μF ±3%+5, 100μF ±3.5%+5
Frequency: 10Hz-10MHz ±0.1%+5
Duty Cycle: 0.1%-99.9% ±2.5%+5
Temperature: 0°C - +40°C ±3°C, -50°C - +200°C ±0.75% ±3°C, +200°C - +750°C ±1.5% ±3°C, resolution 0.1°C via the included thermocouple probe.
The multimeter's RS232 connection is only one way and you cannot change the multimeter's settings remotely, so you need to manually select the type of measurement. However, the meter is auto-ranging, and the Voltage and Current settings handle both AC and DC.

Step 3: The RS232 Interface Hardware
There are two interfaces. The newer Digitek DT-4000ZC and Tekpower TP4000ZC meters come with a USB cable, while the older Digitek QM1538 was provided with an RS232 9-pin D connector cable. The circuit above (pdf version) shows how to connect the multimeter's opto coupler to drive an Arduino RX serial pin.
Note: This circuit has been updated to add another protection resistor, R2, for the Digitek DT-4000ZC and Tekpower TP4000ZC meters. This resistor was not included on the 9-pin D connector board shown above.
Digitek DT-4000ZC and Tekpower TP4000ZC
For the Digitek DT-4000ZC and Tekpower TP4000ZC, you need a male-to-male 3.5mm audio cable (stereo or mono will do) and a 3.5mm socket.
Digitek QM1538
For the older Digitek QM1538, you need a 9-pin D socket. The 9-pin D connector has offset pins that will not plug into the prototype shield. Just cut off the row of 4 pins so you can solder the connector to the board, as the circuit only uses pins in the second row of 5 pins.
The mounting legs were bent over to let the connector lie flat, and the connector was secured to the prototype shield using 2-part epoxy glue ("Araldite"). The connector pin layout shown above is from this site. The 10K resistor that comes mounted inside the connector of supplied RS232 cables (connected between pins 2 and 3) is not required for this project.
Connecting the signal to an Arduino RX pin
This circuit will work for both 5V and 3.3V Arduino boards. Here we are using a Mega2560 (5V) Arduino and mounted the circuit on a prototype shield as shown above. A flying lead is used to connect TP1 on the shield to a Serial1 RX, pin D19, on the Mega2560.
Note about Software Serial: Initially this shield was paired with an UNO using Software Serial on pins 10,11. However, when paired with the Bluetooth Shield on Serial at 9600baud, some receive bytes were lost. Moving the RS232 to a Hardware Serial connection solved this issue. So for reliable remote displaying and logging, if you are using a communication shield that connects via serial, you need a board with two or more Hardware Serials, such as the Mega2560. Other alternatives are an UNO with an Ethernet shield, an ESP8266 based board (stand-alone), a board with integrated Bluetooth Low Energy like the Arduino 101, or boards that connect to the communication sub-system using SPI, such as the RedBear BLE shield and Adafruit's Bluefruit SPI boards. pfodDesignerV2 supports all of these boards and will generate the code for them.

Step 4: The PfodVC820MultimeterParser Library
The Tekpower TP4000ZC and a number of other multimeters do not send the measurement via RS232 as ASCII text; rather they send 14 bytes with bits set depending on which segments of the LCD display are illuminated. The encoding of the 14 bytes is explained in this pdf. The pfodVC820MeterParser.zip library decodes these bytes into text strings and floats. (The VC820 refers to one of the meters that uses this encoding.)
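As a rough illustration of that 14-byte framing — a hypothetical sketch, not the pfodVC820MeterParser code — the common FS9721-style encoding these meters use places the byte's 1-based position (1..14) in the upper nibble and 4 bits of LCD segment data in the lower nibble, which is what lets a receiver resynchronise after a dropped byte:

```typescript
// Hypothetical frame check (an assumption based on the widely documented
// FS9721-style protocol; the real library's internals may differ).
// Byte i of a 14-byte packet carries its 1-based index in the upper
// nibble and segment bits in the lower nibble.
function isValidFrame(frame: number[]): boolean {
  if (frame.length !== 14) return false; // a packet is exactly 14 bytes
  for (let i = 0; i < 14; i++) {
    // the upper nibble must match the byte's 1-based position
    if ((frame[i] >> 4) !== i + 1) return false;
  }
  return true;
}

// Once a frame validates, only the lower nibbles carry display data.
function segmentNibbles(frame: number[]): number[] {
  return frame.map((b) => b & 0x0f);
}
```

A serial reader would shift incoming bytes through a 14-byte window and only hand the window to a decoder when `isValidFrame()` passes; a single corrupted or dropped byte then costs at most one packet.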
Also see QtDMM for Windows, Mac and Linux: computer software that handles a wide range of multimeters.
There is a minimal example, MeterParserExample.ino, of using the pfodVC820MeterParser library. Connect the meter to a 2400baud serial connection and then call haveReading() each loop to process the bytes. haveReading() will return true when there is a new complete reading parsed. Then you can call getAsFloat() to get the value (scaled) as a float, or getAsStr() to get the reading with scaling for printing and logging. There are other methods available to access the type of measurement, getTypeAsStr() and getTypeAsUnicode(), as well as other utility methods.

#include "pfodVC820MeterParser.h"
pfodVC820MeterParser meter;

void setup() {
  Serial.begin(74880);
  Serial1.begin(2400);
  meter.connect(&Serial1);
}

float reading;
void loop() {
  if (meter.haveReading()) {
    reading = meter.getAsFloat(); // use this for Arduino calculations
    Serial.print("Reading with units: ");
    Serial.print(meter.getDigits());
    Serial.print(meter.getScalingAsStr());
    Serial.print(meter.getTypeAsStr());
    Serial.print(F(" = as float printed (6 digits):"));
    Serial.println(reading, 6);
    Serial.println("Time(sec) and Reading as string for logging");
    Serial.print(((float)millis()) / 1000.0);
    Serial.print(",sec,");
    Serial.print(meter.getAsStr());
    Serial.print(',');
    Serial.println(meter.getTypeAsStr());
  }
}

With the meter set on Deg C and using the thermocouple probe, the example sketch gives this output on the Arduino IDE serial monitor:
Reading with units: 25.7C = as float printed (6 digits):25.700000
Time(sec) and Reading as string for logging
2.40,sec,25.7,C

Step 5: Part 2 – Remote Display, Logging and Plotting
This part of the tutorial covers how to remotely display, log and plot the meter reading on your Android mobile. pfodApp is used to handle the display, logging and plotting on your Android mobile. No Android programming is required.
All the displays, logging and plotting are completely controlled by your Arduino sketch. The free pfodDesignerV2 app lets you design your Android menu and chart and then generates the Arduino sketch for you. pfodApp supports a number of connection types: Ethernet, WiFi, Bluetooth V2 (classic), Bluetooth LE or SMS. Another tutorial uses an Arduino 101 (Bluetooth Low Energy) for data logging and plotting; other Bluetooth Low Energy boards are also supported. A further tutorial uses SMS to connect to pfodApp, and you can use pfodDesignerV2 to add data logging and charting to that SMS example. pfodDesignerV2 also has options to generate Arduino code for a Bluetooth V2 (classic) shield to connect to pfodApp. For this example we will use an Iteadstudio Bluetooth Shield V2.2 that connects to the Arduino Mega2560 via a 9600baud serial connection. Using the free pfodDesignerV2 app we set up a simple menu that just has a label to show the meter reading and one button to open the chart. This page has a number of pfodDesignerV2 tutorials. Once we have a basic sketch, we will modify it to add the meter parser and to send the meter reading and data for logging and charting.
Designing the Menu
In this section we will design an Android/pfodApp menu that will display the meter reading and a button to open a chart of the readings. The readings are also saved to a file on the Android mobile.

Step 6: Adding a Label
Install the free pfodDesignerV2 and start a new menu. The default Target is Serial at 9600baud, which is what is needed for the Iteadstudio Bluetooth Shield V2.2. If you are connecting using a Bluetooth Low Energy device, WiFi or SMS, then click on Target to change the selection. To add a label to display the meter reading, click on Add Menu Item and scroll down to select Label. Choose a suitable font size and colours. Leave the Text as Label, as we will modify the generated code to replace this with the meter measurement later.
Here we have set font size to +7, font colour to Red and background to Silver. Go back to the Editing Menu_1 screen and set a Refresh Interval of 1 sec. This will make pfodApp re-request the menu about once a second to display the latest reading in the Label.

Step 7: Adding a Chart Button
Click on Add Menu Item again to add a Chart Button. Edit the text of the Chart Button to something suitable, e.g. just “Chart”, and choose a font size and colours. Then click on the “Chart” button to open the plot editing screen. There will only be one plot, so click on the Edit Plot 2 and Edit Plot 3 buttons, scroll down, and click on Hide Plot for each of them. Edit the chart label to something suitable, e.g. “Multimeter”. No need to change any of the other plot settings, as we will be modifying the sketch to send a different y-axis label depending on the multimeter setting. Finally, go back to Editing Menu_1 and Edit Prompt; this sets the text at the bottom of the menu and the overall menu background colour. Here we have set the prompt to “Remote Multimeter” with font size +3 and background colour Silver. You can now go back to Editing Menu_1 and click Preview Menu to preview the menu design. If you don't like the design, you can change it before you generate the code. If you want to space out the Label from the button, you can add some blank labels as described here. Adding a Chart and Logging Data in How to Display/Plot Arduino Data on Android is another tutorial on pfodDesignerV2/pfodApp datalogging and charting.

Step 8: Generating the Arduino Sketch
To generate the Arduino code that will display this menu in pfodApp, go back to the Editing Menu_1 screen, scroll down, and click the Generate Code button. Click the “Write Code to file” button to output the Arduino sketch to the /pfodAppRawData/pfodDesignerV2.txt file on your mobile. Then exit pfodDesignerV2.
Transfer the pfodDesignerV2.txt file to your PC using either a USB connection or a file transfer app, like WiFi File Transfer Pro. A copy of the generated sketch is here, pfodDesignerV2_meter.txt. Load the sketch into your Arduino IDE and program your Uno (or Mega) board. Then add the Iteadstudio Bluetooth Shield V2.2. Then install pfodApp on your Android mobile and create a new Bluetooth connection named, for example, Multimeter. See pfodAppForAndroidGettingStarted.pdf for how to create new connections. When you then use pfodApp to open the Multimeter connection you will see your designed menu. Opening the Chart does not display anything interesting yet, because we have not added in the multimeter hardware/software.

Step 9: Adding the Multimeter
We will modify the generated sketch to add the multimeter parser and to send its data to your Android mobile. The complete modified sketch is here, pfod_meter.ino. These modifications add the multimeter parser and a 5sec timer. If there is no new valid reading in that time, then the sketch stops sending data and updates the Android/pfodApp display to "- - -". As the meter's manual selection is changed, the chart labels are updated, but you need to exit the chart and re-select it to see the new labels. On the other hand, the meter reading is automatically updated every second. Finally, pfodApp handles Unicode by default, so when displaying the meter reading the method getTypeAsUnicode() is used to return the Unicode for ohms, Ω, and degsC, ℃, for the meter display. The chart button displays an updating chart of the readings. The chart data, in CSV format, is also saved to a file on your Android mobile under /pfodAppRawData/Mulitmeter.txt for later transfer to your computer and import into a spreadsheet for further calculations and charting.
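The logged lines follow the time,"sec",value,unit shape shown by the serial-monitor output earlier (e.g. 2.40,sec,25.7,C). As a small illustration — a hypothetical helper for post-processing on the PC side, not part of pfodApp — such a file can be parsed back into records before charting or importing:

```typescript
// Hypothetical parser for log lines of the form "2.40,sec,25.7,C"
// (the field layout is assumed from the example output above).
interface MeterRecord {
  seconds: number; // elapsed time since the sketch started
  value: number;   // the meter reading
  unit: string;    // e.g. "C", "V", "mV"
}

function parseLogLine(line: string): MeterRecord | null {
  const parts = line.trim().split(",");
  // Reject anything that does not match the expected 4-field shape,
  // so malformed rows are skipped rather than corrupting the data set.
  if (parts.length !== 4 || parts[1] !== "sec") return null;
  const seconds = Number(parts[0]);
  const value = Number(parts[2]);
  if (Number.isNaN(seconds) || Number.isNaN(value)) return null;
  return { seconds, value, unit: parts[3] };
}

function parseLog(text: string): MeterRecord[] {
  return text
    .split("\n")
    .map(parseLogLine)
    .filter((r): r is MeterRecord => r !== null);
}
```

Feeding the resulting records to any charting library then reproduces the plot pfodApp shows, without depending on the mobile.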
Step 10: The Sketch Modifications in Detail
- Download the pfodVC820MeterParser.zip library, then open the Arduino IDE and click on Sketch → Include Library → Add .ZIP Library to add this library to your IDE.
- Add the pfodVC820MeterParser library to the sketch. Click on Sketch → Include Library → pfodVC820MeterParser. This will add the include statements at the top of the sketch.
- Edit
pfodParser_codeGenerated parser("V1");
to
pfodParser_codeGenerated parser("");
This disables the menu caching in pfodApp so your menu changes will be displayed. You can revert to "V3" when you have finished all your changes, to re-enable menu caching.
- Add these lines to create the objects for the software serial and the multimeter.
pfodVC820MeterParser meter;
- At the end of setup() add
Serial1.begin(2400);
meter.connect(&Serial1);
- Above loop() add
unsigned long validReadingTimer = 0;
const unsigned long VALID_READINGS_TIMEOUT = 5000; // 5secs
bool haveValidReadings = true; // set to true when have valid readings
int measurementType = meter.NO_READING;
and at the top of the loop() add
if (meter.haveReading()) {
  if (meter.isValid()) {
    validReadingTimer = millis();
    haveValidReadings = true;
  }
  int newType = meter.getType();
  if (measurementType != newType) { // output new datalogging titles
    parser.print(F("sec,"));
    parser.println(meter.getTypeAsStr());
  }
  measurementType = newType;
}
if ((millis() - validReadingTimer) > VALID_READINGS_TIMEOUT) {
  haveValidReadings = false; // no new valid reading in last 5 sec
}
- Further down in loop() replace
parser.print(F("{=Multimeter|time (secs)|Plot_1~~~||}"));
with
parser.print(F("{=Multimeter|time (secs)|Meter Reading~~~"));
parser.print(meter.getTypeAsStr());
parser.print(F("||}"));
- At the bottom of loop() replace
sendData();
with
if (haveValidReadings) {
  sendData();
}
- In sendData() replace
parser.print(',');
parser.print(((float)(plot_1_var-plot_1_varMin)) * plot_1_scaling + plot_1_varDisplayMin);
with
parser.print(',');
parser.print(meter.getAsStr());
- In sendMainMenu() replace
parser.print(F("~Label"));
with
parser.print('~');
if (haveValidReadings) {
  parser.print(meter.getDigits());
  parser.print(meter.getScalingAsStr());
  parser.print(meter.getTypeAsUnicode());
} else {
  parser.print(F("- - -"));
}
- In sendMainMenuUpdate() add
parser.print(F("|!A"));
parser.print('~');
if (haveValidReadings) {
  parser.print(meter.getDigits());
  parser.print(meter.getScalingAsStr());
  parser.print(meter.getTypeAsUnicode());
} else {
  parser.print(F("- - -"));
}
to update the reading when using menu caching.

Conclusion
This tutorial has shown how to connect an inexpensive multimeter to an Arduino Mega2560 via RS232. Many other boards are also supported. The pfodVC820MeterParser library parses the multimeter data into floats for Arduino calculations and strings for display and logging. pfodDesignerV2 was used to generate a basic sketch to display the multimeter reading and show a plot of the values on an Android mobile using pfodApp. No Android programming is required. To this basic sketch the multimeter handling was added, and the final sketch displays the current multimeter reading on your Android mobile as well as plotting the readings and logging them to a file on your mobile for later use.

Nice project, thanks for sharing!
http://www.instructables.com/id/High-Accuracy-Remote-Data-Logging-Using-Mulitmeter/
I would like to modify the Pythonid plugin to create a new module type, "Python Module" analogous to "Java Module." I want the user to be able do the following: File | New Module -> The "Add Module" dialog appears. I want "Python Module" to appear as a choice. How do I get my module type to show up in this list? The user will select "Python Module" and then choose "next" to get to the module name/content root step. The user then enters a module name and a chooses an existing directory for the content root. The user presses "Next." IntelliJ will look for source files, but instead of Java source files, it should look for Python source files (*.py) How do I tell IntelliJ what types of files to search for? Thanks, Brian I would like to modify the Pythonid plugin to create a new module type, "Python Module" analogous to "Java Module." I want the user to be able do the following: Take a look at the J2ME plugin sources that come with the plugin-dev package. Brian Smith wrote: Ok, looking at the J2ME example is what I have done, and it is a bit tedious... ;o) Since I have just done that, I thought I give you some pointers (if you still need them). You need to create a subclass of a ModuleBuilder (the J2ME is using the JavaModuleBuilder, but the sources of JavaModuleBuilder is available, so you make an equiv for Python). You then create a public class PuthonModuleType extends ModuleType]]> with a public default constructor. You need to override the public ModuleWizardStep[] createWizardSteps( WizardContext wizardContext, OsgiModuleBuilder moduleBuilder, ModulesProvider modulesProvider ) And each ModuleWizardStep is an implementation that you provide, which will provide the JComponent via public JComponent getComponent() I hope this helps. Cheers Niclas Hello Brian, BS> I would like to modify the Pythonid plugin to create a new module BS> type, "Python Module" analogous to "Java Module." 
I want the user to BS> be able do the following: Don't forget to apply for project membership at and check in your changes once you get something working. :) -- Dmitry Jemerov Software Developer "Develop with Pleasure!" Thank you for your answers. Dmitry, I will send in my changes to Pythonid when they are done. But, I am mostly adding this feature to Pythonid because I want a similar feature for a different plugin that I am designing. My knowledge of Python is lacking, so my contributions to Pythonid will probably not end up being very useful. Okay, I got to the point where I can successfully create new new Module. Besides the excellent points mentioned above, here are two hurdles I ran up against. It was relatively painless. I wanted to use ProjectWizardStepFactory#createNameAndLocationStep, but I couldn't because it requires a JavaModuleBuilder. As a result, I ended up creating my own work-alike. I also want to use the standard "Paths" tab in my module settings dialog, but I cannot find a way to do this unless I subclass JavaModuleType. Is it possible otherwise? Am I going to run into a lot of difficulties if my ModuleType is not a subclass of JavaModuleType? With Pythonid, navigation using CtrlClick works but CtrlN and CtrlShiftAlt+N do not. Are these limitations of the Open API or are they just not implemented (yet) for Pythonid? Thanks, Brian Hello Brian, BS> With Pythonid, navigation using CtrlClick works but CtrlN and BS> CtrlShiftAlt+N do not. Are these limitations of the Open API or BS> are they just not implemented (yet) for Pythonid? The latter. OpenAPI support is available, but the current version of Pythonid doesn't build any global index for class or symbol navigation. -- Dmitry Jemerov Software Developer "Develop with Pleasure!" Please, take a look for JSSymbolContributor in JavaScript module sources Brian Smith wrote: -- Best regards, Maxim Mossienko IntelliJ Labs / JetBrains Inc. "Develop with pleasure!"
https://intellij-support.jetbrains.com/hc/en-us/community/posts/207042445-Create-a-non-Java-project
CC-MAIN-2019-18
refinedweb
647
63.29
Currently our application cannot retain its state if refreshed. One neat way to get around this problem is to store the application state to localStorage and then restore it when we run the application again. If you were working against a back-end, this wouldn't be a problem. Even then, having a temporary cache in localStorage could be handy. Just make sure you don't store anything sensitive there, as it is easy to access.

localStorage#

localStorage is a part of the Web Storage API. The other half, sessionStorage, exists only as long as the browser is open, while localStorage persists even in this case. They both share the same API, as discussed below:

- storage.getItem(k) - Returns the stored string value for the given key.
- storage.removeItem(k) - Removes the data matching the key.
- storage.setItem(k, v) - Stores the given value using the given key.
- storage.clear() - Empties the storage contents.

It is convenient to operate on the API using your browser developer tools. In Chrome especially the Resources tab is useful as it allows you to inspect the data and perform direct operations on it. You can even use the storage.key and storage.key = 'value' shorthands in the console for quick tweaks.

localStorage and sessionStorage can use up to 10 MB of data combined. Even though they are well supported, there are certain corner cases that can fail. These include running out of memory in Internet Explorer (fails silently) and failing altogether in Safari's private mode. It is possible to work around these glitches, though.

You can support Safari in private mode by trying to write into localStorage first. If that fails, you can use Safari's in-memory store instead, or just let the user know about the situation. See Stack Overflow for details.

localStorage#

To keep things simple and manageable, we will implement a little wrapper for storage to wrap the complexity. The API will consist of get(k) to fetch items from the storage and set(k, v) to set them.
Given the underlying API is string based, we'll use JSON.parse and JSON.stringify for serialization. Since JSON.parse can fail, that's something we need to take into account. Consider the implementation below:

app/libs/storage.js

```javascript
export default storage => ({
  get(k) {
    try {
      return JSON.parse(storage.getItem(k));
    }
    catch(e) {
      return null;
    }
  },
  set(k, v) {
    storage.setItem(k, JSON.stringify(v));
  }
});
```

The implementation is enough for our purposes. It's not foolproof and it will fail if we put too much data into a storage. To overcome these problems without having to solve them yourself, it would be possible to use a wrapper such as localForage to hide the complexity.

FinalStore#

Just having means to write to and read from localStorage won't do. We still need to connect our application to it somehow. State management solutions provide hooks for this purpose. Often you'll find a way to intercept them somehow. In Alt's case that happens through a built-in store known as FinalStore. We have already set it up at our Alt instance. What remains is writing the application state to localStorage when it changes. We also need to load the state when we start running the application. In Alt terms these processes are known as snapshotting and bootstrapping.

An alternative way to handle storing the data would be to take a snapshot only when the window gets closed. There's a window-level beforeunload hook that could be used. This approach is brittle, though. What if something unexpected happens and the hook doesn't get triggered for some reason? You'll lose data.

We can handle the persistency logic at a separate module dedicated to it. We will hook it up at the application setup and off we go. Given it can be useful to be able to disable snapshotting temporarily, it can be a good idea to implement a debug flag. The idea is that if the flag is set, we'll skip storing the data.
This is particularly useful if we manage to break the application state dramatically during development somehow, as it allows us to restore it to a blank slate easily through localStorage.setItem('debug', 'true') (or localStorage.debug = true), localStorage.clear(), and finally a refresh.

Given bootstrapping could fail for an unknown reason, we catch a possible error. It can still be a good idea to proceed with starting the application even if something horrible happens at this point. The snapshot portion is easier as we just need to check for the debug flag there and then set data if the flag is not active. The implementation below illustrates the ideas:

app/libs/persist.js

```javascript
export default function(alt, storage, storageName) {
  try {
    alt.bootstrap(storage.get(storageName));
  }
  catch(e) {
    console.error('Failed to bootstrap data', e);
  }

  alt.FinalStore.listen(() => {
    if(!storage.get('debug')) {
      storage.set(storageName, alt.takeSnapshot());
    }
  });
}
```

You would end up with something similar in other state management systems. You'll need to find equivalent hooks to initialize the system with data loaded from localStorage and write the state there when it happens to change.

We are still missing one part to make this work. We'll need to connect the logic with our application. Fortunately there's a suitable place for this, the setup. Tweak it as follows:

app/components/Provider/setup.js

```javascript
import storage from '../../libs/storage';
import persist from '../../libs/persist';
import NoteStore from '../../stores/NoteStore';

export default alt => {
  alt.addStore('NoteStore', NoteStore);

  persist(alt, storage(localStorage), 'app');
}
```

If you try refreshing the browser now, the application should retain its state. Given the solution is generic, adding more state to the system shouldn't be a problem. We could also integrate a proper back-end through the same hooks if we wanted. If we had a real back-end, we could pass the initial payload as a part of the HTML and load it from there.
This would avoid a round trip. If we rendered the initial markup of the application as well, we would end up implementing a basic universal rendering approach. Universal rendering is a powerful technique that allows you to use React to improve the performance of your application while gaining SEO benefits.

Our persist implementation isn't without its flaws. It is easy to end up in a situation where localStorage contains invalid data due to changes made to the data model. This brings you to the world of database schemas and migrations. The lesson here is that the more you inject state and logic to your application, the more complicated it gets to handle.

NoteStore#

Before moving on, it would be a good idea to clean up NoteStore. There's still some code hanging around from our earlier experiments. Given persistency works now, we might as well start from a blank slate. Even if we wanted some initial data, it would be better to handle that at a higher level, such as application initialization. Adjust NoteStore as follows, dropping the uuid import and the hard-coded notes:

app/stores/NoteStore.js

```javascript
import NoteActions from '../actions/NoteActions';

export default class NoteStore {
  constructor() {
    this.bindActions(NoteActions);

    this.notes = [];
  }
  ...
}
```

This is enough for now. Now our application should start from a blank slate.

Even though we ended up using Alt in this initial implementation, it's not the only option. In order to benchmark various architectures, I've implemented the same application using different techniques. I've compared them briefly below:

Compared to Flux, Facebook's Relay improves on the data fetching department. It allows you to push data requirements to the view level. It can be used standalone or with Flux depending on your needs. Given it's still largely untested technology, we won't be covering it in this book yet.
Relay comes with special requirements of its own (GraphQL compatible API). Only time will tell how it gets adopted by the community. In this chapter, you saw how to set up localStorage for persisting the application state. It is a useful little technique to know. Now that we have persistency sorted out, we are ready to start generalizing towards a full blown Kanban board. This book is available through Leanpub. By purchasing the book you support the development of further content.
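To see the chapter's persistence idea in isolation outside a browser, here is a small sketch that exercises the storage wrapper from above against an in-memory stand-in for localStorage (the kind of fallback mentioned for Safari's private mode). The memoryStorage helper is my own illustrative addition, not part of the book's code.

```javascript
// A hypothetical in-memory stand-in for localStorage (same string-based API).
const memoryStorage = () => {
  const data = {};
  return {
    getItem: k => (k in data ? data[k] : null),
    setItem: (k, v) => { data[k] = String(v); },
    removeItem: k => { delete data[k]; },
    clear: () => { Object.keys(data).forEach(k => delete data[k]); }
  };
};

// The chapter's wrapper: JSON (de)serialization with a guard around JSON.parse.
const storage = backend => ({
  get(k) {
    try {
      return JSON.parse(backend.getItem(k));
    }
    catch(e) {
      return null;
    }
  },
  set(k, v) {
    backend.setItem(k, JSON.stringify(v));
  }
});

const store = storage(memoryStorage());
store.set('app', {notes: [{task: 'Learn React'}]});

console.log(store.get('app'));     // { notes: [ { task: 'Learn React' } ] }
console.log(store.get('missing')); // null
```

In the browser you would pass the real localStorage object instead of memoryStorage(); because both expose the same getItem/setItem API, the wrapper doesn't care which backend it receives.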
https://survivejs.com/react/implementing-kanban/implementing-persistency/
CC-MAIN-2020-40
refinedweb
1,383
59.8
For example, if you have an array [20, 6, 7, 8, 50] and I pass the value 21, it should return the sub-array [6, 7, 8]. Note: the numbers summed should be in sequence.

[20, 6, 7, 8, 50]
21 - [6, 7, 8]
13 - [6, 7]
50 - [] - because it is not in sequence

I tried the following, but it's not working:

```javascript
function sumoftwonumer(arr, s) {
  let hashtable = {}
  let sum = []
  for(let i = 0; i < arr.length; i++) {
    let innerRequiredValue = s - arr[i]
    if(hashtable[innerRequiredValue.toString()] !== undefined) {
      sum.push([arr[i], innerRequiredValue])
    }
    hashtable[innerRequiredValue.toString()] = arr[i]
  }
  console.log(hashtable)
  return sum
}
```

You can try using a nested for loop. The time complexity in the worst case will be O(n^2). Make sure to see the second method, it's more efficient.

```javascript
let arr = [20, 6, 7, 8, 50]

function findNums(arr, sum){
  let temp = 0;
  for(let i = 0; i < arr.length; i++){
    temp = arr[i];
    for(let j = i + 1; j < arr.length; j++){
      temp += arr[j];
      if(temp === sum) return arr.slice(i, j + 1);
      if(temp > sum) break;
    }
  }
  return [];
}

console.log(findNums(arr, 21)) // [6, 7, 8]
console.log(findNums(arr, 13)) // [6, 7]
console.log(findNums(arr, 50)) // []
```

A better solution is to create a variable temp and start adding elements to it one by one. When it becomes greater than the given sum, remove elements from the start of the window, then check for a match.

```javascript
let arr = [20, 6, 7, 8, 50]

function findNums(arr, sum){
  let temp = arr[0];
  let low = 0;
  for(let i = 1; i < arr.length; i++){
    temp += arr[i];
    // shrink from the left while the window overshoots
    while(temp > sum && low < i){
      temp -= arr[low];
      low++;
    }
    // check after shrinking; require length >= 2, matching the examples
    if(temp === sum && low < i) return arr.slice(low, i + 1);
  }
  return [];
}

console.log(findNums(arr, 21)) // [6, 7, 8]
console.log(findNums(arr, 13)) // [6, 7]
console.log(findNums(arr, 50)) // []
```

The problem with negative numbers is that we can't just remove elements from the start when temp (the current sum) becomes greater than the given sum, because we may have negative numbers later in the array which could bring temp back down to the sum. For negative numbers you need to create an object to keep track of already calculated prefix sums.
```javascript
let arr = [4, -2, -3, 5, 1, 10]

function findNums(arr, sum){
  let obj = {}   // maps each prefix sum to the index where it occurred
  let temp = 0;
  for(let i = 0; i < arr.length; i++){
    temp += arr[i];
    if(temp === sum) return arr.slice(0, i + 1);
    if(obj[temp - sum] !== undefined){
      // skip single-element matches, as in the examples above
      if(obj[temp - sum] + 1 !== i){
        return arr.slice(obj[temp - sum] + 1, i + 1);
      }
    }
    obj[temp] = i;
  }
  return [];
}

console.log(findNums(arr, 0))  // [-2, -3, 5]
console.log(findNums(arr, -1)) // [4, -2, -3]
console.log(findNums(arr, 13)) // [-3, 5, 1, 10]
console.log(findNums(arr, 1))  // [-2, -3, 5, 1]
```

One possibility, depending upon how you want to use it, is to do as your attempt seems to try, and create a hashmap of partial sums, returning a function that will look your value up in that hashmap. Here, I do that, making the function curried so that you don't have to do all the work of creating a hashmap for one set of numbers on every call. The initial call is O(n^2), but subsequent calls are merely O(1). This style is useful if you want to search multiple values on the same set of numbers. If you only have one value to search, then a technique like the second one in Maheer Ali's answer would be more efficient.
```javascript
const range = (lo, hi) =>
  [...Array(hi - lo)].map((_, i) => lo + i)

const sum = (nbrs) =>
  nbrs.reduce((a, b) => a + b, 0)

const findSequentialSum = (nbrs) => {
  const subtotals = range(0, nbrs.length)
    .flatMap(i => range(i + 2, nbrs.length + 1).map(j => nbrs.slice(i, j)))
    // .reduce((a, b) => a.concat(b), [[]])
    .reduce((a, sub, _, __, tot = sum(sub)) => ({...a, [tot]: a[tot] || sub}), {})
  return (total) => subtotals[total] || []
}

const nbrs = [20, 6, 7, 8, 50]
const search = findSequentialSum(nbrs)

console.log(
  search(21), //~> [6, 7, 8]
  search(13), //~> [6, 7]
  search(50), //~> []
)
```

The two helper functions should be straightforward:

```javascript
range(3, 10) //=> [3, 4, 5, 6, 7, 8, 9]
```

and

```javascript
sum([2, 5, 10]) //=> 17
```

If flatMap isn't available in your environment, then you can replace

```javascript
.flatMap(i => range(i + 2, nbrs.length + 1).map(j => nbrs.slice(i, j)))
```

with

```javascript
.map(i => range(i + 2, nbrs.length + 1).map(j => nbrs.slice(i, j)))
.reduce((a, b) => a.concat(b), [[]])
```

Also, the 2 in here

```javascript
i => range(i + 2, nbrs.length + 1)
```

requires sequences to be at least two entries long. If you wanted 50 to return [50] (which seems more logical to me), then you could just replace that 2 with a 1. Or you could replace it with something larger, if you wanted to find only sub-sequences of, say, length 5 or more.

Finally, if you wanted to return all subsequences that added to your total, it would be a simple modification, replacing:

```javascript
[tot]: a[tot] || sub
```

with

```javascript
[tot]: (a[tot] || []).concat([sub])
```
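As a cross-check for any of the implementations above, a simple brute-force reference can be useful. This harness is my own addition (not part of any answer): it tries every contiguous subarray of length at least two, so it works with negative numbers too, at the cost of O(n^2) time.

```javascript
// Brute-force reference: first contiguous subarray of length >= 2
// whose elements sum to the target. No early break, so negatives are fine.
function bruteForce(arr, target) {
  for (let i = 0; i < arr.length; i++) {
    let acc = arr[i];
    for (let j = i + 1; j < arr.length; j++) {
      acc += arr[j];
      if (acc === target) return arr.slice(i, j + 1);
    }
  }
  return [];
}

const nums = [20, 6, 7, 8, 50];
console.log(bruteForce(nums, 21)); // [6, 7, 8]
console.log(bruteForce(nums, 13)); // [6, 7]
console.log(bruteForce(nums, 50)); // []
console.log(bruteForce([4, -2, -3, 5, 1, 10], 0)); // [-2, -3, 5]
```

Comparing a faster implementation's output against this reference on random inputs is a quick way to catch window-shrinking or prefix-sum bookkeeping mistakes.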
https://cmsdk.com/node-js/sum-of-n-number-inside-array-is-x-number-and-create-subset-of-result.html
CC-MAIN-2019-22
refinedweb
814
75.71
One of the quickest wins for improving the speed of a Django app is to fix the "oops, didn't mean to" inefficient database reads that slipped into a codebase unnoticed over the years. Indeed, as the codebase matures and the team members change, old inefficient ORM queries can gather dust and be overlooked.

Simple ORM queries like:

```python
def check_hounds():
    queryset = HoundsModel.objects.all()
    if len(queryset) > 2:
        return "oh no. Run!"
```

This reads every record from the database and checks the length at application level. Much less efficient than:

```python
def check_hounds():
    if HoundsModel.objects.count() > 2:
        return "oh no. Run!"
```

It's easy to scoff at such inefficiencies. "I would never do that". Well, maybe not. But what about some dev in the past that was rushing to meet a deadline, or an 11pm code reviewer equally surviving on coffee during crunch time? And you're now responsible for that code!

Slightly less textbook, but also just as easy to both overlook and improve:

```python
def write_condolence_letters(visited_coops):
    queryset = ChickenModel.objects.all()
    for chicken in queryset:
        if chicken.coop.pk in visited_coops:
            return f"dear {chicken.coop.owner_name}..."
        else:
            ...
```

Spot the problems? Well, looping over all() will read everything from the db in one go. If there are billions of chickens then expect performance issues. For that we can use iterator() to chunk the reads: 2000 (default) records pulled from the db at a time.

Additionally, chicken.coop.pk does an additional database read for each chicken, because relationships are lazily evaluated by default: coop is only read from the db when it is accessed via chicken.coop. For this particular field we can use the foreign key column Django already stores on the row: Django creates an _id field for each related field. So this can be:

```python
def write_condolence_letters(visited_coops):
    queryset = ChickenModel.objects.all()
    for chicken in queryset.iterator():
        if chicken.coop_id in visited_coops:  # the FK column, no extra query
            return f"dear {chicken.coop.owner_name}..."
        else:
            ...
```
What if we're working with a field on a related model other than the pk? Sure, coop_id is created by Django for us, but what if we needed to access other fields such as chicken.coop.owner_name? For that we can use select_related or prefetch_related (depending on whether it's a ForeignKey, OneToOne, etc. relationship):

```python
def write_condolence_letters(visited_coops):
    queryset = ChickenModel.objects.all()
    for chicken in queryset.select_related('coop'):
        if chicken.coop.pk in visited_coops:
            return f"dear {chicken.coop.owner_name}..."
        else:
            ...
```

Now the related coop will be pulled out during the same read as chicken. Notice though that iterator() is no longer used: iterator() discards the results cache that prefetch_related relies on, so the two strategies don't combine. Only with context can we really say which performance improvement strategy is needed.

Does your codebase have old inefficient ORM lookups? It's easier to spot new inefficient ORM lookups during code review, but what about those already in your codebase? I can check that for you at django.doctor, or review your GitHub PRs. Or try out Django refactor challenges.
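The N+1 pattern described above can be illustrated without a database at all. The sketch below is a toy stand-in for a lazily evaluated ORM; none of these class names are Django APIs, it only mimics the distinction between reading the stored FK column (coop_id) and triggering a per-row fetch of the related object (coop).

```python
# Toy model of lazy related-field access; names are illustrative, not Django's.

class Counter:
    def __init__(self):
        self.queries = 0

class LazyChicken:
    """chicken.coop_id is stored on the row; chicken.coop costs an extra query."""
    def __init__(self, counter, coop_id, owner):
        self._counter = counter
        self.coop_id = coop_id        # the FK column the ORM stores locally
        self._owner = owner

    @property
    def coop(self):
        self._counter.queries += 1    # lazy fetch: one extra SELECT per access
        return {"pk": self.coop_id, "owner_name": self._owner}

counter = Counter()
chickens = [LazyChicken(counter, i % 3, f"owner-{i % 3}") for i in range(10)]

# Checking the FK column directly: no extra queries.
visited = {0, 2}
hits = [c for c in chickens if c.coop_id in visited]
print(counter.queries)   # 0

# Checking via the related object: one query per chicken, the N+1 problem.
hits = [c for c in chickens if c.coop["pk"] in visited]
print(counter.queries)   # 10
```

In real Django, select_related('coop') collapses those ten lazy fetches into a single JOIN, which is exactly the fix the post's final snippet applies.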
https://practicaldev-herokuapp-com.global.ssl.fastly.net/djangodoctor/spotting-inefficient-database-reads-in-django-5a9l
CC-MAIN-2021-04
refinedweb
514
59.3
Details
- Type: Bug
- Status: Resolved
- Priority: Blocker
- Resolution: Fixed
- Affects Version/s: 1.1
- Component/s: Replication
- Labels: None
- Skill Level: Dont Know

Description.

Issue Links
- relates to COUCHDB-477 Add database uuid's - Open

I'd rather see the replicator respect a naming field. CouchDB core places no specific significance on the replication documents, treating them as any other document in the _local/ namespace. And we've heard a number of times, especially in the last few weeks, about how config files (and specifically ones that change) are ugly.

I proposed UUIDs at the DB level a long time ago for this reason, and relatedly so that you could trigger push/pull without using HTTP at both sides and have it be the same replication (discover that you are a host via UUID). Configuration files would work to make it server level, but it's hacky. DB-level is a bad idea because sysadmins might copy couch files. Ultimately, the client should identify the replication if it can. I think that's the best solution.

Couch already has a unique identifier: its URL. I'm not sure another per-server UUID buys you much. With a second unique identifier, you can determine that (this couch has moved on the Internet || this couch has a configuration error). Maybe the first condition is more likely. Meh. Couch already has a per-server UUID: _config/couch_httpd_auth/secret. Hashing this value can produce a public unique ID. Normal users may not read the config, so you have to expose this value to them somehow. Is that a new global handler? Or is it added to the {"couchdb":"Welcome"} response? For these reasons, I'm not sure the solution is to assign a random UUID to the Couch.

@Jason URLs are nice and all, but they're fairly unstable. What's the URL for a couch on a phone? Or after it changes networks? Or if it's behind a proxy? And how would that couch figure out what its URL is?
Randall's comment about copying files is why it's not viable to just generate a UUID for each database: it's quite likely that a sysadmin would copy that file, which would result in two dbs having the same UUID, making the UUID not so unique.

@Jason: The reason I filed this bug report is that the URL of a database in Couchbase Mobile isn't unique. It's barely meaningful at all; it's of the form "" where "nnnnn" is an unpredictable port number assigned by the TCP stack at launch time. That URL isn't even exposed to the outside world because the CouchDB server is only listening on the loopback interface. The state that's being represented by a replication ID is the contents of the database (at least as it was when it last synced.) So it seems the ID should be something that sticks to the database itself, not to any ephemeral manifestation like a URL.

@Paul, I overlooked or misunderstood Randall's point about copying the UUID. Makes sense.

@Paul and Jens, totally: URLs are unstable. And, in general, on the web, if a URL changes, that is a huge piece of information. That is a huge hint that you need to re-evaluate your assumptions. Exhibit A: the replicator starts from scratch if you give it a new URL. That is the correct general solution. Your requirement to change the couch URL and everything is just fine and the application doesn't have to worry--I think that requirement is asking CouchDB to be a bad web citizen. In other words, IMO you have an application-level problem, not a couch problem, except that even if you could determine everything is alright, you can't specify the replication ID.

I think Damien just time-traveled again.

BTW, @Jens, you can already give Apache CouchDB a UUID for your own needs. If you have admin access, just put it anywhere, e.g. /_config/jens/uuid. If an unprivileged client must know this UUID, then change the "Welcome" message in _config/httpd_global_handlers/%2f. Any unprivileged client can find it there.
(You could also place the UUID in the authentication realm in _config/httpd/WWW-Authenticate, which actually sounds sort of appropriate.) Finally, if you are willing to run a fork (which mobile Couchbase is), then you could add any feature you need which doesn't make sense for Apache CouchDB.

Link: relevant older ticket report.

Hi, Jens. Yes, I take your point to stick to the specific topic. Thanks. To summarize my long rants: A Couch UUID already exists. A database UUID won't work. Allowing the client to provide the replication ID would be great!

The following patch solves the issue by using the same approach as the auth cookie handler: using a node uuid which is stored in the .ini config. The new replication ID generation no longer uses the hostname and port pair, but instead this uuid. It falls back to the current method when searching for existing checkpoints, however (thanks to Randall's replication ID upgrade scheme). Besides mobile, desktop couch (Ubuntu One) is another example where the port is dynamic. I've also seen a desktop running normal CouchDB where the ID for a replication was different the first time it was generated from subsequent times - I suspect it was an issue with inet:gethostname/0 but was unable to prove it.

Personally I don't think using the .ini for a single parameter is that bad (unlike things like OAuth tokens, for example), as others commented here before - an alternative would be to store it in a local document of the replicator database, for example. I would very much like to see this issue fixed in 1.2.0.

I've picked up this task and prepared a branch with my work (1259-stable_replication_ids). This patch goes beyond Filipe's original and applies to the source and target as well. If couchdb believes either is using a dynamic port (it's configurable, but defaults to any port in the 14192-65535 range), it will ask the server for its uuid (emitted in the / welcome message).
If it has one, it uses that instead of the port (specifically it uses {Scheme, UserInfo, UUID, Path} instead of the full url).

I was reading the comment, and I'm not sure it's a problem. It's expected in a p2p world and master-master replication that the node at the end could change. What doesn't change is the data inside the dbs. IMO it is the role of the application to handle port, IP, and DNS changes. Not couchdb. In short I would close this issue as a wontfix.

Benoit: I think you're misunderstanding the issue. This isn't something about P2P. It's just that if the local CouchDB is not listening on a fixed port number, then replications made by that server to/from another server aren't handled efficiently ... even though the local server's port number has nothing at all to do with the replication (since it's the one making the connections.) In a real P2P case, this change makes even more sense, because the addresses of the servers are unimportant – as you said, it's the databases and their data that are the important thing. A UUID helps identify those.
As a protocol the replication shouldn't force this way imo. Then how does the application do this? I haven't seen any API for it. Also, I don't see how this has anything to do with the case of a leaf node running a server that happens to have a dynamic port assignment. The port this node is running on has absolutely nothing to do with the replication. In the (now obsolete) case of Couchbase Mobile, the server doesn't even accept external requests, so its port number is purely an internal affair. I still have a feeling that we're talking about completely different things. But I can't really figure out what your point is... application that handle the routing policy. A server id can be used as an adress point but some could decide to use other parameters to associate this replication id to a node. Maybe instead of relying on a fixed node id, we could however introduce an arbitrary remote address id fixed on the node that handles the replication. This remote ID will be associated by to an host , port. The layer assigning this address id to the host/port could be switchable, so the application or user could introduce easily its own routing policy. Which could be relying on a server id or not.. > btw your example with couchbase mobile is generally solved by using the replication in pull mode only. So here it is relying on a fixed address to replicate. sigh No, that is exactly the situation I was describing. The mobile client is the only one initiating replication; it pulls from the central (fixed-address) server, and pushes changes to it. So the mobile device's IP address and port are irrelevant, right? Except that the replication state document stored in local has an ID based on several things _including the local server's address and port number. So the effect is that, every time the app launches, all the replication state gets lost/invalidated, and it has to start over again the next time it replicates. 
TouchDB doesn't have this problem because I didn't write it with this design flaw Instead every local database has a UUID as suggested here, and that's used as part of the key. Except if you are using either the same port on each devices (which is generally what does an application) or the ephemeral port "0" which is also the same then for reach replication. Also in that case you will also have to handle the full address change and security implications. Relying on a unique ids to continue the replication may be a design flaw or at least a non expected/wanted behaviour. What will prevents an hostile node to connect back to your node with the same id? How do you invalidate it? I am pretty sure anyway we should let to the application or final user the choice of the routing policy. I will have a patch for that later in the day. It seems like overkill to get the IANA to assign a fixed port number to an app that doesn't even listen on any external interfaces! The only use of that port is (was) over the loopback interface to let the application communicate with CouchDB. Passing zero for the port in the config file didn't make the problem go away. Apparently the replicator bases the ID on the actual random port number in use, not on the fixed 0 from the config. > What will prevents an hostile node to connect back to your node with the same id? Hello, are you listening at all to what I'm writing? I've already said several times that the app does not accept incoming connections at all. It only makes outgoing connections to replicate. And in general: obviously in any real P2P app there would be actual security measures in place to authenticate connections, most likely by using both server and client SSL certs and verifying their public keys. Once the connection is made, then database IDs can be used to restore the state of a replication. Hello.... are you understanding that this isn't only about your application? Some may have different uses. And different routing policy. 
And this is not the role of couchdb to fix them. If this is true, it explains why someone pointed me to this bug for the two issues I've been experiencing with replication: 1. My permanent replication sessions die almost every single day and come back after I restart CouchDB. 2. Sometimes I end up with more than one replication running for the same configured replica. Note: My local address changes at least twice a day when I go into the office. The other end of my replication doesn't. It's hard to argue that this behavior is desirable. Benoit, are you vetoing this change? If so, please include a reason why improving the hit rate for replication checkpoints should not be included in our next release.. @rnewson I'm confused. How does it improve the hit rate fro replications checkpoints ???? I am -1 on this patch for above reason. Which are likely the first the one i gave when saying won't fix. Changing a port or an IP isn't an innocent event. It has many implications. And such things like using a fixed replication id remove any of its implication and will make some app work difficult. What if the node id is for ex anonymized on each restart ? I think such behaviour should be configurable. Anyway I don't think this ticket should be a blocker. Let discuss it quitely for the 1.4 . Benoit, the only intention of the patch is to improve the hit rate for replication checkpoint documents. If the port of a participant changes, the current replicator will replicate from update_seq 0 because it won't find the checkpoint from the previous port. With this change, the replicator can negotiate a stable value (the UUID) to use in place of the unstable value (port), and thus find a valid checkpoint document. For large databases, this can be hugely valuable. If you are saying that this breaks replication or eventual consistency, please say so and explain how. The only thing this patch should do is prevent needless and time-consuming replication when a shortcut is available. 
If you feel it doesn't do that, please help me to see why. This ticket is over a year old, I do not want to bump it to 1.4 without a good reason. So i will repeat myself here: - changing a port is not an innocent event. With this patch the replication is just ignoring it. And i currently asking myself who take the responsibility to make sure that this replication can still happen with this environment change. Changing a port changes the condition in which your node is acting. This patch can only work in a trusted environment. Which can be ok but *must* be optional imo. Sorry, Benoit, I simply can't see the security problem you are trying to describe at all. Walk me through a process where a server changes ports and security is violated. As far as I can tell, the only difference this patch makes is that we can resume replication from a checkpoint where we previously couldn't. That a server changes port doesn't invalidate the checkpoint's integrity, nor do I trust the server more or less based on its numeric port value. If I want to trust the machine, I need TLS, which is completely orthogonal to this ticket. @rnewson part of the patch that make the checkpoint locally constant is fine for me (but not relying on a remote uuid) though i don't think we need a server id for that but that can probably be changed later. Anyway following the discussion (and that will be post 1.3) I think that what we should really do here is handling the following scenario: 1. port, ip changed, port crashed, restared -> a checkpooint is orphelin 2. node come back 3. Instead of restarting the replication , ask what to do and mark the replication task as paused By "ask" I mean having a switchable middle ware that could handle the scenario or just let the application do what it want. Since that is a braking change I guess it could only happen on 2.0 if some want.. I think{Scheme, UserInfo, UUID, Path} is necessary, actually, so ignore my second question. 
Scheme is justified because we should allow an http and an https replication that are otherwise identical to run concurrently (though it's a bit silly). UserInfo is mixed in because different users might be able to write only subsets of the data, and Path ensures that checkpoints vary by database name, in the case that multiple sources replicate into the same target (confusing their checkpoints would be very wrong). Jens, will the patch address your issue?

Overall I'm +1 on this approach for enabling faster restarts of replication – I think it's a huge win. I don't see that the behaviour of the new patch changes the security constraints vs today, but I think I see Benoit's point. Today if a replication endpoint changes its ephemeral port # (e.g. expired DHCP lease), the replication will fail and cannot restart until it is deleted and recreated. With the patch, the replication could restart in some situations without requiring active intervention - that's the whole point. So if Dr. Evil has captured the UUID, it might be possible to acquire the replication without the source endpoint being aware. I think this should be addressed post-1.3. The proposed functionality could note that securing replication requires using TLS and appropriate SSL cert checking in both directions, which seems common sense anyway! The Dr. Evil scenario, however, is no different under today's behaviour - if an IP address is hijacked and SSL is not in use, Dr. Evil has your documents.

Dave, if the port of source or target changes, then, no, the replication cannot restart (because the replication task will be trying to contact a server on a port it is not listening on). If a new task is started pointing at the new port, it will simply pick up a valid checkpoint that it otherwise wouldn't.

As an alternative solution, I think we could let the user fix the replication ID by reusing the current _replicator ID or the one used for the _replicate API.
The only problem I see with that is when the user posts two tasks doing the same thing.

I have looked at the patch but I don't really understand what it's doing, both because my Erlang is really weak and because I don't know the internals of CouchDB, so I can't really comment on the code. It does sound like what's being suggested goes beyond what I asked for. This bug is about the local server (the one running the replication) having a different IP address or port than the last time. The suggested patches seem to also cover changes to the remote server's URL. That's an interesting issue but IMHO not the same thing. The point of this bug is that the URL of the local server running the replication is irrelevant to the replication. If I'm opening connections to another server to replicate with it, it doesn't matter what port or IP address I am listening on, because there aren't any incoming connections happening. They don't affect the replication at all. As for Benoit's security issues: replication has no security. Security applies at a more fundamental level of identifying who is connecting and authenticating that principal. You absolutely cannot make security tests based on IP addresses or port numbers.

Jens, yes, the patch goes further than the ticket; I said as much in my first comment when I took the ticket. As you note, it has no security implications. There are three host:port values in play for any one replication task; it seems only a partial solution to fix the stability issue for one of them (though I agree that if we only fixed one, it would be the co-ordinating node). It is true that the host:port of the co-ordinating node does not affect the replication per se, as long as you ignore what would go wrong if two processes were doing the same replication. This is also true of the host:port of the "source" and "target" servers.
I am happy to solve just the initial problem identified in the ticket if that will allow Benoit to retract his veto; however, I felt it important that we were all clear about the security implications here (namely, that there are none) before proceeding. If, as seems the case now, we all agree on the security aspect, I don't see the harm in all three participants having a stable identifier allowing replication checkpoints to be used if a machine changes name or port.

There is a potential security issue in using a remote node ID as in the second part of the patch. Local to local there is none. I am +1 for fixing the issue related to the node doing coordination.

The basic version is committed (where we use a UUID instead of the local hostname:port). Thanks Jens for filing this and the detailed description.

To be more clear, the idea would be to add a UUID to each server and use it as input to the replication ID generation instead of the local port number. It would be something similar to what is done for the cookie authentication handler: if such a UUID doesn't exist in the .ini (replicator section), we generate a new UUID and save it. Damien's thought about allowing the client to name them also sounds very simple - perhaps using the "_replication_id" field in replication documents (which already exists and is currently automatically set by the replication manager).
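The core idea under discussion - deriving the replication checkpoint ID from stable endpoint properties (scheme, userinfo, server UUID, database path) rather than from host:port - can be pictured with a small illustrative sketch. This is not CouchDB's actual Erlang implementation; the function name, field names and use of MD5 are assumptions for illustration only.

```python
import hashlib

def replication_id(source, target):
    """Derive a stable replication checkpoint ID from endpoint properties.

    Each endpoint is a dict with scheme, userinfo, uuid (a stable server
    identifier) and path -- deliberately excluding host and port, so that
    an address or port change does not orphan the checkpoint.
    """
    parts = []
    for ep in (source, target):
        parts.append("|".join([ep["scheme"], ep["userinfo"], ep["uuid"], ep["path"]]))
    return hashlib.md5("::".join(parts).encode("utf-8")).hexdigest()

src = {"scheme": "http", "userinfo": "bob", "uuid": "abc123", "path": "/db"}
tgt = {"scheme": "http", "userinfo": "", "uuid": "def456", "path": "/db"}

before = replication_id(src, tgt)
# Same servers after a DHCP lease change or port move: the ID is
# unchanged, because host and port are not part of the hash input.
after = replication_id(src, tgt)
assert before == after
```

The point of the sketch is simply that every input to the hash is something that survives a restart or an address change, which is what lets the replicator find its old checkpoint document.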
https://issues.apache.org/jira/browse/COUCHDB-1259?focusedCommentId=13089973&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
How does one generate an output signal?

I've been playing with the demo code for our new IND.I/O but can't get it to raise any pins to HIGH. I'm assuming that the CH1, CH2... labels correspond to pins 0, 1... I've stripped the code down to the bare minimum, writing HIGH and LOW to all the available pins, but I don't see voltage on any of the channels. What am I missing? (Code below)

```cpp
#include <Indio.h>
#include <Wire.h>

void setup() {
  // put your setup code here, to run once:
  pinMode(14, OUTPUT);
  pinMode(15, OUTPUT);
  pinMode(16, OUTPUT);
  pinMode(17, OUTPUT);
}

void loop() {
  // put your main code here, to run repeatedly:
  digitalWrite(0, LOW);
  digitalWrite(1, HIGH);
  digitalWrite(2, LOW);
  digitalWrite(3, HIGH);
  digitalWrite(4, LOW);
  digitalWrite(5, HIGH);
  digitalWrite(6, LOW);
  digitalWrite(7, HIGH);
  digitalWrite(8, LOW);
  digitalWrite(9, HIGH);
  digitalWrite(10, LOW);
  digitalWrite(11, HIGH);
  digitalWrite(12, LOW);
  digitalWrite(14, HIGH);
  digitalWrite(15, LOW);
  digitalWrite(16, HIGH);
  digitalWrite(17, LOW);
}
```

Tom, thanks so much. I didn't understand how the I/O worked.

Hi Michael, the IND.I/O uses an expander over I2C to drive the I/O, so you need to use the Indio library to access the analog and digital pins. There is a detailed explanation on the Industruino website. Only the pins on the 14-pin IDC connector (normally used for the Ethernet module) have direct access to the MCU; they can be controlled via standard commands as in your code.
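Based on the Indio library documentation, a sketch along these lines should toggle digital channel CH1 on the expander. The exact channel numbering and any required setup may differ on your board revision, so treat this as a hedged starting point rather than verified demo code.

```cpp
#include <Indio.h>
#include <Wire.h>

void setup() {
  // Channels on the I2C expander are addressed as 1..8 (CH1..CH8),
  // via the Indio library rather than pinMode()/digitalWrite().
  Indio.digitalMode(1, OUTPUT);
}

void loop() {
  Indio.digitalWrite(1, HIGH);  // drive CH1 high
  delay(1000);
  Indio.digitalWrite(1, LOW);   // and low again
  delay(1000);
}
```

With a meter on CH1 you should see the level alternate once per second; if not, double-check the channel-number assumption against the Indio documentation.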
https://industruino.com/forum/help-1/question/how-does-one-generate-an-output-signal-219
12 November 2009 11:05 [Source: ICIS news]

MOSCOW (ICIS news)--Voronezhsintezkauchuk's (Voronezh Rubber Co) synthetic rubber production fell in the January-September period due to adverse market conditions in the first half of 2009, the Russian company said in a statement released on Thursday.

The company's synthetic bivinyl rubber output totalled 51,000 tonnes in the first nine months of 2009, down 44.2% from January-September 2008, the company said. Styrene rubber and latex production amounted to 46,000 tonnes, down 23.1% year on year, while production of other types of synthetic rubber also decreased from the same period last year, according to the statement. The company exports nearly half of its synthetic rubber output.
http://www.icis.com/Articles/2009/11/12/9262976/voronezhsintezkauchuks-9-month-synthetic-rubber-output.html
Fedora Finder finds Fedoras. It provides a CLI and Python module that find and provide identifying information about Fedora images and release trees. Try it out:

```
fedfind images --release 26
fedfind images --release 27 --milestone Beta
fedfind images --release 27 --milestone Beta --compose 1 --respin 1
fedfind images --dist Fedora-Atomic --compose 20180116
fedfind images --compose 20180118
fedfind images --composeid Fedora-27-20171110.n.1
fedfind images --release 27 --label Beta-1.1
fedfind images --release 5 --arch x86_64,ppc
fedfind images --release 15 --search desk
```

Fedora has stable releases, archive releases, really old archive releases, 'milestone' releases (Alpha / Beta), release validation 'candidate' composes, unstable nightly composes, and post-release nightly composes, all in different places, with several different layouts. There is no canonical database of the locations and contents of all the various composes/releases. We in Fedora QA found we had several tools that needed to know where to find various images from various different types of releases, and little bits of knowledge about the locations and layouts of various releases/composes had been added to different tools. fedfind was written to consolidate all this esoteric knowledge in a single codebase with a consistent interface.

fedfind lets you specify a release/compose using five values: 'dist', 'release', 'milestone', 'compose', and 'respin'. It can then find the location for that compose, tell you whether it exists, and give you the locations of all the images that are part of the release and what each image actually contains. As an alternative to this versioning concept, you can also find releases by their Pungi 4 / productmd 'compose ID' or 'compose label' (see examples above).

fedfind runs on Python versions 2.7 and later (including 3.x).
fedfind is packaged in the official Fedora and EPEL repositories: to install on Fedora run dnf install fedfind; on RHEL / CentOS with EPEL enabled, run yum install fedfind. You may need to enable the updates-testing repository to get the latest version.

You can visit the fedfind project page on Pagure, and clone with git clone. Tarballs are released through PyPI. You can use the fedfind CLI from the tarball without installing it, as ./fedfind.py from the root of the tarball (you will need cached_property and six). You can of course copy the Python module anywhere you like and use it in place. To install both CLI and module systemwide, run python setup.py install.

You can file issues and pull requests on Pagure. Pull requests must be signed off (use the -s git argument). By signing off your pull request you are agreeing to the Developer Certificate of Origin.

Some usage of fedfind relies on understanding the 'dist', 'release', 'milestone', 'compose', 'respin' versioning concept, so here is a quick primer. Note that if you intend to use fedfind solely with compose IDs or URLs for modern Pungi 4-generated composes, this section will be of less interest to you and you can probably skip it.

In this section we will write release, milestone, compose, respin, dist quints as (release, milestone, compose, respin, dist), with '' indicating an omitted value, e.g. (22, Beta, TC3, '', 'Fedora') or (22, '', '', '', 'Fedora'). Note 'dist' is usually 'Fedora', and this is its default value, used when it is not explicitly specified. Conceptually, 'dist' should come at the front, but in fact fedfind functions which accept these values tend to place it after the others, for historical reasons (it did not exist when fedfind's versioning scheme was created).

Dist is the term fedfind uses for what pungi and productmd refer to as the shortname; when fedfind was created this didn't really exist, but now the Fedora project produces many more composes than it used to, some with different dists / shortnames.
In the compose with compose ID 'Fedora-Rawhide-20160301.n.0', the dist / shortname is Fedora; in a compose with compose ID 'Fedora-Modular-Rawhide-20170816.n.0', the dist / shortname is Fedora-Modular. Fedora is the dist for all mainline composes, and this is the default value. Fedora-Atomic is the dist for the nightly 'two week Atomic' composes, produced once a day for each of the current stable releases, which at present are found only in their initial location as output by release engineering, and not in their mirrored locations. Similarly, Fedora-Docker and Fedora-Cloud are the dists for the nightly Docker and Cloud composes for the current stable releases. So e.g. (27, '', 20180111, 0, 'Fedora-Atomic') will find the (first) 2018-01-11 nightly two-week Atomic compose for Fedora 27. Fedora-Modular was the dist for modular composes during the Fedora 27 cycle; at the time, Modularity was under development and, for technical reasons, required a separate stream of composes.

FedoraRespin is the dist for the current post-release live respin compose in the live-respins directory; there is only ever one of these at a time (and note that these are only semi-official builds, provided by volunteers as a courtesy - they do not have the status of official composes). Any release or compose passed with the dist FedoraRespin is used as a check: if the existing contents of the live-respins directory don't match the expected release (number) or compose (date), fedfind will raise an exception instead of returning the RespinRelease instance.

Release is usually a Fedora release number, e.g. 27, 15 or 1. The only non-integer value that is accepted is 'Rawhide', for Rawhide nightly composes. These do not, properly speaking, have a definite release number associated with them: Rawhide is a perpetually rolling tree. The canonical versioning for Rawhide nightly composes is (Rawhide, '', YYYYMMDD, N), where N is the respin number.
Note that python-wikitcms uses almost the same versioning concept as fedfind, but Wikitcms 'validation events' for Rawhide nightly composes do have a release number: this is a property of the validation event, not of the compose. Thus there may be a Wikitcms validation event (24, Rawhide, 20151012, 1, 'Fedora') for the fedfind compose (Rawhide, '', 20151012, 1). fedfind and python-wikitcms both recognize this case and will attempt to convert each other's values for convenience.

Milestone indicates the milestone or the type of nightly compose. Valid milestones for current releases are Beta, RC (or Final, which means the same), Branched, and Production. The Alpha milestone existed until Fedora 25, but Fedora 26 and later releases have no Alpha; no Alpha releases can be found any more, as the Fedora 25 and earlier Alphas are now removed. fedfind will accept Rawhide as a milestone and convert it to the release - so ('', Rawhide, YYYYMMDD, N, 'Fedora') is not exactly valid but will be handled by the CLI and the get_release function and converted to (Rawhide, '', YYYYMMDD, N, 'Fedora'). Stable releases do not have a milestone; (23, RC, '', '', 'Fedora') will be accepted by get_release and the CLI but is treated internally as (23, '', '', '', 'Fedora').

The Production milestone indicates a so-called 'production' compose, which will usually also be an Alpha, Beta or Final 'candidate' compose - you may be able to find the same compose in two different places, for instance, with the Production milestone and a date-based compose and respin, or with the Alpha, Beta or Final milestone and a numeric compose and respin. It is approximately the same as the difference between searching for a production compose by compose ID and searching for it by compose label. See more on this in the fedfind / Wikitcms vs. Pungi / productmd versioning section below.
Currently, the milestone value has no meaning for dists other than Fedora and Fedora-Modular; in future we may use it to distinguish between nightly and released two-week Atomic composes. The values Atomic, Docker, Cloud and Respin are accepted for backwards compatibility purposes; they will be translated into a dist value of Fedora-Atomic, Fedora-Docker, Fedora-Cloud or FedoraRespin respectively, and a blank milestone value. Versions of fedfind before 4.0 overloaded the milestone concept to handle these dist values, rather than properly handling them as dists. This functionality may be removed in a future major release.

Compose is the precise compose identifier (in cases where one is needed). For candidate composes it is always 1. For nightly composes it is a date in YYYYMMDD format. Stable releases and milestone releases do not have a compose.

Respin is an integer that's bumped any time a compose which would otherwise have the same version is repeated. The concept is taken from Pungi. If we attempt to build two Rawhide nightly composes on 2016-03-01, for instance, in fedfind's versioning they are (Rawhide, '', 20160301, 0) and (Rawhide, '', 20160301, 1). The corresponding Pungi / productmd 'compose ID'-style versioning is Fedora-Rawhide-20160301.n.0 and Fedora-Rawhide-20160301.n.1. Note that, as a convenience, fedfind will attempt to detect when a 'compose' value is in fact a combined compose and respin, and split it up - so you can specify the compose as 1.7 (for compose 1, respin 7) or 20160330.0 or 20160330.n.0 (both of which will be treated as compose 20160330, respin 0).

The test suite contains a bunch of tests for get_release() which may incidentally serve as further examples of accepted usage. The fedfind CLI and get_release() are designed to guess omitted values in many cases, primarily to aid unattended usage, so e.g.
a script can simply specify ('', Branched, '', '', 'Fedora') to run on the date's latest Branched compose, without having to know what the current Branched release number is. More detailed information on various cases can be found in the get_release() function's docstring.

The fedfind / Wikitcms versioning system was developed prior to the use of Pungi 4 for Fedora composes. The 'respin' concept from Pungi / productmd was then stuffed into the fedfind / Wikitcms versioning concept quite hastily to keep stuff working, and the productmd 'short' concept was also added to fedfind (usually under the name 'dist', which more closely describes its function in Fedora's context). For now, fedfind attempts to remain compatible with its legacy versioning approach as well as possible, while also supporting release identification using productmd versioning concepts. For instance, non-Pungi 4 composes have a sloppily faked-up cid attribute that at least will produce the correct release number when parsed like a 'compose ID', and get_release() can parse most (we aim for 'all', but we're also realists!) compose IDs, compose URLs and compose labels and return appropriate Release instances.

Pungi has several version-ish concepts. The two most important to fedfind are the 'compose ID' and the 'label'. All Pungi composes have a 'compose ID'. Not all have a label - only 'production' composes do (not 'nightly' or 'test' composes). A compose ID looks like Fedora-24-20160301.n.0 (nightly), or Fedora-Rawhide-20160302.t.1 (test), or Fedora-24-20160303.0 (production). The release number, date and type of compose are always indicated somehow. For Fedora purposes the respin value should always be present. There is no kind of 'milestone' indicator.

A label looks like Alpha-1.2, or Beta-13.16. There's a list of supported milestones (including Fedora's Alpha and Beta, but not Final - productmd uses RC instead).
The first number is a public release number (RHEL has numbered milestone releases, unlike Fedora); the second is the respin value, which is considered more of a private/internal property. The system around which the scheme was designed appears to be that multiple "Alpha 1" respins are produced and tested and the final one is released as the public "Alpha 1" - thus the 'respin' concept covers approximately the same ground as Fedora's "TC" and "RC" composes used to.

Mainline Fedora nightly composes are built just as you'd expect: the short name is 'Fedora', the release number is 'Rawhide' or the actual release number, the type is nightly, and the respin value is incremented if multiple composes are run on the same date (usually only if the first fails and we want to fix it before the next day). So Fedora nightly compose IDs look like Fedora-Rawhide-20180119.n.0, for instance.

For milestone releases, Fedora builds 'production' composes with the label release number always set to 1, and the respin number incremented each time a compose is run. Thus for Fedora 24 Alpha validation testing we built Alpha-1.1, Alpha-1.2, Alpha-1.3 and so on, until Alpha-1.7 was ultimately released as the public Alpha release. Each of these composes also had a compose ID in Fedora-24-20160323.0 format.

The post-release nightly composes for Atomic, Cloud and Docker images are built almost as you'd expect, but with their type as 'production', not 'nightly'. So their compose IDs look like Fedora-Atomic-27-20180119.0 or Fedora-Docker-26-20171215.1. Their labels are always RC-(date).(respin), e.g. RC-20180119.0.

Current fedfind should be capable in almost all cases of finding any findable Fedora compose by its compose ID or its label.
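The compose ID shapes described above can be illustrated with a small self-contained sketch. This is not fedfind's actual parser - the function name and regex are invented for illustration - but it shows how the shortname, release, date, type and respin components fit together:

```python
import re

# Hypothetical parser for the compose ID shapes described above:
# <shortname>-<release>-<date>[.<type>].<respin>, where type is
# 'n' (nightly), 't' (test), or absent for production composes.
COMPOSE_ID_RE = re.compile(
    r"^(?P<short>.+)-(?P<release>Rawhide|\d+)-"
    r"(?P<date>\d{8})(?:\.(?P<type>[nt]))?\.(?P<respin>\d+)$"
)

def parse_compose_id(cid):
    match = COMPOSE_ID_RE.match(cid)
    if not match:
        raise ValueError("cannot parse compose ID: %s" % cid)
    typ = {"n": "nightly", "t": "test", None: "production"}[match.group("type")]
    return {
        "short": match.group("short"),
        "release": match.group("release"),
        "date": match.group("date"),
        "type": typ,
        "respin": int(match.group("respin")),
    }

print(parse_compose_id("Fedora-Rawhide-20160302.t.1"))
# {'short': 'Fedora', 'release': 'Rawhide', 'date': '20160302',
#  'type': 'test', 'respin': 1}
```

Note that the greedy shortname group lets multi-part dists like Fedora-Atomic parse correctly, since the release component that follows is constrained to 'Rawhide' or digits.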
The Fedora 'production' composes initially land in one location (on kojipkgs, alongside the nightly Branched and Rawhide composes, in directory names based on the compose ID) and are then mirrored to another location (on alt, in directory names based on the compose label). When you search for a given production / candidate compose, whether you find its kojipkgs location or its alt location depends to some extent on how you search for it. Searching by compose label, or with something like milestone Alpha, compose 1, respin 7, will usually find its alt location (as a Compose class instance). Searching by compose ID, or with something like milestone Production, compose 20160323, respin 0, will usually find its kojipkgs location (as a Production class instance). However, you can set the get_fedora_release argument promote to True when searching by compose ID-ish values, and fedfind will attempt to find the Compose class (alt location) if it can. All fedfind release instances for Pungi 4 composes that actually exist should have the compose's compose ID as their cid attribute. If the compose has a label, it should be available as the label attribute.

The fedfind CLI command gives you URLs for a release's images. For instance, fedfind images -r 25 will print the URLs of all Fedora 25 images. You can filter the results in various ways. For more information, use fedfind -h and fedfind images -h.

The Python module provides access to all fedfind's capabilities:

```python
import fedfind.release

comp = fedfind.release.get_release(release=27)
print(comp.location)
for img in comp.all_images:
    print(img['url'])
```

The main part of fedfind is the Release class, in fedfind.release. The primary entry point is fedfind.release.get_release() - in almost all cases you would start by getting a release using that function, which takes the release, milestone, compose, respin, and dist values that identify a release as its arguments and returns an instance of a Release subclass.
You may also pass url (which is expected to be the /compose directory of a Pungi 4 compose or, as a special case, the directory where the semi-official post-release live respins live), cid (a Pungi 4 compose ID), or label (a Pungi 4 compose label) as an alternative to release/milestone/compose/respin. If you pass a url or cid, fedfind will run a cross-check to ensure the URL or CID of the discovered compose actually matches what you requested, and raise an exception if it does not.

Anyone who used fedfind 1.x may remember the Image class for describing images and the Query class for doing searches. Both of those were removed in fedfind 2.x in favour of productmd-style metadata. All Release instances have a metadata dict which, if the release exists, will contain an images item which is itself a dict containing image metadata in the format of the productmd images.json file. For Pungi 4 releases this is read straight in from images.json; for pre-Pungi 4 releases fedfind synthesizes metadata in approximately the same format.

You can also use the all_images convenience property; this is basically a flattened form of the images metadata. It's a list of image dicts, with each image dict in the same basic form as a productmd image dict, but with a variant entry added to indicate its variant (in the original productmd layout, the image dicts are grouped by variant and then by arch, which is kind of a pain to parse for many use cases). Note that since fedfind 3.1.0, from Fedora 9 onwards, boot.iso files are not included in all_images (or the lower-level all_paths). Since fedfind 3.3.0, image dicts also have url and direct_url entries which provide full HTTPS URLs for the image files (so fedfind consumers no longer have to worry about constructing URLs by combining the release location or alt_location and the image path). url may go through the mirror redirector, which tries to spread load between mirrors; direct_url will always be a direct link.
If the image is in the public mirror system, it will use the mirror. Please use url unless you have a strong reason to use direct_url, to avoid excessive load on the server.

You're expected to roll your own queries as appropriate for your use case. The reason fedfind 1.x had a dedicated query interface was primarily to try and speed things up for nightly composes by avoiding Koji queries where possible and tailoring them where not; since fedfind no longer ever has to perform slow Koji queries to find images, the need for the Query class is no longer there - you can always just operate on the data in metadata['images'] or all_images. Note the image subvariant property is extremely useful for identifying images; you may also want to use the identify_image function from productmd for this purpose.

All methods and functions in fedfind are documented directly: please do refer to the docstrings for information on their purposes. Attributes are documented with comments wherever their purpose is not immediately obvious. All methods, functions, attributes and properties not prefixed with a _ are considered 'public'. Public method and function signatures and all public data types will only change in major releases. fedfind has no control over the form or content of the productmd metadata, so when and how that changes is out of our hands; I will make a best effort to keep the synthesized metadata for old composes broadly in line with whatever current Pungi produces, though it is not perfect now and likely never will be (it's kinda tailored to the information I actually need from it).

fedfind can do some useful stuff that isn't just querying images for releases:

- Release.check_expected() sees if all 'expected' images for the release are present.
- Release.previous_release() takes a cut at figuring out what the 'previous' release was, though this is difficult and may not always work or return what you expect.
- helpers.get_current_release() tells you what the current Fedora release is.
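The flattening that all_images performs on the variant/arch-grouped productmd metadata can be pictured with a small self-contained sketch. The sample data is invented for illustration (real productmd image dicts carry many more keys), and the function is a simplified stand-in, not fedfind's actual code:

```python
def flatten_images(images_metadata):
    """Flatten productmd-style images metadata (grouped by variant,
    then arch) into a flat list, adding a 'variant' key to each image
    dict, in the spirit of fedfind's all_images property."""
    flat = []
    for variant, arches in images_metadata["payload"]["images"].items():
        for arch, imgs in arches.items():
            for img in imgs:
                entry = dict(img)       # don't mutate the source metadata
                entry["variant"] = variant
                flat.append(entry)
    return flat

sample = {
    "payload": {
        "images": {
            "Workstation": {
                "x86_64": [{"path": "Workstation/x86_64/iso/F-27-Live.iso",
                            "arch": "x86_64", "type": "live"}],
            },
            "Server": {
                "armhfp": [{"path": "Server/armhfp/images/F-27.raw.xz",
                            "arch": "armhfp", "type": "raw-xz"}],
            },
        }
    }
}

for img in flatten_images(sample):
    print(img["variant"], img["path"])
```

Once flattened like this, "rolling your own query" is just a list comprehension over the dicts, e.g. filtering on arch, type or variant.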
fedfind has a bunch of knowledge about where Fedora keeps various composes wired in. For Pungi 4 compose types, finding the compose is about all fedfind has to do; it reads all the information about what images are in the compose out from the metadata and exposes it.

For non-Pungi 4 composes (old stable releases) and Pungi 4 composes that are modified and have their metadata stripped (current stable and milestone releases), fedfind uses the imagelist files present on dl.fedoraproject.org, which contain lists of every image file in the entire tree. It finds the images for the specific compose being queried, then produces a path relative to the top of the mirror tree from the result. It can then combine that with a known prefix to produce an HTTPS URL.

For metadata, fedfind first tries to see if the compose was originally produced with Pungi 4 and its metadata is available from PDC. It tries to guess the compose label from the image file names, then a compose ID from the compose label, and then queries PDC for metadata for that compose. If it is successful, it tries to match each image discovered from the imagelist file with an image dict from the original metadata, and combines the two so that the path information is correct but all other information for the image is taken from the original metadata, rather than synthesized by fedfind. For any discovered image for which no matching original image dict is found, and for composes where no original metadata is available at all, fedfind synthesizes productmd-style metadata by analyzing the file path, guessing the properties that can be guessed and omitting others.

In all cases, the result is that metadata and derived properties like all_images are as similar as possible for the different types of compose, so fedfind consumers can interact with non-Pungi 4 composes in the same way as Pungi 4 ones (to the extent of the metadata discovery and synthesis implementations - some stuff just isn't covered by the synthesis).
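The kind of guessing involved in the no-metadata case can be sketched roughly. This is a hypothetical illustration, not fedfind's real synthesis code (which covers many more cases and file-name conventions):

```python
import re

def guess_image_properties(filename):
    """Guess a few productmd-ish properties from a Fedora image file
    name such as 'Fedora-Workstation-Live-x86_64-27-1.6.iso'."""
    props = {}
    # File extension suggests the image format.
    if filename.endswith(".iso"):
        props["format"] = "iso"
    elif filename.endswith(".raw.xz"):
        props["format"] = "raw.xz"
    # Look for a known arch token in the name.
    arch = re.search(r"(x86_64|i386|armhfp|ppc64le|aarch64|s390x)", filename)
    if arch:
        props["arch"] = arch.group(1)
    # A '-Live-' token suggests a live image.
    if "-Live-" in filename:
        props["type"] = "live"
    return props

print(guess_image_properties("Fedora-Workstation-Live-x86_64-27-1.6.iso"))
# {'format': 'iso', 'arch': 'x86_64', 'type': 'live'}
```

Anything the path doesn't reveal is simply omitted, which matches the README's point that the synthesized metadata is a best effort rather than a complete reconstruction.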
With the use of small metadata files (for metadata composes) and still-quite-small image list files that are cached locally (for non-metadata composes), fedfind is much faster than it used to be. It can still take a few seconds to do all its parsing and analysis, though. When used as a module it caches properties and image lists for the lifetime of the release instance.

The image list files are cached (in ~/.cache/fedfind) based on the 'last modified time' provided by the server they come from: for each new release instance, fedfind will hit the server to retrieve the last-modified time header, but if the cached copy matches that time, it will not re-download the file. These files are also quite small so will not take long to download in any case. If the download fails for some reason, but we do have a cached copy of the necessary lists, fedfind will work but log a warning that it's using cached data which may be outdated. Certain PDC queries, whose results are never expected to change, are also cached (again in ~/.cache/fedfind), to speed up repeated searches and reduce load on PDC.

If ~/.cache/fedfind is not writeable by the user running fedfind, we fall back to using a temporary cache location that is only valid for the life of the process (and deleted on exit). This at least ensures fedfind will work in this case, but results in it being slower and doing more round trips.

It shouldn't use too much bandwidth (though I haven't really measured), but obviously the server admins won't be happy with me if the servers get inundated with fedfind requests, so don't go completely crazy with it - if you want to do something script-y, please at least use the module and re-use release instances so the queries get cached.

All releases other than stable releases disappear. fedfind can find stable releases all the way back to Fedora Core 1, but it is not going to find Fedora 14 Alpha, Fedora 19 Beta TC3, or nightlies from more than 2-3 weeks ago.
This isn't a bug in fedfind - those images literally are not publicly available any more. Nightlies only stick around for a few weeks, candidate composes for a given milestone usually disappear once we've moved on another couple of milestones, and pre-releases (Alphas and Betas) usually disappear some time after the release in question goes stable. fedfind will only find what's actually there.

Also note that fedfind is not designed to find even notional locations for old non-stable releases. Due to their ephemeral nature, the patterns it uses for nightly builds and candidate composes only reflect current practice, and will simply be updated any time that practice changes. It doesn't have a big store of knowledge of what exact naming conventions we used for old composes. If you do comp = fedfind.release.Compose(12, 'Final', 'TC4') and read out comp.location or something, what you get is almost certainly not the location where Fedora 12 Final TC4 actually lived when it was around.

fedfind does not, for the present, handle secondary arches at all. It will find PPC images for releases where PPC was a primary arch and i686 images for releases where i686 was a primary arch, though. This is pretty much all my fault.

Note that aside from its external deps, older versions of fedfind (up to 1.1.2) included a copy of the cached_property implementation maintained by Daniel Greenfield. The bundled copy was dropped with version 1.1.3.

Fedora Finder is available under the GPL, version 3 or any later version. A copy is included as COPYING.
https://pagure.io/fedora-qa/fedfind
Feature #8977: String#frozen that takes advantage of the deduping

Description

During memory profiling I noticed that a large amount of string duplication is generated from non pre-determined strings. Take this report for example (generated using the memory_profiler gem that works against head):

">=" x 4953
  /Users/sam/.rbenv/versions/2.1.0-dev/lib/ruby/2.1.0/rubygems/requirement.rb:93 x 4535

This string is most likely extracted from a version. Or:

"/Users/sam/.rbenv/versions/2.1.0-dev/lib/ruby/gems" x 5808
  /Users/sam/.rbenv/versions/2.1.0-dev/lib/ruby/gems/2.1.0/gems/activesupport-3.2.12/lib/active_support/dependencies.rb:251 x 3894

A string that can not be pre-determined. It would be nice to have

"hello,world".split(",")[0].frozen.object_id == "hello"f.object_id

Adding #frozen will give library builders a way of using the de-duping. It also could be implemented using weak refs in 2.0 and stubbed with a .dup.freeze in 1.9.3. Thoughts?

Related issues

History

#1 [ruby-core:57584] Updated by Charlie Somerville almost 3 years ago
- Target version set to 2.1.0

I would love to see this feature in 2.1. These are the top duplicated strings in an app I work on:

irb(main):023:0> GC.start; h = ObjectSpace.each_object(String).to_a.group_by { |s| s }.map{ |s, objs| [s, objs.size] }; h.sort_by { |s, count| -count }.take(10).each do |s| p s end; nil
["/", 5241]
["(eval)", 3207]
["application", 2389]
["", 1908]
["html.erb", 1720]
["base64", 1520]
["erb", 1464]
["IANA", 1389]
["initialize", 1147]
["recognize", 1036]

Most of these could be deduplicated with String#frozen.

#2 [ruby-core:57585] Updated by Nobuyoshi Nakada almost 3 years ago

Won't those strings be shared with frozen string literal?

#3 [ruby-core:57587] Updated by Sam Saffron almost 3 years ago

@nobu "html.erb" is very unlikely to be shared cause it is a result of a parse. "base64" and "IANA" are coming from the super dodgy mime types gem here: it starts off unsplit.
#4 [ruby-core:57600] Updated by Charlie Somerville almost 3 years ago

ko1 and I discussed this in IRC and decided that #frozen would be too easily confused with #freeze. An idea that came up was to use #dedup or #pooled instead. What do you think Sam?

#5 [ruby-core:57613] Updated by Charles Nutter almost 3 years ago

How is this not just a symbol table of another sort? When do these pooled strings get GCed? Do they ever get GCed? What if the encodings differ? There's a whole bunch of implementation details that scare me about this proposal.

#6 [ruby-core:57614] Updated by Charles Nutter almost 3 years ago

After thinking a bit, I guess what you're asking for is a method that gives you the VM-level object that would be returned for a literal frozen version of the same string. However, it's unclear to me what #frozen or #dedup or #pooled would do if there were no such string. If they'd return the original uncached object, you'd never know if you're actually saving anything. If they would cache the string, the concerns in my previous comment apply. Can you clarify?

#7 [ruby-core:57624] Updated by Sam Saffron almost 3 years ago

@hedius the request is all about exposing:

VALUE
rb_fstring(VALUE str)
{
    st_data_t fstr;

    if (st_lookup(frozen_strings, (st_data_t)str, &fstr)) {
        str = (VALUE)fstr;
    }
    else {
        str = rb_str_new_frozen(str);
        RBASIC(str)->flags |= RSTRING_FSTR;
        st_insert(frozen_strings, str, str);
    }
    return str;
}

the encoding concerns are already handled by st_lookup afaik, as is the gc concern

def test; x = "asasasa"f; x.object_id; end
test
=> 70185750124120
undef :test
GC.start
def test; x = "asasasa"f; x.object_id; end
test
=> 70185736068940

Overall this feature has some parity with Java / .NET's intern, adapted to the world where MRI does not allow you to shift objects around.

@charlie I like #pooled, #dedup feels a bit odd ... I totally understand the concern about #frozen vs #frozen?, it can be confusing.

#8 [ruby-core:57638] Updated by Charles Nutter almost 3 years ago

sam.saffron (Sam Saffron) wrote:

> the request is all about exposing:
> VALUE rb_fstring(VALUE str) ...
> the encoding concerns are already handled by st_lookup afaik, as is the gc concern

I went to the source to understand how this is implemented. Summarized here for purposes of discussion. "fstrings" in source are added to the fstring table. Normally this would mean they're hard-referenced forever, but fstrings also get an FSTR header bit that the GC uses (via rb_str_free) to also remove the fstring table entry. So you're right, the fstrings will not fill up memory like the global symbol table and there's probably no DOS potential from creating lots of fstrings via eval or #frozen.

I guess my next question is why we need a new method. Why can't String#freeze just do what you want String#frozen to do? Risk of too many strings going into that table? My other concerns are addressed by the handling of the fstring table. I think in JRuby we'd implement this as a weak hash map.

def test; x = "asasasa"f; x.object_id; end
test
=> 70185750124120
undef :test
GC.start
def test; x = "asasasa"f; x.object_id; end
test
=> 70185736068940

I ran this in a loop and the object_id eventually stabilizes. I am not sure why. I also ran a version that loops forever creating new test methods with different fstrings, and confirmed that memory stays level.

#9 [ruby-core:57639] Updated by Charles Nutter almost 3 years ago

headius (Charles Nutter) wrote:

> I ran this in a loop and the object_id eventually stabilizes. I am not sure why.

I think I realize why: eventually the only GC is for the objects in the loop, which are allocated and deallocated the same way every time. So although a new fstring is defined each time, it lives at the same location in memory as the one from the previous loop.

#10 [ruby-core:57647] Updated by Nobuyoshi Nakada almost 3 years ago

I don't think it needs a new method nor class.
frozen_pool = Hash.new {|h, s| h[s.freeze] = s}
3.times { p frozen_pool["foo"].object_id }

#11 [ruby-core:57648] Updated by Koichi Sasada almost 3 years ago

(2013/10/04 9:14), nobu (Nobuyoshi Nakada) wrote:

> I don't think it needs a new method nor class.
> frozen_pool = Hash.new {|h, s| h[s.freeze] = s}
> 3.times { p frozen_pool["foo"].object_id }

for `f' syntax, we prepare frozen string table. Let it name "FrozenTable". This proposal String#frozen can be defined by:

class String
  def frozen
    FrozenTable[frozen] ||= self.freeze # rb_fstring(self) in C level
  end
end

The difference between the hash table approach and rb_fstring() is GC. Frozen strings returned by String#frozen are collected if frozen strings are not marked. But the hash table approach doesn't allow collecting.

--
// SASADA Koichi at atdot dot net

#12 [ruby-core:57649] Updated by Nobuyoshi Nakada almost 3 years ago

It differs from the original proposal, which is called explicitly by applications/libraries. I think such pooled strings should not go beyond app/lib domains.

#13 [ruby-core:57650] Updated by Koichi Sasada almost 3 years ago

(2013/10/04 9:35), nobu (Nobuyoshi Nakada) wrote:

> It differs from the original proposal, which is called explicitly by applications/libraries.

Not different. I described the implementation of String#frozen.

> I think such pooled strings should not go beyond app/lib domains.

I can accept the way to get the string which we can get "foo"f dynamically.

--
// SASADA Koichi at atdot dot net

#14 [ruby-core:57660] Updated by Sam Saffron almost 3 years ago

@hedius I have seen the suggestion around of having String#freeze amend the object id on the current string, so for example

"hi"f.object_id
=> 10
a = "hi"; a.object_id
=> 100
a.freeze; a.object_id
=> 10

However how would such an implementation work with c extensions where we leak out pointers? (I think this would work fine though for JRuby)

#15 [ruby-core:57661] Updated by Charlie Somerville almost 3 years ago

> I have seen the suggestion around of having String#freeze amend the object id on the current string, so for example
> However how would such an implementation work with c extensions where we leak out pointers?

C extensions aren't the only reason this wouldn't work. Consider:

a = "foo"
b = a
a.freeze
puts a.object_id
puts b.object_id

Are both 'a' and 'b' updated to refer to the new object?

#16 [ruby-core:57662] Updated by Sam Saffron almost 3 years ago

@nobu You can implement a separate string pool in 2.0 like so:

require 'weakref'

class Pool
  def initialize
    @pool = {}
  end

  def get(str)
    ref = @pool[str]
    # GC may run between alive? and __getobj__
    copy = ref && ref.weakref_alive? && copy = ref.__getobj__ rescue nil
    unless copy
      copy = str.dup
      ref = WeakRef.new(copy)
      copy.freeze
      @pool[str] = ref
    end
    copy
  end

  def scrub!
    GC.start
    @pool.delete_if{|k,v| v.nil?}
  end

  def length
    @pool.length
  end
end

@pool = Pool.new

def test
  puts @pool.get("test").object_id
end

test   # 69822933011880
test   # 69822933011880
p @pool.length # 1
@pool.scrub!
test   # 69822914568080
test   # 69822914568080
p @pool.length # 1

but the disadvantage is that it is fiddly, requires manual management and will not reuse FrozenTable

#17 [ruby-core:57782] Updated by Charles Nutter almost 3 years ago

#18 [ruby-core:57786] Updated by Charles Nutter almost 3 years ago

#19 [ruby-core:58977] Updated by Aman Gupta almost 3 years ago
- Assignee set to Yukihiro Matsumoto
- Status changed from Open to Assigned

I just made some more arguments for this feature in #9229. The goal here is to provide runtime access to the frozen string literal table. This is not a new idea.
For instance, see String.Intern in .NET:

@samsaffron also made a good point above: older versions and other implementations of ruby can easily provide their own implementations of String#frozen:

> It also could be implemented using weak refs in 2.0 and stubbed with a .dup.freeze in 1.9.3.

@ko1 also agrees with the feature above:

> I can accept the way to get the string which we can get "foo"f dynamically.

So the remaining question (as always) is a naming issue. I like String#frozen, but maybe there is some argument that it can be confused with #freeze. @matz Do you approve of String#frozen, or do you have some other preference? Other proposals are:

String#dedup
String#pooled
String::frozen(str)

#20 [ruby-core:58978] Updated by Matthew Kerwin almost 3 years ago

headius (Charles Nutter) wrote:

Is it any worse than the fact that String#intern returns a Symbol? IIRC this whole effort started because people were using Symbols as interned Strings (in the Java sense), but Symbols can't be GC'ed, so there were memory leak-type issues. If we're viewing the fstring cache as an effort to allow GC'ing of Symbols (effectively, though not in name) then it seems the issues and complexities are a given.

I agree that we should make #freeze use the pool. If people really, really want to have a version that returns the same object (frozen), we could introduce String#freeze!

- rb_define_method(rb_cString, "freeze", rb_obj_freeze, 0);
- rb_define_method(rb_cString, "freeze", rb_fstring, 0);
- rb_define_method(rb_cString, "freeze!", rb_obj_freeze, 0);

This is based on my (possibly flawed) understanding that Ruby seems willing to make not-backwards-compatible changes between minor versions (1.8 -> 1.9), even if not between majors (1.9.3 -> 2.0). The benefits of having a pooled #freeze seem to outweigh the risk of someone depending on it returning the same object, especially if that person has an upgrade path to get their old functionality back.
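The pooling idea debated above (nobu's Hash one-liner in comment #10, the WeakRef-based Pool in #16, and the #freeze-uses-the-pool suggestion in #20) can be tried in plain Ruby. This sketch uses a bare Hash, so unlike the WeakRef version its entries are never collected; it is purely illustrative:

```ruby
# Minimal dedup pool in plain Ruby. The default block freezes the first
# string object seen for a given content and hands it back for every
# later equal string, however that string was built.
frozen_pool = Hash.new { |h, s| h[s.freeze] = s }

a = frozen_pool["hello,world".split(",")[0]]  # "hello" produced by a parse
b = frozen_pool["hel" + "lo"]                 # a different "hello" object

puts a.equal?(b)  # true - both lookups return the very same object
puts a.frozen?    # true
```

Entries in a plain Hash keep their strings alive forever, which is exactly the GC concern raised in comments #11 and #21; the WeakRef and WeakHash variants discussed above exist to avoid that.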
#21 [ruby-core:58990] Updated by Koichi Sasada almost 3 years ago

Now I have one concern, about security. This kind of method can be used widely and easily. And if this method is used with external strings coming from IO, the fstring table can grow and grow easily. I'm afraid about these kinds of security risk:

(1) DoS attack
(2) Side channel attack (observe from outside)

But I'm not a security expert. So I want to ask experts. Note that this problem has less impact than the Symbol-related DoS attack because these keys are collected. I think multiple tables support can solve this kind of issue.

To solve such issues (and continue discussing this issue for 2.2), a Ruby-level implementation and gem is a reasonable alternative, I believe. However, at the Ruby level we can't do the same thing. Therefore nobu made a patch for WeakHash, a variant of WeakMap (we will make another ticket for it). WeakMap is an object_id -> Object map. WeakHash is an Object -> Object map. With this class, we can implement the fstring technique with multiple tables easily.

class FrozenStringTable
  def initialize
    @table = {} # WeakHash.new
  end

  def get str
    raise TypeError unless str.kind_of?(String)
    unless @table.has_key? str
      str.freeze
      @table[str] = str
    end
    @table[str]
  end
end

F1 = FrozenStringTable.new
p F1.get('foo').object_id #=> 8274120
p F1.get('foo').object_id #=> 8274120

In this comment, I show (1) the security concern, and (2) an alternative approach.

#22 [ruby-core:58992] Updated by Aman Gupta almost 3 years ago

@ko1 and I discussed this at length earlier. Although a frozen string table could be implemented in ruby (with the help of a C-ext like WeakHash above), the current implementation of the finalizer table adds overhead that would make it unsuitable for long-lived strings. In particular, each finalizer currently requires 2 extra object VALUE slots and the finalizer_table is marked on every minor mark. The main concern with exposing the fstr table to ruby is that it could easily be misused.
Feeding a large number of entries into this table would slow down lookup times and subsequent calls to rb_fstring(). Currently rb_fstring() calls are isolated to boot-up and compile time, so runtime performance is not a factor. Misuse from ruby-land could increase the number of frozen_strings hash lookups, possibly introducing performance or security concerns.

As an alternative, we discussed including a C-only API for 2.1. This would limit possible misuse/abuse, yet still allow for responsible use of the de-duplication features present in 2.1. We should include the size of the frozen_strings table to encourage monitoring and size caps. This API might look something like the following (naming suggestions welcome):

VALUE rb_str_frozen_dedup(VALUE str)
size_t rb_str_frozen_table_size()

#23 [ruby-core:60312] Updated by Hiroshi SHIBATA over 2 years ago
- Target version changed from 2.1.0 to current: 2.2.0
https://bugs.ruby-lang.org/issues/8977
RSS Viewer Applet: Ready to Rumble (6/6) - exploring XML

Helper Functions, Future Plans and Current Distribution

Finally, some typographic routines for converting pixel coordinates to and from channel item positions, and calculating line height, string widths and positions. When strings don't fit in the remaining bounding box they get pruned with an ellipsis appended:

private int getItemFromCoordinates(int x, int y)
{
    return (y - (titleLineHeight-itemLineHeight) - boxBorder)
        / (itemLineHeight + itemMargin) - 1;
}

This function calculates the item position under the mouse coordinates, used by mouse move and click actions to determine the selected item.

private Rectangle getItemBounds(int item)
{
    Rectangle r = new Rectangle();
    r.x = boxBorder;
    r.y = (item < 0 ? 0 : titleLineHeight-itemLineHeight + boxBorder
        + (item+1) * (itemLineHeight + itemMargin));
    r.width = getSize().width;
    r.height = (item < 0 ? titleLineHeight : itemLineHeight);
    return r;
}

This is the opposite calculation, from item position to mouse coordinates (the bounding box, to be more precise).

private int getLineHeight(Font f)
{
    FontMetrics fm = Toolkit.getDefaultToolkit().getFontMetrics(f);
    return fm.getHeight() + 2;
}

private int getLineBase(Font f)
{
    FontMetrics fm = Toolkit.getDefaultToolkit().getFontMetrics(f);
    return fm.getAscent() + 1;
}

The above two functions calculate typographic line heights and base line pixels for correct text rendering.
private String getPrunedString(String s, Font f, int width)
{
    FontMetrics fm = Toolkit.getDefaultToolkit().getFontMetrics(f);
    if (fm.stringWidth(s) < width) return s;
    final String ellipsis = "...";
    width -= fm.stringWidth(ellipsis);
    String str = s;
    while (fm.stringWidth(str) > width)
    {
        str = str.substring(0, str.length()-1);
    }
    return str + ellipsis;
}

If a string does not fit in its allocated box, this function abbreviates it with a trailing ellipsis, taking out characters from the string's end until it fits in its allocated space.

private int getStringStartPosX(String s, Font f, int width, String align)
{
    FontMetrics fm = Toolkit.getDefaultToolkit().getFontMetrics(f);
    int blankWidth = width - fm.stringWidth(s);
    if (blankWidth <= 0) return 0;
    if (align.equalsIgnoreCase("right")) {
        return blankWidth;
    } else if (align.equalsIgnoreCase("center")) {
        return blankWidth / 2;
    } else return 0;
}
}

Finally we need a function that determines the exact coordinates for a text string depending on its specified alignment. It calculates the blank space remaining and then returns the full offset for right alignment, and half of it when centered. This completes the RSS applet viewing code.

The jar file has increased from 25 to 28 kilobytes in size, which should be acceptable for the much improved look and feel we got in return.

Future plans

The Aelfred XML parser weighs in with 19 of the overall 28 kilobytes in our jar archive. Making the parser work via the standard SAX interface would mean including another 10 or so kilobytes of official org.xml.* classes, with no immediate benefit. So before adding more features such as scrolling and fading we should have a look at a custom solution for parsing RSS that does not involve a full generic XML parser, and save 15-20 kb in the process. Let's see if this is feasible without sacrificing interoperability, because a fully compliant XML parser can hardly become much smaller than Aelfred.
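The getPrunedString routine shown earlier depends on AWT FontMetrics for its measurements. As a standalone illustration, the same pruning loop can be run with an assumed fixed per-character width; the 7-pixel width, the class name, and the empty-string guard are inventions for this sketch, not part of the applet:

```java
// Standalone sketch of the ellipsis-pruning logic, replacing AWT
// FontMetrics with a fixed 7-pixel character width (an assumption
// purely for illustration).
public class Prune {
    static final int CHAR_W = 7;

    static int stringWidth(String s) { return s.length() * CHAR_W; }

    static String prune(String s, int width) {
        if (stringWidth(s) < width) return s;
        final String ellipsis = "...";
        width -= stringWidth(ellipsis);
        String str = s;
        // Guard against an endless loop when width < ellipsis width.
        while (stringWidth(str) > width && !str.isEmpty()) {
            str = str.substring(0, str.length() - 1);
        }
        return str + ellipsis;
    }

    public static void main(String[] args) {
        System.out.println(prune("hello world", 50)); // -> hell...
        System.out.println(prune("hi", 50));          // fits, -> hi
    }
}
```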
Distribution

The applet is available for download in binary and source code form under the GNU General Public License. As far as I can see, Aelfred's license should allow this kind of bundling.

If you own a Web site and want to display RSS news, feel free to use the applet on your site free of charge, and without warranty of any kind. If you are an HTML tool vendor and want to include the applet in your toolbox, let me know.

The source code is also checked into public CVS at SourceForge; feel free to check out the Code4eXploringXML@WbReference.com Project at SourceForge. If you encounter problems drop me a line. I cannot guarantee timely support but I'll do my best to help you out.

Happy news-serving and reading!

Produced by Michael Claßen
URL:
Created: Mar. 12, 2000
Revised: Mar. 12, 2000
http://www.webreference.com/xml/column9/6.html
This is a very rough post to collect some thoughts about trying to write .NET libraries that work inside of the Unity Editor and in UWP projects for Mixed Reality. Apply a large pinch of salt: it's still a "work in progress" at this point, so I'll add to it/update it as I progress, but I wanted somewhere to write things down. Feel free to feedback…

I've been trying to write a library which works in the Unity Editor and also at runtime in a UWP app built from the Unity Editor. Generally, I've been following the excellent guides:

Porting Guide (the section titled 'Writing Plugins')
Universal Windows Platform: Plugins on .NET Scripting Backend

and the essence has been to produce 2 libraries by targeting .NET Framework 3.51 and the Universal Windows Platform (14393 in my case). I've been doing this by making use of shared projects in Visual Studio such that I have 3 projects inside of Visual Studio, with all of the code being shared between the two "head" projects, one of which compiles for .NET Framework 3.51 whereas the other compiles for UWP 14393.

Within that code, I then make use of #if WINDOWS_UWP to conditionally work on pieces of code that do/don't rely on the UWP, and I've made sure that the publicly visible API surface of the library is the same in both cases. Initially, I thought that this would largely involve staying away from types like Task<T>, which don't exist in .NET Framework 3.51, but I found that it can be more nuanced than that.
While the projects above don’t look like it, they both build out a DLL with the same name (as per the guidance) and I have then installed those two DLLs into my Unity project as the documentation suggests so that the UWP one is in a WSA sub-folder and the .NET Framework one is in the Plugins folder; and I configure the .NET Framework assembly to run only in the editor; and I configure the UWP assembly to run only in the WSA Player and I set its “placeholder” to point to the .NET Framework 3.51 assembly; This was all going rather well until I came to run my code in the Unity editor and at that point I found that I was having some trouble with the types that were being passed across the boundary in/out of my code. Purely as an example, I was making use of IPAddress as a return type from a public function and it seemed “safe” in that it seems to come from namespace System.Net for both .NET Framework 3.51 and for UWP but when I came to build in Unity I hit an error message telling me that Unity was looking for IPAddress in the System assembly whereas for UWP it looks to live in System.Net.Primitives. I’m not sure if there’s some “type forwarding” style smart away around this but it stopped me building and I had to replace the use of the IPAddress type with the String type in my public interface and, similarly, get rid of types like AddressFamily as well which I was also passing out of my assembly. That wasn’t too bad as it bit me at Unity build time but then I started to hit some runtime fun as well. For instance, I found that I had a piece of code inside of my DLL which did something similar to the fragment below. 
Given:

class Base { }
class Derived : Base { }

and some code:

var factories = new List<Func<Base>>();
factories.Add(() => new Derived());

then I would find that this would blow up on me at runtime with an ArrayTypeMismatchException in the Editor, which I'm guessing is related to:

C# Covariance+Arrays raises ArrayTypeMismatchException

and which forced me to change the way in which I was writing code inside of my class library. Note - I wasn't actually using Func<> in my example, that's just to shorten the code here. It wasn't a big change, but it took quite a while to figure out why it was happening, and I suspect I might hit more examples as I go along, so I'll update this post as/when that happens…

Pingback: Experiments with Shared Holograms and Azure Blob Storage/UDP Multicasting (Part 2) – Mike Taulty
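The linked issue concerns covariant arrays. The hazard is easiest to demonstrate in a self-contained way with Java, which has the same covariant arrays and reports the failure as ArrayStoreException rather than .NET's ArrayTypeMismatchException; this is an illustration of the underlying rule, not the author's C# code:

```java
// The array-covariance hazard behind ArrayTypeMismatchException, shown
// with Java's covariant arrays (Java reports it as ArrayStoreException).
class Base {}
class Derived extends Base {}

public class Covariance {
    // Try to store a Base into storage whose runtime type is Derived[].
    static String attempt() {
        Derived[] derived = new Derived[1];
        Base[] bases = derived;      // legal: arrays are covariant
        try {
            bases[0] = new Base();   // runtime element-type check fails here
            return "stored";
        } catch (ArrayStoreException e) {
            return "ArrayStoreException";
        }
    }

    public static void main(String[] args) {
        System.out.println(attempt()); // -> ArrayStoreException
    }
}
```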
https://mtaulty.com/2017/12/20/baby-steps-in-writing-libraries-for-uwp-and-the-unity-editor/
Subject: [boost] [Config] Multiple versions of Boost
From: Sohail Somani (sohail_at_[hidden])
Date: 2008-09-12 19:41:22

Recently, I had the pleasure of integrating a statically linked library that used a version of Boost different from the one I was using. All relevant compiler flags matched. Only the Boost version was the problem. Fortunately, I was able to solve the problem by getting the provider of the statically linked library to rename the Boost namespace by passing in -Dboost=boost_133_1_provider to their compiler. This worked fine for them. Obviously part of this solution involved them rebuilding the part of boost that was important to them in the same manner, but we always build Boost ourselves don't we?

Anyway, I thought it might be a good idea to do the same on the larger code base so I attempted to do the same. Failed miserably. The problem is that some of the Boost code contains some variant of:

#if defined(SOME_CONDITION)
#  define HEADER <boost/some/header.hpp>
#else
#  define HEADER <boost/some/other/header.hpp>
#endif
#include HEADER

Reading 16.2/4, it seems that the behaviour in this case is not defined. Specifically, an implementation may or may not replace the boost text with boost_133_1_provider. This is true whether or not you have <boost/some/header.hpp> or "boost/some/header.hpp". So on one compiler, you may end up with:

#include <boost_133_1_provider/some/header.hpp>

And on another:

#include <boost/some/header.hpp>

My immediate problem solved, I made do. However, the problem still bugs me. There should be /some/ way to do this. So... What if Boost.Config did something like:

#if !defined(BOOST)
#  define BOOST boost
#endif

And we replace all uses of the boost namespace with BOOST? Would such a patch against trunk be acceptable? What are the problems? I know it isn't aesthetically pleasing but it might have made this a bit easier.
--
Sohail Somani

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2008/09/142271.php
Asked by: How to create chart on a MSWord doc?

Question

Hi everybody. I've generated a Word reporting class, and need to create some charts in it. I searched the forum and figured out that it should be like below:

oDoc.Range.InlineShapes.AddChart(XLChartType....

The best overload for InlineShapes.Add gets two arguments:
1- An argument of type XLChartType
2- A Variant argument

It seems to me that the XLChartType enumeration is defined in the Microsoft.Office.Core namespace (office.dll) or the Microsoft.Office.Interop.Excel namespace, but there is no definition of that enumeration in either of these. Could anyone help me on this issue? I need to know where exactly the XLChartType enumeration is defined, and which dll I have to reference. I'll appreciate that greatly.

- Edited by a.hajihasani Wednesday, December 3, 2014 11:58 AM

All replies

Hi,

Based on my research, there isn't an Add method in InlineShapes. How/where do you get that method?

# InlineShapes methods

There is an AddChart method and the first parameter is XlChartType; this enum type is in Microsoft.Office.Core (office.dll), and we should access and use this enumeration from the Excel primary interop assembly (PIA). I suggest that you could use this method. For more information about the XlChartType enumeration, please refer to:

# XlChartType enumeration

On the other hand, about creating charts in Word, please refer to:

# Creating Charts with VBA in Word 2010

Best Regards
Starain

We are trying to better understand customer views on social support experience, so your participation in this interview project would be greatly appreciated if you have time. Thanks for helping make community forums a great place.

Thanks dear Starain. The InlineShapes.Add() method is corrected to the InlineShapes.AddChart() method. I'm already using "C:\Program Files\Microsoft Visual Studio 10.0\Visual Studio Tools for Office\PIA\Office12\Office.dll" as the Office reference and refer to it by the "using Microsoft.Office.Core" statement.
But Microsoft.Office.Core does not contain the XlChartType enumeration! It seems strange to me. That was the problem that made me send this post!

- Edited by a.hajihasani Wednesday, December 3, 2014 12:08 PM

Hi,

Based on my test in VS2010 and VS2008, I can access the XlChartType enum (using the office 12.0.0.0 assembly). Please create a simple project (that could reproduce the issue) and share it on OneDrive; we will check it.

Best Regards
Starain
https://social.msdn.microsoft.com/Forums/en-US/1931c7e7-8f3c-4a08-b64d-e49c60070751/how-to-create-chart-on-a-msword-doc?forum=worddev
CLion starts 2017.3 EAP

Hi everyone,

We are glad to say that the Early Access Program for CLion 2017.3 is now open. The release is scheduled for the end of this year, and there are so many things planned! For now, please feel free to download the build and check the new features and enhancements that are already included. Your feedback is very welcome.

Download CLion 2017.3 EAP

This build (173.2099.3) addresses issues in several major areas, including the C++ parser and debugger; it also provides better support for unit testing and a more flexible toolchains configuration. Let's see what's inside:

- Gutter icons to run/debug unit tests
- Bundled GDB 8.0
- C++ language engine improvements
- Multiple toolchains support

Unit testing: easy to run and debug from the gutter

If you use the Google Test or Catch framework for unit testing on your project, you may benefit from the built-in test runner in CLion, which provides a progress bar, a tree view of all the tests running, and information about the status, duration, and output stream. There is also the ability to easily sort and rerun tests, export/import testing results, and navigate to the test's sources.

This EAP now makes it easier to run tests and review results by adding special test icons to the editor's left gutter. You can run the tests using these icons. Besides, they show the status of the tests (success or failure), so you always know, when looking through the code of your tests, if they failed recently or not.

Debugger: GDB 8.0

CLion 2017.3 EAP comes with GDB 8.0 bundled. Among other things, this brings fixes to several major issues and inconveniences. Since the bundled GDB is now built with multiarch support, it can be used for remote cross-platform debugging in various Linux/Windows and embedded cases. For example, target Linux from the IDE running on Windows. There is no need to find/build another GDB version for this.
Previously, there was a bug with Application output on Windows, which redirected you to a separate console window instead of the default Run window when debugging (CPP-8175). The most critical case here was debugging the unit tests. Updating to GDB v8 helped to fix the bug, so now the output is printed to the default Run window for both Run and Debug. (Bear in mind that on Windows, CLion uses the bundled GDB when MinGW32 is selected as the toolchain.) Several other bugs were also addressed, like the "LZMA support was disabled at compile time" error and incorrect rvalue references rendering.

C++ language engine improvements

List initialization improvements

We've promised a big overhaul in the problematic areas of CLion's language engine, which parses, resolves and highlights the issues in your code, as well as providing info for context search, navigation, and refactorings. Following the plan, we've started with list initialization. This work addresses lots of issues, most of which are linked to CPP-8143. For example:

- some false "no matching constructor" errors,
- some false "too many arguments" and "too few arguments" errors,
- an invalid "parameter type mismatch" error for boost weak pointer initialization,
- failed resolve for members of auto variables initialized with uniform initialization,
- a false "Invalid initializer" error with C99 union initializers,
- and many others.

We definitely appreciate your feedback, and your reports on any issues you find (we expect some regressions here due to the massive changes).

Support for __COUNTER__

The __COUNTER__ macro is now properly supported in CLion, which means CLion increments its value properly in the language engine and no longer shows invalid duplicate declaration errors.

Unique_ptr related issues (GCC7)

In case of using GCC7, one may encounter various issues with unique_ptr.
CLion 2017.3 EAP comes with proper support for it, and thus issues with unique_ptr such as an incorrect "parameter type mismatch" and a false "applying operator '->' to … instead of a pointer" are now fixed.

Invert if condition

One very common refactoring pattern is to invert the condition in the if clause and flip the if-else blocks. It can be used, for example, to simplify complex code using multiple nested if statements. CLion now provides a code intention that can do that.

Other improvements

In case you have include directives inside namespaces, which is a typical case for the JUCE library, you will be glad to know that code completion, navigation and refactorings now work correctly for the symbols from such headers. Besides, CLion 2017.3 EAP comes with support for the C++11 friend declaration syntax (CPP-3680).

Toolchains

Upd. CLion 2017.3 was released! Check the final UI and description in the release blog post.

With this EAP we've started working on a configurable list of toolchains. When done, it should allow you to use different CMake/debugger or MinGW/Cygwin/Microsoft Visual C++ toolchains for different projects or configurations (ready in this EAP), and to conveniently switch compilers, environment settings, etc. (not yet ready). The work has just recently started, but we want to present its current state to you within this EAP build. Let's look at what is now supported.

In Settings/Preferences | Build, Execution, Deployment | Toolchains you can now add several toolchain configurations in addition to the Default one.
Default toolchain (macOS case):

Extra toolchain configured:

For now you can change:
- CMake
- Debugger

On Windows this comes with an ability to select the environment – MinGW, Cygwin or Microsoft Visual C++ (keep in mind that MSVC support is still experimental and is available under the clion.enable.msvc setting in the Registry):

Now, when you have several toolchains configured, you can go to the CMake settings under Settings/Preferences | Build, Execution, Deployment | CMake and select different toolchains for different CMake configurations, for example, one toolchain for Debug and another one for Release:

Note that currently the CMake configurations should be named differently (CPP-8466). These CMake configurations are now available for selection in Run/Debug Configurations:

Besides this, on Windows CLion now works correctly with tools without providing a full path, for example, a custom compiler path. The same logic as the platform shell is used to find an executable.

That’s it for now! Check the full release notes here. More changes are coming with further EAP builds – stay tuned.

Download CLion 2017.3 EAP

Your CLion Team
The Drive to Develop
https://blog.jetbrains.com/clion/2017/09/clion-starts-2017-3-eap/
Atelier

How can I get a terminal connection to a Caché server from Atelier? Is it possible?

Atelier version 1.1.386: When I click on the Tools menu I see grayed out ("disabled") 'Add-Ins' and 'Templates' items.

Hi, working with Atelier, a valid COS sentence like this: if $e(chunk,1,1)="""{ } throws a "Missing closing quotation mark" error. Any workaround? Thanks

How can I access the InterSystems Class Database with the Atelier IDE? Say I want access to the Samples database and Namespace?

Suppose I want to create an Eclipse plugin designed to be added to Atelier, and my plugin needs to perform some processing on the server that the current Atelier project is configured to connect to. Is there a way I can use the connection credentials that have already been entered by the user? I don't really want to make them enter these again into my own plugin.

Well, my last post got cut off for some reason. I am trying the new beta and wondering about the following functionality from Studio that I don't find in Atelier:
F12 = Open the routine and jump to the tag of the current call the cursor is on in a routine
F2 / <Ctrl> F2 = jumping to a bookmark / toggling a bookmark
<Ctrl> G = Jump to a tag in the code
When might these things be in Atelier? I use them a lot in Studio and the absence of these really slows me down.

Is it possible to install multiple copies of Eclipse/Atelier, and have each instance maintain its own distinct list of server connections? I thought I had figured it out, uninstalled everything, and reinstalled without using bundle pools. I think this keeps projects separate, along with allowing different Eclipse settings and plug-ins. After installing the second instance, no server connections from the first were showing in the second.

Hi, I have an Atelier question. I have a routine developed in Studio to upload a tab-delimited text file to process using ObjectScript into Caché.
In Windows, to upload the file “test.txt”, I used the ObjectScript commands,

For a beginner, a fresh developer with no preference for any specific IDE: what would be the easiest start for development in Caché on Win10?
- VSCode?
- Atelier?
- Caché Studio?

Hi there. I'm really interested in using Atelier for its source control ability, and I have found a fair bit of info when it comes to starting a new project and then pushing that project to a chosen source control system. However, my environment has a lot of existing code which was developed within Studio and Ensemble, and placing entire namespaces within a single project file feels wrong.

Here's what I get when I check for updates from my 1.0.245 on Windows:

Thanks a lot that we finally got the new version of Atelier, where most of the errors were fixed. But I've found some that have not been fixed yet.

set destination = $listget(waypoints,*)
set $list(waypoints,*,*)=""

In this code, Atelier does not know about the asterisk as a second argument to the $listget function, and in this case it even shows the next lines as an error.

I want to demonstrate using Git via Server-side Source Control hooks (to allow both Studio and Atelier to access a Shared Dev namespace), and I was planning to use the popular Caché Git hooks:

Hi, I'm getting this error when trying to sync CSP files on Atelier:

Synchronization failed: [dev10 is broken] ERROR #5002: Cache error: <STORE>zTS+6^%Library.RoutineMgr.1

This is on Caché 2017.1 and Atelier 1.3.144.

I am trying to use the current implementation of source control for Caché Studio in Atelier, but UserAction = 3 "Run an EXE on the client" is not working. Basically, Atelier is launching a "page" using the target as a URL. Am I missing any settings in order to command Atelier to run an exe command instead?
Cache Version: Cache 2017.2.0.744.0
Platform: Windows

When I go to the 2016.2 FT download page I'm also offered a link to the Atelier Beta:

REST fromJSON nested structure date not working, getting "Datatype value is not a number" for a date, not possible to debug in Atelier.
https://community.intersystems.com/tags/atelier?filter=answered&sort=comments
Find k numbers with most occurrences in the given Python array

In this tutorial, we shall find k numbers with most occurrences in the given Python array. The inputs shall be the length of the array, its elements, and a positive integer k. We should find the k numbers that occur the highest number of times in the given array, i.e. the k numbers with the highest frequency. If the number k is larger than the number of elements with maximum frequency, the numbers are returned in decreasing order of frequency. The larger number comes first if two numbers have the same frequency.

Input: arr[] = [7, 1, 1, 4, 6, 2, 7, 3], k = 2
Output: 7 1
Explanation:
Frequency of 7 = 2
Frequency of 1 = 2
These two have the maximum frequency and 7 > 1.

Input: arr[] = [7, 10, 11, 5, 2, 5, 5, 7, 11, 8, 9], k = 4
Output: 5 11 7 10
Explanation: Frequency of 5 = 3, 11 = 2, 7 = 2, 10 = 1. Since 11 > 7, 11 is printed before 7.

Implementation

First, we shall create an empty dictionary and enter its key-value pairs as number-frequency pairs. After traversing through all the elements of arr, the dictionary dct contains the numbers and their frequencies. Next, we copy these values to another empty list a, which contains the key-value pairs as individual lists. First, we sort the list in decreasing order of number so that all numbers are arranged according to high-value priority. Secondly, we again sort the list in decreasing order of frequency value so that the highest-frequency entries come first. Finally, we print the first k numbers.

Below is our Python program to find k numbers with most occurrences in the given array:

def kFrequentNumbers(arr, n, k):
    dct = {}
    for i in range(n):
        # key = number and value = frequency
        if arr[i] in dct:
            dct[arr[i]] += 1
        else:
            dct[arr[i]] = 1
    a = [0] * (len(dct))
    j = 0
    for i in dct:
        a[j] = [i, dct[i]]  # a[j][0] has number and a[j][1] has frequency
        j += 1
    a = sorted(a, key = lambda x : x[0], reverse = True)  # sorts in dec. order of number
    a = sorted(a, key = lambda x : x[1], reverse = True)  # sorts in dec. order of frequency
    # display the top k numbers
    print(k, "numbers with most occurrences are:")
    for i in range(k):
        print(a[i][0], end = " ")  # prints top k numbers

# Driver code
if __name__ == "__main__":
    arr = []
    n = int(input("Enter the length of the array: "))
    print("Enter elements: ")
    for i in range(n):
        e = int(input())
        arr.append(e)
    k = int(input("Enter k: "))
    kFrequentNumbers(arr, n, k)

Implementation example for the sorted() functions above:

For arr = [3, 1, 4, 4, 5, 2, 6, 1]
Now, a = [[3,1],[1,2],[4,2],[5,1],[2,1],[6,1]]
After the first sorted() -> a = [[6,1],[5,1],[4,2],[3,1],[2,1],[1,2]]
And after the second sorted() -> a = [[4,2],[1,2],[6,1],[5,1],[3,1],[2,1]]
If k = 3, using a[i][0] gives the first 3 numbers, which are 4, 1, 6

Output:

Enter the length of the array: 11
Enter elements:
7
10
11
5
2
5
5
7
11
8
9
Enter k: 4
4 numbers with most occurrences are:
5 11 7 10

I am Vamsi Krishna and you can find my other posts here:
Find bitonic point in given bitonic sequence in Python
Get all possible sublists of a list in Python

Also Read:
Track the occurrences of a character in a string in Python
How to Construct an array from its pair-sum array in Python

Thank You for Reading and Keep Learning 🙂
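The same result can be obtained more compactly with the standard library. The following is a minimal sketch, not from the original tutorial: it uses collections.Counter, and the function name top_k_frequent is our own. Negating both sort keys gives "frequency descending, then value descending" in a single sorted() call:

```python
from collections import Counter

def top_k_frequent(arr, k):
    """Return the k most frequent values; ties broken by larger value first."""
    counts = Counter(arr)
    # Sort by frequency (descending), then by the number itself (descending).
    ordered = sorted(counts, key=lambda x: (-counts[x], -x))
    return ordered[:k]

print(top_k_frequent([7, 10, 11, 5, 2, 5, 5, 7, 11, 8, 9], 4))  # → [5, 11, 7, 10]
```

This matches the article's sample output while replacing the two stable sorts with one composite key.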
https://www.codespeedy.com/find-k-numbers-with-most-occurrences-in-the-given-python-array/
1 Aug 02:14 2004

Re: [dnsop] Re: getaddrinfo/TTL and resolver application-interface

Jun-ichiro itojun Hagino <itojun <at> itojun.org>
2004-08-01 00:14:19 GMT

> On the other hand, introducing a new ai_ttl field at the end of struct
> addrinfo can be done without any effect on binaries (since the field offsets of
> the other fields do not change, and freeaddrinfo() will free the correct size
> structure). Applications which want to use ai_ttl would then do
>
>     call getaddrinfo...
>     #ifdef _GETADDRINFO_AI_TTL
>     ttl = ai->ai_ttl;
>     #else
>     ttl = 30; /* Or some other constant */
>     #endif
>
> While this is more painful than if we'd included ai_ttl from the start,
> it seems to be less painful than standardizing getrrsetbyname() to
> ensure portability of that interface across platforms.

getaddrinfo() is not a DNS-only function; it can look up hostname-to-address mappings using /etc/hosts, NIS, LDAP, or whatever you have. What value would you put into ai_ttl when the lookup is done by a non-DNS method?

	itojun

--
to unsubscribe send a message to namedroppers-request <at> ops.ietf.org with
the word 'unsubscribe' in a single line as the message text body.
archive: <>
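The API shape under discussion can be seen in any modern getaddrinfo() binding. As a rough illustration (not part of the original thread), Python's socket.getaddrinfo returns the addrinfo fields as a 5-tuple, and none of them is a TTL — which is exactly the gap the proposal above tries to fill, and itojun's objection is that for a non-DNS source such as /etc/hosts no meaningful TTL exists:

```python
import socket

# Resolve "localhost" -- on most systems this is served from /etc/hosts,
# i.e. a non-DNS source, exactly the case itojun raises: no TTL exists here.
results = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)

# Each result is a 5-tuple: (family, type, proto, canonname, sockaddr).
# There is no slot for a TTL anywhere in the returned structure.
for family, socktype, proto, canonname, sockaddr in results:
    print(family, sockaddr)
```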
http://blog.gmane.org/gmane.ietf.dnsext/month=20040801
Yii is an optimal, high-performance PHP framework for developing Web 2.0 applications. It provides fast, secure, and professional features to create robust projects; however, this rapid development requires the ability to organize common tasks collectively to build a complete application. Being extremely performance-optimized, Yii is the perfect choice for projects of any size. It comes packaged with tools to help test and debug your application and has clear and comprehensive documentation.

This video course is a collection of Yii2 videos. Each video is represented as a full and independent item, showcasing solutions from real web applications. So you can easily reproduce them in your environment and learn Yii2 rapidly and painlessly…

In this video course, you will get started by configuring your Yii2 application. After that, we will focus on how to make our extension as efficient as possible. Then we will cover some best practices for developing an application that will run smoothly until you have very high loads. Moving ahead, we will provide various tips, which are especially useful in application deployment and when developing an application in a team. Later, we will introduce the best technologies for testing and we will see how to write simple tests and avoid regression errors in our application. Finally, we conclude this course by discussing review logging, analyzing the exception stack trace, and implementing our own error handler.

About the Authors

Andrew Bogdanov is a seasoned web developer from Yekaterinburg, Russia with more than six years of experience in industrial development. Since 2010, he has been interested in the Yii and MVC frameworks. He has taken part in projects written in Yii, such as a work aggregator for a UK company, high-load projects, real-estate projects, and the development of private projects for the government.
He has worked on various CMSs and frameworks using PHP and MySQL, including Yii, Kohana, Symfony, Joomla, WordPress, CakePHP, and so on. Also, he is adroit at integrating third-party APIs such as payment gateways (PayPal, Facebook, Twitter, and LinkedIn). He is very good at slicing and frontend work, so he can provide full information about the Yii framework. He is also well-versed in PHP/MySQL, Yii 1.x.x, Yii 2.x.x, Ajax, jQuery, MVC frameworks, Python, LAMP, HTML/CSS, Mercurial, Git, AngularJS, and adaptive markup. In his free time, he likes to meet and talk with new people and discuss web development problems. He is currently working with professionals.

Dmitry Eliseev has been a web developer since 2008 and specializes in server-side programming in PHP and PHP frameworks. Dmitry is interested in development best practices, software architecture, object-oriented programming, and other approaches. He is an author and a presenter of practical courses about the principles and best practices of object-oriented programming and the use of version control systems. He is also an author of webinars about the Yii2 framework and common development subjects. He practices teaching and consulting on development with frameworks, the principles of software design, and improvements in common code quality. This is his first book.

In his free time, Alexander speaks at conferences, and enjoys movies, music, traveling, photography, and languages. He currently resides in Voronezh, Russia with his beloved wife and daughter.

This video gives an overview of the entire course.

The aim of this video is to explore the ElasticSearch engine adapter, which is an ActiveRecord-like wrapper for integrating the ElasticSearch full-text search engine into the Yii2 framework.

The aim of this video is to learn about Gii, a web-based code generator provided for Yii 2 applications.

Pjax is a widget that integrates the pjax jQuery plugin.
All content that is wrapped by this widget will be reloaded by AJAX without refreshing the current page.

The aim of this video is to learn about the Redis database driver, which allows you to use Redis key-value storage in any project on the Yii2 framework.

There are a lot of built-in framework helpers, such as StringHelper in the yii\helpers namespace. These contain sets of helpful static methods for manipulating strings, files, arrays, and other subjects.

There are many leading products, such as Google's Gmail, that define nice UI patterns. One of these is soft delete. Instead of a permanent deletion with tons of confirmations, Gmail allows us to immediately mark messages as deleted and then easily undo it. The same behavior can be applied to any object, such as blog posts, comments, and so on.

You may have some code that looks like it can be reused, but you don't know if it's a behavior, widget, or something else; most probably, it's a component.

Common actions such as deleting the AR model by the primary key or getting data for AJAX autocomplete could be moved into reusable controller actions and later attached to controllers as needed.

In Yii, you can create reusable controllers. If you are creating a lot of applications or controllers of the same type, moving all common code into a reusable controller will save you a lot of time.

A widget is a reusable part of a view that not only renders some data but also does it according to some logic. It can even get data from models and use its own views, so it is like a reduced, reusable version of a module.

Yii has good command-line support and allows creating reusable console commands. Console commands are faster to create than web GUIs. If you need to create some kind of utility for your application that will be used by developers or administrators, console commands are the right tool.

A filter is a class that can run before/after an action is executed.
It can be used to modify the execution context or decorate output.

If you have created a complex application part and want to use it with some degree of customization in your next project, most probably you need to create a module.

Yii2 only offers native PHP templates. If you want to use one of the existing template engines or create your own, you have to implement it—of course, only if it's not yet implemented by the Yii community. Here we will re-implement Smarty template support.

Yii2 provides built-in i18n support for making multilanguage applications. In this video, we are translating the application interface into different languages.

The aim of this video is to talk about how to share your results with people and why it's important.

The aim of this video is to follow the best practices.

Native session handling in PHP is fine in most cases. The aim of this video is to speed up session handling.

Yii supports many cache backends, but what really makes the Yii cache flexible is the dependency and dependency-chaining support. There are situations when you cannot simply cache data for an hour because the cached information can be changed at any time.

If all of the best practices for deploying a Yii application are applied and you still do not have the performance you want, then most probably there are some bottlenecks in the application itself. The main principle while dealing with these bottlenecks is that you should never assume anything and always test and profile the code before trying to optimize it.

Instead of only a server-side caching implementation, you can use client-side caching via specific HTTP headers.

If your web page includes many CSS and/or JavaScript files, the page will open very slowly because the browser sends a large number of HTTP requests to download each file in separate threads.
To reduce the number of requests and connections, we can combine and compress multiple CSS/JavaScript files into one or very few files in production mode, and then include these compressed files on the page instead of the original ones.

HHVM transforms PHP code into intermediate HipHop bytecode (HHBC) and dynamically translates PHP code into machine code, which will be optimized and natively executed.

By default, we have the Basic and Advanced Yii2 application skeletons with different directory structures. But these structures are not dogmatic, and we can customize them if required.

By default, Yii2 applications work from the web directory for your site's entry script. But shared hosting environments are often quite limited when it comes to the configuration and directory structure. You cannot change the working directory for your site.

By default, Yii2's Advanced template has console, frontend, and backend applications. However, in your specific case, you can rename the existing ones and create your own applications.

In the basic application template we have separate web and console configuration files, and usually we set some application components in both configuration files. Moreover, when we develop a big application, we may face some inconvenience.

Sometimes, an application requires some background tasks, such as regenerating a site map or refreshing statistics. A common way to implement this is by using cron jobs. When using Yii, there is a way to run a command as a job.

Sometimes, there is a need to fine-tune some application settings or restore a database from a backup. When working on tasks such as these, it is not desirable to allow everyone to use the application, because it can lead to losing recent user messages or exposing application implementation details.

There are many tools available for automating the deployment process.
The aim of this video is to consider the tool named Deployer.

By default, the basic and advanced Yii2 application skeletons use Codeception as a testing framework. Codeception supports writing unit, functional, and acceptance tests out of the box.

PHPUnit is the most popular PHP testing framework. It is simple to configure and use. Also, the framework supports code coverage reports and has a lot of additional plugins.

Besides PHPUnit and Codeception, Atoum is a simple unit testing framework. You can use this framework for testing your extensions or for testing the code of your application.

Behat is a BDD framework for testing your code with human-readable sentences that describe code behavior in various use cases.

Logging is the key to understanding what your application actually does when you have no chance to debug it. If we are expecting unusual behavior in our application, we need to know about it as soon as possible and have enough details to reproduce it. This is where logging comes in handy.

When an error occurs, Yii can display the error stack trace along with the error. A stack trace is especially helpful when we need to know what really caused an error rather than just the fact that an error occurred.

If you are following the best practices and developing and testing an application with all possible errors reported, you can get an error message. However, without the execution context, it only tells you that there was an error, and it is not clear what actually caused it.

In Yii, error handling is very flexible, so you can create your own error handler for errors of a specific type.

The Yii2-debug extension is a powerful tool for debugging your own code, analyzing request information or database queries, and so on.
https://www.udemy.com/yii2-application-development-solutionsvolume-2/
13 July 2012 19:42 [Source: ICIS news]

HOUSTON (ICIS)--Petrobras will increase its diesel price by 6%, effective on 16 July, the Brazilian energy major said on Friday.

The increase is part of Petrobras’ policy to align

Petrobras said the price to which the adjustment applies does not include Brazilian federal taxes and state taxes. The final price at the pumps, which also includes biodiesel costs and distribution and sales margins, is expected to increase by around 4%, it added.

The company did not disclose what its current diesel price is in absolute terms per litre or gallon.

Petrobras' 2012 first-quarter domestic gasoline and diesel sales increased by 10% year on year, ICIS
http://www.icis.com/Articles/2012/07/13/9578307/brazils-petrobras-to-hike-diesel-prices-by-6.html
My problems with my first example

I want to write a desktop calculator using Qt. I designed the first step of the form this way:

I added only one signal & slot using Designer. The Calc.h, Calc.cpp and main.cpp are as follows, respectively:

#ifndef CALC_H
#define CALC_H

#include <QDialog>

namespace Ui {
class Calc;
}

class Calc : public QDialog
{
    Q_OBJECT

public:
    explicit Calc(QWidget *parent = 0);
    ~Calc();

private:
    Ui::Calc *ui;
};

#endif // CALC_H

#include "calc.h"
#include "ui_calc.h"

Calc::Calc(QWidget *parent) :
    QDialog(parent),
    ui(new Ui::Calc)
{
    ui->setupUi(this);
}

Calc::~Calc()
{
    delete ui;
}

#include <QApplication>
#include "calc.h"

int main(int argc, char* argv[])
{
    QApplication app(argc, argv);
    Calc* dialog = new Calc;
    dialog->show();
    return app.exec();
}

Now what code do I need to add to Calc.h and Calc.cpp to make it work, please?

What would you do if this were a CLI program (command line only, no GUI)? It might look like this:

double a = 0, b = 0;
char operation = '+';
std::cin >> a;
std::cin >> operation;
std::cin >> b;

switch (operation) {
case '+':
    std::cout << (a + b) << std::endl;
    break;
// ....and so on
}

How would you make this work if it is a GUI app? You can do it as follows (pseudo code):

1 : show the result of your input button

connect(ui->oneButton, &QPushButton::clicked, [this]()
{
    ui->lineEdit->setText(ui->lineEdit->text() + "1");
});

2 : store the result of lineEdit when the user clicks on the add button

connect(ui->addButton, &QPushButton::clicked, [this]()
{
    firstNum = ui->lineEdit->text().toDouble();
});

3 : show the result of your input button

4 : show the result of the summation when the user clicks on the "=" button

connect(ui->resultButton, &QPushButton::clicked, [this]()
{
    auto const curNum = ui->lineEdit->text().toDouble();
    ui->lineEdit->setText(QString::number(firstNum + curNum));
});

These are the simplest ways I could think of.
There are many things to improve, like support for other numbers and operations, the ability to parse a long equation (e.g. 3 + 4 - 5 * 6 / 33.1), error handling, etc.

Something related to resource management: this will cause a memory leak:

Calc* dialog = new Calc;
dialog->show();

Use it like this (best solution in most cases):

Calc dialog;
dialog.show();

or (not as good as the first solution):

std::unique_ptr<Calc> dialog(new Calc); // c++14 supports make_unique
dialog->show();

I strongly suggest you study the concept of RAII and understand the parent-child relationship of Qt; they could make your life much easier. After you know how they work, you will find that C++ makes it easier to manage resources than those languages with GC (like Java, C#, JS, Python, etc.).

ps : I prefer to create the ui by handcrafting code when I study Qt, because this helps me learn more about Qt. At that time I did not even know what a widget, dialog, or mainWindow was, or what they were used for.

@tham parent-child relationship of Qt. I am trying to understand this better and apply it like he does. But this is not different from other languages. Friendship and inheritance, basically.

@Jeronimo said in My problems with my first example:

@tham parent-child relationship of Qt. I am trying to understand this better and apply it like he does. But this is not different from other languages. Friendship and inheritance, basically.

Looks like I did not express it clearly; I should have said "Object trees & Ownership". It is not about the concept of OOP but the memory management solution of Qt.

Links : Object trees & OwnerShip
Parent child relationship in Qt
https://forum.qt.io/topic/72139/my-problems-with-my-first-example
04-23-2010 06:31 AM

I've seen it mentioned a few times and I REALLY need to know this. How can I place my resources in a different, updateable cod? (I need to read some text-only configuration files that control the application's features.) I created an "appNameres.cod" file (contains config text files), and it installs to the phone. It also appears in the "modules" listing on the device (listed as "appNameres"). However, when I use

String filename = new String("cod://appNameres/appconfig.cfg");
InputStream is = file.getClass().getResourceAsStream(file);

it doesn't find the file I need. Can anyone help me?

Solved! Go to Solution.

04-23-2010 07:18 AM

I have a simple class in my separate resources COD that I pass whatever resource name I want, and it returns the stream to it. It might not be the way that you want it, but it reduces the problems of making it work.

04-23-2010 07:37 AM

Interesting idea, but could I ask you to provide a bit more info? How do I call the class? Say I have this class:

class ResLoader {
    public static InputStream loadResource(String filename) {
        InputStream is = file.getClass().getResourceAsStream(filename);
        return is;
    }
}

HOW exactly do I use it (sorry for the caps but I am kind of desperate here)?

04-23-2010 08:29 AM

Something like:

class Res {
    public static InputStream getResourceStream(String file) {
        return Res.class.getResourceAsStream(file);
    }
}

Just make sure you use a class in that COD file so it looks for resources in that COD file.
And when I hit build on the main project, it dies with the error: [javac] C:\Work\proj\work\fctmain.java:1973: cannot find symbol [javac] symbol : variable rimbbresloader [javac] location: class fctmain [javac] InputStream is = rimbbresloader.loadResourceFile("appconfig.cfg"); [javac] ^ [javac] 1 error Also I (kinda have to) use eclipse. How could I import the blackberry JDE-built library into eclipse? PS: Before getting this job I only used NetBeans (a bit), Code::Blocks and Visual Studio, oh and notepad. Eclipse and ant are pretty much arcane magic to me. 04-23-2010 02:22 PM I've finally managed to build the application, loaded the library resource *.cod to the simulator, loaded the app *.cod to the simulator but I get the following error : "Can't find entry point". The application does not run. 04-23-2010 03:06 PM @rcmaniac2: Okay! I've managed to make it run. And it loads the resources too. Now, I have another issue, the library requires for permissions to be set to "Allow". Otherwise it gives me "Interprocess Communication" error and quits. Any points on that? 04-23-2010 07:15 PM Do you have the resources COD compiled as a Library? 04-24-2010 01:10 AM I think you're getting that error because accessing a COD from another COD requires the COD's to be signed, or have the permissions set to allow the same in your settings. 04-24-2010 07:37 AM I have a COD that contains resources and a COD that contains code, I never have to sign the resource COD and the COD that contains COD only needed to be signed when I added functions that required signing so I don't think you need to sign it in order to get it to work.
https://supportforums.blackberry.com/t5/Java-Development/How-can-I-place-resources-in-a-separate-cod/m-p/490262
The PC parallel port can be damaged quite easily if you make incorrect connections. If the parallel port is integrated into the motherboard, repairing a damaged parallel port may be expensive, and in many cases it is cheaper to replace the whole motherboard than to repair that port. Your safest bet is to buy an inexpensive I/O card which has a parallel port and use it for your experiment. If you manage to damage the parallel port on that card, replacing it will be easy and inexpensive.

While every effort has been made to make sure the information in this article is correct, the author cannot be made liable for any damages whatsoever for loss relating to the use or implementation of this article. Use this information at your own risk.

In this article, we will connect a 3½ inch floppy drive to our computer's parallel port and write a program to control its stepper motor. We won't be taking the stepper motor out of the floppy drive, because a floppy drive has a built-in controller which can be easily used for controlling its stepper motor. The advantages of this are:

However, if you have a good background in electronics and you're interested in controlling a stepper motor without the disk drive electronics, read Stepper Motor Control through Parallel Port by Bhaskar Gupta.

Before we begin, I would recommend you go through I/O Ports Uncensored - 1 - Controlling LEDs (Light Emitting Diodes) with Parallel Port by Levent Saltuklaroglu, and be sure to read the sections on Parallel Ports and Hexadecimal / Decimal / Binary if you haven't already done so. Also, make sure that the floppy drive you use is in working order. I've wasted an entire day trying to make a broken one work. It's a waste of time.

So, what are stepper motors and how are they different from conventional electric motors? Simply put, a stepper motor is a brushless, synchronous electric motor that can divide a full rotation into a large number of steps.
Conventional motors spin continuously, while a stepper motor moves only one step at a time. Therefore, stepper motors are useful for precise motion and position control.

The simplest way to think of a stepper motor is a bar magnet and four coils. When current flows through coil "A", the magnet is attracted and moves one step to the right. Then, coil "A" is turned off and coil "B" turned on. Now, the magnet moves another step to the right, and so on… A similar process happens inside a stepper motor, but the magnet is cylindrical and rotates inside the coils. For a stepper motor to move, these coils should be turned on in the correct sequence. However, we don't have to worry about this, since we will be using the floppy drive's built-in controller.

The floppy cable is usually a flat, gray ribbon cable similar to the standard IDE cable. The floppy cable has 34 wires (the odd-colored wire is wire 1). There are normally five connectors on this cable, but some cables, like the ones I'm using, have only three. These connectors are grouped into three sets:

An image from The PC Guide illustrates:

Some connectors might supply only two wires, usually the +5 V and a ground pin. This is because the floppy drives in most new systems run only on +5 V and do not use the +12 V at all.

This part is really easy. All you need are:

Connect the 3.5" Drive "A" Connector on your floppy cable to your floppy drive. Now, make sure your computer is off and unplugged. Open your computer and take out the floppy power connector. Carefully plug this connector into your floppy drive. Reversing the red and yellow wires could fry your floppy drive. You'll see five notches on the power connector. They should point upward when they're installed. Fortunately, these connectors are keyed and therefore difficult to insert incorrectly. Check out the picture below:
Just cut the wires (1 red, 1 yellow, 2 black) and patch in a couple of feet. There's no need to extend the yellow wire, since it carries +12 V, which will not be used by the floppy drive. As you can see in the picture above, I've only extended 1 red and 1 black.

Now, take the motherboard end of the floppy cable and follow the instructions below to connect it to the parallel port cable. Here's a diagram I've made to show the connections:

Parallel port pin # 2 (D0) ------> Floppy pin # 20 (Step Pulse)
Parallel port pin # 3 (D1) ------> Floppy pin # 18 (Direction)
Floppy pin # 14 (Drive Select A) --> Ground

Here's a picture of the connections I made. I was using a defective parallel port cable, and so my parallel port pins are in reverse order. Don't let that confuse you. Just go with the diagram. Finally, connect the other end of the parallel port cable to your computer. That's it! Make sure all your connections are correct and there are no short circuits.

It's time to write some code. This is a fun and tricky part. It's tricky because even a small bug in your program could prevent the stepper motor from moving. I have used inpout32.dll for interoping. Download it from [^]. After downloading it, put it in your System32 folder and import it into your project:

using System;
using System.Runtime.InteropServices;

private class PortAccess
{
    [DllImport("inpout32.dll", EntryPoint="Out32")]
    public static extern void Output(int address, int value);
}

For sending values to our parallel port, we'll be using PortAccess.Output. This method takes two parameters, address and value. The address of the standard parallel port LPT1 is usually 888 (0x378 in hexadecimal), which is the value used in the code below.

For moving the stepper motor, we will have to pulse pin 20 on the floppy drive connector. The direction of movement will depend on the high/low state of pin 18. Now, since pins 20 and 18 are connected to pins 2 (D0) and 3 (D1) on the parallel port, pulsing pin D0 will move the stepper motor, and its direction will depend on the high/low logical state of D1.
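To make the D0/D1 reasoning concrete, here is a small Python helper — a sketch, not tied to any real port library — that computes the two values to alternate between for a given direction bit:

```python
D0_STEP = 0b01       # bit 0 -> parallel port pin 2, wired to Step Pulse (pin 20)
D1_DIRECTION = 0b10  # bit 1 -> parallel port pin 3, wired to Direction (pin 18)

def pulse_values(direction_high):
    """Return the (high, low) pair to write for one step pulse.

    Keeping D1 constant while toggling D0 produces the pulse:
    direction low  -> (1, 0); direction high -> (3, 2).
    """
    d1 = D1_DIRECTION if direction_high else 0
    return (d1 | D0_STEP, d1)
```

These are exactly the (1, 0) and (3, 2) pairs used in the sample code below.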
So, here's some sample code for moving the stepper motor 10 steps in one direction:

for (int i = 0; i < 10; i++)
{
    PortAccess.Output(888, 1);
    System.Threading.Thread.Sleep(50); // Delay
    PortAccess.Output(888, 0);
    System.Threading.Thread.Sleep(50); // Delay
}

I'm sending the values 1 and 0:

1 (Decimal) = 0001 (Binary)
0 (Decimal) = 0000 (Binary)

Here, I'm changing the high/low state of D0, but I'm keeping D1 constantly low. Therefore, the stepper motor will move 10 steps in one direction. Notice that I'm delaying the execution of the code after sending a value. This delay is needed to provide enough time for the magnetic field inside the coils to build up and move the magnet. Without this delay, the coils would switch on and off so fast that the magnet wouldn't move.

To move the stepper in the other direction, send the values 3 and 2:

for (int i = 0; i < 10; i++)
{
    PortAccess.Output(888, 3);
    System.Threading.Thread.Sleep(50); // Delay
    PortAccess.Output(888, 2);
    System.Threading.Thread.Sleep(50); // Delay
}

3 (Decimal) = 0011 (Binary)
2 (Decimal) = 0010 (Binary)

Here, I'm changing the high/low state of D0, but I'm keeping D1 constantly high. In the future, if you plan to use pins other than D0 and D1, always make sure that the values you send are correct. The Windows Calculator can be helpful for performing binary to decimal conversions. The first time I tried controlling a floppy drive stepper motor, I chose the wrong values and my stepper wouldn't budge! I was checking the connections over and over, but I had no clue that the problem was in my program! I wasted at least two days because of this.

Well, here's a screenshot of my 'working' application:

We have reached the end of this article. I hope you enjoyed it and successfully controlled your floppy drive stepper motor. Now what? Just let your imagination go wild! Stepper motors can be used to perform a variety of small tasks which require precise motion/position control (e.g. in robotics). I used mine to pan a camera!
Check it out on my blog: [^]. If you end up making something interesting, I'd love to hear about it. Happy coding!

This article has no explicit license attached to it, but may contain usage terms in the article text or the download files themselves. If in doubt, please contact the author via the discussion board below. A list of licenses authors might use can be found here.

private void motor_forward_Click(object sender, EventArgs e)
{
    MoveMotor(int.Parse(motor_speed_text.Text), int.Parse(motor_dur_text.Text), 1, 0);
}

private void motor_backward_Click(object sender, EventArgs e)
{
    MoveMotor(int.Parse(motor_speed_text.Text), int.Parse(motor_dur_text.Text), 3, 2);
}

and

private void MoveMotor(int speed, int time, Int32 onValue, Int32 offValue)
{
    for (int x = 0; x < time; x++)
    {
        Application.DoEvents();
        PortAccess.Output(adress, onValue);
    }
}
http://www.codeproject.com/Articles/16715/Controlling-Floppy-Drive-Stepper-Motor-via-Paralle?msg=2768573
- Author: vaughnkoch
- Posted: October 7, 2010
- Language: Python
- Version: 1.2
- Tags: testing request client testcase
- Score: 2 (after 2 ratings)

This is an update to Simon Willison's snippet, along with one of the comments in that snippet. This class lets you create a Request object that has gone through all the middleware. It is suitable for unit testing when you need to modify something on the request directly, or pass in a mock object. (Note: see for details on how to mock requests for testing.)

Example of how to use it:

from django.test import TestCase
from yourapp import your_view
from yourutils import RequestFactory

class YourTestClass(TestCase):
    def setUp(self):
        pass

    def tearDown(self):
        pass

    # Create your own request, which you can modify, instead of using self.client.
    def test_your_view(self):
        # Create your request object
        rf = RequestFactory()
        request = rf.get('/your-url-here/')
        # ... modify the request to your liking ...
        response = your_view(request)
        self.assertEqual(response.status_code, 200)

Suggestions/improvements are welcome. :)
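Since the RequestFactory implementation itself is not shown above, here is a toy, framework-free illustration of the same idea — build a request object directly and call the view as a plain function. None of these class or function names are Django's; a real Django view takes an HttpRequest and returns an HttpResponse:

```python
# Toy illustration of the "request factory" idea (NOT Django's API):
# construct a request object directly so a view can be exercised in a
# test without going through a full test client.
class FakeRequest:
    def __init__(self, method, path, **extra):
        self.method = method
        self.path = path
        self.META = dict(extra)

class ToyRequestFactory:
    def get(self, path, **extra):
        return FakeRequest("GET", path, **extra)

# A "view" is then just a function of the request:
def echo_view(request):
    return {"status_code": 200, "path": request.path}
```

The test then reads the same way as the Django example: create a request with the factory, tweak it as needed, and pass it straight to the view.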
https://djangosnippets.org/snippets/2231/
Google Book Search API turns out to be the library that I found quite appealing. Unfortunately, again, I found there lacks a simple walk-through tutorial that can direct me step-by-step. That motivated me to write a blog tutorial for this!

To get started, follow the steps below:

- Create a C# project; either a console or GUI (e.g. Winform or WPF) application will do;
- Create a class in the project, let's call it the BookSearch class;
- Add a static method, as shown below:

public static async Task<Volume> SearchISBN(string isbn)
{
    Console.WriteLine("Executing a book search request...");
    var result = await service.Volumes.List(isbn).ExecuteAsync();
    if (result != null && result.Items != null)
    {
        var item = result.Items.FirstOrDefault();
        return item;
    }
    return null;
}

- To make the above code compile, we need a service object:

public static BooksService service = new BooksService(
    new BaseClientService.Initializer
    {
        ApplicationName = "ISBNBookSearch",
        ApiKey = "abcdefghijklmnopqrstuvwxyz",
    });

where the string "abcdefghijklmnopqrstuvwxyz" is the API key Google gives you; I shall show you later how to get this! At the moment, let's focus on the main logic!

All right, everything seems trivial, but this won't compile at all. To get this code to compile, do the following:

- Use NuGet to install the Google Books API .NET lib:
  PM > Install-Package Google.Apis.Books.v1
- An alternative to using the NuGet Package console is: right-click on the References folder in the project, click the "Manage NuGet Packages" menu item, and then search for "Google Books API" in the popup dialog.
- The Google Books API and its dependent libraries will be installed.
- Build the project; everything should be fine.
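For comparison, the same lookup can be done over the raw REST endpoint from any language. The sketch below builds the query URL in Python; the https://www.googleapis.com/books/v1/volumes endpoint with a q=isbn: filter is the documented REST form of this search, and the key value is a placeholder:

```python
from urllib.parse import urlencode

BOOKS_ENDPOINT = "https://www.googleapis.com/books/v1/volumes"

def build_isbn_query_url(isbn, api_key=None):
    """Build the Google Books volumes query URL for an ISBN lookup."""
    params = {"q": "isbn:" + isbn}
    if api_key:
        params["key"] = api_key
    return BOOKS_ENDPOINT + "?" + urlencode(params)

# To actually run the query, fetch this URL (e.g. with
# urllib.request.urlopen) and read the JSON result's
# items[0]["volumeInfo"] for title, authors and publisher.
```

This is handy for sanity-checking your API key and ISBN in a browser before debugging the C# client.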
To test the code, I created a unit testing project, and added the test below:

[TestClass]
public class BookSearchTest
{
    [TestMethod]
    public void TestIsbnSearch()
    {
        string isbn = "0071807993";
        var output = BookSearch.SearchISBN(isbn);
        Assert.AreEqual(output.Result != null, true);
        var result = output.Result;
        Trace.WriteLine("\nBook Name: " + result.VolumeInfo.Title);
        Trace.WriteLine("Author: " + result.VolumeInfo.Authors.FirstOrDefault());
        Trace.WriteLine("Publisher: " + result.VolumeInfo.Publisher);
    }
}

Success! Almost done! But wait, where can I get the API key? You might be interested in this article to find out how. But to make this post self-contained, you might wish to do the following:

- Log in with your Google account.
- To acquire an API key, visit the APIs Console.
- In the Services pane, activate the Books API.
- Then go to the Credentials pane. Click the "Create new Key" button, and you will get an API key such as "abcdefghijklmnopqrstuvwxyz".

Once you have this API key, the above code will run without problems! You can get the example code of this project from GitHub here. Happy coding!

Remco June 16, 2015 at 8:01 pm
Hello Xinyustudio, I'm trying to turn this into a simple Windows Forms application, however without success. Your tutorial mentions that your code should work in a Windows Forms application, however I can't seem to get it to work. I've tested your code as a console application, which works perfectly. I can't seem to find a PM option on this website, hence this (somewhat long) comment. I've replaced my API key with "MyAPIKey" for obvious reasons. I hope you can give me some pointers! The code compiles without error, and from what I've been able to determine, it messes up when it tries to fill the variable 'result' in the SearchISBN function.
Here’s my form1.cs: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Threading.Tasks; using System.Windows.Forms; using Google.Apis.Auth.OAuth2; using Google.Apis.Auth.OAuth2.Flows; using Google.Apis.Books.v1; using Google.Apis.Books.v1.Data; using Google.Apis.Services; using Google.Apis.Util.Store; namespace BookSearchApp3 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } private static string API_KEY = “MyAPIKey”; public static BooksService service = new BooksService(new BaseClientService.Initializer { ApplicationName = “ISBNBookSearch”, ApiKey = API_KEY, }); public static async Task SearchISBN(string isbn) { MessageBox.Show(“Executing a book search request for ISBN: …”); var result = await service.Volumes.List(isbn).ExecuteAsync(); if (result != null && result.Items != null) { var item = result.Items.FirstOrDefault(); return item; } return null; } private void CheckISBN_Click_1(object sender, EventArgs e) { string isbn = textBox1.Text; var output = SearchISBN(isbn); var result = output.Result; MessageBox.Show(result.VolumeInfo.Title); MessageBox.Show(result.VolumeInfo.Authors.FirstOrDefault()); MessageBox.Show(result.VolumeInfo.Publisher); } } } xinyustudio June 17, 2015 at 9:10 am “I can’t seem to get it to work.” what is the error? Remco June 17, 2015 at 3:48 pm Ah shit, my bad, I run my form, fill out the ISBN, hit the check button and the “Executing a book search request” messagebox pops up, I hit OK and the program hangs. When I run the form the following things of note appear in my console: The thread 0xe6c has exited with code 259 (0x103). The thread 0xfc4 has exited with code 259 (0x103). After I fill in the ISBN and hit check the messagebox pops up, I hit OK and the following things of note appear in my console: The thread 0xed4 has exited with code 259 (0x103). 
The thread 0x13c0 has exited with code 259 (0x103).
The thread 0x1094 has exited with code 259 (0x103).
The thread 0x1484 has exited with code 259 (0x103).

After which the program hangs. I just noticed that in my comment the Volume part of the SearchISBN method name was omitted; it's there in the actual code, though. The result variable in the SearchISBN method seems to stay null when I try debugging; however, it's neatly filled when running your code as a console application. Thanks for taking a look. I'm still trying to get the hang of this whole C# thing, so any help is definitely appreciated!

Amit December 12, 2015 at 8:25 pm
Thanks for the wonderful article

Lyon February 9, 2016 at 5:34 pm
I have the same problem as Remco; if someone could help us, please.

siddharth February 9, 2016 at 9:39 pm
Same problem here... trying to make it work for a C# Windows Forms application.

makaveli75 March 5, 2016 at 10:18 pm
up

Lyon March 29, 2016 at 4:48 pm
Nobody to help?

Lyon March 31, 2016 at 3:14 pm
up

Ivan May 16, 2016 at 3:27 am
I found the same error as @Remco. I was able to solve it by adding "await" here:

var output = await SearchISBN(isbn); // inside private void CheckISBN_Click_1 (@Remco's code)

The task returns the requested data at the first try. However, I can see that the application continues creating/closing threads.

Ivan May 16, 2016 at 4:30 am
Just add "async" and "await":

private async void CheckISBN_Click_1(object sender, EventArgs e)
{
    string isbn = textBox1.Text;
    var result = await SearchISBN(isbn);
    MessageBox.Show(result.VolumeInfo.Title);
    MessageBox.Show(result.VolumeInfo.Authors.FirstOrDefault());
    MessageBox.Show(result.VolumeInfo.Publisher);
}
https://xinyustudio.wordpress.com/2014/12/18/google-book-search-in-c-a-step-by-step-walk-through-tutorial/
Class for the reordering buffer that keeps the data from the lower layer, i.e. TcpL4Protocol, sent to the application.

#include <tcp-rx-buffer.h>

Definition at line 40 of file tcp-rx-buffer.h.

Add(): Insert a packet into the buffer and update the availBytes counter to reflect the number of bytes ready to send to the application. This function handles overlap by trimming the head of the inputted packet and removing data from the buffer that overlaps the tail of the inputted packet.

Definition at line 137 of file tcp-rx-buffer.cc.

References ns3::Packet::CreateFragment(), ns3::TcpHeader::GetSequenceNumber(), ns3::Packet::GetSize(), NS_ASSERT, NS_LOG_FUNCTION, and NS_LOG_LOGIC.

Extract(): Extract data from the head of the buffer as indicated by nextRxSeq. The extracted data is going to be forwarded to the application.

Definition at line 220 of file tcp-rx-buffer.cc.

References NS_ASSERT, NS_LOG_FUNCTION, and NS_LOG_LOGIC.

Referenced by ns3::TcpSocketBase::Recv().
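The behaviour described above — buffering out-of-order segments and releasing only the contiguous run starting at nextRxSeq — can be illustrated with a short Python sketch. Sequence numbers are simplified to plain integers (no wraparound), and the overlap trimming done by the real ns-3 class is reduced to skipping duplicates:

```python
class ReorderBuffer:
    """Toy TCP-style receive buffer: hold segments keyed by sequence
    number, deliver only the contiguous prefix starting at next_rx_seq."""

    def __init__(self, next_rx_seq=0):
        self.next_rx_seq = next_rx_seq
        self.segments = {}  # seq -> bytes

    def add(self, seq, data):
        # The real Add() also trims overlapping head/tail data; this
        # sketch only stores in-window, non-duplicate segments.
        if seq >= self.next_rx_seq and seq not in self.segments:
            self.segments[seq] = data

    def extract(self):
        """Pop the contiguous bytes that are ready for the application."""
        out = b""
        while self.next_rx_seq in self.segments:
            data = self.segments.pop(self.next_rx_seq)
            out += data
            self.next_rx_seq += len(data)
        return out
```

An out-of-order segment sits in the buffer and yields nothing until the gap before it is filled, after which the whole run is delivered at once.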
https://coe.northeastern.edu/research/krclab/crens3-doc/classns3_1_1_tcp_rx_buffer.html
I have an environment where users' desktop and documents were redirected to a network share, with offline files syncing to their desktops. Offline availability was also enabled for other network shares. Last week users came in, and some had no files on their desktop when they logged in. The files were there on the server, but the share was listed as offline as seen from the user's computer. Even when browsing to the network share manually, it just shows as empty. If I checked from a user's computer that was not having this problem, I could see all of the files.

I tried adding the FormatDatabase key in the registry to clear the CSC cache and restarted, which did not fix the problem. I then took ownership of and deleted the CSC folder in the Windows directory, which also did not fix the problem. I then created a group policy to disable offline files completely, and still those shares are shown as empty for the affected desktops.

Now, this server has a DNS alias pointing to it, as there used to be a DiskStation that held these files. If I browse to these file shares using this name, then all computers can see the files. So I modified the group policy that redirects the users' folders to use this alias instead, and now they can see their files on their desktop.

Now I'm still trying to figure out how to clear up this nonsense with the original name. As of now, the desktop I'm testing from has offline files disabled. The CSC folder has been deleted. I'm browsing to the network shares using the DFS namespace \\ad.mydomain.com. I am then shown a list of shares. When I try to open any of the shares, there is a short delay and then I am shown an empty folder. If I do the same from a computer that did not have this problem last week, then I am shown the files in the share. If I browse to the same location using the alias \\diskstation, which is a CNAME for \\ad.mydomain.com, then I can see and access the files.
I've also discovered that if I browse to the network share using \\ad.mydomain.com, right-click one of the shares, and select "Map as network drive", I am able to access the files through that network drive. However, I'm still unable to access the files through the network path directly. All of the desktops are running Windows 10 Pro and the servers are Server 2016. I'm not sure how to fix this; there seems to still be some mechanism that redirects the folder to the offline cache even though offline files is disabled.

5 Replies

Got it. Due to some previous issues with offline files, I've been chasing that path looking for a solution. It was in fact much easier. The DFS Namespace service was stopped on both servers. The service started with no problem, and now the files are accessible again. Now the question is: why would the service be stopped on both servers?
https://community.spiceworks.com/topic/2205115-trouble-with-offline-files
Like every person, I have a burning desire to know who's in my house when I'm not. A few months ago, I decided that I had had enough of the uncertainty of my extradomicilial activities, and that I needed to do something about it. I realized that I had two options. The first option would be to hire someone to be in my house 24/7, but that would get a bit embarrassing when I wanted to watch reruns of Desperate Housewives. The other option would be a motion sensor that texts me when it detects motion. Luckily, this proved really easy to do with an Arduino. All I needed to get was the Arduino itself, and a PIR motion sensor, plus my home server. The motion sensor uses infrared to detect whether someone is moving, and outputs high or low accordingly, which the Arduino passes to the server via USB, and the server texts me using Twilio. Let me show you how the setup works.

Detecting motion

The thingy in the photo on the right is the sensor. It costs $3 or so, and it's a thing of beauty. You simply connect it to the Arduino, and it outputs HIGH when it detects motion, and LOW otherwise. In itself, that isn't very useful, but the Arduino can do much more with it. To make the motion sensor more useful, I decided to have the Arduino output "seconds since last motion" every second. This means that there's a counter that counts up every second there's no motion, and it resets when the sensor sees something moving, like my body. The code for this is pretty straightforward; it just resets the counter whenever there's motion and outputs the value every second on the serial (USB, really) port:

#define INPUT_PIN 5

unsigned long lastMotionTime = 0;
unsigned long lastOutputTime = 0;

void setup() {
    // Initialize various things.
    pinMode(INPUT_PIN, INPUT);
    lastMotionTime = millis();
    Serial.begin(9600);
}

void loop() {
    if (digitalRead(INPUT_PIN) == 1) {
        // When we sense motion, store the time.
        lastMotionTime = millis();
    }

    // Output the last motion time every second.
    if ((millis() - lastOutputTime) > 1000) {
        // Convert to seconds and print.
        Serial.println((millis() - lastMotionTime) / 1000);
        lastOutputTime = millis();
    }
}

This whole thing is pretty much two useful lines, one to reset and one to output. The rest is just C. Do note that millis() wraps around to 0 at some point (after about 50 days, I think?), which may trigger some odd behaviour, but just make sure to get out of the house once every two months or so and you should be fine.

Doing something with it

Now that we know how many seconds it has been since the last motion, it's very easy to do something based on that value. I wrote a very short Python script that will read the value every time it's output and will do something based on it. The "something" is that it will set an away variable to True if it detects no motion for more than five minutes, and, if the counter suddenly resets when away is True, it means there's someone home, so the server should text/email/otherwise notify me. Here's the code (it requires pyserial):

import serial

away = False

# Open the connection.
com = serial.Serial("/dev/ttyACM0", 9600)

while True:
    # Read a line, convert it to a number and store it.
    last_motion = int(com.readline())

    if away is False and last_motion > 5 * 60:
        # If there's no motion for more than 5 min,
        # we're not here any more.
        away = True
    elif away is True and last_motion < 10:
        # We are (were?) away, yet there's motion.
        # Someone's here!
        away = False
        send_message("Someone's home!")

The send_message function is just what notifies me, and it's rather outside the scope of this post. You can use the Twilio Python library, or send an email, or a push notification, or hook it up to a wet blanket so it can send smoke signals, whatever is your preference. I'd write some more stuff here, but this is a pretty simple setup, so I ran out of things to write.
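The away/home logic above is easy to factor into a pure function, which lets you test it without a serial port. The function and its name are my own invention; the thresholds follow the five-minute figure from the text:

```python
AWAY_AFTER = 5 * 60   # seconds of stillness before we consider ourselves away
HOME_WITHIN = 10      # a counter reset below this while away means an arrival

def detect_arrivals(readings, away_after=AWAY_AFTER, home_within=HOME_WITHIN):
    """Given successive 'seconds since last motion' readings, return the
    indices at which an arrival notification should be sent."""
    away = False
    arrivals = []
    for i, last_motion in enumerate(readings):
        if not away and last_motion > away_after:
            # Long stillness: mark ourselves as away.
            away = True
        elif away and last_motion < home_within:
            # The counter reset while away: someone just arrived.
            away = False
            arrivals.append(i)
    return arrivals
```

In the real script, each reading would come from com.readline() and each detected arrival would trigger send_message.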
I could tell you about how I hooked this up to the house lights so it can turn them on and off when I’m in or out, but that will probably be the subject of another post. As always, if you have any feedback, questions, or free money you want to send me, leave a comment below, or get me on Twitter. Have fun sensing motion, and remember: This doesn’t work for vampires.
https://www.stavros.io/posts/arduino-texting-motion-sensor/
Hi Team,

I have been trying to store our data to an extended path of an S3 connection present in Dataiku. Say the connection that was created takes us to bucket_1, and my project name is dummy_1_project. Hence, whenever we create a recipe to store the file through Dataiku folder creation in S3, by default it stores into: bucket_1/dummy_1_project/<folder_id_autogenerated_by dataiku>/. But I want my file to be stored at bucket_1/dummy_1_project/current_data/. Is there any way we can store it in some custom place without getting the autogenerated folder created?

Regards,
Shuvankar Mondal

Hi @shuvankarm,

By going to Settings > Connection of your output dataset, you can modify the "Path in bucket" to relocate your file. Please note that changing the path could lead to overlapping datasets. DSS defines how managed datasets and folders are located and mapped to paths based on the "Naming rules for new datasets/folders" section of your S3 connection. These settings are only applied when creating a new managed dataset or folder, and can be modified in the settings of the dataset. More information can be found here:

Best,
Elias

Thank you Elias. Yes, I am aware of this setting, where I can mention the desired path, and that is how we have been doing it. I was wondering if somehow we could do it in code without modifying the folder setting. I tried giving a different folder id, like:

path = dataiku.folder("current_data")

instead of:

path = dataiku.folder("ascd1234")

But this gives an error of not identifying "current_data". All I want is to not have to go into the folder and change the setting, but instead to achieve the same through code.

Hi @shuvankarm,

What you need to do is utilize the Python API for datasets, not managed folders; those are completely different.
import dataiku

client = dataiku.api_client()
project = client.get_project('YOUR_PROJECT_KEY')
dataset = project.get_dataset('NAME_OF_DATASET')
settings = dataset.get_settings()
raw_settings = settings.get_raw()
raw_settings['params']['path'] = '/YOUR/DESIRED/PATH'
settings.save()

Please note that even though you are not changing the settings of the dataset through the UI, you are still changing the settings of the dataset through the API. A full list of the Python APIs can be found here:

Thanks for the info. I was wondering where I should be putting this. This is what I tried:

1. Creating a Python recipe where the source is input_abc, one of the datasets created earlier. For this recipe I provided the output dataset name as output_abc.
2. At the beginning, after the default imports, I put the code that you mentioned, with changes to the path, project key and the dataset. Here, the dataset name I am providing is the output dataset of the Python recipe. The path I am mentioning is similar to: '/${PROJECT_KEY}/current_data'.

The thing is, on its first run it creates the dataset in the output_abc folder, but on its second run it creates the dataset in the current_data folder. Is this the correct behavior? Did I miss anything, or did I put anything wrong anywhere?

And thanks again for the info, though.
https://community.dataiku.com/t5/Setup-Configuration/Dataiku-to-store-file-into-custom-s3-folder/td-p/18197
Here I present version 2 (previously version 1) of the TINS windows binary package. All 19 entries in a single convenient download. Please let me know if something doesn't work (game crashes, dll not present, ...). I expect I will need to fix things, so keep an eye on this thread for updates. I'll open the tins site for voting soon.

--Martijn van Iersel | My Blog | Sin & Cos | Food Chain Farm | TINS 2022 is Aug 5-8

Everything worked for me except your entry (crashes on start up) and The Hostages, which wanted libpng13.dll.

My entry is not working? Well, stop the press! Could you do me a favor and try this debug version here, and if that one crashes too, run it through gdb and get a backtrace?

Never used GDB before, so um... But good news! It didn't get very far at all...

C:\Downloads\eleven_monkeys_win_bin_dbg\eleven_monkeys>gdb tins07
GNU gdb 5.2.1
Copyright 2002
"i686-pc-mingw32"...
(gdb) run
Starting program: C:\Downloads\eleven_monkeys_win_bin_dbg\eleven_monkeys/tins07.exe
warning: al-main INFO: Allegro initialised (instance 1)
warning: al-gfx INFO: Called set_gfx_mode(2, 640, 480, 0, 0).
warning: al-gfx INFO: First call, remembering console state.
warning: al-gfx INFO: Autodetecting graphic driver.
warning: al-gfx INFO: The driver will wait for vsync.
warning: al-gfx INFO: set_gfx_card success for 640x480x16.

Program received signal SIGSEGV, Segmentation fault.
0x7c901010 in end_ ()
(gdb)

And that's where it dies.

MSVC 8.0's debugger tells me it fails in find64.c:

#endif /* _USE_INT64 */
{
    WIN32_FIND_DATA wfd;
    DWORD err;
    _VALIDATE_RETURN( ((HANDLE)hFile != INVALID_HANDLE_VALUE), EINVAL, -1);
    _VALIDATE_RETURN( (pfd != NULL), EINVAL, -1);
    _VALIDATE_RETURN( (sizeof(pfd->name) <= sizeof(wfd.cFileName)), ENOMEM, -1);

    if (!FindNextFile((HANDLE)hFile, &wfd))
    {
        err = GetLastError();
        switch (err)
        {
        case ERROR_NO_MORE_FILES:
        case ERROR_FILE_NOT_FOUND:
        case ERROR_PATH_NOT_FOUND:
            errno = ENOENT;
            break;
now Since you probably compiled in MINGW I take it the info is pretty much useless. Anyway looking at your dependencies I think I can just compile this and play around with the code until it works. However I have an exam tomorrow and knowing my work habits it's probably in my best interest to fix this later I'll get around to it.. but just not yet. All you needed to do in GDB was to type "bt" or "backtrace" and hit take it that's after it crashes? New info: Thanks, that is what I needed. I haven't figured out what's going wrong yet. It's crashing on a call to al_findnext. Here is that part of code: Your binary crashes for me (Vista), but when I build it with VS 2005 it runs fine. Edit: Attached is my executable. --RTFM | Follow Me on Google+ | I know 10 people I can confirm that this version works. [edit]XP Home here. [edit2]Okay I got all entries working now. Found a copy of libpng13.dll, from last years tins entries . This is the part I like the most about speedhack-like competitions. -R Works fine on windows 2000 though my copy of allegro42.dll seems to be out of date, couldn't play Amarillion's game till I used the dll from the debug version from above. That would be it. I was using the allegro dll from this site for MSVC 8. When I took out the mingw version your original works. So it looks like it was a dll issue. So are the MSVC dll and mingw dll incompatible? This doesn't look good, the whole point of having dll's is to be able to share 1 with multiple applications, but as it seems, it doesn't end up working that way. I'd just like to point out the reason my game doesn't end is because on line 40 of CORRIDOR.cpp I have "==" when it should be "<=". It's as simple as that. Oh, and amarillion, you did get my source code, yes? It's not in the binary package above (I sent an email with the source code to ya). ------------Solo-Games.org | My Tech Blog: The Digital Helm Ok, I guess static linking allegro should fix the bug in my entry. 
Version 2 of the package is now available (linked above), with libpng13.dll included and a statically linked version of my entry "eleven monkeys".

Onewing: Yes, I got it, thanks. The source of your entry should now be available from the entries page on the TINS site.

"The procedure entry point _install_allegro_version_check could not be located in the dynamic link library allegro42.dll"

So much for not breaking ABI in 4.2. The 4.2.0 DLL does not work with 4.2.1 games.

It's not supposed to...

I wanted to try some of them for the heck of it but couldn't; where can I get the dlls? IIRC there used to be an installer for this.

-----------------
I'm hell of an awesome guy :)
https://www.allegro.cc/forums/thread/590922/664310
by Emmanuel Proulx and Lucian Agapie
01/31/2005

WebLogic Workshop 8.1 offers a world of possibilities to those who know how to extend it. This article gives a real-world example of how to extend Workshop by adding a menu item, a toolbar button, and a frame. It is the first in a series examining Workshop controls and extensions. Future articles will look at the help system, wizards, distribution, and other topics.

We're in an age of convergence. Convergence occurs when technologies merge. When BEA brought us WebLogic Workshop 8.1, it merged the best features of J2EE, web services, and the WebLogic Platform technologies. At the same time, BEA opened the door for merging custom technologies into Workshop. This is done using controls and extensions. Workshop encompasses a highly customizable user interface, which includes the development environment, classes, runtime environment, and so on. In this series we will explore some of these features. We will do so by developing a set of controls and extensions, called "plug-ins" by other IDE vendors. By the end of this series you should be able to quite comfortably build arbitrary extensions to the WebLogic Workshop environment.

Before we start writing our controls and extensions, we need a case study. Sometimes we want to collaborate with remote developers. Let's assume that they don't have access to a source control repository. A good way to send them your source code is through email. But the process of opening your email client, attaching a Java file, and sending it can take a few minutes. What if you could send the document directly from inside Workshop?

In this article we are going to create a simple email client, which will be activated by a button or, alternatively, by a menu item. By completing this project, you will learn how to:

Extension points are a mechanism that determines the interface between the main Workshop program and extensions.
It consists of a set of predefined places that allow the inclusion of arbitrary new objects, such as menus, toolbars, buttons, and views, on the main user interface. The Workshop core comes with a set of extension points. Custom extensions can use these extension points to add their own features. These custom extensions can also add their own extension points for other extensions to use. This creates a hierarchy of extension points.

Extension points are retrieved by name (text value). They can hold objects that are of various types, depending on the kind of extension point. This means all extension points must be documented in order to be usable. A list of basic extension points is available in the Workshop help system. To view this list, go to the menu Help | Help Topics, then navigate to the category:

Extension Development Kit -> Extending the WebLogic Workshop IDE -> Extension XML Reference

When an extension wishes to add itself to Workshop, it must do so in one of two ways:

- extension.xml
- extension.xml

There will be more coverage of the extension.xml file later on.

Typically, an extension is a Workshop project with the structure shown in Figure 1. Our project is going to follow the same outline.

Figure 1. Project structure

The following table describes these folders and files. Other folders aren't relevant for us. Notice that the last item isn't named like the folder it represents. Figure 2 shows the resulting JAR file after the extension is compiled and archived.

Figure 2. Java archive structure

In the JAR file you will find the compiled files in their packages and other support files as they appear in the project. The rest of this section provides a step-by-step guide to creating the email project. The complete source code is also available for download at the end of the article.
We will describe adding the extension by looking at several phases of development: creating the project structure, creating the build support files, creating the extension files, building the extension files, creating the support file, and, finally, implementing the extension.

In this phase we will show you how to create a Workshop project for the extension and set up the classpaths and the integrated debugger.

1. First, we are going to create a new application. Select File | New | Application from the main menu. In the New Application dialog box choose an Empty Application from the All category. Type "dev2dev" in the Directory field and "controls_extensions" in the Name field. Leave the sample Workshop server, which appears as default in the Server field, and then click on the Create button. A directory called controls_extensions and subdirectories called Modules, Libraries, and Security Roles will be created and will be visible in the Application pane. This is the application that will host our extension. A single application can contain many projects, such as extensions and controls.

2. Add a new project for the extension. Right-click on the folder named controls_extensions in the Application pane, and then select File | New | Project. In the dialog box called New Project select the Java Project in the Business Logic category. Type "my_email" in the Project Name field, and press the Create button. A new folder called my_email will appear in the Application pane under controls_extensions. This project represents a single extension. It will be built into a deployable extension JAR file.

3. Create a repository for the source files. Right-click my_email, select New | Folder, type "src" in the Create New Folder dialog box, and then press OK. This is where all Java files will be saved under their respective packages. Only this folder will be compiled. Other folders contain supporting files.

4. Set the properties for the project.
On the Application pane right-click on the project my_email, and select Properties from the menu. The project properties dialog box will appear. We will now set many things inside this dialog. Note: The provided source code may not compile if these properties aren't set.

a) Select the Paths category.

Classpath: Workshop extensions need a special library in order to be compiled and executed. This is the wlw-ide.jar library, and it contains all extension classes and interfaces. It is in the classpath when Workshop starts, but it is not in the classpath when compiling an extension. We need to add it. Click on the Add Jar button beside the Classpath list on the right side of the window, and browse to find BEA_HOME\weblogic81\workshop\wlw-ide.jar, where BEA_HOME is the folder in which BEA WebLogic is installed. This ensures the classes needed to build the application are provided.

Source path: We don't have to compile everything in our project. Only the src folder contains Java code. We can specify this here. Click on Add Path beside the Source Path list on the right side of the window. Browse to find the C:\bea\user_projects\applications\controls_extensions\src folder, and click Select Directory.

b) To compile our extension, we can rely on the default Workshop build mechanism. But this doesn't allow us to automatically deploy our extension. The only way we can do this is using an Ant build script. Workshop can create a default script that is customizable. Select Build on the left side of the window. On the right side under Build Type click on "Export to ant file" under "Use IDE build," and then press the OK button in the dialog box. A new file, exported_build.xml, has been created in the my_email folder. We will customize this file later on.

c) Workshop is a great tool for extension development because it has an integrated debugger. But before we can debug an application we must configure Workshop. Select Debugger on the left side of the window.
On the right side of the window, under Debugging Options, check "Build before debugging," uncheck "Pause all threads after stepping," and then select the radio button for "Create new process." Type "workshop.core.Workshop" in the Main Class field and enable "Smart debugging." The extension is now ready for debugging. Simply set a breakpoint in the code, and then run the debugger. A new instance of Workshop will come up, and you will be able to step into the code of your extension! For more information regarding debugging options, see Setting Up Extension Debugging Properties.

In this phase we will show you how to customize our Ant build file.

5. Build file customizations: Since "exported_build.xml" isn't a common name or even the norm, let's rename it. In the Application pane right-click on the file exported_build.xml under the my_email project folder, and select Rename from the pop-up menu. Rename it to "build.xml" in order to be recognized by Ant. Double-click on build.xml to open it in the editor window. We are going to add the following lines at the end of the build task:

<zip destfile="${platformhome.local.directory}/workshop/extensions/${output.filename}"
     basedir="${dest.path}" includes="**/*.*" encoding="UTF8">
    <zipfileset dir="${project.local.directory}"
                excludes="build.xml,**/CVS/**,**/*.java,${output.filename}"
                includes="**/*.*"/>
</zip>

Those lines will create a new JAR file in the extensions deployment folder, ready to be executed the next time Workshop starts. Below this, add the following:

<copy todir="${platformhome.local.directory}/workshop/lib"
      file="${app.local.directory}/APP-INF/lib/mail.jar"/>

This command will copy the third-party mail.jar library to its deployment folder. The last customization step is to make Workshop use our new Ant script instead of the default. Again, open the project properties, and in the Build category, click on the Use Ant Build radio button. Keep the defaults. At this point we have completed the preparation work.
Let's dig into the coding part.

6. Workshop expects to find the controls and extensions deployment descriptors in a folder called META-INF. Let's create it. Right-click on the my_email project folder, and choose New | Folder from the pop-up menu. In the dialog box type "META-INF," and press OK.

In this phase, we will show you how to create an extension point file and customize it to your needs. The mechanism for creating new extension points is beyond the scope of this article.

7. Add the file extension.xml. This serves to define the extension objects as explained previously. Right-click on the META-INF folder, and then choose New | Other File Types from the pop-up menu. In the New File dialog box that appears, choose "XML file" from the Common category, and name it "extension.xml." Then click the Create button. Be warned: If the file name or its contents have a typo, the extension will compile fine but will not work. There may not be any error message.

Let's write the contents of extension.xml. This file starts with the root tag <extension-definition>. Inside this there is a list of <extension-xml> tags. Each tag describes a single kind of extension point. We want to create a toolbar button and a menu item. These extensions are both added in extension points of type "action." Therefore, the following is used to specify action extensions:

<extension-xml

The "id" attribute contains a unique identifier for the kind of extension point that interests us. The syntax for the inside of the <extension-xml> tag is specified in the documentation. It is particular to each kind of extension point. For "actions," the syntax is two tags: <action-ui> and <action-set>. The first specifies the visible elements to create (menu, pop-up menu, toolbar button) and associates them to an action. The second lists those actions and the Java classes of type IAction.
Here's the complete listing:

<extension-definition>
    <extension-xml
        <action-ui>
            <menu id="email" path="menu/main" priority="100" label="E&amp;mail">
                <action-group
            </menu>
            <toolbar id="my_email" path="toolbar/main" priority="2" label="Email">
                <action-group
            </toolbar>
        </action-ui>
        <action-set>
            <action class="dev2dev.controls_extensions.EmailApplication"
                    label="E&amp;mail" icon="images/email.gif" show-
                <location priority="10" path="menu/email/messagesgroup"/>
            </action>
            <action class="dev2dev.controls_extensions.EmailApplication"
                    label="Email" icon="images/email.gif" show-
                <location priority="10" path="toolbar/my_email/default" />
            </action>
        </action-set>
    </extension-xml>
</extension-definition>

A few notes on this file:

The relative positioning of menu items and toolbar buttons is specified by an integer "priority" attribute.

The path attribute serves to reference an already existing menu or toolbar. It contains a list of IDs of the parent menus/toolbars, separated by forward slashes ("/"). How do you know what IDs to use? Unfortunately, this isn't well documented. We had to resort to looking inside the Shell extension (C:\bea\weblogic81\workshop\extensions\shell.jar), in the file extension.xml. This is where the basic menus and toolbars are created.

Keyboard shortcuts are represented by the "&" character (coded in XML it becomes "&amp;").

Images are loaded by the classloader. So they must be specified as a relative path to the root of the src folder.

There are two UI elements (a toolbar button and a menu item), but there's only one action. Both UI elements execute the same action.

In this phase we will show you how to create the support files for the project.

8. Create the file MANIFEST.MF in the META-INF folder. This tells Workshop where to look for any third-party libraries. In our case we need the JavaMail API library file. This is the content of MANIFEST.MF:

Class-Path: ../lib/mail.jar

The destination folder for the extension JAR file is C:\bea\weblogic81\workshop\extensions.
So one level up and into the lib folder means the file mail.jar is located in the folder C:\bea\weblogic81\workshop\lib.

9. The graphic files are to be put in the images subfolder inside src, as we just discussed. Graphics for toolbar buttons and menu items are 16x16 pixel GIF files, with transparent color set for the background. We created the file email.gif, so you don't have to draw one yourself; just save this icon to your computer:

In a previous section we defined the class to be executed for an action in the class attribute of the <action> element. Now we will implement this class.

10. Let's code the action class. Create the file /dev2dev/controls_extensions/EmailApplication.java. This class must implement the IAction interface. A convenient way to do this is to subclass DefaultAction. This way only the actionPerformed() method has to be implemented. This method is called by Workshop when the action is executed. In our case this method opens a new frame to enter the email information. We will create this frame later.
For now just type in the following listing:

package dev2dev.controls_extensions;

import com.bea.ide.actions.DefaultAction;
import com.bea.ide.ui.frame.FrameSvc;
import java.awt.event.ActionEvent;

public class EmailApplication extends DefaultAction {
    public void actionPerformed(ActionEvent e) {
        FrameSvc.LayoutConstraints layout = new FrameSvc.LayoutConstraints();
        layout.askAvailable = false;
        layout.exact = false;
        layout.focus = true;
        layout.hasAction = false;
        layout.hasMenu = false;
        layout.icon = null;
        layout.insert = true;
        layout.label = "eMail";
        layout.open = true;
        layout.orientation = FrameSvc.NORTH;
        layout.proportion = 0.20;
        layout.scope = null;
        layout.viewClassDest = null;
        layout.viewIdDest = "main";
        layout.visible = true;
        FrameSvc.get().addView("dev2dev.controls_extensions.EmailFrame", layout);
    }
}

This piece of code is an example of how one can programmatically instantiate an extension. One could also display this same pane by adding a frame extension in extension.xml. Workshop contains many classes that allow manipulating its windowing system and internal state. The FrameSvc is a helper class to create new panes in the Workshop GUI. Here we summon the EmailFrame class by name, and we specify its placement using a LayoutConstraints object. Refer to Workshop's documentation for help with these classes. This is available in the menu Help | Help topics, then in the category WebLogic Workshop Reference, Workshop API Javadoc Reference, Extension API Reference. You will have hours of fun browsing the many classes here.

11. The addView() call in the previous listing will fail until we add the missing class. Create the file /dev2dev/controls_extensions/EmailFrame.java. A Frame is a class that implements the IFrameView interface. This interface's two methods are:

isAvailable(): Indicates (true or false) if Workshop is in a state that allows displaying the frame.
We could forbid displaying our frame in certain situations if we wanted to.

getView(): Returns a visual Component to display as the frame. Usually this is a JPanel.

package dev2dev.controls_extensions;

import java.awt.Component;
import java.awt.FlowLayout;
import java.util.Properties;
import javax.mail.Address;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Transport;
import javax.mail.internet.InternetAddress;
import javax.mail.internet.MimeMessage;
import javax.swing.JButton;
import javax.swing.JLabel;
import javax.swing.JOptionPane;
import javax.swing.JPanel;
import javax.swing.JTextField;
import com.bea.ide.Application;
import com.bea.ide.document.IDocument;
import com.bea.ide.ui.frame.IFrameView;
import com.bea.ide.util.IOUtil;

public class EmailFrame extends JPanel implements IFrameView {
    JTextField toField;
    JTextField subjectField;
    JButton sendBtn;
    Session mailSession;

    public EmailFrame() {
        this.initComponents();
    }

    public Component getView(String arg0) {
        return this;
    }

    public boolean isAvailable() {
        return true;
    }

    private void initComponents() {
        this.setLayout(new FlowLayout());
        sendBtn = new javax.swing.JButton();
        sendBtn.setText("Send");
        sendBtn.addActionListener(new java.awt.event.ActionListener() {
            public void actionPerformed(java.awt.event.ActionEvent event) {
                Session session = getMailSession();
                MimeMessage msg = new MimeMessage(session);
                try {
                    String messageText;
                    IDocument doc = Application.getActiveDocument();
                    messageText = IOUtil.read(doc.getURI());
                    msg.setText(messageText);
                    Address fromAddr = new InternetAddress(
                        "sender@example.com", "Joe Doe");
                    msg.setFrom(fromAddr);
                    Address toAddr = new InternetAddress(toField.getText(), null);
                    msg.addRecipient(Message.RecipientType.TO, toAddr);
                    msg.setSubject(subjectField.getText());
                    Transport.send(msg);
                } catch (Throwable e) {
                    JOptionPane.showMessageDialog(null, e.toString(),
                        "Error while sending mail!", JOptionPane.ERROR_MESSAGE);
                }
            }
        });
        JLabel toLabel = new JLabel("To:");
        toField = new JTextField(40);
        JLabel subjectLabel = new JLabel("Subject:");
        subjectField = new JTextField(40);
        add(toLabel);
        add(toField);
        add(subjectLabel);
        add(subjectField);
        add(sendBtn);
    }

    protected Session getMailSession() {
        if (mailSession == null) {
            Properties p = new Properties();
            p.put("mail.host", "mailServerAddress");
            p.put("mail.user", "userName");
            p.put("mail.password", "password");
            mailSession = Session.getDefaultInstance(p, null);
        }
        return mailSession;
    }
}

In short, we have here a JPanel that contains a To field (destination address), a Subject field, and a Send button. Inside the Send button's listener, we use the JavaMail API to send an email with the document opened in the editor. Three interesting classes are used here. One is Application, which is used to retrieve the active (opened) document. The second is IDocument, an interface that represents that document. This has a getURI() method that returns the location of the file. Finally, there's the very cool IOUtil, used here to copy the file into a String. These are all documented in the Extension API Reference as noted above.

After the project has been built, select Debug | Start from the main menu. Note the email icon on the left side of the toolbar as well as the new item named Email in the main menu. If you click on the icon a new frame is going to appear along the top side with the To and Subject fields and the Send button, as you can see in Figure 3.

Figure 3. Result of the project

With Workshop's powerful, easy-to-use extension mechanism, convergence is achievable. Whatever technology you use in your company, you can now easily integrate it into Workshop as an extension or a control. As you can see, there are many more tools to investigate in Workshop, so there are countless applications one can develop.
In the next article we will explore the Workshop help system, and we will supplement a control with documentation suitable for official validation (validated controls and extensions can be added to the Premier Component Gallery).

Emmanuel Proulx is an expert in J2EE and SIP. He is a certified WebLogic Server engineer.

Lucian Agapie is a senior electrical engineer, member of IEEE, who enjoys writing software applications in Java, C++, and TCL/TK.
http://www.oracle.com/technetwork/articles/entarch/wlwseries-extensions-089994.html
How to instantly destroy a conversation?
Israel Fonseca  Mar 23, 2009 9:53 PM

Is there any way to instantly destroy the actual (long-running or not) conversation? I mean, something like: Conversation.instance().immediateEnd().

Thks, Israel

1. Re: How to instantly destroy a conversation?  Ingo Jobling  Mar 24, 2009 1:18 AM (in response to Israel Fonseca)

2. Re: How to instantly destroy a conversation?  Israel Fonseca  Mar 24, 2009 3:29 PM (in response to Israel Fonseca)

Thks for the thread Ingo, but it still doesn't solve my problem. Look at my example:

My page:

<h:form
Value: <h:outputText<br/>
Input: <h:inputText<br/>
<h:commandButton
<h:commandButton
<h:commandButton
<h:commandButton
<h:commandButton
</h:form>

My test class:

@Name("test")
@Scope(ScopeType.CONVERSATION)
public class Test {

    private String value;

    // Should not work, just to test it
    @End
    public void end() {
    }

    // I don't know if it should work, but it doesn't anyway.
    @End(beforeRedirect = true)
    public void endBeforeRedirect() {
    }

    // Should not work too.
    public void postback() {
    }

    // I think that should, but it doesn't too.
    public void endBeforePostback() {
        Conversation.instance().end();
        Conversation.instance().leave();
    }

    // The only solution that I found, but what happens if I want to destroy all
    // the objects in this current conversation?
    public void endBeforePostbackAlternative() {
        Contexts.getConversationContext().remove(getMyName());
    }

    public void setValue(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }

    private String getMyName() {
        Annotation an = this.getClass().getAnnotation(Name.class);
        return ((Name) an).value();
    }
}

In this example I try to find a way to place a text in the input, and not show it afterward. Why do I want this? I use a lot of AJAX in my pages, and I don't like/want to use different pages; CRUDs in fact are all in one page, so I need to control the conversation and be able to destroy it right away, to re-render a part of the page with fresh values. The only way that I found is endBeforePostbackAlternative, but what if I want to clear all the objects in the current conversation? I wanted something generic to do this job. Any idea? And thks again! o>
https://developer.jboss.org/thread/186915
For a class X and a QSet<X*>, how is it possible to make sure that the QSet doesn't contain duplicate elements? The unique property in each object of type X is a QString that can be fetched using getName(). I've implemented the qHash(X*) function, the operator==(), operator<() and operator>(), but the QSet still accepts duplicate elements, i.e., those with the same Name. Could someone help me out in making this work?

Ok. Here's what I'm trying to do. I have a class Y and a class X, both of which inherit QDialog. A function in class Y (a slot) is responsible for spawning objects of class X. The dialog for Y is to be made responsible for the X objects spawned. This is why I created a QSet<X*> member in Y.

The problem is that you cannot overload operator== like this:

bool operator==(X*, X*);

This is because at least one of the arguments must be of class type. Since you say you implemented operator==, I suppose you did something like this:

struct X {
    bool operator==(X*) const;
};

This operator will never be called when QSet tries to find duplicates because it needs a left argument of type X and a right of type X*.

I can see two possible solutions to this problem:

- Store the elements by value (QSet<X>). This will allow you to overload the correct operators. This solution, however, is not always feasible.
- Use another container, such as a QMap, which works without needing to overload any operators nor the qHash function.

Edit: If your design allows to create multiple X-objects with the same id but you only want one such object to exist at any time, maybe it's best to use a QMap which maps from id to X*. When you create a new object, do something like this:

QString newId = ...;
delete objectsMap[newId];
objectsMap[newId] = new X(newId);

Depending on your exact requirements, you could use a sorted vector together with std::unique (which accepts a custom binary predicate for comparison).

Could you use QMap instead? Your dialog would have member variable QMap<QString, X*> items.
Then the checking and creating new X's would be like:

QString name = "foo";
if (!items.contains(name)) {
    items[name] = new X(name);
} else {
    // "foo" already exists
}

Maybe this is not as elegant a solution as using QSet might be, but I think this is shorter and easier to understand.

I get exactly the same problem. In the end I got here. My solution is very simple. If class QSet can't do what I want, why not use its object in my class, with added code for every function I need. Here is my solution:

Declaration of the Set class:

#pragma once

#include <Plant.h>
#include <qset.h>

class Set {
public:
    Set(void);
    ~Set(void);
    bool contains(Plant *plant);
    QSet<Plant*>::iterator insert(Plant *plant);
    QSet<Plant*>::iterator erase(Plant *plant);
private:
    QSet<Plant*> plants;
};

Definition of the Set class:

#include "Set.h"

Set::Set(void) {
    plants = QSet<Plant*>();
}

Set::~Set(void) {
}

bool Set::contains(Plant *plant) {
    for (int i = 0; i < plants.size(); ++i) {
        if (plants.values().at(i)->compare(plant))
            return true;
    }
    return false;
}

QSet<Plant*>::iterator Set::insert(Plant *plant) {
    if (!contains(plant))
        return plants.insert(plant);
    return plants.end(); // nothing inserted; return a well-defined iterator
}

QSet<Plant*>::iterator Set::erase(Plant *plant) {
    QSet<Plant*>::iterator it;
    for (it = plants.begin(); it != plants.end(); ++it) {
        if ((*it)->compare(plant)) {
            return plants.erase(it);
        }
    }
    return it;
}

It worked for me very well.
http://m.dlxedu.com/m/askdetail/3/b42a5b8a7b6128dab07a5626b501b980.html
The main purpose of this application is to ask the user for head or tail and then output the number of guesses + the percentage of correct guesses. I'm having trouble with finding the percentage of correct guesses. Please help me fix this problem.

import java.util.Random;
import java.util.Scanner;

public class FlipCoin {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        Random r = new Random();
        int user;
        int answer = r.nextInt(2) + 1;
        int quit = -1;
        int Guess = 0;
        int percent;

        do {
            System.out.println("1 for head and 2 for tails:");
            user = in.nextInt();
            Guess += 1;
            System.out.println("Type in 1 for heads and Type 2 for Tails.");
            System.out.println("Press" + " " + (quit) + " " + "to Quit.");
            answer = r.nextInt(2) + 1;
            int HEAD = 1;
            int TAIL = 2;
            if (user == answer)
                System.out.println("You win!");
            else if (user != answer)
                System.out.println("Try again!");
            percent = ((HEAD / TAIL) * 100);
        } while (user != -1);

        System.out.println("You made" + " " + Guess + " " + "guess");
        System.out.println("You Quit!");
        System.out.println(percent + "Percent");
    }
}
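For what it's worth, the percentage the post is after is just correct guesses divided by total guesses, with the multiplication done before the integer division so the result doesn't collapse to 0 or 100. A minimal sketch of that bookkeeping, separated from the I/O loop (class and method names here are illustrative, not from the thread):

```java
import java.util.Random;

public class CoinStats {
    private int guesses = 0;
    private int correct = 0;

    // Record one round: the user's pick and the actual coin result.
    public void record(int userPick, int coin) {
        guesses++;
        if (userPick == coin) {
            correct++;
        }
    }

    // Multiply by 100 before dividing so integer division keeps the precision.
    public int percentCorrect() {
        return guesses == 0 ? 0 : (correct * 100) / guesses;
    }

    public static void main(String[] args) {
        CoinStats stats = new CoinStats();
        // Simulate a few fixed guesses instead of reading stdin.
        stats.record(1, 1); // correct
        stats.record(2, 1); // wrong
        stats.record(2, 2); // correct
        stats.record(1, 2); // wrong
        System.out.println(stats.guesses + " guesses, "
            + stats.percentCorrect() + "% correct");
    }
}
```

In the posted loop, the same idea would mean incrementing a `correct` counter inside the `if (user == answer)` branch and computing `percent = (correct * 100) / Guess` after the loop, instead of the constant `(HEAD / TAIL) * 100`. A `Random` seed could be fixed for reproducible test runs; it is unused in this sketch.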
https://www.daniweb.com/programming/software-development/threads/397876/finding-the-percentage
Demystifying the IPsec Puzzle
5 Certificates and CRLs

currency of a certificate. OCSP provides more up-to-the-minute information, but that protocol has its own complications for IKE, because an IKE negotiation can time out while waiting for an OCSP response.

10.6 Certificate Formats

For certificates and CRLs to be universally useful, it is important to establish a standard, unambiguous way in which to describe their components. Ideally, that is the function of Abstract Syntax Notation One [18, 19], generally referred to as ASN.1. It is a symbolic language, consisting of a series of rules that, together, definitively describe a composite object; in our case, the ultimate objects we want to define are certificates, CRs, and CRLs. That is accomplished in an iterative manner. The initial ASN.1 rule describes the highest-level object in terms of a series of components. Successive rules refine the definition of each component in an increasingly concrete manner, until the lowest level, that of digits and characters, is reached.

Figure 10.2 shows the ASN.1 representation of two portions of a certificate. The first rule defines the general structure of a certificate, which consists of a tbsCertificate, the portion of the certificate that will be digitally signed, an identifier for the algorithm used to create the digital signature, and the signature itself. The second rule defines the time-related validity period of the certificate.

Now that we have an abstract way to describe certificates, we need to be able to translate this structure into an encoding that consists of bits and bytes. That is where basic encoding rules (BER) and distinguished encoding rules (DER) come in [20]. Each ASN.1 component is assigned a unique identifier, a numeric object identifier (OID).
Certificate ::= SEQUENCE {
    tbsCertificate        TBSCertificate,
    signatureAlgorithm    AlgorithmIdentifier,
    signatureValue        BIT STRING }

Validity ::= SEQUENCE {
    notBefore  Time,
    notAfter   Time }

Figure 10.2 Sample ASN.1 rules.

BER and DER are used to translate the abstract definition, using OIDs and the specific data appropriate to an individual case, into an encoded certificate. Figure 10.3 shows the DER encoding of a sample certificate field, the email address jdoe@bb.gov, along with two BER alternative encodings. The first example is the DER encoding; the second is an alternative BER encoding; and the third shows a BER encoding with the e-mail address broken up into three components: jdoe, @, and bb.gov.

Why two alternative encodings? The BER rules often allow the same object to be encoded in several different ways, while the DER rules define a single encoding for each case. BER can be more efficient to implement, because its alternative formats generally allow a program to encode or decode an object in a single pass, without the necessity to look ahead for coming attractions. Using DER may necessitate some lookahead, but a single standard encoding is essential to ensure that the verifying signature is computed over the same entity.

Now that we have presented samples of ASN.1, DER, and BER for the readers' edification and mystification, we will not delve further into their minutiae. Here's where it gets even more complicated, if possible. DER-encoded certificates need to be stored in repositories and transmitted over the network. Some transmission methods, such as email, cannot handle binary objects. That gave rise to the Privacy Enhanced Mail (PEM) [21] encoding of the DER encoding of an ASN.1 certificate. PEM-encoded certificates and CRLs thus can be sent as email attachments. PKCS#10 CRs and PKCS#7 cryptographic objects are defined over the DER format of a certificate.
They can be transformed into, but are not equivalent to, the PEM-encoded versions. And let us not forget the PKCS#7-wrapped version of PKCS#10 objects.

16 0a 6a 64 6f 65 40 62 62 2e 67 6f 76
16 81 0a 6a 64 6f 65 40 62 62 2e 67 6f 76
36 13 16 04 6a 64 6f 65 16 01 40 16 06 62 62 2e 67 6f 76

Figure 10.3 Sample DER and BER encodings.

As if that were not confusing enough, the ASN.1 definitions and OIDs for various certificate pieces are defined in numerous documents, some intended for the universal PKI domain and some aimed at specific subsets of that domain. IPsec implementers have tried to do an end run around some of this confusion by holding periodic IPsec interoperability workshops, also known as bake-offs. That allows developers to compare certificate contents and formats. At the end of each workshop, a list of issues that cropped up during the workshop is compiled. Solutions are discussed on the IPsec email list, and, once consensus is reached, those solutions are publicized in Internet Drafts. For vendors who are latecomers to the process, the email list archives supply a record of previously discussed issues, the array of proposed solutions, and the rationale behind the ultimate consensual solution.

10.7 Certificate Contents

For an end user of IPsec, it would be nice to treat certificates as opaque entities that merely serve as grist for the IPsec mill. If that were the case, the fortunate end user would not need to be aware of the fields and the data contained within the certificate. Unfortunately, the literature and standards are replete with quaint compound terms such as subjectAltName and Distinguished Name. X.509 certificates consist of a number of basic fields found in all certificates and a number of optional extensions, added in X.509 version 3. In addition, communities of certificate users can agree on the definition, format, applicability, and use of other extensions. The basic fields are as follows.

• Version.
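The hex strings in Figure 10.3 follow BER/DER's tag-length-value pattern: a one-byte tag (0x16 is IA5String), a length, then the content octets. A toy short-form encoder, just to make the structure concrete (this is my illustration, not a full DER implementation; it reproduces the bb.gov component that appears in the figure's third line):

```java
public class DerDemo {
    // Encode a short-form TLV: tag byte, length byte (< 128), then the value octets.
    static byte[] tlv(int tag, byte[] value) {
        if (value.length >= 128) {
            throw new IllegalArgumentException("long-form lengths not handled here");
        }
        byte[] out = new byte[2 + value.length];
        out[0] = (byte) tag;
        out[1] = (byte) value.length;
        System.arraycopy(value, 0, out, 2, value.length);
        return out;
    }

    static String hex(byte[] b) {
        StringBuilder sb = new StringBuilder();
        for (byte x : b) sb.append(String.format("%02x ", x));
        return sb.toString().trim();
    }

    public static void main(String[] args) {
        // 0x16 = IA5String; "bb.gov" is 6 octets, so the length byte is 06.
        byte[] der = tlv(0x16, "bb.gov".getBytes());
        System.out.println(hex(der));
    }
}
```

The second line of the figure illustrates a BER alternative: a long-form length (0x81 followed by the length octet), which DER forbids when the short form suffices.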
Identifies whether the X.509 conventions used in the certificate conform to version 1, 2, or 3. For PKIX and IPsec, version 3 certificates are used.

• Serial number. A number assigned by the CA that is unique among all the CA's certificates.

• Signature. The identifier (OID) of the algorithms used by the CA to hash and digitally sign the certificate. Two examples mentioned in the IKE PKI profile are id-dsa-with-sha1 and sha-1WithRSAEncryption. The IKE PKIX profile suggests that all IKE implementations should be able to handle both RSA signatures and DSA signatures using the SHA-1 hash algorithm. As mentioned in Chapter 4, DSA can be computed only over a SHA-1 hash, but RSA can use a variety of hash algorithms, including MD5 and SHA-1.

• Issuer. The distinguished name (DN) of the CA. It generally is made up of a series of fields that uniquely characterize the CA. Figure 10.4 contains two DNs, the first of which could apply to a CA. Following are some of the fields that can be used within the DN and examples of their use.

  • Country (C): C = United States
  • Organization (O): O = Bureau of the Budget
  • Organizational unit (OU): OU = Red Ink Department

• Validity. The start and end dates that delineate the certificate's lifetime. If an IKE SA is authenticated via a certificate, or an IPsec SA is generated using this type of IKE SA, the IKE PKI profile does not allow either SA to expire any later than the certificate's expiration date. It also requires IKE to check that no certificates in the path from the peer's certificate up to the issuing CA have been revoked.

• Subject. The DN of the certificate's holder. The second distinguished name in Figure 10.4 could appear as a certificate's subject. All the fields shown for a CA's DN can also be used for a certificate holder's DN.
Some additional DN fields appropriate only for the holder's DN are these:
• Common name (CN): CN=Joe Smith
• Surname (SN): SN=Smith
• Given name (GN): GN=Joe
• Personal name (PN): PN=SN=Smith, GN=Joe

The DN was originally intended to place its subject at a unique node in the X.500 directory information tree (DIT), which was supposed to organize the whole world into a uniform, hierarchical framework. Because a unified framework has not been established, this field is of dubious value, and some of its lesser-used components (such as organizationalUnitName, localityName, and stateOrProvinceName) are applied differently, if at all, in different domains.

• Subject's public key information. The public key algorithm to be used in conjunction with the certificate holder's public key and the key itself.

C=US, O=Bureau of the Budget, CN=Federal PKI, L=Baltimore
C=US, O=Bureau of the Budget, OU=Red Ink Department, CN=John Doe, L=Gaithersburg
Figure 10.4 Sample distinguished names (DNs).

• Unique subject and issuer (CA) identifiers. These fields are intended to ensure that a CA cannot issue multiple certificates that have the same owner's name but were actually issued to disparate entities. They also guard against the problem of multiple CAs with the same issuer name. PKIX disapproves of this approach and recommends careful use of issuer and subject namespace instead.

• Signature algorithm. The identifiers of the algorithms used by the CA to hash and digitally sign the certificate. This field is not cryptographically protected by the digital signature, to enable the certificate's users to verify the signature. It is duplicated in the signature field mentioned above, which is included in the digital signature; that ensures that an attacker cannot disable use of the certificate by altering this field.

• Signature value. A hash of the DER-encoded form of the certificate's contents, digitally signed with the CA's private key.
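The DER-versus-BER distinction behind Figure 10.3 can be made concrete with a few lines of code. The following is my own illustrative sketch (not from the book): a minimal reader for a single primitive ASN.1 TLV with definite lengths, showing that a short-form and a long-form length encoding of the same IA5String (tag 0x16) carry identical contents, while only the shortest encoding is valid DER.

```python
# Illustrative sketch: decode one primitive ASN.1 TLV (tag, length, value).
# Real decoders also handle constructed types, high tag numbers, and
# indefinite lengths; this covers only what Figure 10.3 exercises.

def read_tlv(data: bytes):
    """Return (tag, value, is_valid_der) for one primitive TLV."""
    tag = data[0]
    first = data[1]
    if first < 0x80:                      # short form: length in one byte
        length, offset, valid_der = first, 2, True
    else:                                 # long form: 0x80 | n, then n length bytes
        n = first & 0x7F
        length = int.from_bytes(data[2:2 + n], "big")
        offset = 2 + n
        # DER mandates the shortest possible length encoding, so the
        # long form is only legal when the length does not fit in 7 bits.
        valid_der = length >= 0x80
    return tag, data[offset:offset + length], valid_der

email = b"jdoe@bb.gov"                            # IA5String contents, tag 0x16
der = bytes([0x16, len(email)]) + email           # short-form length (DER)
ber = bytes([0x16, 0x81, len(email)]) + email     # long-form length (BER only)

for enc in (der, ber):
    tag, value, valid_der = read_tlv(enc)
    print(hex(tag), value.decode(), "valid DER:", valid_der)
```

Both encodings decode to the same address; only the first is acceptable in the DER-encoded material that certificate signatures are computed over.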
The X.509 data definitions include multiple extensions, some of which are necessary for Internet-related communications. To interoperate, there must be agreement on support for those extensions. The handling of optional extensions also must be defined. That is an important step toward the interoperation of two implementations, one of which includes optional extensions but does not necessarily expect the peer to process them, and the other of which can ignore those extensions without rejecting the peer's whole data object. Extensions to the basic certificate fields can be processed in several different ways. If they are marked as critical fields within the certificate, certificate users must be capable of processing and acting on the extension fields' information; otherwise, the certificate must be ignored. Extension fields not marked as critical can be ignored by certificate users that do not accept or understand that particular extension. Some of the more commonly accepted extensions are the following.

• CA. This extension includes the cA bit, used to identify a CA's public key certificate, whose private key can be used to sign other certificates as well as its own. When this extension is used and the cA bit is on, the maximum nesting depth of lower-level CA certificates may be specified. This extension's official name is basic constraints. PKIX requires this extension to be present and to be marked as critical in all CA certificates. The cA bit cannot be on for certificates whose owner is not a CA.

• Alternative name. This GeneralName (GN) contains any identifying names of the certificate's holder that do not fit the DN format, for example, email address, fully qualified domain name (FQDN), IP address, or URL. If the certificate holder does not have a DN, this field must be present and is considered a critical field. The DN and any alternative names are the identities that are bound to the certificate's keys.
This field is formally labeled subjectAltName, a term that is often found in the PKI literature and commonly used by PKI aficionados. To add to the confusion, PKI documents frequently refer to email addresses as RFC822 [22] names. For IKE, one of the names in the certificate must match exactly the peer's phase 1 ID payload; the ID types and content must be identical. For example, if the initiator's phase 1 ID is a DN, it must match the DN in the certificate presented to the responder. If the responder's phase 1 ID is an email address, one of the names that constitute the certificate's subjectAltName field must be the same. The IKE PKI profile allows (but does not require) an IKE participant to terminate an IKE negotiation if this field contains an IP address or DNS domain name that is deemed unacceptable in the context of the current negotiation. When a peer's certificate is accessed and examined prior to an IKE negotiation, that information can be used by an initiator to generate the appropriate proposals or by a responder to evaluate the initiator's proposals. If the certificate is sent as part of an IKE negotiation, an unfortunate situation can occur. In the digital signature mode, the certificates are exchanged after the protection suite has been negotiated. Thus, a proposal may have been proposed or accepted based on the IP address from which the peer sent the packet, which may not correspond to the address or other identity information found in the certificate. In the public key encryption modes, when a responder has multiple certificates, the relevant one is identified after the exchange of proposals, with the responder possibly facing the same dilemma as in the digital signature mode. In such a case, the only possible solution might be to terminate peremptorily the current phase 1 negotiation, optionally starting a new negotiation that takes into account the ID information that has been gleaned from the certificate.

• Key usage.
Suggests or mandates the uses to which the certificate's public-private key pair can be put, including digital signature, key encipherment (i.e., transport of symmetric session keys), data encipherment (i.e., encryption), and certificate signing (found only in a CA's certificate). If this is a critical field, the key can be used only for one of the designated purposes. To limit the exposure of the private key, a single entity could have several certificates, each one used for a different purpose. If this field is marked as critical, that specialization is enforced; otherwise, it is suggested but not enforced. The PKIX profile requires the certificate signing bit to be in accord with the basic constraints extension. For a CA, both the cA bit and the certificate signing bit must be on; for a non-CA, both must be either omitted or turned off.

• Extended key usage. In addition to the standardized key usage fields, additional ones may be defined for special-purpose use. One such is iKEIntermediate, proposed in the IKE PKI profile to designate a key that can be used for phase 1 IKE authentication. (In the early days of IKE, PKIX [23] listed several other IKE-related extended key usage values, but they were rejected by the proponents of IPsec.) This field also can be marked critical. If both the key usage field and the extended key usage fields are critical, the certificate's key can be used only in situations that satisfy both fields.

• CRL distribution points. A pointer to the location of the CRL. This is useful in cases where the CRLs are not colocated with the certificates.

There is a subtle interplay among flexibility, interoperability, and security in the use and interpretation of many of the certificate fields [24], notably the key usage and extended key usage bits.
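Two of the processing rules above lend themselves to a short sketch: a critical extension that a certificate user does not understand is fatal, while a non-critical one can be ignored; and PKIX requires the basic-constraints cA bit to agree with the key-usage certificate-signing bit. The names below are illustrative, not any real certificate library's API.

```python
# Hypothetical sketch of the two extension-processing rules described
# above. Field and function names are my own, not a real library's API.

def can_accept(extensions, understood):
    """extensions: iterable of (name, is_critical) pairs.
    An extension we don't understand is fatal only when it is critical."""
    return all(name in understood or not critical
               for name, critical in extensions)

def ca_bits_consistent(ca_bit: bool, key_cert_sign: bool) -> bool:
    """PKIX: for a CA both bits must be on; for a non-CA both off/absent."""
    return ca_bit == key_cert_sign

print(can_accept([("basicConstraints", True)], {"basicConstraints"}))  # understood critical: ok
print(can_accept([("unknownExt", True)], {"basicConstraints"}))        # critical, not understood: reject
print(can_accept([("unknownExt", False)], {"basicConstraints"}))       # non-critical: safely ignored
print(ca_bits_consistent(True, False))                                 # inconsistent cA/keyCertSign bits
```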
If an IKE implementation is extremely demanding and limiting in the use, interpretation, and validation of certificate fields, security is enhanced but interoperability may be impossible. At the other end of the spectrum, too much flexibility maximizes interoperability at the expense of meaningful security. A CR has the same format as a certificate, but the only fields that contain data are those whose values are required to be matched by the certificate sent by the IKE peer or generated by the CA in response to the request. The CR's format specification currently is up to version 2.

10.8 IKE and IPsec Considerations

Standards written for general certificate and PKI use do not always fulfill the specific needs of IKE and IPsec users. Pieces of the solution are contained in the PKIX roadmap [2], the PKIX profile [23], and the IKE PKI profile [13]. At times, the PKIX profile and the IKE PKI profile are at odds; in such a situation, IKE wins hands down. An IPsec PKI profile has not yet been written, so its relationship to its fellow travelers is as yet undefined. On the other hand, with continued use and experimentation, new issues continue to crop up. In phase 1, peers' certificates can be requested through the use of a CR payload and transmitted using a certificate payload. In addition to the peer's certificate, the certificate payload can include the certificate of the CA whose private key was used to sign the peer's certificate; a whole chain of intermediate CA certificates used to sign and validate the peer's certificate; and/or the CA's latest CRL. Clearly, those payloads can contain data that would be of interest to an attacker. In particular, if the certificate's identity is not identical to the peer's IP address, revealing that information defeats IKE's phase 1 identity protection.
Thus, the phase 1 messages in which it makes sense to include either CRs or certificates vary, depending on the type of phase 1 negotiation and the peer authentication method that is used. When IKE peers use digital signatures for authentication, the certificate's public key is only needed by the initiator in Main Mode message 5 and by the responder in Main Mode message 6. Thus, to preserve identity protection, certificate payloads should be included only in Main Mode messages 5 or 6 if the identity is a value other than the peer's IP address or domain name. A CR can include a specific CA or certificate type, limiting the types of certificates that will be accepted by the requester. If an IKE initiator does not want to reveal this type of information, it can send its CR payload as part of an encrypted Main Mode message 5. Because the responder's only encrypted message is the last Main Mode message, message 6, there is no way for a responder to send a protected CR payload. In Aggressive Mode, because identity protection is not an issue, the CR payload can be part of message 1 or 2; in Base Mode it can appear in messages 1, 2, or 3. In those two modes, because the public key is used in only the last two messages, the certificate payload can appear in any message. With preshared secret key authentication, certificates can be requested and exchanged for use in future PKI-based negotiations. The messages in which they can be used are identical to those in digital signature mode. When the authentication method is public key encryption, the initiator and responder public keys are used in Main Mode messages 3 and 4, respectively; in Aggressive Mode and Base Mode, they are required in messages 1 and 2. Thus, for Aggressive Mode and Base Mode, CRs are not useful.
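The CR placement rules for the digital-signature case can be condensed into a small lookup table. This is my own summary of the preceding paragraph, not a normative table: in Main Mode only message 5 can carry a protected CR (the responder has no protected message left in which to send one), while Aggressive and Base Mode have no identity protection to preserve.

```python
# My own condensation of the CR-placement rules described above
# (digital-signature authentication); illustrative, not normative.

CR_MESSAGES = {
    "main": [5],            # initiator only, sent encrypted
    "aggressive": [1, 2],   # no identity protection to preserve
    "base": [1, 2, 3],
}

def cr_allowed(mode: str, message: int) -> bool:
    """May a certificate request payload appear in this phase 1 message?"""
    return message in CR_MESSAGES[mode]

print(cr_allowed("main", 5))   # True
print(cr_allowed("main", 6))   # False: the responder cannot send a protected CR
print(cr_allowed("base", 3))   # True
```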
The initiator must obtain the responder's certificate before the negotiation
Agenda
See also: IRC log
<trackbot> Date: 05 July 2011
<scribe> SCRIBE: gpilz
RESOLUTION: Agenda agreed
RESOLUTION: minutes approved
Issue-13016:
<trackbot> Sorry... adding notes to ISSUE-13016 failed, please let sysreq know about it
Gil: looks like a typo
Bob: issue accepted
Doug: it's already been fixed
Bob: any objection to just fixing this?
<Dug> RESOLUTION: Doug's proposal accepted
Issue-13148:
<trackbot> Sorry... adding notes to ISSUE-13148 failed, please let sysreq know about it
Doug: Proposal is to just add 'REQUIRED'
RESOLUTION: proposal for Issue-13148 is accepted as proposed
<Dug> Issue-13151:
<trackbot> Sorry... adding notes to ISSUE-13151 failed, please let sysreq know about it
Bob: any objections to opening this issue?
... is the text in the spec correct?
Doug: yes the text is correct
RESOLUTION: Issue-13151 resolved as proposed
<Bob> Bob: mismatch between namespace in WSDL and the URI of the location of the WSDL
Tom: Do we have a RDDL file for this stuff?
Doug: We do
Tom: It seems this person just needs to be educated (on the difference between the URI and the @targetNamespace)
(confusing discussion on possible changes to the RDDL file)
Doug: when you click on the namespace link, you get an HTML page that describes the namespace
Yves: the link to the WSDL is wrong - we have that in the ED copy as well
... I can do the change
Doug: I don't mind doing it, but I need to know what the correct thing is
Yves: dated WSDL reference is wrong
Doug: assuming we approve the docs, the dated links all get updated again
... perhaps we could just tell this person that things are out of synch now but will come back into synch on the next publishing event
<Dug> birthing activity! ouch!
Bob: who is going to take care of this?
Yves: me
Bob: we need to respond back to Andy
... who would like to do that?
... "we shall correct the RDDL file location at . . . when we publish our PR"
... Yves can you take care of this?
Yves: ok
<Bob> Bob: wondering why faults are not declared in the portTypes of the WSDLs (mex, eventing, etc.)
... we don't normally do this sort of thing
Doug: and we won't
Bob: something along the lines of "it has not been the custom to define faults in the portTypes of infrastructure specs like . . ."
Tom: is he talking about event notifications?
Bob: no, he's referring to the XSDs
Tom: we don't define any faults in our spec WSDLs?
Gil: if you define faults in your WSDL they don't appear on the wire the way we say they should
... infrastructure faults versus application faults
Tom: we have a different mapping for our faults than that defined in WSDL
Gil: yes - no WS-* has ever used WSDL-defined faults for error handling
Bob: anyone to volunteer?
Gil: I will
Bob: should I create pro-forma issues to track these?
Yves: that would be best
<Bob> Bob: looks like we have met our criteria for 2 interoperable implementations for each specification
... the exceptions are the metadata specifications (SOAP assertion and EventDescriptions)
... these don't have any direct, on-the-wire tests associated with them.
Bob: have folks had a chance to take a look at the latest scenario doc?
<Dug> Bob: is that adequate?
... is there anyone who finds it inadequate?
<Bob> (pause while Ram is updated on progress of meeting)
Bob: seems like we need to change the docs before we go to PR
Doug: will be done within the hour
Bob: seems unfair to ask people to vote based on documents that they have never seen
... better to let everyone review the docs as they will appear for PR
... we've passed all of our exit criteria
... is everyone able to make a meeting on July 12th?
... and is that enough time?
Ram: a few questions?
... there haven't been any substantive changes since the CR?
Bob: true
... people may quibble with things like getting the machine readable artifacts to match with the text of the spec
... but does any member believe there have been substantive changes?
(silence)
Ram: so all changes have been editorial?
Bob: yes
Ram: assuming that is the case, if the candidate PR drafts are available - i think i may be able to be ready as early as the 12th
Bob: on most of the specs there have been no changes
Doug: i've been doing some spec hygiene
... a couple of typos in eventing and enumeration
Ram: when you send out the drafts, will you send out a diff-marked version relative to the PRs?
Bob: Yves?
Yves: yes I can do that
Bob: we want to diff between the CR and the proposed PR drafts
... those will be valuable when we do the progression announcement
... Doug, once you have the materials ready - let Yves know
Doug: they are ready now
<Yves> I'll produce them tomorrow morning
Ram: I need roughly 3 days for internal review
... should have them by next Tuesday
Bob: Yves, today or tomorrow would be good
... then we can make the decision next week
... and get to the PR progression before August
Yves: have the diffs ready by my morning (your night)
Bob: Yves - how is this going?
Yves: I will slap Philippe
Bob: does this need to be nailed down before we do PR?
Yves: we do
Bob: tell Philippe that we would prefer if we didn't end up waiting on the MIME type assignment
MEETING ADJOURNED
<asoldano> bye
On Tue, Nov 17, 2015 at 5:00 PM, Ganesh Ajjanagadde <gajjanag at mit.edu> wrote: > On Sun, Nov 15, 2015 at 11:59 AM, Ganesh Ajjanagadde <gajjanag at mit.edu> wrote: >> On Sun, Nov 15, 2015 at 11:34 AM, Michael Niedermayer >> <michael at niedermayer.cc> wrote: >>> On Sun, Nov 15, 2015 at 11:01:58AM -0500, Ganesh Ajjanagadde wrote: >>>> On Sun, Nov 15, 2015 at 10:56 AM, Michael Niedermayer >>>> <michael at niedermayer.cc> wrote: >>>> > On Sun, Nov 15, 2015 at 10:03:37AM -0500, Ganesh Ajjanagadde wrote: >>>> >> It is known that the naive sqrt(x*x + y*y) approach for computing the >>>> >> hypotenuse suffers from overflow and accuracy issues, see e.g >>>> >>. >>>> >> This adds hypot support to FFmpeg, a C99 function. >>>> >> >>>> >> On platforms without hypot, this patch does a reaonable workaround, that >>>> >> although not as accurate as GNU libm, is readable and does not suffer >>>> >> from the overflow issue. Improvements can be made separately. >>>> >> >>>> >> Signed-off-by: Ganesh Ajjanagadde <gajjanagadde at gmail.com> >>>> >> --- >>>> >> configure | 2 ++ >>>> >> libavutil/libm.h | 23 +++++++++++++++++++++++ >>>> >> 2 files changed, 25 insertions(+) >>>> >> >>>> >> diff --git a/configure b/configure >>>> >> index d518b21..45df724 100755 >>>> >> --- a/configure >>>> >> +++ b/configure >>>> >> @@ -1774,6 +1774,7 @@ MATH_FUNCS=" >>>> >> exp2 >>>> >> exp2f >>>> >> expf >>>> >> + hypot >>>> >> isinf >>>> >> isnan >>>> >> ldexpf >>>> >> @@ -5309,6 +5310,7 @@ disabled crystalhd || check_lib libcrystalhd/libcrystalhd_if.h DtsCrystalHDVersi >>>> >> >>>> >> atan2f_args=2 >>>> >> copysign_args=2 >>>> >> +hypot_args=2 >>>> >> ldexpf_args=2 >>>> >> powf_args=2 >>>> >> >>>> >> diff --git a/libavutil/libm.h b/libavutil/libm.h >>>> >> index 6c17b28..f7a2b41 100644 >>>> >> --- a/libavutil/libm.h >>>> >> +++ b/libavutil/libm.h >>>> >> @@ -102,6 +102,29 @@ static av_always_inline av_const int isnan(float x) >>>> >> } >>>> >> #endif /* HAVE_ISNAN */ >>>> >> >>>> >> +#if !HAVE_HYPOT 
>>>> >> +#undef hypot >>>> >> +static inline av_const double hypot(double x, double y) >>>> >> +{ >>>> >> + double ret, temp; >>>> >> + x = fabs(x); >>>> >> + y = fabs(y); >>>> >> + >>>> >> + if (isinf(x) || isinf(y)) >>>> >> + return av_int2double(0x7ff0000000000000); >>>> > >>>> > if either is NaN the result should be NaN i think >>>> > return x+y >>>> > might achive this >>>> >>>> No, quoting the man page/standard: >>>> If x or y is an infinity, positive infinity is returned. >>>> >>>> If x or y is a NaN, and the other argument is not an infinity, >>>> a NaN is returned. >>> >>> indeed, the spec says thats how it should be. >>> >>> this is not what i expected though and renders the function >>> problematic in practice IMHO. >>> For example a big use of NaN is to detect errors. >>> One does a big complicated computation and if at the end the result is >>> NaN or +-Inf then one knows there was something wrong in it. >>> if NaN is infective then any computation that returns it can reliably >>> be detected. These exceptions in C like hypot() break this. >>> 1/hypot(x,y) should be NaN if either x or y was NaN >>> >>> also mathematically its wrong to ignore a NaN argument >>> consider >>> hypot(sqrt(-x), sqrt(x)) for x->infinite >>> >>> of course theres nothing we can or should do about hypot() its defined >>> in C as it is defined but its something one should be aware of if >>> one expects that NaNs can be used as a reliable means to detect >>> NaNs from intermediate steps of a complicated calculation >> >> Yes, I was extremely surprised myself, and you are right that it >> defeats the NaN's purpose. Some day I may dig up the committee's >> rationale for this, am curious. I doubt it was oversight, since they >> are usually very careful about such things, and the defined behavior >> is very specific suggesting deep thought. >> >> Anyway, I do not take this as a formal ack yet. 
Hopefully we don't run >> into Carl's weird debian thing that forced disabling fmax, fmin >> emulation. > > Anyone willing to do a Windows build test to make sure that the compat > hack works? I want to avoid build failures. Thanks. > hypot is available on windows, can't test your compat code, sorry. :D
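The overflow problem the patch works around, and the usual scaling fix, can be sketched in a few lines. This is a Python illustration of the idea, not FFmpeg's actual C fallback: factoring out the larger magnitude keeps the squared term near 1, and the special cases follow the C99 semantics debated in the thread (any infinity wins, otherwise NaN propagates).

```python
import math

def hypot_sketch(x, y):
    """Overflow-avoiding hypot sketch (not FFmpeg's implementation).
    C99 special cases: if either argument is infinite the result is
    +inf, even hypot(inf, nan); otherwise a NaN argument yields NaN."""
    x, y = abs(x), abs(y)
    if math.isinf(x) or math.isinf(y):
        return math.inf
    if math.isnan(x) or math.isnan(y):
        return math.nan
    if x < y:
        x, y = y, x           # ensure x is the larger magnitude
    if x == 0.0:
        return 0.0
    t = y / x                 # t <= 1, so t*t cannot overflow
    return x * math.sqrt(1.0 + t * t)

big = 1e308
print(math.sqrt(big * big + big * big))  # naive form: big*big overflows to inf
print(hypot_sketch(big, big))            # finite, about 1.414e308
```

The naive form overflows in the intermediate `x*x` even though the true result is representable; the scaled form stays finite.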
Not strictly PLT-related, but Neil Brown has contributed an amazing series of articles to Linux Weekly News:

For this series we try to look for patterns which become visible only over an extended time period. As development of a system proceeds, early decisions can have consequences that were not fully appreciated when they were made. If we can find patterns relating these decisions to their outcomes, it might be hoped that a review of these patterns while making new decisions will help to avoid old mistakes or to leverage established successes.

As the comments point out, analyzing Unix weaknesses and coming up with something better was already attempted by the Plan 9 designers. Yet Plan 9 isn't used; maybe this is also a lesson in itself.

I would expect the lesson to take from Plan 9 is not to change too much at once. For instance, if they had stuck with C as Plan 9's systems language, it would probably have caught on a lot more. As evidence, they've currently rewritten much in C as of this point.

like /proc

I am afraid only a *few* of the good ideas have been used. And their implementers missed the forest for the trees. The idea was not to represent just a process as a file but every system object as a named file or directory. Second, these files may be local or remote and accessed using a standard protocol (9P). Third, the file name space is process centric and can be modified as needed. Fourth, one can write user level "file servers" using the same protocol (compared to 9p "fuse" is quite unwieldy). Had Unix really embraced these ideas from Plan 9, the benefit would've been a huge simplification, a very modular & easily extensible system. Unfortunately except for a few diehards it has remained a curiosity.
And, IMHO, even the diehards have only scratched the surface as to what can be done with this simple model. New people check it out all the time but given a dearth of applications and eye-candy most of them won't or can't use it as their main OS (even as a hobby). I believe their original idea of a "clean slate" design was right. That avoided Unix compatibility unduly impacting the design early on. Perhaps a more faithful emulation of Unix could've been provided later on.... (plan9 has a library called `ape' for Ansi/Posix Environment but it doesn't quite cut it).

In my view the main reason plan9 missed the popularity boat is that Bell Labs didn't open source it at the right time, which was 1993, around the time both 386bsd and linux-0.11 came out. Many of the early adopters of these two systems would've likely embraced plan9.

I think it's also informative to take a look at OpenVMS when analyzing the faults of Unix based systems and possible solutions. At first (coming from a Unix background) it might seem a bit awkward, but in my experience it's actually a very nice system with such features as ACLs, logical names, great interactive help system, and versioning of files built into the file system.

are only good for death spirals

The author really needs to eradicate the word "this" from his vocabulary. He would do us all a favor in making his argument clearer. I counted "this" eleven times in the first article alone, and many of the usages seemed ill-advised. I also dislike the phrase "full exploitation" in the article.

So while hierarchical namespaces were certainly well exploited in the early design, they fell short of being fully exploited, and this lead to later extensions [such as network devices] not being able to continue the exploitation fully.

What? The conclusion here did not follow from any of the examples given.
For example, the quote below from an earlier paragraph in the same section:

Part of the difficulty is maintaining backward compatibility with the original Unix way of using device special files which gave, for example, stable permission setting on devices. There are doubtless other difficulties as well.

and earlier in the article where it states why the single, hierarchical namespace is supposedly good:

The design idea being fully exploited here is the hierarchical namespace. The result of exploiting it within a single storage device, across all storage devices, and providing access to devices as well as storage, is a "single namespace". This provides a uniform naming scheme to provide access to a wide variety of the objects managed by Unix.

So was the problem that the device special file namespace's permissions model was stupid? Or was it that the device special file wasn't in a common singly rooted DAG, and everyone who wants to add a new name to the namespace has to do an insert on the common singly rooted DAG? I don't get it. The author isn't clear. His examples don't actually seem to motivate his argument well. It's a string of disconnected examples with ephemeral "good" vs. "bad" qualifications and very few detailed sentences.

Then there is the following example:

By incorporating this feature directly in the namespace, the functionality becomes available to all programs.

Why should functionality become available to all programs? This is NOT even the reason behind a single, hierarchical namespace. Or even a single namespace!!

He touches upon access control, again:

The device special files in Unix provide only limited access to this namespace. It can be helpful to see them as symbolic links into this alternate namespace which add some extra permission checking. However while symlinks can point to any point in the hierarchy, device special files can only point to the actual devices, so they don't provide access to the structure of the namespace.
It is not possible to examine the different levels in the namespace, nor to get a 'directory listing' of all entries from some particular node in the hierarchy. It is probably best to think about what scenarios should support access to this information, and how to go about controlling that access. Since UNIX is a time-sharing environment, we need to think about how to do access control with consideration to concurrency. This is why some people hate "patterns" discussion. Some people are really awful at patterns discussion. At least I give the author tons of credit for trying. He meant well.
Jun 18, 2007 10:23 PM | ruiray@hotmail.com

I have a Web Application Project in VS2005; the project builds successfully. Then I use the VS2005 Web Deployment Project to build it, and I get the following error, which is absurd since I'm not missing the System.Web dll assembly reference.

Error: The type or namespace name 'Web' does not exist in the namespace 'Fan.Web.Admin.System' (are you missing an assembly reference?)

Could anyone tell what is wrong with this problem? Thanks a lot!! Ray.

Jun 18, 2007 11:03 PM | ruiray@hotmail.com

I got the problem fixed. This error has been asked in the forum before but no one really gave any point of direction on the cause! It turns out that the Web Deployment Project is very sensitive about how you name your namespace in the web application. I have a folder named "System" and by default all the .cs files created under this folder got "System" as part of their namespace, and the Web Deployment Project will not build on this; changing the namespace to "Sys" for example fixed it. However, the Web Application itself builds without any problems; wish the Web Deployment Project could be consistent with the Web Application Project!!! Hope this helps others. Ray.

Jun 20, 2007 12:23 PM | Rizwan328

thanks for sharing
Setup your iOS project environment with a Shellscript

Setup an iOS project environment

Nowadays an iOS project is more than only a *.xcodeproj file with some self-written Objective-C or Swift files. We have a lot of direct and indirect external dependencies in our projects, and each new developer on the project, as well as the build server, has to get these. Developers need them before working on the app; the build server needs them to build and deploy the app.

Types of dependencies

We can separate the project dependencies into different categories:

Code: Because we don't want to reinvent the wheel for parts of our apps again and again, we use third-party libraries for common use cases. E.g. we use Alamofire for our network stack. Also, we want to use the latest and hopefully greatest version of each dependency, to get the newest features and especially critical bug fixes almost automatically. To reach this goal you should use a dependency manager, which takes care of these problems. The principle "never change a running system" should not apply to third-party dependencies, especially if these are responsible for critical parts of the app, like encryption.

Code Dependency Manager: To manage code dependencies in our project we currently have two well-known dependency management systems in the iOS world: Cocoapods and Carthage. Both have almost the same feature set and take care of two important requirements:
- Install the same versions of the dependencies on every system, so that every developer and the build server creates the same app artefact.
- Support updating to a dedicated or the latest version of the dependencies.

But neither Cocoapods nor Carthage is bundled with macOS, therefore we have to install at least one of them. Cocoapods is available as a Ruby Gem and the preferred way to install Carthage is via a Homebrew package.

Dependency Manager Management: To manage our iOS dependency manager, we should use some kind of dependency manager, too. Cocoapods is available as a Ruby Gem.
So we should create a Gemfile for this type of dependency (a Gemfile is like the Podfile for Ruby developers). We then need to use the bundler Ruby Gem to manage the dependencies in the Gemfile. Look at and for detailed information.

We install Carthage with Homebrew via a shell command: brew install carthage. Homebrew itself is only available through a Ruby installation script. (See)

Ruby: The prime dependency in this dependency chain is Ruby. The good news is that it is directly available in the latest macOS, in a 'not so old' version too! If you want the latest or a special version of Ruby, you have to install it another way. Besides compiling it from source code, you can use RVM or rbenv, which provide environment management for Ruby.

Solutions for code dependencies

After we see what dependencies our iOS project really has, we can look at possible solutions for managing them:

Under version control

If you put your code dependencies in your version control system, you will have a compile-ready state of the project in your repository. Then it's not needed, at least for the build server, to have a way to install all the other indirect dependencies, like Cocoapods. But a developer who wants to install new or update old code dependencies will need them.

Not under version control

If you do not put the code dependencies under version control, you have to provide a way for your colleagues and the build server to resolve and fetch them. The most important part is that everyone gets the same versions of each dependency, which is ensured via the *.lock / *.resolved files of each dependency manager. These files freeze the versions of used dependencies, and you have to force update the dependency versions for newer versions. In this solution, it will also be easy to add, update, or remove a dependency in each step, because every developer has the needed environment for it. E.g.
fastlane is also provided as a Ruby Gem, so you only need to modify the Gemfile of the project and update the Gemfile.lock.

A negative aspect is that all of the dependencies always have to be available. Nowadays, most code dependencies are publicly hosted on Github.com and are consumed from there. If a developer decides to remove their library from Github.com, you need to change the dependency or try to find another source for it.

Regardless of which way you choose, in my opinion you should provide an easy way to set up the whole project environment.

Managing the dependency chain

Currently there is no single right way to manage the whole dependency chain for an iOS project environment. It depends on the project, on which parts should be provided for the developers, and on which parts they want to manage on their own. Especially for Xcode (Mac App Store or direct download from the developer portal) and Ruby (RVM or rbenv), each developer has their favourite way to manage it. So there is a part of the chain which should already exist on the developer's computer or the build server. For the rest, there are three common ways to install all the project dependencies: a shell script, a Makefile, or a Rake script.

Base setup

The base setup, which should already be on the developers' computers, normally contains Xcode, Ruby and Homebrew. Whether you need Homebrew depends on whether you use Cocoapods or Carthage. But we can use this as a starting point.

Xcode: You can install the latest release version of Xcode via the Mac App Store, or via a direct download from the Apple Developer portal. If you use the Mac App Store version, you can auto-update to the latest version, but keep in mind that not every project is directly ready to run with the latest Xcode.

Ruby: You can download the source code and compile it on your own. Or you can use third-party tools to manage it, like rbenv or RVM. With the third-party tools you can easily update or switch the currently used Ruby version.
So, you should really have a look at them.

Homebrew: To install Homebrew, a script is provided on the project website brew.sh. If you have installed it already, Homebrew can upgrade itself with the command: brew update

Dependency setup

Ruby dependencies: External dependencies for Ruby scripts are normally managed via the package manager RubyGems. With the Gem bundler it is possible to install Gems from a Gemfile like this:

ruby "~> 2.5.1"
source ''
gem 'cocoapods', '~> 1.5.3'
gem 'fastlane', '~> 2.100.1'

To install these dependencies, you only have to run bundle install. The first run also produces a Gemfile.lock file, which locks the version numbers for other clients, so it is guaranteed that the same artefact is produced on every system. Therefore the Gemfile.lock should be committed to the project Git repository.

Cocoapods dependencies: Cocoapods manages the dependencies in its Podfile and Podfile.lock files. With a call of pod install you install the right versions of the dependencies. With pod update you can update the Podfile.lock after changes in the Podfile. The Podfile.lock also needs to be in your project Git repository.

Carthage dependencies: Like Cocoapods, Carthage has its Cartfile and Cartfile.resolved with the specified versions of the dependencies. Using carthage bootstrap you can build the frameworks. The Cartfile.resolved should also be in your Git repository.

To install all the dependencies a developer has to run the following commands:

# Installs bundler gem
gem install bundler
# Installs Gems with versions of Gemfile.lock
bundle install
# Installs Pods with versions of Podfile.lock
pod install
# Builds the frameworks with code versions of Cartfile.resolved
carthage bootstrap

Solutions

After you have the base setup of your iOS project environment, you have to find an easy and predictable way to execute all the steps to set up the iOS project environment. You should spare your developers from reading long, potentially outdated documentation.
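The dependency files themselves are never shown above, so here is a hedged sketch of what a minimal Podfile might look like. The target name 'MyApp' and the version pins are illustrative assumptions; Alamofire is the library mentioned earlier in the article.

```ruby
# Podfile -- minimal illustrative example; 'MyApp' is a placeholder target name
platform :ios, '11.0'
use_frameworks!

target 'MyApp' do
  # The network stack library mentioned above; the version pin is an example
  pod 'Alamofire', '~> 4.7'
end
```

The matching Cartfile entry would be a single line such as github "Alamofire/Alamofire" ~> 4.7. In both cases, running pod install or carthage bootstrap writes the resolved versions into the Podfile.lock / Cartfile.resolved file that gets committed.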
We want to do the following steps:

- Check if Ruby is available
- Check if Homebrew is available
- Install Ruby Gems
- Install Cocoapods dependencies
- Install Carthage dependencies
- Stay open for additional steps

Additional steps could be triggering the build process and running the unit tests, like with fastlane: fastlane test

So the best solution would be one where these steps run together with the other steps, but where it is also possible to run the additional steps on their own, because you may already have set up your project environment.

Shell script

The shell script solution is inspired by the Firefox iOS app; their solution can be found here: bootstrap.sh

The script is executed from top to bottom and performs all the necessary steps to set up the project environment.

Pros:
- Same syntax as manually entered commands
- Groups manually entered commands into one file
- Can contain checks for dependencies
- Customisable and extendible with functions

Cons:
- Bash syntax
- No selective running of steps
- No easy integration of optional additional steps

I added the command_exists function to check if an executable is available in the current shell path.

#!/bin/bash

# Checks if executable exists in current path
command_exists () {
  command -v "$1" > /dev/null 2>&1;
}

echo "iOS project setup ..."

# Check if Ruby is installed
if ! command_exists ruby
then
  echo 'Ruby not found, please install it:'
  echo ''
  exit 1
fi

# Check if Homebrew is available
if ! command_exists brew
then
  echo 'Homebrew not found, please install it:'
  echo ''
  exit 1
else
  echo "Update Homebrew ..."
  brew update
fi

# Install Bundler Gem
if ! command_exists bundle
then
  echo "Bundler not found, installing it ..."
  gem install bundler -v '~> 1.16.2'
else
  echo "Update Bundler"
  gem update bundler '~> 1.16.2'
fi

# Install Ruby Gems
echo "Install Ruby Gems ..."
bundle install

# Install Cocoapods dependencies
echo "Install Cocoapods"
pod install

# Install Carthage
echo "Install / Update carthage ..."
brew unlink carthage || true
brew install carthage
brew link --overwrite carthage

# Install Carthage dependencies
echo "Install Carthage Dependencies ..."
carthage bootstrap --platform ios --cache-builds

A new developer only needs to run ./project_setup.sh to set up the iOS project environment.

If you want to add additional steps, you should write a function for each step. With parameters passed to ./project_setup.sh you can control which steps are executed. For example, if we want to run our unit tests, it would look like this:

#!/bin/bash

echo "iOS project setup ..."

# Check if user only wants to run unit tests
only_test=false
[ "$1" == "only_test" ] && only_test=true

# Check if user wants to create build environment
# and execute the unit tests
with_test=false
[ "$1" == "with_test" ] && with_test=true

# Run fastlane unit tests
unit_test() {
  fastlane test
}

# Run only unit tests
if $only_test
then
  unit_test
  exit 0
fi

#
# All bootstrapping steps
#

# Run unit tests after project setup
if $with_test
then
  unit_test
fi

We define a unit_test function, which is executed if the first parameter is only_test or with_test. You can call the shell script with ./project_setup.sh only_test or ./project_setup.sh with_test. For only_test, the unit_test function is executed right away and the script ends. For with_test, all the bootstrapping steps are executed first and then the unit_test function. Without parameters, only the project setup is executed.

Makefile

Inspired by the Kickstarter (Makefile) and Wikipedia (Makefile) apps, a Makefile can also be a solution to execute all the steps with one command. Unlike the shell script solution, it does not execute all the steps from top to bottom. It executes a target block by its name, like: make target_name

Only the commands in this target will be executed, but you can define other targets which should be executed before it.
So you have a chain of commands which are executed one after the other, as you can see in the example. You can also define a default target, which is executed if no target name is given.

Pros:
- Same syntax as manually entered commands
- Groups manually entered commands into one file
- Can contain checks for dependencies
- Selective running of steps
- Easy integration of optional additional steps

Cons:
- Makefile syntax
- Only limited customisation and extension via targets

A Makefile can look like the example below and contain the setup steps of the project as targets. The setup target only has other targets as dependencies and does not execute anything itself. The targets listed after the colon of a target name are executed in top-to-bottom order, so you can manage the execution order of your steps. The syntax of a Makefile is a little complicated, as the checks for the existing Ruby or Homebrew binaries show, but normally you do not need to know much more. If you are interested, read more in the make documentation.

# Checks if executable exists in current path
RUBY := $(shell command -v ruby 2>/dev/null)
HOMEBREW := $(shell command -v brew 2>/dev/null)
BUNDLER := $(shell command -v bundle 2>/dev/null)

# Default target, if none is provided
default: setup

# Steps for project environment setup
setup: \
	pre_setup \
	check_for_ruby \
	check_for_homebrew \
	update_homebrew \
	install_carthage \
	install_bundler_gem \
	install_ruby_gems \
	install_carthage_dependencies \
	install_cocoapods

# Pre-setup steps
pre_setup:
	$(info iOS project setup ...)

# Check if Ruby is installed
check_for_ruby:
	$(info Checking for Ruby ...)
ifeq ($(RUBY),)
	$(error Ruby is not installed)
endif

# Check if Homebrew is available
check_for_homebrew:
	$(info Checking for Homebrew ...)
ifeq ($(HOMEBREW),)
	$(error Homebrew is not installed)
endif

# Update Homebrew
update_homebrew:
	$(info Update Homebrew ...)
	brew update

# Install Bundler Gem
install_bundler_gem:
	$(info Checking and install bundler ...)
ifeq ($(BUNDLER),)
	gem install bundler -v '~> 1.16'
else
	gem update bundler '~> 1.16'
endif

# Install Ruby Gems
install_ruby_gems:
	$(info Install RubyGems ...)
	bundle install

# Install Cocoapods dependencies
install_cocoapods:
	$(info Install Cocoapods ...)
	pod install

# Install Carthage
install_carthage:
	$(info Install Carthage ...)
	brew unlink carthage || true
	brew install carthage
	brew link --overwrite carthage

# Install Carthage dependencies
install_carthage_dependencies:
	$(info Install Carthage Dependencies ...)
	carthage bootstrap --platform ios --cache-builds

Each of the targets can also be executed on its own. You just have to execute make with the specific target name, like: make install_ruby_gems

So it is also easy to add additional steps to our project setup. If you want to add a unit test run, you can define an additional target (unit_test). If you want to execute the setup and the unit_test targets together, you can define an additional target with both as dependencies.

# Combines project setup with unit tests
setup_with_unit_test: \
	setup \
	unit_test

#
# All other bootstrapping steps
#

# Run fastlane unit tests
unit_test:
	$(info Run Unittests ...)
	fastlane test

So you can call make unit_test to run only the unit tests, and make setup_with_unit_test if you also need the project setup. Especially on a build server the last command is very useful.

Rakefile

The Wordpress (Rakefile) app uses a Rakefile for its project setup. This is similar to the Makefile solution, but it uses the Ruby variant of make: rake. We do not need a check for Ruby, because Ruby and rake are preconditions on the developer's system for executing the Rakefile tasks. Otherwise, the Rakefile solution is very similar to a Makefile. Each project setup step lives in a task block and can be executed by its name, e.g. rake check_homebrew. It is also possible to have a default task, which is executed if you only call rake, and each of the tasks can depend on others.
Pros:
- Groups manually entered commands into one file
- Can contain checks for dependencies
- Selective running of steps
- Easy integration of optional additional steps
- Customisation of the build process via Ruby functionality

Cons:
- Needs rake on the system
- Rakefile syntax
- Executes shell commands through an additional Ruby function, sh

You can see an example below. The main task is setup, which has other tasks as dependencies. You define dependencies with the => operator pointing to the list of dependencies. Each of the tasks can contain any Ruby code, so if you are familiar with Ruby you can adapt this solution very quickly. But you can also see that you will mostly execute shell commands from your Ruby script. That is why you should decide for yourself whether you really need this additional abstraction layer.

# Checks if executable exists in current path
def command?(command)
  system("command -v #{command} > /dev/null 2>&1")
end

# Default task, if none is provided
task default: [:setup]

# Steps for project environment setup
task :setup => [
  :pre_setup,
  :check_for_homebrew,
  :update_homebrew,
  :install_bundler_gem,
  :install_ruby_gems,
  :install_carthage,
  :install_cocoapods_dependencies,
  :install_carthage_dependencies,
]

# Pre-setup steps
task :pre_setup do
  puts "iOS project setup ..."
end

# Check if Homebrew is available
task :check_for_homebrew do
  puts "Checking Homebrew ..."
  if not command?('brew')
    STDERR.puts "Homebrew not found, please install it:"
    STDERR.puts ""
    exit
  end
end

# Update Homebrew
task :update_homebrew do
  puts "Updating Homebrew ..."
  sh "brew update"
end

# Install Bundler Gem
task :install_bundler_gem do
  if not command?('bundle')
    sh "gem install bundler -v '~> 1.16'"
  else
    sh "gem update bundler '~> 1.16'"
  end
end

# Install Ruby Gems
task :install_ruby_gems do
  sh "bundle install"
end

# Install Cocoapods dependencies
task :install_cocoapods_dependencies do
  sh "pod install"
end

# Install Carthage
task :install_carthage do
  sh "brew unlink carthage || true"
  sh "brew install carthage"
  sh "brew link --overwrite carthage"
end

# Install Carthage dependencies
task :install_carthage_dependencies do
  sh "carthage bootstrap --platform ios --cache-builds"
end

To add additional steps, you only have to add another task, like one for the unit tests:

# Run fastlane unit tests
task :unit_test do
  sh "fastlane test"
end

You can call this directly with rake unit_test. To combine the project setup with the execution of the unit tests, you can define an extra task which has both tasks as dependencies.

# Combines project setup with unit tests
task :setup_with_unit_test => [
  :setup,
  :unit_test
]

This can be executed with rake setup_with_unit_test

Conclusion

Whether you use a shell script, a Makefile, a Rakefile or something else, you will provide an easy bootstrapping script for your iOS project. This makes it much easier for new developers to start, and a build server needs only a one-liner to build and deploy the app. The trouble of setting this up and learning a new scripting language will be worth it. You can then also easily use cloud continuous integration services like Travis CI, CircleCI or bitrise.io. Normally, in the configuration of these services you select an Xcode version and also have Ruby and Homebrew available. So your execution step is the same one every developer runs on their local machine: make setup_with_unit_test.
My preferred solution is a Makefile, because it has an integrated dependency management between the targets and is directly callable, which is not as easy in a shell script solution. It also relies on make, which comes with every macOS, in contrast to rake. If you need to execute more complex steps, which is not a strength of a Makefile, you can break the steps into multiple shell or Ruby scripts and call them from your Makefile.

Demo project

I have provided a demo project, where you can test all three solutions on your own.

Shell script
Project setup: ./project_setup.sh
Project setup with unit tests: ./project_setup.sh with_test
Unit tests: ./project_setup.sh only_test

Makefile
Project setup: make setup
Project setup with unit tests: make setup_with_unit_test
Unit tests: make unit_test

Rake
Project setup: rake setup
Project setup with unit tests: rake setup_with_unit_test
Unit tests: rake unit_test
https://iosexample.com/setup-your-ios-project-environment-with-a-shellscript/
pthread_detach - detach a thread

#include <pthread.h>
int pthread_detach(pthread_t thread);

The pthread_detach() function shall indicate to the implementation that storage for the thread thread can be reclaimed when that thread terminates. If thread has not terminated, pthread_detach() shall not cause it to terminate.

The behavior is undefined if the value specified by the thread argument to pthread_detach() does not refer to a joinable thread.

If the call succeeds, pthread_detach() shall return 0; otherwise, an error number shall be returned to indicate the error.

If an implementation detects that the value specified by the thread argument to pthread_detach() does not refer to a joinable thread, it is recommended that the function should fail and report an [EINVAL] error. If an implementation detects use of a thread ID after the end of its lifetime, it is recommended that the function should fail and report an [ESRCH] error.

None.

pthread_join

XBD .

The pthread_detach() function is moved from the Threads option to the Base.

Austin Group Interpretation 1003.1-2001 #142 is applied, removing the [ESRCH] error condition. The [EINVAL] error for a non-joinable thread is removed; this condition results in undefined behavior.
http://pubs.opengroup.org/onlinepubs/9699919799.2008edition/functions/pthread_detach.html
Perform a simple HTTP GET and parse the response as a DOM object:

def http = new HTTPBuilder('')
def html = http.get( path : '/search', query : [q:'Groovy'] )
assert html instanceof GPathResult
assert html.HEAD.size() == 1
assert html.BODY.size() == 1

In the above example, we are making a request and automatically parsing the HTML response based on the response's content-type header. The HTML stream is normalized (thanks to Neko) and then parsed by an XmlSlurper for easy DOM parsing. Keep reading for more examples of how HTTPBuilder can handle more complex request/response logic, as well as parsing other content types...

Next is another GET request, with custom response-handling logic that prints the response to System.out:

def http = new HTTPBuilder('')
http.get( path : '/search', contentType : TEXT, query : [q:'Groovy'] ) { resp, reader ->
  println "response status: ${resp.statusLine}"
  println 'Response data: -----'
  System.out << reader
  println '\n--------------------'
}

Note that in this version, the closure is a response handler block that is only executed on a successful response. A failure response (i.e. a status code of 400 or greater) is handled by the builder's default failure handler. Additionally, we are telling HTTPBuilder to parse the response as ContentType.TEXT, a built-in content type handled by the default ParserRegistry, which automatically creates a Reader from the response data.

This longer request form may be used for other HTTP methods, and also allows for response-code-specific handlers:

import static groovyx.net.http.Method.GET
import static groovyx.net.http.ContentType.TEXT

http.request(GET,TEXT) { req ->
  url.host = ''  // overrides default URL
  headers.'User-Agent' = 'Mozilla/5.0'

  response.success = { resp, reader ->
    println 'my response handler!'
    assert resp.statusLine.statusCode == 200
    println resp.statusLine
    System.out << reader // print response stream
  }

  response.'404' = { resp ->
    // fired only for a 404 (not found) status code
    println 'Not found'
  }
}

As mentioned above, you can also set a default "failure" response handler, which is called for any status code > 399 that is not matched to a specific handler. Setting the value outside a request closure means it will apply to all future requests with this HTTPBuilder instance:

http.handler.'401' = { resp ->
  println "Access denied"
}

// Used for all other failure codes not handled by a code-specific handler:
http.handler.failure = { resp ->
  println "Unexpected failure: ${resp.statusLine}"
}

In this example, a registered content-type parser recognizes the response content-type header and automatically parses the response data into a JSON object before it is passed to the 'success' response handler closure.

import static groovyx.net.http.Method.GET
import static groovyx.net.http.ContentType.JSON

http.request( '', GET, JSON ) {
  url.path = '/ajax/services/search/web'
  url.query = [ v:'1.0', q: 'Calvin and Hobbes' ]

  response.success = { resp, json ->
    assert json.size() == 3
    println "Query response: "
    json.responseData.results.each {
      println "  ${it.titleNoFormatting} : ${it.visibleUrl}"
    }
  }
}

By default, HTTPBuilder uses ContentType.ANY as the default content type. This means the value of the request's Accept header is */*, and the response parser is determined based on the response content-type header value. If any contentType is given (either via HTTPBuilder.setContentType(...) or as a request method parameter), the builder will attempt to parse the response using that content type, regardless of what the server actually responds with. To add parsing for new content types, simply add a new entry to the builder's ParserRegistry.
For example, to parse comma-separated values using OpenCSV:

import au.com.bytecode.opencsv.CSVReader

http.parser.'text/csv' = { resp ->
  return new CSVReader( new InputStreamReader(
      resp.entity.content, ParserRegistry.getCharset( resp ) ) )
}

A CSVReader instance will then be passed as the second argument to the response handler. See IANA for a list of registered content-type names.

Probably the quickest way to debug is to turn on logging for HTTPBuilder and HttpClient. An example log4j configuration can be used to output headers and request/response body content.
http://groovy.codehaus.org/modules/http-builder/examples.html
Is there any way to transfer files between computers using Python 3.6? I want to transfer txt files between computers which are not on the same network.

1 answer - answered 2018-01-13 18:11 Simon

Yes you can. If you know socket you could do it with that, or use other libraries instead. You would need to create a script that works as a server (once again, socket can do it, or you could use the built-in library http.server or anything else that allows you to make a server).

A simple server from the Python docs:

import http.server
import socketserver

PORT = 8000
Handler = http.server.SimpleHTTPRequestHandler

with socketserver.TCPServer(("", PORT), Handler) as httpd:
    print("serving at port", PORT)
    httpd.serve_forever()

Once your server is working, you can add files to it to send; you just need to receive them on the other PC, which can be done effectively with urllib.request. You also need to find your IP address.

Here is a short demonstration of receiving a file from a server. You need to find your own IP; in this example it is 192.168.0.6:8000 (note that the http:// scheme is required in the URL):

import urllib.request
urllib.request.urlretrieve("http://192.168.0.6:8000/filename.txt", "filename.txt")

Obviously this is not complete; these are the steps you need to perform in order to do what you are asking. Once you have a basic system going, you could focus on adding a GUI or making it possible to send files both ways (this would just involve a script that can both serve and receive).
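Putting the two halves of the answer together, here is a self-contained sketch: it serves a directory with http.server in a background thread and fetches a file back with urllib.request. The file name, the temp directory, and the use of a thread are demo assumptions; on a real network the two halves would run on different machines.

```python
import http.server
import os
import socketserver
import tempfile
import threading
import urllib.request

# "Sending" side: put a file in a temp directory and serve that directory.
workdir = tempfile.mkdtemp()
with open(os.path.join(workdir, "hello.txt"), "w") as f:
    f.write("hello over http\n")

# SimpleHTTPRequestHandler serves the current working directory
# (compatible with Python 3.6, which lacks the directory= keyword).
os.chdir(workdir)
Handler = http.server.SimpleHTTPRequestHandler

# Port 0 asks the OS for any free port, so the demo never clashes.
httpd = socketserver.TCPServer(("127.0.0.1", 0), Handler)
port = httpd.server_address[1]
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# "Receiving" side: download the file, as the other computer would.
dest = os.path.join(workdir, "downloaded.txt")
urllib.request.urlretrieve("http://127.0.0.1:{}/hello.txt".format(port), dest)

with open(dest) as f:
    print(f.read().strip())  # prints: hello over http

httpd.shutdown()
```

For two machines, replace 127.0.0.1 with the serving machine's address; across networks you would additionally need port forwarding or a publicly reachable host.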
http://quabr.com/48242507/is-there-any-way-to-transfer-files-between-computers-using-python-3-6
You can access the members of the IStorable interface as if they were members of the Document class:

Document doc = new Document("Test Document");
doc.Status = -1;
doc.Read( );

You can also create an instance of the interface by casting the document to the interface type, and then use that interface to access the methods:

IStorable isDoc = (IStorable) doc;
isDoc.Status = 0;
isDoc.Read( );

In this case, in Main( ) you know that Document is in fact an IStorable, so you can take advantage of that knowledge. In general, it is a better design decision to access the interface methods through an interface reference. Thus, it is better to use isDoc.Read( ) than doc.Read( ) in the previous example. Access through an interface allows you to treat the interface polymorphically. In other words, you can have two or more classes implement the interface, and then by accessing these classes only through the interface, you can ignore their real runtime type and treat them interchangeably. See Chapter 5 for more information about polymorphism.

In many cases, you don't know in advance that an object supports a particular interface. Given a collection of objects, you might not know whether a particular object supports IStorable or ICompressible or both. You can just cast to the interfaces:

Document doc = new Document("Test Document");
IStorable isDoc = (IStorable) doc;
isDoc.Read( );
ICompressible icDoc = (ICompressible) doc;
icDoc.Compress( );

If it turns out that Document implements only the IStorable interface:

public class Document : IStorable

the cast to ICompressible would still compile, because ICompressible is a valid interface. However, because of the illegal cast, when the program is run an exception will be thrown:

An exception of type System.InvalidCastException was thrown.

Exceptions are covered in detail in Chapter 11. You would like to be able to ask the object if it supports the interface, in order to then invoke the appropriate methods.
In C# there are two ways to accomplish this. The first method is to use the is operator. The form of the is operator is:

    expression is type

The is operator evaluates true if the expression (which must be a reference type) can be safely cast to type without throwing an exception. Example 8-3 illustrates the use of the is operator to test whether a Document implements the IStorable and ICompressible interfaces.

    using System;

    interface IStorable
    {
        void Read( );
        void Write(object obj);
        int Status { get; set; }
    }

    // here's the new interface
    interface ICompressible
    {
        void Compress( );
        void Decompress( );
    }

    // Document implements IStorable
    public class Document : IStorable
    {
        private int status = 0;

        public Document(string s)
        {
            Console.WriteLine(
                "Creating document with: {0}", s);
        }

        // IStorable.Read
        public void Read( )
        {
            Console.WriteLine(
                "Implementing the Read Method for IStorable");
        }

        // IStorable.Write
        public void Write(object o)
        {
            Console.WriteLine(
                "Implementing the Write Method for IStorable");
        }

        // IStorable.Status
        public int Status
        {
            get { return status; }
            set { status = value; }
        }
    }

    public class Tester
    {
        static void Main( )
        {
            Document doc = new Document("Test Document");

            // only cast if it is safe
            if (doc is IStorable)
            {
                IStorable isDoc = (IStorable) doc;
                isDoc.Read( );
            }

            // this test will fail
            if (doc is ICompressible)
            {
                ICompressible icDoc = (ICompressible) doc;
                icDoc.Compress( );
            }
        }
    }

Example 8-3 differs from Example 8-2 in that Document no longer implements the ICompressible interface. Main( ) now determines whether the cast is legal (sometimes referred to as safe) by evaluating the following if clause:

    if (doc is IStorable)

This is clean and nearly self-documenting. The if statement tells you that the cast will happen only if the object is of the right interface type. Unfortunately, this use of the is operator turns out to be inefficient. To understand why, you need to dip into the MSIL code that this generates.
Here is a small excerpt (note that the line numbers are in hexadecimal notation):

    IL_0023:  isinst     ICompressible
    IL_0028:  brfalse.s  IL_0039
    IL_002a:  ldloc.0
    IL_002b:  castclass  ICompressible
    IL_0030:  stloc.2
    IL_0031:  ldloc.2
    IL_0032:  callvirt   instance void ICompressible::Compress( )

What is most important here is the test for ICompressible on line 23. The keyword isinst is the MSIL code for the is operator. It tests to see if the object (doc) is in fact of the right type. Having passed this test we continue on to line 2b, in which castclass is called. Unfortunately, castclass also tests the type of the object. In effect, the test is done twice.

A more efficient solution is to use the as operator. The as operator combines the is and cast operations by testing first to see whether a cast is valid (i.e., whether an is test would return true) and then completing the cast when it is. If the cast is not valid (i.e., if an is test would return false), the as operator returns null. Using the as operator eliminates the need to handle cast exceptions. At the same time you avoid the overhead of checking the cast twice. For these reasons, it is optimal to cast interfaces using as.
The form of the as operator is:

    expression as type

The following code adapts the test code from Example 8-3, using the as operator and testing for null:

    static void Main( )
    {
        Document doc = new Document("Test Document");

        IStorable isDoc = doc as IStorable;
        if (isDoc != null)
            isDoc.Read( );
        else
            Console.WriteLine("IStorable not supported");

        ICompressible icDoc = doc as ICompressible;
        if (icDoc != null)
            icDoc.Compress( );
        else
            Console.WriteLine("Compressible not supported");
    }

A quick look at the comparable MSIL code shows that the following version is in fact more efficient:

    IL_0023:  isinst     ICompressible
    IL_0028:  stloc.2
    IL_0029:  ldloc.2
    IL_002a:  brfalse.s  IL_0034
    IL_002c:  ldloc.2
    IL_002d:  callvirt   instance void ICompressible::Compress( )

If your design pattern is to test the object to see if it is of the type you need, and if so you will immediately cast it, the as operator is more efficient. At times, however, you might want to test the type of an object but not cast it immediately. Perhaps you want to test it but not cast it at all; you simply want to add it to a list if it fulfills the right interface. In that case, the is operator will be a better choice.

Interfaces are very similar to abstract classes. In fact, you could change the declaration of IStorable to be an abstract class:

    abstract class Storable
    {
        abstract public void Read( );
        abstract public void Write( );
    }

Document could now inherit from Storable, and there would not be much difference from using the interface. Suppose, however, that you purchase a List class from a third-party vendor whose capabilities you wish to combine with those specified by Storable. In C++, you could create a StorableList class and inherit from both List and Storable. But in C#, you're stuck; you can't inherit from both the Storable abstract class and also the List class, because C# does not allow multiple inheritance with classes.
However, C# does allow you to implement any number of interfaces and derive from one base class. Thus, by making Storable an interface, you can inherit from the List class and also from IStorable, as StorableList does in the following example:

    public class StorableList : List, IStorable
    {
        // List methods here ...
        public void Read( ) {...}
        public void Write(object obj) {...}
        // ...
    }
http://etutorials.org/Programming/Programming+C.Sharp/Part+I+The+C+Language/Chapter+8.+Interfaces/8.2+Accessing+Interface+Methods/
crawl-001
refinedweb
1,192
56.45
Caching Data Using URL Query Params in JavaScript

May 20th, 2022

What You Will Learn in This Tutorial

How to temporarily store data in a URL's query params and retrieve it and parse it for use in your UI.

To get started, we need to install one package, query-string:

Terminal

    cd app && npm i query-string

This package will help us to parse and set our query params on the fly. After that's installed, go ahead and start up the server:

Terminal

    joystick start

After this, your app should be running and we're ready to get started.

Adding some global CSS

In order to better contextualize our demo, we're going to be adding CSS throughout the tutorial. To start, we need to add some global CSS that's going to handle the overall display of our pages:

/index.css

    * {
      margin: 0;
      padding: 0;
    }

    *,
    *:before,
    *:after {
      box-sizing: border-box;
    }

    body {
      font-family: "Helvetica Neue", "Helvetica", "Arial", sans-serif;
      font-size: 16px;
      background: #fff;
    }

    .container {
      width: 100%;
      max-width: 800px;
      margin: 15px auto;
      padding: 0 15px !important;
    }

    @media screen and (min-width: 768px) {
      .container {
        margin-top: 50px;
      }
    }

By default when you open up this file, only the CSS for the body tag will exist. The specifics here don't matter too much, but what we're doing is adding some "reset" styles for all HTML elements in the browser (removing the default browser CSS that adds extra margins and padding and changes how elements flow in the box model) and a .container class that will allow us to easily create a centered <div></div> for wrapping content. That's all we need here. We'll be adding more CSS later at the individual component level.

Next, we need to wire up a route for a dummy page that we'll use to test out our query params.

Adding a route to redirect to for testing params

In a Joystick app, all routes are defined on the server in one place: /index.server.js.
Let's open that up now and add a route for a dummy page we can redirect to and verify our query params work as expected:

/index.server.js

    import node from "@joystick.js/node";
    import api from "./api";

    node.app({
      api,
      routes: {
        "/": (req, res) => {
          res.render("ui/pages/index/index.js", {
            layout: "ui/layouts/app/index.js",
          });
        },
        "/listings/:listingId": (req, res) => {
          res.render("ui/pages/listing/index.js");
        },
        "*": (req, res) => {
          res.render("ui/pages/error/index.js", {
            layout: "ui/layouts/app/index.js",
            props: {
              statusCode: 404,
            },
          });
        },
      },
    });

When you ran joystick start earlier from the root of your app, this is the file that Joystick started up. Here, the node.app() function starts up a new Node.js application using Express.js behind the scenes. The routes object defined on the options object passed to node.app() is handed off to Express. By default on this object, we see the / and * routes being defined.

Above, we've added a new route, /listings/:listingId. For our app, we're building a fake real estate search UI where users will be able to customize some search parameters and view listings. Here, we're creating the route for a fake listing page—it won't load any real data, just some static dummy data—that the user will be able to redirect to. The idea is that we'll set some query params on the URL on the / (index) route and then allow the user to click on a link to this /listings/:listingId page. When they do, the query params we set will "go away." When they go back, we expect those query params to restore.

Inside of the route here, we're calling to a function on the res object, res.render(), which is a special function that Joystick adds to the standard Express res object. This function is designed to take the path to a Joystick component in our app and render it on the page. Here, we're assuming that we'll have a page located at /ui/pages/listing/index.js. Let's go and wire that up now.

Wiring up a fake listing page

This one is quick.
We don't care too much about the page itself here, just that it exists for us to redirect the user to.

/ui/pages/listing/index.js

    import ui from '@joystick.js/ui';

    const Listing = ui.component({
      css: `
        .listing-image img {
          max-width: 100%;
          width: 100%;
          display: block;
          height: auto;
        }

        .listing-metadata {
          margin-top: 25px;
        }

        .listing-metadata .price {
          font-size: 28px;
          color: #333;
        }

        .listing-metadata .address {
          font-size: 18px;
          color: #888;
          margin-top: 7px;
        }

        .listing-metadata .rooms {
          font-size: 16px;
          color: #888;
          margin-top: 10px;
        }
      `,
      render: () => {
        return `
          <div class="container">
            <div class="listing-image">
              <img src="/house.jpg" alt="House" />
            </div>
            <div class="listing-metadata">
              <h2 class="price">$350,000</h2>
              <p class="address">1234 Fake St. Winter, MA 12345</p>
              <p class="rooms">3br, 2ba, 2,465 sqft</p>
            </div>
          </div>
        `;
      },
    });

    export default Listing;

Here we create a Joystick component by calling the .component() function defined on the ui object we import from the @joystick.js/ui package. To that function, we pass an object of options to define our component.

Starting at the bottom, we have a render() function which tells our component the HTML we'd like to render for our component. Here, because we don't need a functioning page, we just return a string of plain HTML with some hardcoded data. Of note, the house.jpg image being rendered here can be downloaded from our S3 bucket here. This should be placed in the /public folder at the root of the project.

In addition to this, like we hinted at earlier, we're adding in some CSS. To do it, on a Joystick component we have the css option that we can pass a string of CSS to. Joystick automatically scopes this CSS to this component to help us avoid leaking the styles to other components.

That's it here. Again, this is just a dummy component for helping us test the query parameter logic we'll set up in the next section.
Wiring up a fake search UI with filters and results page

While there's a lot going on in this component, the part we want to focus on is the logic for managing our query params. To get there, first, let's build out the skeleton UI for our component and then pepper in the actual logic to get it working. Though we didn't discuss it earlier, here, we're going to overwrite the existing contents of the /ui/pages/index/index.js file:

/ui/pages/index/index.js

    import ui from '@joystick.js/ui';

    const Index = ui.component({
      css: `
        .search {
          padding: 20px;
        }

        header {
          display: flex;
          margin-bottom: 40px;
          padding-left: 20px;
        }

        header > * {
          margin-right: 20px;
        }

        .options label {
          margin-right: 10px;
        }

        .options label input {
          margin-right: 3px;
        }

        .listings ul {
          display: grid;
          grid-template-columns: 1fr;
          list-style: none;
        }

        .listings ul li {
          position: relative;
          padding: 20px;
          border: 1px solid transparent;
          cursor: pointer;
        }

        .listings ul li:hover {
          border: 1px solid #eee;
          box-shadow: 0px 1px 1px 2px rgba(0, 0, 0, 0.01);
        }

        .listings ul li a {
          position: absolute;
          inset: 0;
          z-index: 5;
        }

        .listing-image img {
          max-width: 100%;
          width: 100%;
          display: block;
          height: auto;
        }

        .listing-metadata {
          margin-top: 25px;
        }

        .listing-metadata .price {
          font-size: 24px;
          color: #333;
        }

        .listing-metadata .address {
          font-size: 16px;
          color: #888;
          margin-top: 7px;
        }

        .listing-metadata .rooms {
          font-size: 14px;
          color: #888;
          margin-top: 7px;
        }

        @media screen and (min-width: 768px) {
          .search {
            padding: 40px;
          }

          .listings ul {
            display: grid;
            grid-template-columns: 1fr 1fr;
          }
        }

        @media screen and (min-width: 1200px) {
          .listings ul {
            display: grid;
            grid-template-columns: 1fr 1fr 1fr 1fr;
          }
        }
      `,
      render: () => {
        return `
          <div class="search">
            <header>
              <input type="text" name="search" placeholder="Search listings..." />
              <select name="category">
                <option value="house">House</option>
                <option value="apartment">Apartment</option>
                <option value="condo">Condo</option>
                <option value="land">Land</option>
              </select>
              <select name="status">
                <option value="forSale">For Sale</option>
                <option value="forRent">For Rent</option>
                <option value="sold">Sold</option>
              </select>
              <div class="options">
                <label><input type="checkbox" name="hasGarage" /> Garage</label>
                <label><input type="checkbox" name="hasCentralAir" /> Central Air</label>
                <label><input type="checkbox" name="hasPool" /> Pool</label>
              </div>
              ...

Above, we're getting the core HTML and CSS on page for our UI. Again, our goal is to have a pseudo search UI where the user can set some search params and see a list of results on the page. Here, we're building out that core UI and styling it up. After we add this, if we visit (ignore the 2605 in the screenshot below—this was just for testing while writing) in our browser, we should see something like this:

Next, let's wire up a "default" state for our search UI (we're referring to everything in the header or top portion of the UI as the "search UI").

/ui/pages/index/index.js

    import ui from '@joystick.js/ui';

    const Index = ui.component({
      state: {
        search: '',
        category: 'house',
        status: 'forSale',
        hasGarage: false,
        hasCentralAir: false,
        hasPool: false,
      },
      css: `...`,
      render: ({ state }) => {
        return `
          <div class="search">
            <header>
              <input type="text" name="search" value="${state.search}" placeholder="Search listings..." />
              <select name="category" value="${state.category}">
                <option value="house" ${state.category === 'house' ? 'selected' : ''}>House</option>
                <option value="apartment" ${state.category === 'apartment' ? 'selected' : ''}>Apartment</option>
                <option value="condo" ${state.category === 'condo' ? 'selected' : ''}>Condo</option>
                <option value="land" ${state.category === 'land' ? 'selected' : ''}>Land</option>
              </select>
              <select name="status" value="${state.status}">
                <option value="forSale" ${state.status === 'forSale' ? 'selected' : ''}>For Sale</option>
                <option value="forRent" ${state.status === 'forRent' ? 'selected' : ''}>For Rent</option>
                <option value="sold" ${state.status === 'sold' ? 'selected' : ''}>Sold</option>
              </select>
              <div class="options">
                <label><input type="checkbox" name="hasGarage" ${state?.hasGarage ? 'checked' : ''} /> Garage</label>
                <label><input type="checkbox" name="hasCentralAir" ${state?.hasCentralAir ? 'checked' : ''} /> Central Air</label>
                <label><input type="checkbox" name="hasPool" ${state?.hasPool ? 'checked' : ''} /> Pool</label>
              </div>
              ...

On a Joystick component, we can pass a state option which is assigned to an object of properties that we want to assign to our component's internal state by default (i.e., when the component first loads up). Here, we're creating some defaults that we want to use for our search UI.

The important part here, back down in the render() function, is that we've added an argument to our render() function which we anticipate is an object that we can destructure to "pluck off" specific properties and assign them to variables of the same name in the current scope/context. The object we expect here is the component instance (meaning, the component we're currently authoring, as it exists in memory). On that instance, we expect to have access to the current state value. "State" in this case is referring to the visual state of our UI. The values on the state object are intended to be a means for augmenting this visual state on the fly.

Here, we take that state object to reference the values to populate our search UI. We have three types of inputs in our UI:

- input, which is a plain text input used for entering a string of search text.
- select, which is used for our listing "category" and "status" inputs.
- checkbox, which is used for our amenities checkboxes.

Down in our HTML, we're referencing these values using JavaScript string interpolation (a language-level feature for embedding/evaluating JavaScript inside of a string).
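To make that interpolation pattern concrete in isolation, here is a small hypothetical helper (not part of the tutorial's component — the names are stand-ins) that renders a list of <option> tags as a string, marking whichever value matches the current state as selected:

```javascript
// Reduce the "conditionally add the selected attribute" pattern to a function.
// `options` is a list of option values; `selected` mimics a state value above.
const renderOptions = (options, selected) => options
  .map((value) => `<option value="${value}"${value === selected ? ' selected' : ''}>${value}</option>`)
  .join('');

// renderOptions(['house', 'condo'], 'condo') yields:
// <option value="house">house</option><option value="condo" selected>condo</option>
```

The same ternary-inside-a-template-literal trick drives the checked attribute on the checkboxes.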
We can do this because the value we return from our component's render() function is a string. Depending on the type of input we're rendering, we utilize the corresponding state value slightly differently. For our plain text search input, we can just set a value attribute equal to the value of state.search. For our <select> inputs, we set both a value attribute on the main <select> tag as well as a conditional selected attribute on each option in that <select> list (important, as the current value of the input won't appear as selected without this attribute). Finally, for our checkbox inputs, we conditionally add a checked attribute value based on the corresponding state value for each input.

This gives us the fundamentals of our UI. Now, we're ready to wire up the capturing of changes to our search UI and storing them as query params in our URL.

Capturing search filters as query params

Now that we have our base UI set, we can start to manage our query params. To do it, we're going to add some JavaScript event listeners to our UI so we can grab the latest values as they're set by the user:

/ui/pages/index/index.js

    import ui from '@joystick.js/ui';
    import queryString from 'query-string';

    const Index = ui.component({
      state: { ... },
      methods: {
        handleUpdateQueryParams: (param = '', value = '') => {
          const existingQueryParams = queryString.parse(location.search);
          const updatedQueryParams = queryString.stringify({
            ...existingQueryParams,
            [param]: value,
          });

          window.history.pushState('', '', `?${updatedQueryParams}`);
        },
        handleClearQueryParams: (component = {}) => {
          window.history.pushState('', '', `${location.origin}${location.pathname}`);
          component.methods.handleSetStateFromQueryParams();
        },
      },
      css: `...`,
      events: {
        'keyup [ ...
      },
      render: ({ state }) => {
        return `
          ...
          </div>
        `;
      },
    });

    export default Index;

Above, we've added two new properties to our component's options: events and methods.
Focusing on events, here, Joystick helps us to listen for JavaScript DOM events on elements rendered by our component. Each event is defined as a property on the object passed to events where the property name is a string describing the type of DOM event to listen for and the element inside of our component to listen for the event on. To the property, we assign a function that should be called when that event is detected on the specified element. Here, we've added listeners for each of our search-related inputs (save for the checkbox inputs which we just listen for generically on inputs with a type of checkbox). Notice that the odd duck out here is the search text input. Here, we want to listen for the keyup event on the input as we want to capture each change to the input (if we listen for a change event like we do the others, it will only fire after the user has "blurred" or clicked out of the input). Inside of all event listeners (save for the last which we'll cover in a bit), we're calling to component.methods.handleUpdateQueryParams(). To an event listener's callback function, Joystick passes two values: event and component. event being the raw JavaScript DOM event that fired and component being the current component instance (similar to what we saw down in render())—the = {} part after component here is us defining a default value—a core JavaScript feature—to fallback to in the event that component isn't defined (this will never be true as it's automatic—consider adding this a force of habit). From the component instance, we want to access a method defined on the methods object (where we can store miscellaneous methods on our component instance). Here, we're calling to a method defined above, handleUpdateQueryParams(). Up top, we've added an import of the queryString package we installed earlier which will help us to parse the existing query params in the URL and prepare our values for addition to the URL. 
Inside of handleUpdateQueryParams(), we need to anticipate existing query params in our URL that we're adding to, so, we begin by grabbing any existing query params and parsing them into an object with queryString.parse(). Here, location.search is the global browser value that contains the current query string like ?someParam=value. When we pass that value to queryString.parse() we get back a JavaScript object like { someParam: 'value' }. With that, we create another variable updatedQueryParams which is set to a call to queryString.stringify() and passed an object that we want to convert back into a query string like ?someParam=value. On that object, using the JavaScript ... spread operator, we first "unpack" or spread out any existing query params and then immediately follow it with [param]: value where param is the name of the param we want to update (passed as the first argument to handleUpdateQueryParams()) and value being the value we want to set for that param—set via the second argument passed to handleUpdateQueryParams(). The [param] syntax here is using JavaScript bracket notation to say "dynamically set the property name to the value of the param argument." If we look down in our event handlers to see how this is called, we pass the param either as a string or in the case of our checkbox inputs, as the event.target.name value or the name attribute of the checkbox firing the event. With updatedQueryParams compiled, next, to update our URL, we call to the global window.history.pushState() passing an update we want to apply to the URL. Here, history.pushState() is a function that updates our browser's history but does not trigger a browser refresh (like we'd expect if we manually set the location.search value directly). Admittedly, the API for history.pushState() is a bit confusing (as noted in this MDN article on the function here). 
For the first two values, we just pass empty strings (see the previous link on MDN if you're curious about what these are for) and for the third argument, we pass the URL we want to "push" onto the browser history. In this case, we don't want to modify the URL itself, just the query params, so we pass a string containing a ? which denotes the beginning of query params in a URL and the value returned by queryString.stringify() in updatedQueryParams.

That's it. Now, if we start to make changes to our UI, we should see our URL start to update dynamically with the input values of our search UI.

Before we move on, real quick, calling attention to the click .clear event listener and subsequent call to methods.handleClearQueryParams(): here we're doing what the code suggests: clearing out any query params we've set on the URL when the user clicks on the "Clear" link at the end of our search UI. To do it, we eventually call to history.pushState(), this time passing the combination of the current location.origin with the current location.pathname (e.g., / or /listings/123). This effectively clears out all query params in the URL and strips it down to just the base URL for the current page. After this, we're calling to another method we've yet to define: methods.handleSetStateFromQueryParams(). We'll see how this takes shape in the next—and final—section.

Reloading search filters when page loads

This part is fairly straightforward. Now that we have our query params in our URL, we want to account for those params whenever our page loads. Remember, we want to be able to move away from this page, come back, and have our search UI "reload" the user's search values from the URL.

/ui/pages/index/index.js

    import ui from '@joystick.js/ui';
    import queryString from 'query-string';

    const Index = ui.component({
      state: { ... },
      lifecycle: {
        onMount: (component = {}) => {
          component.methods.handleSetStateFromQueryParams();
        },
      },
      methods: {
        handleSetStateFromQueryParams: (component = {}) => {
          const queryParams = queryString.parse(location.search);

          component.setState({
            search: queryParams?.search || '',
            category: queryParams?.category || 'house',
            status: queryParams?.status || 'forSale',
            hasGarage: queryParams?.hasGarage && queryParams?.hasGarage === 'true' || false,
            hasCentralAir: queryParams?.hasCentralAir && queryParams?.hasCentralAir === 'true' || false,
            hasPool: queryParams?.hasPool && queryParams?.hasPool === 'true' || false,
          });
        },
        handleUpdateQueryParams: (param = '', value = '') => { ... },
        handleClearQueryParams: (component = {}) => {
          window.history.pushState('', '', `${location.origin}${location.pathname}`);
          component.methods.handleSetStateFromQueryParams();
        },
      },
      css: `...`,
      events: { ... },
      render: ({ state }) => {
        return `
          <div class="search">
            ...
          </div>
        `;
      },
    });

    export default Index;

Last part. Above, we've added an additional property to our component options, lifecycle, and on the object passed to that, we've defined a function onMount taking in the component instance as the first argument. Here, we're saying "when this component mounts (loads up) in the browser, call the methods.handleSetStateFromQueryParams() function." The idea is what you'd expect: to load the current set of query params from the URL back onto our component's state when the page loads up.

Focusing on handleSetStateFromQueryParams(), the work here is pretty simple. First, we want to get the query params as an object queryParams by calling to queryString.parse(location.search). This is similar to what we saw earlier, taking the ?someParam=value form of our query params and converting it to a JavaScript object like { someParam: 'value' }. With that object queryParams, we call to component.setState() to dynamically update the state of our component.
Here, we're setting each of the values we specified in our component's default state earlier. For each value, we attempt to access that param from the queryParams object. If it exists, we use it, and if not, we use the JavaScript "or" operator || to say "use this value instead." Here, the "instead" is just falling back to the same values we set on the default state earlier.

Note: an astute reader will say that we can just loop over the queryParams object and selectively edit values on state so that we don't have to do fallback values like this. You'd be right, but here the goal is clarity and accessibility for all skill levels.

That's it! Now when we set some search values and refresh the page, our query params will remain and be automatically set back on our UI. If we click on the fake listing in our list to go to its detail page and then click "back" in the browser, our query params will still exist in the URL and be loaded back into the UI.

Wrapping up

In this tutorial, we learned how to dynamically set query parameters in the browser. We learned how to create a simple, dynamic search UI that stored the user's search params in the URL and, when reloading the page, how to load those params from the URL back into our UI. To do it, we learned how to use the various features of a Joystick component in conjunction with the query-string package to help us encode and decode the query params in our URL.
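As a closing sketch, the encode/decode round-trip built above with the query-string package can also be expressed with the standard, built-in URLSearchParams API (available in browsers and Node.js). The helper names here are hypothetical, and the defaults mirror the tutorial's state object:

```javascript
// Merge one param into an existing query string (the handleUpdateQueryParams idea).
const mergeQueryParam = (search, param, value) => {
  const params = new URLSearchParams(search); // tolerates a leading "?"
  params.set(param, String(value));           // add or overwrite the one key
  return `?${params.toString()}`;
};

// Rebuild state from the query string with fallbacks (the
// handleSetStateFromQueryParams idea). Query-param values are always strings,
// so 'true'/'false' must be coerced back into booleans.
const stateFromQueryParams = (search) => {
  const params = new URLSearchParams(search);
  const bool = (name) => params.get(name) === 'true';
  return {
    search: params.get('search') || '',
    category: params.get('category') || 'house',
    status: params.get('status') || 'forSale',
    hasGarage: bool('hasGarage'),
    hasCentralAir: bool('hasCentralAir'),
    hasPool: bool('hasPool'),
  };
};

// In the browser, the merged string would be pushed without a reload:
// window.history.pushState('', '', mergeQueryParam(location.search, 'hasPool', true));
```

This avoids the extra dependency, though query-string offers more parsing options (arrays, number coercion, etc.) than the built-in API.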
https://cheatcode.co/tutorials/caching-data-using-url-query-params-in-javascript
CC-MAIN-2022-27
refinedweb
3,709
64.61
/* Soot - a J*va Optimization Framework
 * Copyright (C) 2005
 */

package soot.tagkit;

import soot.*;

/**
 * Represents the double annotation element.
 * Each annotation can have several elements
 * for Java 1.5.
 */
public class AnnotationDoubleElem extends AnnotationElem
{
    double value;

    public AnnotationDoubleElem(double v, char kind, String name){
        super(kind, name);
        this.value = v;
    }

    public String toString(){
        return super.toString()+" value: "+value;
    }

    public double getValue(){
        return value;
    }
}
http://kickjava.com/src/soot/tagkit/AnnotationDoubleElem.java.htm
CC-MAIN-2018-26
refinedweb
106
50.23
This design is intended to address the following issues:

KNOX-179@jira: Simple way to introduce new provider/servlet filters into the chains

The basic issue is that the development and delivery of code is currently required to add a new provider into Knox. This results in a barrier to entry for integrating new providers with Knox. In particular, the integration of an existing servlet filter into Knox should not require code development. The design below is intended to introduce a very simple configuration based mechanism to accomplish this. Naturally this will only cover very simple use-cases. More advanced requirements will still require the development of a ProviderDeploymentContributor.

    <topology>
      <gateway>
        <provider>
          <role>authentication</role>
          <name>generic</name>
          <enabled>true</enabled>
          <param>
            <name>filterClassName</name>
            <value>org.opensource.filters.ExistingFilter</value>
          </param>
        </provider>
        ...
      </gateway>
      ...
    </topology>

    public class DefaultProviderDeploymentContributor {

      String getRole() {
        return "*"; // The "*" will require special handling in the framework.
      }

      String getName() {
        return "generic";
      }

      void initializeContribution( DeploymentContext context ) {
        // NoOp
      }

      void contributeProvider( DeploymentContext context, Provider provider ) {
        // NoOp
      }

      void contributeFilter( DeploymentContext context, Provider provider, Service service,
                             ResourceDescriptor resource, List<FilterParamDescriptor> params ) {
        resource.addFilter()
            .name( getName() )
            .role( provider.getRole() )
            .impl( provider.getParams().get( "filterClassName" ) )
            .params( params );
      }

      void finalizeContribution( DeploymentContext context ) {
        // NoOp
      }
    }

3 Comments

Larry McCay:

We need to spell out the use of the '*' role and the default name here. It seems like you are trying to say that a provider with a returned role of '*' will match anything in the config. This seems to make sense, though I think that the role is probably known for a given provider. It is either authn or authz or whatever.
It also seems to me that you are hinting that any custom providers would be called default. If this is the case, how do we distinguish them apart from each other? I think that we have a couple foundational things now that we need to consider (you probably already have): What we are talking about here is the ability to indicate in topology the use of a custom provider for one of the existing roles. Can't we use what we have today and just happen to include the filterClassName init param? We could also have an optional boolean that indicates that it is custom, and that would be how you bind it to the custom contributor.

Kevin Minder:

First off, be aware that this was written in the context of the design for KNOX-177. Beyond that, let me try and hit your points.

1) Yes, "*" will match any role. Since this is a generic provider it would inherit the role in the topology.
2) I'm not saying that custom providers would be called default. We need a name for this specific special provider. I've since changed my opinion about the use of "default" as the name and I'm leaning toward "generic", although "custom" is in the running.
3a) Roles do represent a placeholder in a filter chain.
3b) Names in general distinguish providers of the same role. However, in this "special" case the name of this provider will globally apply to all roles.
3c) I tried to address this in the design for KNOX-177.
4) Not necessarily; due to KNOX-177 it should work for any role.
5) We could, but then what does it mean to have a role+name+filterClassName init param? Would all ProviderDeploymentContributors be expected to honor this init param?
6) I don't think this makes sense given the answers provided above.

Kevin Minder:

I also do agree that the algorithm for selecting a provider implementation needs to be more clearly spelled out and documented. We need to get a Developer's Guide started for this type of info.
At any rate, I don't think that the "generic" provider should ever be considered by the framework when only the provider role is specified in <topology><gateway><provider>. This is mainly because there is no reasonable default for the filterClassName init param.
https://cwiki.apache.org/confluence/display/KNOX/KNOX-179%3A+Simple+way+to+introduce+new+Provider
I have broken out boilerplate setup code, such as secrets and storage rights, into importable scripts. My code works fine in Databricks v6.x, but not on a Databricks v7.x cluster. In v7.x my code is falsely detected as databricks-connect usage, even if I run it in a notebook in the Databricks web GUI.

Traceback in screenshot of a web notebook attached to a v7.2 standard cluster on Azure Databricks: databricks-connect-detection-false-positive.png

Answer by 5ebastian · Sep 19 at 01:33 PM

At some point recently the code for dbutils instantiation in the documentation was updated to a version that works with v7.x clusters. Here is the diff that fixed the problem for me:

     def get_dbutils(spark):
    -    try:
    +    if spark.conf.get("spark.databricks.service.client.enabled") == "true":
             from pyspark.dbutils import DBUtils
             dbutils = DBUtils(spark)
    -    except ImportError:
    +    else:
             import IPython
             dbutils = IPython.get_ipython().user_ns["dbutils"]
         return dbutils
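Written out in full, the corrected helper from the answer looks like the sketch below. The Spark conf key is the one used in the answer; the default argument passed to `conf.get` is a defensive addition (an assumption, in case the key is absent on some cluster versions), and the check is factored into a small predicate so the branch logic can be tested without a running cluster.

```python
def running_under_databricks_connect(spark):
    """True when this Spark session was created by databricks-connect
    rather than by a cluster-side notebook kernel."""
    # databricks-connect sets this conf key to "true"; treating a missing
    # key as "false" is a defensive assumption, not documented behavior.
    return spark.conf.get("spark.databricks.service.client.enabled", "false") == "true"

def get_dbutils(spark):
    """Return a dbutils handle that works both under databricks-connect
    and in a notebook attached to a DBR 7.x cluster."""
    if running_under_databricks_connect(spark):
        # pyspark.dbutils only ships with the databricks-connect client.
        from pyspark.dbutils import DBUtils
        return DBUtils(spark)
    # In a web notebook, dbutils is already injected into the IPython
    # user namespace by the Databricks runtime.
    import IPython
    return IPython.get_ipython().user_ns["dbutils"]
```

The predicate can be exercised with a stubbed session object, which is how the false-positive detection described in the question can be reproduced locally.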
https://forums.databricks.com/questions/45383/databricks-7-databricks-connect-detection-false-po.html
Brief items

Stable updates: 3.13.8, 3.10.35, and 3.4.85 were released on March 31. 3.12.16 and 3.2.56 were released on April 2.

Christ, even *I* find our configuration process tedious. I can only imagine how many casual users we scare away.

We thought this was a great idea, and have been experimenting with a new Facebook group dedicated to patch discussion and review. The new group provides a dramatically improved development workflow, including: To help capture the group discussion in the final patch submission, we suggest adding a Liked-by: tag to commits that have been through group review.

The merge window has opened and Linus Torvalds is madly merging code into the mainline. We got a bit behind this week, but will (of course) be covering the merge window, starting in next week's edition. We are sorry for any inconvenience this may cause.

Kernel development news

At the 2014 Linux Storage, Filesystem, and Memory Management (LSFMM) Summit, Dave Chinner and Ted Ts'o jointly led a session that ended up spanning two slots over two days. The topic was, broadly, whether the filesystem or the block layer was the right interface for supporting shingled magnetic recording (SMR) devices. In the end, it ranged a bit more broadly than that.

Ts'o began with a description of a kernel-level C interface to the zone information returned by SMR devices that he has been working on. SMR devices will report the zones that are present on the drive, their characteristics (size, sequential-only, ...), and the location of the write pointer for each sequential-only zone. Ts'o's idea is to cache that information in a compact form in the kernel so that multiple "report zones" commands do not need to be sent to the device. Instead, interested kernel subsystems can query for the sizes of zones and the position of the write pointer in each zone, for example. The interface for user space would be ioctl(), Ts'o said, though James Bottomley thought a sysfs interface made more sense.
Chinner was concerned about having thousands of entries in sysfs, and Ric Wheeler noted that there could actually be tens of thousands of zones in a given device. The data structures he is using assume that zones are mostly grouped into regions of same-sized zones, Ts'o said. He is "optimizing for sanity", but the interface would support other device layouts.

Zach Brown wondered why the kernel needed to cache the information, since that might require snooping the SCSI bus, looking for reset write pointer commands. No one thought snooping the bus was viable, but some thought disallowing raw SCSI access was plausible. Bottomley dumped cold water on that with a reminder that the SCSI generic (sg) layer would bypass Ts'o's cache.

The question of how to handle host-managed devices (where the host must ensure that all writes to sequential zones are sequential) then came up. Ts'o said he has seen terrible one-second latency in host-aware devices (where the host can make mistakes and a translation layer will remap the non-sequential writes—which can lead to garbage collection and terrible latencies), which means that users will want Linux to support host-managed behavior. That should avoid these latencies even on host-aware devices.

But, as Chinner pointed out, there are things that have fixed layouts in user space that cannot be changed. For example, mkfs zeroes out the end of the partition, and SMR drives have to be able to work with that, he said. He is "highly skeptical" that host-managed devices will work at all with Linux. Nothing that Linux has today can run on host-managed SMR devices, he said. But those devices will likely be cheaper to produce, so they will be available and users will want support for them.

An informal poll of the device makers in the room about the host-managed vs. host-aware question was largely inconclusive. Ts'o suggested using the device mapper to create a translation layer in the kernel that would support host-managed devices.
"We can fix bugs quicker than vendors can push firmware." But, as Chris Mason pointed out, any new device mapper layer won't be available to users for something like three years, but there is a need to support both types of SMR devices "tomorrow". The first session expired at that point, without much in the way of real conclusions. When it picked up again, Ts'o had shifted gears a bit. There are a number of situations where the block device is "doing magic behind the scenes", for example SMR and thin provisioning with dm-thin. What filesystems have been doing to try to optimize their layout for basic, spinning drives is not sensible in other scenarios. For SSD drives, the translation layer and drives were so fast that filesystems don't need to care about the translation layer and other magic happening in the drive firmware. For SMR and other situations, that may not be true, so there is a need to rethink the filesystem layer somewhat. That was an entrée to Chinner's thoughts about filesystems. He cautioned that he had just started to write things down, and is open to other suggestions and ideas, but he wanted to get feedback on his thinking. A filesystem really consists of two separate layers, Chinner said: a namespace layer and a block allocation layer. Linux filesystems have done a lot of work to optimize the block allocations for spinning devices, but there are other classes of device, SMR and persistent memory for example, where those optimizations fall down. So, in order to optimize block allocation for all of these different kinds of devices, it would make sense to split out block allocation from namespace handling in filesystems. The namespace portion of filesystems would remain unchanged, and all of the allocation smarts would move to a "smart block device" that would know the characteristics of the underlying device and be able to allocate blocks accordingly. 
The filesystem namespace layer would know things like the fact that it would like a set of allocations to be contiguous, but the block allocator could override those decisions based on its knowledge. If it were allocating blocks on an SMR device and recognized that it couldn't put the data in a contiguous location, it would return "nearby" blocks. For spinning media, it would return contiguous blocks, but for persistent memory, "we don't care", so it could just return some convenient blocks.

Any of the existing filesystems that do not support copy-on-write (COW) cannot really be optimized for SMR, he said, because you can't overwrite data in sequential zones. That would mean adding COW to ext4 and XFS, Chinner said. But splitting the filesystem into two pieces means that the on-disk format can change, he said. All the namespace layer cares about is that the metadata it carries is consistent.

But Ts'o brought up something that was obviously on the minds of many in the room: how is it different from object-based storage that was going to start taking over fifteen years ago?—but hasn't. Chinner said that he had no plans to move things like files and inodes down into the block allocation layer, as object-based storage does; there would just be a layer that would allocate and release blocks. He asked: why do the optimization of block allocation for different types of devices in each filesystem?

Another difference between Chinner's idea and object-based storage is that the metadata stays with the filesystem, unlike moving it down to the device as it is in the object-based model, Bottomley said. Chinner said that he is not looking to allocate an object that he can attach attributes to, just creating allocators that are optimized for a particular type of device. Once that happens, it would make sense to share those allocators with multiple filesystems. Mason noted that what Chinner was describing was a lot like the FusionIO filesystem DirectFS.
Chinner said that he was not surprised; he looked and did not find much documentation on DirectFS and that others have come up with these ideas in the past. It is not necessarily new, but he is looking at it as a way to solve some of the problems that have cropped up.

Bottomley asked how to get to "something we can test". Chinner thought it would take six months of work, but there is still lots to do before that work could start. "Should we take this approach?", he asked. Wheeler thought the idea showed promise; it avoids redundancy and takes advantage of the properties of new devices. Others were similarly positive, though they wanted Chinner to firmly keep the reasons that object-based storage failed in his mind as he worked on it. Chinner thought a proof-of-concept should be appearing in six to twelve months' time.

[ Thanks to the Linux Foundation for travel support to attend LSFMM. ]

At the 2014 LSFMM Summit, held in Napa, California March 24-25, Martin Petersen and Zach Brown gave an update on the status of copy offload, which is a mechanism to handle file copies on the server or storage array without involving the CPU or network. In addition, Hannes Reinecke and Doug Gilbert took the second half of the slot to discuss an additional copy offload option.

The Petersen/Brown talk was titled "Copy Offload: Are We There Yet?" and Petersen tried, unsuccessfully, to short-circuit the whole talk by simply answering the question: "Yes, thank you", he said and started to head back to his seat. But there was clearly more to say about a feature that allows storage devices to handle file copies without any involvement of either the server or the network—at least once the copy has been initiated.

Petersen said that he had been working on the feature for some time. He rewrote it a few times and had to rebase it on top of Hannes Reinecke's vital product data (VPD) work. That last step got rid of most of his code, he said, and led to a working copy offload.
The interface is straightforward, just consisting of target and destination devices, target and destination logical block addresses (LBAs), and a number of blocks. Under the covers, it uses the SCSI XCOPY (extended copy) command because that is "supported by everyone". It does not preclude adding more complicated copy offload options later, Petersen said, but he just wanted something "simple that would work".

Depending on the storage device, copy offload can do really large copies instantly, by just updating some references to the data, Ric Wheeler said.

Someone asked what Samba's interface would look like. To that, Brown said that a new interface using file descriptors and byte ranges is the next step. It will be a single-buffer-at-a-time system call that handles descriptors rather than devices. It can return partial success, so user space needs to be prepared for that, he said. While he didn't commit to a date, Brown said that the interface would be much simpler now that Petersen had added XCOPY support.

Moving on to the token-based copying, Gilbert noted that there are two big players in the copy offload world: VMware, which uses XCOPY (with a one-byte length ID, aka LID1), and Microsoft, which uses ODX (aka LID4 because it has a four-byte length ID). Storage vendors all support XCOPY, but ODX support is growing. LID4 added a number of improvements to LID1, but it adds lots of complexity and ugly hacks too, Gilbert said.

ODX is a Microsoft name for the "lite" portion of the original T10 (SCSI standardization group) document "XCOPYv2: Extended Copy Plus & Lite". ODX is a two-part disk-to-disk token-based copy, he said. It uses a storage-based gather list to populate a "representation of data" (ROD), which can be thought of as a snapshot ID. It also generates a ROD token that can be used to access the data assembled. Wheeler noted that anyone who has the token value (and access to the storage) can copy the data without any security checks.
"If you have the token, you have the data" is the model, Fred Knight said. That bypasses the usual operating system security model, though, which is something to be aware of, Wheeler said. The lifetimes of the tokens (typically 30-60 seconds) will help reduce problems, Reinecke said. But Knight cautioned that lifetimes vary between implementations. In addition, Reinecke noted that the token is not guaranteed to work throughout the entire lifetime. Gilbert said that ODX is a "point in time" copy, which sounds something like snapshots, but the 30-60 second lifetime makes them not particularly useful as snapshots. He then gave a demo that created a gather list, wrote a token to a file, used scp to copy the token file to another host, then used the token with his ddpt utility to retrieve the data. As Reinecke summed up, the main idea is to avoid data transfer via the CPU whenever possible. If that can be done efficiently, then Linux should look at supporting it. [ Thanks to the Linux Foundation for travel support to attend LSFMM. ] Hannes Reinecke led two sessions at this year's Linux Storage, Filesystem, and Memory Management (LSFMM) Summit that were concerned with errors in the block layer. The first largely focused on errors that come out of partition scanners, while the second looked at a fundamental rework of the SCSI error-handling path. Reinecke has added more detailed I/O error codes that are meant to help diagnose problems. One area where he ran into problems was that an EMC driver was returning ENOSPC when it hit the end of the disk during a partition scan. He would rather see that be an ENXIO, which is what the seven kernel partition scanners (and one in user space) return for the end-of-disk condition. So, he has remapped that error to ENXIO in the SCSI code. Otherwise, the thin provisioning code gets confused as it expects ENOSPC only when it hits its limit. 
Al Viro was concerned that the remapped error code would make it all the way out to user space and confuse various tools. But Reinecke assured him that the remapped errors stop at the block layer. Being able to distinguish between actual I/O errors and the end-of-disk condition will also allow the partition scanner to stop probing in the presence of I/O errors, he said.

In another session, on day two, Reinecke presented a proposal for recovering from SCSI errors at various levels (LUN, target, bus, ...). In addition, doing resets at some of the levels does not make any sense depending on the kind of error detected, he said. If the target is unreachable, for example, trying to reset the LUN, target, or bus is pointless; instead a transport reset should be tried, and if that fails, a host reset would be next. This would be the path taken when either a command times out or returns an error.

There were lots of complaints from those in attendance about resetting more than is absolutely necessary. That disrupts traffic to other LUNs when a single LUN or target has an error, even though the other LUNs are handling I/O just fine. Part of the problem, according to Reinecke, is that the LUN reset command does not time out. But Roland Dreier noted that one missed I/O can cause a whole storage array to get reset, which can take a minute or more to clear.

In addition, once the error handler has been entered, all I/O to the host in question is stopped. In some large fabrics, one dropped packet can lead to no I/O for quite some time, he said. Reinecke disputed that a dropped frame would lead to that, since commands are retried, but agreed that a more serious error could lead to that situation.

Complicating things further, of course, is that storage vendors all do different things for different errors. The recovery process for one vendor may or may not be the same as what is needed for another.
In the end, it seemed like there was agreement that Reinecke's changes would make things better than what we have now, which is obviously a step in the right direction.

[ Thanks to the Linux Foundation for travel support to attend LSFMM. ]

Al Viro gave an update on the long-awaited revoke() system call to the 2014 Linux Storage, Filesystem, and Memory Management (LSFMM) Summit. revoke() is meant to close() any existing file descriptors open for a given pathname so that a process can know that it has exclusive use of the file or device in question. Viro also discussed some work he has been doing to unify the multiple variants of the read() and write() system calls.

Viro started out by saying that revoke() was the less interesting part of his session. It is getting "more or less close to done", he said. We looked at an earlier version of this work a year ago.

Files will be able to be declared revokable at open() time. If they are, a counter will track the usage of the file_operations functions at any given time. Once revoke() is called, it waits for all currently active threads to exit the file_operations, and makes sure that no more are allowed to start. There are places in procfs and sysfs where something similar is open-coded, Viro said, that could be removed once the revoke() changes go in. One of the keys is to ensure that the common path does not slow down for revoke() since most files will not be revokable.

There are several areas that still need work, including poll(), which "provides some complications", and mmap(), which has always been problematic for revoke().

In a bit of an aside, Viro noted that there is a lot of code that is "just plain broke". For example, if a file in debugfs is opened and the underlying code removes the file from the debugfs directory, any read or write operation using the open file descriptor will oops the kernel. Dynamic debugfs is completely broken, Viro said.
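The gating just described, a per-file count of threads inside the file_operations that revoke() waits to drain while refusing new entries, can be modeled in miniature. This is an illustrative sketch with invented names, not the kernel implementation:

```python
import threading

class RevokableFile:
    """Toy model of revoke() gating: each file_operations call is
    bracketed by enter()/exit(); revoke() marks the file dead, waits
    for in-flight callers to drain, and new entries are refused."""

    def __init__(self):
        self._cond = threading.Condition()
        self._in_flight = 0
        self._revoked = False

    def enter(self):
        with self._cond:
            if self._revoked:
                return False        # like getting an error after revoke
            self._in_flight += 1
            return True

    def exit(self):
        with self._cond:
            self._in_flight -= 1
            if self._in_flight == 0:
                self._cond.notify_all()

    def revoke(self):
        with self._cond:
            self._revoked = True    # no new callers from this point on
            while self._in_flight > 0:
                self._cond.wait()   # drain callers already inside
```

The "common path does not slow down" requirement mentioned above is the part this sketch glosses over: the kernel cannot afford a shared lock and counter on every read() for non-revokable files, which is why only files declared revokable at open() time pay for the bookkeeping.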
He hopes that the revoke() code will be in reasonable shape in a couple of cycles—"it's getting there". Dynamic debugfs will be one of the first users, he said.

Viro then moved on to the unification of plain read() and write() with the readv()/writev() variants as well as splice_read() and splice_write(). The regular and vector variants (readv()/writev()) have mostly been combined, he said. It is "not pretty", but it is tolerable. The splice variants got "really messy". Ideally, the code for all of the variants should look the same all the way down, until you get to the final disposition. But each of the variants has its own view of the data; the splice variants get/put their data into pages, which doesn't fit well with the iovec used by the other two variants (in most implementations, plain read() and write() are translated to an iovec of length one).

Creating a new data structure that can hold both user and kernel iovec members, along with struct page for the splice variants, may be the way to go, Viro said.

Something that "fell out" of his work in this area is the addition of iov_iter. The iov_shorten() operation tries to recalculate the number of network segments that fall into a given iovec area, but the result is that the iovec gets modified when there are short reads or writes. Worse still, how the iovec gets modified is protocol-dependent, which makes it hard for users. In fact, someone from the CIFS team said that it makes a copy of any iovec before passing it in because it doesn't know what it will get back. Having it be protocol-dependent is "just wrong", Viro said.

He has been getting rid of iov_shorten() calls, as well as other places that shorten iovec arrays. That might allow sendpage() to be removed entirely; protocols that want to be smart can set up an iov_iter, he said.

[ Thanks to the Linux Foundation for travel support to attend LSFMM. ]

Eric Sandeen, Lukáš Czerner, Mike Snitzer, and Dmitry Monakhov discussed thin provisioning support in Linux at the 2014 LSFMM Summit. Thin provisioning is a way for a storage device to pretend to have more space than it actually does, relying on the administrator to add more storage "just in time".

For the most part, Snitzer said, thin provisioning using the device mapper (dm-thin) is pretty stable. But there are some performance issues that they would like to address.

One of the problem areas in terms of performance is that the block allocator from dm-thin is "dumb". Multi-threaded writes end up on the disk with bad ordering that leads to bad read performance. What Snitzer would like to do is to split writes into different lists, one per thin volume, then sort the struct bio entries in the list. He doesn't want to add a full-on bio-based elevator, but does want to get some better locality to improve performance.

As part of that, Snitzer would like to have a way to ask an XFS or ext4 filesystem about its allocation group boundaries. Those could be used as a hint for block allocation in the thin provisioning code. But Joel Becker wondered why that information was needed, and why the logical block address (LBA) information was not enough. Dave Chinner agreed, noting that the filesystem relies on the LBA information to make its allocation decisions.

Becker suggested that what Snitzer was really after is the "borders at which we stop caring about locality"—basically the distance between two writes that would not be considered "close". Snitzer said that he is looking for something concrete that dm-thin can use. Ted Ts'o thought that both ext4 and XFS could provide some values that would be reasonable for dm-thin to use to determine locality.

Monakhov noted that filesystems spread out their data throughout their volume, which causes problems for dm-thin. The problem, Chinner said, is that the filesystem needs to tell the block layer where the free space is.
Chinner said that one dm-thin developer was asking for information on how the filesystem will spread things out, while another was asking that filesystems not spread things out. There needs to be an automatic way for the filesystem to tell the block layer about free space, Ts'o said.

Discard is one mechanism to do so, but Roland Dreier said that most administrators are disabling discard. In addition, TRIM command (that tells devices about unused blocks) support has been spotty, Martin Petersen said. Unqueued TRIM didn't work early on, but works now, while queued TRIM support is being added and is not yet working. Unqueued TRIM requires stopping all other activity on the device, so performance suffers; queued TRIM was added to the standard relatively recently to avoid that problem.

Someone said that fstrim (offline discard) is probably the right solution for most workloads. Snitzer said that mount -o discard (online discard) could be used with dm-thin. It passes the discard information to dm-thin, which doesn't (necessarily) pass it down to the storage device. That gives dm-thin the information it needs on free space, however.

Another problem for dm-thin is that fallocate() will reserve space, but that isn't getting passed down to the block layer. The result is that even after a successful fallocate() call, applications can still get ENOSPC—exactly the outcome fallocate() was meant to avoid. Ts'o said that can't be solved without handing the block allocation job to the block layer. But, Chinner said, it is not necessarily right for dm-thin to handle allocation.

Sandeen noted another problem: filesystems act differently when they run out of space. For XFS, it will keep trying to write any pending metadata, while ext4 and Btrfs do not. Chinner explained that XFS already has the metadata on stable storage in the log, so it retries to see if the administrator wants to add more storage.
More generally, there are different classes of errors for the different filesystems, Chinner said. Some are considered transient errors by some filesystems, which leads to different behavior between them. But, Monakhov noted, "user space goes crazy" when it gets an ENOSPC.

Monakhov went on to suggest (as he had in a lightning talk the previous day) that there be a standard way for the filesystem to report errors to the logs. James Bottomley said the obvious way to do that was with a uevent from the virtual filesystem (VFS). There is general agreement that some kind of event framework is needed, he said, but "we don't know who is going to do it, or when it will be done".

As with many of the LSFMM sessions, few conclusions were drawn though some progress was clearly made.

[ Thanks to the Linux Foundation for travel support to attend LSFMM. ]

The block layer multi-queue work was the subject of a discussion led by Nic Bellinger at this year's Linux Storage, Filesystem, and Memory Management (LSFMM) Summit. One might have expected Jens Axboe and Christoph Hellwig to be part of any discussion of that sort, but Axboe was ill and Hellwig was boycotting the location, Bellinger said. That left it up to him, though Axboe did provide some notes for the block multi-queue work.

From those notes, which Bellinger also provided to me, he said that the initial multi-queue work was merged for the 3.13 kernel. It only supported the virtio_block driver and "mostly worked". There have been changes since that time, but overall it appears to be architecturally sound.

A basic conversion of the Micron mtip32xx SSD driver has been done. The existing driver is a single queue with shared tags. After the conversion, there are eight queues available. It runs at about 1.8 million I/O operations per second (IOPS), which is about the same as the unpatched driver. It works well on a two-socket system, but falls down on a four-socket machine. Part of the problem is a lack of tags.
The percpu-ida code is not going to cut it for tag assignment. An audience member said they had replaced percpu-ida recently, which eliminated the tags problem. Matthew Wilcox noted another tag problem: the Linux implementation makes them unique per logical unit number (LUN), while the specification says they need only be unique per target. In addition, James Bottomley said, the specification allows for 16-bit tags, rather than the 8-bit tags currently being used.

Bellinger then moved on to his and Hellwig's work on adding multiple queues to the SCSI subsystem. Since 2008, there have been reports of small-block random I/O performance issues in the SCSI core. Most of that is due to cache-line bouncing of the locks. It limits the performance of that type of I/O to 250K IOPS. Getting performance to 1 million IOPS using multiple SCSI hosts was taking up to 1/3 of the CPU on spinlock contention.

So Bellinger used the block multi-queue infrastructure to preallocate SCSI commands, sense buffers, protection information, and requests. His initial prototype had no error handling; any errors would oops the system. But he was able to get 1.7 million IOPS out of that prototype code.

Hellwig got error handling working and has been driving it to something that could be merged. There are plans for an initial merge, but Bottomley was concerned that Bellinger and Hellwig did not agree on whether the faster IOPS mode was the default case or not, with Bellinger on the side of it being an exception. Bellinger said that there had been no agreement yet on that, which would make merging difficult, Bottomley said.

Converting drivers should be fairly easy, Bellinger said, though Bottomley cautioned that there would need to be a lot of work done on lock elimination in the drivers. There is also a question of per-queue vs. per-host mailboxes, Bottomley said. There is work to do to determine which submission model will work best, he said.
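To make the "lack of tags" problem concrete: every in-flight command must hold a tag from a fixed-size space, so an 8-bit tag space caps a queue at 256 outstanding commands no matter how many CPUs are submitting. The toy pool below illustrates the exhaustion behavior only; the kernel's percpu-ida and its replacement are per-CPU, lock-aware, and far more elaborate, and all names here are invented:

```python
class TagPool:
    """Toy model of a fixed-size tag space: a request must grab a free
    tag before it can be issued, and allocation fails once the space
    (256 tags for 8-bit tags) is exhausted."""

    def __init__(self, depth=256):          # 8-bit tags => depth 256
        self._free = list(range(depth - 1, -1, -1))
        self._busy = set()

    def alloc(self):
        if not self._free:
            return None                     # caller must wait or requeue
        tag = self._free.pop()
        self._busy.add(tag)
        return tag

    def free(self, tag):
        self._busy.remove(tag)
        self._free.append(tag)
```

Widening tags to 16 bits, or making them unique per target rather than per LUN as Wilcox suggested, both amount to enlarging `depth` here, which is why they were raised as fixes for the four-socket falloff.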
[ Thanks to the Linux Foundation for travel support to attend LSFMM. ]

Supporting large-sector (>4K) drives was the topic of Ric Wheeler's fairly brief session at the 2014 LSFMM Summit. Supporting drives with 4K sectors only came about in the last few years, but 4K sectors are not the end of the story.

Wheeler asked the drive vendor representatives if they chose 4K as their sector size willingly, or simply because it is the largest size supported by operating systems. While no one directly answered the question, it was evident that drive vendors would like more flexibility in sector sizes. While they may not want sectors as large as 256M (jokingly suggested by Martin Petersen), storage arrays have been using 64K and 128K sectors internally for some time, Wheeler said.

Dave Chinner asked which layers would need to change to handle larger sector sizes, and suggested that filesystems and partition-handling code were likely suspects. He also said that the ability to do page-sized I/O is a fundamental assumption throughout the page cache code. Jan Kara mentioned reclaim as another area that makes that assumption.

Linux pages are typically 4K in size. One way to deal with that would be with chunks, as IRIX did with its chunk cache, Chinner said. It was a multi-page buffer cache that was created to handle exactly this case. But, he is not at all sure we want to go down that path. Petersen mentioned that there is another commercial Unix that has a similar scheme.

There could also be a layer that allows for sub-sector reads and writes. Though, it would have to do read-modify-write cycles to write smaller pieces, which would be slow, Chinner said.

Unlike the advent of 4K drives, the industry is not pushing for support of larger sector sizes immediately, Wheeler said. There is time to solve the problems correctly. The right approach is to handle sectors that are larger than the page size first, then to build on that, Ted Ts'o said.
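The read-modify-write cost Chinner mentioned for sub-sector writes can be made concrete with a small model. `rmw_plan` is an invented helper for illustration, not kernel code: a write whose start or end is not sector-aligned forces the partial head or tail sector to be read, modified, and written back.

```python
def rmw_plan(offset, length, sector_size):
    """For a write of `length` bytes at byte `offset` on a device with
    `sector_size`-byte sectors, return (first_sector, last_sector,
    head_needs_rmw, tail_needs_rmw). A partial head or tail sector must
    be read, modified, and rewritten, which is the slow path."""
    first = offset // sector_size
    last = (offset + length - 1) // sector_size
    head_rmw = offset % sector_size != 0            # write starts mid-sector
    tail_rmw = (offset + length) % sector_size != 0 # write ends mid-sector
    return first, last, head_rmw, tail_rmw
```

With 4K sectors, a page-sized, page-aligned write never needs read-modify-write; move to 64K sectors and the same 4K write suddenly needs both a head and tail merge into one 64K sector, which is why larger-than-page sectors strain the page cache's page-sized I/O assumption.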
[ Thanks to the Linux Foundation for travel support to attend LSFMM. ]

Kent Overstreet spoke about his rewrite of the direct I/O (DIO) code in a session at this year's Linux Storage, Filesystem, and Memory Management (LSFMM) Summit. Direct I/O accesses the underlying device directly and avoids data being stored in the page cache.

Overstreet began by noting that the immutable biovec work was now upstream. That allows for more easily splitting a struct bio. The only remaining prerequisite for the DIO rewrite is a generic make_request() that can take arbitrarily sized bios. Once that's there, drivers will need to do the splitting.

So, the kernel can implement a DIO operation by allocating a bio and putting a "bunch of pages into it". It reduces the code size and complexity significantly. A lot of the work currently done to manipulate arrays of pages just goes away. The end goal should be that DIO should just be able to hand a bio to a filesystem and let it do the lookups, he said. In addition, hopefully the lower layers of the buffered I/O code can use the generic make_request() changes as well. Splitting a bio is now no less efficient than allocating two to begin with.

[ Thanks to the Linux Foundation for travel support to attend LSFMM. ]

Ostensibly, the Linux Storage, Filesystem, and Memory Management (LSFMM) Summit is broken up into three tracks, but for the most part there is enough overlap between the Storage and Filesystem parts that joint sessions between them are the norm. The only session where that wasn't the case was a discussion led by Chuck Lever and Jim Lieb that spanned two slots. It covered user-space file servers and, in particular, FedFS and user-space NFS. As I was otherwise occupied in the Storage-only track, Elena Zannoni sat in on the discussion and provided some detailed notes on what went on.
Lever kicked things off by describing FedFS (short for "Federated Filesystem") as a way for administrators to create a single root namespace from disparate portions of multiple remote filesystems from multiple file servers. For that to work, there are filesystem entities known as "referrals" that contain the necessary information to track down the real file or directory entry on another filesystem. By building up referrals into the hierarchy the administrator wants to present, a coherent "filesystem" made up of files from all over can be created.

Referrals are physically stored in objects known as "junctions". Samba uses symbolic links to store the metadata needed to access the referred-to data, while FedFS uses the extended attributes (xattrs) on an empty directory for that purpose. At last year's meeting, it was agreed to converge on a single format for junctions. Symbolic links would be used, though Samba would still use the linked-to name, while FedFS would use xattrs attached to the link. Since then, that decision was vetoed by the NFS developers, Lever said. So FedFS will stay with the empty directory to represent junctions.

These empty directories are similar to the Windows concept of "reparse points", according to Steve French. It is an empty directory with a special attribute to distinguish it from a normal empty directory. It would be nice to be able to add a new type of inode (or mode bit) to the virtual filesystem (VFS) to support that, French said. But Ted Ts'o noted that doing so would require all filesystems to change to support it.

Lever also explained that a single administrative interface that could manage junctions for both Samba and NFS is needed. In answer to a question from Jeff Layton, Lever said that FedFS was looking for help from the kernel on reparse points (or junctions), as well as performance help to reduce lookups for discovering these referrals and where they go.
Lieb noted that the latter would also help the Ganesha user-space NFS file server. Layton went on to explore why the symlink scheme could not be used, but Trond Myklebust was clear that using symbolic links was too ugly and hacky. It also limited the referral information to a single page, so supporting multiple protocols for a single referral was difficult. It is a "nasty hack" that Samba uses, but it is not sensible to spread it further, Myklebust said. It was agreed that more discussion was needed before any kind of proposal could be made.

The session switched over to Lieb at that point. He had a number of topics where user-space file servers (like Ganesha) needed kernel help. The first is file-private locks, which may have been solved with Layton's work on that feature. Lieb hopes to see that get merged in the 3.15 merge window. The next step will be to get GNU C library (glibc) patches to support the new style of locks merged.

Another problem for Ganesha is with filtering inotify events. It would like to be able to get events for anything that some other filesystem does to the exported directories, while not getting notified for events generated by its own activities. The inotify events are used to invalidate caches in Ganesha and it is getting swamped by events from its own actions, Lieb said. Patches have been posted, but he would like to see the feature get added.

Dealing with user credentials is another area where Ganesha could use some kernel help. Right now, for many operations, Ganesha must do seven system calls to perform what is (conceptually) one operation. It must call setuid(), setfsgid(), setgroups(), then the operation followed by an unwinding of the three credentials calls. He would like a simpler way, with fewer system calls. After a long discussion, it became clear that what Lieb was looking for was a way to cache a particular set of credentials in the kernel that could be recalled whenever Ganesha needed to do an operation as that user.
Currently, the constant setup and teardown of the credentials information is time consuming. Ts'o thought he had a solution for that problem and he suggested that he and Lieb finish the discussion outside of the meeting.

The never-ending battle for some sort of enhanced version of readdir() was next up. There is a need to get much more information than what readdir() provides. A number of proposals have been made over the years, including David Howells's xstat() system call. Those proposals have all languished for lack of someone driving the effort. The older patches need to be resurrected, refreshed, and reposted, but it will require finding someone to push them before they will be merged.

The last problem discussed was support for access-control lists (ACLs). There are two kinds of ACLs being used today: POSIX and rich (NFSv4) ACLs. They have different semantics and the question is how the kernel can support both. Currently, the kernel has support for POSIX ACLs, but Samba, NFSv4, and others use the rich ACLs. One possible solution would be to just add rich ACLs to Linux, essentially sitting parallel to the existing POSIX ACLs. But Al Viro believes it is too complicated to have two similar features with slightly different semantics both in the kernel. There is also some thought that perhaps POSIX ACLs could be emulated by rich ACLs, but it is unclear if that is true. In the end, the kernel needs to do the ACL checking, since races and other problems are present if user space tries to do that job. The slot ended before much in the way of conclusions could be drawn.

[ Thanks to Elena Zannoni for her extensive notes. Thanks, also, to the Linux Foundation for travel support to attend LSFMM. ]
https://lwn.net/Articles/592321/
We develop a website for a microcontroller

With the advent of various kinds of smart sockets, light bulbs, and other similar devices into our lives, the need for websites on microcontrollers has become undeniable. And thanks to the lwIP project (and its younger brother uIP), such functionality will not surprise anyone. But since lwIP is aimed at minimizing resources, in terms of design, functionality, usability, and ease of development such sites lag far behind those we are used to, even for embedded systems: compare, for example, with the administration site on the cheapest routers.

In this article we will try to develop a site on Linux for some smart device and then run it on a microcontroller. To run on a microcontroller, we will use Embox. This RTOS includes a CGI-enabled HTTP server. As the HTTP server on Linux we will use the server built into Python:

python3 -m http.server -d <site folder>

Static site

Let's start with a simple static site consisting of one or more pages. Everything is simple here: create a folder and an index.html in it. This file will be loaded by default if only the site address is specified in the browser.

$ ls website/
em_big.png  index.html

The site will also contain the Embox logo, the "em_big.png" file, which we will embed in the HTML. Let's start the HTTP server:

python3 -m http.server -d website/

and go to localhost:8000 in the browser.

Now let's add our static site to the Embox file system. This can be done by copying our folder to the rootfs template folder (the current template is in the conf/rootfs folder), or by creating a module that specifies the files for rootfs.

$ ls website/
em_big.png  index.html  Mybuild

Content of Mybuild:

package embox.demo

module website {
	@InitFS
	source "index.html",
		"em_big.png",
}

For the sake of simplicity, we put our site directly in the root folder (the @InitFS annotation with no parameters).
We also need to include our site in the mods.conf configuration file and add the httpd server itself there:

include embox.cmd.net.httpd
include embox.demo.website

Also, let's start the server with our website during system startup. To do this, add a line to the conf/system_start.inc file:

"service httpd /",

Naturally, all these manipulations need to be done in the config for the board. After that, we build and run, and go in the browser to the address of the board. In my case it is 192.168.2.128, and we see the same picture as for the local site.

We are not web development specialists, but we have heard that various frameworks are used to create beautiful websites. For example, AngularJS is often used, so we will give further examples using it. At the same time, we will not go into details, and we apologize in advance if somewhere we have taken liberties with web design.

Whatever static content we put in the folder with the site (for example, js or css files), we can use it without any additional effort. Let's add app.js (an Angular site) to our site and a couple of tabs in it. We will put the pages for these tabs in the partials/ folder, images in images/, and css files in css/.

$ ls website/
app.js  css  images  index.html  Mybuild  partials

Let's launch our website. Agree, the site looks much more familiar and pleasant, and all of this is done on the browser side. As we said, the entire content is still static, and we can develop it on the host like a regular website, using all the usual web development tools.

For example, opening the console in a browser, we found an error message that favicon.ico was missing. It turns out this is the icon that is displayed in the browser tab. You can, of course, put a file with this name there, but sometimes you don't want to spend space on this; remember that we also want to run on microcontrollers where there is little memory.
A search on the Internet immediately showed that you can do without the file; you just need to add a line to the head section of the HTML. Although the error did not interfere with anything, it is always pleasant to make the site a little better. And most importantly, we made sure that the usual developer tools are quite applicable with the proposed approach.

Dynamic content. CGI

Let's move on to dynamic content. The Common Gateway Interface (CGI) is an interface for interaction between a web server and command-line utilities, which allows creating dynamic content. In other words, CGI allows you to use the output of utilities to generate dynamic content. Let's take a look at a simple CGI script:

#!/bin/bash

echo -ne "HTTP/1.1 200 OK\r\n"
echo -ne "Content-Type: application/json\r\n"
echo -ne "Connection: close\r\n"
echo -ne "\r\n"

tm=`LC_ALL=C date +%c`
echo -ne "\"$tm\"\n\n"

First, the HTTP header is printed to standard output, and then the data of the page itself. The output can be redirected anywhere. You can simply run this script from the console and see the following:

$ ./cgi-bin/gettime
HTTP/1.1 200 OK
Content-Type: application/json
Connection: close

"Fri Feb 5 20:58:19 2021"

And if instead of the standard output the data goes to a socket, then the browser will receive it.

CGI is often implemented with scripts (people even say "CGI scripts"), but this is not necessary; it is just that such things are faster and more convenient in scripting languages. A utility providing CGI can be implemented in any language. And since we focus on microcontrollers and therefore try to save resources, let's do the same in C.
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <sys/time.h>

int main(int argc, char *argv[])
{
	char buf[128];
	char *pbuf;
	struct timeval tv;
	time_t time;

	printf(
		"HTTP/1.1 200 OK\r\n"
		"Content-Type: application/json\r\n"
		"Connection: close\r\n"
		"\r\n"
	);

	pbuf = buf;
	pbuf += sprintf(pbuf, "\"");

	gettimeofday(&tv, NULL);
	time = tv.tv_sec;
	ctime_r(&time, pbuf);
	strcat(pbuf, "\"\n\n");

	printf("%s", buf);

	return 0;
}

If we compile this code and run it, we will see exactly the same output as in the case of the script.

In our app.js, add a handler that calls the CGI script for one of our tabs:

app.controller("SystemCtrl", ['$scope', '$http', function($scope, $http) {
    $scope.time = null;

    $scope.update = function() {
        $http.get('cgi-bin/gettime').then(function (r) {
            $scope.time = r.data;
        });
    };
    $scope.update();
}]);

A small nuance for running on Linux using the built-in Python server: we need to add the --cgi argument to our launch line to support CGI:

python3 -m http.server --cgi -d .

Automatic updating of dynamic content

Now let's take a look at another very important property of a dynamic site, automatic content updates. There are several mechanisms for implementing it:

- Server Side Includes (SSI)
- Server-sent Events (SSE)
- WebSockets
- etc.

Server Side Includes (SSI)

Server Side Includes (SSI) is an uncomplicated language for dynamically creating web pages. Usually files using SSI have the .shtml extension. SSI itself even has control directives, if/else, and so on. But in most of the microcontroller examples we found, it is used as follows: a directive is inserted into the .shtml page that periodically reloads the entire page. This could be, for example, <meta http- Or <BODY onLoad="window.setTimeout("location. And in one way or another, content is generated, for example, by setting a special handler. The advantage of this method is its simplicity and minimal resource requirements. But on the other hand, here's an example of how it looks.
The page refresh (see the tab) is very noticeable, and reloading the entire page looks like an overly redundant action. (The standard example from FreeRTOS is shown.)

Server-sent Events

Server-sent Events (SSE) is a mechanism that establishes a half-duplex (one-way) connection between a client and a server: the client opens the connection, and the server uses it to transfer data to the client. Unlike classic CGI scripts, whose purpose is to form and send a response to the client and then terminate, SSE offers a "continuous" mode. That is, the server can send as much data as necessary until either it finishes itself or the client closes the connection.

There are some minor differences from regular CGI scripts. First, the HTTP header will be slightly different:

"Content-Type: text/event-stream\r\n"
"Cache-Control: no-cache\r\n"
"Connection: keep-alive\r\n"

Connection, as you can see, is not close but keep-alive, that is, a persistent connection. To prevent the browser from caching the data, you need to specify Cache-Control: no-cache. Finally, you need to specify the special content type text/event-stream.
This content type implies a special format for SSE:

: this is a test stream

data: some text

data: another message
data: with two lines

In our case, the data needs to be packed into the following line:

data: {"time" : "<real date>"}

Our CGI script will look like:

#!/bin/bash

echo -ne "HTTP/1.1 200 OK\r\n"
echo -ne "Content-Type: text/event-stream\r\n"
echo -ne "Cache-Control: no-cache\r\n"
echo -ne "Connection: keep-alive\r\n"
echo -ne "\r\n"

while true; do
	tm=`LC_ALL=C date +%c`
	echo -ne "data: {\"time\" : \"$tm\"}\n\n" 2>/dev/null || exit 0
	sleep 1
done

Output if you run the script:

$ ./cgi-bin/gettime
HTTP/1.1 200 OK
Content-Type: text/event-stream
Cache-Control: no-cache
Connection: keep-alive

data: {"time" : "Fri Feb 5 21:48:11 2021"}

data: {"time" : "Fri Feb 5 21:48:12 2021"}

data: {"time" : "Fri Feb 5 21:48:13 2021"}

and so on, once a second. The same in C:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <time.h>
#include <sys/time.h>

int main(int argc, char *argv[])
{
	char buf[128];
	char *pbuf;
	struct timeval tv;
	time_t time;

	printf(
		"HTTP/1.1 200 OK\r\n"
		"Content-Type: text/event-stream\r\n"
		"Cache-Control: no-cache\r\n"
		"Connection: keep-alive\r\n"
		"\r\n"
	);

	while (1) {
		pbuf = buf;
		pbuf += sprintf(pbuf, "data: {\"time\" : \"");

		gettimeofday(&tv, NULL);
		time = tv.tv_sec;
		ctime_r(&time, pbuf);
		strcat(pbuf, "\"}\n\n");

		if (0 > printf("%s", buf)) {
			break;
		}

		sleep(1);
	}

	return 0;
}

And finally, we also need to tell Angular that we have SSE, that is, modify the code of our controller:

app.controller("SystemCtrl", ['$scope', '$http', function($scope, $http) {
    $scope.time = null;

    var eventCallbackTime = function (msg) {
        $scope.$apply(function () {
            $scope.time = JSON.parse(msg.data).time
        });
    }
    var source_time = new EventSource('/cgi-bin/gettime');
    source_time.addEventListener('message', eventCallbackTime);

    $scope.$on('$destroy', function () {
        source_time.close();
    });

    $scope.update = function() {
    };
    $scope.update();
}]);

We launch the site and see the following: it is noticeable that, unlike with SSI, the page is not reloaded,
and the data is refreshed smoothly and is pleasing to the eye.

Demo

Of course, the examples given are not realistic, because they are very simple; their goal is to show the difference between the approaches used on microcontrollers and in other systems. So we made a small demo with real tasks: controlling LEDs, receiving real-time data from an angular-velocity sensor (gyroscope), and a tab with system information.

The site was developed on the host. It was only necessary to make small stubs to emulate the LEDs and the data from the sensor. The sensor data are just random values obtained through the standard RANDOM variable:

#!/bin/bash

echo -ne "HTTP/1.1 200 OK\r\n"
echo -ne "Content-Type: text/event-stream\r\n"
echo -ne "Cache-Control: no-cache\r\n"
echo -ne "Connection: keep-alive\r\n"
echo -ne "\r\n"

while true; do
	x=$((1 + $RANDOM % 15000))
	y=$((1 + $RANDOM % 15000))
	z=$((1 + $RANDOM % 15000))
	echo -ne "data: {\"rate\" : \"x:$x y:$y z:$z\"}\n\n" 2>/dev/null || exit 0
	sleep 1
done

We simply store the state of the LEDs in a file:

#!/bin/python3

import cgi
import sys

print("HTTP/1.1 200 OK")
print("Content-Type: text/plain")
print("Connection: close")
print()

form = cgi.FieldStorage()
cmd = form['cmd'].value

if cmd == 'serialize_states':
    with open('cgi-bin/leds.txt', 'r') as f:
        print('[' + f.read() + ']')
elif cmd == 'clr' or cmd == 'set':
    led_nr = int(form['led'].value)
    with open('cgi-bin/leds.txt', 'r+') as f:
        leds = f.read().split(',')
        leds[led_nr] = str(1 if cmd == 'set' else 0)
        f.seek(0)
        f.write(','.join(leds))

The same is trivially implemented in the C variant. If you wish, you can see the code in the repository (the project/website folder). On the microcontroller, of course, implementations are used that interact with real peripherals. But since these are just commands and drivers, they were debugged separately, so porting the site itself to the microcontroller took no time.
The screenshot of the demo running on the host looks like this. In a short video, you can see it working on a real microcontroller. Note that there is not only communication via HTTP, but also, for example, setting the date using NTP from the command line in Embox, and of course handling of the peripherals. Everything given in the article can be reproduced independently using the instructions on our wiki.

Conclusion

In this article, we showed that it is possible to develop beautiful interactive sites and run them on microcontrollers. Moreover, it can be done easily and quickly, using all the development tools available on the host, and then run on microcontrollers. Naturally, the development of the site can be done by a professional web designer, while the embedded developer implements the logic of the device, which is very convenient and reduces time to market.

Naturally, you have to pay for this: SSE requires slightly more resources than SSI. But with the help of Embox, we easily fit into an STM32F4 without optimization and used only 128 KB of RAM; we simply did not try anything smaller. So the overhead is not that big, while the convenience of development and the quality of the site itself are much higher. And at the same time, of course, do not forget that modern microcontrollers have grown noticeably and continue to do so; after all, devices are required to be more and more intelligent.
https://prog.world/we-develop-a-website-for-a-microcontroller/
On Thu, 5 Feb 2004, Frédéric L. W. Meunier wrote:

> BTW, to not write another e-mail.
>
> Thomas or anyone. How do I set --enable-vertrace in
> makefile.bcb ? I tried adding -DLY_TRACELINE __LINE__ but it
> complained.

It would have to be expressed differently. The configure script is actually making a text substitution in the lynx_cfg.h file - not adding a -D option to the makefile. The usual approach to that is to add a chunk to userdefs.h, e.g.,

#if defined(USE_VERTRACE) && !defined(LY_TRACELINE)
#define LY_TRACELINE __LINE__
#endif

Then your makefile could define USE_VERTRACE.

--
Thomas E. Dickey
http://lists.gnu.org/archive/html/lynx-dev/2004-02/msg00104.html
Lesson 21: Change Andee Bluetooth Device Name
Written by Jonathan Sim

You can find this lesson and more in the Arduino IDE (File -> Examples -> Andee). If you are unable to find them, you will need to install the Andee Library for the Arduino IDE. You'll only need to run this code once to change the Annikken Andee's Bluetooth device name. Here's how it's done:

#include <Andee.h> // Don't forget the necessary libraries
#include <SPI.h>

// This is where you change your device name:
char newBluetoothName[] = "Hello New Device Name"; // New device name
char cmdReply[64]; // String buffer
char commandString[100]; // String to store the new device name and device command combined

void setup()
{
  Andee.begin();
  Andee.clear();
  // We need to combine the new device name with the device command
  sprintf(commandString, "SET BT NAME %s", newBluetoothName);
  // Send command to change device name
  Andee.sendCommand(commandString, cmdReply);
}

void loop()
{
  // Disconnect user - there's nothing to do here anyway
  if(Andee.isConnected()) Andee.disconnect();
}
http://resources.annikken.com/index.php?title=Lesson_21:_Change_Andee_Bluetooth_Device_Name
AWS multichain network: couldn't connect to the seed node error

I am new to multichain. I am using 2 instances on AWS EC2. I have created a blockchain using one instance:

>multichaind secondChain -daemon
MultiChain Core Daemon build 1.0 alpha 27 protocol 10007
MultiChain server starting
Looking for genesis block...
Genesis block found
Other nodes can connect to this node using:
multichaind secondChain@XXX.XX.X.XX:XXXX
Node started

But I am not able to connect to the blockchain from the 2nd instance. I am getting the following error:

>multichaind secondChain@XXX.XX.X.XX:XXXX
Retrieving blockchain parameters from the seed node XXX.XX.X.XX:XXXX ...
Error: Couldn't connect to the seed node XXX.XX.X.XX on port XXXX - please check multichaind is running at that address and that your firewall settings allow incoming connections.

I think this has something to do with the network, because my commands seem to be right. How to solve this problem?
https://www.edureka.co/community/9327/aws-multichain-network-couldn-connect-to-the-seed-node-error?show=9328
I never really was much for pointers and strings... so I don't really see why a declaration like

char *s = "This is a string";

works, when something like

int *x = 5;

does not. It has something to do with a pointer to a character being synonymous with an array of characters or something, right? But a pointer is a variable that holds a memory address... how can it be perfectly legal to point to just an arbitrary string? Consider this code:

#include <iostream>
using namespace std;

char* Trunc(char* s, int max){
    // 'max' is 5: so when i==max, make the current character NULL and return the string
    for(int i=0; i<=max; i++){
        if(i==max){
            s[i]='\0';
            return s;
        }
    }
}

int main(void){
    char *m = Trunc("This is a string...",5);
    cout << m;
    cin.get();
    return 0;
}

That doesn't really work as planned... in fact, it crashes the program. Why is this?
http://cboard.cprogramming.com/cplusplus-programming/64818-pointers-characters-char-%2As.html
I'm still learning C, and I understand that to get rid of most implicit-declaration warnings you add the function prototype at the beginning. But I'm confused about what to do when you have outside functions being used in your code. This is my code using the outside functions:

#include <stdio.h>
#include <string.h>

int main(void)
{
    int arrayCapacity = 10;
    int maxCmdLength = 20;
    int A[arrayCapacity];
    int count = 0; /* how many ints stored in array A */
    char command[maxCmdLength + 1];
    int n;

    while (scanf("%s", command) != EOF) {
        if (strcmp(command, "insert") == 0) {
            scanf("%d", &n);
            insert(n, A, arrayCapacity, &count);
            printArray(A, arrayCapacity, count);
        } else if (strcmp(command, "delete") == 0) {
            scanf("%d", &n);
            delete(n, A, &count);
            printArray(A, arrayCapacity, count);
        } else {
            scanf("%d", &n);
            printArray(A, arrayCapacity, count);
        }
    }
    return 0;
}

The functions printArray, insert, and delete are all in the form of printArray.o, insert.o, delete.o. This is how I compiled my program:

gcc -Wall insert.o delete.o printArray.o q1.c

and I get these warnings:

q1.c: In function 'main':
q1.c:20: warning: implicit declaration of function `insert'
q1.c:21: warning: implicit declaration of function `printArray'
q1.c:30: warning: implicit declaration of function `delete'

I've tried including this in headers but I get errors saying file or directory not found. Any help appreciated.

Put them in a header file foo.h like so:

extern void printArray(int *A, int capacity, int count);
...

then include that file in your source:

#include "foo.h"

You need to include the correct headers to get rid of such warnings. If you get a "file not found" error, try to include them as

#include "myheader.h"

and put your header files in the same directory as your source code. Generally speaking, #include "file" is for programmer-defined headers while #include <file> is for standard headers.
You should be able to just put the function prototypes at the top of the file, like you do for other functions in the same file. The linker should take care of the rest.

Where did you get those .o files from? If you wrote them yourself, then you should create the corresponding .h files. If you got these files from somewhere else, then you should look for the headers in the same place.

If all called functions are written before the main() function, the compiler will know their name, return type, and parameter signature, and can match all three of these properties with each following function invocation. Some programmers like to write a function's signature first and do the implementation at a later time. The only time a function declaration is essential is when using co-routines: functionA invokes functionB, which in turn invokes functionA. Done as follows:

type a(...signatureOfA...);   /* compiler now knows about a() */

type b(...signatureOfB...)
{
    ... /* implementation of b */
    a(...arguments for a...); /* compiler knows about a() from the declaration above */
    ...
}

type a(...signatureOfA...)
{
    ... /* implementation of a */
    b(...arguments for b...); /* compiler knows about b() from its definition above */
    ...
}

int main()
{
    a(...arguments for a...); /* compiler knows */
    return(code);
}
http://www.dlxedu.com/askdetail/3/1059d7e31c6a3b909a0f11f4ab8f13b8.html
db_command Creates an OLE DB command. Parameters - db_command - A command string containing the text of an OLE DB command. There are two ways to specify the command. The first way, appropriate for simpler commands, is to specify the command string as the value for the db_command argument: The second way, appropriate for lengthy commands, or for commands that use binding parameters, is to specify the command string in braces after the arguments. See the bindings parameter for this usage. - name (optional) - The name of the handle you use to work with the rowset. If you specify name, db_command generates a class with the specified name, which can be used to traverse the rowset or to execute multiple action queries. If you do not specify name, it will not be possible to return more than one row of results to the user. - source_name (optional) - The CSession variable or instance of a class that has the db_source attribute applied to it on which the command executes. See db_source. - hresult (optional) - Identifies the variable that will receive the HRESULT of this database command. If the variable does not exist, it will be automatically injected by the attribute. - bindings (optional) - Allows you to separate the binding parameters from the OLE DB command. If you specify a value for bindings, db_command will parse the associated value and will not parse the [bindtype] parameter. This usage allows you to use OLE DB provider syntax. To disable parsing, without binding parameters, specify Bindings="". If you do not specify a value for bindings, db_command will parse the binding parameter block, looking for '(', followed by [bindtype] in brackets, followed by one or more previously declared C++ member variables, followed by ')'. All text between the parentheses will be stripped from the resulting command, and these parameters will be used to construct column and parameter bindings for this command. 
- bulk_fetch (optional) - An integer value that specifies the number of rows to fetch. The default value is 1, which specifies single row fetching (the rowset will be of type CRowset). A value greater than 1 specifies bulk row fetching. Bulk row fetching refers to the ability of bulk rowsets to fetch multiple row handles (the rowset will be of type CBulkRowset and will call SetRows with the specified number of rows). If bulk_fetch is less than one, SetRows will return zero. - command string - The command string is enclosed in braces '{ }' and the syntax is as follows: A binding parameter block is defined as follows: ([bindtype] szVar1 [, szVar2 [, nVar3 [, ...]]] ) where: ( marks the start of the data binding block. [bindtype] is one of the following case-insensitive strings: - [db_column] binds each of the member variables to a column in a rowset. - [bindto] (same as [db_column]). - [in] binds member variables as input parameters. - [out] binds member variables as output parameters. - [in,out] binds member variables as input/output parameters. SzVarX resolves to a member variable within the current scope. ) marks the end of the data binding block. If the command string contains one or more specifiers such as [in], [out], or [in/out], db_command builds a parameter map. If the command string contains one or more parameters such as [db_column] or [bindto], db_command generates a rowset and an accessor map to service these bound variables. See db_accessor for more information. Note [bindtype] syntax and the bindings parameter are not valid when using db_command at the class level. Here are some examples of binding parameter blocks. 
The following example binds the m_au_fname and m_au_lname data members to the au_fname and au_lname columns, respectively, of the authors table in the pubs database: This example does essentially the same binding as the previous code, but with a different syntax: Attribute Context For more information about the attribute contexts, see Attribute Contexts. Remarks You can use multiple commands with the same data source; see the last two examples in the Examples section. db_command can be used to execute commands that do not return a result set. Examples Example 1 is code that can be copied into a Visual C++ .NET project, built, and run. Examples 2 and 3 are partial code examples that demonstrate usage and are not intended to be copied as is into an application. Example 1 To create an application using the code in Example 1, create a Win32 Project; on the Application Settings page, select Console application and check Add support for ATL. You will need to add the following preprocessor directives to support attributes: #define _ATL_ATTRIBUTES in stdafx.h, before #include <atlbase.h>. #include <atldbcli.h> in stdafx.h, after #include <atlbase.h>. Add a header file called Authors.h to the project, and copy and paste the following code, which declares the command class CAuthors. Note that this code requires you to provide your own connection string that connects to the pubs database. A convenient way to obtain a connection string is to right-click Data Connections in Server Explorer and select Add Connection from the drop-down menu; fill out the Data Link Properties dialog and click OK. Left-click the new data connection when it appears (as a subnode of Data Connections); in the Properties window, copy the connection string from the ConnectString field (you can double-click to highlight the entire string).
/* authors.h : Declaration of the CAuthors class */

#pragma once

[
  db_source(L"your connection string"), // Provide a connection to the pubs database
  db_command(L" \
    SELECT au_lname, au_fname \
    FROM dbo.authors \
    WHERE state = 'CA'")
]
class CAuthors
{
public:
   // In order to fix several issues with some providers, the code below may bind
   // columns in a different order than reported by the provider
   [ db_column(L"au_lname", status=m_dwau_lnameStatus, length=m_dwau_lnameLength) ]
   TCHAR m_au_lname[41];

   [ db_column(L"au_fname", status=m_dwau_fnameStatus, length=m_dwau_fnameLength) ]
   TCHAR m_au_fname[21];

   [ db_param(7, DBPARAMIO_INPUT) ]
   TCHAR m_state[3];

   DBSTATUS m_dwau_lnameStatus;
   DBSTATUS m_dwau_fnameStatus;
   DBLENGTH m_dwau_lnameLength;
   DBLENGTH m_dwau_fnameLength;

   void GetRowsetProperties(CDBPropSet* pPropSet)
   {
      pPropSet->AddProperty(DBPROP_CANFETCHBACKWARDS, true, DBPROPOPTIONS_OPTIONAL);
      pPropSet->AddProperty(DBPROP_CANSCROLLBACKWARDS, true, DBPROPOPTIONS_OPTIONAL);
   }
};

Next, copy and paste the following code into the project's .cpp file:

#include "stdafx.h"
#include "Authors.h"

int _tmain(int argc, _TCHAR* argv[])
{
   HRESULT hr = CoInitialize(NULL);

   // Instantiate rowset
   CAuthors rs;

   // Open rowset and move to first row
   _tcscpy();

   return 0;
}

Example 2 uses db_source on a data source class CMySource, and db_command on command classes CCommand1 and CCommand2:

// Example 4: class usage for both db_source and db_command
[db_source(...)]
class CMySource {...};

[db_command(command = "SELECT * FROM Products")]
class CCommand1 {...};

[db_command(command = "SELECT FNAME, LNAME FROM Customers")]
class CCommand2 {...};

...

// Usage:
CMySource s;
HRESULT hr = s.OpenDataSource();
if (SUCCEEDED(hr))
{
   CCommand1 c1;
   hr = c1.Open(s);
   ...
   CCommand2 c2;
   hr = c2.Open(s);
   ...
}
s.CloseDataSource();

Example 3 uses db_source on a data source class CMySource, and db_command inline to create two separate commands:

// Example 5: class usage for db_source and inline usage for db_command
[db_source(...)]
class CMySource {...};

...

// Usage:
HRESULT SomeFunc()
{
   CMySource s;
   HRESULT hr = s.OpenDataSource();
   if (SUCCEEDED(hr))
   {
      ...
      [ db_command(command = "SELECT * FROM Products", ..., source_name = "s", ...) ];
      [ db_command(command = "SELECT FNAME, LNAME FROM Customers", ..., source_name = "s", ...) ];
      ...
   }
   s.CloseDataSource();
   ...
}

For other examples, see the MantaWeb Sample and the OnlineAddressBook Sample.

See Also

OLE DB Consumer Attributes | Stand-Alone Attributes | Attributes Samples
Opened 10 years ago
Closed 10 years ago

#3239 closed enhancement (invalid)

clear session method

Description

For some types of applications it's necessary/desirable to clear session variables at specific events (e.g. server start, user login). It would be convenient to have the following (untested) method available in SessionWrapper:

def clear(self):
    try:
        self._session_cache.clear()
    except AttributeError:
        pass
    self._session.clear()

I'm aware the Django policy is to not make it easy for users to delete data, but this is pretty explicit that you want to clear your session data.

Change History (1)

comment:1 Changed 10 years ago by

Upon further investigation of session data, I'm going to rescind my request. Session data is used by the system in ways the programmer may not anticipate. I'll probably hack a method to clear my user session data based on the underscore prefix. It would still be nice to have an approved way to clear user session data (assigned by the developer), leaving system session data (used by Django) untouched.

Note: See TracTickets for help on using tickets.
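The workaround hinted at in the comment - clearing only developer-assigned keys and leaving Django's underscore-prefixed internals alone - could be sketched like this. The helper is hypothetical (not part of Django), and a plain dict stands in for the real session object:

```python
def clear_user_session(session):
    """Remove developer-assigned keys from a session-like mapping,
    keeping keys that start with an underscore (the convention the
    comment above relies on for Django-internal data)."""
    # Snapshot the keys first, since we mutate the mapping while iterating.
    for key in [k for k in session if not k.startswith('_')]:
        del session[key]

# A plain dict stands in for the real session object here.
session = {'_auth_user_id': 5, 'cart': [1, 2], 'theme': 'dark'}
clear_user_session(session)
print(session)  # {'_auth_user_id': 5}
```

The underscore convention is fragile, of course: any developer key that happens to start with an underscore would survive the sweep.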
Arab-Israeli conflict 1945-1979
Crisis in the Middle East

Day one
• Roots of the conflict 1900-1945

Geography
• Modern day Israel and Palestine
• Located on the eastern side of the Mediterranean Sea
• Approximately 10,000 square miles

Origins of the Conflict
• Jews claim the land back to 2000 BCE (Canaan)
• For hundreds of years, the Israelites were invaded, exiled and conquered
• Jewish Diaspora: Romans disperse Jews and raze Jerusalem (70 CE)

Ottoman empire
• Ottoman Empire takes over in 1517: Palestine is now part of the Ottoman (Muslim) Empire

The break-up of the Ottoman empire
• At the turn of the 20th century most of the ME was still under Ottoman control, but the empire was imploding.
• Young Turk movement
• Arab consciousness
• Imperialism
• European powers wanted influence in the crumbling empire.

The Zionist Movement (1898)
• Zionism: the establishment of a Jewish state in Palestine, the ancient homeland of the Jews.
• The movement was founded by Theodore Herzl in the late 19th century.
• Because of Jewish persecution, the Zionist movement was gaining popularity among Jews.
• Jewish settlers began to move to Palestine.

World War I
• Ottoman Turkey joined the Central Powers - the UK and France began to plot the division of the Middle East.
• Britain promised Palestinian Arabs an Arab state in exchange for their help in defeating the Ottoman Empire.
• At the same time, Britain also issued the Balfour Declaration.
• Issued in 1917, it declared that there should be a Jewish national home in Palestine.
Free Palestine
• Desire to encourage Jewish businessmen in America to support Wilson's call for war loans.

The Mandate System:
• An authorization granted by the League of Nations to a member nation to govern the former German or Turkish colonies, such as the British mandate in Palestine. (San Remo Conference)
• The Balfour Declaration was included in the obligations for the governance of Palestine, thus binding Britain to Jewish interests.

The British Mandate in Palestine, 1922-1945
• Continued Jewish immigration, British support for the Zionist position, and rejection of Arab demands for independence were met with resentment and led to several bloody clashes which created bitterness on all sides.

Source analysis 1922-1945
Peel Report 1937: Chapter X - Conclusion

British White Paper, 1939:
1. The objective of His Majesty's Government is the establishment within 10 years of an independent Palestine State in such treaty relations with the United Kingdom as will provide satisfactorily for the commercial and strategic requirements of both countries in the future.
2. Jewish immigration during the next five years will be at a rate which, if economic absorptive capacity permits, will bring the Jewish population up to approximately one third of the total population of the country... of some 75,000 immigrants over the next five years. After the period of five years, no further Jewish immigration will be permitted unless the Arabs of Palestine are prepared to acquiesce in it.

The impact of WWII on the British Mandate in Palestine
• 6,000,000 Jews were killed as a direct result of the Holocaust.
• Hundreds of thousands more were left homeless after World War II.
• Many countries would not allow displaced Jews to live in their countries.
• The UK was looking for an honorable way out of the situation in Palestine.

Day 2: 1945-1948
• The last years of the British Mandate, UNSCOP and Partition.

Key Terms:
• Haganah: An underground Jewish group created in 1920; Haganah became a countrywide organization that involved young adults.
• Irgun: An extreme Jewish organization founded in 1931 after a split within Haganah. They were more militant and advocated armed insurrection against the British and Arabs.
• Lehi: Radical armed Zionist group dedicated to the creation of a Jewish state in Palestine. Lehi was responsible for the assassination of the UK's top official in Palestine.
• United Resistance: In 1945 these three underground groups joined together with the aim of creating an independent homeland as soon as possible.
• UNSCOP: United Nations Special Committee on Palestine.

Britain and the post-war ME
• Following WWII the UK had significant holdings in the ME but faced financial difficulties.
• In Palestine the UK had to figure out what to do with the mandate.

Key issues:
• Growing US interest in Palestine
• Cold War: Soviet interests in Palestine
• Actions of Arabs and Jews during the war = increased violence
• Pro-Jewish support following the Holocaust
• Displaced persons

Developments in Palestine 1945-46
• Arabs and Jews were unhappy to see the return of the British post WWII.
• Arabs suffered from a lack of political structure and leadership and were in a poor position to represent their own interests.
• Jewish Agency - Jews were in a better political position. The agency, led by David Ben-Gurion, continued to represent Jewish interests to the British.
• Zionist underground activity had begun to increase.

Diplomacy and the role of the United States
• A Committee of Enquiry was set up in November 1945 to resolve the Arab-Israeli situation.
• Final recommendation: partition was rejected as unworkable and not in the best interest of the population.
• Meanwhile, President Truman supported the Zionists and supported increased Jewish immigration into Palestine - this angered the British government.

King David Hotel
• The hotel was the headquarters of the British Mandate government and military command center.

King David Hotel bombing 7/22/46
• The causes:
• After WWII the British decided to enforce tough measures to regain their authority - they were frustrated with the actions of sabotage and violence carried out by the underground resistance groups.
• The British launched a campaign to search for weapons and imprisoned Zionists.
• The bombing:
• The King David Hotel bombing was an attack carried out by the militant Zionist group Irgun.
• Telephoned warnings were sent to the hotel's switchboard, the Palestine Post newspaper, and the French consulate.
• No evacuation was carried out.
• 91 people were killed and 46 were injured.
• Controversy has arisen over the timing and adequacy of these warnings and the reasons why the hotel was not evacuated.
• The effects:
• The Jewish Agency condemned the attack.
• Worsened relations between the British and Palestinian Jews.
• Britain desired to turn over the mandate to the UN.

Towards Partition
• UNSCOP was established in May of 1947.
• An 11-man committee toured Palestine.
• Palestinian Arabs refused to cooperate fully - they believed the committee was weighted against them.
• Jewish groups offered full cooperation and promoted their interests.
• An event that influenced their decision: the Exodus
• A ship that carried Jewish emigrants, which left France on July 11, 1947, with the intent of taking its passengers to Palestine.
• Most of the emigrants were Holocaust survivor refugees who had no legal immigration certificates to Palestine.
• Following wide media coverage, the British Navy seized the ship and deported all its passengers back to Europe.
• Realizing that they were not bound for Cyprus, the emigrants conducted a 24-hour hunger strike, refusing to cooperate with the British authorities.
• But the British government had no intention of backing down or relaxing its policy. The passengers were sent to Germany.
• During this time, media coverage of the human ordeal intensified and the British became pressed to find a solution.
• The matter came to the attention of UNSCOP and helped influence their final decision.

UNSCOP Report, August 1947
• End to the mandate
• Partition plan
• Co-operate in an economic union and share currency.
• Jerusalem would be governed under an international trusteeship.
• The Jewish state would be larger than the Arab state.

The UN vote for Partition, November 1947
• A 2/3 vote was needed.
• GA vote:
• 33 supported
• 13 against - ALL Islamic countries voted against the Partition.
• 10 abstained

Final plan approved by the UN
• 3 "cantons" each, connected at points

Day three: From partition to war 11/47-5/48
• Key Terms:
• Fatah - a radical Palestinian organization founded in the 1950's, including Yasser Arafat, to liberate Palestine.
• Arab League - Organization started in 1945 to promote Arab affairs and cooperation.

Partition
• The UN decision was met by outrage in the Arab world.
• The Arabs had no clear political strategy to pursue - they were suspicious of each other, and some Arab leaders had their own self-interest in mind.
• The Arab League proclaimed jihad against the Jews, which gave them a bad reputation in much of the world.
• The Jewish movement had superior leadership and organization. They also had experienced soldiers, many of whom had fought during WWII.
• Plan D
• Gain control of vital areas of the Hebrew state and defend its borders from attack.

Deir Yassin
• A month before the declaration of the state of Israel, a number of Arabs were killed by Jewish paramilitaries in the village of Deir Yassin near Jerusalem - 100-254 were killed.
• The event encouraged Arab states to unite and intervene in 1948 against the creation of the state of Israel.

Israel is Born!
• May 14, 1948 in Tel Aviv, the state of Israel was declared.
• President - Chaim Weizmann
• PM - David Ben-Gurion
• On the same day, Arab forces from neighboring countries invaded.

The Arab-Israeli War (1948)
• On May 15, 1948, Egypt, Iraq, Jordan, Lebanon, Saudi Arabia and Syria invaded the newly formed Israel - their combined population equaled 40 million (the Jewish state: 750,000).
• The Arab countries committed less than 30,000 men while the Jews had over 65,000 in the field.
• The Arabs were not prepared for conflict and often pursued their own political and territorial objectives.
• Israel was able to import heavy weaponry.

Armistice
• With support from the United States, Israel was able to not only defeat the Arabs, but expand its territory.
• Negotiations began in January 1949 on the Greek island of Rhodes, and an agreement was signed in February.

Israel after the 1948 War
• Israel occupied 20% more than she had been promised in the Partition plan.

Consequences of the war
• 1948 Exodus
• 750,000 Palestinian Arabs were expelled or fled.
• Most have still not been able to return and are scattered in neighboring countries.
• Military defeat split the Arab League.
• Jordan gained territory.
• Great Britain lost all influence in the region.
• Replaced by the US.

Day four:
• Demographic shifts: The Palestinian Diaspora, Jewish immigration and the economic development of the Israeli state

Key terms:
• Diaspora - dispersion, scattering or forced exile.
• Intifada - Arabic for "uprising". Name given to the period of Palestinian resistance to Israeli occupation from 1987.

The origins of the Palestinian Diaspora, 1947
• Palestinians claim that the Israelis followed a conscious policy of expulsion that started under the British Mandate.

The role of the UN in the refugee crisis
• The majority of Palestinians fled to neighboring countries.
• The UN passed a resolution calling for a return of Palestinians to their homes and compensation if they chose not to return.
• Israel would still have control of the land gained in the 1948 war.
• The plan was rejected by the Arab states.

UN role continued:
• The UN Relief and Works Agency (UNRWA) helped set up camps in neighboring countries.
• Irrigation projects, healthcare and schools were also established.
• Approximately 35% of Palestinian refugees are still under UN control - the remainder have become part of the population of other Arab countries.

Jewish immigration (Aliyah)
• Israel passed laws forbidding the return of Palestinian refugees to claim land and property - many new Israeli settlements were built in the West Bank.
• Law of Return (1950)
• Right of every Jew to settle in Israel
• Citizenship Law (1952)
• Immediate citizenship to immigrants
• Ashkenazim - Jews from France, Germany and Eastern Europe
• Sephardim - Jews from Spain and Portugal
• Oriental - Jews from Iran, Iraq and Morocco

Economic development
• Within 30 years Israel became an industrial economic power in the region.
• Initially, Israel had to import raw materials and relied on outside help via loans in order to advance transport, aid agriculture and build the basic infrastructure needed to sustain the new nation.

Day 5
• The Suez crisis of 1956

The Egyptian Revolution and the emergence of Nasser
• Egyptian army officers (the Free Officers Movement), veterans of the 1948 war with Israel, plotted to overthrow the monarch of Egypt because he was corrupt and incompetent.
• The Egyptian Revolution eventually resulted in Gamal Abdul Nasser becoming prime minister and president.
• Land redistribution program
• Aswan Dam project
• Control flooding of the Nile
• Loans were initially scheduled to come from the US and UK through the World Bank

Relations deteriorate
• Nasser started to look for more sophisticated weaponry.
• The Chinese and Russians were willing to sell arms.
• The Russians offered to lend money for the dam.
• Nasser aided the Algerians against France.
• Nasser supported the dismissal of Jordan's pro-British head of army.
• Egypt's diplomatic recognition of Communist China.
• In retaliation, the US, UK and France refused to loan money for the Aswan Dam.

Crisis to war
• The Arab-Israeli conflict becomes intertwined with the Cold War.
• Nasser nationalizes the Suez Canal.
• Cut off UK sea links.
• Tripartite talks - the US, UK and France announced that the Suez Canal was to be an international waterway whose board would report to the UN.
• Egypt rejected this.

Operation Musketeer
• Secret military preparations were started by the UK and France.
• The plan included an Israeli invasion of Egypt.
• The UK and France would intervene, occupy the canal zone and remove Nasser.

October-November 1956
• The war lasted one week.
• The war worsened Arab-Israeli relations.
• 1. Israel quickly captured most of Sinai and Gaza.
• 2. Anglo-French ultimatum to both sides to withdraw.
• 3. Egypt rejected it and appealed to the UN.
• British and French aircraft attacked Egyptian airfields.
• America ordered a ceasefire.
• Results handout

1956 Suez Crisis
• Israel withdrew fully within a year, and the original border was restored.

Day 6
• The development of Arabism and the emergence of the PLO
Better JS Logger for Debugging

As web developers we really like putting console.log all over the place when debugging our applications, although the Chrome dev tools come with an actual debugger that can be started by simply writing debugger in your code. This gets messy rather fast, especially if you simply log the objects without an accompanying message. What I like to do is prepend the message with the class and function name so I can easily filter for the message I'm looking for. It turns out you can actually automate that when using Chrome or Firefox, and use colors on top!

#Stacklogger

I created an npm package called Stacklogger for everyone to use by running npm install stacklogger --save. You just import it and call its own log function as you would with console.log. If you already have logging with console.log in place, just call its hookConsoleLog function and every console.log is redirected to stacklogger's custom log function. Check the npm readme if you want to use it, or see its source code on GitHub.

I included a small example that produces the above output in Chrome:

import log, { hookConsoleLog } from 'stacklogger'

class ExampleLog {
  constructor () {
    this.obj = {hello: 'world', anotherKey: [0, 1]}
    this.arr = [1, 3, 5, 7, 9]
  }

  hello () {
    log('Logging some text with log()', this.obj, this.arr)
  }
}

class ExampleConsoleLog {
  hello () {
    console.log('Called with console.log')
  }
}

let e1 = new ExampleLog()
let e2 = new ExampleConsoleLog()
e1.hello()
console.log('standard console.log without the hook')
hookConsoleLog()
console.log('console.log hooked now')
e2.hello()

#How does it work?

#Stack trace

Remember how when you throw an Error in JS, it prints the whole stack trace? What we do in the log function is to simply create a new Error object. Unfortunately, the stack trace is not a well-structured object, but just a string. The concrete stack trace string is even different for each JS engine, so I created two regexes
to parse them in Chrome's V8 engine and in Firefox. If you use Firefox, be aware that it uses the file name as the class name, as that's the only available information there.

#Hooking console.log()

Redirecting the calls from console.log to log is really easy in JS, as it allows you to just overwrite every property of objects. First we save a reference to the original console.log and then redefine console.log:

const consoleLog = console.log.bind(console)

function log () {
  consoleLog('Hooked.', ...arguments)
}

function hookConsoleLog () {
  console.log = log
}

(Firefox is sad when a logging function doesn't run in the console context, so we bind(console) the reference.)

#Unique colors for each class

You can use CSS properties in the log to change the foreground/background color by simply passing them as an additional argument to console.log. It would also be nice if each class had its own color, so when looking for a specific debug message of a class, you only have to quickly scan the console's output and pattern match on that color. For this, I implemented a simple function to hash a string to an integer and use that hash as an index into a color array.

#Source Code in 27 lines

The source code of the whole stacklogger is really short:

const consoleLog = console.log.bind(console)
const chromeRegex = new RegExp('^\\s*?at\\s*(\\S*?)\\s')
const firefoxRegex = new RegExp('^\\s*(\\S*?)@\\S*\\/(\\S*)\\.')

export default function log () {
  let stackframe = (new Error()).stack.split('\n')
  // try to match chrome first
  let match = chromeRegex.exec(stackframe[2])
  let callee = match ? match[1] : null
  if (!callee) {
    // try firefox
    match = firefoxRegex.exec(stackframe[1])
    callee = match ? `${match[2]}.${match[1]}` : ''
  }
  let className = callee.split('.')[0]
  // make a certain className always have the same background color by computing a hash on it
  let hash = getHashCode(className) % colors.length
  consoleLog(`%c${callee}`, `color: #000; background: ${colors[hash]}`, ...arguments)
}

const getHashCode = s => s.split('').reduce((prevHash, curChar) => prevHash * 31 + curChar.charCodeAt(0), 0)

export function hookConsoleLog () {
  console.log = log
}

// colors created by enumerating HSL color wheel from 0...360 in 30 degree steps, with luminosity 75 and 90
const colors = ['#ff8080', '#ffcccc', '#ffbf80', '#ffe6cc', '#ffff80', '#ffffcc', '#bfff80', '#e6ffcc', '#80ff80', '#ccffcc', '#80ffbf', '#ccffe6', '#80ffff', '#ccffff', '#80bfff', '#cce5ff', '#8080ff', '#ccccff', '#bf80ff', '#e5ccff', '#ff80ff', '#ffccff', '#ff80bf', '#ffcce6']
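The hash-to-color mapping can be exercised on its own. This standalone sketch re-implements getHashCode outside the module; the trimmed four-color palette and the colorFor helper are just for illustration, not part of the package:

```javascript
// Standalone sketch of the post's getHashCode: hash a class name
// to an integer, then use it as a stable index into a color array.
const getHashCode = s =>
  s.split('').reduce((prevHash, curChar) =>
    prevHash * 31 + curChar.charCodeAt(0), 0)

const colors = ['#ff8080', '#ffcccc', '#ffbf80', '#ffe6cc']

// The same class name always maps to the same color.
const colorFor = className => colors[getHashCode(className) % colors.length]

console.log(getHashCode('AB'))                                 // 65 * 31 + 66 = 2081
console.log(colorFor('ExampleLog') === colorFor('ExampleLog')) // true
```

This is the same base-31 polynomial hash Java uses for String.hashCode, minus the 32-bit overflow, so short names stay positive and the modulo never goes negative.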
Whenever I open the Scala project, the CPU consumption always goes up over 100% - not during loading or indexing, but while I am doing nothing. Only open Java projects are normal. Is it a Scala plugin issue?

I concur - for me it started happening after the latest plugin update. Code analysis seems stuck at between 20-30% and takes CPU over 100% (for the last 10 mins). I have put code analysis in power save mode to disable it.

EDIT - I removed and then re-installed the Scala plugin and that seems to have fixed it for me.

Hello, could you provide a snapshot ()?

Unfortunately I can't - I didn't take any stats before re-installing the plugin. It's been working fine since the reinstall. So for me, and unless many others start noticing issues, I'm happy to consider it something local to my env which got cleaned up with the plugin reinstall. Perhaps yang sun can offer a snapshot.

Here's my CPU snapshot.

Attachment(s): IU-129.713_yangsun_16.09.2013_11.50.18.zip

I debugged it a little. It appears that Slick has something to do with the big CPU usage. Whenever I create an object extending App, add a few objects extending Table in the same file, and then add some code in the App object, the CPU usage goes up over 100% (only when the xxx.scala file is opened in IntelliJ and in the front). If I separate the Table objects and the App object into two files, it seems OK.

I finally identified which line causes the problem. Whenever I have this line:

def autoInc = id.? ~ channel_type ~ auth <> (Channel, Channel.unapply _) returning id

CPU will run over 100% like crazy. If I comment it out, CPU runs normally. Any idea why? BTW, the line of code is in a Table definition using Slick.

Can you give me an example project which causes this performance problem?
Best regards,
Alexander Podkhalyuzin.

I tried to recreate the problem on my MacBook Pro. I just created an empty Scala project, added "slick_2.10-1.0.1-RC1.jar" to the project library, and then created Test.scala with the following code:

import scala.slick.driver.MySQLDriver.simple._

case class Test(id: Option[Int], name: String)

object Tests extends Table[Test]("test") {
  def id = column[Int]("id", O.AutoInc, O.PrimaryKey)
  def name = column[String]("name")
  def * = id.? ~ name <> (Test, Test.unapply _) returning id
}

Once I remove the "returning id", CPU consumption reduces to normal. I tried with H2Driver as well. As long as I have the returning id in the code, the CPU consumption is over 100%.

I can confirm this: whenever I open a Scala file with IntelliJ my computer gets loud and starts overheating - once I watched Activity Monitor and the CPU (consumed by IntelliJ) was 110% (!!). It doesn't happen with other files (HTML, CSS, JS, Java).

My specs: MacBook Pro 2.7 GHz Intel Core i7 with 8 GB RAM.

Major lag, especially when typing; when holding a key, e.g. /////////////////////////////////////////////////////// to make a separator, it blocks completely and I have to wait 5-10 seconds until I can type again. I have no time to debug this, to see what code exactly is responsible. One file is a quite simple controller object for Play Framework. I don't have "returning" anywhere in my code.

Hi, I investigated this problem a bit and could find (one of?) the lines causing lag:

def foo(url: String): List[String] = {
  val parts = ListBuffer[String]()
  val r = """^(?:https|http)?(?:://)?(?:www\.)?([^/]+)([^\?]*)?(.*)?""".r
  val r(a, b, c) = url
  val x = a.split(":")(0) //<-- this line
  parts.toList
}

There was a clear difference: when this line is enabled, CPU is 100+%, otherwise 12%.

The complete file:

package controllers

import play.api.mvc.Controller
import play.modules.reactivemongo.MongoController
import scala.collection.mutable.ListBuffer

object Bar extends Controller with MongoController {
  def foo(url: String): List[String] = {
    val parts = ListBuffer[String]()
    val r = """^(?:https|http)?(?:://)?(?:www\.)?([^/]+)([^\?]*)?(.*)?""".r
    val r(a, b, c) = url
    val x = a.split(":")(0) //<-- this line
    parts.toList
  }
}

Hope that this helps.

Any updates on this? It's really annoying to use the IDE like this... I can check my emails or drink my tea each time I type two words... I disabled auto build, set the highlighting level for the file to "None" (but nothing seems to happen, it continues highlighting normally) and disabled Scala type-aware highlighting (it also continues highlighting). The problem persists. In my original file (which is long - I reduced it to the posted file by randomly deleting code and checking if there's still lag) the problem is not only the line I posted; there also seems to be other code involved, since if I comment out this line there, there's still lag. I don't have time to find each of these problem lines, though.

Forgot to mention: I'm using IDEA 12.1.6, plugin version 0.22.302.

I'm working on performance issues right now, and I'm fixing the performance of "for statements" - in some cases it's completely unusable. I hope I'll do it today, then I'll continue with other performance issues. As for your issue, please attach a CPU snapshot (). Your problem is probably known and has a workaround.

Best regards,
Alexander Podkhalyuzin.

I just checked your code example and for me it works ok. Please give me a CPU snapshot to understand what's going on. Thank you!
Best regards,
Alexander Podkhalyuzin.

Ok, snapshot attached. I just pasted the code sample I posted into an empty file and held "/" pressed a while until it became unresponsive. Activity Monitor was reporting 104% CPU. That's when I took the snapshot.

P.S. Sorry for ignoring your message before; for some reason I didn't see it.

Attachment(s): idea-2013-10-11-2.snapshot.zip

Ok, looks like you have enabled language injection in Scala. Just disable it and performance will become normal.

Best regards,
Alexander Podkhalyuzin.

I disabled all checkboxes in Preferences -> Language Injections (and I also selected "No runtime instrumentation" and "Do not analyze anything (fast)" under "Advanced", in case it helps). I'm still having the same performance problems.

We have a setting for Scala: Settings -> Scala -> Other settings -> Disable language injection in Scala files. If that doesn't help either, please send me another snapshot of the freezing (you can also try other performance-related settings).

Best regards,
Alexander Podkhalyuzin.
- Type: Bug
- Status: Resolved (View Workflow)
- Priority: Major
- Resolution: Fixed
- Component/s: subversion-plugin
- Labels: None
- Environment: SLES 11, Jenkins 1.470, subversion server is CollabNet Subversion Edge 1.3.1
- Similar Issues:

Using the "Emulate clean checkout by first deleting unversioned/ignored files, then 'svn update'" svn strategy, we recently had a branch merged into trunk that added some files. When the Jenkins jobs configured as above ran against trunk, those added files had their content duplicated. A quick google suggested it has happened before:

- is duplicated by JENKINS-18140 SVNKit 1.7.6 library used in Jenkins causes file contents of SVN checkouts to become duplicated - Resolved

Seeing this on Jenkins 1.479 with plugin 1.42 as well. Worth noting that at Last.fm we merge down branches which add files quite often, but this behaviour only started happening recently when we upgraded, probably within the last few point versions of the plugin.

It happens also when using 'Use svn update as much as possible'. When the build fails we wipe out the workspace and re-build. As this bug is really annoying I've created a 50$ reward to fix this, expires Nov/09/2012:

Same issue: Jenkins 1.487, Jenkins Subversion Plug-in 1.43. It always occurs after a merge from a branch. I'm using the "Use 'svn update' as much as possible" check-out strategy.

Same issue: Jenkins 1.489, Jenkins Subversion Plug-in 1.43 (it was already bugged in previous versions).

Same for me. (It's really annoying!) Jenkins 1.492 and Subversion 1.43.

I'm not sure that I understand the issue: Who or what is merging the branch into trunk? And what if you checkout the trunk with command line svn - is the content duplicated, too?

The problem happens if a commit already pushed to the Subversion server merged a branch into trunk in which some files were added.

Who or what is merging the branch into trunk? - A developer, on their own machine; the merge is then committed to svn.
And what if you checkout the trunk with command line svn - is the content duplicated, too? - No, if you use the command line tools, the checkout is as expected, with no duplicated content.

Okay. So what would really help to debug the issue would be if you could create a minimal subversion repository which has these characteristics. Ideally also a snapshot of the repository BEFORE and AFTER the branch has been merged.

I have the same issue, file content is duplicated. The update strategy is 'Emulate clean...' and I only see this behavior when new files are added after a merge.

How to reproduce:

- Do this on the Jenkins host:

cd /tmp
mkdir svn ws
svnadmin create svn
svn mkdir -m trunk
svn mkdir -m branch
cd ws
svn co
cd branch
echo test > singleline
svn add singleline
svn commit -m added

- Now create a Jenkins "freestyle" project with SVN URL and build it once, then continue in the shell:

cd /tmp/ws
svn co
cd trunk
svn merge -r 2:3 ../branch/
svn commit -m merged

- Now build the project again. Check the workspace: the file "singleline" will contain 2 lines!

Reproducible with: Jenkins 1.502, Subversion Plugin 1.45, host SVN 1.6.12 (Debian/CollabNet).

Same issue: Jenkins 1.501, Jenkins Subversion Plug-in 1.45. Check-out strategy: Emulate clean checkout by first deleting unversioned/ignored files, then 'svn update'. It seems not to be file-type related, since it occurred on both xml and java files.

This is critical! Issues like these can cause a chain reaction on Ant build-script based projects that do not use a distributed build model or any kind of reverting mechanism on a failed build.

Same problem here. Jenkins ver. 1.505, Jenkins Subversion Plug-in ver. 1.45.

We are seeing file contents double when using svn mv commands as well. This means files that are svn mv'd to a location, then added back to the repo, have their contents duplicated. Here are the reproduction steps in action:

$ vim double.txt
$ cat double.txt
1
2
3
$ svn add double.txt
A double.txt
$ svn ci . -m "Initial commit"
Adding double.txt
Transmitting file data .
Committed revision <revision>.
$ svn mv double.txt double_new.txt
A double_new.txt
D double.txt
$ svn ci . -m "Renaming double"
Deleting double.txt
Adding double_new.txt
Committed revision <revision>.

Here is the output when Jenkins runs a job. The job updates a checkout which contains the file "double_new.txt", then does a 'cat' command on the file. Here's the output:

Started by user Kenneth Ayers
Building remotely on builder in workspace /test
Updating <repository> at revision <revision>
A double_new.txt
At revision <revision>
[test] $ /bin/sh -xe /tmp/hudson5312350812342753447.sh
+ cat double_new.txt
1
2
3
1
2
3
Finished: SUCCESS

As you can see, the file contents are now doubled up.

Versions:
Jenkins: 1.508
Jenkins Subversion Plug-in: 1.45

We were able to reproduce this very simply:

- Create a file in SVN.
- Rename this file in SVN (it should stay in the same folder, and the content must not change!). (A manual delete/add does not reproduce it.)
- Build the project with Jenkins and let it update its project with the "Revert before Update" strategy. I didn't check other strategies.

Now it is very likely the file has doubled its content.

Jenkins: 1.494
Jenkins Subversion Plugin: 1.45

PLEASE FIX THIS! Many ways to reproduce the bug have been shown now. And since this is a very subtle failure in a critical component which leads to hard-to-track and hard-to-reproduce errors (dev environment works, production fails), this bug should be fixed immediately. Because of the bug we had valid, but wrong, data files sent to our customers! And this bug ruins our confidence in the whole Jenkins toolchain at my company.

This bug may be in SVNKit, which is the Java library that subversion-plugin uses. I determined this by modifying. Here's the git patch to hudson.scm.subversion.UpdateUpdater with the debug output. Please note I'm not a Java developer by trade, and this code has cruft from a lot of debugging iterations. I've also modified the parameters being used by doUpdate() as part of the debugging process:
I've also modified the parameters being used by doUpdate() as part of the debugging process:

    diff --git a/src/main/java/hudson/scm/subversion/UpdateUpdater.java b/src/main/java/hudson/scm/subversion/UpdateUpdater.java
    index f00ebdb..021e18c 100755
    --- a/src/main/java/hudson/scm/subversion/UpdateUpdater.java
    +++ b/src/main/java/hudson/scm/subversion/UpdateUpdater.java
    @@ -40,11 +40,21 @@ import org.tmatesoft.svn.core.wc.SVNInfo;
     import org.tmatesoft.svn.core.wc.SVNRevision;
     import org.tmatesoft.svn.core.wc.SVNUpdateClient;
     import org.tmatesoft.svn.core.wc.SVNWCClient;
    +import org.tmatesoft.svn.core.wc.SVNClientManager;
    +import org.tmatesoft.svn.core.SVNDepth;
    +import org.tmatesoft.svn.core.wc.SVNRevision;

     import java.io.File;
     import java.io.IOException;
     import java.util.ArrayList;
    +import java.util.Arrays;
     import java.util.List;
    +import java.io.InputStream;
    +import java.io.BufferedInputStream;
    +import java.io.FileInputStream;
    +import java.io.DataInputStream;
    +import java.io.InputStreamReader;
    +import java.io.BufferedReader;

     /**
      * {@link WorkspaceUpdater} that uses "svn update" as much as possible.
    @@ -154,7 +164,38 @@ public class UpdateUpdater extends WorkspaceUpdater {
                 switch (svnCommand) {
                     case UPDATE:
                         listener.getLogger().println("Updating " + location.remote + " at revision " + revisionName);
    -                    svnuc.doUpdate(local.getCanonicalFile(), r, svnDepth, true, true);
    +                    //svnuc.doUpdate(local.getCanonicalFile(), r, svnDepth, true, true);
    +                    SVNClientManager cM = SVNClientManager.newInstance();
    +                    SVNUpdateClient updateClient = cM.getUpdateClient();
    +                    updateClient.doUpdate(local.getCanonicalFile(), SVNRevision.HEAD, SVNDepth.INFINITY, true, true);
    +
    +                    ArrayList<File> files = new ArrayList<File>(Arrays.asList(local.listFiles()));
    +                    for (int i = 0; i < files.size(); i++)
    +                    {
    +                        if (files.get(i).isFile())
    +                        {
    +                            String fname = files.get(i).getName();
    +                            listener.getLogger().println("File: " + fname);
    +                            try {
    +                                // Open the file and dump its contents to the build log
    +                                FileInputStream fstream = new FileInputStream(files.get(i));
    +                                BufferedReader in = new BufferedReader(new InputStreamReader(new DataInputStream(fstream)));
    +                                String strLine;
    +                                while ((strLine = in.readLine()) != null) {
    +                                    listener.getLogger().println(strLine);
    +                                }
    +                                // Close the input stream
    +                                in.close();
    +                            } catch (Exception e) { // Catch exception if any
    +                                listener.getLogger().println("Error: " + e.getMessage());
    +                            }
    +                        }
    +                    }
                         break;
                     case SWITCH:
                         listener.getLogger().println("Switching to " + location.remote + " at revision " + revisionName);

I've posted a bug report to the SVNKit issue tracker, hopefully this will get some attention:

All, I may have found the issue. I'd like the Jenkins devs to take a look, and I've also appended this information to the SVNKit ticket, here:

In org.tmatesoft.svn.core.internal.wc.SVNUpdateEditor15.java, in the function addFileWithHistory (line 867), there's a code block that calls myFileFetcher.fetchFile() twice. Each time this is called, baseTextOS is written to. Upon the second write, the file contents are duplicated.
Here's the code:

    myFileFetcher.fetchFile(copyFromPath, copyFromRevision, baseTextOS, baseProperties);
    info.copiedBaseChecksum = checksumBaseTextOS.getDigest();

I was able to find this by stepping through the code using NetBeans IDE 7.3 attached to a remote debugging session on Jenkins. I've compiled and tested this change inside the context of the subversion-plugin and the file contents are no longer duplicated. I've forked the svnkit repo used in Jenkins here, and committed this change if anyone would like to download the fix and do some testing:

Here's my patch:

    Index: SVNUpdateEditor15.java
    ===================================================================
    --- SVNUpdateEditor15.java (revision 9722)
    +++ SVNUpdateEditor15.java (working copy)
    @@ -864,7 +864,6 @@
         OutputStream baseTextOS = null;
         try {

Thank you, Kenny Ayers

I went ahead and compiled this change into the subversion plugin, at least I think I did. This is using the latest rev from subversion-plugin along with my svnkit library modification. If you guys would like to do some additional testing to verify my change, the plugin is located here: I do not suggest using this for anything but testing / verification. -Kenny

I just hit this too, version 1.44 on Jenkins 1.513.

Since it's fixed in SVNKit 1.7.10, updating svnkit should fix this. However, the subversion plugin uses a patched version of svnkit, so it's not necessarily as easy as updating the dependency. I would love to see the pull request merged and the subversion plugin updated to prevent this from corrupting builds.

I ended up patching the jenkins fork of svnkit locally, building the subversion-plugin 1.45 with the patched version of svnkit. It does indeed appear to fix the behavior.

Kenny, Bret, did you do any modification to the pom files of the fork of jenkinsci-svnkit or to the svnPlugin?
In fact I'm using the svnkit code patched by Kenny: ...and the SVNPlugin 1.45: The first one was successfully built, but then I get tons of test errors while building the second one. Any idea? I know I could simply use Kenny's build (btw, thx!) but I wanted to try to build it on my own. Thanks for any help!

p.s. Kenny, I had the very same problem mentioned by you here: ...and I ended up changing the exec task in the pom file of svnKit from this:

    <exec command="./gradlew clean assemble" />

...to this:

    <exec command="./gradlew clean build -x test -x signMaven"/>

Did you do the same? I then copied the jar file located in the gradle/build folder into my local maven repo; I hope this was not an incredibly stupid idea...

Hey LuFrija, if my memory is correct, I had to do the same thing - modify the build command for SVNKit, then copy the resultant jar file into my local maven repo, then build subversion-plugin against that. This seems like a hacky workflow, but it's the best I could stitch together given there are no instructions on how to build this properly, and the mailing list didn't respond to my question on how to build the correct way. Are you good to go now? -Kenny

Hi guys, thanks for your fast reply! I guess I found the problem... it's all about my company's proxy! In fact, having a closer look at the failing tests I could see something like "ERROR: Failed to check out". I'm coping with it somehow; in the meantime I'm testing the hpi file uploaded by Kenny.

Code changed in jenkins
User: Kenny Ayers
Path: svnkit/src/main/java/org/tmatesoft/svn/core/internal/wc/SVNUpdateEditor15.java
Log: Removing duplicate myFileFetcher.fetchFile() call.
JENKINS-14551

Code changed in jenkins
User: Nicolas De loof
Path: svnkit/src/main/java/org/tmatesoft/svn/core/internal/wc/SVNUpdateEditor15.java
Log: Merge pull request #1 from theotherwhitemeat/master
A fix for JENKINS-14551
Compare:

Code changed in jenkins
User: Christoph Kutzinski
Path: pom.xml
Log: [FIXED JENKINS-14551] added files merged from a branch results in those files having doubled content
Compare: ^...3599af503fc0

Having the same problem. Jenkins 1.479, Jenkins Subversion Plugin 1.42
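The root cause described above — fetchFile() writing into the same baseTextOS twice — can be illustrated with a short, language-neutral sketch. This is plain Python, not SVNKit's actual API; fetch_file() stands in for myFileFetcher.fetchFile() and a BytesIO stands in for baseTextOS:

```python
import io

def fetch_file(source: bytes, out) -> None:
    # Stand-in for SVNKit's myFileFetcher.fetchFile(): it appends the
    # fetched content to whatever is already in the output stream.
    out.write(source)

repo_content = b"1\n2\n3\n"

# The buggy code path in addFileWithHistory fetched the same file twice
# into one stream...
base_text = io.BytesIO()
fetch_file(repo_content, base_text)
fetch_file(repo_content, base_text)
# ...so the working-copy base text ends up duplicated, exactly like the
# doubled "cat double_new.txt" output quoted earlier.
assert base_text.getvalue() == repo_content * 2

# The fix removes the second call, leaving a single copy.
fixed = io.BytesIO()
fetch_file(repo_content, fixed)
assert fixed.getvalue() == repo_content
```

This is why the duplication only shows up on copy/merge/rename paths: addFileWithHistory is the code that reconstructs a file from its copy-from source.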
https://issues.jenkins-ci.org/browse/JENKINS-14551?focusedCommentId=169317&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Hi - I'm having a problem converting a couple of things that work fine spelled out in function main, but not when I try to define them as functions. One is a process of filling in a one-dimensional array. I'd like to just be able to say "here's the array; fill it in with fillArray()", but can't find the right syntax. Here's what works and what doesn't so far:

What works:

    // Rich Mansfield 0457321 11/11/09
    // CIS180 fillArrays.cpp
    // This asks the user for a number of scores to be entered,
    // then it reads the scores into a one-dimensional array;
    // but when I try to make a function out of it, it won't compile.
    #include <iostream>
    using namespace std;

    int main()
    {
        const int SIZE = 3;
        int arrayName[SIZE];

        cout << "Enter " << SIZE << " elements: ";
        for (int i = 0; i < SIZE; i++)
            cin >> arrayName[i];

        cout << "Here's the array: ";
        for (int i = 0; i < SIZE; i++)
            cout << arrayName[i] << " ";
        cout << endl;

        return 0;
    } //End main fn

And here's what does NOT work:

    // Rich Mansfield 0457321 11/6/09
    // CIS180 createArraysFn.cpp
    // This is an attempt to make a fn that asks a user for a no. of scores
    // to be entered, then input the scores into a one-dimensional array;
    // but it won't compile.

    #include <iostream>
    using namespace std;

    //Function prototype
    int fillArray(int);//Trying to make a fn that fills an array

    int main()
    {
        const int SIZE = 3;
        int arrayName[SIZE];

        fillArray(arrayName[SIZE]);

        return 0;
    } //End main fn

    //****************************************************
    //Function definition
    int fillArray(int num)
    {
        const int SIZE = 3;
        for (int i = 0; i < SIZE; i++)
            cin >> num[i];
    } //End fillArray fn.

What am I doing wrong or not doing right?

Rich Mansfield
https://www.daniweb.com/programming/software-development/threads/238129/array-functions
Travelers Companies Inc (Symbol: TRV): this week we highlight one interesting put contract, and one interesting call contract, from the January 2018 expiration for TRV.

The put contract our YieldBoost algorithm identified as particularly interesting is at the $110 strike, which has a bid at the time of this writing of $3.40. Collecting that bid as the premium represents a 3.1% return against the $110 commitment, or a 3.7% annualized rate of return (at Stock Options Channel we call this the YieldBoost).

Turning to the other side of the option chain, we highlight one call contract of particular interest for the January 2018 expiration, for shareholders of Travelers Companies Inc (Symbol: TRV) looking to boost their income beyond the stock's 2.2% annualized dividend yield. Selling the covered call at the $125 strike and collecting the premium based on the $5.50 bid annualizes to an additional 5.4% rate of return against the current stock price (this is what we at Stock Options Channel refer to as the YieldBoost), for a total of 7.6% annualized rate in the scenario where the stock is not called away. Any upside above $125 would be lost if the stock rises there and is called away, but TRV shares would have to climb 1.8% from current levels for that to occur, meaning that in the scenario where the stock is called, the shareholder has earned a 6.2% return from this trading level, in addition to any dividends collected before the stock was called.
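The quoted put figures follow from simple arithmetic, checked here in Python. The annualization convention and the days-to-expiration are assumptions on my part, since the article states neither:

```python
bid = 3.40       # premium received for selling the $110 put
strike = 110.00  # cash committed if the put is assigned

# Return over the holding period, as quoted in the article (3.1%).
period_return_pct = bid / strike * 100
assert round(period_return_pct, 1) == 3.1

def annualized_pct(period_pct, days_to_expiration):
    # A simple non-compounding annualization, one common convention
    # for screens of this kind.
    return period_pct * 365.0 / days_to_expiration

# The article's 3.7% annualized figure implies roughly 300 days left
# to the January 2018 expiration: annualized_pct(3.09, 305) ~= 3.7.
```

The covered-call side works the same way, except the period return is measured against the current stock price rather than the strike.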
http://www.nasdaq.com/article/one-put-one-call-option-to-know-about-for-travelers-companies-cm763010
If I have the following code:

    static class SceneData
    {
        static int width = 100;
        static int score;
        static GameObject soundManager;
    }

Can I use this to keep score that's available in all my scenes? If so then do I have to make an empty gameobject in each scene and then just attach the script to that or is there more I have to do?

asked Sep 26, 2012 at 06:42 AM Sophia Isabella

Sure! I am using the same methods in my own project right now. First let me say that a guy named Petey has a youtube channel called BurgZergArcade that has some really awesome Unity3d C# programming videos! That channel is here: Anyway what I usually do that I learned from Petey is make a new empty gameobject and name it GameMaster, then make a C# script named GameMaster.cs. There's at least two ways to do this. Here's the first way:

    using UnityEngine;

    public class GameMaster : MonoBehaviour
    {
        public static int score;

        void Awake()
        {
            DontDestroyOnLoad(this);
        }
    }

This is the quick and easy way to access variables like 'score' from any class. To do this just call GameMaster.score from anywhere. The problem with this method is that it only works well for simple data types like int. Most of the time we want to drag and drop in Assets via the inspector, and this isn't allowed with static variables. In this case we create what's called a singleton. This is something Leepo from M2H uses in his multiplayer tutorials. We create a regular class and then make one single static instance of the whole class.
This is the second way:

    using UnityEngine;

    public class GameMaster : MonoBehaviour
    {
        public static GameMaster GM;
        public GameObject soundManager;
        public int score;

        void Awake()
        {
            if (GM != null)
                GameObject.Destroy(GM);
            else
                GM = this;
            DontDestroyOnLoad(this);
        }
    }

The beauty of using the singleton is that it can use the inspector to assign public assets and can still be used anywhere like a static class; even the functions will work from anywhere. We'd call the score, for example, by using GameMaster.GM.score

Of course there are many other ways to accomplish this, but this is how I like to do it. Hope that helps!

answered Sep 26, 2012 at 10:56 AM dscroggi

You deserve more thumbs and at least one comment saying this is correct. Oh, and it should be marked as correct, obviously. I learned this technique in an official Unity Tutorial Video: see this link, video @ 13.30... so I'm pretty sure this is the right way to go. I HAVE ONE THING TO ADD: he should make a prefab of that GameMaster object.

The way I do it is that I have a normal variable (I call it temp) that I use in the scene and a static variable (static) that I use for the whole game. The reason is, if you use the static var for your points and the guy loses, when he starts again he still has the points from the previous run. You could use temp, and if the guy wins the level then static += temp; if the guy loses, temp = 0; Temp is lost when the new level is loaded, static remains. You can attach the static var to your player object. The static var will have a lifetime of the whole game and you can access it anywhere with ClassName.staticVar. In case you don't know much about static yet, it has been the subject of a lot of controversy, as beginners tend to think that the easiness of access is a good feature. Beware of that.

answered Sep 26, 2012 at 06:56 AM fafase

My game is a bit different in that it does not have a player object as such.
Could I create a gameObject called StaticData in each scene and attach a static class to that? If I did that, would it still have the same values from scene to scene? The game object would be a new one, but the data would be the same. It would be a new object linked to the same data.

@fafase - What about the script thing that dscroggi is suggesting. Have you needed that before?

Throw this utility script on any game object and it will not get destroyed when scenes change. Make sure that game object has your static data on it. Btw, as far as I'm concerned, static data is faster to access than instanced data, so you're on the right track.

    using UnityEngine;

    public class KeepOnLoad : MonoBehaviour
    {
        void Awake()
        {
            DontDestroyOnLoad(this);
        }
    }

answered Sep 26, 2012 at 07:34 AM

@dscroggi - Can you explain a bit more? It's a method called DontDestroyOnLoad(this)?

When you start a new scene, all objects from the previous scene are destroyed. This function tells Unity to keep the object when loading a new scene. Could be a solution. My issue is that if for some reason the object gets destroyed within the game, you would lose the data. The function keeps it on load, but I do not know if it preserves it from Destroy.

@dscroggi - Can you show me how the class would look that I would need to hold the static data. All I need to do is to hold one integer for score? Thanks

Yes you can do this, but you need to specify everything as public.

    public static class SceneData
    {
        public static int width = 100;
        public static int score;
        public static GameObject soundManager;
    }

Otherwise they default to private and your class will be useless. Now you'll be able to use SceneData.width anywhere. The other answers offer ways of having a class that inherits from MonoBehaviour which you can access from anywhere, but it doesn't look like you actually want any MonoBehaviour functionality, so a simple static class will work better.
answered Jul 27 at 02:03 AM SilentSin
http://answers.unity3d.com/questions/323195/how-can-i-have-a-static-class-i-can-access-from-an.html
Rex: an open-source domestic robot

This repository represents an experiment made with pyBullet and OpenAI Gym. It's very much a work-in-progress project.

This project is mostly inspired by the incredible work done by Boston Dynamics. The goal is to train a 3D-printed legged robot using Reinforcement Learning. The aim is to let the robot learn domestic and generic tasks (like picking up objects and autonomous navigation) in simulation and then successfully transfer the knowledge (control policies) to the real robot without any other tuning.

See rex_gym.playground.rex_reactive_env_play; there are also videos under /videos.

Start a new training simulation

To start a new training session:

    python -m rex_gym.agents.scripts.train --config rex_reactive --logdir YOUR_LOG_DIR_PATH

YOUR_LOG_DIR_PATH sets where the policy output is stored.

PPO Agent configuration

You may want to edit the PPO agent's default configuration, especially the number of parallel agents launched in the simulation. Edit the num_agents variable in the agents/scripts/configs.py script:

    def default():
        """Default configuration for PPO."""
        # General
        ...
        num_agents = 14

This configuration will launch 14 agents (threads) in parallel to train your model. Install rex_gym from source.

Robot platform

The robot used for this experiment is the SpotMicro made by Deok-yeon Kim. I've printed the components using a Creality Ender3 3D printer, with PLA and TPU+ (the latter just for the foot covers). The idea is to extend the basic robot by adding components like a 3-joint robotic arm on top of the rack and a Lidar sensor.

Rex: simulation engine

Rex is a 12-joint robot with 3 motors (Shoulder, Leg and Foot) for each leg. The Rex pose signal (see rex_reactive_env.py) sets the 12 motor angles that make Rex stand up. The robot model was imported into pyBullet by creating a URDF file.
Tasks

This is a very first list of tasks I'd like to teach Rex:

- Locomotion
  - Run/Walk
  - Stand up
  - Reach a specific point
- Autonomous navigation
  - Map environment
- Grab an object

Locomotion: Run

This task is about letting Rex learn how to run in an open space.

Reinforcement Learning Algorithm

There is a good number of papers on quadruped locomotion, and most of them come with sample code. The most complete collection of examples is probably the Minitaur folder in the PyBullet3 repository, which collects the code samples for the Sim-to-Real studies. I've extracted and edited the Minitaur Reactive Environment, sample code for the paper Sim-to-Real: Learning Agile Locomotion For Quadruped Robots, and used it to automate the learning process for Rex's locomotion gait. I've tried to retain all the improvements introduced in that paper to overcome the Reality Gap.

Galloping gait - from scratch

In this very first experiment, I let the system learn from scratch: I set the open loop component a(t) = 0 and gave the feedback component large output bounds [−0.5, 0.5] radians. The leg model (see rex_reactive_env.py) forces leg and foot movements (in a positive or negative direction, depending on the leg), influencing the learning score and time. In this first version, the leg model holds the Shoulder motors in the start position (0 degrees). As in the Minitaur example, I chose to use Proximal Policy Optimization (PPO).

I ran a first simulation (~6M steps); the output control policy is in /policies/galloping/-++-rex_reactive. The emerged galloping gait shows the robot body tilted up and some unusual positions/movements (especially starting from the initial pose). The leg model needs improvements. The policy video is policies/galloping/videos/rex-no-bounds.mp4

Galloping gait - bounded feedback

To improve the gait, in this second simulation I worked on the leg model: I set bounds for both Leg and Foot angles, keeping the Shoulder in the initial position.
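Mechanically, "bounded feedback" amounts to clamping the policy's learned angle offsets before they are added to the open-loop pose. A minimal sketch of that idea follows; the function names and shapes are illustrative, not rex_gym's actual API:

```python
LEG_BOUND = 0.5  # radians; the first experiment used output bounds [-0.5, 0.5]

def clamp(x, bound):
    # Keep a single learned offset inside [-bound, bound].
    return max(-bound, min(bound, x))

def apply_bounded_feedback(open_loop_pose, feedback, bound=LEG_BOUND):
    # Final motor angles = open-loop component a(t) (all zeros in the
    # first experiment) plus the clamped feedback from the PPO policy.
    return [a + clamp(f, bound) for a, f in zip(open_loop_pose, feedback)]

open_loop = [0.0] * 12                    # 12 motor angles, a(t) = 0
policy_output = [0.7, -0.9] + [0.1] * 10  # raw policy output, partly out of range
angles = apply_bounded_feedback(open_loop, policy_output)
assert max(angles) <= LEG_BOUND and min(angles) >= -LEG_BOUND
```

Tightening these bounds per joint (Leg and Foot here, with the Shoulder pinned at 0) is what keeps the learned gait inside physically sensible motor ranges.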
I ran the simulation (7M steps); the output control policy is in /policies/galloping/bounded-rex_reactive. The emerged gait looks much cleaner. The policy video is policies/galloping/videos/rex-galloping.mp4

Credits

- Sim-to-Real: Learning Agile Locomotion For Quadruped Robots and all the related papers.
- Google Brain, Google X, Google DeepMind - Minitaur
- Ghost Robotics.
- Deok-yeon Kim, creator of SpotMini.
- The great work on the robot platform rendering done by Florian Wilk with his SpotMicroAI.
https://pypi.org/project/rex-gym/0.1.5/
Created on 2015-08-14 02:55 by Andre Merzky, last changed 2015-09-12 23:36 by Andre Merzky.

- create a class which is a subclass of multiprocessing.Process ('A')
- in its __init__ create a new thread ('B') and share a queue with it
- in A's run() method, run 'C = subprocess.Popen(args="/bin/false")'
- push 'C' through the queue to 'B'
- call 'C.poll()' --> returns 0

Apart from returning 0, the poll will also return immediately, even if the task is long running. The task does not die -- 'ps' shows it is well alive. I assume that the underlying reason is that 'C' is moved sideways in the process tree, and the wait is happening in a thread which is not the parent of C. I assume (or rather guess, really) that the system-level waitpid call raises ECHILD (see wait(2)), but maybe that is misinterpreted as 'process gone'?

I append a test script which shows different combinations of process spawner and watcher classes. All of them should report an exit code of '1' (as all run /bin/false), or should raise an error. None should report an exit code of 0 -- but some do.

PS.: I implore you not to argue about whether the above setup makes sense -- it probably does not. However, it took significant work to condense a real problem into that small excerpt, and it is not a full representation of our application stack. I am not interested in discussing alternative approaches: we have those, and I can live with the error not being fixed.
    #!/usr/bin/env python

    from subprocess import Popen
    from threading import Thread as T
    from multiprocessing import Process as P
    import multiprocessing as mp

    class A(P):
        def __init__(self):
            P.__init__(self)
            self.q = mp.Queue()
            def b(q):
                C = q.get()
                exit_code = C.poll()
                print "exit code: %s" % exit_code
            B = T(target=b, args=[self.q])
            B.start()

        def run(self):
            C = Popen(args='/bin/false')
            self.q.put(C)

    a = A()
    a.start()
    a.join()

I'll let someone else analyze this in detail if they want to, but I'll just note that mixing multiprocessing and threads is not a good idea and will lead to all sorts of strangeness. Especially if you are using the unix default of fork for multiprocessing.

As mentioned in the PS, I understand that the approach might be questionable. But (a) the attached test shows the problem also for watcher *processes*, not threads, and (b) an error should be raised in unsupported uses, not a silent, unexpected behavior which mimics success.

Looking a little further, it seems indeed to be a problem with ignoring SIGCHLD. The behavior has been introduced with [1] at [2] AFAICS, which is a response to issue15756 [3]. IMHO, that issue should have been resolved by raising an exception instead of assuming that the child exited successfully (neither is true in this case, not the 'exited' nor the 'successfully').

[1]
[2]
[3]

Hi again, can I do anything to help moving this forward? Thanks, Andre.
However letting this bubble up to the application may, at this point, cause new bugs in code that isn't expecting it so I'm not sure we should change that in any circumstances. :/ FWIW there is also a comment at the end of the related issue1731717 (for Popen.wait() rather than .poll()) with a suggestion to ponder (though not directly related to this issue, if it is still relevant). Yes, I have a workaround (and even a clean solution) in my code. My interest in this ticket is more academic than anything else :) Thanks for the pointer to issue1731717. While I am not sure which 'comment at the end' you exactly refer to, the whole discussion provides some more insight on why SIGCHLD is handled the way it is, so that was interesting. I agree that changing the behavior in a way which is unexpected for existing applications is something one wants to avoid, generally. I can't judge if it is worth to break existing code to get more correctness in a corner case -- depends on how much (and what kind of) code relies on it, which I have no idea about. One option to minimize change and improve correctness might be to keep track of the parent process. So one would keep self.parent=os.getpid() along with self.pid. In the implementation of _internal_poll one can then check if self.parent==os.getpid() still holds, and raise an ECHILD or EINVAL otherwise. That would catch the pickle/unpickle across processes case (I don't know Python well enough to see if there are easier ways to check if a class instance is passed across process boundaries). The above would still not be fully POSIX (it ignores process groups which would allow to wait on non-direct descendants), but going down that route would probably almost result in a reimplementation of what libc does... This is patch is meant to be illustrative rather than functional (but it works in the limited set of cases I tested).
https://bugs.python.org/issue24862
Saturday, April 01, 2017

My elasticsearch has been dying a lot recently, so when I heard Algolia has a free plan (you'll need to put their logo on your site), I decided to give it a try. Hacker News readers will know algolia: it is the search engine they're using. Algolia supports a large number of languages and frameworks, no clojure though. It's simple to use the java client with clojure.

Add the dependency:

    [com.algolia/algoliasearch "2.8.0"]

The :import looks like this:

    (ns algolia.core
      (:require [clojure.data.json :as json])
      (:import (com.algolia.search ApacheAPIClientBuilder)
               (com.fasterxml.jackson.databind ObjectMapper)))

To instantiate the ApacheAPIClient object:

    (defn init-client [app-id api-key]
      (-> (ApacheAPIClientBuilder. app-id api-key)
          (.build)))

Instantiate an Index object:

    (defn init-index [client index-name]
      (.initIndex client index-name))

And sync a map to algolia:

    (let [client (init-client app-id api-key)
          index  (init-index client "test_articles")
          s      (json/write-str {:objectID 1 :title "foo" :body "bar"})
          data   (.readTree (ObjectMapper.) s)]
      (.addObject index data))

Here, data is a com.fasterxml.jackson.databind.JsonNode converted from a clojure map. The java api client has an async version as well. The search part is even easier: algolia's web explorer is very useful, it displays the raw queries and results of your searches, so just put those into your code.

I tried stuartsierra/component: Managed lifecycle of stateful objects in Clojure before and found it complicated. Now that my codebase has become larger, I read it again and found it useful. There is also tolitius/mount: managing Clojure and ClojureScript app state since (reset), for the same purpose.

Continue watching clojure west 2017 talks:

Why Clojure? - Derek Slager

He has some good opinions on microservices. I also have a single repo for most clojure projects.
The tool mentioned in the video could help me: lein-monolith

Faster Delivery with Pedestal and Vase - Paul deGrandis

In the talk he talked about Pedestal: all things are interceptors and composed together. The interceptor pattern is not a new idea; it can be found in Pattern-Oriented Software Architecture Volume 2: Patterns for Concurrent and Networked Objects and Volume 4: A Pattern Language for Distributed Computing:

- Servlets / Filters
- Message-oriented Middleware
- Computational pipelines

He also talked about vase; Paul introduced vase in one episode of the cognitect podcast. Vase sits on top of pedestal and solves the data problem in a microservice way.

    $ lein new vase project-name

Fearless JVM Lambdas - John Chapin

Serverless, java and aws lambda.

Wednesday, April 12, 2017

When I created a new luminus project, I noticed there is a Capstanfile; it looks like this (comments removed):

    base: cloudius/osv-openjdk8
    cmdline: /java.so -jar /my-project/app.jar
    build: lein uberjar
    files:
      /my-project/app.jar: ./target/uberjar/my-project.jar

It's for the operating system named OSv. It looks like a lightweight way to run VMs; I'd never heard about it before, but I'm definitely interested.

ProgresTouch RETRO TINY keyboard from Tokyo. Pretty good. I also have an ikbc poker II keyboard, beautiful but it takes some time to get used to. I had trouble with the original fn + wasd as arrows setup, so I finally decided to try its programming feature: first, caps lock is useless to me, while the fn key is heavily used, so I want to map caps lock as fn.
this can be done by turning on dip switch 1 (caps <-> left win) and 3 (left win <-> fn), but then I would lose my left win key, which is quite useful under os x (as command).

I found the solution online, and the trick is: BEFORE turning on the dip switches, map caps to left win first by programming:

- fn + right ctrl to enter programming mode
- tap caps
- tap left win
- tap pn to complete
- fn + right ctrl to exit programming mode

then disconnect the keyboard, turn on dip switches 1 and 3, and re-connect the keyboard. now caps should work as fn, and pn + left win works as left win.

typing pn every time is nonsense; fn + right shift switches to the program layer, so you don't need to hold pn for programmed keys (holding pn accesses normal layer keys while you're on the program layer).

other custom keys I set:

| fn + j, k, h, l | as arrows |
| fn + esc | as ~ |
| fn + v, b | as ctrl + left, right (switch workspace) |

once I have a comfortable set of arrow keys, I'm quite productive on the 60% keyboard. my next target is a 40% keyboard (no number row); looking forward to its arrival.

Professional Clojure has one chapter on datomic, a nice introduction. datomic is just like clojure: it changes the way you think.

Thursday, April 13, 2017

some emacs stuff:

first, a comprehensive OrgMode tutorial

second, learned the restclient.el package after watching Emacs Rocks! Episode 15: restclient-mode. one cool feature not mentioned in the video: C-c C-u copies the query under the cursor as a curl command.

when I get a json response, I want to hide/show some blocks; there is a built-in feature in emacs for this: Hideshow. M-x hs-minor-mode

| C-c @ C-h | Hide the current block |
| C-c @ C-s | Show the current block |
| C-c @ C-c | Toggle current block |

Saturday, April 15, 2017

zbarimg can help you extract urls from images containing qr codes.

some history books added to my reading list:

- The Dream Machine: J.C.R.
Licklider and the Revolution That Made Computing Personal
- Where Wizards Stay Up Late: The Origins Of The Internet
- Dealers of Lightning: Xerox PARC and the Dawn of the Computer Age

Saturday, April 22, 2017

things I read about clojure over the last couple of days:

core.async

- On the Judicious Use of core.async
- Core.Async in Use - Timothy Baldridge (I also wrote about this talk last month)
- halgari/naiad: A library for building declarative data flow graphs via a fluent api and core.async

nullable

Robust Clojure: The best way to handle nil. to summarize: treat nil as a type (the concept of Maybe or Nullable) instead of a value. use these functions/libraries to help you deal with nil:

if-some:

```clojure
(if-some [it (get {:a 1 :b 2} :c)]
  (+ 1 it)
  nil)
```

some-> and some->>:

```clojure
(some->> {:a 1 :b 3}
         :c
         vector
         (filter even?)
         (map inc)
         first)
;;=> nil
```

fnil:

```clojure
(def safe-inc (fnil inc 0))
(safe-inc (get {:a 1 :b 2} :c))
;;=> 1
```

core.match:

```clojure
(match (foo) ;; pretend `foo` is a function that returns a map
  nil (log "foo failed")
  {:ms t :user u :data data}
  (do (log "User: " u " took " t " seconds.")
      data))
```

Cats: Category Theory and algebraic abstractions for Clojure:

```clojure
(require '[cats.core :as m])
(require '[cats.monad.either :as either])

@(m/mlet [x (if-let [v (foo)]     (either/right v) (either/left))
          y (if-let [v (bar x)]   (either/right v) (either/left))
          z (if-let [v (goo x y)] (either/right v) (either/left))]
   (m/return (qux x y z)))
```

get and or to handle default values:

```clojure
(-> (get {:a 1 :b 2} :c 0) inc)
;;=> 1
(-> :c {:a 1 :b 2} (or 41) inc)
;;=> 42
```

further readings:

- Fourteen Months with Clojure
- Nil Punning (Or Null Pointers Considered Not So Bad) | LispCast
- Discussion on Reddit

spec

stathissideris/spec-provider: Infer clojure specs from sample data. Inspired by F#'s type providers.
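several of these nil-handling patterns translate almost directly to other languages. as a purely illustrative sketch (not from the post above), here is what fnil-style defaulting and some->-style short-circuiting look like in Python; `safe_inc` and `some_thread` are hypothetical names of my own:

```python
# Illustrative Python analogues of the Clojure nil-handling patterns above.

def safe_inc(x, default=0):
    """Like (fnil inc 0): substitute a default when the input is None."""
    return (default if x is None else x) + 1

def some_thread(value, *fns):
    """Like some->: stop threading as soon as any step yields None."""
    for fn in fns:
        if value is None:
            return None
        value = fn(value)
    return value

# fnil-style defaulting: a missing key yields None, which becomes 0.
assert safe_inc({"a": 1, "b": 2}.get("c")) == 1

# some->-style short-circuiting: the lookup returns None, so later
# steps are skipped instead of raising a TypeError.
assert some_thread({"a": 1, "b": 3}.get("c"), lambda x: x * 2) is None
assert some_thread({"a": 1, "b": 3}.get("b"), lambda x: x * 2) == 6
```

the point is the same as in the article: make the "might be nothing" case explicit at each step instead of letting it leak through as an exception later.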
native app

JUXT Blog: Writing ClojureScript native apps is easy

an interesting suggestion: Pattern Matching for Java

I wish golang had pattern matching, to turn their `if err != nil` into something like elixir's:

```elixir
with {:ok, result1} <- square(a),
     {:ok, result2} <- half(result1),
     {:ok, result3} <- triple(result2),
     {:ok, result4} <- do_even_more_stuff(result3) do
  IO.puts "Success! the result is #{result4}"
else
  {:error, error} -> IO.puts "Oops, something went wrong: #{error}"
end
```

btw, rust supports pattern matching:

```rust
fn main() {
    let file_name = "foobar.rs";
    match find(file_name, '.') {
        None => println!("No file extension found."),
        Some(i) => println!("File extension: {}", &file_name[i+1..]),
    }
}
```

Tout est Terrible is a good read. I like the comment "Doing It Right costs Lots Of Money." and the Ninety-ninety rule.

lots of good resources inside: Ask HN: Which companies have the best blogs written by their engineering team?

docker now has an upper-level project, moby: Introducing Moby Project: a new open-source project to advance the software containerization movement. the relationship is like fedora (moby) and rhel (docker). I'm not sure it's a good idea for docker, but for developers who don't like docker (like me), it is good that people can build their own solutions using smaller components:

- linuxkit/linuxkit: A toolkit for building secure, portable and lean operating systems for containers
- docker/infrakit: A toolkit for creating and managing declarative, self-healing infrastructure.

however, I think we'll have to wait a couple more years for common practices on how we should do containers.

found a useful tool for viewing logs: lnav: The Log File Navigator. just apt-get install lnav to install. I'll write some notes later.

Tmux and Vim - even better together: my colleague suggested using them before, but I don't quite want to change my workflow for now.
there is a much more useful article mentioned in that one: Vim Splits - Move Faster and More Naturally

put in .vimrc:

```vim
set splitbelow
set splitright

nnoremap <C-J> <C-W><C-J>
nnoremap <C-K> <C-W><C-K>
nnoremap <C-L> <C-W><C-L>
nnoremap <C-H> <C-W><C-H>
```

useful shortcuts:

```vim
"Max out the height of the current split
ctrl + w _

"Max out the width of the current split
ctrl + w |

"Normalize all split sizes, which is very handy when resizing terminal
ctrl + w =

"Swap top/bottom or left/right split
Ctrl+W R

"Break out current window into a new tabview
Ctrl+W T

"Close every window in the current tabview but the current one
Ctrl+W o
```

last month I wrote about vim text objects; I should also read the documentation about object motions and selections.

Saturday, April 29, 2017

needed to remove some sensitive data from a git repo, and found this tool helpful: BFG Repo-Cleaner by rtyley. it is very easy to use (compared to git filter-branch), simply:

```shell
$ bfg -D api_key.conf repo.git
```

a few notes:

- you'll need to use git clone --mirror to clone a clean bare repo
- remove the file before applying bfg
- run git reflog expire --expire=now --all && git gc --prune=now --aggressive before git push

about agile: the quote from Rich Hickey:

"sprinters" only run short distances and "Agile" solves this by firing the starting gun every 100 meters.

is from his famous talk Simple Made Easy (just search "Sprinter").

I worked for a team which ran scrum, then a no-agile team, and now I'm in agile again. I have to say I was more productive and happy in the no-agile environment. I could spend my time tackling complex problems, and that usually produced higher-quality code, pretty fast too. I made bad trade-offs/decisions when I needed to do daily standups, because it is always very hard to estimate how much time is needed to complete a small part of the system. agile doesn't make estimation easier or more accurate; it encourages programmers to make bad trade-offs and fake promises.
gRPC-Web: Moving past REST+JSON towards type-safe Web APIs

rest, graphql or grpc: I still don't have an answer for now.

Instead of containerization, give me strong config & deployment primitives. just read the discussion on hackernews and you'll know how messy the devops world is. no one agrees on anything (some probably agree on kubernetes, but I'm not convinced yet).

I found two good articles from skyliner (they also wrote the Fourteen Months with Clojure article).

working on a project to pipeline analytics data to some storage backend. there are a few ga alternatives:

piwik is the most popular one; the problem with piwik is that its backend is MySQL, which probably needs more effort to maintain at large scale. people will take its JavaScript Tracking Client and drop its MySQL backend.

snowplow has a few collectors, but we want to do serverless.

aws api gateway can transform (enrich) requests and POST to other services, like kinesis firehose, whose storage endpoint is S3, and it supports encryption (through KMS) and compression (gzip only, if it needs to be read from redshift). that's actually what we want to do, but with no server setup and maintenance.

if a mapping template is not enough for the enrichment, we can still pipe data to an aws lambda function and push it to a kinesis stream. you still have the flexibility to solve quite complex logic by composing multiple lambdas and kinesis streams, and there is a tool to push a kinesis stream to kinesis firehose. kinesis firehose also supports binding a lambda function to transform data: Amazon Kinesis Firehose Data Transformation

further reading:

- Parse.ly's Data Pipeline
- Self-host analytics for better privacy and accuracy
- How self-hosted analytics preserve user privacy

Wednesday, May 03, 2017

a really cool emacs package: rbanffy/selectric-mode: Make your Emacs sound like a proper typewriter.

I'm trying to use emacs as much as possible: winner-mode is very useful; it records window layouts automatically, and you use C-c left and C-c right to switch between them.
I'll use C-x 1 to maximize the working window, and C-c left to un-maximize.

file management: dired, for sure.

- s to toggle different sorts
- + to create a directory
- m to mark a file
- u to unmark
- U to unmark all
- g to refresh
- ^ goes up one level
- < and > to move to the next/previous sub-directory
- j to quickly jump to a file

to simulate midnight commander under emacs, first set this variable:

```elisp
(setq dired-dwim-target t)
```

then, in a separate dired-mode window, pressing C will by default copy to the target directory shown in the other window. to select multiple files, use % m (mark by regex).

dired supports zip files; however, it cannot extract files from the zip (it opens them in a buffer only).
https://jchk.net/blog/2017-04
A reader asked me last week:

I have always used Server.CreateObject in ASP scripts and WScript.CreateObject in WSH scripts, because I was unable to find any *convincing* information on the subject and therefore assumed that they were there to be used, and were in some way beneficial… but perhaps they're not?! What exactly is the reasoning behind having additional CreateObject methods in ASP and WSH when VBScript already has CreateObject and JScript already has new ActiveXObject?

(new ActiveXObject is for all intents and purposes identical to CreateObject, so I'll just talk about VBScript's CreateObject for the rest of this article.)

There are two reasons:

1) VBScript and JScript aren't the only game in town. What if there is a third-party language which (inexplicably) lacks the ability to create ActiveX objects? It would be nice to be able to create objects in such a language, thus the hosts expose said ability on their object models. (Similarly, IE, WSH and ASP all have "static" object tags as well.)

Of course, this is a pretty silly reason. First off, no such language comes to mind, and second, why stop at object creation? A language might not have a method that concatenates strings, but that doesn't mean that we should add one to WSH just because it comes in handy. We need a better justification.

2) The ASP and WSH versions of CreateObject are subtly different than the VBScript/JScript versions because they have more information about the host than the language engine possesses. The differences are as follows:

WScript.CreateObject with one argument is pretty much identical to CreateObject with one argument. But with two arguments, they're completely different. The second argument to CreateObject creates an object on a remote server via DCOM. The second argument to WScript.CreateObject creates a local object and hooks up its event handlers.
```vbscript
Sub Bar_Frobbed()
    WScript.Echo "Help, I've been frobbed!"
End Sub

Set foo = CreateObject("Renuberator", "Accounting")
Set bar = WScript.CreateObject("Frobnicator", "Bar_")
```

This creates a renuberator on the accounting server, a frobnicator on the local machine, and hooks up the frobnicator's events to functions that begin with the particular prefix.

Remember, in the script engine model, control over how event binding works belongs to the host, not to the language. Both IE and WSH have quite goofy event binding mechanisms for dynamically hooked-up events: IE uses a single-cast delegate model, WSH uses this weird thing with the string prefixes.

Server.CreateObject creates the object in a particular context, which is important when you are creating transactable objects. Windows Script Components, for example, are transactable objects. This means that, among other things, you can access the server object model directly from a WSC created with Server.CreateObject, because the object obtains the server context when it is created. Ordinary CreateObject has no idea what the current page context is, so it is unable to create transactable objects.

There are many interesting facets of transactable objects which I know very little about, such as how statefulness and transactability interact, how the object lifetime works and so on. Find someone who writes a blog on advanced ASP techniques and ask them how it works, because I sure don't know.

I'll answer the question "what's the difference between WScript.CreateObject and WScript.ConnectObject?" in a later blog entry. I answered the question "What's the difference between GetObject and CreateObject?" in this blog entry.

The business with the context is probably something to do with MTS or COM+. I'm not entirely clear on this – and I've read Tim Ewald's "Transactional COM+". It left me with a strong feeling of knowing the what and how, but still a bit hazy on the why and when.
If on Windows 2000 or later (if not, why not??), ASP uses COM+ services for context, which loads any new object into the context for the page, so long as the COM+ configuration for the new object is compatible. On NT 4.0, MTS is used instead – COM on NT 4.0 is totally ignorant of context (hence CreateObject cannot create an object which needs a context).

One of the factors in being able to reuse the caller's context is whether you're in the same apartment. According to Ewald, ASP runs your page code in an STA; if your component is marked 'Free', it will load in the MTA and cannot inherit the page's context.

A post on the Wrox asp_xml mailing list (archived at) suggests that for Windows 2000 and later, there's no difference.

"Server.CreateObject creates the object in a particular context"

Adding to this, before the whole context and IIS intrinsic business was finished, Server.CreateObject only did:

– QueryInterface for IDispatch
– GetIDsOfNames for "OnStartPage"
– If method found, call it, and pass in an implementation of IScriptingContext, so that the object could access the ASP intrinsics
– If method not found, don't bother

This also implies that ASP holds a reference to each object where OnStartPage is implemented, so it can call OnEndPage at the end of the request.

FWIW, Kim

Wscript.createobject also allows you to hook up events to the object since it's added as a named item to the script engine. has the details

Not quite. WS.CreateObject does NOT add the object as a named item and then use the named item connection mechanism. Rather, it walks the coclass typeinfo of the newly created object, searches it for events, and then hooks those events up to methods in the script engine namespace that match the right pattern. The point is that it's the HOST which does the binding, not the ENGINE. The engine can only bind to named items.

Ah yes, it's all coming back to me now. I remember many anglo-saxon remarks about connectobject when we took over WSH 1.0 way back when.
TBH if we could redo it again I'd totally cut WScript.CreateObject, because there are no Windows Script engines that don't have the equivalent, and the whole connectobject syntax was confusing (to put it in the least anglo-saxon!)

Well, from "Joe Coder's" perspective IE offers the same sort of event binding to VBScript authors, at least for elements/objects present when the page is loaded. Who in their right mind writing VBScript-based DHTML would code something like:

```html
<body onload="VBScript:LoadHandler">
```

… or hope the default is JScript and write:

```html
<body onload="LoadHandler()">
```

… instead of using a Sub or Function named "window_onload" anyway?

If one were to rip WScript.CreateObject( ) out of WSH, how are we supposed to wire up events? Or do you mean that ConnectObject( ) would remain intact? I suppose there is plenty of redundancy here. I keep wondering why a .WSF's <object> objects aren't wired up to properly-named VBScript event handlers in the same <job>. Or are they and I just failed to ever try it? The documentation seems silent on this.

I seem to recall a few cases where using VBScript's CreateObject( ) was beneficial with ADO in ASP pages, though most of the time you ended up trashing connection pooling and transactability if you did this. Wish I remembered the details, but I'd have to do some excavation within several years of notes.

I'm also puzzled why somebody would ask when to use WScript.CreateObject( ) and when to use VBScript's CreateObject( ) in a WSH script. The Windows Script 5.6 CHM document seems to make it pretty clear, though you do have to get past the use of "sync" for "sink" under the description of WScript.CreateObject( ).

Keep in mind I'm looking at all of this from a user's perspective though. Looking back on my scripting experiences, I have to say the biggest pain in the rump for me was IE's lack of something approximating WSH's <reference> tag for loading type libraries. Or is it there, staring me in the face but undocumented?
I'd love to toss all of those include files of Consts. A VBScript Enum construct would have been nice too, while I'm wishing. The lack of a Sleep( ) in IE also rendered many an HTA more clunky than it needed to be. But now I'm REALLY digressing.

Those are a whole lot of interesting topics for future blogs!

I asked Eric when to use what simply because the topic has never adequately been explained in any "official" MS documentation or articles, and people such as myself are presently using one or the other out of a "cargo-cult" mentality, founded on various performance tests, hearsay, and guesswork. Therefore, it would be nice to have a definitive answer. Fundamentally, I wanted to know which object creation method I should use when I simply want to create and use an instance of the FileSystemObject (or a Dictionary/Recordset/etc), and what the performance/stability implications of the choice are. While I see that the ability to bind events and suchlike is useful and/or necessary to many, these don't really concern me personally. Therefore, as it stands, I'm not sure my question (or at least, my *intended* question) has been answered! lol 🙂

If you don't care about event binding, transactability, or DCOM then it doesn't matter which you call. I really have no idea what the performance implications of each are, other than the fact that object creation is EXTREMELY expensive no matter how you slice it. Object creation almost always requires hitting the registry and the disk, and those are not cheap. As I've said before: if you care deeply about performance, using late-bound unoptimized dynamic languages like VBScript and JScript is possibly a poor choice. There are early-bound, heavily optimized languages like C++ that are much more amenable to high performance.

<blockquote>Looking back on my scripting experiences I have to say the biggest pain in the rump for me was IE's lack of something approximating WSH's <reference> tag for loading type libraries.
[…] I'd love to toss all of those include files of Consts.</blockquote>

My empirical JScript testing showed that loading constants from a typelib is one or two orders of magnitude <i>slower</i> than preloading an engine with script which defines those same constants. Plus, the implementation of ATL in VC6 had a multi-threading bug (fixed in VC7) that caused the typelib to report that it failed to load every once in a while (even though it did load).

I wrote all the type library importation code; you are correct, it is NOT FAST compared to simply defining a bunch of consts/vars. There are good reasons why. I'll blog about them at some point. I too wish there was some way to do this in IE, but AFAIK, there is not.

Is the slow loading of constants from typelibs also true for VBScript? (Compared to using constants defined in the code, such as with ADOVBS.INC?)

Yes, JScript and VBScript share the code for everything they have in common: the engine framework, type library importer, regexp parser, etc.

Cheers for clearing that up – another cargo-cultist mantra shot down in flames! 🙁

IMHO, the mantra should be "when it matters, profile it." This is why the command-line (developer) version of our script host spits out both parse time and run time. There is also a "loop" command-line option so that we can measure run times after our copious lazy instantiation has done its thing.

(Sorry about the title. I work for Microsoft; we like nouns.)

Over a year ago now a reader noted in:

I am working on a project where Server.CreateObject is replaced with CreateObject all over the project. Though I know that this will improve performance in terms of memory overhead because of how object creation happens with CreateObject, I would like to have 'visible scenarios' that I create with code to show people, "look, here's the difference". Is it possible?
Can I create a simple project to simulate and see the performance differences between Server.CreateObject and CreateObject? Thanks. Anand.

Sure, you could do that, but that would be the wrong thing to do. Doing so would answer the question "is there a performance difference in a fake, contrived, small, unrealistic scenario?" Do the people you are going to show this to care about the performance of fake, contrived, small and unrealistic scenarios?

If the question you want to answer is "what is the performance difference in the real scenario that actually matters to our business?" then find a way to measure that.

This is almost certainly the wrong area for exploration. It is a waste of time and money to spend any time analyzing and remediating anything other than the _slowest fixable thing_. If the difference between CO and S.CO takes up 0.1% of the total time of your application, then fixing all of them cannot speed it up by more than 0.1%. There is probably a bigger win somewhere else; spend the time that you would have spent twiddling CreateObject calls on finding out where that bigger win is. When you do so, you'll then have the figures to show the people who write the cheques what their return on investment is.

@Eric: "WSH uses this weird thing with the string prefixes"

In fact this is weird. This means one cannot dynamically bind events after the COM object has been created, as the event function has to exist in the host namespace before the object creation. This is quite different from what is possible in an IE host and makes writing frameworks that work in both environments more difficult. For example, XMLHttpRequest is easier to use in synchronous mode in WSH. This is why almost no one uses event binding in WSH.
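The prefix-matching mechanism discussed in the article and comments (the host scanning its namespace for functions named prefix + event name) can be illustrated with a small language-neutral sketch. This is Python and purely illustrative; WSH does this through COM typeinfo, not like this, and the names `bind_events` and `fake` handlers here are mine:

```python
# Illustrative sketch (not a real WSH API): hook "events" up to handlers
# whose names are prefix + event name, the way WScript.CreateObject's
# second argument binds "Bar_" + "Frobbed" to a Sub named Bar_Frobbed.

def bind_events(namespace, prefix, event_names):
    """Return a mapping from event name to the handler found in the
    namespace under prefix + event name, skipping missing handlers."""
    handlers = {}
    for event in event_names:
        fn = namespace.get(prefix + event)
        if callable(fn):
            handlers[event] = fn
    return handlers

# The "script namespace": handlers must already exist before binding,
# which is exactly the limitation the last comment points out.
fired = []
script_namespace = {
    "Bar_Frobbed": lambda: fired.append("Help, I've been frobbed!"),
}

handlers = bind_events(script_namespace, "Bar_", ["Frobbed", "Twiddled"])
handlers["Frobbed"]()  # simulate the object raising its event

assert fired == ["Help, I've been frobbed!"]
assert "Twiddled" not in handlers  # no Bar_Twiddled defined, so unbound
```

The sketch also makes the trade-off visible: binding happens once, at creation time, by name, so nothing defined afterwards can ever receive events.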
https://blogs.msdn.microsoft.com/ericlippert/2004/06/01/whats-the-difference-between-wscript-createobject-server-createobject-and-createobject/
Last Updated on August 28, 2020

Real-world data often has missing values.

Kick-start your project with my new book Data Preparation for Machine Learning, including step-by-step tutorials and the Python source code files for all examples.

Let's get started.

Note: The examples in this post assume that you have Python 3 with Pandas, NumPy and Scikit-Learn installed, specifically scikit-learn version 0.22 or higher. If you need help setting up your environment, see this tutorial.

- Update Mar/2018: Changed link to dataset files.
- Update Dec/2019: Updated link to dataset to GitHub version.
- Update May/2020: Updated code examples for API changes. Added references.

Overview

This tutorial is divided into 6 parts.

1. Diabetes Dataset

The Diabetes Dataset involves predicting the onset of diabetes within 5 years.

2. Mark Missing Values

Most data has missing values, and the likelihood of having missing values increases with the size of the dataset.

Missing data are not rare in real data sets. In fact, the chance that at least one data point is missing increases as the data set size increases.

— Page 187, Feature Engineering and Selection, 2019.

Missing values are frequently indicated by out-of-range entries; perhaps a negative number (e.g., -1) in a numeric field that is normally only positive, or a 0 in a numeric field that can never normally be 0.

— Page 62, Data Mining: Practical Machine Learning Tools and Techniques, 2016.

Specifically, the following columns have an invalid zero minimum value:

- 1: Plasma glucose concentration
- 2: Diastolic blood pressure
- 3: Triceps skinfold thickness
- 4: 2-Hour serum insulin
- 5: Body mass index

When a predictor is discrete in nature, missingness can be directly encoded into the predictor as if it were a naturally occurring category.
— Page 197, Feature Engineering and Selection, 2019.

3. Missing Values Cause Problems

Missing values are common occurrences in data. Unfortunately, most predictive modeling techniques cannot handle any missing values. Therefore, this problem must be addressed prior to modeling.

— Page 203, Feature Engineering and Selection, 2019.

Many popular predictive models such as support vector machines, the glmnet, and neural networks, cannot tolerate any amount of missing values.

— Page 195, Feature Engineering and Selection, 2019.

Now, we can look at methods to handle the missing values.

4. Remove Rows With Missing Values

The simplest strategy for handling missing data is to remove records that contain a missing value.

The simplest approach for dealing with missing values is to remove entire predictor(s) and/or sample(s) that contain missing values.

— Page 196, Feature Engineering and Selection, 2019.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

The example runs successfully and prints the accuracy of the model. Removing rows with missing values can be too limiting on some predictive modeling problems; an alternative is to impute missing values.

5. Impute Missing Values

Imputing refers to using a model to replace missing values.

… missing data can be imputed. In this case, we can use information in the training set predictors to, in essence, estimate the values of other predictors.

— Page 42, Applied Predictive Modeling, 2013.

scikit-learn provides the SimpleImputer pre-processing class that can be used to replace missing values. It is a flexible class that allows you to specify the value to replace (it can be something other than NaN) and the technique used to replace it (such as mean, median, or mode). The SimpleImputer class operates directly on the NumPy array instead of the DataFrame. The example below uses the SimpleImputer class to replace missing values and evaluate a model on the transformed dataset.
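This is not the tutorial's exact listing (the dataset download is omitted here); the following is a minimal sketch of the same approach on synthetic data, using SimpleImputer inside a Pipeline with LDA as in the tutorial:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.impute import SimpleImputer
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic stand-in for the diabetes data: 200 rows, 3 features, some NaNs.
rng = np.random.RandomState(7)
X = rng.rand(200, 3)
y = (X[:, 0] + rng.rand(200) > 1.0).astype(int)
X[rng.rand(200, 3) < 0.1] = np.nan  # simulate ~10% missing values

# Impute with the column mean, then fit LDA; doing both inside a Pipeline
# means the imputer is fit only on each training fold (no data leakage).
pipeline = Pipeline([
    ("imputer", SimpleImputer(strategy="mean")),
    ("model", LinearDiscriminantAnalysis()),
])
scores = cross_val_score(
    pipeline, X, y, cv=KFold(n_splits=3, shuffle=True, random_state=1)
)
print(scores.mean())
```

Swapping `strategy="mean"` for `"median"` or `"most_frequent"` is a one-line change, which is the experiment the tutorial suggests trying.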
We use a Pipeline to define the modeling pipeline, where data is first passed through the imputer transform, then provided to the model. This ensures that the imputer and model are both fit only on the training dataset and evaluated on the test dataset within each cross-validation fold. This is important to avoid data leakage.

Note: Your results may vary given the stochastic nature of the algorithm or evaluation procedure, or differences in numerical precision. Consider running the example a few times and compare the average outcome.

Running the example prints the accuracy of LDA on the transformed dataset. Try replacing the missing values with other values and see if you can lift the performance of the model. Maybe missing values have meaning in the data.

For a more detailed example of imputing missing values with statistics, see the tutorial.

6. Algorithms that Support Missing Values

Naive Bayes can also support missing values when making a prediction.

One of the really nice things about Naive Bayes is that missing values are no problem at all.

— Page 100, Data Mining: Practical Machine Learning Tools and Techniques, 2016.

There are also algorithms that can use the missing value as a unique and different value when building the predictive model, such as classification and regression trees.

… a few predictive models, especially tree-based techniques, can specifically account for missing data.

— Page 42, Applied Predictive Modeling, 2013.

Sadly, the scikit-learn implementations of naive bayes, decision trees and k-Nearest Neighbors are not robust to missing values, although support is being considered. Nevertheless, this remains an option if you consider using another algorithm implementation (such as xgboost) or developing your own implementation.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Related Tutorials

Books

- Feature Engineering and Selection, 2019.
- Data Mining: Practical Machine Learning Tools and Techniques, 2016.
- Applied Predictive Modeling, 2013.

APIs

Hi friend, I need that dataset "Pima-Indians-diabetes.csv". How can I access it? It is not available on this site.

All datasets are here:

thnx Jason

You're welcome.

Hi, I have a data set with 3 lakh (300,000) rows and 278 columns. I used MissForest to impute missing values. But the system (HP Pavilion Intel i5 with 12GB RAM) runs for a long time and still didn't complete. Can you suggest any easy way? Should I use any loop?

Perhaps use less data? Perhaps fit on a faster machine?

Type diabetes dataset in the link below.

Please tell me how to impute the median using one dataset.

Please tell me, in case I use the fancyimpute library, how to predict for X_test?

You helped me keep my sanity. THANK YOU!!

I'm really glad to hear that Patricia!

How do I know whether to apply the mean or to replace with the mode?

Try both and see what results in the most skillful models.

Hi Sachin, Mode is affected by outliers whereas Mean is less affected by outliers. Please correct me if I am wrong @Jason

I think you meant "Median" is not affected by outliers. "Mode" is just the most common value.

If I have an 11×11 table and there are 20 missing values in there, is there a way for me to write code that creates a list after identifying these values? Let us say that the first column has names and the first row has Day 1 to 10. Some of the names do not show up on all of the days and therefore there are missing gaps. I put this table into the code and, rather than reading the table, I get a list like:

Name, day 2, day 5, day 7
Name, Day 1, day 6

I understand that this could take some time to answer, but if you are able to just tell me that this is possible, and maybe know of a good place to start on this project, that would be of great help!

Sure, if the missing values are marked with a nan or similar, you can retrieve rows with missing values using Pandas.
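A small pandas sketch of both steps discussed above; the column names and toy values here are my own illustration, not the tutorial's listing: mark invalid zeros as NaN, then retrieve the rows that contain missing values.

```python
import numpy as np
import pandas as pd

# Toy frame standing in for the diabetes data: zeros in these columns
# are physiologically impossible, so they really mean "missing".
df = pd.DataFrame({
    "glucose": [148, 85, 0, 89],
    "pressure": [72, 66, 64, 0],
    "mass": [33.6, 26.6, 23.3, 28.1],
})

# Mark invalid zeros as NaN so pandas/scikit-learn treat them as missing.
df[["glucose", "pressure"]] = df[["glucose", "pressure"]].replace(0, np.nan)

# Count missing values per column, then pull out the affected rows.
print(df.isnull().sum())
rows_with_missing = df[df.isnull().any(axis=1)]
print(rows_with_missing)
```

The same `df.isnull().any(axis=1)` mask also answers the 11×11 table question above: it selects exactly the rows that have at least one gap.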
Can we code our own algorithms to impute the missing values? If it is possible, then how can I implement it?

Yes. You can write some if-statements and fill in the n/a values in the Pandas dataframe. I would recommend using statistics or a model as well and comparing results.

Hi Jason, I am trying to prepare data for the TITANIC dataset. One of the columns is CABIN, which has values like 'A22', 'B56' and so on. This column has the maximum number of missing values. First I thought to delete this column, but I think it could be an important variable for predicting survivors. I am trying to find a strategy to fill these null values. Is there a way to fill alphanumeric blank values? If there is no automatic way, I was thinking of filling these records based on the Name, number of siblings, parent-child and class columns. E.g. for a missing value, try to see if there are any relatives and use their cabin number to replace the missing value. A similar case is the AGE column, which is also missing. Any thoughts?

Sounds like a categorical variable. You could encode them as integers. You could also assign an "unknown" integer value (e.g. -999) for the missing value. Perhaps you can develop a model to predict the cabin number from other details and see if that is skilful.

Good day, I ran this code:

pd.read_csv(r'C:\Users\Public\Documents\SP_dow_Hist_stock.csv', sep=',').pct_change(252)

and it gave me missing values (NaN) for the return of the stock. How do I resolve it?
pd.read_csv(r’C:\Users\Public\Documents\SP_dow_Hist_stock.csv’,sep=’,’) Out[5]: Unnamed: 0 S&P500 Dow Jones 0 Date close Close 1 1-Jan-17 2,275.12 24719.22 2 1-Jan-16 1,918.60 19762.60 3 1-Jan-15 2,028.18 17425.03 4 1-Jan-14 1,822.36 17823.07 5 1-Jan-13 1,480.40 16576.66 6 1-Jan-12 1,300.58 13104.14 7 1-Jan-11 1,282.62 12217.56 8 1-Jan-10 1,123.58 11577.51 9 1-Jan-09 865.58 10428.05 10 1-Jan-08 1,378.76 8776.39 11 1-Jan-07 1,424.16 13264.82 12 1-Jan-06 1,278.73 12463.15 13 1-Jan-05 1,181.41 10717.50 14 1-Jan-04 1,132.52 10783.01 15 1-Jan-03 895.84 10453.92 16 1-Jan-02 1,140.21 8341.63 17 1-Jan-01 1,335.63 10021.57 18 1-Jan-00 1,425.59 10787.99 19 1-Jan-99 1,248.77 11497.12 20 1-Jan-98 963.36 9181.43 21 1-Jan-97 766.22 7908.25 22 1-Jan-96 614.42 6448.27 23 1-Jan-95 465.25 5117.12 24 1-Jan-94 472.99 3834.44 25 1-Jan-93 435.23 3754.09 26 1-Jan-92 416.08 3301.11 27 1-Jan-91 325.49 3168.83 28 1-Jan-90 339.97 2633.66 29 1-Jan-89 285.4 2753.20 .. … … … 68 1-Jan-50 16.88 235.42 69 1-Jan-49 15.36 200.52 70 1-Jan-48 14.83 177.30 71 1-Jan-47 15.21 181.16 72 1-Jan-46 18.02 177.20 73 1-Jan-45 13.49 192.91 74 1-Jan-44 11.85 151.93 75 1-Jan-43 10.09 135.89 76 1-Jan-42 8.93 119.40 77 1-Jan-41 10.55 110.96 78 1-Jan-40 12.3 131.13 79 1-Jan-39 12.5 149.99 80 1-Jan-38 11.31 154.36 81 1-Jan-37 17.59 120.85 82 1-Jan-36 13.76 179.90 83 1-Jan-35 9.26 144.13 84 1-Jan-34 10.54 104.04 85 1-Jan-33 7.09 98.67 86 1-Jan-32 8.3 60.26 87 1-Jan-31 15.98 77.90 88 1-Jan-30 21.71 164.58 89 1-Jan-29 24.86 248.48 90 1-Jan-28 17.53 300.00 91 1-Jan-27 13.4 200.70 92 1-Jan-26 12.65 157.20 93 1-Jan-25 10.58 156.66 94 1-Jan-24 8.83 120.51 95 1-Jan-23 8.9 95.52 96 1-Jan-22 7.3 98.17 97 1-Jan-21 7.11 80.80 [98 rows x 3 columns] pd.read_csv(r’C:\Users\Public\Documents\SP_dow_Hist_stock.csv’,sep=’,’).pct_change(251) Out[7]: Unnamed: 0 S&P500 Dow Jones 0 NaN NaN NaN 1 NaN NaN NaN 2 NaN NaN NaN 3 NaN NaN NaN 4 NaN NaN NaN 5 NaN NaN NaN 6 NaN NaN NaN 7 NaN NaN NaN 8 NaN NaN NaN 9 NaN NaN NaN 10 NaN NaN NaN 11 NaN 
NaN NaN 12 NaN NaN NaN 13 NaN NaN NaN 14 NaN NaN NaN 15 NaN NaN NaN 16 NaN NaN NaN 17 NaN NaN NaN 18 NaN NaN NaN 19 NaN NaN NaN 20 NaN NaN NaN 21 NaN NaN NaN 22 NaN NaN NaN 23 NaN NaN NaN 24 NaN NaN NaN 25 NaN NaN NaN 26 NaN NaN NaN 27 NaN NaN NaN 28 NaN NaN NaN 29 NaN NaN NaN .. … … … 68 NaN NaN NaN 69 NaN NaN NaN 70 NaN NaN NaN 71 NaN NaN NaN 72 NaN NaN NaN 73 NaN NaN NaN 74 NaN NaN NaN 75 NaN NaN NaN 76 NaN NaN NaN 77 NaN NaN NaN 78 NaN NaN NaN 79 NaN NaN NaN 80 NaN NaN NaN 81 NaN NaN NaN 82 NaN NaN NaN 83 NaN NaN NaN 84 NaN NaN NaN 85 NaN NaN NaN 86 NaN NaN NaN 87 NaN NaN NaN 88 NaN NaN NaN 89 NaN NaN NaN 90 NaN NaN NaN 91 NaN NaN NaN 92 NaN NaN NaN 93 NaN NaN NaN 94 NaN NaN NaN 95 NaN NaN NaN 96 NaN NaN NaN 97 NaN NaN NaN [98 rows x 3 columns] Perhaps post your code and issue to stackoverflow? Hi Jason, Thanks for your valuable writing. I have one question :- We can also replace NaN values with Pandas fillna() function. In my opinion this is more versatile than Imputer class because in a single statement we can take different strategies on different column. df.fillna({‘A’:df[‘A’].mean(),’B’:0,’C’:df[‘C’].min(),’D’:3}) What is your opinion? Is there any performance difference between two? Great tip. No idea, on the performance difference. Is there a recommended ratio on the number of NaN values to valid values , when any corrective action like imputing can be taken? If we have a column with most of the values as null, then it would be better off to ignore that column altogether for feature? No, it is problem specific. Perhaps run some experiments to see how sensitive the model is to missing values. Hi Jason, Thanks for this post, I wanted to ask, how do we impute missing text values in a column which has either text labels or blanks. Good question, I’m not sure off hand. Perhaps start with simple masking of missing values. 
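The per-column fillna() dictionary mentioned above, plus a most-frequent-value fill for a text column, can be sketched as follows (the frame and column names are invented for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical frame: two numeric columns and one text column with blanks
df = pd.DataFrame({
    "A": [1.0, np.nan, 3.0],
    "B": [np.nan, 5.0, 7.0],
    "C": ["x", None, "x"],
})

# One statement, a different strategy per column
filled = df.fillna({
    "A": df["A"].mean(),                    # numeric: mean
    "B": 0,                                 # numeric: constant
    "C": df["C"].value_counts().index[0],   # text: most frequent label
})

print(filled["A"].tolist())  # [1.0, 2.0, 3.0]
print(filled["C"].tolist())  # ['x', 'x', 'x']
```

This flexibility per column is the main advantage of fillna() over a single imputer applied uniformly.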
To fill the nan for a categorical column df = df.fillna(df[‘column’].value_counts().index[0]) This fills the missing values in all columns with the most frequent categorical value Thanks a lot Jason ! but I have a little question, how about if we want to replace missing values with the mean of each ROW not column ? how to do that ? if you have any clue, please tell me.. Thank you again Jason. Why would you do this? numpy.mean() allows you to specify the axis on which to calculate the mean. It will do it for you. Hi Jason, I wanted to ask you how you would deal with missing timestamps (date-time values), which are one set of predictor variables in a classification problem. Would you flag and mark them as missing or impute them as the mode of the rest of the timestamps? Here are some ideas: Hi Jason, A big fan of yours. I have a question about imputing missing numerical values. I don’t really want to remove them and I want to impute them to a value that is like Nan but a numerical type? Would say coding it to -1 work? (0 is already being used). I guess I am trying to achieve the same thing as categorising an nan category variable to unknown and creating another feature column to indicate that it is missing. Thanks, NaN is a numerical type. It is a valid float. You could use -999 or whatever you like. Be careful that your model can support them, or normalize values prior to modeling. Hello Jason, You mentioned this here: “if you choose to impute with mean column values, these mean column values will need to be stored to file for later use on new data that has missing values.”, but I wanted to ask: Would imputing the data before creating the training and test set (with the data set’s mean) cause data leakage? What would be the best approach to tackle missing data within the data pipeline for a machine learning project. Let’s say I’m imputing by filling in with the mean. For the model tuning am I imputing values in the test set with the training set’s mean? Yes. 
You want to calculate the value to impute from train and apply to test. The sklearn library has an imputer you can use in a pipeline: Hi Jason, Thanks again for that huge nice article! is there a neat way to clean away all those rows that happen to be filled with text (i.e. strings) in a certain column, i.e. List.ImportantColumn . This destroys my plotting with “could not convert string to float” Thanks already in advance! Yes, you can remove or replace those values with simple NumPy array indexing. For example, if you have ‘?’ you can do: Hi Jason, I tried using this dropna to delete the entire row that has missing values in my dataset and after which the isnull().sum() on the dataset also showed zero null values. But the problem arises when i run an algorithm and i am getting an error. Error : Input contains NaN, infinity or a value too large for dtype(‘float64’) This clearly shows there still exists some null values. How do i proceed with this thanks in advance Perhaps print the contents of the prepared data to confirm that the nans were indeed removed? Hi Jason, Thanks for this post, I’m using CNN for regression and after data normalization I found some NaN values on training samples. How can I use imputer to fill missing values in the data after normalization. Does the above tutorial not help? should I apply Imputer function for both training and testing dataset? Yes, but if the imputer has to learn/estimate, it should be developed from the training data and aplied to the train and test sets, in order to avoid data leakage. I feel that Imputer remove the Nan values and doesn’t replace them. For example the vector features length in my case is 14 and there are 2 Nan values after applying Imputer function the vector length is 12. This means the 2 Nan values are removed. However I used the following setting: imputer = Imputer(missing_values=np.nan, strategy=’mean’, axis=0) I don’t know what is happening in your case, perhaps post/search on stackoverflow? 
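A minimal sketch of that leakage-free recipe with scikit-learn's SimpleImputer (synthetic arrays here, and assuming a scikit-learn version where the imputer lives in sklearn.impute):

```python
import numpy as np
from sklearn.impute import SimpleImputer

X_train = np.array([[1.0, np.nan], [3.0, 4.0]])
X_test = np.array([[np.nan, 8.0]])

imputer = SimpleImputer(strategy="mean")
imputer.fit(X_train)                       # statistics estimated on train only
X_train_filled = imputer.transform(X_train)
X_test_filled = imputer.transform(X_test)  # train means applied to test

print(X_test_filled.tolist())  # [[2.0, 8.0]] -- 2.0 is the train mean of column 0
```

Because fit() is only ever called on the training split, no information from the test set leaks into the imputed values.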
You mean I should fit it on training data then applied to the train and test sets as follow : imputer = Imputer(strategy=”mean”, axis=0) imputer.fit(X_train) X_train = imputer.transform(X_train) X_test = imputer.transform(X_test) Looks good. Thanks for this post!!! A dataSet having more than 4000 rows and rows can be groupby their 1st columns and let there is many columns (assume 20 columns) and few columns(let 14 columns) contains NaN(missing value). How we populate NaN with mean of their corresponding columns by iterative method(using groupby, transform and apply) . Sorry, I don’t understand. Perhaps you can elaborate your question? actually i want to fill missing value in each column. Value is the mean of corresponding column. Is there any iterative method? What do you mean by iterative method? Is it iterative imputer? where missing value acts as dependent variable and independent variables are other features No. After replacing zeroes,Can I save it as a new data set? Yes, call to_csv() on the dataframe. what does this mean? It is a function, learn more here: import numpy as np import pandas as pd mydata = pd.read_csv(‘diabetes.csv’,header=None) mydata.head(20) 0 1 2 3 4 5 6 7 8 0 Pregnancies Glucose BloodPressure SkinThickness Insulin BMI DiabetesPedigreeFunction Age Outcome 1 6 148 72 35 0 33.6 0.627 50 1 2 1 85 66 29 0 26.6 0.351 31 0 3 8 183 64 0 0 23.3 0.672 32 1 4 1 89 66 23 94 28.1 0.167 21 0 5 0 137 40 35 168 43.1 2.288 33 1 print((mydata[0] == 0).sum()) — for any column it always shows 0 0 >>>>>>>…. any thing wrong here ? whereas i have 0’s in dataset 0 Pregnancies 1 6 2 1 3 8 4 1 5 0>>>>>>>>> 6 5 7 3 8 10 9 2 10 8 11 4 12 10 13 10 14 1 15 5 16 7 17 0 >>>>>> Perhaps post your code and issue to stackoverflow? Hello, More than one year later, I have the same problem as you. When i search for 0 it does not work. However, when I look for ‘0’ it does, which means the table is filled with strings and not number… Any idea how I can handle that? 
Best Regards Perhaps your data was loaded as strings? Try converting it to numbers: Hi sir, For my data after executing following instructions still I get same error dataset= dataset.replace(0, np.NaN) dataset.dropna(inplace=True) dataset= dataset.replace(0, np.Inf) dataset.dropna(inplace=True) print(dataset.describe()) F1 F2 F3 F4 count 1200.000000 1200.000000 1200.000000 1200.000000 mean 0.653527 0.649447 1.751579 inf std 0.196748 0.194933 0.279228 NaN min 0.179076 0.179076 0.731698 0.499815 25% 0.507860 0.506533 1.573212 1.694007 50% 0.652066 0.630657 1.763520 1.925291 75% 0.787908 0.762665 1.934603 2.216663 max 1.339335 1.371362 2.650390 inf How can I get out from this problem. Sorry to hear that, perhaps try posting your code and question to stackoverflow? df.replace(-np.Inf, 0 ) df.replace(np.Inf, 0 ) how can we impute the categorical data in python You can use an integer encoding (label encoding), a one hot encoding or even a word embedding. Hi Jason I am new to Python and I was working through the example you gave. For some reason, When I run the piece of code to count the zeros, the code returns results that indicate that there are no zeros in any of those columns. Can you please assist? Sorry to hear that, I have some suggestions here: Hi Jason, Great post. Thanks so much. Say I have a dataset without headers to identify the columns, how can I handle inconsistent data, for example, age having a value 2500 without knowing this column captures age, any thoughts? You can use statistics to identify outliers: Hi Jason, Nice article. How can we add (python) another feature indicating a missing value as 1 if available and 0 if not? Is that a sensible solution? Thank you. Great question. You could loop over all rows and mark 0 and 1 values in a another array, then hstack that with the original feature/rows. Pima Indians Diabetes Dataset doesn’t exist anymore 🙁 Thanks, I have updated the link. What is the current situation in AutoML field? 
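Two recurring problems in this part of the thread — columns that were loaded as strings, and 0/inf placeholders — can be handled together in pandas. A sketch on an invented frame:

```python
import numpy as np
import pandas as pd

# Hypothetical frame where numbers arrived as strings
df = pd.DataFrame({"F1": ["0", "1.5", "2.0"], "F2": ["3", "4", "inf"]})

# Columns loaded as strings: coerce to numeric first
df = df.apply(pd.to_numeric, errors="coerce")

# Then mark zeros and infinities as missing and drop those rows
df = df.replace([0, np.inf, -np.inf], np.nan).dropna()

print(df.values.tolist())  # [[1.5, 4.0]]
```

Searching for 0 fails on a string column because '0' != 0; converting with pd.to_numeric first is what makes the replace/dropna step work.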
What researchers try to bring out actually? Good question, I need to learn more about that field. Let me know ,once you get to know about that someday. Thank you for your response!! [Ignore earlier misplaced post] Jason, Many thanks for your work in preparing these awesome tutorials! In order to fill missing values with mean column values, I had to switch from: from sklearn.preprocessing import Imputer … # fill missing values with mean column values imputer = Imputer() To: from sklearn.impute import SimpleImputer … imputer = SimpleImputer(missing_values=numpy.NaN, strategy=’mean’) Thanks for sharing. Jason, thanks a lot for your article,very useful. Regards You’re welcome. Hi Jason, great tutorial! If I were to impute values for time series data, how would I need to approach it? My dataset has data for a year and data is missing for about 3 months. Is there any way to salvage this time series for forecasting? See this: Thanks Jason! You’re welcome. I have near about 4 lakhs of data. The shape of my dataset is (400000,114). I want to first impute the data and then apply feature selection such as RFE so that I could train my model with only the important features further instead of all 114 features. But I am unable to understand how after using SimpleImputer and MinMax scaler to normalize the data as : values = dataset.values imputer = SimpleImputer() imputedData = imputer.fit_transform(values) scaler = MinMaxScaler(feature_range=(0, 1)) normalizedData = scaler.fit_transform(imputedData) How will we use this normalized data ?? Because on normal dataset further I am making X,Y labels as: X = dataset.drop([‘target’], axis=1) y = dataset.target How RFE will be used here further ? Whether on X and y labels or before that do we have to convert all X labels to normalized data ? Also training this huge amount of data with Random Forest or Logistic Regression for RFE is taking much of time ? So is a better solution available for training ? 
Perhaps use a smaller sample of your data to start with. I have tried it with smaller set of data which is working fine. But in a requirement I have to use this large sized i.e. 4 lakhs of data with 114 features. Also RFE on RandomForest is taking a huge amount of time to run. And if I go with model = LogisticRegression(‘saga’), then the amount of time is less but I am dealing with warnings which I am unable to resolve as: ConvergenceWarning: The max_iter was reached which means the coef_ did not converge “the coef_ did not converge”, ConvergenceWarning) How should I go further for feature selection on this large dataset ? Thank you very much !! Perhaps fit on less data, at least initially. I would recommend developing a pipeline so that the imputation can be applied prior to scaling and feature selection and the prior to any modeling. Hi Jason I have Time Series Data so i need to fill missing values , so which is best technique to fill time series data ? See this tutorial: Hello Jason If we impute a column in our dataset the data distribution will change, and the change will depend on the imputation strategy. This in turns will affect the different ML algorithms performance. We are tuning the prediction not for our original problem but for the “new” dataset, which most probably differ from the real one. My question is, for avoiding error predictions or overestimated performance of our algorithm, shouldn’t we avoid having any NA’s imputed values in our test dataset? I guess We can use them in the training dataset and using different imputation techniques to check performance of the algorithms on the test data (without imputed NA’s). Thanks Bruno Yes. Great question! Test a few strategies and use the approach that results in a model that has the best skill. Hi Jason, First of all great job on the tutorials! This is my go to place for Machinel earning now. I am trying to impute values in my dataset conditionally. 
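The pipeline approach recommended above — impute, then scale, then select features, then model — might be sketched like this on synthetic data (the step names and hyperparameters are arbitrary, chosen only for illustration):

```python
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(1)
X = rng.rand(100, 6)
X[::7, 2] = np.nan                    # inject some missing values
y = (X[:, 0] > 0.5).astype(int)       # synthetic binary target

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),
    ("scale", MinMaxScaler()),
    ("select", RFE(LogisticRegression(solver="liblinear"), n_features_to_select=3)),
    ("model", LogisticRegression(solver="liblinear")),
])
pipe.fit(X, y)
print(pipe.score(X, y))
```

Wrapping the steps in one Pipeline guarantees the imputation and scaling always happen before RFE, and the whole chain can later be cross-validated as a single estimator.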
Say I have three columns, If Column 1 is 1 then Column 2 is 0 and Column 3 is 0; If column 1 is 2 then Column 2 is Mean () and Column 3 is Mean(). I tried running an if statement with the function any() and defined the conditions separately. However the conditions are not being fulfilled based on conditions, I am either getting all mean values or all zeroes. I have posted this on Stackoverflow and haven’t gotten any response to help me with this.Please do suggest what should I apply to get this sorted. Thanks a lot! Thanks! Perhaps try writing the conditions explicitly and enumerate the data, rather than using numpy tricks? It will be slower but perhaps easier to debug. Yes, I used iloc to define the conditions separately. Worked fine. Thanks! Great! Thanks for the post. Please correct me if i am wrong. Applying these techniques for training data works for me. However, if the data in real-time (test data) is received with standard inverval (100 milliseconds), then algorithms suchs as LGBM, XGBoost and Catboost (scikit) with inherent capabilities can be used. Your Weka post on missing values by defining threshold works great. 100ms is a long time for a computer, I don’t see the problem with using imputation. Thanks for the reply. Just a clarification. If one instance of data from several sensors arrive with some missing values for every 100ms, is it possible to classify based on the current instance alone. (one instance at a time). My presumption is that we need multiple instances to calculate the statistics even for stream data. Sorry. A bit confused on this. Generally, you can frame the prediction problem any way you wish, e.g. based on the data you have and the data you need at prediction time. Then train a model based on that framing of the problem. Thanks a lot replying with patience. I’m here to help if I can. 
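The conditional, per-column imputation discussed above can be written explicitly with pandas .loc masks rather than any() tricks — a sketch on made-up columns c1/c2/c3:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "c1": [1, 2, 1, 2],
    "c2": [np.nan, np.nan, 5.0, 3.0],
    "c3": [np.nan, 4.0, 6.0, np.nan],
})

# Where c1 == 1: fill c2/c3 with 0; where c1 == 2: fill with the column mean
for col in ["c2", "c3"]:
    col_mean = df[col].mean()  # mean of the observed values only
    df.loc[(df["c1"] == 1) & df[col].isnull(), col] = 0
    df.loc[(df["c1"] == 2) & df[col].isnull(), col] = col_mean

print(df["c2"].tolist())  # [0.0, 4.0, 5.0, 3.0]
print(df["c3"].tolist())  # [0.0, 4.0, 6.0, 5.0]
```

Each mask combines the condition on c1 with isnull() on the target column, so only genuinely missing cells are overwritten — which avoids the "all means or all zeroes" symptom described above.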
Dear Dr Jason, Background information and question: Background information: Instead of playing around with the “horse colic” data with missing data, I constructed a smaller version of the iris data. I had to shuffle the data to get an even spread of species 0, 1 or 2. Otherwise if I took the first 20 rows the last column would be full of species 0. Hence my shuffling of the data. I’ve had great success in predicting the kind of species. So my iris20 data looks like this – the first four columns are in the correct order of the original iris data and the last column are a variety of species. . I removed 10 values ‘at random’ from my iris20 data, called it iris20missing Question: I have successfully been able to predict the kind of species of iris whether it is species 0, 1, 2. Examples: My question: In listing 8.19, 3rd last line, page 84 (101 of 398): row is enclosed in brackets [row]. that is we have for example row = [[6.3 ,NaN,4.4 ,1.3]] Why please do we double enclose the array in predict function? When I do this I get errors. Thank you for your time, Anthony of Sydney Why enclose row as [row] since row is already enclosed by brackets. That is why .predict([row]) and not .predict(row) Nice work! The predict() function expects a 2d matrix input, one row of data represented as a matrix is [[a,b,c]] in python. First, Thanks in advance for your reply. It is appreciated. A question on your answer please. Background info In the above example we had to structure the variable ‘row’ as a 2d matrix for use in the predict() function. Here ‘row’ is changed from an array of size 4 to a 1 x 4 matrix. I’ve worked out that one can construct an n x m matrix and have the model predict for an n x m matrix To illustrate: Recall in my above example I made a series of rows and made individual predictions on the model with these rows: Now if we made an n x m matrix and feed that n x m matrix into the predict() function we should expect the same outcomes as individual predictions. 
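That expectation — one n×m matrix giving the same answers as n separate 1×m calls — can be checked with a tiny made-up model (not the iris data above):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy 1-feature classifier
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
model = LogisticRegression(solver="liblinear").fit(X, y)

rows = [[0.0], [2.5], [3.0]]

# n separate 1 x m predictions...
one_by_one = [model.predict([row])[0] for row in rows]
# ...versus a single n x m prediction
batch = model.predict(rows)

print(one_by_one == list(batch))  # True
```

So wrapping a single row as [row] and stacking many rows into one matrix are just the 1×m and n×m special cases of the same 2D input contract.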
Result is the same as if making individual predictions. Hence I understand the predict() function expecting a matrix and if predicting for single rows, make the single row into a 1xm matrix. Conclusion: the predict() function expects a matrix, and we can make an n x m matrix containing the rows of what we want to predict AND get multiple results. Thank you again in advance Anthony of Sydney Yes. Perhaps this will help clarify: Dear Dr Jason, Thank you for the blog at. Relevant to answer my question about prediction are the sections “Class Predictions”, “Single Class Predictions” and “Multiple Class Predictions”. (These are presented in order of 1, 3 and 2 ). The variable Xnew is of the structure [[],[]] which is a 2D structure. This is for one prediction. In the multiple class predictions, Xnew is a 2D matrix. In the example it is a 3 x 2 2D matrix In both cases of single or multiple class predictions we feed the 2D matrix in the form In sum predicting requires our feature matrix to be 2D whether 1 x m or n x m, where 1 or n are the number of predictions and m being the number of features. Thank you, Anthony of Sydney Dear Dr Jason, I wish to share my two ways of deleting specific rows from a dataset as per subheading 4, “Remove Rows With Missing Values” HOW TO DELETE SPECIFIC VALUES FROM SPECIFIC COLUMNS – TWO METHODS Method #1 as per heading 4 = listing 7.16 on p73 (90 of 398) of your book. Method #2 – using arrays The last method was presented in case your data set is not as a DataFrame. Thank you, Anthony of Sydney Thanks for sharing! Hi Jason, I was just wondering if data imputing (e.g. replacing all missing values by the arithmetic mean of the corresponding column) in fact results in data leakage, implementing bias into the model during training? Such data imputing will, after all, fill up the dataset with information provided by instances (rows) that should be unseen by the model while training. If that is indeed a problem, what would you recommend we do? 
Would it be better to add data imputing to the pipeline and thus, implement it separately for each fold of cross validation, together with other feature selection, preprocessing, and feature engineering steps? Thanks a lot, Levente It doesn’t as long as you only use the training data to calculate stats.
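Putting the imputer inside a Pipeline and cross-validating makes that automatic: the imputation statistics are re-estimated from each training fold only, never from the held-out fold. A sketch with synthetic data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier

rng = np.random.RandomState(7)
X = rng.rand(60, 4)
X[::5, 1] = np.nan                    # inject missing values
y = rng.randint(0, 2, size=60)        # synthetic labels

pipe = Pipeline([
    ("impute", SimpleImputer(strategy="mean")),
    ("model", DecisionTreeClassifier(random_state=7)),
])

# The imputer is refit on the training portion of each fold
scores = cross_val_score(pipe, X, y, cv=3)
print(len(scores))  # 3
```

Because cross_val_score clones and refits the whole pipeline per fold, the test fold in each split is imputed with means it never contributed to.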
https://machinelearningmastery.com/handle-missing-data-python/
The esusers realm is the default Shield realm. It enables the registration of users and passwords and associates those users with roles. The esusers command-line tool assists with the registration and administration of users. Like all other realms, the esusers realm is configured under the shield.authc.realms settings namespace in the elasticsearch.yml file. The following snippet shows an example of such configuration: Example esusers Realm Configuration. shield: authc: realms: default: type: esusers order: 0 When no realms are explicitly configured in elasticsearch.yml, a default realm chain is created that holds a single esusers realm. If you only wish to work with the esusers realm and you're satisfied with the default file paths, there is no real need to add the above configuration.
https://www.elastic.co/guide/en/shield/shield-1.3/esusers.html
Table of Contents: IRule · Location header · @EnableZuulProxy vs. @EnableZuulServer · @EnableZuulServer Filters · @EnableZuulProxy Filters · @Primary · Setting autoCommitOffset to false and Relying on Manual Acking · republishToDlq=false · republishToDlq=true · @SpanName Annotation · toString() method · TracingFilter · WebClient · HttpClientBuilder and HttpAsyncClientBuilder · HttpClient · UserInfoRestTemplateCustomizer · @Async Annotated Methods · @Scheduled Annotated Methods · Content-Type · Template and Version · HttpServerStub · PropertySourceLocator behavior. (Finchley.RELEASE) Implement BackOffPolicyFactory and return the BackOffPolicy you would like to use for a given service, as shown in the following example: ); } } WebClient can be configured to use the LoadBalancerClient. LoadBalancerExchangeFilterFunction is auto-configured if spring-webflux is on the classpath. (Finchley.RELEASE) …a git repository (which must be provided), as shown in the following example: spring: cloud: config: server: git: uri:: pom.xml. Repository. With your config server running, you can make HTTP requests to the server to retrieve values from the Vault backend. To do so, you need a token for your Vault server. First, place some data in your Vault, as shown in the following example: $ vault write secret/application foo=bar baz=bam $ vault write … you can either set the key as a PEM-encoded text value (in encrypt.key) or use a keystore (such as the keystore created by the keytool utility that comes with the JDK). The following table describes the keystore properties: The Config Server runs best as a standalone application. However, if need be, you can embed it in another application. To do so, use the @EnableConfigServer annotation. An optional property named spring.cloud.config.server.bootstrap can be useful in this case. It is a flag to indicate whether the server should configure itself from its own remote repository. By default, the flag is off, because it can delay startup.
However, when embedded in another application, it makes sense to initialize the same way as any other application. If you want an embedded config server with no endpoints, you can switch off the endpoints entirely by not using the @EnableConfigServer annotation (set spring.cloud.config.server.bootstrap=true). Many source code repository providers (such as Github, Gitlab, or Gitee) … An application's username … /{name}/ … read timeout; this can be done by using the property spring.cloud.config.request-read. (Finchley) or management endpoint path (such as management.contextPath=/admin). The following example shows the default values for the two settings: application.yml. eureka: instance: statusPageUrlPath: ${management.server.servlet.context-path}/info healthCheckUrlPath: ${management.server.servlet.context-path} and Turbine, backend" } The BackOffPolicyFactory, which is used to create a BackOffPolicy for a given service, as shown in the following example: @Configuration public class MyConfiguration { @Bean LoadBalancedBackOffPolicyFactory backOffPolicyFactory() { return new LoadBalancedBackOffPolicy. Finchley.RELEASE This project provides OpenFeign integrations for Spring Boot apps through autoconfiguration and binding to the Spring Environment and other Spring programming model idioms. CloseableHttpClient when using Apache or OkHttpClient when using OkHttp.; } } OtherClass.someMethod(myprop.get()); } } stripped). The proxy uses Ribbon to locate an instance to forward to via discovery, and all requests are executed in a <<hystrix-fallbacks-for-routes, hystrix command>>, so failures will show up in Hystrix metrics, and once the circuit is open the proxy will not try to contact the service. Spring Cloud Stream introduces a number of new features, enhancements, and changes. The following sections outline the most notable ones: MeterRegistry is also provided as a bean so that custom applications can autowire it to capture custom metrics. See "Chapter 35, 30, 25,.
…because the payload of the message is not yet converted from the wire format (byte[]) to the desired type. In other words, it has not yet gone through the type conversion process described in Chapter 30, Content Type Negotiation. So, unless you use a SpEL expression that evaluates raw data (for example, the value of the first byte in the byte array), use message header-based expressions (such as condition = "headers['type']=='dog'"). (throw an exception to reject the message) Errors happen, and Spring Cloud Stream provides several flexible mechanisms to handle them. The error handling comes in two flavors: Spring Cloud Stream uses the Spring Retry library to facilitate successful message processing. See Section … In the preceding example, the destination name is input.myGroup and the dedicated error channel name is input.myGroup.errors. Also, in the event you are binding … document overrides the one provided by the framework. See Chapter 30, "Content Type Negotiation". Default: null (no type coercion is performed). The binder used by this binding. See "Section 28 …" It allows customizing the instance index of this consumer (if different from spring.cloud.stream.instanceIndex). When set to a negative value, it defaults to spring.cloud.stream.instanceIndex. See Section 32.2, "Instance Index and Instance Count" for more information. Default: -1. NOTE: Do not expect Message to be converted into some other type based only on the contentType. Remember that the contentType is complementary to the target type. If you wish, you can provide a hint, which a MessageConverter may or may not take into consideration. See Chapter 31, "Schema Evolution Support" for details. The name of the metric being emitted. Should be a unique value per application.
Default: ${spring.application.name:${vcap.application.name:${spring.config.name:application}}} Allows white listing application properties that are added to the metrics payload Default: null. Pattern to control the 'meters' one wants to capture. For example, specifying spring.integration.* captures metric information for meters whose name starts with spring.integration. Default: all 'meters' are captured. } ] } For Spring Cloud Stream samples, see the spring-cloud-stream-samples repository on GitHub. On CloudFoundry, services are usually exposed through a special environment variable called VCAP_SERVICES. When configuring your binder connections, you can use the values from an environment variable as explained on the dataflow Cloud Foundry Server docs. 37 37.6, “Dead-Letter Topic Processing” processing for more information. Starting with version 2.0, messages sent to the DLQ topic are enhanced with the following headers: x-original-topic, x-exception-message, and x-exception-stacktrace as byte[]..someGroup.someTopic.lag: This metric indicates how many messages have not been yet consumed from a given binder’s topic by a given consumer group. For example, if the value of the metric spring.cloud.stream.binder.kafka.myGroup.myTopic.lag is 1000, the consumer group named myGroup has 1000 messages waiting to be consumed from the topic calle myTopic.. Application ID for all the stream configurations in the current application context. You can override the application id for an individual StreamListener method using the group property Section 39.3.1, “RabbitMQ Binder Properties” for more information about these properties. The framework does not provide any standard mechanism to consume dead-letter messages (or to re-route them back to the primary queue). Some options are described in Section 39.6, . This section contains settings specific to the RabbitMQ Binder and bound channels. 
For general binding configuration options and properties, see the Spring Cloud Stream core documentation. By default, the RabbitMQ binder uses Spring Boot’s ConnectionFactory. Conseuqently, it supports all Spring Boot configuration options for RabbitMQ. (For reference, see. A comma-separated list of RabbitMQ node names. When more than one entry, used to locate the server address where a queue is located.. The compression level for compressed bindings. See java.util.zip.Deflater. Default: 1 (BEST_LEVEL). A connection name prefix used to name the connection(s) created by this binder. The name is this prefix followed by #n, where n increments each time a new connection is opened. Default: none (Spring AMQP default).> is appended. Default: #. Whether to bind the queue to the destination exchange. Set it to false if you have set up your own infrastructure and have previously created and bound the queue. Default: true. The name of the DLQ Default: prefix+destination.dlq A DLX to assign to the queue. Relevant only if autoBindDlq is true. Default: 'prefix+DLX' A dead letter routing key to assign to the queue. Relevant only How long before an unused dead letter queue is deleted (in milliseconds). Default: no expiration Declare the dead letter queue with the x-queue-mode=lazy argument. See “Lazy Queues”. Consider using a policy instead of this setting, because using a policy allows changing the setting without deleting the queue. Default: false. (in milliseconds). Default: no limit Whether the subscription should be durable. Only effective if group is also set. Default: true. If declareExchange is true, whether the exchange should be auto-deleted (that is, removed after the last queue is removed). Default: true. If declareExchange is true, whether the exchange should be durable (that is, it survives broker restart). Default: true. The exchange type: direct, fanout or topic for non-partitioned destinations and direct or topic for partitioned destinations. Default: topic.. 
- How long before an unused queue is deleted (in milliseconds). Default: no expiration
- The interval (in milliseconds).
- The maximum number of messages in the queue. Default: no limit
- The maximum number of total bytes in the queue from all messages. Default: no limit
- The maximum priority of messages in the queue (0-255). Default: none
- When the queue cannot be found, whether to treat the condition as fatal and stop the listener container. Defaults to false so that the container keeps trying to consume from the queue (for example, when using a cluster and the node hosting a non-HA queue is down). Default: false
- Prefetch count. Default: 1.
- A prefix to be added to the name of the destination and queues. Default: "".
- The number of times to retry consuming from a queue if it is missing. Relevant only when missingQueuesFatal is true. Otherwise, the container keeps retrying indefinitely.
- Whether rejected deliveries are re-queued when retry is disabled or republishToDlq is false. Default: false.
- When republishToDlq is true, specifies the delivery mode of the republished message. Default: DeliveryMode.PERSISTENT
- Whether to use transacted channels. Default: false.
- Default time to live to apply to the queue when declared (in milliseconds).
- Messages are batched into one message according to the following properties (described in the next three entries in this list): batchSize, batchBufferLimit, and batchTimeout. See Batching for more information. Default: false.
- The number of messages to buffer when batching is enabled. Default: 100.
- The maximum buffer size when batching is enabled. Default: 10000.
- The batch timeout when batching is enabled. Default: 5000.
- The routing key with which to bind the queue to the exchange (if bindQueue is true). Only applies to non-partitioned destinations. Only applies if requiredGroups are provided, and then only to those groups. Default: #.
- Whether to bind the queue to the destination exchange. Set it to false if you have set up your own infrastructure and have previously created and bound the queue. Only applies if requiredGroups are provided, and then only to those groups. Default: true.
- Whether data should be compressed when sent. Default: false.
- The name of the DLQ. Only applies if requiredGroups are provided, and then only to those groups. Default: prefix+destination.dlq
- A DLX to assign to the queue. Relevant only when autoBindDlq is true. Applies only when requiredGroups are provided, and then only to those groups. Default: 'prefix+DLX'
- A dead letter routing key to assign to the queue. Relevant only when autoBindDlq is true. Applies only when requiredGroups are provided, and then only to those groups. Default: destination
- Whether to declare the exchange for the destination. Default: true.
- A SpEL expression to evaluate the delay to apply to the message (the x-delay header).
- The delivery mode. Default: PERSISTENT.
- When a DLQ is declared, a DLX to assign to that queue. Applies only if requiredGroups are provided, and then only to those groups. Default: none
- When a DLQ is declared, a dead letter routing key to assign to that queue. Applies only when requiredGroups are provided, and then only to those groups. Default: none
- How long (in milliseconds) before an unused dead letter queue is deleted. Applies only when requiredGroups are provided, and then only to those groups. Default: no expiration
- Declare the dead letter queue with the x-queue-mode=lazy argument. See "Lazy Queues". Consider using a policy instead of this setting, because using a policy allows changing the setting without deleting the queue. Applies only when requiredGroups are provided, and then only to those groups.
- Maximum number of messages in the dead letter queue. Applies only if requiredGroups are provided, and then only to those groups. Default: no limit
- Maximum number of total bytes in the dead letter queue from all messages. Applies only when requiredGroups are provided, and then only to those groups. Default: no limit
- Maximum priority of messages in the dead letter queue (0-255). Applies only when requiredGroups are provided, and then only to those groups. Default: none
- Default time (in milliseconds) to live to apply to the dead letter queue when declared. Applies only when requiredGroups are provided, and then only to those groups. Default: no limit
- If declareExchange is true, whether the exchange should be auto-delete (it is removed after the last queue is removed). Default: true.
- If declareExchange is true, whether the exchange should be durable (survives broker restart). Default: true.
- The exchange type: direct, fanout or topic for non-partitioned destinations, and direct or topic for partitioned destinations. Default: topic.
- How long (in milliseconds) before an unused queue is deleted. Applies only when requiredGroups are provided, and then only to those groups.
- Applies only when requiredGroups are provided, and then only to those groups. Default: false.
- Maximum number of messages in the queue. Applies only when requiredGroups are provided, and then only to those groups. Default: no limit
- Maximum number of total bytes in the queue from all messages. Only applies if requiredGroups are provided, and then only to those groups. Default: no limit
- Maximum priority of messages in the queue (0-255). Only applies if requiredGroups are provided, and then only to those groups.
- Applies only when requiredGroups are provided, and then only to those groups. Default: false.
- A SpEL expression to determine the routing key to use when publishing messages. For a fixed routing key, use a literal expression, such as routingKeyExpression='my.routingKey' in a properties file or routingKeyExpression: '''my.routingKey''' in a YAML file. Default: destination or destination-<partition> for partitioned destinations.
- Whether to use transacted channels. Default: false.
- Default time (in milliseconds) to live to apply to the queue when declared. Applies only when requiredGroups are provided, and then only to those groups.
Default: no limit

When retry is enabled within the binder, the listener container thread is suspended for any back-off periods that are configured. This might be important when strict ordering is required with a single consumer.

See Section 39.3.1, "RabbitMQ Binder Properties", for more information about the properties discussed here. You can use the following example configuration to enable this feature:

- Set autoBindDlq to true. The binder creates a DLQ. Optionally, you can specify a name in deadLetterQueueName.
- Set dlqTtl to the back-off time you want to wait between redeliveries.

The following configuration creates an exchange myDestination with queue myDestination.consumerGroup bound to a topic exchange with a wildcard routing key.

Starting with version 1.3, the binder unconditionally sends exceptions to an error channel for each consumer destination and can also be configured to send async producer send failures to an error channel. See "???" for more information.

RabbitMQ has two types of send failures. The latter is rare. According to the RabbitMQ documentation, "[A nack] will only be delivered if an internal error occurs in the Erlang process responsible for a queue."

As well as enabling producer error channels (as described in "???"), the first example shows how to route those messages back to the original queue but moves them to a third "parking lot" queue after three attempts. The second example uses the RabbitMQ Delayed Message Exchange to introduce a delay to the re-queued message. We determine the original queue from the headers.

When republishToDlq is false, RabbitMQ publishes the message to the DLX/DLQ with an x-death header containing information about the original destination.

application.yml:

    spring:
      cloud:
        stream:
          bindings:
            input:
              destination: partitioned.destination
              group: myGroup
              consumer:
                partitioned: true
                instance-index: 0
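The configuration example referenced above did not survive extraction. A sketch of what such settings could look like, using the property names from the text (autoBindDlq, deadLetterQueueName, dlqTtl); the binding name input and the concrete values are illustrative assumptions, not from the original page:

```yaml
spring:
  cloud:
    stream:
      rabbit:
        bindings:
          input:
            consumer:
              auto-bind-dlq: true            # binder creates the DLQ
              dead-letter-queue-name: myDlq  # optional explicit DLQ name
              dlq-ttl: 5000                  # back-off (ms) between redeliveries
```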
Spring Cloud Bus provides two endpoints, /actuator/bus-refresh and /actuator/bus-env, that correspond to individual actuator endpoints in Spring Cloud Commons: /actuator/refresh and /actuator/env, respectively.

Adrian Cole, Spencer Gibb, Marcin Grzejszczak, Dave Syer, Jay Bryant (Finchley.RELEASE)

With spring-rabbit or spring-kafka, your app sends traces to a local Zipkin. To trace code, you can run it inside a span, as shown in the following example:

    @Autowired Tracer tracer;

    Span span = tracer.newTrace().name("encode").start();
    try {
      doSomethingExpensive();
    } finally {
      span.finish();
    }

In the preceding example, the span is the root of the trace. In many cases, the span is part of an existing trace. When this is the case, call newChild instead of newTrace, as shown in the following example:

    @Autowired Tracer tracer;

    Span span = tracer.newChild(root.context()).name("encode").start();
    try {
      doSomethingExpensive();
    } finally {
      span.finish();
    }

    Tracer tracer;

    // before you send a request, add metadata that describes the operation
    span = tracer.newTrace().name("get").type(CLIENT);
    span.tag("clnt/finagle.version", "6.36.0");
    span.tag(TraceKeys.HTTP_PATH, "/api");
    span.remoteEndpoint(Endpoint.builder()
        .serviceName("backend")
        .ipv4(127 << 24 | 1)
        .port(8080).build());

    // when the request is scheduled, start the span
    span.start();

    // if you have callbacks for when data is on the wire, note those events
    span.annotate(Constants.WIRE_SEND);
    span.annotate(Constants.WIRE_RECV);

    Tracer tracer;

    // start a new span representing a client request
    oneWaySend = tracer.newSpan(parent).kind(Span…

    Tracing tracing;

    // derives a sample rate from an annotation on a java method
    DeclarativeSampler<Traced> sampler = DeclarativeSampler.create(Traced::sampleRate);

    @Around("@annotation(traced)")
    public Object traceThing(ProceedingJoinPoint pjp, Traced traced) throws Throwable {
      Span span = tracing.tracer().newTrace(sampler.sample(traced))…
      try {
        return pjp.proceed();
      }
    }

    Span newTrace(Request input) {
      SamplingFlags flags = SamplingFlags.NONE;
      if (input.url().startsWith("/experimental")) {
        flags = SamplingFlags.SAMPLED;
      } else if (input.url().startsWith("/static")) {
        flags = SamplingFlags.NOT_SAMPLED;
      }
      return tracer.newTrace(flags);
    }

    ("baggage-", Arrays.asList("country-code", "user-id"))
        .build());

Later, you can call the following code to affect the country code of the current trace context:

    ExtraFieldPropagation.set("country-code", "FO");
    String countryCode = ExtraFieldPropagation.get("country-code");

Alternatively, if you have a reference to a trace context, you can use it explicitly, as shown in the following example:

    ExtraFieldPropagation.set(span.context(), "country-code", "FO");
    String countryCode = ExtraFieldPropagation.get(span.context(), …
https://cloud.spring.io/spring-cloud-static/Finchley.RELEASE/single/spring-cloud.html
In this section of the C++ tutorial, let's discuss the concept of loops in C++. Loops in C++ are used when a sequence of statements must be executed repeatedly.

Loops in C++

There are times when you want the same thing to happen numerous times. For example, if you want to print the table of 2, how would you do that?

2*1=2
2*2=4
2*3=6
2*4=8
2*5=10
2*6=12
2*7=14
2*8=16
2*9=18
2*10=20

Would you print each step one by one? That would not be a smart way, right? That's where you need the concept known as loops.

C++ Program to Print the Table of Any Number

#include <iostream>
using namespace std;

int main()
{
    int i, n = 2;
    for (i = 1; i <= 10; ++i)
        cout << "\n" << n << " * " << i << " = " << n * i;
    return 0;
}

Output:

2 * 1 = 2
2 * 2 = 4
2 * 3 = 6
2 * 4 = 8
2 * 5 = 10
2 * 6 = 12
2 * 7 = 14
2 * 8 = 16
2 * 9 = 18
2 * 10 = 20

Types of Loops in C++

Infinite loop

An infinite loop (sometimes referred to as an endless loop) is a piece of code that lacks a working exit, so it repeats indefinitely. An infinite loop occurs when a condition always evaluates to true.

Example of an infinite loop

#include <iostream>
using namespace std;

int main()
{
    // An empty for condition is treated as always true, so this loop never exits.
    for ( ; ; )
    {
        cout << "This is an example of infinite loop using for loop.\n";
    }

    // Unreachable after the loop above. Note: the original code read an
    // uninitialized i here, which is undefined behavior; initialize it first.
    int i = 1;
    while (i != 0)
    {
        i--;
        cout << "This is an example of infinite loop using while loop.\n";
    }
}
https://www.codeatglance.com/loops-in-cpp/
A YouTube commenter said, “Too bad the talk ends just before the more interesting part where there’s actual logic in the item method. Was curious how you would pull these apart.” So in this video, let’s continue the demo from where it left off. There will be more videos playing around with this code, from the Null Object Pattern to Model-View-Presenter. Subscribe to receive email notifications so you don’t miss the next video.

Brief Context: Sale Items at eBay

If you haven’t seen the previous video, I really encourage you to go back and check it out. It’ll give you more context for what’s going on. Here’s a quick refresher: We’re working at eBay, which has items listed on its site. An item can have a title, an image, and a price… and a bunch of other stuff, of course. Some items are on special sale. A sale item has those same things—the title, the image, and the price. But it also has the original price with a strikethrough line, to show you what a discount you’re getting. Let’s continue from the hiding and showing of this strikethrough price label.

AppCode, not Xcode?!

You may be startled to see me using an IDE you don’t recognize. I use AppCode, a more powerful IDE for the kind of refactoring I do daily. If you’d like to try this refactoring yourself, get the code from GitHub. If you want to try it in AppCode, then download the AppCode EAP (Early Access Program) if it’s available. The EAP is a free prerelease that expires when they turn it into a proper release. Going from Xcode to AppCode, the main thing you need to know is how to run tests. AppCode doesn’t use Xcode’s schemes. Instead, you set up an XCTest configuration. From there, “Build” builds the test code and the production code it depends on. “Run” does a build, then runs the tests. For more help getting started with AppCode, see the AppCode Quick Start Guide. For this particular project, make sure to select an iPad simulator as the destination. Run tests.
You’ll see it pass 10 unit tests, which fully test this part of our view controller. The unit tests act as a safety net, allowing us to refactor. They are a critical piece of real refactoring.

Moving the Attributed Text

Uncovering a Side Effect

In part one of the demo, I took advantage of a recurring pattern. On one side of the if-statement, the code set a view property. On the else-side, it cleared that same property, setting it to nil. As the view model evolved, this behavior gradually moved over there. Both the if and the else clause came to set the property from the same computed property in the view model. Then I could fold them together into a single line. But this isn’t the case for the strikethrough price label. On the else side, the attributedText is set to nil. But on the if side, it doesn’t appear to set this property. Its appearance is misleading, but there is a hint in the method name: setStrikethroughText. When we go to the implementation, we can see that it sets the attributedText property. This imbalance is curious and hinders the course of the refactoring. The method sets a property as a side effect, instead of making the side effect clear. I find this to be a warning against making extensions like this on objects you don’t own—in this case, UILabel.

extension UILabel {
    func setStrikethroughText(_ text: String) {
        // ...
    }
}

So the first step in refactoring this is to turn it from a hidden side effect into a plain old setting of a property. We do this by creating a standalone function to create the attributed text we want, then assigning that to the property.

func strikethroughText(_ text: String) -> NSAttributedString {
    // ...
}

Quickly Creating the New Function

To create the function itself, I use an AppCode technique. At the call site, I call the new, non-existent function. I pretend that it exists, and make sure that it looks clear at the point of use. Then with a click, I ask AppCode to create an outline of the new function.
It figures out the signatures and generates a skeleton for us to fill in. We create the body of the new function by copying and pasting the old code, then fixing it up to fit into its new home. Another AppCode technique I use is to increase or decrease the currently selected scope. This makes it easy to select the right portion of code. With the new call in place and the new implementation in place, we can shift away from the old code. When I’m not entirely certain if something will work, I leave the old code in place, commented out. That way if something goes wrong, I can find what to restore by looking for it. And if all is well, I delete the commented-out code.

Moving the Free Function into a Namespace

The names of freestanding functions exist in the module’s namespace. Rather than have them live out in the open, I prefer to put free functions inside namespaces when I work on large projects. Swift namespaces are implicit and normally invisible. But it’s easy to use an empty enumeration as an explicit namespace. Move the free function into the enum, and declare it to be static. Then the caller can reference the function by its full name. For a single function or a small project this doesn’t matter, as the name of the new enum itself goes in the module’s namespace. But this can be a helpful way to organize free functions into groups as you get more of them.

enum AttributedText {
    static func strikethroughText(_ text: String) -> NSAttributedString {
        // ...
    }
}

Moving Code Around in Small, Verified Steps

So far we’ve taken dissimilar code and made it more similar. Now both the if-clause and the else-clause make assignments to the label’s attributedText. The clearing-up has worked, so now we can move this behavior into the view model. I create a new computed property in the view model. The initial implementation does nothing, but it does compile. Then we use the same trick I showed in part one:

- Copy the entire if-else statement.
- Paste it into its new home.
- Trim it down by removing anything unrelated to the attributedText.
- Fix it up by changing the assignments to return statements.
- Where there are missing returns, add them in with an appropriate do-nothing value.

Then we shift the assignments to use this new computed property in the view model. One by one, I do the replacement and run tests. Eventually, both lines look the same, in both the if clause and the else clause:

if let item = item {
    strikethroughPriceLabel.attributedText = viewModel.strikethroughPrice
    // more stuff...
} else {
    strikethroughPriceLabel.attributedText = viewModel.strikethroughPrice
    // possibly other stuff...
}

Now we can lift it out of the if statement altogether. This changes the order of the statements, moving the assignment ahead of the conditional. As long as this isn’t a problem, we can lift both statements out into a single statement outside the if. Effectively, this is the same as the Slide Statements refactoring from the Refactoring book. With a good set of unit tests, we can do these refactoring steps with little thought or analysis. All we’re doing is moving code around. As long as the tests pass, we’re good. Finally, we can clear up the implementation in the view model. A guard clause seems like a helpful way to make the code more expressive. In the end, the code still isn’t as bright as I’d like it to be. But sometimes it’s best not to “clean all the things” at once. Continue with other refactoring. As common patterns emerge, we can extract those later as we discover them. This after-the-fact discovery happens often.

Moving the “isHidden”

Making similar changes, we can move the label’s isHidden value. I encourage you to practice these steps on your own computer.

- Determine what return type we need for this value. In this case, we want a Bool.
- Define the skeleton of a computed property in the view model that returns this type.
Come up with a good name. Give it a bare-bones implementation that builds.

- Copy the code and paste it into its new home. We’re down to the last property, so there’s no remaining code that sets other properties. Change it from making assignments to returning values. Add extra return statements where needed. Get it to build.
- Change one call site to use the new computed property. Confirm by running tests.
- Change the other call site and run tests.
- Now that both calls are identical, lift them out of the conditional and run tests.

Then it’s a journey of clearing up the implementation. We can shape the code to be similar to the other property. Where we find similar “shapes” in the code, we can extract helper functions. Here, both Xcode and AppCode support “Extract Method.” The results are slightly different, though.

- In Xcode, the extracted method comes above the call site and is marked fileprivate.
- AppCode instead displays a dialog to let you decide the accessibility, the name, the parameter names, and parameter order. It then creates the new code below the call site.

Having the calling code above the helper code reads better to me because it’s top-down.

More to Do, For Me & You

As it stands, the view model keeps checking whether the item is nil. This is crying out to use the Null Object Pattern to eliminate those conditionals. I do that refactoring in a separate video. While I was at eBay writing the code this is based on, I didn’t go to model-view-view-model (MVVM). Instead, I went to model-view-presenter (MVP). I plan to show that in yet another video. As you can see, there’s a lot to mine in this example. I encourage you to get the code and try refactoring it yourself. You may end up with different results, which is totally fine. But make sure to move in small, verified steps. I want you to experience the power of having a fully-tested view controller. See the next video where I refactor this further using the Null Object Pattern.
https://qualitycoding.org/refactoring-mvvm-part2/
# Virtual function calls in constructors and destructors (C++)

![Virtual function calls in constructors (C++)](https://habrastorage.org/r/w1560/getpro/habr/post_images/7df/2f2/1ad/7df2f21ad97ecd6a2002f8b9df12b277.png)

In different programming languages, the behavior of virtual functions differs when it comes to constructors and destructors. Incorrect use of virtual functions is a classic mistake, and in this article we discuss it.

Theory
------

I suppose the reader is familiar with [virtual functions](https://en.wikipedia.org/wiki/Virtual_function) in C++. Let's get straight to the point. When we call a virtual function in a constructor, the function is overridden only within a base class or a currently created class. Constructors in the derived classes have not yet been called. Therefore, the virtual functions implemented in them will not be called. Let me illustrate this.

![Virtual function calls in constructors (C++)](https://habrastorage.org/r/w1560/getpro/habr/post_images/687/48c/a79/68748ca79d3c3386def3f99c1e908b3d.png)

Explanations:

* Class *B* is derived from class *A*;
* Class *C* is derived from class *B*;
* The *foo* and *bar* functions are virtual;
* The *foo* function has no implementation in the *B* class.

Let's create an object of the *C* class and call these two functions in the class *B* constructor. What would happen?

* **The *foo* function.** The *C* class has not yet been created. The *B* class doesn't have the *foo* function. Therefore, the implementation from the *A* class is called.
* **The *bar* function.** The *C* class has not been created yet. Thus, a function related to the current *B* class is called.

Now look at the same thing in the code.
```
#include <iostream>

class A
{
public:
  A() { std::cout << "A()\n"; };
  virtual void foo() { std::cout << "A::foo()\n"; };
  virtual void bar() { std::cout << "A::bar()\n"; };
};

class B : public A
{
public:
  B()
  {
    std::cout << "B()\n";
    foo();
    bar();
  };
  void bar() { std::cout << "B::bar()\n"; };
};

class C : public B
{
public:
  C() { std::cout << "C()\n"; };
  void foo() { std::cout << "C::foo()\n"; };
  void bar() { std::cout << "C::bar()\n"; };
};

int main()
{
  C x;
  return 0;
}
```

If we [compile and run](https://godbolt.org/z/9eGTKf5Gd) the code, it outputs the following:

```
A()
B()
A::foo()
B::bar()
C()
```

The same happens when we call virtual methods in destructors. So, what's the problem? You can find this information in any C++ programming book. The problem is that it's easy to forget about it! Thus, some programmers assume that *foo* and *bar* functions are called from the most derived *C* class. People keep asking the same question on forums: "Why does the code run in an unexpected way?" Example: [Calling virtual functions inside constructors](https://stackoverflow.com/questions/962132/calling-virtual-functions-inside-constructors). I think now you understand why it's easy to make a mistake in such code. Especially if you write code in other languages where the behavior is different. Let's look at the code fragment in C#:

```
class Program
{
  class Base
  {
    public Base()
    {
      Test();
    }

    protected virtual void Test()
    {
      Console.WriteLine("From base");
    }
  }

  class Derived : Base
  {
    protected override void Test()
    {
      Console.WriteLine("From derived");
    }
  }

  static void Main(string[] args)
  {
    var obj = new Derived();
  }
}
```

If we run it, the program outputs the following:

```
From derived
```

The corresponding visual diagram:

![Virtual function calls in constructors (C#)](https://habrastorage.org/r/w1560/getpro/habr/post_images/42a/e02/1cc/42ae021cc59b8841c149d04daebbc6bd.png)

The function overridden in the derived class is called from the base class constructor!
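As an aside not in the original article: Python dispatches the same way C# does here, not the C++ way. A virtual call in the base initializer reaches the derived override, because the object already has its final type when `__init__` runs. A minimal sketch:

```python
calls = []

class Base:
    def __init__(self):
        # Dynamic dispatch: this resolves to the override of the *actual*
        # (most derived) type, even though Derived.__init__ has not finished.
        self.test()

    def test(self):
        calls.append("base")

class Derived(Base):
    def test(self):
        calls.append("derived")

Derived()
print(calls)  # → ['derived']
```

This is exactly why the "access a not-yet-initialized member" bug shown for C# can happen in Python too.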
When the virtual method is called from the constructor, the run-time type of the created instance is taken into account. The virtual call is based on this type. The method is called in the base type constructor. Despite this, the actual type of the created instance is *Derived*. This determines the choice of the method. You can read more about virtual methods in the [specification](https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/language-specification/classes).

Note that this behavior can cause errors. For example, if a virtual method works with members of a derived type that have not yet been initialized in its constructor. In this case, there would be problems. Look at the example:

```
class Base
{
  public Base()
  {
    Test();
  }

  protected virtual void Test() { }
}

class Derived : Base
{
  public String MyStr { get; set; }

  public Derived(String myStr)
  {
    MyStr = myStr;
  }

  protected override void Test()
    => Console.WriteLine($"Length of {nameof(MyStr)}: {MyStr.Length}");
}
```

If we try to create an instance of the *Derived* type, *NullReferenceException* is thrown. That happens even if we pass a value other than *null* as an argument: *new Derived("Hello there")*. The constructor of the *Base* type calls an instance of the *Test* method from the *Derived* type. This method accesses the *MyStr* property. It is currently initialized with a default value (*null*) and not the parameter passed to the constructor (*myStr*). Done with the theory. Now let me tell you why I decided to write this article.

How this article appeared
-------------------------

It all started with a question on StackOverflow: "[Scan-Build for clang-13 not showing errors](https://stackoverflow.com/questions/69592513/scan-build-for-clang-13-not-showing-errors)". More precisely, it all started with a discussion in comments under our article — "[How we sympathize with a question on StackOverflow but keep silent](https://pvs-studio.com/en/blog/posts/cpp/0877/)".
You don't have to follow the links. Let me briefly retell the story. One person asked how static analysis helps to look for two patterns. The first pattern relates to variables of the *bool* type. We don't discuss it in this article, so we are not interested in this pattern now. The second one is about searching for virtual function calls in constructors and destructors. Basically, the task is to identify virtual function calls in the following code fragment:

```
class M
{
public:
  virtual int GetAge() { return 0; }
};

class P : public M
{
public:
  virtual int GetAge() { return 1; }

  P() { GetAge(); }  // maybe warn
  ~P() { GetAge(); } // maybe warn
};
```

Suddenly, it turns out that not everyone understands the danger here and why static analysis tools warn developers about calling virtual methods in constructors/destructors.

![danger](https://habrastorage.org/r/w1560/getpro/habr/post_images/a87/3ec/9c0/a873ec9c08718c99049e35b421004d84.png)

The article on habr has the following [comments (RU)](https://habr.com/en/company/pvs-studio/blog/585272/comments/):

***Abridged comment N1:** So the compiler's right: no error here. The error is only in the developer's logic. This code fragment always returns 1 in the first case. He could use inline to speed up the constructor and the destructor. It doesn't matter to the compiler anyway. The result of the function is never used, and the function doesn't use any external arguments, so the compiler will just throw the example away as an optimization. This is the right thing to do. As a result, no error here.*

***Abridged comment N2:** I didn't get the joke about virtual functions at all. [quote from a book about virtual functions]. The author emphasizes that the keyword virtual is used only once. The book further explains that it is inherited. Now, my dear students, answer me: what's wrong with calling a virtual function in the class constructor and destructor? Describe each case separately. I assume you're both far from being diligent students.
You have no idea when the class constructor and destructor are called. Besides, you missed the lesson "In what order objects of parent classes are initialized when you define a derived one, and in what order they are destroyed".*

After reading the comments, you're probably wondering how they relate to the topic discussed later. And you have every right to do so. The answer is that they don't. The person who left these comments couldn't guess what kind of problem the author of the question on StackOverflow wanted to protect the code from. I admit that the author could have framed the question better. Actually, the code above has no problems. Yet. But they will appear later, when the classes obtain new children that implement the *GetAge* function. If this code fragment had another class that inherits *P*, the question would become more complete. However, anyone who knows the C++ language well immediately understands the problem and why this person is so concerned about function calls. Even coding standards prohibit virtual function calls in constructors/destructors. For example, the SEI CERT C++ Coding Standard has the following rule: [OOP50-CPP. Do not invoke virtual functions from constructors or destructors](https://wiki.sei.cmu.edu/confluence/display/cplusplus/OOP50-CPP.+Do+not+invoke+virtual+functions+from+constructors+or+destructors). Many code analyzers implement this diagnostic rule. For example, Parasoft C/C++test, Polyspace Bug Finder, PRQA QA-C++, SonarQube C/C++ Plugin. PVS-Studio (a static analysis tool developed by us) implements it too: the [V1053](https://pvs-studio.com/en/docs/warnings/v1053/) diagnostic.

What if there's no error here?
------------------------------

We have not studied such a situation. That is, everything works as we expected. In this case, we can explicitly specify which functions we plan to call:

```
B()
{
  std::cout << "B()\n";
  A::foo();
  B::bar();
};
```

Thus, your teammates will correctly understand the code.
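The same "pin the call" idea works in languages where constructors do dispatch to derived overrides. A Python sketch (illustrative, not from the article): calling the base implementation through the class bypasses dynamic dispatch, so a subclass override can never run against a half-initialized object.

```python
calls = []

class Base:
    def __init__(self):
        # Pinned call: always Base's implementation, regardless of the
        # actual (derived) type of self. Compare with plain self.test(),
        # which would dispatch to an override.
        Base.test(self)

    def test(self):
        calls.append("base")

class Derived(Base):
    def test(self):
        calls.append("derived")

Derived()
print(calls)  # → ['base']
```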
Static analyzers will also understand the code and remain silent. Conclusion ---------- Static analysis is helpful. It identifies potential problems in code. Even those that you and your teammates could've missed. A couple of examples: * [V718](https://pvs-studio.com/en/docs/warnings/v718/). The 'Foo' function should not be called from 'DllMain' function. * [V1032](https://pvs-studio.com/en/docs/warnings/v1032/). Pointer is cast to a more strictly aligned pointer type. * [V1036](https://pvs-studio.com/en/docs/warnings/v1036/). Potentially unsafe double-checked locking. The way virtual functions work is not such secret knowledge as the examples above :). However, the comments and questions on StackOverflow show that this topic deserves attention and control. If it was obvious, I wouldn't write this article. Static analyzers help developers work with code. Thank you for your attention, come and [try](https://pvs-studio.com/pvs-studio/try-free/?utm_source=virtual_functions_article&utm_medium=articles&utm_term=link_try-free) the PVS-Studio analyzer.
https://habr.com/ru/post/591747/
If I have a random assortment of points in space, is there a way for me to copy and paste (or array) an object to all selected points using a chosen base point of the object? Thanks in advance!

Hello - that is a great beginning project to solve using a script, if you have not yet started down that road. -Pascal

I wish I had the time to take that road. Are you able to point me in the right direction to give me a head start, or tag someone here that could completely fashion it up for us instead? Cheers!

Hello - this is a quick version - it assumes all the points in the file are target objects.

```python
import scriptcontext as sc
import Rhino
import rhinoscriptsyntax as rs

def CopyToPoints():
    ptIds = rs.ObjectsByType(1)
    if not ptIds: return
    ids = rs.GetObjects("Select objects to copy.")
    if not ids: return
    basePt = rs.GetPoint("Set point to copy from")
    if not basePt: return
    for ptId in ptIds:
        pt = rs.coercegeometry(ptId)
        vecDir = pt.Location - basePt
        rs.CopyObjects(ids, vecDir)

if __name__ == '__main__':
    CopyToPoints()
```

-Pascal

Forgive me - scripting is not my forte. How do I run this? I saved it as a .txt file and tried loading the script. No luck. I also tried straight up pasting it into the command line and then selecting the .txt file I saved. No luck (get message "unknown command: scriptcontext"). I'm sure you've explained this a million times over somewhere, but I can't find a simple step by step. Thank you very much, though, for whacking that together for me - really appreciated!

Hello - save it as a .py file. To use the Python script, use RunPythonScript, or a macro:

_-RunPythonScript "Full path to py file inside double-quotes"

Here it is as a py: CopyToPoints2.py (528 Bytes)

-Pascal

Thanks a million <3
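The heart of the script above is one translation vector per target point, which is then handed to `rs.CopyObjects`. Stripped of the Rhino API, the vector math can be sketched like this (plain `(x, y, z)` tuples and an invented function name, purely illustrative):

```python
def copy_offsets(base, targets):
    """Return the translation vector from base to each target point."""
    return [tuple(t[i] - base[i] for i in range(3)) for t in targets]

# Each offset would be fed to something like rs.CopyObjects(ids, offset).
offsets = copy_offsets((1.0, 1.0, 0.0), [(2.0, 3.0, 0.0), (0.0, 0.0, 0.0)])
```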
https://discourse.mcneel.com/t/paste-or-array-object-to-multiple-points-in-space/114017
I caused a stupid accident in a single-disk ZFS pool, seemingly in the same way as the person in this mailing list thread, i.e., I seem to have overwritten important metadata. Can this be restored from the actual payload, or is there a way to retrieve the payload without the metadata? Here's what I did, exactly:

- had a ZFS pool running with a single disk on one machine
- wanted to migrate it to a new ZFS pool on another machine
- forgot to zpool export it on the first machine
- when zpool create complained that the device was in use, I thought "No problem, I just took down the host, it's not in use anymore" and did zpool create -f

What I should have done (as I realised after RTFM) is import instead of create on the new host. Now I have a working ZFS pool, but the filesystems are gone / invisible. I tried to reimport the device on the old host, and later tried import -D, but, quite obviously, neither worked.
https://serverfault.com/questions/118099/how-can-i-recover-overwritten-labels-pointer-blocks-and-ueberblocks-in-a-zfs-po/118209
Note that all operations on Option values currently use the database's null propagation semantics, which may differ from Scala's Option semantics. In particular, None === None evaluates to false. This behaviour may change in a future major release of Slick.

Joins are used to combine two different tables or queries into a single query. There are two different ways of writing joins: Explicit joins are performed by calling a method that joins two queries into a single query of a tuple of the individual results. Implicit joins arise from a specific shape of a query without calling a special method.

An implicit cross-join is created with a flatMap operation on a Query (i.e. by introducing more than one generator in a for-comprehension):

```scala
val implicitCrossJoin = for {
  c <- Coffees
  s <- Suppliers
} yield (c.name, s.name)
```

If you add a filter expression, it becomes an implicit inner join:

```scala
val implicitInnerJoin = for {
  c <- Coffees
  s <- Suppliers
  if c.supID === s.id
} yield (c.name, s.name)
```

The semantics of these implicit joins are the same as when you are using flatMap on Scala collections.

Explicit joins are created by calling one of the available join methods:

```scala
val explicitCrossJoin = for {
  (c, s) <- Coffees innerJoin Suppliers
} yield (c.name, s.name)

val explicitInnerJoin = for {
  (c, s) <- Coffees innerJoin Suppliers on (_.supID === _.id)
} yield (c.name, s.name)

val explicitLeftOuterJoin = for {
  (c, s) <- Coffees leftJoin Suppliers on (_.supID === _.id)
} yield (c.name, s.name.?)

val explicitRightOuterJoin = for {
  (c, s) <- Coffees rightJoin Suppliers on (_.supID === _.id)
} yield (c.name.?, s.name)

val explicitFullOuterJoin = for {
  (c, s) <- Coffees outerJoin Suppliers on (_.supID === _.id)
} yield (c.name.?, s.name.?)
```

The explicit versions of the cross join and inner join will result in the same SQL code being generated as for the implicit versions (usually an implicit join in SQL). Note the use of .? in the outer joins.
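The collection semantics that implicit joins borrow from flatMap can be sketched outside of Slick entirely; here is the same cross-join/inner-join shape over plain Python lists (illustrative only, not Slick code, with made-up rows):

```python
# Two tiny "tables": (coffee name, supplier id) and (supplier id, supplier name)
coffees = [("Colombian", 101), ("French_Roast", 49)]
suppliers = [(101, "Acme"), (49, "Superior")]

# Implicit cross join: two generators, every pairing of the two tables.
cross = [(c_name, s_name)
         for (c_name, c_sup) in coffees
         for (s_id, s_name) in suppliers]

# Adding a filter turns it into an inner join.
inner = [(c_name, s_name)
         for (c_name, c_sup) in coffees
         for (s_id, s_name) in suppliers
         if c_sup == s_id]
```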
Since these joins can introduce additional NULL values (on the right-hand side for a left outer join, on the left-hand side for a right outer join, and on both sides for a full outer join), you have to make sure to retrieve Option values from them.

The simplest form of aggregation consists of computing a primitive value from a Query that returns a single column, usually with a numeric type, e.g.:

```scala
val q = Coffees.map(_.price)
val q1 = q.min
val q2 = q.max
val q3 = q.sum
val q4 = q.avg
```

Some aggregation functions are defined for arbitrary queries:

```scala
val q = Query(Coffees)
val q1 = q.length
val q2 = q.exists
```

Note that the intermediate query q contains nested values of type Query. These would turn into nested collections when executing the query, which is not supported at the moment. Therefore it is necessary to flatten the nested queries by aggregating their values (or individual columns) as done in q2.

Queries are executed using methods defined in the Invoker trait (or UnitInvoker for the parameterless versions). There is an implicit conversion from Query, so you can execute any Query directly. The most common usage scenario is reading a complete result set into a strict collection with a specialized method such as list or the generic method to, which can build any kind of collection:

```scala
val l = q.list
val v = q.to[Vector]
val invoker = q.invoker
val statement = q.selectStatement
```

This snippet also shows how you can get a reference to the invoker without having to call the implicit conversion method manually.

All methods that execute a query take an implicit Session value. Of course, you can also pass a session explicitly if you prefer:

```scala
val l = q.list(session)
```

If you only want a single result value, you can use first or firstOption. The methods foreach, foldLeft and elements can be used to iterate over the result set without first copying all data into a Scala collection.

Deleting works very similarly to querying.
You write a query which selects the rows to delete and then call the delete method on it. There is again an implicit conversion from Query to the special DeleteInvoker which provides the delete method and a self-reference deleteInvoker:

```scala
val affectedRowsCount = q.delete
val invoker = q.deleteInvoker
val statement = q.deleteStatement
```

A query for deleting must only select from a single table. Any projection is ignored (it always deletes full rows).

The relevant methods for inserting are defined in InsertInvoker and FullInsertInvoker.

```scala
Coffees.insert("Colombian", 101, 7.99, 0, 0)

Coffees.insertAll(
  ("French_Roast", 49, 8.99, 0, 0),
  ("Espresso", 150, 9.99, 0, 0)
)

// "sales" and "total" will use the default value 0:
(Coffees.name ~ Coffees.supID ~ Coffees.price).insert("Colombian_Decaf", 101, 8.99)

val statement = Coffees.insertStatement
val invoker = Coffees.insertInvoker
```

While some database systems allow inserting proper values into AutoInc columns or inserting None to get a created value, most databases forbid this behaviour, so you have to make sure to omit these columns. Slick does not yet have a feature to do this automatically but it is planned for a future release. For now, you have to use a User(None, "Stefan", "Zeiger") Note that many database systems only allow a single column to be returned which must be the table's auto-incrementing primary key. If you ask for other columns a SlickException is thrown at runtime (unless the database actually supports it).

Instead of inserting data from the client side you can also insert data created by a Query or a scalar expression that is executed in the database server.

Updates are performed by writing a query that selects the data to update and then replacing it with new data. The query must only return raw columns (no computed values) selected from a single table. The relevant methods for updating are defined in UpdateInvoker.
```scala
val q = for {
  c <- Coffees if c.name === "Espresso"
} yield c.price
q.update(10.49)

val statement = q.updateStatement
val invoker = q.updateInvoker
```

There is currently no way to use scalar expressions or transformations of the existing data in the database for updates.

Query templates are parameterized queries. A template works like a function that takes some parameters and returns a Query for them, except that the template is more efficient. When you evaluate a function to create a query, the function constructs a new query AST, and when you execute that query it has to be compiled anew by the query compiler every time, even if that always results in the same SQL string. A query template, on the other hand, is limited to a single SQL string (where all parameters are turned into bind variables) by design, but the query is built and compiled only once.

You can create a query template by calling flatMap on a Parameters object. In many cases this enables you to write a single for comprehension for a template.

```scala
val userNameByID = for {
  id <- Parameters[Int]
  u <- Users if u.id is id
} yield u.first

val name = userNameByID(2).first

val userNameByIDRange = for {
  (min, max) <- Parameters[(Int, Int)]
  u <- Users if u.id >= min && u.id < max
} yield u.first

val names = userNameByIDRange(2, 5).list
```

If your database system supports a scalar function that is not available as a method in Slick you can define it as a SimpleFunction. There are predefined methods for creating unary, binary and ternary functions with fixed parameter and return types.
```scala
// H2 has a day_of_week() function which extracts the day of week from a timestamp
val dayOfWeek = SimpleFunction.unary[Date, Int]("day_of_week")

// Use the lifted function in a query to group by day of week
val q1 = for {
  (dow, q) <- SalesPerDay.map(s => (dayOfWeek(s.day), s.count)).groupBy(_._1)
} yield (dow, q.map(_._2).sum)
```

```scala
def dayOfWeek2(c: Column[Date]) =
  SimpleFunction("day_of_week")(TypeMapper.IntTypeMapper)(Seq(c))
```

SimpleBinaryOperator and SimpleLiteral work in a similar way. For even more flexibility (e.g. function-like expressions with unusual syntax), you can use SimpleExpression.

If you need a custom column type you can implement TypeMapper and TypeMapperDelegate. The most common scenario is mapping an application-specific type to an already supported type in the database. This can be done much more simply by using a MappedTypeMapper which takes care of all the boilerplate:

```scala
// An algebraic data type for booleans
sealed trait Bool
case object True extends Bool
case object False extends Bool

// And a TypeMapper that maps it to Int values 1 and 0
implicit val boolTypeMapper = MappedTypeMapper.base[Bool, Int](
  { b => if(b == True) 1 else 0 }, // map Bool to Int
  { i => if(i == 1) True else False } // map Int to Bool
)

// You can now use Bool like any built-in column type (in tables, queries, etc.)
```

You can also subclass MappedTypeMapper for a bit more flexibility.
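A MappedTypeMapper is essentially a pair of inverse conversion functions. The shape of that contract can be sketched outside Scala (Python booleans standing in for the Bool ADT, function names invented for illustration):

```python
# map application value -> database value
def bool_to_db(b):
    return 1 if b else 0

# map database value -> application value
def db_to_bool(i):
    return i == 1

# Round-tripping through both directions must be the identity
# for the mapping to be usable as a column type.
```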
http://slick.lightbend.com/doc/1.0.1/lifted-embedding.html
- China 15.6 inch notebook laptop with Intel Z8350 CPU, supports Win 10 OS. US $132.5-155.5 / Piece, 1 Piece (Min. Order). Shenzhen GST Communication Co., Ltd. 94.4%
- Import used computers, wholesale used computers and laptops. US $120-450 / Set, 5 Sets (Min. Order). Shenzhen Riguan Photoelectric Co., Ltd. 96.5%
- Delux Smart Voice Mechanical Wireless Gaming Keyboard for Designer. US $66.53 / Piece. Shenzhen ONU Mall Technology Co., Limited. 78.6%
- Metal mesh computer desk storage monitor stand. US $7-8.5 / Piece, 1000 Pieces (Min. Order). Cixi Ciyi Steel Tube Factory. 65.2%
- Japanese used computers, laptop mini computer i7. US $120-450 / Set, 5 Sets (Min. Order). Shenzhen Riguan Photoelectric Co., Ltd. 96.5%

About product and suppliers: Alibaba.com offers 5 used laptops no os products. About 20% of these are laptops. A wide variety of used laptops no os options are available to you. There are 5 used laptops no os suppliers, mainly located in Asia. The top supplying country is China (Mainland), which supplies 100% of used laptops no os respectively. Used laptops no os products are most popular in Western Europe, Northern Europe, and North America.
https://www.alibaba.com/showroom/used-laptops-no-os.html
Tue 24 Jun 2014

PyDev and Python profiler UI Crowdfunding Project

Posted by Mike under Advocacy, Cross-Platform, Python

Last year there was an Indiegogo crowdfunding campaign in support of PyDev, the Python IDE plugin for Eclipse. It was put on by the primary developer of PyDev, Fabio Zadrozny. As a part of that campaign, Fabio also created LiClipse.

Anyway, Fabio is at it again with a new crowdfunding campaign. You can read about it here. Fabio has two targets for this campaign. The first is adding the following features to PyDev:

- Allow preferences which are currently global to be configured per-project.
- Provide a proper way to export/import PyDev settings to a new workspace.
- Support for namespace packages in code-completion.
- Provide a way to auto-document types as well as checking existing types in docstrings when doing a test run.
- Allow running an external pep8 linter (to support newer versions of pep8.py).
- Show vertical lines for indentation.
- Attach debugger to running process (although some caveats apply and under certain circumstances this may not be possible).
- Other requests to be defined based on community input as funding allows.

The second is, in my opinion, a bit more interesting. Fabio is planning on writing a profiler for PyDev that can also work outside of PyDev using Python and Qt. He has a list of features for the profiler listed in his campaign. It sounds pretty interesting. It should also be noted that PyDev and the proposed profiler will be open source, so you can always check out how the code works behind the scenes.

If you think either of these items sounds interesting, then you should consider supporting Fabio in his endeavors.
http://www.blog.pythonlibrary.org/2014/06/24/pydev-and-python-profiler-ui-crowdfunding-project/
What are lambda expressions?

In essence, a lambda expression is an anonymous function, which can be used in a context where a regular named function could be used, but where the function in question exists for a single purpose and thus should not be given a name (to avoid using up the namespace). They are most often used as a function argument for a higher-order function (that is, a function that operates on other functions, either by taking a function as a parameter or by returning a function as a value, or both).

For example (in pseudo-code), if you have a function map() which applies a function to every element of a list or array:

    map(increment, [1, 2, 3, 4, 5]) -> [2, 3, 4, 5, 6]

then you could have a lambda expression that squares its argument like so:

    map(lambda(x) { return x * x }, [1, 2, 3, 4, 5]) -> [1, 4, 9, 16, 25]

This is exactly how the map() function in many functional languages works, in fact. To give the above example in Scheme:

    (map (lambda (x) (* x x)) '(1 2 3 4 5))

or in Haskell (if I am recalling it right):

    map (\x -> x * x) [1, 2, 3, 4, 5]

(I am deliberately not giving you an example in C#, as that would be too close to doing your homework for you.)

The name comes from the lambda calculus, a model of mathematics which is built up from anonymous recursive expressions (in the mathematical sense).

Edited 4 Years Ago by Schol-R-LEA
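Since the C# version was deliberately withheld above, the same squaring example in one more language, Python (which also calls the construct lambda), runs like this:

```python
# An anonymous squaring function handed to the higher-order map()
squares = list(map(lambda x: x * x, [1, 2, 3, 4, 5]))
# squares is now [1, 4, 9, 16, 25]
```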
https://www.daniweb.com/programming/software-development/threads/474397/lambda-expressions
On 04/03/03 19:04:20 -0800 Craig R. McClanahan wrote:
> If you are willing to ensure that all your Actions use the same forward
> name, you don't have to do anything at all to accomplish this goal. When
> you call ActionMapping.findForward(), Struts checks the local forward
> declarations first, and then the globals. So, if you say:
>
> return (mapping.findForward("mainMenu"));
>
> in your doSomethingAction, since you don't have a local definition for
> this forward you will automatically be using the global one. That way,
> you only have to change one path if the URI of the page to be executed
> changes.

I am using this idea for some projects. My issue is that I want to be able to map "standard" results to various pages. I've been using ActionMessages to give feedback to the user instead of forwarding to a separate "thank you" style page. So, for example, many of my actions forward back to the "list" page after putting something informative in the ActionMessages. If the page authors decide that some action needs to be forwarded to a specific page on "failure", I want them to be able to simply change the "failure" forward to something else. Would this be difficult to implement?

Thanks,

A.

--
Adam Sherman
Tritus CG Inc.
+1 (613) 797-6819

---------------------------------------------------------------------
To unsubscribe, e-mail: struts-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: struts-user-help@jakarta.apache.org
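For what it's worth, the local-overrides-global lookup Craig describes maps onto struts-config.xml like this (a sketch only; the paths and the action class name are invented for illustration):

```xml
<!-- A global "failure" forward that any action falls back on,
     overridden locally for one specific action. findForward()
     checks the local <forward> entries before the globals. -->
<global-forwards>
  <forward name="mainMenu" path="/mainMenu.jsp"/>
  <forward name="failure"  path="/genericError.jsp"/>
</global-forwards>

<action-mappings>
  <action path="/doSomething" type="com.example.DoSomethingAction">
    <!-- The local definition wins over the global one -->
    <forward name="failure" path="/doSomethingFailed.jsp"/>
  </action>
</action-mappings>
```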
http://mail-archives.apache.org/mod_mbox/struts-user/200304.mbox/%3c14840000.1049462152@saturn%3e
End a scroll view begun with a call to BeginScrollView.

See Also: GUILayout.BeginScrollView

```csharp
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour {
    // The variable to control where the scrollview 'looks' into its child elements.
    public Vector2 scrollPosition;
    // The string to display inside the scrollview. 2 buttons below add & clear this string.
    public string longString = "This is a long-ish string";

    void OnGUI() {
        // ...
    }
}
```
http://docs.unity3d.com/ScriptReference/GUILayout.EndScrollView.html
Folks, I am trying to use utf-8 Tamil in a Rails application. When I edit Tamil Unicode content in a text box and check the output, the text comes back distorted. Any help appreciated.

-- Nandri, Dalit Amala
தமிழ் இணையம்
Telford, UK

on 2006-12-29 01:22
On Dec 28, 5:39 pm, "Dalit Amala" <amalasi...@gmail.com> wrote:
> I am trying to use utf-8 Tamil in a Rails application.

Are you using any plug-ins? I have an application that uses Mandarin Chinese. It works beautifully with the standard installation of Rails but when I added the Ferret text search plug-in, I could no longer use Unicode.

on 2006-12-29 05:12
There are many topics covering the proper setup of UTF-8. Try searching the forums. UTF-8 works fine for me with Japanese.

on 2006-12-29 05:38
Dalit,

There are several things you need to do.
1) Make sure your database has UTF-8 internationalization set
2) Check this out: InternationalizationComparison.
3) Use this for rails 1.1: script/plugin install globalize/branches/for-1.1

on 2006-12-29 05:54
I've had that problem as well. Adding this as a before_filter to application.rb seems to fix things. It makes sure that all rhtml files are set to utf8, and spares RJS from setting the content-type header so the code still executes.

```ruby
# Switches to UTF8 charset
def configure_charsets
  content_type = @headers["Content-Type"] || 'text/html'
  if /^text\//.match(content_type)
    @headers["Content-Type"] = "#{content_type}; charset=utf-8"
  end
  ActiveRecord::Base.connection.execute 'SET NAMES UTF8'
end
```

Hope that helps.

--------------------
seth at subimage interactive

on 2006-12-29 21:23
I apologize for pointing people to my blog and then having it tank. I'm having some server issues.
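The header rewrite inside that before_filter is plain string manipulation; isolated from Rails it can be sketched like this (method name invented, illustrative only):

```ruby
# Append the UTF-8 charset to text/* content types, as the filter above does;
# non-text content types are returned unchanged.
def with_utf8_charset(content_type)
  content_type ||= 'text/html'
  if %r{^text/}.match(content_type)
    "#{content_type}; charset=utf-8"
  else
    content_type
  end
end
```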
Allow me to say that my solution is not as elegant as RaPT but in case you're interested, here is my install_plugins.rake (in the lib/tasks directory):

```ruby
require 'yaml'

namespace :plugins do
  desc "install a known list of plugins, specified in config/base_plugins.yml"
  task 'install_base' do
    f = YAML::load_file(File.join(RAILS_ROOT, 'config', 'base_plugins.yml'))
    f.each_pair do |plugin, options|
      force, svn = '', ''
      force = ' --force' if (options[:force] && options[:force].downcase == 'yes')
      svn = ' -X' if (options[:svn] && options[:svn].downcase == 'yes')
      if force.empty? && File.exist?(File.join(RAILS_ROOT, "vendor/plugins/#{plugin}"))
        puts "Skipping #{plugin} -- already installed and force options not specified."
      else
        cmd = "ruby #{RAILS_ROOT}/script/plugin install #{svn}#{options['repos']} #{plugin}#{force}"
        puts "*** installing #{cmd}"
        system(cmd)
      end
    end
  end
end
```

Here is my standard 'base_plugins.yml' (in the config directory):

```yaml
exception_notification:
  repos: exception_notification/
acts_as_authenticated:
  repos: acts_as_authenticated
rspec:
  repos: svn://rubyforge.org/var/svn/rspec/tags/REL_0_7_4/vendor/rspec_on_rails/vendor/plugins/rspec
```

--steve
https://www.ruby-forum.com/topic/92431
23 November 2010 16:24 [Source: ICIS news]

TORONTO (ICIS)--K+S's agreed bid for Potash One on 22 November came amid some doubt over foreign investment in Canada after the government this month blocked BHP Billiton's $39bn hostile bid for PotashCorp - only the second time since 1985 that a foreign takeover of a Canadian company was rejected.

However, Horst Hueniken, a research analyst at

The takeover by Germany's K+S would provide the necessary capital to develop the project in Saskatchewan province, thus creating jobs and providing a new tax revenue stream for the government, Hueniken said in a briefing on Canadian television.

In the case of BHP's bid for PotashCorp, Hueniken said, the government had been concerned about tax revenue losses, the possibility that BHP could take PotashCorp out of the Canpotex potash export marketing group, and the effects on global potash pricing.

"None of these issues apply in [the Potash One] instance. [Potash One] is far too small a player to have a meaningful impact on potash pricing," said Hueniken. "In this case, there are no tax revenues being lost, because there were no tax revenues to begin with," he added.

Commentators also noted that

In a televised interview on 22 November, Wall said Saskatchewan welcomed K+S's move as a foreign investment that would lead to the development of a new mine in the province. Wall also said the government would "strongly encourage" K+S to join Canpotex. Canpotex, which includes PotashCorp, Agrium and Mosaic, was a "strong marketing alliance" that gave

Wall added that K+S's bid for Potash One comes as BHP is planning to develop its $12bn Jansen potash project in Saskatchewan and Brazil's Vale is considering a $3bn potash project in the province. If all three projects - K+S/Potash One, BHP and Vale - were to be completed, they could threaten PotashCorp's leading role in global potash markets, analysts said.

($1 = C$1.02)

For more on PotashCorp,.
http://www.icis.com/Articles/2010/11/23/9413229/canada-not-likely-to-oppose-k-s-bid-for-potash-one-analysts.html
@wimd Could you share how you did that? I also want to check if the gateway is alive (a backup battery-powered node: if the mains or the gateway dies, send me a message). I also use Domoticz.

nekitoss @nekitoss

- RE: Problem with battery powered temperature sensor

First of all, about the value "85": according to the DS18B20 datasheet, page 6, note after table 4.1: *The power-on reset value of the temperature register is +85°C.* Your code looks fine and is similar to my mains-powered DS18B20 sensor (because we used the same example). I have some thoughts:

You may have power problems:
a. From the logs I see that you have 1 sensor, but you left #define MAX_ATTACHED_DS18B20 16 defined. If (suddenly) you had 16 sensors, asking them all for a conversion would consume 16 * 1.5 mA (max at 5 V) = 24 mA, which is above the recommended 20 mA for an ATmega pin.
b. You are powering from a battery: the DS18B20 officially requires a minimum of 3 V (as per the datasheet). People say that at lower voltage it may give strange readings or not work at all.
c. You are powering it from a battery through a pin: does the voltage rise fast enough before the reading starts? Is the voltage drop small enough that the DS18B20 still gets >= 3 V? I'd suggest measuring the voltages (but a multimeter won't show short drop-outs when consumption starts) and trying to feed VCC of the DS18B20 directly from the power source. If that is not possible, at least measure the voltage and try to add some wait() after powering up the pin (or also before the measurement starts, or after sleep).

When running from 3 V, what is the frequency of the ATmega processor? If it is 16 MHz, that is outside the safe operating area for that voltage; you should run at 8 MHz (>= 2.4 V for 8 MHz and >= 3.78 V for 16 MHz).

Also, I suggest you try changing sleep(conversionTime); to wait(conversionTime); and check if anything changes (especially when the DS18B20 is powered directly from the source).
Also try with fresh batteries.

If nothing helped, then for the "85" value a simple solution will be if (temperature != 85) send(); because it is very rare that people need to measure 85 degrees Celsius at home...

For advanced battery-powering I highly recommend reading this. You can also find material there about frequency vs voltage. And for internal battery voltage reading: I don't remember where I found it, here1 or here2 or somewhere else... but you may want to get more accurate readings by measuring and setting the Internal Reference Voltage value and using it in the code, giving it some time to stabilize and/or using some type of averaging.

- RE: Detect missing/unresponsive sensor

- RE: sonda PH

Hi, @szybki946! Is the code in your post working or not? If yes, do you want someone to modify it so it will send your pH to Domoticz from a MySensors node? If so, and if you use an nrf24l01+ radio, maybe you want this code:

```cpp
#include <Average.h>

#define MY_NODE_ID 100    //set node fixed id
#define MY_BAUD_RATE 9600 //set serial baud rate
#define MY_RADIO_RF24     //if you use nrf24l01+
#include <MySensors.h>

#define PH_CHILD_ID 0
MyMessage ph_msg(PH_CHILD_ID, V_PH);
Average<float> sredniaPH(100); // average of 100 measurements

void presentation()
{
  sendSketchInfo("PH_meter", "1.0"); // Send the sketch version information to the gateway and Controller
  present(PH_CHILD_ID, S_WATER_QUALITY, "average_ph");
}

void setup()
{
  //Serial.begin(9600); //don't need it because of #define MY_BAUD_RATE 9600
}

void loop()
{
  int Volty = analogRead(A7);
  float V = (float) Volty * 5.0 / 1024.0;
  float peha = (float) V * 3.5;
  sredniaPH.push(peha);
  sredniaPH.mean();
  float pH = ((float) sredniaPH.mean());
  Serial.println(pH);
  send(ph_msg.set(pH, 2)); //2 = number of digits after comma
  wait(1000); //in mysensors do not use delay - use wait() or sleep()
}
```

(change IDs, texts and numbers of digits after the comma as you want)

- bme280 sensor missing/combining/domoticz heartbeat and battery level

Re: BME280 temp/humidity/pressure
sensor
Re2: DS18B20 and SHT31-D show up as combined sensors on Domoticz
Re3: Heartbeat working or not?

I'm trying to make an outdoor (now debugging indoors) sleeping node with a TQFP32 ATmega328P at internal 8 MHz + BME280 + BH1750 (in the future battery-powered with two lithium AA batteries, not accumulators). I'm using Domoticz stable Version 4.9700 (June 23th 2018) and MySensors 2.3.0 on the nodes and at the gateway (almost sure about the gateway version). Note that according to MYSController (by tekka) all messages and presentations are received and sent normally, so no problems in my PCB, radio channel or gateway are expected; only in code order and/or Domoticz processing.

I've faced three problems:

The first is related to combining sensors; I do not fully understand how to avoid it, and how not to avoid it. To avoid it, I've tried to change the code so as to present humidity before temperature (but not the IDs), as mentioned in the topic: humidity - temperature - pressure. But it didn't help; the temperature and humidity sensors are combined. I also couldn't find the topic about combining mentioned in the issue on git by gizmocuz.

The second might be related to the first. I see all the presented children of my node in Domoticz hardware. But in devices I'm missing one of them (temp-hum-pres-lux). For the code that I will show below, I'm missing pressure. That code uses @sundberg84's order of IDs and order of presenting, taken from his post here, because he mentioned that he avoided combining. And what are "red nodes" in Domoticz? (I also have one boolean and one text sensor, for debugging and to be sure I get a reboot; that was before I found out about smart sleep "true". Also I have a lot of defines and ifdefs, to easily test this problem; about this later.)

Here is my code at the moment:

//The maximum payload size is 25 bytes! The NRF24L01+ has a maximum of 32 bytes. The MySensors library (version 2.0) uses 7 bytes for the message header.
```cpp
//mysensors 8mhz bits L=0xE2 H=0xD2 E=0x06 L=0x3F
#define MY_NODE_ID 4
#define MY_BAUD_RATE 9600
//#define MY_DEBUG
#define MY_SPECIAL_DEBUG
#define MY_RADIO_RF24
#define MY_RF24_IRQ_PIN (8)

#define DEB_ID 88
#define PRES_BME_ID 0
#define TEMP_BME_ID 1
#define HUMID_BME_ID 2
#define LIGHT_LUX_ID 3
#define REBOOT_ID 10

#define MEASURE_BATT_FUNCT //uncomment to measure and send battery level
#define LUX_FUNCT          //uncomment to enable light measuring
#define LUX_AUTOAJUST      //uncomment to enable auto-ajust resolution of sensor
#define BME_FUNCT          //uncomment to enable bme measuring
#define SLEEP_TIME 20000UL // sleep time between reads (seconds * 1000 milliseconds)

#include <MySensors.h>

#define BME_PRES_FUNCT //uncomment to enable sending pressure
#define BME_TEMP_FUNCT //uncomment to enable sending temperature
#define BME_HUM_FUNCT  //uncomment to enable sending humidity

#if defined(LUX_FUNCT) || defined(BME_FUNCT)
# include <Wire.h>
#endif

#ifdef LUX_FUNCT
# include <BH1750.h>
BH1750 lightMeter;
MyMessage light_msg(LIGHT_LUX_ID, V_LEVEL); //VARIABLE_TYPE: V_TEXT - from the second table
#endif

#ifdef BME_FUNCT
# include "SparkFunBME280.h"
BME280 bme;
MyMessage pressure_msg(PRES_BME_ID, V_PRESSURE); //VARIABLE_TYPE: V_TEXT -set,req table
MyMessage temperature_msg(TEMP_BME_ID, V_TEMP);  //VARIABLE_TYPE: V_TEXT -set,req table
MyMessage humidity_msg(HUMID_BME_ID, V_HUM);     //VARIABLE_TYPE: V_TEXT -set,req table
#endif

//MyMessage(uint8_t childSensorId, uint8_t variableType);
MyMessage deb_msg(DEB_ID, V_TEXT); //VARIABLE_TYPE: V_TEXT -set,req table
MyMessage reboot_msg(REBOOT_ID, V_STATUS);

int oldBatteryPcnt = 0;
long old_lux = 0;
float old_t = 0;
int old_h = 0;
int old_p = 0;
bool bme_init_status = false;
bool lux_init_status = false;
bool got_reboot_response = false;
bool metric = false;

/*
 * before
 * connect
 * present
 * setup
 * loop
 */

void(* resetFunc) (void) = 0; //declare reset function @ address 0

void before()
{
}

void presentation()
{
  // Send the sketch version information to the gateway and Controller
  //void sendSketchInfo(const char *name, const char *version, bool ack);
  sendSketchInfo("lodjiya", "0.1");

  //void present(uint8_t childSensorId, uint8_t sensorType, const char *description, bool ack);
  present(DEB_ID, S_INFO, "lodjiya_Debug"); //SENSOR_TYPE: S_INFO - from the presentation table
#ifdef BME_FUNCT
# ifdef BME_PRES_FUNCT
  present(PRES_BME_ID, S_BARO, "lodjiya_bme_pres");
  wait(100);
# endif
#endif
#ifdef LUX_FUNCT
  present(LIGHT_LUX_ID, S_LIGHT_LEVEL, "lodjiya_lux_light");
#endif
#ifdef BME_FUNCT
# ifdef BME_TEMP_FUNCT
  present(TEMP_BME_ID, S_TEMP, "lodjiya_bme_t");
  wait(100);
# endif
#endif
  present(REBOOT_ID, S_BINARY, "lodjiya_reboot_bool");
#ifdef BME_FUNCT
# ifdef BME_HUM_FUNCT
  present(HUMID_BME_ID, S_HUM, "lodjiya_bme_humid");
  wait(100);
# endif
#endif
}

void setup()
{
  analogReference(INTERNAL); // use the 1.1 V internal reference
  send(deb_msg.set(""));
#if defined(LUX_FUNCT) || defined(BME_FUNCT)
  Wire.begin(); // Initialize the I2C bus (BH1750 library doesn't do this automatically)
#endif
#ifdef LUX_FUNCT
  lux_init_status = lightMeter.begin(BH1750::ONE_TIME_LOW_RES_MODE);
  if (lux_init_status == false) send(deb_msg.set("ErrorInitLux"));
#endif
#ifdef BME_FUNCT
  bme.setI2CAddress(0x76);
  bme_init_status = bme.beginI2C();
  if (bme_init_status == false) send(deb_msg.set("ErrorInitBme"));
  bme.setMode(MODE_SLEEP); //Sleep for now
#endif
  send(reboot_msg.set(false));
}

void loop()
{
  //wait(5000); //wait for a command: bool wait(unsigned long ms, uint8_t cmd, uint8_t msgtype);
  unsigned long timer = millis();
  got_reboot_response = false;
  request(REBOOT_ID, V_STATUS);

#ifdef MEASURE_BATT_FUNCT
  {
    int batteryPcnt = getBandgap() / 3.3;
    if (oldBatteryPcnt != batteryPcnt)
    {
      sendBatteryLevel(batteryPcnt); // Power up radio after sleep
      oldBatteryPcnt = batteryPcnt;
      wait(50);
    }
  }
#endif

#ifdef LUX_FUNCT
  if (lux_init_status)
  {
    long lux = lightMeter.readLightLevel();
# ifdef LUX_AUTOAJUST
    /* After the measurement the MTreg value is changed according to the result:
       lux > 40000 ==> MTreg = 32
       lux < 40000 ==> MTreg = 69 (default)
       lux < 10    ==> MTreg = 138 */
    if (lux < 0)
    {
      send(deb_msg.set("ErrorLuxMsr"));
    }
    else
    {
      if (lux > 40000.0) // reduce measurement time - needed in direct sun light
      {
        if (!lightMeter.setMTreg(32)) send(deb_msg.set("ErrorSetMTReg32"));
      }
      else
      {
        if (lux > 10.0) // typical light environment
        {
          if (!lightMeter.setMTreg(69)) send(deb_msg.set("ErrorSetMTReg69"));
        }
        else
        {
          if (lux <= 10.0) //very low light environment
          {
            if (!lightMeter.setMTreg(138)) send(deb_msg.set("ErrorSetMTReg138"));
          }
        }
      }
    }
# endif
    if (old_lux != lux)
    {
      send(light_msg.set(lux));
      old_lux = lux;
      wait(50);
    }
  }
#endif

#ifdef BME_FUNCT
  if (bme_init_status)
  {
    bme.setMode(MODE_FORCED);
    int p = bme.readFloatPressure() / 133;
    int h = bme.readFloatHumidity();
    float t = bme.readTempC();
    bme.setMode(MODE_SLEEP);
    if (abs(old_t - t) > 0.1)
    {
# ifdef BME_TEMP_FUNCT
      send(temperature_msg.set(t, 1));
# endif
      old_t = t;
      wait(50);
    }
    if (old_h != h)
    {
# ifdef BME_HUM_FUNCT
      send(humidity_msg.set(h));
# endif
      old_h = h;
      wait(50);
    }
    if (old_p != p)
    {
# ifdef BME_PRES_FUNCT
      send(pressure_msg.set(p));
# endif
      old_p = p;
      wait(50);
    }
  }
#endif

  // send(deb_msg.set(( abs(millis() - timer) ))); //50-60ms
  if (!got_reboot_response)
  {
    wait(3000, C_REQ, V_STATUS); //wait(const uint32_t waitingMS, const uint8_t cmd, const uint8_t msgType)
    // send(deb_msg.set("endwait1"));
  }
  // send(deb_msg.set(( abs(millis() - timer) ))); //800-900ms
  // sleep(SLEEP_TIME, true);
  wait(SLEEP_TIME);
  //void requestTime(); //answer will be back to void receiveTime(uint32_t ts);
}

const long InternalReferenceVoltage = 1068; // Adjust this value to your board's specific internal BG voltage
// Code courtesy of "Coding Badly" and "Retrolefty" from the Arduino forum
// results are Vcc * 100
// So for example, 5V would be 500.
```
```cpp
int getBandgap()
{
  // REFS0 : Selects AVcc external reference
  // MUX3 MUX2 MUX1 : Selects 1.1V (VBG)
  ADMUX = bit (REFS0) | bit (MUX3) | bit (MUX2) | bit (MUX1);
  ADCSRA |= bit( ADSC ); // start conversion
  while (ADCSRA & bit (ADSC)) { } // wait for conversion to complete
  int results = (((InternalReferenceVoltage * 1024) / ADC) + 5) / 10;
  return results;
} // end of getBandgap

void receive(const MyMessage &message) //handles received message
{
  // send(deb_msg.set(message.type));
  // send(deb_msg.set(message.sensor));
  // send(deb_msg.set(mGetCommand(message)));
  if (message.type == V_STATUS)
  {
    if (message.sensor == REBOOT_ID)
    {
      got_reboot_response = true;
      // send(deb_msg.set("got answer"));
      if (message.getBool())
        resetFunc();
    }
  }
}
```

How I tried to test this behaviour:

(a) Comment out one of (temp/hum/pres) in the code and start uploading it to the node via MYSController. While the upload is in progress (so the node doesn't send any presentation or child data to Domoticz), I go to Domoticz Devices, select all devices related to this node, and press the bucket icon in the upper left corner to delete them. (To make that possible, I present all sensors with names starting with "lodjiya", so I can find them by sorting by name; also remember to press "all devices" and choose in the upper left to show all devices, not only 25.) Then I go to Hardware, MySensors, Setup, select the node and delete it (or delete only the test children). Domoticz may still prevent them from appearing/registering/presenting again, so I go to my PuTTY window and run sudo service domoticz.sh restart. After that the upload finishes, the device reboots, presents its children, and sends first-time data for each sensor.
Then I checked and took screenshots of the Domoticz list of this node's children (confirming that they all appeared, sent their data, and show an identical "last seen" time), and did the same for the list of devices.

(b) Then I uncomment the sensor that was commented out and upload the code again, so the "new" child sensor presents itself and sends its data after upload and reboot, while the other sensors are already present in Domoticz. After checking and taking screenshots, I comment out the next one of (temp/hum/pres) and start from the beginning.

I also tested the case without commenting out any sensor (all appear at once after a Domoticz restart). IDs: lux = commented out/disabled; p = 1; t = 2; hum = 3. I don't remember the presentation order, but I think temperature was presented last.

I got the following results (I have screenshots, but will not post them because the post is already big and hard to read). Note: in all cases, in Hardware all children were present with all the latest data! So below is the status of DEVICES only:

- all sensors: t+h not combined, because temperature is missing in devices; h and p present
- p+h: (a) ok; (b) t is missing, so nothing combined; p+h still present
- p+t: (a) ok; (b) h is missing, so nothing combined; p+t still present
- t+h: (a) ok; (b) p is missing; t+h still present and still combined (as one temp sensor and a temp+hum WTGR800 device)

So in all cases I always miss one of (temp/hum/pres), even if I try to present it to Domoticz later.

The third issue is related to sendHeartbeat or sendBatteryLevel without sending any sensor data, or having no sensors at all. I asked about this in an old closed issue (there is something about a workaround after the issue was closed, but I wasn't able to find it in the commits). So I've opened a new issue, and I think I must look at line 470, which is called at line 1819, but I see only 12 types there. And I don't understand whether it is possible to update only the node's "last seen" time (first table) and/or the #255 child (second table in Hardware, MySensors Setup, in Domoticz).
If I send no sensor data but only a heartbeat (whether with wait or with smart sleep), the node and its children will not update "last seen". Neither wait nor smart sleep helps; "last seen" updates only on presentation (without sending sensor data). The same goes for battery level: if I send only the battery level without any sensor data, it seems to update neither "last seen" nor the battery level itself. Again, neither wait nor smart sleep helps, only presentation (if I do not send sensor data)... The function called at line 1797 looks like it should update all children...

- RE: 💬 Battery Powered Sensors

It would be great to add to this article that if you want to run battery powered outdoors with temperatures below zero, you have to use lithium batteries (FR6 for AA size) — and yes, batteries, not Li-ion accumulators! For example Energizer Ultimate Lithium; lithium batteries from other vendors can also be found. Alkaline cells (LR6 for AA) will freeze and lose their capacity heavily: if I remember correctly, it is more than 50% loss at -10 °C and complete death at -20 °C. The same problem applies to Li-ion accumulators: when the electrolyte is frozen, the electrons are stuck... It would also be great to add a link about battery/accumulator types and their advantages and disadvantages, but I have no links that are in English, easy to read, and all in one place... I do have a very good link about battery powering that really should be added here, but in the advanced section:

- RE: 💬 In Wall AC/DC Pcb (with Relay) for MySensors (SMD).
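Returning to the sketch at the top of this post list: the integer arithmetic in its getBandgap() function (the ATmega measures the ~1.1 V internal bandgap against Vcc, so the ADC reading lets you solve for Vcc) can be sanity-checked on a desktop. This is not part of the sketch; the ADC reading below is a made-up example value, and only the 1068 calibration constant comes from the code above.

```python
# Mirror of the sketch's getBandgap() math, using Python integer division
# (the C code uses positive long integer arithmetic, so // behaves the same).
INTERNAL_REFERENCE_VOLTAGE = 1068  # calibration constant from the sketch

def bandgap_to_vcc_times_100(adc_reading):
    """Returns Vcc * 100, e.g. 500 for a 5 V supply."""
    return ((INTERNAL_REFERENCE_VOLTAGE * 1024) // adc_reading + 5) // 10

def battery_percent(vcc_times_100):
    """Mirror of `getBandgap() / 3.3` in the sketch's loop()."""
    return int(vcc_times_100 / 3.3)

# An ADC reading of 365 corresponds to roughly Vcc = 3.0 V:
print(bandgap_to_vcc_times_100(365))       # 300
print(battery_percent(300))                # 90
```

Note that the percent formula saturates at 100 only for Vcc up to 3.3 V, which matches this battery-powered 3.3 V node.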
https://forum.mysensors.org/user/nekitoss
I'm stuck on a question from the book Java in Two Semesters, chapter 7.

2a) Write a program that asks the user to input a string, followed by a single character, and then tests whether the string starts with that character.

```java
public class StringTest {

    public static void main(String[] args) {
        String s;
        char c;

        // input string
        System.out.println("please enter a string: ");
        s = EasyScanner.nextString();

        // input char
        System.out.println("please enter a character: ");
        c = EasyScanner.nextChar();

        // compare string and char
        if (s.charAt(0) == c) {
            System.out.println("string starts with the character entered");
        } else {
            System.out.println("string does not start with character entered");
        }
    }
}
```

I'm using a class called EasyScanner, which helps avoid calling Scanner directly. OK, I think this is right for part (a), but I'm stuck on part (b).

b) Make your program work so that the case of the character is irrelevant. *need help on this part*

Thanks in advance :)
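The thread ends with part (b) unanswered. The usual approach is to normalize the case of both characters before comparing; in Java that is `Character.toLowerCase(s.charAt(0)) == Character.toLowerCase(c)`. A sketch of the same idea (shown in Python for brevity, not taken from the thread):

```python
def starts_with_ignore_case(s, c):
    """True if string s starts with character c, ignoring case."""
    if not s:  # guard against the empty string, which has no first character
        return False
    return s[0].lower() == c.lower()

print(starts_with_ignore_case("Hello", "h"))  # True
print(starts_with_ignore_case("world", "W"))  # True
print(starts_with_ignore_case("abc", "b"))    # False
```

Lower-casing both sides makes "H" and "h" compare equal while leaving non-letter characters unchanged.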
https://www.daniweb.com/programming/software-development/threads/397794/comparing-a-string-and-a-character-problem
by David Tucker

The embedded local database included with Adobe AIR gives AIR developers a great deal of power. Now you can develop extensive relational data models that exist solely on the user's hard drive. As you develop powerful applications with Adobe AIR, you may be interested in using Object Relational Mapping (ORM) to lessen the amount of SQL you need to write. In this two-part series, I introduce you to one of the options for ORM in AIR — FlexORM — as well as highlight some additional tools you may want to use. Through the course of these articles, you will learn how to set up the ORM to work with a database and map both simple and complex relationships in an AIR application. In addition, I examine what an ORM is, and what terminology you will need to know when working with FlexORM.

To follow along, download the sample files (180K, ZIP).

Server-side developers have been using ORM tools for many years. At its core, an ORM simply enables you to think of your application's data in terms of objects, not database tables. To understand this, consider the class diagram in Figure 1.

Figure 1. Sample contact class diagram.

In Figure 1, we are mapping a contact. In this case, we are creating a contact with a first name, last name, group of e-mail addresses, group of phone numbers, and group of addresses. This is not an unrealistic example, and if you were creating a real contact management application, your class model would probably be a great deal more complex than this. However, this should help illustrate the benefit of using an ORM.

While our class diagram includes four classes, to create the same data schema in a relational database, we need at least eight database tables (see Figure 2). In this case, each of our data types has a table, but we also have created a linking table to connect addresses, e-mail addresses, and phone numbers to our contact object. In addition, we have created a type table to store the type for each of those connected objects.
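The article's code is ActionScript, but the mismatch it describes is easy to reproduce with any general-purpose language. As a rough illustration only (the table and column names below are assumptions, not the article's exact schema), here is a contact that is one object in code fanning out into one insert per table, sketched with Python's standard library:

```python
import sqlite3
from dataclasses import dataclass, field

@dataclass
class EmailAddress:
    address: str
    type: str  # e.g. "home" or "work"

@dataclass
class Contact:
    first_name: str
    last_name: str
    emails: list = field(default_factory=list)

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE contact       (id INTEGER PRIMARY KEY, first_name TEXT, last_name TEXT);
    CREATE TABLE email_type    (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE email         (id INTEGER PRIMARY KEY, address TEXT, type_id INTEGER);
    CREATE TABLE contact_email (contact_id INTEGER, email_id INTEGER);
""")

def save_contact(contact):
    """One logical object -> one INSERT per table, in dependency order."""
    cur = conn.cursor()
    cur.execute("INSERT INTO contact (first_name, last_name) VALUES (?, ?)",
                (contact.first_name, contact.last_name))
    contact_id = cur.lastrowid
    for email in contact.emails:
        # look up (or create) the row in the type table...
        cur.execute("INSERT OR IGNORE INTO email_type (name) VALUES (?)", (email.type,))
        type_id = cur.execute("SELECT id FROM email_type WHERE name = ?",
                              (email.type,)).fetchone()[0]
        # ...insert the e-mail itself...
        cur.execute("INSERT INTO email (address, type_id) VALUES (?, ?)",
                    (email.address, type_id))
        # ...and finally the linking row back to the contact.
        cur.execute("INSERT INTO contact_email (contact_id, email_id) VALUES (?, ?)",
                    (contact_id, cur.lastrowid))
    conn.commit()
    return contact_id

cid = save_contact(Contact("Ada", "Lovelace",
                           emails=[EmailAddress("ada@example.com", "work")]))
```

And that is before phone numbers, addresses, updates, or deletes — exactly the bookkeeping an ORM is meant to hide.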
Figure 2. Contacts database tables diagram.

This particular collection of tables creates many complexities when dealing with the application's data. When creating a new contact, we need to perform several steps: insert the contact's basic data into the contact table; insert each e-mail address, phone number, and address into its own table, referencing the appropriate row in the corresponding type table; and insert a row into each linking table to connect those new rows back to the contact. In this case, creating a contact becomes a very complex task. While performing this with a synchronous connection for the database would be tedious, it would be unwieldy if you were performing all of these queries with an asynchronous connection. In addition to this complexity, you would have to tackle some additional questions as you develop the application.

Why does this complexity exist? If we look at this from a theoretical viewpoint, simply creating a single piece of data for your application should not be this complex. This problem is linked to the object-relational paradigm mismatch. While relational databases are the standard for storing data, object-oriented programming (OOP) is the standard for defining data. This leads to a mismatch: normalized data in a database will not match up with proper OOP class architecture.

This is where an ORM comes in. Just as its name indicates, an ORM provides a mapping between the object-oriented classes and the relational database. While object-oriented databases do exist, they represent only a minimal percentage of the market. In addition, Adobe AIR only supports the relational SQLite database locally.

According to Christian Bauer and Gavin King¹, an ORM provides four main features. Some ORMs provide all four of these functions, and these are referred to as full-featured ORMs. In reality, none of the ORMs for AIR are full featured yet, but they still provide a great deal of functionality that developers can utilize. A quick search across Google Code, Github, RIAForge, and other open source sites reveals many different AIR ORM solutions.

1. Bauer, Christian and Gavin King. Java Persistence with Hibernate. New York: Manning, 2006.
Some of the solutions meet one or two of the above criteria, but I found one open source project that meets the first, third, and fourth. This project, FlexORM, is hosted at RIAForge and is released under the BSD license.

Few things are as central to an ORM as the way the tool defines the object mapping. FlexORM utilizes metadata to define the object mapping, which is my personal preference for any ORM tool. To begin, we will create a mapping for a simplified version of our Contact class.

```actionscript
package vo
{
    [Bindable]
    [Table( name="CONTACT" )]
    public class ContactVO
    {
        [Id]
        public var id:int;

        [Column( name="first_name" )]
        public var firstName:String;

        [Column( name="last_name" )]
        public var lastName:String;
    }
}
```

In this example, you can see that we have a basic ActionScript class. This will save our basic contact data (which initially will have only a first name and last name). There are a few metadata tags that Flex developers should recognize as being out of the ordinary. These are the metadata tags used by FlexORM. For example, the [Table] metadata tag is what FlexORM uses to define this class as an entity that needs to be persisted. This tag also has an argument that allows you to define the name of the table that will store the entity. In addition, there is an [Id] tag that specifies which property will serve as the primary key for the database; this property needs to be of type int. Finally, [Column] tags are used to define the database columns. These are not required if you want the name of the property to be the name of the database column.

For creating a basic model object without any complex relationships, these are the only metadata tags you need to work with in FlexORM. One important step you need to follow, however, is to be sure that the Flex compiler keeps all of these metadata tags when it compiles your application. First, right-click your project in Adobe Flex Builder and choose Properties.
Select the Flex Compiler option on the left and add the following line to whatever is currently in the Additional Compiler Arguments field:

```
-keep-as3-metadata+=Table,Id,Column,ManyToOne,OneToMany,ManyToMany,Transient
```

Once these settings have been added and the basic model class is defined, we can create a basic application to test the ORM. In this case, I already created an AIR project and saved the model class from the first code sample above into a package named vo. Next, we need to add the FlexORM library to our project. You can download the file from RIAForge or from the sample files included with this article. Then, we need to add the flexorm.swc file to the project.

Now that we have added FlexORM to the application, we can construct the actual test application. In the main application file, I have the following code that is executed on the creationComplete event of the main WindowedApplication:

```actionscript
protected var entityManager:EntityManager = EntityManager.instance;

protected function application_creationCompleteHandler( event:FlexEvent ):void
{
    var dbFile:File = File.applicationStorageDirectory.resolvePath( "contacts.db" );
    var sqlConnection:SQLConnection = new SQLConnection();
    sqlConnection.open( dbFile );
    entityManager.sqlConnection = sqlConnection;
}
```

In this example, the first thing we do is create an instance of the EntityManager. Because the EntityManager is a singleton, we get the instance by referencing the static property instance on the EntityManager class. Next, we define the database file. In this case, it's a file named contacts.db in the application storage directory. Then, the SQLConnection is created. It is pointed to the database file and then it opens a connection using the synchronous mode. Finally, the instance of the SQLConnection is set on the EntityManager's sqlConnection property.

Next, a simple form is created to collect the first and last name of the contact. This form also has a Save Contact button.
When this button is clicked, the saveContact() method is called:

```actionscript
protected function saveContact():void
{
    var contact:ContactVO = new ContactVO();
    contact.firstName = firstNameInput.text;
    contact.lastName = lastNameInput.text;
    entityManager.save( contact );
}
```

In this method, a new instance of the ContactVO is created. The first and last names are assigned, and the save method of the EntityManager is called with the instance of the ContactVO passed into it. This takes care of the entire save process. When save is called, the following SQL is executed on the contacts.db database:

```sql
create table if not exists main.CONTACTS(id integer primary key autoincrement,first_name text, updated_at date,last_name text,created_at date,marked_for_deletion boolean)

create index if not exists main.c_o_n_t_a_c_t_id_idx_id_idx on CONTACTS(id asc)

insert into main.CONTACTS( first_name, updated_at, last_name, created_at, marked_for_deletion) values (:firstName, :updatedAt, :lastName, :createdAt, :markedForDeletion)
```

The first thing FlexORM does is create the Contacts table based on the mapping you have defined in the model class. It also adds some additional properties that are used internally by FlexORM. Next, it creates an index for the table based on the primary key. Finally, it uses an insert statement with named parameters to insert your values into the database.

If you use an SQLite administration tool (such as the SQLite Admin tool for AIR developed by Christophe Coenraets), you can view the data in the table to verify that it was inserted correctly. In Figure 3, you can see that I have verified the data was inserted correctly.

Figure 3. SQLite administration tool.

In this example, we have accomplished many things with only a little ActionScript. We connected FlexORM to a file on the local file system. We used metadata to define simple mapping within the application's data model. And we utilized the EntityManager in FlexORM to save new instances of a data model class.
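The generated SQL above uses named parameters, a pattern that is not specific to ActionScript. As an aside (a sketch, not FlexORM output), Python's sqlite3 module supports the same :name placeholder style, bound from a dictionary:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE contacts (id INTEGER PRIMARY KEY AUTOINCREMENT, "
    "first_name TEXT, last_name TEXT)"
)
# Named parameters (:firstName, :lastName) keep the statement readable
# and safe from SQL injection, just like in the generated SQL above.
conn.execute(
    "INSERT INTO contacts (first_name, last_name) VALUES (:firstName, :lastName)",
    {"firstName": "David", "lastName": "Tucker"},
)
row = conn.execute("SELECT first_name, last_name FROM contacts").fetchone()
print(row)  # ('David', 'Tucker')
```

Binding values by name rather than by position is what lets an ORM generate the statement once and reuse it for any entity instance.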
I hope you see some of the benefits of using an ORM already. However, the true benefits of using an ORM will become extremely obvious in the second part of this article, where I cover mapping more complex relationships. As I mentioned previously, ORM is a tool that should be in any AIR developer's toolbox. With the techniques from this two-part article, you will be able to develop AIR applications that logically persist data without writing a single line of SQL code.

After I started this article, there was an interesting development in the AIR ORM field. The Adobe ColdFusion 9 public beta shipped with an AIR ORM that is used with the ColdFusion 9 AIR online/offline data synchronization feature. Interestingly enough, this ORM does not require a server-side component to work; it can be used entirely on the client side. While this is still in beta, it is a promising solution that AIR developers should keep an eye on because it is similar to FlexORM in its implementation.

David Tucker is a software engineer for Universal Mind, focusing on the next generation of RIAs with Adobe Flex and AIR. He is based in Savannah, Georgia, but you can find him online at DavidTucker.net.
http://www.adobe.com/inspire-archive/october2009/articles/article7/
The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

As part of this tutorial, you'll use the Bootstrap toolkit to style your application so it is more visually appealing. Bootstrap will help you incorporate responsive web pages in your web application so that it also works well on mobile browsers without writing your own HTML, CSS, and JavaScript code to achieve these goals. The toolkit will allow you to focus on learning how Flask works.

Flask uses the Jinja template engine to dynamically build HTML pages using familiar Python concepts such as variables, loops, lists, and so on. You'll use these templates as part of this project.

In this tutorial, you'll build a small web blog using Flask and SQLite in Python 3. Users of the application can view all the posts in your database and click on the title of a post to view its contents, with the ability to add a new post to the database and edit or delete an existing post.

Before you start following this guide, you will need a local Python 3 programming environment set up in a project directory, which in this tutorial we'll call flask_blog.

In this step, you'll activate your Python environment and install Flask using the pip package installer. If you haven't already activated your programming environment, make sure you're in your project directory (flask_blog) and use the following command to activate the environment:

```
source env/bin/activate
```

Once your programming environment is activated, your prompt will be prefixed with the name of the environment (env in this case, though it might have another name depending on how you named it during creation). This prefix is an indication that the environment is currently active.

Note: You can use Git, a version control system, to effectively manage and track the development process for your project. To learn how to use Git, you might want to check out our Introduction to Git: Installation, Usage, and Branches article.
If you are using Git, it is a good idea to ignore the newly created env directory in your .gitignore file to avoid tracking files not related to the project.

Now you'll install Python packages and isolate your project code away from the main Python system installation. You'll do this using pip and python. To install Flask, run the following command:

```
pip install flask
```

Once the installation is complete, run the following command to confirm the installation:

```
python -c "import flask; print(flask.__version__)"
```

You use the python command line interface with the option -c to execute Python code. Next you import the flask package with import flask; then print the Flask version, which is provided via the flask.__version__ variable.

The output will be a version number similar to the following:

```
1.1.2
```

You've created the project folder, a virtual environment, and installed Flask. You're now ready to move on to setting up your base application.

Now that you have your programming environment set up, you'll start using Flask. In this step, you'll make a small web application inside a Python file and run it to start the server, which will display some information on the browser.

In your flask_blog directory, open a file named hello.py for editing, using nano or your favorite text editor:

```
nano hello.py
```

This hello.py file will serve as a minimal example of how to handle HTTP requests. Inside it, you'll import the Flask object, and create a function that returns an HTTP response. Write the following code inside hello.py:

```python
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello():
    return 'Hello, World!'
```

In the preceding code block, you first import the Flask object from the flask package. You then use it to create your Flask application instance with the name app. You pass the special variable __name__ that holds the name of the current Python module.
It’s used to tell the instance where it’s located—you need this because Flask sets up some paths behind the scenes. Once you create the app instance, 'Hello, World!' as a response. Save and close the file. To run your web application, you’ll first tell Flask where to find the application (the hello.py file in your case) with the FLASK_APP environment variable: - export FLASK_APP=hello Then run it in development mode with the FLASK_ENV environment variable: - export FLASK_ENV=development Lastly, run the application using the flask run command: - flask run Once the application is running the output will be something like this: Output* Serving Flask app "hello" (lazy loading) * Environment: development * Debug mode: on * Running on (Press CTRL+C to quit) * Restarting with stat * Debugger is active! * Debugger PIN: 813-894-335 The preceding output has several pieces of information, such as: Debug mode: onsignifies that the Flask debugger is running. This is useful when developing because it gives us detailed error messages when things go wrong, which makes troubleshooting easier., 127.0.0.1is the IP that represents your machine’s localhostand :5000is the port number. Open a browser and type in the URL, you will receive the string Hello, World! as a response, this confirms that your application is successfully running. Warning Flask uses a simple web server to serve our application in a development environment, which also means that the Flask debugger is running to make catching errors easier. This development server should not be used in a production deployment. See the Deployment Options page on the Flask documentation for more information, you can also check out this Flask deployment tutorial. You can now leave the development server running in the terminal and open another terminal window. Move into the project folder where hello.py is located, activate the virtual environment, set the environment variables FLASK_ENV and FLASK_APP, and continue to the next steps. 
(These commands are listed earlier in this step.)

Note: When opening a new terminal, it is important to remember activating the virtual environment and setting the environment variables FLASK_ENV and FLASK_APP. Also, since flask run uses port 5000 by default, you cannot start a second application the same way while the first one is running; to run another application at the same time, you can pass a different port number to the -p argument. For example, to run another application on port 5001 use the following command:

```
flask run -p 5001
```

You now have a small Flask web application. You've run your application and displayed information on the web browser. Next, you'll use HTML files in your application.

Currently your application only displays a simple message without any HTML. Web applications mainly use HTML to display information for the visitor, so you'll now work on incorporating HTML files in your app, which can be displayed on the web browser.

Flask provides a render_template() helper function that allows use of the Jinja template engine. This will make managing HTML much easier by writing your HTML code in .html files as well as using logic in your HTML code. You'll use these HTML files (templates) to build all of your application pages, such as the main page where you'll display the current blog posts, the page of the blog post, the page where the user can add a new post, and so on.

In this step, you'll create your main Flask application in a new file. First, in your flask_blog directory, use nano or your favorite editor to create and edit your app.py file. This will hold all the code you'll use to create the blogging application:

```
nano app.py
```

In this new file, you'll import the Flask object to create a Flask application instance as you previously did. You'll also import the render_template() helper function that lets you render HTML template files that exist in the templates folder you're about to create. The file will have a single view function that will be responsible for handling requests to the main / route.
Add the following content:

```python
from flask import Flask, render_template

app = Flask(__name__)


@app.route('/')
def index():
    return render_template('index.html')
```

The index() view function returns the result of calling render_template() with index.html as an argument; this tells render_template() to look for a file called index.html in the templates folder. Both the folder and the file do not yet exist, so you will get an error if you were to run the application at this point. You'll run it nonetheless so you're familiar with this commonly encountered exception. You'll then fix it by creating the needed folder and file.

Save and exit the file.

Stop the development server in your other terminal that runs the hello application with CTRL+C. Before you run the application, make sure you correctly specify the value for the FLASK_APP environment variable, since you're no longer using the application hello:

```
export FLASK_APP=app
flask run
```

Opening the URL http://127.0.0.1:5000/ in your browser will result in the debugger page informing you that the index.html template was not found. The main line in the code that was responsible for this error will be highlighted. In this case, it is the line return render_template('index.html'). If you click this line, the debugger will reveal more code so that you have more context to help you solve the problem.

To fix this error, create a directory called templates inside your flask_blog directory. Then inside it, open a file called index.html for editing:

```
mkdir templates
nano templates/index.html
```

Next, add the following HTML code inside index.html:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>FlaskBlog</title>
</head>
<body>
   <h1>Welcome to FlaskBlog</h1>
</body>
</html>
```

Save the file and use your browser to navigate to http://127.0.0.1:5000/ again, or refresh the page. This time the browser should display the text Welcome to FlaskBlog in an <h1> tag.
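Not part of the tutorial, but worth knowing: Flask ships a test client that lets you check a route's response without opening a browser at all. The sketch below uses a minimal string-returning app (rather than the template-based app above, which needs the templates folder on disk):

```python
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello():
    return 'Hello, World!'


# app.test_client() issues requests directly against the app,
# no development server required.
with app.test_client() as client:
    response = client.get('/')
    status = response.status_code
    body = response.get_data(as_text=True)
    print(status)  # 200
    print(body)    # Hello, World!
```

The same technique works for the template-rendering routes once the templates exist, which makes it a convenient basis for automated tests later in a project.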
In addition to the templates folder, Flask web applications also typically have a static folder for hosting static files, such as CSS files, JavaScript files, and images the application uses.

You can create a style.css style sheet file to add CSS to your application. First, create a directory called static inside your main flask_blog directory:

```
mkdir static
```

Then create another directory called css inside the static directory to host .css files. This is typically done to organize static files in dedicated folders; as such, JavaScript files typically live inside a directory called js, images are put in a directory called images (or img), and so on. The following command will create the css directory inside the static directory:

```
mkdir static/css
```

Then open a style.css file inside the css directory for editing:

```
nano static/css/style.css
```

Add the following CSS rule to your style.css file:

```css
h1 {
    border: 2px #eee solid;
    color: brown;
    text-align: center;
    padding: 10px;
}
```

The CSS code will add a border, change the color to brown, center the text, and add a little padding to <h1> tags.

Save and close the file.

Next, open the index.html template file for editing:

```
nano templates/index.html
```

You'll add a link to the style.css file inside the <head> section of the index.html template file:

```html
. . .
<head>
    <meta charset="UTF-8">
    <link rel="stylesheet" href="{{ url_for('static', filename= 'css/style.css') }}">
    <title>FlaskBlog</title>
</head>
. . .
```

Here you use the url_for() helper function to generate the appropriate location of the file. The first argument specifies that you're linking to a static file and the second argument is the path of the file inside the static directory.

Save and close the file.

Upon refreshing the index page of your application, you will notice that the text Welcome to FlaskBlog is now in brown, centered, and enclosed inside a border. You can use the CSS language to style the application and make it more appealing using your own design.
However, if you’re not a web designer, or if you aren’t familiar with CSS, you can use the Bootstrap toolkit, which provides easy-to-use components for styling your application. In this project, we’ll use Bootstrap.

You might have guessed that making another HTML template would mean repeating most of the HTML code you already wrote in the index.html template. You can avoid unnecessary code repetition with the help of a base template file, which all of your HTML files will inherit from. See Template Inheritance in Jinja for more information.

To make a base template, first create a file called base.html inside your templates directory:

- nano templates/base.html

Type the following code in your base.html template. (The standard Bootstrap <meta>, <link>, and <script> boilerplate is abbreviated in the comments below; see the Bootstrap documentation for the full tags.)

<!DOCTYPE html>
<html lang="en">
<head>
    <!-- Bootstrap <meta> and <link rel="stylesheet"> tags go here -->
    <title>{% block title %} {% endblock %}</title>
</head>
<body>
    <nav class="navbar navbar-expand-md navbar-light bg-light">
        <a class="navbar-brand" href="{{ url_for('index') }}">FlaskBlog</a>
        <div>
            <ul class="navbar-nav">
                <li class="nav-item active">
                    <a class="nav-link" href="#">About</a>
                </li>
            </ul>
        </div>
    </nav>
    <div class="container">
        {% block content %} {% endblock %}
    </div>
    <!-- Bootstrap <script> tags go here -->
</body>
</html>

Save and close the file once you’re done editing it.

Most of the code in the preceding block is standard HTML and code required for Bootstrap. The <meta> tags provide information for the web browser, the <link> tag links the Bootstrap CSS files, and the <script> tags are links to JavaScript code that allows some additional Bootstrap features; check out the Bootstrap documentation for more. However, the following parts are specific to the Jinja template engine:

{% block title %} {% endblock %}: A block that serves as a placeholder for a title. You’ll later use it in other templates to give a custom title to each page in your application without rewriting the entire <head> section each time.

{{ url_for('index') }}: A function call that will return the URL for the index() view function. This is different from the past url_for() call you used to link a static CSS file, because it takes only one argument, which is the view function’s name, and links to the route associated with the function instead of a static file.
{% block content %} {% endblock %}: Another block that will be replaced by content depending on the child template (templates that inherit from base.html) that will override it. Now that you have a base template, you can take advantage of it using inheritance. Open the index.html file: - nano templates/index.html Then replace its contents with the following: {% extends 'base.html' %} {% block content %} <h1>{% block title %} Welcome to FlaskBlog {% endblock %}</h1> {% endblock %} In this new version of the index.html template, you use the {% extends %} tag to inherit from the base.html template. You then extend it via replacing the content block in the base template with what is inside the content block in the preceding code block. This content block contains an <h1> tag with the text Welcome to FlaskBlog inside a title block, which in turn replaces the original title block in the base.html template with the text Welcome to FlaskBlog. This way, you can avoid repeating the same text twice, as it works both as a title for the page and a heading that appears below the navigation bar inherited from the base template. Template inheritance also gives you the ability to reuse the HTML code you have in other templates ( base.html in this case) without having to repeat it each time it is needed. Save and close the file and refresh the index page on your browser. You’ll see your page with a navigation bar and styled title. You’ve used HTML templates and static files in Flask. You also used Bootstrap to start refining the look of your page and a base template to avoid code repetition. In the next step, you’ll set up a database that will store your application data. In this step, you’ll set up a database to store data, that is, the blog posts for your application. You’ll also populate the database with a few example entries. 
You’ll use a SQLite database file to store your data because the sqlite3 module, which we will use to interact with the database, is readily available in the standard Python library. For more information about SQLite, check out this tutorial. First, because data in SQLite is stored in tables and columns, and since your data mainly consists of blog posts, you first need to create a table called posts with the necessary columns. You’ll create a .sql file that contains SQL commands to create the posts table with a few columns. You’ll then use this file to create the database. Open a file called schema.sql inside your flask_blog directory: - nano schema.sql Type the following SQL commands inside this file: DROP TABLE IF EXISTS posts; CREATE TABLE posts ( id INTEGER PRIMARY KEY AUTOINCREMENT, created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP, title TEXT NOT NULL, content TEXT NOT NULL ); Save and close the file. The first SQL command is DROP TABLE IF EXISTS posts;, this deletes any already existing tables named posts so you don’t get confusing behavior. Note that this will delete all of the content you have in the database whenever you use these SQL commands, so ensure you don’t write any important content in the web application until you finish this tutorial and experiment with the final result. Next, CREATE TABLE posts is used to create the posts table with the following columns: id: An integer that represents a primary key, this will get assigned a unique value by the database for each entry (that is a blog post). created: The time the blog post was created at. NOT NULLsignifies that this column should not be empty and the DEFAULTvalue is the CURRENT_TIMESTAMPvalue, which is the time at which the post was added to the database. Just like id, you don’t need to specify a value for this column, as it will be automatically filled in. title: The post title. content: The post content. 
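Before creating the real database, you can rehearse these statements against a throwaway in-memory SQLite database using only the standard library (a quick sanity check, not part of the tutorial's files; the schema text is inlined here instead of being read from schema.sql):

```python
import sqlite3

# The same statements as schema.sql, inlined for a quick check.
SCHEMA = """
DROP TABLE IF EXISTS posts;
CREATE TABLE posts (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    title TEXT NOT NULL,
    content TEXT NOT NULL
);
"""

conn = sqlite3.connect(':memory:')  # throwaway in-memory database
conn.executescript(SCHEMA)

# PRAGMA table_info reports one row per column; row[1] is the column name.
columns = [row[1] for row in conn.execute('PRAGMA table_info(posts)')]
print(columns)  # ['id', 'created', 'title', 'content']
conn.close()
```

If the column list matches, the schema is syntactically valid and ready to be used against a file-backed database.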
Now that you have a SQL schema in the schema.sql file, you’ll use it to create the database using a Python file that will generate an SQLite .db database file. Open a file named init_db.py inside the flask_blog directory using your preferred editor: - nano init_db.py And then add the following code. import sqlite3 connection = sqlite3.connect('database.db') with open('schema.sql') as f: connection.executescript(f.read()) cur = connection.cursor() cur.execute("INSERT INTO posts (title, content) VALUES (?, ?)", ('First Post', 'Content for the first post') ) cur.execute("INSERT INTO posts (title, content) VALUES (?, ?)", ('Second Post', 'Content for the second post') ) connection.commit() connection.close() You first import the sqlite3 module and then open a connection to a database file named database.db, which will be created once you run the Python file. Then you use the open() function to open the schema.sql file. Next you execute its contents using the executescript() method that executes multiple SQL statements at once, which will create the posts table. You create a Cursor object that allows you to use its execute() method to execute two INSERT SQL statements to add two blog posts to your posts table. Finally, you commit the changes and close the connection. Save and close the file and then run it in the terminal using the python command: - python init_db.py Once the file finishes execution, a new file called database.db will appear in your flask_blog directory. This means you’ve successfully set up your database. In the next step, you’ll retrieve the posts you inserted into your database and display them in your application’s homepage. Now that you’ve set up your database, you can now modify the index() view function to display all the posts you have in your database. 
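If you want to see the whole seeding flow in isolation first, the same pattern can be exercised end to end against an in-memory database (a rehearsal sketch; ':memory:' replaces the database.db filename so nothing is written to disk):

```python
import sqlite3

connection = sqlite3.connect(':memory:')
# Create the posts table, mirroring schema.sql.
connection.executescript("""
CREATE TABLE posts (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
    title TEXT NOT NULL,
    content TEXT NOT NULL
);
""")

cur = connection.cursor()
cur.execute("INSERT INTO posts (title, content) VALUES (?, ?)",
            ('First Post', 'Content for the first post'))
cur.execute("INSERT INTO posts (title, content) VALUES (?, ?)",
            ('Second Post', 'Content for the second post'))
connection.commit()

# Read the rows back to confirm the inserts worked.
titles = [row[0] for row in cur.execute('SELECT title FROM posts ORDER BY id')]
print(titles)  # ['First Post', 'Second Post']
connection.close()
```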
Open the app.py file to make the following modifications: - nano app.py For your first modification, you’ll import the sqlite3 module at the top of the file: import sqlite3 from flask import Flask, render_template . . . Next, you’ll create a function that creates a database connection and return it. Add it directly after the imports: . . . from flask import Flask, render_template def get_db_connection(): conn = sqlite3.connect('database.db') conn.row_factory = sqlite3.Row return conn . . . This get_db_connection() function opens a connection to the database.db database file, and then sets the row_factory attribute to sqlite3.Row so you can have name-based access to columns. This means that the database connection will return rows that behave like regular Python dictionaries. Lastly, the function returns the conn connection object you’ll be using to access the database. After defining the get_db_connection() function, modify the index() function to look like the following: . . . @app.route('/') def index(): conn = get_db_connection() posts = conn.execute('SELECT * FROM posts').fetchall() conn.close() return render_template('index.html', posts=posts) In this new version of the index() function, you first open a database connection using the get_db_connection() function you defined earlier. Then you execute an SQL query to select all entries from the posts table. You implement the fetchall() method to fetch all the rows of the query result, this will return a list of the posts you inserted into the database in the previous step. You close the database connection using the close() method and return the result of rendering the index.html template. You also pass the posts object as an argument, which contains the results you got from the database, this will allow you to access the blog posts in the index.html template. With these modifications in place, save and close the app.py file. 
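The effect of setting row_factory to sqlite3.Row can be demonstrated on its own with a small standard-library snippet (a toy table, separate from the tutorial's database):

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.row_factory = sqlite3.Row  # rows now support name-based access
conn.execute('CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)')
conn.execute("INSERT INTO posts (title) VALUES ('Hello')")

row = conn.execute('SELECT * FROM posts').fetchone()
print(row['title'])  # access by column name, like a dictionary
print(row[1])        # positional access still works
print(row.keys())    # ['id', 'title']
conn.close()
```

Without the row_factory assignment, rows come back as plain tuples and `row['title']` would raise a TypeError.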
Now that you’ve passed the posts you fetched from the database to the index.html template, you can use a for loop to display each post on your index page. Open the index.html file: - nano templates/index.html Then, modify it to look as follows: {% extends 'base.html' %} {% block content %} <h1>{% block title %} Welcome to FlaskBlog {% endblock %}</h1> {% for post in posts %} <a href="#"> <h2>{{ post['title'] }}</h2> </a> <span class="badge badge-primary">{{ post['created'] }}</span> <hr> {% endfor %} {% endblock %} Here, the syntax {% for post in posts %} is a Jinja for loop, which is similar to a Python for loop except that it has to be later closed with the {% endfor %} syntax. You use this syntax to loop over each item in the posts list that was passed by the index() function in the line return render_template('index.html', posts=posts). Inside this for loop, you display the post title in an <h2> heading inside an <a> tag (you’ll later use this tag to link to each post individually). You display the title using a literal variable delimiter ( {{ ... }}). Remember that post will be a dictionary-like object, so you can access the post title with post['title']. You also display the post creation date using the same method. Once you are done editing the file, save and close it. Then navigate to the index page in your browser. You’ll see the two posts you added to the database on your page. Now that you’ve modified the index() view function to display all the posts you have in the database on your application’s homepage, you’ll move on to display each post in a single page and allow users to link to each individual post. In this step, you’ll create a new Flask route with a view function and a new HTML template to display an individual blog post by its ID. By the end of this step, the URL will be a page that displays the first post (because it has the ID 1). The URL will display the post with the associated ID number if it exists. 
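As a preview of the next step, here is a toy illustration of how a URL such as /1 can be matched and its numeric ID extracted. This is a simplified stand-in using a regular expression, not Flask's actual routing code; Flask does this for you with variable rules:

```python
import re

# A toy stand-in for a variable rule like '/<int:post_id>'.
rule = re.compile(r'^/(?P<post_id>\d+)$')

def match(path):
    m = rule.match(path)
    if m is None:
        return None
    return int(m.group('post_id'))  # the int converter yields an integer

print(match('/1'))      # 1
print(match('/42'))     # 42
print(match('/about'))  # None: no match, so other rules would be tried
```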
Open app.py for editing: - nano app.py Since you’ll need to get a blog post by its ID from the database in multiple locations later in this project, you’ll create a standalone function called get_post(). You can call it by passing it an ID and receive back the blog post associated with the provided ID, or make Flask respond with a 404 Not Found message if the blog post does not exist. To respond with a 404 page, you need to import the abort() function from the Werkzeug library, which was installed along with Flask, at the top of the file: import sqlite3 from flask import Flask, render_template from werkzeug.exceptions import abort . . . Then, add the get_post() function right after the get_db_connection() function you created in the previous step: . . . def get_db_connection(): conn = sqlite3.connect('database.db') conn.row_factory = sqlite3.Row return conn def get_post(post_id): conn = get_db_connection() post = conn.execute('SELECT * FROM posts WHERE id = ?', (post_id,)).fetchone() conn.close() if post is None: abort(404) return post . . . This new function has a post_id argument that determines what blog post to return. Inside the function, you use the get_db_connection() function to open a database connection and execute a SQL query to get the blog post associated with the given post_id value. You add the fetchone() method to get the result and store it in the post variable then close the connection. If the post variable has the value None, meaning no result was found in the database, you use the abort() function you imported earlier to respond with a 404 error code and the function will finish execution. If however, a post was found, you return the value of the post variable. Next, add the following view function at the end of the app.py file: . . . 
@app.route('/<int:post_id>') def post(post_id): post = get_post(post_id) return render_template('post.html', post=post) In this new view function, you add a variable rule <int:post_id> to specify that the part after the slash ( /) is a positive integer (marked with the int converter) that you need to access in your view function. Flask recognizes this and passes its value to the post_id keyword argument of your post() view function. You then use the get_post() function to get the blog post associated with the specified ID and store the result in the post variable, which you pass to a post.html template that you’ll soon create. Save the app.py file and open a new post.html template file for editing: - nano templates/post.html Type the following code in this new post.html file. This will be similar to the index.html file, except that it will only display a single post, in addition to also displaying the contents of the post: {% extends 'base.html' %} {% block content %} <h2>{% block title %} {{ post['title'] }} {% endblock %}</h2> <span class="badge badge-primary">{{ post['created'] }}</span> <p>{{ post['content'] }}</p> {% endblock %} You add the title block that you defined in the base.html template to make the title of the page reflect the post title that is displayed in an <h2> heading at the same time. Save and close the file. You can now navigate to the following URLs to see the two posts you have in your database, along with a page that tells the user that the requested blog post was not found (since there is no post with an ID number of 3 so far): Going back to the index page, you’ll make each post title link to its respective page. You’ll do this using the url_for() function. 
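Stripped of the Flask-specific parts, the heart of get_post() is a parameterized fetchone() call that returns None when no row matches, which is exactly the case the abort(404) branch handles. A standard-library sketch with toy data:

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.row_factory = sqlite3.Row
conn.execute('CREATE TABLE posts (id INTEGER PRIMARY KEY, title TEXT)')
conn.execute("INSERT INTO posts (id, title) VALUES (1, 'First Post')")

def get_post(post_id):
    # The trailing comma makes (post_id,) a one-element tuple, as
    # required by sqlite3's parameter substitution.
    return conn.execute('SELECT * FROM posts WHERE id = ?',
                        (post_id,)).fetchone()

print(get_post(1)['title'])  # First Post
print(get_post(3))           # None: the view would call abort(404)
```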
First, open the index.html template for editing: - nano templates/index.html Then change the value of the href attribute from # to {{ url_for('post', post_id=post['id']) }} so that the for loop will look exactly as follows: {% for post in posts %} <a href="{{ url_for('post', post_id=post['id']) }}"> <h2>{{ post['title'] }}</h2> </a> <span class="badge badge-primary">{{ post['created'] }}</span> <hr> {% endfor %} Here, you pass 'post' to the url_for() function as a first argument. This is the name of the post() view function and since it accepts a post_id argument, you give it the value post['id']. The url_for() function will return the proper URL for each post based on its ID. Save and close the file. The links on the index page will now function as expected. With this, you’ve now finished building the part of the application responsible for displaying the blog posts in your database. Next, you’ll add the ability to create, edit, and delete blog posts to your application. Now that you’ve finished displaying the blog posts that are present in the database on the web application, you need to allow the users of your application to write new blog posts and add them to the database, edit the existing ones, and delete unnecessary blog posts. Up to this point, you have an application that displays the posts in your database but provides no way of adding a new post unless you directly connect to the SQLite database and add one manually. In this section, you’ll create a page on which you will be able to create a post by providing its title and content. Open the app.py file for editing: - nano app.py First, you’ll import the following from the Flask framework: requestobject to access incoming request data that will be submitted via an HTML form. url_for()function to generate URLs. flash()function to flash a message when a request is processed. redirect()function to redirect the client to a different location. 
Add the imports to your file like the following: import sqlite3 from flask import Flask, render_template, request, url_for, flash, redirect from werkzeug.exceptions import abort . . . The flash() function stores flashed messages in the client’s browser session, which requires setting a secret key. This secret key is used to secure sessions, which allow Flask to remember information from one request to another, such as moving from the new post page to the index page. The user can access the information stored in the session, but cannot modify it unless they have the secret key, so you must never allow anyone to access your secret key. See the Flask documentation for sessions for more information. To set a secret key, you’ll add a SECRET_KEY configuration to your application via the app.config object. Add it directly following the app definition before defining the index() view function: . . . app = Flask(__name__) app.config['SECRET_KEY'] = 'your secret key' @app.route('/') def index(): conn = get_db_connection() posts = conn.execute('SELECT * FROM posts').fetchall() conn.close() return render_template('index.html', posts=posts) . . . Remember that the secret key should be a long random string. After setting a secret key, you’ll create a view function that will render a template that displays a form you can fill in to create a new blog post. Add this new function at the bottom of the file: . . . @app.route('/create', methods=('GET', 'POST')) def create(): return render_template('create.html') This creates a /create route that accepts both GET and POST requests. GET requests are accepted by default. To also accept POST requests, which are sent by the browser when submitting forms, you’ll pass a tuple with the accepted types of requests to the methods argument of the @app.route() decorator. Save and close the file. 
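As noted above, the secret key should be a long random string rather than the 'your secret key' placeholder. One common way to generate such a value is the standard-library secrets module (shown here as a suggestion; any sufficiently long random string works):

```python
import secrets

# 16 random bytes rendered as 32 hex characters, long enough for a
# Flask SECRET_KEY. Generate it once and keep it out of your source
# code, for example in an environment variable.
key = secrets.token_hex(16)
print(key)       # e.g. '9b1ce5f2...' (different every run)
print(len(key))  # 32
```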
To create the template, open a file called create.html inside your templates folder: - nano templates/create.html Add the following code inside this new file: {% extends 'base.html' %} {% block content %} <h1>{% block title %} Create a New Post {% endblock %}</h1> <form method="post"> <div class="form-group"> <label for="title">Title</label> <input type="text" name="title" placeholder="Post title" class="form-control" value="{{ request.form['title'] }}"></input> </div> <div class="form-group"> <label for="content">Content</label> <textarea name="content" placeholder="Post content" class="form-control">{{ request.form['content'] }}</textarea> </div> <div class="form-group"> <button type="submit" class="btn btn-primary">Submit</button> </div> </form> {% endblock %} Most of this code is standard HTML. It will display an input box for the post title, a text area for the post content, and a button to submit the form. The value of the post title input is {{ request.form['title'] }} and the text area has the value {{ request.form['content'] }}, this is done so that the data you enter does not get lost if something goes wrong. For example, if you write a long post and you forget to give it a title, a message will be displayed informing you that the title is required. This will happen without losing the post you wrote since it will be stored in the request global object that you have access to in your templates. Now, with the development server running, use your browser to navigate to the /create route: You will see a Create a New Post page with a box for a title and content. This form submits a POST request to your create() view function. However, there is no code to handle a POST request in the function yet, so nothing happens after filling in the form and submitting it. You’ll handle the incoming POST request when a form is submitted. You’ll do this inside the create() view function. You can separately handle the POST request by checking the value of request.method. 
When its value is set to 'POST' it means the request is a POST request, you’ll then proceed to extract submitted data, validate it, and insert it into your database. Open the app.py file for editing: - nano app.py Modify the create() view function to look exactly as follows: . . . @app.route('/create', methods=('GET', 'POST')) def create(): if request.method == 'POST': title = request.form['title'] content = request.form['content'] if not title: flash('Title is required!') else: conn = get_db_connection() conn.execute('INSERT INTO posts (title, content) VALUES (?, ?)', (title, content)) conn.commit() conn.close() return redirect(url_for('index')) return render_template('create.html') In the if statement you ensure that the code following it is only executed when the request is a POST request via the comparison request.method == 'POST'. You then extract the submitted title and content from the request.form object that gives you access to the form data in the request. If the title is not provided, the condition if not title would be fulfilled, displaying a message to the user informing them that the title is required. If, on the other hand, the title is provided, you open a connection with the get_db_connection() function and insert the title and the content you received into the posts table. You then commit the changes to the database and close the connection. After adding the blog post to the database, you redirect the client to the index page using the redirect() function passing it the URL generated by the url_for() function with the value 'index' as an argument. Save and close the file. Now, navigate to the /create route using your web browser: Fill in the form with a title of your choice and some content. Once you submit the form, you will see the new post listed on the index page. Lastly, you’ll display flashed messages and add a link to the navigation bar in the base.html template to have easy access to this new page. 
Open the template file:

- nano templates/base.html

Edit the file by adding a new <li> tag following the About link inside the <nav> tag. Then add a new for loop directly above the content block to display the flashed messages below the navigation bar. These messages are available from the special get_flashed_messages() function Flask provides:

<li class="nav-item active">
    <a class="nav-link" href="#">About</a>
</li>
<li class="nav-item">
    <a class="nav-link" href="{{ url_for('create') }}">New Post</a>
</li>
</ul>
</div>
</nav>
<div class="container">
    {% for message in get_flashed_messages() %}
        <div class="alert alert-danger">{{ message }}</div>
    {% endfor %}
    {% block content %} {% endblock %}
</div>

Save and close the file. The navigation bar will now have a New Post item that links to the /create route.

For a blog to be up to date, you’ll need to be able to edit your existing posts. This section will guide you through creating a new page in your application to simplify the process of editing a post.

First, you’ll add a new route to the app.py file. Its view function will receive the ID of the post that needs to be edited; the URL will be in the format /post_id/edit, with the post_id variable being the ID of the post. Open the app.py file for editing:

- nano app.py

Next, add the following edit() view function at the end of the file. Editing an existing post is similar to creating a new one, so this view function will be similar to the create() view function:

. . .

@app.route('/<int:id>/edit', methods=('GET', 'POST'))
def edit(id):
    post = get_post(id)

    if request.method == 'POST':
        title = request.form['title']
        content = request.form['content']

        if not title:
            flash('Title is required!')
        else:
            conn = get_db_connection()
            conn.execute('UPDATE posts SET title = ?, content = ?'
                         ' WHERE id = ?',
                         (title, content, id))
            conn.commit()
            conn.close()
            return redirect(url_for('index'))

    return render_template('edit.html', post=post)

The post you edit is determined by the URL, and Flask will pass the ID number to the edit() function via the id argument. You pass this value to the get_post() function to fetch the post associated with the provided ID from the database. The new data will come in a POST request, which is handled inside the if request.method == 'POST' condition. Just like when you create a new post, you first extract the data from the request.form object, then flash a message if the title has an empty value; otherwise, you open a database connection. Then you update the posts table by setting a new title and new content where the ID of the post in the database is equal to the ID that was in the URL.

In the case of a GET request, you render an edit.html template, passing in the post variable that holds the returned value of the get_post() function. You’ll use this to display the existing title and content on the edit page. Save and close the file, then create a new edit.html template:

- nano templates/edit.html

Write the following code inside this new file:

{% extends 'base.html' %}

{% block content %}
<h1>{% block title %} Edit "{{ post['title'] }}" {% endblock %}</h1>

<form method="post">
    <div class="form-group">
        <label for="title">Title</label>
        <input type="text" name="title" placeholder="Post title"
               class="form-control"
               value="{{ request.form['title'] or post['title'] }}">
    </div>

    <div class="form-group">
        <label for="content">Content</label>
        <textarea name="content" placeholder="Post content"
                  class="form-control">{{ request.form['content'] or post['content'] }}</textarea>
    </div>

    <div class="form-group">
        <button type="submit" class="btn btn-primary">Submit</button>
    </div>
</form>
<hr>
{% endblock %}

Save and close the file.
This code follows the same pattern except for the {{ request.form['title'] or post['title'] }} and {{ request.form['content'] or post['content'] }} syntax. This displays the data stored in the request if it exists, otherwise it displays the data from the post variable that was passed to the template containing current database data. Now, navigate to the following URL to edit the first post: You will see an Edit “First Post” page. Edit the post and submit the form, then make sure the post was updated. You now need to add a link that points to the edit page for each post on the index page. Open the index.html template file: - nano templates/index.html Edit the file to look exactly like the following: {% extends 'base.html' %} {% block content %} <h1>{% block title %} Welcome to FlaskBlog {% endblock %}</h1> {% for post in posts %} <a href="{{ url_for('post', post_id=post['id']) }}"> <h2>{{ post['title'] }}</h2> </a> <span class="badge badge-primary">{{ post['created'] }}</span> <a href="{{ url_for('edit', id=post['id']) }}"> <span class="badge badge-warning">Edit</span> </a> <hr> {% endfor %} {% endblock %} Save and close the file. Here you add an <a> tag to link to the edit() view function, passing in the post['id'] value to link to the edit page of each post with the Edit link. Sometimes a post no longer needs to be publicly available, which is why the functionality of deleting a post is crucial. In this step you will add the delete functionality to your application. First, you’ll add a new /ID/delete route that accepts POST requests, similar to the edit() view function. Your new delete() view function will receive the ID of the post to be deleted from the URL. Open the app.py file: - nano app.py Add the following view function at the bottom of the file: # .... 
@app.route('/<int:id>/delete', methods=('POST',)) def delete(id): post = get_post(id) conn = get_db_connection() conn.execute('DELETE FROM posts WHERE id = ?', (id,)) conn.commit() conn.close() flash('"{}" was successfully deleted!'.format(post['title'])) return redirect(url_for('index')) This view function only accepts POST requests. This means that navigating to the /ID/delete route on your browser will return an error because web browsers default to GET requests. However you can access this route via a form that sends a POST request passing in the ID of the post you want to delete. The function will receive the ID value and use it to get the post from the database with the get_post() function. Then you open a database connection and execute a DELETE FROM SQL command to delete the post. You commit the change to the database and close the connection while flashing a message to inform the user that the post was successfully deleted and redirect them to the index page. Note that you don’t render a template file, this is because you’ll just add a Delete button to the edit page. Open the edit.html template file: - nano templates/edit.html Then add the following <form> tag after the <hr> tag and directly before the {% endblock %} line: <hr> <form action="{{ url_for('delete', id=post['id']) }}" method="POST"> <input type="submit" value="Delete Post" class="btn btn-danger btn-sm" onclick="return confirm('Are you sure you want to delete this post?')"> </form> {% endblock %} You use the confirm() method to display a confirmation message before submitting the request. Now navigate again to the edit page of a blog post and try deleting it: At the end of this step, the source code of your project will look like the code on this page. With this, the users of your application can now write new blog posts and add them to the database, edit, and delete existing posts. This tutorial introduced essential concepts of the Flask Python framework. 
You learned how to make a small web application, run it in a development server, and allow the user to provide custom data via URL parameters and web forms. You also used the Jinja template engine to reuse HTML files and use logic in them. At the end of this tutorial, you now have a fully functioning web blog that interacts with an SQLite database to create, display, edit, and delete blog posts using the Python language and SQL queries. If you would like to learn more about working with Flask and SQLite, check out this tutorial on How To Use One-to-Many Database Relationships with Flask and SQLite. You can further develop this application by adding user authentication so that only registered users can create and modify blog posts; you may also add comments and tags for each blog post, and add file uploads to give users the ability to include images in a post. See the Flask documentation for more information. Flask also has many community-made extensions that you might consider using to make your development process easier.
Loving how much effort you have put into this - from carefully explaining the concepts and how all the code works, to even visually highlighting the code that we're supposed to modify in each snippet. I signed up just so I could leave a comment, because this article greatly deserves it. Awesome, just awesome <3

Thanks a lot, awesome tutorial. Worked well on Windows as well.
https://www.digitalocean.com/community/tutorials/how-to-make-a-web-application-using-flask-in-python-3?comment=91408
#include <wx/richtext/richtextsymboldlg.h>

wxSymbolPickerDialog presents the user with a choice of fonts and a grid of available characters. This modal dialog provides the application with a selected symbol and an optional font selection. Although this dialog is contained in the rich text library, it is generic and can be used in other contexts.

To use the dialog, pass a default symbol specified as a string, an initial font name, and a current font name. The difference between the initial font and the current font is that the initial font determines what the font control will be set to when the dialog shows - an empty string will show the selection as normal text. The current font, on the other hand, is used by the dialog to determine what font to display the characters in, even when no initial font is selected. This allows the user (and the application) to distinguish between inserting a symbol in the current font and inserting it with a specified font.

When the dialog is dismissed, the application can get the selected symbol with wxSymbolPickerDialog::GetSymbol and test whether a font was specified with wxSymbolPickerDialog::UseNormalFont, fetching the specified font with wxSymbolPickerDialog::GetFontName.

Here's a realistic example, inserting the supplied symbol into a rich text control in either the current font or a specified font.

Member function summary:

- Default ctor.
- Constructor.
- Creation: see the constructor for details about the parameters.
- Returns the font name (the font reflected in the font list).
- Returns true if the dialog is showing the full range of Unicode characters.
- Gets the font name used for displaying symbols in the absence of a selected font.
- Gets the current or initial symbol as a string.
- Gets the selected symbol character as an integer.
- Returns true if a symbol is selected.
- Sets the initial/selected font name.
- Sets the internal flag indicating that the full Unicode range should be displayed.
- Sets the name of the font to be used in the absence of a selected font.
- Sets the symbol as a one- or zero-character string.
- Sets Unicode display mode.
- Returns true if the user has specified normal text - that is, there is no selected font.
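The code for the "realistic example" mentioned above did not survive extraction. A sketch of what such a handler might look like follows; the m_richtext member and the surrounding frame are illustrative assumptions, not taken from this page:

```cpp
// Sketch: show the symbol picker and insert the chosen symbol into a
// wxRichTextCtrl (m_richtext is an assumed member of the containing frame).
wxSymbolPickerDialog dlg(wxT("*"), wxEmptyString,
                         m_richtext->GetFont().GetFaceName(), this);
if (dlg.ShowModal() == wxID_OK && dlg.HasSelection())
{
    long insertionPoint = m_richtext->GetInsertionPoint();
    m_richtext->WriteText(dlg.GetSymbol());

    if (!dlg.UseNormalFont())
    {
        // A specific font was chosen in the dialog: apply it to the
        // character that was just inserted.
        wxFont font(m_richtext->GetFont());
        font.SetFaceName(dlg.GetFontName());
        wxTextAttr attr;
        attr.SetFlags(wxTEXT_ATTR_FONT);
        attr.SetFont(font);
        m_richtext->SetStyle(insertionPoint, insertionPoint + 1, attr);
    }
}
```

Note how UseNormalFont distinguishes the two cases described above: when it returns true the symbol is left in the control's current font, and only when it returns false is the dialog's selected font applied.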
https://docs.wxwidgets.org/3.0/classwx_symbol_picker_dialog.html