From: Paul A. Bristow (pbristow_at_[hidden])
Date: 2001-04-23 10:08:42

> -----Original Message-----
> From: John Maddock [mailto:John_Maddock_at_[hidden]]
> Sent: Sunday, April 22, 2001 12:02 PM
> To: INTERNET:boost_at_[hidden]
> Subject: [boost] Boost.MathConstants: Review
>
> The documentation suggests:
>
> "Const double pi = (double)BOOST_PI; /* C style cast */"
>
> But that involves rounding the constants twice - once to a long double, and
> then again to a float/double, potentially that is less precise than using
>
> const double pi = 3.14159265358979323846264338327950288;
> const float fpi = 3.14159265358979323846264338327950288F;
>
> each of which should only be rounded once.

Is this double rounding really true? Isn't the macro simply a textual
replacement? Does it ever give a different result? (Experience suggests
that it doesn't.) Please can any language lawyers pronounce
authoritatively on this?

> BTW I assume that:
>
> "Or C++ static cast down to double and/or float precisions.
>
> const long double pi = static_cast<double>(BOOST_PI); // C++ static cast
> const long double piF = static_cast<float>(BOOST_PI); // C++ static cast
> "
>
> is a typo - the declared types should be double and float respectively?

Well spotted!
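The double-rounding question can be poked at directly. The sketch below is my own illustration, not code from the thread: it compares the float obtained by rounding the decimal literal once against the float obtained by going through long double first. For pi the two agree on common IEEE-754 platforms, which matches the "experience suggests it doesn't differ" observation, though double rounding can in principle differ for values that land near a float rounding boundary.

```cpp
// Illustration only (not from the thread): one rounding step vs. two.
// pi_long holds the decimal literal rounded once, to long double.
const long double pi_long = 3.14159265358979323846264338327950288L;

// Decimal literal -> float: rounded exactly once.
float pi_rounded_once() {
    return 3.14159265358979323846264338327950288F;
}

// Decimal literal -> long double -> float: rounded twice, as with the
// (float)BOOST_PI style of cast discussed above.
float pi_rounded_twice() {
    return static_cast<float>(pi_long);
}
```

For pi the long double intermediate is much closer to the exact value than pi is to any float rounding boundary, so both paths land on the same nearest float; the concern is about values (or platforms) where that margin is thin.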
> One possible implementation strategy to avoid the constants getting
> rounded twice would be to use something like:
>
> #define BOOST_PI 3.14159265358979323846264338327950288
>
> #define BOOST_JOIN(x,y) BOOST_DO_JOIN(x,y)
> #define BOOST_DO_JOIN(x,y) x##y
>
> namespace boost{
>
> template <class T> struct constants;
>
> template<>
> struct constants<float>
> {
>     static float pi() { return BOOST_JOIN(BOOST_PI, F); }
> };
> template<>
> struct constants<double>
> {
>     static double pi() { return BOOST_PI; }
> };
> template<>
> struct constants<long double>
> {
>     static long double pi() { return BOOST_JOIN(BOOST_PI, L); }
> };
>
> #undef BOOST_PI
>
> I also agree that to get the constants really correct for a particular
> platform one would have to generate the actual binary representation, the
> template approach allows for that.

Can you explain why please?

> The docs provide a nice rationale, but don't indicate what constants are
> actually available (perhaps the most important point?).

As Beman suggests I will add this.

> Defining the constants as members of a template, allows us to add
> specialisations for types other than built in floating point types - for
> example if we add boost::interval or boost::bcd, then these should have
> math constants available as well. BTW what ever happened to the boost
> interval library?
>
> In an ideal world the interface would also be extensible to allow
> additional constants to be added (perhaps by third parties), I don't see
> how we can do that though, other than by adding new template classes for
> new categories of constants.

I don't see how to provide extensibility either, though adding new
constants should be easy, very low risk of causing trouble, and quite
infrequent?

> I prefer "boost::constants<T>" to the "boost::math::constants<T>" which
> some have suggested, I don't see what the latter buys us, other than more
> typing.
> Shouldn't nested namespaces be reserved for things probably not
> used by end users (or only used by power users), this is definitely in
> the end user camp.

I think there are other physical constants which would be useful, but
which are not really constant (and have an uncertainty which should be
included). So I think we should type more to distinguish math from
physical constants.

> How does this library interact with the POSIX standard, see for a
> listing, these don't cut and paste well :-(
>
> I assume that this library provides a superset of these values?

I will study this - I am ashamed to plead ignorance so far.

Paul

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2001/04/11275.php
Entitlements outside Roles Report in Oracle Identity Analytics

By user12582982 on Mar 13, 2011

As a followup to my previous blog entry on reporting in Oracle Identity Analytics (OIA) I have looked at another, probably very common, OIA report. This report lists all the entitlements (imported into the Identity Warehouse) that are outside of (not contained in) any role. It can be very useful during the role mining process to see which entitlements are not contained in any role.

I have set up the SQL query so that it looks at all attributes (entitlements, e.g. 'groups' in AD) that are set as 'minable' in the resource type configuration. Basically the SQL query finds all of the values of these attributes (SELECT ... FROM ... WHERE ...) and checks whether they are contained in any role through the relation role->policy->attribute (... AND NOT EXISTS (SELECT ... FROM ... WHERE ...)). The final SQL looks as follows:

I have taken this SQL as the basis for my iReport design for the report that I have called 'User Entitlements outside Roles'. It displays an ordered list (grouped by namespace) of all attributes that are set as minable and their values (entitlements) not contained in any role. The resulting jrxml file can be found here: UserEntitlementsOutsideRoles.jrxml. An example of what the final report looks like is shown below.

In this example I have run it against a sample dataset where for two of the resources (Microsoft SQL Server, Windows Active Directory) some attributes have been set as minable (e.g. 'serverRoles', 'groups', etc.). As said before, they have been set as minable here for the sake of reporting, but these are obviously typically the same attributes taking part in a role mining process and hence more or less automatically the ones we are interested in...

Have fun, René...
https://blogs.oracle.com/renek/tags/rbac
Which One Is Better: Response.Redirect Or PostBackUrl (asp:Button Feature) To Redirect Webpage?
Aug 13, 2010

Which one is better to redirect a web page: Response.Redirect or PostBackUrl (the asp:Button feature)? View 3 Replies

I'm using a button to open a new form. The code in the test.aspx file is:

<asp:Button ID="Button3" runat="server" Text="Send Enquiry" PostBackUrl="~/contact.aspx" Width="90px" onclick="Button3_Click" />

This is also not working. And I tried this control in the code-behind file, test.aspx.vb:

Public Sub Button3_Click(ByVal sender As Object, ByVal e As EventArgs) Handles Button3.Click
    Response.Redirect("contact.aspx")
End Sub

I'm sending a label value to the next form (i.e. "contact.aspx"), where the user fills in a form and some textboxes must be filled by the user.

I have a page with a gridview control which is dynamically generated and has a Select button. When I press any Select button it takes the ID from the grid row and I use Response.Redirect() to navigate to another page. The problem is that when I press the Back button in the browser it gives me an error saying that the webpage has expired. Most likely cause: the local copy of this webpage is out of date, and the website requires that you download it again. Here is my code where I use Response.Redirect:

int LeadId = Convert.ToInt16(GridViewCommon.DataKeys[Int32.Parse(e.CommandArgument.ToString())].Value.ToString());
Response.Redirect("~/Common/NewLead.aspx?LeadId=" + LeadId + "&Op=Update");

So when I try to come back to this page again by pressing the Back button in the browser I get the error stated above... I also tried Server.Redirect() but that also doesn't work... :-(

I need to add a button to my aspx page. My code is:

byte[] buffer;
FileStream fs = new FileStream(@"C:info.pdf", FileMode.Open);
try
{
[Code]....

I need a click button after the code. How can I do this? I'm using ASP.NET C# (WebForms).

I want to add a Back button to my page (you land on this page if you incorrectly fill in a form). If JavaScript is enabled I want to go back via JavaScript, but if it is disabled I'll just do a Response.Redirect("~/home.aspx"). How can I implement this? Is it 2 buttons? If so, how can I hide the other one in the 2 different states? View 2 Replies

How can I use Response.Redirect on a button's OnClick or OnClientClick event? View 3 Replies

I am making a message box such that when the user clicks the OK button, it redirects to another aspx page. However, I do not know what is wrong with the code. This is the code that I have input: [Code].... [Code]....

I'm rewriting a messaging module. The old asp application has a Send button image and used an HTML submit button. My new application is ASP.NET. Can I use the asp Send button image to Response.Redirect to the View Message page?

I want to redirect from one page to another based on a condition, which I need to check from code-behind C#. I was trying to use Response.Redirect("movies.aspx"); but it's not working in IE7; I am getting an HTTP 400 Bad Request... whereas in Firefox it's working fine.

How to redirect a page into a div using Response.Redirect()? View 2 Replies

Response.Redirect() to an address on another server: for example my site is on 172.16.10.20 but I want to Response.Redirect to 172.16.10.21. Here's my code: [Code]....

What this code does at the end is redirect to a direct download, and THEN afterwards to a "tutorial page" on how to use the download. I can't seem to use these back to back. I've even used Response.Redirect(tutorialURL, false) so it wouldn't terminate processing, but it didn't work; it just STOPPED page processing. I've tried to use the Sleep() method of the Threading namespace, no luck. I'm sure there's an easy way to do this, I just don't know what it is.

When I went for an interview, they asked me this question. I don't know this question and the concept is not clear. View 7 Replies

For some reason, the Response.Redirect seems to be failing; it maxes out the CPU on my server and just doesn't do anything. The .NET code uploads the file fine, but does not redirect to the asp page to do the processing. I know this is absolute rubbish (why would you have .NET code redirecting to classic asp?), but it is a legacy app. I have tried putting false or true etc. at the end of the redirect, as I have read other people have had issues with this. It runs locally on my machine but won't run on my server! I am getting the following error when I debugged remotely. View 6 Replies
https://asp.net.bigresource.com/which-one-is-better-response-redirect-or-postbackurl-asp-button-feature-to-redirect-webpage-PaIwMSiXA.html
Building, testing and distributing systems

In this tutorial we guide you through installing the required tools to start using haskus-system to create your own systems, to test them within QEMU, and to distribute them.

Installing dependencies

You need to install several programs and libraries before you can start using haskus-system. Most of them are very common and are required to build a Linux kernel. Please install the following packages:

- git
- make, gcc, binutils
- static libraries (e.g., glibc-static and zlib-static packages on Fedora)
- (un)packing tools: lzip, gzip, tar, cpio
- QEMU

Installing the haskus-system-build tool

To get started, you need to install the haskus-system-build program. It is available on Hackage in the haskus-system-build package. You can install it from there using your favorite method, or install the latest version from source as follows:

> git clone
> cd haskus-system
> stack install haskus-system-build

Using this method will install the program into ~/.local/bin. Be sure to add this path to your $PATH environment variable.

Starting a new project

To start a new project, enter a new directory and use the init command:

> mkdir my-project
> cd my-project
> haskus-system-build init

It downloads the default system template into the current directory. It is composed of 4 files:

> find . -type f
./stack.yaml
./src/Main.hs
./my-system.cabal
./system.yaml

src/Main.hs

This is the system code. For now it's only a simple program that prints the "Hello World" string in the kernel console, waits for a key press and then shuts down the system.

import Haskus.System

main :: IO ()
main = runSys <| do
   -- Initialize the terminal
   term <- defaultTerminal
   -- print a string on the standard output
   writeStrLn term "Hello World!"
   -- wait for a key to be pressed
   waitForKey term
   -- shutdown the computer
   powerOff_

my-system.cabal

This is the package configuration file, tweaked to build valid executables (static linking, etc.).
system.yaml

This is the haskus-system-build configuration file.

linux:
   source: tarball
   version: 4.11.3
   options:
      enable:
         - CONFIG_DRM_BOCHS
         - CONFIG_DRM_RADEON
         - CONFIG_DRM_NOUVEAU
   make-args: "-j8"

ramdisk:
   init: my-system

qemu:
   # Select a set of options for QEMU:
   # "default": enable recommended options
   # "vanilla": only use required settings to make tests work
   profile: vanilla
   options: ""
   kernel-args: ""

As you can see, it contains a Linux kernel configuration, a reference to our system as the ramdisk "init" program, and some QEMU configuration. The selected Linux kernel will be automatically downloaded and built with the given options in the following steps.

Building and Testing

Now let's try the system within QEMU:

> haskus-system-build test

On the first execution, this command downloads and builds everything required to test the system, so it can take quite some time. Then QEMU's window should pop up with our system running in it. On subsequent executions building is much faster because the tool reuses previously built artefacts (in particular the Linux kernel) if the configuration hasn't changed.

If you only want to build without launching QEMU, use the build command:

> haskus-system-build build

Distributing and testing on real computers

This tutorial wouldn't be complete without an explanation of how to distribute your system to other people. We obviously don't want them to build it from source.

Physical distribution

You can easily distribute your system on a bootable storage device (e.g. a USB stick). To do that, you only have to install your system on an empty storage device.

Warning: data on the device will be lost! Don't do that if you don't know what you are doing!

To install your system on the device whose device file is /dev/sde:

> haskus-system-build make-device --device /dev/sde

Note that you have to be in the sudoers list to access the device.
ISO image distribution

Another distribution method is to create an ISO image that you can distribute online or burn on a CD/DVD.

> haskus-system-build make-iso
...
ISO image: .system-work/iso/my-system.iso

Note that you can test the ISO image with QEMU before you ship it:

> haskus-system-build test-iso

This allows you to test the boot-loader configuration.
https://docs.haskus.org/system/tutorial/building.html
import "github.com/go-stack/stack"

Package stack implements utilities to capture, manipulate, and format call stacks. It provides a simpler API than package runtime. The implementation takes care of the minutiae and special cases of interpreting the program counter (pc) values returned by runtime.Callers.

Package stack's types implement fmt.Formatter, which provides a simple and flexible way to declaratively configure formatting when used with logging or error tracking packages.

Code:

// +build go1.2

package main

import (
	"fmt"

	"github.com/go-stack/stack"
)

func main() {
	logCaller("%+s")
	logCaller("%v %[1]n()")
}

func logCaller(format string) {
	fmt.Printf(format+"\n", stack.Caller(1))
}

ErrNoFunc means that the Call has a nil *runtime.Func. The most likely cause is a Call with the zero value.

Call records a single function invocation from a goroutine stack.

Caller returns a Call from the stack of the current goroutine. The argument skip is the number of stack frames to ascend, with 0 identifying the calling function.

Format implements fmt.Formatter with support for the following verbs.

%s    source file
%d    line number
%n    function name
%k    last segment of the package path
%v    equivalent to %s:%d

It accepts the '+' and '#' flags for most of the verbs as follows.

%+s   path of source file relative to the compile time GOPATH, or the module path joined to the path of source file relative to module root
%#s   full path of source file
%+n   import path qualified function name
%+k   full package path
%+v   equivalent to %+s:%d
%#v   equivalent to %#s:%d

Frame returns the call frame information for the Call.

MarshalText implements encoding.TextMarshaler. It formats the Call the same as fmt.Sprintf("%v", c).

PC returns the program counter for this call frame; multiple frames may have the same PC value. Deprecated: Use Call.Frame instead.

String implements fmt.Stringer. It is equivalent to fmt.Sprintf("%v", c).

CallStack records a sequence of function invocations from a goroutine stack.
Trace returns a CallStack for the current goroutine with element 0 identifying the calling function.

Format implements fmt.Formatter by printing the CallStack as square brackets ([, ]) surrounding a space separated list of Calls each formatted with the supplied verb and options.

MarshalText implements encoding.TextMarshaler. It formats the CallStack the same as fmt.Sprintf("%v", cs).

String implements fmt.Stringer. It is equivalent to fmt.Sprintf("%v", cs).

TrimAbove returns a slice of the CallStack with all entries above c removed.

TrimBelow returns a slice of the CallStack with all entries below c removed.

TrimRuntime returns a slice of the CallStack with the topmost entries from the go runtime removed. It considers any calls originating from unknown files, files under GOROOT, or _testmain.go as part of the runtime.

Package stack imports 7 packages (graph) and is imported by 439 packages. Updated 2018-08-26.
https://godoc.org/github.com/go-stack/stack
fixed a few bugs and documented others
added required and require
added [ENDIF]

create locals-buffer 1000 allot \ !! limited and unsafe
\ here the names of the local variables are stored
\ we would have problems storing them at the normal dp

variable locals-dp \ so here's the special dp for locals.

: alignlp-w ( n1 -- n2 )
    \ cell-align size and generate the corresponding code for aligning lp
    aligned dup adjust-locals-size ;

: alignlp-f ( n1 -- n2 )
    faligned dup adjust-locals-size ;

\ a local declaration group (the braces stuff) is compiled by calling
\ the appropriate compile-pushlocal for the locals, starting with the
\ rightmost local; the names are already created earlier, the
\ compile-pushlocal just inserts the offsets from the frame base.

: compile-pushlocal-w ( a-addr -- ) ( run-time: w -- )
    \ compiles a push of a local variable, and adjusts locals-size
    \ stores the offset of the local variable to a-addr
    locals-size @ alignlp-w cell+ dup locals-size !
    swap !
    postpone >l ;

: compile-pushlocal-f ( a-addr -- ) ( run-time: f -- )
    locals-size @ alignlp-f float+ dup locals-size !
    swap !
    postpone f>l ;

: compile-pushlocal-d ( a-addr -- ) ( run-time: w1 w2 -- )
    locals-size @ alignlp-w cell+ cell+ dup locals-size !
    swap !
    postpone swap postpone >l postpone >l ;

: compile-pushlocal-c ( a-addr -- ) ( run-time: w -- )
    -1 chars compile-lp+!
    locals-size @ swap !
    postpone lp@ postpone c! ;

: create-local ( " name" -- a-addr )
    \ defines the local "name"; the offset of the local shall be
    \ stored in a-addr
    create
    immediate
    here 0 , ( place for the offset ) ;

: lp-offset ( n1 -- n2 )
    \ converts the offset from the frame start to an offset from lp,
    \ i.e., the address of the local is lp+locals_size-offset
    locals-size @ swap - ;

: lp-offset, ( n -- )
    \ converts the offset from the frame start to an offset from lp and
    \ adds it as inline argument to a preceding locals primitive
    lp-offset , ;

vocabulary locals-types \ this contains all the type specifiers, -- and }
locals-types definitions

: W:
    create-local ( "name" -- a-addr xt )
    \ xt produces the appropriate locals pushing code when executed
    ['] compile-pushlocal-w
does> ( Compilation: -- ) ( Run-time: -- w )
    \ compiles a local variable access
    @ lp-offset compile-@local ;

: W^
    create-local ( "name" -- a-addr xt )
    ['] compile-pushlocal-w
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

: F:
    create-local ( "name" -- a-addr xt )
    ['] compile-pushlocal-f
does> ( Compilation: -- ) ( Run-time: -- w )
    @ lp-offset compile-f@local ;

: F^
    create-local ( "name" -- a-addr xt )
    ['] compile-pushlocal-f
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

: D:
    create-local ( "name" -- a-addr xt )
    ['] compile-pushlocal-d
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, postpone 2@ ;

: D^
    create-local ( "name" -- a-addr xt )
    ['] compile-pushlocal-d
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

: C:
    create-local ( "name" -- a-addr xt )
    ['] compile-pushlocal-c
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, postpone c@ ;

: C^
    create-local ( "name" -- a-addr xt )
    ['] compile-pushlocal-c
does> ( Compilation: -- ) ( Run-time: -- w )
    postpone laddr# @ lp-offset, ;

\ you may want to make comments in a locals definitions group:
' \ alias \ immediate
' ( alias ( immediate

forth definitions

\ the following gymnastics are for declaring locals without type specifier.
\ we exploit a feature of our dictionary: every wordlist
\ has its own methods for finding words etc.
\ So we create a vocabulary new-locals, that creates a 'w:' local named x
\ when it is asked if it contains x.

also locals-types

: new-locals-find ( caddr u w -- nfa )
    \ this is the find method of the new-locals vocabulary
    \ make a new local with name caddr u; w is ignored
    \ the returned nfa denotes a word that produces what W: produces
    \ !! do the whole thing without nextname
    drop nextname
    ['] W: >name ;

previous

: new-locals-reveal ( -- )
    true abort" this should not happen: new-locals-reveal" ;

create new-locals-map ' new-locals-find A, ' new-locals-reveal A,

vocabulary new-locals
new-locals-map ' new-locals >body cell+ A! \ !! use special access words

variable old-dpp

\ and now, finally, the user interface words
: { ( -- addr wid 0 )
    dp old-dpp !
    locals-dp dpp !
    also new-locals
    also get-current locals definitions locals-types
    0 TO locals-wordlist
    0 postpone [ ; immediate

locals-types definitions

: } ( addr wid 0 a-addr1 xt1 ... -- )
    \ ends locals definitions
    ] old-dpp @ dpp !
    begin
	dup
    while
	execute
    repeat
    drop
    locals-size @ alignlp-f locals-size ! \ the strictest alignment
    set-current
    previous previous
    locals-list TO locals-wordlist ;

: -- ( addr wid 0 ... -- )
    }
    [char] } parse 2drop ;

forth definitions

\ A few thoughts on automatic scopes for locals and how they can be
\ implemented:

\ We have to combine locals with the control structures. My basic idea
\ was to start the life of a local at the declaration point. The life
\ would end at any control flow join (THEN, BEGIN etc.) where the local
\ is not live on both input flows (note that the local can still live in
\ other, later parts of the control flow). This would make a local live
\ as long as you expected and sometimes longer (e.g. a local declared in
\ a BEGIN..UNTIL loop would still live after the UNTIL).

\ The following example illustrates the problems of this approach:

\ { z }
\ if
\   { x }
\ begin
\   { y }
\ [ 1 cs-roll ] then
\   ...
\ until

\ x lives only until the BEGIN, but the compiler does not know this
\ until it compiles the UNTIL (it can deduce it at the THEN, because at
\ that point x lives in no thread, but that does not help much). This is
\ solved by optimistically assuming at the BEGIN that x lives, but
\ warning at the UNTIL that it does not. The user is then responsible
\ for checking that x is only used where it lives.

\ The produced code might look like this (leaving out alignment code):

\ >l ( z )
\ ?branch <then>
\ >l ( x )
\ <begin>:
\ >l ( y )
\ lp+!# 8 ( RIP: x,y )
\ <then>:
\ ...
\ lp+!# -4 ( adjust lp to <begin> state )
\ ?branch <begin>
\ lp+!# 4 ( undo adjust )

\ The BEGIN problem also has another incarnation:

\ AHEAD
\ BEGIN
\ x
\ [ 1 CS-ROLL ] THEN
\ { x }
\ ...
\ UNTIL

\ should be legal: The BEGIN is not a control flow join in this case,
\ since it cannot be entered from the top; therefore the definition of x
\ dominates the use. But the compiler processes the use first, and since
\ it does not look ahead to notice the definition, it will complain
\ about it. Here's another variation of this problem:

\ IF
\   { x }
\ ELSE
\   ...
\ AHEAD
\ BEGIN
\   x
\ [ 2 CS-ROLL ] THEN
\   ...
\ UNTIL

\ In this case x is defined before the use, and the definition dominates
\ the use, but the compiler does not know this until it processes the
\ UNTIL. So what should the compiler assume does live at the BEGIN, if
\ the BEGIN is not a control flow join? The safest assumption would be
\ the intersection of all locals lists on the control flow
\ stack. However, our compiler assumes that the same variables are live
\ as on the top of the control flow stack. This covers the following case:

\ { x }
\ AHEAD
\ BEGIN
\ x
\ [ 1 CS-ROLL ] THEN
\ ...
\ UNTIL

\ If this assumption is too optimistic, the compiler will warn the user.

\ Implementation: migrated to kernal.fs

\ THEN (another control flow from before joins the current one):
\ The new locals-list is the intersection of the current locals-list and
\ the orig-local-list. The new locals-size is the (alignment-adjusted)
\ size of the new locals-list. The following code is generated:
\ lp+!# (current-locals-size - orig-locals-size)
\ <then>:
\ lp+!# (orig-locals-size - new-locals-size)

\ Of course "lp+!# 0" is not generated. Still this is admittedly a bit
\ inefficient, e.g. if there is a locals declaration between IF and
\ ELSE. However, if ELSE generates an appropriate "lp+!#" before the
\ branch, there will be none after the target <then>.

\ explicit scoping

: scope ( -- scope )
    cs-push-part scopestart ; immediate

: endscope ( scope -- )
    scope?
    drop
    locals-list @ common-list
    dup list-size adjust-locals-size
    locals-list ! ; immediate

\ adapt the hooks

: locals-:-hook ( sys -- sys addr xt n )
    \ addr is the nfa of the defined word, xt its xt
    DEFERS :-hook
    last @ lastcfa @
    clear-leave-stack
    0 locals-size !
    locals-buffer locals-dp !
    0 locals-list !
    dead-code off
    defstart ;

: locals-;-hook ( sys addr xt sys -- sys )
    def?
    0 TO locals-wordlist
    0 adjust-locals-size ( not every def ends with an exit )
    lastcfa ! last !
    DEFERS ;-hook ;

' locals-:-hook IS :-hook
' locals-;-hook IS ;-hook

\ The words in the locals dictionary space are not deleted until the end
\ of the current word. This is a bit too conservative, but very simple.

\ There are a few cases to consider: (see above)

\ after AGAIN, AHEAD, EXIT (the current control flow is dead):
\ We have to special-case the above cases against that. In this case the
\ things above are not control flow joins. Everything should be taken
\ over from the live flow. No lp+!# is generated.

\ !! The lp gymnastics for UNTIL are also a real problem: locals cannot be
\ used in signal handlers (or anything else that may be called while
\ locals live beyond the lp) without changing the locals stack.

\ About warning against uses of dead locals. There are several options:

\ 1) Do not complain (After all, this is Forth;-)

\ 2) Additional restrictions can be imposed so that the situation cannot
\ arise; the programmer would have to introduce explicit scoping
\ declarations in cases like the above one. I.e., complain if there are
\ locals that are live before the BEGIN but not before the corresponding
\ AGAIN (replace DO etc. for BEGIN and UNTIL etc. for AGAIN).

\ 3) The real thing: i.e. complain, iff a local lives at a BEGIN, is
\ used on a path starting at the BEGIN, and does not live at the
\ corresponding AGAIN. This is somewhat hard to implement. a) How does
\ the compiler know when it is working on a path starting at a BEGIN
\ (consider "{ x } if begin [ 1 cs-roll ] else x endif again")? b) How
\ is the usage info stored?

\ For now I'll resort to alternative 2. When it produces warnings they
\ will often be spurious, but warnings should be rare. And better
\ spurious warnings now and then than days of bug-searching.

\ Explicit scoping of locals is implemented by cs-pushing the current
\ locals-list and -size (and an unused cell, to make the size equal to
\ the other entries) at the start of the scope, and restoring them at
\ the end of the scope to the intersection, like THEN does.


\ And here's finally the ANS standard stuff

: (local) ( addr u -- )
    \ a little space-inefficient, but well deserved ;-)
    \ In exchange, there are no restrictions whatsoever on using (local)
    \ as long as you use it in a definition
    dup
    if
	nextname POSTPONE { [ also locals-types ] W: } [ previous ]
    else
	2drop
    endif ;

: >definer ( xt -- definer )
    \ this gives a unique identifier for the way the xt was defined
    \ words defined with different does>-codes have different definers
    \ the definer can be used for comparison and in definer!
    dup >code-address [ ' bits >code-address ] Literal =
    \ !! this definition will not work on some implementations for `bits'
    if \ if >code-address delivers the same value for all does>-def'd words
	>does-code 1 or \ bit 0 marks special treatment for does codes
    else
	>code-address
    then ;

: definer! ( definer xt -- )
    \ gives the word represented by xt the behaviour associated with definer
    over 1 and if
	does-code!
    else
	code-address!
    then ;

\ !! untested
: TO ( c|w|d|r "name" -- )
    \ !! state smart
    0 0 0. 0.0e0 { c: clocal w: wlocal d: dlocal f: flocal }
    ' dup >definer
    state @
    if
	case
	    [ ' locals-wordlist >definer ] literal \ value
	    OF >body POSTPONE Aliteral POSTPONE ! ENDOF
	    [ ' clocal >definer ] literal
	    OF POSTPONE laddr# >body @ lp-offset, POSTPONE c! ENDOF
	    [ ' wlocal >definer ] literal
	    OF POSTPONE laddr# >body @ lp-offset, POSTPONE ! ENDOF
	    [ ' dlocal >definer ] literal
	    OF POSTPONE laddr# >body @ lp-offset, POSTPONE d! ENDOF
	    [ ' flocal >definer ] literal
	    OF POSTPONE laddr# >body @ lp-offset, POSTPONE f! ENDOF
	    abort" can only store TO value or local value"
	endcase
    else
	[ ' locals-wordlist >definer ] literal =
	if
	    >body !
	else
	    abort" can only store TO value"
	endif
    endif ; immediate

: locals|
    BEGIN
	name 2dup s" |" compare 0<>
    WHILE
	(local)
    REPEAT
    drop 0 (local) ; immediate restrict
http://www.complang.tuwien.ac.at/cvsweb/cgi-bin/cvsweb/gforth/glocals.fs?sortby=rev;f=h;only_with_tag=MAIN;content-type=text%2Fx-cvsweb-markup;ln=1;rev=1.9
branch coverage of with statement in 2.7

With Python 2.7, branch detection doesn't handle a with statement with an inner return correctly. Coverage shows the line "with open("test", "w") as f" as a branch that never reaches "exit". Under Python 2.6 the with statement is shown as fully covered.

{{{
#!python
def example():
    with open("test", "w") as f: # exit
        f.write("")
        return 1

example()
}}}

Tested with coverage 3.4 and 3.5a1.

The problem could be caused by a change in Python 2.7's context protocol. Since 2.7 the __enter__ and __exit__ methods are looked up on the type rather than on the object. The lookup is done in the new opcode SETUP_WITH.

Python 2.7 dis
Python 2.6 dis

Turns out this was fixed in 52789af288d1, and tested in de3815101d3d.

I'm experiencing this issue too. Coverage 3.6, tried both Python 2.6 and Python 2.7 and had the same results. Screenshot:

@Gre7g Luterman: This issue is marked as resolved. Can you provide a sample that demonstrates it still happening? Even if you can just point to an open source project with runnable tests that show it happening? Notice that the original problem report mentioned a return statement inside a with statement, which is not how your code is structured.
https://bitbucket.org/ned/coveragepy/issues/128/branch-coverage-of-with-statement-in-27
Definition

An instance s of the class CryptByteString is basically a string of bytes. When s is not used anymore, its memory is wiped out (by overwriting it a couple of times) before the memory is freed and returned to the system. The goal is to prevent an attacker from reading security-sensitive data after your process has terminated.

We want to point out that this mechanism can be foiled by the operating system: if it swaps the memory occupied by s to a swap file on a hard disc, then the data will not be erased by s. (Some platforms offer to lock certain parts of the memory against swapping. CryptByteString uses this feature on Windows NT/2000/XP to protect its memory.)

As we have stated above, s can be used like a string or an array of bytes. The size n of s is the number of bytes in s; they are indexed from 0 to n - 1.

Important: If you create a CryptByteString s from a C-style array or a string, or if you convert s to a string, then only the memory of s will be wiped out, but not the memory of the array or the string.

#include <LEDA/coding/crypt_key.h>

Creation

Operations
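The wipe-on-destruction idea described above can be sketched in plain C++. This is an illustrative stand-in only, not LEDA's actual CryptByteString implementation; the class name and the exact overwrite pattern are assumptions:

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Illustrative stand-in for a wipe-on-destruction byte string
// (NOT LEDA's code; the name secure_bytes is invented).
class secure_bytes {
    unsigned char* buf_;
    std::size_t    n_;
public:
    explicit secure_bytes(const std::string& s)
        : buf_(new unsigned char[s.size()]), n_(s.size())
    { std::memcpy(buf_, s.data(), n_); }

    secure_bytes(const secure_bytes&) = delete;            // no shallow copies
    secure_bytes& operator=(const secure_bytes&) = delete;

    ~secure_bytes() {
        // Overwrite the bytes a couple of times before freeing, as the
        // documentation describes; volatile discourages the compiler
        // from eliding the stores.
        volatile unsigned char* p = buf_;
        for (int pass = 0; pass < 3; ++pass)
            for (std::size_t i = 0; i < n_; ++i)
                p[i] = 0;
        delete[] buf_;
    }

    std::size_t   size() const { return n_; }              // n bytes, 0..n-1
    unsigned char operator[](std::size_t i) const { return buf_[i]; }
};
```

Note that, exactly as the documentation warns, wiping the secure buffer does nothing for the std::string it was constructed from.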
http://www.algorithmic-solutions.info/leda_manual/cryptbytestring.html
sources / valgrind / 1:3.12.0~svn20160714-1 / include / pub_tool_hashtable

/*--------------------------------------------------------------------*/
/*--- A hash table implementation.            pub_tool_hashtable.h ---*/
/*--------------------------------------------------------------------*/

/* This file is part of Valgrind, a dynamic binary instrumentation
   framework.

   njn@valgrind
   contained in the file COPYING. */

#ifndef __PUB_TOOL_HASHTABLE_H
#define __PUB_TOOL_HASHTABLE_H

#include "pub_tool_basics.h"   // VG_ macro

/* Generic type for a separately-chained hash table.  Via a kind of dodgy
   C-as-C++ style inheritance, tools can extend the VgHashNode type, so
   long as the first two fields match the sizes of these two fields.
   Requires a bit of casting by the tool. */

// Problems with this data structure:
// - Separate chaining gives bad cache behaviour.  Hash tables with linear
//   probing give better cache behaviour.

typedef struct _VgHashNode {
   struct _VgHashNode * next;
   UWord                key;
} VgHashNode;

typedef struct _VgHashTable VgHashTable;

/* Make a new table.  Allocates the memory with VG_(calloc)(), so can be
   freed with VG_(free)().  The table starts small but will periodically
   be expanded.  This is transparent to the users of this module.  The
   function never returns NULL. */
extern VgHashTable *VG_(HT_construct) ( const HChar* name );

/* Count the number of nodes in a table. */
extern UInt VG_(HT_count_nodes) ( const VgHashTable *table );

/* Add a node to the table.  Duplicate keys are permitted. */
extern void VG_(HT_add_node) ( VgHashTable *t, void* node );

/* Looks up a VgHashNode by key in the table.  Returns NULL if not found.
   If entries with duplicate keys are present, the most recently-added of
   the dups will be returned, but it's probably better to avoid dups
   altogether. */
extern void* VG_(HT_lookup) ( const VgHashTable *table, UWord key );

/* Removes a VgHashNode by key from the table.  Returns NULL if not
   found. */
extern void* VG_(HT_remove) ( VgHashTable *table, UWord key );

typedef Word (*HT_Cmp_t) ( const void* node1, const void* node2 );

/* Same as VG_(HT_lookup) and VG_(HT_remove), but allowing a part of or
   the full element to be compared for equality, not only the key.  The
   typical use for the below function is to store a hash value of the
   element in the key, and have the comparison function checking for
   equality of the full element data.
   Attention about the comparison function:
   * It must *not* compare the 'next' pointer.
   * when comparing the rest of the node, if the node data contains holes
     between components, either the node memory should be fully
     initialised (e.g. allocated using VG_(calloc)) or each component
     should be compared individually.
   Note that the cmp function is only called for elements that already
   have keys that are equal.  So, it is not needed for cmp to check for
   key equality. */
extern void* VG_(HT_gen_lookup) ( const VgHashTable *table,
                                  const void* node, HT_Cmp_t cmp );
extern void* VG_(HT_gen_remove) ( VgHashTable *table,
                                  const void* node, HT_Cmp_t cmp );

/* Output detailed usage/collision statistics.  cmp will be used to
   verify if 2 elements with the same key are equal.  Use NULL cmp if the
   hash table elements are only to be compared by key. */
extern void VG_(HT_print_stats) ( const VgHashTable *table, HT_Cmp_t cmp );

/* Allocates a suitably-sized array, copies pointers to all the hashtable
   elements into it, then returns both the array and the size of it.  The
   array must be freed with VG_(free).  If the hashtable is empty, the
   function returns NULL and assigns *nelems = 0. */
extern VgHashNode** VG_(HT_to_array) ( const VgHashTable *table,
                                       /*OUT*/ UInt* n_elems );

/* Reset the table's iterator to point to the first element. */
extern void VG_(HT_ResetIter) ( VgHashTable *table );

/* Return the element pointed to by the iterator and move on to the next
   one.  Returns NULL if the last one has been passed, or if HT_ResetIter()
   has not been called previously.  Asserts if the table has been modified
   (HT_add_node, HT_remove) since HT_ResetIter.  This guarantees that
   callers cannot screw up by modifying the table whilst iterating over
   it (and is necessary to make the implementation safe; specifically we
   must guarantee that the table will not get resized whilst iteration is
   happening.  Since resizing only happens as a result of calling
   HT_add_node, disallowing HT_add_node during iteration should give the
   required assurance. */
extern void* VG_(HT_Next) ( VgHashTable *table );

/* Destroy a table and deallocates the memory used by the nodes using
   freenode_fn.*/
extern void VG_(HT_destruct) ( VgHashTable *table,
                               void(*freenode_fn)(void*) );

#endif   // __PUB_TOOL_HASHTABLE_H

/*--------------------------------------------------------------------*/
/*--- end                                                          ---*/
/*--------------------------------------------------------------------*/
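The "dodgy C-as-C++ style inheritance" the header's comment describes — a tool extends the node type by keeping the same two leading fields — can be sketched outside Valgrind. All names below are invented for illustration; the real API is the VG_(HT_*) family above:

```cpp
#include <cassert>
#include <cstddef>

// 'Base' node: extendable as long as the first two fields match.
struct HashNode  { HashNode* next; unsigned long key; };

// A 'tool' node: same two leading fields, then tool-specific payload.
struct CountNode { HashNode* next; unsigned long key; int count; };

const std::size_t NBUCKETS = 8;

// Separate chaining: push the node at the head of its bucket's chain.
void ht_add(HashNode* table[], HashNode* n) {
    std::size_t b = n->key % NBUCKETS;
    n->next  = table[b];
    table[b] = n;
}

// Returns the most recently added node with this key, or nullptr,
// mirroring the duplicate-key behaviour documented for VG_(HT_lookup).
HashNode* ht_lookup(HashNode* table[], unsigned long key) {
    for (HashNode* p = table[key % NBUCKETS]; p; p = p->next)
        if (p->key == key) return p;
    return nullptr;
}
```

The tool casts its CountNode* to HashNode* on the way in and back again on the way out, which is exactly the "bit of casting by the tool" the comment mentions.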
https://sources.debian.org/src/valgrind/1:3.12.0~svn20160714-1/include/pub_tool_hashtable.h/
this should do what you are wanting: String fileLocation; // the folder the file is located String file; ... this should do what you are wanting: String fileLocation; // the folder the file is located String file; ... right i figure it out i was doing it right all along! :P String[] commands = {"cmd", "any cmd command here", "you can do a series of commands", "eg cd C:/Users/Ollie/Documents"};... yess thats what im tryind to do! open up a cmd window the write some texts to it to before a task. i tried searching google but i still dont really know what im looking for, it just comes up with... Hi, i have opened up using String[] commands = {"cmd", "/c", fileLocation}; Runtime.getRuntime().exec(commands); and now i need to write a couple of lines to the application. Please... i've just seen this: FileZilla - The free FTP solution what that be a good substitute if i don't upgrade with webs? how easy will it be to write a java program to read/write files to it? thanks... yer i guess i was a bit naive thinking that! :/ you see its not just me who will be using this so need something on the web! yess webs does have an ftp interface but you need a premium account for... i haven't a clue tbh! i dont really know what im doing here, all i know is what i want to happen and that google hasnt helped me! also i dont know how much control i have over the server because its... Okay i have this website, (Project Quattro - Home), and i have uploaded a file to it (). i have entered some text into it and i can read that text... Happy birthday guys! thanks for all your help!! :D Hi all, didnt know where this goes so i'll try here! i need to add a load of integers into a string but not adding the up, e.g Integers 1, 8, 3, 0, 2, 4, 6 becomes String 1830246 if it... haha its ok just glad i have this it looks alot easier than what i have been doing!! XD yer thats what i have been doing for the BufferedReader aswell but thanks for the heads up! Thanks... 
hi thanks for the response, i tried using the reader.hasNext() but it just comes up with this error: i have imported: import java.io.*; import java.io.BufferedReader; import... Hi i have this code that reads a this txt file below. My code: try { BufferedReader in = new BufferedReader(new FileReader("file.txt")); String str; while ((str =... oh ok thanks, i just got the impression it was set but hey i'll take another look! thanks again for th help!! =] yer i have done that but process is still underlined. the help box comes up with: hi again, i have found this code online and when i use it says next to process: "cannot find symbol" do i need to import anything to make it work? try { BufferedReader in = new... hi thanks for the quick responce, now re the case1 error that was me just being tired and typing it out wrong i didnt do that in my actual project! when i do as you say above it just comes up... hi so fare i have this but it doesnt seem to be working: in one class i have: public class siwtchMenu{ public String[][] arrayA(int x, int y){ String arrayTest[][] = {{"0", "1",...
http://www.javaprogrammingforums.com/search.php?s=a7f04fcc22f7cc37bdf95785d8ebbc96&searchid=1348568
Low-level sound support and memory management. More...

#include <sys/cdefs.h>
#include <arch/types.h>

Go to the source code of this file.

Low-level sound support and memory management. This file contains declarations for low-level sound operations and for SPU RAM pool memory management. Most of the time you'll be better off using the higher-level functionality in the sound effect support or streaming support, but this stuff can be very useful for some things.

Transfer a packet of data from the AICA's SH4 queue. This function is used to retrieve a packet of data from the AICA back to the SH4. The buffer passed in should contain at least 1024 bytes of space to make sure any packet can fit.

Initialize the sound system. This function reinitializes the whole sound system. It will not do anything unless the sound system has been shut down previously or has not been initialized yet. This will implicitly replace the program running on the AICA's ARM processor when it actually initializes anything. The default snd_stream_drv will be loaded if a new program is uploaded to the SPU.

Get the size of the largest allocatable block in the SPU RAM pool. This function returns the largest size that can currently be passed to snd_mem_malloc() and be expected not to return failure. There may be more memory available in the pool, especially if multiple blocks have been allocated and freed, but calls to snd_mem_malloc() for larger blocks will return failure, since the memory is not available contiguously.

Free a block of allocated memory in the SPU RAM pool. This function frees memory previously allocated with snd_mem_malloc().

Reinitialize the SPU RAM pool. This function reinitializes the SPU RAM pool with the given base offset within the memory space. There is generally not a good reason to do this in your own code, but the functionality is there if needed.

Allocate memory in the SPU RAM pool. This function acts as the memory allocator for the SPU RAM pool. It acts much like one would expect a malloc() function to act, although it does not return a pointer directly, but rather an offset in SPU RAM.

Shutdown the SPU RAM allocator. There is generally no reason to be calling this function in your own code, as doing so will cause problems if you try to allocate SPU memory without calling snd_mem_init() afterwards.

Poll for a response from the AICA. This function waits for the AICA to respond to a previously sent request. This function is not safe to call in an IRQ, as it does implicitly wait.

Copy a request packet to the AICA queue. This function is used to put in a low-level request using the built-in streaming sound driver.

Begin processing AICA queue requests. This function begins processing of any queued requests in the AICA queue.

Stop processing AICA queue requests. This function stops the processing of any queued requests in the AICA queue.

Shut down the sound system. This function shuts down the whole sound system, freeing memory and disabling the SPU in the process. There are generally not many good reasons for doing this in your own code.
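Since snd_mem_malloc() hands back an offset into SPU RAM rather than a pointer, the pool bookkeeping has to live entirely on the calling side. A toy first-fit, offset-returning pool in that spirit — entirely hypothetical, KOS's real allocator is certainly different — might look like this:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Toy offset-returning pool allocator (NOT the KOS implementation).
struct Block { std::uint32_t off, size; bool used; };

class OffsetPool {
    std::vector<Block> blocks_;   // kept in address order
public:
    OffsetPool(std::uint32_t base, std::uint32_t size)
    { blocks_.push_back({base, size, false}); }

    // Returns an offset, like snd_mem_malloc; 0 means failure here.
    std::uint32_t alloc(std::uint32_t n) {
        for (std::size_t i = 0; i < blocks_.size(); ++i) {
            if (!blocks_[i].used && blocks_[i].size >= n) {   // first fit
                std::uint32_t off = blocks_[i].off;
                std::uint32_t sz  = blocks_[i].size;
                if (sz > n)   // split off the unused tail as a free block
                    blocks_.insert(blocks_.begin() + i + 1,
                                   Block{off + n, sz - n, false});
                blocks_[i].size = n;
                blocks_[i].used = true;
                return off;
            }
        }
        return 0;
    }

    // Largest single allocatable block (free memory may not be contiguous).
    std::uint32_t available() const {
        std::uint32_t best = 0;
        for (const Block& b : blocks_)
            if (!b.used && b.size > best) best = b.size;
        return best;
    }
};
```

The available() query mirrors the caveat in the documentation above: the pool may hold more total free memory than the largest request it can satisfy, because freed blocks fragment.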
http://cadcdev.sourceforge.net/docs/kos-2.0.0/sound_8h.html
Disclaimer: The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.

This forum post triggered me to share some code that I had. A while back I was building a site which let the customers define their own custom overviews. One requirement was that users could define the sort-by property. To do this you can use the property picker, but it wasn't friendly enough for my case, so I ended up building a custom macro parameter type.

To start, you need to create a new class library and add references to the following assemblies. In the example below I've created a class that inherits from DropDownList. This was the easiest way, since I needed a dropdown list and didn't want to play with the control tree. More important is that this class implements the IMacroGuiRendering interface. This adds two properties: ShowCaption, to show the caption on the parameters tab, and, more importantly, Value, which holds the selected value.

using System;
using System.Collections.Generic;
using System.Text;
using System.Web.UI.WebControls;
using umbraco.interfaces;

namespace MacroRenderDemo
{
    public class OrderBy : DropDownList, IMacroGuiRendering
    {
        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            if (this.Items.Count == 0)
            {
                this.Items.Add(new ListItem("Title", "nodeName"));
                this.Items.Add(new ListItem("Date", "createDate"));
            }
        }

        #region IMacroGuiRendering Members

        public bool ShowCaption
        {
            get { return true; }
        }

        public string Value
        {
            get
            {
                return this.SelectedValue;
            }
            set
            {
                this.SelectedValue = value;
            }
        }

        #endregion
    }
}

When you build the project and add the binary into the bin folder of your Umbraco site, you still can't select the new type from the parameter type list. First we need to register the class in the database.
Below you'll see the database table that holds all the macro property types. It is important that you add the assembly name, including the namespace, to the macroPropertyTypeRenderAssembly column, and add the name of your class to the macroPropertyTypeRenderType column.

Now we can use the parameter in our macros, and of course we can now use the new parameter in our template when we select the macro.

Another example of how flexible Umbraco is. If a requirement isn't available out of the box, usually all it takes is to implement an interface and write a few lines of code.

Let's start with wishing you all the best for 2010. 2009 was an insane year for me. I had given myself one year to be successful in business running my own Umbraco projects instead of being a .NET contractor. That turned out great: I've got a lot of new customers, have built a few great sites for my customers and did a few Umbraco consulting jobs.

So now it's time to make plans for 2010. First thing is that I'm moving to a real office instead of working from home. I'm really excited about this because I'm moving to an office building with lots of small companies; I think that's really inspiring.

When I started investigating Umbraco back in 2008 we had a small number of companies in the Netherlands (mostly freelancers) that were building websites based on Umbraco. Nowadays I see new companies that build Umbraco websites every week, not only small shops but also really big agencies. For this year my focus will be on helping companies (worldwide) be successful implementing Umbraco sites for their clients by offering consultancy services and custom package development, instead of building sites from front to end.

As you might know, I've also been working for a long time on UmbImport PRO. My plan was to release this package last year. It's the top priority on my todo list for Q1 this year. Also, another package will see the light this year: UmbLinkChecker.
As the name says, this will be a package that checks every link in your published site, not only in content but also hard-coded links in your templates etc. I will build a free and a Pro version. Since Umbraco has more than 75000 active installs, I think it's profitable to build commercial packages for Umbraco, and we will see more and more commercial packages from different vendors. Hope the Umbraco store will be back up in 2010 and filled with great products.

Last year I couldn't find enough time to blog or write wikis and help the community with their problems on the forum. I'm hoping this year that will change, since I've got a lot of info that I'd like to share. Have a great 2010!
http://www.richardsoeteman.net/default,month,2010-01.aspx
This paper is in response to N4074: Let return {expr} Be Explicit.

N4074 proposes (with the best of intentions) to ignore the explicit qualifier on conversions when the conversion would take place on a return statement and is surrounded with curly braces. For example:

struct Type2
{
    explicit Type2(int) {}
};

Type2
makeType2(int i)
{
    return {i};  // error now, ok with adoption of N4074
}

Note that this code today results in a compile-time error, not because of the braces, but because of the explicit qualifier on the conversion from int. Remove that explicit qualifier as in the example below, and the code compiles today:

struct Type1
{
    Type1(int) {}
};

Type1
makeType1(int i)
{
    return {i};  // ok since C++11, implicit conversion from int to Type1
}

The motivation for this change comes from N4074's predecessor, N4029:

    I believe this is inconsistent because the named return type is just as "explicit" a type as a named local automatic variable type. Requiring the user to express the type is by definition redundant — there is no other type it could be. This falls into the (arguably most) hated category of C++ compiler diagnostics: "I know exactly what you meant. My error message even tells you exactly what you must type. But I will make you type it."

We can relate. Unnecessary typing is frustrating. However, we believe it is possible that ignoring the explicit qualifier on type conversions, even if only restricted to return statements enclosed with curly braces, could turn compile-time errors into run-time errors. Possibly very subtle run-time errors (the worst kind).

How likely is it that such run-time errors might be introduced? <shrug> Such conjecture is always highly speculative. But in this instance, we can offer a realistic example where exactly this would happen. And it happens very close to our home: by ignoring the explicit qualifier of the constructor of a std-defined class type.
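A side note on the status quo that these examples rely on: copy-list-initialization (return {i};) rejects explicit constructors, while direct-list-initialization, which names the type, does not. A compilable sketch, with names chosen to mirror the examples above (the int member v is added here purely so the behaviour can be observed):

```cpp
#include <cassert>

struct Type1 { int v; Type1(int i) : v(i) {} };           // implicit ctor
struct Type2 { int v; explicit Type2(int i) : v(i) {} };  // explicit ctor

// Copy-list-initialization: only implicit conversions are allowed.
Type1 makeType1(int i) { return {i}; }        // ok today

// Direct-list-initialization names the type, so explicit is fine.
Type2 makeType2(int i) { return Type2{i}; }   // ok today

// Type2 broken(int i) { return {i}; }        // error today; ok under N4074
```

The one-token difference between makeType2 and the commented-out broken is exactly the "redundant" typing N4074 wants to eliminate.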
Bear with us while we set up the likelihood of this counter-example actually happening in the wild...

Today in <string> we have the following functions which turn a std::string into an arithmetic type:

int                stoi  (const string& str, size_t* idx = 0, int base = 10);
long               stol  (const string& str, size_t* idx = 0, int base = 10);
unsigned long      stoul (const string& str, size_t* idx = 0, int base = 10);
long long          stoll (const string& str, size_t* idx = 0, int base = 10);
unsigned long long stoull(const string& str, size_t* idx = 0, int base = 10);

float       stof (const string& str, size_t* idx = 0);
double      stod (const string& str, size_t* idx = 0);
long double stold(const string& str, size_t* idx = 0);

It seems quite reasonable that a programmer might say to himself:

    I would like to write a function that turns a std::string into a std::chrono::duration. Which duration? Well, let's keep it simple and say: any one of the six "predefined" durations: hours, minutes, seconds, milliseconds, microseconds, or nanoseconds can be parsed and returned as nanoseconds.

What would be the syntax for doing this?
Well, it would be quite reasonable to follow the same syntax we already have for duration literals:

2h   // 2 hours
2min // 2 minutes
2s   // 2 seconds
2ms  // 2 milliseconds
2us  // 2 microseconds
2ns  // 2 nanoseconds

And here is quite reasonable code for doing this:

std::chrono::nanoseconds
stons(std::string const& str)
{
    using namespace std::chrono;
    auto errno_save = errno;
    errno = 0;
    char* endptr;
    auto count = std::strtoll(str.data(), &endptr, 10);
    std::swap(errno, errno_save);
    if (errno_save != 0)
        throw std::runtime_error("error parsing duration");
    if (str.data() < endptr && endptr < str.data() + str.size())
    {
        switch (*endptr)
        {
        case 'h':
            return hours{count};
        case 'm':
            if (endptr + 1 < str.data() + str.size())
            {
                switch (endptr[1])
                {
                case 'i':
                    if (endptr + 2 < str.data() + str.size() && endptr[2] == 'n')
                        return {count};
                    break;
                case 's':
                    return milliseconds{count};
                }
            }
            break;
        case 's':
            return seconds{count};
        case 'u':
            if (endptr + 1 < str.data() + str.size() && endptr[1] == 's')
                return microseconds{count};
            break;
        case 'n':
            if (endptr + 1 < str.data() + str.size() && endptr[1] == 's')
                return nanoseconds{count};
            break;
        }
    }
    throw std::runtime_error("error parsing duration");
}

Here is a simplistic "are you breathing" test for this code:

int main()
{
    using namespace std;
    using namespace std::chrono;
    cout << stons("2h").count() << "ns\n";
    cout << stons("2min").count() << "ns\n";
    cout << stons("2s").count() << "ns\n";
    cout << stons("2ms").count() << "ns\n";
    cout << stons("2us").count() << "ns\n";
    cout << stons("2ns").count() << "ns\n";
}

Ok, so what is our point?

There is a careless type-o / bug in stons. Do you see it yet?

stons is not trivial. But neither is it horribly complicated. Nor is it contrived. Nor is the bug in it contrived. It is a type of bug that occurs commonly.

Today the bug in stons results in a compile-time error. This compile-time error was intended by the designers of the <chrono> library. If we accept N4074, the bug in stons becomes a run-time error.
Have you spotted it yet?

Here is a clue. The output of the test (with N4074 implemented) is:

7200000000000ns
2ns
2000000000ns
2000000ns
2000ns
2ns

The error is this line that parses "min" after the count:

    return {count};

which should read:

    return minutes{count};

Once the error is corrected, the correct output of the test is:

7200000000000ns
120000000000ns
2000000000ns
2000000ns
2000ns
2ns

Is anybody actually using braces on return?

The expression return {x}; has only been legal since C++11. There have only been 3 (official) years for it to be adopted by programmers. Is there any evidence anyone is actually using this syntax? Yes. These were just the ones we were able to find. Searching for "return[\s]*{" is exceedingly challenging. But just the above results clearly demonstrate a use that is significantly greater than "vanishingly rare."

tuple Counter-Example

What if the return isn't a scalar? For example is this just as unsafe?

    return {5, 23};

If the return type is tuple<seconds, nanoseconds> (which we still do not want to construct implicitly from int), then this would definitely not be safe! We never want to implicitly construct a duration from an int. Here is how we say that in C++:

    template <class Rep2> constexpr explicit duration(const Rep2& r);

That is, we put explicit on the constructor (or on a conversion operator). To change the language to ignore explicit in some places is a dangerous direction. All that being said, if the constructions are implicit then they are safe.
For example:

tuple<seconds, nanoseconds>
test1()
{
    return {3, 4};  // unsafe, 3 does not implicitly convert to seconds
                    // 4 does not implicitly convert to nanoseconds
}

tuple<seconds, nanoseconds>
test2()
{
    return {3h, 4ms};  // safe, 3h does implicitly convert to seconds
                       // 4ms does implicitly convert to nanoseconds
}

tuple<int, float, string>
test3()
{
    return {3, 4.5, "text"};  // safe, 3 does implicitly convert to int
                              // 4.5 does implicitly convert to float
                              // "text" does implicitly convert to string
}

None of the three examples compile today according to C++14 because the std::tuple constructor invoked is marked explicit. However test2 and test3 are both perfectly safe. It is unambiguous what is intended in these latter two cases. And they would indeed compile if the tuple constructor invoked was not marked explicit.

N3739 is a proposal which makes test2 and test3 legal, while keeping test1 a compile-time error. We fully support this proposal. Indeed Howard implemented this proposal in libc++ years ago, for precisely the same reasons that N4074 is now proposed: to ease the syntax burden on the programmer. But unlike N4074, N3739 never disregards the explicit keyword that the programmer has qualified one of his constructors with.

Perhaps there is some defective quality about chrono::duration which is causing the problem. Is chrono::duration the only problematic example? Consider a class OperatesOnData.

class OperatesOnData
{
public:
    OperatesOnData(VerifiedData);    // knows that the Data has been verified

    // Use this constructor ONLY with verified data!
    explicit OperatesOnData(Data);   // trusts that the Data has been verified
};

Data that comes from untrusted sources must first be verified to make sure it is in the proper form, all invariants are met, etc. This unverified Data is verified with the type VerifiedData. This VerifiedData can then be safely sent to OperatesOnData with no further verification. However verifying the data is expensive.
Sometimes Data comes from a trusted source. So the author of OperatesOnData has set up an explicit constructor that takes Data. Clients know that OperatesOnData can't verify Data all the time because of performance concerns. So when they know that their Data need not be verified, they can use an explicit conversion to OperatesOnData to communicate that fact.

Is this the best design in the world? That's beside the point. This is a reasonable design one might find coded in C++. Now we need to get the Data and start working on it:

VerifiedData getData();

OperatesOnData
startOperation()
{
    return {getData()};
}

So far so good. This is valid and working C++14 code.

Three years go by, and the programmers responsible for this code change. C++17 decides that making return {x}; bind to explicit conversions is a good idea. Now, because of some changes on the other side of a million-line program, it is now too expensive for getData() to do the verification, as it has acquired some new clients that must do verification anyway. So getData() changes to:

Data getData();

Bam. Unverified data gets silently sent to OperatesOnData. The original code was designed precisely to catch this very error at compile time:

Data getData();

OperatesOnData
startOperation()
{
    return {getData()};
}

test.cpp:23:12: error: chosen constructor is explicit in copy-initialization
    return {getData()};
           ^~~~~~~~~~~
test.cpp:14:14: note: constructor declared here
    explicit OperatesOnData(Data);   // trusts that the Data has been verified
             ^
1 error generated.

But because the committee has redefined what return {getData()} means between C++11 and C++17 (a period of 6 years), a serious run-time error (perhaps a security breach) has been created.

This example highlights the fact that the return type of an expression is not always clear at the point of the return statement, nor in direct control of the function author.
When what is being returned is the result of some other function, declared and defined far away, there is potential for error. If explicit conversions are implicitly allowed between these two independently written pieces of code, the potential for error increases greatly:

shared_ptr<BigGraphNode>
makeBigGraphNode()
{
    return {allocateBigGraphNode()};
}

What type does allocateBigGraphNode() return? Today, if this compiles, we are assured that it is some type that is implicitly convertible to shared_ptr<BigGraphNode>. Implicit conversions are conversions that the author of shared_ptr deemed safe enough that the conversion need not be explicitly called out at the point of use.

Explicit conversions on the other hand are not as safe as implicit conversions. And for whatever reasons, the author of shared_ptr deemed the conversions of some types to shared_ptr sufficiently dangerous that they should be called out explicitly when used.

We ask again: What type does allocateBigGraphNode() return? The answer today is that we don't know without further investigation, but it is some type that implicitly converts to shared_ptr<BigGraphNode>. And we think that is a much better answer than "some type that will convert to shared_ptr<BigGraphNode> explicitly or implicitly." This latter set includes BigGraphNode*.

If the above code is written today, and the meaning of that return statement changes tomorrow, and the next day the return type of allocateBigGraphNode() changes, there is a very real potential for turning compile-time errors into run-time errors.
Indeed, the presence of curly braces is today legal in the second example: acceptBigGraphNode({allocateBigGraphNode()}); but only if the conversion is implicit. In N4029, Sutter responds to the concerns about implicit ownership transfer by saying "it's okay", and continues I believe return p;is reasonable "what else could I possibly have meant" code, with the most developer-infuriating kind of diagnostic (being able to spell out exactly what the developer has to redundantly write to make the code compile, yet forcing the developer to type it out), and I think that it's unreasonable that I be forced to repeat std::unique_ptr<T>{}here. It's not at all clear that it's "obvious" what the programmer meant. It's NOT about knowing the return type. It's about knowing the source type and its semantics. As it happens, with a raw pointer, we do not know its ownership semantics, because it doesn't have any. We do not know where it came from - it may be a result of a malloc inside a legacy API. It may be a result of a strdup (same difference, really). It might not be a pointer to a resource dynamically-allocated with a default_delete-compatible allocator. The implementation diagnostic is not saying "write the following boiler-plate". It's saying "stop and think". It's specifically saying "you're about to cross the border between no ownership semantics and strict ownership semantics, please make sure you're doing what you think you're doing". It also seems questionable that we would break the design expectations of smart pointers well older than a decade just to make interfacing with legacy raw pointers easier. Modern C++ code uses make_unique and make_shared, not plain new. And such modern C++ code uses smart pointers as return types for factory functions that create the resources, not raw pointers. 
Even worse, this proposal means that the most convenient (since shortest) way to create a unique_ptr<T> from arguments of T is not

return make_unique<T>(args);

but rather

return {new T{args}};

The Finnish experts pull no punches when they say such a thing is the wrong evolution path to take. They have voiced vocal astonishment at the proposed core language change, made just to support use cases that should be, or at the very least should become, rare and legacy, especially when that change breaks the design expectations of tons of libraries. Returning a braced-list has a meaning today. Consider:

char f() {int x = 0; return {x};} // error: narrowing conversion of 'x' from 'int' to 'char' inside { }

People learn that in order to avoid potentially-corrupting narrowing conversions between built-in types, they can use braces. It seems very illogical that for library types with explicit conversions, braces would suddenly bypass similar checking for potentially-corrupting cases! For us, these examples stress the point that when a type author writes "explicit", we should never ignore it. Not on a return statement. Not anywhere. The type author knows best. He had the option to not make his conversion explicit, and he felt strongly enough about the decision to opt in to an explicit conversion. We recall that when <chrono> was going through the standardization process, the entire committee felt very strongly that int should never implicitly convert to a duration. If we want more conversions to be implicit, then we should not mark those conversions with the explicit qualifier. This is a far superior solution compared to marking a conversion explicit, and then treating the conversion as implicit in a few places. And this is the direction of N3739. Instead of changing the rules for every type, N3739 removes the explicit qualifier from specific types where it is undesirable to have it, while keeping the explicit qualifier in places that respect the type author's wishes.
All of the chrono, tuple and smart pointer examples are practical cases of a general conceptual case: a conversion from a type with no semantics to a type with strict semantics. Bypassing the explicitness of such conversions is exceedingly dangerous, especially given that the designers of those library types deliberately chose to mark such conversions explicit. The proposed change is moving the earth under the library designers' feet.
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2014/n4094.html
today I re-stumbled upon an issue I quick-and-dirty solved a while ago, but I want to solve it more elegantly while I am doing code polishing these days. I use Regexp::Assemble to assemble regexes that are about 15kb to 87kb large. Now I very simply run through a large (~10GB) file and match the regex. I used to do this on the command line in the style

perl -ne 'print if (/MYLARGEREGEX_HERE/../END_OF_BLOCK/)' inputfile > outputfile
[download]

this was fast as hell. However, when my regexes grew in size, I was not able to copy-paste them into the bash, so I started to read the regex from a file and did something like this

#!perl
use strict;
use warnings;
open my $fh_big_file, '<', $ARGV[0] || die;  # first argument must be the input file
open my $fh_regex, '<', $ARGV[1] || die;     # second argument points to the file containing the regex
my $regex = <$fh_regex>;
while(<$fh_big_file>) {
    print if (/$regex/../^END_OF_BLOCK/);
}
[download]

The funny thing is that this flavour of code costs me a factor of 20 in speed or even more. I can reclaim the speed by avoiding storing the regex in a variable, e.g.

while(<$fh_big_file>) {
    print if (/MY_HUGE_REGEX_JUST_PLAIN/../^END_OF_BLOCK/);
}
[download]

so I assume this has something to do with fetching the content of the variable (from RAM to CPU?) over and over for each loop of while(<>), whereas putting the regex in directly doesn't need to re-read it every time. This approach however requires me to manually copy the regex into place each time I run the whole procedure of "assembling, searching, processing, assembling, searching, processing", and I would like to automate it without loss of performance. Any ideas? thanks and cheers!

Update/Solution/Close The suggestion to use the /o modifier works. However, it needs to be behind /$regex/, not behind /END_OF_BLOCK/, i.e. like shmem suggested:

while(<$fh_big_file>) {
    print if (/$regex/o .. /^END_OF_BLOCK/);
}
[download]

thanks!

esteemed monks: greetings all!
I've been struggling with this for days now, and figured it was finally time to ask for some help. I've been following the instructions found at to set up a wifi SD card for use in a photoframe. I wanted to use Perl rather than the python referenced on the page because I generally prefer perl and because I kinda wanted to do it myself. It's been smooth sailing, right up to the point of having to submit a FAT32 timestamp to set the creation date of the uploaded file. This part has me completely stumped. I've been all over google, and the best reference I can find about the required formats is from this thread on stackoverflow:. There are no search results here for fat32. Interestingly enough, I was able to figure out how to interpret the dates coming off the card (for the purposes of knowing which photos were the last ones added, so I can only upload new ones), but trying to apply the same logic in reverse did not work so well. I'm also not entirely clear on why it worked the way it did, such that they come out split up but, when going in, the card wants just one string — but I suppose that's just quirkiness in the API I've got to live with. Anyhow, I suspect the answer has something to do with pack, but I won't lie, I'm just flinging stuff at the wall in the hopes that it sticks. I thought I was making progress, in that I have some idea what the actual value for a current timestamp is that would be submitted to the api call (through trial and error/guesswork -- manually trying to make things up, based on the example given in the API documentation). So I've got some idea what the value needs to look like, but for the life of me I can't get that output to generate programmatically. It seems that it's inclusive of 0-9 and a-f, so it seems like a hexadecimal number (as per), but being honest, this gets a little deeper than I usually go. Thinking about it sometimes makes my head hurt. Has anyone ever bumped into anything like this before, or can anyone offer any insight?
code snippets below that do the relevant stuff with dates/times. Here's the code that pulls the time and date from the card and interprets it.

#Each row of the list is returned in the following format.
#<directory>, <filename>, <size>, <attribute>, <date>, <time>
# date 16 bit int -- bit 15-9 value based on 0 as 1980, bit 8-5 month value from 1 to 12, bit 4-0 day, value from 1 to 31
# time 16 bit int -- bit 15-11 hour, 10-5 minute, 4-0 second / 2
# size
my $fileList = getHttp($cardip, "command.cgi?op=100&DIR=/");
#print ($fileList . "\n");
my @fileArray = split("\n", $fileList);
my @fileTimesArray;
my $lastTime = 0;
print (ref(@fileArray) . "\n");
foreach my $file (@fileArray) {
    if (index($file, ",") != -1) {
        print "working with file $file\n";
        my ($directory, $name, $size, $att, $date, $time) = split(",", $file);
        # example date, time = 18151,39092
        my $day = ($date >> 0) & (2**5-1);
        my $month = ($date >> 5) & (2**4-1);
        my $year = ($date >> 9) & (2**7-1);
        $year = 1980 + $year;
        my $second = ($time >> 0) & (2**5-1);
        my $minute = ($time >> 5) & (2**6-1);
        my $hour = ($time >> 11) & (2**5-1);
        $second = $second * 2;
        print ("file: $name | $month - $day - $year | $hour : $minute : $second\n");
        #print ("file: $name | $date | $time\n");
        #print ("day: $day\n");
        #print ("month: $month\n");
        #print ("year: $year\n");
        #print ("second: $second\n");
        #print ("minute: $minute\n");
        #print ("hour: $hour\n");
        my $perlMonth = $month - 1;
        my $epochTime = timelocal($second,$minute,$hour,$day,$perlMonth,$year);
        if ($epochTime > $lastTime) {
            $lastTime = $epochTime;
        }
        push @fileTimesArray, { file => $name, epoch => $epochTime, size => $size };
    }
}
[download]

and here's the code for trying to generate a timestamp based on the timestamp returned from the file on disk.
@info = stat($path->{file});
my $createdtime = $info[10];
print "regular created time is: $createdtime\n";
my ($sec, $min, $hour, $day, $month, $year) = (localtime($createdtime))[0,1,2,3,4,5];
# You can use 'gmtime' for GMT/UTC dates instead of 'localtime'
$month++;
my $displayYear = $year;
my $realYear = $year;
$year = $year - 80;
$second = ceil($sec / 2);
# example date, time = 18151,39092 --> not same format!?!
# time 16 bit int -- bit 15-11 hour, 10-5 minute, 4-0 second / 2
#my $encSecond = $second & (2**5-1);
#my $encMinute = $minute & (2**6-1);
#my $encHour = $hour & (2**5-1);
# date 16 bit int -- bit 15-9 value based on 0 as 1980, bit 8-5 month value from 1 to 12, bit 4-0 day, value from 1 to 31
#my $encDay = $day & (2**5-1);
#my $encMonth = $month & (2**4-1);
#my $encYear = $month & (2**7-1);
#use integer;
my $data = $year . $month . $day . $hour . $min . $second;
print "using $data as input for pack\n";
#my $data = $second . " " . $minute . " " . $hour . " " . $day . " " . $month . " " . $year;
#my $createdtimeFat = pack "N8", $data;
my $createdtimeFat = pack "N8", $year,$month,$day,$hour,$min,$second;
#my $createdtimeFat = ($year << 25) | ($month << 21) | ($day << 16) | ($hour << 11) | ($min << 5) | ($second << 0);
#my $createdtimeFat = (($year & (2**7-1)) << 25) | (($month & (2**4-1)) << 21) | (($day & (2**5-1)) << 16) | (($hour & (2**5-1)) << 11) | (($min & (2**6-1)) << 5) | (($second & (2**5-1)) << 0);
# 8 digits
# 8 = year
# 7 = year + month
# 6 = month + day
# 5 = day
####
# 4 = hour
# 3 = hour + month
# 2 = minute + second
# 4 = second
#my $createdtimeFat = '469f9f01';
#my $hex = sprintf("0x%x", $createdtimeFat);
#my $hex = printf("%x",$createdtimeFat);
print "Unix time ".$createdtime." converts to ".$month." ".$day.", ".($displayYear+1900)." ".$hour.":".$min.":".$sec." year (in offset from 1980) is $year [real year is $realYear]\n";
#print $encSecond ." ". $encMinute ." ". $encHour ." ". $encDay ." ". $encMonth ." ". $encYear . "\n";
print "createdtimeFat should look something like 46ef99c6\n";
print "createdtimeFat is $createdtimeFat\n";
my @unpacked = unpack("N8",$createdtimeFat);
print "and unpacked: " . @unpacked . "\n";
my $setdate = getHttp($cardip, "upload.cgi?FTIME=0x" . $createdtimeFat);
print "result of setdate operation: $setdate\n";
[download]

and this is the getHttp function/sub

sub getHttp() {
    my $ip = shift;
    my $args = shift;
    my $status;
    my $url = "http://" . $ip . "/" . $args;
    #print ("accessing " . $url . "\n");
    # set custom HTTP request header fields
    my $req = HTTP::Request->new(GET => $url);
    my $resp = $ua->request($req);
    if ($resp->is_success) {
        my $message = $resp->decoded_content;
        #print "Received reply: $message\n";
        $status = $resp->decoded_content;
    }
    else {
        print "HTTP GET error code: ", $resp->code, "\n";
        print "HTTP GET error message: ", $resp->message, "\n";
        $status = $resp->message;
    }
    return $status;
}
[download]

many thanks in advance for any help or guidance anyone can offer - I'm at wit's end, and I feel like I've got to be missing something!

use CGI qw(:standard);
use CGI::Carp qw(warningsToBrowser fatalsToBrowser);
use Net::SSL::ExpireDate;
use strict;
my $sitename;
my $ed;
my $expire_date;
print header;
print start_html("Thank You");
print h2("Thank You");
my %form;
foreach my $p (param()) {
    $form{$p} = param($p);
    print "$p = $form{$p}<br>\n";
    $sitename = $form{$p};
}
#$sitename = "";
chomp($sitename);
$sitename =~ s/^\s+|\s+$//g;
print "\nWebsite name is: $sitename.\n";
$ed = Net::SSL::ExpireDate->new( https => $sitename );
if (defined $ed->expire_date) {
    $expire_date = $ed->expire_date;
    print "$expire_date\n";
}
print end_html;
[download]

Thanks, Alok

What is the proper way to get the caller of our object creation (the object's client code) inside Mo/Moo/Moose's BUILD or BUILDARGS? I'm okay with getting a subclass. From a quick glance at the Moo and Moose codebases, it doesn't seem like Moo/Moose provides a utility routine for this.
A quick search on CPAN also doesn't yield anything yet. Example:

package C1;
use Moo;
has attr1 => (is => 'rw');
sub BUILD {
    no strict 'refs';
    my $self = shift;
    # XXX set default for attr1 depending on the caller
    unless (defined $self->attr1) {
        $self->attr1(${"$object_caller_package\::FOO"});
    }
}

package C2;
use Moo;
extends 'C1';

package main;
our $FOO = 42;
say C2->new->attr1; # prints 42
[download]

In principle it should be easy enough to loop over the caller stack and use the first non-Moo* frame.

Once again, I have a module but no name. I come here in the hope of finding a good name that helps others find this module and put it to good use. Let me first describe what the module does: The module exports two functions, rewrite_html and rewrite_css. These functions rewrite all things that look like URLs to be relative to a given base URL. This is of interest when you're converting scraped HTML to self-contained static files. The usage is:

use HTML::RewriteURLs;
my $html = <<HTML;
<html>
<head>
<link rel="stylesheet" src="" />
</head>
<body>
<a href="">Go to Perlmonks.org</a>
<a href="">Go to home page</a>
</body>
</html>
HTML
my $local_html = rewrite_html( "", $html );
print $local_html;
__END__
<html>
<head>
<link rel="stylesheet" src="../css/site.css" />
</head>
<body>
<a href="">Go to Perlmonks.org</a>
<a href="..">Go to home page</a>
</body>
</html>
[download]

The current name for the module is HTML::RewriteURLs, and this name is bad because the module does not allow or support arbitrary URL rewriting but only rewrites URLs relative to a given URL. The functions are also badly named, because rewrite_html doesn't rewrite the HTML but makes URLs relative to a given base. And the HTML::RewriteURLs name is also bad/not comprehensive because the module also supports rewriting CSS. I'm willing to stay with the HTML:: namespace because nobody really cares about CSS before caring about HTML.
I think a better name could be HTML::RelativeURLs, but I'm not sure if other people have the same association. The functions could be renamed to relative_urls_html() and relative_urls_css(). Another name could be URL::Relative or something like that, but that shifts the focus away from the documents I'm mistreating to the URLs. I'm not sure what people look for first. Below is the ugly, ugly regular expression I use for munging the HTML. I know and accept that this regex won't handle all edge cases, but seeing that there is no HTML rewriting module on CPAN at all, I think I'll first release a simpleminded version of what I need before I cater to the edge cases. I'm not fond of using HTML::TreeParser because it will rewrite the document structure of the scraped pages, and the only change I want is the change in the URL attributes.

=head2 C<< rewrite_html >>

Rewrites all HTML links to be relative to the given URL.

This only rewrites things that look like C<< src= >> and C<< href= >> attributes. Unquoted attributes will not be rewritten. This should be fixed.

=cut

sub rewrite_html {
    my($url, $html)= @_;
    $url = URI::URL->new( $url );
    #croak "Can only rewrite relative to an absolute URL!"
    #    unless $url->is_absolute;
    # Rewrite relative to absolute
    rewrite_html_inplace( $url, $html );
    $html
}

sub rewrite_html_inplace {
    my $url = shift;
    $url = URI::URL->new( $url );
    #croak "Can only rewrite relative to an absolute URL!"
    #    unless $url->is_absolute;
    # Rewrite relative to absolute
    $_[0] =~ s!((?:src|href)\s*=\s*(["']))(.+?)\2!$1 . relative_url(URI::URL->new( $url ),"$3") . $2!ge;
}
[download]

Update: Now released as HTML::Rebase, thanks for the discussion and improvements!
20150619-17:30:43.616, 26
20150619-17:30:33.442, 23
20150619-17:30:40.376, 26
20150619-17:30:38.863, 26
20150619-17:30:56.936, 26
20150619-17:30:34.952, 24
20150619-17:30:45.889, 26
20150619-17:30:53.940, 23
20150619-17:30:51.154, 25
20150619-17:30:48.699, 26

Dear ALL, I was doing some parsing of a log file and came across this bug (I think). I add the next simple code to show you what I face:

use strict;
use warnings;
my $data = 'blabla;tag1=12345;blabla;';
# my $data = 'blabla;tag1=12345;blabla;tag2=99999';
# get tag1 value
$data =~ m/tag1=(\d+)/g;
my $tag1 = $1;
# get tag2 value
$data =~ m/tag2=(\d+)/g;
my $tag2 = $1;
print "tag1 = $tag1\n";
print "tag2 = $tag2\n";
[download]

The output:

tag1 = 12345
tag2 = 12345
[download]

As you can see, there is only a tag1 value in $data, so there should be no match and the second tag $tag2 should be undefined, but what I got is $tag1 == $tag2! So can any monk here (and pretty please) explain to me what happens here. BR Hosen

rs6413438,CYP2C19_10
rs4986910,CYP2C19_20

--------- Converting Star Allele references to rs numbers ---------
Current input line is
Index 0 is rs6413438
Index 0 is rs6413438 is stored as rs6413438 <-- Correct display
Index 1 is CYP2C19_10 is stored as CYP2C19_10 <--- WTF, where is the first variable?
Comparing CYP2C19_10 and CYP2C19_10 and CYP2C19_12
Index 0 is rs4986910
Index 0 is rs4986910 is stored as rs4986910 <-- Correct display
Index 1 is CYP2C19_20 is stored as CYP2C19_20 <--- WTF
Comparing CYP2C19_20
-------------- Done converting Star Allele references -------------

#!perl
use strict;
use 5.010;
my $STARFile;              # File handle to reference file
my @Stars;
$Stars[0] = "CYP2C19_10";  # Mock array of values to cross reference
$Stars[1] = "CYP2C19_12";
if(@Stars==0){return;}     # If no Star Alleles were specified then no need to do this so return to the main body
if(!open $STARFile,"<","test.csv"){die "Reference file could not be found or could not be opened.";}  # Open the Star reference file to prepare to convert information and store the file handle to $STARFile.
print "Converting specified Star Designations to SNPs...";
# The conversion table is opened so convert the Star name to rs numbers and then store the rs numbers to the @SNPs array and the corresponding Star name to the @Stars array at the same index.
my @SNPs;
my @StarsCon;
my $RefIndex;      # Holds the line in the reference table file
my $StarIndex;     # Holds the index of the @Stars array that is being checked
my $tmpSNPIndex;   # Holds the index in the @SNPs array that we are comparing
my $tmpStar;       # Holds the Star Allele name
my $tmpRS;         # Holds the SNP's rs number
my @tmpConv;       # Holds the split Star and rs numbers
say "\n--------- Converting Star Allele references to rs numbers ---------";
while (<$STARFile>){          # Input a line from the database and as long as we haven't reached the end of the file
    chomp;                    # Remove the trailing newline
    say "Current input line is @_";
    @tmpConv = split ",",$_;  # Split the CSV line from the reference table such that $tmpConv[0] = Star name and $tmpConv[1] = rs number
    $tmpStar = $tmpConv[1];
    $tmpRS = $tmpConv[0];
    say "Index 0 is $tmpConv[0]";                        # Displays correctly
    say "Index 0 is $tmpConv[0] is stored as $tmpRS";    # Displays correctly
    say "Index 1 is $tmpConv[1]";                        # Displays correctly
    say "Index 1 is $tmpConv[1] is stored as $tmpStar";  # Displays INcorrectly
    for($StarIndex=0;$StarIndex<@Stars;$StarIndex++){
        say "Comparing $tmpStar and $Stars[$StarIndex]";
        if($tmpStar eq $Stars[$StarIndex]){  # If the current line of the database file contains the Star Allele rs number then
            $tmpSNPIndex = @SNPs;            # Get the number of entries in the @SNPs array.
            say "1. $tmpRS was converted from $tmpStar";
            say "2. $tmpStar was converted to $tmpRS";
            say "3. $tmpStar was converted to $tmpRS";
            say "4. $tmpRS was converted from $tmpStar";
            push @StarsCon, $tmpStar;  # Add the Star allele name to the @StarsCon array
            push @SNPs, $tmpRS;        # Add the new rs number to the @SNPs array
            if(@Stars>0){              # If we have more than one SNP then splice
                splice @Stars,$StarIndex,1;  # the @Stars array
            }else{                     # Otherwise pop off the last one
                pop @Stars;
            }
            # Exit the for loop
        }
    }
    if(! @Stars>0){last;}  # If that was the last entry then stop searching
}
say "-------------- Done converting Star Allele references -------------";
if(@Stars>0){  # If any SNPs have not been found then
    say "\n"."Conversions not completed: @Stars.";  # Inform the user which ones were not found
}else{         # Otherwise
    say "\n"."All conversions successful.";  # Inform the user that all were found
}
close $STARFile;  # Close the reference file
print "Done!\n";
[download]

my $endofline = ( /\r\n$/ ) ? "\r\n" : "\n";
[download]

print OUTFILE "$processed_string","$endofline";
[download]

I started using Log4perl some time ago and it is the bees knees, though I only access its most basic functionality. I use it in a cron job that goes something like this:

use Log::Log4perl qw(:easy);
Log::Log4perl->easy_init( { level => $INFO, file => $logfile } );
INFO "Start process...";
# lots of stuff here
my $sender = Email::Stuffer->new;
set_transport($sender);  # set up SMTP
# construct email here, then
my $sent = $sender->send;
if ( $sent ) {
    INFO "Email sent...";
}
# ...
[download]

Log::Log4perl->easy_init( { level => $ERROR, file => $logfile } );
my $sender = Email::Stuffer->new;
# etc
my $sent = $sender->send;
Log::Log4perl->easy_init( { level => $INFO, file => $logfile } );
[download]

Update: As is often the case, I think I have found the answer just after asking the question - when I set up the SMTP transport I have debug => 1, so maybe Log4perl is appropriating the debug messages. Will test later.

After 15 years of writing Perl, I've finally achieved something that I've wanted for a very long time...
a module that allows you to inject code into subroutines, and add subs on the fly to files. The original purpose of this was to add a call to a stack trace method to each sub, so a long-running program would keep track of its own information. I'm days away from releasing candidate one of the upgraded module, and I'll be requesting of the Monks who have some time to spare to review the README for too much/too little info, and, if they are so kind, to test out the code examples provided in the SYNOPSIS (and beyond). What I'd like to do is see how it looks after the CPAN parser does its job. My question is this: Can I upload a devel version to PAUSE (a devel version in CPAN is one that has a _NN designation at the end of the version info) without pissing all over my existing relatively stable version, and view it all the same, while current users will still pull stable in a cpan install Module::Name? I could try it, but I felt it better to ask than to put undue load on the CPAN servers.

-stevieb

For those curious, current code can be found in the 'engine' branch here: DES.

I have been tinkering with a few tools lately, and now want to put up a portfolio of some web applications that I am working on. I have an account on pair Networks (they also host this site), so I set up local::lib and went ahead and tried to install Mojolicious::Lite, since that's the platform I'm working on these days. No dice -- Mojolicious::Lite requires 5.10, and pair only has 5.8.9. I checked with the other provider I use, and they have 5.8.8. So the two options I can see are a) install an up-to-date Perl on one of those accounts, or b) have these web applications run on my home machine (perhaps using to provide consistent name resolution -- not sure if this is still available). I could go find another web provider, but that's additional expense, and not really my best option right now. Feedback welcome!

Alex / talexb / Toronto

Thanks PJ. We owe you so much. Groklaw -- RIP -- 2003 to 2013.
I just read about an IBM programming challenge to try and entice developers to IBM's Bluemix cloud development environment. (Don't bother if you're outside the UK; or if you want to use Perl (it ain't supported :(); or if stupid sign-up processes that don't work annoy you; or ...) What struck me was that the three programming tasks are, at least notionally, so trivial. It took me less than 5 minutes to write (working, but probably not best) solutions to all three. (Whether they would pass their test criteria I guess we'll probably never know.) I was also struck by this part of the description: that you can put together a programme that can run within a time limit or on limited resources, rather than just lashing together a hideous brute-force monstrosity. And that you can actually read the questions properly in the first place (a useful start, but one that's often forgotten). I think it would be interesting to see how the best Perlish solutions we can come up with compare with those in the other languages that get entered in the competition, when and if they are actually made public. So have at them. (Don't forget to add <spoiler></spoiler> tags around your attempts.) I'd post the questions here, but I'm not sure it wouldn't be a problem copyright-wise. I'll post my solutions here in a few days.

Priority 1, Priority 2, Priority 3
Priority 1, Priority 0, Priority -1
Urgent, important, favour
Data loss, bug, enhancement
Out of scope, out of budget, out of line
Family, friends, work
Impossible, inconceivable, implemented
Other priorities
Results (254 votes), past polls
http://www.perlmonks.org/index.pl?14069
Weekly Assignments

One of the main deliverables of Fab Academy is to document my progress: what I did, how I did it, and what I have learned. In the assessment guidelines it literally says "Imagine that one day you need to reproduce all of your assignments. You have no internet, no-one is able to help you. You have only your Fab Academy archive folder and the resources within the Fab Lab. Ensure that you have documented everything in enough detail in your archive so that you can do this easily." So that's what I'm going to do; I will document my learnings like there was no tomorrow ;)

Week 1: Principles and Practices

assignment: plan and sketch a potential final project

A toy for kids

My initial idea for my final project is to create a toy for kids, to learn from, and interact with. A goal of mine is to start a preschool in Sweden at some point, and I would like to make a toy that could be used there, and elsewhere. I want kids to explore and learn from experiences. I want kids to not just be consumers of technology, but also producers.

Ingrid the Tiger and Ebbe the Bear

Interactive building blocks

The toy would look like a typical wooden building block, a cube. I know from playing with my sister's kids, Ingrid the Tiger and Ebbe the Bear, four and soon to be two years old, that they really like playing with these old-school wooden blocks. I will add some features to these blocks, since I also know that the Bear loves music and the Tiger loves stuff that is lit up. I'm not sure just yet what the different features will be — maybe some sound, some light and maybe some movement or vibration; that I will figure out later. The idea is that if you put two or more blocks together, one or several features will be activated. My initial idea is that the two opposite sides of the block will have the same feature, let's say sound. If you put two blocks together, with the sound side towards each other, a sound will play.
However, if you put a sound and a light side together, nothing would happen. I'm in the process of exploring what would happen if you put three or more blocks together, whether that would create a sequence of light, music, movement, or something else.

Who is it for?

This toy is for all kids, mostly targeting kids around the age of two to four that like to build things and explore possibilities. The toy could possibly enhance the kids' logical thinking and stimulate their different senses. The explicit advantages of this toy are still to be discovered.

Material

I want the main material to be some type of wood. I believe it's a fun material to work with. It looks nice too, and it's often a better material for kids than, for example, plastic. I don't know yet what components I will use; most certainly some type of battery, an electronic connector and some magnets or similar to connect the blocks to one another.

Inspiration

I got inspired by my local instructor Emma Pareschi's final project from back in 2014. I especially like how she integrated the electronics into the wood and the small holes she made for the light. I also got to see a project from some time back where the blocks were connected using magnets and with a thing (a name I don't know yet) that connected the electronics between the blocks.

Week 2: Project Management

assignment: build a personal site and upload it to the class archive

My first website

I have built one of the first websites of my life. This time without using a content management system, instead using only code and pushing it from Git. I did use a template from Bootstrap to help me start, but I have been altering the HTML to explore new possibilities.

Installing Git

First I installed Git.
Git is a distributed version control system, meaning it's "a form of version control where the complete codebase - including its full history - is mirrored on every developer's computer." (Wikipedia)

Connecting local repository to GitLab

Since Fab Academy is using GitLab, a web-based Git repository manager, I had to connect using an SSH key, which allows me to use the Git repository in a secure way without using a password every time. I followed Fiore's tutorial and got help from friendly Henk. I generated the ssh-key:

$ cd ~/.ssh
$ ssh-keygen -t rsa -b 4096 -C "johannan.nordin@gmail.com"

I edited my ssh config file and added

Host gitlab
Hostname gitlab.fabcloud.org
User git
Identityfile ~/.ssh/fabacademy-gitlab
IdentitiesOnly yes

And then I changed the "git@gitlab.fabcloud.org" to "gitlab". I set up my identity

$ git config --global user.name "Johanna Nordin"
$ git config --global user.email "johannan.nordin@gmail.com"

And then created my first repository

$ git clone gitlab:academany/fabacademy/2018/labs/fablabamsterdam/....git
$ cd johanna-nordin
$ touch README.md
$ git add README.md
$ git commit -m "add README"
$ git push -u origin master

To make sure everything was set, and to know which remote repository was tracked by my local repository, I used the git remote command

$ git remote show origin

And then I got to see all the details:

* remote origin
  Fetch URL: gitlab:academany/fabacademy/2018/labs/fablabamsterdam/...
  Push URL: gitlab:academany/fabacademy/2018/labs/fablabamsterdam/...
  HEAD branch: master
  Remote branch:
    master tracked
  Local branch configured for 'git pull':
    master merges with remote master
  Local ref configured for 'git push':
    master pushes to master (up to date)

In GitLab I later added the file .gitlab-ci.yml to make the repository viewable as a site. This was a specific file for this occasion, so I followed Fiore's instructions and copy/pasted the content.

Using terminal

I wasn't that familiar with Terminal before.
To be honest, when I was watching my first Git tutorial I had to google "What is the terminal on Mac". I, of course, felt a bit embarrassed, but I guess I can't be the only one starting from zero.

Creating the actual website

I started out locally on my computer, just because I was worried that I would mess up the Fab Academy repository. I created an index.html file and started from there. I used Atom as my source code editor, which I have tried before. I soon moved to Adobe Dreamweaver, which I heard was even better. I picked a template from Bootstrap to help me get started. I didn't really think things through when picking the template; the Clean Blog template doesn't really suit my needs, so my site is a bit messy at the moment. After playing around a bit locally in a different folder, I decided to add my files to my local repository (connected to the origin master) and make my first commit. It went well. I had some trouble: I didn't read what Git told me, so I messed up a few times before it finally worked. Here you can see a video of my first (second, and third) real commit.

HTML

w3schools.com explains HTML very well: HTML is the standard markup language for creating web pages. HTML stands for Hyper Text Markup Language and describes the structure of web pages using markup. HTML elements are the building blocks of HTML pages, and they are represented by tags. HTML tags label pieces of content such as "heading", "paragraph", "table", and so on. Browsers do not display the HTML tags, but use them to render the content of the page. For learning HTML I went through a few tutorials. I did a few classes in Codecademy and I used w3schools.com for some help with specific elements. For me, learning basic HTML has mostly been about googling. I learned about the difference between HTML, JavaScript, and CSS, when to use what, and got an overview of how it works. The elements that I mostly use are for making headings, lists, paragraphs, adding pictures, links, etc.
The commands

The most common elements that I used while altering the template and making this site are the following. I will not write the angle brackets, since that would make Dreamweaver think I'm writing HTML code.

- p - Paragraph
- h1-h6 - Heading
- ul and li - Unordered list and list item
- img - Image
- iframe - Embed another document, for example YouTube
- div - Division
- span - A container for content inside a paragraph
- a href="document location" - Link
- !-- -- - Comment. Anything between these tags is not displayed on the screen
- br - Line break
- header - Introductory content for my page
- nav - Navigation content, my menu
- main - Main content of the web page
- footer - Footer of a page
- html - Opens and closes an HTML document
- head - Provides information about the document
- title - The title of the document
- body - Contains all the content

JavaScript and CSS

JavaScript is the programming language of HTML and the Web. CSS is a language that describes the style of an HTML document and how HTML elements should be displayed. CSS stands for Cascading Style Sheets. CSS saves a lot of work, since it can control the layout of multiple web pages all at once (w3schools.com).

I didn't make any changes in the JavaScript, but a few in the CSS file that came with the template. I changed the colors and added the new background picture. To do so, I checked which hex color code the template used, and replaced that number with the number for the mustard yellow color that I wanted to use. This is how the template looked before I made the changes. The biggest change is that I added more pages to the site, having separate pages for Home, My Story, Final Project and Weekly Assignments.

A new look for the website (added 14th of Feb, 2018)

So what you can see now is the new look of my website. This time I used HTML5 UP for my template; I used this one, called Read Only. I have also made a lot of changes in the CSS file this time, making new formats for showing pictures, etc.
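To show how the elements listed above fit together, here is a minimal page sketch written to a file from the shell. The content is made up for illustration; it is not my actual site:

```shell
# Minimal page using the elements listed above (hypothetical content)
cat > demo-index.html <<'EOF'
<!DOCTYPE html>
<html>
<head>
  <title>My Fab Academy site</title>
</head>
<body>
  <header><h1>Fab Academy 2018</h1></header>
  <nav>
    <ul>
      <li><a href="index.html">Home</a></li>
      <li><a href="projects.html">Final Project</a></li>
    </ul>
  </nav>
  <main>
    <p>Weekly assignments go here.<br>One page per week.</p>
    <!-- comments like this are not displayed -->
    <img src="photo.jpg" alt="a photo from the lab">
  </main>
  <footer><p>Made with Atom and Dreamweaver</p></footer>
</body>
</html>
EOF

# Count the list items as a quick sanity check
grep -c '</li>' demo-index.html
```

Every opening tag here has a matching closing tag; the br, img and comment are the exceptions that don't need one.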
The main reason for changing template was the menu. I find this menu, not perfect, but better suited to my needs.

Week 3: Computer-Aided Design

assignment: model (draw, render, animate, simulate, ...) a possible final project, and post it on your class page with original 2D and 3D files.

Simple 2D design in InDesign

I made a very simple sketch of the cube in InDesign, showing how the cube would be hollow on the inside, keeping all the electronics inside the cube. I decided to keep it very simple, since I still don't know what components I will add to my cube.

Now in retrospect, I should have used Adobe Illustrator instead of Adobe InDesign. The two programs have many similar functions. InDesign is a vector-based program, but its strengths really lie in its ability to handle multiple pages and create master pages. Illustrator is also vector-based, meaning that unlike Photoshop, which is raster-based and uses pixels, Illustrator uses a mathematical grid to map the artwork that is created; therefore all artwork created in Illustrator is scalable and resizable without losing quality. However, since my design is so simple, it doesn't really matter this time.

For the file to be useful for machining, I exported the InDesign file as a PDF. I then opened it in Illustrator and exported it as a DXF file that could be used for machining. I used the "Line tool" and the "Rectangle Frame tool" to create my design. I changed the solid lines to dotted on the lines that you wouldn't see when looking at the cube.

Fusion 360

I had barely any experience with 3D software before this week. I had played around in Tinkercad with the kids during Hyper Island Kids Summer Camp last summer, but that was about it. On Thursday we had a full-day workshop in Fusion 360 with Mauro Jannone. Fusion 360 is a 3D CAD/CAM design software, free for students, educators, and academic institutions.
During the lecture I had some problems understanding the software: why we did what we did, and why things were done in that specific order. The terminology was also fairly new to me. After the lecture I've been playing around in Fusion 360, and I also watched this tutorial to get a better understanding of the software.

Designing a possible final project

Since I don't yet know how my final project will look and work, I decided to design how it will look from the outside, not spending time on the inside of the cube.

To start a new sketch, click "Sketch", "Create new sketch" and then select the plane you want to work from. If you, for example, want to create a rectangle, click "Rectangle" and select what type you want to draw. To set the measurements, just type in the numbers you wish. If you want to make a parametric design, click "Modify" and then "Change Parameters" to set the measurements.

Being able to make a sketch on your model is really useful. To do that, just click "Create Sketch" and then click the plane you want to start from. You can also choose to have an offset from the plane. You can use shortcuts for different commands, "L" for "Line" for example. Another useful command is "E", for "Extrude", a command that can either cut, join or intersect a body. "Circular Pattern" and "Rectangular Pattern" are two commands that duplicate sketch curves in a circular or rectangular pattern.

Render

I decided to render the model a bit, having it look like it's made of oak. To do so, I changed "Workspace" to "Render". Under "Appearance" you can change the appearance of the model; just drag and drop the different materials onto your faces or bodies. After that I did an "In canvas render" to finish the appearance.

3D CAD

One part of this week's assignment is to try different tools. I downloaded FreeCAD, an open-source parametric 3D CAD modeler, and watched this tutorial on basic functions. I don't really like how FreeCAD structures its functions.
I have a hard time understanding which button does what. I will play around some more to get a better understanding.

Download files here

Possible final project STL
Possible final project f3d
2D model in InDesign
2D model in DXF

Week 4: Computer-Controlled Cutting

assignment: 1) cut something on the vinyl cutter, 2) design, laser cut, and document a parametric press-fit construction kit accounting for the laser cutter kerf, which can be assembled in multiple ways

This was a really fun week! I made a lot of progress, at least in my head :) I also made a lot of mistakes, which was probably the most fun part.

To give an overview: I started out making a press-fit construction kit. In parallel I was working on a laptop case that I would cut in MDF. Both of these I sketched in Fusion 360 using parameters. I also made stickers: one easy one with my name on it, and then a tiger and a bear for my niece and nephew.

My first press-fit construction kit

Fusion 360

I made my first ever press-fit construction kit. I decided to start "easy", but it wasn't that easy for me. I used Fusion 360 for creating the sketch. Since I'm not yet friends with Fusion 360 after last week, I was a bit worried, but it worked out fine with some help from friendly Frank.

Sketch

I started off adding the parameters, since this was going to be a parametric design. I went to "Modify", "Change Parameters" and typed in the numbers I wanted. I will get back to which numbers I chose below.

I made a sketch: a circle and a rectangle. I created a few constraints: the circle should be "tangent" to the end of the rectangle, the rectangle "horizontal", and the center of the rectangle "fixed" to the center line of the circle. I used the tool "Circular Pattern" to spread the rectangles out evenly around the circle. The sketch was done. I right-clicked the sketch icon and chose "Save as DXF".
Adobe Illustrator

I instead used Adobe Illustrator to make the final changes to the design. I altered the "stroke weight" to "0.1 pt", and used the "Shape Builder" to keep only the actual lines that I wanted to cut. I had to ungroup the shapes and delete the extra ones. I copy/pasted the shape three times to cut more pieces for my kit. Done. I exported the file to DXF.

Lasercut 5.3

Then it was time to laser cut. I had saved my file on a USB stick. I opened Lasercut 5.3, the program our Fab Lab uses for the machine, and imported the file. My shapes were huge! I resized the shapes - only the vertical dimension, kept proportional - since I had four pieces lying next to each other. I downloaded the design, telling the machine what to cut.

I now know two different ways to make the design stay the same size. Make sure the measurement unit in Illustrator is mm, not pt. Also, I don't need to export a DXF file for the laser cutter; I can just use my Illustrator file (Illustrator CS6 for this machine).

I had a look in the cutting journal, looking for 3mm cardboard that was cut perfectly. I found:

Cut:
- Speed: 150
- Power: 90

I kept corner power at 20% and overlap at 0.05. The corner power is the reduced power the machine uses in corners, since too much power might burn off the corners. The overlap is how many mm the laser overlaps the start of the cut when going around, making sure the piece comes loose.

In the software, different colours represent different settings that the user chooses. Since I was only going to cut, I picked black and selected cut and my preferred settings. To pick the origin of the laser, I went to the menu "Set laser origin" and then "left top". To test the origin I later did a test when the machine was on. I went to the menu and clicked "Simulate" to see how the laser would move. I downloaded the file to send it to the machine; I clicked "Delete all" so no other files were left in the machine, and then "Upload current".
The machine, a BRM Lasers 120160 - laser cutting my first own thing

I placed the cardboard on the laser bed. I moved the laser using the arrows and placed it close to the edge of the cardboard. I calibrated the laser to make sure the distance between the laser and the material was good, using the wooden measuring piece that comes with the machine. I did a test to make sure I was going to cut in the right place. The test was fine, so now it was time for the actual cutting. No fire and no accidents - my pretty design was created.

Safety

Apparently all Fab Labs have had a laser cutter fire. I don't want to be the one causing one. It is very important that the ventilation system is on at all times when using the laser. When the cut is done, turn off the laser tube and open the door to the bed. Then smell - very important, and also very enjoyable due to the nice smell of burned wood/paper. Keep the ventilation on for a while. There is a red line around the laser cutter which you are never supposed to leave while you're cutting - a sneaky way to have your classmates make coffee for you, hint David. There is also an emergency button to stop the machine, and always water close by.

First try

I made a simple sketch, using some parameters, but didn't account for the kerf or really think about how far in I wanted the slots.

- Circle diameter = 30mm
- Slot length = "circle diameter/3"
- Material thickness = 3mm

The slots were too long, so when assembling the pieces they couldn't fit together. And the fit wasn't tight enough.

Second try

I changed the length of the slots to 1/5 of the diameter so everything would fit together. I also accounted for the kerf, which I had measured with a test on cardboard earlier on. Parameters:

- Circle diameter = 30mm
- Slot length = "circle diameter/5"
- Material thickness = 3mm - kerf
- Kerf = 0.175mm

The slots were now fine, but the fit wasn't 100%. I then noticed that the cardboard I was using wasn't 3mm as I had assumed, but 2.76mm.
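The second-try parameters above work out to these slot dimensions (values from the text; awk is only used as a calculator here):

```shell
# Slot dimensions from the second-try parameters
awk -v diameter=30 -v thickness=3 -v kerf=0.175 'BEGIN {
  printf "slot length: %.0f mm\n", diameter / 5       # circle diameter / 5
  printf "slot width:  %.3f mm\n", thickness - kerf   # material thickness - kerf
}'
```

So each slot is 6 mm long and 2.825 mm wide: narrower than the nominal material, because the laser widens the slot by one kerf.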
Third try

I changed the material thickness and this time it worked really well.

Chamfers

Later on I added chamfers to my design, so the slots could slide in more easily. In my sketch in Fusion 360, I added two lines going from the center of the rectangle at 45 degrees. I then made a "Circular Pattern" to apply them to all slots. Looks like a cactus, right?

A full-on press-fit construction kit

As easy as it might look, I believe Ingrid the Tiger and Ebbe the Bear will love playing with these. So I cut a whole bunch, plus made some simple longer shapes so the kit can be constructed in different ways.

I almost nailed a laptop case on the first try. Almost

I have seen designs using a "living hinge" and I wanted to try that. A living hinge is when you cut the material in a way that lets you bend it. I decided to make a laptop case. I used Fusion 360 and made it parametric. I must have been tired when measuring my laptop, because I made the length too short. However, since I made my living hinge a bit too flexible, my laptop could still fit. I will make some alterations to the sketch and try again soon.

The design

I have seen a few different box designs: different numbers of slots, different slot lengths, different types of lid, etc. The one I made had many slots on the bottom of the box and just three to hold the lid in place when closing. Since this was my first go at a more complex design, I decided to cut it and try it before going into too many details. For example, I didn't account for the kerf, and the MDF thickness varied from 3.07 to 3.14mm in different places, while I accounted for 3mm.

Simple kerf test

Even if I didn't account for it in my first try, I did a simple kerf test. Before cutting my design, I made a 10x10mm rectangle and cut that, also to test the settings of the laser cutter. I had a look in the cutting journal, looking for 3mm MDF that was cut perfectly. I found:

Cut:
- Speed: 40
- Power: 100

I kept corner power at 20% and overlap at 0.05.
I cut the rectangle and measured it across. It was 9.74mm, meaning the machine had cut away 0.26mm. Since the machine cuts on two sides, the kerf equals 0.26/2 = 0.13mm.

I should also mention that when I first made my living hinge, I wasn't sure how much I could alter the pattern. I downloaded a pattern from Obrary; I used "Straight 1.5mm" since I thought the tighter the better. Using the same proportions as the pattern sketch, I ended up having only three "pattern parts". Emma, my local instructor, saw the design before cutting and recommended that I squeeze more in and make the area of the living hinge wider. So I did, but too much. When cut, the living hinge became an accordion. It might not be that the hinge was too broad with too many pattern parts; rather, I should have added more pattern parts along the length of the pattern, not just added more to the width.

The rest of the laptop case turned out OK. The press fit wasn't tight enough, which I expected since I didn't account for the kerf, and the side of the laptop case was too short, which must have been a measuring mistake. I will alter my design and cut it again soon to see if I can make a perfect laptop case.

Kerf, cutting and engraving test

My classmates and I made a few kerf tests. We also tested the power and speed for cutting and engraving different materials. For speed and power we looked in the cutting journal.
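The simple kerf calculation above can be scripted in one line, using the 10x10mm MDF test square measured at 9.74mm:

```shell
# Kerf per cut from a nominal-vs-measured test square (MDF numbers from the text)
awk -v nominal=10 -v measured=9.74 'BEGIN {
  # the square loses half a kerf on each of its two opposite edges
  printf "kerf: %.2f mm\n", (nominal - measured) / 2
}'
```

The division by 2 is because the laser removes material on both sides of the square: each measured dimension crosses two cut edges.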
Cut 3mm cardboard:
- Speed: 150
- Power: 90
- Kerf: 0.175mm with the simple test, cutting a 10x10mm rectangle and dividing the cut-away width by 2 (2 cuts)
- Kerf: 0.27mm with the test cutting 9 pieces and measuring the space cut away, divided by 10 (10 cuts)

Cut 3mm MDF:
- Speed: 40
- Power: 100
- Kerf: 0.15mm with the simple test, cutting a 10x10mm rectangle and dividing the cut-away width by 2 (2 cuts)
- Kerf: 0.18mm with the test cutting 9 pieces and measuring the space cut away, divided by 10 (10 cuts)

Cut 3mm acrylic:
- Speed: 20
- Power: 100
- Kerf: 0.15mm with the simple test, cutting a 10x10mm rectangle and dividing the cut-away width by 2 (2 cuts)
- Kerf: 0.18mm with the test cutting 9 pieces and measuring the space cut away, divided by 10 (10 cuts)

We also made an engraving test, using different power and speed settings. In this test the speed was always 350, but the power was 40, 60, 80 or 100.

I made stickers - Tigers and Bears!

Joey sticker

But first

Before I got to make tigers and bears, I decided to start easy and make a sticker with my name. I also did it directly on the computer that is connected to the vinyl cutter at the Fab Lab, which made it even simpler. In Illustrator I wrote my name. I selected the letters and clicked "Object", "Expand" to give the letters width and a path. I changed the color to a transparent "Fill" and a black "Stroke", and changed the thickness of the "Stroke" to 0.1 pt. Ready to cut!

The machine, a Roland

I loaded the material, a roll of vinyl, onto the back of the machine, securing it with the two locks. I inserted the blade in the blade holder. I adjusted the "blade extension", i.e. how much the tip of the blade extends from the blade holder. I wanted to make sure the blade would go through the vinyl but not through the underlying backing material. The menu on this machine is not really user friendly; it's hard to know where to go and why. But by clicking around I got to pick whether I was using a roll or a piece of vinyl.
Furthermore, I set the origin of the blade by pressing "Origin" until it flashes on the display. I set the "blade force" to 90 gf and the "speed" to 5 cm/second, and I did a test. The test is done by pressing "Test" for 0.5 seconds or more. The test was fine.

Back to Illustrator

To cut vinyl from Illustrator I just used "Print". In the settings it's important to set "Cutting Area" to "Get from machine". It is also important to pick the placement of the design, to let the machine know where the design is. And then print.

Tigers and Bears

I downloaded the emojis of the tiger and the bear and altered them in Illustrator. I cut away parts and added a few to make lines thicker. I used the tool "Image Trace", then "Expand", and then I changed the "Stroke" to 0.1 pt. It might sound easy now, but I had some trouble on the way. First I made a design that was too detailed for the size of sticker I wanted to make. I also had some problems with double lines, which made the machine angry and messed up my design. What I ended up doing was just making sure I had no double lines right before cutting. With this type of construction in Illustrator, I had no idea where all the lines were coming from.

Managing the sticker once it's cut

The process of getting the sticker off the vinyl is fairly easy if you use tweezers and the tape-like transfer material. First, peel off the parts that are cut away: the vinyl around the sticker and the cut-away parts inside it. Take some of the transfer tape and attach it to the front of the sticker. When you want to place your sticker somewhere, peel off the backing and attach it. Make sure all bubbles are pressed out. Then take off the tape.
Download files here

Laptop case.fsd
Laptop case.ai
Laptop case.dxf
Press fit construction kit.ai
Link to Tiger and Bear ai

group assignment: characterize your lasercutter, making test part(s) that vary cutting settings and dimensions

Week 5: Electronics Production

assignment: make an in-circuit programmer by milling the PCB, then optionally trying other processes

It's easy when you know what you're doing

This week's assignment is to make an in-circuit programmer. I didn't even know what that was when starting this week. In-circuit (or in-system) programming means programming a chip while it is already mounted in its circuit, and a programmer is the hardware that conducts the process of transferring a computer program onto such a chip. The chip itself is an integrated circuit (IC), also called a microchip. A PCB is a Printed Circuit Board; in this case, a board made of composite epoxy with conductive pathways of copper that have been etched or "printed" onto the board, connecting the different components (Wikipedia). So if I understood it right, I'm making a board and loading software onto it so that it can later program other boards, i.e. a programmer.

The whole process of cutting and tracing the board is pretty simple. But of course I took the opportunity to mess up, so I had to practice the process twice. ;)

Roland MDX-20 milling machine

Setting up the machine and board

At the Fab Lab we have a Roland MDX-20 milling machine. To prepare the machine I first cleaned off leftovers from earlier projects. In this assignment we use a single-sided copper board. I chose to wash my hands and the board before getting started, to minimize grease and dirt on the board. I used double-sided tape to attach the copper board to the bed. I have learned that you should press it onto the bed really hard to make sure it's not tilting, and not to overlap the double-sided tape, since that could also make the board tilt.
Hold the end mill so it doesn't break

I pressed the button "View" on the machine so I could change to the 1/64 in end mill that I would first use to mill the traces on the board. I loosened the screw and placed the end mill, making sure only the black part of the end mill was visible. I held my finger under the end mill so it wouldn't drop onto the bed and break. I also didn't tighten the screw too hard, since that's not needed and would make it hard to change later. I exited view mode by clicking "View" again.

The software

At our Fab Lab we have a computer connected to the milling machine. In the Linux operating system we have a shortcut to a Fab Lab program called Mods. In the software I clicked "Programs" and then "Open server program". As "file to open" for this assignment we pick Roland, mill, MDX-20, PCB. What opens up is a pipeline that transforms a PNG file into something that can be read by the machine and milled. For this assignment we had a ready-made PNG for the traces; I used Brian's.

In Mods there are two processes to choose from: mill traces or cut. Just like when laser cutting, it's good to mill first (engrave, for laser cutting) and then cut, to keep the board stable. I clicked the button "Mill traces" and used the default settings:

Set PCB defaults: Mill traces (1/64)
- Tool diameter (in): 0.0156
- Cut depth (in): 0.004
- Max depth (in): 0.004
- Offset number: 4

Mill raster 2D
- Tool diameter: mm: 0.39624, in: 0.0156
- Cut depth: mm: 0.1016, in: 0.004 (first cut - enough to cut away for traces, not for cutting through)
- Max depth: mm: 0.1016, in: 0.004 (depth at the end)
- Offset number: 4 (0 = fill)
- Offset stepover: 0.5 (1 = diameter)
- Direction: climb (not conventional)
- Path merge: 1 (1 = diameter)
- Path order: forward (not reversed)
- Sort distance: yes

I clicked "calculate" for the program to compute the toolpath from the PNG and the settings. A new tab opens with the path.
Roland MDX-20 milling machine
- Speed: 4 mm/s
- Origin: x=10, y=10
- Jog height: 2 (how high the mill goes when it's moving)

Prepare the machine for tracing

In the software I needed to set the origin. I tested the origin x=20, y=20. For the machine to respond to this, I had to open the communication between the computer and the machine. On this computer we have a shortcut (serialserver.sh); clicking that and then "Run in terminal" opens a terminal window saying "listening for connection from client…". Back in Mods, in the box "serial server", click "open" and the status changes to "serial port opened". Now the machine and the program were connected.

That placement of the end mill wasn't correct, so I changed the origin and tried x=15, y=17. It was hard to see where the end mill was, since I wanted to waste as little material as possible and placed it really close to the edge of the copper. So I moved the end mill down with the button on the machine. I didn't dare to go that close, so it was still hard to see; I had about 2 mm to spare.

I then changed the origin back to x=20, y=20 to calibrate the end mill, which means setting the distance between the end mill and the copper. I moved the end mill down using "down" on the machine. The reason is that you don't want the end mill sticking too far out, since that can cause vibrations when it's running. I loosened the screw and held the end mill with two fingers so I wouldn't drop and break it. I made it touch the copper and rotated it to make sure there was no dust between the mill and the copper. Then I tightened the screw. Now the calibration was done.

Then I changed the origin to x=15, y=17. It looked good. Since I had changed the origin, I needed to update the file by clicking "calculate" again to recalculate the path. I put the cover on the machine and then clicked "send file", which starts the machine. When the tracing was done, I used a brush and vacuumed the bed to remove the material that had been milled away.
Cutting off the board

I had to change the tool to cut out the board; I used a 1/32 in end mill for cutting the edge, which meant repeating the calibration. I pressed "View" and then changed the origin to x=0, y=0. I did this because when I move the end mill at this point, it goes down to the calibration from the setting I had just run, so if my tool sat too long, it would cut the copper and maybe break the end mill. I then moved the end mill up using the button on the machine. After that I went back to my old origin, x=15, y=17. Now I could do the calibration again: I pressed the end mill down, turned it a few times, and then tightened the screw. The machine was now ready to cut.

I uploaded the new file for cutting. I clicked "Mill outline (1/32)", since that was the process I wanted to perform. I got the default settings and clicked calculate:

Set PCB defaults: Mill outline (1/32)
- Tool diameter (in): 0.0312
- Cut depth (in): 0.024
- Max depth (in): 0.072
- Offset number: 1 (this is the number of lines; I only have one line, so I kept it at one)

Mill raster 2D
- Tool diameter: mm: 0.79248, in: 0.0312
- Cut depth: mm: 0.6096, in: 0.024 (first cut - not yet cutting through)
- Max depth: mm: 1.82879, in: 0.072 (depth at the end)
- Offset number: 1 (0 = fill)
- Offset stepover: 0.5 (1 = diameter)
- Direction: climb (not conventional)
- Path merge: 1 (1 = diameter)
- Path order: forward (not reversed)
- Sort distance: yes

Mill outline (1/32) - Roland MDX-20 milling machine
- Speed: 4 mm/s
- Origin: x=15, y=17
- Jog height: 2 (how high the mill goes when it's moving)

It was then ready to cut. My cut didn't go the whole way through; I had probably set the speed dial too low. I had to repeat the calibration. My biggest mistake was that I didn't check whether it had cut the whole way through while the board was still on the bed - I removed it.
So when I wanted to place it back, there was no way to do it in exactly the same spot. So I ruined my board and had to start all over again.

New origin: x=15, y=37

My cut-out board

This time I forgot to press "Mill outline", so it was cutting the outline as a trace. I saw this when going into view mode. To start again after I had changed the settings, I had to press the "up" and "down" buttons at the same time to reset the machine. I started cutting again. This time I checked that it was cut the whole way through before taking it off the bed. It was! I then cleaned the machine and washed my board with water and soap.

The components

For this assignment we are making a board that a former student, Brian, designed. I followed his tutorial to see which components I needed. I collected them all and wrote their names on a paper. To get an idea of which one to solder first, I placed them on the board. I had a look at the schematic and the board image Brian made to get the component values and placement. The ATtiny85 is the most difficult part. The ISP header I would solder last, as it is large and might get in my way. The ATtiny85 has a dot on it, and it's important that the dot is in the same place as on the board image. The same goes for the diodes, since they also have an orientation.

Soldering

I turned on the soldering iron to 70 (Fx10). I cleaned the iron by melting some tin on it and wiping it on a sponge. I started soldering. My hands were shaking even before starting, which didn't make things easier. If you are good at soldering, the joints are supposed to be smooth and shiny; I managed to make a few of them. If a component has many "legs", it's good to first attach the component by melting some tin onto one of the pads to fasten one "leg", and then solder the other "legs". Sometimes I got too much tin on the board; then I used desoldering copper wick to take it away. The last step was to create a bridge on the jumper.
Apparently this connects VCC to the Vprog pin on the ISP header so that the header can be used to program the ATtiny85. I'm supposed to remove this bridge later to turn the board into a programmer.

Shorts

When I was done soldering, I checked for shorts using a multimeter. I turned on the continuity (sound) mode and placed the two probes on different parts. I made sure ground and VCC weren't connected, and I checked for shorts around the microcontroller. Emma checked whether her programmer could see my microcontroller. I also inspected my board just by looking at it; the components weren't perfectly flat on the board, but it worked, so I guess it was fine. I attached some double-sided tape and some thicker paper to make my board a bit thicker, so it fits better into the USB port.

Programming the ATtiny

Then it was time to program my programmer. I downloaded this software, which runs from the terminal. I used Emma's already-programmed board to program mine, connecting the two boards to each other and plugging Emma's board into my computer. The rest of the process was very hard to understand. Apparently we were a few weeks ahead of schedule doing this, so this week it was OK that I just followed Brian's tutorial without really understanding what I did.

In short: I downloaded a folder and opened it in the terminal. I used the command "make" to build the hex file that would get programmed onto the ATtiny85, the file "fts_firmware.hex". I updated the "Makefile" because I was using an ATtiny85, not the ATtiny45 that was preconfigured. I ran "make flash" to erase the target chip and program its flash memory. Then I ran "make fuses", which sets up all of the fuses (except one).

To test the USB on my board before blowing the last fuse, I unplugged it from Emma's programmer and plugged it into my computer. My computer registered my board: I opened "About this Mac", then the "System Report" in the Apple System Profiler, selected USB, and saw my USBtiny listed as a device. I did this multiple times to make sure it was working properly.
After that I blew the "reset fuse" to turn my board into a programmer that can program other boards, by running the command "make rstdisbl". What I didn't do, although it was part of the tutorial, was remove the solder from the jumper; Emma told me to leave it for now. Now I had my very own working ISP programmer!

Download files here

group assignment: characterize the specifications of your PCB production process

Week 6: 3D Scanning and Printing

assignment: design and 3D print an object that could not be made subtractively and 3D scan an object

3D printing

The assignment this week is to 3D print something that could not be made subtractively, meaning, more or less, to print something that can't be made with the other machines. I decided to make a fairly simple design that I have been wanting to test for a while: a sphere inside a cube.

Fusion 360

For designing the print I used Fusion 360. I made a sketch of a 50x50mm rectangle and then extruded it into a cube. To make the spheres inside the cube, I used the function "Construct", "Midplane" to create two planes to place the two spheres on. The first sphere I made bigger than the cube itself and selected the operation "Cut"; this made the holes in the cube. Then I created a smaller sphere that I placed inside the cube, this one with the operation "New Body". The design was ready and I exported it as an STL file.

Cura

I imported the file into Cura. I already had Cura version 2.5.0 on my computer. In Cura I selected all the settings for the printer. I picked Ultimaker 2+, which is the printer I decided to use. I went with the default settings, but made a few changes to speed up the printing time:
- Nozzle: 0.4
- Material: PLA
- Profile: Fast print
- Print Speed: 60mm/s
- Travel Speed: 150mm/s
- Shell, Wall Thickness: 0.7mm
- Shell, Top/Bottom Thickness: 0.75mm
- Infill: Light (20%)
- Enable support: Yes
- Build plate adhesion: Yes

The temperature, among some other settings for the material, can not be set directly in Cura but on the Ultimaker 2+ itself. 3dverkstan, a company in Sweden that I have worked with before, has a great tutorial on how to create custom material profiles. I went into the custom settings and changed the layer height from 0.15 to 0.20. This makes the layers thicker and hence the print faster. I changed the size of the design by setting the "Scale" to 75%.

I decided I wanted the design to have two different colors, but the Ultimaker 2+ only takes one material at a time. Friendly Frank told me this was possible by pausing the print and changing the material. To do that in a more controlled way than just pausing the print manually, I used the setting "Extensions" "Post Processing", "Modify G-Code", then "Add a script". I picked "Pause at height" and changed the settings to pause halfway. I also added 128 as the "Extrude amount", since that is recommended when changing material. I inserted the SD card in my computer and exported the file. The print was, according to Cura, going to take approximately two hours.

Ultimaker 2+

First I had to load my first material, white PLA. I turned on the power on the machine and selected "Change material". I placed the material on the back of the machine and cut the PLA at 45 degrees, which apparently makes the material a bit more friendly when extruding. I clicked through the menu so the material started to heat up and come out. First some green PLA came out (material that was used before me) and then my white PLA. I stopped the machine and removed the material that was not going to be used. I clicked "start" and my design started to print. Halfway through, the machine stopped and I changed the material to black PLA. Then I clicked "resume print".
After two hours my print was done. No big mistakes had happened during the printing; only at one point I took away some of the support that had been misplaced for some reason.

Clean up the print

My design was full of support material when it was done. I pulled it off, but many times I was worried that I used too much force. I pulled too hard at one point and ripped off the line of material closest to the edge. The design was still fine, but a bit frustrating of course. The side facing down, the surface that the sphere was standing on when printing, had lots of support material that was hard to get rid of. I used sandpaper to smooth the surface. I'm very happy with the result. I'm happy I used two different colors, which made the design look so much better. I must have sanded a bit too much in some places, where the holes now aren't quite circular anymore. However, good enough for my first try making this design.

3D Scanning

And then it was time for 3D scanning. I used the 3D Systems Sense 3D scanner that we have in the Fab Lab and the Sense software that goes with it. I plugged the USB 3 cable into the USB 3 port on the back of the computer. I opened up the software and the scanner was ready to use. I wanted to scan my head, so I picked "Head" and started to move the scanner around my head. I soon realised that this was really hard; it was hard to stand still while scanning, and hard to move the scanner slowly enough. I tried it a few times and all the scans looked terrible. Then I asked a friendly stranger if he wanted to be my model. The scan went better this time, when doing it on someone else, however still not perfect. Friendly Frank then did a scan of me. I asked him to do what I had done before with the friendly stranger, using the same settings, "Head". I sat down where the light was good and there were not too many things in the background. I asked Frank to keep the scanner steady and move it slowly around me.
It is very hard to get all the parts of the face perfect when scanning like this. It would probably be better if the scanner was still and the object could rotate, or the opposite: have the object be perfectly still and only move the scanner. The scan didn't turn out perfect, however good enough for this time. To clean up the scan, I used "Solidify" for the non-solid spots on my head. Next time I will create a better "studio" for scanning. I think the light could have been better, as well as the space for walking around the model. Scanning a person is always hard since the person needs to sit still. I will make a new scan of myself at some point, and then print it to keep a little figure of myself.

Download files here

Sphere inside cube f3d
Sphere inside cube STL
Link to 3D scanning files

group assignment:
assignment: test the design rules for your 3D printer(s)

Week 7: Electronics Design

assignment: redraw the echo hello-world board, add (at least) a button and LED (with current-limiting resistor), check the design rules, make it, and test it

What an interesting week, slightly similar to the week on electronics production but this time more in-depth about electronics, and with the design part added to the task. The weekly assignment is to redraw the echo hello-world board (a board that I actually don't yet know what it can do), add some components, check the design rules, make it and test it.

EAGLE

The week kicked off with a great session with our local instructor Emma. She talked us through and showed us the basics of electronics and some of the laws it follows. The lecture continued with an overview of EAGLE, a program for electronic design with schematic capture and printed circuit board layout, the program that I would test this week. I downloaded the software here. I was recommended a tutorial before digging deeper into EAGLE; I watched this one and this one.

Libraries

In electronic design you start with the schematic, using symbols to draw the circuit.
I had never used EAGLE before, but it was intuitive, at least for the easy commands that I now know. Before even starting on the schematic I had to add the library of components that we were going to use for this assignment. This is done in the Control Panel of EAGLE and in the library folder on your computer that is created when you install EAGLE; mine was in Applications/EAGLE-8.6.3/lbr. I added the libraries to the folder and went back to EAGLE. In EAGLE I deselected the other libraries that come with the software and selected "Use" for the two I had gotten from Emma. I also selected the library "Supply1" that ships with the software, which has basic symbols like GND and VCC.

Components

I was then ready to start my project. I clicked File/New/Project, then right-clicked on the project icon and chose New/Schematic.

List of components on my Echo Hello board:

- 6-pin header
- Microcontroller ATtiny44A
- FTDI header
- 20MHz resonator
- 2x Resistor (10k)
- Resistor (value unknown until I calculate for the LED)
- Button (6mm switch)
- LED (red)
- Phototransistor
- Capacitor (1uF)
- Ground
- VCC

Schematics

This is the video of when I created the schematic of the Echo Hello board:

I used different commands to create my schematic. I used the Command Line to do quick commands.
- Add = add a new object from the library
- Net = start a net wire (the connections)
- Move = move objects
- Delete = delete objects
- Name = name the objects and nets (this is important for the connections; named nets attached to components can be connected by giving them the same name, and EAGLE asks if there is a connection)
- Value = add a value to the object
- Label = label the object (similar to the name, but this is the command that makes the name show up in the schematic)
- Info = get information about the object

There is a setting in Edit/Net Classes where you can set the Name, Width, Drill and Clearance for your nets, but I only used the default settings and one net class since my board was fairly simple. Lastly, I used the ERC tool from the top menu. ERC stands for Electrical Rule Check and it checks the schematic for errors. No problems here :)

When my schematic was ready I clicked the "Switch to Board" button to create my board design.

Board Layout

Autorouter vs manually

This was more tricky. At first, I had a hard time understanding how to figure out where to best place the components. There were airwires everywhere. I first tried the Autorouter to help me find the best routes, but that just made everything more complicated. However I placed the components, the Autorouter said it was only 70-92% complete. I thought you had to get to 100%; I didn't know then that you could start there and then route the remaining traces manually. But anyway, it was more fun to do it yourself. To route the traces I used the Route command and connected the components. The wire turns red when the route is in place.

Lost connection

Early on I stumbled upon a challenge. For some reason, my board layout wasn't connected to the schematic anymore. I couldn't really find out why, but I noticed it when I could delete components and airwires without the software saying anything.
I have read that in the free version of EAGLE this can happen when you put a component below or to the left of the origin. Also, if you close either the board layout or the schematic and make edits to the other, that will break the connection.

Commands

For the board layout, I used some new commands and dropped some that I only used for the schematic.

- Route = route the traces
- Ripup = rip up a route
- Ratsnest = clean up the airwires
- Text = add text to the board

I also set some of the Design Rules, found in the menu Tools/DRC, which stands for Design Rules Check. I made a few changes, mostly based on the size of the end mill that I was going to use, 0.4mm.

Clearance:
- Wire-wire: 0.4mm
- Wire-pad: 0.4mm
- Pad-pad: 0.4mm
- I made no changes to the Via settings since those are for when you have multiple layers on your board.

Distance:
- Copper/Dimension: 0.4mm

Sizes:
- Minimum width: 0.4mm

I didn't use the File tab, which can be useful if one wants to save the design rules and store them with the board file.

Design

This was time-consuming. I don't know why I started off so small and tight. It made it all very hard. At first, I didn't know that I could route traces under components. I also wanted my design to look good. I guess I spent almost 5 hours getting it to look good. I made one design where I had two routes under a component. It passed the design rules, so it must have been OK, but since this is a fairly long process of designing, cutting, soldering and testing, I decided to make some changes. DRC showed an error saying "Airwire". Apparently I had short airwires that were not connected to anything. I zoomed in to locate them and added some routing to connect them. I added my name with the Text command. At first I got a design rule message saying it wasn't vector. In the settings for text, I changed the size and the thickness of the letters. This could have been made better looking, but it was all just for fun and not that important to get beautiful.
After that my board design was done! I thought.

Prepare for cutting

For the CNC mill, I need two png files, one for the traces and one for the cut. Exporting the board design to a png file was fairly simple, but I stumbled across some challenges. To create the png for the traces, I selected only the layer that I had designed the board on, for me the Top layer. I then went to the menu bar, File/Export/Image.

- File name
- Monochrome
- Resolution: 1000 dpi
- Area: Full

Text still showed in the png file

The png for the traces was created. For some reason that I don't know, the names of two of the components were left in the png. They weren't there when looking at the top layer of the board design. For my first try, I decided to erase them in Photoshop. I later learned that in Options/Set/Misc you can deselect "Display pad names", "Display signal names on pads" and "Display signal names on traces"; however, that won't change the fact that the exported file still accounts for the text. I have learned that it's good not to resize the png file, since the risk of making it different from the cut file is too big. However, since I was going to make a file to cut the board, I could just make sure it cuts away the extra part I had gotten, to save material.

This would have worked fine; however, in the end I didn't save any material. The origin of the png file was in the corner where I had the extra black area. That was where the machine started cutting. I could have solved this in two ways: either rotating the png 90 degrees before cutting, or changing the origin of the end mill to start outside my copper plate.

Making the cut png became more complicated than I thought. I had learned to make a rectangle 1mm bigger than the board. I would do this in layer 51 (a layer I didn't use for this design). After exporting the png I could fill in the 1mm offset in Photoshop, but this didn't work for me.
I instead made two rectangles, one just as big as the board in layer 52 and one 1mm bigger in layer 51. I exported the file not as Monochrome but as Clipboard. I could then color the areas in Photoshop: what was going to be cut away in black and the board in white. I first noticed the mistake in the png cut file when actually cutting; this was when I had only one rectangle. I must have missed making the 1mm offset black in Photoshop, because the cut only became one line at the bottom. When I changed this later on, the cut was 1mm higher up than the failed cut, which was a nice surprise (since I now had two rectangles).

Traces in Mods

Cutting the board

I followed my steps from the prior week and it all went smoothly. That's not entirely true. The first time I did the calibration I could tell that my traces were too close to each other. Even though I had used the design rules, this happened. I had to go back to my board design and move the traces that were too close. I also changed my name that I had written on the board, adding spaces between the letters to make the name easier to read. After that it all went smoothly. I milled the traces and cut out the board.

Components

My board was then cut and ready to make magic. I collected all the components from my component list, and a few times I had to check with Emma that I had the right one ("do we have different kinds of 6-pin headers?"). Then I had to fetch the right resistor for my LED. For that, I had to do some calculations to make sure I picked a value that wouldn't burn out my LED but would still make it shine well enough. I had a look at the paper in the LED drawer, which indicates what LED this was: 160-1167-1-ND. I typed that into the web browser and added the word datasheet. I went to the site Digi-Key and clicked on the datasheet. I used this site to calculate for the LED.
In the datasheet for the LED I was looking for:

- Source voltage
- Diode forward voltage
- Diode forward current (mA)

I have learned that the voltage on USB is 5V. The diode forward voltage I found in the datasheet. I picked 1.8 because that is the "Typ" (typical) value. Sometimes there is a min and a max, and then I would have picked the average. For the diode forward current the datasheet said 40mA.

- Source voltage: 5
- Diode forward voltage: 1.8
- Diode forward current (mA): 40

The wizard recommended a 1/4W or greater 82-ohm resistor. At the Fab Lab we didn't have any 100-ohm resistors; the smallest we had was 499 ohm, so I used that.

I placed all my components on my board to get an understanding of what to solder first. I started with the biggest component, the ATtiny.

Orientation

The soldering went fine; I actually got a compliment for it, "nice and shiny" :). My biggest challenge was to make sure I put the LED and the phototransistor in the right orientation. I learned that the cathode is marked on the LED and that the cathode always connects to ground (or indirectly to ground, as in my case). The phototransistor has a cut-off corner. That corner marks the side that should be connected to VCC (or indirectly, as in my case). I checked that I had no shorts with a digital multimeter, and after that my board was ready to be tested.

My programmer can see my board

Test the board

I connected my own ISP programmer to my Echo Hello board and then to the USB port on my computer. I used a flat cable to do so. When doing this it's important to check the orientation. I used Xavier's picture from his weekly assignment of week 4 and located the pins on my ISP programmer. For the Echo Hello board, I looked at my board design side by side with the schematic to figure out which pin was which. In Terminal I wrote the command "avrdude -c usbtiny -p t84", but I soon realized that I have a t44 on my Echo Hello board and changed 84 to 44. My Echo Hello was working.
I got the nice message in Terminal. That means that my programmer can see my board and it's ready to be programmed.

Download files here

Traces file
Cut file
Schematics
Board

group assignment:
assignment: use the test equipment in your lab to observe the operation of a microcontroller circuit board

Week 8: Computer-Controlled Machining

assignment: make something big

The weeks are just getting more and more fun! Such an amazing week! This week's assignment was short and sweet: "make something big". I guess one of my biggest challenges was to come up with what I wanted to make. Before, I have made some stuff for my sister's kids, but this time I wanted to make something that I can keep, hopefully for a long time, reminding me of my time here in Amsterdam. So I went to explore my everyday needs. One is that I don't like putting worn clothes back in the wardrobe. I want to keep them a bit tidy and not just throw them on a chair. My dad has always, as far as I can remember, had a "herrbetjänt", a clothes valet (also called a men's valet), on which his clothes were hung. I want one for myself.

Simple prototype of the Herrbetjänt

Designing the herrbetjänt

Designing is not my best skill, nor is drawing on paper, so I started off cutting out pieces in cardboard to get an idea of how the herrbetjänt could look. I wanted to make it stable and with clean cuts, with many different possibilities for where to hang clothes, bags or whatever I will want to hang there. Cardboard was a bit tricky at this small scale, so I used thicker paper instead.

Fusion 360

When I had an idea of how I wanted my design to look I went into Fusion 360 to make my sketch. I feel comfortable with 2D modeling in Fusion 360 now, so it all went pretty smoothly. I made the design parametric, since I knew I would have to change the measurements later for the slots and a perfect press-fit. I wanted the press-fit to be stable, yet easy enough to take apart and move pieces if wanted.
Illustrator

From Fusion 360 I saved my sketch as a DXF file and opened it in Illustrator. I cleaned up the design, removing construction lines and joining the paths. This I had some trouble with: I thought I had cleaned it up in Illustrator, but later the machine software showed that I still had some unjoined paths. I saved my file as a PDF for the machine software to read. My design was now ready to be milled!

My laser-cut prototype in cardboard

Prototyping some more and testing the cut

But before I milled the whole thing, I first did a full-size (almost: I had to scale it down to fit on the bed) prototype with the laser cutter. On purpose, I didn't change the slot size, so I knew this prototype wouldn't be stable; I just wanted to get an idea of the design I had made, whether I liked it, whether it was too big or just straight-up ugly. I also did a test of the slots and the holes, actually milling some test pieces to get an understanding of the machine. I made holes of different sizes, and slots of different widths. I was a bit quick when putting together the test design, so I forgot to make the opposite slots the same size to actually test the press-fit. Now I had to test it on any piece, which is fine, but it would have been better to test the actual press-fit. I also could have made more test holes, and tried more measurements. To save material, I could have made the cut much smaller too. Moreover, I did get to test which of the T-bones or dog-bones I like the most. I was now ready to mill!

Big CNC Machine

Shopbot and Shopbot Console

The machine we have at the Fab Lab is a Shopbot and it runs with the software Shopbot Console. The process of making it ready for milling is:

- Turn the machine on with the big red switch on the side.
- Start the software, the Shopbot Console.
- Press K in the yellow field to be able to move the mill head. You don't want it in your way when placing your material on the sacrificial layer.
- Place your material on the sacrificial layer.
- Measure it and make sure your design will fit.
- Screw it onto the board.
- Move the mill head closer to you to change the end mill. Loosen the mill head using two screwdrivers. Make sure your end mill is longer than the material you want to cut. Place the end mill so you have the material thickness plus some safety distance. Use the measurement tool.
- Set the X, Y and Z for milling. X and Y are set by moving the end mill to your choice of origin. Then click the menu "Zero" and then "Axes X & Y". It's very important not to click the button that looks like it sets the XY origin. Z is set by using the tool attached to the board. Test the connection by tapping it onto the end mill; a lamp should light up in the software. Click the button with a Z on it. Hold the tool with two hands and watch while the end mill stops right above the tool.
- Do the settings in PartWorks (a separate process).
- Load the file, "Part file load".
- Make sure there is nothing else on the board.
- Turn on the ventilation.
- Make sure you know where the emergency brake is.
- Turn on the spindle, using the key on the side of the machine.
- Ready. Click start.
- Be ready to press space if something goes wrong.

Job settings in PartWorks

PartWorks

To prepare the file for the Shopbot Console, we use the software PartWorks. In PartWorks I created a new file and set the dimensions for my design. I set width/height to 750mm/1500mm (my design was 711mm/1443mm and I wanted to be safe). This was the size in Illustrator, but since I wanted to cut in the direction of the grain, I later had to rotate my design and hence also change the dimensions to 1500mm/750mm. In this window I also set the material thickness to 15mm. I plugged in my USB stick with my PDF file and saved the file on the computer. I imported the file using the "Import vectors" command.
I had downloaded a plug-in for Fusion 360 to easily create dog-bones, but I found it a bit messy, so I decided to do them in PartWorks, which also gives you alternatives to dog-bones. For the T-bones and dog-bones I used a 2.5mm fillet, since I was using a 5mm end mill. I placed the dog-bones on the holes and the T-bones on the slots, which I thought looked the best. I probably should have done a different version of the T-bones on the slots, the ones that go further in rather than to the sides; that would have looked better.

Make dog-bones and T-bones in PartWorks

Some of my vectors (paths) were still open. I used the command "Join open vectors" to make sure all were closed before milling.

The settings for which end mill is to be used are set in the Tool Database:

- Tool type: End mill
- Diameter: 5mm
- Pass Depth: 3
- Stepover: 2.5mm (50%)
- Spindle speed: 14 000 rpm
- Feed Rate: 20
- Plunge Rate: 20
- Tool Number: 1

I selected a fairly low feed rate but a high spindle speed to make a nice and clean cut. When all the basic settings were done, I created the different toolpaths. I started with the pocket, then the inside and then the outside. This is for a similar reason to why you engrave first when laser cutting. The settings are similar, with a few exceptions:

- Start depth: 0
- Cut depth: 15.5mm for inside and outside (I decided to cut deeper than my material to make sure it would cut through), 7.5mm for the pocket
- I selected my saved end mill
- Machine vectors: Outside/Right for outside, Inside/Left for inside
- Direction: Climb
- Add tabs to toolpath (I added 6mm thick tabs to the holes and 7.5mm thick tabs to the rest of the pieces)

I made sure the toolpaths were in the order of the milling, and then I saved the file.

Milling

At first, everything went really smoothly. The machine started with the pocket, just as planned, and it took ages. Nothing wrong though; I wanted a feed rate of 20 so as not to stress the machine. After about one hour of milling, Emma came running into the room.
I stopped the machine, but without really knowing what had just happened. David and I, who were in the room, both with ear protection, didn't react as fast as Emma. Emma had heard a sound that she didn't like. We turned off the spindle and had a look at the end mill. It was loose, similar to what happened when Bas showed us the machine the day before. Back then we thought it was because of the friction created when milling in the opposite direction to climb, but that wasn't the case this time. We can only guess, but the mill head was probably not tightened enough. We changed the end mill, since the used one was now a bit beaten up. This meant I had to repeat the calibration of Z. X and Y were still set, which was important for me to be able to continue my work. This made the already long process even longer.

I took a picture of the "Line", that is, the position in the milling job, showing every "step" the machine takes. I was on line 38322. Reading about starting the Shopbot from a line in the middle of a job tells you that one shouldn't start on a line where Z is at a minus value, i.e. when it is actually milling. So I had to find the last line where Z was at plus, a line before where I had stopped. To do so I opened the .sbp part file in a text editor. I searched for the line where I had stopped and went further up to find the first line where Z wasn't at minus. That was line 33745. In Shopbot Console I went to the option "Start from Line". I changed the number in the pop-up window to "33745". The software warned me and asked if I wanted to start from the line just before where I had stopped, but to be sure not to break the end mill by starting on a minus line, I kept the line at 33745. I clicked GO and kept my fingers crossed. It worked! I spent about two more hours with the machine, watching while it was milling, this time without ear protection. Everything worked fine.

Furbishing my herrbetjänt

When the milling was done I cleaned the machine and put away the material that was left.
I used a small saw to cut away the tabs in the slots, and a rubber hammer and a wood chisel to remove the tabs from the holes. Plywood is made in layers, so sometimes the top layer came off. I noticed that if you hammer from the side you want to look the best, that creates the best results.

Some holes broke a bit

Then there was a lot of sanding to be done. The holes were really hard to make look good. The dog-bones and the tabs made the squares not look like squares anymore. And it's a fine line between sanding too much and making the holes too big. In total I think I spent about three hours sanding the herrbetjänt, making it look its best. Almost done! I might paint it later, but for now I'm going to enjoy the nice-looking plywood!

Download files here

Herrbetjänt pdf
Herrbetjänt ai
Herrbetjänt dxf
Link to Herrbetjänt in sbp and crv format

group assignment:
assignment: test runout, alignment, speeds, feeds, and toolpaths for your machine

Week 9: Embedded Programming

assignment: read a microcontroller data sheet, program your board to do something, with as many different programming languages and programming environments as possible

Getting started

What does "embedded programming" even mean? This is a tough week, not necessarily because of the weekly assignment in itself, but to really understand what I'm doing. I know parts of this week will be stuff that I just need to accept, things that I might need more knowledge to understand. But I will try my hardest to understand most of it.

Install FTDI drivers

It started off exactly like that: me not understanding what I was doing or why. Before Thursday's local session we were asked to "install FTDI drivers". We got this link, and I just followed the tutorial to download the right version for Mac OS. But I will try to understand what I did. I know I have an FTDI header on my Echo Hello board. That is the one with six pins sticking out from the board, typically something that I would plug in somewhere.
I googled, but that didn't really help; I got to a Wikipedia page which said something about it being a company specializing in USB technology, something about converting RS-232 or TTL serial transmissions to USB signals. My conclusion is that the FTDI chip helps my computer understand what my board is saying. But this installation was for later; it would take a while until I came to the point where I wanted my computer to talk to my board.

Test my board

This I had already done in a previous week. The process is to connect the board to the programmer using a flat cable, and to plug the programmer into the USB port of the computer.

My own flat cable

I assembled my own flat cable, making sure the orientation of the two pin sockets was the same. That makes it easier when connecting. I also made sure the board and the programmer were in line, with the same pin in the same orientation, ground to ground, etc. In Terminal I wrote the command that tells my programmer to look for a board with an ATtiny44, "avrdude -c usbtiny -p t44". The programmer found it and I was ready to start programming.

Arduino software setup

One way to program the board is by using the Arduino software. I already had it installed on my computer. Arduino uses a very simple version of the programming language C. Some libraries are written in C++, thus it's also C++. The software is actually called the Arduino IDE. IDE stands for Integrated Development Environment; it's a software application that normally consists of, among other things, a source code editor. Before starting to program, I had to tell the software what type of board I was going to program. In the menu bar of the Arduino software, under Preferences, there is a text field "Additional Boards Manager URLs". In that field I wrote "", a link I had gotten from Emma that tells the Arduino software that I am using a third-party board. I clicked OK and went to the "Tools" menu, "Board" and then "Boards Manager". I found the ATtiny entry and clicked "Install".
Settings for the ATtiny44

Then, under the menu "Tools", I could define the specific environment for my board:

- Board: ATtiny24/44/84
- Processor: ATtiny44
- Clock: External 20MHz
- Programmer: USBtinyISP

This is not enough for the settings to take effect. For that I need to run the command "Burn Bootloader" to "empty" the board of prior information and apply the new settings.

Programming

The weekly assignment this week is to "program your board to do something". Since I have a few different components on my board, I have a great variety of things I can make it do. I decided I wanted to code at least two different things, and if I had time left I would code something more. First start easy and make the LED light up when pushing the button. Then, as a second thing, I wanted to make the board tell the computer, through the Serial Monitor (the connection between the board and the computer from where one can send and receive messages), that when the phototransistor is at a certain level, it being dark around it, a text message should say "Good night", and when it is lit up, it should say "Good day".

Arduino software

The programs in the Arduino software are called sketches. All sketches have two void functions: "void setup" and "void loop". The setup is only run once by the microcontroller, while the loop runs continuously over and over again.

1) LED lights up when pushing the button

So to make my LED light up when pushing the button, I used the control structure "if...else". A great library with all references to Arduino's structure, values and functions can be found here; this I used a lot during programming. Before starting, I needed to get an understanding of the different pins and which ones are connected to the LED and the button. I had learned that the pins on the ATtiny do not translate one-to-one to the pin numbers in the Arduino IDE, since those relate to a microcontroller called the ATmega. So I used Emma's scheme to understand which pin was which.
The LED is connected to the pin PA7 and the button to pin PA3. These have to be defined in the code. The keyword “int” declares a variable of the int data type. So in the code, before void setup, these can be defined. int LED = 7; int BUTTON = 3; In the void setup, I tell the microcontroller the different “pin modes”, i.e. the use of the pins (actually it changes the electrical behavior of the pin); INPUT, OUTPUT or INPUT_PULLUP. The INPUT_PULLUP enables the internal pull-up resistor in the microcontroller, a weak resistor that pulls the pin up to VCC so it reads HIGH when the button is not pressed. void setup() { pinMode(LED, OUTPUT); pinMode(BUTTON, INPUT_PULLUP); } So for the loop I need to write the code that tells the LED to light up when pushing the button, i.e. I want to read the voltage on the button pin when it’s pushed and when it’s not. To do that I need to create a new variable that will save the data. For this I create another int, and I give it the value 0. int LED = 7; int BUTTON = 3; int BUTTON_value = 0; For the loop, I need to define what happens when pressing the button. I first tell the microcontroller to read the value of the button. I do this with digitalRead, which reads the value from a specified digital pin, and it can be either HIGH or LOW. void loop() { BUTTON_value = digitalRead(BUTTON); I then do my if...else command. “==” means “equal to”. void loop() { BUTTON_value = digitalRead(BUTTON); // read the value of the button if (BUTTON_value == HIGH) { //when the button is not pressed digitalWrite(LED, LOW);//the LED is off } else { digitalWrite(LED, HIGH);//the LED is on } } The LED lit up when pushing the button It worked, my LED lit up every time I pressed the button. During my first try, the LED was lit up when not pressing the button and turned off when pressed. I then figured out that the button pin is at 5V when the button is not pressed, meaning it is HIGH. So when the button pin is HIGH, the LED pin should be LOW.
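The if...else logic above is simple enough to model outside Arduino. Below is a small plain-C sketch of my own (not Fab Academy code) of the active-low behavior that INPUT_PULLUP gives: the pin reads HIGH while the button is released, so the LED output is just the inverse of the button reading. The helper name led_state_for_button is hypothetical, not part of the Arduino API.

```c
/* Plain-C model of the active-low button logic above.
   HIGH/LOW mirror the Arduino constants. */
#define HIGH 1
#define LOW  0

int led_state_for_button(int button_value) {
    /* button released reads HIGH (pull-up), so LED should be LOW (off);
       button pressed reads LOW, so LED should be HIGH (on) */
    return (button_value == HIGH) ? LOW : HIGH;
}
```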
2) Good Night/Good day message on serial monitor For this to work, I need to include the library SoftwareSerial.h to be able to send the information I’m getting from the microcontroller to the computer. #include <SoftwareSerial.h> //added a library to use the Serial Monitor I then have to define the pins I’m using for my serial communication. On the FTDI: - Pin tx = Transmitting (the pin the computer uses to transmit) - Pin rx = Receiving (the pin the computer uses to receive) Pin rx is pin number 1 in the schematic, but I will define it as pin number 0 in my code. This is because the pins are named from the FTDI cable’s point of view: the board’s receiving pin connects to the cable’s transmitting pin. And the same goes for pin tx, which has the number 0 in the schematic but will be pin number 1 in the code. I could have used “#define”, but since a define is a plain text substitution that can mess up the code if the word is used in any other place, I will use “int” instead. int rxPin = 0; //the receiving pin int txPin = 1; //the transmitting pin SoftwareSerial Serial(rxPin, txPin); // to set up the serial object The phototransistor that reacts to light is connected to pin 2, which is an analog input pin. I also need to create a variable for the phototransistor reading. int pin_phototransistor = 2; //the phototransistor is connected to pin 2 int value_phototransistor = 0; //value for the phototransistor to read Now when all values are set, I can go ahead and write the code for the setup that will run once. I set the pins; the rx to input, the tx to output and the phototransistor to input as well. I also need to tell the SoftwareSerial to enable communication. I set the number to 9600 since that corresponds to what I will set in the serial monitor.
void setup() { pinMode(rxPin, INPUT); //the rx pin is the input of the communication pinMode(txPin, OUTPUT); //the tx pin is the output of the communication pinMode(pin_phototransistor, INPUT); //the phototransistor is an input Serial.begin(9600); // to enable communication, should correspond to serial monitor } In the loop I tell the microcontroller to read the value of the phototransistor. The value can be more than HIGH or LOW (0V or 5V), like in the case with the LED. For the phototransistor, the voltage can be anything between 0-5V, so to read this analog sensor, I need to read an analog voltage. To read the voltage from the analog pin, you can use the formula: V = data * (VCC / 2^10), in this case V = data * (VCC / 1024), 1024 because the microcontroller’s ADC (Analog to Digital Converter) has a resolution of 10 bits. This I can read in the datasheet of the ATtiny44. Since part of this week's assignment is to read a datasheet, I will also shortly add some things one can read about in the datasheet. To start with, the datasheet is very long (286 pages!) and before you get familiar with it, it's very hard to grasp and get an overview. It has 27 bookmarks which makes it easier to navigate. The pin configuration is very useful and something I have gone back to many times. It tells the characteristics of all the pins. In the datasheet, there is a section about clocks. I have used the settings for using an external clock, which one can read more about in section 6.2.1 External Clock. In section 7, Power Management and Sleep Modes, there are explanations about how to put the microcontroller in sleep mode to save power. In section 20, Electrical Characteristics, it says that the maximum operating voltage is 6V. Stresses beyond those listed under “Absolute Maximum Ratings” may cause permanent damage to the device. I run my microcontroller on 5V from the computer's USB. After that I add my if...else command. I picked the value 400.
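The formula can be turned into a small helper to sanity-check readings. This is a plain-C sketch of my own (the board itself only gives you the raw analogRead value), assuming VCC = 5 V and the 10-bit ADC described in the datasheet:

```c
/* Convert a raw 10-bit ADC reading (0..1023) to volts,
   assuming VCC = 5.0 V: V = data * (VCC / 2^10). */
double adc_to_volts(int data) {
    const double vcc = 5.0;
    return data * (vcc / 1024.0);
}
```

With this, my threshold of 400 corresponds to roughly 1.95 V on the phototransistor pin.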
If the value is bigger than 400 the serial monitor will print “Good night”, if not it will say “Good morning”. void loop(){ value_phototransistor = analogRead(pin_phototransistor); // read the value of the phototransistor if (value_phototransistor > 400) { //if it's dark Serial.println("Good night"); } else { Serial.println("Good morning"); //if it's light } } To be able to see my message, I need to connect my board with the FTDI cable. I made sure the GND pin was connected to the GND on the cable, the black one. I selected the port in the top menu and then I clicked Serial Monitor to see my data. Download files here Make LED lit up when pushing button Good morning / Good night group assignment: assignment: compare the performance and development workflows for other architectures Week 10: Molding and Casting assignment: design a 3D mold around the stock and tooling that you'll be using, machine it, and use it to cast parts There are so many steps in this assignment, and thinking about negative and positive parts really messed up my head. Also, just the difference between molding and casting is confusing. Google explains: “Molding is the process of manufacturing by shaping liquid or pliable raw materials using a mold or matrix, which have been made using a pattern or model of final object. But, Casting is a manufacturing process in which a molten metal is injected or poured into a mold to form an object of the desired shape.” This didn’t really make me smarter, but I guess it means that I first need to make a mold, then pour something into it, i.e. casting my final design. But we are doing this in three steps, first using a CNC mill to create a positive mold out of wax. After that, I’m creating a negative mold by casting, i.e. pouring material into the wax mold. After that I’m pouring another material into the negative part, making a positive part; my final object. The hardest part is to decide what to make.
I have a ring that I really like, a ring that some of my friends have said they would like too. I wanted to make some changes to it, to make it a bit more “mine”. I thought about engraving it, but that was too tiny and probably wouldn’t work, so instead, I designed a ring that looks more or less exactly like the one I already have. One part of the ring in Fusion 360 Designing - Fusion 360 I designed the ring in Fusion 360, now a bit more comfortable with 3D modeling than before, but still hard though. To make the actual ring wasn’t that hard, more or less some rectangles and some circles. I made it two-part, dividing the ring where I thought it was most appropriate. What was hard was to build the box around the parts that would keep the liquid material in place when curing. To be able to pour the material down into the mold I also needed a conduit connected to the ring. Apparently, a lot of bubbles are created when casting, so I also needed to create paths where the bubbles could escape; air vents. To not stress the end mill when cutting 90 degree angles, I sloped the walls of the box. I also added holes on the bottom of one part and pins sticking up on the other, so-called “registration marks”. Those would connect the two pieces and keep the mold stable and in place when pouring the material. I saved the two pieces as two STL files by right-clicking the body in Fusion 360 and selecting “Save As STL”. Measure the part of the end mill that is out Milling Prepare the ShopBot This process was similar to the week of making something big, but this time we were to mill in wax. I attached the wax block to the sacrificial layer on the ShopBot. I used some blocks of wood and double-sided tape to attach the wax and make it stable when milling. I was going to use a 3 mm flat end mill with 2 flutes, which I was told was a good choice for milling in wax. I attached my end mill to the mill head and made sure it was tightened.
I measured the part of the end mill that was sticking out and made sure it was more than I was going to mill. Orient and Size Model ShopBot PartWorks3D Opening up PartWorks3D, the first thing I did was to upload the first STL file; I picked the file with the majority of the parts of the ring. PartWorks3D is more straightforward than PartWorks (which is used when not 3D-milling); it has 7 different steps to follow. The first step is “Orient and Size Model”. I could rotate the surface; I picked “Back” because that worked best for my design. I made sure the size of my design was correct and I picked “mm” for units to be sure. I clicked “Apply” and then next. Set Model and Material Size Next step is “Set Model and Material Size". My design was 70x70x18 mm. I set the “Z Zero” to the top left corner. I also unticked “Use Model Silhouette” since I was not going to cut any parts away; they were going to stay in the wax. Same for tabs, I didn’t add any for this reason. For “Depth Below Surface” I wrote 18 since my design had a depth of 18 mm. Step three is “Roughing Toolpath”. In this step, I decided the settings for my tool. I was going to use an end mill of 3mm. Roughing Toolpath Cutting Parameters - Pass Depth: 1.0 mm - Stepover: 0.5mm (16.7%) Feeds and Speeds - Spindle Speed: 6000 r.p.m - Feed Rate: 25 mm/sec - Plunge Rate: 25 mm/sec I then clicked “Apply”. Toolpath Parameters - Rapid clearance gap: 2.0mm - Machining Allowance 0.5 mm Finishing Toolpath In “Strategy” I set the “Z Level” to “Raster Y”, which is the direction this toolpath will take. After that I clicked “Calculate” to get the time. It was going to take 1 hour. In step four, “Finishing Toolpath”, the settings were similar to the roughing toolpath. I was going to use the same end mill that I used for the roughing toolpath, but with slightly different settings; I would change the stepover to 10% and change the “Raster Angle” to “Along X”. After that, I clicked “Calculate” to get the time; 19 minutes.
The fifth step is for cutting, which I wasn’t going to do. Preview of the toolpaths The sixth step is “Preview Machining”. I checked my two toolpaths and both looked fine. In the last step, “Save Toolpaths”, I saved the toolpaths as two different files and I also saved the settings for PartWorks3D. I then did the same process for the other part of the ring, the other STL file. ShopBot - milling I turned the ShopBot on and opened the ShopBot software. To set X, Y, I moved the mill head to my origin using the arrows on the computer. In the menu, I clicked “Zero” and then “zero axis (X & Y)". To set Z I used the tool on the machine, made sure it was connected, and then clicked the button “Z”. I uploaded my file and was now ready to start milling. Milling my mold After about an hour my roughing toolpath was done and I uploaded the file for the finishing toolpath. 19 minutes later, my first piece was done. I changed the Y for the next cut, my second piece. Since the wax block was 150 mm long and my design 70x70mm, I changed X to +75mm to start in the right spot. Then the whole process of milling again. About 1 hour and twenty minutes later my mold was done! I cleaned the machine, saved the “cut dust” to be melted and used again, and brushed my mold clean. Creating the negative This process is done in the ventilation cabinet since the fumes can be toxic. I also used gloves and protected myself with covering clothing to be safe. For some reason, I decided to use Vytaflex 50 for my negative. It's a “Urethane Rubber Compound” and its mix ratio for part A and B is 1:1 in volume. I used cups and filled them up to the same height to then blend the two parts together. I poured it into my mold and then I waited. I left it there for more than 24 hours (the curing time was set to 16-24 hours), but it never cured. For some reason, it stayed sticky and too soft. I decided to take the Vytaflex out, clean up my mold and go for another material.
PMC 121/30 wet and Mold Release For my next try, I used PMC 121/30 WET, a soft and flexible liquid rubber. I mixed it at a 1:1 ratio, part A and part B. Before pouring it into the mold I sprayed the mold with “Mold Release”, which makes it easier to take the cast out of the mold. Easter came and I had to wait a few days to check on my cast. This time all went great! The cast was steady and good. It was a bit sticky on the surface, but that was from the release. I sprayed it with release again to prepare it for the final casting. Epoxy Creating the positive, final cast For the final cast, I was going to use epoxy. At the Fab Lab, we had a SUPER SAP® CLR Epoxy System which is a clear liquid epoxy resin. With that, I can use different hardeners, but we only had the Super Sap® CLF (FAST) Hardener so I used that. It was hard to get the mix ratio right since I was only going to use a small amount of liquid and I didn’t find any small cups or measuring bowls. The mix ratio is 2:1, meaning two parts of the epoxy and one part of the hardener. I stirred it slowly but for a long time to make sure the two were blended. For my mold, I had tightened the two parts together using some plywood and some tie wraps to secure it. Pouring the epoxy into the mold was really hard. It was hard not to make more bubbles, to make sure I poured enough but not too much, and to get it everywhere into all the angles. I left my cast in the mold overnight. The next day I took it out, and as I suspected, the bubbles in the epoxy made my design a bit deformed. I have learned that the chance of getting bubbles in your cast when the design has 90 degree angles is more or less unavoidable. However, it still looked a bit cool. One bubble was right in the middle of the cross, which actually looks like I designed it that way. You can see on the ring the rough edges from the wax mold, where the CNC mill couldn't cut a smooth circle.
What I'm really happy about is that you can tell that the mold parts were really tight together when pouring in the epoxy. There is no trace of material wanting to pour out from the mold except from the air vents, which is almost unavoidable in this case since I didn't measure the amount of material and compare it to the volume of the object. All in all, I'm happy with the learnings I have gotten and will make another cast with another material to try my luck again. Maybe a flexible material this time. Download files here First part of the ring STL Second part of the ring STL Design of ring f3d group assignment: assignment: review the safety data sheets for each of your molding and casting materials, then make and compare test casts with each of them Week 11: Input Devices assignment: measure something; add a sensor to a microcontroller board that you have designed and read it Interesting week! I guess you can make it as easy or hard as you please. That is a struggle of mine, continuously thinking I'm making things that are not challenging enough. But they are - at least for me. But the feeling of not making “cool” stuff is hard to get rid of. Reed switch We have already done a few things with input devices; we added a phototransistor and a button to our Echo Hello board, which is more or less what this week is all about. Since the final project is getting closer, I thought I should use this week to learn about an input device that I might use for my final project. That would be a so-called reed switch. When the reed switch is exposed to a magnetic field, the two metal contacts, the reeds, inside the switch pull together and the switch closes. When the magnetic field is removed, the reeds separate and the switch opens. The Fab Lab ordered this reed switch; there are many kinds, but most are quite big even in SMD format. We ordered one that was 17.6mm long.
Eagle I decided to make a new board, similar to the Echo Hello but only with the components I needed to test the reed switch. Create new icon In Eagle I soon noticed, when adding the different components to my schematic, that I didn’t have an icon for the reed switch. This became one of my biggest challenges this week, to make the icon. But even before that, I didn’t even know that it was possible to make your own icons for the board design. It is cool to realize how many things I learn here, big and small. I watched this tutorial to learn how to make your own icon. However, in this tutorial, the guy is showing how to make a component with many pins that look more or less the same in the schematic and the board design. But my reed switch looked nothing like that. I actually couldn’t find how the schematic icon should look, but I figured it was less important than getting the board icon right. I googled reed switches and made a schematic icon that was similar to those. When googling this again now, I found many. Back then I thought the icon had to be specific to the exact reed switch I was using, which is not the case. Schematic I had no challenges in this step except making the schematic icon for the reed switch. I used the same commands as during the week of Electronics Design to place, name, label and connect the different components. This is the list of components I added: - 6-pin header - Microcontroller ATtiny44A - FTDI header - 20 MHz resonator - Resistor (10k) - Reed switch - Capacitor (1uF) - Ground - VCC I decided to use a 20 MHz resonator as the external clock. I only needed one resistor, for the reset pin; for the reed switch I was going to use the internal pull-up resistor and activate it in my code. The ATtiny44 needs a capacitor between VCC and GND, preferably close to the microcontroller. Other than that, just the regular components; the 6-pin header to connect to the programmer and the FTDI header to be able to use the serial monitor.
Board design and Photoshop This is always a bit tricky, mostly because I for some reason want to keep the board small and good looking. Like last time, I spent a good amount of time moving the components around to make the traces. When I was almost done, I figured it was a really boring board. My quick fix was to make the board look like a cat. To not just make the traces look like a cat, I changed the dimensions of the board and made the whole board cat-shaped. This was way easier than I thought. Just like making a line or a rectangle, you can delete and add lines to your dimension layer. I had some challenges when exporting the two files; the trace file and the cut file. The trace file went ok. I had to pick the top layer and the pad layer to get the right design; the pads I made for the reed switch were on that layer. I also had to erase the component outline itself in Photoshop after exporting the file since I don’t want the mill to cut the shape of the component; only the pads are needed. The cut file, however, was trickier. Last time when designing a board I made two different rectangles. This time I had the shape of a cat. I learned that when doing it this way, I still make a rectangle in another layer like last time, but I only export the dimension layer. This will result in a png that is black and has a white line in the shape of the cat. What I first forgot was to fix this file in Photoshop, which messed up my cut (more about that later). I had to make sure the shape of the cat was white and the surrounding black. Milling As I mentioned, I had made my cut file the wrong way which messed up my board. Actually twice. The first time I didn’t even notice because the cut didn’t go the whole way through, so I was mostly focusing on that. The next time the cut went through and then I noticed how the cut had messed up my traces. I asked Emma why and she told me to have a look at my cut file from some weeks before to see if I saw a difference, and I did.
I changed the file and did my milling for the third time. Soldering I thought the milling went ok the third time so I collected my components and started soldering. It was sadly only after I was done soldering and trying to connect my board that I noticed that something was wrong. Before, I had checked if VCC and GND were shorted, and they weren't, but I didn’t notice that four other pins on the microcontroller were connected. There was actually a trace between them. Emma helped me figure out the problem, and since I had a picture of the board right after cutting, I could see the trace that was not supposed to be there. The problem must have been in Mods. The distance between the traces was ok when checking them in Eagle, doing the DRC test of having at least 0.4 mm between the traces. I used a hot air gun I had to remove my ATtiny and scrape off the traces that weren’t supposed to be there. This went ok. I used a hot air gun to heat the tin. I was holding on to the ATtiny and let gravity do its part. I scraped the trace off and soldered the ATtiny back onto the board. Now my board was a bit messy with all the tin from before. I tried to not use too much tin, which later on resulted in the pins not being connected properly (more about that later). Connect the board to the programmer Connection The first time I tried to see if my programmer could see my board was when the extra trace still connected the four pins on the ATtiny. The next time was when I realized that I had used too little tin, so that the pins weren’t connected to the pads. The third time it worked. Program board I decided to code in Arduino IDE. The code I created for my reed switch was a combination of the code I made before; making the LED blink and reading the serial monitor. I had to include the library SoftwareSerial.h to be able to use the serial monitor. I set the receiving and transmitting pin. I set the pin number for the reed switch and a variable to read.
In the void setup I wrote the code for my reed switch being an input, specifying it as an input pullup to activate the pull-up resistor. And I set the serial begin to 9600 which corresponds to my computer. In the void loop I identified the value to be the digital read of the reed pin and I also added code for the serial monitor to write the value. #include <SoftwareSerial.h> //added a library to use the Serial Monitor int rxPin = 0; //the receiving pin int txPin = 1; //the transmitting pin SoftwareSerial Serial(rxPin, txPin); int REED = 3; // my reed switch is pin 3 int REED_value = 0; //variable to read the reed pin void setup() { pinMode (REED, INPUT_PULLUP); // my reed is an input (pullup) Serial.begin(9600); // to enable communication, should correspond to serial monitor } void loop() { REED_value = digitalRead(REED); // read the value of the reed Serial.println(REED_value); } Testing my input device I connected my programmed board with the FTDI cable to the computer. I selected the port in the Arduino IDE and I could see the number "1" shown on the monitor. I used a magnet to test my reed switch. It worked! I noticed that the magnet needs to be strong enough; how strong I don’t know since I didn’t test this. And it also needs to come from the right angle (at least if the magnet is as strong/weak as the one I used) to close the two reeds. Download files here Schematic Board design Png trace file Png cut file Code group assignment: assignment: measure the analog levels and digital signals in an input device Week 12: Output Devices assignment: add an output device to a microcontroller board you've designed, and program it to do something Like last week I wanted to learn something that would later be useful for my final project. I wanted to explore RGB LEDs and see how they work. For a power source, I was not going to use the USB on the computer again, but try a portable source.
I’m not sure yet, but I think my power source for the interactive building blocks will be a 9V battery, so that’s what I went for this week too. Eagle I wanted to make two different boards, one with more or less only the LED and one with the rest. This to test the minimum size I could create. This went really smoothly, both the schematic and the board design. What I wasn’t aware of at this point was the wrong design of the LED icon in the board design, which later on resulted in me making a new board. This is the list of components for the main board: - 6-pin header - Microcontroller ATtiny44A - FTDI header - 20 MHz resonator - Resistor (10k) - Reed switch - Capacitor (1uF) - 4 pin header for power - Regulator - Ground - VCC This is the list of components for the LED board: - RGB LED - Resistors x3 - 4-pin header Milling This also went really smoothly, I thought. But of course not. I had based my design on the cat board from last week, knowing that mods would leave a trace connecting some of the pins. I was just going to take them off after the cut. And I did, but in the wrong place. I cut off the trace that I was going to leave. I must have been a bit tired... So I just went ahead and cut a new board, this time taking the right trace off. Soldering Shiny and smooth as Neil would say! Good enough at least this time. My only obstacle was the direction of the LED. If I had looked closer, I would have noticed that the icon for the LED was wrong and that I had to make a new board. But I didn’t notice this until starting to code and testing the board, when the wrong color was lit up. The mistake To walk through the whole process of what went wrong: there were two different board icons in the Fab library in Eagle, one big and one small. I didn’t know the difference, and I should have done better research when choosing between the two, but I went for the one with big pads since that might be easier to solder.
What I didn't see then, but what I see now, is that the orientation of the anode, red, green and blue pins is different in the two icons. This is the RGB LED that I was going to use. It’s a so-called “common anode”. With a common anode, the anode is connected to VCC and each individual LED is connected through its own resistor to an output pin. When coding and writing “LOW” to that pin, the LED will turn on, and “HIGH” will turn it off. With the opposite, a “common cathode”, the cathode is connected to GND and each LED’s anode is connected through a resistor to the output pin. Then a HIGH turns the LED on. On my board design, I had gotten the component icon from the library, and in that one, the corner mark wasn’t corresponding with the corner mark on the component itself. In the datasheet of the RGB LED it says in one place (very small, and not so clear) that pin 2 of the LED is the anode. This meant that my component had to be turned 180 degrees (from where the corner mark was - at the actual corner mark), but that also meant that my colors would be switched around and that the anode wasn’t connected to VCC anymore. Of course I didn’t notice this until much later, which meant that I had to go back to the schematic, pick the right icon, mill a new board and then solder it. I was a bit fast when doing this so I missed a trace, but I could easily fix that with some tin. Calculate resistors for the LEDs I should also mention that I calculated which resistors I should use for the board. I looked at the datasheet and found “forward current” and “forward voltage”. I added the numbers for “typ” and the conditions used for typ in this wizard. As for “Source voltage” I wrote 5 since I was going to use a 9V battery but with a regulator taking it down to 5V.
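The wizard essentially applies Ohm's law to what is left of the source voltage after the LED's forward drop: R = (Vsource - Vforward) / Iforward. Below is a plain-C sketch of my own of that calculation; the forward voltage and current values in the example are typical illustrative figures, not the exact ones from my LED's datasheet.

```c
/* Series resistor for an LED: R = (Vsource - Vforward) / Iforward.
   Returns the resistance in ohms; round up to the nearest
   standard resistor value in practice. */
double series_resistor_ohms(double v_source, double v_forward, double i_forward) {
    return (v_source - v_forward) / i_forward;
}
```

For example, a red LED with a 2.0 V forward drop at 20 mA from a 5 V source needs (5.0 - 2.0) / 0.02 = 150 ohms.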
- Red: 1/8W or greater 150 ohm resistor - Green: 1/8W or greater 82 ohm resistor - Blue: 1/8W or greater 82 ohm resistor Orientation of connection Testing the board and some simple code As in prior weeks, I checked if my computer could read my new board (the one with the ATtiny, not the LED board), and it could. In terminal I wrote the command that tells my programmer to look for an ATtiny44, “avrdude -c usbtiny -p t44”. I then burned the bootloader since I had a new microcontroller and an external clock. I used Arduino IDE this week too. I found some code in this tutorial that I wanted to try. But before this worked out for me, I had the whole mess with the wrong orientation of my LED and the board design, hence the connections to the pins. Just a note about programming the ATtiny board that has a regulator to regulate a 9V battery to 5V: when using the USB, I had to be really fast uploading the code and taking the programmer out since the regulator gets really hot. To test my LED I used the easy code below to test the different colors. This was when I noticed the connections were wrong. int led_pin = 5; void setup() { pinMode(led_pin, OUTPUT); } void loop() { digitalWrite(led_pin, LOW); delay(100); } Later on when I used my new board, with the right orientation, I saw from this code, the code that would make the LED light up in six different colors, that two colors were missing. Emma helped me out, telling me to check which pins on the ATtiny I had used. I had used one pin that wasn’t PWM, which is the analog output. When designing the board I thought I was only going to use digital output (HIGH or LOW), but to change the brightness of the LED I need an analog output pin. Color pattern In this arduino tutorial it says: “”. This resulted in my blue LED only being able to have one brightness. Looking at the color pattern you could see that I would not be able to create purple and aqua this way. The code As mentioned, I found the code in a tutorial.
But I wanted to really understand the code and what it would do. First it was easy, I set the different pins. Pin 4 is not PWM, but for next time using an ATtiny44 I will use one of the PWM pins: PA5, PA6, PA7 or PB2 int REDpin = 5; //my red light is pin 5 int GREENpin = 6; // my green light is pin 6 int BLUEpin = 4; //my blue light is pin 4 For the next part the tutorial said “If you are using a Common Anode RGB LED, then you need to change the analog write values so that the color is subtracted from 255, Uncomment the line #define COMMON_ANODE in the sketch!” So I guess, since the common anode is connected the opposite way to a common cathode (where LOW means the LED is off), the analog output probably functions in the same inverted way. I was defining for analogWrite how to translate the variables to suit the common anode output. I also set the variables for the function setColor; red, green and blue. The function analogWrite is there to set the brightness of each LED, from 0-255. In the loop function, I can later on set the amount of green, red and blue. #define COMMON_ANODE void setColor(int red, int green, int blue) { #ifdef COMMON_ANODE red = 255 - red; green = 255 - green; blue = 255 - blue; #endif analogWrite(REDpin, red); analogWrite(GREENpin, green); analogWrite(BLUEpin, blue); } For the void setup I set the pinMode of each pin to be an output. void setup() { pinMode (REDpin, OUTPUT); //my red pin is an output pinMode (GREENpin, OUTPUT); //my green pin is an output pinMode (BLUEpin, OUTPUT); //my blue pin is an output } In the void loop I could, as mentioned, decide the amount of brightness of each LED to set a specific color. Now knowing that my blue LED can’t change brightness I could try different colors mainly using the red and the green LED. The delay between the functions is there to have a second between every new color.
void loop() {
  setColor(255, 0, 0); // red
  delay(1000);
  setColor(0, 255, 0); // green
  delay(1000);
  setColor(0, 0, 255); // blue
  delay(1000);
  setColor(255, 255, 0); // yellow
  delay(1000);
  setColor(80, 0, 80); // purple
  delay(1000);
  setColor(0, 255, 255); // aqua
  delay(1000);
}

The result

So my LED is working, except the blue LED, which can't change brightness. I will remake the board during the week of networks and see how many LED boards I can connect to each other, or explore in what other way I can have multiple RGB LEDs in one of my interactive building blocks.

Download files here
LED board schematic
LED board design
main board schematic
main board design
LED board trace file
LED board cut file
Main board trace file
Main board cut file
Code

group assignment:
assignment: measure the power consumption of an output device

Week 13: Interface and Application Programming
assignment: write an application that interfaces with an input and/or output device that you made

I wish I had more time this week; this is a fun topic and it opens up lots of possibilities. I could only spend one day on this, so the result is fairly simple. But I will keep exploring this in the future!

Processing

I decided that I would do Processing this week. On Processing's website, they say "Processing is a flexible software sketchbook and a language for learning how to code within the context of the visual arts." It is free to download and it's open source. I started off downloading the software for Mac. After that, I watched a bunch of tutorials made by this lovely charismatic man called Daniel Shiffman. I could marry that guy, so fun, humble and crazy! His "Hello Processing" tutorial is amazing! It's great for a newbie like me, since he really explains everything from scratch. I also looked at some of the written tutorials on Processing's website. This video series by Daniel Shiffman is also great. Watching tutorials was how I spent most of the time this week. You will see that from the result.
Program my input board (my Echo Hello board)

I decided to use my Echo Hello board this week. The board has a push button and a phototransistor, enough for me to test Processing for this week's assignment. I decided that I wanted to create an interface for how long the button was pushed down. I wanted to create something that would change depending on the time.

Board over the Serial Monitor to Processing over the Serial Port

The workflow I will use is: first, programming my board to do something. Then, connecting it to the FTDI cable to read the serial monitor. Then, writing code in Processing, having it read the serial monitor over its serial port and make the interface do something. But for this to work I need to make sure that I'm transferring the right values/numbers from the serial monitor to Processing. And this is not as easy as it might seem. For example, a digital read in your Arduino code means the serial monitor will show either true or false, which could be 0 and 1. When sending this data to Processing, the software will read the 0 and the 1 as chars, more or less characters. Those characters have an equivalent number, which can be seen in an ASCII table. In the table we can see that 0 = 48 and 1 = 49. ASCII is explained further on Wikipedia. So to transfer the values right, there is some code that can be used; more about that below.

Good-to-know-code

This is some good-to-know code for two things: one is for helping Processing not to get a hiccup if the port can't read any value, and the other is that it transforms the chars to integers, trimming the line, making it easy for Processing to read.

import processing.serial.*;

Serial myPort; // Create object from Serial class
String val = "in";
int val_aux = 0;

void setup() {
  myPort = new Serial(this, "/dev/cu.usbserial-FT9QO5SE", 9600);
}

Program the board in Arduino IDE

In Arduino IDE, the code is the same as if I were just programming my board to do something.
I will need to make sure that I have included the library for the serial monitor. The code I was going to use is a version of this tutorial on making one button have the functionality of two or more. The tutorial explains the method of using the time for how long a button is pushed down to determine functions. My code would be a bit different, but this was a good start.

First I included the library SoftwareSerial.h to be able to use the serial monitor. I set the receiving and transmitting pin. I then set up a new serial object.

#include <SoftwareSerial.h>
#define rxPin 0
#define txPin 1
SoftwareSerial Serial(rxPin, txPin);

int buttonPin = 3; //push button pin (example value)
int pressLength = 0; //the value for how long the button is pushed down
int One = 100; //define the *minimum* length of time, in milli-seconds, that the button must be pressed for a particular option to occur

void setup() {
  pinMode(buttonPin, INPUT);
  Serial.begin(9600);
}

void loop() {
  while (digitalRead(buttonPin) == HIGH){ //count while the button is held down
    delay(100);
    pressLength = pressLength + 100;
  }
  if (pressLength >= One){
    Serial.println(pressLength);
  }
  pressLength = 0; //reset the pressLength every loop
}

Alternative code that prints every value while the button is pushed down:

#include <SoftwareSerial.h>
#define rxPin 0
#define txPin 1
SoftwareSerial Serial(rxPin, txPin);

int buttonPin = 3; //push button pin (example value)
int pressLength = 0; //the value for how long the button is pressed down

void setup() {
  pinMode(buttonPin, INPUT);
  Serial.begin(9600);
}

void loop() {
  while (digitalRead(buttonPin) == HIGH){ //count and print while the button is held down
    delay(100);
    pressLength = pressLength + 100;
    Serial.println(pressLength);
  }
  pressLength = 0; //reset the pressLength every loop
}

Serial monitor in Arduino IDE

I uploaded the code and tested the result in the serial monitor. It was showing the value of how long I was pushing down the button. With the first code it showed the value only when I let go of the button, and with the second code it showed how long the button had been pressed down while still pressing.

Processing

The Processing software has a similar interface to Arduino IDE. Just like Arduino has setup() and loop(), Processing has setup() and draw() (instead of loop). Before the setup, just like in Arduino IDE, I need to import the library processing.serial.*. Then, to be able to listen in on a serial port on our computer for any incoming data, I created a Serial object.
I also added a value to receive the data coming in.

import processing.serial.*;

Serial myPort; // Create object from Serial class
String val = "in"; // Data received from the serial port
int val_aux = 0;

For the setup I set the serial port that the board is connected to and set up our Serial object to listen to that port. In the setup I also set the size of the interface window, but more about that later.

void setup() {
  myPort = new Serial(this, "/dev/cu.usbserial-FT9QO5SE", 9600);
  size(500, 500);
}

In draw I used the code that Emma taught us that helps the process if the port can't read any value, just like I explained earlier in the "good-to-know-code".

void draw() {
  if (myPort.available() > 0) { //if data is coming in
    val = myPort.readStringUntil('\n'); //read it and store it in val
    if (val != null) { //protect against an empty read
      val = trim(val); //trim the line
      val_aux = int(val); //turn the chars into an integer
    }
  }
}

With this code my port could read the data from my board. But nothing would happen. So in the if-statements I wrote some simple code for how a circle would appear in the interface when pressing the button. The longer I pressed the button, the bigger the circle became. The code says: if the value is equal to a given value, for example 100, then the background should turn black. The background goes back to black every time I press the button so the circles do not appear on top of each other. Then the fill, which sets the color of the circle. And then the size and the position of the circle (or ellipse, as Processing calls it). In the setup I had set the size of the interface window to size(500, 500);. In the last circle, the one when you push the button for a long time, I added a text and made the circle red.
if (val_aux == 100) {
  background(0);
  fill(160, 220, 90);
  ellipse(250, 250, 10, 10);
}
if (val_aux == 200) {
  background(0);
  fill(160, 220, 90);
  ellipse(250, 250, 40, 40);
}
if (val_aux == 300) {
  background(0);
  fill(160, 220, 90);
  ellipse(250, 250, 70, 70);
}
if (val_aux == 400) {
  background(0);
  fill(160, 220, 90);
  ellipse(250, 250, 100, 100);
}
if (val_aux == 500) {
  background(0);
  fill(160, 220, 90);
  ellipse(250, 250, 130, 130);
}
if (val_aux == 600) {
  background(0);
  fill(160, 220, 90);
  ellipse(250, 250, 160, 160);
}
if (val_aux == 700) {
  background(0);
  fill(160, 220, 90);
  ellipse(250, 250, 190, 190);
}
if (val_aux == 800) {
  background(0);
  fill(160, 220, 90);
  ellipse(250, 250, 220, 220);
}
if (val_aux == 900) {
  background(0);
  fill(160, 220, 90);
  ellipse(250, 250, 250, 250);
}
if (val_aux > 900) {
  background(0);
  fill(255, 0, 0);
  ellipse(250, 250, 500, 500);
  textSize(150); // Set text size to 150
  fill(0, 0, 0);
  text("BOOM", 25, 300);
}
}

Download files here
Code Arduino IDE
Alternative code Arduino IDE
Processing code

group assignment:
assignment: measure the power consumption of an output device

Week 14: Networking and Communications
assignment: design and build a wired and/or wireless network connecting at least two processors

Again I had too little time to make something more interesting than making a LED blink. But again, hard enough! I had so many challenges this week, mainly regarding what I was actually doing. I have been following several different tutorials and none of them have had the same structure or ways of doing things. In the end, I figured it all out, but it took some time.

It's about networking and communications this week. I decided to try out I2C since it allows you to connect many slaves to a master with only two pins. Sparkfun has a good explanation of I2C. To start off easy I decided to make only two boards, one master and one slave. I wanted the slave to have a LED that the master could turn on and off.
And when the LED was on or off, I wanted the slave to send a message back to the master's serial monitor saying whether the LED was on or off.

I2C

Making the boards

I used Eagle, as usual, making the schematic and the board design. This went really smoothly. I followed this site as a reference. My boards are fairly simple, with as few components as possible. I have mainly been using ATtiny44s in the previous projects, so I went with that again. If I had known what I know today, I would probably have used an ATtiny45 instead, since there are more libraries to choose from. The Fablab was out of ATtiny44s so I used an ATtiny84 instead; they are pin-compatible, the 84 just has more memory. The pins used for the I2C signals are SCL and SDA. SCL is the clock signal, and SDA is the data signal. It is very important to use a pull-up resistor on both these signals. This is to restore the signal to high when no device is asserting it low.

Connect to programmer

Testing the boards with simple code

When the boards were designed, milled, and all components were soldered on, I tested whether my programmer could see the two boards by typing "avrdude -c usbtiny -p t84" in Terminal. Before even going into the I2C code, I used very simple code to test the serial monitor on the master board and that the LED could blink on the slave board. This worked, so I knew that the boards were working fine.

Simple code for the master serial monitor:

#include <SoftwareSerial.h>
#define rxPin 0
#define txPin 1
SoftwareSerial serial(rxPin, txPin);

void setup() {
  pinMode(rxPin, INPUT);
  pinMode(txPin, OUTPUT);
  serial.begin(9600);
}

void loop(){
  serial.println("the serial monitor is working");
  //serial.println(value_L_sensor);
  delay(1000);
}

Since I have no 20 MHz resonator on this board, I have no external clock. I had to change the settings in Arduino IDE to the internal 8 MHz clock. I also made sure all the other settings were correct: the board set to ATtiny84, the processor to ATtiny84, and the right programmer. When I was sure, I burned the bootloader.
It is important that I change this next time I use an external clock of 20 MHz, since the settings stay as they were last saved.

Simple code for the slave LED:

int led_pin = 7;

void setup() {
  pinMode(led_pin, OUTPUT);
}

void loop() {
  digitalWrite(led_pin, HIGH);
  delay(100);
  digitalWrite(led_pin, LOW);
  delay(100);
}

Adding libraries

To have the master talking to the slave using I2C, I needed to download libraries to be used in Arduino IDE. This was a bit hard; many libraries were not recommended for the ATtiny84, hence not compatible. I soon found a suitable master library. I would have loved one library for both the slave and the master, but I couldn't find that for the ATtiny84. Also, when googling, I read things like "the ATtiny84 doesn't have hardware TWI. You need to use i2cMaster.S which implements a software master." To be honest, I still don't know what this means, except that I had to find a suitable library for the slave. I ended up with this one, which worked really well. For the master, I used this one.

To add a library in Arduino IDE, go to "Sketch", "Include library", then "Add ZIP-library". In the folder, add the downloaded ZIP-file. Then go back to "Sketch", "Include library" and click "Manage Libraries…". Search for your library and click to install. Sometimes you need to restart the software for the libraries to appear in the list of options.

Final code

My final code is a version of this one and this one from former Fab Academy students, mainly the first one from Teo Cher Kok.

Master

First I added the library TinyWireM.h and then I defined the address that I was going to use for the slave. I don't think it's necessary to define the master, but I decided to do that to make it easier for me to keep them apart.

#include <TinyWireM.h> //include the library
#define device (1) //define master
#define SLAVE_ADDRESS 0x6 //define the address for the slave

Since I wanted to use the serial monitor I had to add that library and define the pins.
#include <SoftwareSerial.h> //add library to be able to use the serial monitor

int rxPin = 0; //the receiving pin
int txPin = 1; //the transmitting pin
SoftwareSerial serial(rxPin, txPin); //set up the serial object

In I2C, the void setup requires an initialization; the command "begin" is used. Then I set the code for the serial monitor.

void setup() {
  TinyWireM.begin(); //setup initialization
  pinMode(rxPin, INPUT); //the rx pin is the input of the communication
  pinMode(txPin, OUTPUT); //the tx pin is the output of the communication
  serial.begin(9600); //begin communication with computer
}

In the void loop I set what the master will communicate to the slave. The master will send "1" and "0" to turn the LED on or off on the slave board. The slave will then send a message back to the master, telling whether the LED is on or off, through the serial monitor. The loop commands below are all standard commands for I2C, for the communication to start and end:

TinyWireM.beginTransmission(SLAVE_ADDRESS);
TinyWireM.send();
TinyWireM.endTransmission();

Reading up on the I2C protocol, for example in this tutorial, might give a better understanding.

void loop() {
  TinyWireM.beginTransmission(SLAVE_ADDRESS); //begin transmission to slave address
  TinyWireM.send(1); //send "1"
  TinyWireM.endTransmission(); //end the transmission
  msg(); //receive message
  delay(2000); //wait 2000 millisec
  TinyWireM.beginTransmission(SLAVE_ADDRESS); //begin transmission to slave address
  TinyWireM.send(0); //send "0"
  TinyWireM.endTransmission(); //end the transmission
  msg(); //receive message
  delay(2000); //wait 2000 millisec
}

For the master to pick up a message from the slave, I added the command "msg();", which is defined in the void msg. First I make sure to set the message to be treated as a variable. Then the setting for the master to request a message from the slave, making sure I type the right name of the slave. Then, to make sure the master is available when the slave is sending the message, I added an if-statement.
Then the actual if/else-if message, saying what the serial monitor should write depending on the message sent from the slave.

void msg() {
  volatile byte msg = 0; //treat as variable
  TinyWireM.requestFrom(SLAVE_ADDRESS, 1); //request from slave address
  if (TinyWireM.available()){ //to make sure the master is available
    msg = TinyWireM.receive(); //receive message
    if (msg == 4){ //if this message
      serial.println("Slave LED off"); //print this in serial monitor
    }
    else if (msg == 5){ //if this message
      serial.println("Slave LED on"); //print this in serial monitor
    }
  }
}

Slave

The code for the slave was a bit more straightforward. First, I included the library for the slave and defined the address of the slave. I also set the pin for the LED.

#include <TinyWireS.h> //include the library
#define I2C_SLAVE_ADDRESS 0x6 //define the address for the slave
int LED = 7;

The void setup is similar to the master code: first, a command telling the slave it is a slave and initializing the setup. The pinMode for the LED also needs to be set in the setup. In the void loop, similar to the master code, I need to tell the slave to treat the message as a variable. Then again, setting an if/else command, saying that if the slave is available, the slave should receive bytes from the master. If the bytes are equal to 0x01, the LED should be turned off and the message sent to the master should be "4". Else, the LED should be turned on and the message sent to the master should be "5".
void loop() {
  byte byteRcvd = 0;
  volatile byte msg = 0; //treat as variable
  if (TinyWireS.available()){ //if input and slave available
    byteRcvd = TinyWireS.receive(); //receive from master
    if (byteRcvd == 0x01){ //if bytes received
      digitalWrite(LED, HIGH); //turn off LED
      TinyWireS.send(4); //send message
    }
    else {
      digitalWrite(LED, LOW); //turn on LED
      TinyWireS.send(5); //send message
    }
  }
}

Download files here
Master trace
Master cut
Slave trace
Slave cut
Master board design
Master schematic
Slave board design
Slave schematic
Simple code LED
Simple code serial monitor
Master code
Slave code

Week 15: Mechanical Design
assignment: design a machine that includes mechanism, actuation and automation. Build the mechanical parts and operate it manually

Finally a real group assignment! I love working together with others! Sadly it happened to be the week that I had to go home to Stockholm, so the collaboration part was a bit harder than it could have been.

Brainstorming

Since we are a big group in Amsterdam, we decided to create two teams. First, we brainstormed in the big group, to later decide the two smaller groups. Very early in the brainstorming session, we noticed that we all wanted to do something silly and fun, not necessarily the most useful machine. Ideas like a karaoke machine, Neil bingo, and engraving cheese came up. One idea stuck a bit more with me: the Dutch lunch generator. If you have ever been in the Netherlands, you know that Dutch people often eat sandwiches for lunch. What they put on the sandwiches ranges from chocolate to minced raw meat, cheese, etc. Henk, Jelka, Hanna, and I decided to go ahead and create a random Dutch lunch generator.

Random Dutch lunch generator

The machine will help you decide the topping on your sandwich, randomly, and in combinations that you otherwise might not have thought of. The bread piece will be placed in a lunchbox that is already standing under the different toppings.
Exactly how the random generator will start is not decided yet, but the idea is that a motor will act according to code that randomizes between the different sections of the stage and then opens up the respective funnel that is stopping the topping from coming out.

We decided to divide the work between us: I was going to build the stage, Jelka the construction over the stage holding up the topping funnels, Henk would create a construction that could turn tubes so that we could have some tube toppings as well, and Hanna would create the construction for the topping funnels and their respective motor.

Creating the stage and the overall construction

Before starting to design the stage, I had a look at some of the previous students' work. Emma had also shown us a modular system for stages. However, we decided fairly early that we wanted to work with MDF, and the folding versions of stages would only work with cardboard. I found this tutorial from previous years. They had made a design in MDF that looked similar to what we wanted to do. They also had a downloadable version of their design. I downloaded the file and started making some changes in Illustrator. The team had decided on some measurements for the machine, and for some reason, I must have gotten them really wrong. However, we didn't notice this until much later, when the stage was cut and assembled. This resulted in Hanna and Jelka re-doing the design while I was away in Stockholm. A blessing in disguise: I got to help when I got back, since the file had minor mistakes and we also decided to make a few tweaks to it.

To walk through the process of re-designing and cutting the stages:
- I exported the dxf file.
- My Fusion 360 was not happy with me at that moment, and decided to be slow and shut itself off. For that reason, I decided to make the alterations in Illustrator.
- The design was for 6mm MDF, and we were going to use 3mm, hence I had to change all the slots.
- Before changing them all, we did a kerf test.
We made slots for three different material thicknesses: 3mm, 3,2mm, and 3,5mm. All of them gave a good press-fit, but the slots for a material thickness of 3,2mm looked the best.
- I changed the slot depth (the material thickness) on all slots and changed the size of the holes. I also took away parts that we were not going to use. This took longer than expected since I had to do every single piece.
- When the design was done, I changed the stroke size to 0,1 and saved the file as an Illustrator CS6 file.
- I uploaded it to the laser cutting software and made sure everything was correct.
- I placed the material on the bed and made a test run for the design. It all looked good.
- I used the settings Power: 45 and Speed: 100 for the cutting.
- Then I pressed start and the cutting went well.
- I assembled the parts and noticed a mistake. The green parts in the picture weren't cut off. I decided to wait with fixing that, and maybe even do it by hand. This wasn't necessary, since we re-did the whole design.
- Then we noticed that the design was too big. Twice, actually, which resulted in us doing this process two more times. The first stage was just too big; for the second one we figured out that there was no "motor screw" long enough and that the length of our design would look asymmetric.

Download files here
Stage parts in Illustrator (top part not right length)
Stage parts DXF
Original stage file

group assignment:
assignment: same as individual assignment

Week 16: Machine Design
assignment: actuate and automate your machine and document the group project and your individual contribution

The second week of making a machine! We left off having the machine work manually. We could turn the screw on the motor to make the movable part on the stage go from one side to another. Now it was time to automate it!

Motor, Arduino Uno, CNC shield and motor drivers

The motor that we are going to use is a Pololu stepper motor, NEMA 17.
It is bipolar and has a 1.8° step angle, meaning it has 200 steps/rev. Each phase draws 1.7 A at 2.8 V. All this can be read in the datasheet on this site.

Pololu stepper motor, NEMA 17

There is a great explanation of what a stepper motor is on learn.adafruit. In this assignment, we are allowed to use an Arduino to run our machine. I was really happy to work with an Arduino Uno, and it was interesting to better understand the Arduino CNC shield and the stepper motor drivers that were going to run the three stepper motors. I did not work with the two servo motors that we used for the funnels; those were connected directly to pins on the Arduino Uno.

I attached the CNC shield to the Arduino Uno. The CNC shield takes the trouble out of doing your own hardware layout; it can host four stepper motors. The CNC shield also allows a higher external power supply for powering the motors. It's very important to get the orientation right: the USB port on the Arduino should be on the same side as the power supply on the CNC shield. The power supply wire can be connected to the 12V power source. Next, I connected the DRV8825 motor drivers onto the CNC shield. Again it's important to get the orientation right; the potentiometers should be on top, in the orientation of the Arduino. What the DRV8825 does is allow higher resolutions by allowing intermediate step locations, which are achieved by energizing the coils with intermediate current levels.

Step-modes

Under the motor driver chips I added three jumpers per motor driver. The jumpers set the microstep selection pins, which select the different step-modes, as shown in the table. With all three jumpers, i.e. all three microstep selection pins connected, the result is a 1/32 step-mode. The 1/32 step-mode means 6400 microsteps per revolution (200x32). This I learned from the product page referred to earlier. Before going further, Henk and I had to set the current limit.
Set the current limit on the stepper motor

This site explains really well why you want to set the current limit on the stepper motor. All in all, it means that you can use voltages above the stepper motor's rated voltage to achieve higher step rates, which in turn means that you can send the step pulses to the motor driver at a higher speed.

All three stepper motors connected

I used a power source, a multimeter, and a small screwdriver to set the current limit. In our case the power source's display was broken, so we had to use an extra multimeter to measure the voltage from the power source. To set the current limit you trim the potentiometer on the board. Henk and I did this together, since one person has to look at the multimeter and turn the screwdriver, and the other needs to hold the multimeter and make sure not to create any shorts. We set the three potentiometers to approx 0.8 V, referring to David's calculations.

Jumpers broken

Something was very wrong when doing this: the multimeter went down to zero every time we did the measurement. I figured that something must be wrong with the hardware, and after a while I saw that one of the jumpers was broken. I replaced it and then it worked fine.

In Arduino IDE

Test the stepper motors

I had gotten some example code from Emma that I tried out to see if it could run the three motors at the same time. I plugged the Arduino into the computer with the USB cable and uploaded the code using Arduino IDE as usual. What I first forgot was to change the settings in Arduino IDE, now that I was using an Arduino Uno, and the programmer "AVRISP mkII". I will explain the final code below, which includes this code too, hence no explanation of the code here.
#define EN 8 //enable pin

//Direction pin
#define X_DIR 5
#define Y_DIR 6
#define Z_DIR 7

//Step pin
#define X_STP 2
#define Y_STP 3
#define Z_STP 4

//DRV8825
int delayTime=30; //Delay between each pause (uS)
int stps=6400; //Steps to move

void step(boolean dir, byte dirPin, byte stepperPin, int steps){
  digitalWrite(dirPin, dir);
  delay(100);
  for (int i = 0; i < steps; i++) {
    digitalWrite(stepperPin, HIGH);
    delayMicroseconds(delayTime);
    digitalWrite(stepperPin, LOW);
    delayMicroseconds(delayTime);
  }
}

void setup(){
  pinMode(X_DIR, OUTPUT); pinMode(X_STP, OUTPUT);
  pinMode(Y_DIR, OUTPUT); pinMode(Y_STP, OUTPUT);
  pinMode(Z_DIR, OUTPUT); pinMode(Z_STP, OUTPUT);
  pinMode(EN, OUTPUT);
  digitalWrite(EN, LOW);
}

void loop(){
  step(false, X_DIR, X_STP, stps); //X, Clockwise
  step(false, Y_DIR, Y_STP, stps); //Y, Clockwise
  step(false, Z_DIR, Z_STP, stps); //Z, Clockwise
  delay(100);
  step(true, X_DIR, X_STP, stps); //X, Counterclockwise
  step(true, Y_DIR, Y_STP, stps); //Y, Counterclockwise
  step(true, Z_DIR, Z_STP, stps); //Z, Counterclockwise
  delay(100);
}

The workflow

Before starting to create the code, we as a team talked about what the code would do, making sure we had a common idea. This was the workflow we decided on:
- Click button to start the Arduino and the loop.
- Randomizer creates a random number: 0, 1, 2, 3.
- The plate moves to the stage section equivalent to the random number.
- The equivalent topping funnel/tube opens up and releases some topping.
- The plate moves back to the origin (stage section 0).
- To get one more topping, repeat the process.

Division of labor was challenging, only working with one Arduino that was connected to the motors. But we made it work and moved the Arduino and the machine between us, making sure everyone could be part of making the code and testing different versions. I felt very much responsible for really understanding the code and making sure everything was covered.
I had a big part in figuring out the workflow above to set the structure of the code.

Testing

To make sure the toppings would be released over the middle of the sandwich, I had to count the steps it took for the stepper motor to come to each specific position. To do that I changed the code so the motor took one step, then delayed a bit, then took a new one, etc. In that way, I could count. The first stage section would be 0 steps, the next one was 8, then 23, then 33.

Final code

The first part of the code is about the servo motors that I have barely mentioned. Hanna has been working on this mainly, hence I don't have that much information except the code. We had to include a servo library and create an object. Jelka also found some code for creating random numbers; that is the next part of the code.

#include <Servo.h> //Include servo library for the funnels
Servo myservo1; //create servo object to control a servo
Servo myservo2; //create servo object to control a servo
int pos = 0; //Variable to store the servo position
long randNumber; //for using random numbers

The next part is typical code for a stepper motor with a CNC shield and motor drivers. First, I enable the motor drivers; the enable pin is pin 8. Then I set the direction pins for the X, Y and Z axes, meaning these will control the direction of the motors. Then the step pins, which control the number of steps the motors take.

#define EN 8 //enable pin is 8

//Direction pin
#define X_DIR 5 //X axis stepper motor
#define Y_DIR 6 //Y axis stepper motor
#define Z_DIR 7 //Z axis stepper motor

//Step pin
#define X_STP 2 //X axis stepper control
#define Y_STP 3 //Y axis stepper control
#define Z_STP 4 //Z axis stepper control

We decided that we wanted to have a switch, a button, to turn on the loop. For that, a button pin had to be defined. I was also responsible for designing the placement of the button.
#define SWITCH1Pin A5

The next part of the code is about the motor drivers and the stepper motor. This tells the number of microsteps. The delay time is the delay between each phase.

//DRV8825
int delayTime=60; //Delay between each phase (uS)
int stps=6400; //Steps to move, microsteps 1/32 (200*32 = 6400)

In the void step, the direction of the motor and the number of steps are decided by the function "step(boolean dir, byte dirPin, byte stepperPin, int steps)". The "boolean" variable means that it can be true or false, which will control the direction of the motor. The dirPin corresponds to the stepper motor dir pin, the stepperPin corresponds to the step pin, and "steps" is the number of steps. The "for" code, "(int i = 0; i < steps; i++)", is a loop that says that every time "i" is smaller than the number of steps, the next part of the code will be executed and looped. After each loop, i is incremented by 1 (i++), so that the loop stops when the number of steps is reached (i < steps).

void step(boolean dir, byte dirPin, byte stepperPin, int steps){
  digitalWrite(dirPin, dir);
  for (int i = 0; i < steps; i++) {
    digitalWrite(stepperPin, HIGH);
    delayMicroseconds(delayTime);
    digitalWrite(stepperPin, LOW);
    delayMicroseconds(delayTime);
  }
}

In the void setup I added the code for the randomizer and set the pin modes for the different motors, the enable pin, and the switch.

void setup(){
  randomSeed(analogRead(0)); // if analog input pin 0 is unconnected, random analog
                             // noise will cause the call to randomSeed() to generate
                             // different seed numbers each time the sketch runs.
                             // randomSeed() will then shuffle the random function.
  pinMode(Y_DIR, OUTPUT); pinMode(Y_STP, OUTPUT); // Motor for stage
  pinMode(Z_DIR, OUTPUT); pinMode(Z_STP, OUTPUT); // Motor for tube 1
  pinMode(X_DIR, OUTPUT); pinMode(X_STP, OUTPUT); // Motor for tube 2
  pinMode(EN, OUTPUT);
  digitalWrite(EN, LOW);
  pinMode(SWITCH1Pin, INPUT_PULLUP);
  myservo1.attach(9);  // attaches the servo on pin 9 to the servo object
  myservo2.attach(10); // attaches the servo on pin 10 to the servo object
}

The void loop is the main loop: if the button is pressed, a random number is read. The random number is generated by randoM(), starting from 0 with 3 as the maximum: 0, 1, 2 or 3. That random number defines which if statement, and hence which stage function, will be executed.

void loop(){
  int currentButtonState = digitalRead(SWITCH1Pin);
  if (currentButtonState == LOW) {
    randoM();
    if (randNumber == 0){
      stage1(); //0 = position 1
    }
    if (randNumber == 1){
      stage2(); //8 = position 2
    }
    if (randNumber == 2){
      stage3(); //24 = position 3
    }
    if (randNumber == 3){
      stage4(); //34 = position 4
    }
  }
}

void randoM(){
  randNumber = random(0, 4);
  Serial.println(randNumber);
}

void stage1(){ //position 1. Don't move the stage; use the motor connected to Z_DIR to release some topping, then wait some ms.
  step(false, Z_DIR, Z_STP, 1600); //Z, clockwise
  delay(2000);
}

void stage2(){ //position 2. Move the stage 8 steps, release topping 2 (turn the servo), wait some ms and return home.
  for (int i = 0; i < 8; ++i){
    step(false, Y_DIR, Y_STP, stps);
  }
  delay(2000);
  for (pos = 0; pos <= 90; pos += 1) { // goes from 0 degrees to 90 degrees, in steps of 1 degree
    myservo1.write(pos); // tell servo to go to position in variable 'pos'
    delay(5); // waits 5ms for the servo to reach the position
  }
  for (pos = 90; pos >= 0; pos -= 1) { // goes from 90 degrees back to 0 degrees
    myservo1.write(pos); // tell servo to go to position in variable 'pos'
    delay(5); // waits 5ms for the servo to reach the position
  }
  delay(2000);
  for (int i = 0; i < 8; ++i){
    step(true, Y_DIR, Y_STP, stps); // go back to starting position
  }
  delay(2000);
}

void stage3(){ //position 3. Move the stage 23 steps, release topping 3 (turn the servo), wait some ms and return home.
  for (int i = 0; i < 23; ++i){
    step(false, Y_DIR, Y_STP, stps);
  }
  delay(2000);
  for (pos = 0; pos <= 180; pos += 1) { // goes from 0 degrees to 180 degrees, in steps of 1 degree
    myservo2.write(pos); // tell servo to go to position in variable 'pos'
    delay(5); // waits 5ms between each degree for the servo to reach the position (speed of the shaft)
  }
  for (pos = 180; pos >= 0; pos -= 1) { // goes from 180 degrees back to 0 degrees
    myservo2.write(pos); // tell servo to go to position in variable 'pos'
    delay(5); // waits 5ms between each degree for the servo to reach the position (speed of the shaft)
  }
  delay(2000);
  for (int i = 0; i < 23; ++i){
    step(true, Y_DIR, Y_STP, stps); // go back to starting position
  }
  delay(2000);
}

void stage4(){ //position 4. Move the stage 33 steps, release topping from tube 2, wait some ms and return home.
  for (int i = 0; i < 33; ++i){
    step(false, Y_DIR, Y_STP, stps);
  }
  //turn motor for tube 2
  step(false, X_DIR, X_STP, 1600); //X, clockwise
  delay(2000);
  for (int i = 0; i < 33; ++i){
    step(true, Y_DIR, Y_STP, stps); // go back to starting position
  }
  delay(2000);
}

Assemble the machine

When the code was up and running, we put the machine together. I added some glue and tape to make it stable.
What would I have done differently next time?

We have made so many versions of the stage, making it smaller, shorter, changing the top part, making it fit other parts of the machine, etc. So what I would do differently next time would probably be to not make the actual shell of the machine before making sure everything works as planned. I would also make sure we had all the components before deciding what types of functions the machine would have. There weren't enough stepper motors without a screw attached, which resulted in us not having the second topping tube.

Download files here

Final code for Random Dutch Lunch Generator
Stepper motor test code
Dxf-file for cutting

group assignment: same as individual assignment

Week 17: Wildcard Week assignment:. Fun week! But wildcard week didn't become so "wild" for me; I decided to try composites, as in previous years of Fab Academy. I have realized that I prefer the weeks of creating something with my hands, preferably something bigger, and also something that could be useful in my everyday life. Most often it is deciding what to make that proportionally takes the longest. Therefore, I decided to make something that I need. And I need a new flower pot for a plant that my sister has been taking care of while I'm here in Amsterdam. I had a look at some work that has been done in Fab Academy before: this one and this one. My requirements for this assignment are similar to the ones for the composite week in prior years: design and make a 3D mold, and produce a fiber composite part in it.

Designing a flower pot

I decided that I wanted to make my mold in cardboard, creating slices that when assembled could work as my mold. Therefore, I used Fusion 360 and Slicer for Fusion 360 to create the flower pot.

Fusion 360

I made a simple shape for the flower pot, more or less a rectangle with the corners rounded using the "Fillet" command.
I also made circles on the side of the rectangle that, when revolved, would create a pattern. I made one circle and then used the "Rectangular Pattern" command to create more in line with the first one. When I then marked all the profiles and the axis to revolve around, using the "Revolve" command, the design became a solid flower pot. I right-clicked on the body in the browser and saved it as an STL.

Slicer for Fusion 360

I have used this program a lot to make lamps in my spare time. It's great because it helps you create slices of your design and does all the calculations for slots etc. However, it has many bugs, one of them being that you can't change the material thickness in an easy way (David actually found a way around this), and in my opinion the pdf or dxf files of the parts need some post-work in Illustrator.

Settings in Slicer

Workflow in Slicer for Fusion 360
- Import the model
- Click on the pen symbol in the "Manufacturing Settings" and create a new item. Set the material thickness, the size of the material you will cut, and the offset, i.e. the kerf.
- In "Construction Technique", click on the technique you prefer. I used "Radial Slices".
- Set the "Slice Distribution", deciding the number of radial and axial slices.
- In "Modify Form" I click "Hollow" and hollow the model the amount I want. In this case, I kept the model solid.
- Click "Get Plans" to export your file. I use PDF and then do some post-work in Illustrator.

Post-work in Illustrator

In Illustrator I place the parts in a better way, to reduce the amount of waste material. I also take away the numbers for the assembly, since it is easy to understand how this design should be assembled.

Laser cut the design

I used 3mm cardboard for my design and I followed the workflow from the week of Computer-Controlled Cutting.

3 mm cardboard
- Speed: 150
- Power: 90

The cut went great and I assembled the pieces. Now my mold was ready!
Creating the flower pot of composite

Prepare for composites

If you put epoxy directly on cardboard, it will stick to it. For that reason, I needed to prepare the mold with plastic foil and vaseline so the epoxy would not stick to my mold and could easily slide off when cured. I figured that, to attach the plastic foil and to keep some of the shape of the design, "the bubbles", some elastic bands would do the trick. I then added vaseline on top of the plastic foil. The material that I was going to use was a jute fabric that we have in the Fab Lab. The material was pretty thick, so I decided to do only two layers. I cut the fabric into three different parts: one circle for the bottom and one long rectangle for the side, these being the base layer, and then one sun-like shape to put over the base layer.

Prepare the space

Epoxy is both sticky and bad to inhale or get in contact with, so preparing the space is essential for this project. My working area would be next to the open window for better ventilation. I covered the table with some big black garbage bags and made sure I had everything I would need at the table before starting: scissors, paper towels, extra protective gloves, vaseline, elastic bands, my fabric, etc. I wore a laboratory jacket, protective gloves, safety glasses and a gas mask.

Blend the epoxy

I blended the epoxy in the ventilation cabinet. The epoxy I was using is the Tarbender, High Gloss Clear Coating and Encapsulant. The datasheet says that the mix ratio by volume is 2A:1B, but I decided to go for the more precise measurement and mix by weight, 100A:41B. I did two batches, because the datasheet also gives an estimate of the coverage rates, which made me believe that two batches would be enough. It actually turned out to be the perfect amount. I poured part A and part B into separate cups and weighed them. When I had the right amount, I poured both of them into a bigger cup.
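The by-weight ratio above, 100A:41B, turns into batch amounts with a little arithmetic; a minimal sketch (the helper name and the 141 g example batch are my own illustration, not from the datasheet):

```python
def epoxy_parts(total_g, ratio_a=100, ratio_b=41):
    # Split a target batch weight into part A and part B
    # according to a by-weight mix ratio (here 100A:41B).
    unit = total_g / (ratio_a + ratio_b)
    return round(ratio_a * unit, 1), round(ratio_b * unit, 1)

# A 141 g batch comes out as 100 g of part A and 41 g of part B.
print(epoxy_parts(141))  # (100.0, 41.0)
```

The same helper works for any batch size, which is handy when scaling up to two batches.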
Then I stirred for two minutes to make sure they would blend well.

Attaching the fabric to the mold

To make sure I didn't add too much epoxy to my fabric, I squeezed every piece before attaching it to the mold, checking whether epoxy would come out and also making it spread better through the material. Then I attached it to the mold, with some elastic bands helping it hold.

Curing in a vacuum bag

For the epoxy to cure in a good way, I decided to use vacuum bagging. I put some vaseline on some baking paper and wrapped that around my flower pot full of epoxy, so it would not stick to the vacuum bag. I again used elastic bands to keep it together. I placed the flower pot inside the bag and used the regular vacuum to suck out all the air. My flower pot was ready to cure; cure time 16 hours.

The final flower pot

I left my flower pot to cure from Friday to Monday. When I took the flower pot out of the vacuum bag on Monday, I could feel it had hardened. It was a bit sticky, but that was from the vaseline. My mold was stuck in the flower pot, since I had bent its edges a bit. Because of this, I had to tear the cardboard off. I'm really happy with the result of the pot. The "bubbles" are somewhat visible, but even more visible are the areas between the cardboard slices, which I enhanced by pulling the material a bit extra in those middle spaces.

Download files here

Flower pot ai
Link to Flower Pot STL
Flower pot f3d

Week 18: Applications and Implications assignment: propose a final project that integrates the range of units covered.

What will it do?

I have made interactive cardboard cubes for my final project. Looking at one, it seems like it's only a stack of cardboard pieces. But when you hold the cubes or put them together, LEDs will light up and blink in different ways depending on the action: how many you put together, or what you put them close to. The cube is a toy for kids.
My sister's two kids, Ingrid the Tiger and Ebbe the Bear, like playing with regular building blocks, many times made out of wood. I thought I should add some features to these, making them blink, sing, vibrate, etc. This was my initial idea for my final project: making wooden cubes that would have multiple outputs depending on the connections made between the cubes. Even with the time limitation (I have to leave the Fab Lab in Amsterdam already at the end of May), I am really happy with the result, and I really like the look of "a stack of cardboard". I have been striving for something that looks simple but becomes more interesting when you interact with it. That is also one of the reasons for choosing cardboard as my main material. I have also made sure that the cube can be taken apart and assembled again, without the use of glue, screws or similar. The reason for this is that I want the users to be able to have a look and see how it works. The inner structure can also be placed in other shapes: spheres, cylinders, stars etc., which makes the design flexible and explorative.

The cardboard cube is 8x8x8 cm, and it has 8 neodymium magnets, one in each corner. Inside the cube there is a 3D printed 5x5x5 cm inner structure. The inner structure holds a PCB in the middle of the cube. The cube runs on a 3V coin battery placed on the backside of the board. My input device is two capacitive sensors, connected to vinyl-cut copper foil that is designed to fit the 3D printed inner structure. The output is 6 LEDs, using the charlieplexing technique to drive all the LEDs with the number of pins available on the ATtiny84.

Who's done what beforehand?

Neil says "stand on the shoulders of giants", meaning we should learn from what others have done and make something better. I have been standing on my own shoulders during this project, and it's been more than I can handle at this point and with the time limit.
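Charlieplexing, mentioned above, is what lets 6 LEDs run off only 3 microcontroller pins: each LED sits between an ordered pair of pins and lights when one pin is driven HIGH, the other LOW, and all remaining pins are left floating (tri-stated). A small sketch of the idea (not the project's actual firmware; the pin names are made up):

```python
from itertools import permutations

def charlieplex_states(pins):
    # One drive state per LED: the ordered pair (high, low) selects
    # the LED; every other pin is tri-stated ("Z", high impedance).
    states = []
    for high, low in permutations(pins, 2):
        state = {p: "Z" for p in pins}
        state[high], state[low] = "HIGH", "LOW"
        states.append(((high, low), state))
    return states

# 3 pins are enough for 3 * (3 - 1) = 6 LEDs, matching the 6 LEDs here.
states = charlieplex_states(["PA0", "PA1", "PA2"])
print(len(states))  # 6
```

Because each state drives exactly one LED, animations are made by cycling through the states faster than the eye can see.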
I have seen many interactive cubes before, many of them made at the Fab Academy; see the list below. And I have seen a great number of interactive toys for kids, many that are very fun and complex, and some really explorative and useful for learning. My cube is far from as interesting, fun, or educational as many of these. I do, however, like that mine is made out of cardboard, an affordable and creative material. I also like that the parts can be taken apart and assembled again.

Inspiration:
- Johanna Okerlund from Fab Academy 2017 made a set of blocks that, when connected to each other and after pressing a button on the computer, created a poem.
- Takuma Oami from Fab Academy 2014 made building blocks that can be exported to a PC as a 3D model.
- Dana Schwimmer from Fab Academy 2014 made sound cubes.
- AutomaTiles has made some really nice looking hexagons that can connect on all sides and do many more things.
- Cubelets Robot Blocks: robot-like cubes with many different inputs and outputs. Very good for kids' exploration.
- Magna-Tiles: a kids' toy for construction of shapes, with magnets being able to connect on all sides.

I have designed and made all the parts of the interactive cardboard cube (except for the magnets and the components on the board). I have done the schematic and board design in Eagle, vinyl cut the copper foil designed in Illustrator, and laser cut the cardboard, designed in Fusion 360 and Slicer for Fusion 360. I have also 3D printed the inner structure, which is likewise designed in Fusion 360.

I have incorporated all the requirements for the final project.
- 2D and 3D design: in Fusion 360, Slicer for Fusion 360, and Adobe Illustrator
- additive and subtractive fabrication processes: additive by using the 3D printer, and subtractive by using the laser cutter, the vinyl cutter and the small CNC milling machine for making the board.
- electronics design and production: creating the schematic, designing the board, milling the board, soldering on the components
- microcontroller interfacing and programming: programming the board in the Arduino IDE using C.
- system integration and packaging: I have designed my cube so that all parts fit nicely together. I have used the square centimeters in an effective way, not leaving any space unused. Except for the LEDs on the top and the bottom, there are no wires connecting the pieces. The cube is ready to use, and even if we're not used to a "stack of cardboard pieces", this cube is ready to be played with!

What materials and components will be used?

Components on the PCB:
- ATtiny84
- 6 pin header
- 4 pin header x2
- Capacitor 1uF
- Resonator 20 MHz
- Resistor 0 ohm x3
- Resistor 499 ohm x3 (might change depending on LEDs)
- Resistor 10k ohm x1
- Resistor 10M ohm x2 (might change depending on the capacitive sensors when inside the cube)
- 2032 battery holder
- CR2032 Lithium battery 3V
- LED (through hole) x6
- Wires
- Soldered onto a one-sided copper plate

Material:
- Cardboard, 8x8x29 cm
- PLA 0,4 mm, approx 50 cm
- Cube magnets, neodymium, 5 mm, x8
- Copper foil, 5x5x15x2 cm

How much will it cost and where do you buy it?

All my components and material can be found in the Fab Lab. The Fab Lab buys them from electronic resellers in China or from Digi-Key. The magnets and the battery make up more than half the cost of the cube. The components and the material all cost less if you buy more than just one, so these costs are upper estimates.
Cost per cube:
- Cardboard: € 0,98 (€ 4,90 for a piece fitting 5 cubes)
- PLA:
- Cube magnets: € 3,04 (€ 0,38/magnet)
- Copper foil:
- Components:
  - ATtiny84: € 0,58
  - 6 pin header: € 0,24
  - 4 pin header x2
  - Capacitor 1uF
  - Resonator 20 MHz
  - Resistor 0 ohm x3
  - Resistor 499 ohm x3 (might change)
  - Resistor 10k ohm x1
  - Resistor 10M ohm x2
  - 2032 battery holder
  - CR2032 Lithium battery 3V: € 2,95
  - LED (through hole) x6
  - Wires
  - Soldered onto a one-sided copper plate
- Time: :)
- Total: around 10 €

How will it be evaluated?

My interactive cube should be evaluated on the creativity of the simple design, making a cube that can be taken apart and reassembled. This creates possibilities for making other shapes, and also for exploring the different components inside the cube. It can also be evaluated on the consideration of designing and making all the parts, and making them from cheap material, resulting in a very affordable cube. It should of course also be evaluated on the performance of the input and the output. Since neither of these is that special, I want it to be evaluated on the first and second impression: first just seeing the stack of cardboard, then holding it and putting it together with other cubes, and then taking it apart to look inside. Was it what you had expected?

My own concerns and thoughts

Oh, there are many things that I would do differently if I had more time. The biggest thing would be to make more outputs. At this point my cube isn't that interesting; it's just LEDs blinking in different ways, and I would have liked it to be more fun. Also, with different outputs, maybe on different sides of the cube, it would have been more explorative for the kid. If I had more time I would redo the board, making it look better. I really like making the board as small as possible and also good-looking. I also made the holes for the battery holder a bit too big; I could have made the holes just small enough for a regular wire.
Right now I haven't figured out how to attach the pieces together. I have holes for a stick to be placed in, but it will be important to attach the first and the last piece really firmly, so that the magnets do not pull the cube open.

Next steps
- 3D print one more inner structure
- Attach copper foil to the inner structure
- Solder two more boards
- Program all three boards with the final code
- Test the cubes together and calibrate the sensors
- Find a way to put the pieces together + eventually design and make that piece
- Write documentation
- Make video

I'm almost there!

Week 19: Invention, Intellectual Property, and Income assignment: develop a plan for dissemination of your final project

This week's assignment is about how I plan to disseminate, or spread and promote, my final project to a wider audience. It is also about the associated property rights and what sort of advantage or profit I will gain from it.

Licenses

Since my final product isn't rocket science and not very innovative, I have no plans of licensing it or disseminating it further than the final presentation; I will then apply my learnings to projects that I'll do in the future. Furthermore, I have no plans of claiming any intellectual property rights. Wikipedia explains it well: intellectual property (IP) is a category of property that includes intangible creations of the human intellect, and primarily encompasses copyrights, patents, and trademarks. If anyone, at any time, wants to add on to my final project or use it as it is, I would be honored and happy. One could say that I'm applying a version of an open source license, in which I give others the rights to study, change, and distribute my final project to anyone and for any purpose. This is also in line with the Fab Labs, which adopt a sharing model where everything is made available for others to learn from. Even though I'm not planning to claim any rights or license my final project, I did find one license that I liked and could use another time.
On the Creative Commons website you can read about how they help to legally share knowledge and creativity. They do this to build a more equitable, accessible, and innovative world, which is in line with my values. Creative Commons offers free, easy-to-use copyright licenses. They provide a simple, standardized way to give permission to share and use creative work. The license I would use is the "Attribution 4.0 International", which they recommend when going through their website and selecting the options you would like for your license.

Income generation

My intention is not to make money on my final project. The main gain was to learn and explore, and that I have succeeded with.

Week 20: Project Development assignment: complete your final project, tracking your progress

Most of this documentation can be found under the Final Project page.

What tasks have been completed, and what tasks remain?

When I left to go home to Sweden during the last week, I was mostly done with all the different parts of the cube. I had to figure out a way to attach the pieces together. In Sweden, I designed and printed the rods and caps. Back in Amsterdam for the final presentation, I did the final calibration of the capacitive sensors.

What is the deadline?

The deadline for the final project is the 20th of June, when I present to all the other students.

How will I complete the remaining tasks in time?

I have planned the different tasks well, so I will complete everything in time. If I had more time, I would redesign the board and keep only one sensor, which would ease the calibration; that has taken the longest time due to the instability of capacitive sensors.

What has worked? What hasn't?

Everything has worked fine.
As mentioned, there have been some troubles with the capacitive sensors, since the idle readings change depending on what cables are attached to the board, how close it is to the computer, whether the battery is attached, etc., which made that part really hard and time-consuming. It was hard to find a threshold that was correct at all times.

What questions need to be resolved?

If I were to do this project again, I would re-think the input device. Capacitive sensors are not the most stable, nor the easiest to calibrate. So I guess the question is: which input device should I use?

What have you learned?

I have learned so many things. I integrated most of the processes and methods that we learned during Fab Academy. But mostly I have learned things about myself: how I function under stress, what I like exploring, what I don't like as much, how much I'm capable of, and that I could possibly make (almost) anything.
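One common way to cope with a drifting idle level like the capacitive-sensor trouble described above is an adaptive threshold: track the baseline with a moving average and flag a touch only when a sample rises well above it. A sketch of that general approach (the margin and smoothing values are my own illustration, not numbers from this project):

```python
class TouchDetector:
    # Adaptive threshold for a noisy capacitive sensor: track the idle
    # baseline with an exponential moving average and report a touch
    # when a sample exceeds baseline * margin.
    def __init__(self, margin=1.5, alpha=0.05):
        self.margin = margin
        self.alpha = alpha
        self.baseline = None

    def update(self, sample):
        if self.baseline is None:      # first sample seeds the baseline
            self.baseline = float(sample)
            return False
        touched = sample > self.baseline * self.margin
        if not touched:
            # Only adapt while idle, so a long touch is not
            # absorbed into the baseline.
            self.baseline += self.alpha * (sample - self.baseline)
        return touched

det = TouchDetector()
for s in [100, 102, 99, 101, 103, 100]:   # idle drift around 100
    det.update(s)
print(det.update(250))  # well above the tracked baseline -> True
```

Because the baseline follows slow drift (cables, battery, a nearby laptop) but freezes during a touch, the same idea ports directly to a microcontroller loop.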
http://fab.academany.org/2018/labs/fablabamsterdam/students/johanna-nordin/weekly%20assignments.html
3.3. Python 3 compatibility

A good place to start looking for advice on making code compatible with both Python 3.x and Python 2.6/2.7 is the python-future cheat sheet. Buildbot uses the python-future library to ensure compatibility with both Python 2.6/2.7 and Python 3.x.

3.3.1. Imports

All __future__ imports have to happen at the top of the module; anything else is seen as a syntax error. All imports from the python-future package should happen below the __future__ imports, but before any others.

Yes:

from __future__ import print_function
from builtins import str

No:

from twisted.application import internet
from twisted.spread import pb
from builtins import str
from __future__ import print_function

3.3.2. Dictionaries

In Python 3, dict.iteritems is no longer available. While dict.items() does exist, it can be memory intensive in Python 2. For this reason, please use the python-future function iteritems().

Example:

d = {"cheese": 4, "bread": 5, "milk": 1}
for item, num in d.iteritems():
    print("We have {} {}".format(num, item))

should be written as:

from future.utils import iteritems

d = {"cheese": 4, "bread": 5, "milk": 1}
for item, num in iteritems(d):
    print("We have {} {}".format(num, item))

This also applies to the similar methods dict.itervalues() and dict.values(), which have the equivalent itervalues(). If a list is required, please use list(iteritems(d)). This is for compatibility with the six library.

For iterating over dictionary keys, please use for key in d:. For example:

d = {"cheese": 4, "bread": 5, "milk": 1}
for item in d:
    print("We have {}".format(item))

Similarly, when you want a list of keys:

keys = list(d)

3.3.3. New-style classes

All classes in Python 3 are new-style, so any classes added to the code base must therefore be new-style.
This is done by inheriting from object.

Old-style:

class Foo:
    def __init__(self, bar):
        self.bar = bar

New-style:

class Foo(object):
    def __init__(self, bar):
        self.bar = bar

When creating new-style classes, it is advised to import object from the builtins module. The reasoning for this can be read in the python-future documentation.

3.3.4. Strings

Note: This has not yet been implemented in the current code base, and will not be strictly adhered to yet. But it is important to keep in mind when writing code that there is a strict distinction between bytestrings and unicode in Python 3.

In Python 2, there is only one type of string, which can act as both unicode and a bytestring. In Python 3, this is no longer the case. For this reason, all string literals must be marked with either u'' or b'' to signify whether the string is a unicode string or a bytestring, respectively.

Example:

u'this is a unicode string, a string for humans to read'
b'This is a bytestring, a string for computers to read'

3.3.5. Exceptions

All exceptions should be written with the as statement.

Before:

try:
    number = 5 / 0
except ZeroDivisionError, err:
    print(err)

After:

try:
    number = 5 / 0
except ZeroDivisionError as err:
    print(err)

3.3.6. Basestring

In Python 2 there is a basestring type, which both str and unicode inherit from. In Python 3, only unicode strings are of type str, while bytestrings are of type bytes. For this reason, we use the str builtin from python-future.

Before:

s = "this is a string"
if isinstance(s, basestring):
    print "This line will run"

After:

from builtins import str

unicode_s = u"this is a unicode string"
byte_s = b"this is a bytestring"
if isinstance(unicode_s, str):
    print("This line will print")
if isinstance(byte_s, str):
    print("This line will not print")

3.3.7. Print statements

Print statements are gone in Python 3. Please add from __future__ import print_function at the very top of the module to enable the use of Python 3 style print functions.

3.3.8. Division

Integer division is slightly different in Python 3.
// is integer division and / is floating point division. For this reason, we use division from the __future__ module.

Before:

2 / 3 = 0

After:

from __future__ import division

2 / 3 = 0.6666666666666666
2 // 3 = 0

3.3.9. Types

The types standard library module has changed in Python 3. Please make sure to read the official documentation for the library and adapt accordingly.
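The section above does not show a concrete example of what changed; one common difference is that Python 2's type aliases such as types.StringType and types.IntType were removed, while types that have no builtin name, like types.FunctionType, remain. A quick check that runs on Python 3:

```python
import types

# The Python 2 aliases were removed in Python 3...
print(hasattr(types, "StringType"))  # False
print(hasattr(types, "IntType"))     # False

# ...so use the built-in names directly instead:
print(type("abc") is str)            # True

# Types without a builtin name are still in the module:
print(isinstance(lambda: None, types.FunctionType))  # True
```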
http://docs.buildbot.net/current/developer/py3-compat.html
hey, i am gopi. i am new to arduino, and i have a problem with the serial monitor in arduino. i have done some library and board updates. when i run the code and open the serial monitor, the window displays some garbled text. previously it was fine, showing english words and numerical values. can you help me fix this? also, the code i am running shows one error:

Arduino: 1.8.4 (Windows 10), Board: "Arduino/Genuino Uno"

C:\Users\gopi\Documents\Arduino\my_research\my_research.ino:3:20: fatal error: boards.h: No such file or directory

#include <boards.h>

^

compilation terminated.

exit status 1

Error compiling for board Arduino/Genuino Uno.
https://forum.arduino.cc/t/arduino-serial-monitor/481226
I need to design a tool Bar which looks like a Formating toolbar in MS-Office Winword(Standard & Formating) tool Bar. Please help me... Thanks in Advance Give me some java Forum like Rose India.net Give me some java Forum like Rose India.net Friends... Please suggest some forum like RoseIndia.net where we can ask question like here. Thanks Java swing - Java Beginners Java swing Hi, I want to copy the file from one directory to another directory,the same time i want to get the particular copying filename will be displayed in the page and progress bar also. Example, I have 10 files java swing - Java Beginners (){ JFrame f = new JFrame("Frame in Java Swing"); f.getContentPane().setLayout(null...java swing How to set the rang validation on textfield, compare validation textfields , and also if we create a groupbutton like male female, if we -to-One Relationship | JPA-QL Queries Java Swing Tutorial Section Java Swing Introduction | Java 2D API | Data Transfer in Java Swing | Internationalization in Java Swing | Localization | What is java swing java swing - Java Beginners java swing How to upload the image in using java swings. ex- we make a button to browsbutton and savebutton , when we click on the browsbutton , i...; for ( Iterator i = filenames.iterator(); i.hasNext(); ) { String Java swing - Java Beginners Java swing I created a text box and a button. When clicking a button a message have to come like "welcome then the thing i entered in the text box" Hi Friend, Try the following code: import java.awt.*; import Hibernate Criteria Like and Between Example the result according to like and between condition. Here is the simple Example code... Hibernate Criteria Like and Between Example In this Example, We.... In this example we create a criteria instance and implement the factory methods swing. java swing. Hi How SetBounds is used in java programs.The values in the setBounds refer to what? 
ie for example setBounds(30,30,30,30) and in that the four 30's refer to swing - Swing AWT : Thanks...java swing how to add image in JPanel in Swing? Hi Friend, Try the following code: import java.awt.*; import java.awt.image. Java Programming: Chapter 7 Index routines covered in that chapter suffice. But the Swing graphical user interface... to Swing. Although the title of the chapter is "Advanced GUI Programming," it is still just an introduction. Full coverage of Swing would require at least "Doubt on Swing" - Java Beginners "Doubt on Swing" Hi Friend.... Thanks for ur goog Response.. i need to create a GUI Like... pic1.gif RadioButton pic2.gif RadioButton Pic3.gif RadioButton If we have select d appropriate radio Reply Me - Java Beginners Reply Me Hi,.. I have a some fields in one form like security and some more fields this fields maintain the table if user click security... question then reply me Swing Button Example Swing Button Example Hi, How to create an example of Swing button in Java? Thanks Hi, Check the example at How to Create Button on Frame?. Thanks Java Swing Tutorials button in java swing. Radio Button is like check box.  ... how to create the JTabbedPane container in Java Swing. The example... of GUI. Chess Application In Java Swing In the given example Java Swing code for zoom in and out Java Swing code for zoom in and out hi.......... I require a code in java swing for image zoom in and zoom out can u tell me how it can be done or what is the code plz help Reply me - Java Beginners Reply me Hi, this code is .java file but i am working in jsp tchnologies and i wantr this if user input a in text box the table have... in the database... if u understood my question then then please send me code oterwise Java swing java swing - Swing AWT java swing how i can insert in JFrame in swing? 
Hi Friend, Try the following code: import java.awt.*; import javax.swing.*; import java.awt.event.*; class FormDemo extends JFrame { JButton ADD; JPanel help me - Java Beginners help me helo guys can you share me a code about Currency Conversion.Money will convert according to the type of currency.would you help me please...: sample I input DOLLARS (so the user must choose a currency if dollars ,swiss Java - Swing AWT How to start learning Java I am a Java Beginner ...so, please guide me how to start How to create Runtime time jLabel in java swing ? How to create Runtime time jLabel in java swing ? hi sir. my problem is that i want to display database row on a jLabel. Suppose i retrived a table from database & i want to display its row value on jLabel. i m facing according to me its ok but is calculating wrong values in according to me its ok but is calculating wrong values in program... += 5; break; case 'I...': decimal2 += 5; break; case 'I Java - Swing AWT Java Hello friends, I am developing an desktop application in java & I want to change the default java's symbol on the top & put my own symbol there.. Can Anyone help me Java Programming: Chapter 10 Index is referred to as input/output, or I/O. Up until now, the only type of interaction... connections. In Java, input/output involving files and networks is based on streams, which are objects that support the same sort of I/O commands that you Reply Me - Java Beginners Reply Me Hi Rajnikant, I know MVC Architecture but how can use this i don't know... please tell me what is the use... class (which contain your database information). Check previous I send U, Details structure Using jsp code I m... in name text box like a,then display record related to a not b and c(means searching using name with alphabetical order). I want to display only record based Java Programming: Chapter 2 Index Chapter 2 Programming in the Small I Names and Things... is what I call "programming in the large." Programming in the small... 
and decisions. In a high-level language such as Java, you get to work java - Swing AWT java how can i add items to combobox at runtime from jdbc Hi Friend, Please visit the following link: Thanks Hi Friend Reply Me - Java Beginners Reply Me Hi, For table I have a two table first is sales... Hi, In which technology we should developed the example. * JSP * Servlets * Struts 1 * Struts 2 or JSF? Please tell me Swing In Java Swing Example Here I am giving a simple example where we will use the Swing API. This example will demonstrate you about how to use the Swing components in Java...Swing In Java In this tutorial we will read about the various aspects of Swing Java Programming: Chapter 4 Index Chapter 4 Programming in the Large I Subroutines ONE WAY.... As mentioned in Section 3.7, subroutines in Java can be either static or non... | Main Index java swing (jtable) java swing (jtable) hii..how to get values of a particular record in jtable from ms access database using java swing in netbeans..?? please help..its urgent.. Here is an example that retrieves the particular record Java Swing : JButton Example Java Swing : JButton Example In this section we will discuss how to create button by using JButton in swing framework. JButton : JButton Class extends... property to a value according to present look and feel. Example Help me Help me plz i want code of program to add real numbers and magic numbers in java example this input :- 5+3i-2+3i output:- 3+6i Java question - Swing AWT Java question I want to create two JTable in a frame. The data... columns-Item_code,Item_name,Item_Price. When I click on one of the row in first... = md.getColumnCount(); for (int i = 1; i <= columns; i
http://roseindia.net/tutorialhelp/comment/8519
vikas sharmaa wrote:
I have not written a reset or validate method in the LoginForm class. Is that OK? Also, I have not entered an input tag for the action class in the struts-config.xml file. Is that OK? As far as I know these are optional, but I am asking after reading the following lines from the web page: "The execute() method of an action class is not called if ActionForm.validate() returns a non-empty ActionErrors object. Instead, control is routed directly to the URI defined in the 'input' attribute of the action mapping."

vikas sharmaa wrote:
I have developed a very basic Struts application, but my action class is not getting called. Please check the web.xml and struts-config.xml files, and LoginAction.java below:

public class LoginAction extends Action {
    public ActionForward execute(ActionMapping mapping, ActionForm form,
            HttpServletRequest request, HttpServletResponse response)
            throws IOException, ServletException {
        System.out.println("called....");
        return mapping.findForward("success");
    }
}

Please let me know what the problem in the above code is. If you require, I will copy the LoginForm.java and welcome.jsp files as well.

vikas sharmaa wrote:
The problem is fixed now! I was mistakenly using an older version of struts.jar where the execute method was not defined in the Action class.
http://www.coderanch.com/t/428581/Struts/action-class-called-jsp-page
The title says it all. The following bit of pseudo-code returns the error below:

df = pd.read_sql(query, conn, parse_dates=["datetime"], index_col="datetime")
df['datetime']

I get:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "C:\Users\admin\.virtualenvs\EnkiForex-ey09TNOL\lib\site-packages\pandas\core\indexes\base.py", line 2656, in get_loc
    return self._engine.get_loc(key)
  File "pandas\_libs\index.pyx", line 108, in pandas._libs.index.IndexEngine.get_loc
  File "pandas\_libs\index.pyx", line 132, in pandas._libs.index.IndexEngine.get_loc
  File "pandas\_libs\hashtable_class_helper.pxi", line 1601, in pandas._libs.hashtable.PyObjectHashTable.get_item
  File "pandas\_libs\hashtable_class_helper.pxi", line 1608, in pandas._libs.hashtable.PyObjectHashTable.get_item
KeyError: 'datetime'

Am I misunderstanding what's going on by indexing the datetime column? I can access all the other columns normally.

Answer: An index is not a column. Think of the index as labels for the rows of the DataFrame. index_col='datetime' makes the datetime column (in the source data) the index of df. To access the index, use df.index.

Answer: For example:

import pandas as pd
d = {'col1': [1, 2], 'col2': [3, 4]}
df = pd.DataFrame(d)
time = pd.date_range(end='4/5/2018', periods=2)
df.index = time
df.index

The result is DatetimeIndex(['2018-04-04', '2018-04-05'], dtype='datetime64[ns]', freq='D'). Just use df.index to get the information from the index_col.

So you, my dear Python enthusiast, have been learning Pandas and Matplotlib for a while and have written a super cool script to analyze your data and visualize it. You are ready to run it on a huge file, and all of a sudden your laptop starts making an ugly noise and burning like hell. Sounds familiar? Well, I have some good news for you: this doesn't need to happen anymore, and no, you don't need to upgrade your laptop or your server.
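The index-vs-column distinction drawn in the Q&A above can be made concrete with a small sketch. The DataFrame here is built by hand, standing in for the result of read_sql(..., index_col="datetime"), since we have no database connection; the column name is made up:

```python
import pandas as pd

# A toy frame standing in for read_sql(..., index_col="datetime"):
df = pd.DataFrame(
    {"price": [1.1, 1.2]},
    index=pd.to_datetime(["2019-03-09 10:00", "2019-03-09 10:01"]),
)
df.index.name = "datetime"

# "datetime" is now the index, not a column, so df["datetime"] raises KeyError:
assert "datetime" not in df.columns

# The values are reached through the index instead:
print(df.index[0])  # 2019-03-09 10:00:00
```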
Dask is a flexible library for parallel computing with Python. It provides multi-core and distributed parallel execution on larger-than-memory datasets. It figures out how to break up large computations and route parts of them efficiently onto distributed hardware.

A massive cluster is not always the right choice. Today's laptops and workstations are surprisingly powerful and, if used correctly, Dask can empower analysts to manipulate 100GB+ datasets on a laptop or 1TB+ datasets on a workstation without bothering with a cluster at all. The project has been a massive plus for the Python machine learning ecosystem because it democratizes big-data analysis. Not only can you save money on bigger servers, but it also copies the Pandas API, so you can run your Pandas script after changing very few lines of code.

Pandas is used for data manipulation, analysis and cleaning.

What are DataFrames and Series? A DataFrame is two-dimensional, size-mutable, potentially heterogeneous tabular data. It contains rows and columns, and arithmetic operations can be applied to both rows and columns. A Series is a one-dimensional labeled array capable of holding data of any type: integers, floats, strings, Python objects and so on. A Pandas Series is essentially a single column of a spreadsheet.

s = pd.Series([1, 2, 3, 4, 56, np.nan, 7, 8, 90])
print(s)

How do you create a DataFrame by passing a NumPy array?

In my last post, I mentioned the groupby technique in the Pandas library. After creating a groupby object, you are limited to making calculations on the grouped data using groupby's own functions. For example, in the last lesson, we were able to use a few functions such as mean or sum on the object we created with groupby.
But with the aggregate() method, we can use both functions we have written ourselves and the methods used with groupby. I will show how to work with groupby in this post.
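As a sketch of what aggregate() makes possible, both built-in reductions and a user-defined function can be passed in a single call. The DataFrame and the spread function below are invented for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    "team": ["A", "A", "B", "B"],
    "score": [10, 20, 30, 50],
})

# A custom reduction: the range of values within each group.
def spread(s):
    return s.max() - s.min()

# Mix built-in names and our own function in one aggregate() call.
result = df.groupby("team")["score"].aggregate(["mean", "sum", spread])
# team A: mean 15.0, sum 30, spread 10
# team B: mean 40.0, sum 80, spread 20
print(result)
```

Note that the column for the custom function is labeled with the function's name ("spread"), just like the built-in aggregations.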
https://morioh.com/p/b646c3fcf7f2
I've been working on a project recently where we're using React for the UI. While planning out the next phase of the project, we looked at a requirement around charting. Now, it's been a while since I've done charting in JavaScript, let alone charting with React, so I did what everyone does these days and shouted out on the twittersphere to get input.

Joke replies aside, there was the suggestion that, since I'm using React, I should just write raw SVG and add a touch of d3 to animate it if required.

Well, that's an approach I'd never thought of, but pondering it a bit, it made a lot of sense. If you look at charting libraries, what are they doing? Providing you helper methods to build SVG elements and add them to the DOM. And what does React do? It creates a virtual DOM, which is then rendered to the browser in the real DOM. So, using an external library, what you find is that you're creating elements that live outside the virtual DOM, and as a result that can cause issues for React.

That was all a few weeks ago, and while the idea seemed sound, I didn't need to investigate it much further, at least not until earlier this week when charting + React came up again in conversation. So I decided to have a bit of a play around with it and see how it'd work.

Basic React + SVG

Honestly, drawing SVGs in React isn't really that different to creating any other kind of DOM elements; it's as simple as this:

const Svg = () => (
  <svg height="100" width="100">
    <circle cx="50" cy="50" r="40" stroke="black" strokeWidth="3" fill="red" />
  </svg>
);

ReactDOM.render(<Svg />, document.getElementById('main'));

Ta-da!

React + SVG + animations

OK, so that wasn't particularly hard, eh? Well, what if we want to add animations? I grabbed an example off MSDN (example #2) to use as my demo. I created a demo that can be found here. Comparing it to the original example code, it's a lot cleaner, as we no longer need to dive into the DOM ourselves; by using setState it's quite easy to set the transform attribute.
Now we're using requestAnimationFrame to do the animation (which in turn calls setState); we use componentDidMount to start it and componentWillUnmount to stop it.

Adding HOC

So we've got a downside: we're combining our state with our application code. What if we wanted to go down the path of using a Higher Order Component to wrap up the particular transformation that we're applying to SVG elements? Let's create a HOC like so:

const rotate = (Component, { angularLimit, thetaDelta }) => {
  class Rotation extends React.Component {
    constructor(props) {
      super(props);
      this.state = { currentTheta: 0 };
    }

    componentDidMount() {
      const animate = () => {
        const nextTheta =
          this.state.currentTheta > angularLimit
            ? 0
            : this.state.currentTheta + thetaDelta;

        this.setState({ currentTheta: nextTheta });
        this.rafId = requestAnimationFrame(animate);
      };

      this.rafId = requestAnimationFrame(animate);
    }

    componentWillUnmount() {
      cancelAnimationFrame(this.rafId);
    }

    render() {
      return (
        <g transform={`rotate(${this.state.currentTheta})`}>
          <Component {...this.props} />
        </g>
      );
    }
  }

  // getDisplayName is a small helper assumed to be defined elsewhere, e.g.:
  // const getDisplayName = C => C.displayName || C.name || 'Component';
  Rotation.displayName = `RotatingComponent(${getDisplayName(Component)})`;
  return Rotation;
};

Basically, we've moved the logic for playing with requestAnimationFrame up into the HOC, making it really easy to rotate a lot of different SVG elements. Also, instead of applying the transform to the rect element itself, we apply it to a wrapping <g> element.

I've created a second example to show how this works too.

Conclusion

Ultimately, I thought this was going to be a lot harder than it turned out to be! If you spend a bit of time aiming to understand how SVG works directly, rather than relying on abstraction layers, you can quickly make a React application that uses inline SVG + animation.

Now, back to the original topic of charting?
Well that really just comes down to using array methods to go over a dataset, create the appropriate SVG elements and apply attributes to them, so I don’t see it being much more than taking this simple example and expanding on it.
https://www.aaron-powell.com/posts/2017-08-08-react-svg-animations/
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.

Hi,

I'm compiling a library that presents a C interface to the outside world but has C++ code inside. The goal is that the library will be able to link with C programs as well as with C++ programs. While this worked with the previous version of gcc, it has a problem with gcc 3.2. The following example explains the problem:

************** file foo.h ************

#ifndef _FOO_H
#define _FOO_H

#ifdef __cplusplus
extern "C" {
#endif

int fooFunc();

#ifdef __cplusplus
}
#endif

#endif

************** file foo.cpp ************

#include "foo.h"

class Foo {
public:
    Foo() {}
    ~Foo() {}
    int getFoo() { return i; }
private:
    int i;
};

extern "C" int fooFunc()
{
    Foo f;
    return 1;
}

************** file foom.c ************

#include "foo.h"
#include <stdio.h>

int main()
{
    return fooFunc();
}

************** makefile ************

all: lib prog

lib: foo.cpp
	g++ -c -o foo.o foo.cpp
	ar rcv libfoo.a foo.o
	ranlib libfoo.a

prog: foom.c
	gcc foom.c libfoo.a

**************************************

Result:

gcc foom.c libfoo.a
libfoo.a(foo.o)(.eh_frame+0x11): undefined reference to `__gxx_personality_v0'
collect2: ld returned 1 exit status
make: *** [prog] Error 1

QUESTION: how can I force the definition of __gxx_personality_v0 for the STATIC library, without explicitly linking with libstdc++?

I know that if I build a shared library, like this:

g++ -shared -Wl,-h libfoo.so.1.0 -o libfoo.so.1 foo.o

it will work OK...

Regards,
Igor.
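For reference, the usual way out of this error (a sketch, not part of the original message) is to let g++ drive the final link, or to name the C++ runtime explicitly, since __gxx_personality_v0 is the C++ exception-handling personality routine provided by libstdc++:

```make
# Sketch: two ways to resolve __gxx_personality_v0 at final link time.
prog: foom.c
	g++ foom.c libfoo.a -o prog        # g++ pulls in libstdc++ automatically
# or, keeping gcc as the link driver:
#	gcc foom.c libfoo.a -lstdc++ -o prog
```

Neither approach changes the static library itself; the missing symbol only has to be satisfied when the final executable is linked.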
http://gcc.gnu.org/ml/libstdc++/2002-12/msg00260.html
Introduction: Disc Throwing Machine

This instructable was created in fulfillment of the project requirement of the Makecourse at the University of South Florida.

This machine is simple in what it does but got a little complicated during its creation. The project holds discs and launches them at high speed, one at a time, with the use of a stepper motor and remote control. It uses a brushless motor to spin two wheels in opposite directions to give the disc spin and forward force.

Part List:

Step 1: Soldering Electronic Speed Controller and Brushless Motor

When I bought the brushless motor and ESC from Amazon, I didn't realize I'd have to solder them myself. Please look up a video, but the basic instructions are as follows:

- Heat up the connection pin
- Touch the solder wire to the heated pin
- When enough solder has melted, insert the exposed wire (the example shown is the brushless motor)

Don't forget to insulate with electrical tape, or you'll end up with the last picture. Repeat for all connections.

Step 2: How to Power Arduino and Brushless Motor From Electronic Speed Controller

The ESC has a servo cable. This cable conveniently delivers power from the ESC to the Arduino along with the control wire.

- To power the ESC, we plug it into a Li-Po battery, red to red, black to black
- The project needed CCW rotation to launch discs with the 3D part, so we connected the brushless motor with the wires in what looks like a jumbled-up way
(To get CW rotation, switch the red and yellow wires as seen from the brushless motor.)

- Using the servo cable, we plug our ESC into the breadboard to power the breadboard, which in turn powers the Arduino

Step 3: Arduino Code for IR Remote and Brushless Motor

#include <IRremote.h> // library found at irremote.zip
#include <Servo.h>    // using the Servo library to control the ESC and mini servo

#define stepperpin 12        // defining pin on the Arduino
int RECV_PIN = 13;           // IR sensor pin
IRrecv irrecv(RECV_PIN);     // defining for the library
Servo esc;                   // servo object for the ESC connected to the drone motor that powers the whole project
Servo stepper;               // servo object for the push-gear servo
decode_results results;

void setup()
{
  // drone motor specific
  esc.attach(8);               // specify the ESC signal pin, here D8
  esc.writeMicroseconds(1000); // initialize the signal to 1000
  Serial.begin(9600);
  // end drone motor

  // start stepper and IR code
  pinMode(RECV_PIN, INPUT);
  irrecv.enableIRIn();         // start the receiver
  stepper.attach(stepperpin);
  // end stepper and IR code
}

void loop()
{
  if (irrecv.decode(&results)) {
    translateIR();
    irrecv.resume();           // receive the next value
  }
}

void translateIR() // takes action based on the IR code received (Car MP3 remote codes)
{
  int stepval = stepper.read();
  int droneval = esc.read();

  switch (results.value) {
    case 0xFF22DD:
      Serial.println("Prev");
      esc.write(72);
      Serial.print(droneval);
      break;
    case 0xFF02FD:
      Serial.println("NEXT");
      esc.write(90);
      Serial.print(droneval);
      break;
    case 0xFFE01F:
      Serial.println("VOL-");
      esc.write(droneval - 1);
      Serial.print(droneval);
      break;
    case 0xFFA857:
      Serial.println("VOL+");
      esc.write(droneval + 1);
      Serial.print(droneval);
      break;
    case 0xFFC23D:
      Serial.println("Play");      // the command that controls the push gear
      stepper.write(stepval - 90);
      esc.write(droneval);         // without this command the drone motor goes to 0
      delay(1000);
      stepper.write(stepval + 90); // reset back to the default position
      break;
    case 0xFF906F:
      Serial.println("EQ");
      stepper.write(90);
      esc.write(droneval);         // without this command the drone motor goes to 0
      delay(1000);
      break;
    default:
      Serial.print("unknown button ");
      Serial.println(results.value, HEX);
  }
  delay(500);
}

void unknownRemoter() { // what the Arduino does when an undeclared button is pressed
  long RED_LED_OFF = 0xFF40BF;
  if (results.value == RED_LED_OFF) {
    Serial.println("Red led off");
    digitalWrite(12, LOW);
  } else {
    Serial.print("still an unknown button ");
    Serial.println(results.value, HEX);
  }
}

Step 4: Arduino Connections

From the code in the previous step:

- Connect the ESC servo controller pin to pin 8
- Connect the IR receiver pin to pin 13
- Connect the microservo controller pin to pin 12

Step 5: Autodesk Inventor 3D Modeled Parts

This device consisted of many parts:

- The hopper: It holds the drone motor and wheels that launch the disks. It was designed to be placed over the required box for the Makecourse, with a 2mm gap left between the box and the hopper so that one disk at a time is launched: CD-ROMs are 1.2mm high, so the second disc can't get through.
- The microservo holder was used to house the microservo that attaches to the push gear.
- The push gear was designed to slide under the hopper and contact one disc, so as to push one disc out at a time.
- I bought RC car wheels originally. They came with plastic rims that were too concave in shape; I needed thick rims that would not wobble. But I still used the rubber tires from the RC car wheels.
- The electronic speed controller cover-up was used to give the project a finished look.
- The top wheel axle, spacers, and caps were used to keep the wheel with the tire on it in one position.
http://www.instructables.com/id/Disc-Throwing-Machine/
Greentea testing applications

Greentea is the automated testing tool for Arm Mbed OS development. It's a test runner that automates the process of flashing development boards, starting tests and accumulating test results into test reports. You can use it for local development, as well as for automation in a continuous integration environment.

Greentea tests run on embedded devices, but Greentea also supports host tests. These are Python scripts that run on a computer and can communicate back to the embedded device. You can, for example, verify that a value wrote to the cloud when the device said it did.

This document will help you start using Greentea. Please see the htrun documentation, the tool Greentea uses to drive tests, for the technical details of the interactions between the platform and the host machine.

Using tests

Test code structure

You can run tests throughout Mbed OS and for your project's code. They are located under a special directory called TESTS. The fact that the code is located under this directory means that it is ignored when building applications and libraries. It is only used when building tests. This is important because all tests require a main() function, and building them with your application would cause multiple main() functions to be defined.

The macro MBED_TEST_MODE is defined when building tests with Mbed CLI versions 1.9.0 and later. You can wrap your application's main() function in a preprocessor check to prevent multiple main() functions from being defined:

#if !MBED_TEST_MODE
int main() {
    // Application code
}
#endif

In addition to being placed under a TESTS directory, test sources must exist under two other directories: a test group directory and a test case directory. The following is an example of this structure:

myproject/TESTS/test_group/test_case_1

In this example, myproject is the project root, and all the source files under the test_case_1 directory are included in the test.
The test build also includes any other source files from the OS, libraries and projects that apply to the target's configuration.

Note: You can name both the test group and the test case directory anything you like. However, you must name the TESTS directory TESTS for the tools to detect the test cases correctly.

Test discovery

Because test cases can exist throughout a project, the tools must find them in the project's file structure before building them. Test discovery also obeys the same rules that are used when building your project. This means that tests that are placed under a directory with a prefix, such as TARGET_, TOOLCHAIN_ or FEATURE_, are only discovered, built and run if your current configuration matches this prefix. For example, if you place a test under the directory FEATURE_BLE with the following path:

myproject/mbed-os/features/FEATURE_BLE/TESTS/ble_tests/unit_test

This test case is only discovered if the target being tested supports the BLE feature. Otherwise, the test is ignored.

Generally, a test should not be placed under a TARGET_ or TOOLCHAIN_ directory because most tests should work for all target and toolchain configurations. Tests can also be completely ignored by using the .mbedignore file described in the documentation.

Test names

A test case is named by its position in your project's file structure. For instance, in the above example, a test case with the path myproject/TESTS/test_group/test_case_1 would be named tests-test_group-test_case_1. The name is created by joining the directories that make up the path to the test case with a dash - character. This is a unique name to identify the test case. You will see this name throughout the build and testing process.

Building tests

You can build tests through Arm Mbed CLI. For information on using Mbed CLI, please see the CLI documentation. When you build tests for a target and a toolchain, the script first discovers the available tests and then builds them in parallel.
You can also create a test specification file, which our testing tools can use to run automated hardware tests. For more information on the test specification file, please see the Greentea documentation.

Building process

The test.py script (not to be confused with tests.py), located under the tools directory, drives the test build process: it discovers and builds all test cases for a target and toolchain. The full build process is:

- Build the non-test code (all code not under a TESTS folder), but do not link it. The resulting object files are placed in the build directory.
- Find all tests that match the given target and toolchain.
- For each discovered test, build all of its source files and link it with the non-test code that was built in step 1.
- If specified, create a test specification file and place it in the given directory for use by the testing tools. This is placed in the build directory by default when using Mbed CLI.

Application configuration

When building an Mbed application, the presence of an mbed_app.json file allows you to set or override different configuration settings from libraries and targets. However, because the tests share a common build, this can cause issues when tests have different configurations that affect the OS. The build system looks for an mbed_app.json file in your shared project files (any directory not inside of a TESTS folder). If the system finds it, then this configuration file is used for both the non-test code and each test case inside your project's source tree. If there is more than one mbed_app.json file in the source tree, then the configuration system reports an error. If you need to test with multiple configurations, then you can use the --app-config option. This overrides the search for an mbed_app.json file and uses the configuration file that you specify for the build.

Writing your first test

You can write tests for your own project or add more tests to Mbed OS.
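The search-and-error behavior described above can be pictured with a small sketch. This is a simplification for illustration, not the actual Mbed build-system code:

```python
import posixpath

def find_app_config(paths):
    """Pick the single mbed_app.json outside any TESTS folder.

    `paths` is a flat list of file paths, as a directory walk might yield.
    """
    hits = [p for p in paths
            if posixpath.basename(p) == "mbed_app.json"
            and "TESTS" not in p.split("/")]
    if len(hits) > 1:
        # mirrors the "more than one mbed_app.json" error described above
        raise RuntimeError("more than one mbed_app.json found: %r" % (hits,))
    return hits[0] if hits else None

tree = [
    "myproject/main.cpp",
    "myproject/mbed_app.json",                          # shared project files
    "myproject/TESTS/test_group/test_case_1/main.cpp",  # ignored by the search
    "myproject/TESTS/test_group/test_case_1/mbed_app.json",
]
print(find_app_config(tree))  # myproject/mbed_app.json
```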
You can write tests by using the Greentea client and the UNITY and utest frameworks, which are located in /features/frameworks. To write your first test, use Mbed CLI to create a new project:

$ mbed new first-greentea-test

By convention, all tests live in the TESTS/ directory. In the first-greentea-test folder, create a folder TESTS/test-group/simple-test/.

first-greentea-test/
└── TESTS/
    └── test-group/
        └── simple-test/
            └── main.cpp

Test structure for Greentea tests

In this folder, create a file main.cpp. You can use UNITY, utest and the Greentea client to write your test:

#include "mbed.h"
#include "utest/utest.h"
#include "unity/unity.h"
#include "greentea-client/test_env.h"

using namespace utest::v1;

// This is how a test case looks
static control_t simple_test(const size_t call_count) {
    /* test content here */
    TEST_ASSERT_EQUAL(4, 2 * 2);
    return CaseNext;
}

utest::v1::status_t greentea_setup(const size_t number_of_cases) {
    // Here, we specify the timeout (60s) and the host test (a built-in host test or the name of our Python file)
    GREENTEA_SETUP(60, "default_auto");
    return greentea_test_setup_handler(number_of_cases);
}

// List of test cases in this file
Case cases[] = {
    Case("simple test", simple_test)
};

Specification specification(greentea_setup, cases);

int main() {
    return !Harness::run(specification);
}

Running the test

Tip: To see all tests, run mbed test --compile-list.
Run the test: # run the test with the GCC_ARM toolchain, automatically detect the target, and run in verbose mode (-v) $ mbed test -t GCC_ARM -m auto -v -n tests-test-group-simple-test This yields (on a NUCLEO F411RE): mbedgt: test suite report: +-----------------------+---------------+------------------------------+--------+--------------------+-------------+ | target | platform_name | test suite | result | elapsed_time (sec) | copy_method | +-----------------------+---------------+------------------------------+--------+--------------------+-------------+ | NUCLEO_F411RE-GCC_ARM | NUCLEO_F411RE | tests-test-group-simple-test | OK | 16.84 | default | +-----------------------+---------------+------------------------------+--------+--------------------+-------------+ mbedgt: test suite results: 1 OK mbedgt: test case report: +-----------------------+---------------+------------------------------+-------------+--------+--------+--------+--------------------+ | target | platform_name | test suite | test case | passed | failed | result | elapsed_time (sec) | +-----------------------+---------------+------------------------------+-------------+--------+--------+--------+--------------------+ | NUCLEO_F411RE-GCC_ARM | NUCLEO_F411RE | tests-test-group-simple-test | simple test | 1 | 0 | OK | 0.01 | +-----------------------+---------------+------------------------------+-------------+--------+--------+--------+--------------------+ mbedgt: test case results: 1 OK mbedgt: completed in 18.64 sec Change the test in a way that it fails (for example, expect 6 instead of 4), rerun the test and observe the difference. Writing integration tests using host tests The previous test was self-contained. Everything that ran only affected the microcontroller. However, typical test cases involve peripherals in the real world. This raises questions such as: Did my device actually get an internet connection, or did my device actually register with my cloud service? 
(We have a lot of these for Pelion Device Management.) To test these scenarios, you can use a host test that runs on your computer. After the device says it did something, you can verify that it happened and then pass or fail the test accordingly. To interact with the host test from the device, you can use two functions: greentea_send_kv and greentea_parse_kv. The latter blocks until it gets a message back from the host.

Creating the host test

This example writes an integration test that sends hello to the host and waits until it receives world. Create a file called hello_world_tests.py in the TESTS/host_tests folder, and fill it with:

from mbed_host_tests import BaseHostTest
from mbed_host_tests.host_tests_logger import HtrunLogger
import time

class HelloWorldHostTests(BaseHostTest):

    def _callback_init(self, key, value, timestamp):
        self.logger.prn_inf('Received \'init\' value=%s' % value)

        # sleep...
        time.sleep(2)

        # if value equals 'hello' we'll send back world, otherwise not
        if (value == 'hello'):
            self.send_kv('init', 'world')
        else:
            self.send_kv('init', 'not world')

    def setup(self):
        # all functions that can be called from the client
        self.register_callback('init', self._callback_init)

    def result(self):
        pass

    def teardown(self):
        pass

    def __init__(self):
        super(HelloWorldHostTests, self).__init__()
        self.logger = HtrunLogger('TEST')

This registers one function you can call from the device: init. The function checks whether the value was hello, and if so, returns world back to the device using the send_kv function.

Creating the Greentea test

This example writes the embedded part of this test.
Create a new file main.cpp in TESTS/tests/integration-test, and fill it with:

#include "mbed.h"
#include "utest/utest.h"
#include "unity/unity.h"
#include "greentea-client/test_env.h"

using namespace utest::v1;

static control_t hello_world_test(const size_t call_count) {
    // send a message to the host runner
    greentea_send_kv("init", "hello");

    // wait until we get a message back
    // if this takes too long, the timeout will trigger, so no need to handle this here
    char _key[20], _value[128];
    while (1) {
        greentea_parse_kv(_key, _value, sizeof(_key), sizeof(_value));

        // check if the key equals init, and if the return value is 'world'
        if (strcmp(_key, "init") == 0) {
            TEST_ASSERT_EQUAL(0, strcmp(_value, "world"));
            break;
        }
    }

    return CaseNext;
}

utest::v1::status_t greentea_setup(const size_t number_of_cases) {
    // here, we specify the timeout (60s) and the host runner (the name of our Python file)
    GREENTEA_SETUP(60, "hello_world_tests");
    return greentea_test_setup_handler(number_of_cases);
}

Case cases[] = {
    Case("hello world", hello_world_test)
};

Specification specification(greentea_setup, cases);

int main() {
    return !Harness::run(specification);
}

You see the calls to and from the host through the greentea_send_kv and greentea_parse_kv functions. Note the GREENTEA_SETUP call. This specifies which host test to use, and the test is then automatically loaded when running (based on the Python name). Run the test:

$ mbed test -v -n tests-tests-integration-test

Debugging tests

Debugging tests is a crucial part of the development and porting process. This section covers exporting the test and driving the test with the test tools while the target is attached to a debugger.

Exporting tests

The easiest way to export a test is to copy the test's source code from its test directory to your project's root. This way, the tools treat it like a normal application.
You can find the path to the test that you want to export by running the following command: mbed test --compile-list -n <test name> Once you've copied all of the test's source files to your project root, export your project: mbed export -i <IDE name> You can find your exported project in the root project directory. Running a test while debugging Assuming your test was exported correctly to your IDE, build the project and load it onto your target by using your debugger. Bring the target out of reset and run the program. Your target waits for the test tools to send a synchronizing character string over the serial port. Do not run the mbed test commands because that will attempt to flash the device, which you've already done with your IDE. Instead, you can use the underlying test tools to drive the test. htrun is the tool you need to use in this scenario. Installing the requirements for Mbed OS also installs htrun. You can also install htrun by running pip install mbed-host-tests. First, find your target's serial port by running the following command: $ mbed detect [mbed] Detected KL46Z, port COM270, mounted D: ... From the output, take note of your target's serial port (in this case, it's COM270). Run the following command when your device is running the test in your debugger: mbedhtrun --skip-flashing --skip-reset -p <serial port>:9600 Replace <serial port> with the serial port that you found by running mbed detect above. So, for the example above, the command is: mbedhtrun --skip-flashing --skip-reset -p COM270:9600 This detects your attached target and drives the test. If you need to rerun the test, reset the device with your debugger, run the program and run the same command. For an explanation of the arguments used in this command, please run mbedhtrun --help. Command-line use This section highlights a few of the capabilities of the Greentea command-line interface. For a full list of the available options, please run mbed test --help. 
Listing all tests

You can use the --compile-list argument to list all available tests:

$ mbed test --compile-list
[mbed] Working path "/Users/janjon01/repos/first-greentea-test" (program)
Test Case:
    Name: mbed-os-components-storage-blockdevice-component_flashiap-tests-filesystem-fopen
    Path: ./mbed-os/components/storage/blockdevice/COMPONENT_FLASHIAP/TESTS/filesystem/fopen
Test Case:
    Name: mbed-os-features-cellular-tests-api-cellular_device
    Path: ./mbed-os/features/cellular/TESTS/api/cellular_device
...

After compilation, you can use the --run-list argument to list all tests that are ready to be run.

Executing all tests

The default action of Greentea using mbed test is to execute all tests found. You can also add -v to make the output more verbose.

Limiting tests

You can select test cases by name using the -n argument. This command executes all tests named tests-mbedmicro-rtos-mbed-mail from all builds in the test specification:

$ mbed test -n tests-mbedmicro-rtos-mbed-mail

When using the -n argument, you can use the * character as a wildcard. This command executes all tests that start with tests- and have -rtos- in them:

$ mbed test -n tests-*-rtos-*

You can use a comma (,) to separate test names (argument -n) and build names (argument -t). This command executes the tests tests-mbedmicro-rtos-mbed-mail and tests-mbed_drivers-c_strings for the K64F-ARM and K64F-GCC_ARM builds in the test specification:

$ mbed test -n tests-mbedmicro-rtos-mbed-mail,tests-mbed_drivers-c_strings -t K64F-ARM,K64F-GCC_ARM

Selecting platforms

You can limit which boards Greentea uses for testing by using the --use-tids argument.

$ mbed test --use-tids 02400203C3423E603EBEC3D8,024002031E031E6AE3FFE3D2 --run

Where 02400203C3423E603EBEC3D8 and 024002031E031E6AE3FFE3D2 are the target IDs of platforms attached to your system. You can view target IDs using mbed-ls, which is installed as part of Mbed CLI.
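The -n patterns described above behave like shell-style globbing. A quick illustration using Python's fnmatch as a stand-in for Greentea's own matcher (the stand-in is an assumption; only the pattern semantics are being demonstrated):

```python
from fnmatch import fnmatch

all_tests = [
    "tests-mbedmicro-rtos-mbed-mail",
    "tests-mbed_drivers-c_strings",
    "tests-network-smoke",
]

# equivalent of: mbed test -n tests-*-rtos-*
pattern = "tests-*-rtos-*"
wildcard_selected = [t for t in all_tests if fnmatch(t, pattern)]
print(wildcard_selected)  # ['tests-mbedmicro-rtos-mbed-mail']

# comma-separated names select an explicit set of tests
names = "tests-mbedmicro-rtos-mbed-mail,tests-mbed_drivers-c_strings".split(",")
name_selected = [t for t in all_tests if t in names]
print(name_selected)
```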
$ mbedls
+--------------+---------------------+------------+------------+-------------------------+
|platform_name |platform_name_unique |mount_point |serial_port |target_id                |
+--------------+---------------------+------------+------------+-------------------------+
|K64F          |K64F[0]              |E:          |COM160      |024002031E031E6AE3FFE3D2 |
|K64F          |K64F[1]              |F:          |COM162      |02400203C3423E603EBEC3D8 |
|LPC1768       |LPC1768[0]           |G:          |COM5        |1010ac87cfc4f23c4c57438d |
+--------------+---------------------+------------+------------+-------------------------+

In this case, the LPC1768 is left out: only the two targets listed in --use-tids are tested.

Creating reports

Greentea supports a number of report formats.

HTML

This creates an interactive HTML page with test results and logs.

mbed test --report-html html_report.html --run

JUnit

This creates an XML JUnit report, which you can use with popular Continuous Integration software, such as Jenkins.

mbed test --report-junit junit_report.xml --run

JSON

This creates a general JSON report.

mbed test --report-json json_report.json --run

Plain text

This creates a human-friendly text summary of the test run.

mbed test --report-text text_report.text --run

Test specification JSON format

The Greentea test specification format decouples the tool from your build system. It provides important data, such as test names, paths to test binaries and the platform on which the binaries should run. This file is automatically generated when running tests through Mbed CLI, but you can also provide it yourself. This way you can control exactly which tests are run and through which compilers. Greentea automatically looks for files called test_spec.json in your working directory. You can also use the --test-spec argument to direct Greentea to a specific test specification file. When you use the -t / --target argument with the --test-spec argument, you can select which "build" to use. In the example below, you could provide the arguments --test-spec test_spec.json -t K64F-ARM to only run that build's tests.
Example of test specification file

The below example uses two defined builds:

- Build K64F-ARM for the NXP K64F platform, compiled with the ARMCC compiler.
- Build K64F-GCC for the NXP K64F platform, compiled with the GCC ARM compiler.

Place this file in your root folder, and name it test_spec.json.

{
    "builds": {
        "K64F-ARM": {
            "platform": "K64F",
            "toolchain": "ARM",
            "base_path": "./BUILD/K64F/ARM",
            "baud_rate": 9600,
            "tests": {
                "tests-mbedmicro-rtos-mbed-mail": {
                    "binaries": [
                        {
                            "binary_type": "bootable",
                            "path": "./BUILD/K64F/ARM/tests-mbedmicro-rtos-mbed-mail.bin"
                        }
                    ]
                },
                "tests-mbed_drivers-c_strings": {
                    "binaries": [
                        {
                            "binary_type": "bootable",
                            "path": "./BUILD/K64F/ARM/tests-mbed_drivers-c_strings.bin"
                        }
                    ]
                }
            }
        },
        "K64F-GCC": {
            "platform": "K64F",
            "toolchain": "GCC_ARM",
            "base_path": "./BUILD/K64F/GCC_ARM",
            "baud_rate": 9600,
            "tests": {
                "tests-mbedmicro-rtos-mbed-mail": {
                    "binaries": [
                        {
                            "binary_type": "bootable",
                            "path": "./BUILD/K64F/GCC_ARM/tests-mbedmicro-rtos-mbed-mail.bin"
                        }
                    ]
                }
            }
        }
    }
}

If you run mbed test --run-list, this will now list only these tests:

mbedgt: greentea test automation tool ver. 1.2.5
mbedgt: using multiple test specifications from current directory!
    using 'BUILD\tests\K64F\ARM\test_spec.json'
    using 'BUILD\tests\K64F\GCC_ARM\test_spec.json'
mbedgt: available tests for built 'K64F-GCC_ARM', location 'BUILD/tests/K64F/GCC_ARM'
    test 'tests-mbedmicro-rtos-mbed-mail'
mbedgt: available tests for built 'K64F-ARM', location 'BUILD/tests/K64F/ARM'
    test 'tests-mbed_drivers-c_strings'
    test 'tests-mbedmicro-rtos-mbed-mail'

Known issues

There cannot be a main() function outside of a TESTS directory when building and running tests. This is because this function will be included in the non-test code build, as described in the building process section. When the test code is compiled and linked with the non-test code build, a linker error will occur, due to there being multiple main() functions defined.
This is why you should either rename your main application file if you need to build and run tests, or use a different project. Note that this does not affect building projects or applications, only building and running tests.
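Because the test specification shown earlier is plain JSON, any tool can consume it. A hedged sketch that lists each build's tests and binaries; the field names are taken from the example file above, and the abbreviated TEST_SPEC string here is illustrative, not a complete specification:

```python
import json

TEST_SPEC = """
{
  "builds": {
    "K64F-ARM": {
      "platform": "K64F",
      "toolchain": "ARM",
      "tests": {
        "tests-mbedmicro-rtos-mbed-mail": {
          "binaries": [{"binary_type": "bootable",
                        "path": "./BUILD/K64F/ARM/tests-mbedmicro-rtos-mbed-mail.bin"}]
        }
      }
    }
  }
}
"""

def list_tests(spec_text):
    # flatten the builds/tests/binaries nesting into simple rows
    spec = json.loads(spec_text)
    rows = []
    for build, info in sorted(spec["builds"].items()):
        for test, details in sorted(info["tests"].items()):
            for binary in details["binaries"]:
                rows.append((build, info["platform"], test, binary["path"]))
    return rows

for row in list_tests(TEST_SPEC):
    print(row)
```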
Featured Replies in this Discussion

In environment variable Path, you should add the root folder and scripts folder of python26. This is what you add to Path:

;C:\python26\;C:\python26\scripts\;

Restart. Test it out in cmd (command line): type python. Now running a program (python somecode.py) will use python26, and installing modules with setuptools/pip will find and install to python26.

python setup.py install
pip install somepackage

I added the semi-colon before the python path that I need the program to access, the program is not finding the path in the system variable ?

I think you are mixing up environment variables and PYTHONPATH. The environment variable you set so Windows finds the Python version you use. PYTHONPATH is the path Python searches for Python code/programs. To see where Python searches you can do this:

>>> import sys
>>> print sys.path
#Here you get a list of all folders python searches for code (PYTHONPATH)
#If you have a .py in one of these folders Python will find it

I don't understand what I should test out in a python command

No, you should test the python command out in the command line for the OS, called cmd. Here is how it looks for me when I type python in cmd:

Microsoft Windows [Versjon 6.1.7601]
Copyright (c) 2009 Microsoft Corporation. Med enerett.

C:\Users\Tom>cd\
C:\>python
Python 2.7.3 (default, Apr 10 2012, 23:31:26) [MSC v.1500 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>

If the environment variable is set up correctly you can type python anywhere in cmd (command line) and Python will start. So to sum it up: the environment variable is so that Windows can find Python.
PYTHONPATH is where Python searches for Python code (.py). Yes, Python only finds code on the PYTHONPATH. There are some solutions: you can move your "modules subfolder" to a folder in PYTHONPATH. Usually all Python modules/code get placed in C:\Python26\Lib\site-packages. You can append to PYTHONPATH:

import sys
sys.path.append(r'C:\path\to\module')

The only bad thing with this solution is that it is not permanent. To permanently add to PYTHONPATH, you can use sitecustomize.py; if you don't find it, make it in C:\Python26\Lib\site-packages. In this sitecustomize.py file add this:

#sitecustomize.py
import sys
sys.path.append(r'C:\path\to\module')

Restart Python. Now see if your C:\path\to\module is in print sys.path. Another option is just to make a .pth file in the site-packages folder. In that file just add C:\path\to\module

I created the sitecustomize.py file and within this file I entered the path to the modules, restarted the computer. Then I ran 'import sys' followed by 'print sys' and in the list I couldn't find my modules path ?

I did find out I had (2) sitecustomize.py files, one that you told me to create, the other in my software, that I can't edit. No one else has experienced a headache, it just works when they place the correct modules in the right sub-folder.
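The .pth mechanism suggested above can be demonstrated without touching the real site-packages folder, by pointing site.addsitedir at a scratch directory (at startup, Python processes site-packages the same way):

```python
import os
import site
import sys
import tempfile

# make a scratch "site" directory containing a .pth file
scratch = tempfile.mkdtemp()
module_dir = os.path.join(scratch, "mymodules")
os.mkdir(module_dir)

# each line of a .pth file names a directory to append to sys.path
with open(os.path.join(scratch, "extra.pth"), "w") as f:
    f.write(module_dir + "\n")

# process the directory's .pth files, as site-packages is processed at startup
site.addsitedir(scratch)
print(module_dir in sys.path)  # True
```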
The thrd_detach() function is used to tell the underlying system that resources allocated to a particular thread can be reclaimed once it terminates. This function should be used when a thread's exit status is not required by other threads (and no other thread needs to use thrd_join() to wait for it to complete). Whenever a thread terminates without detaching, the thread's stack is deallocated, but some other resources, including the thread ID and exit status, are left until it is destroyed by either thrd_join() or thrd_detach(). These resources can be vital for systems with limited resources, and exhausting them can lead to various "resource unavailable" errors, depending on which critical resource gets used up first. For example, if the system has a limit (either per-process or system wide) on the number of thread IDs it can keep track of, failure to release the thread ID of a terminated thread may lead to thrd_create() being unable to create another thread.

Noncompliant Code Example

This noncompliant code example shows a thread start function, run by a pool of threads, that is neither detached nor joined:

#include <stdio.h>
#include <threads.h>

const size_t thread_no = 5;
const char mess[] = "This is a test";

/* Threads running message_print() are created elsewhere (not shown) */
int message_print(void *ptr) {
  const char *msg = (const char *) ptr;
  printf("THREAD: This is the Message %s\n", msg);
  return 0;
}

Compliant Solution

In this compliant solution, the message_print() function is replaced by a similar function that correctly detaches the threads so that the associated resources can be reclaimed on exit:

#include <stdio.h>
#include <threads.h>

const size_t thread_no = 5;
const char mess[] = "This is a test";

int message_print(void *ptr) {
  const char *msg = (const char *) ptr;
  printf("THREAD: This is the Message %s\n", msg);
  /* Detach the thread; check the return code for errors */
  if (thrd_detach(thrd_current()) != thrd_success) {
    /* Handle error */
  }
  return 0;
}

Risk Assessment

Related Vulnerabilities

Search for vulnerabilities resulting from the violation of this rule on the CERT website.
Bibliography

6 Comments

David Svoboda
The severity is wrong. I'm guessing the problem with not calling detach is essentially resource exhaustion aka denial-of-service, right? The material in the Risk Assessment should be repeated earlier in the rule, either in the intro or in the NCCE, as it proves that this is a security issue and not just "good programming style".

Robert Seacord (Manager)
Don't declare two variables on the same line (violates a recommendation). Your loop counter i should be declared as size_t and not as a signed type. You should declare THREAD_NO in both the nce and cs, preferably as an enum. I don't care if you have the includes or not, but it should be the same for each example. The cs and nce should be exactly the same, except for the specific problem that you are addressing.

Geoff Clare
Are there really any implementations where additional resources other than the thread ID and exit status are retained when a joinable thread exits? Those are the only things needed by pthread_join(). The reference to the heap is definitely wrong: memory allocated on the heap by a thread before it exits could subsequently be used by another thread. Therefore memory allocated on the heap by a detached thread cannot be deallocated when it exits. (Unless the reference is intended to be only about memory allocated on the heap internally by the implementation - e.g. the thread ID could be on the heap. In which case it should say so.) My advice would be to reword the discussion to be primarily about the thread ID. The resource that matters is the maximum number of threads per process. Also the discussion should cover the use of pthread_attr_setdetachstate() with PTHREAD_CREATE_DETACHED to create threads in the detached state. (I would have thought this is the more usual way to do it, rather than having the thread detach itself in its start function as done in the compliant solution.)
Geoff Clare
I have made some changes to address the first part of my previous comment, and to remove left-over POSIX specifics. I also tidied up a few minor things at the same time. One left-over POSIX item that I did not change is the reference to POSIX/SUS and pthread_detach() in the Bibliography. These need to be changed to appropriate C11 references.

David Svoboda
If this is unenforceable, shouldn't it be a rec instead of a rule?

Aaron Ballman
I switched it to being a recommendation instead of a rule for just this reason.
Consider the following program:

#include <stdio.h>
#include <sys/time.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    struct timeval tv;
    tv.tv_sec = 0;
    tv.tv_usec = -100000;
    printf("select => %d\n", select(0, NULL, NULL, NULL, &tv));
    return 0;
}

This program, when executed, prints:

select => 0

and causes the kernel to printk a message something like the following:

schedule_timeout: wrong timeout value fffffff7 from c0130a98

This happens because the code in "fs/select.c" carefully makes sure not to overrun the _maximum_ timeout value, but performs no checking against negative timeout values. So the negative value is passed directly through to the timeout scheduler, which isn't so lax; it prints the warning message quoted above and wakes up the process. This is trivially fixed, but I'm not sure what the correct behavior is with a negative timeout value. Presumably one of the following should happen:

1. select() fails with EINVAL
2. select() succeeds, returning immediately (as with a 0 timeout)

The Linux select() man page says nothing about this case, but only documents EINVAL for a negative FD count. The Unix98 spec documents EINVAL in case of "invalid timeout value" but doesn't specify what that means. The wording for timeout behavior could be interpreted as allowing negative timeouts (they would just expire immediately). (Pragmatically, I'd say that given the ambiguity of the spec, any user-land program which generates a negative timeout is broken, and we should fail the call to select().) This is an issue because some broken program (yet to be tracked down) somehow ended up with a negative timeout; select() kept "succeeding" despite this, and the syslogs started filling up with confusing "wrong timeout value" errors. If the kernel returned EINVAL, the program would have an opportunity to discover the error of its ways; if the kernel silently accepted it, at least the syslogs wouldn't fill with spooge.
Dan
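For comparison, CPython's select wrapper takes option 1: it validates the timeout in user space and raises ValueError before the syscall is ever made, so a negative timeout analogous to the tv_usec = -100000 above is rejected up front:

```python
import select
import socket

s = socket.socket()
try:
    # a negative timeout, analogous to tv_usec = -100000 in the C program
    select.select([s], [], [], -0.1)
except ValueError as e:
    # CPython rejects the call instead of passing the value to the kernel
    print("rejected:", e)
finally:
    s.close()
```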
Man Page Manual Section... (3) - page: rtime

NAME
rtime - get time from a remote machine

SYNOPSIS
#include <rpc/auth_des.h>

int rtime(struct sockaddr_in *addrp, struct rpc_timeval *timep, struct rpc_timeval *timeout);

DESCRIPTION
This function uses the Time Server Protocol as described in RFC 868 to obtain the time from a remote machine. The Time Server Protocol gives the time in seconds since 00:00:00 UTC, 1 Jan 1900, and this function subtracts the appropriate constant in order to convert the result to seconds since the Epoch, 1970-01-01 00:00:00 +0000 (UTC). When timeout is non-NULL, the udp/time socket (port 37) is used. Otherwise, the tcp/time socket (port 37) is used.

RETURN VALUE
On success, 0 is returned, and the obtained 32-bit time value is stored in timep->tv_sec. In case of error -1 is returned, and errno is set appropriately.

ERRORS
All errors for underlying functions (sendto(2), poll(2), recvfrom(2), connect(2), read(2)) can occur. Moreover:

- EIO - The number of returned bytes is not 4.
- ETIMEDOUT - The waiting time as defined in timeout has expired.

NOTES
Only IPv4 is supported. Some in.timed versions only support TCP. Try the example program with use_tcp set to 1. Libc5 uses the prototype int rtime(struct sockaddr_in *, struct timeval *, struct timeval *); and requires <sys/time.h> instead of <rpc/auth_des.h>.

BUGS
rtime() in glibc 2.2.5 and earlier does not work properly on 64-bit machines.

EXAMPLE
This example requires that port 37 is up and open. You may check that the time entry within /etc/inetd.conf is not commented out. The program connects to a computer called "linux". Using "localhost" does not work. The result is the localtime of the computer "linux".
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <string.h>
#include <time.h>
#include <rpc/auth_des.h>
#include <netdb.h>

int use_tcp = 0;
char *servername = "linux";

int main(void)
{
    struct sockaddr_in name;
    struct rpc_timeval time1 = {0, 0};
    struct rpc_timeval timeout = {1, 0};
    struct hostent *hent;
    int ret;

    memset((char *) &name, 0, sizeof(name));
    sethostent(1);
    hent = gethostbyname(servername);
    memcpy((char *) &name.sin_addr, hent->h_addr, hent->h_length);

    ret = rtime(&name, &time1, use_tcp ? NULL : &timeout);
    if (ret < 0)
        perror("rtime error");
    else
        printf("%s\n", ctime((time_t *) &time1.tv_sec));

    exit(EXIT_SUCCESS);
}

SEE ALSO
ntpdate(1), inetd(8)
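The conversion rtime() performs is small enough to sketch in Python. The constant is the number of seconds between the 1900 epoch of RFC 868 and the 1970 Unix epoch; the helper name is ours, not part of any library:

```python
import struct

RFC868_TO_UNIX = 2208988800  # seconds from 1900-01-01 to 1970-01-01 (UTC)

def decode_rfc868(data):
    """Decode a 4-byte big-endian RFC 868 reply into a Unix timestamp."""
    if len(data) != 4:
        raise IOError("short reply")  # rtime() reports this case as EIO
    (seconds_since_1900,) = struct.unpack("!I", data)
    return seconds_since_1900 - RFC868_TO_UNIX

# the instant 1970-01-01 00:00:00 UTC on the wire decodes to Unix time 0
print(decode_rfc868(struct.pack("!I", 2208988800)))  # 0
```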
sendsyslog — send a message to syslogd

#include <sys/syslog.h>
#include <sys/types.h>

int sendsyslog(const char *msg, size_t len, int flags);

The sendsyslog() function is used to transmit a syslog(3) formatted message directly to syslogd(8) without requiring the allocation of a socket. The msg is not NUL terminated and its len is limited to 8192 bytes. If LOG_CONS invoked on the klog file descriptor. After that the messages can be read from the other end of the socket pair. By utilizing /dev/klog, access to log messages is limited to processes that may open this device.

Upon successful completion, the value 0 is returned; otherwise the value -1 is returned and the global variable errno is set to indicate the error.

sendsyslog() can fail if:

[EFAULT]
[EMSGSIZE]
[ENOBUFS]
[ENOTCONN]

The sendsyslog() function call appeared in OpenBSD 5.6. The flags argument was added in OpenBSD 6.0.
ANNOUNCE MPM.py Network Maximum flow algorithm implementation

MPM.py provides a python based implementation of the MPM network maximum-flow algorithm from V.M. Malhotra, M. Pramodh-Kumar and S.N. Maheshwari, "An O(V**3) algorithm for finding maximum flows in networks", Information Processing Letters, 7 (1978), 277-278. The module is implemented in Python. The MPM algorithm is one of the better algorithms for finding network maximum flows. This module will certainly be easy to use and fast enough for network flow maximization problems with thousands of nodes and edges.

For example:
============

A simple motivating example may be stolen from A Quantitative Approach to Management, Levin, Ruben, and Stinson, McGraw Hill, 1986, p.542. There are a number of freeways connected by intersections (interchanges). Each of the freeways has a maximum number of cars that can pass across it per hour in each direction (and for some "one way" freeways the capacity in one direction is zero). We number the intersections 1 through 6 and depict the freeways as "arcs" between the intersections:

6 0 (1)--------(2) | \ | \ | \5 | \4 |3 \ |7 \ | \ | \ | \ | \ | \9 |2 \5 |0 \ | \ | 5 3 \ | 5 4 \ (4)--------(3)-------(5) \ / \ /2 \7 / \ / \ / \ / \0 /0 \ / \ / (6)

Furthermore we make the somewhat artificial assumption that the only "exits" from the freeway system are at intersection 6 (the sink) and the only "onramps" are at intersection 1 (the source), and no rest stops are permitted!

Above,

6 0 (1)---------(2)

indicates that intersection 1 is connected by a freeway to intersection 2 and the maximum cars-per-hour from 1 to 2 is 6000 and the maximum from 2 to 1 is 0. The network maximum flow problem is to determine the maximum flow assignment for a network.
In the above network we want to determine the maximal number of cars per hour that can pass across the network (without stopping), entering at intersection 1 and leaving at intersection 6. Using MPM.py this model may be encoded and maximized as follows:

    def test2():
        D = {}
        # 6000 cars/hour can travel from intersection 1 to intersection 2
        D[1,2] = 6000
        D[1,3] = 5000
        D[1,4] = 3000
        D[2,3] = 7000   # capacities from intersection 2
        D[2,5] = 4000
        D[3,1] = 9000   # capacities from intersection 3
        D[3,2] = 2000
        D[3,4] = 3000
        D[3,5] = 5000
        D[4,3] = 5000   # capacities from intersection 4
        D[4,6] = 7000
        D[5,3] = 4000   # capacities from intersection 5
        D[5,6] = 2000
        (total, flow, net) = maxflow(source=1, sink=6, capacities=D)
        print "maximum at", total, "cars per hour"
        print "travelling"
        edges = flow.keys()
        edges.sort()
        for edge in edges:
            (i1, i2) = edge
            fl = flow[edge]
            if fl > 0:
                print fl, "cars from intersection", i1, "to", i2

When executed the function prints

    maximum at 8000 cars per hour
    travelling
    5000 cars from intersection 1 to 3
    3000 cars from intersection 1 to 4
    3000 cars from intersection 3 to 4
    2000 cars from intersection 3 to 5
    6000 cars from intersection 4 to 6
    2000 cars from intersection 5 to 6

Thus the following flow assignment maximizes conservative flow on the network:

              0 0
      (1)--------(2)
       | \        | \
       |  \5      |  \0
       |3  \      |0  \
       |    \     |    \
       |     \    |     \
       |      \0  |0     \0
       |0      \  |       \
       |  0 3   \ |  2 0   \
      (4)--------(3)-------(5)
        \                  /
         \                /
          \6            /2
           \           /
            \         /
             \       /
              \0   /0
               \  /
                (6)

Please look at .*-*-*.com/ for more information and downloads.

-- Aaron Watters

=== "To repeat that, this time a little more slowly..." -Wilf

-------- comp.lang.python.announce (moderated) --------
Python Language Home Page: .*-*-*.com/
-------------------------------------------------------
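For readers who want a runnable, modern counterpart to the announcement's example, below is a self-contained Python 3 sketch of a maximum-flow routine with the same dictionary-of-capacities interface. Note that it implements the simpler Edmonds-Karp (BFS augmenting-path) algorithm, not MPM itself, and that it returns only (total, flow) rather than the announcement's (total, flow, net) triple:

```python
from collections import defaultdict, deque

def maxflow(source, sink, capacities):
    """BFS augmenting-path (Edmonds-Karp) maximum flow.

    `capacities` maps (u, v) -> capacity, mirroring the dictionary
    interface of the MPM.py announcement; the algorithm here is
    Edmonds-Karp, not MPM."""
    residual = defaultdict(int)
    adj = defaultdict(set)
    for (u, v), c in capacities.items():
        residual[u, v] += c
        adj[u].add(v)
        adj[v].add(u)  # allow traversal of residual (reverse) edges

    total = 0
    while True:
        # breadth-first search for a shortest augmenting path
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v in adj[u]:
                if v not in parent and residual[u, v] > 0:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            break  # no augmenting path left: the flow is maximal
        # find the bottleneck capacity along the path
        bottleneck = float("inf")
        v = sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v], v])
            v = parent[v]
        # push flow along the path, updating the residual graph
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u, v] -= bottleneck
            residual[v, u] += bottleneck
            v = u
        total += bottleneck

    # positive net flow per original edge
    flow = {e: capacities[e] - residual[e] for e in capacities
            if capacities[e] - residual[e] > 0}
    return total, flow
```

Run against the freeway capacities of the announcement, this sketch also reports a maximum of 8000 cars per hour; the exact edge-by-edge assignment may differ from MPM's, since maximum-flow solutions need not be unique.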
http://computer-programming-forum.com/37-python/f2a9973927463879.htm
Setting Up the Development Environment
Follow these steps to set up your development environment for JavaFX Script. JavaFX Script has no special project type, so to start you just create a regular Java project using File->New Project… from the menu (see Figure 7). Enter "stopwatch" as the project name, uncheck "Create Main Class", and click "Finish". The new project will be empty (Figure 8 shows the result).

Next, right-click on <default package> and select New->File/Folder… (see Figure 9 for the result). Select "Other" from the categories list, pick "JavaFX File" as the file type, and click "Next". Then enter "stopWatch" as the file name and click "Finish" (Figure 10 shows the result). The next time you add a file, the "JavaFX File" option should appear directly in the pop-up menu under New.

Now you have an empty JavaFX project. To enable your IDE to run it, set the main JavaFX file in Project Properties: right-click on the project name, choose "Properties", enter "stopWatch" into the "Arguments" field on the "Run" property sheet, and click OK (see Figure 11 for the results).

Now paste the following minimal application code into the stopWatch.fx file:

    import javafx.ui.*;
    import javafx.ui.canvas.*;

    Frame {
        title: "StopWatch"
        width: 260, height: 280
        content: []
        visible: true
    }

Frame{} declares the application frame. You separate its attributes with either new lines or commas. Press F6 to run the application (see Figure 12 for the window you'll render).

Building a UI from Original Artwork
First download the accompanying JavaFX Script code and extract it into the "assets" subfolder within your "src" folder.
Next, make it display the stopwatch background and buttons by adding the following code inside the square brackets of Frame's content attribute:

    Canvas {
        background: white
        content: [
            // Background
            ImageView {
                transform: [Translate {x: 5, y: 5}]
                image: Image { url: "assets/sport_bg.png" }
            },
            // Start / Stop button
            ImageView {
                transform: [Translate {x: 197, y: 16.85}]
                image: Image { url: "assets/start_btn.png" }
            },
            // Clear button
            ImageView {
                transform: [Translate {x: 190.5, y: 193.35}]
                image: Image { url: "assets/clear_btn.png" }
            },
        ]
    }

You use the Canvas object to create a canvas on which you can render graphics. ImageView displays bitmap images (note how the transform attribute is used to position them). Run the application to verify that it displays the background and buttons.

Unfortunately, Java and JavaFX Script offer no easy, cross-platform way to support transparent windows, so you can't easily get the nice custom-shaped window effect you did with AIR. Also, you still have to convert the hand and background artwork into a vector format that JavaFX Script recognizes. You'll find a very good SVG-to-JavaFX converter online (naturally implemented in Java). Working with the converter is easy: just open the SVG file, and the converter turns it into JavaFX Script source, renders it on the screen, and allows you to save the resulting code. I found only one bug: it doesn't convert the corner radius values used in SVG into the arcWidth/arcHeight attributes used in JavaFX Script, so you have to manually multiply all such values by a factor of two. See Figure 13 for a preview of the Stopwatch SVG artwork displayed within the converter. (Note that the corner radii are smaller than they should be.) Converters normally create wrapper classes, which contain functions to access all nodes in the source SVG individually. For this simple demo you can just cut the necessary code from the wrapper class and insert it into your markup.
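The corner-radius fix the converter misses is a one-liner. Here is a hypothetical helper (name and signature invented for illustration) showing the doubling: SVG's rx/ry give the corner radius, while arcWidth/arcHeight expect the full arc diameter:

```python
def svg_radius_to_arc(rx, ry=None):
    """Convert an SVG corner radius (rx, ry) into the arcWidth/arcHeight
    pair expected by JavaFX Script, which are full arc diameters.
    Helper name is illustrative, not part of the converter."""
    if ry is None:
        ry = rx  # SVG defaults ry to rx when only one is given
    return 2 * rx, 2 * ry
```

So a rect with rx="5" in the SVG source becomes arcWidth: 10, arcHeight: 10 in the generated markup.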
Next, add the items in Listing 7 to Canvas's content attribute in order to render the digital display and hands. Note that the arcWidth and arcHeight attributes contain 2* multipliers. You can mix code with markup at will (because markup is code too!); that's a small example of JavaFX Script's power. So in this example drawHand() is a function, which is defined in Listing 8. This code was copied from the output of the SVG-to-JavaFX converter and wrapped into the drawHand() function. You can efficiently reuse JavaFX UI elements this way. These pieces combined give you the complete user interface for the stopwatch application. Press F6 to test it before going further.

Implementing the Animation
JavaFX Script offers an interesting approach for implementing dynamic UI behaviors. It uses the bind operator, which allows you to associate a variable or attribute with a certain expression so that every time the expression changes, the attribute changes too. Add the following code right below the original import operators and above the Frame object:

    import java.lang.System;
    import java.lang.Math;

    class stopWatch {
        attribute _is_running: Boolean;
        attribute _startTime: Integer;
        attribute time: Integer;
        attribute timeString: String;
        attribute angleMinutes: Number;
        attribute angleSeconds: Number;
        attribute timer: Number;
        operation updateTime();
        operation start();
        operation stop();
        operation clear();
    }

    attribute stopWatch.angleSeconds = 0.0;
    attribute stopWatch.angleMinutes = 0.0;
    attribute stopWatch.timeString = "00:00:00.00";
    attribute stopWatch._startTime = System.currentTimeMillis();
    attribute stopWatch.time = 0;
    attribute stopWatch._is_running = false;

    var watch = new stopWatch();

With this code you declared the stopWatch class, assigned initial values to the attributes, and created an instance of the class. You can add operations to a class definition without implementing them.
Now you can bind some markup attributes to your class attributes as follows:

    // Seconds hand
    Group {
        transform: [Translate {x: 122.38, y: 122.38}, Rotate {angle: bind watch.angleSeconds}]
        content: [ drawHand() ]
    },
    // Minutes hand
    Group {
        transform: [Translate {x: 122.38, y: 75.65}, Scale {x: 0.286, y: 0.286}, Rotate {angle: bind watch.angleMinutes}]
        content: [ drawHand() ]
    },

Square brackets denote a series of objects assigned to the attribute; in this case you applied several transformations simultaneously. Try altering the initial values and then running the application to see that the UI changes as you modify class attributes.

Now you need a timer. The timer implementation involves creating the attribute (which changes its value periodically using the dur operator), adding a trigger (which fires when that attribute changes), and implementing the handler code within that trigger. To that end, add the following code below the last attribute initialization line:

    stopWatch.timer = bind [1..50] dur 1000 linear while _is_running continue if true;

    trigger on stopWatch.timer = value {
        if (_is_running) {
            time = System.currentTimeMillis() - _startTime;
            updateTime();
        }
    }

    operation stopWatch.updateTime() {
        var hours: Integer = Math.floor(time / 3600000);
        var rem: Integer = time % 3600000;
        var minutes: Integer = Math.floor(rem / 60000);
        rem %= 60000;
        var seconds: Integer = Math.floor(rem / 1000);
        var milliseconds: Integer = rem % 1000;
        angleMinutes = minutes * 30 + seconds * 0.5;
        angleSeconds = seconds * 6 + milliseconds / 1000 * 6;
        timeString = "{hours format as <<00>>}:{minutes format as <<00>>}:{seconds format as <<00>>}.{milliseconds/10 format as <<00>>}";
    }

Your timer triggers display updates 50 times per second. The updateTime() operation uses JavaFX Script's format as operator to format the digital display output. The operator supports formatting using the conventions of the Java classes java.text.DecimalFormat, java.text.SimpleDateFormat, and java.util.Formatter.
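The updateTime() arithmetic is easy to check outside the UI. Below is a Python re-implementation sketch of the listing's math (the function name update_time and the returned tuple are my own; the angle and formatting rules are taken from the listing):

```python
def update_time(elapsed_ms):
    """Python sketch of stopWatch.updateTime(): split elapsed
    milliseconds into h/m/s/ms, derive the analog hand angles,
    and format the digital display string."""
    hours, rem = divmod(elapsed_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    seconds, milliseconds = divmod(rem, 1_000)
    # 30 degrees per minute and 6 degrees per second, as in the listing
    angle_minutes = minutes * 30 + seconds * 0.5
    angle_seconds = seconds * 6 + milliseconds / 1000 * 6
    text = f"{hours:02d}:{minutes:02d}:{seconds:02d}.{milliseconds // 10:02d}"
    return angle_minutes, angle_seconds, text
```

For example, 90,500 ms of elapsed time yields a minute-hand angle of 45 degrees, a second-hand angle of 183 degrees, and the display string "00:01:30.50".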
Curly brackets allow you to include code within literal strings. Although a bit unusual, the syntax is very concise. Change the initial value of the _is_running attribute to true and test the application to verify that the timer is working.

Adding the Interactivity
First add the implementations for the start(), stop(), and clear() operations:

    operation stopWatch.start() {
        _is_running = not _is_running;
        if (_is_running) {
            _startTime = System.currentTimeMillis() - time;
        } else {
            time = System.currentTimeMillis() - _startTime;
        }
    }

    operation stopWatch.stop() {
        _is_running = false;
        clear();
    }

    operation stopWatch.clear() {
        _startTime = System.currentTimeMillis();
        time = 0;
        updateTime();
    }

Note that logical negation is written with the not keyword, unlike Java and C++, where you use an exclamation point.

Next, add the appropriate event handlers to the Start/Stop and Clear buttons:

    // Start / Stop button
    ImageView {
        transform: [Translate {x: 197, y: 16.85}]
        image: Image { url: "assets/start_btn.png" }
        onMousePressed: operation(event) { watch.start(); }
    },
    // Clear button
    ImageView {
        transform: [Translate {x: 190.5, y: 193.35}]
        image: Image { url: "assets/clear_btn.png" }
        onMousePressed: operation(event) { watch.clear(); }
    },

To set an element's event handler, you assign the necessary operation to the appropriate attribute of the element. Mouse event handlers must take one argument, so you wrap the stopWatch class methods inside in-place operation definitions.

Finally, add the following attribute to the Frame element, which directs the application to call the onClose handler when the window is closed:

    onClose: operation() { watch.stop(); delete watch; }

This ensures the application shuts down correctly and prevents it from hanging on exit. Otherwise, when a user closes the application while it's running, the GUI thread will exit but the timer thread could hang and prevent the runtime from unloading correctly.
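The start()/clear() bookkeeping above can likewise be sketched and tested in plain Python with an injectable clock (the StopWatch class name and the clock parameter are illustrative, not from the article):

```python
class StopWatch:
    """Python sketch of the article's stopWatch start/clear logic.
    `clock` is any callable returning milliseconds, injected so the
    behavior can be tested deterministically."""

    def __init__(self, clock):
        self._clock = clock
        self._is_running = False
        self._start_time = clock()
        self.time = 0  # accumulated elapsed milliseconds

    def start(self):
        # Toggles the running state, like the article's start() operation:
        # on resume, back-date the start time so elapsed time carries over.
        self._is_running = not self._is_running
        if self._is_running:
            self._start_time = self._clock() - self.time
        else:
            self.time = self._clock() - self._start_time

    def clear(self):
        self._start_time = self._clock()
        self.time = 0
```

Back-dating _start_time on resume is the same trick the JavaFX listing uses: it makes pause/resume work without storing a separate accumulator.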
Deployment
Building a JavaFX project with NetBeans produces a stopwatch.jar package, which you can deploy with Java Web Start. Among other steps, the process involves registering the JNLP MIME type with your web server, for example in Apache:

    AddType application/x-java-jnlp-file JNLP

(The JFX Wiki has a step-by-step guide for deploying JavaFX applications using Java Web Start.) Click here for a live example of the JavaFX Stopwatch demo deployment.

Bringing Richness to Java UIs
In a sense, Java is the oldest internet applications platform of the three; it's just been missing the "rich" part. JavaFX Script is a big leap in that direction. The bad news is that it's slow. Classical Java often feels slow, but JavaFX Script is even slower. This might spoil all its potential advantages, especially when it comes to highly dynamic, rich user interfaces. Performance aside, however, JavaFX Script seems to be well positioned against Adobe AIR, but it loses out in the ease-of-deployment and convenience departments. JavaFX Script is more hardcore: it offers the experienced programmer more sugar, but if you just want quick results, AIR probably is a better choice.
http://www.devx.com/RichInternetApps/Article/35208/0/page/4
Instructions for Form 1041 and Schedules A, B, D, G, I, J, and K-1
U.S. Income Tax Return for Estates and Trusts

Section references are to the Internal Revenue Code unless otherwise noted.

Schedule J (Form 1041)—Accumulation Distribution for a Complex Trust ........ 24
Schedule K-1 (Form 1041)—Beneficiary’s Share of Income, Deductions, Credits, etc. ........ 26

Pending Legislation
At the time these instructions were printed, Congress was considering legislation that would:
● Change the tax treatment of capital gains effective for gains and losses after December 31, 1994.
● Limit the tax year of a new estate to a year ending on October 31, November 30, or December 31.
● Treat certain revocable trusts as part of the decedent’s estate at the election of the executor of the estate and the trustee of the trust.
● Extend to estates the section 663(b) election to treat any amount paid or credited to a beneficiary within 65 days following the close of the tax year as being paid or credited on the last day of that year.
● Extend to estates the rule in section 663(c) to treat separate shares as separate estates when figuring distributable net income.
● Generally treat as related persons an executor of an estate and a beneficiary of that estate for purposes of sections 267 and 1239.
● Provide special rules for the taxation of qualified funeral trusts for trustees that elect these rules.
Get Pub. 553, Highlights of 1995 Tax Changes, for more information.

The estimated average times are:
Recordkeeping: 40 hr., 53 min. (Form 1041); 16 hr., 1 min. (Sch. D); 39 hr., 28 min. (Sch. J); 8 hr., 22 min. (Sch. K-1)
Learning about the law or the form: 18 hr., 37 min. (Form 1041); 1 hr., 47 min. (Sch. D); 1 hr., 5 min. (Sch. J); 1 hr., 12 min. (Sch. K-1)
Preparing the form: 34 hr., 58 min. (Form 1041); 2 hr., 8 min. (Sch. D); 1 hr., 47 min. (Sch. J); 1 hr., 23 min. (Sch. K-1)
Copying, assembling, and sending the form to the IRS: 4 hr., 17 min. (Form 1041)
Contents
Pending Legislation ........ 1
Changes To Note ........ 1
General Instructions ........ 2
Purpose of Form ........ 2
Income Taxation of Trusts and Decedents’ Estates ........ 2
Definitions ........ 2
Who Must File ........ 2
Electronic Filing ........ 3
When To File ........ 3
Period Covered ........ 3
Where To File ........ 3
Who Must Sign ........ 3
Accounting Methods ........ 4
Accounting Periods ........ 4
Rounding Off to Whole Dollars ........ 4
Estimated Tax ........ 4
Interest and Penalties ........ 4
Other Forms That May Be Required ........ 5
Attachments ........ 5
Additional Information ........ 5
Unresolved Tax Problems ........ 6
Of Special Interest to Bankruptcy Trustees and Debtors-in-Possession ........ 6
Specific Instructions ........ 7
Name of Estate or Trust ........ 7
Address ........ 7
Type of Entity ........ 7
Number of Schedules K-1 Attached ........ 8
Employer Identification Number ........ 8
Date Entity Created ........ 8
Nonexempt Charitable and Split-Interest Trusts ........ 9
Initial Return, Amended Return, Final Return; or Change in Fiduciary’s Name or Address ........ 9
Pooled Mortgage Account ........ 9
Income ........ 9
Deductions ........ 10
Tax and Payments ........ 13
Schedule A—Charitable Deduction ........ 14
Schedule B—Income Distribution Deduction ........ 15
Schedule G—Tax Computation ........ 16
Other Information ........ 17
Schedule I—Alternative Minimum Tax ........ 18
Schedule D (Form 1041)—Capital Gains and Losses ........ 23

Changes To Note
● Employment taxes on wages paid to household employees are now reported on Form 1041, Schedule G, line 7, using new Schedule H (Form 1040). Prior to 1995, some of these taxes were reported and paid quarterly using Form 942, which is now obsolete. An estate that maintained a private home for the decedent’s family (or a trust that maintained a private home for a beneficiary) may owe employment taxes if the estate or trust paid someone to work in or around that home. See the instructions for Schedule G, line 7 on page 17. If the estate or trust paid these taxes in 1994, the IRS will send it a separate package in January containing Schedule H, Form W-2, and other needed items. If the estate or trust doesn’t receive a package, it can be ordered by calling 1-800-TAX-FORM (1-800-829-3676).
● Former Schedule H of Form 1041, Alternative Minimum Tax, has been redesignated as Schedule I.
● For tax years beginning in 1995, the requirement to file a return for a bankruptcy estate applies only if gross income is at least $5,775.

General Instructions

Purpose of Form
The fiduciary of a domestic decedent’s estate, trust, or bankruptcy estate uses Form 1041 to report: (a) the income, deductions, gains, losses, etc. of the estate or trust; (b) the income that is either accumulated or held for future distribution or distributed currently to the beneficiaries; (c) any income tax liability of the estate or trust; and (d) employment taxes on wages paid to household employees.

Income Taxation of Trusts and Decedents’ Estates
A trust (except a grantor type the distribution that is.

Definitions

Beneficiary. A beneficiary is an heir, a legatee, or a devisee.

Fiduciary. A fiduciary is a trustee of a trust; or an executor, executrix, administrator, administratrix, personal representative, or person in possession of property of a decedent’s estate. Note: Any reference in these instructions to “you” means the fiduciary of the estate or trust.

Trust. A trust is an arrangement created either by a will or by an inter vivos declaration by which trustees take title to property for the purpose of protecting or conserving it for the beneficiaries under the ordinary rules applied in chancery or probate courts.

Distributable Net Income (DNI). The income distribution deduction allowable to estates and trusts for amounts paid, credited, or required to be distributed to beneficiaries is limited to distributable net income (DNI). This amount, which is figured on Schedule B, line 9, is also used to determine how much of an amount paid, credited, or required to be distributed to a beneficiary will be includible in his or her gross income.

Income Required To Be Distributed Currently. Income required to be distributed currently is.

Income and Deductions in Respect of a Decedent
When completing Form 1041, you must take into account any items that are income in respect of a decedent (IRD). In general, income in respect of a decedent is income that a decedent was entitled to receive but that was not properly includible in the decedent’s final Form 1040 under the decedent’s method of accounting. IRD includes: (a) all accrued income of a decedent who reported his or her income on a cash method of accounting; (b) income accrued solely because of the decedent’s death in the case of a decedent who reported his or her income on the accrual method of accounting; and (c).
● IRD has the same character it would have had if the decedent lived and received such amount.
The following deductions and credits, when paid by the decedent’s estate, are allowed on Form 1041 even though they were not allowable on the decedent’s final Form 1040:
● Business expenses deductible under section 162.
● Interest deductible under section 163.
● Taxes deductible under section 164.
● Investment expenses described in section 212 (in excess of 2% of AGI).
● Percentage depletion allowed under section 611.
● Foreign tax credit.
For more information, see section 691.

Who Must File

Trust. The fiduciary (or one of the joint fiduciaries) must file Form 1041 for a domestic trust taxable under section 641 that has:
1. Any taxable income for the tax year, or
2. Gross income of $600 or more (regardless of taxable income), or
3. A beneficiary who is a nonresident alien.
attributable to contributions to corpus made after March 1, 1984.
If you are a fiduciary of a nonresident alien estate or foreign trust with U.S. source income, file Form 1040NR, U.S. Nonresident Alien Income Tax Return.
Bankruptcy Estate The bankruptcy trustee or debtor-in-possession must file Form 1041 for the estate of an individual involved in bankruptcy proceedings under chapter 7 or 11 of title 11 of the United States Code if the estate has gross income for the tax year of $5,775 or more. See Of Special Interest To Bankruptcy Trustees and Debtors-in-Possession on page 6 for other details. estates and trusts, file Form 1041 by the 15th day of the 4th month following the close of the tax year. If the due date falls on a Saturday, Sunday, or legal holiday, file on the next business day. For example, an estate that has a tax year that ends on June 30, 1996, must file Form 1041 by October 15, 1996. Atlanta, GA 39901 Cincinnati, OH 45999 Austin, TX 73301 Extension of Time To File Estates.—Use Form 2758, Application for Extension of Time To File Certain Excise, Income, Information, and Other Returns, to apply for an extension of time to file. U.S. Return for a Partnership, REMIC, or for Certain Trusts, for an additional extension of up to 3 months. To obtain this additional extension of time to file, you must show reasonable cause for the additional time you are requesting. Form 8800 must be filed by the extended due date for Form 1041. Ogden, UT 84201 Qualified Settlement Funds The trustee of a designated or qualified settlement fund should file Form 1120-SF, U.S. Income Tax Return for Settlement Funds. See Regulations section 1.468B-5. Fresno, CA 93888 Kansas City, MO 64999 Electronic and Magnetic Media Filing Qualified fiduciaries or transmitters may be able to file Form 1041 and related schedules electronically or on magnetic media. Tax return data may be filed electronically using telephone lines or on magnetic media using magnetic tape or floppy diskette. If you wish to do this, Form 9041, Application for Electronic/Magnetic Media Filing of Business and Employee Benefit Plan Returns, must be filed. If Form 1041 is filed electronically or on magnetic media, Form 8453-F, U.S. 
Estate or Trust Income Tax Declaration and Signature for Electronic and Magnetic Media Filing, must also be filed. For more details, get Pub. 1437, Procedures for Electronic and Magnetic Media Filing of U.S. Income Tax Returns for Estates and Trusts, Form 1041, for Tax Year 1995, and Pub. 1438, File Specifications, Validation Criteria, and Record Layouts for Electronic and Magnetic Media Filing of Estate and Trust Returns, Form 1041. To order these forms and publications, or for more information on electronic and magnetic media filing of Form 1041, call the Magnetic Media Unit at the Philadelphia Service Center at (215) 516-7533 (not a toll-free number), or write to: Internal Revenue Service Philadelphia Service Center ATTN: Magnetic Media Unit–DP 115 11601 Roosevelt Blvd. Philadelphia, PA 19154 Memphis, TN 37501 Philadelphia, PA 19255 Period Covered File the 1995 return for calendar year 1995 and fiscal years beginning in 1995 and ending in 1996. If the return is for a fiscal year or a short tax year, fill in the tax year space at the top of the form. The 1995 Form 1041 may also be used for a tax year beginning in 1996 if: 1. The estate or trust has a tax year of less than 12 months that begins and ends in 1996; and 2. The 1996 Form 1041 is not available by the time the estate or trust is required to file its tax return. However, the estate or trust must show its 1996 tax year on the 1995 Form 1041 and incorporate any tax law changes that are effective for tax years beginning after December 31, 1995. 
Where To File
For all estates and trusts, except charitable and split-interest trusts and pooled income funds, mail the return to the Internal Revenue Service Center for the place where you are located. For example: if you are located in New Jersey or New York (New York City and the counties of Nassau, Rockland, Suffolk, and Westchester), mail to Holtsville, NY 00501; if in New York (all other counties), Connecticut, Maine, Massachusetts, New Hampshire, Rhode Island, or Vermont, mail to Andover, MA 05501; if in Delaware, the District of Columbia, Maryland, Pennsylvania, Virginia, any U.S. possession, or a foreign country, mail to Philadelphia, PA 19255. Other service centers include Atlanta, GA 39901; Austin, TX 73301; Cincinnati, OH 45999; Fresno, CA 93888; and Kansas City, MO 64999. For a charitable or split-interest trust described in section 4947(a) and a pooled income fund defined in section 642(c)(5), mail to the service center assigned for those returns.

When To File
For calendar year estates and trusts, file Form 1041 and Schedules K-1 on or before April 15, 1996. For fiscal year

Who Must Sign
The fiduciary, or an authorized representative, must sign Form 1041. If you are an attorney or other individual functioning in a fiduciary capacity, leave this space blank. DO NOT enter your individual social security number (SSN). If you, as fiduciary, fill in Form 1041, leave the Paid Preparer’s space blank. See Regulations section 1.6695-1(b)(4)(iv) for details.
● Give you a copy of the return in addition to the copy to be filed with the IRS.
The fiduciary of a decedent’s estate may make a section 643(g) election only for the final year of the estate. See the instructions for line 24b for more details.

Interest and Penalties
Interest.

Rounding Off to Whole Dollars
You may show the money items on the return and accompanying schedules as whole-dollar amounts. To do so, drop amounts less than 50 cents and increase any amounts from 50 to 99 cents to the next dollar.
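The whole-dollar rounding rule above is mechanical; here is a small Python sketch of it (the helper name is invented, and amounts are taken in cents to avoid floating-point artifacts):

```python
def to_whole_dollars(cents):
    """Apply the Form 1041 rounding rule: drop amounts under 50 cents,
    round 50 to 99 cents up to the next whole dollar."""
    dollars, rem = divmod(cents, 100)
    return dollars + (1 if rem >= 50 else 0)
```

So $1.49 is shown as $1 and $1.50 as $2, matching the rule's two cases.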
Late Filing of Return
The law provides a penalty of 5% a month, or part of a month, up to a maximum of 25%, for each month the return is not filed. The penalty is imposed on the net amount due. If the return is more than 60 days late, the minimum penalty is the smaller of $100 or the tax due. The penalty will not be imposed if you can show that the failure to file on time was due to reasonable cause. If the failure is due to reasonable cause, attach an explanation to the return.

Estimated Tax
Generally, an estate or trust must pay estimated income tax for 1996 if it expects to owe, after subtracting any withholding and credits, at least $500 in tax, and it expects the withholding and credits to be less than the smaller of:
1. 90% of the tax shown on the 1996 tax return, or
2. 100% of the tax shown on the 1995 tax return (110% of that amount if the estate’s or trust’s adjusted gross income on that return is more than $150,000, and less than 2⁄3 of gross income for 1995 or 1996 is from farming or fishing).
However, if a return was not filed for 1995 or that return did not cover a full 12 months, item 2 does not apply.

Accounting Methods
For more information, get Pub. 538, Accounting Periods and Methods.

Late Payment of Tax
Generally, the penalty for not paying tax when due is 1⁄2 of 1% of the unpaid amount for each month or part of a month it remains unpaid. The maximum penalty is 25% of the unpaid amount. The penalty is imposed on the net amount due.

Exceptions
Estimated tax payments are not required from:
1. An estate of a domestic decedent or a domestic trust that had no tax liability for the full 12-month 1995 tax year;
2. A decedent’s estate for any tax year ending before the date that is 2 years after the decedent’s death; or
3.
Get Form 1041-ES, Estimated Income Tax for Estates and Trusts.

Failure To Supply Schedule K-1
The fiduciary must.

Accounting Periods
For more information, get Form 1128, Application To Adopt, Change, or Retain a Tax Year.
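The late-filing penalty rule quoted earlier in this section (5% of the net amount due per month or part of a month, up to 25%, with a minimum once the return is over 60 days late) can be sketched as follows. This is a simplified illustration, not tax software: the function name is invented, "more than 60 days" is approximated as more than 2 months, and the reasonable-cause exception and the offset against the late-payment penalty are ignored:

```python
import math

def late_filing_penalty(net_tax_due, months_late):
    """Illustrative sketch of the Form 1041 late-filing penalty:
    5% per month (or part of a month) of the net amount due,
    capped at 25% (i.e., 5 months); if more than 60 days late,
    at least the smaller of $100 or the tax due."""
    penalty = net_tax_due * 0.05 * min(math.ceil(months_late), 5)
    if months_late > 2:  # rough stand-in for "more than 60 days late"
        penalty = max(penalty, min(100, net_tax_due))
    return round(penalty, 2)
```

For instance, $1,000 due and three months late yields $150 (the 15% accrual exceeds the $100 minimum), while $50 due and three months late yields the $50 minimum, since the minimum is capped at the tax actually due.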
Underpaid Estimated Tax If the fiduciary underpaid estimated tax, get Form 2210, Underpayment of Estimated Tax by Individuals, Estates, and Trusts, to figure any penalty. Enter the amount of any penalty on line 26, Form 1041. Section 643(g) Election Fiduciaries of trusts that pay estimated tax may elect under section 643(g) to have any portion of their estimated tax payments allocated to any of the beneficiaries. Page 4 Other Penalties Other penalties can be imposed for negligence, substantial underpayment of tax, and fraud. Get Pub. 17, Your Federal Income Tax, for details on these penalties. Other Forms That May Be Required Forms W-2 and W-3, Wage and Tax Statement; and Transmittal of Income and Tax Statements. Form 56, Notice Concerning Fiduciary Relationship. 720, Quarterly Federal Excise Tax Return. Use Form 720 to report environmental excise taxes, communications and air transportation taxes, fuel taxes, luxury tax on passenger vehicles, manufacturers’ taxes, ship passenger tax, and certain other excise taxes. Caution: See Trust fund recovery penalty below. Form 940 or Form 940-EZ, Employer’s Annual Federal Unemployment (FUTA) Tax Return. The estate or trust estate or trust for some part of a day in any 20 different weeks during the calendar year. withheld and employer and employee social security and Medicare taxes on farmworkers. Caution: See Trust fund recovery penalty below. Form 945, Annual Return of Withheld Federal Income Tax. Use this form to report income tax withheld from nonpayroll payments, including pensions, annuities, IRAs, gambling winnings, and backup withholding. Caution: See Trust fund recovery penalty below. Form 1040, U.S. Individual Income Tax Return. Form 1040NR, U.S. Nonresident Alien Income Tax Return. Form 1041-A, U.S. Information Return— Trust Accumulation of Charitable Amounts. Forms 1042 and 1042-S, Pub.. Forms 8288 and 8288-A, U.S. Withholding Tax Return for Dispositions by Foreign Persons of U.S. 
Real Property Interests; employer identification number on each sheet. Do not file a copy of the decedent's will or the trust instrument unless the IRS requests it.

Additional Information
The following publications may assist you in preparing Form 1041: Pub. 448, Federal Estate and Gift Taxes; Pub. 550, Investment Income and Expenses; and Pub. 559, Survivors, Executors, and Administrators. These and other publications may be obtained at most IRS offices. To order publications and forms, call 1-800-TAX-FORM (1-800-829-3676). Or, use your computer. If you subscribe to an on-line service, ask if IRS information is available and, if so, how to access it. You can also get information through IRIS, the Internal Revenue Information Services, on FedWorld, a government bulletin board. The tax forms, instructions, publications, and other IRS information are available through IRIS. IRIS is accessible directly by calling 703-321-8020. On the Internet, you can telnet to fedworld.gov or, for file transfer protocol services, connect to ...

When To File
File Form 1041 on or before the 15th day of the 4th month following the close of the tax year. Use Form 2758 to apply for an extension of time to file.

Hearing-impaired persons who have access to TDD equipment may call 1-800-829-4059 to ask for help. The Problem Resolution Office will ensure that your problem receives proper attention. Although the office cannot change the tax law or make technical decisions, it can help you clear up problems that resulted from previous contacts.

Of Special Interest to Bankruptcy Trustees and Debtors-in-Possession

Taxation of Bankruptcy Estates of an Individual
A bankruptcy estate is a separate taxable entity created when an individual debtor files a petition under either chapter 7 or 11 of title 11 of the U.S. Code. The estate is administered by a trustee or a debtor-in-possession. If the case is later dismissed by the bankruptcy court, the debtor is treated as if the bankruptcy petition had never been filed. This provision does NOT apply to partnerships or corporations.

Who Must File
Every trustee (or debtor-in-possession) for an individual's bankruptcy estate under chapter 7 or 11 of title 11 of the U.S. Code must file a return if the bankruptcy estate has gross income of $5,775 or more for tax years beginning in 1995. Failure to do so may result in an estimated Request for Administrative Expenses being filed by the IRS in the bankruptcy proceeding or a motion to compel filing of the return.
Note: The filing of a tax return for the bankruptcy estate does not relieve the individual debtor of his or her individual tax obligations.

Employer Identification Number
Every bankruptcy estate of an individual required to file a return must have its own employer identification number (EIN). You may apply for one on Form SS-4, Application for Employer Identification Number. The social security number (SSN) of the individual debtor cannot be used as the EIN for the bankruptcy estate.

Disclosure of Return Information
The trustee may file a request for a copy or transcript of a tax form to obtain copies of the individual debtor's tax returns. If the bankruptcy case was not voluntary, disclosure cannot be made before the bankruptcy court has entered an order for relief, unless the court rules that the disclosure is needed for determining whether relief should be ordered.

Transfer of Tax Attributes From the Individual Debtor to the Bankruptcy Estate
The bankruptcy estate succeeds to the following tax attributes of the individual debtor:
1. Net operating loss (NOL) carryovers;
2. Charitable contributions carryovers;
3. Recovery of tax benefit items;
4. Credit carryovers;
5. Capital loss carryovers;
6. Basis, holding period, and character of assets;
7. Method of accounting; and
8. Other tax attributes to the extent provided by regulations.
For bankruptcy cases beginning after November 8, 1992, the bankruptcy estate succeeds to the individual debtor's unused passive activity losses, unused passive activity credits, and unused section 465 losses. For cases beginning before November 9, 1992, the individual debtor and bankruptcy estate may jointly elect to have the estate succeed to these attributes. See Regulations sections 1.1398-1 and 1.1398-2 for more details.

Income, Deductions, and Credits
Under section 1398(c), the taxable income of the bankruptcy estate generally is figured in the same manner as that of an individual. The gross income of the bankruptcy estate includes any income included in property of the estate as defined in Bankruptcy Code section 541. Also included is gain from the sale of such property.
Administrative expenses.—The bankruptcy estate is allowed a deduction for any administrative expense allowed under section 503 of title 11 of the U.S. Code, and any fee or charge assessed under chapter 123 of title 28 of the U.S. Code, to the extent not disallowed under an Internal Revenue Code provision (e.g., section 263, 265, or 275).
Administrative expense loss.—When figuring a net operating loss, nonbusiness deductions (including administrative expenses) are limited under section 172(d)(4) to the bankruptcy estate's nonbusiness income. The excess nonbusiness deductions are an administrative expense loss that may be carried back to each of the 3 preceding tax years and forward to each of the 7 succeeding tax years of the bankruptcy estate. The amount of an administrative expense loss that may be carried to any tax year is determined after the net operating loss deductions allowed for that year. An administrative expense loss is allowed only to the bankruptcy estate and cannot be carried to any tax year of the individual debtor.
Carryback of net operating losses and credits.—If the bankruptcy estate itself incurs a net operating loss (apart from losses carried forward to the estate from the individual debtor), it can carry back its net operating losses not only to previous tax years of the bankruptcy estate, but also to tax years of the individual debtor prior to the year in which the bankruptcy proceedings began. Excess credits, such as the foreign tax credit, also may be carried back to pre-bankruptcy years of the individual debtor.
Exemption.—For tax years beginning in 1995, a bankruptcy estate is allowed a personal exemption of $2,500.
Standard deduction.—For tax years beginning in 1995, a bankruptcy estate that does not itemize deductions is allowed a standard deduction of $3,275.
Discharge of indebtedness.—In a title 11 case, gross income does not include amounts that normally would be included in gross income resulting from the discharge of indebtedness. However, any amounts excluded from gross income must be applied to reduce certain tax attributes in a certain order. Attach Form 982, Reduction of Tax Attributes Due to Discharge of Indebtedness, to show the reduction of tax attributes.

Accounting Period
A bankruptcy estate is allowed to have a fiscal year. The period can be no longer than 12 months.

The IRS will notify the trustee or debtor-in-possession within 60 days from receipt of the application whether the return has been accepted as filed or selected for examination, within that period or within any additional time permitted by the bankruptcy court. See Rev. Proc. 81-17, 1981-1 C.B. 688.

Decedent's Estate

Simple Trust
A trust may qualify as a simple trust if:
1. The trust instrument requires that all income must be distributed currently;
2. The trust instrument does not provide that any amounts are to be paid, permanently set aside, or used for charitable purposes; and
3. The trust does not distribute amounts allocated to the corpus of the trust.
Special Filing Instructions for Bankruptcy Estates.—... any tax due from line 54 of Form 1040. Sign and date Form 1041.

For transfers made in trust after March 1, 1986, the grantor is treated as holding any power or interest that was held by the grantor's spouse at the time the power or interest was created, or by an individual who became the grantor's spouse after the power or interest was created. The income taxable to the grantor or another person under sections 671 through 678, and the deductions and credits that apply to that income, must be reported by that person on his or her own income tax return.

Tax Rate Schedule
Figure the tax for the bankruptcy estate using the tax rate schedule shown below. Enter the tax on Form 1040, line 38.

If taxable income is:

Over—       But not over—    The tax is:               Of the amount over—
$0          $19,500          15%                       $0
19,500      47,125           $2,925.00 + 28%           19,500
47,125      71,800           10,660.00 + 31%           47,125
71,800      128,250          18,309.25 + 36%           71,800
128,250     ..........       38,631.25 + 39.6%         128,250

Specific Instructions

Name of Estate or Trust
Copy the exact name of the estate or trust from the Form SS-4, Application for Employer Identification Number, that you used to apply for the employer identification number (EIN). If a grantor type trust (discussed below), write the name, identification number, and address of the grantor(s) or other person(s) in parentheses after the name of the trust.

Prompt Determination of Tax Liability
To request a prompt determination of the tax liability of the bankruptcy estate, the trustee or debtor-in-possession must file a written application for the determination with the IRS District Director for the district in which the bankruptcy case is pending. The application must be submitted in duplicate and executed under the penalties of perjury. The trustee or debtor-in-possession must submit with the application an exact copy of the return (or returns) filed by the trustee with the IRS for a completed tax period, and a statement of the name and location of the office where the return was filed.
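As a cross-check, the tax rate schedule above can be expressed as a short function. The bracket figures come directly from the schedule; the function itself is only an illustration of the "base tax plus rate over the bracket floor" arithmetic, not part of the official instructions.

```python
# Sketch of the 1995 bankruptcy-estate tax rate schedule shown above.
def bankruptcy_estate_tax_1995(taxable_income):
    # (bracket floor, base tax at the floor, marginal rate)
    brackets = [
        (128250, 38631.25, 0.396),
        (71800, 18309.25, 0.36),
        (47125, 10660.00, 0.31),
        (19500, 2925.00, 0.28),
        (0, 0.00, 0.15),
    ]
    for floor, base, rate in brackets:
        if taxable_income > floor or floor == 0:
            return base + rate * (taxable_income - floor)
```

For example, taxable income of $47,125 yields $2,925.00 plus 28% of $27,625, which is $10,660.00, matching the base amount the schedule shows for the next bracket.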
The envelope should be marked, “Personal Attention of the Special Procedures Function (Bankruptcy Section). DO NOT OPEN IN MAILROOM.”

Address
Include the suite, room, or other unit number after the street address. If the Post Office does not deliver mail to the street address and the fiduciary has a P.O. box, show the box number instead of the street address. If you change your address after filing Form 1041, use Form 8822, Change of Address, to notify the IRS.

A. Type of Entity
Check the appropriate box that describes the entity for which you are filing the return.
Note: There are special filing requirements for grantor type trusts and bankruptcy estates (discussed below). Generally, a family estate trust is treated as a grantor type trust.

Nonqualified deferred compensation plans.—Taxpayers may adopt and maintain grantor trusts in connection with nonqualified deferred compensation plans (sometimes referred to as “rabbi trusts”). See Rev. Proc. 92-64, 1992-2 C.B. 422.

For backup withholding purposes, the trust is considered a payor of reportable payments received by the trust. If the trust has 10 or fewer grantors, a reportable payment made to the trust is treated as a reportable payment of the same kind made to the grantors on the date the trust received the payment. If the trust has more than 10 grantors, a reportable payment made to the trust is treated as a payment of the same kind made by the trust to each grantor in an amount equal to the distribution made to each grantor on the date the grantor is paid or credited. The trustee must withhold 31% of reportable payments made to any grantor who is subject to backup withholding. For more information, see section 3406 and Temporary Regulations section 35a.9999-2, Q&A 20.

For more information, see section 1398 and Pub. 908, Tax Information on Bankruptcy.

Attach a statement to support the following:
● The calculation of the yearly rate of return.
You must also file Form 5227, Split-Interest Trust Information Return, for the pooled income fund. If a split-interest trust has any unrelated business taxable income, however, it must file Form 1041 to report all of its income and to pay any tax due.

B. Number of Schedules K-1 Attached
Every trust or decedent's estate claiming an income distribution deduction on page 1, line 18, must enter the number of Schedules K-1 (Form 1041) that are attached to Form 1041.

C. Employer Identification Number
Every estate or trust must have an EIN. To apply for one, use Form SS-4. You may get this form from the IRS or the Social Security Administration. See Pub. 583, Starting a Business and Keeping Records.

Bankruptcy Estate
A chapter 7 or 11 bankruptcy estate is a separate and distinct taxable entity from the individual debtor for Federal income tax purposes. See Of Special Interest to Bankruptcy Trustees and Debtors-in-Possession on page 6.

D. Date Entity Created
Enter the date the trust was created, or, if a decedent's estate, the date of the decedent's death.

... or estates or trusts (including a deduction for estate or gift tax purposes).

Nonexempt Charitable Trust
If it has no taxable income under Subtitle A, it may file Form 990-PF instead of Form 1041 to meet its section 6012 filing requirement. ... If it has no taxable income under Subtitle A, it can file either Form 990 or Form 990-EZ instead of Form 1041 to meet its section 6012 filing requirement.

F. Initial Return, Amended Return, Final Return; or Change in Fiduciary's Name or Address

Amended Return
If you are filing an amended Form 1041, check the “Amended return” box. Complete the entire return, correct the appropriate lines with the new information, and refigure the estate's or trust's tax liability. On an attached sheet, explain the reason for the amendments and identify the lines and amounts being changed.

Final Return
Check this box if this is a final return because the estate or trust has terminated. Also, check the “Final K-1” box at the top of Schedule K-1. If, on the final return, there are excess deductions, an unused capital loss carryover, or a net operating loss carryover, see the discussion in the Schedule K-1 instructions on page 28. Figure the deductions on an attached sheet.

G. Pooled Mortgage Account
If you bought a pooled mortgage account during the year and still have that pool at the end of the tax year, check the “Bought” box and enter the date of purchase. If you sold a pooled mortgage account that was purchased during this, or a previous, tax year, check the “Sold” box and enter the date of sale. If you neither bought nor sold a pooled mortgage account, skip this item.

Line 1—Interest Income
Report the estate's or trust's share of all taxable interest income that was received during the tax year ... line 1 of Schedule B (Form 1040) or Schedule 1 (Form 1040A).

Line 2—Dividends
... On your tax return, report on line 5 of Schedule B (Form 1040) or Schedule 1 (Form 1040A) the total dividends reported on Form 1041 and subtract it from the subtotal.
Note: Report capital gain distributions on Schedule D (Form 1041), line 10.

... amount on an attached schedule if the estate or trust has more than one item. Items to be reported on line 8 include: ...

Deductions

Amortization, Depletion, and Depreciation
A trust or decedent's estate is allowed a deduction for amortization, depletion, and depreciation only to the extent the deductions are not apportioned to the beneficiaries. For a decedent's estate, the depreciation deduction is apportioned between the estate and the heirs, legatees, and devisees on the basis of the estate's income allocable to each.
For a trust, the depreciation deduction is apportioned between the income beneficiaries and the trust on the basis of the trust income allocable to each, unless the governing instrument (or local law) requires or permits the trustee to maintain a depreciation reserve. If the trustee is required to maintain a reserve, the deduction is first allocated to the trust, up to the amount of the reserve. Any excess is allocated among the beneficiaries in the same manner as the trust’s accounting income. See Regulations section 1.167(h)-1(b). For mineral or timber property held by a decedent’s estate, the depletion deduction is apportioned between the estate and the heirs, legatees, and devisees on the basis of the estate’s income from such property allocable to each. For mineral or timber property held in trust, the depletion deduction is apportioned between the income beneficiaries and the trust based on the trust income from such property allocable to each, unless the governing instrument (or local law) requires or permits the trustee to maintain a reserve for depletion. If the trustee is required to maintain a reserve, the deduction is first allocated to the trust, up to the amount of the reserve. Any excess is allocated among the beneficiaries in the same manner as the trust’s accounting income. See Regulations section 1.611-1(c)(4). The deduction for amortization is apportioned between an estate or trust and its beneficiaries under the same principles for apportioning the deductions for depreciation and depletion. An estate or trust is not allowed to make an election under section 179 to expense certain tangible property. The deduction for the amortization of reforestation expenditures under section 194 is allowed only to an estate. The estate’s or trust’s share of amortization, depletion, and depreciation should be reported on the appropriate lines of Schedule C (or C-EZ), E, or F (Form 1040), the net income or loss from which is shown on line 3, 5, or 6 of Form 1041. 
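The reserve-then-pro-rata apportionment described above can be sketched as follows; the function name and any figures used with it are hypothetical illustrations, not part of the instructions.

```python
# Hypothetical sketch of the trust depreciation apportionment described
# above: the deduction goes first to the trust up to the required reserve,
# and any excess is divided among beneficiaries in proportion to the trust
# accounting income allocable to each.
def apportion_depreciation(deduction, reserve, income_shares):
    trust_part = min(deduction, reserve)          # reserve is filled first
    excess = deduction - trust_part               # remainder to beneficiaries
    total_income = sum(income_shares.values())
    beneficiary_parts = {
        name: excess * share / total_income       # pro rata by income share
        for name, share in income_shares.items()
    }
    return trust_part, beneficiary_parts
```

With a $1,000 deduction, a $400 reserve, and accounting income split 60/40 between two beneficiaries, the trust takes $400 and the beneficiaries take $360 and $240.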
If the deduction is not related to a specific business or activity, then report it on line 15a.

Allocation of Deductions for Tax-Exempt Income
Generally, no deduction that would otherwise be allowable is allowed for any expense (whether for business or for the production of income) that is allocable to tax-exempt income. Examples of tax-exempt income include:
● Certain death benefits (section 101);
● Interest on state or local bonds (section 103);
● Compensation for injuries or sickness (section 104); and
● Income from discharge of indebtedness in a title 11 case (section 108).
Exception. State income taxes and business expenses that are allocable to tax-exempt interest are deductible.
Expenses that are directly allocable to tax-exempt income are allocated only to tax-exempt income. A reasonable proportion of expenses indirectly allocable to both tax-exempt income and other income must be allocated to each class of income.

Line 4—Capital Gain or (Loss)
Enter the gain from Schedule D (Form 1041), Part III, line 17, column (c); or the loss from Part IV, line 18.
Note: Do not substitute Schedule D (Form 1040) for Schedule D (Form 1041).

Line 5—Rents, Royalties, Partnerships, Other Estates and Trusts, etc.
Use Schedule E (Form 1040), Supplemental Income and Loss, to report the estate's or trust's share of income or (losses) from rents, royalties, partnerships, S corporations, other estates and trusts, and REMICs.

Deductions That May Be Allowable for Estate Tax Purposes

Line 6—Farm Income or (Loss)
If the estate or trust operated a farm, use Schedule F (Form 1040), Profit or Loss From Farming, to report farm income and expenses. Enter the net profit or (loss) from Schedule F on line 6.

Line 7—Ordinary Gain or (Loss)
Enter from line 20, Form 4797, Sales of Business Property, the ordinary gain or loss from the sale or exchange of property other than capital assets and also from involuntary conversions (other than casualty or theft).

Accrued Expenses
Line 8—Other Income
Enter other items of income not included on lines 1 through 7. List the type and ...

There are exceptions for recurring items. See section 461(h).

Passive Activity Loss and Credit Limitations
Section 469 and the regulations thereunder generally limit losses from passive activities to the amount of income derived from all passive activities. Similarly, credits from passive activities are generally limited to the tax attributable to such activities. These limitations are first applied at the estate or trust level.
Generally, an activity is a passive activity if it involves the conduct of any trade or business, and the taxpayer does not materially participate in the activity. Passive activities do not include working interests in oil and gas properties. See section 469(c)(3). For a grantor trust, material participation is determined at the grantor level.
Generally, rental activities are passive activities, whether or not the taxpayer materially participates. However, certain taxpayers who materially participate in real property trades or businesses are not subject to the passive activity limitations on losses from rental real estate activities in which they materially participate. For more details, see section 469(c)(7).
Note: Material participation standards for estates and trusts had not been established by regulations at the time these instructions went to print.
For tax years of an estate ending less than 2 years after the decedent's date of death, up to $25,000 of deductions and deduction equivalents of credits from rental real estate activities in which the decedent actively participated is allowed. Any excess losses and/or credits are suspended for the year and carried forward.
For more information, get Pub. 925, Passive Activity and At-Risk Rules. Use Form 8582, Passive Activity Loss Limitations, to figure the amount of losses allowed from passive activities. See Form 8582-CR, Passive Activity Credit Limitations, to figure the amount of credit allowed for the current year.
If the estate or trust distributes an interest in a passive activity, the basis of the property immediately before the distribution is increased by the passive activity losses allocable to the interest, and such losses cannot be deducted. See section 469(j)(12).
Note: Losses from passive activities are first subject to the at-risk rules. When the losses are deductible under the at-risk rules, the passive activity rules then apply.

... paid or incurred. Personal interest is not deductible. Examples of personal interest include interest paid on:
● Revolving charge accounts.
● Personal notes for money borrowed from a bank, credit union, or other person.
● Installment loans on personal use property.
Investment interest.—Generally, investment interest is interest (including amortizable bond premium on taxable bonds acquired after October 22, 1986, but before January 1, 1988) that is paid or incurred on indebtedness that is properly allocable to property held for investment. Investment interest does not include any qualified residence interest, or interest that is taken into account under section 469. Complete Form 4952 to figure the allowable investment interest deduction. If you must complete Form 4952, check the box on line 10 and attach Form 4952. Then, add the deductible investment interest to the other types of deductible interest and enter the total on line 10.
Qualified residence interest.—For interest paid or incurred on a qualified residence, see Pub. 936, Home Mortgage Interest Deduction, for an explanation of the general rules for deducting home mortgage interest. See section 163(h)(3) for a definition of qualified residence interest and for limitations on indebtedness.

Line 11—Taxes
Enter any deductible taxes paid or incurred during the tax year that are not deductible elsewhere on Form 1041. Deductible taxes include:
● State and local income or real property taxes.
● The generation-skipping transfer (GST) tax imposed on income distributions.
Do not deduct:
● Federal income taxes.
● Estate, inheritance, legacy, succession, and gift taxes.
● Federal duties and excise taxes.
● State and local sales taxes. Instead, treat these taxes as part of the cost of the property.

Line 12—Fiduciary Fees
Enter the deductible fees paid or incurred to the fiduciary for administering the estate or trust during the tax year.
Note: Fiduciary fees deducted on Form 706 cannot be deducted on Form 1041.
Casualty and theft losses.—Use Form 4684, Casualties and Thefts, to figure any deductible casualty and theft losses.
Deduction for clean-fuel vehicles.—Section 179A allows a deduction for part of the cost of qualified clean-fuel vehicle property. Get Pub. 535, Business Expenses, for more details.
Net operating loss deduction (NOLD).—An estate or trust is allowed the net operating loss deduction (NOLD) under section 172. If you claim an NOLD for the estate or trust, figure the deduction on a separate sheet and attach it to this return.
Estate's or trust's share of amortization, depreciation, and depletion not claimed elsewhere.—If you cannot deduct the amortization, depreciation, and depletion as rent or royalty expenses on Schedule E (Form 1040), or as business or farm expenses on Schedule C, C-EZ, or F (Form 1040), itemize the fiduciary's share of the deductions on an attached sheet and include them on line 15a. Itemize each beneficiary's share of the deductions and report them on the appropriate line of Schedule K-1 (Form 1041).

Line 15a—Other Deductions NOT Subject to the 2% Floor
Attach your own schedule, listing by type and amount, all allowable deductions that are not deductible elsewhere on Form 1041. Do not include any losses on worthless bonds and similar obligations or nonbusiness bad debts. Report these losses on Schedule D (Form 1041); for more information, see Pub. 550. If you claim a bond premium deduction for the estate or trust, figure the deduction on a separate sheet and attach it to Form 1041.
Line 15b—Allowable Miscellaneous Itemized Deductions Subject to the 2% Floor
Miscellaneous itemized deductions are deductible only to the extent that the aggregate amount of such deductions exceeds 2% of adjusted gross income (AGI). Miscellaneous itemized deductions do not include deductions for:
● Interest under section 163.
● Taxes under section 164.
● The amortization of bond premium under section 171.
● Estate taxes attributable to income in respect of a decedent under section 691(c).
For other exceptions, see section 67(b).
For estates and trusts, the AGI is figured by subtracting the following from total income on line 9 of page 1:
1. The administration costs of the estate or trust (the total of lines 12, 14, and 15a to the extent they are costs incurred in the administration of the estate or trust) that would not have been incurred if the property were NOT held by the estate or trust;
2. The income distribution deduction (line 18);
3. The amount of the exemption (line 20);
4. The deduction for clean-fuel vehicles claimed on line 15a; and
5. The net operating loss deduction claimed on line 15a.
For those estates and trusts whose income distribution deduction is limited to the actual distribution, and NOT the DNI (i.e., the income distribution is less than the DNI), use the amount of the actual distribution when figuring AGI. For those estates and trusts whose income distribution deduction is limited to the DNI, the DNI must be figured taking into account the allowable miscellaneous itemized deductions (AMID) after application of the 2% floor. In this situation there are two unknown amounts: (a) the AMID; and (b) the DNI.
Example.—A trust has total income of $35,000 for 1995. The trust instrument provides that capital gains are added to corpus. 50% of the fiduciary fees are ... Figure the DNI, taking into account the allowable miscellaneous itemized deductions, to determine the amount to enter on line 18:
AGI = (line 9) – (the administration costs that would not have been incurred if the property were NOT held by the estate or trust) – (line 18) – (line 20).
Note: There are no other deductions claimed by the trust on line 15a that are deductible in arriving at AGI.
In the above example:
AGI = 35,000 – 2,000 – DNI – 100
Since the value of line 18 is not known because it is limited to the DNI, you are left with the following:
AGI = 32,900 – DNI
... capital loss from line 4); less total deductions from line 16 (excluding ...
1.02AMID = 1,102
AMID = 1,080
DNI = 11,920 (i.e., 13,000 – 1,080)
AGI = 20,980 (i.e., 32,900 – 11,920)
Note: The income distribution deduction is equal to the smaller of the actual distribution or the DNI.

... than $5 per gravesite, paid for maintenance of cemetery property. To the right of the entry space for line 18, enter the number of gravesites. Also write “Section 642(i) trust” in parentheses after the trust's name at the top of Form 1041. You do not have to complete Schedules B of Form 1041 and K-1 (Form 1041).

... Enter the estate's or trust's share of these deductions on line 19.

On the termination of the estate or trust, any unused NOL carryover that would be allowable to the estate or trust in a later tax year, but for the termination, is allowed to the beneficiaries succeeding to the property of the estate or trust. See the instructions for Schedule K-1, lines 12d and 12e.
Excess deductions on termination.—If the estate or trust has for its final year deductions (excluding the charitable deduction and exemption) in excess of its gross income, the excess is allowed as an itemized deduction to the beneficiaries succeeding to the property of the estate or trust. However, an unused NOL carryover that is allowed to beneficiaries (as explained in the above paragraph) cannot also be treated as an excess deduction. If the final year of the estate or trust is also the last year of the NOL carryover period, the NOL carryover not absorbed in that tax year by the estate or trust is included as an excess deduction. See the instructions for Schedule K-1, line 12a.
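The circular computation in the example above can be checked by solving its two equations (AGI = 32,900 − DNI and DNI = 13,000 − AMID) simultaneously. The $1,500 figure for total miscellaneous deductions below is inferred from the line "1.02AMID = 1,102" and is an assumption, not stated explicitly in the text.

```python
# Solving the simultaneous equations from the example above:
#   AGI  = 32900 - DNI
#   DNI  = 13000 - AMID
#   AMID = misc_deductions - 0.02 * AGI   (the 2% floor)
# Substituting: 1.02 * AMID = misc_deductions - 0.02 * (32900 - 13000).
misc_deductions = 1500  # inferred from "1.02AMID = 1,102" (assumption)
amid = (misc_deductions - 0.02 * (32900 - 13000)) / 1.02
dni = 13000 - round(amid)
agi = 32900 - dni
# Rounded, amid is 1,080; dni is 11,920; agi is 20,980 -- as in the example.
```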
Line 24a—1995 Estimated Tax Payments and Amount Applied From 1994 Return
Enter the amount of any estimated tax payment you made with Form 1041-ES for 1995 plus the amount of any overpayment from the 1994 return that was applied to the 1995 estimated tax. If the estate ...

Line 20—Exemption
Decedent's estates.—A decedent's estate is allowed a $600 exemption.
Trusts.—A trust whose governing instrument requires that all income be distributed currently is allowed a $300 exemption, even if it distributed amounts other than income during the tax year. All other trusts are allowed a $100 exemption. See Regulations section 1.642(b)-1.

Line 18—Income Distribution Deduction
If the estate or trust claims an income distribution deduction, complete and attach:
● Parts I and II of Schedule I to refigure the deduction on a minimum tax basis; AND
● Schedule K-1 (Form 1041) for each beneficiary to which a distribution was made or required to be made.
Cemetery perpetual care fund.—On line 18, deduct the amount, not more ...

Line 24b—Estimated Tax Payments Allocated to Beneficiaries
... line 13a. Failure to file Form 1041-T by the due date (March 5, 1996, for calendar year estates and trusts) will result in an invalid election. An invalid election will require the filing of amended Schedules K-1 for each beneficiary who was allocated a payment of estimated tax.

Tax and Payments

Line 22—Taxable Income
Net operating loss.—If line 22 is a loss, the estate or trust may have a net operating loss (NOL). Do not include the deductions claimed on lines 13, 18, and 20 when figuring the amount of the NOL. An NOL generally may be carried back to the 3 prior tax years and forward to the following 15 tax years. Complete Schedule A of Form 1045, Application for Tentative Refund, to figure the amount of the NOL that is available for carryback or carryover. Use Form 1045 or file an amended return to apply for a refund based on an NOL carryback. For more information, get Pub. 536, Net Operating Losses.
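The carryback and carryforward periods stated above (3 years back, 15 years forward, the rule in effect for these instructions) can be listed mechanically; the function name is illustrative only.

```python
# Sketch of the NOL carryover periods stated above: an NOL generally may be
# carried back to the 3 prior tax years and forward to the following 15.
def nol_years(loss_year):
    carryback = list(range(loss_year - 3, loss_year))        # 3 prior years
    carryforward = list(range(loss_year + 1, loss_year + 16))  # next 15 years
    return carryback, carryforward
```

For a 1995 loss year, the carryback years are 1992 through 1994 and the carryforward years run from 1996 through 2010.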
Attach Form 1041-T to your return ONLY if you have not yet filed it. If you have already filed Form 1041-T, do not attach a copy to your return.

Line 24d—Tax Paid With Extension of Time To File
If you filed either Form 2758 (for estates only), Form 8736, or Form 8800 to request an extension of time to file Form 1041, enter the amount that you paid with the extension request and check the appropriate box(es).

Line 24e—Federal Income Tax Withheld
Use line 24e to claim a credit for any Federal income tax withheld (and not repaid) by: (a) an employer on wages and salaries of a decedent received by the decedent's estate; or (b) a payer of certain gambling winnings (e.g., ...).
Backup withholding.—If the estate or trust received a 1995 Form 1099 showing Federal income tax withheld (i.e., backup withholding) on interest income, dividends, or other income, check the box and include the amount withheld on income retained by the estate or trust in the total for line 24e. Report on Schedule K-1 (Form 1041), line 13, any credit for backup withholding on income distributed to the beneficiary.

Line 24f—Credit From Regulated Investment Companies
Attach copy B of Form 2439, Notice to Shareholder of Undistributed Long-Term Capital Gains. ... Get Pub. 378, Fuel Tax Credits and Refunds, for more information.

Line 26—Underpayment of Estimated Tax
If line 27 is at least $500 and more than 10% of the tax shown on Form 1041, or the estate or trust underpaid its 1995 estimated tax liability for any payment period, it may owe a penalty. See Form 2210 to determine whether the estate or trust owes a penalty and to figure the amount of the penalty.
Note: The penalty may be waived under certain conditions. Get Pub. 505, Tax Withholding and Estimated Tax, for details.

Line 27—Tax Due
You must pay the tax in full when the return is filed. Make the check or money order payable to “Internal Revenue Service.” Write the EIN and “1995 Form 1041” on the payment. Enclose, but do not attach, the payment with Form 1041.

Line 29a—Credit to 1996 Estimated Tax
Enter the amount from line 28 that you want applied to the estate's or trust's 1996 estimated tax.

Schedule A—Charitable Deduction

General Instructions
Generally, any part of the gross income of an estate or trust (other than a simple trust) that, under ... figure the charitable deduction.
Election to treat contributions as paid in the prior tax year.—The fiduciary of an estate or trust may elect to treat as paid during the tax year any amount of gross income received during that tax year or any prior tax year that was paid in the next tax year for a charitable purpose. To make the election, the fiduciary must file a statement with Form 1041 for the tax year in which the contribution is treated as paid. This statement must include:
1. The name and address of the fiduciary;
2. The name of the estate or trust;
3. An indication that the fiduciary is making an election under section 642(c)(1) for contributions treated as paid during such tax year;
4. The name and address of each organization to which any such contribution is paid; and
5. The amount of each contribution and date of actual payment or, if applicable, the total amount of contributions paid to each organization during the next tax year, to be treated as paid in the prior tax year.
The election must be filed by the due date (including extensions) for Form 1041 for the next tax year. For more information about the charitable deduction, see section 642(c) and related regulations.

Specific Instructions

Line 1—Amounts Paid for Charitable Purposes From Gross Income
Do not include any capital gains for the tax year allocated to corpus and paid or permanently set aside for charitable purposes. Instead, enter these amounts on line 6.

Line 2—Amounts Permanently Set Aside for Charitable Purposes From Gross Income
Estates, and certain trusts, may claim a deduction for amounts permanently set aside for a charitable purpose from gross income. Do not include any capital gains for the tax year allocated to corpus and paid or permanently set aside for charitable purposes. Instead, enter these amounts on line 6.

Line 4—Tax-Exempt Income Allocable to Charitable Contributions ...

Line 6—Capital Gains for the Tax Year Allocated to Corpus and Paid or Permanently Set Aside for Charitable Purposes
Enter the total of all capital gains for the tax year that are:
● Allocated to corpus; and
● Paid or permanently set aside for charitable purposes.

... information, see section 663(c) and related regulations.

Specific Instructions

Line 1—Adjusted Total Income
If the amount on line 17 of page 1 is a loss that is attributable wholly or in part to the capital loss limitation rules under section 1211(b) (line 4), then enter as a negative amount on line 1, Schedule B, the smaller of the loss from line 17 on page 1, or the loss from line 4 on page 1. If the line 17 loss is not attributable to the capital loss on line 4, enter zero.
If you are filing for a simple trust, subtract from adjusted total income any extraordinary dividends or taxable stock dividends included on page 1, line 2, and determined under the governing instrument and applicable local law to be allocable to corpus.

Line 2—Adjusted Tax-Exempt Interest
To figure the adjusted tax-exempt interest: Step 1. Add tax-exempt interest income on line 4. ...

Line 3
Include all capital gains, whether or not they are distributed, that are attributable to income under the governing instrument or local law.
For example, if the trustee distributed 50% of the current year's capital gains to the income beneficiaries (and reflects this amount in column (a), line 17 of Schedule D (Form 1041)), but under the governing instrument all capital gains are attributable to income, then include 100% of the capital gains on line 3. If the amount on Schedule D (Form 1041), line 17, column (a) is a net loss, enter zero.

Line 5
In figuring the amount of long-term capital gain for the tax year included on Schedule A, line 3, the specific provisions of the governing instrument control if the instrument specifically provides as to the source from which amounts are paid, permanently set aside, or to be used for charitable purposes. In all other cases, determine the amount to enter by multiplying line 3 of Schedule A by a fraction, the numerator of which is the amount of long-term capital gains that are included in the accounting income of the estate or trust (i.e., not allocated to corpus) AND are distributed to charities, and the denominator of which is all items of income (including the amount of such long-term capital gains) included in the DNI.

Line 6
Figure line 6 in a similar manner as line 5.

Line 10—Accounting Income

Lines 11 and 12
Do not include any:
● Amounts deducted on the prior year's return that were required to be distributed in the prior year; or
● Amounts paid or permanently set aside for charitable purposes or otherwise qualifying for the charitable deduction.

Line 11—Income Required To Be Distributed Currently
Line 11 distributions are referred to as first tier distributions and are deductible by the estate or trust to the extent of the DNI. The beneficiary includes such amounts in his or her income to the extent of his or her proportionate share of the DNI.

Schedule B—Income Distribution Deduction
General Instructions
Note: Use Schedule I. Any deduction or loss that is applicable solely to one separate share of the trust is not available to any other share of the same trust.
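The line 5 allocation described above is a simple pro-rata computation. As an illustrative sketch (the variable names and dollar figures below are hypothetical, not official form fields):

```python
# Hypothetical figures, for illustration only -- not official form values.
schedule_a_line3 = 10_000.0  # charitable contribution from current-year income
# Numerator: long-term capital gains in accounting income (not corpus)
# that were distributed to charities.
ltcg_to_charity = 4_000.0
# Denominator: all items of income (including those gains) included in the DNI.
dni_total_income = 20_000.0

# Line 5 = line 3 x (numerator / denominator), absent specific
# governing-instrument provisions controlling the source of the payment.
line5 = schedule_a_line3 * (ltcg_to_charity / dni_total_income)
print(round(line5, 2))
```

With these figures the fraction is 4,000/20,000, so one-fifth of the line 3 amount is treated as long-term capital gain.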
If Form 1041-T was filed to elect to treat estimated tax payments as made by a beneficiary, the payments are treated as paid or credited to the beneficiary on the last day of the tax year and must be included on line 12.

Unless a section 643(e)(3) election is made, the value of all noncash property actually paid, credited, or required to be distributed to any beneficiaries is the smaller of:
1. The estate's or trust's adjusted basis in the property immediately before distribution, plus any gain or minus any loss recognized by the estate or trust on the distribution (basis of beneficiary), or
2. The fair market value (FMV) of such property.
If a section 643(e)(3) election is made by the fiduciary, then the amount entered on line 12 will be the FMV of the property.

A fiduciary of a complex trust may elect to treat any amount paid or credited to a beneficiary within 65 days following the close of the tax year as being paid or credited on the last day of that tax year. To make this election, see the instructions for Question 6 on page 18.

The beneficiary includes the amounts on line 12 in his or her income only to the extent of his or her proportionate share of the DNI.
Complex trusts.—If the second tier distributions exceed the DNI allocable to the second tier, the trust may have an accumulation distribution. See the line 13 instructions below.

Line 13—Total Distributions
If line 13 is more than line 10 and you are filing for a complex trust, complete Schedule J (Form 1041) and file it with Form 1041 unless the trust has no previously accumulated income.

Line 14—Adjustment for Tax-Exempt Income
In figuring the income distribution deduction, the estate or trust is not allowed a deduction for any item of the DNI that is not included in the gross income of the estate or trust.
Thus, for purposes of figuring the allowable income distribution deduction, the DNI (line 9) is figured without regard to any tax-exempt interest. Figure the adjustment by multiplying line 2 by a fraction, the numerator of which is the total distributions (line 13), and the denominator of which is the DNI (line 9). Enter the result on line 14. If line 13 includes tax-exempt income other than tax-exempt interest, figure line 14 by subtracting the total of the following from tax-exempt income included on line 13:
1. The charitable contribution deduction allocable to such tax-exempt income, and
2. Expenses allocable to tax-exempt income.
Expenses that are directly allocable to tax-exempt income are allocated only to tax-exempt income. A reasonable proportion of expenses indirectly allocable to both tax-exempt income and other income must be allocated to each class of income.

Line 17—Income Distribution Deduction
The income distribution deduction determines the amount of income that will be taxed to the beneficiaries. The total amount of income for regular tax purposes that is reflected on line 7 of the individual beneficiaries' Schedules K-1 should equal the amount claimed on line 17.

If you completed Schedule D (Form 1041), enter the tax from line 45 of Schedule D, and check the "Schedule D" box.

Line 1b
Other taxes.—Include any additional tax from the following:
● Form 4970, Tax on Accumulation Distribution of Trusts.
● Form 4972, Tax on Lump-Sum Distributions.
● The section 644 tax, which applies if:
1. There is an includible gain (defined below) recognized by the trust; and
2. At the time the trust received the property, the property had an FMV in excess of its adjusted basis.
When figuring the trust's taxable income, exclude the amount of any includible gain minus any deductions allocable to the gain.

Line 2a—Foreign Tax Credit
Attach Form 1116, Foreign Tax Credit (Individual, Estate, Trust, or Nonresident Alien Individual), if you elect to claim credit for income or profits taxes paid or accrued to a foreign country or a U.S. possession.
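The line 14 adjustment described above multiplies the adjusted tax-exempt interest by the ratio of total distributions to DNI. A minimal sketch with hypothetical figures (variable names are illustrative, not official form fields):

```python
# Illustrative figures only -- not official form values.
line2_adjusted_tax_exempt_interest = 1_200.0
line13_total_distributions = 9_000.0
line9_dni = 12_000.0

# Line 14 adjustment = line 2 x (line 13 / line 9), per the fraction above.
line14 = line2_adjusted_tax_exempt_interest * (
    line13_total_distributions / line9_dni
)
print(round(line14, 2))
```

Here three-quarters of the distributions are deemed to carry out DNI, so the same proportion of the adjusted tax-exempt interest becomes the adjustment.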
Schedule G—Tax Computation
Line 1a
Tax rate schedule.—For tax years beginning in 1995, figure the tax using the Tax Rate Schedule below. Enter the tax on line 1a and check the "Tax rate schedule" box.

1995 Tax Rate Schedule
If the amount on line 22, page 1, is:

Over—     But not over—   Enter on line 1a:        Of the amount over—
$0        $1,550          15%                      $0
1,550     3,700           $232.50 + 28%            1,550
3,700     5,600           834.50 + 31%             3,700
5,600     7,650           1,423.50 + 36%           5,600
7,650                     2,161.50 + 39.6%         7,650

Schedule D.—If the estate or trust had a net capital gain and taxable income of more than $3,700, complete Part VI of Schedule D (Form 1041).

The estate or trust may claim credit for that part of the foreign taxes not allocable to the beneficiaries (including charitable beneficiaries). Enter the estate's or trust's share of the credit on line 2a. See Pub. 514, Foreign Tax Credit for Individuals, for details.

Line 2b—Nonconventional Source Fuel Credit
If the estate or trust can claim any section 29 credit for producing fuel from a nonconventional source, figure the credit on a separate sheet and attach it to the return. Include the credit on line 2b.
Qualified electric vehicle credit.—Use Form 8834, Qualified Electric Vehicle Credit, if the estate or trust can claim a credit for the purchase of a new qualified electric vehicle. Include the credit on line 2b.

Line 2c—General Business Credit
Complete this line if the estate or trust is claiming any of the credits listed below. Use the appropriate credit form to figure the credit. If the estate or trust is claiming only one credit, enter the form number and the amount of the credit in the space provided. If the estate or trust is claiming more than one credit (not including the empowerment zone employment credit), a credit from a passive activity (other than the low-income housing credit or the empowerment zone employment credit), or a credit carryforward, also complete Form 3800, General Business Credit, to figure the total credit and enter the amount from Form 3800 on line 2c.
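The Tax Rate Schedule above is a standard bracket computation: a base tax at each bracket floor plus a marginal rate on the excess. A sketch using the thresholds and base amounts from the schedule (the function name is illustrative):

```python
# Sketch of the 1995 Form 1041 Tax Rate Schedule for estates and trusts.
# Thresholds, base amounts, and rates are taken from the schedule above.
def tax_1995_form_1041(taxable_income: float) -> float:
    brackets = [  # (bracket floor, base tax at floor, marginal rate)
        (0.0,       0.00,   0.15),
        (1_550.0,  232.50,  0.28),
        (3_700.0,  834.50,  0.31),
        (5_600.0, 1_423.50, 0.36),
        (7_650.0, 2_161.50, 0.396),
    ]
    # Walk from the top bracket down to find the one containing the income.
    for floor, base, rate in reversed(brackets):
        if taxable_income > floor:
            return base + rate * (taxable_income - floor)
    return 0.0

print(tax_1995_form_1041(5_000.0))  # 834.50 + 0.31 x (5,000 - 3,700)
```

Note that each base amount equals the tax accumulated through the brackets below it (e.g., 232.50 + 28% of (3,700 - 1,550) = 834.50), which is a useful internal consistency check.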
Also, be sure to check the box for Form 3800. Do not include any amounts that are allocated to a beneficiary. Credits that are allocated between the estate or trust and the beneficiaries are listed in the instructions for Schedule K-1, line 13, on page 28. Generally, these credits are apportioned on the basis of the income allocable to the estate or trust and the beneficiaries.
● Investment credit (Form 3468).
● Jobs credit (Form 5884).
● Credit for alcohol used as fuel (Form 6478).
● Credit for increasing research activities (Form 6765).
● Low-income housing credit (Form 8586).
● Disabled access credit (Form 8826).
● Enhanced oil recovery credit (Form 8830).
● Renewable electricity production credit (Form 8835).
● Empowerment zone employment credit (Form 8844).
● Indian employment credit (Form 8845).
● Credit for employer social security and Medicare taxes paid on certain employee tips (Form 8846).
● Credit for contributions to selected community development corporations (Form 8847).

Line 2d—Credit for Prior Year Minimum Tax
An estate or trust that paid alternative minimum tax in a previous year may be eligible for a minimum tax credit in 1995. See Form 8801, Credit for Prior Year Minimum Tax—Individuals, Estates, and Trusts.

Line 5—Recapture Taxes
Recapture of investment credit.—If the estate or trust disposed of investment credit property or changed its use before the end of its useful life or recovery period, get Form 4255, Recapture of Investment Credit, to figure the recapture tax allocable to the estate or trust.
Recapture of low-income housing credit.—If the estate or trust disposed of property (or there was a reduction in the qualified basis of the property) on which the low-income housing credit was claimed, get Form 8611, Recapture of Low-Income Housing Credit, to figure any recapture tax allocable to the estate or trust.
Recapture of qualified electric vehicle credit.—If the estate or trust claimed the qualified electric vehicle credit in a prior tax year for a vehicle that ceased to qualify for the credit, part or all of the credit may have to be recaptured. See Pub. 535 for details. If the estate or trust owes any recapture tax, include it on line 5 and write "QEV" on the dotted line to the left of the entry space.
Recapture of the Indian employment credit.—Generally, if the estate or trust terminates the employment of a qualified employee less than 1 year after the date of initial employment, any Indian employment credit allowed for a prior tax year by reason of wages paid or incurred to that employee must be recaptured. See Form 8845 for details. If the estate or trust owes any recapture tax, include it on line 5 and write "45A" on the dotted line to the left of the entry space.

Line 7—Household Employment Taxes
If any of the following apply, get Schedule H (Form 1040), Household Employment Taxes, and its instructions, to see if the estate or trust owes these taxes.
1. The estate or trust paid any one household employee cash wages of $1,000 or more in 1995.
2. The estate or trust withheld Federal income tax during 1995 at the request of any household employee.
3. The estate or trust paid total cash wages of $1,000 or more in any calendar quarter of 1994 or 1995 to household employees.

Line 8
Form 5329, Additional Taxes Attributable to Qualified Retirement Plans (Including IRAs), Annuities, and Modified Endowment Contracts.—If the estate or trust fails to receive the minimum distribution under section 4974, use Form 5329 to pay the excise tax. To the left of the entry space, write "From Form 5329" and the amount of the tax.

Report the amount of tax-exempt interest income received or accrued in the space provided below Question 1. Also, include any exempt-interest dividends the estate or trust received as a shareholder in a mutual fund or other regulated investment company.
Question 2
All salaries, wages, and other compensation for personal services must be included on the return of the person who earned the income. If you answered "Yes" to Question 2, see the Grantor Type Trust instructions on page 7.

Form 3520 must be filed to report the transfer of property to a foreign trust, or the creation of a foreign trust. Form 3520-A, Annual Return of Foreign Trust With U.S. Beneficiaries, must be filed under section 6048(c) by any U.S. person who directly or indirectly transfers property to a foreign trust (with certain exceptions) that has one or more U.S. beneficiaries.

● Complete Schedule I, Parts I and III, if the decedent's estate's or trust's share of alternative minimum taxable income (Part I, line 12) exceeds $22,500.

Recordkeeping.—Keep a copy of Schedule I for 1996.

Question 3
If you answered "Yes" to Question 3, file Form TD F 90-22.1 by June 30, 1996, with the Department of the Treasury at the address shown on the form. Form TD F 90-22.1 is not a tax return, so do not file it with Form 1041. You may order Form TD F 90-22.1 by calling 1-800-829-3676 (1-800-TAX-FORM).

Question 5
An estate or trust claiming an interest deduction for qualified residence interest (as defined in section 163(h)(3)) on seller-provided financing must include on an attachment to the 1995 Form 1041 the name, address, and taxpayer identifying number of the person to whom the interest was paid or accrued (i.e., the seller). If the estate or trust received or accrued such interest, it must provide identical information on the person liable for such interest (i.e., the buyer). This information does not need to be reported if it duplicates information already reported on Form 1098.

Question 6
To make the section 663(b) election for a complex trust to treat any amount paid or credited to a beneficiary within 65 days following the close of the tax year as being paid or credited on the last day of that tax year, check the box. For the election to be valid, you must file Form 1041 by the due date (including extensions). Once made, the election is irrevocable.
Credit for Prior Year Minimum Tax
Estates and trusts that paid alternative minimum tax in 1994, or had a minimum tax credit carryforward, may be eligible for a minimum tax credit in 1995. See Form 8801.

Question 7
To make the section 643(e)(3) election to recognize gain on property distributed in kind, check the box and see the Instructions for Schedule D (Form 1041).

Partners, Shareholders, etc.
An estate or trust that is a partner in a partnership or a shareholder in an S corporation must take into account its share of items of income and deductions that enter into the computation of its adjustments and tax preference items.

Question 8
If the decedent's estate has been open for more than 2 years, check the box and attach an explanation for the delay in closing the estate.

Schedule I—Alternative Minimum Tax
General Instructions
Use Schedule I to compute:
1. The estate's or trust's alternative minimum taxable income;
2. The income distribution deduction on a minimum tax basis; and
3. The estate's or trust's alternative minimum tax (AMT).

Who Must Complete
● Complete Schedule I, Parts I and II, if the decedent's estate or trust is required to complete Schedule B.

Optional Write-Off Period Under Section 59(e)
The estate or trust may elect under section 59(e) to use an optional 10-year (60-month for intangible drilling and development expenditures and 3-year for circulation expenditures) write-off period for certain expenditures. See section 59(e) for details.

Refigure gross income from property held for investment, any net gain from the disposition of property held for investment, and any investment expenses, taking into account all AMT adjustments and tax preference items that apply. Include any interest income and investment expenses from private activity bonds issued after August 7, 1986.

Include any refund received in 1995 of taxes described for line 4b above that were deducted in a tax year after 1986.
Line 4e—Depreciation of Property Placed in Service After 1986
Caution: Do not include on this line any depreciation adjustment from: (a) an activity for which you are not at risk; (b) a partnership or an S corporation if the basis limitations under section 704(d) or 1366(d) apply; (c) a tax shelter farm activity; or (d) a passive activity. Instead, take these depreciation adjustments into account when figuring the adjustments on line 4l, 4m, or 4n, whichever applies.

For AMT purposes, the depreciation deduction for tangible property placed in service after 1986 (or after July 31, 1986, if an election was made) must be refigured under the alternative depreciation system (ADS) described in section 168(g). For property other than residential rental and nonresidential real property, use the 150% declining balance method (switching to the straight line method in the first tax year when that method gives a better result). However, use the straight line method if that method was used for regular tax purposes. Generally, ADS depreciation is figured over the class life of the property. For tangible personal property not assigned a class life, use 12 years. See Pub. 946, How To Depreciate Property, for a discussion of class lives.

For residential rental and nonresidential real property, use the straight line method over 40 years. Use the same convention that was used for regular tax purposes. See Rev. Proc. 87-57, 1987-2 C.B. 687, or Pub. 946 for the optional tables for the alternative minimum tax using the 150% declining balance method.

Do not make an adjustment for motion picture films, videotapes, sound recordings, or property depreciated under the unit-of-production method or any other method not expressed in a term of years. (See section 168(f)(1), (2), (3), or (4).)
When refiguring the depreciation deduction, be sure to report separately on line 11 of Schedule K-1 (Form 1041) any adjustment from depreciation that was allocated to the beneficiary for regular tax purposes. To figure the adjustment, subtract the depreciation for AMT purposes from the depreciation for regular tax purposes. If the depreciation figured for AMT purposes exceeds the depreciation allowed for regular tax purposes, enter the adjustment as a negative amount.

Line 4f—Circulation and Research and Experimental Expenditures Paid or Incurred After 1986
Caution: Do not make this adjustment for expenditures for which you elected the optional write-off period under section 59(e) for regular tax purposes. These expenditures generally must be amortized for AMT purposes over 10 years (3 years for circulation expenditures) beginning with the year the expenditures were paid or incurred. However, do not make an adjustment for expenditures paid or incurred in connection with an activity in which the estate or trust materially participated under the passive activity rules.

Line 4g—Mining Exploration and Development Costs Paid or Incurred After 1986
Caution: Do not make this adjustment for costs for which you elected the optional write-off period under section 59(e) for regular tax purposes.

Specific Instructions
Part I—Estate's or Trust's Share of Alternative Minimum Taxable Income
Line 1—Adjusted Total Income or (Loss)

Step 2. On line 2, enter the AMT disallowed investment interest expense from 1994. Step 3. When completing Part II of Form 4952, refigure gross income from property held for investment, taking into account all AMT adjustments and tax preference items that apply.

Line 4h—Long-Term Contracts Entered Into After February 28, 1986
For AMT purposes, the percentage of completion method of accounting described in section 460(b) generally must be used. This rule generally does not apply to home construction contracts (as defined in section 460(e)(6)). Note: Contracts described in section 460(e)(1) are subject to the simplified method of cost allocation of section 460(b)(4). Enter the difference between the amount reported for regular tax purposes and the AMT amount.
If the AMT amount is less than the amount figured for regular tax purposes, enter the difference as a negative amount.

Line 4i—Pollution Control Facilities Placed in Service After 1986
For any certified pollution control facility placed in service after 1986, the deduction under section 169 is not allowed for AMT purposes. Instead, the deduction is determined under the ADS described in section 168(g), using the Asset Depreciation Range class life for the facility under the straight line method. To figure the adjustment, subtract the amortization deduction taken for regular tax purposes from the depreciation deduction determined under the ADS. If the deduction allowed for AMT purposes is more than the amount allowed for regular tax purposes, enter the difference as a negative amount.

Line 4j—Installment Sales of Certain Property
For either of the following kinds of dispositions in which the estate or trust used the installment method for regular tax purposes, refigure the gain for AMT purposes. If the AMT amount is less than that reported for the regular tax, enter the difference as a negative amount.

Line 4k—Adjusted Gain or Loss (Including Incentive Stock Options)
Adjusted gain or loss.—If the estate or trust sold or exchanged property during the year, or had a casualty gain or loss to business or income-producing property, it may have an adjustment. The gain or loss on the disposition of certain assets is refigured for AMT purposes. Use this line if the estate or trust reported a gain or loss on Form 4797, Schedule D (Form 1041), or Form 4684 (Section B). When figuring the adjusted basis for those forms, take into account any AMT adjustments made this year, or in previous years, for items related to lines 4e, 4f, 4g, and 4i of Schedule I. For example, to figure the adjusted basis for AMT purposes, reduce the cost of an asset only by the depreciation allowed for AMT purposes.
Enter the difference between the gain or loss reported for regular tax purposes and that figured for AMT purposes. If the AMT gain is less than the gain reported for regular tax purposes, enter the adjustment as a negative amount. If the AMT loss is more than the loss allowed for regular tax purposes, enter the adjustment as a negative amount.
Incentive stock options (ISOs).—For regular tax purposes, no income is recognized when an incentive stock option (as defined in section 422(b)) is granted or exercised. However, this rule does not apply for AMT purposes. Instead, the estate or trust must generally include the excess, if any, of:
1. The fair market value of the option (determined without regard to any lapse restriction) at the first time its rights in the option become transferable or when these rights are no longer subject to a substantial risk of forfeiture, over
2. The amount paid for the option.
Increase the AMT basis of any stock acquired through the exercise of an ISO by the amount of the adjustment.

Line 4l—Certain Loss Limitations
Caution: If the loss is from a passive activity, use line 4n instead. If the loss is from a tax shelter farm activity (that is not passive), use line 4m.
Refigure your allowable losses for AMT purposes from activities for which you are not at risk, and from interests in partnerships and stock in S corporations subject to basis limitations, by taking into account your AMT adjustments and tax preference items. See sections 59(h), 465, 704(d), and 1366(d). Enter the difference between the loss reported for regular tax purposes and the AMT loss. If the AMT loss is more than the loss reported for regular tax purposes, enter the adjustment as a negative amount.

Line 4m—Tax Shelter Farm Activities
Note: Use this line only if the tax shelter farm activity is not a passive activity. Otherwise, use line 4n, taking into account any adjustments from lines 4e, 4r, or 4s.
Determine your tax shelter farm activity gain or loss for AMT purposes using the same rules you used for regular tax purposes, except that any AMT loss is allowed only to the extent that the taxpayer is insolvent (see section 58(c)(1)).

Line 4n—Passive Activities
For AMT purposes, the rules described in section 469 apply, except that in applying the limitations, minimum tax rules apply. Refigure passive activity gains and losses on an AMT basis. Refigure a passive activity gain or loss by taking into account all AMT adjustments or tax preference items that pertain to that activity. Enter the difference between the loss reported on page 1 and the AMT loss, if any.
Caution: Do not enter elsewhere on this schedule any AMT adjustment or tax preference item included on this line.
Publicly traded partnerships (PTPs).—If the estate or trust had a loss from a PTP, refigure the loss using any AMT adjustments and tax preference items.

Line 4o—Beneficiaries of Other Trusts or Decedent's Estates
If the estate or trust is the beneficiary of another estate or trust, enter the adjustment for minimum tax purposes from line 8, Schedule K-1 (Form 1041).

Line 4p—Tax-Exempt Interest From Specified Private Activity Bonds
Enter the interest earned from specified private activity bonds reduced (but not below zero) by any deduction that would have been allowable if the interest were includible in gross income for regular tax purposes. Specified private activity bonds are any qualified bonds (as defined in section 141) issued after August 7, 1986. See section 57(a)(5) for more information. Exempt-interest dividends paid by a regulated investment company are treated as interest from specified private activity bonds to the extent the dividends are attributable to interest received by the company on the bonds, minus an allocable share of the expenses paid or incurred by the company in earning the interest.
Line 4q—Depletion
Refigure the depletion deduction for AMT purposes by using only the income and deductions allowed for the AMT when refiguring the limit based on taxable income from the property under section 613(a) and the limit based on taxable income, with certain adjustments, under section 613A(d)(1). Also, the depletion deduction for mines, wells, and other natural deposits under section 611 is limited to the property's adjusted basis at the end of the year, as refigured for the AMT, unless the estate or trust is an independent producer or royalty owner claiming percentage depletion for oil and gas wells. Figure this limit separately for each property. When refiguring the property's adjusted basis, take into account any AMT adjustments made this year or in previous years that affect basis (other than the current year's depletion).
Enter on line 4q the difference between the regular tax and AMT deduction. If the AMT deduction is more than the regular tax deduction, enter the difference as a negative amount.

Line 4r—Accelerated Depreciation of Real Property Placed in Service Before 1987
For AMT purposes, use the straight line method to figure depreciation. Use a recovery period of 19 years for 19-year real property and 15 years for low-income housing. Enter the excess of depreciation claimed for regular tax purposes over depreciation refigured using the straight line method. Figure this amount separately for each property and include on line 4r only positive amounts.

Line 4s—Accelerated Depreciation of Leased Personal Property Placed in Service Before 1987
For leased personal property other than recovery property, enter the amount by which the regular tax depreciation using the pre-1987 rules exceeds the depreciation allowable using the straight line method.
For leased 10-year recovery property and leased 15-year public utility property, enter the amount by which the depreciation deduction determined for regular tax purposes is more than the deduction allowable using the straight line method with a half-year convention, no salvage value, and the following recovery period:
10-year property: 15 years
15-year public utility property: 22 years
Figure this amount separately for each property and include on line 4s only positive amounts.

Line 4t—Intangible Drilling Costs
Caution: Do not make this adjustment for costs for which you elected the optional 60-month write-off under section 59(e) for regular tax purposes.
Except as provided below, intangible drilling costs (IDCs) from oil, gas, and geothermal wells are a tax preference item to the extent that the excess IDCs exceed 65% of the net income from the wells. Figure the tax preference item for all geothermal properties separately from the preference for all oil and gas properties. Net income is determined by taking the gross income from all oil, gas, and geothermal wells reduced by the deductions allocable to those properties (determined without regard to excess IDCs). When figuring net income, use only income and deductions allowed for the AMT.
Exception. The preference for IDCs from oil and gas wells does not apply to taxpayers who are independent producers (i.e., not integrated oil companies as defined in section 291(b)(4)). However, this benefit may be limited. First, figure the IDC preference as if this exception did not apply. Then, for purposes of this exception, complete Schedule I.

Line 4u—Other Adjustments
Include on this line:
● Patron's adjustment.—Distributions the estate or trust received from a cooperative may be includible in income. Unless the distributions are nontaxable, include on line 4u the total AMT patronage dividend adjustment reported to the estate or trust from the cooperative.
● Related adjustments.—AMT adjustments and tax preference items may affect deductions that are based on an income limit other than AGI or modified AGI (e.g., farm conservation expenses). Refigure these deductions using the income limit as modified for the AMT. Include the difference between the regular tax and AMT deduction on line 4u. If the AMT deduction is more than the regular tax deduction, include the difference as a negative amount.
Note: Do not make an adjustment on line 4u for an item you refigured on another line of Schedule I (e.g., line 4q).

Line 7—Alternative Tax Net Operating Loss Deduction (ATNOLD)
For tax years beginning after 1986, the net operating loss (NOL) under section 172(c) is modified for alternative tax purposes by (a) adding the adjustments made under sections 56 and 58 (subtracting if the adjustments are negative); and (b) reducing the NOL by any item of tax preference under section 57 (except the appreciated charitable contribution preference item).

When figuring an NOL from a loss year prior to 1987, the rules in effect before enactment of the Tax Reform Act (TRA) of 1986 apply. The NOL under section 172(c) is reduced by the amount of the tax preference items that were taken into account in figuring the NOL. In addition, the NOL is figured by taking into account only itemized deductions that were alternative tax itemized deductions for the tax year and that were a modification to the NOL under section 172(d). See sections 55(d) and 172 as in effect before the TRA of 1986.

If this estate or trust is the beneficiary of another estate or trust that terminated in 1995, include any AMT NOL carryover that was reported on line 12e of Schedule K-1 (Form 1041).

The ATNOLD may be limited. To figure the ATNOLD limitation, first figure AMTI without regard to the ATNOLD. For this purpose, figure a tentative amount for line 4q of Schedule I by treating line 7 as if it were zero. Then, figure a tentative amount for line 6 of Schedule I.
The ATNOLD limitation is 90% of the tentative line 6 amount. Enter on line 7 the smaller of the ATNOLD or the ATNOLD limitation. Any alternative tax NOL not used because of the ATNOLD limitation can be carried back or forward. See section 172(b) for details. The treatment of alternative tax NOLs does not affect your regular tax NOL.
Note: If you elected under section 172(b)(3) to forego the carryback period for regular tax purposes, the election will also apply for the AMT.

Section 212 expenses that are directly allocable to tax-exempt interest are allocated only to tax-exempt interest. A reasonable proportion of section 212 expenses that are indirectly allocable to both tax-exempt interest and other income must be allocated to each class of income.

Line 17
Enter any capital gains that were paid or permanently set aside for charitable purposes from the current year's income included on line 3 of Schedule A.

Lines 18 and 19
Capital gains and losses must take into account any basis adjustments from line 4k, Part I.

Line 24—Adjustment for Tax-Exempt Income
In figuring the income distribution deduction on a minimum tax basis, the estate or trust is not allowed a deduction for any item of DNAMTI (line 20) that is not included in the gross income of the estate or trust figured on an AMT basis. Thus, for purposes of figuring the allowable income distribution deduction on a minimum tax basis, the DNAMTI is figured without regard to any tax-exempt interest (except for amounts from line 4p). If line 23 includes tax-exempt income other than amounts from line 4p, figure line 24 by subtracting the total expenses allocable to tax-exempt income that are allowable for AMT purposes from tax-exempt income included on line 23. Expenses that are directly allocable to tax-exempt income are allocated only to tax-exempt income. A reasonable proportion of expenses indirectly allocable to both tax-exempt income and other income must be allocated to each class of income.
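The ATNOLD limitation described above (enter the smaller of the ATNOLD or 90% of the tentative line 6 amount) can be sketched as follows, with hypothetical figures:

```python
# Illustrative sketch of the ATNOLD limitation; figures are hypothetical.
# tentative_line6 is AMTI figured without regard to the ATNOLD
# (i.e., with line 7 treated as zero).
tentative_line6 = 50_000.0
alternative_tax_nol = 60_000.0  # the ATNOLD before the limitation

atnold_limit = 0.90 * tentative_line6            # 90% of tentative line 6
line7 = min(alternative_tax_nol, atnold_limit)   # enter the smaller on line 7
carryover = alternative_tax_nol - line7          # unused NOL carried back/forward
print(line7, carryover)
```

With these figures the limitation (45,000) binds, and the remaining 15,000 of alternative tax NOL is carried to another year under section 172(b).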
Line 27—Income Distribution Deduction on a Minimum Tax Basis

Allocate the income distribution deduction figured on a minimum tax basis among the beneficiaries in the same manner as income was allocated for regular tax purposes. Report each beneficiary's share on line 6 of Schedule K-1 (Form 1041).

Part III—Alternative Minimum Tax Computation

Line 36—Alternative Minimum Foreign Tax Credit

To figure the AMT foreign tax credit:

1. Complete and attach Form 1116, with the notation "Alt Min Tax" at the top, for each type of income specified at the top of Form 1116.

2. Complete Part I, entering income, deductions, etc., attributable to sources outside the United States computed on a minimum tax basis.

3. Complete Part III. On line 9, do not enter any taxes taken into account in a tax year beginning after 1986 that are treated under section 904(c) as paid or accrued in a tax year beginning before 1987. On line 10 of Form 1116, enter the alternative minimum tax foreign tax credit carryover. On line 17 of Form 1116, enter the alternative minimum taxable income from line 12 of Schedule I. On line 19 of Form 1116, enter the amount from line 35 of Schedule I.

4. Complete Part IV.

The foreign tax credit from line 32 of the AMT Form 1116 is limited to the tax on line 35 of Schedule I, less 10% of what the tax on line 35 of Schedule I would have been if line 7 of Schedule I had been zero and the exception for intangible drilling costs did not apply (see the instructions for line 4t on page 21).

If Schedule I, line 7, is zero or blank, and the estate or trust has no intangible drilling costs (or the exception does not apply), enter on Schedule I, line 36, the smaller of Form 1116, line 32, or 90% of Schedule I, line 35.

If line 7 has an entry (other than zero), or the exception for intangible drilling costs applies, for purposes of this line refigure what the tax would have been on Schedule I, line 35, if line 7 were zero and the exception did not apply.
Multiply that amount by 10% and subtract the result from line 35. Enter on Schedule I, line 36, the smaller of that amount or the amount from Form 1116, line 32.

If the AMT foreign tax credit is limited, any unused amount can be carried back or forward in accordance with section 904(c).

Note: The election to forego the carryback period for regular tax purposes also applies for the AMT.

Line 38—Regular Tax Before Credits

Enter the tax from line 1a of Schedule G plus any section 667(b) tax from Form 4970 entered on line 1b of Schedule G. From that amount subtract any foreign tax credit entered on line 2a of Schedule G. DO NOT deduct any foreign tax credit that was allocated to the beneficiaries.

Part II—Income Distribution Deduction on a Minimum Tax Basis

Line 13—Adjusted Alternative Minimum Taxable Income

If the amount on line 8 of Schedule I is less than zero, and the negative number is attributable wholly or in part to the capital loss limitation rules under section 1211(b), enter as a negative number the smaller of (a) the loss from line 8; or (b) the loss from line 4 on page 1.

Line 14—Adjusted Tax-Exempt Interest

To figure the adjusted tax-exempt interest (including exempt-interest dividends received as a shareholder in a mutual fund or other regulated investment company), subtract the total of (a) any tax-exempt interest from line 4 of Schedule A of Form 1041 figured for AMT purposes; and (b) any section 212 expenses allowable for AMT purposes allocable to tax-exempt interest from the amount of tax-exempt interest received. DO NOT subtract any deductions reported on lines 4a through 4c.

Section 643(e)(3) Election

Schedule D (Form 1041)—Capital Gains and Losses

General Instructions

Use. For.. reinvestment plan. See Pub. 550 for details.

● Transfer of appreciated property to a political organization (section 84).
● Distributions received from an employee pension, profit sharing, or stock bonus plan. See Form 4972.
● Disposition of market discount bonds (section 1276).
● Section 1256 contracts and straddles are reported on Form 6781, Gains and Losses From Section 1256 Contracts and Straddles.

Column (d)—Sales Price

Enter either the gross sales price or the net sales price from the sale. On sales of stocks and bonds, report the gross amount as reported to the estate or trust on Form 1099-B or similar statement. However, if the estate or trust was advised that gross proceeds less commissions and option premiums were reported to the IRS, enter that net amount in column (d).

Column (e)—Cost or Other Basis

Basis of trust property.—Generally, the basis of property acquired by gift is the same as the basis in the hands of the donor. If the FMV of the property at the time it was transferred to the trust is less than the transferor's basis, then the FMV is used for determining any loss on disposition.

If the property was transferred to the trust after 1976, and a gift tax was paid under Chapter 12, then increase the donor's basis as follows: Multiply the amount of the gift tax paid by a fraction, the numerator of which is the net appreciation in value of the gift (discussed below), and the denominator of which is the amount of the gift. For this purpose, the net appreciation in value of the gift is the amount by which the FMV of the gift exceeds the donor's adjusted basis.

Basis of decedent's estate property.—Generally, the basis of property acquired by a decedent's estate is the FMV of the property at the date of the decedent's death. For more information, see:

● Pub. 544, Sales and Other Dispositions of Assets; and
● Pub. 551, Basis of Assets.

Section 644 Tax on Trusts

If a trust sells or exchanges property at a gain within 2 years after receiving it from a transferor, a special tax may be due. Do not report includible gains under section 644 on Schedule D. The tax on these gains is reported separately on Form 1041. For more information, see the instructions for Schedule G, line 1b.
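The gift-tax basis adjustment described above is a one-line formula: the donor's basis is increased by the gift tax paid multiplied by the ratio of net appreciation to the amount of the gift. A sketch of the appreciated-gift case, with hypothetical amounts:

```python
def gift_basis(donor_basis, fmv_at_gift, gift_tax_paid, amount_of_gift):
    """Post-1976 transfer in trust: increase the donor's basis by the
    portion of the gift tax attributable to net appreciation."""
    # Net appreciation = FMV of the gift over the donor's adjusted basis.
    net_appreciation = max(0, fmv_at_gift - donor_basis)
    increase = gift_tax_paid * net_appreciation / amount_of_gift
    return donor_basis + increase

# Donor basis $40,000, FMV $100,000, $15,000 gift tax on a $100,000 gift:
print(gift_basis(40_000, 100_000, 15_000, 100_000))   # 49000.0
```

Only the fraction itself comes from the instructions; the dollar figures are made up, and the loss rule for depreciated property (use FMV) is a separate computation not shown.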
Short-Term or Long-Term

Property acquired by a decedent's estate from the decedent is considered as held for more than 1 year.

Items for Special Treatment

The following items may require special treatment:

● Exchange of "like-kind" property.
● Wash sales of stock or securities (including contracts or options to acquire or sell stock or securities) traded on an exchange or over-the-counter market.
● Sales of stock received under a qualified public utility dividend reinvestment plan.

…death, or the alternate valuation date if the executor elected to use an alternate valuation under section 2032. See Pub. 551 for a discussion of the valuation of qualified real property under section 2032A.

Basis of property for bankruptcy estates.—Generally, the basis of property held by the bankruptcy estate is the same as the basis in the hands of the individual debtor.

Adjustments to basis.—Before figuring any gain or loss on the sale, exchange, or other disposition of property owned by the estate or trust, adjustments to the property's basis may be required.

Some items that may increase the basis include:

1. Broker's fees and commissions.
2. Reinvested dividends that were previously reported as income.
3. Reinvested capital gains that were previously reported as income.
4. Costs that were capitalized.
5. Original issue discount that has been previously included in income.

Some items that may decrease the basis include:

1. Nontaxable distributions that consist of return of capital.
2. Deductions previously allowed or allowable for depreciation.
3. Casualty or theft loss deductions.

See Pub. 551 for additional information. See section 852(f) for treatment of load charges incurred in acquiring stock in a regulated investment company.

Carryover basis..

Lines 2 and 8

Installment sales.—If the estate or trust sold property at a gain during the tax year, and will receive a payment in a later tax year, report the sale on the installment method and file Form 6252, Installment Sale Income, unless you elect not to do so.
Also, use Form 6252 to report any payment received in 1995 from a sale made in an earlier tax year that was reported on the installment method. To elect out of the installment method, report the full amount of the gain on a timely filed return (including extensions).

…Related Persons on page 23), and if, before 2 years after the date of the last transfer that was part of the exchange, the related person disposes of the property, or the trust disposes of the property received in exchange from the related person, then the original exchange will not qualify for nonrecognition. See section 1031(f) for exceptions. Complete and attach Form 8824, Like-Kind Exchanges, to Form 1041 for each exchange.

Line 10—Capital Gain Distributions

Enter on line 10 capital gain distributions paid during the year as a long-term capital gain, regardless of how long the estate or trust held its investment.

Except in the final year, if the losses from the sale or exchange of capital assets are more than the gains, all of the losses are allocated to the estate or trust and none are allocated to the beneficiaries.

Line 15, column (b)—Estate's or Trust's Net Short-Term Capital Gain or Loss

Enter the amount of the net short-term capital gain or loss allocable to the estate or trust.

Part IV—Capital Loss Limitation

If the sum of all the capital losses is more than the sum of all the capital gains, then these capital losses are allowed as a deduction only to the extent of the smaller of the net loss or $3,000.

Part V—Capital Loss Carryovers From 1995 to 1996.

Part VI—Tax Computation Using Maximum Capital Gains Rate

Line 37c

If the estate or trust received capital gains that were derived from income in respect of a decedent, and a section 691(c)(4) deduction was claimed, then line 37c must be reduced by the portion of the section 691(c)(4) deduction claimed on Form 1041, page 1, line 19.

Line 44

To figure the regular tax, use the 1995 Tax Rate Schedule on page 16.
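The Part IV capital loss limitation described above reduces to "the smaller of the net loss or $3,000." A minimal sketch, with hypothetical gain and loss totals:

```python
def capital_loss_deduction(total_gains, total_losses):
    """Part IV: when losses exceed gains, the deduction is limited to
    the smaller of the net loss or $3,000."""
    net = total_gains - total_losses
    if net >= 0:
        return 0                 # net gain: no capital loss deduction
    return min(-net, 3_000)      # net loss: capped at $3,000

print(capital_loss_deduction(10_000, 11_500))   # 1500 (small net loss)
print(capital_loss_deduction(2_000, 9_000))     # 3000 (cap applies)
```

Any loss above the cap feeds the Part V carryover computation rather than the current-year deduction.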
Line 45

If the tax using the maximum capital gains rate (line 43) is less than the regular tax (line 44), enter the amount from line 45 on line 1a of Schedule G, Form 1041, and check the "Schedule D" box.

Schedule J (Form 1041)—Accumulation Distribution for a Complex Trust

General Instructions

Use Schedule J (Form 1041) to report an accumulation distribution for a complex trust. An accumulation distribution…

Specific Instructions

Part I—Accumulation Distribution in 1995

Line 1—Distribution Under Section 661(a)(2)

Enter the amount from Schedule B of Form 1041, line 12, for 1995. This is the amount properly paid, credited, or required to be distributed other than the amount of income for the current tax year required to be distributed currently.

Line 2—Distributable Net Income

Enter the amount from Schedule B of Form 1041, line 9, for 1995. This is the amount of distributable net income (DNI) for the current tax year determined under section 643(a).

Line 3—Distribution Under Section 661(a)(1)

Enter the amount from Schedule B of Form 1041, line 11, for 1995. This is the amount of income for the current tax year required to be distributed currently.

Line 5—Accumulation Distribution

If line 13, Schedule B of Form 1041, is more than line 10, Schedule B of Form 1041, … (…Tax on Accumulation Distribution of Trusts…)

Throwback year(s)      Amount from line
1969–1977              Schedule C, Form 1041, line 8
1978                   Form 1041, line 64
1979                   Form 1041, line 65
1980                   Form 1041, line 64
1981–1982              Form 1041, line 62
1983–1994

…the following:

Throwback year(s)      Amount from line
1969–1977              Schedule C, Form 1041, line 2(a)
1978–1979              Form 1041, line 58(a)
1980                   Form 1041, line 57(a)
1981–1982              Form 1041, line 55(a)
1983–1994              Schedule B, Form 1041, line 2

Schedule B, Form 1041, line 13.

Note: The alternative tax on capital gains was repealed for tax years beginning after December 31, 1978. The maximum rate on net capital gain for 1981, 1987, and 1991 through 1994 is not an alternative tax for this purpose.
Line 18—Regular Tax

Enter the applicable amounts as follows:

Throwback year(s)      Amount from line
1969–1976              Form 1041, page 1, line 24
1977                   Form 1041, page 1, line 26
1978–1979              Form 1041, line 27
1980–1984              Form 1041, line 26c
1985–1986              Form 1041, line 25c
1987                   Form 1041, line 22c
1988–1994              Schedule G, Form 1041, line 1a

(a) during that year the trust received outside income or (b) the trustee did not distribute all of the trust's: (a) income taxable to the trust under section 691; (b) unrealized accounts receivable that were assigned to the trust; and (c) distributions from another…

Part II—Ordinary Income Accumulation Distribution

Line 6—Distributable Net Income for Earlier Years

Enter the applicable amounts as follows:

Throwback year(s)      Amount from line
1969–1977              Schedule C, Form 1041, line 5
1978–1979              Form 1041, line 61
1980                   Form 1041, line 60
1981–1982              Form 1041, line 58
1983–1994              Schedule B, Form 1041, line 9

Throwback year(s)
1969–1970
1971–1978
1979
1980–1981
1982
1983–1994

Part IV—Allocation to Beneficiary.

Line 20—Trust's Share of Net Long-Term Gain

Enter the applicable amounts as follows: …–1994 … of any gain on line 16 or 17, column (b).

A substitute Schedule K-1 is acceptable if it follows the specifications for filing substitute Schedules K-1 in Pub. 1167, Substitute Printed, Computer-Prepared, and Computer-Generated Tax Forms and Schedules, or is an exact copy of an IRS Schedule K-1. You must request IRS approval to use other substitute Schedules K-1. To request approval, write to: Internal Revenue Service, Attention: Substitute Forms Program Coordinator, T:FP:S, 1111 Constitution Avenue, N.W., Washington, DC 20224.
Line 22—Taxable Income

Enter the applicable amounts as follows:

Throwback year(s)      Amount from line
1969–1976              Form 1041, page 1, line 23
1977                   Form 1041, page 1, line 25
1978–1979              Form 1041, line 26
1980–1984              Form 1041, line 25
1985–1986              Form 1041, line 24
1987                   Form 1041, line 21
1988–1994              Form 1041, line 22

Schedule K-1 (Form 1041)—Beneficiary's Share of Income, Deductions, Credits, etc.

General Instructions

Use Schedule K-1 (Form 1041) to report the beneficiary's share of income, deductions, and credits from a trust or a decedent's estate.

Inclusion of Amounts in Beneficiaries' Income

Simple trust.—The beneficiary of a simple trust must include in his or her gross income the amount of the income required to be distributed currently, whether or not distributed, or if the income required to be distributed currently to all beneficiaries exceeds the distributable net income (DNI), his or her proportionate share of the DNI. The determination of whether trust income is required to be distributed currently depends on the terms of the trust instrument and applicable local law. See Regulations section 1.652(c)-4 for a comprehensive example.

Estates and complex trusts.—The beneficiary of a decedent's estate or complex trust must include in his or her gross income the sum of:

1. The amount of the income required to be distributed currently, or if the income required to be distributed currently to all beneficiaries exceeds the DNI (figured without taking into account the charitable deduction), his or her proportionate share of the DNI (as so figured); and…

Schedule D, line 14, column 2 — 1971–1978

Beneficiary's Identifying Number

As a payer of income, you are required under section 6109 to request and provide a proper identifying number for each recipient of income. Enter the beneficiary's number on the respective Schedule K-1.

See Regulations section 1.662(c)-4 for a comprehensive example.
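The throwback-year references in the Line 22 table above are pure lookups: given a throwback year, pick the form line that held taxable income that year. A small sketch; note that the year-to-line pairing is read out of a garbled two-column extraction, so treat the mapping itself as an assumption:

```python
# Line 22 taxable-income source by throwback year (range -> form line),
# as read from the table above.
LINE_22_SOURCES = [
    (1969, 1976, "Form 1041, page 1, line 23"),
    (1977, 1977, "Form 1041, page 1, line 25"),
    (1978, 1979, "Form 1041, line 26"),
    (1980, 1984, "Form 1041, line 25"),
    (1985, 1986, "Form 1041, line 24"),
    (1987, 1987, "Form 1041, line 21"),
    (1988, 1994, "Form 1041, line 22"),
]

def taxable_income_source(year):
    """Return the form line holding taxable income for a throwback year."""
    for first, last, line in LINE_22_SOURCES:
        if first <= year <= last:
            return line
    raise ValueError("not a throwback year: %d" % year)

print(taxable_income_source(1982))   # Form 1041, line 25
```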
For complex trusts that have more than one beneficiary, if different beneficiaries have substantially separate and independent shares, their shares are treated as separate trusts… (income of the estate or trust is half dividends and half interest).

Allocation of deductions.. In no case can excess deductions from a passive activity be allocated to income from a nonpassive activity.

…column (a), Schedule D (Form 1041), minus allocable deductions. Do not enter a loss on line 3b. If, for the final year of the estate or trust, … on an attachment to Schedule K-1.

Line 4a—Annuities, Royalties, and Other Nonpassive Income

Enter the beneficiary's share of annuities, royalties, or any other income, minus allocable deductions (other than directly apportionable deductions), that is NOT subject to any passive activity loss limitation rules at the beneficiary level. Use line 5a to report income items subject to the passive activity rules at the beneficiary's level.

Lines 4b and 5b—Depreciation

Enter the beneficiary's share of the depreciation deductions attributable to each activity reported on lines 4a and 5a. See the instructions on page 10 for a discussion of how the depreciation deduction is apportioned between the beneficiaries and the estate or trust. Report any AMT adjustment or tax preference item attributable to depreciation separately on line 11a.

Lines 4c and 5c—Depletion

See the instructions on page 10 for a discussion of how the depletion deduction is apportioned between the beneficiaries and the estate or trust. Report any tax preference item attributable to depletion separately on line 11b.

Lines 4d and 5d—Amortization

Itemize the beneficiary's share of the amortization deductions attributable to each activity reported on lines 4a and 5a. Apportion the amortization deductions between the estate or trust and the beneficiaries in the same way that the depreciation and depletion deductions are divided. Report any AMT adjustment attributable to amortization separately on line 11c.
Line 5a—Trade or Business, Rental Real Estate, and Other Rental Income

Enter the beneficiary's share of trade or business, rental real estate, and other rental income…

Beneficiary's Tax Year.

Specific Instructions

Line 1—Interest

Enter the beneficiary's share of the taxable interest income minus allocable deductions.

Line 2—Dividends

Enter the beneficiary's share of dividend income minus allocable deductions.

Line 3a—Net Short-Term Capital Gain

Enter the beneficiary's share of the net short-term capital gain from line 15, column (a), Schedule D (Form 1041), minus allocable deductions. Do not enter a loss on line 3a. If, for the final year of the estate or trust, there is a capital loss carryover, enter on line 12b the beneficiary's share of the short-term capital loss carryover as a loss in parentheses. However, if the beneficiary is a corporation, enter on line 12b…

Lines 5… Rules for treating a beneficiary's income and directly apportionable deductions from an estate or trust and other rules for applying the passive loss and credit limitations to beneficiaries of estates and trusts have not yet been issued. To assist the beneficiary in figuring any applicable passive activity loss limitations, also attach a separate schedule showing the beneficiary's share of directly apportionable deductions derived from each trade or business, rental real estate, and other rental activity.

Line 6—Income for Minimum Tax Purposes

Enter the beneficiary's share of the income distribution deduction figured on a minimum tax basis from line 27 of Schedule I.

Line 7—Income for Regular Tax Purposes

Enter the beneficiary's share of the income distribution deduction figured on line 17 of Schedule B. This amount should equal the sum of lines 1 through 3b, 4a, and 5 … 13), figure the beneficiary's foreign tax credit. See Pub. 514 and section 901(b)(5) for special rules about foreign taxes.
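The line 7 rule above is a consistency check: the income distribution deduction reported should equal the sum of the beneficiary's income lines. A crude sketch with made-up amounts; which lines count as "income lines" is partly assumed here (the source text cuts off after "5", so including 5a is a guess):

```python
# Hypothetical beneficiary K-1 income amounts, keyed by line number.
k1_income = {"1": 400.0, "2": 250.0, "3a": 100.0,
             "3b": 0.0, "4a": 150.0, "5a": 300.0}

# Line 7 should equal the sum of the income lines.
line_7 = sum(k1_income.values())
print(line_7)   # 1200.0
```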
Lines 11a through 11c

Enter any adjustments or tax preference items attributable to depreciation, depletion, or amortization that were allocated to the beneficiary. For property placed in service before 1987, report separately the accelerated depreciation of real and leased personal property.

Line 11d—Exclusion Items

Enter the beneficiary's share of the adjustment for minimum tax purposes from Schedule K-1, line 8, that is attributable to exclusion items (Schedule I, lines 4a through 4d, 4p, and 4q).

Line 12a—Excess Deductions on Termination

If this is the final return and there are excess deductions on termination (see the instructions for line 22 on page 13), enter the beneficiary's share of the excess deductions on line 12a.

Lines 12b and 12c—Unused Capital Loss Carryover

Upon termination of the trust or decedent's estate, the beneficiary succeeding to the property is allowed as a deduction any unused capital loss carryover under section 1212. If the estate or trust incurs capital losses in the final year, use Part V of Schedule D (Form 1041) to figure the amount of capital loss carryover to be allocated to the beneficiary.

Lines 12d and 12e—Net Operating Loss (NOL) Carryover

Upon termination of a trust or decedent's estate, a beneficiary succeeding to its property is allowed to deduct any unused NOL (and any AMT NOL) carryover for regular and AMT purposes if the carryover would be allowable to the estate or trust in a later tax year but for the termination. Enter on lines 12d and 12e the unused carryover amounts.

Line 13—Other

Itemize on line 13, or on a separate sheet if more space is needed, the beneficiary's tax information not entered elsewhere on Schedule K-1.
This includes the allocable share, if any, of:

● Payment of estimated tax to be credited to the beneficiary (section 643(g));
● Tax-exempt interest income received or accrued by the trust (including exempt-interest dividends from a mutual fund or other regulated investment company);
● Investment income (section 163(d));
● Gross farming and fishing income;
● Credit for backup withholding (section 3406);
● Low-income housing credit;
● The jobs credit;
● The alcohol fuel credit;
● The credit for increasing research activities;
● The renewable electricity production credit;
● The Indian employment credit;
● The empowerment zone employment credit;
● The information a beneficiary will need to figure any investment credit; and
● The information a beneficiary will need to figure any recapture taxes.

Note: Upon termination of an estate or trust, any suspended passive activity losses (PALs) relating to an interest in a passive activity cannot be allocated to the beneficiary. Instead, the basis in such activity is increased by the amount of any PALs allocable to the interest, and no losses are allowed as a deduction on the estate's or trust's final Form 1041.
I am probably well on the wrong path with this one, which is why I can't get the program to compile in the slightest. The assignment instructions are:

The Frozen Tongue Ice Cream Shop sells six flavors of ice cream: chocolate, vanilla, strawberry, mint, rocky road, and mocha. The shop wants a program that tracks how many scoops of each flavor are sold each day. The console input for each transaction will be the name of a flavor followed by how many scoops of that flavor were sold in that transaction. Example of initial output:

Enter the flavor of the scoops sold (STOP to exit): vanilla
Enter how many scoops were sold: 3

If the user enters a flavor that doesn't match any of the known flavors, the program should display an error message. The user should be allowed to enter as many transactions as desired, and each flavor will likely appear in more than one transaction. Once the user enters "STOP", the program should terminate, and the program should display how many scoops of each flavor were tallied in the program run:

DAILY SCOOP REPORT:
Chocolate: 58
Vanilla: 65
Strawberry: 49
Mint: 23
Rocky Road: 37
Mocha: 31

One way to make sure that the scoops are added to the proper flavor counter is to have a series of decision statements comparing the user's input to each flavor. Another approach would be to use parallel arrays, with an array of strings storing the name of each flavor and an array of integers storing the count for each flavor at the same index as the flavor name in the other array. Write your code so that it handles the flavor name with a space and so that the user's input will be matched to the correct flavor no matter what type of capitalization is used.
Position the output so that the number of scoops for each flavor is right-aligned (assume that the demand for flavors can vary, so the number of scoops for any flavor may be between one and three digits). Example output:

DAILY SCOOP REPORT:
Chocolate: 92
Vanilla: 103
Strawberry: 89
Mint: 8
Rocky Road: 76
Mocha: 64

I tried to use the switch statement but not sure if that is correct. Also, it wouldn't allow the return of the flavors in the count function due to them being changed to int when applying the update to the count, so I tried to use atoi() but don't really know how to use it, and the tutorials I read about it don't fit my situation close enough for me to manipulate it to make it work. I hope that I have described things enough. Here is my code thus far, reworked with parallel arrays so that it compiles (a switch cannot branch on strings in C++, and the scoop counts need to be ints, not strings):

#include <iostream>
#include <iomanip>
#include <string>
#include <cctype>
using namespace std;

// Lower-case a copy of a string so capitalization does not matter.
string toLower(string s)
{
    for (size_t i = 0; i < s.size(); i++)
        s[i] = tolower(static_cast<unsigned char>(s[i]));
    return s;
}

int main()
{
    const int NUM_FLAVORS = 6;
    // Parallel arrays: flavor names and scoop counters share an index.
    string flavors[NUM_FLAVORS] = { "chocolate", "vanilla", "strawberry",
                                    "mint", "rocky road", "mocha" };
    string labels[NUM_FLAVORS]  = { "Chocolate", "Vanilla", "Strawberry",
                                    "Mint", "Rocky Road", "Mocha" };
    int counts[NUM_FLAVORS] = { 0 };

    cout << "Welcome to Frozen Tongue Ice Cream Shop\n";
    while (true)
    {
        cout << "Enter the flavor of the scoops sold (STOP to exit): ";
        string flavor;
        getline(cin, flavor);        // getline keeps "rocky road" intact
        if (toLower(flavor) == "stop")
            break;

        int index = -1;
        for (int i = 0; i < NUM_FLAVORS; i++)
            if (toLower(flavor) == flavors[i])
                index = i;
        if (index == -1)
        {
            cout << "You have entered an unavailable flavor.\n";
            continue;
        }

        cout << "Enter how many scoops were sold: ";
        int scoops;
        cin >> scoops;
        cin.ignore(1000, '\n');      // discard the rest of the line
        counts[index] += scoops;
    }

    cout << "\nDAILY SCOOP REPORT:\n";
    for (int i = 0; i < NUM_FLAVORS; i++)
        cout << left << setw(12) << (labels[i] + ":")
             << right << setw(3) << counts[i] << "\n";
    return 0;
}

Thanks in advance for any assistance.
Coffeehouse Thread (85 posts)

Developer Interview Questions

I have a "Technical Interview" with a large Manhattan-based software company on Tuesday and I need you guys' help.

For those of you who do the hiring at your company:
- What are the toughest questions you ask of your interviewees?
- What are the questions that get the most blank stares?
- What is the one thing an interviewee can do to get/lose the job right off the bat?

And for those of you who have been through plenty of interviews:
- What's the toughest question you've had to answer during an interview?
- Is there any question in particular that you dread being asked?

If I get this job it will be my big break. I think I'm ready, but any extra help will definitely be appreciated.

Either you are right for the job, or you aren't. If you are right for the job, be prepared to answer the interviewer's questions in a way that demonstrates that you are indeed right for the job.

The interview process is a two-way street. Ask questions about the company. Find out if it's the kind of place you want to work. It may seem like a "big break" job to you now, but maybe not so much after you find out what exactly the job entails.

Don't lie. You would be surprised how many people think that they can BS their way into a job. If you don't know something, say so and note it as something that you should study to improve yourself.

The best questions are ones which anybody can answer, but not everyone can answer well. Some of the genius questions we've used at work before (mainly in the context of Maths) are things like:

What is the derivative of x?

A bad answer is "I don't know".
// "lied on their resume"

A good answer is "one". // "knows maths ok, we can continue to more difficult questions"

An excellent answer is "With respect to what?" // "has a degree in maths"

What is the derivative of x to the n with respect to x?

A bad answer is "I don't know". // "lied on their resume"

A good answer is "n multiplied by x to the (n minus one)". // "has A level maths"

An excellent answer is "n multiplied by x to the (n minus one) plus x to the n multiplied by the natural logarithm of x multiplied by the derivative of n with respect to x". // "has a degree and experience in maths"

In the context of computer science you can ask questions like:

What is a page fault?

A bad answer is "I don't know".
A good answer is "Something which happens when you try to access memory that isn't yours or is paged out."
An excellent answer is "System interrupt 14, used by the OS to page in and out virtual memory in the context of an OS, and a critical error in the context of a user-mode program".

Or: given a rectangle R of width and height (Rw, Rh) and a second rectangle S of width and height (Sw, Sh), find the top left corner of S offset from the top left corner of R when using
a) Top align, Bottom align, Middle vertical align
b) Left align, Right align, Middle horizontal align

A bad answer is "I don't know".

A good answer is: left align: Sx = Rx; middle: Sx = Rx + (Rw - Sw)/2; right: Sx = Rx + (Rw - Sw); and similar for vertical (sub x for y).

An excellent answer says:

public enum Align { Left = 0, Middle = 1, Right = 2 }
Sx = Rx + (int)align * (Rw - Sw)/2

Write down when you take notes on the interview roughly how long it took for them to get to the answer, but don't time it or make the interviewee feel under time pressure. If they need to draw a diagram for the question above it means that they can reason their way through a problem. If they can just give you the answer it means that they've reasoned about it before. I.e. they are experienced.
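The alignment trick in the "excellent answer" above (encode left/middle/right as 0/1/2 and scale half the free space) generalizes to both axes. A quick sketch in Python rather than the C# of the post, using integer division as the cast-to-int does:

```python
def align_offset(outer, inner, h_align, v_align):
    """Top-left corner of inner rectangle S placed inside outer rectangle R.
    h_align / v_align: 0 = left/top, 1 = middle, 2 = right/bottom."""
    rx, ry, rw, rh = outer        # outer rectangle: position and size
    sw, sh = inner                # inner rectangle: size only
    sx = rx + h_align * (rw - sw) // 2
    sy = ry + v_align * (rh - sh) // 2
    return sx, sy

# Centre a 20x10 rectangle inside a 100x50 one anchored at (0, 0):
print(align_offset((0, 0, 100, 50), (20, 10), 1, 1))   # (40, 20)
```

The single expression covers all three cases because the enum value 0, 1, 2 multiplies the free space (Rw - Sw) by 0, 1/2, or 1.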
The reason these questions are so good is that you don't get blank responses from most people. Many people can answer the question, but you have shades of correctness which help you see the people who've read books on the matter from the people who have worked in the field and have seen these problems before. Also, try and steer away from yes or no questions, and never tell them that there is a better solution. If they say the derivative of x is 1, then write it down and say "yes, you're absolutely right".

Also, try and spend a lot of the interview looking at their past experience. Ask questions with hidden agendas, such as "is this person a team player?", "is this person going to fit into our team?", "does this person have similar interests to the rest of the team, or is he just going to wind everyone up?". Don't tell them the hidden meaning behind the question or you'll just get a standard defensive response. A question which includes the word "team" will get their memorised "I am a team player" paragraph that they memorized last night, so be creative. Also, drop in a couple of questions like "What is your biggest character flaw?" and "What is the biggest failure that you've ever had while working together in a team? Why did that failure happen?" because these types of questions tell you whether their immediate response is to shift blame, and whether they're going to learn from their and other people's mistakes.

I'm currently rewriting our technical test (I have a lovely real-world scenario to which the good answer would be generics or LINQ), but the question I used to ask to see how people thought was:

Write a multiplication function without using the multiply operator. It must take two integers as parameters and be optimised.

Which field would this be exactly?

Computational Research.

Is there a requirement for the function return type or a requirement for how to handle an overflow condition?

Aren't you also meant to prohibit use of For?
int Multiply(int operand1, int operand2)
{
    for (int i = 1; i <= operand2; i++)
    {
        operand1 += operand1;
    }
    return operand1;
}
// wrote it without thinking
// is it correct?

No it's not *grin* In 3 very big ways. Which is why for isn't forbidden.

I thought the same thing, but that would give you 32 for 2*6.

Same 3 mistakes. Fun this isn't it?

Whoops, silly mistake there. How's this?

int Multiply(int operand1, int operand2)
{
    int origOperand1 = operand1;
    for (int i = 1; i <= operand2; i++)
    {
        operand1 += origOperand1;
    }
    return operand1;
}

EDIT: Getting there..

2*1  -> 4
2*2  -> 6
2*6  -> 14
6*2  -> 18
9*4  -> 45
10*0 -> 10
10*1 -> 20

long f(int i1, int i2)
{
    return (long)(((double) i1) / (1.0 / ((double) i2)));
}

long multiply(int op1, int op2)
{
    return MassiveLookupTable[op1, op2];
}

Well you didn't say what it had to be optimised for.

Ah, first mistake solved; yes the return type should be looked at; overflow handling is a bonus.

Now that's a very very nice approach. Not seen that before; of course if i2 was 0 then bang. *snicker*

No points for cheating, but a point for long.

It works, but I sure wish you didn't have to cast so much.
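None of the snippets in the thread is actually a working answer. For reference, one correct approach is shift-and-add ("Russian peasant") multiplication; this is a sketch in Python rather than the thread's C#, and it is one possible answer, not the one the interviewer had in mind:

```python
def multiply(a: int, b: int) -> int:
    """Multiply two ints using only shifts and adds (no * operator).

    Classic shift-and-add: O(log b) additions instead of the O(b) loop
    attempted in the thread. Negative operands are handled via the sign.
    """
    sign = -1 if (a < 0) != (b < 0) else 1
    a, b = abs(a), abs(b)
    result = 0
    while b:
        if b & 1:        # low bit of b set: include the current shifted a
            result += a
        a <<= 1          # doubling a ...
        b >>= 1          # ... while halving b keeps the product invariant
    return sign * result

print(multiply(9, 4))  # 36
```

Overflow handling (the "bonus" mentioned above) would be language-specific; in Python, ints are unbounded, so it does not arise here.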
https://channel9.msdn.com/Forums/Coffeehouse/259397-Developer-Interview-Questions
import "github.com/coreos/go-systemd/journal" Package journal provides write bindings to the local systemd journal. It is implemented in pure Go and connects to the journal directly over its unix socket. To read from the journal, see the "sdjournal" package, which wraps the sd-journal a C API. Enabled returns true if the local systemd journal is available for logging Print prints a message to the local systemd journal using Send(). Send a message to the local systemd journal. vars is a map of journald fields to values. Fields must be composed of uppercase letters, numbers, and underscores, but must not start with an underscore. Within these restrictions, any arbitrary field name may be used. Some names have special significance: see the journalctl documentation () for more details. vars may be nil. Priority of a journal message Package journal imports 11 packages (graph) and is imported by 361 packages. Updated 2017-03-30. Refresh now. Tools for package owners.
https://godoc.org/github.com/coreos/go-systemd/journal
Next we'll create a new MoviesController class and write some code that retrieves our Movie data and displays it back to the browser using a View template. Right-click on the Controllers folder and make a new MoviesController. This will create a new "MoviesController.cs" file underneath our \Controllers folder within our project. Let's update the MoviesController to retrieve the list of movies from our newly populated database.

using System;
using System.Linq;
using System.Web.Mvc;
using Movies.Models;

namespace Movies.Controllers
{
    public class MoviesController : Controller
    {
        MoviesEntities db = new MoviesEntities();

        public ActionResult Index()
        {
            var movies = from m in db.Movies
                         where m.ReleaseDate > new DateTime(1984, 6, 1)
                         select m;

            return View(movies.ToList());
        }
    }
}

We are performing a LINQ query so that we only retrieve movies released after the summer of 1984. We'll need a View template to render this list of movies back, so right-click in the method and select Add View to create it.

Within the Add View dialog we'll indicate that we are passing a List<Movies.Models.Movie> to our View template. Unlike the previous times we used the Add View dialog and chose to create an "Empty" template, this time we'll indicate that we want Visual Studio to automatically "scaffold" a view template for us with some default content. We'll do this by selecting the "List" item within the "View content" dropdown menu. Remember, when you have created a new class you'll need to compile your application for it to show up in the Add View dialog.

Click Add and the system will automatically generate the code for a View that displays our list of movies. This is a good time to change the <h2> heading to something like "My Movie List" like we did earlier with the Hello World view. Run your application and visit /Movies in the address bar. Now we've retrieved data from the database using a basic query inside the Controller and returned the data to a View that knows about Movies.
That View then spins through the list of Movies and creates a table of data for us. We won't be implementing Edit, Details and Delete functionality with this application, so we don't need the default links that the scaffold template created for us. Open up the /Movies/Index.aspx file and remove them. Here is the source code for what our updated View template should look like once we make these changes:

<%@ Page ... %>

<asp:Content ...>
    Movie List
</asp:Content>

<asp:Content ...>

    <h2>My Movie List</h2>

    <table>
        <tr>
            <th>Title</th>
            <th>ReleaseDate</th>
            <th>Genre</th>
            <th>Rating</th>
            <th>Price</th>
        </tr>

    <% foreach (var item in Model) { %>
        <tr>
            <td><%: item.Title %></td>
            <td><%: String.Format("{0:g}", item.ReleaseDate) %></td>
            <td><%: item.Genre %></td>
            <td><%: item.Rating %></td>
            <td><%: String.Format("{0:F}", item.Price) %></td>
        </tr>
    <% } %>

    </table>

    <p>
        <%: Html.ActionLink("Create New", "Create") %>
    </p>

</asp:Content>

The scaffold was creating links that we won't need, so we'll delete them for this example. We will keep our Create New link though, as that's next! Here's what our app looks like with that column removed. We now have a simple listing of our movie data. However, if we click the "Create New" link, we'll get an error as it's not hooked up! Let's implement a Create action method and enable a user to enter new movies in our database. This article was originally created on August 14, 2010
http://www.asp.net/mvc/overview/older-versions-1/getting-started-with-mvc/getting-started-with-mvc-part5
WAIS and other large document services - BOF
Steve Hardcastle-Kille, chair
IETF San Diego, evening, March 18, 1992

Purpose: to discuss information services that seem to be becoming popular enough to become "standards." Consider: WWW, WAIS, DS (X.500). Relationships between: documents, objects, and directory entries. UDI: Need, Form, X.500. Need for whom (see Steve H-K slide).

John Curran (BBN)
WAIS: an implementation of Z39.50. Architecture from the user's point of view:
- Servers: source for a collection of documents, indexed in some way.
- User: can send queries to servers.
All documents in a server are indexed by all words in each document. Returns bibliographic and other info, including a handle for retrieving. Provides searching and retrieval, all using Z39.50.
- A server can serve more than one source. Servers use the native file system for documents. Don't need to duplicate files.
- All "things" are considered documents, regardless of format or content.
- Can query a server to find out which sources it provides. TMC also has a source of sources. Source descriptions might be better off somewhere else, such as X.500.
Differences between Z39.50 and WAIS: Z39.50 is very general about form of data, indices, specific form of queries. WAIS essentially uses Z39.50 as a transport. Brewster would actually say that WAIS is the protocol - extensions to Z39.50 - want to merge them. There are 2 indexing models - public and private (need CM to use it). Has relevance feedback: can attach a particularly relevant document to a future query, using all words in the document as part of the query. Can add new routines to index on new types of objects. Currently view everything as text documents.

Wengyik Yeong (PSI): Representing new kinds of objects in X.500
Have presently added RFCs (documents), have 2 document series (RFCs and FYIs). Now want to move on to archives (OSI-DS 22 describes archives in X.500). Model is that each archive is a file. Not always true. Sometimes each source is a separate file.
Experience:
- Need more sophisticated approach.
- Need custom objects - least common denominator not the best (e.g. language, size of binary, machine, etc. - not things that one will find).
- More documentation info would be helpful.
- Flat organization not very good.
- Need more sophisticated experiments - used only two.

Tim Berners-Lee (World Wide Web - CERN)
Hypertext-like model: simple uniform interface. All are subsets of hypertext. The problem is searching in the hypertext model. Use WAIS or something else for searching - comes back with a hypertext document.
Architecture: client-server. Client machine knows lots of protocols for going out over the network (FTP, Prospero, home-brew (HTTP), etc.).
Addressing scheme: this is a reference. Also need common formats.
Servers: gateways to other worlds such as WAIS, VMS help files; to other kinds of servers.
HTTP: runs on TCP; send query, get response. Want to extend to sending authentication, perhaps a profile of the client so the server can know what the client can display.
HTML: markup language for sending back hypertext, also very simple.
User interfaces: for non-mouse users, tag things with numbers that they can type. Have problem of multiple indices. To fast run through. More support for interfaces than for setting up servers.
How does it fit into everything else? X.500: need to be able to refer to anything - needs universal document identifiers (currently use address, but wrong - might move). Could use DNS, but no further work on it.
- Resolvability
- Lasting value
- Cover current situation
- Relevance
- Openness
- Uniqueness
- Readability
- Structure: 3 parts, e.g. protocol, host, port
- Consensus
Could get to information (objects as above) from X.500.
WAIS vs.
WWW vs Gopher
WWW data model: document, text, or hypertext; open addressing (can always add more components).
Gopher: file or menu; open addressing; very simple server; large deployment; indexes.
WAIS: relevance feedback restricted to a single server; source file contains organization, indexing; each source is a closed world.
Gopher, WWW, Prospero: pointers can go back and forth and all over the place.
Question or comment: concern about being able to jump or charge - people might like to peer over the edge before jumping, either because it may be hard to get back, or to understand the cost of jumping.
Code is available to "collaborators" - anyone who uses it or writes code. timbl@info.cern.ch
SLAC, Fermi Lab, etc. - really for high energy physicists.

Steve Hardcastle-Kille (Directory issues) OSI-DS 25
Directories in the real world. Global naming: benefits
- labelling
- express relationships in names
- Listing services in the directory. In the broadest sense bringing things together. Might use for yellow pages, multiple providers for similar things. Might use it for localizing activity. Listings in one place might lead to listings in others.
- Browsing through X.500 to an external listing service, such as WWW or WAIS.
- Hierarchy - rigid, but can overlay multiple hierarchies.
- Pointers - alias (forward pointer across the hierarchy) and "see also".
- Use to model groups as objects with components.
Can parts of the hierarchy (DSAs) really be something else besides X.500? Might be WWW or WAIS, etc.

Paul Barker (?), UCL project (just starting up, trying to push the forefront): 3 foci (did I miss something here - I have only 2)
- Gray literature - unpublished research documents. Not systematically available. Store this stuff in the directory. Question of how to organize, where to hang them - off individuals, docs for dept, docs for institution, etc. Experiment in putting documents in the thing.
- (Funded by the British Library) Want to take MARC records of a library and model them in X.500.
One issue is that there are LOTS of attributes. (Issue: there is no one standard for MARC records.)
- Librarians are especially interested in looking for strings, queries. Question of whether "The Directory" can contain orders of magnitude more objects, and bigger objects, than heretofore.

Cliff Neuman (Prospero)
How it relates to others (non-X.500). Goal: mechanism for organizing information; follows a filesystem model rather than hypertext as in W3. Causes multiple queries, therefore has to be fast. Directory service with references to other directories or files. Does not deal with retrieval (FTP, Andrew, NFS; currently adding WAIS, will add HTTP). Prospero views a query as a directory, and the response as a file.
Prospero and X.500: can use X.500 to translate soft names to things to put into a Prospero query. Real problem is a single global naming scheme. Generally organized by owner, authority; not necessarily organized by topics. Real problem is what the topics should be and what should be in them. Believes in multiple name spaces. People can have their own, but typically will start with either a copy of or a link to another one. Need shortcuts, so the user doesn't have to construct all the detail of a namespace. Prospero allows you to glue together parts of other directories, called filters. There are canned ones, but users can build their own.
Closure: (namespace, object) - this is how to pass names. Namespaces really have addresses that are global, and not used by the user. On the other hand, each user can have his/her own name for any particular namespace. info-prospero@isi.edu

Larry Masinter, Xerox, System 33
- Document handle: uninterpreted, max 32-byte id that every doc has. Truly only a content identifier. (A substring of this is used to find the document, but hidden from users.)
- File location: protocol, host, path, offset, format, timeout.
- Description.
- Document: a thing that has a handle.
A lot of the work was in conversion of formats. Also spent time on access control - per-document ACLs.
Made them part of the description. Multiple protocols were a problem because not all machines had the same protocols; done by a gateway. Normalizing the attribute-value space would cause there to be none - LOTS of different kinds of documents. Some are literature and library docs, but others might be quotes, job applications, references, financial reports, etc. Some properties actually require computation.

Tim back again
W3 document = Prospero directory = menu. All based on an address. W3 has an all-inclusive model, but only 2 global namespaces (DNS and X.500, but DNS is no longer being extended, so the only one is X.500).
Peter Deutsch: equivalence. Question of two UDIs or pointers to one document. Also question of exact duplicates with separate UDIs. Larry Masinter believes it is ok to have a timestamp in it.
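The three-part structure discussed above (protocol, host, port) is essentially what was later standardized as the URL. As a modern aside (not part of the 1992 notes), Python's standard library can split such a reference into exactly those components; the address below is only illustrative:

```python
from urllib.parse import urlparse

# Split a reference into the components the BOF lists:
# protocol (scheme), host, port, plus a path identifying the object.
u = urlparse('http://info.cern.ch:80/hypertext/WWW/TheProject.html')
print(u.scheme, u.hostname, u.port, u.path)
```

The open-ended path component is what gives the "open addressing (can always add more components)" property noted for WWW and Gopher.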
http://www.w3.org/Conferences/IETF92/WWX_BOF_Sollins.html
#include <rtt/scripting/PeerParser.hpp>

Definition at line 59 of file PeerParser.hpp.

Create a PeerParser which starts looking for peers from a task. The locator tries to go as far as possible in the peer-to-object path and will never throw. peer() and object() will contain the last valid peer found and its supposed object, attribute or value. The parser tries to traverse a full peer-to-object path and throws if it got stuck in the middle. peer() will return the target peer and object() is this or the supposed object of the peer. The parser does not check if this object exists.
http://people.mech.kuleuven.be/~orocos/pub/stable/documentation/rtt/v1.8.x/api/html/classRTT_1_1detail_1_1PeerParser.html
My objective is to find out "what subroutine called what subroutine" & accomplish this for the *entire* Perl program. To clarify this objective: for each subroutine, I am trying to find the parent & do this for every subroutine that exists. This would allow me to get a subroutine stack of my entire program. My program - including my own modules - is ~10,000 lines. The number of subroutines is ~100 (one hundred).

I am aware of the "caller" function, which provides access to the call stack. Furthermore, in order to find out who the "parent" is, something like this can be used:

$parent = ( caller(1) )[3];

However, the above requires inserting the aforementioned code snippet into each & every subroutine (in order to achieve the objective).

Question: is there an easy way to obtain my objective without inserting the caller function all over my code? Is there an easier alternative? Thanks in advance for any help.

Anyway, this looks like an XY Problem to me - what's your ultimate goal?

For example, this is the way the Perl debugger does it: the DB::sub subroutine gets called automatically by the interpreter for every sub call in the program being run; DB::sub captures caller()'s output, and then uses an eval() sub in the debugger's namespace (not the core eval!) to run the subroutine in the caller's context. When DB::eval is over, we pop the stack to remove the caller info for this call. Meanwhile, DB::DB gets called by the interpreter after every line, so you can set a variable in the DB:: namespace in your main program to signal DB::DB that now's the time to dump the stack.

If you just need a quick-and-dirty one-time trace, you can put a $DB::single=1 in your code at the point where you want to stop and do your traceback, and then run your code under the debugger. It will stop at the $DB::single, at which point you can issue the T command to get the call trace.
http://www.perlmonks.org/?node_id=788453
import java.math.*;
import java.util.*;

public class PairOfDice {

    public int die1;  // Number showing on the first die.
    public int die2;  // Number showing on the second die.

    public PairOfDice() {
        // Constructor. Rolls the dice, so that they initially
        // show some random values.
        roll();  // Call the roll() method to roll the dice.
    }

    public PairOfDice(int val1, int val2) {
        // Constructor. Creates a pair of dice that
        // are initially showing the values val1 and val2.
        die1 = val1;  // Assign specified values
        die2 = val2;  // to the instance variables.
    }

    public void roll() {
        // Roll the dice by setting each of the dice to be
        // a random number between 1 and 6.
        die1 = (int)(Math.random()*6) + 1;
        die2 = (int)(Math.random()*6) + 1;
    }

} // end class PairOfDice

public class RollTwoPairs {

    public static void main(String[] args) {

        PairOfDice firstDice;   // Refers to the first pair of dice.
        firstDice = new PairOfDice();

        PairOfDice secondDice;  // Refers to the second pair of dice.
        secondDice = new PairOfDice();

        int countRolls;  // Counts how many times the two pairs of
                         // dice have been rolled.

        int total1;      // Total showing on first pair of dice.
        int total2;      // Total showing on second pair of dice.

        countRolls = 0;

        do {  // Roll the two pairs of dice until totals are the same.

            firstDice.roll();   // Roll the first pair of dice.
            total1 = firstDice.die1 + firstDice.die2;    // Get total.
            System.out.println("First pair comes up " + total1);

            secondDice.roll();  // Roll the second pair of dice.
            total2 = secondDice.die1 + secondDice.die2;  // Get total.
            System.out.println("Second pair comes up " + total2);

            countRolls++;  // Count this roll.

            System.out.println();  // Blank line.

        } while (total1 != total2);

        System.out.println("It took " + countRolls
                + " rolls until the totals were the same.");

    } // end main()

} // end class RollTwoPairs
http://www.javaprogrammingforums.com/whats-wrong-my-code/27110-confusing-problem-some-java-code-book-deals-classes-constructors.html
Generate QR Code using Python

Want to share your content on python-bloggers? click here.

This article will explore how to generate QR codes in Python and some useful creation features from the pyqrcode library.

Table of Contents
- Introduction
- Create a simple QR code image
- QR code parameters
- More QR code examples
- Conclusion

Introduction

QR codes recently became more popular than ever before, yet few people know that the first iterations of QR codes were created back in the 1990s in Japan for the automotive industry. A QR (quick response) code is essentially a barcode that we are all used to seeing on the products we buy in grocery stores. It works the same way: a QR code is a label that contains specific information. Unlike a traditional barcode, QR codes are capable of storing more information and are often used to store product details, geolocations, coupons, URLs, and much more.

Due to its interesting capability of storing information, it became an area of interest in data science and machine learning, mostly in the analytics area. If you think of a retail store that sells apparel, a simple QR code on each item can potentially store the item description, colour, price, and other information. Once an item is purchased, that information can be retrieved from a POS system or data storage and further feed into a recommender system, for example.

Now that we know what QR codes are and how they can be used, let's dive into actually creating our first simple QR code image and try to access the information using it.

Create a simple QR code using Python

To continue following this tutorial we will need two Python libraries: pyqrcode and pypng. If you don't have them installed, please open "Command Prompt" (on Windows) and install them using the following code:

pip install pyqrcode
pip install pypng

Import the required library:

import pyqrcode

Once the libraries are downloaded, installed, and imported, we can proceed with the Python code implementation.
I will be creating a QR code which, when scanned, will take you to this tutorial in your mobile device browser. To do this, I first need to find the URL to this post and store it as a variable:

dest = ''

The next step is actually creating the QR code object that will contain our link:

myQR = pyqrcode.QRCode(dest)

Here we create an instance of the pyqrcode.QRCode() class and pass our dest (destination) to it as an argument, and in return it creates a QR code object. We can take a look at it right away using:

myQR.show()

And see:

Very simple and easy to understand, right? To reuse this QR code, we will go ahead and save it as a PNG:

myQR.png('qrcode1.png', scale=8)

Note: scale=8 is a parameter that adjusts the size of the QR code PNG, and you can adjust it to increase/decrease the size of the QR code image. You can test this QR code using your phone's camera (I used my iPhone) and it will take you to the URL of this article.

QR Code Object Parameters

The section above showed how to create a simple QR code without any specific adjustments or selected parameters. When they aren't specified, they take the default values and the code is executed. However, if we want to add some customization, it is worth discussing how and what we can adjust. When we created our simple QR code, using the following line of code:

myQR = pyqrcode.QRCode(dest)

in reality there were certain preset default parameters, and, if expanded, the code would look like the following (yet produce the same output):

myQR = pyqrcode.QRCode(dest, error='H', version=None, mode=None, encoding='iso-8859-1')

Let's begin with listing all possible parameters of the pyqrcode.QRCode() class and discuss what each of them does:

- content: this is the 'target' destination that we want to encode in the QR code.
- error: error correction level of the code (by default set to 'H', which is the highest possible level).
- version: specifies the size and data capacity of the code (can take integer values between 1 and 40); when left unspecified, it will find the smallest possible QR code version to store the data that we want (knowing its size).
- mode: specifies how the content will be encoded (there are four options: numeric, alphanumeric, binary, kanji). If left unspecified, it will be guessed by the algorithm.
- encoding: specifies how the content will be encoded and defaults to iso-8859-1.

Detailed explanation of each parameter is available here. You can play around with our initial code and tune the above parameters to see how the differences will be represented in the final QR code image.

More QR code examples using Python

What is interesting is how well adapted smartphone algorithms are for QR code reading. In other words, when scanning these with an iPhone, Apple's QR code decoders will know right away which app to use for each content of the QR code. To test this, let's try creating QR codes for a URL, an address, and a phone number:

dest = ['',
        '1 Yonge Street, Toronto, Ontario, Canada',
        '+1 (999) 999-9999']

for i in dest:
    myQR = pyqrcode.QRCode(i)
    myQR.png('myqrcode' + str(dest.index(i)) + '.png', scale=8)

This will create and save three QR codes in the same directory where your script is located. Each of these, when scanned, will be identified by an iPhone's QR code decoder and apps to open them will be suggested automatically:

Link (will be prompted to open in Safari):
Address (will be prompted to open in Apple Maps):
Phone number (will be prompted to open in Phone and call):

An interesting thing to notice is the size of each QR code. Essentially, the way to think about it is: the longer (larger) the content you are trying to store in the QR code, the larger its size will be. In addition, exploring the related documentation also shows the parameters of the PNG output and explains how to adjust it (for example, make the QR code background green, and so on).

In conclusion, we only saved the QR codes we made in PNG format, yet the pyqrcode library allows for great flexibility in terms of output formats of QR codes.

Conclusion

This article focused on exploring the process of creating QR codes and saving them as PNG files using Python. It should be a good foundation to understand the process and have the knowledge to build on the functionality. Feel free to leave comments below if you have any questions or have suggestions for some edits.

The post Generate QR Code using Python appeared first on PyShark.
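One small aside on the loop used earlier: dest.index(i) rescans the list on every pass and would return the wrong index if two entries happened to be equal; enumerate avoids both issues. A sketch of the same filename logic (the list entries here are placeholders, not the article's data, and the actual QR call is left as a comment):

```python
# Derive output filenames with enumerate instead of dest.index(i).
dest = ['https://example.com',
        '1 Yonge Street, Toronto, Ontario, Canada',
        '+1 (999) 999-9999']

filenames = []
for i, content in enumerate(dest):  # i is the position, content the payload
    filenames.append('myqrcode{}.png'.format(i))
    # a real run would now call, e.g.:
    # pyqrcode.QRCode(content).png(filenames[-1], scale=8)

print(filenames)
```

The behavior is identical for this list; the difference only matters for long lists or duplicate entries.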
https://python-bloggers.com/2020/07/generate-qr-code-using-python/
XML-RPC Plugin

Dependency: compile ":xmlrpc:0.1"

Summary

Description

XML RPC Plugin

The XML RPC plugin allows a Grails application to act as an XML RPC server.

HOWTO

This is how to set up the Grails app to handle an RPC call.

The Service

The first step is to implement a Service that extends the Xmlrpc class.

class HelloService extends Xmlrpc {

    boolean transactional = false

    // say hello
    def hello(params) {
        return "hello"
    }
}

The Controller

The second step is to arrange for a controller action to service the RPC call. The controller needs to inject the HelloService and then call the service() routine.

class MyController {
    def helloService

    def xmlrpc = {
        helloService.service(request, response)
    }
}

The server side is now set up.

Doing a call

To check that everything is set up correctly, we can make a call into the server. Here is code that uses John Wilson's groovy-xmlrpc library to call the hello() function.

import groovy.net.xmlrpc.*

class XMLRPC1 {
    def static main(args) {
        def remote = new XMLRPCServerProxy("")
        def response = remote.hello()
        println "response = " + response.class + " => " + response
    }
}

A More Useful Example

In reality, we need the RPC to do more than say "hello". The example here is an outline for implementing the MetaWebLog API for blogs. See for more details. One of the things to notice about this API is that the function names have dots (.) in them, e.g. metaWeblog.getRecentPosts or metaWeblog.newPost. Also, these functions take a set of parameters, which are passed to the server. How these are dealt with will be shown here.

The Service

This service acts as the handler for the MetaWebLog API. To call functions with names that are not legal as Groovy method names, there is a mapping from function name to legal method name. This is held in the static property called 'mapping'. The parameters to the original call come in as a list of values. For example, the MetaWebLog call

metaWeblog.newPost(blogid, username, password, struct, publish)

passes a list of 5 elements to the method. Any structured values, such as the struct parameter, are handled properly and get passed to the method as a Map. Return values can be simple types or structured, and are handled correctly. Here is the outline of the service:

class MetaweblogService extends Xmlrpc {

    boolean transactional = false

    // this is the mapping from function name to method name
    static final def mapping = [
        'metaWeblog.getRecentPosts' : 'recentPosts',
        'metaWeblog.newPost' : 'newPost'
    ]

    // it's a new post
    def newPost(params) {
        def blogid = params[0]
        def username = params[1]
        def password = params[2]
        def struct = params[3]
        def publish = params[4]

        // The struct is a map
        def newTitle = struct.title
        def newDesc = struct.description
        …
    }

    // return some recent posts
    def recentPosts(params) {
        def count = params[3]
        def answer = []
        answer << [title: "One", description: "Number 1"]
        answer << [title: "Two", description: "Number 2"]
        return answer
    }
}

Obviously, this can be extended to implement the full behaviour of the API.

The Controller

This is set up exactly the same way as before.

class MyController {
    def metaweblogService

    def xmlrpc = {
        metaweblogService.service(request, response)
    }
}
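The name-mapping trick is not specific to Groovy. As an illustration only (not part of the plugin; the class and method names here are hypothetical), the same dispatch idea can be sketched in Python: look the dotted wire name up in a table, then call the legally-named local method with the raw parameter list:

```python
class MetaWeblogHandler:
    """Toy dispatcher: wire names with dots mapped to legal method names."""

    # same shape as the plugin's 'mapping' property
    mapping = {
        'metaWeblog.getRecentPosts': 'recent_posts',
        'metaWeblog.newPost': 'new_post',
    }

    def dispatch(self, wire_name, params):
        # look up the legal local name, then call it with the param list
        return getattr(self, self.mapping[wire_name])(params)

    def new_post(self, params):
        blogid, username, password, struct, publish = params
        return 'posted: ' + struct['title']  # struct arrives as a dict/Map

    def recent_posts(self, params):
        return [{'title': 'One'}, {'title': 'Two'}]

handler = MetaWeblogHandler()
print(handler.dispatch('metaWeblog.newPost',
                       ['blog1', 'user', 'pw', {'title': 'Hello'}, True]))
# prints "posted: Hello"
```

The lookup table is what lets a single entry point serve wire names that would be illegal as identifiers in the host language.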
http://www.grails.org/plugin/xmlrpc
CC-MAIN-2015-11
refinedweb
497
57.37
Frequency-tagging: Basic analysis of an SSVEP/vSSR dataset#

In this tutorial we compute the frequency spectrum and quantify the signal-to-noise ratio (SNR) at a target frequency in EEG data recorded during fast periodic visual stimulation (FPVS) at 12 Hz and 15 Hz in different trials. Extracting SNR at the stimulation frequency is a simple way to quantify frequency-tagged responses in MEEG (a.k.a. steady-state visually evoked potentials, SSVEP, or visual steady-state responses, vSSR, in the visual domain, or auditory steady-state responses, ASSR, in the auditory domain). For a general introduction to the method see Norcia et al. (2015) for the visual domain, and Picton et al. (2003) for the auditory domain.

Data and outline: We use a simple example dataset with frequency-tagged visual stimulation: N=2 participants observed checkerboard patterns inverting with a constant frequency of either 12.0 Hz or 15.0 Hz in different trials. 32-channel wet EEG was recorded. (See SSVEP for more information.)

We will visualize both the power spectral density (PSD) and the SNR spectrum of the epoched data, extract SNR at the stimulation frequency, plot the topography of the response, and statistically separate 12 Hz and 15 Hz responses in the different trials. Since the evoked response is mainly generated in early visual areas of the brain, the statistical analysis will be carried out on an occipital ROI.

Outline#

# Authors: Dominik Welke <dominik.welke@web.de>
#          Evgenii Kalenkovich <e.kalenkovich@gmail.com>
#
# License: BSD-3-Clause

Data preprocessing#

Due to a generally high SNR in SSVEP/vSSR, typical preprocessing steps are considered optional. This doesn't mean that proper cleaning would not increase your signal quality!

Raw data have an FCz reference, so we will apply common-average rereferencing. We will apply a 0.1 Hz highpass filter. Lastly, we will cut the data into 20 s epochs corresponding to the trials.
# Load raw data data_path = mne.datasets.ssvep.data_path() bids_fname = (data_path / 'sub-02' / 'ses-01' / 'eeg' / 'sub-02_ses-01_task-ssvep_eeg.vhdr') raw = mne.io.read_raw_brainvision(bids_fname, preload=True, verbose=False) raw.info['line_freq'] = 50. # Set montage montage = mne.channels.make_standard_montage('easycap-M1') raw.set_montage(montage, verbose=False) # Set common average reference raw.set_eeg_reference('average', projection=False, verbose=False) # Apply bandpass filter raw.filter(l_freq=0.1, h_freq=None, fir_design='firwin', verbose=False) # Construct epochs event_id = { '12hz': 255, '15hz': 155 } events, _ = mne.events_from_annotations(raw, verbose=False) tmin, tmax = -1., 20. # in s baseline = None epochs = mne.Epochs( raw, events=events, event_id=[event_id['12hz'], event_id['15hz']], tmin=tmin, tmax=tmax, baseline=baseline, verbose=False) Frequency analysis# Now we compute the frequency spectrum of the EEG data. You will already see the peaks at the stimulation frequencies and some of their harmonics, without any further processing. The ‘classical’ PSD plot will be compared to a plot of the SNR spectrum. SNR will be computed as a ratio of the power in a given frequency bin to the average power in its neighboring bins. This procedure has two advantages over using the raw PSD: it normalizes the spectrum and accounts for 1/f power decay. power modulations which are not very narrow band will disappear. Calculate power spectral density (PSD)# The frequency spectrum will be computed using Fast Fourier transform (FFT). This seems to be common practice in the steady-state literature and is based on the exact knowledge of the stimulus and the assumed response - especially in terms of it’s stability over time. For a discussion see e.g. Bach & Meigen (1999) We will exclude the first second of each trial from the analysis: steady-state response often take a while to stabilize, and the transient phase in the beginning can distort the signal estimate. 
This section of data is expected to be dominated by responses related to the stimulus onset, and we are not interested in it.

In MNE we call plain FFT as a special case of Welch's method, with only a single Welch window spanning the entire trial and no specific windowing function (i.e. applying a boxcar window).

Calculate signal to noise ratio (SNR)#

SNR - as we define it here - is a measure of relative power: it's the ratio of power in a given frequency bin - the 'signal' - to a 'noise' baseline - the average power in the surrounding frequency bins. This approach was initially proposed by Meigen & Bach (1999).

Hence, we need to set some parameters for this baseline - how many neighboring bins should be taken for this computation, and whether we want to skip the direct neighbors (this can make sense if the stimulation frequency is not super constant, or if frequency bands are very narrow).

The function below does what we want.

def snr_spectrum(psd, noise_n_neighbor_freqs=1, noise_skip_neighbor_freqs=1):
    """Compute SNR spectrum from PSD spectrum using convolution.

    Parameters
    ----------
    psd : ndarray, shape ([n_trials, n_channels,] n_frequency_bins)
        Data object containing PSD values. Works with arrays as produced by
        MNE's PSD functions or channel/trial subsets.
    noise_n_neighbor_freqs : int
        Number of neighboring frequencies used to compute noise level.
        Increment by one to add one frequency bin ON BOTH SIDES.
    noise_skip_neighbor_freqs : int
        Set this >=1 if you want to exclude the immediately neighboring
        frequency bins in the noise level calculation.

    Returns
    -------
    snr : ndarray, shape ([n_trials, n_channels,] n_frequency_bins)
        Array containing SNR for all epochs, channels, frequency bins.
        NaN for frequencies on the edges that do not have enough neighbors on
        one side to calculate SNR.
""" # Construct a kernel that calculates the mean of the neighboring # frequencies averaging_kernel = np.concatenate(( np.ones(noise_n_neighbor_freqs), np.zeros(2 * noise_skip_neighbor_freqs + 1), np.ones(noise_n_neighbor_freqs))) averaging_kernel /= averaging_kernel.sum() # Calculate the mean of the neighboring frequencies by convolving with the # averaging kernel. mean_noise = np.apply_along_axis( lambda psd_: np.convolve(psd_, averaging_kernel, mode='valid'), axis=-1, arr=psd ) # The mean is not defined on the edges so we will pad it with nas. The # padding needs to be done for the last dimension only so we set it to # (0, 0) for the other ones. edge_width = noise_n_neighbor_freqs + noise_skip_neighbor_freqs pad_width = [(0, 0)] * (mean_noise.ndim - 1) + [(edge_width, edge_width)] mean_noise = np.pad( mean_noise, pad_width=pad_width, constant_values=np.nan ) return psd / mean_noise Now we call the function to compute our SNR spectrum. As described above, we have to define two parameters. how many noise bins do we want? do we want to skip the n bins directly next to the target bin? Tweaking these parameters can drastically impact the resulting spectrum, but mainly if you choose extremes. E.g. if you’d skip very many neighboring bins, broad band power modulations (such as the alpha peak) should reappear in the SNR spectrum. On the other hand, if you skip none you might miss or smear peaks if the induced power is distributed over two or more frequency bins (e.g. if the stimulation frequency isn’t perfectly constant, or you have very narrow bins). Here, we want to compare power at each bin with average power of the three neighboring bins (on each side) and skip one bin directly next to it. Plot PSD and SNR spectra# Now we will plot grand average PSD (in blue) and SNR (in red) ± sd for every frequency bin. PSD is plotted on a log scale. 
fig, axes = plt.subplots(2, 1, sharex='all', sharey='none', figsize=(8, 5))
freq_range = range(np.where(np.floor(freqs) == 1.)[0][0],
                   np.where(np.ceil(freqs) == fmax - 1)[0][0])

psds_plot = 10 * np.log10(psds)
psds_mean = psds_plot.mean(axis=(0, 1))[freq_range]
psds_std = psds_plot.std(axis=(0, 1))[freq_range]
axes[0].plot(freqs[freq_range], psds_mean, color='b')
axes[0].fill_between(
    freqs[freq_range], psds_mean - psds_std, psds_mean + psds_std,
    color='b', alpha=.2)
axes[0].set(title="PSD spectrum", ylabel='Power Spectral Density [dB]')

# SNR spectrum
snr_mean = snrs.mean(axis=(0, 1))[freq_range]
snr_std = snrs.std(axis=(0, 1))[freq_range]
axes[1].plot(freqs[freq_range], snr_mean, color='r')
axes[1].fill_between(
    freqs[freq_range], snr_mean - snr_std, snr_mean + snr_std,
    color='r', alpha=.2)
axes[1].set(
    title="SNR spectrum", xlabel='Frequency [Hz]',
    ylabel='SNR', ylim=[-2, 30], xlim=[fmin, fmax])
fig.show()

You can see that the peaks at the stimulation frequencies (12 Hz, 15 Hz) and their harmonics are visible in both plots (just like the line noise at 50 Hz). Yet, the SNR spectrum shows them more prominently as peaks from a noisy but more or less constant baseline of SNR = 1.

You can further see that the SNR processing removes any broad-band power differences (such as the increased power in the alpha band around 10 Hz), and also removes the 1/f decay in the PSD.

Note that while the SNR plot implies the possibility of values below 0 (mean minus sd), such values do not make sense. Each SNR value is a ratio of positive PSD values, and the lowest possible PSD value is 0 (negative y-axis values in the upper panel only result from plotting PSD on a log scale). Hence SNR values must be positive and can minimally go towards 0.

Extract SNR values at the stimulation frequency#

Our processing yielded a large array of many SNR values for each trial x channel x frequency-bin of the PSD array.

For statistical analysis we obviously need to define specific subsets of this array.
First of all, we are only interested in SNR at the stimulation frequency, but we also want to restrict the analysis to a spatial ROI. Lastly, answering your interesting research questions will probably rely on comparing SNR in different trials.

Therefore we will have to find the indices of trials, channels, etc. Alternatively, one could subselect the trials already at the epoching step, using MNE's event information, and process different epoch structures separately.

Let's only have a look at the trials with 12 Hz stimulation, for now.

Get index for the stimulation frequency (12 Hz)#

Ideally, there would be a bin with the stimulation frequency exactly in its center. However, depending on your spectral decomposition this is not always the case. We will find the bin closest to it - this one should contain our frequency-tagged response.

# find index of frequency bin closest to stimulation frequency
i_bin_12hz = np.argmin(abs(freqs - stim_freq))
# could be updated to support multiple frequencies

# for later, we will already find the 15 Hz bin and the 1st and 2nd harmonic
# for both.
i_bin_24hz = np.argmin(abs(freqs - 24))
i_bin_36hz = np.argmin(abs(freqs - 36))
i_bin_15hz = np.argmin(abs(freqs - 15))
i_bin_30hz = np.argmin(abs(freqs - 30))
i_bin_45hz = np.argmin(abs(freqs - 45))

Get indices for the different trial types#

i_trial_12hz = np.where(epochs.events[:, 2] == event_id['12hz'])[0]
i_trial_15hz = np.where(epochs.events[:, 2] == event_id['15hz'])[0]

Get indices of EEG channels forming the ROI#

# Define different ROIs
roi_vis = ['POz', 'Oz', 'O1', 'O2', 'PO3', 'PO4', 'PO7', 'PO8',
           'PO9', 'PO10', 'O9', 'O10']  # visual roi

# Find corresponding indices using mne.pick_types()
picks_roi_vis = mne.pick_types(epochs.info, eeg=True, stim=False,
                               exclude='bads', selection=roi_vis)

Apply the subset, and check the result#

Now we simply need to apply our selection and yield a result. Therefore, we typically report grand average SNR over the subselection.
In this tutorial we don't verify the presence of a neural response. This is commonly done in the ASSR literature, where SNR is often lower. An F-test or Hotelling T² would be appropriate for this purpose.

snrs_target = snrs[i_trial_12hz, :, i_bin_12hz][:, picks_roi_vis]
print("sub 2, 12 Hz trials, SNR at 12 Hz")
print(f'average SNR (occipital ROI): {snrs_target.mean()}')

sub 2, 12 Hz trials, SNR at 12 Hz
average SNR (occipital ROI): 41.6936554171862

Topography of the vSSR#

But wait... As described in the intro, we have decided a priori to work with average SNR over a subset of occipital channels - a visual region of interest (ROI) - because we expect SNR to be higher on these channels than in other channels. Let's check whether this was a good decision!

Here we will plot average SNR for each channel location as a topoplot. Then we will do a simple paired T-test to check whether average SNRs over the two channel sets are significantly different.

# get average SNR at 12 Hz for ALL channels
snrs_12hz = snrs[i_trial_12hz, :, i_bin_12hz]
snrs_12hz_chaverage = snrs_12hz.mean(axis=0)

# plot SNR topography
fig, ax = plt.subplots(1)
mne.viz.plot_topomap(snrs_12hz_chaverage, epochs.info, vmin=1., axes=ax)

print("sub 2, 12 Hz trials, SNR at 12 Hz")
print("average SNR (all channels): %f" % snrs_12hz_chaverage.mean())
print("average SNR (occipital ROI): %f" % snrs_target.mean())

tstat_roi_vs_scalp = \
    ttest_rel(snrs_target.mean(axis=1), snrs_12hz.mean(axis=1))
print("12 Hz SNR in occipital ROI is significantly larger than 12 Hz SNR over "
      "all channels: t = %.3f, p = %f" % tstat_roi_vs_scalp)

sub 2, 12 Hz trials, SNR at 12 Hz
average SNR (all channels): 16.985902
average SNR (occipital ROI): 41.693655
12 Hz SNR in occipital ROI is significantly larger than 12 Hz SNR over all channels: t = 6.950, p = 0.000067

We can see that 1) this participant indeed exhibits a cluster of channels with high SNR in the occipital region and 2) that the average SNR over all channels is
smaller than the average of the visual ROI computed above. The difference is statistically significant. Great!

Such a topography plot can be a nice tool to explore and play with your data - e.g. you could try how changing the reference will affect the spatial distribution of SNR values.

However, we also wanted to show this plot to point at a potential problem with frequency-tagged (or any other brain imaging) data: there are many channels, and somewhere you will likely find some statistically significant effect. It is very easy - even unintended - to end up double-dipping or p-hacking.

So if you want to work with an ROI or individual channels, ideally select them a priori - before collecting or looking at the data - and preregister this decision so people will believe you. If you end up selecting an ROI or individual channel for reporting because this channel or ROI shows an effect, e.g. in an explorative analysis, this is also fine, but make it transparent and correct for multiple comparisons.

Statistical separation of 12 Hz and 15 Hz vSSR#

After this little detour into open science, let's move on and do the analyses we actually wanted to do:

We will show that we can easily detect and discriminate the brain's responses in the trials with different stimulation frequencies.

In the frequency and SNR spectrum plot above, we had all trials mixed up. Now we will extract 12 and 15 Hz SNR in both types of trials individually, and compare the values with a simple t-test. We will also extract SNR of the 1st and 2nd harmonic for both stimulation frequencies. These are often reported as well and can show interesting interactions.

snrs_roi = snrs[:, picks_roi_vis, :].mean(axis=1)

freq_plot = [12, 15, 24, 30, 36, 45]
color_plot = [
    'darkblue', 'darkgreen', 'mediumblue', 'green', 'blue', 'seagreen'
]

xpos_plot = [-5. / 12, -3. / 12, -1. / 12, 1. / 12, 3. / 12, 5. / 12]
fig, ax = plt.subplots()
labels = ['12 Hz trials', '15 Hz trials']
x = np.arange(len(labels))  # the label locations
width = 0.6  # the width of the bars
res = dict()

# loop to plot SNRs at stimulation frequencies and harmonics
for i, f in enumerate(freq_plot):
    # extract snrs
    stim_12hz_tmp = \
        snrs_roi[i_trial_12hz, np.argmin(abs(freqs - f))]
    stim_15hz_tmp = \
        snrs_roi[i_trial_15hz, np.argmin(abs(freqs - f))]
    SNR_tmp = [stim_12hz_tmp.mean(), stim_15hz_tmp.mean()]
    # plot (with std)
    ax.bar(
        x + width * xpos_plot[i], SNR_tmp, width / len(freq_plot),
        yerr=np.std(SNR_tmp),
        label='%i Hz SNR' % f, color=color_plot[i])
    # store results for statistical comparison
    res['stim_12hz_snrs_%ihz' % f] = stim_12hz_tmp
    res['stim_15hz_snrs_%ihz' % f] = stim_15hz_tmp

# Add some text for labels, title and custom x-axis tick labels, etc.
ax.set_ylabel('SNR')
ax.set_title('Average SNR at target frequencies')
ax.set_xticks(x)
ax.set_xticklabels(labels)
ax.legend(['%i Hz' % f for f in freq_plot], title='SNR at:')
ax.set_ylim([0, 70])
ax.axhline(1, ls='--', c='r')
fig.show()

As you can easily see, there are striking differences between the trials. Let's verify this using a series of two-tailed paired T-Tests.
# Compare 12 Hz and 15 Hz SNR in trials after averaging over channels

tstat_12hz_trial_stim = \
    ttest_rel(res['stim_12hz_snrs_12hz'], res['stim_12hz_snrs_15hz'])
print("12 Hz Trials: 12 Hz SNR is significantly higher than 15 Hz SNR"
      ": t = %.3f, p = %f" % tstat_12hz_trial_stim)

tstat_12hz_trial_1st_harmonic = \
    ttest_rel(res['stim_12hz_snrs_24hz'], res['stim_12hz_snrs_30hz'])
print("12 Hz Trials: 24 Hz SNR is significantly higher than 30 Hz SNR"
      ": t = %.3f, p = %f" % tstat_12hz_trial_1st_harmonic)

tstat_12hz_trial_2nd_harmonic = \
    ttest_rel(res['stim_12hz_snrs_36hz'], res['stim_12hz_snrs_45hz'])
print("12 Hz Trials: 36 Hz SNR is significantly higher than 45 Hz SNR"
      ": t = %.3f, p = %f" % tstat_12hz_trial_2nd_harmonic)

print()

tstat_15hz_trial_stim = \
    ttest_rel(res['stim_15hz_snrs_12hz'], res['stim_15hz_snrs_15hz'])
print("15 Hz trials: 12 Hz SNR is significantly lower than 15 Hz SNR"
      ": t = %.3f, p = %f" % tstat_15hz_trial_stim)

tstat_15hz_trial_1st_harmonic = \
    ttest_rel(res['stim_15hz_snrs_24hz'], res['stim_15hz_snrs_30hz'])
print("15 Hz trials: 24 Hz SNR is significantly lower than 30 Hz SNR"
      ": t = %.3f, p = %f" % tstat_15hz_trial_1st_harmonic)

tstat_15hz_trial_2nd_harmonic = \
    ttest_rel(res['stim_15hz_snrs_36hz'], res['stim_15hz_snrs_45hz'])
print("15 Hz trials: 36 Hz SNR is significantly lower than 45 Hz SNR"
      ": t = %.3f, p = %f" % tstat_15hz_trial_2nd_harmonic)

12 Hz Trials: 12 Hz SNR is significantly higher than 15 Hz SNR: t = 7.510, p = 0.000037
12 Hz Trials: 24 Hz SNR is significantly higher than 30 Hz SNR: t = 8.489, p = 0.000014
12 Hz Trials: 36 Hz SNR is significantly higher than 45 Hz SNR: t = 14.899, p = 0.000000
15 Hz trials: 12 Hz SNR is significantly lower than 15 Hz SNR: t = -5.692, p = 0.000297
15 Hz trials: 24 Hz SNR is significantly lower than 30 Hz SNR: t = -7.916, p = 0.000024
15 Hz trials: 36 Hz SNR is significantly lower than 45 Hz SNR: t = -3.519, p = 0.006525

Debriefing#

So that's it, we hope you enjoyed our little tour through this
example dataset. As you could see, frequency-tagging is a very powerful tool that can yield very high signal-to-noise ratios and effect sizes that enable you to detect brain responses even within a single participant and single trials of only a few seconds duration.

Bonus exercises#

For the overly motivated amongst you, let's see what else we can show with these data.

Using the PSD function as implemented in MNE makes it very easy to change the amount of data that is actually used in the spectrum estimation. Here we employ this to show you some features of frequency-tagging data that you might or might not have already intuitively expected:

Effect of trial duration on SNR#

First we will simulate shorter trials by taking only the first x s of our 20 s trials (2, 4, 6, 8, ..., 20 s), and compute the SNR using an FFT window that covers the entire epoch:

stim_bandwidth = .5

# shorten data and welch window
window_lengths = [i for i in range(2, 21, 2)]
window_snrs = [[]] * len(window_lengths)
for i_win, win in enumerate(window_lengths):
    # compute spectrogram
    windowed_psd, windowed_freqs = mne.time_frequency.psd_welch(
        epochs[str(event_id['12hz'])],
        n_fft=int(sfreq * win),
        n_overlap=0, n_per_seg=None,
        tmin=0, tmax=win,
        window='boxcar',)
    # compute SNR of the windowed spectra and keep the 12 Hz SNR in the
    # visual ROI, one value per trial
    windowed_snrs = snr_spectrum(windowed_psd, noise_n_neighbor_freqs=3,
                                 noise_skip_neighbor_freqs=1)
    window_snrs[i_win] = windowed_snrs[
        :, picks_roi_vis, np.argmin(abs(windowed_freqs - 12.))].mean(axis=1)

fig, ax = plt.subplots(1)
ax.boxplot(window_snrs, labels=window_lengths, vert=True)
ax.set(title='Effect of trial duration on 12 Hz SNR',
       ylabel='Average SNR', xlabel='Trial duration [s]')
ax.axhline(1, ls='--', c='r')
fig.show()

You can see that the signal estimate / our SNR measure increases with the trial duration.

This should be easy to understand: in longer recordings there is simply more signal (one second of additional stimulation adds, in our case, 12 cycles of signal) while the noise is (hopefully) stochastic and not locked to the stimulation frequency. In other words: with more data the signal term grows faster than the noise term.
We can further see that the very short trials with FFT windows < 2-3 s are not great - here we've either hit the noise floor and/or the transient response at the trial onset covers too much of the trial.

Again, this tutorial doesn't statistically test for the presence of a neural response, but an F-test or Hotelling T² would be appropriate for this purpose.

Time resolved SNR#

...and finally we can trick MNE's PSD implementation into a sliding-window analysis and come up with a time-resolved SNR measure. This will reveal whether a participant blinked or scratched their head...

Each of the ten trials is coded with a different color in the plot below.

# sliding window
window_length = 4
window_starts = [i for i in range(20 - window_length)]
window_snrs = [[]] * len(window_starts)

for i_win, win in enumerate(window_starts):
    # compute spectrogram
    windowed_psd, windowed_freqs = mne.time_frequency.psd_welch(
        epochs[str(event_id['12hz'])],
        n_fft=int(sfreq * window_length) - 1,
        n_overlap=0, n_per_seg=None,
        window='boxcar',
        tmin=win, tmax=win + window_length,)
    # compute SNR and keep the 12 Hz SNR in the visual ROI, one value per
    # trial
    windowed_snrs = snr_spectrum(windowed_psd, noise_n_neighbor_freqs=3,
                                 noise_skip_neighbor_freqs=1)
    window_snrs[i_win] = windowed_snrs[
        :, picks_roi_vis, np.argmin(abs(windowed_freqs - 12.))].mean(axis=1)

fig, ax = plt.subplots(1)
colors = plt.get_cmap('Greys')(np.linspace(0, 1, 10))
for i in range(10):
    ax.plot(window_starts, np.array(window_snrs)[:, i], color=colors[i])
ax.set(title='Time resolved 12 Hz SNR - %is sliding window' % window_length,
       ylabel='Average SNR', xlabel='t0 of analysis window [s]')
ax.axhline(1, ls='--', c='r')
ax.legend(['individual trials in greyscale'])
fig.show()

Well... turns out this was a bit too optimistic ;)

But seriously: this was a nice idea, but we've reached the limit of what's possible with this single-subject example dataset. However, there might be data, applications, or research questions where such an analysis makes sense.

Total running time of the script: (0 minutes 16.157 seconds)

Estimated memory usage: 316 MB
https://mne.tools/dev/auto_tutorials/time-freq/50_ssvep.html
Arrays

/*I don't quite understand arrays. This is my first time seeing them in my life, can you please explain to me what the error here is?*/

Your calendar program should output all the days of the week, but it has errors. Change the code so that the program prints the days.

public class Main {
    public static void main(String[] args) {
        int[] days = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"];
        for (int i = 0; i < 7; i++) {
            System.out.println(days[i]);
        }
    }
}

7 Answers

You have to declare arrays with curly braces:

int[] days = { .. }; // replace with [ ]

edit: Array days should be String type as well.

Oh thanks guys, I've tried replacing the curly braces before, but didn't think about the array name. Thanks a lot!

public class Main {
    public static void main(String[] args) {
        String[] days = {"Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"};
        for (int i = 0; i < 7; i++) {
            System.out.println(days[i]);
        }
    }
}

As Jayakrishna🇮🇳 writes, you should use curly braces. What's more, there are strings in the array, not numbers. Therefore, you should change the data type from int to String:

String[] days = {"Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"};

Think of arrays as groups of storage containers. A single-dimensional array is just a long line of boxes that are numbered starting from 0. An individual box can be identified by a single number. A two-dimensional array is like a storage rack of boxes. To identify an individual box requires two numbers, the shelf number and the box number on that shelf. A three-dimensional array is like having rows of shelves. An individual location requires three numbers to identify it: the row number, the shelf number, and the box number. A four-dimensional array is like identifying the warehouse, row, shelf, box. etc. etc. etc.

What does the for loop do?
I'm doing the same problem why is there and why are we printing it.

The loop is there to iterate through the days of the week. Remember, for loops determine how many times a specific piece of code will run. Thus:

for (x = 0; x < 7; x++) {
    System.out.println(days[2]);
}

will print Wednesday seven times. That should point you in the right direction.
https://www.sololearn.com/Discuss/2729526/arrays
I need to make a script that can make my maincamera fly around a sphere at a constant height and speed. I need to write it in C# and I'm kinda new to that, so I need some help. So far I have just written a script that orbits around the object, but I want to control the camera with WASD. This is my orbit-script so far:

public class CameraOrbit : MonoBehaviour
{
    public GameObject target = null;
    public bool orbitY = false;

    void Start() { }

    void Update()
    {
        if (target != null)
        {
            transform.LookAt(target.transform);
            if (orbitY)
            {
                transform.RotateAround(target.transform.position, Vector3.up, Time.deltaTime * 10);
            }
        }
    }
}

-------------Problems Reply------------

you have 2 options how to do this:

option #1 is the way you started

option #2: instead of rotating the camera around the sphere center, you can make the camera a child node of a node in the sphere center and then just change the euler angles of this parent node:

using UnityEngine;

public class RotateChild : MonoBehaviour
{
    void Update()
    {
        if (Input.GetKey(KeyCode.W))
            transform.Rotate(1, 0, 0);
        if (Input.GetKey(KeyCode.S))
            transform.Rotate(-1, 0, 0);
        if (Input.GetKey(KeyCode.A))
            transform.Rotate(0, -1, 0);
        if (Input.GetKey(KeyCode.D))
            transform.Rotate(0, 1, 0);
    }
}

use this on a node in the sphere center. make the camera a child of this node.

the sphere with scale 5: the parent of the camera with attached script RotateChild:

this works 100% - i tested it :)
http://www.dskims.com/make-camera-follow-that-can-fly-around-a-sphere-at-a-constant-height-and-speed/
Weekly Challenge #1

Good Morning/Afternoon/Evening/Night to all beautiful replers! Today we are starting a very requested set of events. That's right! TODAY WE START WEEKLY CHALLENGES ONCE AGAIN!

For the new users who were not around the last time we were hosting these: these are short coding challenges that you are required to finish within 1 week. A new challenge is posted every weekend and you have until the next challenge is posted to finish that challenge.

At the end of every month, the total score of the 4 challenges held within that month is your score. The one with the highest score at the end of every month will be awarded a free replit hacker plan!

To post your submission, just publish your repl onto apps and make sure to include the tag #weekly{n} in the title, replacing {n} with the number of the weekly challenge. For example, for the submission to this weekly challenge, publish the repl that contains your submission on apps and include the tag #weekly1.

More guidelines

- You are allowed to make only 1 submission. Only submit after you're completely sure about submitting your submission. Your score won't be updated once your submission is scored.
- If there is any sort of condition in which your submission does not satisfy the challenge's requirements, its score will be 0.

And that's it! Now, let's get back to this week's weekly challenge.

SQUARE ROOT!

Inspired by last year's first challenge, you have to write a program that finds a number's square root. BUT, as usual, with a twist! You are not allowed to use the arithmetic operators * and / (some languages use these for operations other than multiplication and division, so they're fine there). You can also not use any external libraries, or the square root functions of any internal libraries of any language you might be using, or any special square root or square operators specific to your language. The same goes for exponentiation operators/pre-defined functions - not allowed.
It is also fine if your program cannot find the square root in case the number is not a perfect square; the minimum requirement is for the program to be able to detect at least the square roots of perfect squares.

Note that first your output is judged, and only if it can be figured out without having to look at the code will the code be judged. Basically, you just have to add prompts that tell the user what to enter and what each value is. For example:

This is wrong

> 25
5

This is right

Please enter a number: 25
The square root of 25 is 5.

If you have any further questions, you can ask them via the comments section, and if you don't, I would still recommend going through the comments section as they may contain some extra information.

The criteria for scoring are subjective but there are points for creativity, uniqueness, clean code, etc. Also, you may find @DynamicSquid hosting these alongside me so just know that those are official too and you will be getting scores for those.

Good luck to all the replers, have fun and hack away!

@JeffreyChen13 You can do multiplication and exponents using addition. eg:

function mult(a, b) {
    let result = 0;
    for (let i = 0; i < a; i++) result += b;
    return result;
}

console.log(mult(0, 6)); // -> 0
console.log(mult(6, 5)); // -> 30
console.log(mult(9, 9)); // -> 81

@JeffreyChen13 In python:

def mult(a, b):
    result = 0
    for i in range(a):
        result += b
    return result

print(mult(0, 6))  # -> 0
print(mult(6, 5))  # -> 30
print(mult(9, 9))  # -> 81

@MattDESTROYER Haha yes! I thought of that. It is multiplying but in the way of repeated addition. I thought of that too, just working on the Pygame parts in my code : )

@JeffreyChen13 Yep, you got it. Nice :) You can even do long division manually, I used these tricks in my project.
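The "long division manually" idea mentioned above can be sketched the same way as the multiplication examples - division as repeated subtraction (an illustrative sketch only, not anyone's actual submission; the function name is made up):

```python
def divide(a, b):
    # integer a / b via repeated subtraction - no / operator needed
    quotient = 0
    remainder = a
    while remainder >= b:
        remainder -= b
        quotient += 1
    return quotient, remainder

print(divide(30, 6))  # -> (5, 0)
print(divide(31, 6))  # -> (5, 1)
```

Like the repeated-addition multiplication, this is slow for large inputs, but it stays within the challenge rules.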
@JeffreyChen13 Just repeated addition in a for loop, I did it like this:

def square(root):
    sum = 0
    for i in range(root):
        sum += root
    return sum

@MattDESTROYER And to say, you really don't need that extra parameter b, because a and b are the same thing, you can just substitute b with a! Clean code there, yay!

def mult(a):
    result = 0
    for i in range(a):
        result += a
    return result

@JeffreyChen13 The parameter b was to enable you to do multiplication as well as squares. To be honest I think I recreated all the operators in my project manually lol. I also took a different approach to getting square roots to everyone else as far as I can tell.

@FlaminHotValdez You're really enjoying low-key flexing on everyone that you found such a unique solution, aren't you :P

@RayhanADev Not Babylonian. I don't think the method has a name, it's just based on logic. You can easily derive the result from simple algebra. (Ik this coz we used the same method) I gave an explanation on my spotlight page if you want to see it.

@kannibalistic Yeah. Don't worry, it's rare for plagiarism to happen. And if it does, just let me know :) Also for your scores, you'll get a comment on your repl once it has been marked

@LavernieChen No, if you move to the top of the post, today it will say "Posted 4 days ago" only if you live right now in America PST haha, if you are in another time zone, please convert, which means it would have started at July 1 (in my time zone), so it must be due at July 8.

@JeffreyChen13 Haha, yes only in America, not sure if the time zone in America matters, should be the same day

@cuber1515 Do you have discord? Just DM Drone the submission. Or reply to my comment with your submission

Hey, for C++ am I allowed to include 'iostream' or that other library that imports the printf statement? Also, when is the submission date?
@Whippingdot Yes you can use that library, and the due date is anytime between when you publish to apps and the next challenge

good, cause i deleted my python one when i learned that you could only submit it in one language :( @FlaminHotValdez

@DynamicSquid so you can't use it in any way where it will be dividing or multiplying something, string or integer?

Is hard-coding the solutions allowed? e.g. a single massive object with key-value pairs of perfect squares and their square roots xD

@EducatedStrikeC well, you'd need an infinite object, as the set of perfect squares is infinite...

@TheDrone7 @EducatedStrikeC the other option would be to use psychic powers and predict exactly which numbers will be tested, and hard-code those only.

@TheDrone7 great, thanks, I wasn't sure if it might be banned because idk if it uses / behind the scenes

I made a version in python, not sure if it fits requirements though, still working on it

I'm confused @TheDrone7, so you can't add multiplication or division to put in your code?

@DEANKASOZI This doesn't require advanced programming knowledge - just your brain! Basic knowledge should be all that's necessary

@FlaminHotValdez ik, im trying to figure it out with my brain. i mean, im pretty sure im on the right track, though it depends on the answer for this question: for python, can we use "pow(x, y)"?

where's the frontend part ew frontend

@Lord_Poseidon What's wrong with frontend? .-.

@MrVoo nah I just suck at it

@Lord_Poseidon 😭

@Bookie0 frontend requires knowledge of several specific languages/libraries, this requires nothing but your brain

@FlaminHotValdez precisely

@FlaminHotValdez idk, frontend seems easier for me but ¯\_(ツ)_/¯

@Bookie0 yeah but it requires knowledge of specific things. This challenge requires your brain and only the most basic programming knowledge (variables, loops, if, input/output). Hmm..
I do know those basics, but I just can't think of something lol (most likely because this week I've been rather distracted and away). Oh well! :D @FlaminHotValdez
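Putting together the repeated-addition trick discussed throughout the comments, a minimal integer square-root sketch that avoids *, /, and pow entirely might look like the following (unofficial and illustrative only - the function names are made up, this is not anyone's submission):

```python
def square(n):
    # n * n via repeated addition - no * operator needed
    total = 0
    for _ in range(n):
        total += n
    return total

def integer_sqrt(x):
    # count upward until the next square would overshoot x
    root = 0
    while square(root + 1) <= x:
        root += 1
    return root

number = 25  # in a real submission this would come from input()
print(f"The square root of {number} is {integer_sqrt(number)}.")
```

For non-perfect squares this returns the floor of the square root, which satisfies the "perfect squares at minimum" requirement stated in the challenge.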
https://replit.com/talk/announcements/Weekly-Challenge-1/142232
libbind_getipnodebyname man page

getipnodebyname, getipnodebyaddr — get network host entry

Synopsis

#include <netdb.h>

struct hostent *
getipnodebyname(const char *name, int af, int flags, int *error);

struct hostent *
getipnodebyaddr(const void *addr, size_t len, int af, int *error);

void
freehostent(struct hostent *he);

Description

Getipnodebyname() and getipnodebyaddr() each return a pointer to a hostent structure (see below) describing an internet host referenced by name or by address, as the function names indicate. This structure contains either the information obtained from the name server or broken-out fields from a line in /etc/hosts. This structure should be freed after use by calling freehostent().

When using the nameserver, getipnodebyname() will search for the named host in each parent domain given in the "search" directive of resolv.conf(5) unless the name contains a dot.

Getipnodebyname() can be told to look for IPv4 addresses, IPv6 addresses or both IPv4 and IPv6. If IPv4 addresses only are to be looked up then af should be set to AF_INET, otherwise it should be set to AF_INET6.

There are three flags that can be set:

- AI_V4MAPPED - Return IPv4 addresses if no IPv6 addresses are found. This flag is ignored unless af is AF_INET6.
- AI_ALL - Return IPv4 addresses as well as IPv6 addresses if AI_V4MAPPED is set. This flag is ignored unless af is AF_INET6.
- AI_ADDRCONFIG - Only return addresses of a given type if the system has an active interface with that type.

Also AI_DEFAULT is defined to be (AI_V4MAPPED|AI_ADDRCONFIG).

Getipnodebyaddr() will look up IPv4 mapped and compatible addresses in the IPv4 name space and the IPv6 name space.

Freehostent() frees the hostent structure allocated by getipnodebyname() and getipnodebyaddr(). The structures returned by gethostbyname(), gethostbyname2(), gethostbyaddr() and gethostent() should not be passed to freehostent() as they are pointers to static areas.

Environment

- HOSTALIASES - Name of file containing (host alias, full hostname) pairs.
Files

/etc/hosts - See hosts(5).

Diagnostics

Error return status from getipnodebyname() and getipnodebyaddr() is indicated by return of a null pointer. In this case error may then be checked to see whether this is a temporary failure or an invalid or unknown host. Possible error values include NO_ADDRESS.

See Also

hosts(5), hostname(7), resolver(3), resolver(5), gethostbyname(3), RFC2553.
https://www.mankier.com/3/libbind_getipnodebyname
CC-MAIN-2017-04
refinedweb
348
55.34
Is there a working example available for a tabbar? (where each tab has its own Component/Page) I've used Angular 1, React Native, and ReactJS. Vue.js syntax is silky compared to those. Love it!

I can't get this set up correctly. Toolbar (above tabbar) and Footer (beneath tabbar) stay hidden under the Tabbar pages. Is there a way to fix this?

render() {
  return (
    <Page>
      <Toolbar inline>
        <div className="left"><BackButton>Back</BackButton></div>
        <div className="center">{'Skill' + this.props.skill.skillNr}</div>
      </Toolbar>
      <Tabbar index={this.state.index} position='top'
        onPreChange={({index}) => this.setState({index})}
        renderTabs={this.renderTabs.bind(this)} />
      <Foot

Ah, found it. isSwipeable={false} makes the scrollbar work perfectly.

return (
  <Splitter>
    <SplitterSide isSwipeable={false}>

Thanks, the unordered list works well, except not with the Splitter (sidebar navigation) as the outer component of the app. Is there a way to disable the Splitter on certain pages? Or should I set the Navigator (stacked navigation) as the outer component and then add the Splitter on pages where I need it? (index.html)

<Splitter>
  <SplitterSide side='left' collapse={true} isOpen={this.state.isOpen}
    onClose={this.hide.bind(this)} isSwipeable={true}>
    <Page> Menu content </Page>
  </SplitterSide>
  <SplitterContent>
    <Navigator initialRoute={{component: Skills}} renderPage={this.renderPage}/>
  </SplitterContent>
</Splitter>

I see you can add CSS components with the Monaca online tool. However my project is set up with ReactJS, which excludes using the online tool. I can only use the Monaca CLI for builds etc., but I can't see how to add/remove JS/CSS components.

monaca preview: All icons display fine. monaca debug: All icons appear as placeholders with a cross. Any ideas? OnsenUI 2 with ReactJS
https://community.onsen.io/user/beebase
CC-MAIN-2019-18
refinedweb
308
53.27
This page aggregates short snippets from the site's Hibernate and SQL tutorials and Q&A threads:

- Hibernate Select Clause - selecting data from the Insurance table with HQL's select clause, e.g. select insurance.lngInsuranceId, insurance.insuranceName, insurance.investementDate from Insurance insurance.
- HQL from Clause Example - the from clause is the simplest possible Hibernate query; an example is: from Insurance insurance.
- Select Clause in hibernate - example of the select clause of Hibernate.
- Hibernate ORDER BY Clause - the HQL ORDER BY clause is used for sorting data, for the same purpose it serves in SQL.
- Hibernate In Clause - introduces the concept of Hibernate's in clause.
- Hibernate SELECT Clause - use of the select clause in HQL is the same as in SQL, with complete code for selecting data from the database.
- What's the usage of Hibernate QBC API? - Q&A on Hibernate Query By Criteria.
- HQL Where Clause Example - the where clause is used to limit the results returned, e.g. generated SQL such as: select insurance0_.ID as col_0_0_ from insurance insurance0_.
- Exception Usage (Java Notes) - on exception usage, e.g. converting a number from an illegal String form (IOException); do not silently ignore exceptions, and enclose only one call in the try clause.
- HQL Group By Clause Example - the group by clause is used to return aggregate values by grouping on a returned component; HQL supports group by.
- Where clause (Q&A) - the WHERE clause is used to filter records and extract only those matching a condition, e.g. SELECT column_name(s) FROM table_name WHERE column_name = value;
- Determining Memory Usage in Java - The Java Specialists' Newsletter, Issue 029 (2001-08-28), by Dr. Heinz M. Kabutz; in Java, memory is allocated in various places.
- SQL SELECT DISTINCT Statement - the DISTINCT clause is used with SELECT to show all unique records from the database.
- where clause in select statement (Q&A) - in MySQL, will select * from emp where designation = "MANAGER" retrieve records where the stored value is "manager"?
- Group by Clause - Hibernate group by with an example.
- SQL HAVING Clause - the HAVING clause is used with SELECT to specify a search condition and restrict the records returned, together with the group by statement.
- Mysql Date in where clause - returning records from table employee1 on the basis of a date condition specified in the where clause.
- Data fetch from multiple SQL tables (Hibernate, Q&A) - a first Hibernate application with a query that fetches data from several tables, with a clause sample: FROM table9 il LEFT OUTER JOIN table4 RIGHT OUTER JOIN ...
- Where Clause in SQL - the where clause in a SQL query is used with the SELECT keyword.
- Using WHERE Clause in JDBC - a WHERE clause retrieves data from a table based on specific conditions and filters the rows returned by SELECT, e.g. SELECT Name from ...
- Hibernate (Q&A) - a build error when the Hibernate library files are not included in the project build path; see the Hibernate Getting Started tutorial.
- PHP WHERE clause example to fetch records from Database Table - the WHERE clause in PHP and MySQL can be used to add, edit or delete records, e.g. select * from ...
- Hibernate (Q&A) - can we write more than one hibernate.cfg.xml file, and connect to more than one database from a single Hibernate program?
- SQL Aggregate Functions Where Clause - returns the aggregate sum of the records based on the condition specified in the where clause.
- JDBC: WHERE Clause Example - how to use a WHERE clause to put conditions on the selection of table records using the JDBC API.
- JDBC: LIKE Clause Example - the LIKE clause is used for comparing part of a string; for pattern matching.
- Hibernate Tutorials - how to use the HQL from clause and select data from the Insurance table with the select clause; Hibernate takes care of the mapping from Java classes to database tables.
- SQL Aggregate Functions In Where Clause - returns the maximum conditional value of a record from a table.
- Hibernate (Q&A) - how to update a record by calling a stored procedure from a Java program using a Hibernate query.
- Hibernate Select Query - contains the explanation of the select clause.
- JPA Query with where clause and group by function (Q&A) - e.g. TypedQuery<RaBdrRating> uQuery = (TypedQuery<... with SUM(charge), COUNT(activePackage) FROM User WHERE callType = :callType and startDate ...
- What is the difference between IN and BETWEEN, that are used inside a WHERE clause? - the BETWEEN clause is used to fetch a range of values, whereas the IN clause fetches data from a list of specified values.
- PHP SQL Query Where Clause - the whereClause.php page displays the result returned from executing a SQL query with a where clause.
- Java Get Memory Usage - getting free memory from the actual memory in Java.
- Java Array Usage - printing each element individually, starting from the month "Jan"; the loop starts from the 0th position of the array.
- Configuration, Resource Usage and StdSchedulerFactory - generally, Quartz properties are stored in and loaded from a file.
- php MySQL order by clause - in SQL we generally use the select, from, and where clauses; order by is used to sort results.
- How can I use WHERE and BETWEEN clauses simultaneously in MySQL? (Q&A) - e.g. SELECT * FROM expenditures between two dates whose expfor_id = 1.
- Hibernate Tutorial (RoseIndia) - HQL from Clause Example, Hibernate Count Query; learn the mapping from Java classes to database tables.
- Complete Hibernate 3.0 and Hibernate 4 Tutorial - retrieving rows from the underlying database using Hibernate; HQL from Clause Example.
- Hibernate code (Q&A) - how to write an HQL query retrieving data from multiple tables.
- Hibernate 3.1.3 Released - announcement; Hibernate lets you develop persistent classes following object-oriented idiom.
- update count from session (Q&A) - how to get the count of rows updated by session.saveOrUpdate().
- Hibernate-HQL subquery (Q&A) - fetching the latest 5 records from a table: select * from (select * from STATUS ...
- Hibernate Aggregate Functions - in aggregate functions, column values are calculated accordingly and returned as a single calculated value.
- hibernate software (Q&A) - a link from where the Hibernate software can be freely downloaded.
- jsp:forward tag usage and syntax - with an example.
- Hibernate Projections - most applications use the built-in projection types by means of a select clause.
- Hibernate code (Q&A) - a record is not inserted into the contact table from the first example, and no error is given.
- Spring Usage
- MySQL Case In Where Clause - how to use a CASE statement in a where clause, e.g. a query to select all data: SELECT * FROM employeedetails.
- Reg Hibernate (Q&A) - using Hibernate in the MyEclipse IDE, the query "from user" fails with org.hibernate.hql.ast.QuerySyntaxException: user not mapped [from user].
- Hibernate Criteria Grouping Example - with Hibernate Criteria Projection Grouping there is no need for the explicit group by clause used in HQL.
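Several of the snippets above concern HQL's group by clause and aggregate functions. HQL's semantics here mirror SQL's: rows are bucketed by the grouping key and an aggregate is computed per bucket. As a rough, database-free illustration of that behavior in plain Java (the (country, amount) rows are made up for the example):

```java
import java.util.*;
import java.util.stream.*;

public class GroupBySketch {
    public static void main(String[] args) {
        // Hypothetical (country, amount) pairs standing in for table rows.
        List<Map.Entry<String, Integer>> rows = List.of(
                Map.entry("UK", 100),
                Map.entry("UK", 50),
                Map.entry("FR", 70));

        // In spirit: select country, sum(amount) from Rows group by country
        Map<String, Integer> totals = rows.stream()
                .collect(Collectors.groupingBy(Map.Entry::getKey,
                        Collectors.summingInt(Map.Entry::getValue)));

        System.out.println(totals.get("UK")); // 150
        System.out.println(totals.get("FR")); // 70
    }
}
```

In HQL or SQL the database performs exactly this bucketing and summing server-side, which is why only grouped or aggregated expressions may appear in the select list.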
http://roseindia.net/tutorialhelp/comment/36710
CC-MAIN-2014-10
refinedweb
1,857
55.34
Retail Modern Point of Sale Learn Retail Modern POS clients can communicate with databases in your store, or Retail Servers that are deployed in your store or in a data center. Retail Modern POS clients can also communicate with peripheral devices such as cash drawers, credit card readers, and printers either directly, or by using Microsoft Dynamics AX Hardware Station. If you use Hardware Station, it must be deployed in your store, so that all Retail Modern POS clients can connect to the same Hardware Station. Install The following tables provide information about the Microsoft Dynamics AX features and components that you must install before you install Retail Modern POS. Install components at headquarters Install the following components at the headquarters location. Install components at the stores Install the following components at each store location. Extend You can customize the behavior of your modern POS system in several ways. For example, you can modify the information that is made available to modern POS by extending a commerce entity to include a new column from your Microsoft Dynamics AX database. You can then make use of that new column in commerce runtime in a service and workflow, and then expose it in the commerce runtime API. Because you modified the commerce entity, you would also need to customize the corresponding controller and metadata in Retail Server. For an end-to-end customization example, see Walkthrough: Extend Modern Point of Sale for Microsoft Dynamics AX. In other extensibility scenarios, you wouldn't need to modify every layer of the stack. For example, you could simply modify the way a workflow behaves without modifying the database schema. The Retail SDK includes apps for various clients. You can customize those apps to match the branding of your organization or to extend their functionality. Extend Microsoft Dynamics AX Retail Server The Microsoft Dynamics AX Commerce runtime is wrapped in a Retail Server layer.
Retail Server uses a web API with OData to support thin clients within the store like tablets and phones. Commerce runtime communicates through Commerce Data Exchange services to Microsoft Dynamics AX for Retail Headquarters. Extend clients You can customize the look and feel of a Modern POS client to make it an extension of your brand. We recommend that you use your own file names and namespaces for any customizations. Configure Before you can process transactions or perform any other retail operations using Retail Modern POS, you must first set up your retail store in Microsoft Dynamics AX. Each retail store can have its own payment methods, price groups, point-of-sale (POS) registers, income accounts and expense accounts, and staff. The following tables describe the configuration tasks that you must complete in Microsoft Dynamics AX for a retail store.
https://learn.microsoft.com/en-us/dynamicsax-2012/appuser-itpro/retail-modern-point-of-sale?redirectedfrom=MSDN
CC-MAIN-2022-40
refinedweb
457
52.29
Recently I had a short conversation with James Clark about the structures xml.el produces. I asked him what he thought about the current (CVS) namespace-aware processing. Based on his feedback, I plan to submit changes that will return an incompatible structure to the one currently in CVS. Currently, when xml.el encounters a bit of XML like: <ns:xml xmlns: it produces: (({uri:namespace}xml (({\.w3\.org/2000/xmlns/}ns . "uri:namespace") ({uri:namespace}attr . "value")))) At the time that I wrote this, I saw some W3 docs where this style was used and copied it. Some people here asked me why I did this instead of something like (uri:namespace . "xml"), but I forged ahead. Now, after my conversation with Mr. Clark, I've been persuaded that I was wrong. At his suggestion, I'd like to change the above xml representation produce the following: (((uri:namespace . "xml") ((((\.w3\.org/2000/xmlns/ . "ns") . "uri:namespace") ((uri:namespace . "attr") . "value"))))) As Mr. Clark said: ... there are typically not very many different namespace URIs, so keeping them in Emacs symbol table is not a problem; in the returned representation of the XML, the namespaces would be shared, but strings are mutable in Emacs, which is kind of ugly. Where there is no namespace given: <xml attr="value"> It would produce the following: (("xml" (("attr" . "value")))) Unless there are major objections, I'd like to repent of my previous code and submit changes to produce the above. Mark. -- If you want to know who is funding terrorists, look in the vanity mirror as you turn the key of your SUV. --
https://lists.gnu.org/archive/html/bug-gnu-emacs/2003-09/msg00165.html
CC-MAIN-2021-10
refinedweb
270
72.76
Subject: Re: [Boost-build] Test if file exists From: John Reid (j.reid_at_[hidden]) Date: 2011-09-01 05:00:14 On 01/09/11 09:48, John Reid wrote: > I need to define some targets conditional on a file existing. How can I > test for this in my Jamfile? Where should I have looked in the > documentation to find this out? I found it: import path ; local HAS_SOURCE_CODE = [ path.exists $(SRC_DIR)/logs2.h ] ; if $(HAS_SOURCE_CODE) { echo "Found source code at $(SRC_DIR)" ; } else { echo "Did not find source code" ; } There is no documentation for this I guess. Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/boost-build/2011/09/25284.php
CC-MAIN-2021-10
refinedweb
125
78.85
paint007@mc.duke.edu wrote: >> >> Why exactly is using the XInclude processor before the XSP processor a >> hack? I'm doing this now so that I can get reusability out of my code >> (some XML fragments can be shared among different pages). The caching >> issue is annoying, but I just touch the including files every time I change >> an included file, which is adequate in a development environment. This >> won't be an issue for me in production as I'll touch everything when I >> release new code. >Because once you move to Cocoon 2, your code will no longer be functional. >XSP is a Generator (or something that creates XML), and not a Transformer >(or something that changes XML). Okay, that's clear enough. I'm glad to know this now! >> The bits of code that I'm including are completely independent, producing >> their own XML nodes for processing in the xsl files (which are also >> factored to match the xml and included using xsl:import). Is there a >> better way to do this? >As long as the XSP is static (i.e. not forcing Cocoon to recompile the >XSP every time a request is received), you can use the util:include or >such like. You might consider positioning XInclude after the XSP page. >If you are combining multiple logicsheets together (which is what it >sounds like you are doing), then use the namespaces for each logicsheet, >and let Cocoon worry about assembling the end product. The XSP documents are static. At the moment they mostly use the esql taglib to select data from a db. I can certainly try the util:include mechanism. I'm not sure I understand the rest of your paragraph, but I'll read some more and play around with it. In particular, if I put the XInclude after the XSP, how will the included xml files get processed by XSP? An example of what I'm doing is: One xml file creates a node that will be turned into an HTML form by the xsl. The form allows a user to select a few values from drop down lists and click a button to submit. 
The selected values provide parameters for a query that will be run in a second xml file. The form action is actually a third xml file which does nothing but use XInclude to include the first two xml files. The resulting XML has a node with the query options (from the first file) and a node with the results of the query (from the second file). The user sees HTML with the results as well as the query form; they can submit a new query from the results page. The separation of the two xml makes it very easy to do this without resorting to conditional logic. Also, since the query form portion is fairly generic, I can re-use it to drive very different queries in other parts of the application. Etc. A lot of OOP instincts coming in to play here. Thanks for your help. -Christopher
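The three-file arrangement described above might look roughly like this (file and element names are hypothetical, and the XInclude namespace URI depends on which draft or Recommendation your processor implements):

```xml
<!-- query-page.xml: the form action; it only assembles the other two documents -->
<page xmlns:xi="http://www.w3.org/2001/XInclude">
  <!-- emits the node the stylesheet turns into the query form -->
  <xi:include href="query-form.xml"/>
  <!-- runs the parameterized query and emits a results node -->
  <xi:include href="query-results.xml"/>
</page>
```

Whether this assembly happens before or after the XSP stage is exactly the question discussed in this thread.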
http://mail-archives.apache.org/mod_mbox/cocoon-users/200103.mbox/%3COF1FD1A830.42F3043C-ON85256A10.005D2DB6@mc.duke.edu%3E
CC-MAIN-2018-05
refinedweb
546
71.95
This file defines a mutable Unicode code point trie. More... #include "unicode/utypes.h" #include "unicode/ucpmap.h" #include "unicode/ucptrie.h" #include "unicode/utf8.h" #include "unicode/localpointer.h" Go to the source code of this file. This file defines a mutable Unicode code point trie. Definition in file umutablecptrie.h. Mutable Unicode code point trie. Fast map from Unicode code points (U+0000..U+10FFFF) to 32-bit integer values. For details see Setting values (especially ranges) and lookup is fast. The mutable trie is only somewhat space-efficient. It builds a compacted, immutable UCPTrie. This trie can be modified while iterating over its contents. For example, it is possible to merge its values with those from another set of ranges (e.g., another mutable or immutable trie): Iterate over those source ranges; for each of them iterate over this trie; add the source value into the value of each trie range. Definition at line 1 of file umutablecptrie.h. Compacts the data and builds an immutable UCPTrie according to the parameters. After this, the mutable trie will be empty. The mutable trie stores 32-bit values until buildImmutable() is called. If values shorter than 32 bits are to be stored in the immutable trie, then the upper bits are discarded. For example, when the mutable trie contains values 0x81, -0x7f, and 0xa581, and the value width is 8 bits, then each of these is stored as 0x81 and the immutable trie will return that as an unsigned value. (Some implementations may want to make productive temporary use of the upper bits until buildImmutable() discards them.) Not every possible set of mappings can be built into a UCPTrie, because of limitations resulting from speed and space optimizations. Every Unicode assigned character can be mapped to a unique value. Typical data yields data structures far smaller than the limitations. It is possible to construct extremely unusual mappings that exceed the data structure limits. 
In such a case this function will fail with a U_INDEX_OUTOFBOUNDS_ERROR. Clones a mutable trie. You must umutablecptrie_close() the clone once you are done using it. Creates a mutable trie with the same contents as the UCPMap. You must umutablecptrie_close() the mutable trie once you are done using it. Creates a mutable trie with the same contents as the immutable one. You must umutablecptrie_close() the mutable trie once you are done using it. Returns the last code point such that all those from start to there have the same value. Can be used to efficiently iterate over all same-value ranges in a trie. (This is normally faster than iterating over code points and get()ting each value, but much slower than a data structure that stores ranges directly.) The trie can be modified between calls to this function. If the UCPMapValueFilter function pointer is not NULL, then the value to be delivered is passed through that function, and the return value is the end of the range where all values are modified to the same actual value. The value is unchanged if that function pointer is NULL. See the same-signature ucptrie_getRange() for a code sample. Creates a mutable trie that initially maps each Unicode code point to the same value. It uses 32-bit data values until umutablecptrie_buildImmutable() is called. umutablecptrie_buildImmutable() takes a valueWidth parameter which determines the number of bits in the data value in the resulting UCPTrie. You must umutablecptrie_close() the trie once you are done using it.
https://unicode-org.github.io/icu-docs/apidoc/released/icu4c/umutablecptrie_8h.html
CC-MAIN-2021-39
refinedweb
576
67.76
From ADO to ADO.NET: A Gradual Approach

Although some of Microsoft's marketing materials have presented ADO.NET (the data access layer in the Microsoft .NET Framework) as a simple upgrade to ADO, that's a rather misleading way to look at it. ADO.NET is really an almost completely new architecture for data access. This means that inevitably the developer faces a learning curve when moving from ADO to ADO.NET. But how can you remain productive while working your way up that learning curve? Fortunately, Microsoft thought about that problem before releasing the .NET Framework. The developers of .NET worked hard to provide interoperability between COM applications (developed with tools such as Visual Basic 6.0 or Visual C++ 6.0) and .NET applications. In this article, I'll review some of the differences between ADO and ADO.NET, and then show you how you can use existing ADO-based COM components from your new ADO.NET-based .NET applications.

From Recordset to DataSet

In ADO through version 2.8 (sometimes called "classic ADO") the basic object for holding a group of related records in an application is the Recordset. A Recordset is, roughly, a single table or view stored in memory. It also has a direct and intimate connection with the original data source. Depending on your cursor settings, a Recordset may retrieve batches of records as you move through the data. Each Recordset has a pointer to a current record, which you can edit. Edits to the current record are saved or discarded before you move to another record. The Recordset and the other classic ADO objects are universal, applying equally well to any type of data. If you've been working with ADO for any length of time, you've probably memorized all of those Recordset facts, and now assume that this is how data access is supposed to work. Well, in ADO.NET, things are almost completely different. Here are some basic facts about the DataSet, which is the new core data object in ADO.NET:

- A DataSet is completely disconnected from the original data source.
- A DataSet can hold multiple tables, together with relations between them.
- There is no pointer to a current record; you work directly with the collections of tables, rows, and columns.
- Changes are cached in the DataSet and written back to the data source in a separate operation, through a data adapter.
Obviously the differences between the Recordset and the DataSet are profound. And yet, if you've been following Microsoft's architectural recommendations, you probably have a sizeable investment in data access layer components that return Recordsets. Do you need to discard that entire investment to move to .NET? I'm happy to say that the answer to that question is "no". Microsoft has provided ways to draw data from a Recordset into a DataSet. In the next section of this article, I'll show you a simple example of the code to do this.

A Simple COM Server

To demonstrate how this interoperability between COM and .NET works, I'll start with a simple COM server written in VB 6.0. In fact, it's so simple that it's under ten lines of code:

Public Function GetCustomers(strCountry As String) As Recordset
    Dim cnn As New ADODB.Connection
    Dim rst As New ADODB.Recordset
    cnn.ConnectionString = "Provider=SQLOLEDB;Data Source=(local);" & _
        "Initial Catalog=Northwind;Integrated Security=SSPI"
    cnn.Open
    rst.Open "SELECT * FROM Customers WHERE Country = '" & strCountry & "'", cnn
    Set GetCustomers = rst
End Function

This code resides in a class named Customers in a project named DataLayer. When you invoke the GetCustomers method of the Customers class with a string specifying the name of a country, it returns a Recordset object containing all of the customers in the country from the SQL Server version of the Northwind sample database. Compiling this code produces a COM server named DataLayer.dll.

From COM to .NET

Suppose your existing code uses COM servers similar to DataLayer.dll to return Recordsets with data of interest. How can you use that data in a .NET application? Here's a step-by-step approach to building a .NET client for this COM server. First, create a new Visual Basic .NET Windows application. I gave my application the uninspired name "Client." Right-click on the References node in Solution Explorer and select Add Reference.
In the Add Reference dialog box, shown in Figure 1, select the COM tab. Click the Browse button and browse to the DataLayer.dll file to add it to the Selected Components list. Then click OK to add the reference to the project. Under the covers, there's quite a lot going on when you add this reference. You'll notice that the ADODB library (containing the classic ADO objects) and your DLL both show up in the References node in Solution Explorer. Visual Studio .NET automatically creates runtime-callable wrappers (RCWs) for the two COM libraries. An RCW is a .NET library which wraps a COM library and presents the same interfaces to your .NET code that the COM component exposes. RCWs are the only mechanism through which a .NET component can call a COM server.

Next, add a DataGrid control to the default form in the project. Switch to code view, and add references to the .NET namespaces containing the data access components:

Imports System.Data
Imports System.Data.OleDb

Now, write code to execute when the form is loaded:

Private Sub Form1_Load(ByVal sender As System.Object, _
        ByVal e As System.EventArgs) Handles MyBase.Load
    ' Create an instance of the COM server
    Dim c As DataLayer.Customers = New DataLayer.Customers()
    ' Create a DataSet to hold the .NET data
    Dim ds As DataSet = New DataSet()
    ' Retrieve a classic ADO recordset from the COM server
    Dim rs As ADODB.Recordset = c.GetCustomers("France")
    ' Move the data from the Recordset to the DataSet
    Dim da As OleDbDataAdapter = New OleDbDataAdapter()
    da.Fill(ds, rs, "Customers")
    ' Bind the data to the user interface
    dgCustomers.DataSource = ds
    dgCustomers.DataMember = "Customers"
End Sub

The comments should make it clear what's going on in this code, but take a moment to appreciate the results. Even though COM and .NET are two very different ways to write applications, the .NET designers went the extra distance to make sure they can be seamlessly connected.
In particular, this code instantiates COM components and works with them just like native .NET components. It also uses an overloaded form of the OleDbDataAdapter.Fill method to shuffle the data from the Recordset to the DataSet. Does it work? Sure! Figure 2 shows the .NET form displaying data from the COM server.

There is much more to COM-.NET interoperability than I can show you in a short example. If you need to move an application from the COM world to the .NET world, I recommend purchasing a copy of Adam Nathan's excellent book .NET AND COM: THE COMPLETE INTEROPERABILITY GUIDE (Sams, 2002).

Enabling Gradual Migration

The interoperability between COM servers and .NET clients is one of the secrets to making the transition from old code to new code. If you're maintaining a large existing code base of VB6 or VC6 components, it's likely that you've divided it into functional clients and servers. In that case, there's no need to do a "big bang" migration of all the components at once. Instead, you can keep your existing COM servers, and write new client components as .NET components (or vice versa; it's also possible for COM clients to use .NET servers). This way you can slide new .NET components into your existing system without overhauling interfaces and without undertaking the risk of a complete rewrite. It's a winning situation all around.
http://www.developer.com/db/article.php/2228221/From-ADO-to-ADONET-A-Gradual-Approach.htm
Stack Trace - Online Code

Description

Every "Exception" (or subclass) object contains a "stackTrace", or traceback, meant to indicate where the error occurred. Let's find out where a stackTrace comes from, and how to use it, when Exceptions are created and thrown. Some textbooks claim that it is the operation of constructing the exception that anchors its trace, others the throwing. Let us see for ourselves.

Source Code

public class StackTrace {
    IllegalArgumentException ex;

    public static void main(String[] argv) {
        StackTrace st = new StackTrace();
        st.makeit();
        System.out.println("CONSTRUCTED BUT NOT THROWN...
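The question posed above can also be settled empirically. Here is a minimal sketch (not the article's listing, which is truncated above): it suggests that the trace is anchored when the exception object is constructed, because Throwable's constructor calls fillInStackTrace(); throwing the object later leaves the trace unchanged.

```java
// Minimal demonstration: a Throwable's stack trace is captured when the
// object is constructed (Throwable's constructor calls fillInStackTrace()),
// not when it is thrown.
class StackTraceOrigin {

    static RuntimeException makeIt() {
        // The stack trace is captured right here, at construction time.
        return new RuntimeException("constructed but not thrown");
    }

    // Returns the method name of the top frame of the throwable's trace.
    static String topMethodOf(Throwable t) {
        return t.getStackTrace()[0].getMethodName();
    }

    public static void main(String[] args) {
        RuntimeException ex = makeIt();
        System.out.println("anchored in: " + topMethodOf(ex));
        try {
            throw ex; // throwing does not re-anchor the trace
        } catch (RuntimeException caught) {
            System.out.println("still anchored in: " + topMethodOf(caught));
        }
    }
}
```

Both print statements report the same method, makeIt, showing that the throw did not move the anchor.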
http://www.getgyan.com/show/1334/Stack_Trace
Introduction

This blog post aims at creating a custom tile in an SAP Cloud Platform Portal site. Sometimes you need to show custom data, such as micro charts or slide tiles, on the portal site, but it is difficult to create them using the standard options available on the portal. Instead, we can create a custom SAPUI5 app and use it as a custom tile on the launchpad.

Prerequisite

Nothing major is required to create this. Basic knowledge of the following is enough:

- Using Web IDE Full-Stack
- SAPUI5 controls
- Portal site
- SAP Cloud Platform

Main Content

Let's begin creating the custom tile. First of all, we will start by creating a custom SAPUI5 app.

- Open Web IDE Full-Stack and create a new project in it. Choose the project template "UI5 Application".
- Click Next. Choose your project name and namespace. I have used CustomTileUI5 as the project name and com.blog as the namespace.
- Click Next. Choose the name for the view. This will be the only view displayed on the launchpad. I have used MainView as the view name.
- Click Finish. Now you can see the project in your workspace.

Now we will write the code for the view. I am taking the sample code from here. Open the MainView.view.xml file from the view folder.

<mvc:View ...>
    <m:GenericTile ...>
        <m:tileContent>
            <m:TileContent ...>
                <m:content>
                    <ComparisonMicroChart scale="M" view="Wide" size="Responsive">
                        <data>
                            <ComparisonMicroChartData title="Americas" value="10" color="Critical"/>
                            <ComparisonMicroChartData title="EMEA" value="50" color="Good"/>
                            <ComparisonMicroChartData title="APAC" value="-20" color="Error"/>
                        </data>
                    </ComparisonMicroChart>
                </m:content>
            </m:TileContent>
        </m:tileContent>
    </m:GenericTile>
</mvc:View>

- The above code will create a generic tile in which we will create a comparison micro chart. All the data shown is dummy data picked from the above link at the time this post was published.
- Now run the application to see if it is working fine.
- You can see that I have changed the reference to the controller in the view file. This is because, when referred from the launchpad, it should not conflict with the path of the files.
- Now we will deploy the file to SAP Cloud Platform.
- After deployment, we need to configure the portal site to add our tile. Go to the SAP Cloud Platform cockpit and, from Services, launch the Portal service.
- Create a new site from the Portal service. Use the Fiori Launchpad site and name it as you want.
- Create a new app and select the deployed app as the resource for the app. The main changes are in the Visualization tab of the application.
- We have given the path as sap/fiori/customtileui5 because it is the path to the application formed when we assign the application to the launchpad. The prefix is given as customtileui5 and the name as customtileui5/view/MainView because this is the reference to the files/view we want to show on the custom tile.
- Assign the desired group and catalog to the application.
- Save the application.
- Publish the changes.
- You can see the custom tile on the portal.

Conclusion

If a user wants to show custom content, a custom UI5 app is a good option, and the user can customize the tile as per the requirements. Good luck for the future. Post any doubts.

Nice blog, Anmol, it really helped me on a project, but can you please elaborate more on the Name, Prefix and Path? It would be clearer if you could point to where we can find these if we want to achieve this on an already existing project.

hi, i followed all the steps and i ran into a problem, any idea, thanks regards
https://blogs.sap.com/2020/02/19/custom-tile-in-sap-cloud-platform-portal-site/
Python model in a streams flow

The Python model operator provides a simple way for you to run Python models to do real-time predictions and scoring. The operator now enables you to select a model to be loaded from IBM Cloud Object Storage or IBM Watson Machine Learning.

How it works

File objects are the external file resources that are required by your code at execution time. Your machine learning process() function can expect these resources to be available on the runtime-local file system before it gets invoked for the first time. You can specify more than one file object, such as when you want to also use a tokenizer or a dictionary for text analysis.

For each file object, you specify its location path in Cloud Object Storage and a typically short reference name that is used in its callback function. Clicking Generate Callbacks appends a callback function stub to your code, for each file object. When the flow starts running, each specified file object is downloaded from Cloud Object Storage and placed at a unique location on the runtime-local file system. At that point, your callback function is called with the runtime-local file path as an argument. Your callback function then instantiates and keeps the respective object for usage in subsequent processing.

All specified file objects must be available on Cloud Object Storage before your process() function is called for the first time. Until then, any incoming events are held back. Cloud Object Storage is continually scanned to check whether a file object was updated. If so, the file is reloaded to the runtime-local file system. Then, its callback function is called again, which re-deserializes the respective object and updates the state with the new model object, without restarting the flow.

Important: The Python objects that you load into Cloud Object Storage must be created with the same version of packages that are used in the streams flow.
To see the list of preinstalled and user-installed packages, go to the canvas, click , and then click Runtime.

Example

Goal: Run predictive analysis by using a vectorizer and a classifier that were uploaded to Cloud Object Storage. After you define the file objects and click Generate Callbacks, stubs are generated for your load_classifier() and load_vectorizer() callback functions.

import requests
import sys
import os
import pickle

def init(state):
    pass

def process(event, state):
    text = event['tweet']
    text_t = state['vectorizer'].transform([text])
    y_pred = state['classifier'].predict(text_t)[0]  # predicted class as a number
    labels = ['irrelevant', 'negative', 'neutral', 'positive']
    event['sentiment'] = labels[y_pred]
    return event

def load_classifier(state, path_classifier):
    state['classifier'] = pickle.load(open(path_classifier, "rb"))

def load_vectorizer(state, path_vectorizer):
    state['vectorizer'] = pickle.load(open(path_vectorizer, "rb"))

When you select Watson Machine Learning as your source for the model to be loaded, all Python models from all associated Machine Learning instances are listed. Select the model you want to load. The load_model function is called when the model is loaded for the first time. It is continually checked whether the model was updated in Watson Machine Learning; if it was, the load_model function is invoked again with the updated model. The model is now part of the state, to be used later in the process function. Until the model is loaded for the first time, any incoming events are held back.

Important: Ensure that the Python packages used in this streams flow are compatible with the packages that you used to create the model. To see the list of preinstalled and user-installed packages, go to the canvas, click , and then click Runtime.

Example

Below is an example of Python model code for the Watson Machine Learning option.
import sys

def init(state):
    pass

def process(event, state):
    image = event['image']
    model = state['model']
    event['prediction'] = model.predict(image)
    return event

def load_model(state, model):
    state['model'] = model
https://dataplatform.cloud.ibm.com/docs/content/wsj/streaming-pipelines/python_machine_learning.html
I'm writing a program which accepts the price of a tour. A tour is priced between 29.95 and 249.99. It's supposed to repeat until a valid price is entered... However, even when it is, the program continues to repeat. Here's what I have so far.

import java.util.Scanner;
import java.util.InputMismatchException;

public class TourPrices {
    public static void main(String args[]) {
        Scanner scan = new Scanner(System.in);
        int tourPrice = 0;
        boolean ok;
        do {
            ok = true;
            try {
                System.out.println("Enter a score");
                if (tourPrice < 29.95 || tourPrice >= 249.99)
                    tourPrice = scan.nextInt();
                ok = false;
            } catch (InputMismatchException e) {
                ok = false;
                scan.nextLine();
            }
        } while (!ok);
        System.out.println("The valid price is " + tourPrice);
    }
}

How can I modify it so it ends when a valid price is entered?
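For reference, a sketch of one possible rework (not taken from the original thread): the posted loop sets ok = false after every successful read, and it compares an int price against decimal bounds, so a valid price can never end the loop. Reading the price as a double and setting ok only when the value is in range addresses both problems. The class and method names here are illustrative.

```java
import java.util.InputMismatchException;
import java.util.Locale;
import java.util.Scanner;

// Sketch of one possible fix: read the price as a double and only
// accept it when it falls inside the valid range.
class TourPriceReader {

    static final double MIN_PRICE = 29.95;
    static final double MAX_PRICE = 249.99;

    static boolean isValid(double price) {
        return price >= MIN_PRICE && price <= MAX_PRICE;
    }

    // Repeats until the scanner yields a numeric price inside the valid range.
    static double readValidPrice(Scanner scan) {
        scan.useLocale(Locale.US); // '.' as the decimal separator
        double price = 0;
        boolean ok;
        do {
            ok = false;
            System.out.println("Enter a price");
            try {
                price = scan.nextDouble();
                ok = isValid(price); // only accept an in-range price
            } catch (InputMismatchException e) {
                scan.nextLine(); // discard the bad input and try again
            }
        } while (!ok);
        return price;
    }

    public static void main(String[] args) {
        double price = readValidPrice(new Scanner(System.in));
        System.out.println("The valid price is " + price);
    }
}
```

The isValid helper keeps the range check in one place; the loop exits only once it returns true.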
https://www.daniweb.com/programming/software-development/threads/283326/ending-a-do-loop
The const keyword is used to modify a declaration of a field or local variable. It specifies that the value of the field or the local variable cannot be modified. A constant declaration introduces one or more constants of a given type. The declaration takes the form:

[attributes] [modifiers] const type declarators;

where each declarator has the form:

identifier = constant-expression

The attributes and modifiers apply to all of the members declared by the constant declaration, for example:

public const double x = 1.0, y = 2.0, z = 3.0;

The static modifier is not allowed in a constant declaration. A constant can participate in a constant expression, for example:

public const int c1 = 5;
public const int c2 = c1 + 100; // Constant expression

This example demonstrates using constants as class members:

// const_keyword.cs
using System;
public class ConstTest
{
    class MyClass
    {
        public int x;
        public int y;
        public const int c1 = 5;
        public const int c2 = c1 + 5;

        public MyClass(int p1, int p2)
        {
            x = p1;
            y = p2;
        }
    }

    public static void Main()
    {
        MyClass mC = new MyClass(11, 22);
        Console.WriteLine("x = {0}, y = {1}", mC.x, mC.y);
        Console.WriteLine("c1 = {0}, c2 = {1}", MyClass.c1, MyClass.c2);
    }
}

x = 11, y = 22
c1 = 5, c2 = 10

This example demonstrates using constants as local variables.

// const_keyword2.cs
using System;
public class TestClass
{
    public static void Main()
    {
        const int c = 707;
        Console.WriteLine("My local constant = {0}", c);
    }
}

My local constant = 707

Note: For a field whose value must be computed at run time, use the readonly modifier instead, for example:

public static readonly uint l1 = (uint) DateTime.Now.Ticks;

C# Keywords | Modifiers
http://msdn.microsoft.com/en-us/library/e6w8fe1b(VS.71).aspx
@filmgardi/videojs-theme

Filmgardi themes for the video.js player :nail_care:.

You can pull in the CSS via link tags:

<!-- Video.js base CSS -->
<link href="" rel="stylesheet">
<!-- Filmgardi Theme -->
<link href="" rel="stylesheet">

Or, if you're using CSS modules in JavaScript, you can install the NPM module:

npm install --save video.js @filmgardi/videojs-theme

Then just import the files as you would other CSS.

import 'video.js/dist/video-js.css';
// Filmgardi Theme
import '@filmgardi/videojs-theme/dist/videojs-filmgardi.css';

When you've got the theme pulled in, you must add the vjs-filmgardi class to your player.

<video id="my-player" class="video-js vjs-filmgardi" ...>
https://npm.runkit.com/%40filmgardi%2Fvideojs-theme
In this post I provide a primer on the ASP.NET Core data-protection system: what it is, why we need it, and how it works at a high level.

Why do we need the data-protection system?

The data-protection system is a set of cryptography APIs used by ASP.NET Core to encrypt data that must be handled by an untrusted third-party. The classic example of this is authentication cookies. Cookies are a way of persisting state between requests. You don't want to have to provide your username and password with every request to a server, that would be very arduous! Instead, you provide your credentials once to the server. The server verifies your details and issues a cookie that says "This is Andrew Lock. He doesn't need to provide any other credentials, trust him". On subsequent requests, you can simply provide that cookie, instead of having to supply credentials. Browsers automatically send cookies with subsequent requests, which is how the web achieves the smooth sign-in experience for users.

This cookie is a very sensitive item. Any request that includes the cookie will be treated as though it was sent by the original user, just as though they provided their username and password with every request. There are lots of protections in browsers (for example, CORS) to stop attackers getting access to these cookies. However, it's not just a case of stopping others getting their hands on your cookies. As a website owner, you also don't want users to tamper with their own cookies.

Authentication cookies often contain more than just the ID or name of the user that authenticated. They typically contain a variety of additional claims. Claims are details about the user. They could be facts, such as their name or phone number, but they can also be permission-related claims, such as IsAdmin or CanEditPosts. If you're new to claims-based authentication, I wrote an introduction to authentication and claims several years ago that many people have found useful.
Alternatively, see the authentication chapter in my book (shameless plug!).

These claims are typically needed for every request, to determine if the user is allowed to take an action. Instead of having to load these claims from a database with every request, they're typically included in the authentication cookie that's sent to the user. This makes it easier for the application—once the user is authenticated by extracting the user principal from the cookie, the application will know exactly which claims the user has. That means the application has to trust the cookie. If the cookie was sent in plain-text, then the user could just edit the values, exposing a glaring security hole in the application.

The ASP.NET Core data-protection system is used for exactly this purpose. It encrypts and decrypts sensitive data such as the authentication cookie. By encrypting the authentication cookie before it's returned in the response, the application knows that the cookie has not been tampered with, and can trust its values.

How does the data-protection system work at a high level?

The data-protection system tries to solve a tricky problem: how to protect sensitive data that will be exposed to attackers, ideally without exposing any key material to developers, while following best practices for key-rotation and encryption at rest.

The data-protection system uses symmetric-key encryption to protect data. A key containing random data is used to encrypt the data, and the same key is used to decrypt the data. The ASP.NET Core data-protection system assumes that it will be the same app or application decrypting the data as encrypted it. That implies it has access to the same key, and knows the parameters used to encrypt the data.

In a typical ASP.NET Core application there might be several different types of unrelated data you need to encrypt.
For example, in addition to authentication cookies, you might also need to encrypt Cross-Site Request Forgery (CSRF) tokens or password reset tokens. You could use the same key for all these different purposes, but that has the potential for issues to creep in. It would be far better if a password reset token couldn't be "accidentally" (more likely, maliciously) used as an authentication token, for example.

The ASP.NET Core data-protection system achieves this goal by using "purposes". The data-protection system has a parent key which can't be used directly. Instead, you must derive child keys from the parent key, and it's those keys which are used to encrypt and decrypt the data. Deriving a key from a parent key using the same purpose string will always give the same key material, so you can always decrypt data that was encrypted if you have the parent key and know the purpose string. If a key is derived using a different purpose, then attempting to decrypt the data will fail. That keeps the data isolated, which is better for security.

The image above also shows that you can derive child keys from a child key. This can be useful in some multi-tenant scenarios, for example. There's no special relationship between the child and grand-child keys—neither can read data encrypted with the other key. You can read the gory details about key derivation here.

In most cases you won't have to interact with the data-protection system directly to create keys or encrypt data. That's handled by the core ASP.NET Core framework and the accompanying libraries. They make sure to use unique strings for each different purpose in your application. You can create your own protectors and encrypt other data if you wish (see below), but that's not required for day-to-day running of ASP.NET Core applications.

I'm a .NET Framework developer—this sounds a lot like <machineKey>?
The data-protection system is new to ASP.NET Core, but the need to protect authentication tokens isn't new, so what were we using before? The answer is <machineKey>.

The <machineKey> element was used in much the same way as the ASP.NET Core data-protection system, to configure the keys and cryptography suite used to encrypt data by the authentication system (among other places). Unfortunately there were some complexities in using this key: it was typically read from machine.config, and would have to be configured on the machine running your application. When running in a cluster, you'd have to make sure to keep these keys in sync, which could be problematic! The need to keep keys in sync doesn't change with the data-protection system, it's just a lot easier to do, as you'll see shortly.

In .NET Framework 4.5 we got the ability to replace the <machineKey> element and the whole cryptography pipeline it uses. That means you can actually replace the default <machineKey> functionality with the new ASP.NET Core data-protection system, as long as you're running .NET Framework 4.5.1. You can read how to do that in the documentation.

Warning: If you're migrating ASP.NET applications to ASP.NET Core, and are sharing authentication cookies, you'll need to make sure you do this, so that authentication cookies can continue to be decrypted by all your applications.

I won't go any more into <machineKey> here, partly because that's the old approach, and partly because I don't know much about it! Needless to say, many of the challenges with managing the <machineKey> have been addressed in the newer data-protection system.

How is the data protection key managed? Do I need to rotate it manually?

If you know anything about security, you're probably used to hearing that you should regularly rotate passwords, secrets, and certificates. This can go some way to reducing the impact if one of your secrets is compromised.
That's why HTTPS certificates are gradually being issued with smaller and smaller lifetimes. How often the best practice of secret-rotation is actually done is another question entirely. Depending on the support you get from your framework and tools, rotating secrets and certificates can be painful, especially in that transition period, where you may have to support both old and new secrets.

Given that the data-protection keys are critical for securing your ASP.NET Core applications, you won't be surprised that key rotation is the default for the data-protection system. By default, data-protection keys have a lifetime of 90 days, but you generally don't have to worry about that yourself. The data-protection system automatically creates new keys when old keys are near to expiration. The collection of all the available keys is called the key ring.

I won't go into the details of key management in this post. Just be aware that key rotation happens automatically, and as long as you don't delete any old keys (or explicitly revoke them), then encrypted data can still be retrieved using an expired key. Expired keys aren't used for encrypting new data.

Can I protect other data too or is it just for authentication cookies?

The data-protection system is used implicitly by ASP.NET Core to handle encryption and decryption of authentication tokens. It's also used by the ASP.NET Core Identity UI to protect password reset and MFA tokens. You don't need to do anything for this protection—the framework handles the protection itself.

If you have your own temporary data that you want to encrypt, you can use the data-protection APIs directly.
I'll go into more detail in a later post, but the quick example below, taken from the docs, shows how you can use the IDataProtectionProvider service (registered by default in ASP.NET Core apps) to encrypt and decrypt some data:

public class MyClass
{
    // The IDataProtectionProvider is registered by default in ASP.NET Core
    readonly IDataProtectionProvider _rootProvider;

    public MyClass(IDataProtectionProvider rootProvider)
    {
        _rootProvider = rootProvider;
    }

    public void RunSample()
    {
        // Create a child key using the purpose string
        string purpose = "Contoso.MyClass.v1";
        IDataProtector protector = _rootProvider.CreateProtector(purpose);

        // Get the data to protect
        Console.Write("Enter input: ");
        string input = Console.ReadLine();
        // Enter input: Hello world!

        // protect the payload
        string protectedPayload = protector.Protect(input);
        Console.WriteLine($"Protect returned: {protectedPayload}");
        // PRINTS: Protect returned: CfDJ8ICcgQwZZhlAlTZT...OdfH66i1PnGmpCR5e441xQ

        // unprotect the payload
        string unprotectedPayload = protector.Unprotect(protectedPayload);
        Console.WriteLine($"Unprotect returned: {unprotectedPayload}");
        // PRINTS: Unprotect returned: Hello world
    }
}

Generally speaking though, this isn't something you'll want to do. I've personally only needed it when dealing with password reset and similar tokens, as mentioned previously.

Is there anything I shouldn't use data protection for?

An important point is that the data-protection system isn't really intended for general-purpose encryption. It's expected that you'll be encrypting things which, by their nature, have a limited lifetime, like authentication tokens and password reset tokens.

Warning: Don't use the data-protection system for long-term encryption. The data-protection keys are designed to expire and be rotated. Additionally, if keys are deleted (not recommended) then encrypted data will be permanently lost.
Theoretically, you could use the data-protection system for data that you wish to encrypt and store long-term, in a database for example. The data-protection keys expire every 90 days (by default), but you can still decrypt data with an expired key. The real danger comes if the data-protection keys get deleted for some reason. This isn't recommended, but accidents happen. When used correctly, the impact of deleting data-protection keys on most applications would be relatively minor—users would have to log in again, password reset keys previously issued would be invalid—annoying, but not a disaster.

If, on the other hand, you've encrypted sensitive data with the data-protection system and then stored that in the database, you have a big problem. That data is gone, destroyed. It's definitely not worth taking that risk! Instead you should probably use the dedicated encryption libraries in .NET Core, along with specific certificates or keys created for that purpose.

How do I configure data-protection in my ASP.NET Core application?

This is where the rubber really meets the road. Typically the only place you'll interact with the data-protection system is when you're configuring it, and if you don't configure it correctly you could expose yourself to security holes or not be able to decrypt your authentication cookies.

On the one hand, the data-protection system needs to be easy to configure and maintain, as complexity and maintenance overhead typically lead to bugs or poor practices. But ASP.NET Core also needs to run in a wide variety of environments: on Windows, Linux, macOS; in Azure, AWS, or on premises; on high end servers and a Raspberry Pi. Each of those platforms has different built-in cryptography mechanisms and features available, and .NET Core needs to be safe on all of them.

To work around this, the data-protection system uses a common "plugin" style architecture.
There are basically two different pluggable areas:

- Key ring persistence location: where should the keys be stored?
- Persistence encryption: should the keys be encrypted at rest, and if so, how?

ASP.NET Core tries to set these to sensible options by default. For example, on a Windows (non-Azure App) machine, the keys will be persisted to %LOCALAPPDATA%\ASP.NET\DataProtection-Keys and encrypted at rest. Unfortunately, most of the defaults won't work for you once you start running your application in production and scaling up your applications. Instead, you'll likely need to take a look at one of the alternative configuration approaches.

Summary

In this post I provided a high-level overview of ASP.NET Core's data protection system. I described the motivation for the data-protection system—transient, symmetric encryption—and some of the design principles behind it. I described the system at a high level, with a master key that is used to derive child keys using "purpose" strings.

Next I described how the data-protection system is analogous to the <machineKey> in .NET Framework apps. In fact, there's a <machineKey> plugin, which allows you to use the ASP.NET Core data-protection system in your .NET Framework ASP.NET apps.

Finally, I discussed key rotation and persistence. This is a key feature of the data protection system, and is the main area on which you need to focus when configuring your application. Expired keys can be used to decrypt existing data, but they can't be used to encrypt new data. If you're running your application in a clustered scenario, you'll want to take a look at one of the alternative configuration approaches.
https://andrewlock.net/an-introduction-to-the-data-protection-system-in-asp-net-core/
EXPORT BRIEFS

The following trade items have been gathered from Agricultural Attache and other government reports as a service to U.S. exporters of food and agricultural products. In supplying the trade leads the Department of Agriculture does not guarantee reliability of the overseas inquirer. Your best source for further information on these trade leads is the listed foreign firm making the inquiry. You may also contact the Export Trade Services Division, FAS, (202) 447-7106.

OCT 08 1980

SEPTEMBER 26, 1980

2744 Horses (Ecuador). Wants 10 male horses (jumpers), min. height 1.65 meters, weight 750 kg, dark hair, preferably Hanoverian or similar; 130 castrated male all purpose horses (for presidential escorts), quarter horse or similar, weight 450 kg average, dark hair, w/out training or mounting defects (tamed). Both 3-5 months, vaccinated with certificates stating free of infectious Anemia. Quote C&F Quito by Air, FOB. Bank ref: Banco Central Del Ecuador. CONTACT: Lt. Col. Dr. Carlos Fonseca Z, Comandancia General del Ejercito, Ministerio De Defensa, Exposicion 208, Quito, Ecuador. Phone: 216-150.

2745-2746 Seeds, feeds (Colombia). Wants seeds and feeds. Total supplies for general farm, garden, pet store. High quality products. Delivery immediately. Buyers will visit contacts in U.S. Requirement: contacts and products to be offered with prices. Bank ref: Banco Cafetero, Carrera Decima-Bogota. CONTACT: Eduardo Gutierrez De Pineros, Seguros Universal S.A., Ave. Jimenez No. 8-77, Piso 6, Bogota, Colombia. TELEX: 43385 Suval Co.

2747 Wines (Colombia). Needs bottle wines, dark red wines, pink wines and white wines. Quantity: 1,000 cases every two months. Delivery immediately. Requirement: samples in advance if possible to register mark with Ministry of Health. Quote FOB price quotes & shipping ports. Bank ref: Banco Cafetero, Bogota.
CONTACT: Alfonso Martinez Arevalo, Alfonso Martinez and Co., P.O. Box 57734, Calle 12 No. 5-32 Oficina 1004, Bogota, Colombia. Phone: 283-2914, 282-1982.

2748-2749 Beef, steers (Hong Kong). Would like (A) chilled & frozen beef; (B) live steers. Quantity: (A) 100 kg per day; (B) 100-300. Quality: (A) 1st & 2nd grade; (B) Grade 3. Packed in (A) 2-5 kg each; (B) below 250 kgs. Bank ref: Wing Lung Bank. CONTACT: Raymond Tsang, Gen. Merchandise Association, 10 Tsat Tsi Mui Road, North Point, Hong Kong. Phone: 5-634-954.

2750 Joint venture poultry (Nigeria). Newly established Nigerian livestock company, with Borno State Government holding majority equity shares, interested in a joint venture turnkey poultry project of some 30,000 layers and 100,000 broilers annually. Bank ref: Bank of the North, Maiduguri. CONTACT: General Manager, Borno Livestock Company, P.M.B. 1495, Maiduguri, Nigeria. Phone: 076/232-014, Ext. 887.

Issued weekly by the Export Trade Services Division, Foreign Agricultural Service, U.S. Department of Agriculture, Room 4945 South Building, Washington, D.C. 20250
Bank ref: Banque Bruxelles-Lambert. CONTACT: Mr. Frey, INNOVATEC, rue Duquesnoy 14, B-1000 Brussels, Belgium. Phone: 02/513 72 44. 2753 Wines (England). Desires wine. Quality: all grades. Packed in 12 bottle cartons. Delivery as soon as possible. Requirement: company wishes to act as U.K. agent for U.S. company. Quote CIF, U.K. Port. Bank ref: Barclays Bank Ltd. CONTACT: James R. Bennewith, James R. Bennewith Ltd., 4 Regina Road, Chelmsford CM1 1PE, Essex, England. Phone: 0245 356559. 2754 Seeds (Portugal). Needs white millet and canary seeds. Quantity: 18 mt of each. Packed in containers. Delivery immediately. Quote CIF Lisbon or Oporto. Bank ref: Banco Totta & Acores, Av. Dos Aliados 42, Oporto. CONTACT: Fernandes Cruz E Silva, Fernandes Cruz E Silva LDA., Rua Antonio Candido 83-1, Oporto, Portugal. TELEX: 24215 OPORTO. 2755 Joint venture fruits (Nigeria). A Nigerian business firm which is currently building a soft drink plant expected to begin production in December is interested in beginning an orchard to grow citrus fruits (oranges, grapefruits, papaya). Firm would like to make contact with U.S. orchards interested in supplying technical expertise, seedlings, and management. This is also a joint venture opportunity. Bank ref: First Bank. CONTACT: Pavasalt Enterprises Ltd., P.O. Box 5142b, Falomo, Ikoyi, Lagos, Nigeria. 2756 Olives (Denmark). Wants black olives, large, pitted, 1 to 1.5 tons per month, five kilo cans, approx. Delivery as soon as possible. Quote CIF. Bank ref: Handelsbanken, Copenhagen. CONTACT: Preben Friis, Strom & Svendsen, Toldbodgade 61, DK-1253 Copenhagen, Denmark. TELEX: 10681. Phone: (01) 15 62 54. 2757 Corn (France). Wants yellow corn (American type), 3,000 metric tons. Quote CIF Abidjan, Ivory Coast. Bank ref: Banque Francaise Du Commerce Exterieur. CONTACT: Edmond Veller, International Business Manager, UNIFRA, 14, Boulevard Montmartre, 75009 Paris, France. TELEX: 660300 UNIFRA. Phone: 770 48 62. 2758 Canned sweet corn (France).
Wants sweet corn "sous vide" (without water), three to five million 12 oz. tins, fancy Grade A size medium less than 20% wet. Packed in cases of 12 tins; cases of 24 tins; and bulk on 80 x 120 pallets. Delivery from October until June. Most of tins should be under customer labels in French. Quote CIF Le Havre & Anvers. Bank ref: Banque Populaire Bretagne Atlantique, 12 Cours De La Bove, 56100 Lorient. CONTACT: Mr. Tanguy or Mr. Le Beuve, S.A. Le Beuve, 22, Quai Des Indes, 56107 Lorient, France. TELEX: 740-771. Phone: (97) 64 35 63. 2759 Cottonseed (France). Interested in buying 40,000 metric tons of cottonseed. Quote FOB American Port. Bank ref: Banque Francaise Du Commerce Exterieur. CONTACT: Edmond Veller, International Business Manager, UNIFRA, 14, Blvd. Montmartre, 75009 Paris, France. TELEX: 660300 UNIFRA. Phone: 770 48 62. 2760- Holstein & Brown Swiss heifers (Ecuador). Wants dairy heifers, open, 14 2761 months old, and dairy heifers 3-5 months pregnant. Quantity: 10,000. Holstein & Brown Swiss, certified at least 75% pure (each animal at least three quarters purebred). Bids on partial lots, i.e. fewer than 10,000, acceptable. First shipment late 1980. Usual health certificates. Certification of 75% purebred required from an official source (Association, Government or University). As much detail as possible on previous experience in hot (80 F) climate and low altitude (2,000-5,000 feet) required. Quote C&F Guayaquil Via Air & Via Boat, and FOB Point of Origin. Bank ref: Ecuadorean Central Bank Providing Financing. CONTACT: Econ. Fabian Armijos, Predesur, Juan Larrea Y Arenas, Mezzanine, Quito, Ecuador. Phone: 544-517 & 522-746. 2762 Wine (Peru). Interested in all types of Californian table wines, up to $200,000 during the first year, in bottles, liters or gallons, all sealed. This company is being established for the purpose of representing & importing all types of grape wines from California exclusively. Quote FOB & C&F Callao.
Bank ref: Banco De Lima, Carabaya 698, Lima 1, Peru. CONTACT: Marcos Wolfenson, Jr. Carabaya 685, OF 217, Lima 1, Peru. TELEX: 20478 Pe Wolfenson. Phone: 273054. 2763- Tallow, cottonseed oil, sunflower seed oil, soybean oil, frozen chicken, 2768 frozen fish, fresh table eggs (Egypt). El Nasr is the largest of five public sector trading companies acting on behalf of Ministry of Supply. Purchases will be through public tenders requiring Bid Bonds & Performance Bonds. El Nasr needs contact with large, reliable U.S. suppliers to compete with bids from firms in other countries. Next GOE Tenders scheduled October 4 for 10,000 mt frozen broilers (CIP). Several others scheduled to follow. Quote C&F & FOB, Alexandria. Bank ref: Nat'l Bank of Egypt. CONTACT: Nabil Mostafa Kamel, El Nasr Export & Import Co., P.O. Box 1589, Cairo, Egypt. TELEX: 92232 SHIN UN. 2769- Raisins, currants, dates, almonds, rolled oats, glazed mixed fruit (mixed 2771 peel) (Canada). Looking for sources of supply of above products. Wants price quotations to Canadian destinations. CONTACT: Andrew Sealy, A&S Int'l Trading Enterprises, P.O. Box 921, Station B, Willowdale, Ontario M2K 2T6, Canada. Cable: ASINPTRENT. Phone: (416) 493-0403. 2772 Wine (Netherlands). Interested in container loads of California wines. Quote CIF or FOB. Bank ref: Amro Bank Utrecht. CONTACT: E. J. J. Booden, Makro International Purchasing, Spaklerweg 53, 1099 BB Amsterdam, Netherlands. TELEX: 16634. Phone: 020-944484. 2773 Chicken (France). Wants offers for frozen chicken, 300 metric tons, 800 to 1,400 grams size. Destination Middle East with Halal Certificate of killing specifications as usual for Arabic Gulf. Quote FOB American Port. Bank ref: Banque Francaise Du Commerce Exterieur. CONTACT: Edmond Veller, International Business Manager, UNIFRA, 14, Blvd. Montmartre, 75009 Paris, France. TELEX: 660300 UNIFRA. Phone: 770 48 62. 2774 Rapeseed & cornoil (Australia).
Wishes to buy rapeseed & cornoil, edible/deodorized/winterized, in 200 kg steel drums. Delivery soonest. Quote C&F Fremantle. Bank ref: Commonwealth Bank of Australia, High St. Fremantle, WA. CONTACT: B. Jacovich, B. A. Jacovich & Co., 15 Derinton Way, Hamilton Hill, W.A. 6163, Fremantle, Australia. Phone: 09-4182881. 2775 Cocktail mix (Israel). Interested in Pina Colada cocktail mix, 10,000 cartons, 2 times a year, labels to be supplied by importer. Quote FOB East Coast. Bank ref: Bank Hapoalim, Kikar Hamedina Branch. CONTACT: Yaakov Shnitzer, Wagner-Shnitzer, Barsilai Street, Tel Aviv, Israel. TELEX: 35770 COIN IL ATTN: HERMAT. Phone: 03-625413. 2776 Deer (Taiwan). Importer looking for supply of deer, such as Fallow deer, Red deer, elk and Sambar. Quantity: Fallow deer 100 head, Sambar deer 50 head. All adult in good condition. Crated 15-20 Fallow deer per box, 116 inches x 79 inches x 72 inches. Need health certificate issued by the country of origin. Quote FOB or C&F by air. CONTACT: Rick Pan, Safari Animal Corporation. TELEX: 23457 TIMOSLED ATTN: SAFARI. Phone: (02) 841-1012. 2777 Frozen roasting chickens (Canada). Wishes to contact U.S. poultry processors who can supply basted frozen roasting chickens, under private label, 5 lbs. and up. Prefer chickens not processed in chlorinated water. CONTACT: Gary O'Brian, or Percy Welsh, Burns Food Ltd., International Trade Division, P.O. Box 2520, Calgary, Alberta, Canada T2P 2M7. Phone: (403) 267-0110. 2778- Beef, chicken (Egypt). Export agent for Egyptian firm requests offers for 2779 500 long tons of Grade A beef per month; 500 long tons per month Grade A chicken, giblets packed separately (Islamic Kill pack). Quote CIF Alexandria, Egypt on a one year contract. Partial offers acceptable. CONTACT: Walter Larke, Sorg Associates. Confirm offers by Western Union or mailgram to 234 F Street, NE, Washington, D.C. 20002. Phone: (202) 546-6371.
2780- Flower seeds, tree seeds, nursery stock (United Arab Emirates). Interested 2782 in flower seeds, tree seeds, herbaceous plants, nursery stock, consisting of Acacia Arabica, Acacia Tortilis, Prosopis Specigera, Zizyphus Spina Christii (all forest plants). Company plans to set up business for ornamental, forest plants, and indoor plants. They have a particular interest in plants for landscaping purposes. CONTACT: Juhani Hulkko, Lanner Project Co. Ltd., P.O. Box 6298, Abu Dhabi, U.A.E. TELEX: 26166. 2783- Canned foods (Nigeria). Interested in canned fish, fruits and vegetables, 2787 meats and evaporated milk. Quote C&F Lagos. Terms of payment: 30 to 60 days document against payment basis. Bank ref: Bank of Credit & Commerce International (NIG) Ltd., 39/48 Tafawa Balewa Square, P.M.B. 12763, Lagos. CONTACT: S.A.R. Sanusi, Managing Director, Rio de Oro Continental Agency, 1st Floor, Offin House, 5 Rabiatu Thompson Crescent, Surulere, P.O. Box 1918, Surulere, Lagos, Nigeria. Cable: RIDORCUN BOX 1918, Surulere. 2788 Red kidney beans (Panama). Wants 39,000 CWT, U.S. No. 1 1980-81 crop, jute sacks of 100 lbs. each. Delivery November 4,500 CWT, December 5,000 CWT, January 5,000 CWT, February 4,500 CWT, May 2,000 CWT, June 4,000 CWT, July 4,000 CWT, August 5,000 CWT, September 5,000 CWT. Max. cooking time 90 minutes. Phytosanitary certificate from country of origin required. Quote CIF Port Balboa. Bank ref: Banco Nacional. CONTACT: Sr. Everardo Bertoli, Instituto de Mercado Agropecuario, Apartado 5638, Panama 2, Panama. TELEX: 2994 MERCADEO. Cable: IMA Panama. Phone: 61-5072. 2789 Lentils (Panama). Wants 34,000 CWT of lentils, U.S. No. 1 1980-81 crop, jute sacks of 100 lbs. each. Delivery December 4,500 CWT, January 4,500 CWT, February 4,000 CWT, May 3,000 CWT, June 4,500 CWT, July 4,500 CWT, August 4,500 CWT, September 4,500 CWT. Max. cooking time 90 minutes. Phytosanitary certificate from country of origin required. Quote CIF Port of Balboa.
Bank ref: Banco Nacional. CONTACT: Everardo Bertoli, Instituto de Mercado Agropecuario, Apartado 5638, Panama 2, Panama. TELEX: 2994 MERCADEO. Cable: IMA Panama. Phone: 61-5072. 2790- Wheat, wheat flour (Algeria). Interested in large negotiable quantities of 2791 hard wheat semolina and soft wheat flour, correspondence in French required. No tender documents issued yet. Looking for direct contact with producers and exporters in order to fill future demand. Mainly interested in signing large long-term contracts so that goods cover 1981 period. Firms will be selected through experience, production capacity, export capacity, turnover and number of employees. CONTACT: Sn. Sempac, Direction des Approvisionnements, 28, Avenue Colonel Bougara, El-Harrach, Algiers, Algeria. TELEX: 52912. Phones: 76.41.84; 76.49.73. 2792 Broilers (Greece). Wants frozen broilers, 2,000 mt., USDA Grade A, ready to cook; 900-1,300 grs. each bird, fully eviscerated, giblets in; 10-14 birds per carton; individually poly-wrapped. Islamic rite slaughter and Arabic label required. Quote FOB, U.S. ports (East Coast). Bank ref: National Bank of Greece. CONTACT: John Ritsonis, 5, Zaimis Street, Athens, Greece. TELEX: 216925 RITS. Phone: 3601859. 2793 Broilers (France). Import-export company founded in 1880, now servicing French overseas territories and Middle East, wants to negotiate with primary suppliers only for 1,850 tons per month, 800-1,400 grams, white skin, Islamic kill, fully eviscerated, in bags with trademark "Poulet Joli", payment by Letter of Credit. Bank ref: Banque Nationale de Paris, Marseille, Banque Francaise et Commerce Exterieur, Marseille. CONTACT: G. Orinier, SUCAB, B.P. 255, 13308 Marseille Cedex 14, France. TELEX: 410617 F. 2794 Broilers (Egypt). Wants frozen chicken, 800-1,400 gram/bird, 10,000 mt., Grade A, 5,000 mt. in December, 5,000 mt. in January. Slaughtered according to Islamic rites. Quote C&F Alexandria. Bank ref: National Bank of Egypt, Main Branch Cairo, Egypt.
CONTACT: Gouda Kaiser, Tractor & Engineering Co., 23, Boustan Street, Bab El Louk, Cairo, Egypt. TELEX: 92286 TECO UN, 92247 CROCDL UN. Cable: AZIZ, Cairo. Phone: 742696, 742613. 2795 Broilers (Egypt). Wants frozen broilers, 100 metric tons, Grade A, 0.800 kg - 1.400 kg each, packaging in polyethylene bags in cardboard boxes. Delivery two equal shipments during November and December. Origin and date of slaughter must be stamped on box. Quotations CIF Alexandria. CONTACT: Zarif Risgallah, Alpha Egypt Import-Export, 11 Saint Saba Street, Alexandria, Egypt. Phone: 26476 & 34048. 2796 Chicken (Hong Kong). Wishes to buy frozen poultry and poultry parts, Grade A, standard packaging, prompt shipment. Bank ref: Hong Kong and Shanghai Banking Corp; Banque Nationale de Paris; and Sanwa Bank. CONTACT: J. F. Redondo or Abraham Li, F.A.B. Limited, 2/F Highburgh House, Hung Hom Bya Centre, Kowloon, Hong Kong. TELEX: 33683 OAKPA HX. Phone: 3-656-395. 2797- Chicken, eggs (Egypt). Interested in frozen broilers and table eggs, trial 2798 order of 100 mt of chicken and 1/2 million eggs, Grade A, eggs of 40-50 grams each. Quote CIF Alexandria. Bank ref: Cairo Bank, Shoubra Branch. CONTACT: Fathi Mahmoud, Arab Trade & Exports Bureau, 37 Talaat Harb St., Cairo, Egypt. TELEX: 93731 DG Cairo UK. Cable: NIBESKAR, Cairo. Phone: 701219/757524. 2799 Barley (Saudi Arabia). Wants 20,000 tons or more of barley. Interested U.S. firms should telex price quotes and shipping details immediately. CONTACT: A. R. Khan, IDO (General Contracting), Al-Saif Contracting and Trading EST., P.O. Box 6754, Riyadh, Saudi Arabia. Phone: 4776004/4776037. 2800 Beef (Egypt). Government tender with closing date of October 11 for frozen boneless beef. Requests bids for tender No. 32/1980 for 8,000 tons more or less 10% frozen boneless beef meat fore quarters and/or compensated quarters, males 100%, age 1-3 years and/or 3-5 years. Shipment during December in two consignments.
Conditions and specifications available at address below against payment of due fees, taking into consideration ministerial decree No. 1030/1978 concerning the commercial agency. CONTACT: General Authority for Supply Commodities, Purchasing Committee for Animal Products, 24, El Gomhouria St., Cairo, A.R.E. Foreign Trade Developments Egyptian Cabinet decisions on meat production, supply, prices and imports. According to Egyptian press reports following a meeting of the full Cabinet, all meat prices will be controlled following the termination of the meat ban Sept. 30. A committee of "experts" has been formed to determine the appropriate prices for meat, poultry, eggs and other protein sources. These prices will be adjusted periodically depending on demand and supply considerations. The projected imports of basic foodstuffs by the GOE during the 1980/81 fiscal year have been revised to reflect the government's desire to ease supply problems. The following figures are in tons: Previous FY 1980/81: meat - 80,000; fish - 70,000; poultry - 30,000; & lentils - 50,000; Revised FY 1980/81: meat - 120,000; fish - 150,000; poultry - 85,000; & lentils - 80,000. The FY 1980/81 corresponds to the period July 1, 1980 - June 30, 1981. The government also plans to import 10,000 head of breeding cattle to bolster local production. The press further notes that extra ration cards will be issued early in October to allow regular card holders to purchase additional amounts of such items as cooking oil, lentils, soap and beans. The monthly ration of imported chickens has been raised from 2 to 4 per ration-card holder. Tokyo food exhibit. Members of the processed food industry are urged to signify their interest in an American food exhibit at the Hotel, Restaurant, Institutional food show in Tokyo, March 16-20, 1981. This will be an opportunity to enter the rapidly expanding Japanese market and only fifty (50) booths will be available on a first-come first-served basis.
Because of this limitation on the number of exhibitors, interested participants are urged to contact FAS without delay. The participation fee is $400. If interested in receiving further details, contact the Export Trade Service Division, FAS, U.S. Department of Agriculture, Washington, D.C. 20250. New Publications Single copies of the following are available by writing the Information Services Staff, FAS, U.S. Department of Agriculture, Washington, DC 20250. World Grain Situation Outlook for 1980/81, FG-26-80, September 15, 1980. Prospective Outlook for Grain in the USSR 1/, FG-25-80, September 12, 1980. Agricultural Outlook, August 1980, AO-57, September 1980.
Overview

From version 4.5, Metview uses CMake for its compilation and installation. This is part of the process of homogenising the installation procedures for all ECMWF packages. As with configure, CMake will run some tests on the system to find out if required third-party software libraries are available and note their locations (paths). Based on this information it will produce the Makefiles needed to compile and install Metview. CMake is a cross-platform free software program for managing the build process of software using a compiler-independent method.

Requirements

Platforms

At ECMWF, openSUSE 11.3, openSUSE 13.1, SLES 11 and Red Hat 6 Linux systems (64-bit) were used for the regular usage and testing of Metview. Other Linux platforms are used for occasional testing.

ECMWF support libraries

All required support libraries from ECMWF are available without charge from the Software Services web page. To produce plots, Magics must be installed:

Magics++ (2.22 or higher is required) should be configured with the -DENABLE_METVIEW option; for a 'pure batch' installation of Metview with no user interface, it is possible to supply Magics with the option -DENABLE_METVIEW_NO_QT instead.

The following two libraries need to be installed (both are required, even if you will not handle GRIB or BUFR data):

GRIB_API (1.9.9 or higher) - see the Installation FAQ for details of building GRIB_API for Metview, as this contains some important information.

Emoslib - version 392: compiled with double floating point precision (answer "y" to "Do you want 64-bit reals? [y,n]"); must be built with GRIB_API support; 64-bit versions should be built with the -fPIC compilation flag. Remember to set the ARCH environment variable before building Emoslib, e.g. export ARCH=linux. Version 400 or above: compiled with double floating point precision (this is the default). The latest versions of EmosLib depend on GRIB_API, therefore GRIB_API must be installed before EmosLib.
Required third-party software

First, ensure that all third-party libraries required by Magics and GRIB_API are installed (this is likely to have been fulfilled already unless Magics was built on another system and copied across). Additionally, the following list of software should be installed on your system before you try to install Metview. If you use a package manager, such as RPM, to install software, make sure to include the corresponding development packages with the header files. CMake will test for these libraries and give error messages if an essential one is missing.

Qt (4.6.2 or later) if building the user interface (default=yes); note that on some systems it is also necessary to install the libQtWebKit-devel development package (it may have different names on different systems)

NetCDF library with C++ interface

OpenMotif (if enabling the old user interface with -DENABLE_MOTIF)

gdbm

ImageMagick (Metview uses the convert command during the build process)

ksh - the Korn Shell is used by Metview's startup script and some other internal scripts

If you wish to access OPERA radar BUFR data, then you will need to also install the proj4 development libraries.

Compilation environment

Any C++ compiler which supports the features required for the ANSI C++ standard from 1998 (STL, namespaces, templates) should work with Metview. At ECMWF we tested GCC's g++ 4.1, 4.3 and 4.5 successfully. A Fortran compiler is required to build some of Metview's modules. It will also be required to build EmosLib, for which Cray pointer support is required. At ECMWF the Portland Pgf90 compiler 10.8 and GFortran 4.5 and newer were tested successfully on Linux platforms.

Compilation, testing and installation

It is advisable to perform an 'out-of-source build', meaning that the build should take place in a directory separate from where the source code is.
Here is an example set of commands to set up and build Metview using default settings:

# unpack the source tarball into a temporary directory
mkdir -p /tmp/src
cd /tmp/src
tar xzvf Metview-4.5.0-Source.tar.gz

# configure and build in a separate directory
mkdir -p /tmp/build
cd /tmp/build
cmake /tmp/src/Metview-4.5.0-Source
make

The Metview distribution includes a small set of tests which can help ensure that the build was successful. To start the tests, type:

make test

Although it is possible to run Metview directly from the build directory, it is best to install it. The installation directory is /usr/local by default, but can be changed by adding the -DCMAKE_INSTALL_PREFIX flag to the cmake command. In this case, the configure, build, test and install steps would look like this:

cmake /tmp/src/Metview-4.5.0-Source -DCMAKE_INSTALL_PREFIX=/path/to/metview_install_dir
make
make test
make install

CMake options used in Metview

CMake options are passed to the cmake command by prefixing them with -D, for example -DENABLE_UI=OFF.
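As a concrete illustration, several options can be combined in a single configure invocation. This is a sketch, not a definitive recipe: the /path/to/... values are placeholders for your own directories, and only -DCMAKE_INSTALL_PREFIX and -DENABLE_UI are options mentioned above — check the option list your version actually supports:

# Configure a batch-only (no user interface) build into a custom prefix
cmake /tmp/src/Metview-4.5.0-Source \
      -DCMAKE_INSTALL_PREFIX=/path/to/metview_install_dir \
      -DENABLE_UI=OFF

# List the cache variables (with help text) that this build understands
cmake -LH /tmp/build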
* Launching an app within an app? Jennifer Sohl Ranch Hand Joined: Feb 28, 2001 Posts: 455 posted Jul 31, 2003 07:45:00 0 I have created an application for our Costing department and also created a separate application for our order entry department. Each application has its own main method. What I am trying to do is, within the Costing application, let the user click on a button and launch the order entry application. I tried just creating an object of the order entry class in an actionPerformed event in the costing app, but nothing came up. Is there something special I need to be doing? Here is the class with the main method in it for the order entry app...

package com.storekraft.ord;

import javax.swing.*;
import java.awt.*;
import java.sql.*;
import java.awt.event.*;
import com.storekraft.util.*;
import java.net.*;

public class CVOrderEntry extends JFrame {
    private SKComboBox customer;
    private JTabbedPane jtp;
    private Connection con;

    public CVOrderEntry() {
        super("Client Vision Order Entry");
        URL url = getClass().getResource("CVWelcome2.jpg");
        addWindowListener(new WindowKill());
        DB2Connection db2Con = new DB2Connection("cvuser", "cvuser1");
        con = db2Con.getCon();
        jtp = new JTabbedPane();
        SKUtil util = new SKUtil(con);
        // Create welcome tab.
        WelcomeTab wt = new WelcomeTab(jtp, url);
        // Create customer tab.
        CustomerTab ct = new CustomerTab(db2Con);
        customer = ct.getCust();
        customer.addActionListener(new CustEvent());
        // Create OrderMaintenance tab.
        OrderMaintenance om = new OrderMaintenance(db2Con, ct, util);
        // Create ProjectMaintenance tab.
        ProjectMaintenance pm = new ProjectMaintenance(db2Con, ct, om, util);
        jtp.addTab("Welcome", null, wt, "Welcome");
        jtp.addTab("Customer", null, ct, "Customer");
        jtp.addTab("ProjectMaintenance", null, pm, "Project Maintenance");
        jtp.addTab("Order Maintenance", null, om, "Order Maintenance");
        jtp.setEnabledAt(2, false);
        jtp.setEnabledAt(3, false);
        getContentPane().add(jtp);
        UIManager.put("ToolTip.background", new Color(204, 204, 255));
        UIManager.put("ToolTip.foreground", Color.black);
        UIManager.put("ComboBox.disabledBackground", Color.white);
        UIManager.put("ComboBox.disabledForeground", new Color(0, 0, 255));
        UIManager.put("TextField.foreground", Color.black);
    }

    public static void main(String[] args) {
        CVOrderEntry cvoe = new CVOrderEntry();
        // Center the frame...
        cvoe.setSize(740, 745);
        // Get the screen size...
        Dimension d = Toolkit.getDefaultToolkit().getScreenSize();
        // Get the size of the frame...
        Dimension s = cvoe.getSize();
        // Subtract half the frame from half the screen to get
        // the x and y coordinates of the upper left corner of the screen.
        int x = (d.width / 2) - (s.width / 2);
        int y = (d.height / 2) - (s.height / 2);
        // Apply these calculations to the frame...
        cvoe.setBounds(x, y, s.width, s.height);
        cvoe.pack();
        cvoe.setSize(740, 745);
        cvoe.setResizable(false);
        cvoe.setVisible(true);
    }

    class CustEvent implements ActionListener {
        public void actionPerformed(ActionEvent e) {
            if (e.getSource() == customer) {
                if (customer.getSelectedIndex() > 0) {
                    jtp.setEnabledAt(2, true);
                    jtp.setEnabledAt(3, true);
                } else {
                    jtp.setEnabledAt(2, false);
                    jtp.setEnabledAt(3, false);
                }
            }
        }
    }
}

Thanks for any help!

Barry Andrews Ranch Hand Joined: Sep 05, 2000 Posts: 523 I like... posted Jul 31, 2003 10:41:00 0 You can do it 1 of 3 ways: 1. If you want the CVOrderEntry executing as a separate process (i.e. in a separate JVM), then use one of the java.lang.Runtime.exec() methods to kick off another process. 2.
If you want them to execute in the same JVM, then as long as your other class has access to this CVOrderEntry object you can just call it directly. But, since you have some initialization stuff in your main(), you will have to do that as well in the calling class. 3. You could also use reflection and call the main() method in your CVOrderEntry class. Check out the java.lang.reflect package. Options 1 or 2 are the easiest. Hope it helps! Barry

Ernest Friedman-Hill author and iconoclast Marshal Joined: Jul 08, 2003 Posts: 24166 30 I like... posted Jul 31, 2003 15:47:00 0 Don't know why she would need to use reflection to call main() in the other class; main is just an ordinary method, and she could just call it if that's what would do the job. [Jess in Action] [AskingGoodQuestions]

Joel McNary Bartender Joined: Aug 20, 2001 Posts: 1815 posted Aug 01, 2003 07:29:00 0 The only thing to watch out for when calling the main method directly is what happens when the window closes. If both applications call System.exit(0) when their main window closes, then the user could start the first app, then start the second app within the first app's process. The user finishes what he is doing in the second app and closes the window. This calls System.exit(0), which also then closes the first window! If this is the case, calling the Runtime.exec() methods is better -- just make sure that you capture the output (if there is any) from the called program. Piscis Babelis est parvus, flavus, et hiridicus, et est probabiliter insolitissima raritas in toto mundo. I agree. Here's the link:
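A minimal sketch of option 2 — calling the other application's main() directly in the same JVM. The class names here are illustrative stand-ins with no Swing involved, not the poster's actual classes:

```java
// Illustrative only: "OtherApp" stands in for a class like CVOrderEntry.
public class Launcher {

    static class OtherApp {
        static boolean started = false;

        // main is an ordinary static method, so another class can invoke it directly
        public static void main(String[] args) {
            started = true; // a real application would build its GUI here
        }
    }

    // Option 2: launch the second app inside the same JVM
    public static void launchOtherApp() {
        OtherApp.main(new String[0]);
    }

    public static void main(String[] args) {
        launchOtherApp();
        System.out.println("OtherApp started: " + OtherApp.started);
    }
}
```

Note that Joel's caveat still applies: if OtherApp called System.exit(0) when its window closed, it would take the calling application down with it; Runtime.exec() sidesteps that by running a separate JVM.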
public class IntRange extends AbstractList

Represents a list of Integer objects from a specified int up (or down) to and including a given to. This class is a copy of ObjectRange optimized for int. If you make any changes to this class, you might consider making parallel changes to ObjectRange.

Instances of this class may be either inclusive aware or non-inclusive aware. See the relevant constructors for creating each type. Inclusive aware IntRange instances are suitable for use with Groovy's range indexing - in particular if the from or to values might be negative. This normally happens underneath the covers but is worth keeping in mind if creating these ranges yourself explicitly.

Creates a new non-inclusive IntRange. If from is greater than to, a reverse range is created with from and to swapped.
from - the first number in the range.
to - the last number in the range.

Creates a new non-inclusive aware IntRange; fails if from is greater than to.
from - the first value in the range.
to - the last value in the range.
reverse - true if the range should count from to to from.

Creates a new inclusive aware IntRange.
from - the first value in the range.
to - the last value in the range.
inclusive - true if the to value is included in the range.

Determines if this object is equal to another object. Delegates to AbstractList.equals if that is anything other than an IntRange. It is not necessary to override hashCode, as AbstractList.hashCode provides a suitable hash code. Note that equals is generally handled by DefaultGroovyMethods.equals instead of this method.
that - the object to compare
Returns true if the objects are equal.

Compares an IntRange to another IntRange.
that - the object to compare for equality
Returns true if the ranges are equal.

Gets the 'from' value as a primitive integer.

Returns the inclusive flag. Null for non-inclusive aware ranges or non-null for inclusive aware ranges.

Gets the 'to' value as a primitive integer.
A method for determining from and to information when using this IntRange to index an aggregate object of the specified size. Normally only used internally within Groovy but useful if adding range indexing support for your own aggregates.
size - the size of the aggregate being indexed
Python is one of the easiest programming languages for newbies and seasoned professionals alike. The basics of any object-oriented programming language (C++, Java, etc.) are the CLASSES and OBJECTS of those classes. The following section deals with these concepts in great detail, so let us get started.

In general object-oriented programming notation, a class is a container of your data and the methods that perform actions on that data. In other words, a class bundles your data and the functionality together as a single unit. Once a class is created, you create instances of the class for your programmatic usage, which are then termed objects. You could understand the class to be equivalent to a template with empty placeholders and objects to be photocopies of the template, with the difference that each photocopy has its own related details added to it beyond what is available on the class itself.

As already mentioned above, Python is an object-oriented programming language, so the main focus remains on classes and objects rather than on functions, as in procedure-oriented programming languages. A class can be defined in Python using the class keyword. A class that we create in Python should always have a docstring (a string that defines and describes what the class is for); it is not mandatory, but it is good to have one. Let us now take a look at a sample class definition in the Python programming language:

class MyFirstPythonClass:
    """This is a docstring to explain that it is my very first Python class"""
    pass

A class, once created, creates a new local namespace where all the class's attributes (data or functions) are defined. As soon as we define a class, a class object of the same name is also created, which allows us to access the data or functions on that object.
You can then create multiple other objects of the class like this:

newObject = MyFirstPythonClass()

Now let us take a look at an example of creating a class in Python and, at the same time, create an object of the class and check whether we are able to access the data and the functions of the class. The example below will give you all the necessary details in a single shot: how we can create a class, create an object out of it, and use the data and functions of the class via the newly created object.

In the class definition, you would have observed that the function had an argument named self, but the method invocation via the object didn't pass any argument while invoking it. This is because, whenever a method is called from an object, the object becomes the first argument to the method by default. The method call can safely be transformed as follows for better understanding:

newObject.function() -> MyFirstPythonClass.function(newObject)

Along with the points mentioned above, we need to focus on methods that begin with double underscores, as these are all special functions with specific meaning. The constructor method __init__() is always called first whenever a new object is created out of a class. Let's extend the above example to incorporate a constructor that assigns default values whenever new objects are created out of this class.
Notice also the ease of use: the first object was created with the two parameters specified in the class, while the second object had an additional instance variable added to it.

In this article, we have seen what Python is and tried to understand its importance and its ease of use. Taking a step further, we have also looked at the concept of classes and objects in the Python programming language. We hope you have understood these concepts well; if you still require further details on these topics, one of the best resources to rely on is the Python documentation.
https://mindmajix.com/python-classes-and-objects
How to use Python 3 Pillow on Raspbian Stretch Lite to compress your jpeg image

When you are building a Raspberry Pi camera project, you may want to compress the images captured from the camera to reduce the time it takes to upload each image to a server endpoint. Moreover, when you connect your Raspberry Pi to your iPhone Personal WiFi hotspot, you will want to incur minimal mobile bandwidth charges while demonstrating your Raspberry Pi project in your class.

You may have either:
- Setup Raspbian Stretch Lite on Raspberry Pi 3 to run Python 3 applications, or
- Setup Raspbian Stretch Lite on Raspberry Pi Zero W to run Python 3 applications.

In either case, you have the option to use Pillow, a fork of the Python Imaging Library, to compress your jpeg images.

Setting up Pillow on your Raspbian Stretch Lite

Before you can use Pillow, you need to set up the Pillow dependencies on your Raspbian Stretch Lite. Follow through that tutorial before continuing.

Python 3 code example on using Pillow to compress a jpeg image

After making sure that Pillow can be used in your Python 3 environment on Raspbian Stretch Lite, you can write the code to compress your jpeg images. The following is a Python 3 function that takes in a path to an image and re-saves the image with 65% quality:

from PIL import Image

def compress_jpeg_image(image_path):
    picture = Image.open(image_path)
    picture.save(image_path, "JPEG", optimize=True, quality=65)

You can then use the function to compress an image that resides on the filesystem of your Raspbian Stretch Lite:

compress_jpeg_image('/home/pi/a_jpeg_image.jpg')

With this Python 3 code, I can get around an 8 to 10 times reduction in the size of the images that my Raspberry Pi camera captures.
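A small variation on the function above works on in-memory bytes instead of overwriting the file in place, so the original capture is left untouched. The helper name compress_jpeg_bytes is my own, not from the original post; it assumes the same Pillow install:

```python
import io

from PIL import Image


def compress_jpeg_bytes(data, quality=65):
    """Re-encode image bytes as JPEG at the given quality, returning new bytes."""
    picture = Image.open(io.BytesIO(data))
    out = io.BytesIO()
    picture.save(out, "JPEG", optimize=True, quality=quality)
    return out.getvalue()
```

You could then write the returned bytes to a new path, or upload them directly, instead of clobbering the original image on disk.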
https://www.techcoil.com/blog/how-to-use-python-3-pillow-on-raspbian-stretch-lite-to-compress-your-jpeg-image/
Infrastructure as Code

Infrastructure as code is the process of managing and provisioning computer data centers through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools. If you want to know more about Infrastructure as Code and when to use the different IaC tools, take a look at this article of mine: What is Infrastructure-as-Code and how are Terraform, CDK, Ansible different?

AWS CDK

The AWS Cloud Development Kit (AWS CDK) is an open-source software development framework to define cloud infrastructure in code and provision it through AWS CloudFormation. The AWS CDK has first-class support for TypeScript, JavaScript, Python, Java, and C#. If you want to know more about AWS CDK, go through the article below: Everything about AWS CDK

Some information before we start

- What is TypeScript? TypeScript is a typed superset of JavaScript that compiles to plain JavaScript.
- Why am I choosing TypeScript for today's example? TypeScript is a fully-supported client language for the AWS CDK and is considered stable. Working with the AWS CDK in TypeScript uses familiar tools, including Microsoft's TypeScript compiler (tsc), Node.js and the Node Package Manager (npm). TypeScript also has the biggest community support for AWS CDK.

Prerequisites

Today we will be creating an S3 bucket in your AWS environment by writing some sample code in the TypeScript language. The steps below are prerequisites.

To use the AWS CDK, you need an AWS account and a corresponding access key. If you don't have an AWS account yet, see Create and Activate an AWS Account. To find out how to obtain an access key ID and secret access key for your AWS account, see Understanding and Getting Your Security Credentials. To find out how to configure your workstation so the AWS CDK uses your credentials, see Setting Credentials in Node.js.

Download the latest version of Node.js (this will provide npm, the package manager for Node). All AWS CDK applications require Node.js 10.13 or later, even if you work in Python, Java, or C#.
You may download a compatible version. Or, if you have the AWS CLI installed, the simplest way to set up your workstation with your AWS credentials is to open a command prompt and type:

aws configure

Install the latest version of the AWS CLI on Windows (or Linux or Mac, based on which OS you are using).

After installing Node.js, install the AWS CDK Toolkit (the cdk command):

npm install -g aws-cdk

- Test your installation

cdk --version

Here come the final steps to create your bucket

- You need TypeScript itself. If you don't already have it, you can install it using npm.

npm install -g typescript

- You create a new AWS CDK project by invoking cdk init in an empty directory.

mkdir my-project
cd my-project
cdk init app --language typescript

Creating a project also installs the core module and its dependencies. cdk init uses the name of the project folder to name various elements of the project, including classes, subfolders, and files.

- Use the Node Package Manager (npm), included with Node.js, to install and update the AWS Construct Library modules used by your apps, as well as any other packages you need. The AWS CDK core module is named @aws-cdk/core. AWS Construct Library modules are named like @aws-cdk/SERVICE-NAME. We will install the S3 module, since we will be creating a bucket with it, so run the command below:

npm install @aws-cdk/aws-s3

- Your project's dependencies are maintained in package.json. You can edit this file to lock some or all of your dependencies to a specific version, or to allow them to be updated to newer versions under certain criteria. To update your project's npm dependencies to the latest permitted version according to the rules you specified in package.json:

npm update

- Now if you go into your project's lib folder, you will find a file named 'my-project-stack.ts'. That's your parent app. Initially it will not contain any code that creates infrastructure. Copy the code snippet below and replace the contents of that file with it.
import * as cdk from '@aws-cdk/core';
import * as s3 from '@aws-cdk/aws-s3';

export class BucketResourceStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // The code that defines your stack goes here
    new s3.Bucket(this, 'MyFirstBucket', {
      bucketName: 'my-first-bucket',
      publicReadAccess: true,
      removalPolicy: cdk.RemovalPolicy.DESTROY
    });
  }
}

- Finally run it

cdk deploy

Login to your AWS console and go to S3. If you have followed all the steps properly you will see 'my-first-bucket' in there.

- Now if you wish, you can destroy the bucket using the command below

cdk destroy
https://practicaldev-herokuapp-com.global.ssl.fastly.net/aws-builders/create-your-first-s3-bucket-using-aws-cdk-cj7
public class Person {
    public string Name { get; set; }
    public int Age { get; set; }
    public List<Person> Children { get; set; }
}

List<Person> people = CreatePeopleList();
gridPeople.DataSource = people;

To do what you want, you will need two DataGrid controls: one for the Person object and the other to show the Children objects. You will need to build a relationship between the two grids. Please see this web page, Creating Master-Details Lists with the Windows Forms DataGrid Control; it will give you the procedure to implement it either through the designer or through code.

Thanks for replying, but that wasn't what I was looking for. I have an object list, not a DataSet. And I don't want to display in two grids. I want a single grid, with the list of "children" objects showing as a sub-list below its parent Person object.

The DataGrid is not shown in the Toolbox by default; it must be added to it. This can be done by finding the tab in the Toolbox called Data. Right click on that tab and select Choose Items... from the context menu. In the new window that opens, select the .NET Framework Components. In the Filter text box, type in DataGrid. When the list is displayed, place a check mark next to the DataGrid control and then click the OK button. The DataGrid control is now in the Toolbox, in the Data tab. Drag it on to the Form and, in code, assign the list to it: DataGrid1.DataSource = YourListOfObjects;. Everything else is connected automatically for you.
https://www.experts-exchange.com/questions/28405571/How-do-I-bind-a-list-of-objects-which-contain-a-sublist-to-a-DataGridView.html
Molecular Database Handling¶ The OEMolDatabase class provides a fundamentally different abstraction over a molecule file than the combination of oemolistream and OEReadMolecule. The central underlying principle utilized by this class is that many operations can be performed on a molecular file without requiring the overhead of fully parsing the molecule record into an OEMolBase object. Instead, we can think of a molecular file as a database that can be manipulated with much cheaper operations than OEReadMolecule and OEWriteMolecule. Opening and Reading¶ OEMolDatabase objects provide the ability to access any molecule in a molecular database file in constant time, O(1). This is accomplished by paying the overhead of scanning the file during the OEMolDatabase.Open call. However, OEMolDatabase.Open is designed to operate extremely fast on any molecule file format OEChem TK supports. OEMolDatabase.Open is usually limited by disk bandwidth instead of parsing and perception like OEReadMolecule and OEWriteMolecule. After a database file is opened, the memory overhead of OEMolDatabase is minimal since no molecules are stored in memory. Instead, the OEMolDatabase only stores a 8 byte file offset for each molecule record in the file. Listing 1 demonstrates how to utilize this feature to retrieve the “Nth” molecule from a molecule file using the OEMolDatabase.GetMolecule method. 
Listing 1: Retrieving the Nth molecule in a file

package openeye.docexamples.oechem;

import openeye.oechem.*;

public class NthMolecule {
    public static void main(String argv[]) {
        if (argv.length != 3)
            oechem.OEThrow.Usage("NthMolecule <input> <output> <index>");

        OEMolDatabase moldb = new OEMolDatabase();
        if (!moldb.Open(argv[0]))
            oechem.OEThrow.Fatal("Unable to open " + argv[0]);

        oemolostream ofs = new oemolostream();
        if (!ofs.open(argv[1]))
            oechem.OEThrow.Fatal("Unable to open " + argv[1]);

        int idx = Integer.parseInt(argv[2]);
        OEMol mol = new OEMol();
        if (!moldb.GetMolecule(mol, idx))
            oechem.OEThrow.Fatal("Unable to read a molecule from index " + idx);

        oechem.OEWriteMolecule(ofs, mol);
        ofs.close();
    }
}

Listing 1 checks the return value of OEMolDatabase.GetMolecule for false, indicating the molecule record at that position in the file does not contain a valid molecule. For example, molecules without any atoms are valid records in .sdf files.

Note

OEMolDatabase.Open is still an O(N) operation, since it must learn the position of each molecule record in the file. However, this method is significantly cheaper than using OEReadMolecule, being limited by hard disk bandwidth instead of processing speed. The OEMolDatabase.Open method can also be sped up by creating an associated .idx file as described in the Index Files section.

Direct Data Access¶

OEMolDatabase achieves much of its speed by treating molecules as chunks of bytes instead of OEMolBase objects. This abstraction is leaked a little by providing users access to the raw bytes of a molecule record through the overload of OEMolDatabase.GetMolecule that takes an oemolostream. For example, the user could pass this method an oemolostream that has been opened with oemolostream.openstring in order to dump the desired bytes to an in-memory buffer.
Listing 2 demonstrates how to use this feature to retrieve a subset of molecules from a database file, similar to how the LIMIT and OFFSET keywords work in an SQL query.

Listing 2: Retrieving a subset of a file

package openeye.docexamples.oechem;

import openeye.oechem.*;

public class DatabaseSubset {
    public static void main(String argv[]) {
        if (argv.length != 4)
            oechem.OEThrow.Usage("DatabaseSubset <input> <output> <offset> <limit>");

        OEMolDatabase moldb = new OEMolDatabase();
        if (!moldb.Open(argv[0]))
            oechem.OEThrow.Fatal("Unable to open " + argv[0]);

        oemolostream ofs = new oemolostream();
        if (!ofs.open(argv[1]))
            oechem.OEThrow.Fatal("Unable to open " + argv[1]);

        int offset = Integer.parseInt(argv[2]);
        int limit = Integer.parseInt(argv[3]);
        int maxIdx = offset + limit;
        for (int idx = offset; idx < maxIdx; ++idx) {
            moldb.WriteMolecule(ofs, idx);
        }
        ofs.close();
    }
}

Note

The oemolostream must be set up to write output in the exact same file format that the OEMolDatabase was opened on. If file format conversion is also desired during the read operation, the user should use OEMolDatabase.GetMolecule to read the molecule into an OEMolBase and then use OEWriteMolecule.

Title Access¶

Molecule meta-data is often useful for manipulating databases regardless of the molecule connection table. For this reason the OEMolDatabase provides access to the molecule title through the OEMolDatabase.GetTitle method. OEMolDatabase.GetTitle returns the same string that would be returned by OEMolBase.GetTitle if the molecule were read in with OEReadMolecule. The difference is that OEMolDatabase.GetTitle is more efficient, because it will only parse the title from the molecule record and skip the rest of the bytes in the record.

Listing 3 demonstrates how to use OEMolDatabase.GetTitle to implement a more efficient version of the molextract example.
Listing 3: Extract molecules by title

package openeye.docexamples.oechem;

import openeye.oechem.*;

public class DatabaseExtract {
    public static void main(String argv[]) {
        if (argv.length != 3)
            oechem.OEThrow.Usage("DatabaseExtract <input> <output> <title>");

        OEMolDatabase moldb = new OEMolDatabase();
        if (!moldb.Open(argv[0]))
            oechem.OEThrow.Fatal("Unable to open " + argv[0]);

        oemolostream ofs = new oemolostream();
        if (!ofs.open(argv[1]))
            oechem.OEThrow.Fatal("Unable to open " + argv[1]);

        String title = argv[2];
        for (int idx = 0; idx < moldb.GetMaxMolIdx(); ++idx) {
            if (title.equals(moldb.GetTitle(idx)))
                moldb.WriteMolecule(ofs, idx);
        }
        ofs.close();
    }
}

Note

Multi-conformer .oeb files can have multiple titles per molecule record. The top-level OEMCMolBase can have a title, and each OEConfBase can have a title as well. OEMolDatabase.GetTitle will only return the title of the top-level OEMCMolBase object and makes no attempt to search for a title among the conformer data. In practice this is fine, since OMEGA will leave the OEMCMolBase title the same as in the input file and append warts to the individual conformer titles.

Index Files¶

The speed of OEMolDatabase.Open is limited by how fast data can be read from disk. For this reason, file position offsets can be precomputed and stored in a parallel .idx file. OEMolDatabase.Open will automatically detect the presence of this file, based upon the name of the file being opened, and use those precomputed offsets instead. For example, if the database file is called "my_corporate_conformers.oeb", OEMolDatabase.Open will look for a file named "my_corporate_conformers.oeb.idx" to open as an index file. If an index file cannot be located, a full file scan will occur instead. For files written once and read many times, it can be highly beneficial to create a parallel index file with the OECreateMolDatabaseIdx function.

Note

OEMolDatabase.Save will automatically create a .idx file parallel to the file being saved. This behavior can be modified by the OEMolDatabaseSaveOptions.SetWriteIdx method on the OEMolDatabaseSaveOptions options class.

See also

OEGetMolDatabaseIdxFileName for the default way index file names are created.
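The offset-index idea itself is generic and easy to demonstrate outside of OEChem TK. The following Python sketch (not OEChem code; the record format here is a made-up one-record-per-line file) shows why storing one byte offset per record gives constant-time retrieval after a single scan:

```python
import io


def build_offset_index(fileobj):
    """One O(N) scan, recording the starting byte offset of each record."""
    offsets = []
    pos = 0
    for line in fileobj:          # binary mode: len(line) == bytes consumed
        offsets.append(pos)
        pos += len(line)
    return offsets


def get_record(fileobj, offsets, idx):
    """O(1) retrieval: seek straight to the stored offset, read one record."""
    fileobj.seek(offsets[idx])
    return fileobj.readline().rstrip(b"\n")


db = io.BytesIO(b"CCO ethanol\nc1ccccc1 benzene\nO water\n")
offsets = build_offset_index(db)
print(get_record(db, offsets, 1))   # -> b'c1ccccc1 benzene'
```

An .idx file is essentially such an offsets list persisted to disk, which is why its presence lets OEMolDatabase.Open skip the scan entirely.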
Database Generic Data¶

OEMolDatabase inherits from OEBase, allowing it to contain and round-trip generic data as described in the Generic Data chapter. This data is only written to and read from .oeb files. It is stored in the .oeb file as an OEHeader record at the beginning of the file. OEMolDatabase.Save will write this record back out to the .oeb file so that it can be read by a subsequent OEMolDatabase.Open operation.

Caveats¶

Warning

OEMolDatabase will make an uncompressed copy of a molecular database file when opened on a .gz, GZipped file. The temporary file will be deleted upon the destruction of the OEMolDatabase object; however, it is still recommended not to use .gz files with OEMolDatabase if it can be avoided. If a different directory is desired for the uncompressed file, alter the environment variables used by the OEFileTempPath function.

The largest caveat when working with OEMolDatabase objects is that they require a file on disk to provide storage. This allows the object to have a very small in-memory footprint, at the expense of higher-latency access to individual molecule records. The OEMolDatabase object makes no attempt to cache molecules that may be frequently read, leaving this up to the user at a higher level, or to the operating system through its caching of frequently accessed disk pages. This is also the reason why .gz files are not well supported: the OEMolDatabase needs to be able to read a molecule record by seeking to a particular location in the file. The upside is that multi-threaded applications can efficiently call const OEMolDatabase methods like OEMolDatabase.GetMolecule without any synchronization overhead.
https://docs.eyesopen.com/toolkits/java/oechemtk/moldatabase.html
Red Hat Bugzilla – Bug 280041
Please branch revisor for EPEL5
Last modified: 2008-05-15 15:15:24 EDT

Description of problem: We need revisor branched for EPEL5. Thanks.
Version-Release number of selected component (if applicable): From the devel branch please.

Package Change Request
======================
Package Name: revisor
New Branches: EL-5

cvs done.

Am I just looking badly or is this not live on EPEL yet?

I've just sent off a build of 2.0.5; 2.0.4.3 was already in CVS, not sure if a build had been sent off. Please let us know if 2.0.5 even runs... it should, but I've been doing release testing from F7.

Today revisor is not installable from epel5-testing because livecd-tools is missing. Bug 382401 has been filed, asking for livecd-tools. Will test revisor as soon as it can be installed.

Bug 382401 tells us that livecd-tools will not happen for epel5, so if revisor can work in a 'make ISO but not liveCD' way without livecd-tools, can you please adjust the Requires for the epel5 branch. If OTOH revisor cannot function without livecd-tools, then I guess we need to pull revisor from epel5 :-(

I think the only thing we would need to pull into revisor is mayflower. We've addressed the known issues stated in #382401. Jon, should we branch EL-5 upstream and remove live media options? I'd prefer to pull mayflower into revisor to enable live image creation over adding another branch to support EL-5 without live creation. I'll work to pull in mayflower now and send off a build.

*** Bug 425830 has been marked as a duplicate of this bug. ***

OK, this will need testers. If I find some time, I'll provision an el5 guest to do a test.

In response to comment #12: the package from epel-testing installs fine but fails to start for me

# yum --enablerepo=epel-testing install revisor
[...]
# rpm -qa revisor\*
revisor-comps-2.0.5-15.el5
revisor-2.0.5-15.el5
# rpm -V revisor revisor-comps
# revisor
Traceback (most recent call last):
  File "/usr/sbin/revisor", line 43, in ?
    import revisor.base
  File "/usr/lib/python2.4/site-packages/revisor/base.py", line 34, in ?
    import revisor.cfg
  File "/usr/lib/python2.4/site-packages/revisor/cfg.py", line 37, in ?
    import kickstart
  File "/usr/lib/python2.4/site-packages/revisor/kickstart.py", line 29, in ?
    class RevisorKickstart:
  File "/usr/lib/python2.4/site-packages/revisor/kickstart.py", line 134, in RevisorKickstart
    def _group(self, name = "", include = constants.GROUP_DEFAULT):
AttributeError: 'module' object has no attribute 'GROUP_DEFAULT'
#

What should I do to debug this?

(In reply to comment #14)
> This message is a reminder that Fedora 7 is nearing the end of life.

This bug looks confusing in any case; there might still be open issues, but I suspect they are better acted upon in a separate bug.
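The AttributeError in the traceback above is ordinary Python behaviour, independent of revisor itself: a default argument value is evaluated when the def statement runs, so a missing module attribute blows up at class-definition time (that is, during import), not when the method is later called. A stand-in sketch of the failure mode (the module here is fabricated, not revisor's real constants module):

```python
import types

# A stand-in for a 'constants' module that lacks the expected attribute,
# mimicking the kind of version mismatch hit here on EL-5.
constants = types.ModuleType("constants")

try:
    class RevisorKickstartSketch:
        # The default value is evaluated as soon as this 'def' executes,
        # i.e. while the class body runs during import.
        def _group(self, name="", include=constants.GROUP_DEFAULT):
            return name
except AttributeError as exc:
    error = str(exc)
    print(error)   # the message names the missing GROUP_DEFAULT attribute
```

That is why the traceback points at the def line in kickstart.py rather than at any call site.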
https://bugzilla.redhat.com/show_bug.cgi?id=280041
⚠️ This library is still in its early days. Expect a few rough edges during the first couple of weeks. If you encounter anything unexpected, please open an issue and we will fix it as quickly as possible.

This library makes Shadow DOM a first-class citizen of Cypress. Commands such as get, contains, click and type now understand Shadow DOM, so you get all the sweetness of Cypress while keeping the benefits of Shadow DOM. You no longer have to worry about using special commands for dealing with Shadow DOM: everything just works out of the box. This library does it by patching internals of Cypress.

First of all, this shows that it's possible for Cypress to support Shadow DOM. The patch also acts as a guide for where Cypress could update its own code if they were to add native support for Shadow DOM in the future. If you are interested in showing support for native Shadow DOM support in Cypress, please go to this issue and take part in the discussion.

The package is distributed via npm and should be installed as one of your project's devDependencies.

npm install cypress-shadow-patch -D

Add the following line to the top of cypress/support/commands.js:

import "cypress-shadow-patch";

In order to apply the Shadow DOM patch to your local installation of Cypress, you will have to run the following command. This command will tell you what's being patched. Read the FAQ to learn why we need to apply this patch.

npx cypress-shadow-patch apply

Note: You can always remove the patch by running npx cypress-shadow-patch reset or cypress cache clear.

To automate this process, we recommend that you add a postinstall script to your package.json file like this:

{
  "scripts": {
    "postinstall": "npx cypress-shadow-patch apply"
  }
}

After installation you can use Cypress as you normally would. Selectors now support JQuery and can pierce through shadow boundaries.
cy.contains("Elements").click();
cy.get("#container my-text-field[name=email]").type("test@test.com{enter}");
cy.get("container-element")
  .contains("Get Started")
  .click();

In order to only search in the light DOM, use { shallow: true }.

cy.get("#get-started", { shallow: true }).click();

A lot of commands/features have been tested with this library, but some commands might not work with Shadow DOM right now. In this section we will note if we find commands/features that do not yet work with cypress-shadow-patch.

This library uses the wonderful library query-selector-shadow-dom to query the DOM. This library provides a querySelector that can pierce shadow boundaries without knowing the path through nested shadow roots. We patched the library to support JQuery selectors, because Cypress requires JQuery support.

A shortcoming of existing approaches for adding Shadow DOM support to Cypress is that they need to 'reimplement' various Cypress commands. For example, you can no longer use type; you have to use shadowType, and so on. This is needed because Cypress validates the input/output of executed commands to ensure that they, for example, are connected to the DOM. Cypress doesn't think elements within Shadow Roots are connected to the DOM, and fails.

The thing that makes this library special is that you can use the existing commands documented in Cypress with Shadow DOM support. The overall goal is to seamlessly make Shadow DOM a first-class citizen of Cypress. Here are some of the most important design goals we had in mind while building this library.

We decided that, in order to achieve our goals as described above, we needed to patch the internals of Cypress. An alternative solution would have been to upload/publish executables from a fork of Cypress, but we decided against it because we think this would become a bigger burden than simply applying the patch using the CLI. We maintain a fork of Cypress with Shadow DOM support.
Here you can see which changes we made to the internals of Cypress. Feel free to contribute. Here is a short overview of the main challenges encountered:

Cypress:
- Cypress assumes that the root of a node is always the node's document, by using el.ownerDocument, but this is not always the case. Instead, use el.getRootNode() where applicable to get a Shadow Root instead.
- Instead of just finding document.activeElement, traverse recursively through shadow boundaries using activeElement.

JQuery:
- isConnected and parent.

The aim of this library is to make Shadow DOM a native part of Cypress. When writing tests, you shouldn't care about shadow boundaries; you should care about writing good tests with simple selectors. Therefore we chose to make Shadow DOM opt-out instead of opt-in.

The main limitation of patching Cypress internals is the fact that we need to generate a new patch for each new version of Cypress to keep the patch up-to-date with the new code. This means that this library will always be slightly behind in supporting the newest version of Cypress. We generate the patch using a fork of Cypress and diffing the file cypress_runner.js. We will make sure to add a new patch for each new major and minor version of Cypress. Currently this library works with the following versions of Cypress: 3.8.0

You are more than welcome to give our library a spin and give us feedback on how we might improve it. If you like the library, you are more than welcome to share it with other people, so that more people get to know about this tiny corner of the internet.

This library is still in its early days. If you find a bug, a use-case that is not covered, or have any ideas for improvements, you are very welcome to open an issue on this repository.

A lot of awesome people have already built some great libraries that add support for Shadow DOM in Cypress. Here are a few of the ones we could find:

If you use this library you will also need to patch Cypress when running your CI.
Here's an example on how you would run your Cypress tests using Github Actions.

- uses: cypress-io/github-action@v1
  with:
    browser: chrome
    build: npm run build_and_patch

Note: The command build_and_patch would need to build your project and run cypress-shadow-patch apply.

Important: You will need to patch Cypress in the build hook because the Github action cypress-io/github-action@v1 overwrites the Cypress executable when running. Therefore you cannot patch Cypress before this action.
https://developer.aliyun.com/mirror/npm/package/cypress-shadow-patch
Log message: p5-IPC-Run: update to 20180523.0.

20180523.0 Wed May 23 2018
- #99 - Fix using fd in child process when it happens to be the same number in the child as it was in the parent.

Log message: Update to 0.99
Upstream changes:

Log message: Recursive revbump from lang/perl5 5.26.0

Log message: Updated p5-IPC-Run to 0.96.
0.96 Fri May 12 2017
- Update bug tracker to

Log message: Updated devel/p5-IPC-Run to 0.95
--------------------------------
+ an additional unit test
- Catching previously non-detected malformed time strings
- Let Timer accept all allowable perl numbers
- allow the OS to choose the ephemeral port to use
- Don't use version.pm to parse the perl version in Makefile.PL
- perltidy
- Do not import POSIX into local namespace (it's a memory hog).
http://pkgsrc.se/devel/p5-IPC-Run
#include <Q3WhatsThis>

This class is part of the Qt 3 support library. It is provided to keep old source code working. We strongly advise against using it in new code. See Porting to Qt 4 for more information. Inherits QObject.

This virtual function is called when the user clicks inside the "What's this?" window. href is the link the user clicked on, or an empty string if there was no link. If the function returns true (the default), the "What's this?" window is closed, otherwise it remains visible. The default implementation ignores href and returns true.

Display text in a help window at the global screen position pos. If widget is not 0 and has its own dedicated QWhatsThis object, this object will receive clicked() messages when the user clicks on hyperlinks inside the help text. See also clicked().

Enters "What's This?" mode and returns immediately. See also add().

QWhatsThis::whatsThisButton( my_help_tool_bar );
http://doc.trolltech.com/4.0/q3whatsthis.html
Shh! Don't Digg this post. This is a secret post.

Build something in Seaside. Something as idiotic and simple as a table in HTML. It's fun.

Rails has hit the point where you can make crazy money. Yay Rails. That's awesome. I'm happy for Rails. But if you play with Seaside, even just for a few hours, you quickly realize that Seaside is to Rails what Rails is to J2EE.

The logo on the shirts at RailsConf was an exponential growth curve. Because Rails has one. Why doesn't Seaside have that kind of adoption curve? Because Rails runs on Unix servers, and Seaside runs on Squeak VMs. Rails runs on Subversion, and Seaside runs on Monticello. Seaside runs on a dialect of Smalltalk that the industry has already reached an opinion about, and Rails runs on a dialect of Smalltalk which looks more like a cross between Perl and Python.

From one perspective, Seaside has a marketing problem. If your goal is to find a job writing Seaside, then Seaside definitely has a marketing problem. But if your goal is to find a secret weapon, Seaside doesn't have a marketing problem - Rails has a marketing problem. There's nothing secret about that weapon.

I loved Ayn Rand books when I was a teenager; these days I'm embarrassed to admit I even read them at all. But there's a scene in Atlas Shrugged where the heroine walks into a diner and finds the greatest composer on the face of the planet working there, flipping burgers. Sometimes Seaside feels like that. How dumb must the tech industry be, overlooking something like this? Is it run by monkeys? What the hell is going on?

You are the big monkey man! You post are so funny......... Thanks. :-)

So have you built anything non trivial in it yet?

A good question! The answer's no. Ever since I got to Los Angeles the siren song of showbiz has been luring me away from programming. Pretty much all the free time I was spending on programming I'm now spending on acting classes.
I was fiddling with it last night and trying to think of something interesting to build. I definitely need a better blogging system than Blogger, and a better portfolio site than my current one, so a combined blog/portfolio site might be a good thing, but that's not really nontrivial. Honestly I haven't had any interesting Web app ideas recently. I am working on adapting Kirk Haines' CSS DSL for Rails, but that's about it at the moment. I had a mental note to build a spellchecker with Seaside, I think I'll give that a go. I don't think it's a marketing problem, as much as an engineering problem. I've taken a look at Seaside (although admittedly not a long enough look), and Smalltalk, and Lisp -- but there's something missing in these environments that is overwhelmingly evident in Rails, or pieces of J2EE. It's the practical side. Here's how you write the code. Here's where the stuff lives in your file system. Here's how you source control it, deploy it, upgrade versions of the framework, upgrade deployed versions of your code, connect it to an existing database, add security, integrate with other stuff, etc. Rails is a reaction to real-world problems, whereas some of the more sophisticated solutions out there have extremely elegant science, but inconvenient engineering. I agree that marketing is a big part of Rails, but it had to impress and conquer a lot of minds before the concept of marketing was even relevant. Maybe Seaside can still get over that hump -- I'm going to take another look at it. Actually Seaside has solutions for these things as well, I think GemStone will even make load-balancing Seaside effortless, but the information is difficult to obtain, and on Unix, you know where to find all this stuff, and in many cases already know it (Subversion, for instance). This is what I mean by a marketing problem. It takes time to find out what version control you should use with Seaside, for instance. 
And if you can't find it, despite its existence, that's where the conclusion of an engineering problem comes from. Seaside has those solutions - they're just not easy to find. Findability is a very important part of marketing.

The 'secretiveness' of seaside is one of the most compelling parts of it.

So what will you use your secret weapon for? Maybe to build the next web 2.0 application that will put you on google's radar. It may have worked for some, but for everything else the obscurity of the technology is just holding us back. A lot of bright minds and bright ideas will never come to the platform. I do believe using smalltalk will make me a better programmer, but I believe more in pair programming and I can't find anyone to pair program in smalltalk/seaside with me.

Personally I don't know how RoR managed to pull it off. It still hasn't hit the big time in adoption, but everybody at least knows it exists now. It's got conferences built around it. There are books on Ruby and Rails, and I'm sure that helps a lot.

I agree Seaside has a marketing problem. It has part of the problem solved in the sense that there are books you can find online (free ones) on the Smalltalk language itself. There are books on Squeak you can find at Amazon. A few, at least, will teach you about most of the development tools you can use within it. There are NO books on Seaside. I think that would be a good start. One reason I found out about it is there are some demo videos of it on Google Video.

One of the problems with Squeak is it's been going into a "divergence" phase. It was originally written by Alan Kay and his compatriots as an educational tool for school kids, and it's been kinda successful at that. It is used in some school systems around the world (and it's in the OLPC). I heard just recently that a North Carolina school district is going to start using Squeak soon. Most of the literature out there on it is focused on education.
The most recent one, which you can find at the major booksellers, is "Squeak: Programming With Robots", by Stephane Ducasse. So there's energy behind that. There's developer energy behind Seaside, but there's hardly any support. To find out about it you have to "pay attention" to the Seaside mailing list, and any blogs that talk about it, like Ramon's, or Lukas Renggli's. Otherwise, there is always the source code...

It's an adjustment all around to use Squeak/Seaside. You have to think about the development process in its entirety in a different way. Some/most developers don't like that. They like the fact that you can use any editor you want with Ruby, and you can save code in files. That's at least familiar to them. One of the objections is they see Squeak as forcing them to use one code editor (there's more than one, BTW). It seems to me, though, that programmers are willing to put up with a lot just to get this. In Ruby you basically have a line debugger. I don't know if there's even a full screen debugger for it. At least you get this in Squeak, and you get the ability to correct code while the app. is running, just like in Ruby. Apparently there's a way to get Emacs to work with Squeak, if that's really your cup of tea.

The "conclusion" the industry has come to about Smalltalk may be that it chose Unix/Linux as its development environment rather than the "one GUI to unite them all" approach of Smalltalk. Smalltalk, like Lisp, became somewhat popular in the 1980s. From what I remember, in terms of finding work, Smalltalk was more popular than RoR is today. Lisp may have been at the level Ruby is now. They both flamed out in the early 90s. The reason is still kind of a mystery to me. I've heard a couple people blame Java, that it sucked the oxygen out of the Smalltalk community, but I don't understand why. Maybe it was the draw of the web, and the Smalltalk and Lisp communities were behind the curve on it.
The internet changed a lot of things in the technology world back then. This is just a guess, but I think it killed Commodore and Atari (the computer makers) as well. Amigas did fine on the "text internet", but the web was another matter. The ST had absolutely no internet connectivity back then (unless you count logging in to a terminal server, which I don't), because Atari flat out didn't anticipate it.

Posted confusingly: "It seems to me, though, that programmers are willing to put up with a lot just to get this. In Ruby you basically have a line debugger." Meant to say: It seems to me, though, that programmers are willing to put up with a lot just to get the ability to save to files and use their own editor.

I too am using RoR for my early-afternoon-to-late-night work. ;-) I started with Smalltalk a few days ago and have been reading a lot about Seaside and Aida/Web (written by Janko Mivsek). It seems to me, both can run circles around today's web frameworks. Even RoR. I have no real hands-on experience with either of them (yet), but from what I've seen so far, choosing either Seaside or Aida/Web for web development is a step in the right direction. At least for some types of web applications. Of course, if one plans a web app with a few 10,000 concurrent users, there are other tools available. ;-)

You're on the right track Giles, but I think the problem is engineering as well as marketing. I agree with Matthew that there are some practical considerations as well. Coincidentally, only last night I was watching the ec3 presentation on Seaside and while it seems very interesting for the project I have in mind, my first thought was, "how am I going to deploy this?" The Seaside page doesn't have a whole lot of deployment info out there. Even dabble-db deployment info is hard to find. Ruby and Rails build on familiar terrain for most people. One can get a host, or even get EC2 for scaled deployment. How do I deploy squeak/seaside on a linux box?
I just haven't been very lucky in finding that information readily (maybe it's out there?) I guess it could be done via X somehow, but that information is not out there readily available. Thanks for a thought provoking article, maybe you should do one on Seaside deployment options. Before one can get to source control, one has to have the confidence that their legwork done on a macbook will be easily transportable to a deployable form if/when that time comes. cheers, -Amr

@Mark Miller There are a whole lot of Squeak forks:
- SmallLand
- SqueakLand
- OLPC
- Croquet
- Sophie
- Scratch

The problem is there aren't any resources maintaining it; they all flow into one of the forks.

Monticello: just look at it. Because it is basically not maintained, everyone runs his own fork.

I/O: everyone knows I/O sucks in Squeak; nobody does anything about it.

The VM: don't even get me started. Officially everything is fine and perfect. Behind the scenes, already ten years ago they talked about the new and cool VM/JIT/GC they'd have today. Today we still have the one of ten years ago. Just for fun: there are even class comments outlining what they'd do differently.

Tools: the code browsers don't support multi selection (e.g. dragging multiple classes); no one cares. Have you recently used the Refactoring Browser in Squeak and noticed how many refactorings don't work? No one cares. Serious tool support for traits, anyone?

Morphic (the GUI framework): seriously?

And then there's what Alan said about Squeak and EToys at the London meeting with Shuttleworth (the stuff that was not published). There are so many design decisions in there that made sense in the '70s that don't make sense on a machine with 1GB of RAM.
@matthew
Here's how you write the code: code browser (standard, rb, omni, whatever).
Here's where the stuff lives in your file system: not at all.
Here's how you source control it, deploy it, upgrade versions of the framework, upgrade deployed versions of your code: Monticello (in Squeak).
Add security: do it yourself.
Connect it to an existing database: depends on the database :( (in Squeak).
Integrate with other stuff: libraries.
Seaside is just a web framework; it does no persistence at all.

Seaside is pretty cool, but I didn't keep using it because I hate that my files aren't files, and thus can't be edited with vim, or whatever else I want. The version control is sketchy, or maybe it was just a lack of understanding, but a) it seemed like i was version controlling the whole damn image and b) it *was* an image. The other problem being that nobody offers seaside hosting. No, you have to pay for a box entirely for you, which I'll do for a commercial app with a real future, but for something, like seaside, to really catch on people have to be able to put up apps that exist for fun, learning, or to scratch a personal itch, not profit. *If* those apps go over well then maybe they'll invest the time in writing a commercial one.

@masukomi There is free non-commercial hosting available at Seaside-Hosting. This is done by the current core developers of Seaside, so these guys know what they're doing. The reason they don't offer commercial hosting is because they can't offer it at a price that competes with a virtual or even full root server rental. If you're willing to pay this premium, write them a mail; they work for money too. There is also commercial hosting available at Seaside Parasol but I know nobody who uses it so I can't tell you about the quality of it.

It's true, Seaside has lots of obstacles to adoption. I don't know about the problems with Squeak but I've heard some of those things before, so I'm inclined to believe it.
Maybe that's the answer to the question in my post - without serious chops in an obscure language, you don't even know where to begin. It still doesn't really explain why people with options don't choose Seaside. For instance, virtual hosting: with virtually every other system you have a whole marketplace, with Seaside you only have a few options - but if you're in a large corporation you can set up internal Seaside apps with commodity hardware. Obviously the first question in this post's comments is whether or not I've built anything serious with it, and I have to admit the answer's no, but I've been spending a lot of time on my acting classes, and I've definitely explored Seaside's libraries and learned enough about it to enjoy it a lot.

Actually, wait. I know exactly the answer to this question. There's very interesting research in online music demonstrating that the difference between excellent material which becomes a hit and excellent material which does reasonably well is essentially random. What's remarkable isn't that Seaside is excellent but obscure; that's totally logical. What's remarkable is that Rails is a hit. Looking at Seaside and comparing it to Rails just creates unreasonable expectations. It's like asking why HAppS isn't exploding when it hasn't even hit version 1.0 yet.

The problem with seaside and squeak is it's too different from what people are used to. It's beautiful technology, but you have to do too much learning and change all your workflow to use it. I've been playing with it for months but I never had enough time to use it for real projects. The good stuff about platforms like Python, and Unix in general, is that you can learn and add new tools and technologies to your workflow as you need them - you don't have to switch everything at once. A secret weapon is great, but when it's too secret, there are not enough developers, and then the technology does not advance fast enough. A healthy technology cannot be too secret.

Just had a brainfart...
why not have a "Scratch-like" interface to seaside? Scratch has a widget set for simple programming constructs. Take that same approach with Seaside: build a widget set for web programming and hook those up to seaside on the back. Have two modes: a Visual mode, i.e. Designer Mode, and an Expert Mode, which basically shows the class browser and friends if you feel like going further than this "scaffolding" of sorts. This will give corporate tinkerers a quick OOTB option to create one-off apps which a Desktop type machine would be more than happy to handle. I'm not a smalltalk expert, but I'm wondering how hard it would be to build a scratch-like widget set to get people started. Right now, the examples and screencasts are for web developers; to get the general public interested, the above-mentioned out-of-the-box experience would be sure to create a lot of converts. Actually I'm talking about something like Peter Howell's Hands On environment done in Squeak. In my opinion, that should be the next logical step for Seaside (but I'm just a lay person when it comes to web development). I would really love to see something like that though.. that would be awesome! Unfortunately the thesis website has gone off the air. I can't find any mirrors anywhere. It was a great idea though.

@Anonymous: Re: How am I going to deploy this?

Someone else (perhaps you?--another anonymous) posted about two sites that host Seaside. I know that's paltry. RoR hosting sites are a dime a dozen. One of the major obstacles to Squeak hosting at this point is neither of these hosters hosts a database, at least the last time I checked. At this point Seaside is basically for those who can self-host. You can use the Seaside-Hosting service (previously mentioned and linked here) for the fun, creative projects. They host for free. As for self-hosting, check out Ramon Leon's blog. He has a couple posts on how to host and scale Seaside, both on Windows Server and Linux.
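For what it's worth, the usual shape of a self-hosted Seaside deployment on Linux is several headless Squeak images behind a front-end proxy with sticky sessions, so each session keeps hitting the image that holds its state. Here is a hypothetical Apache mod_proxy_balancer fragment; the ports, the `route` names, and the `_s` session key are assumptions, so check them against your Seaside version before relying on this:

```apache
# Two Seaside images, each a headless Squeak VM listening on its own port.
<Proxy balancer://seaside>
    BalancerMember http://127.0.0.1:9090 route=img1
    BalancerMember http://127.0.0.1:9091 route=img2
</Proxy>
# Pin each session to one image via the session parameter in Seaside URLs.
ProxyPass /app balancer://seaside/app stickysession=_s
```

The point of the sketch is only the topology: state lives in the image, so the balancer must not bounce a session between images.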
It turns out the scaling solution is not that different from how you'd scale RoR.

As for version control, use Monticello. Again, it works differently than you'd expect. This is a challenge that everyone ends up going through when they're first learning this stuff. You have to almost blank out all of the idiosyncratic stuff you've learned over the years about how computing is supposed to work, and just think, "How would this be done more intuitively?"

One of the previous (anonymous) posts is right. You essentially do not save code into files. You can, but that's not the normal mode of operation in Squeak. Instead everything lives in an image, in memory or serialized to a file. The analogy I'd make to how it works is putting a Windows PC into hibernation, and then waking it up again. That's essentially what's going on when you start or quit the Squeak environment, except most everything lives in memory, not files. There is a text backup of every source code change you make, in what's called the "changes" file, which accompanies the image. If you make a change that was a mistake, you can revert to a previous version of what you changed. It's built in to the system.

When you want to look up a class or a method, there are helpers for finding what you're looking for. Just look it up by name, not by file. If it's in the image, it will find it for you, or give you a selection of possible matches. Code is categorized, by package names and then categories. These distinctions are non-binding, meaning that a class A in one category is not considered distinct from a class A in another category. It's just a way for you to find the class in the system. Methods are also grouped by categories (within classes).

Monticello works on this principle. Rather than version controlling the whole image (which you can do, but is pretty wasteful), you can version control packages, and method categories, with Monticello.
When you bring it up, it shows you a listing of all of the packages and method categories, and their repositories (versioning files). You can add new repositories for packages/method categories you create. I've just started using it myself, so I'm no expert yet. There's a tutorial on it here. It uses "red click", "yellow click", and "blue click" as mouse clicking terms. It defines them though. Squeak is based on the Smalltalk-80 system, which used 3-button mice. Each mouse button was a different color. Even if you don't have a 3-button mouse, you can get to the correct menu by going through a hoop or two.

If you want to save source code to individual files, you can do so. It's called "filing out". You can select a class, or a category, right-click on it, and select "fileout". Squeak will ask you for the filename you want. It will save the source to [name].st. This is viewable in a text editor. You'll notice that there's some extra mark-up in the file. This is metadata, which makes it possible to "filein" the file. If you bring up the File List (an app. in Squeak--like Windows Explorer), right-click on the [name].st file, and select "filein", it will load the file's classes and methods into the image.

The fundamental concept of Squeak that is different from almost anything else out there is that you are extending the system itself. It's like in Unix if you create a new utility. You can use it in conjunction with other utilities that are already part of the system. It's the same way with classes in Squeak. Any classes you create become part of the system and can be used in conjunction with the classes that are already there. You can have application-specific classes. Those are necessary for specific apps you create. I find it's easier to create reusable code in this system, because all of my classes are globally accessible.
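The "extra mark-up" in a filed-out .st file is the old Smalltalk chunk format: `!` terminates each chunk, and a chunk that itself begins with `!` tells the reader how to compile the chunks that follow. A hypothetical class (the `Greeter` name and methods are made up for illustration) files out roughly like this; exact details vary by Squeak version:

```smalltalk
Object subclass: #Greeter
	instanceVariableNames: 'name'
	classVariableNames: ''
	poolDictionaries: ''
	category: 'MyApp-Core'!

!Greeter methodsFor: 'accessing'!
name: aString
	name := aString! !

!Greeter methodsFor: 'greeting'!
greet
	^ 'Hello, ', name! !
```

"Filein" simply reads these chunks back and compiles them into the image.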
There is an extension to Squeak called "Tweak" which adds the concept of "islands", which I believe are analogous to namespaces in more traditional languages. You can define classes in an "island", and it will be considered different from another class by the same name on a different "island". I think that's how it works.

Re: Squeak forks

Anonymous (whoever you are), you forgot Seaside and Tweak. I think of the forks as different versions of Windows, or Unix, or Linux, etc. They each have their unique features and different ways of operating. Some OSes fork less than others. It's a matter of taste. The way Squeak has been developed is a lot like the way Linux has been developed, if you look at the community models. From what I've heard Linux development has been just as chaotic. The chaos is smoothed out by the commercial distros, but each is its own fork.

It's possible to have multiple images around, if you want. When you start Squeak up, it'll ask you which one you want to load (or I've seen demos where you can drag and drop the image onto the VM to start it up). I keep a baseline image around that's unmodified just in case I don't want to deal with the modifications I made in a different project.

Re: Seaside is just a web framework. It persists nothing at all

I beg to disagree. It persists session state better than anything else out there. True, it does not persist using a database. It uses Squeak's own facilities to persist state between round trips to the server. The great thing about it is you hardly have to worry at all about session state. Seaside does it for you. If you want to scale Seaside you need to use "sticky" sessions.

As for the VM being old, I found a blog post last year by someone who had done some benchmark tests between Squeak, Ruby, Python, and Io (Self?). Squeak beat Ruby in performance.
Maybe you still don't consider that fast, but even though Ruby has a lot more support, and is updated more often, it's still a laggard when it comes to performance.

Re: Scratch-like interface, or "scaffolds" for Seaside

There was a template system added to Seaside in one of its early versions, but it was later taken out. This was the most jarring difference between Seaside and the other web frameworks I had used. In Seaside you write your HTML in Smalltalk code, and it's quite easy to do. As for fashioning the end user interface, Seaside developers recommend you use CSS for that. It turns out that the lack of a templating system tends to make your apps more scalable. Seaside uses a component model of web page development. So the page is made up of component "pieces" that are aggregated together when the page is rendered. Think of partials in RoR or user controls in .Net, except that's the whole model. This way if you need to reuse a component in another page it's no problem. You don't have to copy template markup from one page and paste it into another, along with its code. You just re-use the component in the other page's code.

Whew! Was that enough info. for y'all?

Seaside killed by Squeak. In any other language you start with a blank page in your favorite editor. When you start in Squeak you have all those tons of things loaded on screen. Fun to play, terrible to understand. It is a main reason why i abandoned learning Smalltalk. What Squeak needs is a desktop preconfigured for PROGRAMMING. Not for amazement with bells and whistles. Only workspace, inspector, browser and a tip on how to start. AND NOTHING ELSE.

"In any other language you start with a blank page in your favorite editor."

Which is what's wrong with all those other languages. Seriously, how do you expect technology to ever advance if you can't let go of how things "are usually done". Yes, if you want to use Smalltalk you have to learn a new way of doing things.
What everyone seems to miss is that the Smalltalk way of doing things is freaking simpler than what you do now. Let go of your files, you don't need them, they're holding you back. Source control in Smalltalk is better than your file based source control. The Smalltalk IDE and environment are better than your text editor. It doesn't matter whether you believe it or not, it's true. If you think Squeak looks silly, stop downloading the public image meant for kids and go get a developer image from Damien or me. The hardest thing about learning Smalltalk and using Seaside is forgetting all that crap you've learned over the years that you don't need anymore. Smalltalk, Squeak, and Seaside aren't hard to learn at all. It's forgetting all the stuff you already know long enough to see that things can be done differently, and with much less effort, that is the hard part. Between the Seaside mailing list and a few blogs here and there, you have everything you need to learn and get answers easily. Just ask questions, someone will answer.

Smalltalk is, possibly, still too far ahead of its time. Take 70% of Smalltalk, slap a Python/JavaScript'ish syntax on it, put it in files, and call it Ruby, and suddenly everyone goes gaga over it. It doesn't say a lot about developers being open minded folk. No wonder Alan Kay always looks so frustrated. It's 2007 and the mainstream world still hasn't caught up to 1980!

@ivan - I have a Squeak image exactly like what you describe. I use it as the starting point when I tinker with Seaside. I can make it available, although my schedule's kinda crazy for the next few days. But I think you're right - just as Rails has scaffolding, generators, etc., Seaside should have a "Start Here" image. Damien Cassou has one but it's still just a bit complicated. Really, the only question I have is how do you export an image for other people to download, and how do you include the background image when you do so? (Might as well add some marketing dazzle.)
Session: GLASS: GemStone, Linux, Apache, Seaside, and Smalltalk

Detailed Session Information
Seminar #: S312
Day: Wednesday, May 02, 2007
Time: 11:00 am-11:50 am
Title: GLASS: GemStone, Linux, Apache, Seaside, and Smalltalk
Speaker(s): Dale Henrichs, James Foster

Abstract: The Seaside framework provides a layered set of abstractions over HTTP and HTML that can be used for developing sophisticated web applications in Smalltalk. Seaside was developed in Squeak and ports are available for VisualWorks and for Dolphin. While the Seaside framework elegantly addresses HTML generation and application flow-of-control issues, it still leaves a few challenges for the developer, including persistence and multi-user coordination. In this seminar we will demonstrate a port of Seaside to a new dialect: GemStone/S. As a multi-user, persistent Smalltalk implementation that has no native user interface, GemStone/S provides an excellent environment for serving HTML and keeping

Dale Henrichs's Bio: Dale Henrichs has been working in computers since 1975. He has been working on and in Smalltalk nearly fulltime since 1985 and has worked for Tektronix, Digitalk, Parc-Place/Digitalk and Gemstone. Currently Dale is a Principal Engineer on the Gemstone/S server team at Gemstone.

James Foster's Bio: James Foster has been working with computers since Fortran programs were submitted on punch cards and "core" was a fine mesh of wires with tiny magnetic rings (1971). He has been programming in Smalltalk since 1993 and has developed applications with VisualSmalltalk, VisualAge, VisualWorks, Dolphin, and GemStone/S. James is currently QA Lead on the

Well, that was kind of weird. A little context would have been nice!

Here's a little context for the link - it's a presentation on Seaside and GemStone from IT 360, earlier this month. I mean, for the comment, not for the link.

@Giles To share your image, just put the .image and .changes on a server. That's it.
If you've set a kickass background, it comes along for free. Some people use Squeak as a presentation vehicle with all kinds of crazy background images going on in multiple projects. All they share is the image and changes.

@Ramon: As you can tell I agree with you. :) And as Alan Kay has said sometimes, "Forget the adults." They're hopeless. I read a blog post a couple days ago that was called something like "Why I took the kids off of Python". He said that he introduced his kids to Python to learn programming, doing turtle graphics, but the whole thing with switching window focus between the editor and the runtime environment, and managing files, made his kids spend most of their time figuring out how to manage the system rather than programming. So he had them use Squeak instead with Ducasse's book, and they loved it. It just made sense to them. It's so ironic it's funny. Squeak is great for total beginners who have no context on what programming is about, but it's not so hot for experts, who are already so used to a different system that they find no benefit in it at all.

When I talk to people about Squeak and how to use it I try to use analogies to what they already know. Usually they're analogies to OS features. I read a conversation recently between Alan Kay and some folks on the Squeak developer list. They were discussing operating systems, and Kay said something like, "The operating system is what's left out of the programming language," and most OSes do only a fair job of bringing the two parts together into a functioning executable program. Smalltalk was originally the operating system of the computer it ran on. Today, not so much, but it still "thinks" it's an OS all the same.

I've been reading some of the trackbacks to Giles's post. Some of the people who've commented in these places have pointed to just what Giles talked about: most smalltalkers don't want the community to grow, or want it to grow at a slow pace.
They're happy that it's a "secret sauce" that's not handed to people on a silver platter. I don't agree with the attitude, as I imagine you don't either. As is pretty clear, the key to higher adoption is clear documentation. Until that comes along, most people are going to turn their nose up at it.

The downside to the obscurity is that if you want to say "No more Rails! It's Seaside for me!" then you kind of have a more tightly constrained set of job options. I'm actually OK with the obscurity on every other front; certainly the huge popularity of Rails makes crufty Rails apps an inevitability for the near future, whereas the upside to the obscurity is nobody will ever curse under their breath about having to maintain this frustrating, crufty legacy app in Seaside.

But the obscurity isn't actually what bothers me. What bothers me is that the obscurity doesn't make sense for business reasons. It seems as if Seaside's strengths are known, and known to be significant. It would seem there's clear evidence that the effort pays off if you take it, and once you get past the hump of total unfamiliarity, it's also fun. Maybe it's just that there aren't more programmers who make decisions for business reasons, or business people who understand this sort of information; that's what mystifies me. You could say Seaside's a terrible business decision, because there's no market, or you could say it's an incredible business decision, because it's pure blue ocean strategy; the tech is gold and there's nobody else in the market. I don't know.

@david - thanks! I'm going to see about putting together a real basic "getting started" image then.

@Giles: Come to think of it I don't remember (and I'm not going to take the time to look now) whether these commenters elsewhere were speaking in the present tense or past tense. They may have been talking about why Smalltalk flamed out in the early 1990s.
To clarify, what they were saying is that most Smalltalkers like (or liked) the "secret sauce" quality of it, because they feel it gives them a competitive advantage in the business sense. It's like what Paul Graham talked about with ViaWeb. He kept his use of Lisp a secret. He was afraid if word got out then his competitors would start using it. I don't know how real that fear was. It sounds to me like he discounted the fact that Lisp wasn't a popular language with developers in the first place. Even if competitors wanted to use it they'd be hard pressed to find programmers who were proficient at it.

I think a different mindset is emerging. At least I hope it is. Squeak has an open source license of a sort. I can't remember, but I don't think it's GPL. It encourages contributions to the community somehow. The software that comes with it encourages it as well. The entire system's source code is available right in the system, and with Monticello it's easy to share what you've written with others, using squeaksource.com as a repository (the Squeak equivalent of SourceForge). Seaside, Magritte, Glorp, Pier, etc. are good frameworks that solve business problems, and are completely open source. So it seems like now more sharing is going on than maybe there used to be. On the Lisp front there's Peter Seibel's "Practical Common Lisp", which talks about how to solve business problems with it.

I like the idea of popularizing both Squeak and Lisp, because it will ultimately advance the state of the art. It does only a few people any good if it's kept secret. The wider society doesn't benefit at all. More money and wasted energy gets flushed down the drain on failed projects than is really justified in our industry, partly because we're using languages that aren't that powerful. For complex projects, the more code you have to write, the more the project gets away from you. You have less control over it.
Even if Smalltalk and Lisp were to become popular and, as you predict, the code gets messier, and the competitive advantage goes away, the state of the art in programming can and should still advance. There's Self, the prototype-based language. There's Erlang, etc. In years to come there will be even better ones. As Alan Kay said a few years ago, even though Lisp and Smalltalk still look good, compared to what's out there now, "they're really quite obsolete", and, "Smalltalk is like an ancient Greek play that was just better than what most other cultures had come up with." I think he also compared it to the development of the arch for building, as opposed to making buildings out of blocks and huge sculpted rocks, like the Egyptians did. We've got a long way to go before building software becomes like building the Empire State Building, which took a few hundred builders less than a year to construct, as opposed to tens of thousands of builders taking 20 years to build a pyramid.

Oh, and I found the link to "Why we took the kids off of Python". I'm not trying to bash Python as a language. I was using this article to make a different point.

"Maybe it's just that there aren't more programmers who make decisions for business reasons, or business people who understand this sort of information, that mystifies me."

There are a LOT of programmers who make decisions for non-business reasons. It's my theory that most programmers don't understand business. I'll speak for myself. I only understand it a little. When I paid more attention to business computing I used to hear ALL the time programmers groan on and on about how the business managers were making stupid IT decisions. And a lot of times they were right. You've talked about this some yourself. Most programmers couldn't run a successful business if their lives depended on it. They have their favorite technology and they want so badly for that to be the only thing that matters, but it doesn't.
You still have to convince people to use what you've created, but that means distilling the whole thing down to where it's useful for them. Whether it's fun or interesting to you does not matter. You know all this, of course. The main obstacle to dynamic languages becoming popular is unfamiliarity, but this is kind of a chicken and egg problem. Up until recently it wasn't too practical to use dynamic languages, except on high-end systems, because otherwise they ran too slowly. The hardware wasn't designed to run this stuff, and still isn't, but now it's fast enough that it runs at a decent speed. These languages are viable now. So I think people are beginning to understand the advantages of them. You see that with Ruby and Python. What I've been able to glean from this discussion is that Ruby and Python take what people are already familiar with and extend it. They're file-based, and Perl programmers will find some things that are familiar. Perl has its own history. Perl became popular in the 90s as a way of building web sites (in conjunction with C++). My own theory is Perl became popular for that because that's what Unix system administrators used before the web became popular. They used it as a more powerful shell scripting language for managing processes. It's natural to assume that Unix system administrators became webmasters when that came along, and probably set the standard for what languages would be used. As you've seen in this discussion, Smalltalk is unlike anything else out there. It really is like learning a new operating system. A lot of people who would be interested in it are already steeped in the way Unix does things and they like it. They've always frowned on GUI-intensive interfaces anyway. I think the people who would be more amenable to Squeak are people like you, me, and Ramon: Windows and Mac users/developers. We're already familiar with working with a GUI environment. 
We work in it and we like it, but we also have experience developing for the web. The thing is, Squeak has an image problem. People who like GUIs appreciate aesthetics (alright, Ramon, I think I've figured this out now...). Most newbies to Squeak get the standard version which screams "KID'S TOY" to them. I've kind of been at fault for this in my own small way, because when I blogged about where to get it, I directed people to the standard version. It doesn't make a good first impression--except to kids. The reason I got interested in it at all is I was introduced to the Smalltalk language 16 years ago in college, and I really liked it then. I learned the history of it more recently. The Smalltalk system invented at Xerox PARC was a HUGE inspiration for Apple to create the Lisa and the Macintosh. All of this has spurred me on to learn it more, even when I'd get discouraged with it. You kind of have to fall in love with what Squeak is, rather than how it looks in order to appreciate it. Most people go by first impressions, which is natural. Every once in a while, like you've seen in these comments, I see Smalltalk programmers who like a more refined interface, and they go with the commercial Smalltalk implementations. So if people diss Squeak, it doesn't mean they don't like the fundamentals of it. They just like a nicer package, and Seaside will run on a couple of them.
http://gilesbowkett.blogspot.com/2007/05/seasides-marketing-problem.html?showComment=1180438980000
FDA Raids Compilation Meetup I’d like to see us compile documented lists like these for other federal agencies, as well. It would be a good resource and would drive traffic to our website. -----Attachment----- Timeline of FDA raids against farmers, health clinics and dietary supplement providers- 1985, July 7. FDA agents raid the Burzynski Research Clinic (Texas), steal 200,000 medical and research documents, and force Dr. Stanislaw Burzynski to pay for copies to be made of them. No official charges are ever filed by the FDA (). - 1987, February 26. Twenty-five armed FDA agents and US Marshals storm offices of the Life Extension Foundation (Florida), terrorize employees and seize thousands of nutritional products, materials, computers, files, and newsletters. Eighty percent of seized items are later determined not to even have been on the warrant (). - 1988, November. FDA agents raid Traco Labs (Illinois), seize several drums of black currant oil as well as many containers of encapsulated product. The FDA claims the capsules the oil was being put into are an "unapproved food additive" (...). - 1998, Summer. FDA agents seize entire inventory and business records of Pets Smell Free (Utah), a company that produces a natural product for eliminating pet odor. The company later wins a lawsuit against the FDA in court (). - 1990, October 6. Federal agents raid HA Lyons (Arizona), a women-run, home-based mailing service that publishes materials for vitamin companies. Armed agents seize all business records and literature, and even try to steal the owner's checkbook and cash. The FDA eventually drives the company out of business (). - 1990, Fall. FDA agents raid Highland Laboratories (Oregon), a company that produces vitamins and nutritional supplements. The agents do not present a warrant, but proceed to seize everything except for office furniture, and threaten employees with violence if they fail to comply (). - 1990, March. 
FDA agents raid Solid Gold Pet Foods (California), seize all pet food products without a warrant, and shut down the store. Owner Sissy Harrington-McGill is later indicted, and spends 179 days in prison with leg irons clasped to her legs (). - 1990. Agents from both the FDA and US Postal Service twice raid Century Clinic (Nevada), and steal chelation products, computers, and various other equipment. No official charges are ever filed against the clinic; however, both illegal raids go unpunished (). - 1991, Fall. FDA agents raid Scientific Botanicals (Washington), a nutritional supplement company, and seize herbal extracts and literature. FDA strong-arms company into complying with its unlawful demands before agreeing to release seized products (). - 1991, December 12. FDA agents raid Thorne Research (Idaho), and seize $20,000 worth of vitamin products, and 11,000 pieces of literature. The company cannot afford to fight the battle in court because of high legal costs, and decides to no longer publish literature (). - 1991. FDA agents raid NutriCology (California), a nutritional supplement company. All FDA injunctions against it are later tossed out of court (). - 1991. Agents from the FDA and the Texas Department of Health again raid the Burzynski Research Clinic (Texas), and seize more products and materials. Dr. Burzynski eventually wins the fight against the FDA (...). - 1991, March. Armed Mexican police officers raid offices of alternative cancer clinic in Tijuana, and kidnap the owner without warrant or charges. They then ship him across the US border and into the hands of the US Justice Department, where he unlawfully spends two years in prison (). - 1992, May 6. Agents from the FDA and officers from the King County Police Department raid the Tahoma Clinic (Washington), a natural health clinic. Because Dr.
Jonathan Wright has been giving patients injectable B vitamins in high doses, agents decide to storm the clinic with guns drawn, and seize product, computers, records, and other products. The FDA shows no valid warrant to justify its actions (). - 1992, June 2. FDA agents raid the personal home of Mihai Popescu (California) for producing and selling a natural supplement called GH-3. The raid involves agents stealing $5,000 worth of GH-3, personal records, computers, and other equipment, and results in the false arrest and imprisonment of Popescu, as well as the termination of his business (). - 1992, June 30. FDA agents raid Nature's Way (Utah), a vitamin and nutritional supplement company, and seize bulk containers of primrose oil because the addition of vitamin E to the formula was allegedly "unapproved" (). - 1992, June. The FDA prompts the Texas Department of Health to conduct raids on numerous health food stores throughout Texas. They seize natural oils, aloe vera, zinc, vitamin C, and other natural products. Agents reportedly threaten store owners not to speak of the raid, or more raids will ensue. No valid warrants are presented, and no charges are ever filed against the stores (). - 1992, August 14. FDA agents raid Family Acupuncture Clinic (California), and seize $15,000 worth of natural tea pills. The products are left to spoil, and then sent back to China by the FDA (). - 1992. Federal agents arrest three vitamin company owners (California) for selling supplements freely available throughout Europe. Agents try to get the men imprisoned for a collective total of 990 years (). - 1993, May. Agents from the FDA raid Zerbo's Health Food Store (Michigan) for "illegal drug trafficking" involving the natural supplements coenzyme Q10, selenium, carnitine, and GH-3. Agents threaten the owner's 78-year-old father with imprisonment if the family attempts to fight the FDA's indictment (). - 1993, May 12. 
Dozens of armed federal agents storm Hospital Santa Monica (California), an alternative cancer treatment center, seizing records, charts, computers, and other equipment. The agents also steal hundreds of thousands of dollars from the hospital's bank account, as well as from two vitamin companies with which it works, and even steal $80,000 from the hospital owner's personal safe (). - 1993, May 12. Agents raid personal home of Kirwin Whitnah, claim he is selling "unapproved drugs." No products are ever found, but agents proceed to terrorize a woman staying at the home, and seize thousands of dollars in equipment, literature, and even money orders (). - 1993, May 14. FDA agents raid Waco Natural Foods (Texas) in search of a natural supplement called deprenyl citrate. Owner Tom Wiggins tells agents that his attorney holds tremendous clout in the Waco area, and the agents immediately apologize, leave, and never return (). - 1993, June 24. FDA agents, a Federal Marshal, and a public relations specialist together raid International Nutrition Inc. (New Mexico), and seize $1 million worth of vitamins and nutritional supplements, as well as computers and business records. The owner ends up losing 80 percent of his business, and subsequently has to lay off 80 percent of his workforce (). - 1993. Federal marshals raid Natural Vision International (Wisconsin), and steal 17,000 pairs of pinhole glasses that help customers exercise their eyes and improve vision. The confiscation of these products, valued at over $200,000, results in the company going out of business (). - 2001, March 23. Forty armed federal agents and USDA officials storm Three Shepherd's Farm (Vermont), and confiscate and destroy the farm's entire flock of sheep for supposedly having mad cow disease.
Government laboratories had previously verified that the sheep were healthy, and that sheep cannot even contract the disease, but the USDA persists in eliminating them anyway, destroying evidence and breaking various other laws along the way (...). - 2004, Spring. Upon being prompted by the FDA, state officials show up unannounced at Organic Pastures Dairy (California) and pretend to be evaluating cheese production. However OPD workers see Special Agent Jennifer King secretly taking pictures of private customer files, and they tell her to leave (...). - 2005, June 23. Federal agents perform a series of raids on medical marijuana dispensaries, businesses, and personal homes throughout Northern California. They arrest many individuals along the way, despite the fact that marijuana dispensaries are legal in California (...). - 2006, March 6. Ohio police, Ohio Department of Agriculture officials, FDA agents, and agents from unmarked vehicles intercept a raw milk pickup in the Cincinnati area. They confiscate milk and harass customers, and leave farm owner Gary Oakes so shaken up that he is hospitalized three times for post-traumatic stress disorder (...). - 2006, September 14. Armed agents storm the hunting preserve of Danny and Cindi Henshaw (Virginia), and perform a SWAT-style raid of the property. They shoot dozens of hogs with 12-gauge shotguns and drag them off (...). - 2006, October 6. Armed agents from the FBI and FDA arrive at Growers Express (California), a produce company, and begin searching the premises for evidence that the company's bagged spinach might be linked to an E. coli outbreak. Agents never even try contacting the company prior to the raid, and find nothing in violation (). - 2006, October 13. Michigan Department of Agriculture agents and police officers stop Richard Hebron on the way to deliver raw milk to cow share owners. Agents seize his cell phone and wallet, and proceed to unload 453 gallons of fresh milk from his truck.
A six-month investigation finds Hebron innocent, but he gets stuck paying a $1,000 "administrative" fee (...). - 2006, November 21. Dozens of armed agents storm Glencolton Farms (Ontario), a farm that produces dairy among other things, poring through every building and structure on the property. They steal records, computers, and milk processing equipment. Farm owner Michael Schmidt, who provides raw milk to cow share owners, as well as other farm-fresh food, is fined $3,500 and placed on two years probation (...). - 2007, August. Pennsylvania Mennonite farmer Mark Nolt declares his God-given right to sell fresh milk and has his farm raided by federal and state agents, who seize $25,000 worth of milk, milk products, and other equipment. (... small.html). - 2007, September 21. FDA agents spur the Virginia Department of Agriculture & Consumer Services to raid Double H Farm (Virginia) and seize and destroy pork products. Agents try to justify their actions by claiming the products contain the wrong price tags. Owners say they have been needlessly harassed by officials for years (...). - 2007, October 11. New York Department of Agriculture officials raid Meadowsweet Dairy (New York) and seize 260 pounds of raw milk products (...). - 2008, April 28. Agents again raid Mennonite farmer Mark Nolt's (Pennsylvania) property, and steal more milk, milk products, and equipment. This time, agents charge and take him into custody (...). - 2008, December 15. Armed agents storm Manna Storehouse (Ohio), a family farmhouse that operates an organic food buying cooperative. Agents terrorize and hold family hostage for eight hours while ransacking house, and seizing food, computers, and records (...). - 2008, December 18. Agents pose as customers trying to buy goat cheese from Sharon Palmer's Healthy Family Farms (California). She is then arrested and thrown in jail, and her businesses temporarily shut down (...). - 2009, April 8. 
Undercover agents trick the daughter of farmers Armand and Teddi Bechard (Missouri) into selling them raw milk, which sparks harassment and extensive legal troubles for the family (...). - 2009, January 16. Federal marshals raid Cocoon Nutrition (South Carolina), a nutritional supplement company, and arrest owner Stephen Heuer at gunpoint (). - 2010, April 14. Dozens of FDA, IRS, and FBI agents conduct full-scale raid on Maxam Nutraceuticals (Oregon). Company complies with all notices, but gets targeted anyway by a fully-armed, SWAT-style cadre of officers, who steal products, paperwork, computers, and personal files (...). - 2010, April 20. Two FDA agents, two US Marshals, and one state trooper raid Rainbow Acres (Pennsylvania) at 5 am, breaking their warrant's restrictions instructing an inspection at "reasonable business hours." Agents scour the premises for hours and charge the farm with illegally selling raw milk across state lines (...). - 2009, Fall. USDA inspector shows up at Dollarhite Family Farm (Missouri) and demands an inspection. After insisting there were no problems with the family's small-scale raising of bunny rabbits, USDA officials later try to fine the family $90,000 for alleged violations (...). - 2010, May 26. Officials from the Minnesota Department of Agriculture and the Minnesota Department of Health send armed deputies to raid Hartmann Farm (Minnesota). Agents order Mike and Diana Hartmann to stop selling all meat and dairy products, and to stop delivering raw milk (...). - 2010, June 2. Agents from the Wisconsin Department of Agriculture, the Trade and Consumer Protection Agency, and local health officials, arrive unannounced at Hershberger Farm (Wisconsin). They violate private property signs, demand to do an inspection, and proceed to tape shut coolers and order that raw milk be dumped in a field (...). - 2010, June 10.
Officials from various health and law enforcement agencies raid the personal home of Rae Lynn Sandvig (Minnesota), a raw milk and local food consumer, for allegedly "assisting in the sale of raw milk" from her home by sharing food with neighbors (...). - 2010, June 30. Various federal agents, and even Canadian agents, raid Rawesome Foods (California), a private, raw food buying club, and steal computers, raw food products, and other materials. They hold members and workers hostage for many hours before finally leaving with hundreds of thousands of dollars worth of product (...). - 2010, August 26. Agents show up at Morningland Dairy (Missouri) and confiscate cheese samples for testing. After improper handling of the samples, which allegedly return positive for contamination, the farm appeals. Officials then demand that the farm destroy its entire 50,000 pound inventory of raw, artisan cheese, valued at $250,000 (...). - 2010, September 21. Federal agents arrive unannounced at Camino de Paz Montessori School and Farm (New Mexico) on supposed suspicion of marijuana. After scouring the premises and terrorizing teachers and students, they find nothing but fruits, vegetables, and other produce (...). - 2010, September 21. After years of fighting back against federal tyranny, the Christian church ministry Daniel Chapter One (Rhode Island) is raided by agents from the FDA, IRS, the US Army Criminal Investigation Command and several other agencies. Agents break into the home of owners and steal computers, paperwork, files, personal documents, and hold owner at gunpoint (...). - 2010, October 15. Georgia State Department of Agriculture officials illegally search a raw milk buying club truck without a warrant, seize 110 gallons of product, and order it to be dumped (...). - 2011, March 9. Minnesota officials again target Hartmann Farms (Minnesota), this time going after James Roettger, a man who helps distribute the farm's food.
Agents pull Roettger over while driving, seize as much as $6,000 worth of food from his van, and arrest him (...). - 2011, June 3. At the prompting of the FDA, US Marshals raid Wyldewood Cellars (Kansas), producer of natural elderberry juice, and confiscate the entire stock of product, claiming it is an "unapproved drug" (...). - 2011, August 3. A slew of agents conduct a second raid on Rawesome Foods (California), a private, raw food buying club, and confiscate everything in sight. They handcuff and arrest founder James Stewart, without warrant, and proceed to destroy the shop's entire food inventory (...). Learn more:
https://groups.yahoo.com/neo/groups/RLC-Action/conversations/topics/2870
After solving several "Game Playing" questions on LeetCode, I find them to be pretty similar. Most of them can be solved using the top-down DP approach, which "brute-forcely" simulates every possible state of the game. The key part of the top-down dp strategy is that we need to avoid repeatedly solving sub-problems. Instead, we should use some strategy to "remember" the outcome of sub-problems. Then when we see them again, we instantly know their result. By doing this, we can often reduce the time complexity from exponential to polynomial. (EDIT: Thanks to @billbirdh for pointing out the mistake here. For this problem, by applying the memo, we compute each subproblem at most once, and there are O(2^n) subproblems, so the complexity is O(2^n) after memoization. Without the memo, the time complexity should be something like O(n!).) For this question, the key part is: what is the state of the game? Intuitively, to uniquely determine the result of any state, we need to know:

1) The unchosen numbers
2) The remaining desiredTotal to reach

A second thought reveals that 1) and 2) are actually related, because we can always get 2) by deducting the sum of the chosen numbers from the original desiredTotal. Then the problem becomes how to describe the state using 1). In my solution, I use a boolean array to denote which numbers have been chosen. Then a question comes to mind: if we want to use a HashMap to remember the outcome of sub-problems, can we just use Map<boolean[], Boolean>? Obviously we cannot, because if we use boolean[] as a key, the reference to the boolean[] won't reveal the actual content of the boolean[]. Since the problem statement says maxChoosableInteger will not be larger than 20, the length of our boolean[] array will be at most around 20. Then we can use an Integer to represent this boolean[] array. How? Say the boolean[] is {false, false, true, true, false}; then we can transfer it to an Integer with the binary representation 00110.
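The array-key pitfall and the integer encoding described above can be checked with a small standalone snippet (the class and method names here are mine, not part of the solution):

```java
import java.util.HashMap;
import java.util.Map;

class KeyDemo {
    // same packing idea as described above: one bit per number
    static int encode(boolean[] used) {
        int num = 0;
        for (boolean b : used) {
            num <<= 1;
            if (b) num |= 1;
        }
        return num;
    }

    public static void main(String[] args) {
        boolean[] a = {false, false, true, true, false};
        boolean[] b = {false, false, true, true, false};

        // arrays use identity-based hashCode/equals, so two arrays with
        // equal contents act as two distinct map keys
        Map<boolean[], Boolean> bad = new HashMap<>();
        bad.put(a, true);
        System.out.println(bad.containsKey(b)); // false

        // the int encoding is value-based: 00110 in binary is 6
        System.out.println(encode(a));              // 6
        System.out.println(encode(a) == encode(b)); // true
    }
}
```

The failed lookup in the first map is exactly why Map<boolean[], Boolean> cannot serve as the memo, while a value-based key (the bitmask here) can.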
Since Integer is a perfect choice for the key of a HashMap, we can now "memoize" the sub-problems using Map<Integer, Boolean>. The rest of the solution is just simulating the game process using the top-down dp.

```java
public class Solution {
    Map<Integer, Boolean> map;
    boolean[] used;

    public boolean canIWin(int maxChoosableInteger, int desiredTotal) {
        int sum = (1 + maxChoosableInteger) * maxChoosableInteger / 2;
        if (sum < desiredTotal) return false;
        if (desiredTotal <= 0) return true;

        map = new HashMap<>();
        used = new boolean[maxChoosableInteger + 1];
        return helper(desiredTotal);
    }

    public boolean helper(int desiredTotal) {
        if (desiredTotal <= 0) return false;
        int key = format(used);
        if (!map.containsKey(key)) {
            // try every unchosen number as the next step
            for (int i = 1; i < used.length; i++) {
                if (!used[i]) {
                    used[i] = true;
                    // check whether this leads to a win (i.e. the other player loses)
                    if (!helper(desiredTotal - i)) {
                        map.put(key, true);
                        used[i] = false;
                        return true;
                    }
                    used[i] = false;
                }
            }
            map.put(key, false);
        }
        return map.get(key);
    }

    // transfer boolean[] to an Integer
    public int format(boolean[] used) {
        int num = 0;
        for (boolean b : used) {
            num <<= 1;
            if (b) num |= 1;
        }
        return num;
    }
}
```

Updated: Thanks to @ckcz123 for sharing the great idea. In Java, an easier way to denote a boolean[] is Arrays.toString(boolean[]), which transfers a boolean[] to something like "[true, false, false, ...]". It is also not limited by how maxChoosableInteger is set, so it generalizes to arbitrarily large maxChoosableInteger.

Brilliant solution! I think using Arrays.toString() is better.
Here is my code:

```java
public class Solution {
    public boolean canIWin(int maxChoosableInteger, int desiredTotal) {
        if (desiredTotal <= 0) return true;
        if (maxChoosableInteger * (maxChoosableInteger + 1) / 2 < desiredTotal) return false;
        return canIWin(desiredTotal, new int[maxChoosableInteger], new HashMap<>());
    }

    private boolean canIWin(int total, int[] state, HashMap<String, Boolean> hashMap) {
        String curr = Arrays.toString(state);
        if (hashMap.containsKey(curr)) return hashMap.get(curr);
        for (int i = 0; i < state.length; i++) {
            if (state[i] == 0) {
                state[i] = 1;
                if (total <= i + 1 || !canIWin(total - (i + 1), state, hashMap)) {
                    hashMap.put(curr, true);
                    state[i] = 0;
                    return true;
                }
                state[i] = 0;
            }
        }
        hashMap.put(curr, false);
        return false;
    }
}
```

Or, using int is enough:

```java
public class Solution {
    public boolean canIWin(int maxChoosableInteger, int desiredTotal) {
        if (desiredTotal <= 0) return true;
        if (maxChoosableInteger * (maxChoosableInteger + 1) / 2 < desiredTotal) return false;
        return canIWin(desiredTotal, maxChoosableInteger, 0, new HashMap<>());
    }

    private boolean canIWin(int total, int n, int state, HashMap<Integer, Boolean> hashMap) {
        if (hashMap.containsKey(state)) return hashMap.get(state);
        for (int i = 0; i < n; i++) {
            if ((state & (1 << i)) != 0) continue;
            if (total <= i + 1 || !canIWin(total - (i + 1), n, state | (1 << i), hashMap)) {
                hashMap.put(state, true);
                return true;
            }
        }
        hashMap.put(state, false);
        return false;
    }
}
```

Very smart and detailed explanation. I have one quick question regarding the memoization. If we cannot use Map<boolean[], Boolean> because of the shallow copy like you said, can we simply use Map<Set<Integer>, Boolean>? The Set<Integer> is the set of chosen numbers. Thank you so so much.

@LeoM58 After some research, I think your idea is feasible, because the hashCode of a Set<Object> is the sum of the hash codes of its objects, and in this case that can uniquely determine a hash set.
Here is a small example:

```java
Map<Set<Integer>, Integer> map = new HashMap<>();
Set<Integer> set1 = new HashSet<>();
Set<Integer> set2 = new HashSet<>();
set1.add(2);
set1.add(3);
map.put(set1, 1); // put set1 into map
set2.add(2);
set2.add(3);
System.out.print(map.get(set1)); // 1
System.out.print(map.get(set2)); // 1
```

Thank you for your solution, can you please explain the logic of the helper function? Or point out the invariant?

@Rhodey said in Java solution using HashMap with detailed explanation:

Thank you for your solution, can you please explain the logic of the helper function? Or point out the invariant?

Sure. First, this helper function has a parameter desiredTotal, and it determines whether a player who plays first with such a desiredTotal can win. Then it comes to how to decide whether s/he can win. The strategy is to simulate every possible state. E.g. we let this player choose any unchosen number at the next step and see whether this leads to a win. If it does, then this player can guarantee a win by choosing this number. If we find that whatever number s/he chooses, s/he won't win the game, then we know that s/he is guaranteed to lose given such a state. See explanations below:

```java
// try every unchosen number as next step
for (int i = 1; i < used.length; i++) {
    if (!used[i]) {
        used[i] = true;
        // check whether this leads to a win, which means helper(desiredTotal-i)
        // must return false (the other player loses)
        if (!helper(desiredTotal - i)) {
            map.put(key, true);
            used[i] = false;
            return true;
        }
        used[i] = false;
    }
}
map.put(key, false);
```

@leogogogo Very detailed and understandable answer! Thank you so much!

@leogogogo Thank you so much for your time and help. It means a lot. I tried the set idea and it is too slow. I guess it's because of the frequent copying of sets and space issues. Inspired by your idea and code, I finished my version. Thank you again.
```java
public class Solution {
    int n;

    public boolean canIWin(int newN, int target) {
        n = newN;
        if (target > n * (n + 1) / 2) {
            return false;
        }
        Map<Integer, Boolean> memo = new HashMap<>();
        return helper(0, memo, target);
    }

    private boolean helper(int visiting, Map<Integer, Boolean> memo, int target) {
        if (memo.get(visiting) != null) {
            return memo.get(visiting);
        }
        for (int i = n; i >= 1; i--) {
            int choice = 1 << i;
            if ((visiting & choice) == 0) {
                if (i >= target) {
                    memo.put(visiting, true);
                    return true;
                }
                visiting += choice;
                boolean nextWinner = helper(visiting, memo, target - i);
                visiting -= choice;
                if (!nextWinner) {
                    memo.put(visiting, true);
                    return true;
                }
            }
        }
        memo.put(visiting, false);
        return false;
    }
}
```

@leogogogo Thanks again for your timely reply. Can you teach me how to post code? I typed ``` and copied all my code there, but it does not seem to align automatically.

@LeoM58 Yeah, sometimes the code doesn't align automatically, so I just manually add some spaces for better appearance.

Hi @leogogogo, thank you for your post. May I ask what other "game playing" problems in LC can also be solved by top-down DP? I want to do more practice. Thank you!

@wondershow Like Flip Game II and Burst Balloons; they can all be solved with top-down dp.

@ckcz123 You can just use int state as the key, which is faster and shorter.
```java
public class Solution {
    public boolean canIWin(int maxChoosableInteger, int desiredTotal) {
        if (desiredTotal <= 0) return true;
        if (maxChoosableInteger * (maxChoosableInteger + 1) / 2 < desiredTotal) return false;
        return canIWin(maxChoosableInteger, desiredTotal, 0, new HashMap<>());
    }

    private boolean canIWin(int length, int total, int state, HashMap<Integer, Boolean> hashMap) {
        if (hashMap.containsKey(state)) return hashMap.get(state);
        for (int i = 0; i < length; i++) {
            if ((1 << i & state) == 0) {
                if (total <= i + 1 || !canIWin(length, total - (i + 1), 1 << i | state, hashMap)) {
                    hashMap.put(state, true);
                    return true;
                }
            }
        }
        hashMap.put(state, false);
        return false;
    }
}
```

Hi, in your post you mentioned that the time complexity will be transformed from "exponential to polynomial." My thought is that there are still 2^n possible boolean[] used arrays, and 2^n possible keys in the hashmap. Could you explain a little bit more about the polynomial time complexity? Thanks!

@leogogogo Hi, in the p("leet") problem there, there are 4 possible subproblems: p("eet"), p("et"), p("t") and p(""). So using the memo scheme lowers the bound to polynomial (proportional to the length of the given string). However, in this problem there are 2^n possible boolean[] used arrays, so the subproblem space scales to 2^n here. The memo process lowers the cost of every subproblem to exactly O(1), so altogether it should be O(2^n). The memo still lowers the complexity compared to brute force, which costs more.

@billbirdh Well, it seems that you are right, because there are O(2^n) combinations, and by using the memo we only calculate each combination at most once. I will update my answer, thanks.

@billbirdh By the way, without the memo, what is the time complexity?
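As a sanity check of the int-state approach in this thread, here is a compact, self-contained variant with a small driver (the class name and driver are mine; the first case is the classic example from the problem statement, and the second case I worked out by hand):

```java
import java.util.HashMap;
import java.util.Map;

class CanIWinDemo {
    // compact form of the bitmask-keyed memo approach discussed above
    static boolean canIWin(int maxChoosableInteger, int desiredTotal) {
        if (desiredTotal <= 0) return true;
        if (maxChoosableInteger * (maxChoosableInteger + 1) / 2 < desiredTotal) return false;
        return helper(maxChoosableInteger, desiredTotal, 0, new HashMap<>());
    }

    static boolean helper(int n, int total, int state, Map<Integer, Boolean> memo) {
        if (memo.containsKey(state)) return memo.get(state);
        for (int i = 0; i < n; i++) {
            if ((state & (1 << i)) != 0) continue; // number i+1 already taken
            // win immediately, or move to a state the opponent loses from
            if (total <= i + 1 || !helper(n, total - (i + 1), state | (1 << i), memo)) {
                memo.put(state, true);
                return true;
            }
        }
        memo.put(state, false);
        return false;
    }

    public static void main(String[] args) {
        System.out.println(canIWin(10, 11)); // false: second player can always mirror to 11
        System.out.println(canIWin(4, 6));   // true: first player wins by taking 1
    }
}
```

Note how the memo key is just the bitmask of chosen numbers; the remaining total never needs to be part of the key, for the reason explained at the top of the post.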
https://discuss.leetcode.com/topic/68896/java-solution-using-hashmap-with-detailed-explanation
I was using the most recent cedet pre-releases... Now I changed to the most recent cedet beta, as you said, and the problem is fixed. ;) Thanks!

Can you please tell me if there is updated documentation for semantic besides this one (this one is out of date, because it still has 'semantic-nonterminal-abstract' and should have 'semantic-tag-abstract' like you told me):

One last thing. Given the following class definition (in Java), I would like to know whether semantic already provides functions to get the method tokens and the variable tokens. E.g.:

```
public class x {
    private int y = 0;

    public int z () {
        return y;
    }
}
```

Now I would like to get the token corresponding to the 'z' method, and the token corresponding to the 'y' variable. I was using something like:

    (semantic-find-nonterminal-by-token 'function <and here I put a buffer>)

-- trying to get the z method -- but this always returned 'nil'. I did something similar for the variable and it returned 'nil' as well. But getting the class token type seems to work well... So can you please tell me whether semantic already has functions to get the method tokens and the variable tokens. If the answer is yes, which functions are they?

Thanks for the help,
Henda

What version of semantic are you using? It should exist in the version of semantic in the most recent cedet betas.

Eric

>>> "Henda Carvalho" <henda.for.work@gmail.com> seems to think that:
>Hi Eric,
>
>I tried to use 'semantic-tag-abstract' but it still doesn't work. I still
>receive the same error:
>
>Debugger entered--Lisp error: (void-function semantic-tag-abstract)
>
>Henda
>
>On 3/17/06, Eric M. Ludlam <eric@siege-engine.com> wrote:
>>
>> >>> "Henda Carvalho" <henda.for.work@gmail.com> seems to think that:
>> >Hi there,
>> >I'm trying to use the function semantic-nonterminal-abstract (that should
>> >come with semantic!), but I receive an error. The error is:
>> >
>> >Debugger entered--Lisp error: (void-function semantic-nonterminal-abstract)
>> >
>> >Can somebody help me please.
>> [ ... ]
>>
>> Hi,
>>
>> I think you want 'semantic-tag-abstract'. Semantic 1.4 used
>> 'nonterminal' in a confusing way, and all occurrences that really were
>> referring to 'tags' were switched to use that name.
>>
>> Eric
>>
>> --
>> Eric Ludlam: zappo@gnu.org, eric@siege-engine.com
>> Home: Siege: Emacs: GNU:
>>

--
Eric Ludlam: zappo@gnu.org, eric@siege-engine.com
Home: Siege: Emacs: GNU:
http://sourceforge.net/p/cedet/mailman/attachment/c75436270603190033r479bcbaai515d8e0960c9f441@mail.gmail.com/1/
iPluginManager Struct Reference [Shared Class Facility (SCF)]

This is the plugin manager. More...

#include <iutil/plugin.h>

Detailed Description

This is the plugin manager. The plugin manager is guaranteed thread-safe.

Main creators of instances implementing this interface:
Main ways to get pointers to this interface:

Definition at line 62 of file plugin.h.

Member Function Documentation

- Unload all plugins from this plugin manager.
- Get an iterator to iterate over all loaded plugins in the plugin manager. This iterator will contain a copy of the plugins, so it will not lock the plugin manager while looping over the plugins.
- Load a plugin and (optionally) initialize it. If 'init' is true then the plugin will be initialized and QueryOptions() will be called.
- Query all options supported by a given plugin and place them into OptionList. Normally this is done automatically by LoadPlugin() if 'init' is true. If 'init' is not true then you can call this function AFTER calling object->Initialize().
- Find a plugin given its class ID.
- Register an object that implements the iComponent interface as a plugin.
- Remove a plugin from the system driver's plugin list.

The documentation for this struct was generated from the following file:

Generated for Crystal Space 1.4.1 by doxygen 1.7.1
http://www.crystalspace3d.org/docs/online/api-1.4/structiPluginManager.html
Subject: [Boost-build] question on path.makedirs behavior From: Tom (tabsoftwareconsulting_at_[hidden]) Date: 2014-09-14 00:18:40 Hi all, I'm trying to use the 'path.makedirs' rule in a project, but it's not working as I expected and reports that it cannot create a directory in certain circumstances. I think I've boiled it down to a simple example that shows what I didn't expect. The first time I run the following Jamroot, I get an error trying to create the '$(root)/b' directory. If I run it again, it succeeds. Does anyone have an idea why this happens? ``` # Jamroot import path ; path-constant root : ./tmp ; path.makedirs $(root)/a ; path.makedirs $(root)/b ; ``` Thanks, Tom Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/boost-build/2014/09/27689.php
Data Visualization in Machine Learning — Beyond the Basics

This is not a tutorial. These are my notes from various Machine Learning articles and tutorials, my personal cheatsheet for interviews and reviews. Any feedback and corrections are welcome. If you'd like to read more, please let me know as well. These notes are most applicable for Python users; they do not include ggplot, which is great for R.

Prerequisites and Dependencies

This overview is Python based, so we use matplotlib.pyplot. These commands can be run on the command line and in a Python notebook with just a bit of modification. Any reference to plt means the function is from the matplotlib library.

```python
import matplotlib.pyplot as plt
# Without this import you will get an
# "object does not have bar/scatter/..." error.
```

Plot a Bar Chart

Bar charts and bin charts are useful for frequency analysis, distributions and counts.

```python
labels = ['A', 'B', 'C', 'D', 'E', 'F', 'G']
nums = [13, 24, 5, 8, 7, 10, 11]
xs = range(len(nums))  # [0, 1, 2, 3, 4, 5, 6]
# xs is a conventional variable name for the x axis

plt.bar(xs, nums)
plt.ylabel("Customize y label")
plt.title("Customize graph label")
plt.show()  # display the plot
```

Don't be deceived by its simple look. Frequency analysis is very powerful in data EDA, statistics and machine learning.

Plot a Histogram

A histogram will automatically divide data into bins.

```python
import matplotlib.pyplot as plt
import pandas as pd

nums = [99, 1, 3, 5, 7, 33, 23, 684, 13, 3, 0, 4]
pd.Series(nums).hist(bins=30)
# <matplotlib.axes._subplots.AxesSubplot object at 0x10d340d90>
# returns an object in memory
plt.show()
```

Histograms are also useful for visualizing distributions and outliers.

Scatter Plot

How is a scatter plot beyond the basics? Scatter plots are extremely intuitive yet powerful. Just plot the vertical coordinate and horizontal coordinate of each data point in the sample to get its scatter plot. If the relationship is non-linear, or there is an outlier present, these targets will be clearly visible in the scatter plot.
In the case of many features, i.e. dimensions, a scatterplot matrix can be used. Below is a screenshot of the pandas scatterplot matrix in the official documentation. Clearly the relationship is not linear. The diagonal is each variable plotted against itself, so it shows a distribution graph instead of a scatter plot. Neat: it looks like the variable is normally distributed.

A scatterplot is a great first visual. Too many features? Try sampling or generating data subsets before visualizing. Use pandas.DataFrame.describe() to summarize and describe datasets that are simply too big; this function generates summary statistics. Scatterplots are useful for pairwise comparison of features.

Scatterplots can go beyond two dimensions. We can use marker size and color to illustrate a 3rd dimension, even a 4th dimension as in the famous TED talk on economic inequality. The presenter even used a timeline (animation) as a 5th dimension.

Visualizing Error

YouTube deep learning star Siraj shows a 3D visual of the error function while altering the y intercept (aka bias) and slope for linear regression. The global optimum, i.e. the global minimum in this case, is the goal of the gradient descent algorithm. Error functions have shapes and can be visualized. Local optima, which prevent your model from improving, can potentially be visualized too. The gradient can be visualized as directional arrows that travel in the direction of the global minimum along the shape of the 3D plot. It can also be visualized as a field of arrows in a matrix. Each residual (y_i - y_hat) can be visualized as a vertical line connecting the data point with the fitted line in linear regression.

Data Scientists Love Box Plots

Why? A box plot displays essential statistics about a distribution in a concise visual form. Also known as a candlestick plot, it is popular in finance as well: max, 3rd quartile, median, 1st quartile, min. It is also known as the box-and-whisker graph, which is popular among statisticians and used to visualize range. It can be drawn horizontally. What's between Q3 and Q1?
The interquartile range (IQR), which is used in analyzing outliers: anything below Q1 - 1.5*IQR is too low, and anything above Q3 + 1.5*IQR is too high. A box-whisker plot displays outliers as dots! Check out the box-whisker plot of Boston University's blood pressure dataset, which includes outliers.

Heatmap

Did you say heat map? Heat maps have been in and out of favor. Web analytics still use heat maps to track events and clicks on a webpage to identify key screen real estate. Why should we use heat maps for machine learning? It turns out that generating a heat map of all the feature variables (feature variables as row headers and column headers, with each variable against itself on the diagonal) is an extremely powerful way to visualize relationships between variables in high-dimensional space. For example: a correlation matrix with heat map coloring, a covariance matrix with heat map coloring, or even a massive confusion matrix with coloring. Think less about the traditional use of heat maps, and more of color as another dimension that can visually summarize the underlying data. Correlation matrix heat maps are frequently seen on Kaggle, for exploratory data analysis (EDA).

More Data Visualization Magic

Did you know that you can visualize decision trees using graphviz? It may output a very large PNG file. Remember that the splits of a decision tree are not always stable (consistent over time), so take it with a grain of salt. The benefit of visualizing a decision tree is to understand where and how the machine made its decision splits. Decision tree boundaries can be visualized too; see the screenshot below from the sklearn documentation. Visualizing models, decision boundaries and prediction results may give hints as to whether the model is indeed a good fit or a poor fit for the data. For example, it is high bias to ignore the nature of our data and use a straight line to fit a circular scatter of dots. Researchers have even visualized different optimizers to see their descent as they minimize loss.
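The Q1/Q3 fence rule from the box plot discussion above can be sketched with the standard library alone. The sample data here is made up for illustration, and `statistics.quantiles` (Python 3.8+, which uses the "exclusive" quartile method by default) supplies the cut points:

```python
import statistics

# Illustrative sample with one obvious outlier (data is made up).
data = [2, 3, 3, 4, 5, 5, 6, 7, 8, 40]

q1, _median, q3 = statistics.quantiles(data, n=4)  # quartile cut points
iqr = q3 - q1
low_fence = q1 - 1.5 * iqr
high_fence = q3 + 1.5 * iqr

# These are the points a box-whisker plot would draw as dots.
outliers = [x for x in data if x < low_fence or x > high_fence]
print(outliers)  # [40]
```

The surviving points (everything inside the fences) are what the whiskers span; only the flagged values get plotted individually.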
Did you know you can create interactive plots using Plotly right in a Jupyter Notebook? Interactive plots allow you to visualize complex data and toggle and change parameters. For example, you can slide to change the values of your hyperparameters and visualize how the model performance changes in grid search and other systematic searches of the space.
https://www.siliconvanity.com/2020/05/intro-to-data-visualization.html
Created on 2019-04-18 03:43 by Windson Yang, last changed 2020-01-25 19:40 by berker.peksag. This issue is now closed.

> The tokenize() generator requires one argument, readline, which must be a callable object which provides the same interface as the io.IOBase.readline() method of file objects. Each call to the function should return one line of input as bytes.

Adding an example like this would make it easier to understand:

# example.py
class Foo:
    pass

# tokenize_example.py
import tokenize

f = open('example.py', 'rb')
token_gen = tokenize.tokenize(f.readline)
for token in token_gen:
    # Prints something like this:
    # TokenInfo(type=1 (NAME), string='class', start=(1, 0), end=(1, 5), line='class Foo:\n')
    # TokenInfo(type=1 (NAME), string='Foo', start=(1, 6), end=(1, 9), line='class Foo:\n')
    # TokenInfo(type=53 (OP), string=':', start=(1, 9), end=(1, 10), line='class Foo:\n')
    print(token)

This could be added to the examples section of the tokenize doc. Would you want to make the PR, Windson?

Yes, I can make a PR for it.

I do not think a new example is needed. The existing example already demonstrates the use of a file's readline method. If you need an example for opening a file, the tokenize module documentation is not an appropriate place for this.

New changeset 4b09dc79f4d08d85f2cc945563e9c8ef1e531d7b by Berker Peksag (Windson yang) in branch 'master': bpo-36654: Add examples for using tokenize module programmically (#12947)

New changeset 1cf0df4f1bcc38dfd70a152af20cf584de531ea7 by Berker Peksag (Miss Islington (bot)) in branch '3.8': bpo-36654: Add examples for using tokenize module programmatically (GH-18187)

New changeset 6dbd843dedc9e05c0e3f4714294837f0a83deebe by Berker Peksag (Miss Islington (bot)) in branch '3.7': bpo-36654: Add examples for using tokenize module programmatically (GH-12947)

Wow, I managed to make typos in all three commits!
PR 12947 has some discussion about why adding these examples would be a good idea as we now have two different APIs for unicode and bytes input. Thanks for the PR, Windson.
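For a self-contained variant of the example discussed above that avoids creating a file on disk: the readline method of an in-memory io.BytesIO object satisfies the same interface tokenize() requires. This is a sketch, not the wording that landed in the docs:

```python
import io
import tokenize

source = b"class Foo:\n    pass\n"

# tokenize() only needs a readline callable returning bytes,
# so an in-memory buffer works just like a file opened in 'rb' mode.
tokens = list(tokenize.tokenize(io.BytesIO(source).readline))

# The first token is always an ENCODING token; NAME tokens follow.
names = [t.string for t in tokens if t.type == tokenize.NAME]
print(names)  # ['class', 'Foo', 'pass']
```

Note that the tokenize module (unlike the C tokenizer) classifies keywords such as `class` and `pass` as NAME tokens.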
https://bugs.python.org/issue36654
Re: Encryption of Connection String

- From: "Mark Rae" <mark@xxxxxxxxxxxxxxxxx>
- Date: Mon, 11 Dec 2006 13:12:39 -0000

"Ashish Jain" <erashishjain@xxxxxxxxxxx> wrote in message news:%23B0BYpQHHHA.1064@xxxxxxxxxxxxxxxxxxxxxxx

Environment: .NET Framework 2.0 / SQL Server 2005 - Windows XP SP2 / Windows Server 2003

My web application is a mix of ASP and ASP.NET. My "ASP" web application uses a serviced COM+ component written in C# 2.0 for authentication. The configuration file itself is located in the System32 folder, as required by COM+. For reading the config file, we are creating a separate appdomain in the serviced component (since the COM+ component is used by ASP pages). The connection string is stored in clear text right now and I want to encrypt it. My development environment is Windows XP SP2 and deployment is Windows Server 2003. I want the approach to work on both systems. Also, I want to keep it easy to copy the installation from one machine to another with minimum changes (say from QA to deployment). Can you please guide me on the standard approach for encryption of the connection string?

There is no "standard approach" per se - take a look at the System.Security.Cryptography namespace - loads of options for string encryption... Also, a cursory Google search would have shown you loads of possibilities too:

However, you might ask yourself why you should bother doing this in the first place... Who are you trying to hide the connection string from...? If your users can see it, then you need to start from the beginning! If a hacker is clever enough (or your security is poor enough) to get as far as being able to read your config file, then the fact that its contents may or may not be encrypted really is the least of your worries... :-)

- References:
  - Encryption of Connection String
    - From: Ashish Jain
- Prev by Date: invalidcastexception
- Next by Date: Re: invalidcastexception
- Previous by thread: Encryption of Connection String
- Next by thread: Enforcing https.
- Index(es):
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.aspnet/2006-12/msg01739.html
Hello all,

I am trying to use the #if directive to compile extra functionality according to an integer constant defined by a class template parameter. What I want to do is this:

#include <iostream>
using namespace std;

struct A1 {
    static const int _AINT = 0;
};

struct A2 {
    static const int _AINT = 1;
};

template< class T >
struct B {
    void Fun() {
#if T::_AINT == 1
        cout << " TADA! " << endl;
#endif
    }
};

int main(void) {
    B< A1 > b1;
    B< A2 > b2;
    b1.Fun();
    b2.Fun();
    return 0;
}

This compiles, I guess because the compiler recognizes the integer constant and accepts it. But when it runs it does not behave as (I) expected. Can someone explain to me why this happens and maybe suggest some other way of doing it (avoiding an approach like the Strategy pattern)?

Thank you in advance
Rui
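For context: the preprocessor runs before templates are even parsed, so `T::_AINT` inside `#if` is never looked up as a class member. On compilers that accept the expression at all, the unknown identifiers are simply treated as 0, which is why both instantiations behave the same. One alternative is sketched below: test the constant with an ordinary `if`, which the compiler evaluates per instantiation and can dead-code-eliminate. The struct names mirror the question; the constant is renamed to `AINT` here because identifiers beginning with an underscore followed by an uppercase letter are reserved:

```cpp
#include <iostream>

struct A1 { static const int AINT = 0; };
struct A2 { static const int AINT = 1; };

template <class T>
struct B {
    // Returns whether the message was printed, so the behaviour
    // is easy to observe from calling code.
    bool Fun() {
        // T::AINT is a compile-time constant for each instantiation,
        // so the optimizer removes the dead branch entirely.
        if (T::AINT == 1) {
            std::cout << " TADA! " << std::endl;
            return true;
        }
        return false;
    }
};
```

Template specialization (or, in C++17 and later, `if constexpr`) achieves the same selection when the excluded branch would not even compile for some instantiations.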
https://www.daniweb.com/programming/software-development/threads/177248/question-issue-on-static-const-vars-and-if-directive
This question is interesting, and it is actually a bunch of questions bunched up together. It took me a while to figure out why I was getting so many questions around this topic but I finally figured it out. Most of the sessions around MVC and even most of the samples you find online are built over an Entity Framework or a Linq to SQL model. My session demos also rely on Entity Framework as a quick and reliable way of getting around the nasty subject of building a database access layer and speed up development (well, actually the Entity Framework does a bit more than that, but let’s keep it simple). So, I received 3 interesting questions after my last session: · In my application I need to display data from custom queries, it doesn’t come straight from a table in my database. Can I do it in MVC? · In my application I need to display the results of a stored procedure or SQL Views I built. Can I do it in MVC? · In my application I need to display data coming from an Oracle database. Can I do it in MVC? Well, let me jump in there quickly and answer “Yes, yes and yes”. But we are really confusing things here… ASP.NET MVC builds on the notion that you have a model (which represents your data) that is accessed and manipulated by a controller action. The result of that interaction will be forwarded to a view for rendering the results (hence MVC = Model/View/Controller). In my sessions, as in most samples you come across with, the model is built using Entity Framework. That doesn’t mean you can’t do it in the traditional way, where we used to build database access layers ourselves and have our model be built over our custom developed business objects. Even using Entity Framework, you may choose to encapsulate some logic in your own objects without providing “direct” access to the entity framework generated classes. 
Let's see an example to make it clearer:

public ActionResult Details(int id)
{
    using (StackFAQEntities faq = new StackFAQEntities())
    {
        var questionList = from q in faq.Questions
                           where q.Id == id
                           select q;
        Question myQuestion = questionList.First();
        return View(myQuestion);
    }
}

This is a sample of code from the application I built in one of my demos. It's basically interacting with my model in order to get one Question object to send to a view that will display the details of that question. The StackFAQEntities and Question objects were generated by Entity Framework, but they could have come from anywhere, really. Imagine I refactor the code to something like this:

FaqModel myFaq = new FaqModel();
Question myQuestion = myFaq.GetQuestionByID(id);
return View(myQuestion);

Where FaqModel is a newly created class that looks like this:

public Question GetQuestionByID(int id)
{
    using (StackFAQEntities faq = new StackFAQEntities())
    {
        var questionList = from q in faq.Questions
                           where q.Id == id
                           select q;
        if (questionList.Count() > 0)
            return questionList.First();
        else
            return null;
    }
}

So, we have pretty much the same functionality, but as you can see our Details action is simply calling a generic method that returns a Question object. Obviously we are still using Entity Framework: not only are we using it to perform the query, but the Question class was generated by it. Let's take care of those two "little" details:

public class Question
{
    public int Id;
    public string Title;
    public string Body;
    public int Votes;
    public int Views;
    public DateTime DateCreated;
    public List<Answer> Answers;
}

So, I created my own implementation of the Question class. Let's refactor the GetQuestionByID method so we completely lose the dependency on Entity Framework in my code:

string connectionString = "...";
SqlConnection con = new SqlConnection(connectionString);
SqlCommand com = con.CreateCommand();
com.CommandText = "select title, body, votes, views, DateCreated from Question where Id=@id";
com.Parameters.Add(new SqlParameter("@id", id));

Question q = new Question();
con.Open();
SqlDataReader dr = com.ExecuteReader();
if (dr.HasRows)
{
    dr.Read();
    q.Id = id;
    q.Title = dr.GetString(dr.GetOrdinal("title"));
    q.Body = dr.GetString(dr.GetOrdinal("body"));
    q.Votes = dr.GetInt32(dr.GetOrdinal("votes"));
    q.Views = dr.GetInt32(dr.GetOrdinal("views"));
    q.DateCreated = dr.GetDateTime(dr.GetOrdinal("DateCreated"));
    q.Answers = null; // we could change the query to retrieve the answers
}
dr.Close();
con.Close();
return q;

Ugly code, but it suits our needs. I guess this makes it easy to understand how you can use ASP.NET MVC without any sort of abstraction layer like the Entity Framework. We are simply using a query to get some sort of information and then using the result to hydrate a business object we defined in our code (the Question object). We could have changed the query to fetch items from a SQL View or to execute a stored procedure. We could have opted to direct our query to an Oracle server instead of SQL Server. Heck, we could do this without a database even, and read this information from an XML file or a WCF service.

To sum it up, your Model can be whatever you want. You can build objects that represent your data, or if you feel inclined to, you can simply pass a string like "field1;field2;field3" and then do the parsing on the View itself. Obviously, this is not recommended, as you should prefer strongly-typed Views to allow for a better experience and to benefit from the scaffolding templates included in Visual Studio.
http://blogs.msdn.com/b/nunos/archive/2010/02/04/quick-tips-about-asp-net-mvc-what-does-my-model-need-to-be.aspx
CC-MAIN-2015-48
refinedweb
981
53.71
found on GitHub. The aim is to free up developers to spend more time on business logic. The fact that it is open source is a key differentiator between it and AWS Lambda: you are not locked in to a vendor. You can also look at the milestones of the project and see the features being developed, as well as submitting your own as pull requests. It is currently an experimental offering in beta, and you can either install Vagrant and clone the repository:

git clone
cd openwhisk/tools/vagrant
vagrant up

Or get a free account with IBM Bluemix.

Runtimes

OpenWhisk supports NodeJS, Java 8, Python 2.7 and (unlike many competitors) Swift. If you want to use something else, say legacy C code, you can. Anything you can package as a binary command in a Docker file (the rule of thumb: a statement with standard input and standard output that would run in bash) can be managed. For more information on this, see the OpenWhisk tech talk.

How it works

Under the covers, OpenWhisk uses Kafka and Docker, among others. It is event driven: the application is structured so that events flow through the system and event handlers listen. There are four key concepts.

Action

An action is a stateless function which acts as an event handler. It should be short running: the default limit is 5 minutes, after which the function is disposed of. In order for the platform to know what to do with it, it needs to have a public main function that is the entry point, and it must return JSON. For example:

import com.google.gson.JsonObject;

public class HelloWorld {
    public static JsonObject main(JsonObject args) {
        // An empty JSON object is still a valid response.
        JsonObject response = new JsonObject();
        return response;
    }
}

Or in JavaScript:

function main(params) {
    return {payload: 'Hello ' + params.name};
}

To create the action:

wsk action create hello hello.js

The main function can call additional functions, but must follow the conventions to be picked up as the entry point. Importantly, actions can be chained together in a sequence, like a piping operation.
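The same action contract can be written in Python, one of the runtimes listed above. This is a minimal sketch rather than official sample code; the `name` parameter and the `'stranger'` fallback are my own choices for illustration:

```python
# An OpenWhisk-style action: a stateless entry point named `main`
# that receives a dict of parameters and returns a JSON-serializable dict.
def main(params):
    name = params.get('name', 'stranger')
    return {'payload': 'Hello ' + name}


print(main({'name': 'World'}))  # {'payload': 'Hello World'}
```

The returned dict plays the same role as the JsonObject in the Java version: it is serialized to JSON and becomes the activation result.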
This is a key feature if you want to reuse code. It allows you to reuse implementations of actions by composing them with other actions, so you can build up more powerful, general solutions. For example, you could call a news API with one action and filter it with another before returning the results as JSON. This allows you to move key processing away from the client. For more information on actions, including how to invoke them, see the documentation or the OpenWhisk tech talk.

Trigger

This is the name for a class of events; it might also be described as a 'feed'. However, in OpenWhisk a feed refers to a trigger along with control operations (such as starting or stopping a feed of events); a trigger is the events themselves. Triggers can be fired by using a dictionary of key-value pairs, sometimes referred to as the event. This results in an activation ID. They can be fired explicitly, or by an external event source via a feed.

To create a trigger:

wsk trigger create sayHello

To fire a trigger:

wsk trigger fire sayHello --param name "World"

See the trigger documentation.

Package

This refers to a collection of actions and feeds which can be packaged up and shared in an ecosystem. You can make your feeds and actions public and share them with others. An example is some of the IBM Watson APIs, which are exposed as OpenWhisk packages.

Rule

A rule is a mapping from a trigger to an action. Each time the trigger fires, the corresponding action is invoked, with the trigger event as its input. It is possible for a single trigger event to invoke multiple actions, or to have one rule that is invoked in response to events from multiple triggers.
To create the rule:

wsk rule create firstRule sayHello hello

It can be disabled at any time:

wsk rule disable firstRule

After firing the trigger you can check and get the activation ID:

wsk activation list --limit 1 hello

And check the result:

wsk activation result <<activation id number>>

To see the payload:

{
    "payload": "Hello World"
}

Use cases

Much like AWS Lambda, OpenWhisk is not going to fit every use case. However, for bots and mobile back-end solutions it is perfect. Having the option to use Swift as well as Java means it is an easy transition for many iOS mobile developers. Being able to abstract costly operations away from devices is perfect for filtering, and for avoiding making too many API calls from the client. Like AWS Lambda, the pain of setting up and maintaining infrastructure is nonexistent, and you don't have to worry about scaling. Serverless architecture will undoubtedly be a key player in the Internet of Things.

In terms of pricing, it's not that straightforward. OpenWhisk is free to get started with, but it's not that simple to find out how much it will cost you. It also depends entirely on the services you want to use.
https://www.voxxed.com/2016/09/serverless-with-openwhisk/
Here is a listing of C++ Programming quiz questions on "Input Stream", along with answers and explanations:

1. Which operator is used for the input stream?
a) >
b) >>
c) <
d) <<
Answer: b
Explanation: The extraction operator is >> and it is used on the standard input stream.

2. Where does cin stop its extraction of data?
a) On seeing a blank space
b) On seeing (
c) Both a & b
d) None of the mentioned
Answer: a
Explanation: cin will stop its extraction when it encounters a blank space.

3. Which is used to get the input during runtime?
a) cout
b) cin
c) coi
d) None of the mentioned
Answer: b
Explanation: cin is mainly used to get the input during runtime.

4. What is the output of this program?

#include <iostream>
using namespace std;
int main()
{
    int i;
    cout << "Please enter an integer value: ";
    cin >> i + 4;
    return 0;
}

a) 73
b) your value + 4
c) Error
d) None of the mentioned
Answer: c
Explanation: We are not allowed to apply an addition operation to the target of cin's extraction.

5. What is the output of this program?

;
}

a) 50
b) Depends on the value you enter
c) Error
d) None of the mentioned
Answer: b
Explanation: In this program, we get the input at runtime and manipulate the value.
Output:
$ g++ inp.cpp
$ a.out
Enter price: 3
Enter quantity: 4
Total price: 12

6. What is the output of this program?

#include <iostream>
#include <ios>
#include <istream>
#include <limits>
using namespace std;

template <typename CharT>
void ignore_line(basic_istream<CharT>& in)
{
    in.ignore(numeric_limits<streamsize>::max(), in.widen('\n'));
}

int main()
{
    cout << "First input: ";
    cin.get();
    cout << "Clearing cin.\n";
    cin.clear();
    ignore_line(cin);
    cout << "All done.\n";
}

a) First input
b) Clearing cin
c) Error
d) None of the mentioned
Explanation: In this program, we get the input and then clear the stream.
Output:
$ g++ inp1.cpp
$ a.out
First input: 4
Clearing cin.
All done.

7. What is the output of this program?

#include <iostream>
using namespace std;
int main()
{
    char line[100];
    cin.getline(line, 100, 't');
    cout << line;
    return 0;
}

a) 100
b) t
c) It will print what we enter until the character t is encountered in the input data
d) None of the mentioned
Answer: c
Explanation: The program stores the strings entered and prints them only up to the point where the character 't' is encountered.
Input >> coding
Input >> is fun
Input >> t
Output: coding is fun

8. How many parameters are there in the getline function?
a) 1
b) 2
c) 2 or 3
d) 3
Answer: c
Explanation: There are two or three parameters in the getline() function: a pointer to an array of characters, a maximum number of characters, and an optional delimiter.

9. What can be used to input a string with blank spaces?
a) inline
b) getline
c) putline
d) None of the mentioned
Answer: b
Explanation: If a user wants to input a sentence with blank spaces, then he may use the function getline.

10. When will cin start processing of input?
a) After pressing the return key
b) By pressing blank space
c) Both a & b
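Questions 7 and 8 revolve around delimiter-based extraction. The same stopping behaviour can be demonstrated in a testable way with std::getline on a string stream; std::istringstream stands in for cin here purely so the input can be hard-coded, and the sample text is made up:

```cpp
#include <iostream>
#include <sstream>
#include <string>

// Extract characters from `input` up to (but not including) `delim`,
// mirroring how cin.getline(buf, n, 't') stops at the delimiter.
std::string read_until(const std::string& input, char delim) {
    std::istringstream in(input);
    std::string result;
    std::getline(in, result, delim);  // the delimiter is consumed, not stored
    return result;
}
```

For example, `read_until("coding is fun today", 't')` yields `"coding is fun "`, since extraction stops at the first 't'.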
http://www.sanfoundry.com/cplusplus-programming-quiz-input-stream/
All The first snapshot release of the LSB-VSX test suite for the POSIX.1 aspects of the Linux Standard Base is now available for download from: This version of the VSX-PCTS has been setup to autoconfigure on Linux systems. A front end script install_wrapper.sh is used to auto install, setup and run the test suite. In theory this should allow running of the test suite by those unfamiliar with POSIX.1 and its myriad of options and thus the associated test suite configurables. I include the release notes below. If you encounter a problem in the suite, please send in a bug report directly to me at ajosey@opengroup.org, cc lsb-test plus many others involved in this packaging of the POSIX test suite for Linux. Our next steps are now to expand our test coverage to other areas outside the core POSIX. ---------------------------------------------------------------- LSB-VSX Release 1.0-1 Release Notes This document provides Release notes for the Verification suite for the POSIX.1 coverage of the Linux Standard Base. 1. Release Overview LSB-VSX 1.0-1 is the first release of this test suite for the POSIX.1 coverage for the Linux Standard Base. LSB-VSX is built using the VSXgen (the generic VSX test framework), with the VSX-PCTS. 1.1 Changes since the last release The following changes have been made since the last release: This version of the VSX-PCTS has been setup to autoconfigure on Linux systems. A front end script install_wrapper.sh is used to auto install, setup and run the test suite. 1.2 Status A few bugs were found in the suite during the port which were verified with the official VSX-PCTS support team and fixed; for example the namespace test tool did not understand GNU header files correctly (#include_next and other features). Also a bug shown up by fclose() behavior in glibc was fixed. 
Pseudo-ttys versus real ttys: the tests for the tty subsystem run best when using real tty ports. However, since many test configurations may not have two tty ports and a loopback cable available, we have enabled the tests to run using pseudo-ttys. This does mean that users will encounter more failures when running with pseudo-ttys than with real tty ports, because pseudo-ttys are unable to emulate all the hardware characteristics of real ttys. The list of expected failures is given with the release in the FAILURES.VSX4 file. The remaining failures have been categorised as Linux non-compliance issues, specification issues and test suite issues.

1.2.1. Linux compliance issues

Kernel-level problems:

- Incorrect errno values being set (e.g. dup2, unlink).
- Header file inconsistencies between the limits in limits.h and the kernel returns from sysconf() (e.g. LINK_MAX).
- For the tests related to mount points, the system is setting errno to the wrong value when an attempt is made to remove a directory that is a mount point. Since the directory exists, ENOENT is not appropriate. The calls should be setting errno to EBUSY (if they fail).
- Kernel inconsistency? For directories in use by another process, it seems that rename() gives EBUSY when it attempts to remove this type of busy directory, but remove() and rmdir() can successfully remove them. (Otherwise rmdir 7 and remove 7 would show two failures, the same as in rename 13.) This is a strangely inconsistent implementation. Every implementation we have encountered so far has treated all attempts to remove a busy directory in the same way, regardless of which system call is used. That is why VSX4 only has the one VSX_REMOVE_DIR_EBUSY parameter for all the calls. One opinion is that this difference in behavior is a bug in the implementation; however, it isn't a non-compliance.

glibc-related problems:

- Mostly header file problems (visibility of function prototypes, etc.), expected to be fixed in later releases.
- Locale-related failures need to be investigated further; experiments on glibc 2.2 appear to have resolved these, so they could be a test locale setup problem.

Some of the above problems may be fixed in 2.4.0, and many are reportedly fixed in glibc 2.2, to be released soon.

1.2.2. Specification issues

- tmpfile() ignores umask. There is some debate at POSIX whether this is a good idea or not; at the moment the specification requires tmpfile() to honour umask.
- time.h is not explicitly allowed to be included in signal.h. This is expected to be resolved in favour of the current implementation.

1.2.3. Issues identified with VSX-PCTS

These are mainly setup related, in areas that are allowed to change for each operating system. Two tests fail because of the way the test suite is set up (for ease-of-use reasons, allowing us to switch between suites):

(a) getlogin() does not match the vsx0 user.
(b) The home directory of the vsx0 user is not the same as $HOME.

These would pass once the test suite is set up correctly.

Possible test locale setup problem (VSX4L2) under investigation (see the glibc problems listed before). At the moment we are unable to build, with localedef on glibc 2.1 systems, the locale that has passed the tests under glibc 2.2.

2. Release Contents

The release consists of the following files:

LSB-VSX-1.0-1.tar.gz  The source code to the suite in gzip'd tar format
vsxgde.ps             The Users Guide in PostScript
vsxgde.pdf            The Users Guide in PDF format
relnote.txt           These release notes

Two files are included with the release to ease installation and test setup: INSTALL.SETUP and INSTALL.VSX4. Please follow the instructions within these to install, configure and execute the tests. For more in-depth information, consult the User's Guide, which provides comprehensive information on the installation, configuration and use of this test suite.

5.
Support

Support related questions and problem reports should be directed by email to ajosey@rdg.opengroup.org.

The suite takes six to eight hours to run, depending on the speed of the platform under test. The signals tests are known to take several hours to run.

Licensing information: See the Licence file included in the distribution for licensing information. At the moment this is the Artistic License. The Open Group intends to change the license to that recommended by the Linux Standard Base team.

-----
Andrew Josey                        The Open Group
Director, Server Platforms          Apex Plaza, Forbury Road,
Email: a.josey@opengroup.org        Reading, Berks. RG1 1AX, England
Tel: +44 118 9508311 ext 2250       Fax: +44 118 9500110
https://lists.debian.org/lsb-test/2000/07/msg00000.html
Red Hat Bugzilla – Full Text Bug Listing

Running ksvalidator without any parameter on rawhide gives a traceback; the package is pykickstart-1.49-1.fc11.noarch.

[dan@localhost ~]$ ksvalidator
Traceback (most recent call last):
  File "/usr/bin/ksvalidator", line 61, in <module>
    op.print_help()
  File "/usr/lib64/python2.6/optparse.py", line 1648, in print_help
    file.write(self.format_help().encode(encoding, "replace"))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc4 in position 134: ordinal not in range(128)

The same in F-10 with pykickstart-1.47-1.fc10.noarch:

[dan@eagle cdcollect]$ ksvalidator
Traceback (most recent call last):
  File "/usr/bin/ksvalidator", line 61, in <module>
    op.print_help()
  File "/usr/lib64/python2.5/optparse.py", line 1655, in print_help
    file.write(self.format_help().encode(encoding, "replace"))
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc4 in position 134: ordinal not in range(128)

I am running with LANG=cs_CZ.UTF-8.

Lovely. An ancient issue.

Created attachment 329149 [details] Patch for ksvalidator in pykickstart 1.50

This patch fixes the issue in an ugly way. I'd rather that it be fixed in Python directly though.

Created attachment 329151 [details] Better ksvalidator patch

Okay, after doing some reading I've decided that this isn't a Python bug after all, but rather a bug in how gettext is being used.

I'm suspicious given that we have hacks in yum to work around optparse blowing up in the same way ... also, using install() means that _ is put in the global namespace (pointing to the ksvalidator domain) ... is that valid? Also, _() is now returning unicode and not str() as it did before the patch; that could be a huge change for the rest of the code. On the other hand, the hack in yum to make it work is hella ugly and _() returns unicode in yum ... and getting optparse fixed is unlikely, so have fun Dan :)...

This bug appears to have been reported against 'rawhide' during the Fedora 11 development cycle. Changing version to '11'.
More information and reason for this action is here:

This patch makes me a little nervous, but the worst that can happen is it breaks other things and I have to back it out. Let's see what happens.
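The failure mode in the tracebacks can be reproduced outside optparse. Under Python 2, calling .encode() on a byte string first implicitly decodes it with the ASCII codec; the translated help text gettext returned was UTF-8-encoded bytes, so that implicit decode blew up. A minimal sketch (written for Python 3, using an explicit decode to stand in for Python 2's implicit one; the example string is an assumption, though 0xc4 is indeed the first UTF-8 byte of the Czech letter "č" from the cs_CZ translation):

```python
# "č" (U+010D) encodes to the two bytes 0xc4 0x8d in UTF-8; 0xc4 is the
# byte the traceback above complains about.
translated_help = "\u010d".encode("utf-8")
assert translated_help == b"\xc4\x8d"

# Python 2's str.encode() implicitly did the equivalent of this ASCII
# decode first, which is where the UnicodeDecodeError came from:
try:
    translated_help.decode("ascii")
    raised = False
except UnicodeDecodeError as exc:
    raised = True
    offending_byte = exc.object[exc.start]

assert raised
assert offending_byte == 0xC4

# With gettext handing back unicode objects instead of byte strings
# (the approach the patch takes), there is no implicit ASCII step and
# the text round-trips cleanly:
assert translated_help.decode("utf-8") == "\u010d"
```

This is why the fix belongs in how gettext is configured rather than in optparse: once _() returns unicode, optparse's encode() call never has to guess a codec for raw bytes.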
https://bugzilla.redhat.com/show_bug.cgi?format=multiple&id=479519
We are about to switch to a new forum software. Until then we have removed the registration on this forum.

The surface thing works in Java, but when I do the same thing in Python, it says "global name surface is not defined." If anyone knows the solution for this, please help me out!

void setup() {
  size(400, 400); // size always goes first!
  surface.setResizable(true);
}

Answers

You can also shoot your luck and beg for both surface & getSurface() to be directly accessed w/o this. here:

Ah, while you're at it, request the same for getGraphics(), sketchFile(), dataFile(), dataPath(), and args[] too. :P

Amazing! Thank you!!!! But what are these: getGraphics(), sketchFile(), dataFile(), dataPath() and args[]? :(

Got it! You are such a nice guy. Thanks a lot!

Thanks for the info GoToLoop! I was wondering if this works when using the PDF renderer? I'm outputting large PDF files from Processing and would like to dynamically set the PDF size after I've done some calculations on the input files. I'm getting the following error when I try to include the frame.setResizable(True). I'm using Processing 3.3.6 btw.

AttributeError: 'NoneType' object has no attribute 'surface'

Thanks in advance for any info you might have!

@pxlmnkeee, any PApplet public member not yet available in globals() can be accessed by prefixing it w/ this.. :-B You can see in my 1st reply example, I can access the non-globals() member getSurface() either w/ this.getSurface() or this.surface. :ar!

Also, you can import any Processing classes not yet available in globals() via from package.name import class. For example, we can import the non-globals() class IntList like this: :bz

Or even something more "obscure" like the PGL class: >-)

Finally got a chance to try this, thanks again for the knowledge GoToLoop! Well, I added this to the surface call, but the PDF output is still at the initial size set in the size command. Sorry if this is obvious, still a Processing newbie. THANKS again for any additional info!
:D

@pxlmnkeee, as explained at the 1st comment of class PSurfaceNone:

As we can attest inside class PGraphicsPDF at its method createSurface():

It doesn't seem the PDF renderer can be resizable at all. :(

OK, it didn't seem like I could get the PDF to resize, so I'm glad you came to the same result GoToLoop. Thanks again for your time and helpful info!
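For readers landing here later, the thread's advice boils down to a sketch like the one below. Note this is Processing Python Mode code, not plain Python — it only runs inside the Processing PDE with Python Mode installed, and the this-prefix is exactly the workaround discussed above:

```python
# Processing Python Mode sketch (will not run in a plain Python interpreter).
# In Python Mode, `surface` is not injected into the sketch's globals(),
# so it is reached through `this`, the sketch's own PApplet instance.

def setup():
    size(400, 400)  # size() always goes first!
    this.surface.setResizable(True)          # works
    # this.getSurface().setResizable(True)   # equivalent alternative

def draw():
    background(255)
```

As established at the end of the thread, this has no effect with the PDF renderer, whose surface cannot be resized.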
https://forum.processing.org/two/discussion/16721/surface-doesn-t-work-in-python-mode
22 June 2011 20:50 [Source: ICIS news] LONDON (ICIS)--German bioethanol producer CropEnergies expects its fiscal 2011/2012 sales to increase by up to 21%, it said on Wednesday. CropEnergies's optimistic outlook came even as Germany's drivers remain unwilling to accept 10% bioethanol blended gasoline (E10), which has been approved for sale at the country’s pumps since 1 January. CropEnergies said sales for the fiscal year ending 29 February 2012 should reach up to €570m ($826m), compared with €473m in the 2010/2011 fiscal year. Sales for the first three months ended 31 May 2011 were up by 41% to €132m, mainly due to higher plant utilisation rates, CropEnergies said. The company is one of
http://www.icis.com/Articles/2011/06/22/9471924/german-cropenergies-targets-21-sales-growth-despite-e10-woes.html
Trump: God’s Chosen Candidate?

Many of Trump’s Evangelical supporters hail him as a kind of savior, likening him to Queen Esther or King Cyrus. They maintain that he saved the country from the tyranny of Hillary and the Democrats by coming to protect the soul of our country during its hour of need. It all sounds completely ludicrous, right? I scoffed at these laughable quotes as well. Trump is the antithesis of everything Christians believe in. He is a criminal, a racist, and a misogynistic philanderer. He is an incompetent, corrupt, greedy buffoon. He completely lacks decency, humility, empathy, and intelligence, which are traits one needs in order to be a good leader. He is a pathological liar, a man incapable of telling the truth, and when confronted with irrefutable evidence that he lied, he doubles down and lashes out, screaming “FAKE NEWS!”, or worse, says or does something even more outrageous in an attempt to distract. Most of his policies are rooted in hatred and ignorance, and he refuses to listen to evidence to the contrary, no matter how overwhelming and damning that evidence may be. Trump is, by any measure, a deplorable excuse for a human being. He is precisely the kind of person that Christians point to as an example of the worst ills of humanity, the sinner who will burn in Hell for eternity. According to Christian theology, he can’t even be a true Christian because he’s never asked God for forgiveness. Surely, no God would choose such a man to be our president, would they? After my initial reaction of laughing, I started to examine this claim in more detail. Maybe there was a degree of truth to it in some strange way. I am agnostic, preferring spirituality to faith or religion, but I believe that everything happens for a reason; whether it is the work of God or Gods or simply fate, I do not know, but I believe a reason exists nonetheless. We aren’t always meant to know those reasons, and even when there’s something to be learned, it isn’t always easy to discern.
So I asked the question: could there be a lesson we need to learn from Trump being president? The first answer I came to was somewhat superficial, but I don’t think it’s necessarily wrong. Given his record support among the Evangelical community and his inability to tell even the simplest truth, many, myself included, view him as a sort of false prophet. When I first started examining this question, I began to wonder if Trump’s candidacy could have been a test; could God or fate be testing people to see if they can recognize a false prophet? Within the texts of most religions are parables that serve as warnings about the dangers of false prophets, so I don’t see this as very far-fetched. There may be some truth to this, but I don’t think it’s the real answer to my question, instead being an unrelated secondary reason. It’s too narrow an answer to be applicable to the country as a whole. Trump’s candidacy and presidency do exist for a reason, but I’ve come to realize that it’s one few people realize because it has to do with the uncomfortable and painful truths we work so hard to suppress and bury. In short, Trump is president because it’s a chance for us to come face to face with the fact that our country’s entire history is one of darkness, greed, hatred, ignorance, arrogance, narcissism, and violence. He ran on the promise to “Make America Great Again”, but he is actually a mirror for our country. He shows the worst parts of who we are and have been, particularly those things we still avoid coming to terms with. Having such a prominent example of the worst ills of our country lead it may seem counterintuitive, but I believe this happened because these are things we cannot continue to ignore. We must face these difficult truths so that we can finally make our country and the world a better place. So the question now is, what terrible misdeeds and traits of our country does Trump represent? 
Given the countless number of terrible, dark stains on our country’s history, a full accounting of how Trump represents them could easily fill an entire book. However, given the kind of person and president Trump is, there are three particularly egregious areas that cannot go unnoticed: Hatred and Racism, Greed, and Reliability.

1. Hatred and Racism

The first area, hatred and racism, is perhaps the most obvious. Trump is indisputably a racist, and his rhetoric proves it. He has demonized immigrants since declaring his candidacy (remember him calling them “rapists and thugs”?). He called African countries “shitholes”. He was one of the most ardent proponents of the racist Birther conspiracy. Former aides, as well as his former lawyer Michael Cohen, have noted that he regularly denigrates and mocks minorities, even using vile racial slurs. Even before he ran for office, he was known to be a racist; in the 1970s, both he and his father were sued for violating the Fair Housing Act in their discriminatory housing practices, which they later settled (though they refused to admit any wrongdoing). Their racism was so vile and blatant that folk singer Woody Guthrie, who once rented an apartment from them, wrote a song about it. Trump doesn’t stop at just being racist, however. He also pushes policies that are anti-LGBTQ and misogynistic. He has pushed for a ban on transgender people in the military (which is still working its way through the courts as of this writing). His administration has undone rules and regulations put in place by Obama to protect LGBTQ people in the workplace and in healthcare. His HUD secretary, Ben Carson, is currently pushing through a regulation to ban transgender people from being able to use federally funded shelters.
He’s supportive of virtually eliminating abortion access (despite the fact that he was pro-choice for his entire life until he declared his candidacy in 2015, something that should have raised a red flag with his supporters) and is on record as saying he will not nominate any judge unless they are anti-abortion. He is on tape bragging about sexually assaulting women, and has also been proven to have paid off at least two women during the 2016 election. These are just a few of the ways in which Trump is a hateful, bigoted, racist, misogynistic man; a full accounting would likely fill a book or two. But it cannot be said that our country as a whole is necessarily much better. Things have improved immensely over the past few decades, but under Trump, we’ve actually taken several steps back. Racists and white supremacists have been emboldened by his hateful rhetoric and refusal to call them out, which has led to a spike in hate crimes since the election. Islamophobia is once again becoming mainstream, as is anti-LGBTQ sentiment. Trump hasn’t made America great again; he’s simply made it okay to hate again. Given his propensity for hatred, how can we not see the parallels between Trump and our country’s history of racism and hatred? He is the symbol of the truth we have tried to ignore and bury for so long. We like to pretend that racism and sexism are relics of the past, but they’re still very much part of our society. Innocent black people are still being pulled over simply for driving while black and are still being murdered by our police. A massive wage gap still exists for minorities and women (and particularly for women of color). Women are still subjected to a great deal of sexual harassment in the workplace (though this appears to be changing, albeit slowly, thanks to the #MeToo movement). LGBTQ people still have no guarantee that they won’t be fired, that they’ll be able to get healthcare, or even that they’ll be able to buy a wedding cake.
The rights and sovereignty of Native Americans are under attack once again as corrupt politicians (including Trump himself) are pushing to build an ill-conceived oil pipeline through sacred land. All of these deplorable things and more continue to happen every single day because we refuse to address the truth. We continually refuse to come to terms with the hateful, racist deeds of our past, which guarantees they will continue to be repeated. This isn’t about atonement or making reparations, but about learning both why things are wrong, and how they continually lead to problems within society. America today is still rife with systemic racism and sexism, and all of it can be directly traced to our refusal to actually admit that we were wrong or truly understand the consequences of our actions. Think about the major steps our country has taken in the fight for civil rights. The abolition of slavery, granting citizenship to African-Americans, the right to vote being granted to non-whites and then to women, the striking down of anti-miscegenation laws and the end of legally sanctioned segregation. All of these came only after a long and difficult fight. Blood was shed and lives were lost to move our country forward. But that movement was slow and begrudging. Yes, over time, the country largely accepted that things had changed, but that doesn’t mean we actually accepted the change itself; rather, many simply accepted that continued fighting was pointless. Those opposed to these changes continued to be opposed. Few people actually changed their views. As a result, society at large never actually addressed the underlying problem or even admitted it was wrong; this is why racism, sexism, homophobia, and transphobia are still so embedded in our society. We accepted these things and normalized them because we never admitted they were wrong. Trump is the embodiment of this problem for three distinct reasons. First, he is without a doubt a racist. 
However, I do not believe his racial views are born from malice; instead, they are born of ignorance, which is the second reason. Trump is a remarkably ignorant and dim human being. He cannot grasp simple concepts, and his vocabulary level is among the lowest of any president in history. This in turn creates the third reason, which is that because he is so astonishingly ignorant, he has never in his life admitted he was wrong. Because he cannot admit he was wrong about something, Trump is thus incapable of learning from his mistakes, which makes it certain he’ll continue to make them. Sound familiar? It should. This is the cycle we’ve been trapped in with regards to racism for our country’s entire history. Racism, sexism, homophobia, and transphobia, whether they are born of ignorance or malice, aren’t questioned by the masses, which leads to ignorance about whether they are right or wrong. This in turn leads to an inability to admit that such things are wrong, meaning we can’t learn from the mistake. Not learning from mistakes guarantees that they will happen again. Many even defend such views because they were taught not to question them; to quote Thomas Paine, “A long habit of not thinking a thing wrong, gives it a superficial appearance of being right, and raises at first a formidable outcry in defense of custom.” Trump is a pertinent example of this because he was quite literally taught to never admit he was wrong, and his narcissism and ego make it impossible for him to even admit it to himself. He holds the same archaic and ignorant attitudes towards race, gender, and sexuality that continue to hold our society back from achieving true equality. He shows how so many people in this country refuse to admit they are wrong, even when faced with overwhelming evidence, facts, and science. He is showing us exactly the mentality we must get rid of if we ever hope to create a more just and equal world. 2. 
Greed

The second flaw Trump represents is perhaps the most obvious. It is impossible to look at his history as a businessman, real estate magnate, and casino mogul without practically seeing his name as a synonym for greed. While we can’t fault him for wanting to be profitable in his ventures, we can and should fault him for his lifelong lust for money and power. Trump has always bragged about his success as a businessman, but his record shows a man who failed at nearly every venture he engaged in. He constantly entered into grandiose schemes designed to bring him vast amounts of wealth, and when they inevitably failed, he blamed others and was bailed out by his father. Many of his businesses failed so badly that their names are now used as jokes and pejoratives, such as Trump Steaks, Trump Vodka, and the Trump Shuttle. We now know that much of his wealth was derived from fraud, which is why he has refused to release his tax returns. The New York Times examined the few returns of his that are accessible, as well as decades of his fathers, and found a level of fraud and theft rarely seen. Some of it was actually legal, such as writing off his substantial losses on his taxes; from 1985 to 1995, he accrued nearly $1 billion in losses, and due to a quirk in the tax code passed in 1995 (which was, to Congress’ credit, fixed quite rapidly), he was able to write them off and avoid paying taxes. Under the 1995 tax code, he was legally able to do this each year for as long as there were losses to write off, and given that his 2005 return (the next year available) does not have these losses listed, it isn’t unreasonable to assume he didn’t pay taxes for as much as a decade. This was, however, just the tip of the iceberg.
Most of his father’s bailouts (which total more than $400 million in today’s dollars) came in the form of illicit tax schemes and illegal gifts; one of the more notorious examples was his father having an associate buy several million dollars’ worth of chips at one of Donald’s casinos without using them or cashing them in, which actually resulted in a hefty fine for violating New Jersey gaming laws. The New York Times investigation found that Trump also repeatedly cheated on taxes by intentionally undervaluing his assets, in some cases by tens of millions of dollars. All told, the Times estimated that Trump and his family defrauded both the Federal government and the state of New York out of hundreds of millions of dollars over the course of decades. As if such blatant fraud wasn’t damning enough, Trump also tried to steal from his father, the same person who repeatedly bailed him out of financial trouble. As The New York Times reported, Trump quite literally tried to force his father to rewrite his will to better benefit himself. It seems like something out of a movie, but it actually happened. Moreover, the family was still able to ensure that Fred’s vast fortune was passed in a way that avoided the inheritance tax, and at least some of the tactics used appear to have been illegal. This is a level of greed and selfishness rarely seen. Sadly, our country’s history is mired in an even worse level of greed. This country was founded on land that was already inhabited by Native Americans who had lived here for thousands of years. The country expanded by taking that land from them, usually by force. Millions of Natives were slaughtered simply because we wanted the land they lived on. Lust for land and physical resources hasn’t been limited to just this continent, however; we have invaded numerous countries in just the last century PURELY because they had some resource we wanted.
There have been two wars in Iraq in my lifetime, and it is widely believed that the true motivation of both was oil (at the very least, it is now common knowledge that George W. Bush lied about the true motivations of the second Iraq War). When the monarchy was reinstalled in Iran in 1953 during a coup d’état orchestrated by both the US and Britain, we openly admitted that it was about maintaining access to their oil reserves. During the 1960s through the 1990s, we backed multiple coups and insurgencies throughout South and Central America, mostly under the guise of combatting Communism, but in actuality to install dictators that would be more amenable to our interests. The numerous invasions we’ve committed and coups we’ve backed have cost millions of lives, both American and foreign, both during the invasion and in the years following due to the brutality of the governments we install. However, we’ve done just as much damage by cozying up to dictators already in place. Brutal African warlords, who have committed some of the worst human rights abuses imaginable, control many of the mines from which we get precious metals and gemstones, but we continue buying from them nonetheless. Most rare earth metals, which are used in many things from common electronics to specialized scientific equipment, are currently produced by China, whose human rights record speaks for itself, yet we continue buying them. Our country’s hands are stained a deep shade of red from the blood of the dead and suffering because we care more about the almighty dollar than a human life. Our own citizens suffer needlessly because our government can’t be bothered to help them. Greed has sapped us of our ability to empathize, which means we willingly turn a blind eye to those who are most vulnerable and most in need. Tens of millions of people in this country are living at or below the poverty line, struggling to afford shelter, food, and medicine. Hundreds of thousands sleep on the streets every night.
Veterans, thousands of whom are homeless, have great difficulty getting even routine healthcare, despite being promised coverage by our government. Our entire economic system has long been designed for enriching and empowering the wealthy at the expense of everyone else. We’ve had these problems for most of our country’s history, but under Trump’s leadership, these policies have proliferated and worsened. Greed is more blatant now than it has been in decades. The growth and wealth created by his new tax code was concentrated almost entirely among billionaires. His administration has rolled back regulations on banks, credit card companies, and payday lenders that protect consumers. The wealth gap has widened and wage growth has largely stagnated. The majority of middle class taxpayers saw substantial net decreases in their tax refunds this year, yet corporations and the ultra-wealthy enjoyed massive tax cuts, with many paying nothing. With so much pandering to the wealthy and corporations, many are now suggesting we’ve entered a “New Gilded Age”, and this probably isn’t far from the truth (though there are fortunately still many laws and regulations preventing the kind of abuses we saw during the original Gilded Age of the late 19th century). Trump caused this latest surge of wealth to the top, but he’s also the symbol of the greed that has plagued our country from the beginning. He was a millionaire before he could walk, and was raised with the sense of entitlement that is so common among the wealthy. He believes that if he wants something, he has a right to have it, and to hell with what anyone thinks. He believes that rules and consequences, and even the law, do not apply to him. The only things that matter to him are his own ego and finding ways to enrich himself, particularly at the expense of others. This mentality is a perfect description of how our country’s economy and foreign affairs have worked for over 200 years.
Ironically, Trump’s numerous and spectacular failures are equally symbolic of our country’s obsession with greed. Most of his business ventures were abject failures, and many have suggested that he would have made more money had he simply invested his inheritance in index funds instead of using it for his own business ventures (there’s a lot of debate on this, but no conclusive answer because it depends on how you run the math; additionally, it’s hard to compare because we still don’t know his true net worth due to his refusal to release his tax returns, though available records suggest it to be around $2.3 billion). His businesses have declared bankruptcy at least six times, and no bank in the United States will loan him a penny because of his appalling business record. He has been involved in thousands of lawsuits for things ranging from failing to pay people to cheating people to devaluing property and assets all the way to flagrantly violating labor and civil rights laws. Yet, despite this, he was able to continue doing business and is still doing business even now (and yes, he is still doing business even while president; he may not be overseeing day-to-day operations, but he has never divested himself from his companies, meaning he is still involved to at least some degree and still making money). The survival of Trump as a businessman despite so many abject failures shows how our country has always rewarded the wealthy and powerful, even when they fail. Trump should have lost everything on numerous occasions, but he was bailed out every time. Sometimes it was his father, sometimes it was banks giving him money he didn’t deserve, and on many occasions, it was taxpayers, but he was never allowed to truly fail. As such, he’s never learned the powerful lessons that can only come with failing, but neither have we. We continually allow the wealthy to lie, cheat, steal, and abuse their power in every way imaginable, but we don’t do anything to stop such behavior.
Even when we have, such efforts inevitably get undone; after the 2008 recession, President Obama helped create the Consumer Financial Protection Bureau and passed rules and regulations designed to help rein in some of the abuses, but Trump has worked tirelessly to undo all of this. The CFPB still exists, but it has done virtually nothing since he came into power (and has in fact dropped several investigations) and he has rolled back dozens of regulations that were specifically designed to protect consumers. Unfortunately, this isn’t even new; this is a vicious cycle that has happened time and time again. Greed has been a constant, toxic companion since before our country was founded. One cannot look at Trump without seeing greed. Given our country’s long obsession with greed, it almost makes sense for such a man to be so prominent because we must come to terms with our own history of greed. We cannot continue to ignore the stains of blood and the terrible deeds we’ve done in the name of enriching ourselves. Just as with racism, we cannot create a more equal and just society until we stop giving into our lust for power and wealth. Having such an undeniable example of this right before our eyes might just be what we need to finally come to terms with our dark past.

3. Reliability

Reliability is perhaps the least obvious of these points, but it is no less important. For the purposes of this essay, I am referring to two kinds of reliability: the ability to keep a promise, and the ability to tell the truth. The first category, keeping promises, is not quite as obvious, but it’s quite possibly the more important of the two. Our country’s history is riddled with broken promises, and Trump is the perfect symbol of this because his own history is one of total unreliability.
During the campaign, he styled himself as a master deal maker, and most of his books (most notably “The Art of The Deal”) go out of their way to praise his supposed skill, but most of the deals he was involved in throughout his career were terrible by any objective measure. More to the point, he has reneged on more deals than anyone could begin to count. Trump’s legal history alone shows how incapable of keeping even a simple promise he is. Analysts have found that he and his companies have been involved in no less than 3,500 lawsuits during his career, most of which involved him reneging on a deal. Hundreds of these were from employees and contractors that he hired and then didn’t pay. Dozens more were for not paying companies, local and state governments, and even his own legal bills. He settled several high-profile cases involving investors and tenants who had put down deposits for units in planned buildings that were never built that he refused to reimburse. In many of these cases, Trump angrily filed spurious countersuits in an attempt to bully those he cheated into backing down, and most of them were settled for undisclosed terms (which proves his claim that he “never settles” to be a flagrant lie). Perhaps one of the most legendary (yet seemingly unknown today) examples of Trump’s willingness to break a deal occurred when he built Trump Tower in New York City. The lot the tower is on was previously occupied by a Bonwit Teller store, and the building was renowned around the city for its architecture and its two beautiful limestone reliefs. When Trump bought the building in 1980 with the intention of demolishing it to build his tower, he promised to save the limestone reliefs and donate them to the Metropolitan Opera. But in an act of petty vengeance towards all of those who had tried to keep him out of Manhattan, he broke the deal. The reliefs were destroyed with jackhammers. 
He later made excuses about it being too expensive to remove them, but it is widely known in New York that this is a lie. Unsurprisingly, Trump has brought this same sense of pettiness about deals to the presidency. He has pulled out of numerous deals since becoming president (and in fact campaigned on doing exactly this), such as the Paris Climate Change Agreement and the Iran Nuclear deal (and in a truly baffling display of pettiness and idiocy, slammed Iran for announcing their intention to enrich uranium above the levels set by the deal over a year after he had already pulled out of the deal), for absolutely no legitimate reason. He has also withdrawn us from the Trans-Pacific Partnership and NAFTA trade agreements, the UN Human Rights Council, and has threatened to leave the International Criminal Court in The Hague, and has given no viable reason for any of these moves aside from crying about “political bias” and how “unfairly” we’ve been treated. These moves, which have no logic behind them other than pettiness, have shocked the country and the world, and have made us fundamentally less safe. Geopolitical tensions have skyrocketed, our relations with even our closest allies have soured, and the consequences of climate change may already be irreversible. Withdrawing us from so many deals and treaties also makes it difficult for us to enter into deals in the future because countries can no longer trust that we will stay in a deal after a new president comes into office. Why should any country sign a deal with us when a future president could undo it with a stroke of their pen? Sadly, questions about our reliability are nothing new. As with racism and greed, even a cursory look at our history shows that we have no qualms about breaking a deal if it suits us. Some of the violations are obvious and well-known, such as the Bush-era torture program being in blatant defiance of the Geneva Conventions. 
Others are practically unknown today, but speak volumes about how unreliable our foreign policy has always been. A prominent example is our violation of the Mallarino-Bidlack treaty during the construction of the Panama Canal; the treaty, signed decades before, granted us certain transit rights in what is now Panama (it was still part of Colombia, then New Granada, at the time) in exchange for promising to help suppress any attempts at independence. When Panama declared independence from Colombia in 1903, we backed them, despite promising to do precisely the opposite. Moreover, we actively stymied Colombia's attempt to quell the rebellion by slowing down their troops. We willfully broke a treaty simply because building the canal was easier if we negotiated directly with Panama.

Unfortunately, the most powerful and terrible example of our unwillingness to respect our own deals didn't happen abroad. It happened as our country forced Native Americans off of their ancestral lands. As the United States expanded westward, there were countless confrontations between settlers and tribes, some of which actually erupted into wars. Until 1871 (when the US Code was amended to no longer recognize tribes as sovereign entities), we entered into hundreds of treaties with various tribes (the exact number is somewhat disputed, though an exhibit at the National Museum of the American Indian found at least 370 ratified treaties) to end such confrontations and open up new lands for settlement, and we violated or broke every single one of them in some way. Every. Single. One. Think about that for a second. Out of nearly 400 treaties with Native Americans (some sources say there have been more than 500), there isn't a SINGLE ONE that we didn't break. Untold thousands died because we decided to ignore treaties. In at least one instance, we even willfully violated a Supreme Court ruling; in Worcester v.
Georgia, the Supreme Court held that Native American tribes were sovereign entities, meaning that only the federal government could engage in any kind of negotiation with them and that they were outside the jurisdictions of states (the case was specifically about a Georgia law prohibiting non-Natives from being on Native land). President Andrew Jackson willfully and intentionally ignored this ruling and forcibly relocated Native Americans in what became known as the Trail of Tears. With such a vile history of unreliability and violence, it’s nothing short of miraculous that any country is willing to sign any kind of deal with us. Unfortunately, an inability to keep a promise or adhere to a deal is only part of the problem Trump symbolizes. As I noted in the beginning of this section, reliability also refers to being able to trust that what someone says is truthful. Trump is, to quote Ted Cruz, “a pathological liar.” We all know that politicians lie with ease, but Trump has taken this to a previously inconceivable level. The Washington Post’s Fact Checker has examined Trump’s many statements, speeches, and tweets, and has found that as of June 7, 2019, he “has made 10,796 false or misleading claims” since taking office. This number is from more than a month ago, but one only has to turn on the television to know it’s grown considerably since then. Analyses of some of his recent tweets and statements have shown yet more lies; for example, on July 22, Trump talked briefly about Mueller’s scheduled Congressional testimony, and managed to lie six times in 90 seconds. We’ve had plenty of previous presidents who had difficulty in telling the truth. Clinton was impeached for lying, and Nixon would have been impeached had he not resigned (the first Article of Impeachment against him specifically notes that he obstructed justice by, among many other acts, knowingly and intentionally making false statements to the American people). 
But we’ve never had a president that was this dishonest, that was this incapable of admitting even the simplest of truths. Trump clearly believes that what he says IS the truth, even when the facts say otherwise. He seems to believe that his words can somehow transmute the fabric of reality so that what he says is magically true. The danger of such delusion cannot be overstated, and it is just one of many things that renders him completely unfit to be president. Whether or not he is fit to serve is beyond the bounds of this essay, however. The point I’m making here is that like his racism and greed, his inability to adhere to a deal or tell even the simplest truth are a symbol of our own failures. We have intentionally ignored our country’s unreliability, both at home and abroad, for most of our history, preferring to paint ourselves as some golden example for the rest of the world to follow. But that couldn’t be further from the truth. We break deals whenever we feel like it, and we lie about the true motivations behind our actions constantly. I grant that sometimes bending the truth is warranted, given the complexity of geopolitical relations, but there’s a time and a place for such things. We have done this far more than we ever should have, and we have lied not just to other countries, but to ourselves as well. There is no better symbol of this than Trump, and we must learn from his example that we cannot continue down this path. Too many have already suffered and died because we refuse to admit the truth, and we cannot continue this cycle indefinitely. Trump shows us that not adhering to deals and lying aren’t the way forward; such behavior will only continue to make relations both domestic and foreign difficult and tense. We must end the cycle if we ever hope to have a more unified country and world. Now that I’ve laid out my primary points, this is normally the part of the essay where I start wrapping things up. 
Unfortunately, recent events must be taken into account, and they necessitate a re-examination of my first point. A little over a week ago, Trump sent out a tweet suggesting that four Congresswomen, all of whom are women of color, should go back to the “places from which they came”. Since then, he has been roundly condemned by most of the country as a racist (though Fox News and his supporters have stood staunchly by him). To add insult to injury, at a rally a few days later, he attacked these Congresswomen again, and after attacking Rep. Ilhan Omar, the audience chanted “Send her back!” Trump did nothing to stop this disgusting display of racism; on the contrary, he stood in silence for a full 15 seconds, looking around the room while they chanted. In the days since, he has changed his story multiple times, from first claiming he tried to stop the chant (a blatant lie) to saying he wasn’t happy with it (also a lie, as evidenced by his continued attacks on the four Congresswomen that use precisely the same language) to defending the audience as “great patriots.” He, along with his supporters, has defended himself with semantics arguments, claiming that his intent wasn’t a racial attack. But as someone who grew up being severely bullied, I know that words can hurt as much as a fist and that intent isn’t as relevant as how people hear and interpret words. Every single person that bullied me used intent as a defense when they were caught by a parent or teacher; it was a pathetic excuse then, just as it is now. It doesn’t matter what Trump says about this display or his tweets. They are the dictionary definition of racist. Telling someone to “go back to where [they] came from” is a racist trope nearly as old as the United States. Nearly every person of color has heard this bigotry. It’s such a well-known racist line that the Equal Employment Opportunity Commission LITERALLY DEFINES it as racist harassment. 
Wikipedia has an ENTIRE PAGE discussing the meaning and history behind the phrase. Most news outlets are now outright referring to his tweets and the chant as racist (though they have largely refrained from calling Trump himself a racist), which is astounding because they generally refrain from such judgment, and I for one applaud them for doing so. Given this vile, naked racism, I must reexamine my point about racism because this somewhat alters my conclusion (though it doesn't render it inaccurate). I opined that Trump's racism was born of ignorance rather than malice, and I stand by that opinion, but that doesn't mean he isn't a malicious person. He is a petty, vengeful narcissist who can't let even a tiny insult or slight on his character slide. As such, though his racism was born of ignorance, it is empowered by his malice. He can't stop himself from lashing out, and he's willing to use any tools at his disposal to do it, regardless of how amoral they may be. That he is willing to stoop to such amorality, let alone defend it, shows that he truly has no moral center whatsoever, and there's only one word to describe such a person: evil. I don't use that word lightly. Nor do I use it simply because I oppose Trump. I use it because it is true. Only an evil person could say what Trump has said and see nothing wrong with it. Only an evil person could say that a statement that is LITERALLY DEFINED BY A FEDERAL AGENCY as racial harassment isn't racist. The Oxford English Dictionary defines evil as "Morally depraved, bad, wicked, vicious," and there can be no better description of Trump. This makes him a truly perfect symbol of the racism that has always plagued our country; racism may more often than not be born of ignorance, but it often breeds malice and evil. We must learn from this example because we cannot allow such hatred to continue to exist.
Trump's recent displays of vile racism make my first point perhaps the most important in this essay, but they don't render my other two points less important. Our country's history of greed and unreliability cannot be ignored. Those traits have caused just as much suffering and death as racism, if not more. Moreover, all three of these traits often commingle; the Trail of Tears, for example, was a combination of all three. These traits are reprehensible by themselves, but when they combine, they show just how despicable both a country and a single human being can be. That's the real reason Trump is president. It isn't because God chose him as some sort of savior. It's because he is the lesson we need to learn. He symbolizes us at our absolute worst. We cannot create a more unified country or world so long as we cling to racism, greed, and unreliability. But we also cannot move forward so long as we deny these things. To deny one's own history is to ensure it happens again, and our country's history is mired in exactly this kind of cycle. One only needs to open a history book to see it. The fact that so many still defend Trump, despite the overwhelming evidence that he is racist, that he cares only about enriching himself, and that he cannot stay in a deal or tell even a simple truth shows how far we still have to go. We must see Trump for exactly what he is, and we must learn from his despotic nature. History shows us the darkness that awaits if we don't. Originally published at:
https://drwillis.medium.com/trump-gods-chosen-candidate-7406268ceb53?source=post_internal_links---------3----------------------------
Translated string in multiple languages

Is there any way to get a translated string in multiple languages? Specifically, I want to look up a bible book name like 'Genesis' and get a list of 'Genesis' translated into each language that the app I work on supports? Will I have to manually parse the language files?

Edit: To make myself clearer, the application has been / is already being translated. I just want to access all the languages it has been translated into at run time, so I can get, for example, the French, German and Spanish translations of one particular string.

- goldenhawking
The .ts files seem to just be XML files; maybe you have to parse the language files manually using Linguist. I have a similar problem to this: how to translate dynamic data at run time instead of recompiling .ts files?

- Pablo J. Rogina
@Phil please be aware that your requirements are "slightly" away from what was/is the original intent of the Qt translation approach, where the translatable strings were translated into several languages, but only one was used at any time, even if the language in use can be switched at runtime. That said, using the Qt translation features in a "slightly" different way, you can have all the translated values for a particular string at once if you create one translator per language, and then ask every translator for a particular string.
One drawback of this approach is that you need to provide the context, in this example "QObject":

#include <QCoreApplication>
#include <QDebug>
#include <QObject>
#include <QTranslator>

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    QTranslator translator_es;
    QTranslator translator_it;
    QTranslator translator_fr;
    translator_es.load("lang/translate_es.qm");
    translator_it.load("lang/translate_it.qm");
    translator_fr.load("lang/translate_fr.qm");

    qDebug() << "ES: " << translator_es.translate("QObject", "Hello");
    qDebug() << "IT: " << translator_it.translate("QObject", "Hello");
    qDebug() << "FR: " << translator_fr.translate("QObject", "Hello");

    return a.exec();
}

with output like this:

ES: "Hola"
IT: "Ciao"
FR: "Bonjour"

@Pablo-J.-Rogina I like your approach, but I think if I can't use the QTranslator that is being used to translate the UI, then parsing the language files and storing the limited subset I need in a dictionary (I'm using Python) may be the best option.

- mrjj Qt Champions 2017
Hi. Just as a note: QTranslator does not translate the UI as such. It is a dictionary-based text lookup solution that returns the translated text based on the input. It is not tied to the (G)UI at all. However, using Python, it won't be able to automatically extract the texts, I guess? So if you don't need the Qt Linguist tool, I guess it's not that useful overall. There is also gettext, which I like for C++ programs. I can't tell for Python as I never used it.

- Pablo J. Rogina
"parsing the language files and then storing the limited subset I need in a dictionary" - I don't want to convince you about my idea, but based on my experience it looks like you are heading to re-invent the wheel. A parser for .ts files? A map for "translatable string" KEY -> "translated string" VALUE? Let's see below.
"(I'm using Python) may be the best option" - I assume you're using PyQt; if so you have most of the same tools for Qt translations. See here.
@mrjj There are tr()/translate() functions (with some caveats) but the idea is the same: all the strings wrapped within tr()/translate() will be copied into .ts files upon using pylupdate5 (the lupdate equivalent).

"if I cant use the QTranslator that is being used to translate the ui" - the main thing is that only one translator is used at a time by Qt to translate the UI. You can install tons of translators for the UI with qApp->installTranslator(&aTranslator), but (from Qt documentation):
SoFile.3iv man page

SoFile — node that reads children from a named file

Inherits from
    SoBase > SoFieldContainer > SoNode > SoFile

Synopsis
    #include <Inventor/nodes/SoFile.h>

    Fields from class SoFile:
        SoSFString name

    Methods from class SoFile:
        SoFile()
        SoGroup * copyChildren() const

Description
    This node represents a subgraph that was read from a named input file. When an SoFile node is written out, just the field containing the name of the file is written; no children are written out. When an SoFile is encountered during reading, reading continues from the named file, and all nodes read from the file are added as hidden children of the file node. Whenever the name field changes, any existing children are removed and the contents of the new file are read in. The file node remembers what directory the last file was read from and will read the new file from the same directory after checking the standard list of directories (see SoInput), assuming the field isn't set to an absolute path name.

    The children of an SoFile node are hidden; there is no way of accessing or editing them. If you wish to edit the contents of an SoFile node, you can modify the contents of the named file and then "touch" the name field (see SoField). Alternatively, you can use the copyChildren() method to get an editable copy of the file node's children. Note that this does not affect the original file on disk, however.

Fields
    SoSFString name
        Name of file from which to read children.

Methods
    SoFile()
        Creates a file node with default settings.
    SoGroup * copyChildren() const
        Returns a new SoGroup containing copies of all of the file node's children.
    static SoType getClassTypeId()
        Returns type identifier for this class.

Action Behavior
    SoGLRenderAction, SoCallbackAction, SoGetBoundingBoxAction, SoGetMatrixAction, SoHandleEventAction
        Traverses its children just as SoGroup does.
    SoRayPickAction
        Traverses its hidden children, but, if intersections are found, generates paths that end at the SoFile node.
    SoWriteAction
        Writes just the name field and no children.

File Format/Defaults
    File {
        name "<Undefined file>"
    }

See Also
    SoInput, SoPath
To print the value of a variable x, you can write "cout << x". For short, you can print several things out in a single "cout" statement by separating them with "<<", like this:

    cout << "I am " << 44*12 + 3 << " months old!" << endl;

... and the "endl", as mentioned above, produces a newline.

Suppose we want to print the cube of (3.55 - 17.017):

    cout << (3.55 - 17.017)*(3.55 - 17.017)*(3.55 - 17.017) << endl;

This is a bit of a hassle. Normally we would think of first computing (3.55 - 17.017), then taking the resulting value and cubing it. Hopefully we think something like: "let x be (3.55 - 17.017) and compute x*x*x." We need a variable in which to store the value (3.55 - 17.017)! If we simply write "cout << x*x*x << endl;" without declaring x, then when we compile this, we get an error message like

    error line 4: 'x' is an undeclared identifier

What the compiler is saying is this: "x??? you never told me there was going to be an x!" If you want to use x to store the value (3.55 - 17.017), you need to first tell the compiler that the name x is going to stand for a number - this is called declaring the variable x. The statement "double x;" declares that x is a variable of type double, which means that it's a variable that can stand for numbers with decimal points. The type of a variable tells you what kind of data objects, such as an integer or real number or something else, can be stored in the variable. There are many different types in C++, and in fact understanding types is one of the most important skills you'll learn in this course.
When you ask for the value of x (by involving it in an arithmetic expression or by trying to print it) the computer fetches a copy of the value from memory. When you use the = operator, the computer takes the value on the right hand side, and copies it into the space reserved for x. Thus, sequences of statements like

    double x;
    x = 3.5;
    x = 2.4;

make perfect sense. After the statement "double x;", space is reserved for x, though we have no idea what actual value is in there - at this point x is uninitialized. The statement "x = 3.5;" copies the value 3.5 into the space reserved for x. Finally, the statement "x = 2.4;" copies the value 2.4 into the space reserved for x, thus overwriting the 3.5 that had been there previously. Think of x as being the name of a box into which double values can be written.

So far we have written output with cout. In C++ (and in many other places) we refer to an output stream, the idea being that each thing we write goes out sequentially in the order we write it. In exactly the same way, we read from an input stream. Not surprisingly (given that our output stream is cout) our input stream object is called cin. [Note: Really it's called std::cin, but if we use "using namespace std;" we may drop the "std::" part. (We usually put the line "using namespace std;" after the #include's.) It's defined in the iostream library, just like cout.] You may only read into variables, so "cin >> 1.5" will cause a compiler error. (And what would it mean????) When reading into a double x, cin skips any spaces, tabs, or newlines until it comes across a number.

Putting this together, we can construct a very simple program, Addition Calculator, which reads in two numbers from the user and adds them together. Notice that the variable that contains the sum of the two numbers input by the user is actually called sum. This is just to enhance the readability of my code. I could've called the variable "George" and it would've worked just the same.
There is a library called cmaththat allows you to perform other operations on doubles, like square roots and sines and logarithms. (Here is some online documentation on cmath.) double, int, and return. (A complete list can be found in Appendix A of your textbook.) There are also names that you could choose for variables, but which are already used for important things. Examples of this are main, cinand cout. The problem with using such a name, is that it creates ambiguity. For example, what would happen with the following: double cin; cin >> cin;As it turns out, the compiler will assume that both cin's refer to your new doubleand you won't be able to use cinfor reading. As we proceed, it will (hopefully!) become obvious what cannot be used as variable names. C++ distinguished between uppercase and lowercase. As a result, Answer and answer will be considered different variable names. A very common mistake that beginning programmers make is to be sloppy in writing variable names, sometimes using capitals and sometimes not. It is not good programming practice to use two variables names that are spelled the same except for capitalization because it leads to errors. Your source code will be easier for mere mortals to understand (interpret this to mean the instructor grading your programs) if you use meaningful variables names. sqrtfunction, so it includes a brief description of the cmathlibrary. "Libraries" extend the base C++ language.
Installing the Multilingual Plugin for DokuWiki

DokuWiki is capable of making a multi-language site. This functionality is not installed in new DokuWiki installations by default. In order to get the multi-language functionality to work, you will need to use the "Multilingual Plugin". The "Multilingual Plugin" does not translate your pages into a language. The plugin makes namespaces particular to the language so pages can be created in a specific language namespace. This plugin is completely different than changing the admin interface language. For information on how to change the admin interface language in DokuWiki, please click here.

When the namespaces are created, different links are placed in your site that allow visitors to switch to a different language version of your site. The following steps will explain how to use the "Multilingual Plugin" for DokuWiki.

Important! This tutorial is based off of the "DokuWiki" theme. If you are editing a different theme, the location where you place the code may change.

Steps to install the Multilingual Plugin

Go to the following link and get the link for the plugin. Unless the version changes, you should have a link like the following:

Log into DokuWiki.

Go to Admin > Manage Plugins.

Paste the URL in the "Download and Install a new plugin" box. Click Download.

Create a file called "show_languages.html" on your server in the following location of your DokuWiki installation directory: /lib/tpl/dokuwiki

Add the following code in the "show_languages.html" file:

<?php
$multi_lingual_plugin = &plugin_load('syntax', 'multilingual');
if ( $multi_lingual_plugin ) {
    if ( !plugin_isdisabled($multi_lingual_plugin->getPluginName()) ) {
        print $multi_lingual_plugin->_showTranslations();
    }
}
?>

Save the file.

Next, edit your lib/tpl/dokuwiki/tpl_header.php. Add the following code towards the end of the file before the <hr class="ally" />.
<div style="float:left;"><?php @include(dirname(__FILE__).'/show_languages.html')?></div>

Your file should look like the image to the right. Save the changes.

Go back to your DokuWiki admin section and select Configuration Settings. Scroll towards the bottom and you will see the configuration settings for the multilingual plugin. In the first box, labeled "plugin»multilingual»enabled_langs", enter the languages you want, separated by commas. See the image to the right.

Note! For a list of language codes, see the article on Language code list for PHP programs. Not all languages work with DokuWiki. Unfortunately, there is no list of language codes that work specifically with DokuWiki. You will need to use trial and error for languages that are not as common.

When finished, click Save at the bottom of the page.

Now when you go to your home DokuWiki page, the language options will show as links towards the top left of your site. Each language has its own namespace created. Clicking the links will take you to the language namespace. For example, German will have the de:start namespace. For information on changing the administrator interface language in DokuWiki, please see our tutorial on How to change the DokuWiki admin language.
Hey, Scripting Guy! How can I get the name of the last user to log on to a computer? -- SV

Hey, SV. You know, the Scripting Guy who writes this column never watches the news on TV. (OK, sports news being the one exception.) That's not because he doesn't like to keep up with current events, it's just because - well, let's give you an example. The other night the Scripting Guy who writes this column was at the gym, dutifully riding the exercise bike. The news happened to be on, and he glanced up from his exertions and caught a story about the "wacky weather" (their phrase, not his) that recently roared through Europe. After showing a number of suitably wacky scenes (mostly of people and things being blown around by the wind) the segment concluded with the newscaster noting, "41 people died in the recent windstorms."

Forty-one people died in those "wacky windstorms?" That's why the Scripting Guy who writes this column doesn't watch the news on TV. That's also why he writes Hey, Scripting Guy!, the daily scripting column where you get all of the wackiness without any of the fatalities. Or at least without as many fatalities. For example, here's a script that reports back the name of the last user to log on to a computer, and does so without causing anyone any pain or suffering. Or at least not any permanent pain or suffering:

Const HKEY_LOCAL_MACHINE = &H80000002

strComputer = "."
Set objRegistry = GetObject("winmgmts:\\" & strComputer & "\root\default:StdRegProv")

strKeyPath = "Software\Microsoft\Windows NT\CurrentVersion\WinLogon"
strValueName = "DefaultUserName"

objRegistry.GetStringValue HKEY_LOCAL_MACHINE, strKeyPath, strValueName, strValue

Wscript.Echo strValue

As it turns out, any time a user logs on to a computer his or her logon name is saved in the registry; in particular, that name is stored in HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\WinLogon\DefaultUserName. (As an added bonus, the user's domain is stored in the same registry key, in the value DefaultDomainName.) If we want to know the name of the last user to log on to a computer all we have to do is read the value of DefaultUserName. Now, admittedly, this is not 100% guaranteed; that's because it's possible to change the value of (or even erase) DefaultUserName.
In theory, someone with the user name kenmyer could log on to a computer and then change the value of default user to, say, pilarackerman. We’re willing to bet, however, that scenarios such as that don’t happen all that often; consequently, while this approach isn’t guaranteed it’s still going to work more often than not. Which is something you can’t say about the Scripting Guy who writes this column. Note. Does that mean that you use this same script to determine which user is currently logged-on to a computer? Well, maybe, assuming that someone is logged on to the computer; after all, the value of DefaultUserName remains in the registry even if no one is logged-on to the machine. If you’re interested in determining which user is currently logged-on to a computer you might check to see if Explorer.exe is running and, if it is, determine the owner of that process. But that’s another column for another day. As you can see, our script begins by defining a constant named HKEY_LOCAL_MACHINE, with a value of &H80000002; this constant tells WMI’s Standard Registry provider which registry hive we want to work with. After defining the constant we then connect to the WMI service on the local computer, taking care to bind the root\default namespace (not the much-more commonly-used root\cimv2 namespace). And yes, we could easily connect to the WMI service on a remote computer; all we have to do is assign the name of that remote machine to the variable strComputer: strComputer = "atl-fs-01" After hooking up with the WMI service we then assign values to a pair of variables: the variable strKeyPath is assigned the registry path within HKEY_LOCAL_MACHINE, and the variable strValueName is assigned the name of the registry value we want to read (DefaultUserName). 
We then use the GetStringValue method to read the registry and store the name of the last user to log on to the computer in an out parameter named strValue:

objRegistry.GetStringValue HKEY_LOCAL_MACHINE, strKeyPath, strValueName, strValue

Note. What's an "out parameter?" An out parameter is simply a variable that we hand to a method; notice that strValue is the final parameter supplied to the GetStringValue method. In turn, the method assigns a value to that variable for us. In this case, of course, that value happens to be the value of DefaultUserName.

All we have to do now is echo back the value of the variable strValue and we're done:

Wscript.Echo strValue

Incidentally, if you're interested in getting back the domain name as well as the logon name, well, here's a script that does just that:

strValueName = "DefaultDomainName"

objRegistry.GetStringValue HKEY_LOCAL_MACHINE, strKeyPath, strValueName, strValue

Wscript.Echo strValue

And by the way, this really is true: Hey, Scripting Guy! has been running for over two and a half years now, and, to the best of our knowledge, no one has ever died from reading the column. (See? We told you that you couldn't really be bored to death.) Admittedly, doctors point to the fact that Scripting Guy Peter Costantini has been in a coma for years as cause for concern. However, even that issue is controversial: Peter insists that he's not in a coma, but the doctors refuse to change their diagnosis. The rest of the Scripting Guys have no comment on the issue, although we're not sure whether, in Peter's case, it really makes all that much difference anyway.

Dear Scripting Guy, The DefaultUserName and DefaultDomainName values don't get populated in Windows 7. They're under Wow6432Node, but their values are blank. Any thoughts on an alternative with support for 2000 - 2008R2/7? Thanks

If you use powershell:

(get-wmiobject win32_computersystem).UserName

... will give you the currently logged on user.
Emil Rakoczy.... I've been trying to find this forever. THANK YOU!

Note that this: '(get-wmiobject win32_computersystem).UserName' only works for logons at the console. It will not tell you about someone logged on remotely via RDP.

You can also sort the user profiles by modified date and get the last logged on user that way.

Pre Windows 7, I used to use the DefaultUserName to determine who was logged on. If explorer.exe was running, then the DefaultUserName would tell you who was currently logged on. Then I tried this on Windows 7 and only sometimes is this value populated. It also did not tell you if there was more than one user logged on. Instead, I use the owner of the Explorer.exe process. There is always at least one explorer.exe process for every user that logs on. Here is the code I use:

Function WhosLoggedOnTS(strComputer)
    On Error Resume Next
    Dim strUsers
    Dim objWMIService
    Dim colProcessList
    Dim objProcess
    Dim strNameOfUser
    Dim strUserDomain
    Dim colProperties
    strUsers = ""
    Set objWMIService = GetObject("winmgmts:\\" & strComputer & "\root\cimv2")
    Set colProcessList = objWMIService.ExecQuery("Select * from Win32_Process")
    For Each objProcess in colProcessList
        If objProcess.Name = "explorer.exe" Then
            colProperties = objProcess.GetOwner(strNameOfUser, strUserDomain)
            strUsers = strUsers & strNameOfUser & "," & GetFullName(strNameOfUser)
        End If
    Next
    If strUsers = "" Then
        strUsers = "Logged Off"
    End If
    Set objWMIService = Nothing
    Set colProcessList = Nothing
    Set objProcess = Nothing
    WhosLoggedOnTS = strUsers
End Function

Now I've been asked to find a way to determine who was last logged on......

Hey Scripting Guy, This is a great script and you have explained it fantastically. How do I use it to find the last logged-in user on remote computers? I'm using a domain admin account.

Answer to "lee"'s question: replace the dot with a computer name in the line (strComputer = ".")

Thanks. You can also use Powershell:

What environment do I run this in, cmd/batch??
http://blogs.technet.com/b/heyscriptingguy/archive/2007/01/23/how-can-i-get-the-name-of-the-last-user-to-log-on-to-a-computer.aspx
Hi James,

On Monday, 11.09.2006 at 10:07 +0100, James Fidell wrote:
> hermann pitton wrote:
> > On Sunday, 10.09.2006 at 02:00 +0200, Hartmut Hackmann wrote:
> >> Hi, James
> >>
> >> James Fidell wrote:
> >>> Meant to send this to the list rather than privately. Bah.
> >
> > Sorry,
> >
> > never mind, or don't mind me at least.
> >
> > You claim all broken here and seem not to have even udev straight.

Maybe some step by step checking.

Useful information you get with e.g. "modinfo saa7134", which also has a "depends on" section. But for example oss=1 is deprecated, since saa7134-oss (no longer supported by default FC5 kernels) and saa7134-alsa are outfactored. Try also "modinfo saa7134-alsa" etc.

If you are not sure what is going on with your modules and "depmod -a" returns no errors, always use the -v option.

FC5 will attempt to autoload the card for you; since for the Compro cards the PCI ID read from the card's eeprom is not always stable, you might end up with a default UNKNOWN/GENERIC card loaded that is not suited for your needs. Check with "dmesg | grep saa713".

After installing new modules, make sure all old modules are unloaded. Either reboot or "modprobe -vr saa7134-dvb saa7134-alsa". The videodev.ko might still be in use by a webcam, cx88 stuff ... also you want to remove the old tuner module. Check with "lsmod".

If you now load "modprobe -v saa7134 card=71,71" you can also see if something made it into /etc/modprobe.conf and overrides your command line, and can adjust it there and then do a "depmod -a" again. Also, duplicate/mixed modules should become visible.

Sample:
# modprobe -v saa7134
insmod /lib/modules/2.6.18-rc1/kernel/drivers/media/video/v4l2-common.ko
insmod /lib/modules/2.6.18-rc1/kernel/drivers/media/video/v4l1-compat.ko
insmod /lib/modules/2.6.18-rc1/kernel/drivers/media/video/videodev.ko
insmod /lib/modules/2.6.18-rc1/kernel/drivers/media/common/ir-common.ko
insmod /lib/modules/2.6.18-rc1/kernel/drivers/media/video/ir-kbd-i2c.ko
insmod /lib/modules/2.6.18-rc1/kernel/drivers/media/video/compat_ioctl32.ko
insmod /lib/modules/2.6.18-rc1/kernel/drivers/media/video/video-buf.ko
insmod /lib/modules/2.6.18-rc1/kernel/drivers/media/video/saa7134/saa7134.ko card=12,78,6 video_nr=0,1,2 vbi_nr=0,1,2 tuner=5,54,5 radio_nr=0,1,2 latency=64 gbuffers=32

You don't have tuner, vbi and radio for analog.

# modprobe -v saa7134-alsa index=1,2,3
insmod /lib/modules/2.6.18-rc1/kernel/drivers/media/video/saa7134/saa7134-alsa.ko index=1,2,3

dmesg part:

...
saa7134[2]: board init: gpio is 100a0
PM: Adding info for No Bus:i2c-3
tuner 3-0060: All bytes are equal. It is not a TEA5767
tuner 3-0060: chip found @ 0xc0 (saa7134[2])
PM: Adding info for i2c:3-0060
tuner 3-0060: type set to 5 (Philips PAL_BG (FI1216 and compatibles))
saa7134[2]: Huh, no eeprom present (err=-5)?
saa7134[2]: registered device video2 [v4l2]
saa7134[2]: registered device vbi2
saa7134[2]: registered device radio2
saa7134 ALSA driver for DMA sound loaded
saa7134[0]/alsa: saa7134[0] at 0xcfffbc00 irq 177 registered as card 1
saa7133[1]/alsa: saa7133[1] at 0xcfffb000 irq 185 registered as card 2
saa7134[2]/alsa: saa7134[2] at 0xcfffb800 irq 177 registered as card 3

Next:
# modprobe -v saa7134-dvb
insmod /lib/modules/2.6.18-rc1/kernel/drivers/media/dvb/frontends/tda1004x.ko
insmod /lib/modules/2.6.18-rc1/kernel/drivers/media/dvb/dvb-core/dvb-core.ko
insmod /lib/modules/2.6.18-rc1/kernel/drivers/media/video/video-buf-dvb.ko
insmod /lib/modules/2.6.18-rc1/kernel/drivers/media/dvb/frontends/dvb-pll.ko
insmod /lib/modules/2.6.18-rc1/kernel/drivers/media/video/saa7134/saa7134-dvb.ko

It shows dvb_attach is enabled, else you would see more frontends.

dmesg shows:

saa7134[0]: frontend initialization failed
DVB: registering new adapter (saa7133[1]).

Check "dmesg" and it should be without errors up to this point. You could now test composite and svideo with, say, tvtime for picture and external sound input.

The udev rules for the DVB device nodes on FC5:

KERNEL=="dvb", MODE="0660"
SUBSYSTEM=="dvb", PROGRAM="/bin/sh -c 'K=%k; K=$${K#dvb}; printf dvb/adapter%%i/%%s $${K%%%%.*} $${K#*.}'", \
NAME="%c", MODE="0660"

This is the same on working FC3,

# DVB
KERNEL=="dvb", MODE="0660"
SUBSYSTEM=="dvb", PROGRAM="/bin/sh -c 'K=%k; K=$${K#dvb}; printf dvb/adapter%%i/%%s $${K%%%%.*} $${K#*.}'", NAME="%c", MODE="0660"

except the backslash to break the line. There was some previous discussion on its use with gcc4x. Still trouble with the dvb device nodes? Have no DVB device on FC5, but could put one.

You might eventually find something useful here and previously, but you don't have an analog tuner and would use the other inputs. The tda10046 does not correct for frequently reported offsets of 167000Hz, which seem to be mostly negative in the UK. So in case you miss a multiplex, you might try "dvbscan" again with such offsets.

Good Luck,
Hermann

B.T.W.
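The 167 kHz offset retuning mentioned above is simple arithmetic, but the sign convention is easy to get backwards. A tiny sketch of the candidate frequencies one might feed to dvbscan; the centre frequency here is hypothetical, not taken from the post:

```python
# Hypothetical UK multiplex centre frequency in Hz (illustrative only).
center_hz = 634_000_000

# The tda10046 does not correct the frequently reported 167 kHz offsets,
# mostly negative in the UK, so scan the centre and both offsets.
offsets_hz = [0, -167_000, 167_000]
candidates_hz = [center_hz + o for o in offsets_hz]
print(candidates_hz)  # [634000000, 633833000, 634167000]
```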
To be able to use the current mercurial tree on nominal 2.6.17 FC5 kernels (which are effectively 2.6.18 for this) and to avoid a compile error, it is enough to change v4l-dvb/linux/drivers/media/dvb/dvb-core/dvbnet.c at lines 1140 and 1169:

#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,18)
	spin_lock_bh(&dev->xmit_lock);
#else
	netif_tx_lock_bh(dev);
#endif

	if (dev->flags & IFF_PROMISC) {
		dprintk("%s: promiscuous mode\n", dev->name);
		priv->rx_mode = RX_MODE_PROMISC;
	} else if ((dev->flags & IFF_ALLMULTI)) {
		dprintk("%s: allmulti mode\n", dev->name);
		priv->rx_mode = RX_MODE_ALL_MULTI;
	} else if (dev->mc_count) {
		int mci;
		struct dev_mc_list *mc;

		dprintk("%s: set_mc_list, %d entries\n",
			dev->name, dev->mc_count);
		priv->rx_mode = RX_MODE_MULTI;
		priv->multi_num = 0;
		for (mci = 0, mc = dev->mc_list;
		     mci < dev->mc_count;
		     mc = mc->next, mci++) {
			dvb_set_mc_filter(dev, mc);
		}
	}

#if LINUX_VERSION_CODE < KERNEL_VERSION(2,6,18)
	spin_unlock_bh(&dev->xmit_lock);
#else
	netif_tx_unlock_bh(dev);
#endif

	dvb_net_feed_start(dev);
}

i.e. change the compat check from < KERNEL_VERSION(2,6,18) to < KERNEL_VERSION(2,6,17).
http://www.linuxtv.org/pipermail/linux-dvb/2006-September/012813.html
On 11/12/10 8:12 AM, Micah Carrick wrote: > My company is working on releasing some of our code as open-source python > modules. I don't want my "foo" module conflicting with other modules called > "foo" on PyPi or github or a user's system. Is there anything wrong, from a > conventions standpoint, with having modules like company.foo and company.bar > even if foo and bar are not necessarily related other than being released by us? > I really don't like the cryptic module names or things like foo2 and the like. Yes, using namespace packages. You need to use `distribute` in your setup.py in order to accomplish this. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
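The namespace-package layout Robert describes can be shown end to end. The sketch below uses Python 3's implicit namespace packages (PEP 420), which replaced the `distribute`-era machinery; the `company`, `foo`, and `bar` names are stand-ins built in temporary directories, not a real distribution.

```python
import os
import sys
import tempfile

# Two independent "distributions", each shipping one piece of the
# shared 'company' namespace from its own directory on sys.path.
root_foo = tempfile.mkdtemp()
root_bar = tempfile.mkdtemp()
os.makedirs(os.path.join(root_foo, "company", "foo"))
os.makedirs(os.path.join(root_bar, "company", "bar"))

# Crucially, there is NO company/__init__.py in either root:
# that is what makes 'company' an implicit namespace package.
with open(os.path.join(root_foo, "company", "foo", "__init__.py"), "w") as f:
    f.write("NAME = 'foo'\n")
with open(os.path.join(root_bar, "company", "bar", "__init__.py"), "w") as f:
    f.write("NAME = 'bar'\n")

sys.path[:0] = [root_foo, root_bar]
import company.foo
import company.bar
print(company.foo.NAME, company.bar.NAME)  # foo bar
```

With setuptools-based packaging the idea is the same: each distribution ships only its own `company/foo` (or `company/bar`) tree and omits `company/__init__.py`. On Python 2 with `distribute`, you would instead declare `namespace_packages=['company']` in setup.py and put a `pkg_resources.declare_namespace(__name__)` stub in each copy of `company/__init__.py`.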
https://mail.python.org/pipermail/python-list/2010-November/592305.html