text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Webapps on App Engine, part 4: Templating Posted by Nick Johnson | Filed under python, coding, app-engine, tech, framework This is part of a series on writing a webapp framework for App Engine Python. For details, see the introductory post here. In the first three posts of this series, we covered all the components of a bare bones webapp framework: routing, request/response encoding, and request handlers. Most people expect a lot more out of their web framework, however. Individual frameworks take different approaches to this, from the minimalist webapp framework, which provides the bare minimum plus some integration with other tools, to the all-inclusive django, to the 'best of breed' Pylons, which focuses on including and integrating the best libraries for each task, rather than writing their own. For our framework, we're going to take an approach somewhere between webapp's and Pylons': While keeping our framework minimal and modular, we'll look at the best options to use for other components - specifically, templating and session handling. In this post, we'll discuss templating. To anyone new to webapps, templates may seem somewhat unnecessary. We can simply generate the output direct from our code, right? Many CGI scripting languages used this approach, and the results are often messy. Sometimes, which page to be generated isn't clear until after a significant amount of processing has been done, and dealing with errors and other exceptional conditions likewise becomes problematic. Finally, this approach tends to lead to gobs of print statements cluttering up the code, making both the structure of the page and the flow of the code unclear. Templating systems were designed to eliminate these issues. Instead of generating the page as we process the request, we wait until we're done, construct a dictionary of variables to pass to the template, and select and render the template we need. The templating system then takes care of interpreting the template, substituting in the variables we passed where necessary. Templating doesn't have to be complicated - here's a simple templating system: def render_template(template, values): return template % values # Example template = """Hello, %(name)s! How are you this %(time_of_day)s?""" self.response.body = render_template(template, values) This 'templating system' simply uses Python's string formatting functionality to generate its result. It quickly becomes apparrent that this isn't sufficient for templating web pages, though - we need more functionality. At a minimum, we need some form of flow control, so we can include sections of a page conditionally, such as login/logout links, and some form of looping construct, so we can include repeated sections, such as results from datastore queries. It helps if our templating system provides some features for template reuse, too, such as including other templates, or extending them. How powerful should our templates be, though? This is a source of some disagreement. Some templating systems, like Django's, take a very minimalist approach, and contain only the bare minimum of functionality required to render templates. Any form of calculation - even things as simple as basic math and comparisons - should be done in code, with the results passed to the template. Other templating systems, like Mako, provide a much more full-featured templating language, and trust you not to abuse it. Here's what sample templates look like in a few templating languages: %} You're probably already familiar with Django's template syntax from previous posts. Because it's included with App Engine, it's often the easy default. As we've already mentioned, it's very restrictive about what you can do: It provides a few primitives, and relies on an extensible library of tags and filters (the bits after the | in {{...}}) to make it useful. Mako: <%inherit <% rows = [[v for v in range(0,10)] for row in range(0,10)] %> <table> % for row in rows: ${makerow(row)} % endfor </table> <%def <tr> % for name in row: <td>${name}</td>\ % endfor </tr> </%def> Mako is another popular templating engine, and takes the opposite approach to Django: Templates are created by inserting actual Python code, inside special processing directives of the form <% .. %>. It even goes so far as to permit and encourage defining functions inside templates! Mako works around Python's use of indentation for control flow by defining new keywords such as 'endfor'. Cheetah <html> <head><title>$title</title></head> <body> <table> #for $client in $clients <tr> <td>$client.surname, $client.firstname</td> <td><a href="mailto:$client.email">$client.email</a></td> </tr> #end for </table> </body> </html> Cheetah is another templating system that takes the "we provide the gun, you point it at your foot" approach. It's similar to Mako in many ways, but doesn't require variable substitutions to be embedded in curly braces, and instead of using processing directives, it treats lines starting with a # as Python code. It also uses special directives such as 'end for' for nesting. Jinja2 <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"> <html lang="en"> <html xmlns=""> > Jinja2 claims to have "Django-like syntax (but faster)". Notably, the syntactic elements for loops, control flow, etc, are a lot more Python-like, but they're still limited to what the language supports, and it still relies on specially defined tests, filters, etc. Tenjin <html> <body> <h1>${title}</h1> <table> <?py i = 0 ?> <?py for item in items: ?> <?py i += 1 ?> <?py color = i % 2 == 0 and '#FFCCCC' or '#CCCCFF' ?> <tr bgcolor="#{color}"> <td>#{i}</td> <td>${item}</td> </tr> <?py #endfor ?> </table> </body> </html> Tenjin claims to be the fastest Python templating framework - and it has benchmarks (although not exhaustive ones) to back it up. It takes the general approach, with Python expressions and statements embedded directly into the markup. Notably, indentation of the Python statements maters, producing a somewhat confused mix of markup and Python code. Chameleon <table border="1"> <tr tal: <td tal: <span tal:1 * 1 = 1</span> </td> </tr> </table> Chameleon is based on the Zope Page Templates specification, which takes an interesting approach. Chameleon templates are valid XML documents, unlike many templating engines, and it uses namespaces for attributes and tags to define the template behaviour. For example, the "tal:repeat" tag indicates that the tag it's on and all its children should be repeated for each value in the Python iterator passed as an argument. tal:replace replaces an element with the value of the expression, while other expressions permit setting attributes and replacing body content, and a 'meta' namespace handles operations such as including other templates. Chameleon extends the Zope standard in several ways: expressions can be arbitrary Python expressions, and values can be substituted using a ${...} syntax in addition to the XML syntax, which avoids a lot of boilerplate in some situations. Chameleon templates are compiled to Python code on first use - so you're not doing XML DOM manipulation on every template generation. Chameleon recently got App Engine support when the author refactored out some code that relied on modules not available in App Engine. I haven't mentioned all Python's templating systems here, by a long shot - this is merely a representative sample. For a more complete list, see this page. Our framework Examining the different templating engines leads to an interesting observation: Those templating engines that take the Django approach of "only what's necessary" tend to, of necessity, be a lot larger and more involved - and hence have a steeper learning curve - than those that allow you to leverage your Python knowledge in some fashion. For that reason and others, I'm not a big fan of them - I'd rather provide someone with a powerful but lightweight system, and trust them not to shoot themselves in the foot, than use a more complicated system designed to make foot-shooting an impossibility. With that consideration and others in mind, we'll look at what is required to use Chameleon in our framework. Using Chameleon Installation is straightforward: Download the tarball from the PyPi page, and copy the Chameleon-1.1.1/src/chameleon directory into a directory on the system path (eg, your app's root folder). To use Chameleon, we first define a template loader: from chameleon.zpt import loader template_path = os.path.join(os.path.dirname(__file__), "..", "templates") template_loader = loader.TemplateLoader(template_path, auto_reload=os.environ['SERVER_SOFTWARE'].startswith('Dev')) Template loaders serve to cache loaded and compiled templates, which is essential for performance. As such, it makes sense to define our loader once at module level, and use it for all requests. To render a template, we fetch a template from the loader, then call it with keyword arguments corresponding to the parameters we want to pass to the template: def TestHandler(RequestHandler): def get(self): template = template_loader.load("test.pt") self.response.body = template(name="test") In terms of integrating templates into our framework, there's not a great deal we can do without locking users of our framework into using our preferred templating system. You can bundle the templating system with the framework, and even make it create a loader when it's first used. The approach you take depends on where you want to be on the flexibility / ease-of-use axis. In the next post, we'll discuss session handling, the options available on App Engine, and how to integrate it into our framework.Previous Post Next Post
http://blog.notdot.net/2010/02/Webapps-on-App-Engine-part-4-Templating
CC-MAIN-2017-39
refinedweb
1,562
52.19
Scala.Rx is an experimental change propagation library for Scala. Scala.Rx gives you Reactive variables (Rxs), which are smart variables who auto-update themselves when the values they depend on change. The underlying implementation is push-based FRP based on the ideas in Deprecating the Observer Pattern. A simple example which demonstrates the behavior is: import rx._ val a = Var(1); val b = Var(2) val c = Rx{ a() + b() } println(c.now) // 3 a() = 4 println(c.now) // 6 The idea being that 99% of the time, when you re-calculate a variable, you re-calculate it the same way you initially calculated it. Furthermore, you only re-calculate it when one of the values it depends on changes. Scala.Rx does this for you automatically, and handles all the tedious update logic for you so you can focus on other, more interesting things! Apart from basic change-propagation, Scala.Rx provides a host of other functionality, such as a set of combinators for easily constructing the dataflow graph, compile time checks for a high degree of correctness, and seamless interop with existing Scala code. This means it can be easily embedded in an existing Scala application. ContentsContents - Getting Started - ScalaJS - Using Scala.Rx - Design Considerations - Related Work - Scaladoc Getting StartedGetting Started Scala.Rx is available on Maven Central. In order to get started, simply add the following to your build.sbt: libraryDependencies += "com.lihaoyi" %% "scalarx" % "0.4.1" After that, opening up the sbt console and pasting the above example into the console should work! You can proceed through the examples in the Basic Usage page to get a feel for what Scala.Rx can do. ScalaJSScalaJS In addition to running on the JVM, Scala.Rx also compiles to Scala-Js! This artifact is currently on Maven Central and an be used via the following SBT snippet: libraryDependencies += "com.lihaoyi" %%% "scalarx" % "0.4.1" There are some minor differences between running Scala.Rx on the JVM and in Javascript particularly around asynchronous operations, the parallelism model and memory model. In general, though, all the examples given in the documentation below will work perfectly when cross-compiled to javascript and run in the browser! Scala.rx 0.4.1 is only compatible with ScalaJS 0.6.5+. Using Scala.RxUsing Scala.Rx The primary operations only need a import rx._ before being used, with addtional operations also needing a import rx.ops._. Some of the examples below also use various imports from scala.concurrent or scalatest aswell. Basic UsageBasic Usage import rx._ val a = Var(1); val b = Var(2) val c = Rx{ a() + b() } println(c.now) // 3 a() = 4 println(c.now) // 6 The above example is an executable program. In general, import rx._ is enough to get you started with Scala.Rx, and it will be assumed in all further examples. These examples are all taken from the unit tests. The basic entities you have to care about are Var, Rx and Obs: - Var: a smart variable which you can get using a()and set using a() = .... Whenever its value changes, it pings any downstream entity which needs to be recalculated. - Rx: a reactive definition which automatically captures any Vars or other Rxs which get called in its body, flagging them as dependencies and re-calculating whenever one of them changes. Like a Var, you can use the a()syntax to retrieve its value, and it also pings downstream entities when the value changes. - Obs: an observer on one or more Var s or Rx s, performing some side-effect when the observed node changes value and sends it a ping. Using these components, you can easily construct a dataflow graph, and have the various values within the dataflow graph be kept up to date when the inputs to the graph change: val a = Var(1) // 1 val b = Var(2) // 2 val c = Rx{ a() + b() } // 3 val d = Rx{ c() * 5 } // 15 val e = Rx{ c() + 4 } // 7 val f = Rx{ d() + e() + 4 } // 26 println(f.now) // 26 a() = 3 println(f.now) // 38 The dataflow graph for this program looks like this: Where the Vars are represented by squares, the Rxs by circles and the dependencies by arrows. Each Rx is labelled with its name, its body and its value. Modifying the value of a causes the changes the propagate through the dataflow graph As can be seen above, changing the value of a causes the change to propagate all the way through c d e to f. You can use a Var and Rx anywhere you an use a normal variable. The changes propagate through the dataflow graph in waves. Each update to a Var touches off a propagation, which pushes the changes from that Var to any Rx which is (directly or indirectly) dependent on its value. In the process, it is possible for a Rx to be re-calculated more than once. ObserversObservers As mentioned, Obs s can be created from Rx s or Var s and be used to perform side effects when they change: val a = Var(1) var count = 0 val o = a.trigger { count = a.now + 1 } println(count) // 2 a() = 4 println(count) // 5 This creates a dataflow graph that looks like: When a is modified, the observer o will perform the side effect: The body of Rxs should be side effect free, as they may be run more than once per propagation. You should use Obs s to perform your side effects, as they are guaranteed to run only once per propagation after the values for all Rxs have stabilized. Scala.Rx provides a convenient .foreach() combinator, which provides an alternate way of creating an Obs from an Rx: val a = Var(1) var count = 0 val o = a.foreach{ x => count = x + 1 } println(count) // 2 a() = 4 println(count) // 5 This example does the same thing as the code above. Note that the body of the Obs is run once initially when it is declared. This matches the way each Rx is calculated once when it is initially declared. but it is conceivable that you want an Obs which fires for the first time only when the Rx it is listening to changes. You can do this by using the alternate triggerLater syntax: val a = Var(1) var count = 0 val o = a.triggerLater { count = count + 1 } println(count) // 0 a() = 2 println(count) // 1 An Obs acts to encapsulate the callback that it runs. They can be passed around, stored in variables, etc.. When the Obs gets garbage collected, the callback will stop triggering. Thus, an Obs should be stored in the object it affects: if the callback only affects that object, it doesn't matter when the Obs itself gets garbage collected, as it will only happen after that object holding it becomes unreachable, in which case its effects cannot be observed anyway. An Obs can also be actively shut off, if a stronger guarantee is needed: val a = Var(1) val b = Rx{ 2 * a() } var target = 0 val o = b.trigger { target = b.now } println(target) // 2 a() = 2 println(target) // 4 o.kill() a() = 3 println(target) // 4 After manually calling .kill(), the Obs no longer triggers. Apart from .kill()ing Obss, you can also kill Rxs, which prevents further updates. In general, Scala.Rx revolves around constructing dataflow graphs which automatically keep things in sync, which you can easily interact with from external, imperative code. This involves using: - Vars as inputs to the dataflow graph from the imperative world - Rxs as the intermediate nodes in the dataflow graphs - Obss as the output from the dataflow graph back into the imperative world Complex ReactivesComplex Reactives Rxs are not limited to Ints. Strings, Seq[Int]s, Seq[String]s, anything can go inside an Rx: val a = Var(Seq(1, 2, 3)) val b = Var(3) val c = Rx{ b() +: a() } val d = Rx{ c().map("omg" * _) } val e = Var("wtf") val f = Rx{ (d() :+ e()).mkString } println(f.now) // "omgomgomgomgomgomgomgomgomgwtf" a() = Nil println(f.now) // "omgomgomgwtf" e() = "wtfbbq" println(f.now) // "omgomgomgwtfbbq" As shown, you can use Scala.Rx's reactive variables to model problems of arbitrary complexity, not just trivial ones which involve primitive numbers. Error HandlingError Handling Since the body of an Rx can be any arbitrary Scala code, it can throw exceptions. Propagating the exception up the call stack would not make much sense, as the code evaluating the Rx is probably not in control of the reason it failed. Instead, any exceptions are caught by the Rx itself and stored internally as a Try. This can be seen in the following unit test: val a = Var(1) val b = Rx{ 1 / a() } println(b.now) // 1 println(b.toTry) // Success(1) a() = 0 intercept[ArithmeticException]{ b() } assert(b.toTry.isInstanceOf[Failure]) Initially, the value of a is 1 and so the value of b also is 1. You can also extract the internal Try using b.toTry, which at first is Success(1). However, when the value of a becomes 0, the body of b throws an ArithmeticException. This is caught by b and re-thrown if you try to extract the value from b using b(). You can extract the entire Try using toTry and pattern match on it to handle both the Success case as well as the Failure case. When you have many Rxs chained together, exceptions propagate forward following the dependency graph, as you would expect. The following code: val a = Var(1) val b = Var(2) val c = Rx{ a() / b() } val d = Rx{ a() * 5 } val e = Rx{ 5 / b() } val f = Rx{ a() + b() + 2 } val g = Rx{ f() + c() } inside(c.toTry){case Success(0) => () } inside(d.toTry){case Success(5) => () } inside(e.toTry){case Success(2) => () } inside(f.toTry){case Success(5) => () } inside(g.toTry){case Success(5) => () } b() = 0 inside(c.toTry){case Failure(_) => () } inside(d.toTry){case Success(5) => () } inside(e.toTry){case Failure(_) => () } inside(f.toTry){case Success(3) => () } inside(g.toTry){case Failure(_) => () } Creates a dependency graph that looks like the follows: In this example, initially all the values for a, b, c, d, e, f and g are well defined. However, when b is set to 0: c and e both result in exceptions, and the exception from c propagates to g. Attempting to extract the value from g using g.now, for example, will re-throw the ArithmeticException. Again, using toTry works too. NestingNesting Rxs can contain other Rxs, arbitrarily deeply. This example shows the Rxs nested two levels deep: val a = Var(1) val b = Rx{ (Rx{ a() }, Rx{ math.random }) } val r = b.now._2.now a() = 2 println(b.now._2.now) // r In this example, we can see that although we modified a, this only affects the left-inner Rx, neither the right-inner Rx (which takes on a different, random value each time it gets re-calculated) or the outer Rx (which would cause the whole thing to re-calculate) are affected. A slightly less contrived example may be: var fakeTime = 123 trait WebPage{ def fTime = fakeTime val time = Var(fTime) def update(): Unit = time() = fTime val html: Rx[String] } class HomePage(implicit ctx: Ctx.Owner) extends WebPage { val html = Rx{"Home Page! time: " + time()} } class AboutPage(implicit ctx: Ctx.Owner) extends WebPage { val html = Rx{"About Me, time: " + time()} } val url = Var("") val page = Rx{ url() match{ case "" => new HomePage() case "" => new AboutPage() } } println(page.now.html.now) // "Home Page! time: 123" fakeTime = 234 page.now.update() println(page.now.html.now) // "Home Page! time: 234" fakeTime = 345 url() = "" println(page.now.html.now) // "About Me, time: 345" fakeTime = 456 page.now.update() println(page.now.html.now) // "About Me, time: 456" In this case, we define a web page which has a html value (a Rx[String]). However, depending on the url, it could be either a HomePage or an AboutPage, and so our page object is a Rx[WebPage]. Having a Rx[WebPage], where the WebPage has an Rx[String] inside, seems natural and obvious, and Scala.Rx lets you do it simply and naturally. This kind of objects-within-objects situation arises very naturally when modelling a problem in an object-oriented way. The ability of Scala.Rx to gracefully handle the corresponding Rxs within Rxs allows it to gracefully fit into this paradigm, something I found lacking in most of the Related Work I surveyed. Most of the examples here are taken from the unit tests, which provide more examples on guidance on how to use this library. Ownership ContextOwnership Context In the last example above, we had to introduce the concept of Ownership where Ctx.Owner is used. In fact, if we leave out (implicit ctx: Ctx.Owner), we would get the following compile time error: error: This Rx might leak! Either explicitly mark it unsafe (Rx.unsafe) or ensure an implicit RxCtx is in scope! val html = Rx{"Home Page! time: " + time()} To understand ownership it is important to understand the problem it fixes: leaks. As an example, consider this slight modification to the first example: var count = 0 val a = Var(1); val b = Var(2) def mkRx(i: Int) = Rx.unsafe { count += 1; i + b() } val c = Rx{ val newRx = mkRx(a()) newRx() } println(c.now, count) //(3,1) In this version, the function mkRx was added, but otherwise the computed value of c remains unchanged. And modfying a appears to behave as expected: a() = 4 println(c.now, count) //(6,2) But if we modify b we might start to notice something not quite right: b() = 3 println(c.now, count) //(7,5) -- 5?? (0 to 100).foreach { i => a() = i } println(c.now, count) //(103,106) b() = 4 println(c.now, count) //(104,211) -- 211!!! In this example, even though b is only updated a few times, the count value starts to soar as a is modified. This is mkRx leaking! That is, every time c is recomputed, it builds a whole new Rx that sticks around and keeps on evaluating, even after it is no longer reachable as a data dependency and forgotten. So after running that (0 to 100).foreach statment, there are over 100 Rxs that all fire every time b is changed. This clearly is not desirable. However, by adding an explicit owner (and removing unsafe), we can fix the leak: var count = 0 val a = Var(1); val b = Var(2) def mkRx(i: Int)(implicit ctx: Ctx.Owner) = Rx { count += 1; i + b() } val c = Rx{ val newRx = mkRx(a()) newRx() } println(c.now,count) // (3,1) a() = 4 println(c.now,count) // (6,2) b() = 3 println(c.now,count) // (7,4) (0 to 100).foreach { i => a() = i } println(c.now,count) //(103,105) b() = 4 println(c.now,count) //(104,107) Ownership fixes leaks by keeping allowing a parent Rx to track its "owned" nested Rx. That is whenever an Rx recaculates, it first kills all of its owned dependencies, ensuring they do not leak. In this example, c is the owner of all the Rxs which are created in mkRx and kills them automatically every time c recalculates. Data ContextData Context Given either a Rx or a Var using () (aka apply) unwraps the current value and adds itself as a dependency to whatever Rx that is currently evaluating. Alternatively, .now can be used to simply unwrap the value and skips over becoming a data dependency: val a = Var(1); val b = Var(2) val c = Rx{ a.now + b.now } //not a very useful `Rx` println(c.now) // 3 a() = 4 println(c.now) // 3 b() = 5 println(c.now) // 3 To understand the need for a Data context and how Data contexts differ from Owner contexts, consider the following example: def foo()(implicit ctx: Ctx.Owner) = { val a = rx.Var(1) a() a } val x = rx.Rx{val y = foo(); y() = y() + 1; println("done!") } With the concept of ownership, if a() is allowed to create a data dependency on its owner, it would enter infinite recursion and blow up the stack! Instead, the above code gives this compile time error: <console>:17: error: No implicit Ctx.Data is available here! a() We can "fix" the error by explicitly allowing the data dependencies (and see that the stack blows up): def foo()(implicit ctx: Ctx.Owner, data: Ctx.Data) = { val a = rx.Var(1) a() a } val x = rx.Rx{val y = foo(); y() = y() + 1; println("done!") } ... at rx.Rx$Dynamic$Internal$$anonfun$calc$2.apply(Core.scala:180) at scala.util.Try$.apply(Try.scala:192) at rx.Rx$Dynamic$Internal$.calc(Core.scala:180) at rx.Rx$Dynamic$Internal$.update(Core.scala:184) at rx.Rx$.doRecalc(Core.scala:130) at rx.Var.update(Core.scala:280) at $anonfun$1.apply(<console>:15) at $anonfun$1.apply(<console>:15) at rx.Rx$Dynamic$Internal$$anonfun$calc$2.apply(Core.scala:180) at scala.util.Try$.apply(Try.scala:192) ... The Data context is the mechanism that an Rx uses to decide when to recaculate. Ownership fixes the problem of leaking. Mixing the two can lead to infinite recursion: when something is both owned and a data dependency of the same parent Rx. Luckily though it is almost always the case that only one or the other context is needed. when dealing with dynamic graphs, it is almost always the case that only the ownership context is needed, ie functions most often have the form: def f(...)(implicit ctx: Ctx.Owner) = Rx { ... } The Data context is needed less often and is useful in, as an example, the case where it is desirable to DRY up some repeated Rx code. Such a funtion would have this form: def f(...)(implicit data: Ctx.Data) = ... This would allow some shared data dependency to be pulled out of the body of each Rx and into the shared function. By splitting up the orthogonal concepts of ownership and data dependencies the problem of infinite recursion as outlined above is greatly limited. Explicit data dependencies also make it more clear when the use of a Var or Rx is meant to be a data dependency, and not just a simple read of the current value (ie .now). Without this distiction, it is easier to introduce "accidental" data dependencies that are unexpected and unintended. Additional OperationsAdditional Operations Apart from the basic building blocks of Var/Rx/Obs, Scala.Rx also provides a set of combinators which allow your to easily transform your Rxs; this allows the programmer to avoid constantly re-writing logic for the common ways of constructing the dataflow graph. The five basic combinators: map(), flatMap, filter(), reduce() and fold() are all modelled after the scala collections library, and provide an easy way of transforming the values coming out of an Rx. MapMap val a = Var(10) val b = Rx{ a() + 2 } val c = a.map(_*2) val d = b.map(_+3) println(c.now) // 20 println(d.now) // 15 a() = 1 println(c.now) // 2 println(d.now) // 6 map does what you would expect, creating a new Rx with the value of the old Rx transformed by some function. For example, a.map(_*2) is essentially equivalent to Rx{ a() * 2 }, but somewhat more convenient to write. FlatMapFlatMap val a = Var(10) val b = Var(1) val c = a.flatMap(a => Rx { a*b() }) println(c.now) // 10 b() = 2 println(c.now) // 20 flatMap is analogous to flatMap from the collections library in that it allows for merging nested Rx s of type Rx[Rx[_]] into a single Rx[_]. This in conjunction with the map combinator allow for scala's for comprehension syntax to work with Rx s and Var s: val a = Var(10) val b = for { aa <- a bb <- Rx { a() + 5} cc <- Var(1).map(_*2) } yield { aa + bb + cc } FilterFilter val a = Var(10) val b = a.filter(_ > 5) a() = 1 println(b.now) // 10 a() = 6 println(b.now) // 6 a() = 2 println(b.now) // 6 a() = 19 println(b.now) // 19 filter ignores changes to the value of the Rx that fail the predicate. Note that none of the filter methods is able to filter out the first, initial value of a Rx, as there is no "older" value to fall back to. Hence this: val a = Var(2) val b = a.filter(_ > 5) println(b.now) will print out "2". ReduceReduce val a = Var(1) val b = a.reduce(_ * _) a() = 2 println(b.now) // 2 a() = 3 println(b.now) // 6 a() = 4 println(b.now) // 24 The reduce operator combines subsequent values of an Rx together, starting from the initial value. Every change to the original Rx is combined with the previously-stored value and becomes the new value of the reduced Rx. FoldFold val a = Var(1) val b = a.fold(List.empty[Int])((acc,elem) => elem :: acc) a() = 2 println(b.now) // List(2,1) a() = 3 println(b.now) // List(3,2,1) a() = 4 println(b.now) // List(4,3,2,1) Fold enables accumulation in a similar way to reduce, but can accumulate to some other type than that of the source Rx. Each of these five combinators has a counterpart in the .all namespace which operates on Try[T]s rather than Ts, in the case where you need the added flexibility to handle Failures in some special way. Asynchronous CombinatorsAsynchronous Combinators These are combinators which do more than simply transforming a value from one to another. These have asynchronous effects, and can spontaneously modify the dataflow graph and begin propagation cycles without any external trigger. Although this may sound somewhat unsettling, the functionality provided by these combinators is often necessary, and manually writing the logic around something like Debouncing, for example, is far more error prone than simply using the combinators provided. Note that none of these combinators are doing anything that cannot be done via a combination of Obss and Vars; they simply encapsulate the common patterns, saving you manually writing them over and over, and reducing the potential for bugs. FutureFuture import scala.concurrent.Promise import scala.concurrent.ExecutionContext.Implicits.global import rx.async._ val p = Promise[Int]() val a = p.future.toRx(10) println(a.now) //10 p.success(5) println(a.now) //5 The toRx combinator only applies to Future[_]s. It takes an initial value, which will be the value of the Rx until the Future completes, at which point the the value will become the value of the Future. This async can create Futures as many times as necessary. This example shows it creating two distinct Futures: import scala.concurrent.Promise import scala.concurrent.ExecutionContext.Implicits.global import rx.async._ var p = Promise[Int]() val a = Var(1) val b: Rx[Int] = Rx { val f = p.future.toRx(10) f() + a() } println(b.now) //11 p.success(5) println(b.now) //6 p = Promise[Int]() a() = 2 println(b.now) //12 p.success(7) println(b.now) //9 The value of b() updates as you would expect as the series of Futures complete (in this case, manually using Promises). This is handy if your dependency graph contains some asynchronous elements. For example, you could have a Rx which depends on another Rx, but requires an asynchronous web request to calculate its final value. With async, the results from the asynchronous web request will be pushed back into the dataflow graph automatically when the Future completes, starting off another propagation run and conveniently updating the rest of the graph which depends on the new result. TimerTimer import rx.async._ import rx.async.Platform._ import scala.concurrent.duration._ val t = Timer(100 millis) var count = 0 val o = t.trigger { count = count + 1 } println(count) // 3 println(count) // 8 println(count) // 13 A Timer is a Rx that generates events on a regular basis. In the example above, using println in the console show that the value t() has increased over time. The scheduled task is cancelled automatically when the Timer object becomes unreachable, so it can be garbage collected. This means you do not have to worry about managing the life-cycle of the Timer. On the other hand, this means the programmer should ensure that the reference to the Timer is held by the same object as that holding any Rx listening to it. This will ensure that the exact moment at which the Timer is garbage collected will not matter, since by then the object holding it (and any Rx it could possibly affect) are both unreachable. DelayDelay import rx.async._ import rx.async.Platform._ import scala.concurrent.duration._ val a = Var(10) val b = a.delay(250 millis) a() = 5 println(b.now) // 10 eventually{ println(b.now) // 5 } a() = 4 println(b.now) // 5 eventually{ println(b.now) // 4 } The delay(t) combinator creates a delayed version of an Rx whose value lags the original by a duration t. When the Rx changes, the delayed version will not change until the delay t has passed. This example shows the delay being applied to a Var, but it could easily be applied to an Rx as well. DebounceDebounce import rx.async._ import rx.async.Platform._ import scala.concurrent.duration._ val a = Var(10) val b = a.debounce(200 millis) a() = 5 println(b.now) // 5 a() = 2 println(b.now) // 5 eventually{ println(b.now) // 2 } a() = 1 println(b.now) // 2 eventually{ println(b.now) // 1 } The debounce(t) combinator creates a version of an Rx which will not update more than once every time period t. If multiple updates happen with a short span of time (less than t apart), the first update will take place immediately, and a second update will take place only after the time t has passed. For example, this may be used to limit the rate at which an expensive result is re-calculated: you may be willing to let the calculated value be a few seconds stale if it lets you save on performing the expensive calculation more than once every few seconds. Design ConsiderationsDesign Considerations Simple to UseSimple to Use This meant that the syntax to write programs in a dependency-tracking way had to be as light weight as possible, that programs written using FRP had to look like their normal, old-fashioned, imperative counterparts. This meant using DynamicVariable instead of implicits to automatically pass arguments, sacrificing proper lexical scoping for nice syntax. I ruled out using a purely monadic style (like reactive-web), as although it would be far easier to implement the library in that way, it would be a far greater pain to actually use it. I also didn't want to have to manually declare dependencies, as this violates DRY when you are declaring your dependencies twice: once in the header of the Rx, and once more when you use it in the body. The goal was to be able to write code, sprinkle a few Rx{}s around and have the dependency tracking and change propagation just work. Overall, I believe it has been quite successful at that! Simple to Reason AboutSimple to Reason About This means many things, but most of all it means having no globals. This greatly simplifies many things for someone using the library, as you no longer need to reason about different parts of your program interacting through the library. Using Scala.Rx in different parts of a large program is completely fine; they are completely independent. Another design decision in this area was to have the parallelism and propagation-scheduling be left mainly to an implicit ExecutionContext, and have the default to simply run the propagation wave on whatever thread made the update to the dataflow graph. - The former means that anyone who is used to writing parallel programs in Scala/Akka is already familiar with how to deal with parallelizing Scala.Rx - The latter makes it far easier to reason about when propagations happen, at least in the default case: it simply happens right away, and by the time that Var.update()function has returned, the propagation has completed. Overall, limiting the range of side effects and removing global state makes Scala.Rx easy to reason about, and means a developer can focus on using Scala.Rx to construct dataflow graphs rather than worry about un-predictable far-reaching interactions or performance bottlenecks. Simple to InteropSimple to Interop This meant that it had to be easy for a programmer to drop in and out of the FRP world. Many of the papers I read in preparing for Scala.Rx described systems that worked brilliantly on their own, and had some amazing properties, but required that the entire program be written in an obscure variant of an obscure language. No thought at all was given to inter-operability with existing languages or paradigms, which makes it impossible to incrementally introduce FRP into an existing codebase. With Scala.Rx, I resolved to do things differently. Hence, Scala.Rx: - Is written in Scala: an uncommon, but probably less-obscure language than Haskell or Scheme - Is a library: it is plain-old-scala. There is no source-to-source transformation, no special runtime necessary to use Scala.Rx. You download the source code into your Scala project, and start using it - Allows you to use any programming language construct or library functionality within your Rxs: Scala.Rx will figure out the dependencies without the programmer having to worry about it, without limiting yourself to some inconvenient subset of the language - Allows you to use Scala.Rx within a larger project without much pain. You can easily embed dataflow graphs within a larger object-oriented universe and interact with them via setting Vars and listening to Obss Many of the papers reviewed show a beautiful new FRP universe that we could be programming in, if only you ported all your code to FRP-Haskell and limited yourself to the small set of combinators used to create dataflow graphs. On the other hand, by letting you embed FRP snippets anywhere within existing code, using FRP ideas in existing projects without full commitment, and allowing you easy interop between your FRP and non-FRP code, Scala.Rx aims to bring the benefits FRP into the dirty, messy universe which we are programming in today. LimitationsLimitations Scala.Rx has a number of significant limitations, some of which arise from trade-offs in the design, others from the limitations of the underlying platform. No "Empty" ReactivesNo "Empty" Reactives The API of Rxs in Scala.Rx tries to follow the collections API as far as possible: you can map, filter and reduce over the Rxs, just as you can over collections. However, it is currently impossible to have a Rx which is empty in the way a collection can be empty: filtering out all values in a Rx will still leave at least the initial value (even if it fails the predicate) and Async Rxs need to be given an initial value to start. This limitation arises from the difficulty in joining together possibly empty Rxs with good user experience. For example, if I have a dataflow graph: val a = Var() val b = Var() var c = Rx{ .... a() ... ... some computation ... ... b() ... result } Where a and b are initially empty, I have basically four options: - Block the current thread which is computing c, waiting for aand then bto become available. - Throw an exception when a()and b()are requested, aborting the computation of cbut registering it to be restarted when a()or b()become available. - Re-write this in a monadic style using for-comprehensions. - Use the delimited continuations plugin to transform the above code to monadic code automatically. The first option is a performance problem: threads are generally extremely heavy weight on most operation systems. You cannot reasonably make more than a few thousand threads, which is a tiny number compared to the amount of objects you can create. Hence, although blocking would be the easiest, it is frowned upon in many systems (e.g. in Akka, which Scala.Rx is built upon) and does not seem like a good solution. The second option is a performance problem in a different way: with n different dependencies, all of which may start off empty, the computation of c may need to be started and aborted n times even before completing even once. Although this does not block any threads, it does seem extremely expensive. The third option is a no-go from a user experience perspective: it would require far reaching changes in the code base and coding style in order to benefit from the change propagation, which I'm not willing to require. The last option is problematic due to the bugginess of the delimited continuations plugin. Although in theory it should be able to solve everything, a large number of small bugs (messing up type inferencing, interfering with implicit resolution) combined with a few fundamental problems meant that even on a small scale project (less than 1000 lines of reactive code) it was getting painful to use. No Automatic Parallelization at the StartNo Automatic Parallelization at the Start As mentioned earlier, Scala.Rx can perform automatic parallelization of updates occurring in the dataflow graph: simply provide an appropriate ExecutionContext, and independent Rxs will have their updates spread out over multiple cores. However, this only works for updates, and not when the dataflow graph is being initially defined: in that case, every Rx evaluates its body once in order to get its default value, and it all happens serially on the same thread. This limitation arises from the fact that we do not have a good way to work with "empty" Rxs, and we do not know what an Rxs dependencies are before the first time we evaluate it. Hence, we cannot start all our Rxs evaluating in parallel as some may finish before others they depend on, which would then be empty, their initial value still being computed. We also cannot choose to parallelize those which do not have dependencies on each other, as before execution we do not know what the dependencies are! Thus, we have no choice but to have the initial definitions of Rxs happen serially. If necessary, a programmer can manually create independent Rxs in parallel using Futures. Glitchiness and Redundant ComputationGlitchiness and Redundant Computation In the context of FRP, a glitch is a temporary inconsistency in the dataflow graph. Due to the fact that updates do not happen instantaneously, but instead take time to compute, the values within an FRP system may be transiently out of sync during the update process. Furthermore, depending on the nature of the FRP system, it is possible to have nodes be updated more than once in a propagation. This may or may not be a problem, depending on how tolerant the application is of occasional stale inconsistent data. In a single-threaded system, it can be avoided in a number of ways - Make the dataflow graph static, and perform a topological sort to rank nodes in the order they are to be updated. This means that a node always is updated after its dependencies, meaning they will never see any stale data - Pause the updating of a node when it tries to call upon a dependency which has not been updated. This could be done by blocking the thread, for example, and only resuming after the dependency has been updated. However, both of these approaches have problems. The first approach is extremely constrictive: a static dataflow graph means that a large amount of useful behavior, e.g. creating and destroying sections of the graph dynamically at run-time, is prohibited. This goes against Scala.Rx's goal of allowing the programmer to write code "normally" without limits, and letting the FRP system figure it out. The second case is a problem for languages which do not easily allow computations to be paused. In Java, and by extension Scala, the threads used are operating system (OS) threads which are extremely expensive. Hence, blocking an OS thread is frowned upon. Coroutines and continuations could also be used for this, but Scala lacks both of these facilities. The last problem is that both these models only make sense in the case of single threaded, sequential code. As mentioned on the section on Concurrency and Parallelism, Scala.Rx allows you to use multiple threads to parallelize the propagation, and allows propagations to be started by multiple threads simultaneously. That means that a strict prohibition of glitches is impossible. Scala.Rx maintains somewhat looser model: the body of each Rx may be evaluated more than once per propagation, and Scala.Rx only promises to make a "best-effort" attempt to reduce the number of redundant updates. Assuming the body of each Rx is pure, this means that the redundant updates should only affect the time taken and computation required for the propagation to complete, but not affect the value of each node once the propagation has finished. In addition, Scala.Rx provides the Obss, which are special terminal-nodes guaranteed to update only once per propagation, intended to produce some side effect. This means that although a propagation may cause the values of the Rxs within the dataflow graph to be transiently out of sync, the final side-effects of the propagation will only happen once the entire propagation is complete and the Obss all fire their side effects. If multiple propagations are happening in parallel, Scala.Rx guarantees that each Obs will fire at most once per propagation, and at least once overall. Furthermore, each Obs will fire at least once after the entire dataflow graph has stabilized and the propagations are complete. This means that if you are relying on Obs to, for example, send updates over the network to a remote client, you can be sure that you don't have any unnecessary chatter being transmitted over the network, and when the system is quiescent the remote client will have the updates representing the most up-to-date version of the dataflow graph. Related WorkRelated Work Scala.Rx was not created in a vacuum, and borrows ideas and inspiration from a range of existing projects. Scala.ReactScala.React Scala.React, as described in Deprecating the Observer Pattern, contains a reactive change propagation portion (there called Signals) which is similar to what Scala.Rx does. However, it does much more than that: It contains implementations for using event-streams, and multiple DSLs using delimited continuations in order to make it easy to write asynchronous workflows. I have used this library, and my experience is that it is extremely difficult to set up and get started. It requires a fair amount of global configuration, with a global engine doing the scheduling and propagation, even running its own thread pools. This made it extremely difficult to reason about interactions between parts of the programs: would completely-separate dataflow graphs be able to affect each other through this global engine? Would the performance of multithreaded code start to slow down as the number of threads rises, as the engine becomes a bottleneck? I never found answers to many of these questions, and had did not manage to contact the author. The global propagation engine also makes it difficult to get started. It took several days to get a basic dataflow graph (similar to the example at the top of this document) working. That is after a great deal of struggling, reading the relevant papers dozens of times and hacking the source in ways I didn't understand. Needless to say, these were not foundations that I would feel confident building upon. reactive-webreactive-web reactive-web was another inspiration. It is somewhat orthogonal to Scala.Rx, focusing more on event streams and integration with the Lift web framework, while Scala.Rx focuses purely on time-varying values. Nevertheless, reactive-web comes with its own time-varying values (called Signals), which are manipulated using combinators similar to those in Scala.Rx ( map, filter, flatMap, etc.). However, reactive-web does not provide an easy way to compose these Signals: the programmer has to rely entirely on map and flatMap, possibly using Scala's for-comprehensions. I did not like the fact that you had to program in a monadic style (i.e. living in .map() and .flatMap() and for{} comprehensions all the time) in order to take advantage of the change propagation. This is particularly cumbersome in the case of [nested Rxs](Basic-Usage#nesting), where Scala.Rx's // a b and c are Rxs x = Rx{ a() + b().c() } becomes x = for { va <- a vb <- b vc <- vb.c } yield (va + vc) As you can see, using for-comprehensions as in reactive-web results in the code being significantly longer and much more obfuscated. Knockout.jsKnockout.js Knockout.js does something similar for Javascript, along with some other extra goodies like DOM-binding. In fact, the design and implementation and developer experience of the automatic-dependency-tracking is virtually identical. This: this.firstName = ko.observable('Bob'); this.lastName = ko.observable('Smith'); fullName = ko.computed(function() { return this.firstName() + " " + this.lastName(); }, this); is semantically equivalent to the following Scala.Rx code: val firstName = Var("Bob") val lastName = Var("Smith") fullName = Rx{ firstName() + " " + lastName() } a ko.observable maps directly onto a Var, and a kocomputed maps directly onto an Rx. Apart from the longer variable names and the added verbosity of Javascript, the semantics are almost identical. Apart from providing an equivalent of Var and Rx, Knockout.js focuses its efforts in a different direction. It lacks the majority of the useful combinators that Scala.Rx provides, but provides a great deal of other functionality, for example integration with the browser's DOM, that Scala.Rx lacks. OthersOthers This idea of change propagation, with time-varying values which notify any value which depends on them when something changes, part of the field of Functional Reactive Programming. This is a well studied field with a lot of research already done. Scala.Rx builds upon this research, and incorporates ideas from the following projects: All of these projects are filled with good ideas. However, generally they are generally very much research projects: in exchange for the benefits of FRP, they require you to write your entire program in an obscure variant of an obscure language, with little hope inter-operating with existing, non-FRP code. Writing production software in an unfamiliar paradigm such as FRP is already a significant risk. On top of that, writing production software in an unfamiliar language is an additional variable, and writing production software in an unfamiliar paradigm in an unfamiliar language with no inter-operability with existing code is downright reckless. Hence it is not surprising that these libraries have not seen significant usage. Scala.Rx aims to solve these problems by providing the benefits of FRP in a familiar language, with seamless interop between FRP and more traditional imperative or object-oriented code. Version HistoryVersion History 0.4.00.4.0 - Dropped Scala 2.10 support - Added Bidirectional Vars and friends - Fixed multiple rx "glitches" 0.3.20.3.2 - Bumped to Scala 2.12.0. 0.3.10.3.1 Fixed leak with observers (they also require an owning context). Fixed type issue with flatMap 0.3.00.3.0 Introduced Ownerand Datacontext. This is a completely different implementation of dependency and lifetime managment that allows for safe construction of runtime dynamic graphs. More default combinators: foldand flatMapare now implemented by default. CreditsCredits.
https://index.scala-lang.org/lihaoyi/scala.rx/scalarx/0.4.3?target=_2.13
CC-MAIN-2020-34
refinedweb
7,319
56.45
" Will LSB 4 Standardize Linux? (InternetNews) Posted Jul 31, 2008 16:55 UTC (Thu) by JoeBuck (subscriber, #2330) [Link] Posted Jul 31, 2008 17:35 UTC (Thu) by elanthis (subscriber, #6227) [Link] That makes little sense. Distros can be certified LSB4 compliant, and they can offer meta-packages to pull in any libraries necessary for LSB4 compliance if they are not in the default install. Posted Jul 31, 2008 18:19 UTC (Thu) by derekp (guest, #53203) [Link] What the poster above was referring to is that if you develop your app on say RHEL, just because RHEL is LSB compliant doesn't mean that your app won't accidentally rely on non-LSB components also included in the base OS. So your app may not work on other LSB-compliant systems (because your app isn't a "pure" LSB app). The solution to this is to set up a directory structure that contains _only_ LSB components, then chroot into that directory and test your app. If there are no problems, then it should run on any other LSB-compliant OS. Posted Jul 31, 2008 18:31 UTC (Thu) by JoeBuck (subscriber, #2330) [Link] The best the ISV can do is to say is "try it, it might work, if it doesn't you'll have to use a supported distro". Posted Aug 1, 2008 0:42 UTC (Fri) by k8to (subscriber, #15413) [Link] I disagree, an ISV *could* do better. They can say "we support the LSB4 platform". And they can investigate all bugs which come to light running on the LSB4 platform, and determine whose side of the fence such problems are on and resolve them. Some bugs will be on the distro side, and then it will be up to the distro to support the platform. Really it would be ideal if you could actually get both parties interested in such bugs. You may be right that vendors will not go down this path, but there's no reason they can't do so. The issues you have to worry about are significantly curtailed if you target LSB. However the catch is that LSB is a pretty limited platform, or was. I haven't checked lately. Better tooling and IDE support Posted Aug 1, 2008 10:07 UTC (Fri) by salimma (subscriber, #34460) [Link] This sounds like an integration problem: compared to, say, OS X development, where the Xcode IDE let you target any of the last three OS X releases, Linux IDEs are not configured to target any LSB versions. Ideally, you should be able to deploy to a chroot-ed environment as part of the build process too. There's still the library mess if your application depends on libraries that are not part of the LSB standard, though.. Makes me think that adopting the Acorn RISC / OpenStep / OS X concept of application and framework bundles would be a neat idea, as far as binary proprietary applications go: the application is just a bundle (a directory), within which there is a predefined place for bundling self-contained frameworks (libraries) that are in turn bundles themselves. Application bundles? Puhlease. Posted Aug 1, 2008 11:46 UTC (Fri) by khim (subscriber, #9252) [Link] If you ever worked with these "magic" bundles you'll know they work just fine if we are talking about small self-contained applications (like Firefox or Skype) - exactly where trivial solution (.tar.lzma) already exist - but easily break apart if we are starting to talk about big integrated packages like MS Office/Acrobat (not reader). Then you have the whole non-standard scripts, arcane installation commands and so on. Sorry. This is non-solution for non-problem. 0install is slightly better - but not by much: basically it's huge distribution without any control with ability to run software from thousand of repositories. All existing systems are totally, utterly useless because they are addressing easiest 90% of the problem while leaving really problematic 10% untouched. Result: complex system which offers no significant additional value over simple tarball. Posted Aug 2, 2008 14:18 UTC (Sat) by salimma (subscriber, #34460) [Link]. Posted Aug 2, 2008 14:51 UTC (Sat) by salimma (subscriber, #34460) [Link] And who can use this metadata? Posted Aug 3, 2008 7:20 UTC (Sun) by khim (subscriber, #9252) [Link] The advantage of bundles is that it contains more metadata than a simple tarball. So what? README.txt/INSTALL.txt written in English is quite good for human and it's impossible to use metadata till there are no organization which resolves conflicts, give canonical names for libraries, etc. It's easy to request "convert" binary - but then your application will fail due to difference between ImageMagick and GraphicMagick flags. It's easy to request QT 3.3.x but then your application can not start since ABI was changed when immodule-qt was used. And if you need to just start your own background service - you still need to supply bunch of scripts for different distributions. Libraries and binaries (which is currect focus of all these systems) are easy, almost trivial: HDD space is cheap so just bundle all libraries with your package. Easily doable with tarball. And the hard stuff (do you need to talk with esd? with arts? or use PulseAudio daemon?) is not solved by current generation of bundle systems. Note: I've not said that bundles can never ever work. Who knows? Perhaps sometime in the future... when all will ditch this obsolete Linux and switch to great new HURD... but not today. So far all tries failed miserably. They are either work for limited usecases (0install is example here) or threaten to kill the host system altogether (autopackage and friends). Wrong goal Posted Jul 31, 2008 19:43 UTC (Thu) by ncm (subscriber, #165) [Link] This isn't meant to help users directly. As Joe notes, no software vendor will claim to run on any old LSB4-certified system. However, ISVs who code to LSB4 -- i.e., rely only on what's documented to be present in LSB4 -- will find porting to any LSB4 system will be easier than what they have to do now. Typically, now, they have to build release packages on some ancient version of Red Hat, and then rely on forward link compatibility to run on new Red Hats, and everywhere else too. With any luck this means they can rely on newer OS features than they previously had access to. E.g. does LSB have futex? If this works out, users will benefit by having a wider and possibly more modern choice of targets certified by the ISVs, because it will be cheaper for the ISVs to port and certify. ISVs benefit by being able to sell into shops that run distros they hadn't previously been able to afford to support. Anyway that's the theory. Posted Jul 31, 2008 21:18 UTC (Thu) by jwb (subscriber, #15467) [Link] That could happen, but what's more likely is that vendor-distributed binary software packages for Linux will continue to be garbage regardless of the LSB. It's easy to laugh at random sourceforge projects, but you haven't really experienced the extremes of software failure if you've never tried to download and install Oracle or DB2 or even Eclipse for that matter. Posted Aug 1, 2008 1:12 UTC (Fri) by jd (guest, #26381) [Link] I get nervous with standardizing layout, because there are still way too many overlaps between existing packages, way too many version dependencies, and way way too many namespace collisions. I would need to study LSB4 to see how it addresses these concerns, but really you want installs in isolated areas and some method of collecting the necessary files into virtual distributions as needed. Otherwise, you cannot guarantee a suitable environment for any application, but you do guarantee that with enough applications, collisions and conflicts will happen. Distros, not ISVs, more deserve some help Posted Aug 1, 2008 1:50 UTC (Fri) by coriordan (guest, #7544) [Link] Standardisation is a good thing, but the end goal should be to respond to the requests of distros, rather than making distros bypassable via write-for-LSB-run-anywhere. I think the distro maintainers are one of the best parts in the process of free software distribution. Proprietary software requires you to get your software from multiple vendors because each software developer reserves the right to the the only person distributing their software. The end result is that the software does not install in the way users want. Instead, you get tonnes of desktop icons and menu items and applications fighting to be the default application for a slew of file formats. So I'd say that it's the distros, not the ISVs, that should be the focus of LSB if LSB wants to help the free software community. offtopic link Posted Aug 1, 2008 2:39 UTC (Fri) by coriordan (guest, #7544) [Link] This issue reminded me of something RMS wrote in October 1998: I've searched for that essay many times, so now that I've found it I want to pass it on :-) Posted Aug 1, 2008 6:54 UTC (Fri) by k3ninho (subscriber, #50375) [Link] Perhaps one direction would be to have ISV's use a "LSB distro" against which they can test their packages. If it works on the LSB edition, it should work against any other GNU/Linux distribution also certified to meet the LSB. However, I doubt that the distributors will agree easily about the minimal content for such a distribution. These days, such an idea may even be moot if the QA process can test against VM images of Debian Stable, Ubuntu LTS, SUSE and Red Hat Enterprise (it's not like there's licensing costs...). K3n. Posted Aug 1, 2008 9:28 UTC (Fri) by drag (subscriber, #31333) [Link] The same thing that makes life hard in Linux for ISVs and proprietary software makers to support Linux is the same things that make it hard for open source developers and users to use Linux. For example: I like games. I want to play around with video games and I want to play around with making simple games. It's something enjoyable and fun to do. Unless, of course, your running Linux. Then it's a huge PITA. Language bindings, boost, Python, various libs and all that are used in all sorts of games or game engines. Very complex levels of dependencies and compiling versions for your own distribution are very very difficult. No distribution will support everything you need and even if they do the build-time options are probably not going to be correct and probably not the correct versions either. I have spent _days_ trying to get this or that to compile and to install. Over and over again I've tried to do this with all sorts of things with Crystalspace or Ogre3D and all of that. All of these are open source projects, all of these have developers that use and are fans of Linux and come from Linux oriented backgrounds. You know what I would have to do to get it to work in Windows? Download a EXE and run it. At worst, 2 or three. That's all. Days and days of struggling with Linux and dependencies and hell and failing at it, vs spending 10 minutes to download a executable. (And, of course, that isn't counting the low-quality of Linux 3D drivers which usually end up thwarting me even after I get everything built.) So you know most of their users are not going to use Linux, they are going to use Windows. I've even thought about dual booting Windows just so I can play with open source software. ----------------------------- Distributions work by simply brute forcing the software into submission. They spend hundreds of man-hours compiling, testing, and bug hunting. That's how it works and it's the only way it works.. Through simply massive amounts of time and effort. And not only that but it's all duplicated. The same amount of work that is required to get Debian's high quality packages still needs to be entirely duplicated for Suse, Redhat, Fedora, Ubuntu, Mandriva, etc. etc.'. There is very little logical reason for this! Why, if Debian does things well and Fedora does things well and Redhat does things well... Why on earth is Suse or Ubuntu developers forced to recreate all of that on their own? Even if they use Fedora's scripts and packages as the basis for things they use it's still a huge PITA to _port_ it over to Ubuntu and all sorts of variables inflict all sorts of bugs and broken behavior on end users. And, again, the only way to fix it is just a massive amount of _DUPLICATE_ effort on the part of every Linux distribution developer out there. They are all using the same programs, similar scripts, similar versions, same everything. The only difference is that they have to compile the software and build it into packages. ------------------------- IMO one of the most important things that Linux distributions can do now is work together to eliminate needless distinctions between them. LSB is probably a good start, but it's probably not nearly enough. Distros, not ISVs, more deserve help -- and can't standardize w/o it Posted Aug 1, 2008 10:41 UTC (Fri) by krishna (subscriber, #24080) [Link] <quote>'. </quote> Which is one reason Debian's Filesystem Hierarchy Standard and packaging requirements are IMHO the gold [meta-]standard for standardization. I think they got it right when they produced lintian, and flag installation and interoperability problems as show-stoppers for any given package -- in a way, it's like the US Armed Forces' boot camp -- you come in a tar.gz, they strip you down, issue you a uniform (the FHS), teach you the rules (the control file), and run you through a gauntlet until you either drop out and are not included in the release, or conform and can be guaranteed to work with others. As a result, you do have packages that are almost totally guaranteed to work together, with 'almost' being well-defined by a rigorously specified set of requirements. They take interoperability and standardization as seriously as everything else should be taken in software, and it shows in the conspicuous lack of installation problems in Debian's (albeit delayed) releases -- when a package is told to 'install', it knows what to pull in, can count on everything it pulls in following the same rules, and the installation succeeds. Assembling a standard for distros that don't believe making standardization compatibility an absolute requirement of the distribution qa process seems to provide a tool while sidestepping the attitude towards standardization. Posted Aug 1, 2008 13:51 UTC (Fri) by drag (subscriber, #31333) [Link] If it was up to me and I was the king of the universe I'd issue a decree that everybody should simply drop what they are doing and re-base off of Debian Testing. _Right_Now_. Debian has the most complete, most consistent, and most bug-free packages anywhere I've seen in any Linux distribution, bar none. And this is due entirely to the sort of thing you've described AND the tremendous amount of work and organizational effort that they put into it. Ubuntu _almost_ gets it right. Their mistake was not working towards backward compatibility. If everything in Ubuntu was installable in Debian and visa versa that would save everybody involved a lot of effort and a lot of heartache. IMO, Ubuntu should be not a entirely new distribution, but a installer and a repository based off of Debian. A carefully tested and configured default configuration that differs from what you'd get from Debian's "tasksel desktop". That is you would end up with something like: deb Hardy main contrib non-free ubuntu One distribution that seems to get it right is 64Studio. They do a very good job of making their packages work with Debian transparently, while still making significant changes to the kernel and other important system files. ------- Of course this isn't going to happen and I am sure it would be a unpopular concept among non-Debian developers. Posted Aug 1, 2008 15:05 UTC (Fri) by michaeljt (subscriber, #39183) [Link] Ubuntu's marketing is based around being something different and special. I also often feel that they stick too many of their limited resources into reinventing wheels their own way, rather than reusing other people's work well. Posted Aug 1, 2008 13:06 UTC (Fri) by cgyrovague (guest, #21692) [Link] "Standardisation is a good thing, but the end goal should be to respond to the requests of distros, rather than making distros bypassable via write-for-LSB-run-anywhere." I look at it a bit different. In any system, there are places where disagreement is important, because choice is needed. There are also many areas where using multiple approaches is silly or trivial - best to pick one and run with it. A standard like LSB should be a way to sort between the two. Stuff that's really common gets rolled into a standard, distro-unique stays unique. Then app developers can use LSB to cover the 80% common, and focus on distro-specific goop. Likewise distros can split effort between maintaining the commons and keeping their identity-specific bits. Memory, memory Posted Aug 1, 2008 5:15 UTC (Fri) by renox (guest, #23785) [Link] I don't remember exactly which LSB version that was but I do remember that previous LSB version was criticised: - for having thread tests which didn't work on fast computers due to a race condition in the tests, which is amusing. - by the gcc team for using a C++ ABI which didn't belong to a stable version(*). Both events made me quite dubious about LSB value at the time.. *: Note that this IMHO was also the 'fault' of the gcc team: if the C++ ABI was stable this kind of thing wouldn't have happened.. Apparently the C++ ABI has stabilised since then, so at least this won't be an issue anymore. ABI stability Posted Aug 1, 2008 5:42 UTC (Fri) by coriordan (guest, #7544) [Link] ABI stability might be a priority for an organisation such as LF which is a consortium of proprietary software developers. For free software projects, it might be quite justified that this wasn't a priority (it can be good, but just not a top priority). Posted Aug 1, 2008 11:21 UTC (Fri) by iabervon (subscriber, #722) [Link] ABI stability is important for distributors of binaries, regardless of whether there is source available. It doesn't help much that you would be able to compile a program with a stable compiler version if all of your C++ system libraries are required not to be possible to link against it by the standard your distribution conforms to. Just because users could throw out all of their binary packages that use C++ and get the source packages and build them doesn't mean this isn't a problem for them. And it's particularly bad for users of free software if it's impossible to have on their system for a given library, both a file that will work with the provided software and a file that will work with output from the reliable compiler, because users are likely to want to compile things and have them work, which is less of an issue for users of only proprietary software. Posted Aug 1, 2008 6:07 UTC (Fri) by michaeljt (subscriber, #39183) [Link] The lack of C++ ABIs is a pain in the neck, but it is not something I would like to blame on the GCC people. The C++ standard is both complex and itself instable, and producing a stable ABI for it (let alone one compatible with other compilers) is no mean task. The easiest way to deal with the problem that I know of is to wrap the ABI in something else (i.e. all exports are extern "C", or on Windows COM is popular for the purpose). Posted Aug 1, 2008 10:37 UTC (Fri) by muwlgr (guest, #35359) [Link] What about package format specified by LSB ? Is it still RPM-only ? What exactly is the problem? Posted Aug 4, 2008 4:51 UTC (Mon) by ldo (subscriber, #40946) [Link] Aren't distro-specific issues the responsibility of the distro maintainers? There are already tens of thousands of software packages available across hundreds of distros, and it all just works, with no further effort on the part of the software developers--they just release the source code and let the maintainers do all the necessary distro-specific packaging. Oh, you mean you're talking about closed-source ISVs? Why didn't you say so? Posted Aug 4, 2008 6:13 UTC (Mon) by coriordan (guest, #7544) [Link] nail's head says ouch. Posted Aug 4, 2008 10:54 UTC (Mon) by seyman (subscriber, #1172) [Link] > Oh, you mean you're talking about closed-source ISVs? Why didn't you say so? The LSB has always been about making it easier closed-source for ISVs to package their applications for GNU/Linux distributions. If you don't care about them, you're better off ignoring LSB discussions altogether. Even for closed-source ISVs... Posted Aug 4, 2008 11:20 UTC (Mon) by khim (subscriber, #9252) [Link] Do you know anything released for LSB? Adobe Reader, Google Earth, Opera, Skype, etc... all these programs are NOT targeted for LSB. If it does not concern open-source ISVs and closed-source ISVs are ignoring it too... then what's the point? Posted Aug 4, 2008 19:12 UTC (Mon) by seyman (subscriber, #1172) [Link] > Do you know anything released for LSB? Our (grumpy) editor once noted that MySQL and Real Player had been certified LSB: > If it does not concern open-source ISVs and closed-source ISVs are ignoring it too... then what's the point? Search me... A while back, I became convinced that the LSB was a conspiracy to get all the people interested in running closed-source software on GNU/Linux distributions involved in a project that a) required vast amounts of time and energy to run and b) was doomed to fail, thus minimizing the harm they could do. I haven't seen anything since then that would make me change my mind. Linux is a registered trademark of Linus Torvalds
http://lwn.net/Articles/292298/
crawl-002
refinedweb
3,779
67.28
Basic question about Ruby's class hierarchy Discussion in 'Ruby' started by Patrick Li, Aug 11, 2008. Want to reply to this thread or ask your own question?It takes just 2 minutes to sign up (and it's free!). Just click the sign up button to choose a username and then you can ask your own questions on the forum. - Similar Threads (namespace) class Hierarchy Chart for .net 2.0Mukund Patel, Dec 31, 2004, in forum: ASP .Net - Replies: - 0 - Views: - 1,726 - Mukund Patel - Dec 31, 2004 Invoking the Constructor of the Top Most Class in the Hierarchy from the Bottom most classH.MuthuKumaraRajan, Feb 3, 2004, in forum: Java - Replies: - 3 - Views: - 438 - H.MuthuKumaraRajan - Feb 4, 2004 Nested Class, Member Class, Inner Class, Local Class, Anonymous ClassE11, Oct 12, 2005, in forum: Java - Replies: - 1 - Views: - 4,736 - Thomas Weidenfeller - Oct 12, 2005 ObjectGraph: a Ruby class inheritance hierarchy graphMehr, Assaph (Assaph), Apr 15, 2004, in forum: Ruby - Replies: - 1 - Views: - 485 - Simon Strandgaard - Apr 15, 2004 ruby class hierarchyJason Lillywhite, Aug 12, 2009, in forum: Ruby - Replies: - 0 - Views: - 77 - Jason Lillywhite - Aug 12, 2009
http://www.thecodingforums.com/threads/basic-question-about-rubys-class-hierarchy.852131/
CC-MAIN-2014-35
refinedweb
190
67.89
Powerful Web Scraping/Crawling||Scrapy and BS4 - 5.5 hours on-demand video - 1 article - 2 downloadable resources - Full lifetime access - Access on mobile and TV - Certificate of Completion Get your team access to 4,000+ top Udemy courses anytime, anywhere.Try Udemy for Business - Web Scraping using Scrapy Frame work - Building a Spiders or Web Crawlers - Writing and Execute Web Scraping Script - Source Code for all Spiders or web crawlers - Exporting data extracted by Scrapy into Excel or CSV files - Fine tuning Your Spiders using Scraping Build in Settings - No Programming Required, Python Refresher is Provided with this course. - A laptop with Internet Connection - Attitude to Learn Web Scraping - A Smile....:) I will share my experience , how did i came up to develop this course.When i started out, My problem was not that i don't know that what i want to learn but my problem was following in HINDI/URDU LANGUAGE. 1- Which tools are the best 2- Which techniques are the Best 3- Will this Course Make me find my way or Make me more Confuse because there is too much Information out there. 4- Then i found The course but that was in English. and All above went into the Drain. But Don't worry, amigo, Very First time you are going to get all your needs satisfied, plus in your mother tongue. Its so easy to grab concepts in your mother tongue, that programming language seems natural and learning comes easy. Above all, I have distilled my six years of University Teaching Experience in this course to make it One of the Best Course you will ever take. In this Course, We will start from Zero. This Course is divided into three sections 1- Python Refresher Here You will learn all the Python Concepts needed to get started with Web Scraping Frame Work 2- Beautiful Soup BS4 In this Section, You will do your First Project of Scraping a real website, using BS4, One the most Famous Web-scraping Python Library 3- Scrapy In this Section, You will learn Scrapy, An Asynchronous Web Scraping Framer Work Build on Twisted. You will build a Scrapy Spider, and Learn how to use Scrapy Shell. 'Great Teacher So Far (5 Stars)' Saqib Munir Last but not the least if you have any Question don't hesitate to ask question. Good luck and Enjoy. - Python Programmers - Web Developers - Data-mining or Machine Learning Students - Students who want to built web Crawlers. - Students who want to extract data from web sites efficiently # -*- coding: utf-8 -*- import scrapy class QuotesSpider(scrapy.Spider): name = 'quotes' allowed_domains = ['quotes.toscrape.com'] start_urls = [''] def parse(self, response): # h1_tag = response.xpath('//*[@class="tag-item"]/a/text()').extract() # tags = response.xpath("//h1/a/text()").extract_first() # yield {'H1_Tag':h1_tag, 'Tags':tags} container = response.xpath('//*[@class="quote"]') for quote in container: text = quote.xpath('.//*[@class="text"]/text()').extract_first() author = quote.xpath('.//*[@class="author"]/text()').extract_first() keywords = quote.xpath('.//*[@class="keywords"]/@content').extract_first() yield { 'Text':text, 'Author':author, 'Key':keywords } next_url = response.xpath('//*[@class="next"]/a/@href').extract_first() abs_next_url = response.urljoin(next_url) yield scrapy.Request(abs_next_url)
https://www.udemy.com/course/web-scraping-python-scrapy-bs4-hindi-urdu/?couponCode=D2BE5D6383CF3B6CDF26
CC-MAIN-2020-29
refinedweb
517
55.64
#include <math.h> double ceil(double x); float ceilf(float x); long double ceill(long double x); These functions shall compute the smallest integral value not less, ceil(), ceilf(), and ceill() shall return the smallest integral value not less than x, expressed as a type double, float, or long double, respectively. If x is NaN, a NaN shall be returned. If x is ±0 or ±Inf, x shall be returned. If the correct value would cause overflow, a range error shall occur and ceil(), ceilf(), and ceill() need not be expressible as an int or long. The return value should be tested before assigning it to an integer type to avoid the undefined results of an integer overflow. The ceil(). feclearexcept() , fetestexcept() , floor() , isnan() , the Base Definitions volume of IEEE Std 1003.1-2001, Section 4.18, Treatment of Error Conditions for Mathematical Functions, <math.h>
http://www.makelinux.net/man/3posix/C/ceill
CC-MAIN-2015-22
refinedweb
146
62.38
Servlet Session Tracking - JSP-Servlet Servlet Session Tracking Hi I have made a main page called... == null_ { pw.println("No Session found"); } else... to a new Servlet of name: SessionNew by URL Redirecting method, but instead fo giving Session javax.servlet not found error javax.servlet not found error why iam getting javax.servlet error while running my servlet program...after setting the classpath also i am finding that problem creation and tracking Session creation and tracking 1.Implement the information persistence across servlet destroy or servlet container start/stop. Write a servlet such that when it is stopped (either by container shutdown or servlet stop no def found - JSP-Servlet no def found i have used the code of file upload from rose india but when i run no def found for fileupload exception although i have put jar file... error Session tracking basics . In session tracking client first make a request for any servlet, container... Session Tracking Basics Session Tracking Session tracking is a process that servlets use J2EE Tutorial - Session Tracking J2EE Tutorial - Session Tracking  ... is Session-Tracking. HTTP is a stateless protocol. (ie) In a multiform... to various reasons, it is the 'Session ' object method , that is found class not found error - JDBC class not found error thanks for your response. please clarify the following doubts. i am having the specified mysql connector jar file. where that jar file has to be placed. also does the jdbc driver need to be installed session session explain sessioln tracking in jsp and display program identifier not found identifier not found Getting compilation error " identifier not found " please explain ? identifier means variable name,,,,make sure it same as declaration Tracking User's Session using Hidden Form Fields Tracking User's Session using Hidden Form Fields In this Section, We will discuss about tracking user's session using Hidden form field. Hidden Form Field is used to maintain the session. It is one of the method to handle http 404 resource not found error http 404 resource not found error Sir, I have downloaded your RichLRApplication and deployed on tomcat 6 with oracle backend. Application started... not found error. Kindly advise. ss error in servlet error in servlet while excuting the servlet program, it produce the error as "HTTP 404 not found". i couldnt rectify that error. can you help me...;servlet-name>Serv1</servlet-name> <servlet-class>Serv1< session session What mechanisms are used by a Servlet Container to maintain session information? Servlet Container uses Cookies, URL rewriting, and HTTPS protocol information to maintain the session File not found ;% this is my code part getting an error "emp_code.doc"(specified file not found File not found ;lt;% this is my code part getting an error "emp_code.doc"(specified file not found File not found (); <% } this is my code part getting an error "emp_code.doc"(specified file not found File not found getting an error "emp_code.doc"(specified file not found) Hi Friend Steps not found. Steps not found. import java.io.*; import java.net.*; import javax.servlet.*; import javax.servlet.http.*; public class Details1 extends...;"); out.println("<head>"); out.println("<title>Servlet Details1 Servlet Session Servlet Session Sometimes it is required to maintain a number of request... before invalidating this session by the servlet container. If you do not want... and also can delete the any session data. Session Tracking How File not found ;% } this is my code part getting an error "emp_code.doc"(specified file not found) Hi Friend, Visit Here Thanks Session ;A servlet session is created and stored on the server side. The servlet container keeps track of all the sessions it manages and fulfills servlet API requests to get.... To the maintain the session, Web clients must pass back a valid session mycustomer.hbm.xml not found /hibernate-configuration-3.0.dtd" > <hibernate-configuration> <session...="p1.mycustomer.hbm.xml"/> </session-factory> </hibernate...: org.hibernate.MappingNotFoundException: resource: p1.mycustomer.hbm.xml not found mycustomer.hbm.xml not found -configuration> <session-factory> <property name...;mapping </session...: org.hibernate.MappingNotFoundException: resource: p1.mycustomer.hbm.xml not found Session Session why do we need session? actually what stored in a session... and user would not able to retrieve the information. So, Session provides that facility to store information on server memory. 2)Variables are stored in session exception handling code for file not found error.. exception handling code for file not found error.. How to do exception handling for file not found error servlet error - JSP-Servlet overthere. Some times it shows that servlet class is not found but I have done...servlet error I have develope a servlet program when I run... specified wrong path or you haven't set the classpath properly. Do you have servlet Servlet-session Servlet-session step by step example on session in servlets J2EE Tutorial - Session Tracking Example J2EE Tutorial - Session Tracking Example... to the list and displayed, thereby achieving session-tracking... session-tracking feature. ( A stateless session bean does not retain  Track user's session using 'session' object in JSP Track user's session using 'session' object in JSP This section is about tracking user identity across different JSP pages using 'session' object. Session Tracking : Session tracking is a mechanism that is used spring3 mvc appliation bean definition not found error spring3 mvc appliation bean definition not found error hi I... the following error, can you suggest me how to solve. The error message is: Error creating bean with name Real time Mechanism fot Sessio Tracking - Servlet Interview Questions Real time Mechanism fot Sessio Tracking Hi Friends, Which is d most commonly used mechanism for tracking a session. and y it is best . thanks in advance spring3 mvc appliation bean definition not found error spring3 mvc appliation bean definition not found error hi deepak I... execute it shows index page, when I click on link it shows the following error Error creating bean with name JSP error: class UserForm not found in class model.UserAction JSP error: class UserForm not found in class model.UserAction etting the following error in the program..wat shod i do to remove this?I am using jdeveloper 10 g [code] package model; import javax.servlet.http.HttpServletRequest java profram:error 405 method not found java profram:error 405 method not found import javax.servlet.http.*; import javax.servlet.*; import java.io.*; public class Configservlet extends HttpServlet { ServletConfig con; ServletContext ctx; public void Java httpsession and is used for the purpose of session tracking while working with servlets. Session tracking is a mechanism that is used to maintain state about a series of requests... some period of time. The session object can be found using getSession() method jsp error - JSP-Servlet , The 404 or Not Found error message is an HTTP standard response code indicating... with "server not found" or similar errors, in which a connection to the destination server Programming Error - JSP-Servlet host = "smtp.yourisp.host"; // Create properties, get Session... props.put("mail.debug", "true"); Session session... Message msg = new MimeMessage(session); //Set message CLASS NOT FOUND EXCEPTION CLASS NOT FOUND EXCEPTION I AM USING INTERNET EXPLORER VERSION 6.00... LOGIN IN SITE AND UPDATE DIGITAL CERTIFICATE THE ERROR SHAWN IS BELOW CLASS NOT FOUND EXCEPTION AND THE BOX IS DISPLAYED AND I CLICK ON DETAILS BUTTON Programming Error - JSP-Servlet the count box should get transmitted to the Payment page that is through session... ??? And when i tried its giving me error than cannot find variable.In request.getParameter it wont work in next to next page so please help me how to do through session KRA tracking KRA tracking hi, I am doing a project on KRA tracking.. ie timesheet traking in jsp servlet. can u just help me out with the flow Programming Error - JSP-Servlet ("mail.smtp.host", host); // Get the default Session object. Session session = Session.getDefaultInstance(properties); // Create a default MimeMessage object. MimeMessage message = new MimeMessage(session Programming Error - JSP-Servlet that is session then how to do that because the value inside the count box in an integer unable to import class com.opensymphony.xwork2.ActionContext not found unable to import class com.opensymphony.xwork2.ActionContext not found Imported class com.opensymphony.xwork2.ActionContext not found gettin the above srvlet compilation error - JSP-Servlet srvlet compilation error while compiling a servlet, httpservlet class not found errors & many such class not are generated. Hi Friend, It seems that compiler haven't found the servlet-api.jar file.So put Session Related Interview Questions ; Question: What is Session Tracking? Answer: HTTP is stateless protocol...: What are different types of Session Tracking? Answer: Mechanism for Session... the user. Question: Why do u use Session Tracking in HttpServlet? Answer file not found exception Java file not found exception This section illustrates you the concept of file not found exception. Java provides a powerful concept of Exceptions. An exception is an error that occurs at runtime. It is either generated by the Java java compilation error - JSP-Servlet .I have also set classpath and path.But stil on compiling my servlet programs...* ,javx.servlet.io.* packages not found . Sir ,plz tell me what should i do? Hi Friend, Put servlet-api.jar in the lib folder of your jdk.Restart JSP Translation error - JSP-Servlet declared first variable shows the following error. ----------------------------------------------- An error occurred at line: 2 in the jsp file: /stud/../session_files/sessioncheck.jsp Duplicate field stud_005fdetails_jsp.a 1: 2 Java Servlet : Hidden Field Field. It is one of session tracking technique. Hidden Field : Session... for a specific time period. You can maintain session tracking in three ways - Hidden form field, Cookies, URL rewriting. Hidden form field is a way of session tracking jsp - session - JSP-Servlet JSP - Session How to manage session in JSP Stateless Session Bean Example Error Stateless Session Bean Example Error Dear sir, I'm getting following error while running StatelessSessionBean example on Jboss. Please help me... [STDOUT] Error:example not bound 22:57:12,683 INFO [STDOUT] add 22:57:12,687 ERROR Session removing - JSP-Servlet Session removing Hi, I am destroying session by using session.invalidate() in JSP but I am not able to destroy it completely can anyone help me... has been in session using session. setAttribute() but at log off I am using runtime error - JSP-Servlet as per guidelines...but i got error..!!!! org.apache.jasper.JasperException... org.apache.poi.poifs.filesystem.POIFSFileSystem not found in import. import Location finding error - JSP-Servlet the error as file not found. Below is the coding.The problem line is marked...Location finding error Location needs from drive name: My file uploading program has an error. It requires the location should be given from Session Timeour - JSP-Servlet Session Timeour Hi, How to create a session timeout page in JSP? Session timeout should happen after 15 mins of idle instance. Thanks ...)to remove the session after 15 min(900 seconds). For more information, visit servlet-error servlet-error where we declare a servlet errors in web application Eclipse-launch failed binaries not found small hello world program. 4.then running the file it is giving me an error concept - JSP-Servlet Session concept Hai friends, I am doing a jsp project with session concept. If one person is not accessing his logged window for more than 10 minutes it gets automatically log out.Anybody explain me the reason SERVLET ERROR SERVLET ERROR message description The server encountered an internal error () that prevented it from fulfilling this request. exception javax.servlet.ServletException: Error instantiating servlet class ServletDemo error Servlet error In netbeans6.9 how can we deploy our servlet in my project i design servlet to receive the data through html pages and design database also but unable to do with xml files what we can change in sun-web.xml Session management Session management I am new to servlet....developing a project in servlet-jsp.... i want to know about session management... that i don't want to let a user can copy url address and run it on same os and other browser Thanks Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/93052
CC-MAIN-2015-22
refinedweb
2,030
58.28
line number of error in .sage file When sage reports an error, it reports the line number of the .py file where the error occured. I am wondering if there is a way to configure sage so that it instead reports the line number of the code in the .sage file that ultimately generated that problematic line in the .py file. This would be useful because it is the .sage file that I am actually editing. This is theoretically possible: other languages and transpilers use sourcemaps to solve this same issue. What is a .sagefile and what is a .pyfile? Does import traceback, followed by traceback.print_exc()at the sensible places, not give the corresponding information? Where does which sage process report an error? Command line, interpreter? Example: File content: Then and so on... My original file is called file.sage(the ".sage file"). When I execute it via the terminal command sage file.sage, then the file file.sage.py(the ".py file") is automatically generated. I hope this clarifies the issue for you.
https://ask.sagemath.org/question/40891/line-number-of-error-in-sage-file/
CC-MAIN-2018-17
refinedweb
175
71.41
React componentDidMount Not Called in Server Render You have access to some great component lifecycle hooks in React. componentDidMount is one of the most useful, but it’s not going to be available when rendering on the server. You may have been stumped when you wrote a React component and were trying to get some code to run in the componentDidMount. But try as you might, console.log as you might, that function was just never called. You double-checked your spelling and remained baffled. Server Difference Then you remembered you were rendering React on the server. The server has a key difference in comparison the browser. The server environment has no DOM. The browser provides the DOM. Mounting in React is synonmous with DOM insertion. Even the library API tries to give you a strong indication that things are going to be different on the server. You import differently: import ReactDOMServer from 'react-dom/server' You render differently: // if rendering to match with a React client ReactDOMServer.renderToString(reactElements) // if rendering for static html ReactDOMServer.renderToStaticMarkup(reactElements) Your Only Hope So things are different. But there’s always hope. Components are built on hope. You can add code to your component inside the constructor and componentWillMount. That is all: class OobiDoobBenooby extends React.Component { componentWillMount() { console.log('Will be called on the server...') } } And it makes sense. You won’t be inserting into a DOM. Mount will never happen. You’re simply outputting a string of HTML. Besides the situation of rendering on the server, have there been other times componentDidMount didn’t seem to work for you? And what other interesting differences have you found while server-rendering React versus the browser experience?
https://jaketrent.com/post/react-componentdidmount-not-called-server-render/
CC-MAIN-2017-43
refinedweb
285
52.97
" is a C++ persistence library or framework that allows programmers to easily save and load objects using keys. Objects are associated with a user configurable type of key that can be used to name objects specifically and save them or load them by that name. This feature is the primary distinguishing difference between Calvin and the many other persistent libraries. A quick snippet shows what this means: // archives hold named objects like a database filesys_archive ar( "../data" ); // name of the variable c is "c" boost::shared_ptr<C> c( new C( "c" )); // save the object ar.save( c ); // delete the object c c.reset(); // load it from the archive c = ar.load<C>( "c" ); Calvin has most of the features you would expect in a persistence library. It's relatively painless to add persistence to your objects, with the only overhead being the addition of the name member to your objects. The above snippet uses a string, but virtually any value type1 can be used as a key. This article assumes a working knowledge of C++ and how to use templates. The accompanying code has only been tested on VC 7.1, though I suspect it would work on either the Comeau or GCC 3.3+ compilers. It also requires the Boost libraries 1.31 or greater to be accessible. There are many persistence libraries available for C++ programmers, so why write another one? The short answer is that no library had the features I needed and to revise any of them would probably have taken more time than rolling my own. Calvin isn't a large library. That's not to say that I don't owe those other libraries an intellectual debt. Calvin builds on their ideas to enable the features that I needed. Perhaps you might need these features as well. As you can see in the above snippet, Calvin allows a programmer to name instances and then save and load them. Of what value is this feature? To see the difference, consider what most other persistence frameworks do (or don't do as the case may be). Other persistence frameworks are little more than serialization of an object into an alternate form. This is good for allowing an object to be sent across the wire or simply dumped to disk. But what if that object contains references to other objects? Usually these contained objects are serialized as well within the original object. Say you have objects A, B and C. B contains a pointer to A, as does C. You serialize B, it in turn serializes A and both are dumped to their store. You do likewise for C, and all is good with the world. But what happens when you load B and C from the store? In most libraries, you would end up with two objects of type A that B and C each reference now. How to get around this duplication problem? There are two commonly implemented solutions to this problem. The first and most common is to require a root object that contains all the objects, such as a document or a 3D scene. This root object is the only object that may start a persistence operation, thereby eliminating the duplicate references outside of a single archive. This really isn't a solution but a constraint to the problem above. Another solution is to allocate all objects to be persisted from a special pool, and then the pool is what is saved. This really doesn't eliminate the root object, but simply shifts it, making the pool the mandatory root object to be persisted. Many applications don't find this prohibiting and can operate well within these constraints. Some applications, such as the one I write, share data across documents quite a bit, and if one piece of shared data is updated or changed, that change should be reflected in all the other documents. Calvin's solution is to allow shared instances to be named, and to then save each instance in its own record within an archive. Therefore, in our scenario above, when B is saved, so is A, but A is stored on its own record and a reference to A is saved with B. Likewise, when C is saved, A is saved and C saves a reference to A. When C is loaded, it loads A. When B is loaded, Calvin notices that A has already been loaded and a reference to A is returned to B, so that B and C once again refer the same object. This type of late binding allows me to write utilities that operate on certain objects, such as a texture, without having to know about which 3d models use it. To use Calvin in your own applications, it requires three steps. First you must outfit your classes so that they may be persisted. Second you must name your instances when created. Last, write the code to load and save your objects in the appropriate places in your application. Let's look at how these are done in that order. As nice as it would be to gain persistence with no additional code, C++ just doesn't have the facilities necessary to do it. Some scaffolding is necessary to achieve persistence. The first question to ask yourself is, if the class must be made persistent in the first place? While lightweight, there is still some overhead involved. If yes, then will it need its own name? If it is an object potentially referenced by more than one object, then yes, it will need a name. Otherwise, it may be persisted inside a containing object (as Figure 2 above). But what type to use for naming? std::string is my own preferred type, but by the virtue of template parameters, a name may be any value type that may be converted to a string via the << operator. Integers meet these requirements and only add 4 bytes of overhead. And if you keep your integer names as an enum within a single .h file, it is very quick and convenient too. (The test/example program in the accompanying source code gives an example of strings and ints as keys.) std::string << enum string int If you're satisfied with your answers to these questions, then comes the relatively painless part of making the class potentially persistent. // an example persistent class // 1. include calvin.h #include "calvin.h" // 2. Inherit from calvin::persistent<key> class A : public calvin::persistent<std::string> { int a; float a2; double a3; struct Aa { int a4; int a5; Aa(void) : a4(4), a5(5) {} }; Aa a6; public: // 3. Default constructor and constructor with the parameter of the key A( void ) : a(1), a2(2.0f), a3(3.0), a6(Aa()) { } A( const std::string& name ) : persistent<std::string>(name), a(1), a2(2.0f), a3(3.0), a6(Aa()) { } virtual ~A( void ) { } protected: // 4. serialize method (used for both reading and writing) template <typename Stream> Stream& serialize( Stream& s, const unsigned int version ) { return s ^ a ^ a2 ^ a3 ^ a6.a4 ^ a6.a5; } private: // 5. friendship of allow_persistence friend calvin::allow_persistence<A>; // 6. version of the class static const int version_ = 1; }; As you can see, making a class persistent is simple. Five alterations and your class is ready to be saved to or loaded from an archive (which we will discuss further below). archive calvin::persistent _name void serialize ^ There are some additional features of the library that can be used in place of the steps above, usually to solve specific problems that the more general features might not allow. It is a matter of convenience that the library allows you to use a single method for saving and loading objects. Sometimes this is not practical or even possible. In this case, the serialize method may be split into two as below: std::ostream& serialize( std::ostream&, unsigned int ); std::istream& serialize( std::istream&, unsigned int ); Though it probably goes without saying, the method taking a std::ostream& as its parameter is the one called for saving, and the method taking a std::istream& parameter is the one used for loading. Also note the lack of the template parameter. Each may perform the operations necessary to save or restore the object. These methods should use the ^ operator as used in the generic method example rather than the normal >> and << operators. std::ostream& std::istream& >> To demonstrate the other features, let's examine a class that builds on the class A above. A struct made_of_prims { int i; float f; double d; }; // 1. An unnamed yet persisted class class persistent_void_test : public calvin::persistent<void> { std::string msg; friend struct calvin::allow_persistence<void>; public: persistent_void_test( void ) : msg( "I'm a calvin::persistent<void>" ) {} template<typename Stream> Stream& serialize( Stream& s, unsigned int version ) { return s ^ msg; } }; // 2. Subclass of a persisted class class B : public A { public: B( void ) : A(), b1(0), b2(1), stupid(NULL) {} B( const std::string& name ) : A(name), b1(0), b2(1), stupid(NULL) {} B( const std::string& name, const char* stupid ) : A(name), b1(0), b2(1), stupid(stupid) { return; } virtual ~B( void ) {} void add( made_of_prims& p) { vec_of_prims.push_back( p ); } private: int b1; unsigned int b2; std::vector<made_of_prims> vec_of_prims; persistent_void_test sm; const char* stupid; template <typename Stream> Stream& serialize( Stream& s, const unsigned int version ) { // 3. call base class serialize method directly return A::serialize( s, version ) ^ b1 ^ b2 ^ // 4. STL containers supported directly vec_of_prims ^ sm ^ // 5. PtrArray used to persist arrays PtrArray<const char>( stupid, (stupid == NULL) ? 0 : (unsigned int) strlen(stupid)+1 /*null terminator*/); } // 6. Friendship still needs to be granted even in subclasses friend calvin::allow_persistence<B>; }; The above example highlights some additional features and requirements of the library when making your class persistent. calvin::persistent<void> vectors lists deques PtrArray vector boost::shared_ptr boost::shared_array The examples above demonstrate how most types of data are supported, but here is a more thorough reference to how Calvin handles members of different data types in a serialize method. new T[#] Congratulations on having made it this far. Enabling classes for persistence is the most complicated part of using the Calvin library. Just persist a little more. Instances to be persisted need to be named. Naming your instances is as straightforward as the snippet at the beginning of the article shows. Simply invoke the object's constructor that takes a name when the object is created, or use the set_name function later to give it a name. That's it. set_name What are legal names for instances? As mentioned above, it can be any value type that can be converted to a string via the << operator. Also, the name must be compatible with the archive selected. See the documentation for an archive type for what is considered legal. Names must be unique for each instance. They distinguish instances within the store and the program. As persistent objects are created or loaded, their names are added to a registry. When a second attempt to load an object by its name is attempted, the original instance is returned instead of a new instance. Calvin reports an error when an object is created with a name already used (see Error Handling in Calvin below). To make unique names to instances possible, Calvin works with boost::shared_ptr exclusively. This way, named objects have an automatic tracking mechanism so that when they are deleted, Calvin can know to load one at the next request. Why doesn't Calvin just keep a permanent reference? Memory issues mostly, but I think that a library shouldn't do things such as that behind the curtains when the facility of another excellent library can already do it with minimal fuss. Where do these objects go when they are persisted? They go into archives. Archives are collections of named objects. Archives can conceivably be the front end for any type of store such as files or a database. For now, the only implemented archive is the file_archive. file_archive has a template parameter that must match the key type of the objects to be written and read using it. A convenient filesys_archive is declared as a file_archive<string>. file_archive filesys_archive file_archive<string> As its name implies, the filesys_archive simply stores objects in files named the names given to them in the program. For this reason, when using a filesys_archive, names must consist of only valid filename characters, which depends on your particular operating system. A good rule of thumb is to stick with alphanumeric, '.', and '_', if using. To use a filesys_archive (or file_archive), include fs_archive.h in the appropriate .cpp file and declare your archive with a single string parameter representing the root directory, as so. filesys_archive ar( "../data" ); Initiating a save requires a boost::shared_ptr to a named object. boost::shared_ptr<MyPersistentClass> p( "p" ); // "p" is the name of p // ... do something here with p ... ar.save( p ); In the above you should see a file named "p" in the directory "data". To use the object later, an archive loads an object given a name and returns a boost::shared_ptr to the object. boost::shared_ptr<MyPersistentClass> p; p = ar.load( "p" ); Exceptions or error codes? That seems to be question. Exceptions are the standard error reporting mechanism in C++, but there are many valid reasons for avoiding them. In this regard, I took the route that the Boost libraries did and give the programmer an option. Depending on the value of the NO_CALVIN_EXCEPTIONS macro at compile time, either an exception (of type calvin_exception) is thrown, or the function calvin::throw_exception is called. If the function option is chosen, then the programmer must define a function by that name to handle any errors. The function should take a single parameter of type calvin_exception. NO_CALVIN_EXCEPTIONS calvin_exception calvin::throw_exception calvin_exception is a subclass of std::exception, and uses the what parameter to store the cause. Call calvin_exception.what() to see the error string. std::exception what calvin_exception.what() That's it for using Calvin. I've been using Calvin in my own applications for several months now with nary a problem. Then again, I know it and tend to perhaps skirt its warts unintentionally. It should be considered 0.1 software. I would welcome any bug fixes. Calvin isn't done yet though. Current plans to extend it are to include a ZIP archive and an XML archive. The next article will be a tutorial on creating new archives by subclassing stream buffers. If you'd like a deeper explanation on the inner workings of Calvin, e-mail me and I might write up another article talking about the template type matching that makes Calvin work. In the meantime, peruse the source code. It's compact, simple, and pretty straightforward. I've included it here and written this article hopefully so that you can use it and extend it to suit your needs. The example/test program included with this article requires that Boost version 1.31 or later be installed in the calvin subdirectory. Also, the Boost file system library should be built and available to the test program for linking. Calvin is copyright by myself, Jay Kint, and licensed for use under the MS-PL license. It should be considered "as-is" and no warranty is intended or implied. The author would like to thank John Olsen for his contributions and feedback on this article. 1A value type is one that isn't a pointer or a reference. If you can assign from one variable to a second variable, assign something else to the first variable, and the second variable hasn't changed, then those variables types are considered value types. For example: // value type pseudo code T x, y; x = something; y = x; x = something_else; print x; print y; If T is a value type, then x should print something_else and y should print something. If x and y print something_else, then T is not a value type. T x something_else y something Specifically the serialization library, type traits, shared pointers, and file system libraries. Lots of other good things are there as well. If you don't know the boost libraries, check them out. Great little persistence library, which was my first choice, until I started running into the limitations that prompted Calvin. Every persistence library I've run across has mentioned this article as a source. Worth the read just for an idea of what should be in a persistence library whether you're a user or designer. First draft posted for review. Second draft posted for review. Article submitted to CodeProject. This article, along with any associated source code and files, is licensed under The Microsoft Public License (Ms-PL) General News Suggestion Question Bug Answer Joke Rant Admin Math Primers for Programmers
http://www.codeproject.com/Articles/9529/Persistence-is-the-Key?fid=151484&df=90&mpp=10&sort=Position&spc=None&select=1039294&tid=3090715
CC-MAIN-2013-20
refinedweb
2,800
64.81
Python Programming/Extending with ctypes ctypes[1] is a foreign function interface module for Python (included with Python 2.5 and above), which allows you to load in dynamic libraries and call C functions. This is not technically extending Python, but it serves one of the primary reasons for extending Python: to interface with external C code. Basics[edit] A library is loaded using the ctypes.CDLL function. After you load the library, the functions inside the library are already usable as regular Python calls. For example, if we wanted to forego the standard Python print statement and use the standard C library function, printf, you would use this: from ctypes import * libName = 'libc.so' # If you're on a UNIX-based system libName = 'msvcrt.dll' # If you're on Windows libc = CDLL(libName) libc.printf("Hello, World!\n") Of course, you must use the libName line that matches your operating system, and delete the other. If all goes well, you should see the infamous Hello World string at your console. Getting Return Values[edit] ctypes assumes, by default, that any given function's return type is a signed integer of native size. Sometimes you don't want the function to return anything, and other times, you want the function to return other types. Every ctypes function has an attribute called restype. When you assign a ctypes class to restype, it automatically casts the function's return value to that type.
https://en.wikibooks.org/wiki/Python_Programming/Extending_with_ctypes
CC-MAIN-2018-34
refinedweb
241
63.8
Hey people, I'm a newbie at C++ and am taking things nice and slowly for the moment. I've created my first C++ program "Hello World". It compiles correctly but when i run the program all it does is pop up and disappear in less than a second. The code is as follows: /**************************************************/ #include <IOSTREAM> int main () { std::cout << "Hello World!\n"; // Displays Hello World return 0; } /**************************************************/ I can't figure out what's going wrong. I'm using Dev C++ to create my programs. Can anyone give me any pointers as to the problem? Could it be to do with the enviroment or compiler options?
https://cboard.cprogramming.com/cplusplus-programming/57486-console-problems.html
CC-MAIN-2017-51
refinedweb
107
75.71
If you have heard or used DotNetNuke and built some modules to it, you might be aware of the fact that as in web development, usually, building small and quick functionalities into a website usually requires a ton of coding with quirky (and usually pricy) IDEs. Just came up with an idea while thinking of building another module for creating macros inside DotNetNuke: hey, why don't we all use scripting? Ok, nowadays everything is scripted - even parts of Operating Systems are scripted. As the speed and memory of computers keep getting bigger, it actually doesn't matter that much since with different kinds of scripting languages, we get easy and yet powerful access to various down-level functionalities that would otherwise require rather extensive knowledge on various technologies. With this in mind, I didn't think it would be a bad idea to run an interpreted language inside another interpreted language. You only need .NET 2.0 and DotNetNuke 4.x running on the website - .NET 2.0 is required by IronPython and thus the DotNetNuke 4.x. I'd love to see this working in .NET 1.1 and DotNetNuke 3.x, but unfortunately, IronPython relies heavily on Generics and can't be translated easily to .NET 1.1. The basic idea is that we have the DotNetNuke portal installed where we run various modules that are actually quite independent pieces of software that rely on the DotNetNuke core framework. Also, the IronPython module (still a working name) is a normal module that actually embeds the IronPython running engine inside the module that runs inside the DotNetNuke ASP.NET application, which runs inside IIS's worker process. This allows developers to quickly run small (or even larger) Python scripts inside the module while still having total control over the current module's functions, DotNetNuke portal's or host's functions, and even the ASP.NET application's functions. This is done by providing the following modules, objects, and namespaces by default after initializing the actual Python engine: Request Server Response Web.HttpContext.Current.CurrentHandler Handler.Page System.Web Web System.Reflection.Assembly.GetCallingAssembly Then a few objects that are more interesting: PlaceHolder These are the basic modules and objects that are set up for developers to tweak around to see how things work. I'm not that good yet with Python, so I just threw in a few small code samples that will give you a raw idea of what we are talking about. Here's one small script that allows you to list all the users in DotNetNuke: uc = DotNetNuke.Entities.Users.UserController() users = uc.GetUsers(False,False) for n in users : print n.FullName + " = " + n.Username print "\nUsers in system : " print users.Count And here's another that lists all the pages (including the Admin and Host pages): tc = DotNetNuke.Entities.Tabs.TabController() tabs = tc.GetAllTabs() for t in tabs : print t.TabName Get the idea? You can also write directly to the response stream with the following statement: App.Response.Write("Hello World") The following redirects the user's browser: App.Response.Redirect("") With the current version of the module, you have two options: you can either work around with the IronPython Console, or you can set up a script that is run every time the module is loaded. This means that you can restrict the module's running rights with normal DotNetNuke role based security settings. At the moment, writing out the Python script is not as easy as it should be, but probably, future versions will have a better working editor. At the moment, you can access the simple editor by selecting "Settings" from the module's dropdown menu. From there, you should see "IronPython Settings" in the bottom of the Settings section. After opening it, you will see a simple textbox that allows you to write your own code. For starters, put a script like this there: cal = Web.UI.WebControls.Calendar() IronCanvas.Controls.Add(cal) Don't check "Keep IronPython engine in user session..." since this seems to have funky results at the moment. After clicking "Update", you should see a simple ASP.NET calendar control inside the module. Reading the small code should give you the hang of it. Note! At the moment, there is no solution to handle the events inside ASP.NET or UserControls, so please check for further betas in case you find this interesting. There is a "pyevent" module available for you to import, but while writing this, I haven't got the idea of how to create a function that responds to the events of ASP.NET controls - this might be because of the beta (6) phase of IronPython or the actual fact that IronPython is not designed for environments like ASP.NET. pyevent Naturally, you can also put to script input one (or both) of the previous scripts - just to see how they work. Maybe this is enough this time - give me feedback, ideas, and correct me if something here is badly wrong. Hopefully, once our module's version 1.0 is released, IronPython would be in its final release as well so that we can get all the cool modules that can be found in "legacy".
http://www.codeproject.com/Articles/13959/Running-IronPython-in-DotNetNuke?msg=2593561
CC-MAIN-2014-41
refinedweb
873
63.9
exit, _exit - terminate process #include <stdlib.h> void exit(int status); #include <unistd.h> void _exit(int status); The exit() function first calls all functions registered by atexit(3C), in the reverse order of their registration. Each function is called as many times as it was registered.(3C). The _exit() and exit() functions terminate the calling pro- cess with the following consequences: o All of the file descriptors, directory streams, conversion descriptors and message catalogue descrip- tors open in the calling process are closed. o If the parent process of the calling process is exe- cuting a wait(2), wait3(3C), waitid(2) or waitpid(2), subse- quently executes wait(2), wait3(3C), waitid(2) or waitpid(2). o If the parent process of the calling process is not executing a wait(2), wait3(3C), waitid(2) or wait- pid(2),(2), wait3(3C), waitid(2) or waitpid(2). A zombie process only occupies a slot in the process table; it has no other space allocated either in user or kernel space. The process table slot that it occupies is partially over- laid with time accounting information (see <sys/proc.h>) to be used by the times(2) function. o Termination of a process does not directly terminate its children. The sending of a SIGHUP signal as described below indirectly terminates children in some circumstances. o A SIGCHLD will be sent to the parent process. o The parent process ID of all of the calling process's existing child processes and zombie processes is set to 1. That is, these processes are inherited by the initialization process (see intro(2)). o Each mapped memory object is unmapped. o Each attached shared-memory segment is detached and the value of shm_nattch (see shmget(2)) in the data structure associated with its shared memory ID is decremented by 1. o For each semaphore for which the calling process has set a semadj value (see semop(2)), that value is added to the semval of the specified semaphore. o If the process is a controlling process, the SIGHUP signal will be sent to each process in the foreground process group of the controlling terminal belonging to the calling process. o If the process is a controlling process, the control- ling terminal associated with the session is disasso- ciated from the session, allowing it to be acquired by a new controlling process. o If the exit of the process causes a process group to become orphaned, and if any member of the newly- orphaned process group is stopped, then a SIGHUP sig- nal followed by a SIGCONT signal will be sent to each process in the newly-orphaned process group. o If the parent process has set its SA_NOCLDWAIT flag, or set SIGCHLD to SIG_IGN, the status will be dis- carded, and the lifetime of the calling process will end immediately. o If the process has process, text or data locks, an UNLOCK is performed (see plock(3C) and memcntl(2)).. o An accounting record is written on the accounting file if the system's accounting routine is enabled (see acct(2)). o An extended accounting record is written to the extended process accounting file if the system's extended process accounting facility is enabled (see acctadm(1M)). o attri- butes: ____________________________________________________________ | ATTRIBUTE TYPE | ATTRIBUTE VALUE | |_____________________________|_____________________________| | MT-Level | _exit() is Async-Signal Safe| |_____________________________|_____________________________| acctadm(1M), intro(2), acct(2), close(2), memcntl(2), semop(2), shmget(2), sigaction (2), times(2), wait(2), waitid(2), waitpid(2), atexit(3C), fclose(3C), mq_close(3RT), plock(3C), signal(3HEAD), tmpfile(3C), wait3(3C), attributes(5)
http://man.eitan.ac.il/cgi-bin/man.cgi?section=2&topic=_exit
CC-MAIN-2020-24
refinedweb
603
58.72
Class for synchronization of transactions within multiple documents. Each transaction of this class involvess one transaction in each modified document. More... #include <TDocStd_MultiTransactionManager.hxx> Class for synchronization of transactions within multiple documents. Each transaction of this class involvess one transaction in each modified document. The documents to be synchronized should be added explicitly to the manager; then its interface is used to ensure that all transactions (Open/Commit, Undo/Redo) are performed synchronously in all managed documents. The current implementation does not support nested transactions on multitransaction manager level. It only sets the flag enabling or disabling nested transactions in all its documents, so that a nested transaction can be opened for each particular document with TDocStd_Document class interface. NOTE: When you invoke CommitTransaction of multi transaction manager, all nested transaction of its documents will be closed (committed). Constructor. Unsets the flag of started manager transaction and aborts transaction in each document. Adds the document to the transaction manager and checks if it has been already added. Clears redos in the manager and in documents. Clears undos in the manager and in documents. Commits transaction in all documents and fills the transaction manager with the documents that have been changed during the transaction. Returns True if new data has been added to myUndos. NOTE: All nested transactions in the documents will be committed. Makes the same steps as the previous function but defines the name for transaction. Returns True if new data has been added to myUndos. Returns the added documents to the transaction manager. Dumps transactions in undos and redos. Returns available manager redos. Returns available manager undos. Returns undo limit for the manager. Returns true if a transaction is opened. Returns Standard_True if NestedTransaction mode is set. Methods for protection of changes outside transactions. Returns True if changes are allowed only inside transactions. Opens transaction in each document and sets the flag that transaction is opened. If there are already opened transactions in the documents, these transactions will be aborted before opening new ones. Redoes the current transaction of the application. It calls the Redo () method of the document being on top of the manager list of redos (list.First()) and moves the list item to the top of the list of manager undos (list.Prepend(item)). Removes the document from the transaction manager. Removes undo information from the list of undos of the manager and all documents which have been modified during the transaction. If theTransactionOnly is True, denies all changes outside transactions. Sets nested transaction mode if isAllowed == Standard_True NOTE: field myIsNestedTransactionMode exists only for synchronization between several documents and has no effect on transactions of multitransaction manager. Sets undo limit for the manager and all documents. Undoes the current transaction of the manager. It calls the Undo () method of the document being on top of the manager list of undos (list.First()) and moves the list item to the top of the list of manager redos (list.Prepend(item)).
https://dev.opencascade.org/doc/refman/html/class_t_doc_std___multi_transaction_manager.html
CC-MAIN-2022-27
refinedweb
494
50.43
/socket/websocket"); // equivalent to passing connectionProvider: PhoenixIoConnection.provider // you can also pass params on connection if you for example want to authenticate using a user token like final socket = PhoenixSocket("ws://localhost:4000/socket/websocket", socketOptions: PhoenixSocketOptions(params: {"user_token": 'user token here'}, )); Options that can be passed on connection include :- - timeout - How long to wait for a response in miliseconds. Default 10000 - heartbeatIntervalMs - How many milliseconds between heartbeats. Default 30000 - reconnectAfterMs - Optional list of milliseconds between reconnect attempts - params - Parameters sent to your Phoenix backend on connection. Import & Connection (HTML) import 'package:phoenix_wings/html.dart'; final socket = new PhoenixSocket("ws://localhost:4000/socket/websocket", connectionProvider: PhoenixHtmlConnection.provider); Common Usage await socket.connect(); final chatChannel = socket.channel("room:chat", {"id": "myId"}); chatChannel.on("user_entered", PhoenixMessageCallback (Map payload, String _ref, String, _joinRef) { print(payload); }); chatChannel.join(); Examples Mobile - when running the flutter example in your emulator, with the server also running in the same computer host, remember that the emulator is running in a segregated VM, so you need to configure it to point your server that is running on the host machine. # check your IP configuration $ ifconfig enp0s20u5c4i2: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500 inet 10.0.0.2 netmask 255.255.255.240 broadcast 172.20.10.15 After checking your IP, go to your flutter Settings -> Proxy, and add the proxy host configuration with your IP, and port where your phoenix server with the websockets is listening. Configure your flutter app to point to your phoenix websocket server. final socket = PhoenixSocket("ws://10.0.0.2:4000/socket/websocket"); See here for an illustrated example. Server - phoenix server with a channel that will communicate with the flutter app above. Console - if you want to debug the websockets direclty, without phoenix_wings, using the phoenix protocol. See here for more info about the json protocol. You will have a lot of fun, connecting, and seeing the loop in this console app sending messages to your flutter app. To run, simply: dart console.dart Testing Most of the tests are run on the VM. However, the PhoenixHtmlConnection tests must run in a browser. By default tests will run on VM, Chrome and Firefox. This is set in dart_test.yaml Tests are run via pub run test
https://pub.dev/documentation/phoenix_wings/latest/
CC-MAIN-2021-39
refinedweb
376
51.14
STL enjoys speaking in the third person and also enjoys bringing you this exclusive news: Visual Studio 2008 Service Pack 1 (VC9 SP1) contains the TR1 and MFC library extensions that were originally released in the Visual C++ 2008 Feature Pack Refresh. But wait! VC9 SP1 also contains many delicious fixes for TR1 and MFC. (For TR1, "many" is 16; for MFC, "many" is 60 or more!) As the current maintainer of the C++ Standard Library and TR1 implementations within Microsoft, STL has compiled an exhaustive list of the TR1 fixes in VC9 SP1. But first! Shout-outs must be sent to P.J. Plauger and Christopher J. Walker of Dinkumware, who tirelessly worked to implement most of these fixes, and Microsoft's compiler front-end ninja Jonathan Caves, who fixed nasty compiler bugs that were exposed by TR1. The three most significant fixes are: * A massive performance improvement in regex matching. * An across-the-board performance improvement in STL containers of TR1 objects (e.g. vector<shared_ptr<T> >). * A fix allowing tr1::function to store function objects with non-const function call operators. This is the exhaustive list: 1. The STL algorithm search_n() no longer spuriously triggers _HAS_ITERATOR_DEBUGGING assertions. (search_n() isn't part of TR1, but this is why it's being mentioned here: A severe search_n() bug, a regression from VC8 SP1 to VC9 RTM where the predicate version attempted to use operator==(), was fixed in the VC9 Feature Pack Refresh. However, the fix contained this less severe flaw, which was noticed in time to be truly fixed in VC9 SP1.) 2. The random distribution uniform_int<unsigned long long> no longer infinitely loops. 3. The <random> header has been overhauled, incorporating many C++0x improvements. 4. The copy constructors of the pseudorandom number generators now behave correctly. (This was a subtle mistake where a template constructor provided a better match than a copy constructor during overload resolution.) 5. enable_shared_from_this's copy constructor and copy assignment operator have been corrected. (This affected classes deriving from enable_shared_from_this, and did not affect other uses of shared_ptr.) 6. shared_ptr<const T> can now be constructed from const T *. (This is relatively unusual.) 7. tr1::function can now store function objects with non-const function call operators. (This was a severe problem.) 8. The performance of regex matching has been massively improved. (In general, TR1 Regex is as fast or faster than Boost.Regex 1.35.0. TR1 Regex is still slower than Boost for some regexes, such as those dominated by alternations like "cute|fluffy|kittens", but their performance has also improved significantly compared to the Feature Pack Refresh. Further performance improvements are being investigated for VC10.) 9. The performance of unordered_set (etc.) and hash_set (etc.) has been significantly improved. (tr1::unordered_set and stdext::hash_set share the same implementation. erase() still presents performance problems, which are being investigated for VC10.) 10. result_of now accepts references to functions. 11. is_function and is_member_function_pointer now work correctly with variadic arguments. 12. is_polymorphic now works correctly. (It previously gave bogus answers for classes like std::iostream. This was a compiler bug fixed by Jonathan Caves.) 13. The <memory> header's declarations of the _InterlockedIncrement (etc.) intrinsics can now be suppressed by defining _DO_NOT_DECLARE_INTERLOCKED_INTRINSICS_IN_MEMORY . This is for compatibility with <intrin.h> and <winbase.h>, which contain conflicting declarations for certain configurations of certain platforms. (This workaround has been eliminated in VC10, which contains a comprehensive fix.) 14. mem_fn() now works with abstract base classes. (This surprising bug was caused by library and compiler bugs, both fixed. Other code may benefit from this compiler fix.) 15. is_pod and has_trivial_constructor now work correctly. (They previously gave bogus answers for certain uncommon classes.) 16. The Swaptimization (previously mentioned in this VCBlog post) is now actually used for TR1 objects. #16 deserves some explanation. STL containers (vector, deque, list, set, multiset, map, multimap) are "fast-swappable"; x.swap(y) and swap(x, y) are constant-time, nofail, non-iterator-invalidating. (You might remember that VC8's Swap Bug broke that last, very important part.) std::string swap() is also constant-time and nofail, although it can invalidate iterators due to the small string optimization. Constant-time means that swapping STL containers is implemented by swapping their guts (their pointers, whether to a vector's block of memory, a list's doubly-linked nodes, or a map's binary tree nodes). That's far superior to all of the element-copying that "Container temp = a; a = b; b = temp;" would involve. Now, when a vector undergoes reallocation, it has to copy its current elements from the old memory block to the new memory block. When you have a vector of STL containers (vector<vector<int> >, vector<set<int> >, etc.), that would be slow, copying the sub-containers and doing lots of dynamic memory allocation and deallocation. The Swaptimization, which was present in VC8 (possibly earlier, but STL hasn't checked), is a bit of template magic whereby STL containers are marked as having fast swaps. When a vector<T> undergoes reallocation, it detects whether the T has a fast swap, and if so, will swap elements from the old memory block to the new memory block. This is super fast! (Why doesn't it always swap? For vector<int>, simply copying ints from the old memory block to the new memory block is faster - nothing needs to be swapped back into the old memory block.) Many TR1 objects, such as unordered_set and match_results and shared_ptr, also have fast swaps. This is because they're either containers (like unordered_set) or they wrap containers (like match_results), or they're not containers but they have the same pointers-to-stuff structure (like shared_ptr). So, Dinkumware and STL worked really hard to give all TR1 objects fast swaps, and mark them appropriately to be picked up by the Swaptimization. Unfortunately, this was simply broken in the Feature Pack Refresh! The problem was subtle. The STL provides the swap() free function in namespace std, which performs the three-step dance that works for anything copyable and assignable (but might be slow). The idea is that users with fast-swappable classes, in addition to providing member swap(), can fully specialize swap() in namespace std for their classes. (Generally, users *aren't* supposed to add anything to namespace std. However, Standard templates may be fully or partially specialized for user classes; this is paragraph 17.4.3.1/1 of the Standard.) This works well enough. However, users with fast-swappable class *templates* have to do something different. A not-commonly-understood fact about C++ is that there are no such things as partial specializations of function templates. Anything that looks like a partial specialization is actually an overload. (Only classes can be partially specialized.) In practice, overloads behave similarly enough to how users think partial specializations of function templates would work, and everyone gets along happily. But this means that you can't add overloads of swap() to namespace std, even if you think they look like those mythical partial specializations. So, if you have a fast-swappable class template, you need to provide a swap() free function in the same namespace as your class template, to be found through Argument-Dependent Lookup (ADL, formerly and less descriptively called Koenig Lookup). (No, this isn't really ideal. Swapping is such a fundamental operation that it really ought to be recognized in the Core Language like copying - but it's far too late to change that.) So, TR1, which is almost but not quite part of the Standard, came along and provided a bunch of fast-swappable stuff in namespace std::tr1, and defined overloads of swap(), again within std::tr1. The TR1 classes were fast-swappable, annotated as such, and detected as such by the existing Swaptimization machinery within vector and a few other places. So why didn't it work? It turned out that the Swaptimization was being performed with a call to qualified std::swap(). Most people don't do C++ name lookup in their heads for fun, but the important thing to know is this: Qualified Name Lookup disables ADL. Uh oh! So, the general implementation of std::swap() (doing the slow three-step dance of copying) was chosen, instead of the specific implementations of std::tr1::swap() for shared_ptr<T>, unordered_set<T>, and so forth. Disaster. STL was deeply mortified. (Why was a qualified call being used? Another name lookup subtlety - unqualified calls like swap() activate both Unqualified Name Lookup (the usual, which everyone is familiar with) and ADL (which everyone really should be familiar with). Ordinarily, the union of the sets of functions they find is then used for overload resolution (picking what actually gets called). But if Unqualified Name Lookup finds a member function, ADL is bypassed; this is paragraph 3.4.2/2a of the Standard. So, within a class that defines a member swap(), calling unqualified swap() doesn't activate ADL, nor does it even find std::swap() - it finds the member swap(). Thus, qualified calls became conventional within VC's Standard Library implementation.) The fix was to define a wrapper, std::_Swap_adl(), that calls unqualified swap(), activating ADL properly. The STL now calls std::_Swap_adl() whenever ADL is desired. (It continues to call std::swap() whenever ADL is unnecessary, such as when swapping builtins). As with all _Leading_underscore_capital names, users shouldn't call std::_Swap_adl() themselves (it may change or disappear in future versions). You can perform the exact same trick by defining your own wrapper in your own namespace of choice, with the wrapper containing a "using namespace std;" and an unqualified call to swap(). (If you look at std::_Swap_adl()'s implementation, it lacks a using-directive - it already lives in namespace std, unlike anything you can define.) The end result is that when a vector<shared_ptr<T> > undergoes reallocation, absolutely no reference counts are incremented or decremented - which is a significant performance win. Woot! Note that none of these fixes were in the SP1 Beta, which branched for release as the Feature Pack was being finished. Stephan T. Lavavej Visual C++ Libraries Developer(b); } } // Specific to VC9 SP1 and above. // This machinery was broken for classes outside namespace std in VC9 RTM and below. // This machinery will not be present in VC10 and above. #if defined(_MSC_VER) && _MSC_VER == 1500 && _MSC_FULL_VER >= 150030729 #include <xutility> namespace std { template <> class _Move_operation_category<feline::kitty> { public: typedef _Swap_move_tag _Move_cat; }; } #endif int main() { vector<feline::kitty> v; cout << "*** Constructing cat." << endl; feline::kitty cat; cout << "*** Pushing back cat #1." << endl; v.push_back(cat); cout << "*** Pushing back cat #2." << endl; cout << "*** Destroying cat and v." << endl; C:\Temp>cl /EHsc /nologo /W4 meow.cpp meow.cpp C:\Temp>meow *** Constructing cat. default ctor *** Pushing back cat #1. copy ctor dtor *** Pushing back cat #2. swap *** Destroying cat and v. Note that the Swaptimization requires a default ctor. Thank you. Hopefully, VC10 will see C++0x finalized and implemented, with its move constructors and all, so we won't need this hack by that; fxn& operator=(const fxn& other) { m_f = other.m_f; std::tr1::function<FT> m_f; int add(int x, int y) { return x + y; int mult(int x, int y, int z) { return x * y * z; ; With VC9 RTM: _MSC_FULL_VER: 150021022 _MSC_BUILD: 8 This is VC9 RTM or above. With VC9 SP1: _MSC_FULL_VER: 150030729 _MSC_BUILD: 1\include"\include"C\include\array. it: #include <xutility> namespace std { template <> class _Move_operation_category<feline::kitty> { public: typedef _Swap_move_tag _Move_cat; }; }
http://blogs.msdn.com/vcblog/archive/2008/08/11/tr1-fixes-in-vc9-sp1.aspx
crawl-002
refinedweb
1,913
56.96
React Hook useState() React Hooks is actually a collection of functions by React. We can use functional components therefore we can convert the component to functional component. You can use React hooks feature Because React 16.8 or higher projects now are able to use hooks. Let’s have a closer look at the following example to understand the useState hooks better. Example In our example, We just need to create a constant name app and this will be a function that gets props and then handle a function body in Person.js. This is a normal functional component but now you can use this React Hook feature or one of the React Hooks like in the],otherState: ' some other value' //The second element is array function allow us to update this state //such that React is aware of it and will re-render this. }); Importing { useState } from React Library We don’t need to import {component} from React library anymore. Because we don’t extend with class methods, instead of import React, {Component} we can use {useState}. useState is the most important React hook. It’s allow us to manage state in a functional component. you can call like normal function and you can just pass your previous initial state in the example above, we don’t need render method just return JSX useState returns an array with exactly two elements and always two elements. First element we get back will always be our current state (useState({})-> Initially an object and whenever we change it or update it. Second element in state array will be always be a function that allows us to update this state such that React is aware of it and will re-render this component in line. We can also use modern JS features called array destructuring which works. This allows you to pull element out of the array you can get back from right side of the equal sign of this function call as following. (const [] = useState() First element you can give const [] element any name if you want const. [personsState] Second element will be function that allows you to set that state, so i will name it setPersonsState Person State will give you access to person object and personsState will replace as following. {this.state.persons[1].name}//instead {personsState.persons[1].name} //We can access person data and setPersonsState allows us to set a new state and thus we can re-add that switchHandler to our component. Question How do we add a method to the functional component? You can have functions inside the functions. We can add a new This is perfectly fine for JS code. You can have function definition inside of function In JS & React. It looks strange that you don’t use that so often. But React hooks the way you use is fine like this. Now you have a functional component that can manage state and that can have other functions does the something like onClick, Which we previously called this.switchHandler, this keyword is no longer available so we are not inside of class anymore. We can just pass as following. onClick={switchHandler} instead onClick={this.switchHandler} referring to content here const switchHandler = () => { No parentheses because we don’t wanna execute it immediately. Only this execute the function , when you are using React Hooks the function here as following. const [personsState, setPersonState] Second element is array (setPersonState) does not merge whatever you pass in old state. (setPersonsState) instead its replaces to old state. This is very important. When you ever update a state like this, you have to manually make sure you include all old state data. This is a normal functional component that We don’t need this.setState anymore because this function doesn’t exist anymore we passed instead setPersonState. Now we just [setPersonState] and we passed our new state(line 50-57)],otherState: ' some other value' //The second element is array function allow us to update this state});const [personsState, setPersonState] = useState({ persons: [ {name: "Ali Bener", age: 43 }, {name: "Arslan Eksi", age: 36 }, {name: "Levent Mergen", age: 74 } ],});const [otherState, setPersonsState] = useState ('some other value') console.log(personsState, otherState)const switchHandler = () => { setPersonState({ persons : [ {name: "Erkan Togan", age: 49 }, {name: "Ceren Ates", age: 40 }, {name: "Nevriye Yilmaz", age: 40 }],otherState : personsState.otherState });};return ( <div className="App"> <h1> LEGENDS LIST </h1> <p>This is really working </p> <button onClick={switchHandler}> Switch Name</button> <Person name={personsState[0].name} age={personsState[0].age} /> <Person name={personsState[1].name} age={personsState[1].age}> My Hobbies: Voleyball</Person> <Person name ={personsState[2].name} age={personsState[2].age} /> </div> );}export default App; //we also made const app with lowercase Here for example manually adding the other state property and then you have to access the person’s state here. This will just make sure that we copy in the old and untouched other state you can also change this as well. This is a more elegant way to do it. Just copy the useState and pass a value inside. When we click the switched name we update persons and we still have another value because that state is not touched by our call to setPersonsState which only interacts with the first useState result. According to Reactjs.org about Hooks. Summary React hooks is all about these use something functions with useState being most important, that allows you to add functionally to functional components. As you see in the example useState allow us add state management to functional components if you are using React Hooks only you don’t’ need class based componentWe will use class base components.
https://mezgitci9.medium.com/react-hook-usestate-5b0699ac3a?responsesOpen=true&source=user_profile---------4----------------------------
CC-MAIN-2021-49
refinedweb
938
54.02
Warning: this page refers to an old version of SFML. Click here to switch to the latest version. File transfers with FTP FTP for dummies If you know what FTP is, and just want to know how to use the FTP class that SFML provides, you can skip this section. FTP (File Transfer Protocol) is a simple protocol that allows manipulation of files and directories on a remote server. The protocol consists of commands such as "create directory", "delete file", "download file", etc. You can't send FTP commands to any remote computer, it needs to have an FTP server running which can understand and execute the commands that clients send. So what can you do with FTP, and how can it be helpful to your program? Basically, with FTP you can access existing remote file systems, or even create your own. This can be useful if you want your network game to download resources (maps, images, ...) from a server, or your program to update itself automatically when it's connected to the internet. If you want to know more about the FTP protocol, the Wikipedia article provides more detailed information than this short introduction. The FTP client class The class provided by SFML is sf::Ftp (surprising, isn't it?). It's a client, which means that it can connect to an FTP server, send commands to it and upload or download files. Every function of the sf::Ftp class wraps an FTP command, and returns a standard FTP response. An FTP response contains a status code (similar to HTTP status codes but not identical), and a message that informs the user of what happened. FTP reponses are encapsulated in the sf::Ftp::Response class. #include <SFML/Network.hpp> sf::Ftp ftp; ... sf::Ftp::Response response =; // just an example, could be any function std::cout << "Response status: " << response.getStatus() << std::endl; std::cout << "Response message: " << response.getMessage() << std::endl; The status code can be used to check whether the command was successful or failed: Codes lower than 400 represent success, all others represent errors. You can use the isOk() function as a shortcut to test a status code for success. sf::Ftp::Response response =; if (response.isOk()) { // success! } else { // error... } If you don't care about the details of the response, you can check for success with even less code: if () { // success! } else { // error... } For readability, these checks won't be performed in the following examples in this tutorial. Don't forget to perform them in your code! Now that you understand how the class works, let's have a look at what it can do. Connecting to the FTP server The first thing to do is connect to an FTP server. sf::Ftp ftp;(""); The server address can be any valid sf::IpAddress: A URL, an IP address, a network name, ... The standard port for FTP is 21. If, for some reason, your server uses a different port, you can specify it as an additional argument: sf::Ftp ftp;("", 45000); You can also pass a third parameter, which is a time out value. This prevents you from having to wait forever (or at least a very long time) if the server doesn't respond. sf::Ftp ftp;("", 21, sf::seconds(5)); Once you're connected to the server, the next step is to authenticate yourself: // authenticate with name and password("username", "password"); // or login anonymously, if the server allows it; FTP commands Here is a short description of all the commands available in the sf::Ftp class. Remember one thing: All these commands are performed relative to the current working directory, exactly as if you were executing file or directory commands in a console on your operating system. Getting the current working directory: sf::Ftp::DirectoryResponse response =; if (response.isOk()) std::cout << "Current directory: " << response.getDirectory() << std::endl; sf::Ftp::DirectoryResponse is a specialized sf::Ftp::Response that also contains the requested directory. Getting the list of directories and files contained in the current directory: sf::Ftp::ListingResponse response =; if (response.isOk()) { const std::vector<std::string>& listing = response.getListing(); for (std::vector<std::string>::const_iterator it = listing.begin(); it != listing.end(); ++it) std::cout << "- " << *it << std::endl; } // you can also get the listing of a sub-directory of the current directory: response =("subfolder"); sf::Ftp::ListingResponse is a specialized sf::Ftp::Response that also contains the requested directory/file names. Changing the current directory:("path/to/new_directory"); // the given path is relative to the current directory Going to the parent directory of the current one:; Creating a new directory (as a child of the current one):("name_of_new_directory"); Deleting an existing directory:("name_of_directory_to_delete"); Renaming an existing file:("old_name.txt", "new_name.txt"); Deleting an existing file:("file_name.txt"); Downloading (receiving from the server) a file:("remote_file_name.txt", "local/destination/path", sf::Ftp::Ascii); The last argument is the transfer mode. It can be either Ascii (for text files), Ebcdic (for text files using the EBCDIC character set) or Binary (for non-text files). The Ascii and Ebcdic modes can transform the file (line endings, encoding) during the transfer to match the client environment. The Binary mode is a direct byte-for-byte transfer. Uploading (sending to the server) a file:("local_file_name.pdf", "remote/destination/path", sf::Ftp::Binary); FTP servers usually close connections that are inactive for a while. If you want to avoid being disconnected, you can send a no-op command periodically:; Disconnecting from the FTP server You can close the connection with the server at any moment with the disconnect function.;
https://en.sfml-dev.org/tutorials/2.0/network-ftp.php
CC-MAIN-2021-49
refinedweb
925
54.63
- 26 Jan 2021 07:56:30 UTC - Distribution: Language-FormulaEngine - Module version: 0.06 - Source (raw) - Browse (raw) - Changes - How to Contribute - Repository - Issues (0) - Testers (134 / 0 / 0) - KwaliteeBus factor: 1 - 80.84% Coverage - License: perl_5 - Activity24 month - Tools - Download (39.73KB) - MetaCPAN Explorer - Permissions - Permalinks - This version - Latest version++ed by:and 1 contributors Michael Conrad - NAME - VERSION - DESCRIPTION - AUTHOR NAME Language::FormulaEngine::Namespace::Default - Default spreadsheet-like set of functions and behavior VERSION version 0.06 DESCRIPTION This is a namespace containing many spreadsheet-like functions. It aims for spreadsheet similarity rather than compatibility; the goal to give users of the FormulaEngine a familiar environmet rather than to try duplicating all features and misfeatures Excel. Core Grammar Functionality These are the methods that implement the infix operators. sum( num1, num2 ... ) - negative( num1 ) - mul( num1, num2, ... ) - div( numerator, denominator ) - and( bool1, bool2, ... ) This applies perl-ish boolean semantics to each argument, and returns a numeric 0 or 1. No arguments are evaluated after the first false value. or( bool1, bool2, ... ) This applies perl-ish boolean semantics to each argument, and returns a numeric 0 or 1. No arguments are evaluated after the first true value. not( bool1 ) This applies perl-ish boolean semantics to the argument and returns numeric 1 or 0. compare( val1, op, val2, ...op, val ) This compares two or more values against the 6 canonical operators "<", "<=", ">", ">=", "==", "!="and returns 0 or 1. It uses numeric comparison if both sides of an operator looks_like_number, and uses string comparison otherwise. Utility Functions choose( offset, val1, val2, val3, ... ) Given a 1-based offset, return the value of the Nth parameter. if( condition, val_if_true, val_if_false ) If conditionis "true" (Perl interpretation) return val_if_true, else val_if_false. iferror( value_maybe_error, alternate_value ) If value_maybe_errordoes not throw an exception, return it, else return the alternate_value. ifs( condition1, value1, condition2, value2, ... ) A simplified sequence of IF functions. If condition1is true, it returns value1, else if condition2is true it returns value2, and so on. If no condition is true it dies. (use a final true condition and value to provide a default) na() Throw an NA exception. Math Functions abs( number ) Return absolute value of number acos( ratio ) Return angle in radians of the ratio adjacent/hypotenuse. acot( ratio ) Return angle in radians of the ratio adjacent/opposite. asin( ratio ) Return angle in radians of the ratio opposite/hypotenuse. atan( ratio ) Return angle in radians of the ratio opposite/adjacent. atan2( x, y ) Same as atan, but without division, so x=0 returns PI/2 instead of division error. average( num1, ... ) Return sum of numbers divided by number of arguments base( num1, radix, min_length=0 ) Return number converted to different base, with optional leading zeroes to reach min_length. ceiling( number, step=1 ) Round a number up to the next multiple of step. If step is negative, this rounds away from zero in the negative direction. cos( angle ) Cosine of anglein radians cot( ratio ) Return the angle for the triangle ratio adjacent/opposite. degrees( angle_in_radians ) Convert radians to degrees exp( power ) Return base of the natural log raised to the specified power. fact( n ) Compute factorial of n. ( 1 * 2 * 3 * ... n) floor( number, step=1 ) Round a number down to the previous multiple of step. If step is negative, this rounds toward zero in the positive direction. max( number, ... ) Return largest value in list min( number, ... ) Return smallest value in list mod( number, modulo ) Returns true modulous of a number. This uses Perl's (and math's) definition. For the Excel- compatible MOD function, see remainder. pi() Value of π radians( angle_in_degrees ) Convert degrees to radians. rand( range=1 ) Returns pseudo-random value greater or equal to 0 and less than range. This uses perl's (C's) built-in rand()function which is likely not as good as the generators used by spreadsheet programs, but I didn't want to add a hefty dependency. remainder( number, divisor ) Return the number after subtracting the biggest multiple of divisor that can be removed from it. The remainder's sign will be the same as the sign of divisor(unless remainder is zero). round( number, digits=0 ) Round NUMBER to DIGITS decimal places of precision. Uses the IEEE 5-round-to-even algorithm that C gives us. DIGITS defaults to 0, making it round to the nearest integer. Dies if you attempt to round something that isn't a number. roundup( number, digits=0 ) Like "round", but always round up. See also "ceiling". rounddown( number, digits=0 ) Like "round", but always round down. See also "floor". sign( value ) Return 1, 0, or -1 depending on the sign of value. sin( angle ) Returns ratio of opposite/adjacent for a given angle in radians. sqrt( number ) Return square root of a number. tan( angle ) Return ratio of opposite/adjacent for an angle. String Functions char( codepoint_value ) Return a unicode character. clean( string ) Returns stringafter removing all non-printable characters (defined as [:^print:]) code( string ) Opposite of "char", known as ord()in other languages. Returns the unicode codepoint number of the first character of the string. concat, concatenate( string, ... ) Returns all arguments concatenated as a string find( needle, haystack, from_offset=1 ) Return the character offset of needlefrom start of haystack, beginning the search at from_offset. All offsets are 1-based. fixed( number, decimals=2, no_commas=false ) Return the number formatted with a fixed number of decimal places. By default, it gets commas added in the USA notation, but this can be disabled. len( string ) Return number of unicode characters in STRING. lower( string ) Return lowercase version of STRING. replace( string, offset, length, new_text ) Replace text in stringwith new_text, overwriting lengthcharacters from offset. substr( string, offset, length=max ) Same as perl's builtin. trim( string ) Remove all leading and trailing whitespace and replace runs of whitespace with a single space character. upper( string ) Return uppercase version of STRING. textjoin, join( separator, string, ... ) Same as perl's builtin. DateTime Functions Date math is implemented using the DateTime module. Strings are coerced into dates using the DateTime::Format::Flexible module for any parameter where a spreadsheet function would normally expect a date value. "Since 1900" date serial numbers are not used at all. date( year, month, day ) Convert a (year,month,day) triplet into a date. datedif( start_date, end_date, unit ) Calculate difference bwteen two dates. Unit can be one of: "Y"(whole years), "M"(whole months), "D"(whole days). Dates can be parsed from any string resembling a date. datevalue( text ) Parse a date, or die trying. day( date ) Returns the day number of a date days( end_date, start_date ) Returns number of days difference between start and end date. eomonth( start_date, months ) Calculate the date of End-Of-Month at some offset from the start date. hour( date ) Return the hour field of a date. minute( date ) Return minute field of a date. month( date ) Return month field of a date. year( date ) Return the year field of a date. AUTHOR Michael Conrad <mconrad@intellitree.com> This software is copyright (c) 2021 by Michael Conrad, IntelliTree Solutions llc. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. Module Install Instructions To install Language::FormulaEngine, copy and paste the appropriate command in to your terminal. cpanm Language::FormulaEngine perl -MCPAN -e shell install Language::FormulaEngine For more information on module installation, please visit the detailed CPAN module installation guide.
https://web-stage.metacpan.org/release/NERDVANA/Language-FormulaEngine-0.06/view/lib/Language/FormulaEngine/Namespace/Default.pm
CC-MAIN-2022-40
refinedweb
1,237
51.04
Hi, I have followed online tutorials on how to implement lejos using Eclipse. I'm close to getting lejos up and running but have some errors which I cannot fix. I am just trying to run this simple code: import lejos.nxt.*; public class HelloWorld { public static void main (String[] args) { System.out.println("Hello World"); Button.waitForPress(); } } ...But am getting the following errors. program has been linked successfully uploading ... leJOS NXJ> Failed to load USB comms driver: library jfantom.dll (windows/x86_64) was not found in C:\Program Files (x86)\leJOS NXJ\lib\pc\native leJOS NXJ> Searching for any NXT using Bluetooth inquiry leJOS NXJ> Search Failed: BlueCove libraries not available leJOS NXJ> Failed to find any NXTs leJOS NXJ> Failed to connect to any NXT No NXT found - is it switched on and plugged in (for USB)? uploading the program failed with exit status 1 My NXT is turned on and connected. It looks like a driver error however I am pretty sure have installed the correct driver. Have I missed out a stage of the instillation process? Any help would be greatly appreciated. Many Thanks
http://www.lejos.org/forum/viewtopic.php?f=7&t=3006
CC-MAIN-2014-35
refinedweb
189
74.69
It commonly acknowledged here that current decision theories have deficiencies that show up in the form of various paradoxes. Since there seems to be little hope that Eliezer will publish his Timeless Decision Theory any time soon, I decided to try to synthesize some of the ideas discussed in this forum, along with a few of my own, into a coherent alternative that is hopefully not so paradox-prone. I'll start with a way of framing the question. Put yourself in the place of an AI, or more specifically, the decision algorithm of an AI. You have access to your own source code S, plus a bit string X representing all of your memories and sensory data. You have to choose an output string Y. That’s the decision. The question is, how? (The answer isn't “Run S,” because what we want to know is what S should be in the first place.) Let’s proceed by asking the question, “What are the consequences of S, on input X, returning Y as the output, instead of Z?” To begin with, we'll consider just the consequences of that choice in the realm of abstract computations (i.e. computations considered as mathematical objects rather than as implemented in physical systems). The most immediate consequence is that any program that calls S as a subroutine with X as input, will receive Y as output, instead of Z. What happens next is a bit harder to tell, but supposing that you know something about a program P that call S as a subroutine, you can further deduce the effects of choosing Y versus Z by tracing the difference between the two choices in P’s subsequent execution. We could call these the computational consequences of Y. Suppose you have preferences about the execution of a set of programs, some of which call S as a subroutine, then you can satisfy your preferences directly by choosing the output of S so that those programs will run the way you most prefer. A more general class of consequences might be called logical consequences. Consider a program P’ that doesn’t call S, but a different subroutine S’ that’s logically equivalent to S. In other words, S’ always produces the same output as S when given the same input. Due to the logical relationship between S and S’, your choice of output for S must also affect the subsequent execution of P’. Another example of a logical relationship is an S' which always returns the first bit of the output of S when given the same input, or one that returns the same output as S on some subset of inputs. In general, you can’t be certain about the consequences of a choice, because you’re not logically omniscient. How to handle logical/mathematical uncertainty is an open problem, so for now we'll just assume that you have access to a "mathematical intuition subroutine" that somehow allows you to form beliefs about the likely consequences of your choices. At this point, you might ask, “That’s well and good, but what if my preferences extend beyond abstract computations? What about consequences on the physical universe?” The answer is, we can view the physical universe as a program that runs S as a subroutine, or more generally, view it as a mathematical object which has S embedded within it. (From now on I’ll just refer to programs for simplicity, with the understanding that the subsequent discussion can be generalized to non-computable universes.) Your preferences about the physical universe can be translated into preferences about such a program P and programmed into the AI. The AI, upon receiving an input X, will look into P, determine all the instances where it calls S with input X, and choose the output that optimizes its preferences about the execution of P. If the preferences were translated faithfully, the the AI's decision should also optimize your preferences regarding the physical universe. This faithful translation is a second major open problem. What if you have some uncertainty about which program our universe corresponds to? In that case, we have to specify preferences for the entire set of programs that our universe may correspond to. If your preferences for what happens in one such program is independent of what happens in another, then we can represent them by a probability distribution on the set of programs plus a utility function on the execution of each individual program. More generally, we can always represent your preferences as a utility function on vectors of the form <E1, E2, E3, …> where E1 is an execution history of P1, E2 is an execution history of P2, and so on. These considerations lead to the following design for the decision algorithm S. S is coded with a vector <P1, P2, P3, ...> of programs that it cares about, and a utility function on vectors of the form <E1, E2, E3, …> that defines its preferences on how those programs should run. When it receives an input X, it looks inside the programs P1, P2, P3, ..., and uses its "mathematical intuition" to form a probability distribution P_Y over the set of vectors <E1, E2, E3, …> for each choice of output string Y. Finally, it outputs a string Y* that maximizes the expected utility Sum P_Y(<E1, E2, E3, …>) U(<E1, E2, E3, …>). (This specifically assumes that expected utility maximization is the right way to deal with mathematical uncertainty. Consider it a temporary placeholder until that problem is solved. Also, I'm describing the algorithm as a brute force search for simplicity. In reality, you'd probably want it to do something cleverer to find the optimal Y* more quickly.) Example 1: Counterfactual Mugging Note that Bayesian updating is not done explicitly in this decision theory. When the decision algorithm receives input X, it may determine that a subset of programs it has preferences about never calls it with X and are also logically independent of its output, and therefore it can safely ignore them when computing the consequences of a choice. There is no need to set the probabilities of those programs to 0 and renormalize. So, with that in mind, we can model Counterfactual Mugging by the following Python program: def P(coin): AI_balance = 100 if coin == "heads": if S("heads") == "give $100": AI_balance -= 100 if coin == "tails": if Omega_Predict(S, "heads") == "give $100": AI_balance += 10000 The AI’s goal is to maximize expected utility = .5 * U(AI_balance after P("heads")) + .5 * U(AI_balance after P("tails")). Assuming U(AI_balance)=AI_balance, it’s easy to determine U(AI_balance after P("heads")) as a function of S’s output. It equals 0 if S(“heads”) == “give $100”, and 100 otherwise. To compute U(AI_balance after P("tails")), the AI needs to look inside the Omega_Predict function (not shown here), and try to figure out how accurate it is. Assuming the mathematical intuition module says that choosing “give $100” as the output for S(“heads”) makes it more likely (by a sufficiently large margin) for Omega_Predict(S, "heads") to output “give $100”, then that choice maximizes expected utility. Example 2: Return of Bayes This example is based on case 1 in Eliezer's post Priors as Mathematical Objects. An urn contains 5 red balls and 5 white balls. The AI is asked to predict the probability of each ball being red as it as drawn from the urn, its goal being to maximize the expected logarithmic score of its predictions. The main point of this example is that this decision theory can reproduce the effect of Bayesian reasoning when the situation calls for it. We can model the scenario using preferences on the following Python program: def P(n): urn = ['red', 'red', 'red', 'red', 'red', 'white', 'white', 'white', 'white', 'white'] history = [] score = 0 while urn: i = n%len(urn) n = n/len(urn) ball = urn[i] urn[i:i+1] = [] prediction = S(history) if ball == 'red': score += math.log(prediction, 2) else: score += math.log(1-prediction, 2) print (score, ball, prediction) history.append(ball) Here is a printout from a sample run, using n=1222222: -1.0 red 0.5 -2.16992500144 red 0.444444444444 -2.84799690655 white 0.375 -3.65535182861 white 0.428571428571 -4.65535182861 red 0.5 -5.9772799235 red 0.4 -7.9772799235 red 0.25 -7.9772799235 white 0.0 -7.9772799235 white 0.0 -7.9772799235 white 0.0 S should use deductive reasoning to conclude that returning (number of red balls remaining / total balls remaining) maximizes the average score across the range of possible inputs to P, from n=1 to 10! (representing the possible orders in which the balls are drawn), and do that. Alternatively, S can approximate the correct predictions using brute force: generate a random function from histories to predictions, and compute what the average score would be if it were to implement that function. Repeat this a large number of times and it is likely to find a function that returns values close to the optimum predictions. Example 3: Level IV Multiverse In Tegmark's Level 4 Multiverse, all structures that exist mathematically also exist physically. In this case, we'd need to program the AI with preferences over all mathematical structures, perhaps represented by an ordering or utility function over conjunctions of well-formed sentences in a formal set theory. The AI will then proceed to "optimize" all of mathematics, or at least the parts of math that (A) are logically dependent on its decisions and (B) it can reason or form intuitions about. I suggest that the Level 4 Multiverse should be considered the default setting for a general decision theory, since we cannot rule out the possibility that all mathematical structures do indeed exist physically, or that we have direct preferences on mathematical structures (in which case there is no need for them to exist "physically"). Clearly, application of decision theory to the Level 4 Multiverse requires that the previously mentioned open problems be solved in their most general forms: how to handle logical uncertainty in any mathematical domain, and how to map fuzzy human preferences to well-defined preferences over the structures of mathematical objects. Added: For further information and additional posts on this decision theory idea, which came to be called "Updateless Decision Theory", please see its entry in the LessWrong Wiki. There's lots of mentions of Timeless Decision Theory (TDT) in this thread - as though it refers to something real. However, AFAICS, the reference is to unpublished material by Eliezer Yudkowsky. I am not clear about how anyone is supposed to make sense of all these references before that material has been published. To those who use "TDT" as though they know what they are talking about - and who are not Eliezer Yudkowsky - what exactly is it that you think you are talking about? 1) Congratulations: moving to logical uncertainty and considering your decision's consequences to be the consequence of that logical program outputting a particular decision, is what I would call the key insight in moving to (my version of) timeless decision theory. The rest of it (that is, the work I've done already) is showing that this answer is the only reflectively consistent one for a certain class of decision problems, and working through some of the mathematical inelegancies in mainstream decision theory that TDT seems to successfully clear up an... (read more) Why didn't you mention earlier that your timeless decision theory mainly had to do with logical uncertainty? It would have saved people a lot of time trying to guess what you were talking about. Looking at my 2001 post, it seems that I already had the essential idea at that time, but didn't pursue very far. I think it was because (A) I wasn't as interested in AI back then, and (B) I thought an AI ought to be able to come up with these ideas by itself. I still think (B) is true,. Now that I have some idea what Eliezer and Nesov were talking about, I'm still a bit confused about AI cooperation. Consider the following scenario: Omega appears and asks two human players (who are at least as skilled as Eliezer and Nesov) to each design an AI. The AIs will each undergo some single-player challenges like Newcomb's Problem and Counterfactual Mugging, but there will be a one-shot PD between the two AIs at the end, with their source codes hidden from each other. Omega will grant each human player utility equal to the total score of his or he... (read more) There are two parts to AGI: consequentialist reasoning and preference. Humans have feeble consequentialist abilities, but can use computers to implement huge calculations, if the problem statement can be entered in the computer. For example, you can program the material and mechanical laws in an engineering application, enter a building plan, and have the computer predict what's going to happen to it, or what parameters should be used in the construction so that the outcome is as required. That's the power outside human mind, directed by the correct laws, and targeted at the formally specified problem. When you consider AGI in isolation, it's like an engineering application with a random building plan: it can powerfully produce a solution, but it's not a solution to the problem you need solving. Nonetheless, this part is essential when you do have an ability to specify the problem. And that's the AI's algorithm, one aspect of which is decision-making. It's separate from the problem statement that comes from human nature. For an engineering program, you can say that the computer is basically doing what a person would do if they had crazy amount of time and machine patience. But that's... (read more) To celebrate, here are some pictures of Omega! (except the models that are palette swaps of Ultima) Although I still have not tried to decipher what "Timeless Decision Theory" or "Updateless Decision Theory" is actually about, I would like to observe that it is very unlikely that the "timeless" aspect, in the sense of an ontology which denies the reality of time, change, or process, is in any way essential to how it works. If you have a Julian-Barbour-style timeless wavefunction of the universe, which associates an amplitude with every point in a configuration space of spacelike states of the universe, you can always constru... (read more) Thanks for twisting my mind in the right direction with the S' stuff. I hereby submit the following ridiculous but rigorous theory of Newcomblike problems: You submit a program that outputs a row number in a payoff matrix, and a "world program" simultaneously outputs a column number in the same matrix; together they determine your payoff. Your program receives the source code of the world program as an argument. The world program doesn't receive your source code, but it contains some opaque function calls to an "oracle" that's guaranteed... (read more) 2) The key problem in Drescher's(?) Counterfactual Mugging is that after you actually see the coinflip, your posterior probability of "coin comes up heads" is no longer 0.5 - so if you compute the answer after seeing the coin, the answer is not the reflectively consistent one. I still don't know how to handle this - it's not in the class of problems to which my TDT corresponds. Please note that the problem persists if we deal in a non-quantum coin, like an unknown binary digit of pi. I thought the answer Vladimir Nesov already posted solved Counterfactual Mugging for a quantum coin? In this solution, there is no belief updating; there is just decision theory. (All probabilities are "timestamped" to the beliefs of the agent's creator when the agent was created.) This means that the use of Bayesian belief updating with expected utili... (read more) My book discusses a similar scenario: the dual-simulation version of Newcomb's Problem (section 6.3), in the case where the large box is empty (no $1M) and (I argue) it's still rational to forfeit the $1K. Nesov's version nicely streamlines the scenario. Just to elaborate a bit, Nesov's scenario and mine share the following features: In both cases, we argue that an agent should forfeit a smaller sum for the sake of a larger reward that would have been obtainted (couterfactually contingently on that forfeiture) if a random event had turned out differently than in fact it did (and than the agent knows it did). We both argue for using the original coin-flip probability distribution (i.e., not-updating, if I've understood that idea correctly) for purposes of this decision, and indeed in general, even in mundane scenarios. We both note that the forfeiture decision is easier to justify if the coin-toss was quantum under MWI, because then the original probability distribution corresponds to a real physical distribution of amplitude in configuration-space. Nesov's scenario improves on mine in several ways. He eliminates some unnecessary complications (he uses one simulation instead of two, and just tells the agent what the coin-toss was, whereas my scenario requires the agent to deduce that). So he makes the point more clearly, succinctly and dramatically. Even more importantly, his analysis (along with Yudkowsky, Dai, and others here... (read more) Why do you insist on making life harder on yourself? If the problem isn't solved satisfactorily in a simple world model, e.g. a deterministic finite process with however good mathematical properties you'd like, it's not yet time to consider more complicated situations, with various poorly-understood kinds of uncertainty, platonic mathematical objects, and so on and so forth. So...UDT dominates all known decision theories Understanding check: But does the Bayesian update occur if the input X affects the relative probabilities of the programs without s... (read more) This is one of those posts where I think "I wish I could understand the post". Way to technical for me right now. I sometimes wish that someone can do a "Non-Technical" and Non-mathematical version of posts like these ones. (but I guess it will take too much time and effort). But then I get away saying, I don't need to understand everything, do I? PS again: Don't forget to retract: Smart agents win.
https://www.lesswrong.com/posts/de3xjFaACCAk6imzv/towards-a-new-decision-theory
CC-MAIN-2021-10
refinedweb
3,088
50.67
BlackBerryProfileDeleteDataFlag #include <bb/platform/identity/BlackBerryProfileDeleteDataFlag> To link against this class, add the following line to your .pro file: LIBS += -lbbplatform The flags for deleting profile entries. Multiple flags can be combined using bitwise 'OR' unless stated otherwise. deleteData() flags parameter Overview Public Types Index Public Types - Default 0x00000000 Use the default flags for delete requests. If options are not specified, the deletion will follow the default behavior where the specified remote entry is deleted as well as the cached copy if it was cached. Remove the local cached copy of the entry. Override the default behavior to remove only the cached copy, but leave the remote copy unchanged.. - CacheOnly 0x00000001 Delete all entries under the profile type. Removes all the entries for the given type. The name specified must be NULL when using this flag. To avoid accidental removal of shared entries, use type Vendor in BlackBerryProfilePropertyType, which does not allow this flag. - DeleteAll 0x00000002 - Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
https://developer.blackberry.com/native/reference/cascades/bb__platform__identity__blackberryprofiledeletedataflag.html
CC-MAIN-2015-18
refinedweb
172
59.3
Cursor Class Represents the image used to paint the mouse pointer. Assembly: System.Windows.Forms (in System.Windows.Forms.dll) System.Windows.Forms.Cursor A cursor is a small picture whose location on the screen is controlled by a pointing device, such as a mouse, pen, or trackball. When the user moves the pointing device, the operating system moves the cursor accordingly. Different cursor shapes are used to inform the user what operation the mouse will have. For example, when editing or selecting text, a Cursors.IBeam cursor is typically displayed. A wait cursor is commonly used to inform the user that a process is currently running. Examples of processes you might have the user wait for are opening a file, saving a file, or filling a control such as a DataGrid, ListBox or TreeView with a large amount of data. All controls that derive from the Control class have a Cursor property. To change the cursor displayed by the mouse pointer when it is within the bounds of the control, assign a Cursor to the Cursor property of the control. Alternatively, you can display cursors at the application level by assigning a Cursor to the Current property. For example, if the purpose of your application is to edit a text file, you might set the Current property to Cursors.WaitCursor to display a wait cursor over the application while the file loads or saves to prevent any mouse events from being processed. When the process is complete, set the Current property to Cursors.Default for the application to display the appropriate cursor over each control type. Cursor objects can be created from several sources, such as the handle of an existing Cursor, a standard Cursor file, a resource, or a data stream. If the image you are using as a cursor is too small, you can use the DrawStretched method to force the image to fill the bounds of the cursor. You can temporarily hide the cursor by calling the Hide method, and restore it by calling the Show method. Starting with the .NET Framework 4.5.2, the Cursor will be resized based on the system DPI setting when the app.config file contains the following entry: The following code example displays a form that demonstrates using a custom cursor. The custom Cursor is embedded in the application's resource file. The example requires a cursor contained in a cursor file named MyCursor.cur. To compile this example using the command line, include the following flag: /res:MyCursor.Cur, CustomCursor.MyCursor.Cur using System; using System.Drawing; using System.Windows.Forms; namespace CustomCursor { public class Form1 : System.Windows.Forms.Form { [STAThread] static void Main() { Application.Run(new Form1()); } public Form1() { this.ClientSize = new System.Drawing.Size(292, 266); this.Text = "Cursor Example"; // The following generates a cursor from an embedded resource. // To add a custom cursor, create a bitmap // 1. Add a new cursor file to your project: // Project->Add New Item->General->Cursor File // --- To make the custom cursor an embedded resource --- // In Visual Studio: // 1. Select the cursor file in the Solution Explorer // 2. Choose View->Properties. // 3. In the properties window switch "Build Action" to "Embedded Resources" // On the command line: // Add the following flag: // /res:CursorFileName.cur,Namespace.CursorFileName.cur // // Where "Namespace" is the namespace in which you want to use the cursor // and "CursorFileName.cur" is the cursor filename. // The following line uses the namespace from the passed-in type // and looks for CustomCursor.MyCursor.Cur in the assemblies manifest. // NOTE: The cursor name is acase sensitive. this.Cursor = new Cursor(GetType(), "MyCursor cursor file named MyWait.cur in the application directory. It also requires a Customer object that can hold a collection of Order objects,.
https://msdn.microsoft.com/en-us/library/windows/apps/system.windows.forms.cursor
CC-MAIN-2017-17
refinedweb
623
50.33
1. Administering the Oracle Solaris Desktop 2. Managing User Preferences With GConf 6. Working With MIME Types Refreshing the MIME Database XML files provide all the information regarding MIME types that are installed into the MIME database by the update-mime-database application. The MIME XML files are located in the <MIME>/packages directory. A few rules about the MIME XML files: The XML file must specify the namespace as. The root element must be mime-info. Zero or more mime-type elements can be specified as children of the mime-info element. The type attribute is used to specify the MIME type that is being defined. By default, the freedesktop.org.xml file is installed to the packages directory in one of the <MIME> paths (usually /usr/share/mime/ packages). The following table gives a brief description of each element that can occur as children to the mime-type element. Table 6-1 Child elements of <mime-type> The following example defines the text/x-diff MIME type. Example 6-1 Example of a diff.xml source XML file: <?xml version='1.0'?> <mime-info <mime-type <comment>Differences between files</comment> <comment xml:verskille tussen lêers</comment> <!-- more translated comment elements --> <magic priority="50"> <match type="string" offset="0" value="diff\t"/> <match type="string" offset="0" value="***\t"/> <match type="string" offset="0" value="Common subdirectories: "/> </magic> <glob pattern="*.diff"/> <glob pattern="*.patch"/> </mime-type> </mime-info> In this example, multiple comment elements give a readable name to the MIME type in a number of different languages. The text/x-diff MIME type has rules for matching both through glob patterns and through the use of content sniffing (known as magic rules). Any file with the .diff or .patch extension will resolve to this MIME type. Additionally, any file whose contents start with the strings specified in the value attributes of the match element will resolve to the text/x-diff MIME type. For more details about the glob patterns and magic rules, see the XDG shared mime info specification.
http://docs.oracle.com/cd/E26502_01/html/E28056/gmcas.html
CC-MAIN-2014-52
refinedweb
343
58.28
C++ is a common programming language. It is really simple and fun to make your own games, but without the knowledge to do basic tasks, it can get really frustrating. One of the most vital and basic tasks is displaying text on the screen. Steps - 1Open or create a C++ file. Feel free to name it anything you want. Save it somewhere you can remember. - 2Write at the top the command #include <iostream>. This translates into "use the pre-made file named iostream". You may not have of created it, but it comes with many C++ downloads. This provides you with the command vital for displaying text. - 3In the next line, type using namespace std; which translates to "use the standard library." - 4On the next line, type int main(){. This is the main function, the main set of commands, that the program looks for. All lines after this are usually indented. - 5On the next line, type cout << . This is the command used to display text. Make sure you indent it. - 6Continue the line with the text you want to display on the screen inside quotation marks("s). - 7End the line with a semi-colon(;). Then, on the next line, type "}". Don't indent it. - 8Locate the "build and run" button. It should be near the top of the screen and should look like a gear and a green arrow. A pop-up should appear displaying the text in quotations.Advertisement Community Q&A Search Ask a Question 200 characters left Include your email address to get a message when this question is answered.Submit Advertisement Tips - Be careful with your coding! If an error occurs, read over your code again.Thanks! Advertisement About This Article Thanks to all authors for creating a page that has been read 4,363 times. Is this article up to date? Advertisement
https://m.wikihow.com/Display-Text-on-the-Screen-in-C%2B
CC-MAIN-2020-10
refinedweb
307
78.04
! I'm also very interested in this as it'd allow for more user friendly completions for most languages. You can create descriptions for completions by placing a \t tab chracater in the "trigger" key. Everything after that will be displayed on the right like with snippets. What do you mean with CaseSensitive? Do you want the completion to only trigger when case is matched? I don't think you can do that. The only way I can think of that handles completions containing a word-separator is probably with an EventListener.on_query_completions plugin. Allright so i tried this, and it worked just properly until the word contained a word seperator, e.g: something.something\t(something, something); now typing in something and pressing the tab would work just properly, however if i type in something.something and tab then it would write: something.something.something(something, something) any possible fix for this? No, that's what I've been saying. At least not with usual completions. This is because completions are only interpreted to the first word seperator character. So, what you need to do is write an API completion than can get text beyond the word separator (the "."). See "Packages/HTML/html_completions.py" for a pretty nice example. A nice example of what, exactly? Okay, let me phrase it like this: In order to do what you want to you have to write a Python plugin. If you are not known to Python, well, go ahead and learn it, it's awesome. If you are, I think you can put some sense into these examples (with the API documentation): API: sublimetext.com/docs/2/api_reference.htmlDocs on plugins: docs.sublimetext.info/en/latest/ ... ugins.htmlSpecifically about completion plugins: docs.sublimetext.info/en/latest/ ... tions-list More advanced example from "Packages/HTML/html_completions.py":[code]# This responds to on_query_completions, but conceptually it's expanding class HtmlCompletions(sublime_plugin.EventListener): def on_query_completions(self, view, prefix, locations): # Only trigger within HTML if not view.match_selector(locations[0], "text.html - source - meta.tag, punctuation.definition.tag.begin"): return ] # Get the contents of each line, from the beginning of the line to # each point lines = [view.substr(sublime.Region(view.line(l).a, l)) for l in locations] # Reverse the contents of each line, to simulate having the regex # match backwards lines = [l[::-1] for l in lines] # Check the first location looks like an expression rex = re.compile("(\w-]+)(.#])(\w+)") expr = match(rex, lines[0]) if not expr: return ] # Ensure that all other lines have identical expressions for i in xrange(1, len(lines)): ex = match(rex, lines*) if ex != expr: return ] # Return the completions arg, op, tag = rex.match(expr).groups() arg = arg::-1] tag = tag::-1] expr = expr::-1] if op == '.':$1</{0}>$0".format(tag, arg) else:$1</{0}>$0".format(tag, arg) return (expr, snippet)][/code]*
https://forum.sublimetext.com/t/snippet-word-seperator-and-other-please-help/10209/1
CC-MAIN-2016-30
refinedweb
477
50.84
This article was originally published over at the Codurance blog to grow in directions that it needs to. Boundaries designed at the start will stunt that growth at certain axis when direction of growth is at its most unpredictable. Also testing the system as a whole is very cumbersome. One can argue that the services should be decoupled enough that testing the application where all the services need to run is kept to a minimum. Sure, but in my experience even that minimal testing is a pain. Pain that should be lessened or altogether avoided for as long as possible. So why do we do it? Why are microservices such a compelling idea? The premise of isolating change is extremely attractive. We have all been stung with “the monolith”. We look at the system and see the change hotspots and wonder, “if only I had those hotspots isolated so that I didn’t have to redeploy the whole thing when they change” or “if only I could re-engineer this part without having to worry about the rest” etc. Yes microservices based architecture may help you achieve that (Remember! I said they are a bad idea at the start of a project, not a bad idea altogether.) but by this time you understand the hotspots in the application and your understanding of the domain has matured. My problem is with creating strong boundaries between different aspects of our application. These resist change if the understanding changes and some of the boundaries are no longer valid. It discourages people to question the already drawn boundaries because they are not easy to change. An Idea So can we do microservices without having to draw strong boundaries, at least at the start? Like anything in life it is not so simple. From weak-to-strong – we can use classes/modules, interfaces/protocols, package/namespaces, sub-projects, libraries and processes to draw these boundaries. The problem with the conventional microservices is that we go straight to the processes level to draw the boundary which is the strongest level at which you can separate the system. However, the weaker the boundary the bigger the chance that you’ll have to do extra work to strengthen that boundary because dependencies will have leaked through. But at the same time weaker boundaries are easier to redefine. What if we keep the boundaries in process but make them explicit? For example we segregate the system into components that are only allowed to speak to each other over a well defined interface just like our microservices but they’re all running in the same process. This could be serialised into something specific like JSON or a more abstract interchange format. Code could be divided into top level packages ensuring that there is no direct binary dependency between modules. So that modules are truly passing messages to each other – like good old fashioned Object Oriented Programming. We must ensure that there is no direct dependency between modules e.g. shared code, shared memory or shared database tables. Code can be reused using versioned libraries. This will allow us to keep explicit boundaries between the modules in our codebase that are strong enough that individual modules can be extracted into their own microservices when required but also weak enough that they can be easily changed when needed. Even this level of division my not be ideal at the start and we may start with a single component to the point where a division into at least two parts becomes apparent. Conclusion So the advice, if you haven’t guessed, is that we should start our system with minimal assumptions and restrictions and then sense the system to see where it needs to go. Microservices could be the vision of the destination but we shouldn’t try to second guess the destination or even preplan our journey. We should sense and adapt. Premature abstractions and boundaries will drown out this sense in certain areas resulting in a system that is not as fully evolved as it could’ve been.
https://www.voxxed.com/2016/02/premature-microservices/
CC-MAIN-2019-13
refinedweb
677
61.77
...one of the most highly regarded and expertly designed C++ library projects in the world. — Herb Sutter and Andrei Alexandrescu, C++ Coding Standards The <boost/format.hpp> format class provides printf-like formatting, in a type-safe manner which allows output of user-defined types. A format object is constructed from a format-string, and is then given arguments through repeated calls to operator%. Each of those arguments are then converted to strings, who are in turn combined into one string, according to the format-string. cout << boost::format("writing %1%, x=%2% : %3%-th try") % "toto" % 40.23 % 50; // prints "writing toto, x=40.230 : 50-th try" or later on, as inor later on, as incout << format("%2% %1%") % 36 % 77; you feed variables into the formatter.you feed variables into the formatter.format fmter("%2% %1%"); fmter % 36; fmter % 77; // fmter was previously created and fed arguments, it can print the result : cout << fmter ; // You can take the string result : string s = fmter.str(); // possibly several times : s = fmter.str( ); // You can also do all steps at once : cout << boost::format("%2% %1%") % 36 % 77; // using the str free function : string s2 = str( format("%2% %1%") % 36 % 77 ); using namespace std; using boost::format; using boost::io::group; It prints : "11 22 333 22 11 \n"It prints : "11 22 333 22 11 \n"cout << format("%1% %2% %3% %2% %1% \n") % "11" % "22" % "333"; // 'simple' style. It prints : "(x,y) = ( -23, +35) \n"It prints : "(x,y) = ( -23, +35) \n"cout << format("(x,y) = (%1$+5d,%2$+5d) \n") % -23 % 35; // Posix-Printf style It prints : "writing toto, x=40.23 : 50-th step \n"It prints : "writing toto, x=40.23 : 50-th step \n"cout << format("writing %s, x=%s : %d-th step \n") % "toto" % 40.23 % 50; all those print : "(x,y) = ( -23, +35) \n"all those print : "(x,y) = ( -23, +35) \n"cout << format("(x,y) = (%+5d,%+5d) \n") % -23 % 35; cout << format("(x,y) = (%|+5|,%|+5|) \n") % -23 % 35; cout << format("(x,y) = (%1$+5d,%2$+5d) \n") % -23 % 35; cout << format("(x,y) = (%|1$+5|,%|2$+5|) \n") % -23 % 35; Both print the same : "_ +101_ 101 \n"Both print the same : "_ +101_ 101 \n"format fmter("_%1$+5d_ %1$d \n"); format fmter2("_%1%_ %1% \n"); fmter2.modify_item(1, group(showpos, setw(5)) ); cout << fmter % 101 ; cout << fmter2 % 101 ; The manipulators are applied at each occurence of %1%, and thus it prints : "_ +101_ +101 \n"The manipulators are applied at each occurence of %1%, and thus it prints : "_ +101_ +101 \n"cout << format("_%1%_ %1% \n") % group(showpos, setw(5), 101); For some std::vector names, surnames, and tel (see sample_new_features.cpp) it prints :For some std::vector names, surnames, and tel (see sample_new_features.cpp) it prints :for(unsigned int i=0; i < names.size(); ++i) cout << format("%1%, %2%, %|40t|%3%\n") % names[i] % surname[i] % tel[i]; Marc-François Michel, Durand, +33 (0) 123 456 789 Jean, de Lattre de Tassigny, +33 (0) 987 654 321 The program sample_formats.cpp demonstrates simple uses of format. sample_new_features.cpp illustrates the few formatting features that were added to printf's syntax such as simple positional directives, centered alignment, and 'tabulations'. sample_advanced.cpp demonstrates uses of advanced features, like reusing, and modifying, format objects, etc.. And sample_userType.cpp shows the behaviour of the format library on user-defined types. boost::format( format-string ) % arg1 % arg2 % ... % argN The format-string contains text in which special directives will be replaced by strings resulting from the formatting of the given arguments. The legacy syntax in the C and C++ worlds is the one used by printf, and thus format can use directly printf format-strings, and produce the same result (in almost all cases. see Incompatibilities with printf for details) This core syntax was extended, to allow new features, but also to adapt to the C++ streams context. Thus, format accepts several forms of directives in format-strings : The printf format specifications supported by Boost.format follows the Unix98 Open-group printf precise syntax, rather than the standard C printf, which does not support positional arguments. (Common flags have the same meaning in both, so it should not be a headache for anybody) Note that it is an error to use positional format specifications (e.g. %3$+d) mixed with non-positional ones (e.g. %+d) in the same format string. In the Open-group specification, referring to the same argument several times (e.g. "%1$d %1$d") has undefined behaviour. Boost.format's behaviour in such cases is to allow each argument to be reffered to any number of times. The only constraint is that it expects exactly P arguments, P being the maximum argument number used in the format string. (e.g., for "%1$d %10$d", P == 10 ). Supplying more, or less, than P arguments raises an exception. (unless it was set otherwise, see exceptions) A specification spec has the form : [ N$ ] [ flags ] [ width ] [ . precision ] type-char Fields insided square brackets are optional. Each of those fields are explained one by one in the following list : Note that the 'n' type specification is ignored (and so is the corresponding argument), because it does not fit in this context. Also, printf 'l', 'L', or 'h' modifiers (to indicate wide, long or short types) are supported (and simply have no effect on the internal stream). printf(s, x1, x2); cout << format(s) % x1 % x2; But because some printf format specifications don't translate well into stream formatting options, there are a few notable imperfections in the way Boost.format emulates printf. In any case, the format class should quietly ignore the unsupported options, so that printf format-strings are always accepted by format and produce almost the same output as printf. format formatter("%+5d"); cout << formatter % x; unsigned int n = formatter.size(); All flags which are translated into modification to the stream state act recursively within user-defined types. ( the flags remain active, and so does the desired format option, for each of the '<<' operations that might be called by the user-defined class)e.g., with a Rational class, we would have something like : Rational ratio(16,9); cerr << format("%#x \n") % ratio; // -> "0x10/0x9 \n" It's a different story for other formatting options. For example, setting width applies to the final output produced by the object, not to each of its internal outputs, and that's fortunate : cerr << format("%-8d") % ratio; // -> "16/9 " and not "16 /9 " cerr << format("%=8d") % ratio; // -> " 16/9 " and not " 16 / 9 " But so does the 0 and ' ' options (contrarily to '+' which is directly translated to the stream state by showpos. But no such flags exist for the zero and space printf options) and that is less natural : It is possible to obtain a better behaviour by carefully designing the Rational's operator<< to handle the stream's width, alignment and showpos paramaters by itself. This is demonstrated in sample_userType.cpp.It is possible to obtain a better behaviour by carefully designing the Rational's operator<< to handle the stream's width, alignment and showpos paramaters by itself. This is demonstrated in sample_userType.cpp.cerr << format("%+08d \n") % ratio; // -> "+00016/9" cerr << format("% 08d \n") % ratio; // -> "000 16/9" The internal stream state of format is saved before and restored after output of an argument; therefore, the modifiers are not sticky and affect only the argument they are applied to. The default state for streams, as stated by the standard, is : precision 6, width 0, right alignment, and decimal flag set. The state of the internal format stream can be changed by manipulators passed along with the argument; via the group function, like that : cout << format("%1% %2% %1%\n") % group(hex, showbase, 40) % 50; // prints "0x28 50 0x28\n" When passing N items inside a 'group' Boost.format needs to process manipulators diferently from regular argument, and thus using group is subject to the following constraints : Such manipulators are passed to the streams right before the following argument, at every occurence. Note that formatting options specified within the format string are overridden by stream state modifiers passed this way. For instance in the following code, the hex manipulator has priority over the d type-specification in the format-string which would set decimal output : cout << format("%1$d %2% %1%\n") % group(hex, showbase, 40) % 50; // prints "0x28 50 0x28\n" Boost.format enforces a number of rules on the usage of format objects. The format-string must obeys the syntax described above, the user must supply exactly the right number of arguments before outputting to the final destination, and if using modify_item or bind_arg, items and arguments index must not be out of range. When format detects that one of these rules is not satisfied, it raises a corresponding exception, so that the mistakes don't go unnoticed and unhandled. But the user can change this behaviour to fit his needs, and select which types of errors may raise exceptions using the following functions : unsigned char exceptions(unsigned char newexcept); // query and set unsigned char exceptions() const; // just query The user can compute the argument newexcept by combining the following atoms using binary arithmetic : For instance, if you don't want Boost.format to detect bad number of arguments, you can define a specific wrapper function for building format objects with the right exceptions settings : It is then allowed to give more arguments than needed (they are simply ignored) :It is then allowed to give more arguments than needed (they are simply ignored) :boost::format my_fmt(const std::string & f_string) { using namespace boost::io; format fmter(f_string); fmter.exceptions( all_error_bits ^ ( too_many_args_bit | too_few_args_bit ) ); return fmter; } And if we ask for the result before all arguments are supplied, the corresponding part of the result is simply emptyAnd if we ask for the result before all arguments are supplied, the corresponding part of the result is simply emptycout << my_fmt(" %1% %2% \n") % 1 % 2 % 3 % 4 % 5; cout << my_fmt(" _%2%_ _%1%_ \n") % 1 ; // prints " __ _1_ \n" The performance of boost::format for formatting a few builtin type arguments with reordering can be compared to that of Posix-printf, and of the equivalent stream manual operations to give a measure of the overhead incurred. The result may greatly depend on the compiler, standard library implementation, and the precise choice of format-string and arguments. Since common stream implementations eventually call functions of the printf family for the actual formatting of numbers, in general printf will be noticeably faster than the direct stream operations And due to to the reordering overhead (allocations to store the pieces of string, stream initialisation at each item formatting, ..) the direct stream operations would be faster than boost::format, (one cas expect a ratio ranging from 2 to 5 or more) When iterated formattings are a performance bottleneck, performance can be slightly increased by parsing the format string into a format object, and copying it at each formatting, in the following way. const boost::format fmter(fstring); dest << boost::format(fmter) % arg1 % arg2 % arg3 ; As an example of performance results, the author measured the time of execution of iterated formattings with 4 different methods the test was compiled with g++-3.3.3 and the following timings were measured (in seconds, and ratios) : string fstring="%3$0#6x %1$20.10E %2$g %3$0+5d \n"; double arg1=45.23; double arg2=12.34; int arg3=23; - release mode : printf : 2.13 nullStream : 3.43, = 1.61033 * printf boost::format copied : 6.77, = 3.1784 * printf , = 1.97376 * nullStream boost::format straight :10.67, = 5.00939 * printf , = 3.11079 * nullStream - debug mode : printf : 2.12 nullStream : 3.69, = 1.74057 * printf boost::format copied :10.02, = 4.72642 * printf , = 2.71545 * nullStream boost::format straight :17.03, = 8.03302 * printf , = 4.61518 * nullStream namespace boost { template<class charT, class Traits=std::char_traits<charT> > class basic_format { public: typedef std::basic_string<charT, Traits> string_t; typedef typename string_t::size_type size_type; basic_format(const charT* str); basic_format(const charT* str, const std::locale & loc); basic_format(const string_t& s); basic_format(const string_t& s, const std::locale & loc); basic_format& operator= (const basic_format& x); void clear(); // reset buffers basic_format& parse(const string_t&); // clears and parse a new format string string_t str() const; size_type size() const; // pass arguments through those operators : template<class T> basic_format& operator%(T& x); template<class T> basic_format& operator%(const T& x); // dump buffers to ostream : friend std::basic_ostream<charT, Traits>& operator<< <> ( std::basic_ostream<charT, Traits>& , basic_format& ); // Choosing which errors will throw exceptions : unsigned char exceptions() const; unsigned char exceptions(unsigned char newexcept); // ............ this is just an extract ....... }; // basic_format typedef basic_format<char > format; typedef basic_format<wchar_t > wformat; // free function for ease of use : template<class charT, class Traits> std::basic_string<charT,Traits> str(const basic_format<charT,Traits>& f) { return f.str(); } } // namespace boost This class's goal is to bring a better, C++, type-safe and type-extendable printf equivalent to be used with streams.Precisely, format was designed to provide the following features : In the process of the design, many issues were faced, and some choices were made, that might not be intuitively right. But in each case they were taken for some reasons. The author of Boost format is Samuel Krempp. He used ideas from Rüdiger Loos' format.hpp and Karl Nelson's formatting classes. Revised 02 December, 2006 Distributed under the Boost Software License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at)
http://www.boost.org/doc/libs/1_42_0/libs/format/doc/format.html
CC-MAIN-2016-36
refinedweb
2,266
50.87
There are a handful of good resources on implementing syntax highlighting and MarkDown support for content in Django, when I was starting out work on my blog I based my code off of the example I found here. It is a well done article, and it ends up with a usable system (the core of which I reuse here). But this article is going to look at extending its functionality a bit, and also simplifying the code. If you were to implement the above article's code you would end up writing MarkDown that look like this: ### My article 1. My list entry one 2. My list entry two <code class="python">def x (a, b): return a * b</code> That is reasonable solution, but I find the code element a bit clunky, so I here is my alternative implementation of syntax highlighting with MarkDown for Django. After its implemented that bit of MarkDown will be written like this: ### My article 1. My list entry one 2. My list entry two @@ python def x (a, b): return a * b @@ Your milleage may vary, but I think thats a big improvement for both readability and writability. Further more the MarkDown++ solution shown here allows access to all of Pygments syntax lexers so you can use languages like html+django, ruby, scheme, or apache. Implementation The first step is to download [MarkDown++][markdownpp]. You can either add it to your Python path, or I personally added it as a file in my Django application that calls it, thus I import it like so: import myproject.myapp.markdownpp as markdownpp Next we need to make our model that contains our content: class Entry(models.Model): title = models.CharField(maxlength=200) body = models.TextField() body_html = models.TextField(blank=True, null=True) use_markdown = models.BooleanField(default=True) and give it a save method def save(self): if self.use_markdown: self.body_html = markdownpp.markdown(self.html) else: self.body_html = self.body super(Entry,self).save() And thats it, now you can use the MarkDown++ syntax for code syntax highlighting. You will need to save a copy of this css file and place it in your media folder somewhere that your template can load it (it is the css needed for the syntax coloring, and shouldn't interfere with your existing css at all). Taking it a bit further Now you already have a working MarkDown and code syntax highlighter implemented, but we can spruce this up a little bit more. Often you'll have files (images or otherwise) that you'll be referencing in your content. Unfortunately its an error prone process to write your reference links by hand: [myfile]: and if you write a number of them it gets a bit tedious. Wouldn't it be great if the entries wrote the references for themselves? Well we can, and we're going to (and there was much rejoicing across the land). First lets add a Resource model: class Resource(models.Model): title = models.CharField(maxlength=50) markdown_id = models.CharField(maxlength=50) content = models.FileField(upload_to="myapp/resource") Now we need to make a few modifications to our Entry from earlier: class Entry(models.Model): title = models.CharField(maxlength=200) body = models.TextField() body_html = models.TextField(blank=True, null=True) use_markdown = models.BooleanField(default=True)() And now when you write an article you can use any of the markdown_id's from any Resources you have created. After looking at the code you'll think to yourself: "Why not have it only import the Resources that the Entry is actually using? Just throw in a quick many to many relationship and...." Infact, that is what I did, but it quickly becomes a very unpleasant solution... let me explain. If you have a ManyToMany field linking you Entry to Resource instances, then you'll do something like this to get the related resources: refs = entry.resources.all() Which is great, really easy and all that jazz. Only it doesn't work. The problem is that when you save your Entry you are also saving your new relationships between Resources and the Entry. This means at the point in time where you save the Entry, the fully updated list of Resources related to that Entry is not yet available. So you'd think you could do something like this: def save: super(Entry,self).save() res = self.resources.all() # etc etc super(Entry,self).save() But unfortunately you can't. For whatever reason the change doesn't propegate to the database quickly enough for it to be updated at that point (disclaimer: I do my development mostly using SQLite3, which has abysmal database locking, perhaps this approach would work better on PostgreSQL, but I doubt it). Alright, now you're thinking "lets just use the dispatcher to listen for a post save hook, and then save the Entry a second time." And you're right, that works, sort of, but not as cleanly as you might think. The crux of the problem is that if you call resave again immediately with the post_save hook then things still won't be updated yet, you have to actually wait for 2-3 seconds for the change to be available. I don't particularly recommend using it, but for the sake of completeness here is my current solution to this updating problem: import time, thread from django.db import models from django.dispatch import dispatcher from django.db.models import signals ####################### ### your models go here ### ####################### resave_hist = {} dispatcher.connect(resave_object, signal=signals.post_save, sender=Entry) This is very hacky, but it is the only solution I have come up with that works (other than simply adding all resources to every entry). The crux of it is that it uses a seperate thread to wait for three seconds and then resaves it. There is some additional code to prevent infinite save loops. Onward and Upward Precreating reference links so you don't have to is an idea that has legs: you can create automatic links to previous and next articles, to blogs on your blog roll, whatever you use. Give it a quick thought, maybe you can transfer some of your writing workflow onto the software instead. I'd be interested to see if anyone has any better solutions for only loading relevant reference. Let me know if there are any questions.
http://lethain.com/syntax-highlighting-markdown-and-pinch-automagick/
CC-MAIN-2014-10
refinedweb
1,058
64.71
There has been some discussion about this off-list. Most of the discussion has centered around what appropriate tab completion should be in different cases (what should be offered in the completion namespace and when). pydb has been dealt with a number of these issues already (thanks Rocky Bernstein). I would like to continue the discussion here. Here's a quick summary: - when a line is blank, pdb commands and valid identifiers and keywords should be included in the namespace. - if a line does not start with a pdb command, then it is a python expression and can be completed by rlcompleter, with all valid identifiers and keywords available in the namespace. - if a line does start with a pdb command then we should decide on the best way to complete each pdb command. for example: " "[complete] -> all possible commands, identifiers and keywords "b"[complete] -> "basestring bool break buffer" "basestring."[complete] -> "basestring.__base__ basestring.__delattr__ ... etc." "break "[complete] -> whatever the pdb completer will offer currently, the submitted patch only attempts to complete python expressions with rlcompleter. I think it would be useful (and more honest, as Rocky puts it) to offer completion for pdb commands as well. Apart from general comments, what would be great are suggestions on what sort of completion should follow the various pdb commands. Stephen Emslie On 1/22/07, stephen emslie <stephenemslie at gmail.com> wrote: > Thanks for taking a look. I've created a patch relative to pdb.py in svn and submitted it to sourceforge here: > > > > > > On 1/19/07, Aahz < aahz at pythoncraft.com> wrote: > >: > > > >
https://mail.python.org/pipermail/python-ideas/2007-February/000202.html
CC-MAIN-2016-44
refinedweb
263
63.09
At some point I want to write more about Scala type aliases and type members, but for today I just want to put a little reminder here for myself. Until I take the time to write more, here are two images from stackoverflow that provide Scala type examples. Using Scala type aliases This first image is from this link, and discusses the difference between a type and a value: Here’s the source code to go along with that image, with a few println statements at the end: object TypeAliases1 extends App { type Row = List[Int] def Row(xs: Int*) = List(xs: _*) type Matrix = List[Row] def Matrix(xs: Row*) = List(xs: _*) val m = Matrix(Row(1,2,3), Row(1,2,3), Row(1,2,3)) println(m) println(m.getClass) } As programs get more complicated, I sometimes use type aliases to help simplify them. For instance, I just created this type alias in a Scala object: type DataTypeMap = Map[String, DataTypeAsJson] and then ended up using it later in several places, like this: def getAllDataTypesAsMap(jsonString: String): DataTypeMap = { For my brain, it makes it easier to look at the getAllDataTypesAsMap method and reason about what it returns. As shown in the image above, a problem comes along when you try to use a type alias as a value (as a class or object), and that image and source code show the solution to that problem. Scala type members This next image is from this SO link, and shows an example of creating a Scala type member: Summary Again, I hope to come back here and share some more Scala type examples for both type aliases and type members, but until then, I hope this information is helpful.
https://alvinalexander.com/scala/scala-type-examples-type-aliases-members/
CC-MAIN-2021-49
refinedweb
291
57.88
Upgrading to Angular v6: Step by Step Angular v6 was released on May 3rd and now we can focus on upgrading our projects to the new version. In this post I documented my experience and steps upgrading some projects from v5 to v6. Some tips and key differences between projects created with v5 and v6 are also included. What’s new in Angular v6 First things first, Angular team published a detailed summary on what’s new in v6 here. So before continuing reading this article, I suggest you read the official Angular blog first. Upgrading to Angular v6 The best resource with details on how to upgrade to a new Angular version is. Even if you are upgrading from v2 to v6, it will list all the breaking changes since v2 to v6! It is a great way to know in details what you need to change in your code. 1: Instaling latest Angular CLI First step is to make sure you have the latest CLI available: npm install -g @angular/cli or (linux and macos) sudo npm install -g @angular/cli You should see something like this: With the release of Angular v6, the Angular CLI is now also versioned as Angular, meaning until Angular v5, we would use Angular CLI 1.x, and now Angular CLI is also on version 6.x. It makes it much easier! 2: ng update Needless to say, please do create a branch to update your project, as you’ll never know if all dependencies will still work after upgrading to Angular v6. We can use ng update -d or ng update --dry-run to check what needs to be updated in our project: So first, we will start with @angular/cli. In order for the ng update command to work inside the project, we need first to update the @angular/cli version to 6.x. npm install --save-dev @angular/cli@latest Next, run the ng update command for @angular/cli, then @angular/core and then for the other packages required ( rxjs, @angular/material): ng update @angular/cli ng update @angular/core ng update @angular/material ng update rxjs Some project structure files have changed from v5 to v6. There is no angular-cli.json anymore, it has been replaced by angular.json. The structure of angular.json also has changed to support multiple projects per workspace. When we run ng update @angular/cli all the required files will be updated! 3: Updating other dependencies I also like to update the other npm depedencies used by the project during an Angular upgrade. The npm package npm-check-updates is really helpful for this task. npm install -g npm-check-updates Use the command ncu to check what packages have updates available. And ncu -u to update your package.json. When changing versions of package of package.json, I personally also like to delete the node_module and run npm i again just to make sure the correct versions are availavle locally (and also update package-lock.json). 4: Updating RxJS So, next step now is running ng serve to check if everything is ok with the code. Don’t forget to verify for all beaking changes. Even though we were able to update RxJS code since Angular v5 (with RxJS v5) to use the pipeble operators, in the projects I did the upgrade to v6, I forgot to change a few places. To help us with this task, we can install rxjs-tslint to help us removing all deprecated RxJS code. npm install -g rxjs-tslint rxjs-5-to-6-migrate -p src/tsconfig.app.json As a quick summary regarding the imports: import { Subject } from 'rxjs/Subject'; import { BehaviorSubject } from 'rxjs/BehaviorSubject'; import { Observable } from 'rxjs/Observable'; import { of } from 'rxjs/observable/of;' to: import { BehaviorSubject, Subject, Observable, of } from 'rxjs'; Or, if you were not using RxJS pipeble operators yet:. From: this.http.get('url') .do(console.log) .map(results => results.data) .subscribe(results => { console.log('Results', results); }); To: this.http.get('url') .pipe( tap(console.log), // old 'do' operator map(results => results.data) ) .subscribe(results => { console.log('Results', results); }); After updating your RxJS code, you might still get errors regarding RxJS from third-party dependencies. To solve this, install rxjs-compat and once the dependencies have upgraded their code as well, you can remove this package from your project. npm install --save rxjs-compat 5: Simplifying Dependency Injection for Core Services A new feature introduced in Angular v6 is called “tree-shakable providers”. This means we no longer need to declare services in a module by using the property providedIn, and this will allow the services to be tree-shakable, meaning if they are not being used, they will not be part of the prod bundle. import { Injectable } from '@angular/core'; @Injectable({ providedIn: 'root' }) export class MyCoreService { } I applied this feature in all core services (global scope) of my projects, but I’m still using non tree-shakable providers with services that do not have global scope. This allowed to clean up the providers section from CoreModule. For more information about tree-shakable providers: 6: Updating Angular Material (optional) If you are using Angular Material in your project, don’t forget to run ng update @angular/material to update the material dependencies. A breaking change from v5 to v6 is how we import the Material modules in our project: From: import { MdToolbarModule, MdIconModule, MdButtonModule, MdMenuModule, MdCardModule } from '@angular/material'; To: import { MatToolbarModule } from '@angular/material/toolbar'; import { MatIconModule } from '@angular/material/icon'; import { MatButtonModule } from '@angular/material/button'; import { MatMenuModule } from '@angular/material/menu'; import { MatCardModule } from '@angular/material/card'; Now each module has its own package. This is also one of the reasons I like to create a separate module for thrid party imports as already explained in this article. It makes much easier to fix imports! Some other things… I do have some projects that were created since Angular v2, and after every major version update, I usually just updated package.json and fixed the breaking changes in the code and that was ok. Since there were some project structure changes in Angular CLI v6, I also decided to migrate a project by creating a brand new project with CLI 6 and copying the src folder from the old project to the new project. Below are some of the most impacted changes. The major difference that had some impact on the code is the baseUrl: './' from tsconfig.json. With projects created with CLI 1.x (for Angular v4 and v5), this configuration is not there by default (but inside src/tsconfig.ap.json). Moving baseUrl to the root tsconfig.json had an impact on custom paths declared in tsconfig.json and also on the path for lazy loading modules. Before - custom path in tsconfig.json: paths: { "@env/*": ["environments/*"] } After (single project created with CLI v6): paths: { "@env/*": ["src/environments/*"] } And the lazy loading modules need to be declared using the relative path: Before: { path: 'home', loadChildren: 'app/home/home.module#HomeModule' } After: { path: 'home', loadChildren: './home/home.module#HomeModule' } If you have nested modules, you also need to update them to use relative paths: Before ( module1-routing.module.ts): { path: 'foo', loadChildren: 'app/module1/module2/module2.module#Module2Module' } After ( module1-routing.module.ts): { path: 'foo', loadChildren: './module2/module2.module#Module2Module' } There were a few changes in the CLI v6 commands as well. As the majority of my professional applications use Java in the backend, the output folder ( dist) from ng build is configured to a different path. Until CLI 1.x, there was a flag ( output-path -op) that could be used in the ng build command ( ng build -op ../other/path). With CLI v6, if you need to use a different output path, you need to update angular.json and remove the -op flag from ng build: "architect": { "build": { "builder": "@angular-devkit/build-angular:browser", "options": { "outputPath": "../other/path", ... } } } Please note this is not needed if you upgrade from an existing CLI 1.x project to CLI v6.x using ng update References Hopefully this article might help you migrating your project, and if you find anything else that is relevant, please leave a comment so we can update this article together! :) Happy coding and happy upgrating!
http://loiane.com/2018/05/upgrading-to-angular-v6/
CC-MAIN-2020-29
refinedweb
1,381
53.61
Prim’s MST for Adjacency List Representation in C++ In this tutorial, we will learn about the implementation of Prim’s MST for Adjacency List Representation in C++. MST stands for a minimum spanning tree. We need to calculate the minimum cost of traversing the graph given that we need to visit each node exactly once. We represent the graph by using the adjacency list instead of using the matrix. This reduces the overall time complexity of the process. We follow a greedy approach, wherein we prioritize the edge with the minimum weight. In the standard template library available in c++, we have a data structure called priority queue which functions in a similar manner to the heaps. We enter all the edges along with their weights in the priority queue so that we always extract the edge with the minimum weight. Given an undirected graph, we choose a node as the source node. We enter the neighboring edges of the node in the priority queue(min-heap) and extract the one with the minimum weight. As we go on selecting the nodes, we keep on adding the edges connecting those nodes in the priority queue. This ensures that every time the selected edge has the minimum weight. Say, we have a graph given as: The detailed code below will show how to get the minimum spanning tree using prim’s algorithm. Prim’s MST for Adjacency List Representation in C++ solution #include <iostream> #include<bits/stdc++.h> using namespace std; vector<pair<int, int> > adj_list[7]; //to make the graph vector<bool> visited(7,false); // to keep a track if the node has been already visited or not. Initially all are false as no node is visited vector<int> connection(7,-1); // to track the final connections that the MST has vector<int> value(7, INT_MAX); // to store the minimum weight value possible for a node priority_queue<pair<int, int>, vector<pair<int, int> >, greater<pair<int, int> > > que; //priority queue to extract minimum weights void prims() { que.push(make_pair(0, 1)); //push the weight required to insert the source node =0 and the node itself(i.e 1) value[1]=0; //minimum weight for source is 0 while (!que.empty()) { int node = que.top().second; //get the node visited[node] = true; //as it is visited now change its value to true que.pop(); for (auto neighbor : adj_list[node]) { //we check for all its neighbors int weight = neighbor.second; //get their weight int vertex = neighbor.first; //get their index if (!visited[vertex] && value[vertex] > weight) { //if the node is not visited and if its weight along this edge is less than the value[vertex] = weight; //previous edge associated with it, then only we consider it connection[vertex] = node; que.push(make_pair(value[vertex], vertex)); //we update the values and then push it in the queue to examine its neighbors } } } } void print_graph() { for (int i = 2; i < 7; ++i) printf("%d - %d\n", connection[i], i); //print the connections } } int main() { makegraph(5, 4, 5); //insert the node and edge makegraph(5, 1, 3); //insert the node and edge makegraph(1, 2, 3); //insert the node and edge makegraph(1, 4, 7); //insert the node and edge makegraph(2, 5, 11); //insert the node and edge makegraph(6, 4, 1); //insert the node and edge makegraph(5, 6, 7); //insert the node and edge makegraph(3, 1, 3); //insert the node and edge makegraph(3, 2, 7); //insert the node and edge prims(); //call the function the perform the minimum spanning tree algorithm print_graph(); //print the final minimum spanning tree return 0; } VARIABLE EXPLANATION: Note that all the identifiers are global so that we can freely use them in every function if required. Firstly, let us understand all the variables and their purpose. - We make a vector of pairs(adj_lis) to represent the graph. The pairs are basically the edges of the graph or in other words, the connections. For example, if our graph contains an edge connecting 3 and 4, then the vector will have <3,4> and <4,3>. - The next variable is the vector visited which is used to mark the presence of a vertex in the MST. As we need to take each edge only once, we use this array to make sure that no vertex is visited twice or more. If the index corresponding to a vertex is marked true, then we do not visit it again. - The connections vector stores the final connections of the MST. For simplicity, we have assumed that 7 vertices are present and we represent each vertex as the index of the array. The indexes are used to store the vertex numbers with which they are connected. For example, connection[3]=1 means 3 is connected to 1. - The value vector stores the minimum weight with which we can join a vertex. - The priority queue is used to extract edges with minimum weight at every step. CODE EXPLANATION - Let us start understanding from the main() function. Firstly, we make a graph using the make graph() function which takes in the connections as its parameters and keeps on adding the edges in the graph. The final result is a graph that is represented in the form of an adjacency list. } - Next, the main function calls the prims() function which gives us our MST. - We are using a min-heap in the form of a priority queue so that we can extract the minimum weight possible at every step. - Firstly, we add the source vertex to the queue along with its weight(0) and update its value vector to 0. que.push(make_pair(0, 1)); value[1]=0; - Then, we keep on repeating the following process until we get our MST: At each step, we consider the top element of the(because it has the minimum weight) and mark it as visited. int node = que.top().second; visited[node] = true; que.pop() We check all its neighbors and if they are not already visited and their value is less than that possible with some other edge, we push them to the queue. for (auto neighbor : adj_list[node]) { int weight = neighbor.second; int vertex = neighbor.first; if (!visited[vertex] && value[vertex] > weight) { value[vertex] = weight; connection[vertex] = node; que.push(make_pair(value[vertex], vertex)); } - Finally, we print the MST using the print_graph() function. We get the output as : 1 - 2 1 - 3 5 - 4 1 - 5 4 - 6 Time complexity: ElogV (E: no of edges, V: number of nodes) To read more: Minimum Spanning Tree for Graph in C++
https://www.codespeedy.com/prims-mst-for-adjacency-list-representation-in-cpp/
CC-MAIN-2020-50
refinedweb
1,096
60.65
Background: physical server, about two years old, 7200-RPM SATA drives connected to a 3Ware RAID card, ext3 FS mounted noatime and data=ordered, not under crazy load, kernel 2.6.18-92.1.22.el5, uptime 545 days. Directory doesn't contain any subdirectories, just millions of small (~100 byte) files, with some larger (a few KB) ones. We have a server that has gone a bit cuckoo over the course of the last few months, but we only noticed it the other day when it started being unable to write to a directory due to it containing too many files. Specifically, it started throwing this error in /var/log/messages: ext3_dx_add_entry: Directory index full! The disk in question has plenty of inodes remaining: Filesystem Inodes IUsed IFree IUse% Mounted on /dev/sda3 60719104 3465660 57253444 6% / So I'm guessing that means we hit the limit of how many entries can be in the directory file itself. No idea how many files that would be, but it can't be more, as you can see, than three million or so. Not that that's good, mind you! But that's part one of my question: exactly what is that upper limit? Is it tunable? Before I get yelled at--I want to tune it down; this enormous directory caused all sorts of issues. Anyway, we tracked down the issue in the code that was generating all of those files, and we've corrected it. Now I'm stuck with deleting the directory. A few options here: rm -rf (dir) while [ true ]; do ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null; done ) export i=0; time ( while [ true ]; do ls -Uf | head -n 3 | grep -qF '.png' || break; ls -Uf | head -n 10000 | xargs rm -f 2>/dev/null; export i=$(($i+10000)); echo "$i..."; done ) This seems to be working rather well. As I write this, it's deleted 260,000 files in the past thirty minutes or so. Final script output!: 2970000... 2980000... 2990000... 3000000... 3010000... real 253m59.331s user 0m6.061s sys 5m4.019s So, three million files deleted in a bit over four hours. rm -rfv | pv -l >/dev/null The data=writeback mount option deserves to be tried, in order to prevent journaling of the file system. This should be done only during the deletion time, there is a risk however if the server is being shutdown or rebooted during the delete operation. data=writeback According to this page, Some applications show very significant speed improvement when it is used. For example, speed improvements can be seen (...) when applications create and delete large volumes of small files. Some applications show very significant speed improvement when it is used. For example, speed improvements can be seen (...) when applications create and delete large volumes of small files. The option is set either in fstab or during the mount operation, replacing data=ordered with data=writeback. The file system containing the files to be deleted has to be remounted. fstab data=ordered commit Would it be possible to backup all of the other files from this file system to a temporary storage location, reformat the partition, and then restore the files? Whilst a major cause of this problem is ext3 performance with millions of files, the actual root cause of this problem is different. When a directory needs to be listed readdir() is called on the directory which yields a list of files. readdir is a posix call, but the real linux system call being used here is called 'getdents'. Getdents list directory entries by filling a buffer is entries. The problem is mainly down to the fact that that readdir() uses a fixed buffer size of 32Kb to fetch files. As a directory gets larger and larger (the size increases as files are added) ext3 gets slower and slower to fetch entries and additional readdir's 32Kb buffer size is only sufficient to include a fraction of the entries in the directory. This causes readdir to loop over and over and invoke the expensive system call over and over. For example, on a test directory I created with over 2.6 million files inside, running "ls -1|wc-l" shows a large strace output of many getdent system calls. $ strace ls -1 | wc -l brk(0x4949000) = 0x4949000 getdents(3, /* 1025 entries */, 32768) = 32752 getdents(3, /* 1024 entries */, 32768) = 32752 getdents(3, /* 1025 entries */, 32768) = 32760 getdents(3, /* 1025 entries */, 32768) = 32768 brk(0) = 0x4949000 brk(0x496a000) = 0x496a000 getdents(3, /* 1024 entries */, 32768) = 32752 getdents(3, /* 1026 entries */, 32768) = 32760 ... Additionally the time spent in this directory was significant. $ time ls -1 | wc -l 2616044 real 0m20.609s user 0m16.241s sys 0m3.639s The method to make this a more efficient process is to call getdents manually with a much larger buffer. This improves performance significantly. Now, your not supposed to call getdents yourself manually so no interface exists to use it normally (check the man page for getdents to see!), however you can call it manually and make your system call invocation way more efficient. This drastically reduces the time it takes to fetch these files. I wrote a program that does this. $ time ./dentls bigfolder >out.txt real 0m2.355s user 0m0.326s sys 0m1.995s Almost ten times more efficient! I suspect that the larger the directory the more efficient this would end up being. I've provided the source to this program below. If you want to delete, uncomment the unlink line. This will drastically slow down performance I imagine. It also avoids printing/unlinking anything that is not a file. It will spit out file names to stdout. You should probably redirect this output. You can use that to delete the files outside of the program afterwards if you wanted. /* I can be compiled with the command "gcc -o dentls dentls.c" */ #define _GNU_SOURCE #include <search.h> /* Defines tree functions */ #include <dirent.h> /* Defines DT_* constants */ #include <fcntl.h> #include <stdio.h> #include <unistd.h> #include <stdlib.h> #include <sys/stat.h> #include <sys/syscall.h> #include <sys/types.h> #include <string.h> /* Because most filesystems use btree to store dents * its very important to perform an in-order removal * of the file contents. Performing an 'as-is read' of * the contents causes lots of btree rebalancing * that has significantly negative effect on unlink performance */ /* Tests indicate that performing a ascending order traversal * is about 1/3 faster than a descending order traversal */ int compare_fnames(const void *key1, const void *key2) { return strcmp((char *)key1, (char *)key2); } void walk_tree(const void *node, VISIT val, int lvl) { int rc = 0; switch(val) { case leaf: rc = unlink(*(char **)node); break; /* End order is deliberate here as it offers the best btree * rebalancing avoidance. */ case endorder: rc = unlink(*(char **)node); break; default: return; break; } if (rc < 0) { perror("unlink problem"); exit(1); } } void dummy_destroy(void *nil) { return; } void *tree = NULL; struct linux_dirent { long d_ino; off_t d_off; unsigned short d_reclen; char d_name[256]; char d_type; }; int main(const int argc, const char** argv) { int totalfiles = 0; int dirfd = -1; int offset = 0; int bufcount = 0; void *buffer = NULL; char *d_type; struct linux_dirent *dent = NULL; struct stat dstat; /* Test we have a directory path */ if (argc < 2) { fprintf(stderr, "You must supply a valid directory path.\n"); exit(1); } const char *path = argv[1]; /* Standard sanity checking stuff */ if (access(path, R_OK) < 0) { perror("Could not access directory"); exit(1); } if (lstat(path, &dstat) < 0) { perror("Unable to lstat path"); exit(1); } if (!S_ISDIR(dstat.st_mode)) { fprintf(stderr, "The path %s is not a directory.\n", path); exit(1); } /* Allocate a buffer of equal size to the directory to store dents */ if ((buffer = malloc(dstat.st_size+10240)) == NULL) { perror("malloc failed"); exit(1); } /* Open the directory */ if ((dirfd = open(path, O_RDONLY)) < 0) { perror("Open error"); exit(1); } /* Switch directories */ fchdir(dirfd); while (bufcount = syscall(SYS_getdents, dirfd, buffer, dstat.st_size+10240)) { offset = 0; dent = buffer; while (offset < bufcount) { /* Dont print thisdir and parent dir */ if (!((strcmp(".",dent->d_name) == 0) || (strcmp("..",dent->d_name) == 0))) { d_type = (char *)dent + dent->d_reclen-1; /* Only print files */ if (*d_type == DT_REG) { /* Sort all our files into a binary tree */ if (!tsearch(dent->d_name, &tree, compare_fnames)) { fprintf(stderr, "Cannot acquire resources for tree!\n"); exit(1); } totalfiles++; } } offset += dent->d_reclen; dent = buffer + offset; } } fprintf(stderr, "Total files: %d\n", totalfiles); printf("Performing delete..\n"); twalk(tree, walk_tree); printf("Done\n"); close(dirfd); free(buffer); tdestroy(tree, dummy_destroy); } Whilst this does not combat the underlying fundamental problem (lots of files, in a filesystem that performs poorly at it). Its likely to be much, much faster than many of the alternatives being posted. As a forethought, one should remove the affected directory and remake it after. Directories only ever increase in size and can remain poorly performing even with a few files inside due to the size of the directory. Edit: I revisited this today, because most filesystems store their directory structures in a btree format, the order of which you delete files is also important. One needs to avoid rebalancing the btree when you perform the unlink. As such I added a sort before deletes occur. The program will now (on my system) delete 1000000 files in 43 seconds. The closest program to this was rsync -a --delete which took 60 seconds (which also does deletions in-order, too but does not perform an efficient directory lookup). rsync -a --delete [256] [FILENAME_MAX] There is no per directory file limit in ext3 just the filesystem inode limit (i think there is a limit on the number of subdirectories though). You may still have problems after removing the files. When a directory has millions of files, the directory entry itself becomes very large. The directory entry has to be scanned for every remove operation, and that takes various amounts of time for each file, depending on where its entry is located. Unfortunately even after all the files have been removed the directory entry retains its size. So further operations that require scanning the directory entry will still take a long time even if the directory is now empty. The only way to solve that problem is to rename the directory, create a new one with the old name, and transfer any remaining files to the new one. Then delete the renamed one. make sure you do: mount -o remount,rw,noatime,nodiratime /mountpoint which should speed things up a bit as well. ls very slow command. Try: find /dir_to_delete ! -iname "*.png" -type f -delete Is dir_index set for the f/s ? tune2fs -l | grep dir_index, if not enabling it ... it's usually on for new RH find simply did not work for me, even after changing the ext3 fs's parameters as suggested by the users above. Consumed way too much memory. This PHP script did the trick - fast, insignificant CPU usage, insignificant memory usage: <?php $dir = '/directory/in/question'; $dh = opendir($dir)) { while (($file = readdir($dh)) !== false) { unlink($dir . '/' . $file); } closedir($dh); ?> I posted a bug report regarding this trouble with find: My preferred option is the newfs approach, already suggested. The basic problem is, again as already noted, the linear scan to handle deletion is problematic. rm -rf should be near optimal for a local filesystem (NFS would be different). But at millions of files, 36 bytes per filename and 4 per inode (a guess, not checking value for ext3), that's 40 * millions, to be kept in RAM just for the directory. rm -rf At a guess, you're thrashing the filesystem metadata cache memory in Linux, so that blocks for one page of the directory file are being expunged while you're still using another part, only to hit that page of the cache again when the next file is deleted. Linux performance tuning isn't my area, but /proc/sys/{vm,fs}/ probably contain something relevant. If you can afford downtime, you might consider turning on the dir_index feature. It switches the directory index from linear to something far more optimal for deletion in large directories (hashed b-trees). tune2fs -O dir_index ... followed by e2fsck -D would work. However, while I'm confident this would help before there are problems, I don't know how the conversion (e2fsck with the -D) performs when dealing with an existing v.large directory. Backups + suck-it-and-see. tune2fs -O dir_index ... e2fsck -D -D /proc/sys/fs/vfs_cache_pressure Obviously not apples to apples here, but I setup a little test and did the following: Created 100,000 512-byte files in a directory (dd and /dev/urandom in a loop); forgot to time it, but it took roughly 15 minutes to create those files. dd /dev/urandom Ran the following to delete said files: ls -1 | wc -l && time find . -type f -delete 100000 real 0m4.208s user 0m0.270s sys 0m3.930s This is a Pentium 4 2.8GHz box (couple hundred GB IDE 7200 RPM I think; EXT3). Kernel 2.6.27. rm find -delete Sometimes Perl can work wonders in cases like this. Have you already tried if a small script such as this could outperform bash and the basic shell commands? #!/usr/bin/perl open(ANNOYINGDIR,"/path/to/your/directory"); @files = grep("/*\.png/", readdir(ANNOYINGDIR)); close(ANNOYINGDIR); for (@files) { printf "Deleting %s\n",$_; unlink $_; } Or another, perhaps even faster, Perl approach: #!/usr/bin/perl unlink(glob("/path/to/your/directory/*.png")) or die("Could not delete files, this happened: $!"); EDIT: I just gave my Perl scripts a try. The more verbose one does something right. In my case I tried this with a virtual server with 256 MB RAM and half a million files. time find /test/directory | xargs rm results: time find /test/directory | xargs rm real 2m27.631s user 0m1.088s sys 0m13.229s compared to time perl -e 'opendir(FOO,"./"); @files = readdir(FOO); closedir(FOO); for (@files) { unlink $_; }' real 0m59.042s user 0m0.888s sys 0m18.737s *(oN) I recently faced a similar issue and was unable to get ring0's data=writeback suggestion to work (possibly due to the fact that the files are on my main partition). While researching workarounds I stumbled upon this: tune2fs -O ^has_journal <device> This will turn off journaling completely, regardless of the data option give to mount. I combined this with noatime and the volume had dir_index set, and it seemed to work pretty well. The delete actually finished without me needing to kill it, my system remained responsive, and it's now back up and running (with journaling back on) with no issues. data mount noatime dir_index Well, this is not a real answer, but... Would it be possible to convert the filesystem to ext4 and see if things change? Alright this has been covered in various ways in the rest of the thread but I thought I would throw in my two cents. The performance culprit in your case is probably readdir. You are getting back a list of files that are not necessarily in any way sequential on disk which is causing disk access all over the place when you unlink. The files are small enough that the unlink operation probably doesn't jump around too much zeroing out the space. If you readdir and then sort by ascending inode you would probably get better performance. So readdir into ram (sort by inode) -> unlink -> profit. Inode is a rough approximation here I think .. but basing on your use case it might be fairly accurate... From what I remember the deletion of inodes in ext filesystems is O(n^2), so the more files you delete the faster the rest will go. There was a one time I was faced with similar problem (though my estimates looked at ~7h deletion time), in the end went the jftuga suggested route in first comment. I would probably have whipped out a C compiler and done the moral equivalent of your script. That is, use opendir(3) to get a directory handle, then use readdir(3) to get the name of files, then tally up files as I unlink them and once in a while print "%d files deleted" (and possibly elapsed time or current time stamp). opendir(3) readdir(3) I don't expect it to be noticeably faster than the shell script version, it's just that I'm used to have to rip out the compiler now and again, either because there's no clean way of doing what I want from the shell or because while doable in shell, it's unproductively slow that way. You are likely running into rewrite issues with the directory. Try deleting the newest files first. Look at mount options that will defer writeback to disk. For a progress bar try running something like 'rm -rv /mystuff 2>&1 | pv -brtl > /dev/null' . I haven't benchmarked it, but this guy did: rsync -a --delete ./emptyDirectoty/ ./hugeDirectory/ You could use 'xargs' parallelization features: ls -1|xargs -P nb_concurrent_jobs -n nb_files_by_job rm -rf ls|cut -c -4|sort|uniq|awk '{ print "rm -rf " $1 }' | sh -x actually, this one is a little better if the shell you use does command line expansion: ls|cut -c -4|sort|uniq|awk '{ print "echo " $1 ";rm -rf " $1 "*"}' |sh By posting your answer, you agree to the privacy policy and terms of service. asked 3 years ago viewed 15437 times active yesterday
http://serverfault.com/questions/183821/rm-on-a-directory-with-millions-of-files/185070
CC-MAIN-2014-23
refinedweb
2,911
64.3
Testing. I’m going to cover the most popular and widely used types of tests in this article. This might be nothing new to some of you, but it can at least serve as a refresher. Either way, my goal is that you’re able to walk away with a good idea of the different types of tests out there. Unit. Integration. Accessibility. Visual regression. These are the sorts of things we’ll look at together. And not just that! We’ll also point out the libraries and frameworks that are used for each type of test, like Mocha. Jest, Puppeteer, and Cypress, among others. And don’t worry — I’ll avoid a bunch of technical jargon. That said, you should have some front-end development experience to understand the examples we’re going to cover. OK, let’s get started! What is testing?What is testing? Software testing is an investigation conducted to provide stakeholders with information about the quality of the software product or service under test.Cem Kaner, “Exploratory Testing” (November 17, 2006) At its most basic, testing is an automated tool that finds errors in your development as early as possible. That way, you’re able to fix those issues before they make it into production. Tests also serve as a reminder that you may have forgotten to check your own work in a certain area, say accessibility. In short, front-end testing validates that what people see on the site and the features they use on it work as intended. Front-end testing is for the client side of your application. For example, front-end tests can validate that pressing a “Delete” button properly removes an item from the screen. However, it won’t necessarily check if the item was actually removed from the database — that sort of thing would be covered during back-end testing. That’s testing in a nutshell: we want to catch errors on the client side and fix them before code is deployed. Different tests look at different parts of the projectDifferent tests look at different parts of the project Different types of tests cover different aspects of a project. Nevertheless, it is important to differentiate them and understand the role of each type. Confusing which tests do what makes for a messy, unreliable testing suit. Ideally, you’d use several different types of tests to surface different types of possible issues. Some test types have a test coverage analytic that shows just how much of your code (as a percentage) is looked at by that particular test. That’s a great feature, and while I’ve seen developers aim for 100% coverage, I wouldn’t rely on that metric alone. The most important thing is to make sure all possible edge cases are covered and taken into account. So, with that, let’s turn our attention to the different types of testing. Remember, it’s not so much that you’re expected to use each and every one of these. It’s about being able to differentiate the tests so that you know which ones to use in certain circumstances. Unit testingUnit testing - Level: Low - Scope: Tests the functions and methods of an application. - Possible tools:, AVA, Jasmine, Jest, Karma, Mocha Unit testing is the most basic building block for testing. It looks at individual components and ensures they work as expected. This sort of testing is crucial for any front-end application because, with it, your components are tested against how they’re expected to behave, which leads to a much more reliable codebase and app. This is also where things like edge cases can be considered and covered. Unit tests are particularly great for testing APIs. But rather than making calls to a live API, hardcoded (or “mocked”) data makes sure that your test runs are always consistent at all time. Let’s take a super simple (and primitive) function as an example: const sayHello = (name) => { if (!name) { return "Hello human!"; } return `Hello ${name}!`; }; Again, this is a basic case, but you can see that it covers a small edge case where someone may have neglected to provide a first name to the application. If there’s a name, we’ll get “Hello ${name}!” where ${name} is what we expect the person to have provided. “Um, why do we need to test for something small like that?” you might wonder. There are some very important reasons for this: - It forces you to think deeply about the possible outcomes of your function. More often than not, you really do discover edge cases which helps you cover them in your code. - Some part of your code can rely on this edge case, and if someone comes and deletes something important, the test will warn them that this code is important and cannot be removed. Unit tests are often small and simple. Here’s an example: describe("sayHello function", () => { it("should return the proper greeting when a user doesn't pass a name", () => { expect(sayHello()).toEqual("Hello human!") }) it("should return the proper greeting with the name passed", () => { expect(sayHello("Evgeny")).toEqual("Hello Evgeny!") }) }) describe and it are just syntactic sugar. The most important lines with expect and toEqual. describe and it breaks the test into logical blocks that are printed to the terminal. The expect function accepts the input we want to validate, while toEqual accepts the desired output. There are a lot of different functions and methods you can use to test your application. Let’s say we’re working with Jest, a library for writing units. In the example above, Jest will display the sayHello function as a title in the terminal. Everything inside an it function is considered as a single test and is reported in the terminal below the function title, making everything very easy to read. Integration testingIntegration testing - Level: Medium - Scope: Tests Interactions between units. - Possible tools: AVA, Jest, Testing Library If unit tests check the behavior of a block, integration tests make sure that blocks work flawlessly together. That makes Integration testing super important because it opens up testing interactions between components. It’s very rare (if ever) that an application is composed of isolated pieces that function by themselves. That’s why we rely on integration tests. We go back to the function we unit tested, but this time use it in a simple React application. Let’s say that clicking a button triggers a greeting to appear on the screen. That means a test involves not only the function but also the HTML DOM and a button’s functionality. We want to test how all these parts play together. Here’s the code for a <Greeting /> component we’re testing: export const Greeting = () => { const [showGreeting, setShowGreeting] = useState(false); return ( <div> <p data-{showGreeting && sayHello()}</p> <button data-testid="show-greeting-button" onClick={() => setShowGreeting(true)}>Show Greeting</button> </div> ); }; Here’s the integration test: describe('<Greeting />', () => { it('shows correct greeting', () => { const screen = render(<Greeting />); const greeting = screen.getByTestId('greeting'); const button = screen.getByTestId('show-greeting-button'); expect(greeting.textContent).toBe(''); fireEvent.click(button); expect(greeting.textContent).toBe('Hello human!'); }); }); We already know describe and it from our unit test. They break tests up into logical parts. We have the render function that displays a <Greeting /> component in the special emulated DOM so we can test interactions with the component without touching the real DOM — otherwise, it can be costly. Next up, the test queries <p> and <button> elements via test IDs ( #greeting and #show-greeting-button, respectively). We use test IDs because it’s easier to get the components we want from the emulated DOM. There are other ways to query components, but this is how I do it most often. It’s not until line 7 that the actual integration test begins! We first check that <p> tag is empty. Then we click the button by simulating a click event. And lastly, we check that the <p> tag contains “Hello human!” inside it. That’s it! All we’re testing is that an empty paragraph contains text after a button is clicked. Our component is covered. We can, of course, add input where someone types their name and we use that input in the greeting function. However, I decided to make it a bit simpler. We’ll get to using inputs when we cover other types of tests. Check out what we get in the terminal when running the integration test: <Greeting />component shows the correct greeting when clicking the button. End-to-end (E2E) testingEnd-to-end (E2E) testing - Level: High - Scope: Tests user interactions in a real-life browser by providing it instructions for what to do and expected outcomes. - Possible tools: Cypress, Puppeteer E2E tests are the highest level of testing in this list. E2E tests care only about how people see your application and how they interact with it. They don’t know anything about the code and the implementation. E2E tests tell the browser what to do, what to click, and what to type. We can create all kinds of interactions that test different features and flows as the end user experiences them. It’s literally a robot that’s interacted to click through an application to make sure everything works. E2E tests are similar to integration tests in a sort of way. However, E2E tests are executed in a real browser with a real DOM rather than something we mock up — we generally work with real data and a real API in these tests. It is good to have full coverage with unit and integration tests. However, users can face unexpected behaviors when they run an application in the browser — E2E tests are the perfect solution for that. Let’s look at an example using Cypress, an extremely popular testing library. We are going to use it specifically for an E2E test of our previous component, this time inside a browser with some extra features. Again, we don’t need to see the code of the application. All we’re assuming is that we have some application and we want to test it as a user. We know what buttons to click and the IDs those buttons have. That’s all we really have to go off of. describe('Greetings functionality', () => { it('should navigate to greetings page and confirm it works', () => { cy.visit('') cy.get('#greeting-nav-button').click() cy.get('#greetings-input').type('Evgeny', { delay: 400 }) cy.get('#greetings-show-button').click() cy.get('#greeting-text').should('include.text', 'Hello Evgeny!') }) }) This E2E test looks very similar to our previous integration test. The commands are extremely similar, the main difference being that these are executed in a real browser. First, we use cy.visit to navigate to a specific URL where our application lies: cy.visit('') Second, we use cy.get to get the navigation button by its ID, then instruct the test to click it. That action will navigate to the page with the <Greetings /> component. In fact, I’ve added the component to my personal website and provided it with its own URL route. cy.get('#greeting-nav-button').click() Then, sequentially, we get text input, type “Evgeny,” click the #greetings-show-button button and, lastly, check that we got the desired greeting output. cy.get('#greetings-input').type('Evgeny', { delay: 400 }) cy.get('#greetings-show-button').click() cy.get('#greeting-text').should('include.text', 'Hello Evgeny!') It is pretty cool to watch how the test clicks buttons for you in a real live browser. I slowed down the test a bit so you can see what is going on. All of this usually happens very quickly. Here is the terminal output: Accessibility testingAccessibility testing - Level: High - Scope: Tests the interface of your application against accessibility standards criteria. - Possible tools: AccessLint, axe-core, Lighthouse, pa11y Web accessibility means that websites, tools, and technologies are designed and developed so that people with disabilities can use them.W3C Accessibility tests make sure people with disabilities can effectively access and use a website. These tests validate that you follow the standards for building a website with accessibility in mind. For example, many unsighted people use screen readers. Screen readers scan your website and attempt to present it to users with disability in a format (usually spoken) those users can understand. As a developer, you want to make a screen reader’s job easy and accessibility testing will help you understand where to start. There are a lot of different tools, some of them automated and some that run manually to validate accessibilit. For example, Chrome already has one tool built right into its DevTools. You may know it as Lighthouse. Let’s use Lighthouse to validate the application we made in the E2E testing section. We open Lighthouse in Chrome DevTools, click the “Accessibility” test option, and “Generate” the report. That’s literally all we have to do! Lighthouse does its thing, then generates a lovely report, complete with a score, a summary of audits that ran, and an outline of opportunities for improving the score. But this is just one tool that measures accessibility from its particular lens. We have all kinds of accessibility tooling, and it’s worth having a plan for what to test and the tooling that’s available to hit those points. Visual regression testingVisual regression testing - Level: High - Scope: Tests the visual structure of application, including the visual differences produced by a change in the code. - Possible tools: Cypress, Percy, Applitools Sometimes E2E tests are insufficient to verify that the last changes to your application didn’t break the visual appearance of anything in an interface. Have you pushed the code with some changes to production just to realize that it broke the layout of some other part of the application? Well, you are not alone. Most times than not, changes to a codebase break an app’s visual structure, or layout. The solution is visual regression testing. The way it works is pretty straightforward. Visual test merely take a screenshot of pages or components and compare them with screenshots that were captured in previous successful tests. If these tests find any discrepancies between the screenshots, they’ll give us some sort of notification. Let’s turn to a visual regression tool called Percy to see how visual regression test works. There are a lot of other ways to do visual regression tests, but I think Percy is simple to show in action. In fact, you can jump over to Paul Ryan’s deep dive on Percy right here on CSS-Tricks. But we’ll do something considerably simpler to illustrate the concept. I intentionally broke the layout of our Greeting application by moving the button to the bottom of the input. Let’s try to catch this error with Percy. Percy works well with Cypress, so we can follow their installation guide and run Percy regression tests along with our existing E2E tests. describe('Greetings functionality', () => { it('should navigate to greetings page and confirm everything is there', () => { cy.visit('') cy.get('#greeting-nav-button').click() cy.get('#greetings-input').type('Evgeny', { delay: 400 }) cy.get('#greetings-show-button').click() cy.get('#greeting-text').should('include.text', 'Hello Evgeny!') // Percy test cy.percySnapshot() // HIGHLIGHT }) }) All we added at the end of our E2E test is a one-liner: cy.percySnapshot(). This will take a screenshot and send it to Percy to compare. That is it! After the tests have finished, we’ll receive a link to check our regressions. Here is what I got in the terminal: And here’s what we get from Percy: Performance testingPerformance testing - Level: High - Scope: Tests the application for performance and stability. - Possible Tools: Lighthouse, PageSpeed Insights, WebPageTest, YSlow Performance testing is great for checking the speed of your application. If performance is crucial for your business — and it likely is given the recent focus on Core Web Vitals and SEO — you’ll definitely want to know if the changes to your codebase have a negative impact on the speed of the application. We can bake this into the rest of our testing flow, or we can run them manually. It’s totally up to you how to run these tests and how frequently to run them. Some devs create what’s called a “performance budget” and run a test that calculates the size of the app — and a failed test will prevent a deployment from happening if the size exceeds a certain threshold. Or, test manually every so often with Lighthouse, as it also measures performance metrics. Or combine the two and build Lighthouse into the testing suite. Performance tests can measure anything related to performance. They can measure how fast an application loads, the size of its initial bundle, and even the speed of a particular function. Performance testing is a somewhat broad, vast landscape. Here’s a quick test using Lighthouse. I think it’s a good one to show because of its focus on Core Web Vitals as well as how easily accessible it is in Chrome’s DevTools without any installation or configuration. Wrapping upWrapping up Here’s a breakdown of what we covered: So, is testing for everyone? Yes, it is! Given all the available libraries, services, and tools we have to test different aspects of an application at different points, there’s at least something out there that allows us to measure and test code against standards and expectations — and some of them don’t even require code or configuration! In my experience, many developers neglect testing and think that a simple click-through or post check will help any possible bugs from a change in the code. If you want to make sure your application works as expected, is inclusive to as many people as possible, runs efficiently, and is well-designed, then testing needs to be a core part of your workflow, whether it’s automated or manual. Now that you know what types tests there are and how they work, how are you going to implement testing into your work? Just to add on top of the tools you mention, there are other tools out there that might cover E2E and visual regression with no extra code, one example is If you are designing or documenting your components with Storybook (which I can only recommend), you can do visual regression testing without any additional setup. Just create a project on chromatic.com and upload your stories. No mention of Playwright? I’m looking at it because it seems to have much more modern, sensible and intuitive browser automation APIs than either Cypress or Puppeteer, but I don’t know much more about it. Seems a bit overlooked, perhaps because it’s relatively new in the E2E space? It would have been nice to compare. Really well written article! I like the way you are teaching me Great article covering the basics. Agree with Rasmus about Playwright, which is effectively the successor to Puppeteer. It’s being developed very actively and is already very powerful and useful beyond testing into web automation more generally. Very informative and give insight into the latest tools for automation testing . Great article , thanks.. I had knowledge of only unit testing but was familiar with the others, however this was s great refresher nonetheless. Note: the last section of your article on performance testing has some sentence structural errors that can make it difficult to read.. had to re-read that sections to understand lol. For visual testing another great tool ImsgeMagic and can be integrated with any tool easily with small batch for Totally didn’t mention Chromatic for visual regression testing. We love using it on our team! We are using cumcumber and protactor for a combination to write end-to-end testing. This is specific to angular tho. Works pretty nifty
https://css-tricks.com/front-end-testing-is-for-everyone/?utm_campaign=Web-Crunch%20Digest&utm_medium=email&utm_source=Revue%20newsletter
CC-MAIN-2022-05
refinedweb
3,316
55.84
for connected embedded systems mq_setattr() Set a queue's attributes Synopsis: #include <mqueue.h> int mq_setattr( mqd_t mqdes, const struct mq_attr* mqstat, struct mq_attr* omqstat ); Arguments: - mqdes - The message-queue descriptor, returned by mq_open(), of the message queue that you want to set the attributes of. - mqstat - A pointer to a mq_attr structure that specifies the attributes that you want to use for the message queue. For more information about this structure, see mq_getattr(); for information about which attributes you can set, see below. - omqstat - NULL, or a pointer to a mq_attr structure where the function can store the old attributes of the message queue.: - O_NONBLOCK - No mq_receive() or mq_send() will ever block on this queue. If the queue is in such a condition that the given operation can't be performed without blocking, then an error is returned, and errno is set to EAGAIN. Returns: -1 if the function couldn't change the attributes (errno is set). Any other value indicates success. Errors: - EBADF - Invalid message queue mqdes. Classification: See also: mq_getattr(), mq_open(), mq_receive(), mq_send() mq, mqueue in the Utilities Reference
http://www.qnx.com/developers/docs/6.3.2/neutrino/lib_ref/m/mq_setattr.html
crawl-003
refinedweb
182
55.13
(For more resources related to this topic, see here.) Introducing the Sunspot data Sunspots are dark spots visible on the Sun's surface. This phenomenon has been studied for many centuries by astronomers. Evidence has been found for periodic sunspot cycles. We can download up-to-date annual sunspot data from. This is provided by the Belgian Solar Influences Data Analysis Center. The data goes back to 1700 and contains more than 300 annual averages. In order to determine sunspot cycles, scientists successfully used the Hilbert-Huang transform (refer to). A major part of this transform is the so-called Empirical Mode Decomposition (EMD) method. The entire algorithm contains many iterative steps, and we will cover only some of them here. EMD reduces data to a group of Intrinsic Mode Functions (IMF). You can compare this to the way Fast Fourier Transform decomposes a signal in a superposition of sine and cosine terms. Extracting IMFs is done via a sifting process. The sifting of a signal is related to separating out components of a signal one at a time. The first step of this process is identifying local extrema. We will perform the first step and plot the data with the extrema we found. Let's download the data in CSV format. We also need to reverse the array to have it in the correct chronological order. The following code snippet finds the indices of the local minima and maxima respectively: mins = signal.argrelmin(data)[0] maxs = signal.argrelmax(data)[0] Now we need to concatenate these arrays and use the indices to select the corresponding values. The following code accomplishes that and also plots the data: import numpy as np import sys import matplotlib.pyplot as plt from scipy import signal)) plt.plot(1700 + extrema, data[extrema], 'go') plt.plot(year_range, data) plt.show() We will see the following chart: In this plot, you can see the extrema is indicated with dots. Sifting continued The next steps in the sifting process require us to interpolate with a cubic spline of the minima and maxima. This creates an upper envelope and a lower envelope, which should surround the data. The mean of the envelopes is needed for the next iteration of the EMD process. We can interpolate minima with the following code snippet: spl_min = interpolate.interp1d(mins, data[mins], kind='cubic') min_rng = np.arange(mins.min(), mins.max()) l_env = spl_min(min_rng) Similar code can be used to interpolate the maxima. We need to be aware that the interpolation results are only valid within the range over which we are interpolating. This range is defined by the first occurrence of a minima/maxima and ends at the last occurrence of a minima/maxima. Unfortunately, the interpolation ranges we can define in this way for the maxima and minima do not match perfectly. So, for the purpose of plotting, we need to extract a shorter range that lies within both the maxima and minima interpolation ranges. Have a look at the following code: import numpy as np import sys import matplotlib.pyplot as plt from scipy import signal from scipy import interpolate)) spl_min = interpolate.interp1d(mins, data[mins], kind='cubic') min_rng = np.arange(mins.min(), mins.max()) l_env = spl_min(min_rng) spl_max = interpolate.interp1d(maxs, data[maxs], kind='cubic') max_rng = np.arange(maxs.min(), maxs.max()) u_env = spl_max(max_rng) inclusive_rng = np.arange(max(min_rng[0], max_rng[0]), min(min_rng[-1], max_rng[-1])) mid = (spl_max(inclusive_rng) + spl_min(inclusive_rng))/2 plt.plot(year_range, data) plt.plot(1700 + min_rng, l_env, '-x') plt.plot(1700 + max_rng, u_env, '-x') plt.plot(1700 + inclusive_rng, mid, '--') plt.show() The code produces the following chart: What you see is the observed data, with computed envelopes and mid line. Obviously, negative values don't make any sense in this context. However, for the algorithm we only need to care about the mid line of the upper and lower envelopes. In these first two sections, we basically performed the first iteration of the EMD process. The algorithm is a bit more involved, so we will leave it up to you whether or not you want to continue with this analysis on your own. Moving averages Moving averages are tools commonly used to analyze time-series data. A moving average defines a window of previously seen data that is averaged each time the window slides forward one period. The different types of moving average differ essentially in the weights used for averaging. The exponential moving average, for instance, has exponentially decreasing weights with time. This means that older values have less influence than newer values, which is sometimes desirable. We can express an equal-weight strategy for the simple moving average as follows in the NumPy code: weights = np.exp(np.linspace(-1., 0., N)) weights /= weights.sum() A simple moving average uses equal weights which, in code, looks as follows: def sma(arr, n): weights = np.ones(n) / n return np.convolve(weights, arr)[n-1:-n+1] The following code plots the simple moving average for the 11- and 22-year sunspot cycle: import numpy as np import sys import matplotlib.pyplot as plt data = np.loadtxt(sys.argv[1], delimiter=',', usecols=(1,), unpack=True, skiprows=1) #reverse order data = data[::-1] year_range = np.arange(1700, 1700 + len(data)) def sma(arr, n): weights = np.ones(n) / n return np.convolve(weights, arr)[n-1:-n+1] sma11 = sma(data, 11) sma22 = sma(data, 22) plt.plot(year_range, data, label='Data') plt.plot(year_range[10:], sma11, '-x', label='SMA 11') plt.plot(year_range[21:], sma22, '--', label='SMA 22') plt.legend() plt.show() In the following plot, we see the original data and the simple moving averages for 11- and 22-year periods. As you can see, moving averages are not a good fit for this data; this is generally the case for sinusoidal data. Summary This article gave us examples of signal processing and time series analysis. We looked at shifting continued that performs the first iteration of the EMD process. We also learned about Moving averages, which are tools commonly used to analyze time-series data. Resources for Article: Further resources on this subject: - Advanced Indexing and Array Concepts [Article] - Fast Array Operations with NumPy [Article] - Move Further with NumPy Modules [Article]
https://www.packtpub.com/books/content/signal-processing-techniques
CC-MAIN-2016-30
refinedweb
1,041
59.19
Am Dienstag 05 Januar 2010 02:31:20 schrieb Dale Jordan: > Kind Rand g is basically State g (or, rather StateT g Identity). Looking at the definition of (>>=) for that: instance (Monad m) => Monad (StateT s m) where return a = StateT $ \s -> return (a, s) m >>= k = StateT $ \s -> do ~(a, s') <- runStateT m s runStateT (k a) s' in a1 >> a2, if a2 wants to use the state, it can't do that before a1 is done. iterateR is never finished, so iterateR act >> otherAct can only work if otherAct doesn't use the state (or puts a state before using it). > or point out a better way to > generate an arbitrary-length random list while still being able to > reuse the generator? Sorry, no. > (I'd rather not use split since this generator > doesn't support it and its of dubious soundness.) > > Dale Jordan -------------- next part -------------- An HTML attachment was scrubbed... URL:
http://www.haskell.org/pipermail/haskell-cafe/2010-January/071654.html
CC-MAIN-2013-48
refinedweb
154
63.53
Browser-Based Integration Tests¶ Integration and acceptance tests that use real browsers or simulated browsers are great for really exercising your app. The power they provide comes with some complexity. We’ve collected information on common problems with Capybara, Selenium, and other related tools. Capybara: Subdomains¶ One approach is to set Capybara.app_host. Many folks edit /etc/hosts locally. We suggest using lvh.tddium.com or lvh.me which have wildcard A records in DNS that point .lvh.me and to 127.0.0.1. You can read more about this approach here. Capybara: Blank Pages, SystemExit and 500 Server Error¶ Three common, frustrating errors with Capybara are server timeouts that resulting in Capybara raising the SystemExit exception, failed redirections, and other errors that result in an HTTP 500. By default the stack traces for these types of errors get eaten, making them hard to diagnose. One way to get a stack trace to help debug these sorts of problems is to wrap your application in a stub that logs the error. You can add the following snippet to your Capybara configuration, making sure to change MyApp to the name of your application. You can download this example: ### Show stack trace for 500 errors ### (adapted from) # Given an application, yield to a block to handle exceptions class ExceptionRaiserApp def initialize(app) @app = app end def call(env) @app.call(env) rescue => e backtrace = [ "#{'*'*20} #{e.to_s} #{'*'*20}", '', e.message, '', Rails.backtrace_cleaner.clean(e.backtrace, :silent), '', '*'*80, '' ].join("\n\t") # log the backtrace Rails.logger.error backtrace # re-raise so capybara gets a 500 and knows what's up raise e end end Capybara.app = ExceptionRaiserApp.new(MyApp::Application) Capybara and Selenium tests are often timing dependent and the timing of tests often changes when the test suite is run in parallel in Solano CI. The first step in debugging these test failures is to make sure you have made the changes necessary to run your tests in parallel. There is a separate discussion of parallelism-related Selenium failures in handling failures. Capybara Timeouts and Server Exits¶ If you are seeing Selenium/Webkit tests fail in Solano CI with SystemExit or Timeout::Error exceptions or if you see “Rack application timed out during boot” there are three likely causes: - You’re explicitly setting the Capybara.server_port. Recent versions of Capybara (after 1.1.0), will automatically pick a server port in a robust way. - You’re using an old version of Capybara (earlier than 1.1.0) that is less intelligent at automatic port picking. - You have started your own copy of X11/Xvfb, possibly using the headless gem. Solano CI manages the X11 display for you, so you should not start your own headless display when ENV[‘TDDIUM’] is true. If you can upgrade Capybara, we recommend that you do so. If not, we’ve tested the following patches: If you are using Capybara 0.4.0 or later with Selenium, then add the following to your features/support/env.rb: def find_available_port server = TCPServer.new('127.0.0.1', 0) server.addr[1] ensure server.close if server end if ENV['TDDIUM'] then Capybara.server_port = find_available_port end If you are using Capybara 0.3.9 or earlier, there is no attribute to set the server_port, so a little monkey-patch is needed. Add the following to your features/support/env.rb: require 'socket' class Capybara::Server private def find_available_port server = TCPServer.new('127.0.0.1', 0) @port = server.addr[1] ensure server.close if server end end Capybara hangs when using capybara-webkit¶ Problems with webkit-server hangs have been reported when using some versions of the thin webserver with some versions of libqt, the toolkit used to implement much of webkit. Users have reported success switching to Webrick or mongrel for their capybara app-under-test. Here’s an example configuration to switch to Webrick: # See: Capybara.server do |app, port| require 'rack/handler/webrick' Rack::Handler::WEBrick.run(app, :Port => port, :AccessLog => [], :Logger => WEBrick::Log::new(nil, 0)) end Failure to start capybara-webkit¶ When changing either the capybara-webkit gem or the version of Qt library specified in a configuration file, the dependency cache may need to be dropped and rebuilt. The default dependency cache settings will automatically do so when Gemfile[.lock] files are changed. If a webkit_server failed to start after 15 seconds error occurs, you can increase the timeout value with: Capybara::Webkit.configure do |config| config.class.parent::Server.send(:remove_const, :WEBKIT_SERVER_START_TIMEOUT) config.class.parent::Server::WEBKIT_SERVER_START_TIMEOUT = 30 # default 15 end
http://docs.solanolabs.com/HandlingFailures/browser-based-integration-tests/
CC-MAIN-2017-26
refinedweb
760
50.63
nlcst-is-literal nlcst utility to check if a node is meant literally. Useful if a tool wants to exclude values that are possibly void of meaning. For example, a spell-checker could exclude these literal words, thus not warning about “monsieur”. Install This package is ESM only: Node 12+ is needed to use it and it must be imported instead of required. npm install nlcst-is-literal Use Say we have the following file, example.txt: The word “foo” is meant as a literal. The word «bar» is meant as a literal. The word (baz) is meant as a literal. The word, qux, is meant as a literal. The word — quux — is meant as a literal. And our script, example.js, looks as follows: import {readSync} from 'to-vfile' import {unified} from 'unified' import retextEnglish from 'retext-english' import {visit} from 'unist-util-visit' import {toString} from 'nlcst-to-string' import {isLiteral} from 'nlcst-is-literal' const file = readSync('example.txt') const tree = unified().use(retextEnglish).parse(file) visit(tree, 'WordNode', visitor) function visitor(node, index, parent) { if (isLiteral(parent, index)) { console.log(toString(node)) } } Now, running node example yields: foo bar baz qux quux API This package exports the following identifiers: isLiteral. There is no default export. isLiteral(parent, index|child) Check if the child in parent is enclosed by matching delimiters. If index is given, the child of parent at that index is checked. For example, foo is literal in the following samples: Foo - is meant as a literal. Meant as a literal is - foo. The word “foo” is meant as a literal. Related nlcst-normalize— Normalize a word for easier comparison nlcst-search— Search for patterns Contribute See contributing.md in syntax-tree/.github for ways to get started. See support.md for ways to get help. This project has a code of conduct. By interacting with this repository, organization, or community you agree to abide by its terms.
https://unifiedjs.com/explore/package/nlcst-is-literal/
CC-MAIN-2022-05
refinedweb
323
52.15
Cheap dirty way to send a raw-mysensors message? Hello, is it possible to "inject" a sensor-value into the gateway by hand-crafting a data packet? I recently got some of these little guys () Now my idea was to slap a NRF24 on the ISP header (+2 extra pins for CE and CSN). So far so good .. now I want to craft a simple packet to get data to my gateways .. This is what I got so far but my gateway is not picking up anything. radio.begin(); // Start up the radio radio.setDataRate(RF24_250KBPS); radio.setAutoAck(1); // Ensure autoACK is enabled radio.setRetries(15,15); // Max delay between retries & number of retries byte station_address[] = { 0x00, 0xFC, 0xE1, 0xA8, 0xA8 }; //byte station_address[] = { 0xA8, 0xA8, 0xE1, 0xFC, 0x00 }; radio.openWritingPipe(station_address); // Write to device address 0x00,0xFC,0xE1,0xA8,0xA8 radio.stopListening(); byte test[] = { 250, 250, 0, 0b10000001 /* version_length */, 0b00100001, 7 /* S_HUM */ , 1 /* V_HUM */, 123}; radio.write(test,8); And no .. I can't cram the mysensors library on the ATTINY44. I am currently happe that I got the basic chirp functunality for measuring the humidity of the soil, attiny debug serial and nrf24 all running on 4k of flash. Der Sketch verwendet **4046 Bytes (98%)** des Programmspeicherplatzes. Das Maximum sind 4096 Bytes. Globale Variablen verwenden 150 Bytes (58%) des dynamischen Speichers, 106 Bytes für lokale Variablen verbleiben. Das Maximum sind 256 Bytes. If anybody know if this is even possible I would like to hear from you. Best regards Marc The problem I think is that the gateway needs the node to register on the network so it knows what it should expect receiving, it is a 2 way communication not like those devices using RF link that only transmit and other only receives. Have you evaluated the button size node on openhardware? I think it could be an alternative for your needs @gohan Thanks for your response. The problem is that I already have 5 "chirps" and just later came up with the idea of adding nrf24 to them. I might try to add a "relay" node which will listen for "normal" rf24 packets and then send them to my gateway using the mysensors library. After many hours trial and error I present you .. the raw-mysensors-client!! MyGateway: ESP8266 + mysensorsMQTT MyNode: MyNode-MCU: Attiny44 (MemoryFootprint: 87% of 4KB) ArduinoCore: ArduinoCore-PinLayout: Counterclockwise (like AttinyCore) Note: The default "counterclockwise"pinout is the alternate pinout shown here: The important hint is here () Connect MOSI of the RF24 to PA5 of the attiny and MISO of the RF24 to PA6 of the attiny. CE = 8 CSN = 7 To get the connection done right DONT .. i mean DO NOT BECAUSE IT DOESNT FUCKING WORK connect MOSI of the RF24 to MOSI of the attiny. RF24 uses a embedded implementation of the USI-engine found on some AtTiny's. radio-initilisation radio.begin(); // Start up the radio radio.setDataRate(RF24_250KBPS); radio.setAutoAck(1); // Ensure autoACK is enabled radio.setRetries(15,15); // Max delay between retries & number of retries //radio.setPayloadSize(8); radio.enableDynamicPayloads(); radio.setPALevel(RF24_PA_MAX); byte tx_address[] = { 0x00, 0xFC, 0xE1, 0xA8, 0xA8 }; //byte tx_address[] = { 0xA8, 0xA8, 0xE1, 0xFC, 0x00 }; radio.openWritingPipe(tx_address); radio.stopListening(); Send-Function: void sendHumidity(uint16_t humidity) { // snprintf_P(_fmtBuffer, MY_GATEWAY_MAX_SEND_LENGTH, PSTR("%s/%d/%d/%d/%d/%d"), prefix, message.sender, message.sensor, mGetCommand(message), mGetAck(message), message.type); // X8X22<\0><\n>!<7>7AY0;255;3;0;9;TSF:MSG:READ,50-50-0,s=55,c=1,t=7,pt=1,l=1,sg=0:65<\n> // 0;255;3;0;9;Sending message on topic: domoticz/in/MyMQTT/50/55/1/0/7<\n> #define LAST_NODE_ID 50 // last #define NODE_ID 50 // sender #define GATEWAY_ID 0 // destination // version_length // 5 bit - Length of payload // 1 bit - Signed flag // 2 bit - Protocol version // command_ack_payload // 3 bit - Payload data type // 1 bit - Is ack messsage - Indicator that this is the actual ack message. // 1 bit - Request an ack - Indicator that receiver should send an ack back. // 3 bit - Command type #define SENSOR_TYPE 7 // type S_HUM = 7 #define SENSOR_ID 55 // sensor-id byte test[] = { LAST_NODE_ID, NODE_ID, GATEWAY_ID, 0b00010010, 0b01100001, SENSOR_TYPE, SENSOR_ID, (uint8_t)(humidity & 0x00FF), (uint8_t)((humidity >> 8) & 0x00FF), 0x00 }; radio.write(test,9); } I am not quite sure yet if the message-types etc. are right but I am trying to find this out. 'A' (dec 65) is the "value" of my humidity .. next step is to make this a real value ofc. Proof at domoticz: @gohan: In the file MyConfig.h I commented this out: /** * @def MY_REGISTRATION_FEATURE * @brief If enabled, node has to register to gateway/controller before allowed to send sensor data. */ // #define MY_REGISTRATION_FEATURE This will skip the whole hasse of registering the client properly (which I can't .. remember 87% of flash is full) @cimba007 very nice work! Great way to be compatible with the MySensors protocol/ecosystem on a limited device when the full MySensors feature set is not needed. I might have run into a bug in the calculation of message length. In my handcrafted packet I use this as the 4thy byte send: 0b10000001 So to craft this I looked at MyMessage.h which showed me this: uint8_t version_length; // 2 bit - Protocol version // 1 bit - Signed flag // 5 bit - Length of payload So from my understanding: Protocol Version = 2 => 0b10 Signed Flag = 0 => 0b0 Length of Payload = 1 = 0b00001 Which results in 0b10 0 00001 = 0b10000001 But I get this error: LEN,8!=23 So .. where might this come from? radio.write(test,8); Mycontroller received a packet with the length of 8 but expected a packet with the length of .. 23 ?! #define mGetLength(_message) ((uint8_t)BF_GET(_message.version_length, 3, 5)) //!< Get length field Which in essential is BF_GET .. #define BF_GET(y, start, len) ( ((y)>>(start)) & BIT_MASK(len) ) //!< Extract a bitfield of length 'len' starting at bit 'start' from 'y' So what is happening? BF_GET(0b10000001, 3, 5) ( ((0b10000001)>>(3)) & BIT_MASK(5) ) //!< Extract a bitfield of length 'len' starting at bit 'start' Whoops? This will throw away all the length-information!!! (0b10000001)>>(3) This should result in: 0b11110000 & BIT_MASK5 = 0b1111000 & 0b00011111 = 0b00010000 = 16 decimal const uint8_t expectedMessageLength = HEADER_SIZE + (mGetSigned(_msg) ? MAX_PAYLOAD : msgLength); const uint8_t expectedMessageLength = 7+ 16); // = 23 Yeah .. the lib is right .. 8 != 23 .. but the handcrafted length = 1 + header_length (7) = 8 Am I wrong or is this a bug? Edit: I might have read the comments in the source code the wrong way From time to time I run into this bug .. not sure if this is related: 0;255;3;0;9;TSM:READY:NWD REQ<\n> 0;255;3;0;9;TSF:MSG:SEND,0-0-255-255,s=255,c=3,t=20,pt=0,l=0,sg=0,ft=0,st=OK:<\n> Fatal exception 28(LoadProhibitedCause):<\n> epc1=0x40202704, epc2=0x00000000, epc3=0x00000000, excvaddr=0x00000003, depc=0x00000000<\n> <\r><\n> - NeverDie Hero Member last edited by I wish the chirp had been designed with an atmega328p. Cost and footprint for the smd is similar I believe Its not "that" bad .. the Attiny44A is compatible with Arduino thanks to For now I got Humidity and Voltage(VCC) running up fine. @cimba007 Could you please share your latest working code for generating the packet? I've created an ATTiny841 node with the RFM69CW radio (with a modified RFM69 library to match the radio config registers of MySensors) but the Raspberry Pi gateway is seeing the packets different. Here is the sending part: byte test[] = { 1, // last 1, // sender 255, // destination 0b00001010, // version_length: 2 bit - Protocol version (2), 1 bit - Signed flag (no), 5 bit - Length of payload (1 byte) 0b00100001, // command_ack_payload: 3 bit - Command type (C_SET), 1 bit - Request an ack (no), 1 bit - Is ack message (no), 3 bit - Payload data type (P_BYTE) 16, // type: V_TRIPPED 3, // sensor ID 1 // Value: 1 }; radio.send(255, packet, strlen(packet)); which is received as: DEBUG TSF:MSG:READ,10-255-33,s=0,c=3,t=1,pt=0,l=2,sg=0: DEBUG !TSF:MSG:LEN,6!=9 where none of the fields match the packet Here is the packet definition in core/MyMessage.h: uint8_t last; ///< 8 bit - Id of last node this message passed uint8_t sender; ///< 8 bit - Id of sender node (origin) uint8_t destination; ///< 8 bit - Id of destination node /** * 2 bit - Protocol version<br> * 1 bit - Signed flag<br> * 5 bit - Length of payload */ uint8_t version_length; /** * 3 bit - Command type<br> * 1 bit - Request an ack - Indicator that receiver should send an ack back<br> * 1 bit - Is ack message - Indicator that this is the actual ack message<br> * 3 bit - Payload data type */ uint8_t command_ack_payload; uint8_t type; ///< 8 bit - Type varies depending on command uint8_t sensor; ///< 8 bit - Id of sensor that this message concerns. /* * Each message can transfer a payload. We add one extra byte for string * terminator \0 to be "printable" this is not transferred OTA * This union is used to simplify the construction of the binary data types transferred. */ union { uint8_t bValue; ///< unsigned byte value (8-bit) uint16_t uiValue; ///< unsigned integer value (16-bit) int16_t iValue; ///< signed integer value (16-bit) uint32_t ulValue; ///< unsigned long value (32-bit) int32_t lValue; ///< signed long value (32-bit) struct { //< Float messages float fValue; uint8_t fPrecision; ///< Number of decimals when serializing }; struct { //< Presentation messages uint8_t version; ///< Library version uint8_t sensorType; ///< Sensor type hint for controller, see table above }; char data[MAX_PAYLOAD + 1]; ///< Buffer for raw payload data } __attribute__((packed)); ///< Doxygen will complain without this comment I'll try and log the raw incoming packet on the Pi side just to confirm that the radio config is actually matching. Turns out that the RFM69 arduino library was missing two additional bytes from the packet header -- rfm69_header_t.versionand rfm69_header_t.sequenceNumberin RFM69::sendFrame()(see the diff). select(); SPI.transfer(REG_FIFO | 0x80); // Select the FIFO write register. SPI.transfer(bufferSize + 5); // rfm69_header_t.packetLen SPI.transfer(toAddress); // rfm69_header_t.recipient SPI.transfer(1); // RFM69_PACKET_HEADER_VERSION = (1u) rfm69_header_t.version header version (20180128tk: >=3.0.0 fused with controlFlags) SPI.transfer(_address); // rfm69_header_t.sender SPI.transfer(CTLbyte); // rfm69_header_t.controlFlags SPI.transfer(0); // rfm69_header_t.sequenceNumber for (uint8_t i = 0; i < bufferSize; i++) SPI.transfer(((uint8_t*) buffer)[i]); unselect(); So after adding SPI.transfer(1);as rfm69_header_t.versionand SPI.transfer(0);as rfm69_header_t.sequenceNumberthe packets are now being parsed correctly: DEBUG TSF:MSG:READ,1-1-255,s=3,c=1,t=16,pt=1,l=1,sg=0:1 DEBUG TSF:MSG:BC @kasparsd I'm also interested in using ATTiny with a RFM69CW module as a MySensor node. How have you initialized the radio with your modified library? Can you show a complete example? @cimba007 Can you post the complete code of the chirp including the headers? Did you only include these libraries: #include <SPI.h> #include "nRF24L01.h" #include "RF24.h" ? Suggested Topics Support for new Arduino Hardware platform: WavGat UNO R3 compatible board A few random questions (V_tripped vs V_armed) IEC 62056-21 Energy meter How to read frequency and SWP output from watermark sensor Combine several scetches to 1 node Arduino pro mini dead after flashing MYSbootloader. Get periodical sensor readings from children 1 & 2 while sensor on child 3 is interrupt driven.
https://forum.mysensors.org/topic/7123/cheap-dirty-way-to-send-a-raw-mysensors-message/1
CC-MAIN-2019-09
refinedweb
1,860
54.63
Semaphores are a programming construct designed by E. W. Dijkstra in the late 1960s. Dijkstra's model was the operation of railroads:. In the computer version, a semaphore appears to be a simple integer. A thread waits for permission to proceed and then signals unclear to most of the world, as Dijkstra is Dutch. However, in the interest of true scholarship: P stands for prolagen, a made-up word derived from proberen te verlagen, which means try to decrease. V stands for verhogen, which means increase. ThisRT)). Conceptually, a semaphore is a nonnegative integer count. Semaphores are typically used to coordinate access to resources, with the semaphore count initialized to the number of free resources. Threads then atomically increment the count when resources are added and atomically decrement the count when resources are removed. When the semaphore count becomes zero,a_init(3THTHR).) Multiple threads must not initialize the same semaphore. A semaphore must not be reinitialized while other threads might be using it. sem_init() returns zero after completing successfully. Any other return value indicates that an error occurred. When any of the following conditions occurs, semaphore. When pshared is 0, the semaphore can be used by all the threads in this process only. #include <semaphore.h> sem_t sem; int ret; int count = 4; /* to be used within this process only */ ret = sem_init(&sem, 0, count); When pshared is nonzero, the semaphore can be shared by other processes. #include <semaphore.h> sem_t sem; int ret; int count = 4; /* to be shared among processes */ ret = sem_init(&sem, 1, count); The functions sem_open(3RT), sem_getvalue(3RT), sem_close(3RT), and sem_unlink(3RT)RT), sem_getvalue(3RT), sem_close(3RT), and sem_unlink(3RT). Prototype:. Prototype: int sem_wait(sem_t *sem); #include <semaphore.h> sem_t sem; int ret; ret = sem_wait(&sem); /* wait for semaphore */ Use sema_wait(3THR) to block the calling thread until the count in the semaphore pointed to by sem becomes greater than zero, then atomically decrement it. sem_wait() returns zero after completing successfully. Any other return value indicates that an error occurred. When any of the following conditions occurs,RT) to try to atomically decrement the count in the semaphore pointed to by sem when the count is greater than zero. This function is a nonblocking version of sem_wait(); that is it returns immediately if unsuccessful. sem. Prototype: int sem_destroy(sem_t *sem); #include <semaphore.h> sem_t sem; int ret; ret = sem_destroy(&sem); /* the semaphore is destroyed */ Use sem_destroy(3RT) to destroy any state associated with the semaphore pointed to by sem. The space for storing the semaphore is not freed. (For Solaris threads, see sem_destroy(3THR).) sem_destroy() returns zero after completing successfully. Any other return value indicates that an error occurred. When the following condition occurs, the function fails and returns the corresponding value. sem points to an illegal address.. typedef struct { char buf[BSIZE]; sem_t occupied; sem_t empty; int nextin; int nextout; sem_t pmut; sem_t cmut; } buffer_t; buffer_t buffer; sem_init(&buffer.occupied, 0, 0); sem_init(&buffer.empty,0, BSIZE); sem_init(&buffer.pmut, 0, 1); sem_init(&buffer.cmut, 0, 1); buffer.nextin = buffer.nextout = 0; Another pair of (binary) semaphores plays the same role as mutexes,); }
http://docs.oracle.com/cd/E19683-01/806-6867/6jfpgdcnj/index.html
CC-MAIN-2017-43
refinedweb
520
58.89
Get the highlights in your inbox every week. How to install Python on Windows | Opensource.com How to install Python on Windows Install Python, run an IDE, and start coding right from your Microsoft Windows desktop. Subscribe now. Just because Python is easy to learn doesn't mean you should underestimate its potential power. Python is used by movie studios, financial institutions, IT houses, video game studios, makers, hobbyists, artists, teachers, and many others. On the other hand, Python is also a serious programming language, and learning it takes dedication and practice. Then again, you don't have to commit to anything just yet. You can install and try Python on nearly any computing platform, so if you're on Windows, this article is for you. If you want to try Python on a completely open source operating system, you can install Linux and then try Python. Get Python Python is available from its website, Python.org. Once there, hover your mouse over the Downloads menu, then over the Windows option, and then click the button to download the latest release. Alternatively, you can click the Downloads menu button and select a specific version from the downloads page. Install Python Once the package is downloaded, open it to start the installer. It is safe to accept the default install location, and it's vital to add Python to PATH. If you don't add Python to your PATH, then Python applications won't know where to find Python (which they require in order to run). This is not selected by default, so activate it at the bottom of the install window before continuing! Before Windows allows you to install an application from a publisher other than Microsoft, you must give your approval. Click the Yes button when prompted by the User Account Control system. Wait patiently for Windows to distribute the files from the Python package into the appropriate locations, and when it's finished, you're done installing Python. Time to play. Install an IDE To write programs in Python, all you really need is a text editor, but it's convenient to have an integrated development environment (IDE). An IDE integrates a text editor with some friendly and helpful Python features. IDLE 3 and NINJA-IDE are two options to consider. IDLE 3 Python comes with an IDE called IDLE. You can write code in any text editor, but using an IDE provides you with keyword highlighting to help detect typos, a Run button to test code quickly and easily, and other code-specific features that a plain text editor like Notepad++ normally doesn't have. To start IDLE, click the Start (or Window) menu and type python for matches. You may find a few matches, since Python provides more than one interface, so make sure you launch IDLE. If you don't see Python in the Start menu, launch the Windows command prompt by typing cmd in the Start menu, then type: C:\Windows\py.exe If that doesn't work, try reinstalling Python. Be sure to select Add Python to PATH in the install wizard. Refer to the Python docs for detailed instructions. Ninja-IDE If you already have some coding experience and IDLE seems too simple for you, try Ninja-IDE. Ninja-IDE is an excellent Python IDE. It has keyword highlighting to help detect typos, quotation and parenthesis completion to avoid syntax errors, line numbers (helpful when debugging), indentation markers, and a Run button to test code quickly and easily. To install it, visit the Ninja-IDE website and download the Windows installer. The process is the same as with Python: start the installer, allow Windows to install a non-Microsoft application, and wait for the installer to finish. Once Ninja-IDE is installed, double-click the Ninja-IDE icon on your desktop or select it from the Start menu. Tell Python what to do Keywords tell Python what you want it to do. In either IDLE or Ninja-IDE, go to the File menu and create a new file. Ninja users: Do not create a new project, just a new file. In your new, empty file, type this into IDLE or Ninja-IDE: print("Hello world.") - If you are using IDLE, go to the Run menu and select the Run Module option. - If you are using Ninja, click the Run File button in the left button bar. Any time you run code, your IDE prompts you to save the file you're working on. Do that before continuing. The keyword print tells Python to print out whatever text you give it in parentheses and quotes.That's not very exciting, though. At its core, Python has access to only basic keywords like print and help, basic math functions, and so on. Use the import keyword to load more keywords. Start a new file in IDLE or Ninja and name it pen.py. Warning: Do not call your file turtle.py, because turtle.py is the name of the file that contains the turtle program you are controlling. Naming your file turtle.py confuses Python because it thinks you want to import your own file. Type this code into your file and run it: import turtle Turtle is a fun module to use. Add this code to your file:? Try more complex code: import turtle as t import time t.color("blue") t.begin_fill() counter = 0 while counter < 4: t.forward(100) t.left(90) counter = counter+1 t.end_fill() time.sleep(2) As a challenge, try changing your script to get this result: Once you complete that script, you're ready to move on to more exciting modules. A good place to start is this introductory dice game. Stay Pythonic Python is a fun language with modules for practically anything you can think to do with it. As you can see, it's easy to get started with Python, and as long as you're patient with yourself, you may find yourself understanding and writing Python code with the same fluidity as you write your native language. Work through some Python articles here on Opensource.com, try scripting some small tasks for yourself, and see where Python takes you. To really integrate Python with your daily workflow, you might even try Linux, which is natively scriptable in ways no other operating system is. You might find yourself, given enough time, using the applications you create! Good luck, and stay Pythonic. 6 Comments You may have to also consider whether you are going to use Python for scripting in some API-enabled piece of software. Scribus requires at least Python 2.7, and I'm not sure if Python 3+ will work. At the same time, the Scribus that you download from Sourceforge has Python 2.7 built into it. This is a somewhat limited Python, though, and you may want/need other packages. The trick is to install Python 2.7 or better and any packages you need, then go to the directory where the Scribus-installed Python is and either rename it, or delete it*. When Scribus starts, it will first look for its own Python, and if it can't find it, check the system. * Hint: on general principles, it's better first to just rename and make sure your Python works with Scribus before deleting it in Scribus. Yes, I am informed that you would have to use Python 2.7 with Scribus, so this illustrates that different parent software might require specific versions of Python. Great point, Greg. Thanks for the reminder. It's also of note that you can probably have 2 versions of Python installed on Windows - you certainly can with Linux. Great! Tx! and yes... in fact i have 2 versions of Python installed rigth now Thanks for providing in-depth instructions to install Python. I previously Messed up with Python Installation, where i took Microsoft Windows 10 Support help to Reset it. By following your Steps, I could do it very easily and quickly. You Step wise Images helped me a lot which is very easy to understand and implement. Thanks, Mahesh
https://opensource.com/article/19/8/how-install-python-windows
CC-MAIN-2020-05
refinedweb
1,357
73.47
This is a pretty rough start on my code, and my output is producing 0 for every single count. public class VoteCount { public static void main(String[] args) { Scanner input = new Scanner(System.in); final int FLAG = -999; int votes; System.out.print("Enter votes for candidates 0-9. End with " + FLAG); votes = input.nextInt(); int [] number; number = new int [votes]; int [] candidate; candidate = new int [10]; for (int i= 2; i < 10; i ++) { candidate [i] = 0; } System.out.println ("Candidate Number, Number of Votes"); for (int i = 0; i < 10; i++) System.out.println( i + " " + (candidate[i])); System.out.println(); } } This post has been edited by lolitacharm: 28 June 2011 - 12:21 PM
http://www.dreamincode.net/forums/topic/237382-java-election-arrays-keep-tally-of-votes/
CC-MAIN-2017-09
refinedweb
114
60.41
oo::object - NAME - oo::object — root class of the class hierarchy - SYNOPSIS - CLASS HIERARCHY - DESCRIPTION - CONSTRUCTOR - DESTRUCTOR - EXPORTED METHODS - NON-EXPORTED METHODS - obj eval ?arg ...? - obj unknown ?methodName? ?arg ...? - obj variable ?varName ...? - obj varname varName - obj <cloned> sourceObjectName - EXAMPLES - SEE ALSO - KEYWORDS NAMEoo::object — root class of the class hierarchy SYNOPSISpackage require TclOO oo::object method ?arg ...? CLASS HIERARCHYoo::object DESCRIPTIONThe oo::object class is the root class of the object hierarchy; every object is an instance of this class. Since classes are themselves objects, they are instances of this class too. Objects are always referred to by their name, and may be renamed while maintaining their identity. Instances of objects may be made with either the create or new methods of the oo::object object itself, or by invoking those methods on any of the subclass objects; see oo::class for more details. The configuration of individual objects (i.e., instance-specific methods, mixed-in classes, etc.) may be controlled with the oo::objdefine command. Each object has a unique namespace associated with it, the instance namespace. This namespace holds all the instance variables of the object, and will be the current namespace whenever a method of the object is invoked (including a method of the class of the object). When the object is destroyed, its instance namespace is deleted. The instance namespace contains the object's my command, which may be used to invoke non-exported methods of the object or to create a reference to the object for the purpose of invocation which persists across renamings of the object. CONSTRUCTORThe oo::object class does not define an explicit constructor. DESTRUCTORThe oo::object class does not define an explicit destructor. EXPORTED METHODSThe oo::object class supports the following exported methods: - obj destroy - This method destroys the object, obj, that it is invoked upon, invoking any destructors on the object's class in the process. It is equivalent to using rename to delete the object command. The result of this method is always the empty string. NON-EXPORTED METHODSThe oo::object class supports the following non-exported methods: - obj eval ?arg ...? - This method concatenates the arguments, arg, as if with concat, and then evaluates the resulting script in the namespace that is uniquely associated with obj, returning the result of the evaluation. - obj unknown ?methodName? ?arg ...? - This method is called when an attempt to invoke the method methodName on object obj fails. The arguments that the user supplied to the method are given as arg arguments. If methodName is absent, the object was invoked with no method name at all (or any other arguments). The default implementation (i.e., the one defined by the oo::object class) generates a suitable error, detailing what methods the object supports given whether the object was invoked by its public name or through the my command. - obj variable ?varName ...? - This method arranges for each variable called varName to be linked from the object obj's unique namespace into the caller's context. Thus, if it is invoked from inside a procedure then the namespace variable in the object is linked to the local variable in the procedure. Each varName argument must not have any namespace separators in it. The result is the empty string. - obj varname varName - This method returns the globally qualified name of the variable varName in the unique namespace for the object obj. - obj <cloned> sourceObjectName - This method is used by the oo::object command to copy the state of one object to another. It is responsible for copying the procedures and variables of the namespace of the source object (sourceObjectName) to the current object. It does not copy any other types of commands or any traces on the variables; that can be added if desired by overriding this method in a subclass. EXAMPLESThis example demonstrates basic use of an object. set obj [oo::object new] $obj foo → error "unknown method foo" oo::objdefine $obj method foo {} { my variable count puts "bar[incr count]" } $obj foo → prints "bar1" $obj foo → prints "bar2" $obj variable count → error "unknown method variable" $obj destroy $obj foo → error "unknown command obj"
https://core.tcl-lang.org/tcloo/wiki?name=Doc:+oo::object
CC-MAIN-2021-31
refinedweb
687
52.39
The basic three steps for using variables in the C language: These steps must be completed in that order. The declaration is a statement ending with a semicolon: type name; type is the variable type: char, int, float, double, and other specific types. name is the variable's name. A variable's name must not be the name of a C language keyword or any other variable name that was previously declared. The name is case sensitive. You can add numbers, dashes, or underscores to the variable name, but always start the name with a letter. The equal sign is used to assign a value to a variable. The format is very specific: variable = value; variable is the variable's name. value is either an immediate value, a constant, an equation, another variable, or a value returned from a function. After the statement is executed, the variable holds the value that's specified. In the following code four variable types are declared, assigned values, and used in printf() statements. #include <stdio.h> int main() // ww w .j av a 2 s . c o m { char c; int i; float f; double d; c = 'a'; i = 1; f = 19.0; d = 23456.789; printf("%c\n",c); printf("%d\n",i); printf("%f\n",f); printf("%f\n",d); return(0); }
http://www.java2s.com/example/c-book/variables-declaration.html
CC-MAIN-2018-39
refinedweb
220
75.71
Class names in Ruby are Constants, so the first letter should be a capital. class Cat # correct end class dog # wrong, throws an error end You can define a new class using the class keyword. class MyClass end Once defined, you can create a new instance using the .new method somevar = MyClass.new # => #<MyClass:0x007fe2b8aa4a18> A class can have only one constructor, that is a method called initialize. The method is automatically invoked when a new instance of the class is created. class Customer def initialize(name) @name = name.capitalize end end sarah = Customer.new('sarah') sarah.name #=> 'Sarah' There are several special variable types that a class can use for more easily sharing data. Instance variables, preceded by @. They are useful if you want to use the same variable in different methods. class Person def initialize(name, age) my_age = age # local variable, will be destroyed at end of constructor @name = name # instance variable, is only destroyed when the object is end def some_method puts "My name is #{@name}." # we can use @name with no problem end def another_method puts "My age is #{my_age}." # this will not work! end end mhmd = Person.new("Mark", 23) mhmd.some_method #=> My name is Mark. mhmd.another_method #=> throws an error Class variable, preceded by @@. They contain the same values across all instances of a class. class Person @@persons_created = 0 # class variable, available to all objects of this class def initialize(name) @name = name # modification of class variable persists across all objects of this class @@persons_created += 1 end def how_many_persons puts "persons created so far: #{@@persons_created}" end end mark = Person.new("Mark") mark.how_many_persons #=> persons created so far: 1 helen = Person.new("Helen") mark.how_many_persons #=> persons created so far: 2 helen.how_many_persons #=> persons created so far: 2 # you could either ask mark or helen Global Variables, preceded by $. These are available anywhere to the program, so make sure to use them wisely. $total_animals = 0 class Cat def initialize $total_animals += 1 end end class Dog def initialize $total_animals += 1 end end bob = Cat.new() puts $total_animals #=> 1 fred = Dog.new() puts $total_animals #=> 2 We have three methods: attr_reader: used to allow reading the variable outside the class. attr_writer: used to allow modifying the variable outside the class. attr_accessor: combines both methods. class Cat attr_reader :age # you can read the age but you can never change it attr_writer :name # you can change name but you are not allowed to read attr_accessor :breed # you can both change the breed and read it def initialize(name, breed) @name = name @breed = breed @age = 2 end def speak puts "I'm #{@name} and I am a #{@breed} cat" end end my_cat = Cat.new("Banjo", "birman") # reading values: my_cat.age #=> 2 my_cat.breed #=> "birman" my_cat.name #=> Error # changing values my_cat.age = 3 #=> Error my_cat.breed = "sphynx" my_cat.name = "Bilbo" my_cat.speak #=> I'm Bilbo and I am a sphynx cat Note that the parameters are symbols. this works by creating a method. class Cat attr_accessor :breed end Is basically the same as: class Cat def breed @breed end def breed= value @breed = value end end Ruby has three access levels. They are public, private and protected. Methods that follow the private or protected keywords are defined as such. Methods that come before these are implicitly public methods. A public method should describe the behavior of the object being created. These methods can be called from outside the scope of the created object. class Cat def initialize(name) @name = name end def speak puts "I'm #{@name} and I'm 2 years old" end ... end new_cat = Cat.new("garfield") #=> <Cat:0x2321868 @ new_cat.speak #=> I'm garfield and I'm 2 years old These methods are public ruby methods, they describe the behavior for initializing a new cat and the behavior of the speak method. public keyword is unnecessary, but can be used to escape private or protected def MyClass def first_public_method end private def private_method end public def second_public_method end end Private methods are not accessible from outside of the object. They are used internally by the object. Using the cat example again: class Cat def initialize(name) @name = name end def speak age = calculate_cat_age # here we call the private method puts "I'm #{@name} and I'm #{age} years old" end private def calculate_cat_age 2 * 3 - 4 end end my_cat = Cat.new("Bilbo") my_cat.speak #=> I'm Bilbo and I'm 2 years old my_cat.calculate_cat_age #=> NoMethodError: private method `calculate_cat_age' called for #<Cat:0x2321868 @ As you can see in the example above, the newly created Cat object has access to the calculate_cat_age method internally. We assign the variable age to the result of running the private calculate_cat_age method which prints the name and age of the cat to the console. When we try and call the calculate_cat_age method from outside the my_cat object, we receive a NoMethodError because it's private. Get it? Protected methods are very similar to private methods. They cannot be accessed outside the instance of object in the same way private methods can't be. However, using the self ruby method, protected methods can be called within the context of an object of the same type. class Cat def initialize(name, age) @name = name @age = age end def speak puts "I'm #{@name} and I'm #{@age} years old" end # this == method allows us to compare two objects own ages. # if both Cat's have the same age they will be considered equal. def ==(other) self.own_age == other.own_age end protected def own_age self.age end end cat1 = Cat.new("ricky", 2) => #<Cat:0x007fe2b8aa4a18 @name="ricky", @age=2> cat2 = Cat.new("lucy", 4) => #<Cat:0x008gfb7aa6v67 @name="lucy", @age=4> cat3 = Cat.new("felix", 2) => #<Cat:0x009frbaa8V76 @name="felix", @age=2> You can see we've added an age parameter to the cat class and created three new cat objects with the name and age. We are going to call the own_age protected method to compare the age's of our cat objects. cat1 == cat2 => false cat1 == cat3 => true Look at that, we were able to retrieve cat1's age using the self.own_age protected method and compare it against cat2's age by calling cat2.own_age inside of cat1. Classes have 3 types of methods: instance, singleton and class methods. These are methods that can be called from an instance of the class. class Thing def somemethod puts "something" end end foo = Thing.new # create an instance of the class foo.somemethod # => something These are static methods, i.e, they can be invoked on the class, and not on an instantiation of that class. class Thing def Thing.hello(name) puts "Hello, #{name}!" end end It is equivalent to use self in place of the class name. The following code is equivalent to the code above: class Thing def self.hello(name) puts "Hello, #{name}!" end end Invoke the method by writing Thing.hello("John Doe") # prints: "Hello, John Doe!" These are only available to specific instances of the class, but not to all. # create an empty class class Thing end # two instances of the class thing1 = Thing.new thing2 = Thing.new # create a singleton method def thing1.makestuff puts "I belong to thing one" end thing1.makestuff # => prints: I belong to thing one thing2.makestuff # NoMethodError: undefined method `makestuff' for #<Thing> Both the singleton and class methods are called eigenclasses. Basically, what ruby does is to create an anonymous class that holds such methods so that it won't interfere with the instances that are created. Another way of doing this is by the class << constructor. For example: # a class method (same as the above example) class Thing class << self # the anonymous class def hello(name) puts "Hello, #{name}!" end end end Thing.hello("sarah") # => Hello, sarah! # singleton method class Thing end thing1 = Thing.new class << thing1 def makestuff puts "I belong to thing one" end end thing1.makestuff # => prints: "I belong to thing one" Classes can be created dynamically through the use of Class.new. # create a new class dynamically MyClass = Class.new # instantiate an object of type MyClass my_class = MyClass.new In the above example, a new class is created and assigned to the constant MyClass. This class can be instantiated and used just like any other class. The Class.new method accepts a Class which will become the superclass of the dynamically created class. # dynamically create a class that subclasses another Staffy = Class.new(Dog) # instantiate an object of type Staffy lucky = Staffy.new lucky.is_a?(Staffy) # true lucky.is_a?(Dog) # true The Class.new method also accepts a block. The context of the block is the newly created class. This allows methods to be defined. Duck = Class.new do def quack 'Quack!!' end end # instantiate an object of type Duck duck = Duck.new duck.quack # 'Quack!!' In many languages, new instances of a class are created using a special new keyword. In Ruby, new is also used to create instances of a class, but it isn't a keyword; instead, it's a static/class method, no different from any other static/class method. The definition is roughly this: class MyClass def self.new(*args) obj = allocate obj.initialize(*args) # oversimplied; initialize is actually private obj end end allocate performs the real 'magic' of creating an uninitialized instance of the class Note also that the return value of initialize is discarded, and obj is returned instead. This makes it immediately clear why you can code your initialize method without worrying about returning self at the end. The 'normal' new method that all classes get from Class works as above, but it's possible to redefine it however you like, or to define alternatives that work differently. For example: class MyClass def self.extraNew(*args) obj = allocate obj.pre_initialize(:foo) obj.initialize(*args) obj.post_initialize(:bar) obj end end
https://sodocumentation.net/ruby/topic/264/classes
CC-MAIN-2021-21
refinedweb
1,635
67.55
I was going through and commenting c++ source code to gain a better understanding of it, when I came across this range checking code. I would like to know what the maximum call to LargeAllocate would be and how or why range checking might be done in this way. Also if you see errors in my comments please let me know. Thanks in advance dword declaration: code (it stretches 160 lengthwise so you may want to copy it into you favorite text editor first)code (it stretches 160 lengthwise so you may want to copy it into you favorite text editor first)Code:typedef unsigned int dword; Edit: I think I made a mistake in my comments. Is it true that the constants (255 and 32767) are up-converted for comparison to the length of dword, and if so would it be unsigned or signed?Edit: I think I made a mistake in my comments. Is it true that the constants (255 and 32767) are up-converted for comparison to the length of dword, and if so would it be unsigned or signed?Code://///////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// // // MEMBER FUNCTION: idHeap::Allocate // // Allocate memory based off of the bytes needed. There is some funky magic here with checking ranges and I am not sure of the speed increase over // the loss in readability // // ~~ GLOBAL VARIABLE: c_heapAllocRunningCount // Gets incremented to keep the count of heap allocations current. // // ~~ MACRO: USE_LIBC_MALLOC // Forces use of standard C allocation functions for debugging and preformance testing. // /////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////// void* idHeap::Allocate(const dword bytes){ //Checks to see if bytes is a zero, if it is then there isn't anything to allocate if(!bytes){ //FIX. COULD WE NOT JUST CHECK TO SEE IF IT WAS LESS THAN 1 HERE AND AVOID THE MAGIC LATER? if(bytes < 1){ return NULL; } //Increment the count of heap allocations c_heapAllocRunningCount++; //Allocate memory via our custom functions or standard C functions #if USE_LIBC_MALLOC return malloc(bytes); #else //Check if bytes is within the range 0 to 255 (it was checked earlier in the function to be non-zero so the desired range is actually 1 to 255). This is //done by "not-ing" the value 255 (denoted by the ~) which is represented by the compiler as two's compliment -- meaning 255 in binary looks like 01111111. //Not-ing it would give the value 10000000 (-256). So if you preform a bitwise "and" on the number 'bytes' then only values within the range 0 to 255 //would return 0. Also, 255 is the signed one byte integer maximum. if(!(bytes & ~255)){ return SmallAllocate(bytes); } //Same as the previous check except that the desired range is 1 to the maximum value of a signed 2 byte integer. if(!(bytes & ~32767)){ return MediumAllocate(bytes); } //This basically means that the unsigned 4 byte integer 'bytes' is greater than the range 1 to 32,767 (or over 15 bit in length). In turn that means bytes' //is in the range 32,768 (short int maximum) to 4,294,967,295 (unsigned int maximum) or 4gb. However, apparently c++ has it so you cant have constant decimal //numbers over 2,147,483,647 or 2gb in a function call or variable assignment -- so as of this revision I don't know what the maximum call to this //would be (2gb or 4gb). return LargeAllocate(bytes); #endif }
https://cboard.cprogramming.com/cplusplus-programming/144307-help-complicated-strange-magic.html
CC-MAIN-2017-26
refinedweb
553
64.34
platformio and mysensors for atmega328p on breadboard? Hi all, I am new to MySensors, and I have only limited experience with platformio and Arduino programming in general. I was able to get the temperature sensor built with a Nano and an NRF24L01+ I had laying around -- worked great! As usual, I used platformio to manage the dependencies and the build (I don't care much for the Arduino editor or library and board manager). Now I want to make a version of the same sketch using an atmega328p on a breadboard. I figure it should work as it's the same MCU. However, I'm running into a slew of errors, mostly "not declared in this scope." My INO file is identical to the one from the temp sensor tutorial. In file included from .piolibdeps/MySensors_ID548/MySensors.h:257:0, from rf24_temp/src/tempsensor.ino:38: .piolibdeps/MySensors_ID548/core/MyTransport.cpp: In function 'void stInitTransition()': .piolibdeps/MySensors_ID548/core/MyTransport.cpp:80:45: error: 'hwReadConfigBlock' was not declared in this scope sizeof(transportConfig_t)); ^ .piolibdeps/MySensors_ID548/core/MyTransport.cpp: In function 'void stInitUpdate()': .piolibdeps/MySensors_ID548/core/MyTransport.cpp:108:61: error: 'hwWriteConfig' was not declared in this scope hwWriteConfig(EEPROM_NODE_ID_ADDRESS, (uint8_t)MY_NODE_ID); ^ .piolibdeps/MySensors_ID548/core/MyTransport.cpp: In function 'void stUplinkUpdate()': .piolibdeps/MySensors_ID548/core/MyTransport.cpp:232:43: error: 'hwMillis' was not declared in this scope _transportSM.lastUplinkCheck = hwMillis(); ^ .piolibdeps/MySensors_ID548/core/MyTransport.cpp: In function 'void transportSwitchSM(transportState_t&)': .piolibdeps/MySensors_ID548/core/MyTransport.cpp:342:37: error: 'hwMillis' was not declared in this scope _transportSM.stateEnter = hwMillis(); // save time ^ .piolibdeps/MySensors_ID548/core/MyTransport.cpp: In function 'uint32_t transportTimeInState()': .piolibdeps/MySensors_ID548/core/MyTransport.cpp:347:18: error: 'hwMillis' was not declared in this scope return hwMillis() - _transportSM.stateEnter; So far I've found a few useful links: - - - What I've tried (with a pio run target=cleanbefore each attempted build): - Adding #include <Arduino.h>to the top of my sketch (nothing changes) - Specifying build_flags = -I/path/to/MySensors - Specifying lib_ignore = MySensors I'm using: - MacOS 10.13.1 - platformio 3.4.1 - MySensors 2.1.1 My platformio.ini: [env:328p8m] platform = atmelavr board = 328p8m framework = arduino upload_protocol = usbasp lib_deps = MySensors DallasTemperature build_flags = -I../MySensors lib_ignore = MySensors Any ideas? Again, this builds fine with the same code but using a nano as the target platform, just won't build for the atmega328p for some reason. Hi, welcome to the mysensors forum. About your problem, I'm not entirely sure that is mysensors related. Have you asked in the platformio forum? @gohan -- thanks! Sorry for the delayed response, thought I had email notifications for replies set up, but I guess not. Yeah, I'm not sure either, but I've used platformio for a number of other projects without running into this issue, so I thought there might be people here using MySensors with platformio that could perhaps shed some light. I'll ask over at their community forum first, and if I don't have any luck will likely make issues at both repos. Link to my cross-post there I have installed the mysensors lib from platformio (platformio lib install xx, or something like that) And my platformio.ini file look like this: [env:pro8MHzatmega328] platform = atmelavr framework = arduino board = pro8MHzatmega328 I think you could remove the build_flags and the lib_ignore Solved with some changes in the Dallas temp library (and platformio 3.5 which allows installing a library from a git url). My thread there:
https://forum.mysensors.org/topic/8029/platformio-and-mysensors-for-atmega328p-on-breadboard
CC-MAIN-2022-33
refinedweb
574
50.63
Ravel How to flatten the elements of a NumPy array into a 1D NumPy array. as py from plotly.tools import FigureFactory as FF import numpy as np In [2]: import plotly.plotly as py from plotly.tools import FigureFactory as FF pi_array = np.array([[3, 1, 4, 1], [5, 9, 2, 6], [5, 3, 5, 8], [9, 7, 9, 3]]) ravel_array = np.ravel(pi_array) fig = FF.create_annotated_heatmap([ravel_array], colorscale='Earth' ) fig.layout.title = '$\pi$' py.iplot(fig, filename='numpy-ravel-pi') Out[2]: In [2]: help(np.ravel) Help on function ravel in module numpy.core.fromnumeric: ravel(a, order='C')) Parameters ---------- a : array_like Input array. The elements in `a` are read in the order specified by `order`, and packed as a 1-D array. order : {'C','F', 'A', 'K'}, optional The elements of `a` are read using this index order. 'C' means to index the elements in row-major, C-style order, with the last axis index changing fastest, back to the first axis index changing slowest. 'F' means to index the elements in column-major, Fortran-style order, with the first index changing fastest, and the last index changing slowest. Note that the 'C' and 'F' options take no account of the memory layout of the underlying array, and only refer to the order of axis indexing. 'A' means to read the elements in Fortran-like index order if `a` is Fortran *contiguous* in memory, C-like order otherwise. 'K' means to read the elements in the order they occur in memory, except for reversing the data when strides are negative. By default, 'C' index order is used. Returns ------- y : array_like If `a` is a matrix, y is a 1-D ndarray, otherwise y is an array of the same subtype as `a`. The shape of the returned array is ``(a.size,)``. Matrices are special cased for backward compatibility. See Also -------- 'A', it will preserve the array's 'C' or 'F' ordering: >>> print(np.ravel(x.T)) [1 4 2 5 3 6] >>> print(np.ravel(x.T, order='A')) [1 2 3 4 5 6] When ``order`` is 'K', it will preserve orderings that are neither 'C' nor 'F',])
https://plot.ly/numpy/ravel/
CC-MAIN-2019-26
refinedweb
363
58.99
(This is kind of a bonus, but it shows how minor the change is to bootbzImages)Skipping over the "cli" and segment loading is enough to allow lguestto boot bzImages. There are some "out" insns in the unpacking code,but lguest already has to emulate/skip-over them because of random x86probes during boot.We can no longer assume the launcher has set the bss to zero: we nowneed to zero it ourselves.Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>--- Documentation/lguest/lguest.c | 134 +++++++------------------------------- arch/i386/boot/compressed/head.S | 6 + drivers/lguest/core.c | 8 +- drivers/lguest/lguest.c | 2 4 files changed, 41 insertions(+), 109 deletions(-)diff -r 73d71b701360 arch/i386/boot/compressed/head.S--- a/arch/i386/boot/compressed/head.S Fri May 04 22:49:34 2007 +1000+++ b/arch/i386/boot/compressed/head.S Fri May 04 22:51:49 2007 +1000@@ -32,6 +32,11 @@ .globl startup_32 startup_32:+#ifdef CONFIG_PARAVIRT+ movl %cs,%eax+ andl $0x3,%eax+ jnz calc_delta+#endif cld cli movl $(__BOOT_DS),%eax@@ -48,6 +53,7 @@ startup_32: * data at 0x34-0x3f are used as the stack for this calculation. * Only 4 bytes are needed. */+calc_delta: leal 0x40(%esi), %esp call 1f 1: popl %ebpdiff -r 73d71b701360 drivers/lguest/lguest.c--- a/drivers/lguest/lguest.c Fri May 04 22:49:34 2007 +1000+++ b/drivers/lguest/lguest.c Fri May 04 22:51:49 2007 +1000@@ -805,6 +805,8 @@ static unsigned lguest_patch(u8 type, u1 * every routine we have to override to avoid privileged instructions. */ __init void do_lguest_init(void *boot) {+ memset(__bss_start, 0, __bss_stop - __bss_start);+ /* Copy boot parameters first: the Launcher put the physical location * in %esi, and head.S converted that to a virtual address and handed * it to us. */diff -r 73d71b701360 Documentation/lguest/lguest.c--- a/Documentation/lguest/lguest.c Fri May 04 22:49:34 2007 +1000+++ b/Documentation/lguest/lguest.c Fri May 04 22:53:31 2007 +1000@@ -205,74 +205,30 @@ static unsigned long map_elf(int elf_fd, return ehdr->e_entry; } -/*L:160 Unfortunately the entire ELF image isn't compressed: the segments- * which need loading are extracted and compressed raw. This denies us the- * information we need to make a fully-general loader. */-static unsigned long unpack_bzimage(int fd)-{- gzFile f;- int ret, len = 0;- /* A bzImage always gets loaded at physical address 1M. This is- * actually configurable as CONFIG_PHYSICAL_START, but as the comment- * there says, "Don't change this unless you know what you are doing".- * Indeed. */- void *img = (void *)0x100000;-- /* gzdopen takes our file descriptor (carefully placed at the start of- * the GZIP header we found) and returns a gzFile. */- f = gzdopen(fd, "rb");- /* We read it into memory in 64k chunks until we hit the end. */- while ((ret = gzread(f, img + len, 65536)) > 0)- len += ret;- if (ret < 0)- err(1, "reading image from bzImage");-- verbose("Unpacked size %i addr %p\n", len, img);-- /* Entry is physical address: convert to virtual */- return (unsigned long)img;-}--/*L:150 A bzImage, unlike an ELF file, is not meant to be loaded. You're- * supposed to jump into it and it will unpack itself. We can't do that- * because the Guest can't run the unpacking code, and adding features to- * lguest kills puppies, so we don't want to.- *- * The bzImage is formed by putting the decompressing code in front of the- * compressed kernel code. So we can simple scan through it looking for the- * first "gzip" header, and start decompressing from there. */+/*L:150 A bzImage, unlike an ELF file, is not meant to be mapped into memory.+ * You're supposed to jump into it and it will unpack itself. We used to have+ * to perform some hairy magic becuase we couldn't run the unpacking code, and+ * adding features to lguest kills puppies, so we didn't want to.+ *+ * Fortunately, Jeremy Fitzhardinge convinced me it wasn't that hard to fix, so+ * now we just read the funky header so we know where in the file to load, and+ * away we go. */ static unsigned long load_bzimage(int fd) {- unsigned char c;- int state = 0;-- /* GZIP header is 0x1F 0x8B <method> <flags>... <compressed-by>. */- while (read(fd, &c, 1) == 1) {- switch (state) {- case 0:- if (c == 0x1F)- state++;- break;- case 1:- if (c == 0x8B)- state++;- else- state = 0;- break;- case 2 ... 8:- state++;- break;- case 9:- /* Seek back to the start of the gzip header. */- lseek(fd, -10, SEEK_CUR);- /* One final check: "compressed under UNIX". */- if (c != 0x03)- state = -1;- else- return unpack_bzimage(fd);- }- }- errx(1, "Could not find kernel in bzImage");+#warning document this with reference to Documentation/i386/boot.txt+ u8 hdr[0x300];+ int r;+ void *p = (void *)0x100000;++ lseek(fd, 0, SEEK_SET);+ read(fd, hdr, sizeof(hdr));++ lseek(fd, (unsigned long)(hdr[0x1F1]+1) * 512, SEEK_SET);++ while ((r = read(fd, p, 65535)) > 0)+ p += r;++ return *(unsigned long *)&hdr[0x214]; } /*L:140 Loading the kernel is easy when it's a "vmlinux", but most kernels-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
https://lkml.org/lkml/2007/5/4/205
CC-MAIN-2016-26
refinedweb
858
62.38
>>>>> On Fri, 26 Nov 2004 09:24:58 -0500, Stefan Monnier <address@hidden> said: >> To benefit from this on Mac OS X, we need to define POSIX_SIGNALS >> in src/s/darwin.h. In src/s/freebsd.h, there is a comment as: > Hmm... shouldn't this be defined automatically by the configure > script? I agree with you. For using SYNC_INPUT properly in Carbon Emacs, we need some more changes. Could you see the following patch (also appeared in) for the systems that use polling, and install if it is correct? YAMAMOTO Mitsuharu address@hidden Index: src/keyboard.c =================================================================== RCS file: /cvsroot/emacs/emacs/src/keyboard.c,v retrieving revision 1.791 diff -c -r1.791 keyboard.c *** src/keyboard.c 20 Aug 2004 10:34:12 -0000 1.791 --- src/keyboard.c 4 Sep 2004 08:49:50 -0000 *************** *** 2097,2103 **** --- 2097,2107 ---- struct atimer *timer; { if (poll_suppress_count == 0) + #ifdef SYNC_INPUT + interrupt_input_pending = 1; + #else poll_for_input_1 (); + #endif } #endif /* POLL_FOR_INPUT */
https://lists.gnu.org/archive/html/emacs-devel/2004-11/msg01230.html
CC-MAIN-2019-35
refinedweb
158
60.92
Mercurial > dropbear view libtommath/bn_mp_grow.c @ 457:e430a26064ee DROPBEAR_0.50 Make dropbearkey only generate 1024 bit keys line source #include <tommath.h> #ifdef BN_MP], */ /* grow as required */ int mp_grow (mp_int * a, int size) { int i; mp_digit *tmp; /* if the alloc size is smaller alloc more ram */ if (a->alloc < size) { /* ensure there are always at least MP_PREC digits extra on top */ size += (MP_PREC * 2) - (size % MP_PREC); /* reallocate the array a->dp * * We store the return in a temporary variable * in case the operation failed we don't want * to overwrite the dp member of a. */ tmp = OPT_CAST(mp_digit) XREALLOC (a->dp, sizeof (mp_digit) * size); if (tmp == NULL) { /* reallocation failed but "a" is still valid [can be freed] */ return MP_MEM; } /* reallocation succeeded so set a->dp */ a->dp = tmp; /* zero excess digits */ i = a->alloc; a->alloc = size; for (; i < a->alloc; i++) { a->dp[i] = 0; } } return MP_OKAY; } #endif /* $Source: /cvs/libtom/libtommath/bn_mp_grow.c,v $ */ /* $Revision: 1.3 $ */ /* $Date: 2006/03/31 14:18:44 $ */
https://hg.ucc.asn.au/dropbear/file/e430a26064ee/libtommath/bn_mp_grow.c
CC-MAIN-2022-40
refinedweb
166
59.74
java.lang.Object java.lang.Throwablejava.lang.Throwable java.lang.Exceptionjava.lang.Exception PIRL.Database.Database_ExceptionPIRL.Database.Database_Exception public class Database_Exception The excpetion thrown from the PIRL.Database package. public static final String ID public static final String DISCONNECTED_STATE public Database_Exception(String message, Throwable cause) The ID of this class will preceed the message. If a none-null message is specified it will be preceed with a new-line character. If a non-null cause is specified its message will be appended to the specified message after a new-line character. Appending the cause's message to the specified message is done for backwards compatibility with the exceptions that did not support a chained cause. This can result in applications that expect to chain the cause messages themselves to get unexpected redundant messages. N.B.: Once a cause is set it can not be changed. Therefore, though it is allowed to set the cause to null, the Database_Exception will not do this to avoid the situation of not being able to replace a null cause with a valid cause. Use the Throwable.initCause(Throwable) method to provide a cause after the Configuration_Exception has been constructed without one. message- The exception's message String (may be null). cause- The exception's Throwable cause (may be null). If the cause is null, no cause is set in the exception object. public Database_Exception(Throwable cause) The exception will have the ID of this class as its message. cause- The exception's Throwable cause (may be null). public Database_Exception(String message) message- The exception's message String (may be null). Database_Exception(String, Throwable) public Database_Exception() The exception will have the ID of this class as its message. Database_Exception(String, Throwable) public boolean Disconnected() If the cause of the Database_Exception was itself a Database_Exception it's cause is recursivly checked. DISCONNECTED_STATE. public static String exception_String(Throwable throwable) The String returned by the Throwable.getMessage() method is used. If the throwable is null or its message is null, the empty String is used. This String has words that should be hidden masked out. If the throwable is a SQLException its message is preceed with a "SQL exception -" line and appended with two lines listing the SQL state and error codes of the exception. throwable- The Throwable from which to generate a message. public static String exception_String(Exception exception) exception- The Exception from which to generate a message. exception_String(Throwable) public static String masked_String(String string) When the word "password", "Password" or "PASSWORD" is encountered in the string the following word is replaced with "*******". The delimiters used to identify word characters are any of the " \t\n\r&:=" characters. string- The String to be masked. Words.Mask(Vector), Words.Delimiters(String) public static SQLException masked_SQLException(SQLException SQL_exception) A new SQLException is constructed containing a copy of the message and state description from the specified SQLException with any passwords masked out in the message. The stack track is also copied. If the SQLException has a chained cause, that is copied over. This method is applied recursively to any SQLException chained to the specified exception, and the resulting new SQLException used to replace the chained exception. In effect, a deep copy of the specified SQLException is made with the message being masked in the process. SQL_exception- The SQLException to be masked and copied.
https://pirlwww.lpl.arizona.edu/software/PIRL_Java_Packages/PIRL/Database/Database_Exception.html
CC-MAIN-2018-51
refinedweb
554
56.25
Your Account by Simon St. Laurent Related link: XSLT has long been a tool I've used, but as little as possible. After two days of training, I'm both remembering why that is and learning ways to work around the aspects of the language that have bothered me for years. Getting a more complete understanding of XSLT's context and intent makes a huge difference. XSLT is, in many ways, alien territory to "classical programmers", whether those programmers came from procedural or object-oriented styles. Variables don't vary, but they are namespace-qualified. Whitespace doesn't matter most of the time, but it creeps up in unexpected ways that foil seemingly obvious ways to do things. Scope issues frequently confound approaches that briefly looked promising, though recursion often holds a solution. Parentheses aren't nearly as harmless as they are in the languages I'm used to, like Java. Meanwhile, there's frequently more than one way to do things, and the differences, if any, are often subtle. While this might sound daunting, it's actually made the training a lot more interesting because of interactions you can have much more easily in person. A lot of the fun in this class has been Ken Holman's phrasing problems in terms which make great sense to programmers, but which aren't the best way to solve the problem in XSLT. After we visit the possibilities, we find a better answer. While much of the course is lecture, exercises and general interplay give us all (myself included) a chance to make mistakes both in private and in public. These aren't the kinds of interactions you can have in a half-day tutorial, never mind a 45- or 90-minute presentation. By the end of this, we'll have experienced firsthand - and solved - most of the common (and mysterious) problems that arise in XSLT development. My fellow trainees are an interesting mix of Canadians and Americans, docheads and dataheads. Only two people (of ten) are using DTDs for their XML, but it doesn't seem that everyone's rushed to embrace the latest in XML technology, as almost no one is using XML namespaces in their work either. (I use them about half the time.) Some attendees have prior XSLT experience, but everyone arrived knowing XML already. This course is explicitly about XSLT 1.0 and XPath 1.0, but XSLT 2.0 and XPath 2.0 have come up a few times. Sometimes it's been in the context of problems which those new specs will solve better, but at least as often the future is dark, cloudy, or unclear. It's hard to imagine covering those mammoth specifications in this kind of detail - I have a hard time imagining any but the most dedicated sitting through the ten days of training those specs would seem to demand. Larger specs create problems that go way beyond the scope of the problems they were meant to solve. Some of Ken Holman's choice tidbits are well worth repeating, though they may make more sense to you if you know XSLT. "Many new users don't realize how little XML has done. And maybe that's why it's so successful." (To be fair, he meant that XML has succeeded by doing little, not that users didn't get it.) "// is the biggest waste of [processing] time in stylesheets... This isn't to say that // doesn't have a role at times." "Not intuitive, and not in the REC." Perhaps the most important quote I've heard from Ken, the one which most clearly tells me the boundaries where XSLT is useful or not, is: "All we're doing is building a result tree." If I have a problem I can (reasonably) describe as result-tree building, XSLT is a good candidate for the work. If that's a stretch, there's probably another tool that's more appropriate. We'll see if my brain can keep up after a few more days of this, but right now I'm very pleased with a better understanding of problems that have troubled me for years. Ever build a result tree? © 2015, O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://archive.oreilly.com/pub/post/xslt_training_day_2.html
CC-MAIN-2015-06
refinedweb
726
72.26
Michael Albinus <address@hidden> writes: > address@hidden (Phillip Lord) writes: > >> Michael > > Hi Philip, > >> I think that there is a problem with this commit, in that the default >> selector is only used for "make check". By default "make check-maybe" >> runs all tests (including the expensive ones). So, you have to do >> >> make check-maybe> >> I think it makes more sense for check-maybe to skip expensive tests, >> unless told otherwise, as "make check-maybe" is a good candidate for use >> pre-commit. > > I see. Before touching the Makefile, we shall agree how all the targets > shall behave. I would say, that "check" and "check-maybe" shall skip the > expensive tests. "check-expensive", "<foo>", and "<foo>.log" shall run > all tests. Sorry for slow reply! I would say that, yes, both check and check-maybe should skip expensive tests by default. I think this actually contradicts the GNU coding standards, but possibly it's these that need updating for very slow tests. I'd also agree about <foo>, <foo>.log (on master <foo>.log is lisp/<foo.log>, and lisp/<foo> also exists). However, I also think that these should also respond to setting SELECTOR on the command line. The problem with this patch: +SELECTOR_DEFAULT = (quote (not (tag :expensive-test))) +SELECTOR_EXPENSIVE = nil +SELECTOR = +check-maybe: + @${MAKE} check-doit SELECTOR="${SELECTOR_DEFAULT}" + is that "make check-maybe SELECTOR=nil" doesn't actually run the expensive tests as it should. That was the reason for my original, rather more complicated, suggestion. I've attached a complete patch below (actually tested this time!), which I think works. Would this make sense to you? Phil diff --git a/test/automated/Makefile.in b/test/automated/Makefile.in index 152e601..2534a65 100644 --- a/test/automated/Makefile.in +++ b/test/automated/Makefile.in @@ -89,7 +89,13 @@ WRITE_LOG = > $@ 2>&1 || { stat=ERROR; cat $@; }; echo $$stat: $@ ## Beware: it approximates 'no-byte-compile', so watch out for false-positives! SELECTOR_DEFAULT = (quote (not (tag :expensive-test))) SELECTOR_EXPENSIVE = nil -SELECTOR = +ifndef SELECTOR +SELECTOR_ACTUAL=$(SELECTOR_DEFAULT) +else +SELECTOR_ACTUAL=$(SELECTOR) +endif + + %.log: ${srcdir}/%.el @if grep '^;.*no-byte-compile: t' $< > /dev/null; then \ loadfile=$<; \ @@ -100,7 +106,7 @@ SELECTOR = echo Testing $$loadfile; \ stat=OK ; \ $(emacs) -l ert -l $$loadfile \ - --eval "(ert-run-tests-batch-and-exit ${SELECTOR})" ${WRITE_LOG} + --eval "(ert-run-tests-batch-and-exit ${SELECTOR_ACTUAL})" ${WRITE_LOG} ELFILES = $(sort $(wildcard ${srcdir}/*.el)) LOGFILES = $(patsubst %.el,%.log,$(notdir ${ELFILES})) @@ -123,7 +129,7 @@ $(foreach test,${TESTS},$(eval $(call test_template,${test}))) ## Rerun all default tests. check: mostlyclean - @${MAKE} check-doit SELECTOR="${SELECTOR_DEFAULT}" + @${MAKE} check-doit SELECTOR="${SELECTOR_ACTUAL}" ## Rerun all default and expensive tests. .PHONY: check-expensive @@ -133,7 +139,7 @@ check-expensive: mostlyclean ## Only re-run default tests whose .log is older than the test. .PHONY: check-maybe check-maybe: - @${MAKE} check-doit SELECTOR="${SELECTOR_DEFAULT}" + @${MAKE} check-doit SELECTOR="${SELECTOR_ACTUAL}" ## Run the tests. .PHONY: check-doit
http://lists.gnu.org/archive/html/emacs-devel/2016-01/msg00868.html
CC-MAIN-2019-13
refinedweb
469
51.44
Final Report Issues & Challenges From Decision XG, I have built a set of very small and simple ontologies that illustrate the idea, see below. - First you have an ontology about people. It is very simple, just the class Person and a few datatype properties (name and age). - Then you have an ontology about animals. It is equally simple, defining the class Animal and a few datatype properties (name and age). - I can use each of the ontologies independently of each other. If I just want to talk about people I use one, if I just need to say something about an animal I use the other. But what if I want to say something about both, e.g. that a person can have a pet which is an animal? Let's make another ontology about pets, import the two previous ontologies and then add an object property "hasPet". - Well, but what if my use-case is more complex? I want to also reason on the people in a knowledge base, and define a class called "animal lover" which is defined as the people who has some pet so that I can classify my instances? Simple, I just create a new ontology, import my pet ontology and define a new class for the animal lovers that are the people who have some pet (all the other classes and properties I need are already defined in the imported ontologies, so I just add one class and one restriction). - Finally, I may have the use-case that I want to add some data, but I don't want to distribute the data with the ontology of course. Simple, I just make an RDF file that reuses the vocabulary of my ontology (in this case the pet ontology and its imports). Then the complete knowledge base, can be combined in another file if needed. - If we take the same approach for decision representation, we can create a "family" of small vocabularies, where some are very simple and some are much more complex, but they all build on one another to support different use-cases. As long as we give good documentation for each one it should be easy to choose the right "level of complexity" when somebody wants to reuse the format, they don't have to care about the complex stuff if they don't want to. - Are there downsides? The only one I can think of now is that these small modules will have different namespaces, so the more complex an ontology you wan tot reuse, the more namespaces you have to keep track of.
https://www.w3.org/2005/Incubator/decision/wiki/Final_Report_Issues_%26_Challenges
CC-MAIN-2018-51
refinedweb
433
66.78
Have you ever thought about jamming the Wi-Fi connection network? Now days Wi-Fi password hacking is very common, so by jamming the Wi-Fi network you can block or jam any Wi-Fi connection, and no one is able to connect to that Wi-Fi network even after knowing the password. This can be done with a tiny Microcontroller ESP12E, which is also referred as Wi-Fi module or NodeMCU. If you are new to this small but powerful chip, then go through Getting started with ESP12 article. ESP is very popular for Wi-Fi tricks like creating a fake Wi-Fi network, serving your own page to steals someone’s password, block the Wi-Fi network etc. Even ESPs are being sold, with all the software flashed on them for doing these tricks, you just need to Plug and Play. But here we are creating our own Wi-Fi jammer. Technically, we are not making a jammer but a Deauther. There is a small difference between these. A Jammer sends noise signals to the Wi-Fi spectrum (2.4GHz) thus disturbing original Wi-Fi frequency spectrum. While a Deauther sends packets to interfere with your Wi-Fi signals thus disrupting the normal working of your Wi-Fi router. It behaves like a jammer. There is a Wi-Fi protocol called 802.11 which act as a deauthentication frame. This is used to safely disconnect all the users connected with the router. To disconnect any device from some Wi-Fi network, it is not important to know the password or to be in the network, you just need mac address of the Wi-Fi router and client device and it is enough to be in its range of that Wi-Fi network. Disclaimer: It is illegal to use jammer in the public areas without taking permission of govt. authority. This tutorial is just for educational purpose. Do it on your risk. Two Methods to make Wi-Fi jammer with NodeMCU There are plenty of available Codes or firmware to make NodeMCU as Wi-Fi jammer. You just need to burn the code or firmware into NodeMCU. Here we have selected two stable and easy methods, using which you can use NodeMCU to act as Wi-Fi jammer. 1. Uploading Jammer Arduino sketch into ESP12. For this method we will use Arduino code and Library written by Spacehuhn and it is very long code so we will use this code to directly upload in our NodeMCU using Arduino IDE. 2. Uploading Wi-Fi Jammer firmware into ESP12 using ESP8266 flasher. For this method we need Jammer firmware for NodeMCU which be downloaded from the given links: - ESP8266 flasher - Deauther Firmware – It is basically a .bin file .It is available for three NodeMCU versions depending on flash memory (1MB, 4MB and 512Kb). Download the version according to your board specification. In my case, board version is 1MB. Method 1: Uploading Jammer Sketch using Arduino IDE Let’s start with uploading the Arduino code Step 1:- Go to File -> Preferences in Arduino IDE and add this link to the Additional Boards Manager URLs and Click on OK. Close the Arduino IDE and Reopen it. Step 2:- Click on Tools -> Board -> Board Manager. Search for ESP8266. You must select version 2.0.0. This code will work only for this version. If you have already installed another versions, remove it and install 2.0.0 Step 3:- Again go to File -> Preferences and click the folder path under More preferences. Step 4:- Now, open the packages -> esp8266 -> hardware -> esp8266- > 2.0.0 -> tools -> sdk -> include and open the user_interface.h file with the text editor. Step 5:- Come to last line of the code and before #endif and); Then Save the file. Step 6:- Extract the library that you have downloaded earlier and open it. Open esp8266_deauther-master -> esp8266_deauther -> esp8266_deauther.ino This is the sketch which will be uploaded in the NodeMCU. Compile this sketch. If there is a error then you have to install these libraries: Now, your code is ready to upload. Connect NodeMCU to the PC, Choose NodeMCU esp-12E board from tools menu, choose the correct port and hit the upload button. Running the NodeMCU Wi-Fi Jammer Reset your ESP12 board after uploading the code and open the Serial Monitor. You will see this information on serial monitor: Step 1:- Now, connect your laptop or smartphone with Access Point created by NodeMCU. Name of AP is “pwned” and password is “deauther” These are default name and password which you can see on serial monitor . Step 2:- Open your browser and enter this address 192.168.4.1 . You will see a warning, read it and click on I have read and understood Step 3:- After this you will see the window given below. Click on Scan APs to search for the WiFi networks available. Now, click on Reload. Step 4:- Click on the WiFi network which you want to Jam. You can choose more than one but it will make your NodeMCU unstable. Step 5:- Click on Attacks and you will see that you have chosen one target for attack. To start the Attack click on start and then Reload. You have successfully jammed the network. To stop the attack click on stop button. Make a fake WiFi network If you want to make fake WiFi networks i.e. Beacons. Click on SSIDs above and name the SSIDs as you want .Add and save it. Come back to Attacks menu and click on Start in front of Beacon. You can check in your mobile or PC that wifi name that you have created will be displayed but it will not connect with this fake network it is just a WiFi spam. Method 2: Uploading Firmware using ESP8266 flasher Now, we will see the second method where we have to upload a firmware in ESP12 using Esp8266 flasher. It is easy to use and you don’t have to do any extra work or editing in sketch as we have done in previous method. Step 1:- Open the esp8266flasher.exe file. Step 2:- Click on Config and and then setting icon. Choose the .bin file that you have downloaded for your board and click on Operations. Step 3:- Click on Flash and it will start your uploading process. Wait for few minutes and your firmware uploading is finished. To run this firmware ,reset your nodemcu and all the steps for running the NodeMCU Wi-Fi Jammer are same as we have done in case of previous method using Arduino sketch. As you can see this method is very easy and more stable than Arduino version. So, I’ll recommend using this method for better performance. You can use mobile App instead of going to browser to access the portal. Download the app from this link. Interface of this app is same as the webpage. You can power your NodeMCU using your smartphone. For this you will need an OTG cable and your portable WiFi jammer is ready for work but use this device on your risk as already warned!! So with readily available Code or Firmware, its become very easy to jam or overlaod any wifi netwrok so that nobody is able to connect with it, but again use it carefully. Aug 18, 2018 i had create a project on arduino which is for fly drone.. but for few reason im not able to compile my program
https://circuitdigest.com/microcontroller-projects/diy-wifi-jammer-using-nodemcu-esp12
CC-MAIN-2020-50
refinedweb
1,249
82.34
BBC micro:bit Feline Detection System Introduction This project is a simple prototype for a Feline Detection System (FDS). The FDS is meant to detect the presence of a cat. Project Requirements - You will need a micro:bit. - You can do this from USB power but it will be easier to set up with an optional battery pack. - You will need two cables with crocodile clip connectors. - You will need to make two electrical contacts for the test subject to touch. Tin foil is my product of choice. You need to choose a suitable test subject. Here was my willing participant, You must respect your test subject and show due regard for his/her safety. Felis Catus is not generally known to work for free. You will need something to interest the subject. A supply of cat treats may be also be required. Clip a crocodile from pin 0 to one piece of foil. Clip the other one to GND. Python Program A simple program is used. All that happens is that an image is displayed on the LED matrix when the cat touches both contacts at the same time. from microbit import * while True: if pin0.is_touched(): display.show(Image.GHOST) else: display.clear() sleep(10) Results Bait the trap and then wait. The test subject places his back legs on the first electrical contact. As he eats the cat treats, he makes a connection to the second piece of foil with his front paws or with his tongue. The light goes on to register the presence of a cat. Challenges - Obviously, there could be more action when the cat is detected. Cats react differently to buzzer noises. Some are captivated by melodies, some don't like it at all. - Notice the way that the sensor is designed. Two conductive pads are placed close enough together so that a living being can complete the circuit through natural movement. Place the pads so that a footstep would complete the circuit and you have a stomp pad. Anything that touches both pads simultaneously is going to set off the switch. Experiment away.
http://multiwingspan.co.uk/micro.php?page=feline
CC-MAIN-2017-22
refinedweb
351
76.42
Each source file can contain one public class. The source file's name has to be the name of that class. By convention, the source file uses a .java filename extension (a tail end of a file name that marks the file as being of a particular type of file.) So, for a class declared as such: public class HelloWorld{ ... The file it is stored in should be named HelloWorld.java. The capitalization should be the same in both cases. Some operating systems don't notice whether file names are capitalized or not. It doesn't matter, you should be in the habit of using the correct capitalization in case you work on a system that does care about capitalization. When you compile your file with javac, you pass the full file name to javac: javac HelloWorld.java Let's say we save the file under a different name. We write the following program: public class HelloThere{ public static void main(String arg[]){ System.out.println("Hello There."); } } Then we save it as HelloWorld.java, and try to compile it: >javac HelloWorld.java FileName.java:1: class HelloThere is public, should be declared in a file named HelloThere.java public class HelloThere{ ^ 1 error Java lets us know that it won't compile until we rename the file appropriately (according to its rules.) So let's rename the file. Let's call it HelloThere.javasource. Seems a bit more explicit than just .java, right? Let's run the compiler: >javac HelloThere.javasource error: Class names, 'HelloThere.javasource', are only accepted if annotation processing is explicitly requested 1 error Java's still not happy with us. Annotation processing? That's when we include extra information in the program about the program itself. We're not bothering with that just now. So we should just name the file HelloThere.java, and not get fancy with our file names. But, under the right circumstances, javac does allow file name extensions other than .java. That's why we always type in the full file name, including .java, when we use javac. We say 'javac HelloThere.java', not just 'javac HelloThere'. Javac can't assume that we mean a .java file, though that's what it will usually be. The Class File Once we make javac happy with a proper file name, and a program with no errors, javac produces a new file. This file will have the original file name, but with .java replaced with .class. This is your bytecode file, the file that the Java Virtual Machine can run. When we run the program with Java, we're running the .class file. In the case of HelloThere, we're running the HelloThere.class file. But we don't type in the full file name. Why? Unlike javac, java requires a .class file. That's all it will work with. There's no opportunity to have a different extension to the file name. So it assumes the .class part of the file name. But that's not the whole story. If you add .class yourself, here's what you'll get: >java HelloThere.class Exception in thread "main" java.lang.NoClassDefFoundError: HelloThere/class Caused by: java.lang.ClassNotFoundException: HelloThere) Pretty ugly. What we're actually doing when we type "java HelloThere" is telling Java to run the class HelloThere. Java assumes that it will find this in a file called "HelloThere.class", so that's what it's looking for first. We're not telling Java to run the file HelloThere.class, we're telling it to run the class HelloThere, which it expects to find in the file HelloThere.class. But what if we ask for another class that doesn't have its own .class file? Just for fun, let's change HelloThere.java like this, and see what happens: public class HelloThere{ public static void main(String[] arg){ System.out.println("Hello."); } } class HelloZoik{ public static void main(String[] arg){ System.out.println("Zoiks!"); } } After we edit it, we compile with 'javac HelloThere.java' and hold our breath. Hurray! No errors! Now we have a second class, HelloZoik, in the HelloThere.class file. Can Java find it? Let's try: java HelloZoik Zoiks! It worked! Java found our class inside HelloThere.class. This shows it's not the file name that we're calling with the 'java' command, it's the class name. If Java doesn't find the class inside a file with the same name as the class followed by .class, it'll look in the other .class files available.
https://beginwithjava.blogspot.com/2011/01/
CC-MAIN-2022-40
refinedweb
760
78.45
Given two strings s1 and s2, write a function that will convert s1 to s2(if possible) by using only one opeartion. The operation is to pick any character from s1 and insert it at front of s1. Print the number of operations that took to convert. Example INPUT s1 = “abcd” s2 = “cbad” OUTPUT 2 The number of opeartions took are 2 Explanation Pick ‘b’ and insert it at beginning ie, “abcd” -> “bacd” Pick ‘c’ and insert it at beginning ie, “bacd” -> “cbad” Algorithm 1. First create two count arrays for s1 and s2, to check whether s1 can be converted to s2 or not 2. Traverse from end of both strings, ie, i = n-1 for s1 and j = n-1 for s2. Initialize count to keep track of no of operations a. If characters match, ie s1[i] == s2[j], then make i = i-1 and j = j-1 b. If current characters dont match, then search s2[j] in remaining s1, Keep increamenting count as these characters must be moved ahead. For above example, step 2 will be as follows 1) s1[i](ie, ‘d’) = s2[j](ie, ‘d’), decreament i,j 2) s1[i](ie, c) != s2[j](ie, ‘a’), decreament i and increase count, ie count = 1 3) s1[i](ie, ‘b’) != s2[j](ie, ‘a’), decreament i and increase count, count = 2 4) s1[i](ie, ‘a’) = s2[j](ie, ‘a’) Therefore, count = 2 C++ Program #include<bits/stdc++.h> #define NO_OF_CHARS 256 using namespace std; // Function to find minimum number of operations required to transform // s1 to s2. int minOps(string& s1, string& s2) { int m = s1.length(), n = s2.length(); //1) checking whether the conversion is possible or not if (n != m) { return -1; } int count[NO_OF_CHARS]; //initializing 0 to every char in count. memset(count, 0, sizeof(count)); // count characters in s1 for (int i=0; i<n; i++) { count[s2[i]]++; } // subtract count for every character in s2 for (int i=0; i<n; i++) { count[s1[i]]--; } // Check if all counts become 0, if yes we are good for (int i=0; i<256; i++) { if (count[i]) return -1; } //2) calculating the number of operations required int ans = 0; for (int i=n-1, j=n-1; i>=0; ) { // If there is a mismatch, then keep incrementing // 'count' until s2[j] is not found in s1[0..i] while (i>=0 && s1[i] != s2[j]) { i--; ans++; } // If s1[i] and s2[j] match if (i >= 0) { i--; j--; } } return ans; } // Driver program int main() { string s1 = "abcde"; string s2 = "cbad"; cout << "Minimum number of operations required is " << minOps(s1, s2)<<endl; return 0; }
https://www.tutorialcup.com/interview/string/transform-one-string-number-operations.htm
CC-MAIN-2020-10
refinedweb
446
68.81
On 29 November 2016 at 15:13, Simon Riggs <si...@2ndquadrant.com> wrote: > On 14 November 2016 at 15:50, Robert Haas <robertmh...@gmail.com> wrote: >> On Sat, Nov 12, 2016 at 11:09 AM, Andres Freund <and...@anarazel.de> wrote: >>> I'm very tempted to rename this during the move to GUCs >> ... >>> Slightly less so, but still tempted to also rename these. They're not >>> actually necessarily pointing towards a primary, and namespace-wise >>> they're not grouped with recovery_*, which has become more important now >>> that recovery.conf isn't a separate namespace anymore. >> >> -1 for renaming these. I don't think the current names are >> particularly bad, and I think trying to agree on what would be better >> could easily sink the whole patch. > > OK, so we can move forward. Thanks. > > I'm going to be doing final review and commit this week, at the > Developer meeting on Thurs and on Friday, with input in person and on > list. err... no I'm not, based on review feedback in Tokyo. New schedule is roughly this... * agree changes over next week * make changes and submit new patch by 1 Jan * commit patch by 7 Jan Overview of details agreed in Tokyo, now subject to further comments from hackers * Move recovery.conf parameters into postgresql.conf Allow reload of most parameters, allow ALTER SYSTEM Provide visibility of values through GUC interface * recovery.conf is replaced by recovery.trigger -> recovery.done * pg_basebackup -R will write recovery.trigger to data directory insert parameters postgresql.conf.auto, if possible * backwards compatibility - read recovery.conf from $DATADIR - presence of recovery.conf will cause ERROR * backwards compatibility - some parameter names will change, so allows others to changes also if needed * Add docs: "Guide to changes in recovery.conf in 10.0" * recovery_target as a single parameter, using proposed "xid 666" "target value" format * remove hot_standby parameter altogether, in line with earlier changes * trigger_file renamed to promote_trigger_file * standby_mode = on| off -> default to on * pg_ctl recover - not as part of this patch I'll work on the patch from here on, to allow me to work towards commit. Comments please. -- Simon Riggs PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription:
https://www.mail-archive.com/pgsql-hackers@postgresql.org/msg299030.html
CC-MAIN-2018-47
refinedweb
380
57.98
Convention over configuration has became a buzz word these days. It is a software design paradigm which seeks to decrease the number of decisions that developers need to make, gaining simplicity, but not necessarily losing flexibility. More and more web frameworks are using this approach and making web development simple. Ruby on rails, DJango, Grails, CakePHP, ASP .NET, Apache Wicket etc are a few in top demand these days. What’s Play!? One of the newest ones is the Play! framework. Play is heavily inspired by Ruby on Rails and Django. A developer familiar with any of these frameworks will feel at home. Play leverages the power of Java to build web applications in an environment that is not Java Enterprise Edition-centric. By lifting away the Java EE constraints, Play provides developers with an easy-to-develop and elegant stack aimed at productivity. I have worked in CakePHP and I am a big fan of it. I believe Play! is bringing the same experience of fast web application development to the Java world, right from hello world baby steps to create a full production ready application and writing tests. When you visit Play!’s overview webpage on their site, the very first line will surely catch your attention. The Play framework is a clean alternative to bloated Enterprise Java stacks. It focuses on developer productivity and targets RESTful architectures. Play is a perfect companion to agile software development. Although the words “bloated Enterprise Java stacks” may offend many Java developers who are very used to work with Spring MVC, Struts of JavaServer Faces; I think its more of less correct and many of us already feels so working for years now in Spring / Struts and realizing how much boiler plate code is being pushed into system and how complex things became over period of time. What I like most about Play! framework is the flexibility of this new framework and how easy it makes the life of developer. The application development becomes a piece of cake (like most CakePHP developers will feel). And the best part is all this you can do with Java without switching your favorite language and shifting to other like Rube or PHP. Highlights of Play! framework Let us see few differences of Play! from other Java frameworks (Courtesy Wiki): - Play is fully RESTful – there is no Java EE session per connection. This makes Play more outwardly-scalable than many other frameworks. - No configuration: download, unpack and develop – it’s that simple. - Easy round trips: no need to deploy to an application server, just edit the code and press the refresh button on the browser. - Integrated unit testing: unit testing is at the core of Play! - Elegant API: rarely will a developer need to import any third party library – Play comes with all the typical stuff built-in. - CRUD module: easily build administration UI with little code. - Play is a web framework: it is language agnostic and currently supports Java and Scala. - Modular architecture Your first Play! application Hello world in Play! is straight forward. I will not spend much time in describing steps for installing and configuring Play!. On its homepage, Play! framework developer Guillaume Bort has made this wonderful getting started screencast which you can follow to create your very first Play! application.A few points I would like to highlight here is: - Play is a real “Share nothing” system. Ready for REST, it is easily scaled by running multiple instances of the same application on several servers. - Play! loads the code dynamically. Just fix the bug in Java file and hit reload and voila, the change will get reflected instantly on webpage. No need to go through those sluggish rebuild / deployment / server restart. - Play! comes with a very efficient template system. The template system is based on Groovy as an expression language. It provides template inheritance, includes and tags. - A fully functional stack of modules are available in Play!. Provides integration with Hibernate, OpenID, Memcached… And a plugin system to create your own customize plugin or include one from hundreds of freely available ones. - Error discovery is very easy with Play!. When an error occurs, play shows you the source code and the exact line containing the problem. Even in templates. Your first Play! – GAE – Siena application For this tutorial, I have chosen a little complex example which we develop in Play! and deploy on Google App Engine (GAE). The GAE offers fast development and deployment; simple administration, with no need to worry about hardware, patches or backups; and effortless scalability. The Google app engine does not support SQL or RDBMS. The App Engine datastore provides robust, scalable storage for your web application, with an emphasis on read and query performance. An application creates entities, with data values stored as properties of an entity. The app can perform queries over entities. All queries are pre-indexed for fast results over very large data sets. Our demo application called eazyBookmark is already running on Google App engine. This is a Bookmark application where user can login and save their bookmarks. Click below image to visit online demo. Online Demo Link: Following are the functional aspects of eazyBookmark app. - User can login to eazyBookmark using their Google account. No need to remember password for new account. - Using “Add link” functionality, user can add any URL and save as bookmark. - Any bookmark can be tagged with multiple tags. This way user can organize the links easily. - A bookmark once added can later be Edited or deleted. - A small bookmarklet script is available for user to add it in browser favorites. This bookmarklet can be later used by User to bookmark any webpage. Read Tools page of eazyBookmark. Getting started Create a blank application in Play! using play create eazybookmark commnd. C:\Play>play new eazybookmark ~ _ _ ~ _ __ | | __ _ _ _| | ~ | '_ \| |/ _' | || |_| ~ | __/|_|\____|\__ (_) ~ |_| |__/ ~ ~ play! 1.1, ~ ~ The new application will be created in C:\Play\eazybookmark ~ What is the application name? [eazybookmark] ~ ~ OK, the application is created. ~ Start it with : play run eazybookmark ~ Have fun! ~ The above Hello World video shows in depth how to install and use Play and create your first application. Adding Google App Engine (GAE) and Siena module to Play! We need two modules for our eazybookmark application. One is Google App engine (GAE) and another one is Siena. You can install these modules by running following command with Play!. C:\Play> play install gae .. .. C:\Play> play install siena The above commands will install latest version of these modules. If you are behind proxy than the above command may fails. Currently there is no way of setting up proxy with Play! (v1.1) so you can manually download gae and siena from its homepage and copy them in /modules directory of your Play!. Once the modules are installed successfully, you can add following lines in eazybookmark apps /conf/application.conf file. # ---- Google app engine module ---- module.gae=${play.path}/modules/gae-1.4 # ---- Siena module ---- module.siena=${play.path}/modules/siena-1.3 You may want to restart the Play application to reflect the changes of the modules. Once the application is restarted, the GAE module creates /war/WEB-INF/appengine-web.xml file. You need to update this file and add the application id of GAE. In our case we will have following data. File: /war/WEB-INF/appengine-web.xml <appengine-web-app <application>eazybookmark</application> <version>1</version> </appengine-web-app> You may want to replace eazybookmark with your own GAE Application ID. Play! and Siena models Let us first review the model classes we going to use for our application. We will have four model classes. - User – This is a User model. All user related information such as email, name, etc will get stored here. - Link – This is Link model. All links related information such as URL, title, description etc will get stored here. - Tag – This is Tag model. All the tags related information such as tagname etc will get stored here. - LinkTag – This is a mapping model which maps Links with Tag. In Siena to create Many to Many relationship we need to create three classes. This class maps Link and Tag and add a many to many relationship to these entities. Here is the code for each of the models. File: /app/model/User.java package models; import siena.*; public class User extends Model { @Id public Long id; public String email; public String name; public Date created; public Date modified; static Query<User> all() { return Model.all(User.class); } public static User findById(Long id) { return all().filter("id", id).get(); } public static User findByEmail(String email) { return all().filter("email", email).get(); } public User() { super(); } public User(String email) { this.email = email; } public String toString() { return email; } } The User model has few attributes like email, name, created and modified which define a user. Few methods like findById() and findByEmail() select the User based on Id and email respectively. File: /app/model/Link.java package models; import java.util.*; import siena.*; public class Link extends Model { @Id(Generator.AUTO_INCREMENT) public Long id; public String url; public String title; public String description; public Date created; public Date modified; @Filter("link") public Query<LinkTag> linktags; @Index("user_index") public User user; public Link(User user, String url) { super(); this.url = url; this.user = user; } public Link() { super(); } static Query<Link> all() { return Model.all(Link.class); } public static Link findById(Long id) { return all().filter("id", id).get(); } public static List<Link> findByUser(User user) { return all().filter("user", user).order("-created").fetch(); } public static void addTag(Link link, Tag tag) { LinkTag linkTag = new LinkTag(link, tag); linkTag.insert(); } public String toString() { return url; } public List<Tag> findTagsByLink() { List<Tag> tags = LinkTag.findByLink(this); return tags; } public static void addTagsFromCSV(Link link, String tagcsv, User user) { Collection<Tag> tags = null; if(null != tagcsv || !tagcsv.equalsIgnoreCase("")) { String [] tagArr = tagcsv.split(","); tags = new ArrayList<Tag>(); for(String tagstr : tagArr) { tagstr = play.templates.JavaExtensions.slugify(tagstr.trim()).trim(); if(null != tagstr && !tagstr.equals("")) { Link.addTag(link, Tag.findOrCreateByName(tagstr,user)); } } } } } The Link model class is similar to User. It has its own attributes such as url, title, description etc. Also note that Link model has few methods like findById(), findTagsByLink(), addTag(), addTagsFromCSV() etc. These methods are used from the controller to manipulate the link data. The Link has a many to many relationship with Tag. One link can have multiple tags attached to it. File: /app/model/Tag.java package models; import siena.*; import java.util.*; public class Tag extends Model { @Id(Generator.AUTO_INCREMENT) public Long id; public String name; @Index("user_index") public User user; public static Query<Tag> all() { return Model.all(Tag.class); } public static Tag findOrCreateByName(String name, User user) { Tag tag = all().filter("user", user).filter("name", name).get(); if(null == tag) { tag = new Tag(); tag.name = name; tag.user = user; tag.insert(); } return tag; } public static Tag findById(Long id) { return all().filter("id", id).get(); } public static Tag findByName(String name, User user) { return all().filter("user", user).filter("name", name).get(); } public static List<Tag> findByUser(User user) { return all().filter("user", user).fetch(); } public String toString() { return name; } } The Tag model has one attribute tag name. This can be anything like travel, leisure, technology, news etc. Also note that each tag has one to one mapping with User. This is defined by @Index("user_index") annotation with object User. The Tag has usual methods findById, findByName, findByUser, findOrCreateByName etc which is used from controller to manipulate the Tag data. File: /app/model/LinkTag.java package models; import java.util.*; import siena.*; public class LinkTag extends Model { @Id public Long id; @NotNull @Column("link_id") @Index("link_index") public Link link; @NotNull @Column("tag_id") @Index("tag_index") public Tag tag; public LinkTag(Link link, Tag tag) { super(); this.link = link; this.tag = tag; } public static Query<LinkTag> all() { return Model.all(LinkTag.class); } public static List<Tag> findByLink(Link link) { List<LinkTag> linkTags = all().filter("link", link).fetch(); List<Tag> tags = new ArrayList<Tag>(); for(LinkTag linkTag : linkTags) { tags.add(Tag.findById(linkTag.tag.id)); } return tags; } public static List<Link> findByTag(Tag tag) { List<LinkTag> linkTags = all().filter("tag", tag).fetch(); List<Link> links = new ArrayList<Link>(); for(LinkTag linkTag : linkTags) { links.add(Link.findById(linkTag.link.id)); } return links; } public String toString() { return link.toString() + " : " + tag.toString(); } public static void deleteByLink(Link link) { List<LinkTag> linkTags = all().filter("link", link).fetch(); if(null != linkTags) { for(LinkTag linktag : linkTags) { linktag.delete(); } } } } The LinkTag model links the models Link and Tag. It adds the Many-to-many relationship between these models. We have few useful methods like findByLink, findByTag to get the list of relationship. The Play! Controllers The controller in Play! framework handles all the request / response between client and web server. It is a plain Java class which extends Controller class of Play!. In our application we will use two controller classes: Application and Profile. We will also have a helper class called Auth which has few useful methods to determine is a user is logged in, or to get email address of logged in user. This class invoke GAE module to get the authentication details and route user to Google page for authentication. File: /app/controller/Auth.java package controllers; import play.modules.gae.*; import com.google.appengine.api.users.*; public class Auth { public static boolean isLoggedIn() { return GAE.isLoggedIn(); } public static String getEmail() { return GAE.getUser().getEmail(); } public static User getUser() { return GAE.getUser(); } public static void login(String action) { GAE.login(action); } public static void logout(String action) { GAE.logout(action); } } Note that above Auth class is not part of controller (We haven’t extended it with Controller). It is just a utility class that we can use from other controllers to access user information. You may want to create your own package utils and keep this class there. For sake of simplicity, I am keeping this in controllers package. File: /app/controller/Application.java package controllers; import play.mvc.*; import models.*; import models.User; import java.util.*; public class Application extends Controller { public static void index() { if(Auth.isLoggedIn()) { User user = User.findByEmail(Auth.getEmail()); if(null == user) { user = new User(); user.email = Auth.getEmail(); user.created = new Date(); user.insert(); Profile.edit(user.id); return; } Profile.index(null); } render(); } public static void login() { Auth.login("Application.index"); } public static void logout() { Auth.logout("Application.index"); } } Check the Application controller which is the entry point to our application. The index() method will be called whenever user request homepage /. In index() method we are checking if user is logged in. If not, then we show her a Login page using render() method. This method renders the default view of this action. We will see this view later in our example. Also note that if user is authenticated, we redirect it to Profile.index action. If user has logged in for first time in eazyBookmark, she will be redirected to Edit Profile page where she can select username for her profile. File: /app/controller/Profile.java package controllers; import models.*; import play.*; import play.mvc.*; import java.util.*; import play.data.validation.*; import java.net.*; import net.htmlparser.jericho.*; import java.io.*; import play.Logger; import models.Tag; public class Profile extends Application { @Before static void checkConnected() { if(Auth.getUser() == null) { Application.login(); } else { renderArgs.put("user", Auth.getEmail()); } } public static void index(String tag) { User user = getUser(); List<Link> links = null; if(null == tag || tag.equalsIgnoreCase("")) { links = Link.findByUser(user); } else { links = LinkTag.findByTag(Tag.findByName(tag, user)); } List<Tag> tags = Tag.findByUser(user); render(user, links, tags); } //... //... } The Profile controller is extended from Application controller to take advantage of login() / logout() method. Also note that this is not the full code for Profile.java. You can check the full source in the project source code. The View All the view files in Play! framework as saved in /views/ folder with .html extension. These are the plain HTML files with certain groovy template tags. Read more about built-in template tags in Play! here. There is a file main.html in /views folder which act as the template for all other views. Following is the content of main.html for eazyBookmark. File: /views/main.html <.ico'}"> <script src="" type="text/javascript" charset="utf-8"></script> #{get 'moreScripts' /} </head> <body> <div id="pagewidth" > <div id="headerbar"> <div id="headercol"> <div id="logo"> <h3><a href="/">eazyBookmark</a></h3> </div> <div id="headerlink"> #{if user} ${user} <span class="sep">|</span> <a href="@{Application.logout()}">Logout</a> #{/if} #{else} <a href="@{Application.login()}">Login</a> #{/else} </div> </div> </div> <div id="wrapper" class="clearfix"> <div id="maincol"> #{if flash.error} <p class="error">${flash.error}</p> #{/if} #{if flash.success} <p class="success">${flash.success}</p> #{/if} #{doLayout /} </div> </div> <div id="footer"> <p> Copyright © ${new Date().format('yyyy')} <a href="">eazyBookmark</a> </p> </div> </div> </body> </html> In above template main.html file, we have #{doLayout /} tag which renders the content. Also note that we are displaying error / success message using flash mechanism of Play!. For each controller, we create a separate folder in views/ folder. And these folders will contain the view file for each action in given controller. Thus in /views/Application folder we will have index.html and in /views/Profile folder we will have edit.html, editlink.html and index.html. Download these html files from the source code attached at the end of tutorial. Deploying Play! app in Google App Engine (GAE) We are done with the coding of our eazyBookmark application. Now we can deploy this app in our GAE account. For that follow the basic steps given here: - Get yourself a GAE account and set up an application. you will need the ID. - Download the GAE SDK for Java - run play war myappname -o myappname-war - run APPENGINE_SDK_DIR/bin/appcfg update myappname-war/ - Log in to your app engine console and check out your application! You can check the Online Demo here: Download full Source Code From {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/your-first-play-%e2%80%93-gae-%e2%80%93-siena
CC-MAIN-2016-50
refinedweb
3,032
60.21
I'm trying to remove all outer elements of a list that are contained in a second list, while keeping ones that may be 'sandwiched' inside. I know how to take the complement of the intersection of two sets, but here I just want to remove all beginning and trailing elements. So far I've come up with the following, but it feels clunky: def strip_list(l, to_remove): while l[0] in to_remove: l.pop(0) while l and l[-1] in to_remove: l.pop(-1) return l mylist = ['one', 'two', 'yellow', 'one', 'blue', 'three', 'four'] nums = ['one', 'two', 'three', 'four'] strip_list(mylist, nums) # > ['yellow', 'one', 'blue'] def strip_list(data, to_remove): idx = [i for i, v in enumerate(data) if v not in to_remove] return data[idx[0]:idx[-1]+1]
https://codedump.io/share/k8h6s8F3v5aJ/1/strip-outer-elements-from-list-that-are-contained-in-another-list
CC-MAIN-2018-05
refinedweb
131
73.71
Wade Wright's whimsical rants and dumping ground for things he doesn't want to forget.. >What happens if we run out of GUIDs? Right on! Don't believe the people who claim that GUIDs are unique until you see their test plan! We want *proof* with reproducible results, not statistical theory! Show us every GUID that's been made or will be made and *then* we'll know for sure. I'll bet they're the same people who claim the Universe is infinite. A never-ending universe might be the only explanation that their tiny minds can come up with - but not necessarily the real answer! <vbg> Pingback from Morning Brew #88 Thank you for the great articles. I've been forcing the use of GUID's on my projects for several years now and still continue to run into all the "but why" questions. The only two concerns that I give any weight to are performance and size. The size concern is often easily dismissed in 99% of use cases and in the other 1% alternatives should be used and that's ok. The performance concern is not as easily dismissed. I've always considered it a trade-off that was well worth making given the benefits of GUIDs (specifically the ability to create them in your application layer). Having just read the article by Jimmy Nilsson and his COMB's style GUID solution, I'm thinking we can now have our cake and eat it too. #3: It's fairly trivial to use NewSequentialId from your object model: using System; using System.Runtime.InteropServices; using System.Security; [SuppressUnmanagedCodeSecurity] [DllImport("rpcrt4.dll", SetLastError = true)] static extern int UuidCreateSequential(out Guid value); public static Guid NewSequentialGuid() { Guid value; if (0 != UuidCreateSequential(out value)) { value = Guid.NewGuid(); } return value; } msdn2.microsoft.com/.../aa379322.aspx DOH! Thanks Richard! Guess I will have to revisit that idea a little more now. I truly need to consider the chance of duplication, but I can't think of the last time one of my apps ran on a box without a NIC. Hmmmm... I will investigate further. I don't care about the "millions of inserts", but I do care about the "Why does it take so long for my report to run?" and the "Why is my application slower than molasseses in January?" that usually happens when people build applications without thinking about performance up front. I like your article, but you are coming off a little cocky. I have been solving performance problems in large database applications for over 14 years. Like I said, I like your reasoning for using GUID's, but I don't think you truly appreciate what a high performance application is all about. Mr Datagod, I must be cocky, because all I can think of is smart a%$ responses to you: 1) Someone named DataGOD is calling ME cocky? 2) As the Director of Development for Match.com a while back, I think I have an EXTREMELY good appreciation for high performance applications. However, as I said, those are simply not the norm for most developers and for most applications. I will say it again, a different way. I don't claim this is the best way to do every single application on the planet... just most of them :) LOL, good point. I have used Datagod (small g) as a handle for so long that I forget how cocky it sounds. :) Pingback from Alex Gorbatchev » Morning Brew #87
http://weblogs.asp.net/wwright/archive/2007/11/11/gospel-of-the-guid-answers-to-your-burning-questions-comments-and-insults.aspx
crawl-002
refinedweb
582
65.32
There is a heap overflow in the `ioctl` handler for `geom`, which is non-critical since it is only triggerable as `root`. Essentially, there are no checks on the user supplied `req.narg` value. The code uses this value to calculate a size by multiplying by `sizeof(struct gctl_req_arg)`, and then calls `g_malloc` and `copyin`. `g_malloc` treats its `size` parameter as an `int`: static __inline void *g_malloc(int size, int flags) So this size will be truncated to 32 bit, however the `copyin` call will use the full 64 bit size. PoC to trigger the bug, resulting in panic (must be run as `root`): #include <stdio.h> #include <unistd.h> #include <fcntl.h> #include <errno.h> #include <sys/types.h> #include <sys/ioctl.h> #include <geom/geom_ctl.h> int main(void) { int result; struct gctl_req req; int g; g = open("/dev/geom.ctl", O_RDONLY); if(g == -1) { printf(" [-] Couldn't open geom.ctl!\n"); return 1; } req.error = malloc(0x100); req.lerror = 2; req.version = GCTL_VERSION; req.narg = 0x5555556; req.arg = malloc(0x4000); memset(req.arg, 'a', 0x4000); result = ioctl(g, GEOM_CTL, &req); printf("%d %d\n", result, errno); free(req.arg); return 0; } Created attachment 172625 [details] Fix for 209113 A problem existed where a g_malloc call would allocate a buffer length specified by a 32 bit integer. Then copyin would write a 64 bit integer worth of data into the buffer. By changing g_malloc()'s len argument to be a size_t (matching malloc) the buffer will always be large enough for copyin. It is still possible to hang the process while waiting for a large enough allocation, so the M_NOWAIT flag has replaced the M_WAITOK flag. -secteam@. This is not exploitable by non-root. (In reply to rday from comment #1) I don't really like the M_NOWAIT part -- shouldn't we place a hard limit on how much the userland may request instead? I think you're right, a hard limit would be better. I can't find a maximum parameter count or a maximum parameter size in the documentation though. I don't think I'm familiar enough with the system to come up with a value. On the other hand, if root issues a malicious ioctl() then root's process just waits(using M_WAITOK). This doesn't seem like much of a concern. Lacking a sufficient hard limit, would it be best to simply change the `size` parameter's type to size_t? Removing the M_NOWAIT change?
https://bugs.freebsd.org/bugzilla/show_bug.cgi?format=multiple&id=209113
CC-MAIN-2022-05
refinedweb
411
68.77
This action might not be possible to undo. Are you sure you want to continue? by Wassim Boustani English I, Section VI Professor Martin 11 March 2002 Boustani 2 Outline Thesis statement: Starting a home-based business can be a monumental undertaking; however, it has provided freedom and financial independence to millions of successful entrepreneurs; and with proper guidance, it can provide you with the same benefits. I. The self-employed A. Description B. Personality qualifications C. Reasons for being self-employed II. Becoming self-employed A. Developing your idea B. Financial investment C. Defining your mission D. Finding customers E. First plan of action III. Starting from home A. Legal issues B. Equipment requirements C. Managing finances D. Cultivating the right image E. Home business tactics F. Helpful resources Boustani 3 Starting a Home-Based Business Robin was an excellent computer programmer who always kept her skills up to date, and she made sure to have an advantage over her co-workers by keeping high standards. After five years of working for the same company, she felt that the challenge and growth potential there had reached its limits. She also noticed that the company was no longer financially stable, and that appreciation for her excellent dedication and performance had diminished. Her daily routine now consisted of long work hours, which lead to little time for a social life. Robin always had an entrepreneurial spirit, so she decided that the only way to fulfill her full potential would be to start her own consulting business from home. Starting a home-based business can be a monumental undertaking; however, it has provided freedom and financial independence to millions of successful entrepreneurs; and with proper guidance, it can provide you with the same benefits. A home-based business is usually a sole proprietorship, which means that it has one owner, and it takes place mostly from the owner's home. The owner is responsible for all aspects of the business. The type of business can also be a partnership, or a corporation, but the latter is rare. Starting a business from your home can have its advantages and disadvantages. There are many types of self-employed individuals. These can include traditional business owners that offer conventional products and services; those that want to grow their business and employ others; those that are their own business by selling their knowledge, experience and contacts; and those that are a combination of all of the above. Although more women are becoming self-employed, an article by De Lisser mentions that males still manage 59% of businesses operated from a home. The average age of these entrepreneurs is 44 years; they are married and are professionals in marketing, sales, or technology with almost ten years of Boustani 4 experience (1). Starting a business is not for everyone, and you should ask yourself if it is something that you should be pursuing. There are many factors to consider, such as time constraints, family responsibilities, health, money, and the consequences of failure. Research has shown that your personality, assertiveness, and ambition should also be appropriate: Successful home-based business owners typically share common traits, behaviors, skills and approaches to business that give them an edge over those who fail. While some traits are particular to working at home, most of them are required for any successful business, large or small, at home or away. (Perkins) You need to find a reason to start your own business that is powerful enough to keep you focused along the way. The most popular reason has often been freedom. According to Perkins, if your main reason for becoming self-employed is freedom, be careful not to become isolated, or make it a habit of working long hours and neglecting your health, because you lack the social interaction of a regular office setting (1). Many home-based business owners are already comfortable working alone and have the discipline to spend long hours operating their business without supervision or outside influences. You probably already have an idea about what your business will be about. However, you must develop this idea so that you can easily explain the products and services that you offer in one sentence. Simply start by writing out a full description of your idea, and then keep revising it until you reach the fewest possible words. You will be using this descriptive sentence when people ask you what your business's function is (Levinson 42). Now that you have a business idea, you will need the money to get started. Your business can be financed by many sources, such as a second home mortgage, life insurance loan, credit Boustani 5 unions, credit cards, retirement plan, gifts or loans from family, help from suppliers, banks, and government agencies. Be careful about certain sources such as retirement plans, life insurance, and credit cards because fees, taxes, and interest charges can quickly add up. The most popular government agency source for financing is the Small Business Association (SBA). All gifts and loans from family members should be treated as business transactions with signed, written agreements (Pollan 205). The best way to define your mission is to write a business plan. This is especially true if you will be seeking a loan. A business plan is crucial to defining who you are, what your business is, what you will be selling, who your customers are, what your estimated expenses and income will be for the next five years, and what you plan to do with your business in the future (The Entrepreneur Magazine Group 121). Finding customers can be one of the most difficult challenges. There are many advertising strategies, including postcards, fliers, brochures, and other literature that may be sent out in mass mailings, or made available at your place of business; products such as pens, calendars, chocolates, t-shirts, free samples that are imprinted with the name of your business; business cards left wherever you go; notices posted publicly on electronic bulletin boards; free samples given out on the street corner, or at any public event (Anthony 126). One of the best advertising methods is word-of-mouth, which is why it is crucial to keep your customers happy. A plan of action is required if you want your business to develop. The best way to create one is to have a list of what needs to be done, with a detailed schedule. Following your plan of action every day, even if it is only a small task, will ensure that you stay focused on your goals and achieve some level of production. Regardless of how small your business is, you must operate it legally. Contact your Boustani 6 county's chamber of commerce and your city hall to learn about zoning and licensing issues. Most of the time, you simply need to file an Assumed Name Certificate with the clerk of the county in which your business is conducted. Personal liability is full, and a sole proprietor is personally responsible for all debts of his or her business. For purposes of taxation, business income is reported and taxed through the sole proprietor's personal tax return (Internal Revenue Service). Your home office should be sufficiently equipped to properly support your business. Most home-based operations today need a computer; multi-function printer that can fax, copy, and scan; a reliable Internet connection; and a good phone system. Call waiting service can help if you cannot afford two phone lines, because customers will not expect a busy signal when they call a business. Your office space should be of a good size, and away from domestic distractions. You should establish an accurate bookkeeping system from the very beginning. Consult a qualified accountant for guidance. Keeping records of all income and expenses is important, and those pertinent to the Internal Revenue Service are usually kept for a minimum of seven years. Accounting software can help you effectively manage finance and taxation issues. Making a good impression on your customers is critical. You need to cultivate the right image for your type of business. Customers will tell more people about a bad experience, than about a good experience with you. You will want positive word-of-mouth advertising from them. Your county's chamber of commerce, the Small Business Administration, your town or city hall, and your library have a multitude of resources that can help you with all aspects and stages of your business. With the will to succeed, the proper motivation, an entrepreneurial spirit, some marketing know-how, and accurate financial management, you can start and grow your home-based Boustani 7 business to any level you desire. Set your goals high and stay competitive by marketing aggressively, providing excellent customer service, cultivating a professional image, keeping accurate financial records, using resources effectively, and nurturing current customers. Soon you will leave the office crowd and build your own successful home-based business. Works Cited Anthony, Joseph. Kiplinger's Working for Yourself: Full Time, Part Time, Anytime. Washington: Kiplinger Books, 1993. De Lisser, Ellena, and Dan Morse. "More Men Than Women Work From Home." Startup Boustani 8 Journal 21 June 1999. 13 Mar. 2002 <>. The Entrepreneur Magazine Group. The Entrepreneur Small Business Advisor. New York: John Wiley & Sons, 1995. Small Business/Self-Employed. Internal Revenue Service. 26 Jan. 2002 < cId%3D20859,00.html>. Johnson, Robert. "Neighborhoods Feud Over Home Office Headquarters." Startup Journal 6 Jul. 1999. 13 Mar. 2002 <>. Levinson, Jay C. Guerilla Marketing. New York: Houghton Mifflin Company, 1998. Perkins, Broderick. "Realize the Dream of a Home-Based Business." Startup Journal 26 Oct. 2001. 13 Mar. 2002 <>. Pollan, Stephen M., and Mark Levine. The Field Guide to Starting a Business. New York: Simon & Schuster, 1990. Robbins, Stephen P., and Robin Stuart-Kotze. Management: Concepts and Applications. Scarborough: Prentice-Hall Canada, 1990. This action might not be possible to undo. Are you sure you want to continue? We've moved you to where you read on your other device. Get the full title to continue reading from where you left off, or restart the preview.
https://www.scribd.com/document/3574935/Starting-a-Home-Based-Business
CC-MAIN-2016-50
refinedweb
1,681
55.84
> Minutes I spend on the phone, on a typical day: The same survey was conducted for teenagers. The results, which initially baffled researchers, are below: 1-30 0% 30-60 0% 60-90 0% 90-120 0% More than 120 0% It turns out nobody actually talks on the phone anymore or is even aware the phone supports such a feature. Everyone texts. Religious Ceremony Leads To Evolution of Cave Fish ..? IRS Servers Down During Crucial Week I still file my taxes the old-fashioned way... via paper. So their system outage wont affect me. I would e-file, but why do them a favor? It takes me just as long to e-file as it does to fill out the paper form. And if the IRS is going to waste my time, i'm going to waste theirs. :p Japan Successfully Deploys First Solar Sail In Space. Foxconn Workers Getting Raise With Apple Subsidies Your anger is admirable but misguided. It is not the corporations at fault. They are just playing by the rules which allow them to do evil things. The rules are at fault. The the politicians make the rules. So if you want to place blame, its on the system, not the players of the system. You really cannot expect the players in the system to do the right thing all the time. The rules should enforce ethical behavior. So fuck the politicians who are the real sellouts. iPhone 4 Beta Shows AT&T Tethering is that easier? sounds very technical to me. tethering was easy to install. iPhone 4 Beta Shows AT&T Tethering I've been tethering for 2 years with my iPhone. Just jailbreak teh damn thing. Once you have it, you almost forget that nobody else with an iPhone can do it. (and haha to you) Here's a little trick: Tether your iPhone at a starbucks cafe. Starbucks allows iPhones to connect to their wifi for free. Therefore, you get free starbucks wifi on yer laptop instead of having to pay for it. Now, why starbucks won't enter the modern area and stop charging for internet access is beyond me. iPhone OS 4.0 Brings Multitasking, Ad Framework For Apps Are you some kind of moron? Nobody knew everything apple planned to do with their platform. YOU are looking at the situation with perfect 20/20 vision. Well news flash, asshole, reality isnt that way. You learn as you go. And since only today I can see what apple is up to, i choose not to support it. So therefore, YOU can just GO TO HELL. Anti-Cancer Agent Stops Metastasis In Its Tracks. iPhone OS 4.0 Brings Multitasking, Ad Framework For Apps. When I die, I want my body to be ... I would like my body to be hurled through space, with one tiny proviso. Please launch me at close to light speed, preferably with a long range orbit that will eventually take me back to earth. That way time will accelerate very fast relative to the decay rate of my body. When 5 minutes of my after-death time occur, thousands of years will have passed on earth, giving our now advanced society the chance to resurrect me, which should be pretty easy since i've only been dead for 5 minutes. Its a guaranteed win! Unless we destroy ourselves, in which case all you people who were sitting there frozen are screwed anyway. Learning Python, 4th Edition Okay ill give this a try... import antigravity woah!! thanks python! Debunking a Climate-Change Skeptic Sorry slashdot, i plead temporary stupidity. I posted this response to the wrong story!! Debunking a Climate-Change Skeptic? ACTA Internet Chapter Leaked — Bad For Everyone What if i connect to the internet via VPN? Does this law apply to VPN vendors? (They aren't technically ISPs). If the VPN guys have to snoop through your activity to find out whether you're downloading an mp3, kinda defeats the entire point doesn't it. Would this kill an entire industry? Murdoch Says E-Book Prices Will Kill Paper Books another old wrinkly dinosaur doesn't like change! news at 11. Pieces of stamped mail I sent in 2009: The results page are very typical of people. No matter what the poll is, there are people out there who just want to "lie" to the poll because it somehow makes them feel better. They always choose the biggest number, no matter how absurd. In the case the poll could have been: # of mail i sent last year: 1-20 [60%] 20-40 [20%] 40-60 [7%] 60-80 [3%] 80-100 [1%] 100-10,000 [0.5%] over 10,000 [8.5%] No matter how absurd it is for anyone to send over 10,000 pieces of mail, you will see an "uptick" at the largest number in the poll. Go look at other polls, I see the same predictable behavior. It would make an interesting study. The US Economy Needs More "Cool" Nerds. Climate, Habitat Threaten Wild Coffee Species Without coffee there will be no more sufficiently awake programmers! All software development will stop! Climate, Habitat Threaten Wild Coffee Species The key difference here, (i dont know if you've noticed), but if you WAIT to see with 100% certainty whether it is a problem or not, and it turns out to be a poblem, its not fixable at that point and the world essentially ends (or gets very screwed up). Get it? If you die because of climate change, you die in real life!!
http://beta.slashdot.org/~Jorgandar
CC-MAIN-2014-35
refinedweb
932
84.07
This post covers how to send push notification using Android Firebase. In the past, we were used to send push notification in Android using Google Cloud messaging service. Recently, it was introduced a new way to send push data using Android Firebase. Even if the basic principles remain the same Firebase introduced some interesting new features. Firebase supports other services like: - Authentication - Remote config - Crash reporting This post will cover step by step how to send a push notification from Firebase console to an app. What is push notification? Before digging into the details how to send notification in Android, it is useful to clarify what is a push notification. Using Android push notification, our app can notify a user of new events. This happens even if our app is not working in foreground. Using this service, we can send data from our server to our app whenever a new event occurs. This paradigm is much more efficient respect to keep on connecting to the server (pull method) to ask if there are new events. Using push notification, we can keep the user informed about events without draining the smartphone battery. When a user receives the notification, it appears, as a customized icon, in the status bar. There are different paradigm to use when sending a push notification: Message to a single device Message to a topic (send the same message to the multiple devices subscribed to a specific topic. This implements the model publisher/subscribers) Message to a group (multiple devices that share the same key used to identify a smartphone) Set up Firebase cloud messaging project It is time to start! Create an account to access to Firebase console and define a project: and then: Once you have created your project, you have to add the package name of your app. Be aware using the same package name in Android Studio and in the Firebase console: At the end of this process, your project is configured on Firebase and you are ready to develop your Android Firebase app. At the end, you get a json file that you have to copy at app module level. Read this tutorial about how to use fingerprint authentication in Android. How to implement Android Firebase push notification Now we can develop the Android app integrated with Firebase. As a first step, we have to add Firebase to our app and modify gradle files. At the project level let’s modify gradle fille as: buildscript { repositories { jcenter() } dependencies { classpath 'com.android.tools.build:gradle:2.1.3' classpath 'com.google.gms:google-services:3.0.0' } } and at the module name (usually named app): .... dependencies { compile fileTree(dir: 'libs', include: ['*.jar']) testCompile 'junit:junit:4.12' compile 'com.android.support:appcompat-v7:24.2.0' compile 'com.google.firebase:firebase-messaging:9.4.0' }Token(); we override the onTokenRefresh method. public class FireIDService extends FirebaseInstanceIdService { @Override public void onTokenRefresh() { String tkn = FirebaseInstanceId.getInstance().getToken(); Log.d("Not","Token ["+tkn+"]"); } } In this method, we just log the token, but it could be used in a real app to send the token to the server so that the serve stores it. Don’t forget to declare this service in the Manifest.xml. .. <service android: <intent-filter> <action android: </intent-filter> </service> At the end,. Running the app, we get the result shown in the video below: Firebase API An interesting aspect is Firebase API. In other words, it is possible to invoke Android Firebase services using API. This is very interesting because we can integrate Firebase with external systems. In this example, we will show how to send a notification invoking Firebase API. The fist step is getting the Firebase authentication key so that we can authenticate the client. As client we will use Postman but we can use other clients too. How to get the authentication key? Well you get it from Firebase console: Once we have the authentication key we can create our message: and as body: Notice that to contains the smartphone token (described above). Finally sending the message, we get as result: And the notification appears on the destination smartphone. At the end of this post, you gained the knowledge how to use push notification using Android Firebase. Source code available @github.
https://www.survivingwithandroid.com/2016/09/android-firebase-push-notification.html
CC-MAIN-2017-26
refinedweb
711
56.05
Brad Frost Web Design, Workshops, Consulting, Music, and Art 2016-11-28T13:30:26Z WordPress Brad Frost <![CDATA[Atomic Design Out Now]]> 2016-11-28T13:30:26Z 2016-11-28T13:23:38Z <![CDATA […]]]> <p>I have great news: <strong><em>Atomic Design</em> is complete and is on sale now! </strong></p> <div id="attachment_10033" style="width: 1034px" class="wp-caption alignnone"><a href=""><img class="wp-image-10033 size-large" src="" alt="Ziggy the model bulldog with the Atomic Design ebook on an iPad" width="1024" height="768" sizes="(max-width: 1024px) 100vw, 1024px" /></a><p class="wp-caption-text">Ziggy is so freaking excited.</p></div> <p>You can pick which flavor of the book suits you best:</p> <ol> <li><a href=""><strong>Paperback</strong></a> for $20</li> <li><a href=""><strong>Ebook</strong></a> for $10</li> <li><a href=""><strong>Paperback + Ebook</strong></a> for $25</li> </ol> <p><strong>The ebook</strong> includes ePub, mobi, and PDF formats so you can read it on your iPad, Kindle, desktop, or whichever digital contraption you like to read ebooks on.</p> <p><strong>The print book</strong> is at the printer right now and will start shipping the week of December 12th. You can expect 193 color pages printed on high-quality paper with a nice, durable cover. You can also expect some stickers in the package!</p> <h2>A long and fruitful journey</h2> <p>This project has been quite the journey. I’m incredibly happy with the way the book turned out, and if I had to do it again I’d do it exactly the same way. I took preorders of the ebook from <a href="">Day 1<.</p> <p>Because of the open nature of the project, the <strong>biggest challenge was managing people’s expectations.</strong> I didn’t have the luxury of putting my client work on pause in order to dedicate myself to writing, so there were spells where this project took a back seat to more pressing client deadlines. Even <a href="">posting the entire manuscript online for free</a> <em>felt</em> a lot longer to people who are used to ordering a book and receiving a finished product shortly afterwards. Again, I’m super happy with this process, and I’ll write about the many advantages of doing things this way in a future post.</p> <h2>Thank you</h2> <p>There are a ton of people to thank, and I tried to do my best in the book’s <a href="">thank you section</a>. But I’ll mention a few people here who helped bring this book to life. Huge thanks to my wife, Melissa, who has been with me every step of this process. I’m so incredibly fortunate to be married to such an amazing woman. Thanks to <a href="">Owen Gregory</a>, who copy edited this book with an awe-inspiring eye for detail. Big thanks to <a href="">Rachel Andrew</a>, who tackled all the ebook formatting while criss-crossing the globe. She is an absolute force of nature and I have so much respect for her and her work. And also huge thanks to <a href="">Rachel Arnold Sager</a>, who wrangled all the print formatting, communicating with the printer, and helping make stickers. And to <a href="">Josh Clark</a> and <a href="">Dan Mall</a>, who provided the book’s <a href="">foreword</a>.</p> <p>I’d also like to give a GIGANTIC thank you to the 2,400+ people who preordered the <em>Atomic Design</em> ebook. Those people trusted me enough to pull through and deliver a book, and without their support this book would not exist. Thank you thank you thank you.</p> <p>I’m so incredibly fortunate to work in such an open, sharing, and collaborative community. Every day I’m inspired, educated, and encouraged by you all. <strong>Thank you.</strong></p> <p>So that’s that! You can <a href="">head over to the shop</a> to get the print, ebook, and print + ebook versions of <em>Atomic Design</em>.</p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Frontend Guidelines Exercise]]> 2016-11-22T16:19:49Z 2016-11-22T16:19:49Z <![CDATA […]]]> <p>I’m helping my client’s web development team establish some frontend guidelines as we roll up our sleeves to build a new website from the ground up.</p> <p>We’ve done a projects together in the past, but (naturally) they were on pretty pretty tight deadlines, so despite our best intentions the team’s code consistency fell by the wayside as we raced to the finish line project. Sound familiar?</p> <p>We’re determined to fight that entropy this time around, and I thought of a fun little exercise to do with the team to get everyone on the same page at the beginning of the project. Here’s what we did:</p> <h2>1. Have a conversation</h2> <p>This might sound crazy, but <strong>developers on your team might have opinions on how to write frontend code</strong>. Get all the frontend people all in a room together to discuss how you’re planning to write code together.</p> <p>I created a <a href="">frontend guidelines questionnaire</a> to help facilitate these conversations.</p> <p><a href=""><img class="alignnone size-full wp-image-9653" src="" alt="bradfrost:frontend-guidelines-questionnaire" width="978" height="562" sizes="(max-width: 978px) 100vw, 978px" /></a></p> <p>Discuss the pros and cons of different principles, methodologies, syntaxes, and structures. It’s worth your time to have some healthy debate on the best way to approach things. Remember that the team needs to work <em>together</em>, so individual (sometimes strong) preferences may need to take a back seat for the good of the team and product.</p> <h2>2. Establish some initial frontend guidelines</h2> <p>Take some notes during this meeting and immediately start transforming them into some initial frontend guidelines.</p> <p>These guidelines will undoubtedly get fleshed out refined as your project progresses, but at the beginning <strong>focus on getting a loose consensus on frontend principles and agree to specific naming conventions and syntax</strong>.</p> <p>Get the team together and review and discuss these conventions. Once everyone feels good about the working hypothesis, put the guidelines into action.</p> <h2>3. Code a molecule</h2> <p>Now for the fun part! Pick a relatively simple component for the team to mark up and style. Each developer will put their heads down and apply the agreed-upon syntax/structure to the component’s HTML and CSS.</p> <p>For our project, we picked a simple <a href="">media object</a> pattern:</p> <p class="codepen" data-See the Pen <a href="">Media block pattern frontend guidelines exercise</a> by Brad Frost (<a href="">@bradfrost</a>) on <a href="">CodePen</a>.</p> <p><script async</script></p> <h2>4. Review</h2> <p>After about 20 minutes of coding, get the team together and have each developer present what they came up with. <strong>You’ll likely be surprised how much deviation can occur even with agreed-upon frontend guidelines in place.</strong></p> <p>Talk through discrepancies, identify potential weak spots in your guidelines, and have each developer tweak their code based on the feedback of the group.</p> <h2>5. Repeat and refine</h2> <p>We didn’t get time to do this, but it would have been great to do a few more rounds of this with components of increasing complexity. I suspect that with each round everyone would get a little more comfortable with the conventions we’ve agreed upon.</p> <p>The nascent frontend guidelines should be updated based on the reviews and discussions around the exercise. <strong>Your frontend guidelines can and should be referenced, revisited, and revised throughout the course of the project.</strong> It should be an evolutionary, living thing rather than a static file that doesn’t truly represent the way the team is writing code.</p> <p.</p> <p>So that’s the exercise! Rather than waiting until the whole team is stressed out and slammed with deadlines, you can proactively establish conventions and make sure the team is on the same page.</p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[What to do?]]> 2016-11-15T14:59:29Z 2016-11-15T13:51:36Z <![CDATA[There are literally thousands of things you’re not doing right now.. I grew up being told that helping out meant going to work at a soup kitchen, donating canned goods, and donating blankets […]]]> <p><strong>There are literally thousands of things you’re not doing right now.</strong></p> <p.</p> <p>I grew up being told that helping out meant going to work at a soup kitchen, donating canned goods, and donating blankets to the homeless. There’s no doubt these are all Good Things. So why am I not dedicating all of my time to these activities?</p> <p.</p> <p>So what <em>is</em> for me? How can I help? That’s a question that’s been on loop in my brain. It’s a question I think many people ask themselves. <strong>I think it makes sense to find ways to help that map to your specific skill set and personality.</strong></p> <p>I should <a href="">volunteer</a> at my food bank. Again, that would really help people out. But how about redesigning <a href="">the food bank’s website</a>? That maps to my skills a lot better and can enable more people to donate and volunteer to the food bank. (I still haven’t gotten exact numbers, but online donations have gone up significantly since we did the project).</p> <p>I’m going to do my best to focus my energy, money, and time on things that help others be healthy, happy, prosperous, and safe.</p> <p>Different people do different things to help. Those things might not exactly align with the things you think help. You might not think the things I choose to do aren’t helping at all. And I’m sorry if you feel that people don’t need helped.</p> <p><strong>What can you do that aligns with your values, personality, and skills?</strong> Try to do those things. There will still be thousands of things you aren’t doing to help, but trust that you can do <em>something</em>.</p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[‘Thought Leader’]]> 2016-11-07T17:10:46Z 2016-11-07T17:10:46Z <![CDATA[This is phenomenal. via Adactio.]]> <p><iframe src="" width="560" height="315" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p> <p>This is phenomenal. via <a href="">Adactio</a>.</p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Thunder Nerds 46 – Music, Atomic Design & Sharing Your Knowledge with Brad Frost]]> 2016-11-07T16:23:03Z 2016-11-07T16:23:03Z <![CDATA[I had the opportunity to chat atomic design and a bunch of other stuff on the Thunder Nerds podcast. Check it out!]]> <p>I had the opportunity to chat atomic design and a bunch of other stuff on the Thunder Nerds podcast. Check it out!</p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[CSS Architecture for Design Systems]]> 2016-11-07T14:51:59Z 2016-11-07T14:51:59Z <![CDATA. To give a bit of a context for the project, we were tasked with creating a design system and style guide […]]]> <p.</p> <p>To give a bit of a context for the project, we were tasked with creating a design system and style guide meant to serve the organization’s thousands of developers, who employ a vast array of technologies to build their over 500 internal web applications.</p> <p.</p> <h2>Establish CSS Principles</h2> <p>At the beginning of the project, we talked with developers about their process and pain points, and asked how an interface design system could make their lives easier.</p> <p>We went through and completed my <a href="">frontend guidelines questionnaire</a>, which resulted in a set of frontend principles that were to be encapsulated within the system. Here are the CSS-specific principles we established:</p> <blockquote> <ul> <li><strong>Make it modular.</strong> – The design system is modular in every way, which very much applies to the way CSS is written. There should be clear separation between components.</li> <li><strong>Legibility is key.</strong> – Developers should be able to understand CSS code at a glance and understand the purpose of any given selector.</li> <li><strong>Clarity trumps succinctness</strong> – The design system may sometimes seem verbose, but it delivers clarity and reslience in exchange. Keeping CSS legible and scalable means sacrificing a shorter syntax.</li> <li><strong>Keep things flat</strong> – Long selector strings should be avoided wherever possible in order to keep CSS as DOM-independent and modular as possible.</li> <li><strong>Avoid conflicts</strong> – Since components will be deployed to many different applications, it’s critical to ensure that the design system’s CSS doesn’t conflict with other libraries and systems. This is accomplished by the system’s namespacing of class names, described in more detail below.</li> </ul> </blockquote> <p>From there, we established conventions and a syntax that embraced these principles in order to meet developers’ needs. Here’s a look at the class syntax we came up with:</p> <h2>Global Namespace</h2> <p><strong>All classes associated with the design system are prefixed with a global namespace, which is the Company Name followed by a hyphen:</strong></p> <pre><code>.cn-</code></pre> <p. <a href="">Lightning Design System</a> employs a similar approach for their system (with the prefix <code>.slds-</code>) as third-party developers make use of their system in environments Salesforce may not control. In our case, many of our client’s developers use <a href="">Angular</a> so they’re already familiar with the notion of a namespace, since Angular uses <code>ng-</code> as a namespace for Angular-specific code.</p> <h2>Class prefixes</h2> <p>In addition to a global namespace, we <strong>added prefixes to each class to make it more apparent what job that class is doing.</strong> Here’s what class prefixes we landed on:</p> <ul> <li><strong><code>c-</code></strong> for UI components, such as <code>.cn-c-card</code> or <code>.cn-c-header</code></li> <li><strong><code class="highlighter-rouge">l-</code></strong> for layout-related styles, such as <code>.cn-l-grid__item</code> or <code>.cn-l--two-column</code></li> <li><strong><code class="highlighter-rouge">u-</code></strong> for utilities, such as <code class="highlighter-rouge">.cn-u-margin-bottom-double</code> or <code class="highlighter-rouge">.cn-u-margin-bottom-double</code></li> <li><strong><code>is-</code></strong> and <strong><code>has-</code></strong> for specific states, such as <code class="highlighter-rouge">.cn-is-active</code> or <code>.cn-is-disabled</code>. These state-based classes would apply to</li> <li><strong><code>js-</code></strong> for targeting JavaScript-specific functionality, such as <code class="highlighter-rouge">.js-modal-trigger</code>. No styles are bound to these classes; they’re reserved for behavior only. For most cases, these <code>js-</code> classes would toggle state-based classes to an element.</li> </ul> <p>I was introduced to this concept <a href="">by Harry Roberts</a>, <a href="">design system users</a> to be able to easily make heads or tails of things.</p> <h2 id="bem-syntax">BEM syntax</h2> <p><a href="">BEM</a> stands for “Block Element Modifier”, which means:</p> <ul> <li><strong>Block</strong> is the primary component block, such as <code class="highlighter-rouge">.cn-c-card</code> or <code class="highlighter-rouge">.cn-c-btn</code></li> <li><strong>Element</strong> is a child of the primary block, such as <code class="highlighter-rouge">.cn-c-card__title</code></li> <li><strong>Modifier</strong> is a variation of a component style, such as <code class="highlighter-rouge">.cn-c-alert--error</code></li> </ul> <p>This methodology has been gaining a lot of popularity, and combining these concepts with the global namespace and class prefixes allowed us to create even more explicit, encapsulated class names.</p> <h2 id="putting-it-all-together-anatomy-of-a-class">Putting it all together: anatomy of a class</h2> <p>The combination of a global namespace, category prefixes, and BEM syntax results in an explicit (and yes, verbose) class string that allows developers to deduce what job it plays in constructing the UI.</p> <p>Let’s take a look at the following example:</p> <pre><code class="highlighter-rouge">.cn-c-btn--secondary</code> </pre> <ul> <li><code class="highlighter-rouge">cn-</code> is the global namespace for all styles coming from the design system.</li> <li><code class="highlighter-rouge">c-</code> is the category of class, which in this case <code class="highlighter-rouge">c-</code> means “component”</li> <li><code class="highlighter-rouge">btn</code> is the block name (“Block” being the “B” in BEM)</li> <li><code class="highlighter-rouge">--secondary</code> is a modifier, indicating a stylistic variation of the block (“Modifier” being the “M” in BEM)</li> </ul> <p>Here’s another example:</p> <pre><code class="highlighter-rouge">.cn-l-grid__item</code></pre> <ul> <li><code class="highlighter-rouge">cn-</code> once again is the system’s global namespace.</li> <li><code class="highlighter-rouge">l-</code> is the category of class, which in this case <code class="highlighter-rouge">l-</code> means “layout”</li> <li><code class="highlighter-rouge">grid</code> is the block name</li> <li><code class="highlighter-rouge">__item</code> is an element, indicating that this is a child of the block (“Element” being the “E” in BEM)</li> </ul> <p>And one more:</p> <pre><code class="highlighter-rouge">.cn-c-primary-nav__submenu</code></pre> <ul> <li><code class="highlighter-rouge">cn-</code> is the system’s global namespace.</li> <li><code class="highlighter-rouge">c-</code> is the category of class, which in this case c<code class="highlighter-rouge">-</code> means “component”</li> <li><span style="font-family: monospace;">primary-nav</span> is the block name</li> <li><code class="highlighter-rouge">__submenu </code>is an element, indicating that this is a child of the block (“Element” being the “E” in BEM)</li> </ul> <p>Again, there’s no doubt these classes are more verbose than most other approaches out there, but for this specific system these conventions made a lot of sense.</p> <h2>Other tricks</h2> <h3>Being explicit with minutia</h3> <p>In order to prevent things from falling apart we detailed how to handle a lot of the minor details like comments, spacing around code blocks, tabs vs spaces, etc. Thankfully, <a href="">Harry Roberts</a> has put together an excellent and comprehensive resource called <a href="">CSS Guidelines</a>, which we used as our baseline for these kinds of conventions. We combed through everything and flagged areas where we planned on deviating from what Harry spelled out.</p> <h3>Sass parent selectors</h3> <p:</p> <pre><code> .cn-c-primary-nav { /** * Nav appearing in header * 1) Right-align navigation when it appears in the header */ .cn-c-header & { margin-left: auto; /* 1 */ } } </code></pre> <p>This means all my primary navigation styles can be found in the primary navigation Sass partial, rather than splitting them between multiple files.</p> <h3>Explicit rules around Sass nesting</h3> <p>Nesting in Sass can be very convenient, but runs the risk of poor output with overly long selector strings. <strong>We followed the <a href="">Inception Rule</a></strong> and never nested Sass more than three layers deep.</p> <p>Keeping the design system’s CSS flatness principle in mind, we wanted to limit nesting to the following use cases:</p> <ol> <li>Modifiers of a style block</li> <li>Media queries</li> <li>Parent selectors</li> <li>States</li> </ol> <h4><a id="user-content-1-style-block-modifiers" class="anchor" href=""></a>1. Style block modifiers</h4> <p>For modifiers, if the rule is only a few lines long, the modifier can be nested inside the parent like so:<-c">/**</span> <span class="pl-c"> * Error Alert</span> <span class="pl-c"> */</span> &<span class="pl-c1">--error</span> { <span class="pl-c1">border-color</span>: <span class="pl-c1">red</span>; <span class="pl-c1">color</span>: <span class="pl-c1">red</span>; } }</pre> </div> <p>Thanks to the <code>&</code> symbol, this would compile to:<-e">.cn-c-alert--error</span> { <span class="pl-c1">border-color</span>: <span class="pl-c1">red</span>; <span class="pl-c1">color</span>: <span class="pl-c1">red</span>; } </pre> </div> <p>For longer style blocks we didn’t nest the modifier code as it reduced the legibility of the code.</p> <h4><a id="user-content-2-media-queries" class="anchor" href=""></a>2. Media queries</h4> <p>Component-specific media queries should be nested inside the component block.</p> <div class="highlight highlight-source-css"> <pre><span class="pl-e">.cn-c-primary-nav</span> { <span class="pl-c">/* Base styles */</span> <span class="pl-c">/**</span> <span class="pl-c"> * 1) On larger displays, convert to a horizontal list</span> <span class="pl-c"> */</span> @<span class="pl-c1">media</span> <span class="pl-c1">all</span> <span class="pl-c1">and</span> (<span class="pl-c1">min-width</span>: <span class="pl-c1">40<span class="pl-k">em</span></span>) { display: flex; } }</pre> </div> <p>This compiles to:</p> <div class="highlight highlight-source-css"> <pre><span class="pl-e">.cn-c-primary-nav</span> { <span class="pl-c">/* Base styles */</span> } <span class="pl-k">@media</span> <span class="pl-c1">all</span> <span class="pl-k">and</span> (<span class="pl-c1">min-width</span>: <span class="pl-c1">40<span class="pl-k">em</span></span>) { <span class="pl-e">.cn-c-primary-nav</span> { <span class="pl-c1">display</span>: flex; } }</pre> </div> <h4><a id="user-content-3-parent-selectors" class="anchor" href=""></a>3. Parent selectors</h4> <p>The design system will make use of <a href="">Sass’s parent selector</a> mechanism. This allows all rules for a given component to be maintained in one location.</p> <div class="highlight highlight-source-css"> <pre><span class="pl-e">.cn-c-primary-nav</span> { <span class="pl-c">/**</span> <span class="pl-c"> * Nav appearing in header</span> <span class="pl-c"> * 1) Right-align navigation when it appears in the header</span> <span class="pl-c"> */</span> .cn<span class="pl-c1">-c-header</span> & { <span class="pl-c1">margin-left</span>: <span class="pl-c1">auto</span>; <span class="pl-c">/* 1 */</span> } }</pre> </div> <p>This will compile to:</p> <div class="highlight highlight-source-css"> <pre><span class="pl-e">.cn-c-header</span> <span class="pl-e">.cn-c-primary-nav</span> { <span class="pl-c1">display</span>: flex; }</pre> </div> <p>All styles for <code>cn-c-primary-nav</code> should be found in one place, rather than scattered throughout multiple partial files.</p> <h4><a id="user-content-4-states" class="anchor" href=""></a>4. States</h4> <p>States of a component should be included as a nested element. This includes <code>hover</code>, <code>focus</code>, and <code>active</code> states:</p> <div class="highlight highlight-source-css"> <pre><span class="pl-e">.cn-c-btn</span> { <span class="pl-c1">background</span>: <span class="pl-c1">blue</span>; &:hover, &:focus { background: <span class="pl-c1">red</span>; } }</pre> </div> <p>This will compile to:</p> <div class="highlight highlight-source-css"> <pre><span class="pl-e">.cn-c-btn</span> { <span class="pl-c1">background</span>: <span class="pl-c1">blue</span>; } <span class="pl-e">.cn-c-btn</span><span class="pl-e">:hover</span>, <span class="pl-e">.cn-c-btn</span><span class="pl-e">:focus</span> { <span class="pl-c1">background</span>: <span class="pl-c1">red</span>; }</pre> </div> <p>States can also take the form of utility classes, such as <code>is-</code> and <code>has-</code>:</p> <div class="highlight highlight-source-css"> <pre><span class="pl-e">.cn-c-accordion__panel</span> { <span class="pl-c1">overflow</span>: <span class="pl-c1">hidden</span>; <span class="pl-c1">max-height</span>: <span class="pl-c1">0</span>; &.cn<span class="pl-c1">-is-active</span> { <span class="pl-c1">max-height</span>: <span class="pl-c1">40<span class="pl-k">em</span></span>; } }</pre> </div> <p>This will compile to:</p> <div class="highlight highlight-source-css"> <pre><span class="pl-e">.cn-c-accordion__panel</span> { <span class="pl-c1">overflow</span>: <span class="pl-c1">hidden</span>; <span class="pl-c1">max-height</span>: <span class="pl-c1">0</span>; } <span class="pl-e">.cn-c-accordion__panel.cn-is-active</span> { <span class="pl-c1">max-height</span>: <span class="pl-c1">40<span class="pl-k">em</span></span>; }</pre> </div> .</p> <h2>Does this work for everybody?</h2> <p.</p> <p>I work on plenty of projects where I can get by with strings like <code>.table-of-contents li a</code>, <a href="">other teams like Sparkbox come to similar conclusions</a>.</p> <p>After a few weeks away from the project, we’re returning to continue work on version 1.1 of the design system. I’m looking forward to coming back to this code base and seeing how quickly I can get re-acclimated with it!</p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[CSS Tricks Screencast #149: A Quick Intro to Pattern Lab Node with Brian Muenzenmeyer]]> 2016-08-30T23:40:28Z 2016-08-30T23:40:28Z <![CDATA[Brian walks Chris through Pattern Lab, including how to get it up and running, make changes to patterns, etc. Great stuff!]]> <p><a href="">Brian</a> walks <a href="">Chris</a> through <a href="">Pattern Lab</a>, including how to get it up and running, make changes to patterns, etc. Great stuff!</p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Clarity Conf Video Online]]> 2016-08-30T22:24:45Z 2016-08-30T22:24:45Z <![CDATA[I kicked off the amazing Clarity Conf, a conference dedicated to style guides and design systems, earlier this year and they just posted the video of my talk. Enjoy. Or don’t. Whatever.]]> <p>I kicked off the amazing <a href="">Clarity Conf</a>, a conference dedicated to style guides and design systems, earlier this year and they just posted the video of my talk. Enjoy. Or don’t. Whatever.</p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Shoptalk Episode 231: Book Writing Panel]]> 2016-08-30T22:10:55Z 2016-08-30T22:10:55Z <![CDATA. As always, you can […]]]> <p.</p> <p>As always, you can <a href="">read my book</a> in its entirety and <a href="">preorder the final ebook</a> to support the project.</p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Anatomy of a Pattern in a Pattern Library]]> 2016-08-15T15:46:14Z 2016-08-15T15:01:38Z <![CDATA[Technically a pattern library is a simple collection of UI components, but in order for design system users do their best work, a pattern library should also present other important info. Let’s take a look at what information can be displayed alongside each pattern Title The name of the pattern should be prominent and easy to understand by everyone who visits the pattern library. Naming things is hard, but teams should strive to agree upon names that make sense for everyone, rather than only developers. […]]]> <p>Technically a pattern library is a simple collection of UI components, but in order for <a href="">design system users</a> do their best work, a pattern library should also present other important info. Let’s take a look at what information can be displayed alongside each pattern</p> <h2>Title</h2> <p>The name of the pattern should be prominent and easy to understand by everyone who visits the pattern library. Naming things is hard, but teams should strive to agree upon names that make sense for everyone, rather than only developers. <a href="">Conducting an interface inventory exercise</a> is a great way to discuss and agree upon UI pattern names.</p> <div id="attachment_9911" style="width: 973px" class="wp-caption alignnone"><a href=""><img class="wp-image-9911 size-full" src="" width="963" height="729" sizes="(max-width: 963px) 100vw, 963px" /></a><p class="wp-caption-text">Each pattern in FutureLearn’s pattern library prominently displays the pattern title</p></div> <h2>Description</h2> <p>A succinct definition should be provided to help people understand what the pattern is and what it does. Some patterns can be easy to discern (“data table”) while others might need more of an assist (“jumbotron”).</p> <div id="attachment_9909" style="width: 1034px" class="wp-caption alignnone"><a href=""><img class="wp-image-9909 size-large" src="" alt="Each pattern in the Draft U.S. Web Design Standards pattern library has a succinct definition for each pattern</a><p class="wp-caption-text">Each pattern in the Draft U.S. Web Design Standards pattern library has a succinct definition for each pattern</p></div> <h2>Live Example</h2> <p>A key characteristic of any pattern library is displaying a living, breathing instance of a pattern. Showcasing a live pattern allows teams to demonstrate responsiveness, interaction, motion, ergonomic considerations, true color & text rendering, and performance, which are all things a static picture can never show.</p> <div id="attachment_9916" style="width: 1034px" class="wp-caption alignnone"><a href=""><img class="size-large wp-image-9916" src="" alt="MailChiimp's pattern library displays a live example of each pattern in their library." width="1024" height="575" sizes="(max-width: 1024px) 100vw, 1024px" /></a><p class="wp-caption-text">MailChiimp’s pattern library displays a live example of each pattern in their library.</p></div> <h2>Context</h2> <p>While showing an abstracted live UI pattern is a necessary part of any pattern library, it’s also important to show how and where that pattern gets used. Material Design does a fantastic job at putting a component in context of actual applications.</p> <div id="attachment_9904" style="width: 1034px" class="wp-caption alignnone"><a href=""><img class="size-large wp-image-9904" src="" alt="Material Design shows context for its UI components using plenty of images and videos" width="1024" height="705" sizes="(max-width: 1024px) 100vw, 1024px" /></a><p class="wp-caption-text">Material Design shows context for its UI components using plenty of images and videos</p></div> <p>Screenshots and videos can be used to show a pattern in context. While these media are certainly effective, it can involve a lot of manual effort to produce them. One of my favorite features of <a href="">Pattern Lab</a> is pattern lineage, which shows exactly which patterns make up any given component, as well as shows where that pattern is used.</p> <div id="attachment_9906" style="width: 863px" class="wp-caption alignnone"><a href=""><img class="size-full wp-image-9906" src="" alt="Pattern Lab's lineage feature displays what patterns make up any component, and also shows all the places that component is employed." width="853" height="647" sizes="(max-width: 853px) 100vw, 853px" /></a><p class="wp-caption-text">Pattern Lab’s lineage feature displays what patterns make up any component, and also shows all the places that component is employed.</p></div> <h2>Usage</h2> <p>When should you reach for a toggle pattern instead of a group of radio buttons? Should you use tabs or an accordion? Showcasing dos, don’ts, rules, and considerations for each pattern can help design system users reach for the right tool for the right job and take a lot of the guesswork out of the implementation.</p> <div id="attachment_9919" style="width: 894px" class="wp-caption alignnone"><a href=""><img class="wp-image-9919 size-full" src="" alt="Screen Shot 2016-08-15 at 10.17.03 AM" width="884" height="1012" sizes="(max-width: 884px) 100vw, 884px" /></a><p class="wp-caption-text">Material Design’s usage contains information about when to reach for a particular pattern, as well as extremely specific sizing, spacing, and design instructions.</p></div> <h2>Code</h2> <p>Almost every pattern library out there displays the appropriate component code alongside the living instance of a pattern. But what code is relevant to show? It depends on the type of design system you’re maintaining. For integrated design systems like <a href="">Rizzo</a>, the only code that gets exposed is the code necessary to pull a pattern into the application. For less integrated systems, frontend markup and/or CSS and/or JavaScript might be provided for users to copy and paste. Here’s some of the code</p> <ul> <li><strong>HTML</strong> showcasing the markup syntax and structure</li> <li><strong>Templating </strong>code that shows the markup as well as the dynamic aspects of the pattern</li> <li><strong>CSS</strong> code specific to a particular pattern</li> <li><strong>JavaScript</strong> code to control a particular pattern</li> <li><strong>Implementation</strong> code that shows how to pull a particular pattern into a backend system</li> </ul> <div id="attachment_9922" style="width: 1034px" class="wp-caption alignnone"><a href=""><img class="size-large wp-image-9922" src="" alt="Salesforce's Lightning Design System showcases the frontend markup as well as the SCSS styling information for each pattern" width="1024" height="383" sizes="(max-width: 1024px) 100vw, 1024px" /></a><p class="wp-caption-text">Salesforce’s Lightning Design System showcases the frontend markup as well as the SCSS styling information for each pattern</p></div> <h2>Resources</h2> <p>The answer to “Why is this component like this?” shouldn’t be “because I said so.” A well-constructed design system should incorporates industry best practices. There are tons of fantastic articles that detail considerations for form, table, card, tabs, and so on. Bundling helpful resources from industry Smashing Magazine, A List Apart, CSS Tricks, and tons of other places can level up design system users and help them become better practitioners.</p> <p>In addition to external resources, it’s a great idea to maintain a set of over-arching design principles that govern your design system. Linking to these internal resources where appropriate helps tie abstract principles together with real-world implementations.</p> <div id="attachment_9923" style="width: 998px" class="wp-caption alignnone"><a href=""><img class="size-full wp-image-9923" src="" alt="GOV.UK maintains a fantastic set of design principles that underpin their design system. Their UI patterns reflect these principles." width="988" height="690" sizes="(max-width: 988px) 100vw, 988px" /></a><p class="wp-caption-text">GOV.UK maintains a fantastic set of design principles that underpin their design system. Their UI patterns reflect these principles.</p></div> <h2>People</h2> <p>Who created this pattern? Who should you contact if you have questions or proposed changes to a pattern? Who else has a say in how this pattern should look or behave? Some pattern libraries expose relevant people’s names and contact information to keep a healthy conversation going between design system makers and users.</p> <h2>Meta</h2> <p>And finally, patterns can contain helpful meta data that</p> <ul> <li><strong>Pattern status</strong> – In progress, In review, Complete, Deprecated, and so on<>Version</strong> – i.e. Button v1>Changelog</strong> – When was the pattern last <">updated with a link to its h<">ist>Compatibility info</strong> – What are the dependencies required in order to use this pattern?</span></li> </ul> <h2>Crafting Effective Patterns</h2> <p>So a pattern can contain:</p> <ul> <li>Pattern title</li> <li>Pattern description</li> <li>Live example</li> <li>Contextual information</li> <li>Usage guidelines and requirements</li> <li>Code samples</li> <li>Internal and external resources</li> <li>People involved</li> <li>Helpful meta data</li> </ul> <p>This information can help set your UI design system up for success.</p> <p> </p> <p> </p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[A Code Review, Or Yet Another Reason to Love the Web]]> 2016-07-22T20:04:53Z 2016-07-22T20:04:53Z <![CDATA on Twitter: Hey web forms and BEM aficionados, anyone care to review some form markup I’ve been writing? — Brad Frost (@brad_frost) July 15, 2016 I got a lot of fantastic, actionable feedback from lots of people, which by […]]]> <p>I’m in the early days of creating a design system for a big organization. Last week I posted some of my initial form markup to Codepen:</p> <p class="codepen" data-See the Pen <a href="">PzEZwr</a> by Brad Frost (<a href="">@bradfrost</a>) on <a href="">CodePen</a>.</p> <p><script src="//assets.codepen.io/assets/embed/ei.js" async=""></script></p> <p>And asked for feedback on Twitter:</p> <blockquote class="twitter-tweet" data-<p> Hey web forms and BEM aficionados, anyone care to review some form markup I’ve been writing? <a href=""></a></p> <p>— Brad Frost (@brad_frost) <a href="">July 15, 2016</a> </p></blockquote> <p><script src="//platform.twitter.com/widgets.js" async="" charset="utf-8"></script></p> <p>I got a lot of fantastic, actionable feedback from lots of people, which by itself is amazing. But I was particularly blown away that <a href="">Jonathan Snook</a> — author of <a href="">SMACSS</a>, world-renowned CSS developer/thinker, and all-around fantastic human being — took the time to record an almost 20-minute long video review of my code.</p> <p><iframe src="" width="640" height="360" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p> <p>I’m practically jumping out of my seat because this embodies so many of the principles that I care about. So let’s talk about ’em!</p> <h2>Frontend Prep Chef</h2> <p>One of the things Snook touched on was that he typically doesn’t prefer to mark things up before having any designs in place. I can certainly appreciate this sentiment, as it’s tough to know how things should be structured without knowing what the end goal is.</p> <p>But! I’m a massive fan of getting into the browser on Day One of a project, and I feel that <a href="">frontend developers can and should play the role of prep chef</a>. Rather than coming in *after* many (often bad/limiting) design decisions have already been made, developers have a huge opportunity to establish UI pattern markup and crude CSS upfront so they can spend more time collaborating *with* the project’s designers to create the design system. Doing this prep chef work gets designs into the browser — the project’s final environment — as quickly as possible so the team can address essential aspects of design like responsiveness, performance, true type and color rendering, ergonomics, and much more.</p> <p>This means frontend developers have to fly blind for a bit, be on their toes to adapt to evolving designs, and get comfortable iterating.</p> <h2>Iterative Design, Iterative Code</h2> <p>Establishing this <a href="">more collaborative, pattern-driven workflow</a> means iterating over frontend code becomes a core and necessary part of the process. In my consulting work, I encounter a lot of developers who are uncomfortable with this notion. They think they have one shot to get things right or the whole effort falls apart. There are a lot of factors that contribute to this mindset, but maybe they just watched too much Star Wars as a kid?</p> <p><iframe src="" width="480" height="360" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p> <p>We do not have just one shot at getting the code right. Crafting frontend code should be as iterative an endeavor as the rest of the design process. I talk about this type of workflow as being akin to subtractive stone sculpture, where a giant slab of stone is chipped away at and refined over time.</p> <p>So make something. Learn lessons (from mistakes you’ve made, from implementing patterns in new locations, or from Trained CSS Professionals like Jonathan Snook). Then revisit and refactor.</p> <p>Yes, this does require being vigilant to keep your code clean. Build tools can help with this, but reviewing and grooming your codebase on a regular basis is a good practice to get into anyways. In my consulting experience, I’ve found it often takes as little as a week for developers to get comfortable writing code, rewriting it, tweaking it, and constantly improving things.</p> <p>After a while you begin challenging your own techniques, embracing critiques, and actively look for ways to tweak things to make the system stronger. I don’t see Jonathan’s review as “Brad’s doing things wrong.” but rather as another opportunity to iterate over my work to make it better.</p> <h2>Creative Exhaust</h2> <p>We have the choice to share what we know or not share what we know. Thanks to the web (that thing we help make every day), there’s an opportunity to share more things for the benefit of ourselves and others.</p> <p>In my TEDx talk, I talk <a href="">creative exhaust</a>, which is the idea that <strong>the artifacts of our creative work can often have much bigger impact on the world</strong> than the actual work we actually produce.</p> <p><iframe src="" width="640" height="360" frameborder="0" allowfullscreen="allowfullscreen"></iframe></p> <p>In this specific case, the artifact I chose to share was some basic markup I wrote in the first week of a project. For all I know, this code won’t even find its way into the final work. But that little snippet has already caused a ripple effect, causing Jonathan to create a video, which caused some <a href="">additional conversation on Twitter</a>, which inspired me to create this post, and which may result other thoughts and ideas.</p> <p>This, ladies and gentlemen, is why I do what I do. I suspect this is also why Jonathan decided to take valuable time out of his busy schedule to share what he knows. And this is why I think so many of us love working on the web. Let’s never take for granted the incredible, collaborative power of the web, and let’s continue sharing what we know.</p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Copy Editing Atomic Design]]> 2016-04-28T16:22:20Z 2016-04-28T16:22:20Z <![CDATA[I’m in the process of wrapping up the manuscript of Atomic Design, and it’s time to begin the copy editing process. Up until this point this book project has been a solo endeavor. Writing this book in the open has been fantastic so far, as its immediately gotten the book’s content in front of people. And because of this, I’ve had people contribute and point out issues with the text, which I think is freaking awesome! But I’m now to a point in the process where I need make […]]]> <p>I’m in the process of wrapping up the manuscript of <em><a href="">Atomic Design</a>, </em>and it’s time to begin the copy editing process.</p> <p>Up until this point this book project has been a solo endeavor. Writing this book <a href="">in the open</a> has been fantastic so far, as its immediately gotten the book’s content in front of people. And because of this, I’ve had people <a href="">contribute</a> and point out <a href="">issues</a> with the text, which I think is freaking awesome!</p> <p>But I’m now to a point in the process where I need make sure what I’m saying makes any damn sense and jives with the basic rules of the English language. This is a job for a professional. <strong>That’s why I’ve enlisted the help of editor extraordinaire, <a href="">Owen Gregory</a>, to help me with these tasks.</strong> In the spirit of openness, Owen’s kindly agreed to conduct his copy editing his work in the open. <strong>We’ve <a href="">published his edits here</a>, and we’ve been keeping track of everything in an <a href="">edits branch on Github</a>.</strong></p> <p>Doing this kind of work in the open is a bit different than the usual behind-the-scenes copy editing process. To speak to that, <strong>here’s Owen himself with his thoughts on this process:</strong></p> <h2>A Word from Owen Gregory</h2> <p>When Brad asked me to provide copy-editing services for <em>Atomic Design</em> he also asked if I would be happy for my work, like his, to be shown in public while in progress. Against all my instincts, I agreed. I’m used to working behind the scenes and with a certain degree of anonymity: it’s the author’s name on the cover, not mine. Perhaps drawing back the curtain would reveal me, like Oz the Great and Powerful, to be humbug, pettifogging over serial commas, full of sound and fury, signifying nothing.</p> <p>But as Brad has proved, there’s value in openness. Books don’t spring whole from a writer’s forehead, and public collaboration is one of the web’s killer features.</p> <p>To some, particularly other authors I’ve worked with, it might seem that I’ve edited with a very light touch. Dozens of paragraphs go by without so much as a murmur. But much of the work I’ve done on Brad’s book remains invisible. What would have made a word-processed document heavy with tracked changes can be found only (at this point) in the <a href="">GitHub repo’s history</a>. (Editing in GitHub, without the features of a word-processor, has been uncomfortable.) There has been plenty of line editing: shifting punctuation, correcting typos, capping up and down, recasting sentences, sifting and tidying both general and particular.</p> <p>These are the mainstays of copy-editing. Brad was pretty sure of his structure, themes and content; his voice, honed from years of writing, speaking and bringing death to bullshit, was already strong. He wasn’t asking for structural or development editing. What remains visible in these chapters-in-progress are my comments and queries. Brad sometimes returns to the same phrases again and again, so I pick up on repetition that’s habitual rather than rhetorical. Facts are checked, US English patterns verified (I’m British), assertions challenged, new words and forms suggested – and even interpolated, though transparently.</p> <p>I try to do all this with gentle encouragement, kindness and humour, and in a spirit of editorial rigour. No doubt my raised eyebrow, tongue-filled cheek and ironical tone are easily lost in a little yellow box, but my intent is always to help improve the book, acting as one kind of ideal reader, on behalf of other readers.</p> <p>I’ll reserve judgement on open copy-editing for the time being, at least until I’ve seen how Brad responds to my comments. Until then, there’s more editing to be done. Always more editing. Have you seen the state of the internet? Horrifying.</p> <hr /> <h2>Follow Along</h2> <p>Hey there, Brad again. This is an exciting process, and I’m super excited Owen is along for the ride to help me bring the book home. If you want to follow along with the process you can:</p> <ul> <li><a href="">Read the manuscript</a> and also read the manuscript <a href="">complete with Owen’s edits</a>.</li> <li>Sign up for the <a href="">newsletter</a> where I occasionally publish updates on the project and share design system-related resources.</li> <li>Poke around the book’s <a href="">Github repo</a>.</li> <li>Check out the <a href="">timeline</a> of the project.</li> <li><a href="">Preorder the e-book</a> for $10</li> <li>And if you ever need editing (not just copy editing) help, do yourself a favor and <a href="">reach out to Owen</a>. He’s been an absolute pleasure to work with, and has a load of experience editing web design-related books.</li> </ul> <p>Thanks so much!</p> <p> </p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Design Edu Today Podcast]]> 2016-04-27T02:15:23Z 2016-04-27T02:15:23Z <![CDATA[I had a great time chatting with Gary Rozanc about working remotely, atomic design, and a whole lot more.]]> <p>I had a great time chatting with <a href="">Gary Rozanc</a> about working remotely, <a href="">atomic design</a>, and a whole lot more.</p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Clarity Conf: Designing for Earthlings and Astronauts]]> 2016-04-05T12:04:33Z 2016-04-05T12:04:33Z <![CDATA[At Clarity Conference in San Francisco, Richard Danne discussed his long experience creating style guides for massive organizations. Here are my notes from his talk: Richard didn’t invent the style guide, but worked on the NASA style guide, which was one of the first style guides (and was the first for the U.S. government). The NASA Graphic Standards Manual began work in 1974 and launched in 1975. It resulted in a great, unified program. Its success was due to it being such a comprehensive document. It […]]]> <p>At <a href="">Clarity Conference</a> in San Francisco, <a href="">Richard Danne</a> discussed his long experience creating style guides for massive organizations. Here are my notes from his talk:</p> <ul> <li>Richard didn’t invent the style guide, but worked on the <a href="">NASA style guide</a>, which was one of the first style guides (and was the first for the U.S. government).</li> <li><strong>The NASA Graphic Standards Manual</strong> began work in 1974 and launched in 1975. It resulted in a great, unified program. Its success was due to it being such a comprehensive document. It wasn’t just a logo, but a holistic system.</li> <li>It raises design principles to a new level. It was designed for the ages.</li> <li>Mission: <strong>“we wanted to bring this tech down to earth”</strong></li> <li>Manual had tabs to flip through to to the different sections:</li> <li>Color – NASA red</li> <li>Logo</li> <li>Business forms – they look so simple, but took over a year to get approval.</li> <li>4 or 5 different type faces.</li> <li>There was no graphic designer on staff at NASA, so the organization would often take an illustrator and say “Well you know art” and the results were terrible.</li> <li>Had to demonstrate the specifics of how the logo and design should look on ground vehicles, airplanes, and other vehicles.</li> <li>The “flying bathtub shuttles” were all ceramic tiles, so graphic placement was crucial. These shuttles were the first US vehicles to use Helvetica.</li> <li>Required congressional approval and military approval to introduce NASA Red into the color palette.</li> <li>Satellites are still out there with the NASA logo Richard designed.</li> <li>Color had to be flame retardant, which caused a lot of back and forth.</li> <li>Richard was elected as President of AIGA around the time of the NASA manual.</li> <li>When in a crisis, it’s sometimes an advantage to make decisions very quickly</li> <li>Did additional work for NASA, including public broadcasting film titles. Contained the first computer animations done in New York City.</li> <li><strong>“Going to work in space”</strong> posters to be circulated at schools.</li> <li>Space technology led to advancements in medicine, technology, etc.</li> <li>1983 needed to generate a report to see how successful the design system was.</li> <li>Copy Cat Culture – this stuff seeps into culture and has a life of its own. Lots of knockoffs and inspirations from the NASA logo.</li> <li>Other style guides Richard helped create -Department of Transportation, IBM, Bristol Myers, DuPont, FAA, FIT, Mendez Junior, Pratt and Whitney, Atlantic Mustual Companies, Cape Symphony Orchestra, Kid Knowledge, Napa Valley Jazz Society, AIGAwards</li> <li><strong>Recently there’s been a resurgence of print style guides</strong>, as people can hold them in their hands and tend to respect them more.</li> <li><strong>The introduction of mobile is forcing us to pull things back and make things simpler, more legible to work on these smaller devices.</strong></li> </ul> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Clarity Conf: Living Systems: Brand in the context of peoples lives.]]> 2016-04-05T12:13:27Z 2016-04-05T11:37:50Z <![CDATA to PepsiCo and ultimately to Uber. What is Brand? There are two definitions. Ogilvy’s, which is “a product and it’s attributes.” The other definition is more intangible: “A person’s gut feeling about a product or company.” Brand is an invisible […]]]> <p>At <a href="">Clarity Conference</a> in San Francisco, <a href="">Jeremy Perez-Cruz</a> talked about the human elements that go into great brand systems. Here are my notes:</p> <ul> <li>Jeremy founded Etsy’s global brand design studio, and he believes the future of branding is in-house. After that, he went to PepsiCo and ultimately to Uber.</li> <li>What is Brand? There are two definitions. Ogilvy’s, which is <strong>“a product and it’s attributes.”</strong> The other definition is more intangible: <strong>“A person’s gut feeling about a product or company.”</strong> Brand is an invisible art that’s instinctively felt.</li> <li>As a brand systems creator, you should be a psychologist, a designer, and a magician. There’s a slight of hand involved as <strong>you’re trying to get people to feel the way we want them to feel.</strong> All three of those things involve human perception.</li> <li>Types of branding work: Studio work (doing work for clients), corporate (bureaucracy, global in scale), startups (many startups don’t establish an actual brand until they reach a certain point)</li> <li>Iteration is crucial. A great brand should be seen as a living, breathing investment. There shouldn’t be a black box around the brand system and it should be tested in the real world. Brand and product designers should work closely together.</li> <li>Brand in the context of human lives means exhibiting sympathy. How do we design for human lives?</li> <li><strong>Breathe</strong> -Brand guidelines should move and flex. Sometimes you have to break the guidelines in order to make something look good. Feedback is one of the most important aspects of a successful design system.</li> <li><strong>Grow</strong> – Define the guidelines and grow and make more comprehensive over time.</li> <li><strong>Learn</strong> – We all start with the basics. Test things out. You won’t know what works and what doesn’t until you actually try it out.</li> <li><strong>Feel</strong> – Emotional aspect of the brand. Laugh, cry, etc</li> <li><strong>Fear</strong> – A great idea scares the crap out of everyone. At Etsy, anyone can push production code to the live site, which is a scary concept.</li> <li><strong>Age</strong> – It’s ok to have a patina on your system as it leads to authenticity over time.</li> </ul> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Clarity Conf: Baking Accessibility In]]> 2016-04-02T17:29:19Z 2016-04-02T17:29:19Z <![CDATA[At Clarity Conference in San Francisco, Cordelia McGee-Tubb talked about building accessibility into design systems. Here are my notes: Cordelia is an accessibility specialist at Dropbox and used to work at Salesforce. Accessibility is creating experiences that anyone can use regardless of their abilities. Creating flexible systems should consider the experiences of people with disabilities from the very start, not just through tacked-on accommodations. As style guide makers, you are creating the cookbooks that everyone else reads. Things like Bootstrap are like […]]]> <p>At <a href="">Clarity Conference</a> in San Francisco, <a href="">Cordelia McGee-Tubb</a> talked about building accessibility into design systems. Here are my notes:</p> <ul> <li>Cordelia is an accessibility specialist at Dropbox and used to work at Salesforce.</li> <li><strong>Accessibility is creating experiences that anyone can use regardless of their abilities.</strong> Creating flexible systems should consider the experiences of people with disabilities from the very start, not just through tacked-on accommodations.</li> <li.</li> <li>People are talking less about <em>accessibility</em> these days, but rather talking about about <em>universal design</em>. <strong>Think about users that have needs different than your own.</strong></li> <li>Foundational ingredients of a component system: <strong>use semantic HTML.</strong> Screen readers rely on crawling HTML, so make it semantic. If it looks like a button and functions like a button, make it a <code>button</code>.</li> <li><strong>Form fields:</strong> Don’t remove <code>label</code>s and use <code>placeholder</code> instead. Use visible <code>input</code> fields and create an association using <code>for</code>.</li> <li>Give images <code>alt</code> text, even for icons. You can use aria-hidden=true to have screenreaders ignore an icon that already has adjacent text next to it.</li> <li>Include a dash of <a href="">ARIA</a>. Use ARIA whenever you have models, menus, accordions, tabs, alerts, trees, etc.</li> <li>Make keyboard interactions as rich as mouse interactions. Lots of people don’t interact with your experience with a mouse.</li> <li>Don’t forget about focus styles. Lots of people don’t like default focus styles, but don’t just remove them entirely. Create better, custom focus styles, which is good for branding as well as accessibility.</li> <li>Color. There are two color rules: Maintain a reasonable contrast between text and background colors, and don’t use color alone to convey meaning.</li> <li>Navigate in grayscale mode to address color-specific accessibility issues.</li> <li><strong><a href="">Scooter</a> is Dropbox’s CSS library</strong> and has a lot of accessibility stuff built in.</li> <li><strong>Include accessibility info in your documentation.</strong> Lightning Design System and Bootstrap include accessibility-specific stuff. Write component-specific documentation. Encourages people to pay attention to accessibility and spreads best practices. General documentation: include general accessibility guidelines that helps set the stage. Use both together for holistic documentation.</li> <li>Bake accessibility in from the beginning. Use semantic HTML, alt text, ARIA, keyboard accessibility, Be mindful color usage, and write good accessibility documentation</li> </ul> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Clarity Conf: Communicating Animation]]> 2016-04-02T17:13:43Z 2016-04-02T17:13:43Z <![CDATA Lightning Design System. Animation is important for consistent branding and UX. It makes people feel like there are physical laws that your UI adheres to. Challenges for consistent animations: communication issues, inadequate deliverables for implementation, and a lack of respect. It’s […]]]> <p>At <a href="">Clarity Conference</a> in San Francisco, <a href="">Rachel Nabors</a> talked about UI animation and how to document animation within a design system. Here are my notes:</p> <ul> <li>Rachel has worked on <a href="">Dev Tools Challenger</a>, <a href="">Alice in Videoland</a>, and helped with the <a href="">motion portion</a> of Salesforce’s Lightning Design System.</li> <li><strong>Animation is important for consistent branding and UX.</strong> It makes people feel like there are physical laws that your UI adheres to.</li> <li>Challenges for consistent animations: communication issues, inadequate deliverables for implementation, and a lack of respect. It’s hard to find good animation guideline examples online.</li> <li>Designers want animation docs to be thematic, theoretical, and educational. Developers want them to be granular, component-based, instructional, maintainable.</li> <li>With documentation, you can show what’s there and why, provide smart defaults, provide unity (choreography), and guidance for future contributors.</li> <li>Animations designed in code can often look too mechanical, sort of like designs that start in code.</li> <li>Developers loathe getting gifs and movies, then are told “here, just make this.”</li> <li><strong>Easing:</strong> Rate of change: ease-in, ease-out, bounce. Cubic beziers help make custom animations.</li> <li>Fades and color changes look best with more linear, subtle curves. Bounces increase animacy and add an air of fun.</li> <li><strong>Timing:</strong> animation can’t happen without a duration. For the Lightning Design System, they established a motion modular scale for animation timing.</li> <li>Timing limitations: try to stay inside 70 to 700 milliseconds. 200-300ms is a sweet spot. Shorter durations for fades and color changes, longer durations for large movements, use milliseconds instead of seconds.</li> <li><strong>Properties</strong>: what’s being changed? <code>opacity</code> and <code>transform</code> are the most performant properties to animate on the web.</li> <li>“The art challenges the technology and the technology inspires the art.” – Pixar. Push the limits of what’s technically possible.</li> <li><strong>Combine easing, timing, and properties to make an animation vocabulary</strong></li> <li>Making an animation vocabulary: “zoom” vs “zoooooooooom” pay attention to how people communicate animation. Pave the cowpaths when establishing your animation language (i.e. look at Keynote’s motion transition effects). Use your words to communicate animation.</li> <li><strong>Storyboards</strong> – began in film but can be used on the web as well. Sometimes storyboards can be more useful than a video for documentation. <strong>Define the actions:</strong> when [this trigger happens], do [this action]. Storyboarding tools: go old school with Post-Its, Rachel has a storyboard template, <a href="">storyboardsthat.com</a>.</li> <li><strong>Animatics</strong> involves demoing animation.An animation is worth a thousand meetings. They’re not deliverables: don’t throw videos at developers. They’re guidelines and they don’t provide people with actionable content. Animatics tools: Keynote, AfterEffects, Principle app, Stop Motion app</li> <li><strong>Prototypes</strong> – real working examples can be great to demo animation, but they’re bad for documentation. Prototyping tools: Native-oriented tools: principle, pixate, web-oriented: Invision + gifs, UX Pin, Framer.js</li> <li><strong>Combine storyboards, animatics, prototypes to provide solid documentation</strong> on look and feel as well info on how to reproduce animations</li> <li><strong>Generating buy-in:</strong> not everyone is as excited about animation as you. Group documentation can help people feel involved. Wireframe exercise. Cultivate and champion animation: get more people on board. Get a co-conspirator on board.</li> <li>Do it anyways. If people aren’t caring about animation, just make it happen anyways.</li> </ul> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Clarity Conf: Being Human, Being Slack]]> 2016-04-02T16:56:55Z 2016-04-02T16:56:55Z <![CDATA[At Clarity Conference in San Francisco, Anna Pickard discussed everything that goes into creating a cohesive voice and tone at Slack. Here are my notes: Anna works at Slack, and is tasked with making a style guide to deal with the company’s fast growth. The goal is to maintain a consistent tone of voice, as they want Slack to feel like another member of your team. How do you scale that? Rules: glossary and grammar, oxford comma, emdash, etc. They created a Gloss Bot, […]]]> <p>At <a href="">Clarity Conference</a> in San Francisco, <a href="">Anna Pickard</a> discussed everything that goes into creating a cohesive voice and tone at <a href="">Slack</a>. Here are my notes:</p> <ul> <li>Anna works at Slack, and is tasked with making a style guide to deal with the company’s fast growth. The goal is to maintain a consistent tone of voice, as they want Slack to feel like another member of your team. How do you scale that?</li> <li>Rules: glossary and grammar, oxford comma, emdash, etc.</li> <li>They created a <a href="">Gloss Bot</a>, which is a Slack bot that pulls up definitions and rules around grammar</li> <li><strong>Voice and tone</strong>: Asking people to be like you ends up with them doing a bad impression of you. How do you sound human and strike the write tone?</li> <li.</li> <li>Style rules: <strong>important to define what you <em>don’t</em> sound like and what you <em>don’t</em> do.</strong></li> <li>Looked at MailChimp’s <a href="">Voice and Tone</a> as a starting point. <em>This but not that</em> is a good place to start.</li> <li>Break down the voice into shared characteristics of the brand.</li> <li><strong>Empathy:</strong> Who am I talking to? What emotional state are they in? What is the context? What do I want them to take away from this?</li> <li><strong>Courtesy:</strong>.</li> <li><strong>Craftsmanship: </strong>let people know how and why you’re making the decisions you’re making. Work as transparently as possible. Is this as good as I can make it? Could someone else make it better? What are the options?</li> <li><strong>Playfulness:</strong> have an open mind. What do error messages (amongst other things) usually sound like? How can I do this differently? Put a different spin on it.</li> <li><strong>Honesty:</strong> be open and honest and fair to your users. Is this really what I want to say? Is this really the way I would say it? How human does it sound if I read it out load?</li> <li><strong>Creating a style guide is about finding a “unity of intention.”</strong></li> </ul> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Clarity Conf: Crawl, Walk, Run – the Evolution of a Design System]]> 2016-04-02T16:46:45Z 2016-04-02T16:46:45Z <![CDATA and lots of external developers. Historically, CSS was written by engineers who would rather be writing Java. It became very clear they needed a design system. There are a lot of developers, but only about 100 people on their UX […]]]> <p>At <a href="">Clarity Conference</a> in San Francisco, <a href="">Stephanie Rewis</a> & <a href="">Brandon Ferrua </a>discussed the creation of the CSS framework that’s part of Salesforce’s <a href="">Lightning Design System</a>. Here are my notes:</p> <ul> <li>Salesforce has 20,000 employees and has made lots of acquisitions. There are internal developers and lots of external developers.</li> <li>Historically, CSS was written by engineers who would rather be writing Java. It became very clear they needed a design system.</li> <li>There are a lot of developers, but only about 100 people on their UX team. They touch lots of data in addition to the UI.</li> <li><strong>Learning to crawl</strong> – Launched Salesforce1, and launched a living style guide to go along like it. They got lots of great feedback, including external developers who said “How can my app look like this?”</li> <li>Needed a more comprehensive design system to serve all their products and users.</li> <li>Design system components: CSS Framework + UI Library. The goals were to eliminate red lines and instead reference real code, and also make it easy for people to copy and paste.</li> <li>They did a design audit and <a href="">inventoried</a> all components in designers’ comps.</li> <li><strong>Design Tokens:</strong> Fonts, Font sizes, weights, line heights, background colors, etc. These are shared attributes across all their products, so they wanted to make a system to keep that stuff consistent.</li> <li><strong><a href="">Theo</a></strong> is an open source tool that allows Salesforce to share tokens across products using a JSON object that propagates out to all instances (across web and native applications).</li> <li>Broke down components down to their smallest patterns and objects.</li> <li><strong>Clarity and understandability are key to worthwhile class names.</strong> This may lead to more verbose class names, but <strong>clarity trumps verbosity.</strong></li> <li>Used modified BEM conventions. They had to modify syntax because BEM and XML don’t play nicely (double dashes aren’t allowed in XML comments)</li> <li><strong>Learning to Walk – </strong>Enterprise apps are unique. They demand content and data-rich interfaces. They lack vertical rhythm normally found in more document-centric UIs.</li> <li>Heading levels may vary and components should be agnostic. They flattened all the font-sizes in headings so developers are encouraged to think about semantics without worrying about visual styling.</li> <li><strong>Accessibility</strong> – use ARIA roles, REM units (to address users changing default settings),</li> <li><strong>Play well with others</strong> – The Salesforce CSS framework namespaces their styles to avoid conflicts with legacy/external components (i.e. <code>.button</code> coming from some other framework). They added a wrapper div to scope components that are specifically Salesforce.</li> <li><strong>Learning to Run</strong> – How do we maintain consistency across a massive organization? How do you scale? How do we keep our design system agnostic?</li> <li><strong>Minimize dependencies</strong> – didn’t overcomplicate things. Didn’t overpromise on the system.</li> <li>You don’t know what you don’t know. What makes up your ecosystem? Who are your customers? Try to understand your potential footprints.</li> <li>MVP: Tokens, Icons, HTML, CSS, Guidelines. They specifically avoided adding JavaScript to their framework, as different developers use different frameworks: Lightning, React, Angular, jQuery, etc.</li> <li><strong>States and variants</strong> – The pattern library demonstrates all the states and variants that components could have. The code preview shows what classes are added/removed from the component in order to accomplish that state. This gives designers and developers a good idea of all the permeations of any component.</li> <li><strong>Documentation</strong> – Show developers how to accomplish with Javascript without being tool-specific. Build accessibility into the documentation and make it easy to implement.</li> <li>Avoiding the potholes – We have to be forward thinking but backwards compatible – what happens when a change is written? Salesforce has 3 releases a year that the design team needs to support.</li> <li>Design system team need to provide a migration path to get developers to update their applications. Developers need to understand what has been modified, retired, or added.</li> <li><strong>Sass deprecate</strong> – build deprecation messages into Sass workflow to give warning to developers that code is going away soon.</li> </ul> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Clarity Conf: Code Patterns for Pattern-Making]]> 2016-04-02T16:18:22Z 2016-04-02T16:18:22Z <![CDATA consistency, clarity, and efficiency. Context matters – not everything needs to be Material Design. Patterns and design systems should fit the needs of the project or organization. Is it serving a 10 person team or a 200 person team? Is it for […]]]> <p>At <a href="">Clarity Conference</a> in San Francisco, <a href="">Miriam Suzanne</a> talked about establishing patterns to make your codebase easy to document and maintain. Here are my notes from her talk:</p> <ul> <li>Miriam works at <a href="">Oddbird</a> and is the creator of <a href="">Susy</a> amongst many other web tools.</li> <li><strong>Patterns lead to consistency, clarity, and efficiency.</strong></li> <li>Context matters – not everything needs to be Material Design. Patterns and design systems should fit the needs of the project or organization. Is it serving a 10 person team or a 200 person team? Is it for an internal company or is it in a consulting engagement?</li> <li>Very inspired by <a href="">CSS Systems</a> by Natalie Downe. She was one of the first people to talk about pattern portfolios and modular CSS.</li> <li><strong>Our product is the architecture</strong> – a solid foundation for hand-off to a client. Architecture, rather than the product, is really what they’re selling their clients</li> <li>Style guides are unit tests and integration tests</li> <li>Patterns combine languages: design, HTML, CSS, and JavaScript</li> <li><strong>Style guides represent integration</strong> – they show context & relationships to help people make better projects.</li> <li><strong>Maintenance must be integrated into processes.</strong> If the style guide isn’t integrated with the team’s workflow, it will fade away.</li> <li>Pattern API is what we’re trying to build. We want to reduce opportunities to deviate from the system.</li> <li>Basics of Web Architecture – HTML patterns are usually the core of the pattern, but we tend to focus on the CSS.</li> <li>SMACSS, OOCSS, BEM, ITCSS, Atomic are all great methodologies for someone. There’s no one right way to do things, but it’s important to understand why you’re choosing one method over another.</li> <li><strong>Separation of concerns</strong> – data, logic, structure and presentation are need boundaries. These are fuzzy lines but they’re still lines.</li> <li>Sepecificity is your guide – <a href="">Inverted triangle</a>. CSS was designed for patterns. Classes are patterns: Don’t repeat yourself. Elements share a purpose.</li> <li>Sass extends work well when used semantically to represent “is-a” relationships.</li> <li><strong>Keep naming conventions consistent across the entire team</strong> – that doesn’t necessarily mean always dashes or underscores, but the naming should be consistent. What is this thing? The answer should be clear from the name. An example: layout region – component – element – state- js-hook</li> <li><strong>`is-`</strong> pattern from SMACSS is a great pattern.</li> <li><strong>`js-`</strong> is helpful for Javascript-speicific stuff</li> <li><strong>Come up with whatever conventions you like, but make sure the whole team knows it, understands it, and practices it.</strong> There’s no one right answer, but “no answer” is totally wrong.</li> <li>Sass patterns – defining colors can be done a ton of different ways. Look for legibility and scalability. Every technique has tradeoffs.</li> <li><strong>Make documentation the lazy option</strong> – make it easier to go get the variable rather than going rogue.</li> <li><a href="">Sassdoc</a> is great, <a href="">Herman</a> is an extension to it. Keeps documentation automatic and easy. Sass maps can be automatically converted into a living code style guide.</li> <li>Systems > solutions. Tools should fit you and your team’s workflow, rather than the other way around.</li> <li>Patterns should get down to the essential. Remove extra stuff like CSS properties that don’t actually apply to the pattern.</li> <li><a href="">True</a> for Sass unit tests.</li> <li>Lint your tools</li> <li>HTML Templates Jinja/Nunjacks pre-processors for your markup and help prevent deviation from standards.</li> <li>Look for opportunities to simplify things and make things more legible</li> <li>They’ve failed a bunch of time. Building by hand means that it will become obsolete</li> <li>Patterns should be meaningful. Make it easy to use patterns. Users just need to learn the abstraction but are shielded from all that goes into it.</li> </ul> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Clarity Conf: Deconstructing Web Systems; or, A Pattern Language for Web Development]]> 2016-04-05T11:54:06Z 2016-04-01T21:01:13Z <![CDATA surroundings. Pattern language is a method of describing good design practices within a field of experiences. Christopher Alexander wrote A Pattern Language and Synthesis of Form. These books became required reading for researchers and computer science, which ultimately influenced Object-Oriented […]]]> <p>At <a href="">Clarity Conference</a> in San Francisco, <a href="">Claudina Sarahe</a> discussed the many types of patterns that make up frontend web systems. Here are my notes:</p> <ul> <li>Claudina works on <a href="">Casper</a> on their design system.</li> <li>Patterns are a collective memory of things which work in our surroundings.</li> <li>Pattern language is a method of describing good design practices within a field of experiences.</li> <li>Christopher Alexander wrote <em><a href="">A Pattern Language</a></em> and <em><a href="">Synthesis of Form</a></em>. These books became required reading for researchers and computer science, which ultimately influenced Object-Oriented Programming.</li> <li><em>A Pattern Language</em> – the main goal was to empower people to design and build at scale. You don’t always need an architect to build things. Alexander intended to release the book as a three-ring binder, as was thinking about evolution and adaptation.</li> <li><strong>Anatomy of a pattern:</strong> Name, Context – a way to identify, Problem, Solution(s), Related patterns</li> <li>Do we need a pattern language for front-end? <strong>Pattern languages formalize values, an structure our knowledge of complex systems.</strong> A pattern language forces us to remember we’re working in a system.</li> <li>Frontend is more than just design systems, and there are many types of patterns we encounter:</li> <li><strong>Global patterns:</strong> Community guidelines, temporary autonomous zones (conferences and meetups), interdependent disciplines (UI, UX, design), web guidelines (W3C), open borders,</li> <li><strong>Process patterns:</strong> Philosophy of the work. Purpose, Planning/management/ code reviews, cross functional teams, single origin of truth, documentation, naming, design systems</li> <li><strong>Workspace patterns:</strong> Editors, CLI, Syntax highlighing, Shortcuts, Git/Github, Version & Dependency Management, Configuration/Settings</li> <li><strong>Project patterns:</strong> build tools, dependencies, directory structure, linters, composable, HTML templating, CSS methodologies, JS, Content Strategy, Shareable Data, IDs</li> <li><strong>Poetic structures:</strong> Casper’s design system <a href="">Night Shade</a> is being released later this month. <a href="">Ando</a> is a frontend generator and a methodology for building systems. Static site generator that builds their production sites.</li> <li><strong>Naming and purpose.</strong> <a href="">Ando</a> is an architect who influenced Casper’s work. “To change the dwelling is to change the city and to reform society.” Casper believes to change your sleeping situation can change your life.</li> <li>Ando’s pattern language: purpose, open borders, templating, shareable data, build tools, buldling, configuration,</li> <li><strong>Open borders:</strong> the system should work with cross-disciplinary teams. Don’t want to have silos, and anyone should be able contribute to the system. The system used to run on Ruby on Rails, but that wasn’t designer-friendly as there were too many dependencies. A core ethos of their new system is a low barrier to entry, which means that more people can contribute.</li> <li><strong>Documentation</strong>: A way to record decisions. Code isn’t self-documenting and needs to be easily referenced. Use community-vetted code tools. Pick one place to put it all and aim for establishing standards. SassDoc, JSDoc, Read the Docs, Github wiki/readme. <strong>HTML, CSS, and JS documentation is all formatted in the same style,</strong> making it easier for team members to document decisions.</li> <li><strong>HTML Templates:</strong> solves hard-to-maintain code. <a href="">Nunjucks</a>, Jade, Swig, Handlebars/Mustache. Create reusable pieces of code that abstracts the markup, letting team members make a macro (i.e. content panel) that is basically a mixin that can be passed different parameters.</li> <li><strong>Directory structure</strong> involves organizing and grouping project files. Structuring things right means you don’t have to waste a lot of time looking for files. <strong>Component-based structure</strong>: group everything around a component in one place (template, CSS, JS, etc). If you need to delete something, you can delete the containing folder. Build processes allow us to compile to any type of directory structure, so you should choose a directory structure that makes sense for authoring your frontend code.</li> </ul> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Clarity Conference: Beyond the Toolkit: Spreading a System Across People & Products]]> 2016-04-01T19:19:08Z 2016-04-01T19:17:12Z <![CDATA[At Clarity Conference in San Francisco, Nathan Curtis talked about making design systems and how to make sure they take root at your organization. Here are my notes: Google hit a home run with Material Design, but is it all really that consistent? There are disparities across their products still. Type, color, and iconography are three big elements that we pay a lot of attention to when evaluating consistency. People might design a bunch of different card designs. Are they solving the same […]]]> <p>At <a href="">Clarity Conference</a> in San Francisco, <a href="">Nathan Curtis</a> talked about making design systems and how to make sure they take root at your organization. Here are my notes:</p> <ul> <li>Google hit a home run with Material Design, but is it all really that consistent? There are disparities across their products still.</li> <li>Type, color, and iconography are three big elements that we pay a lot of attention to when evaluating consistency.</li> <li>People might design a bunch of different card designs. Are they solving the same problem? Should they be normalized?</li> <li><a href="">Component cut up workshop</a> – Print out screens and have people cut up the UIs. Afterwards, start categorizing those components. It’s a great exercise to get teams thinking about components and establishing a shared vocabulary.</li> <li>Making design systems involves defining the <a href="">parts, products, & people</a>.</li> <li>Establish a roadmap – what are the expectations of this initiative? What can we tackle first, and where do we want to end up?</li> <li>Mission accomplished – we just launched our living style guide! We love our artifact! But then the discussion stops and it immediately stops being useful.</li> <li><strong>A successful design system has to create a more cohesive user experience, encourage collaboration, and create efficiencies.</strong></li> <li>The design system can be broken into <strong>parts, people, and products</strong></li> <li><strong>Parts</strong> – what does the design system encompass? Voice and tone? Components? Code? What are the priorities? Nathan has a <a href="">worksheet</a> and exercise to help teams figure this out.</li> <li><strong>People</strong> – who is the design system supposed to reach? Meta tools for other builders (Foundation & Bootstrap), Material Design (people making things within Google), Harmony (a portfolio’s guides), A team’s playbook</li> <li>Sometimes a style guide fails because it didn’t live up to expectations and aspirations. Not everything needs to be Material Design, but it should be useful. When they don’t take root people become discouraged.</li> <li>In order for the design system to be successful, you have to define what’s going on</li> <li>Choose the flagships products: what are the main products that are going to be used to create the design system? What are the auxiliary products. Define what the priority of products would be, and sse their launch dates to establish your roadmap.</li> <li>What’s your get? Portfolio of web properties – (Home Products, Support, About us, training)</li> <li>Avoid distractions when establishing your design system. <strong>What are distractions for the design system? Homepages!</strong> Very political, often bespoke designs that don’t necessarily fit in with the rest of the UI. Start elsewhere.</li> <li><strong>Build one-offs.</strong> It’s alright to build things that are one offs. Not everything needs to run off of the design system. (Related: <a href="">design system flexibility</a>). Doesn’t need to be a dictatorship.</li> <li><strong>Hook system access via code they’ll use first</strong>. Get the basic shell of the page (header, footer, etc) to work across the entire experience and people will be excited to normalize more of the UI.</li> <li>For Marriott: they have web sites, web apps, native apps. Success with a design system means moving meaningful metrics, such as increasing bookings.</li> <li><strong>Radiate influences from web apps</strong> – Web apps are often the most trackable,transactional, utilitarian things and can be good candidates for leading the design system creation.</li> <li><strong>Demonstrate value across the journey</strong> – Rather than working in independent streams making a bunch of static PSDs, make integrated, responsive, clickable prototypes. <a href="">Livia Labate</a> – “The [clickable prototype] demo convinced me anyone in 1 minute, leaving me 59 minutes to dig into heftier topics.” Show don’t tell, and showcase the entire journey rather than one stream.</li> <li><strong>People</strong> – How are people set up to make and manage the design system?</li> <li><strong>Model 1: Solitary</strong> – One person makes it, and everyone should (or has to) use it. Doesn’t get used because it often doesn’t take other people’s considerations into account. Overlords don’t scale – Sun had one guy who made a style guide, but it doesn’t scale.</li> <li><strong>Model 2: Centralized</strong> – Build a centralized team to service the design system. This is great as you have dedicated people maintaining the system. But those people lack of specific product context, so it can be difficult for those folks to know whether the system is working or not out in the real world.</li> <li>Design systems are evident in modern organizations. Modern organizations are reorganizing to make design systems a priority. Change titles – Jina Bolton changed her title to explicitly mention “Design Systems” rather than just “Product Designer”. That stuff matters.</li> <li><strong>Model 3: Federated</strong> – People are distributed across teams, but contribute to the centralized design system. There may be dedicated people on a design systems team, but there are people on individual teams who contribute to the system.</li> <li>Find your inner connector – Some people will want to drive change, some people influence change, and some people won’t care. Watching people who are driving, influencing, ignoring, design changes.</li> <li>@cap’s <a href="">sliding scale of giving a fuck</a> – Create a framework around how strongly people feel when there’s a disagreement.</li> <li>Designate “go to”s, not deciders – in a federated system, Go-tos are people who are embedded in teams across different disciplines who are aware of all the decisions made around the design system. Still dedicated to their teams but are the go-to people when there are questions around the system.</li> <li>Mix doers with delegators – The doers are in the design tools dealing with the problems and constraints of the design system. Delegators are directors that have power to make change. Mix those people together for successful results.</li> <li>There will be people who will make the design systems, but those people need to be aligned with other people across the organization. Who are those product owners and other people who will be impacted by the design system, and how can you make sure the design system aligns with their needs. Help them become advocates.</li> <li>After every design system project, Nathan’s company asks the team members: <a href="">what was satisfying and what was dissatisfying</a>? Some feedback: “Alignment is too squishy and is hard to define.” Not as tangible as other aspects of the process.</li> <li>Embrace new responsibilities: you might be a maker, who’s used to designing, writing, collaborating, reviewing, etc. But as the design system grows, new responsibilities emerge: product manager, editor, seller, evangelist, connector, aligner. Not everyone is cut out for these tasks, but it’s important for <em>some people</em> to own the design system.</li> <li>Have a CEO or executives make the design system a priority. “First things first: you muse have total support from the top.” Having top-down initiatives helps make design systems happen and take root.</li> <li><a href="">A design system is a living, funded product</a>.</li> </ul> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Clarity Conf: Building empowering style guides with practical research]]> 2016-03-31T18:21:14Z 2016-03-31T18:21:14Z <![CDATA[At Clarity Conference, a conference all about design systems and style guides (!) in San Francisco, Isaak Hayes & Donna Chan describe their process for creating a style guide at AppDirect. Here are my notes: Style guides should be usable for users and have an positive impact on the organization. At AppDirect they made a style guide but found that it wasn’t addressing user needs. They built components in isolation and had a hard time getting it adopted. So they talked to […]]]> <p>At <a href="">Clarity Conference</a>, a conference all about design systems and style guides (!) in San Francisco, <a href="">Isaak Hayes</a> & <a href="">Donna Chan</a> describe their process for creating a style guide at <a href="">AppDirect</a>. Here are my notes:</p> <ul> <li>Style guides should be usable for users and have an positive impact on the organization.</li> <li>At AppDirect they made a style guide but found that it wasn’t addressing user needs. They built components in isolation and had a hard time getting it adopted.</li> <li>So they talked to other companies. Other organization made things in silos, leading to a misalignment of needs. Things were either too technical or led to different directions. As a result these things were thrown in the trash. People saw it as a big waste of time and effort, which led to a negative perception of style guides. That made it harder to try again.</li> <li>They went about fixing this by establishing a style guide research process: <strong>Discover, Interview, Understand, Define</strong></li> <li><strong>Discover</strong> – Who are the people we need to talk to? Style guide means different things to different people. Cast a wide net to capture everyone who will be affected by the style guide.</li> <li>Different kinds of people to interview: <strong>Users, builders, and stakeholders</strong>.</li> <li><strong>Style Guide Users</strong> – designers, developers product managers, QAs, Sales, Marketing, docs, people who will be making use of the style guide</li> <li><strong>Builders</strong> – Designers, frontend, engineers, PMs, docs, people who will actually be creating the style guide</li> <li><strong>Stakeholders</strong> – CEO, Department heads, project leads, people who will be influenced by the creation of the style guide.</li> <li>There may be overlap in the different user groups</li> <li>What current projects will be affected by the style guide? Talk with those people who work on current and future projects</li> <li><strong>Consider product-specific needs</strong> – not a blanket system across the board. Perhaps your organization has a themeable UI</li> <li><strong>Interview</strong> – What problems are we trying to solve with the style guide?</li> <li><strong>Interviewing users</strong> – What are your pain points? Uncover goals – What would a style guide enable you to achieve? Usability – what info do you need from a style guide?</li> <li><strong>Interviewing builders</strong> – Also may be users, so ask them same questions. What goals do they have? What makes a successful style guide? Uncover requirements</li> <li><strong>Interviewing stakeholders</strong> – Uncover the problems they hope to solve. What goals do they have?</li> <li>Interviewing tips: have face-to-face interviews, pull out nuggets, use sticky notes, and if you have to divide and conquer to complete interviews</li> <li>Understand – how do you make sense of all the interviews and info? Find common trends across interviewees and make problem statements</li> <li><strong>Define</strong> – how to take those problem statements and do something with them? Defined principles, user stories, and metrics</li> <li><strong>Principles</strong> – Take problem statement and convert to key principle (“redline designs take forever!” translates to <em>efficiency</em> as a key principle). Whatever principles you establish should get your team aligned to do good work together.</li> <li>Create user stories – Take a principle and convert it into an actual use case (For efficiency as a principle: “As a designer, I need to communicate basic elements of a page to an engineer”)</li> <li><strong>Define Metrics</strong> – What are the effects of a principle like efficiency? Maybe a decrease in JIRA tickets? Shorter code review? Fewer Github changes? Faster production?</li> <li>Maybe send out surveys if concrete metrics are hard to come by. How are people feeling before and after?</li> <li><strong>Problems</strong> – discover pain points</li> <li><strong>Principles</strong> – help guide the process</li> <li><strong>User Stories</strong> – know exactly what we’re building and for whom</li> <li><strong>Metrics</strong> – measure the impact of the style guide</li> </ul> <p><img class="alignnone size-large wp-image-9759" src="" alt="IMG_0555" width="1024" height="768" sizes="(max-width: 1024px) 100vw, 1024px" /></p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Trump.css]]> 2016-03-14T13:12:05Z 2016-03-14T13:12:05Z <![CDATA[Trump.css pic.twitter.com/8Ezki3F7fg — Brad Frost (@brad_frost) March 14, 2016]]> <blockquote class="twitter-tweet" data-<p lang="en" dir="ltr">Trump.css <a href="">pic.twitter.com/8Ezki3F7fg</a></p> <p>— Brad Frost (@brad_frost) <a href="">March 14, 2016</a></p></blockquote> <p><script async</script></p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[AMA on Designer News]]> 2016-03-02T20:47:06Z 2016-03-02T20:47:06Z <![CDATA[I’m doing an Ask Me Anything session on Designer News. I’m happy to answer any questions you may have, especially around atomic design, the role of frontend design in the design process, responsive design, bullshit, music, art, and whatever else. Fire away!]]> <p>I’m doing an Ask Me Anything session on <a href="">Designer News</a>.</p> <blockquote><p>I’m happy to answer any questions you may have, especially around atomic design, the role of <a href="" rel="nofollow">frontend design</a> in the design process, responsive design, bullshit, music, art, and whatever else. Fire away!</p></blockquote> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Peak Style Guide]]> 2016-02-22T14:56:21Z 2016-02-22T14:50:52Z <![CDATA […]]]> <p>Chris Coyier ended the latest <a href="">CSS Tricks newsletter</a> with some thoughts about style guides.</p> <blockquote><p>As much as I love style guides, and I <em>love</em> style guides, I wonder if we’re at Peak Style Guide. I’ve seen logos and mascots. I’ve seen dedicated sites and open repos begging for contributions. I’ve gotten generic emails from marketing companies peddling some companies style guide.</p> <p>Everybody’s way isn’t the One True Way. The whole point of a style guide is to guide the particular style of some specific brand. <a href="" target="_blank">AM I CRAZY?</a> The success of your style guide is how useful it is, not how many stars it has.</p></blockquote> <p>I absolutely agree with Chris that style guides should be first and foremost useful for the people who use them. If you work at a place where a style guide can be effectively maintained with only a Github repository with a decent README, more power to you!</p> <p>But I don’t think these style guide logos and mascots Chris refers to are simply gratuitous visual wankery.<strong> It reflects an organization’s commitment to making and maintaining a thoughtful, deliberate design system.</strong></p> <p>Taking the time to craft a good-looking, <a href="">visible</a> package for a design system:</p> <ul> <li><strong>Helps build awareness</strong> of the design system and helps get the organization excited about and invested in the concept. This can lead to more time, funding, and resources dedicated to maintaining and growing the design system.</li> <li><strong>Makes it much more <a href="">approachable</a></strong>, <em>especially</em> for non-technical folks. This can lead to a diversity of perspectives being represented in the design system, resulting in an effective shared vocabulary and better cross-disciplinary collaboration.</li> <li><strong>Creates a sense of accountability</strong> for the organization, which better ensures the design system is utilized and doesn’t just die on the vine.</li> <li><strong>Can assist <a href="">recruiting</a> efforts</strong>, as designers and developers are looking to work at organizations that embrace modern Web best practices.</li> <li>Provides an <strong>opportunity to make the resource bigger</strong> to include other style guide types like <a href="">brand assets</a>, <a href="">design language</a>, <a href="">voice and tone</a>, and <a href="">writing</a>.</li> </ul> <p>When I see well-designed, holistic pattern library like Salesforce’s <a href="">Lightning Design System</a>:</p> <p><a href="" rel="attachment wp-att-9717"><img class="alignnone size-large wp-image-9717" src="" alt="Lightning Design System" width="1024" height="502" sizes="(max-width: 1024px) 100vw, 1024px" /></a></p> <p>I see an organization’s commitment to a sound, deliberate design system that’s meant to stand the test of time. If style guide logos, mascots, and dedicated sites help organizations get on board and excited about design systems, I’m all for ’em.</p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Frontend Design]]> 2016-02-18T01:38:36Z 2016-02-17T13:55:28Z <![CDATA, unicorn, or Bo Jackson) lives in a sort of purgatory between worlds: They understand UX principles and best practices, but may not […]]]> <p><img class="alignnone size-large wp-image-9697" src="" alt="Frontend design" width="960" height="540"></p> <p>Somewhere between design – a world of personas, pixels, and polish – and engineering – a world of logic, loops, and linux – lies frontend design. <strong>Frontend design involves creating the HTML, CSS, and presentational JavaScript code that makes up a user interface. </strong></p> <p>A frontend designer (who may also go by UI developer, client-side developer, UI engineer, <a href="">design engineer</a>, frontend architect, designer/developer, prototyper, unicorn, or <a href="">Bo Jackson</a>) lives in a sort of purgatory between worlds:</p> <ul> <li>They understand UX principles and best practices, but may not spend their time conducting research, creating flows, and planning scenarios</li> <li>They have a keen eye for aesthetics, but may not spend their time pouring over font pairings, comparing color palettes, or creating illustrations and icons.</li> <li>They can write JavaScript, but may not spend their time writing application-level code, wiring up middleware, or debugging.</li> <li>They understand the importance of backend development, but may not spend their time writing backend logic, spinning up servers, load testing, etc.</li> </ul> <p <a href="">book</a>:</p> <blockquote><p>When a previous employer discovered I wrote HTML, CSS, and presentational JavaScript, they moved me to sit with the engineers and back-end developers. Before too long I was being asked, “Hey Brad, how long is that middleware going to take to build?” and “can you normalize this database real quick?”</p> <p>Here’s the thing: I’ve never had a computer science class in my life, and I spent my high school career hanging out in the art room. Suffice it to say those requests made me extremely uncomfortable.</p> <p>There’s a fundamental misunderstanding that all coding is ultra-geeky programming, which simply isn’t the case. HTML is not a programming language. CSS is not a programming language. But <strong>because HTML and CSS are still technically code, frontend development is often put in the same bucket as Python, Java, PHP, Ruby, C++, and other programming languages</strong>. This misunderstanding tends to give many frontend developers, myself included, a severe identity crisis.</p></blockquote> <p>This distinction between frontend UI code and “real programming” has real ramifications on organizational structure:</p> <blockquote><p>Organizationally, there is often a massive divide between designers and developers (or “marketing” and “IT”, or “creative” and “engineering”, or some other divisive labels). Designers and developers often sit on different floors, in different buildings altogether, in different cities, and sometimes even in different countries in different continents. While some of this organizational separation may be justified, <strong>creating a division between designers and frontend developers is an absolutely terrible idea</strong>.</p> <p>Here’s the thing: HTML, CSS, and presentational JavaScript build user interfaces – yes, the same user interfaces that those designers are meticulously crafting in tools like Photoshop and Sketch. In order for teams to build successful user interface design systems together, <strong>it’s crucial to treat <a href="">frontend development as a core part of the design process</a>.</strong></p></blockquote> <p>That’s why I’m encouraged to read about how companies like Optimizely are <a href="">structuring</a> their <a href="">teams</a> to include frontend work as part of the design process. <a href="">Jonathan Snook</a> <a href="">shared some absolutely brilliant thoughts</a> on the topic based on his experience at Shopify. I’m excited to see this awareness cropping up, and encourage organizations to include frontend design as a key part of their design process.</p> <p>I personally think that people who are skilled at frontend design are in a great position to help bridge the divide between the design and development worlds. They are <a href="">mortar</a> that help hold the bricks in place. Existing in purgatory between worlds may sound like a bad thing, but it doesn’t have to be! Embrace the fuzziness, encourage frontend designers to exist between worlds, and let collaboration and great work ensue.</p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Email Responses #3: UX Designer or Front-End Developer?]]> 2016-01-27T16:33:36Z 2016-01-27T16:01:13Z <![CDATA[I am currently going through the frustration of working in a traditional ‘waterfall agency’. They are not going to change process, client work is suffering and it’s got to the point that I am looking for a new job. Like yourself, I am very much a designer with front end skills (design, css3, html5, responsive, 12 years experience) minus the heavy duty javascript skills. I can work around JS with some jquery or sourcing plugins that work. My question to […]]]> <blockquote><p>I am currently going through the frustration of working in a traditional ‘waterfall agency’. They are not going to change process, client work is suffering and it’s got to the point that I am looking for a new job.</p> <p>Like yourself, I am very much a designer with front end skills (design, css3, html5, responsive, 12 years experience) minus the heavy duty javascript skills. I can work around JS with some jquery or sourcing plugins that work.</p> <p>My question to you is whether my skillset covers both the ux designer and front end dev role? I see many job opportunities for both but it’s sometimes difficult to choose between.</p></blockquote> <p>I’m of the belief that <a href="">every person</a> who helps make a product is a UX designer. The copy editor who’s making content easy to read and navigate is a UX designer. The backend developer who’s making the site secure and fast is a UX designer. The visual designer who’s using color, typography, and texture to make the site easier to use is a UX designer. Etc. Etc.</p> <p>I don’t necessarily think you have to choose between the two. Quite honestly, most organizations don’t recognize front-end design (a shorthand I use for a UI-focused frontend developer) as a thing, so that wouldn’t necessarily be reflected in job postings.</p> <p>I’d say apply to places for both these roles, and when you land interviews explain what skills you have, what problems you enjoy solving, and what topics you’re passionate about. Talk about the importance of <a href="" target="_blank">mortar</a>. If the interviewer responds with “sorry, we’re looking for someone to crank out a bunch of static wireframes” or “sorry, we’re looking for someone who can build out this RESTful API” then I’d say it’s better to pass than be pigeon-holed. But there most certainly are opportunities out there for you to do what you do best. You have to be willing to articulate what you want in order to get it. And once you get it, have the courage to work the way you need to rather than be stifled by a finite bulleted list of job requirements.</p> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Experts Weigh In: What Is The Most Common Web Design Mistake You See?]]> 2016-01-25T18:12:56Z 2016-01-25T18:12:56Z <![CDATA[Adobe asked me to weigh in on the most common web design mistake I see. The answer in a word: bullshit. But of course I elaborated a bit. […]]]> <p>Adobe asked me to weigh in on the most common web design mistake I see. The answer in a word: <a href="">bullshit</a>. But of course I elaborated a bit.</p> <blockquote><p>I think the most common web design mistake I see is sites not respecting users and their time. This is represented in a number of ways:</p> <ul> <li>Bloated, slow-loading pages</li> <li>Overly-aggressive advertising</li> <li>Popups and overlays</li> <li><a href="" target="_blank">Dark patterns</a></li> <li>Other superfluous or intentionally deceptive practices I lovingly classify as <a href="" target="_blank">bullshit</a>.</li> </ul> <p.</p></blockquote> <img src="" height="1" width="1" alt=""/> Brad Frost <![CDATA[Performance Budget Builder]]> 2016-01-26T18:14:54Z 2016-01-25T17:06:28Z <![CDATA[Performance budgets are awesome. So I made a thing to help you make performance budgets.. […]]]> <p>Performance budgets are awesome. So I <a href="">made a thing</a> to help you make performance budgets.</p> <p><a href="" rel="attachment wp-att-9667"><img class="alignnone size-large wp-image-9667" src="" alt="Make A Performance Budget" width="1024" height="816" sizes="(max-width: 1024px) 100vw, 1024px" /></a></p> <p. Feel free to take it and run with it.</p> <p>There’s a lot that can be said about performance budgets, but I’ll leave that to the pros. Check out these great resources:</p> <ul> <li><a href="">How To Make A Performance Budget</a></li> <li><a href="">Approach New Designs with a Performance Budget</a></li> <li><a href="">Performance Budget Metrics</a></li> <li><a href="">Setting a Performance Budget</a></li> <li><a href="">Responsive Design on a Budget</a></li> <li><a href="">Responsive Responsive Design</a></li> </ul> <img src="" height="1" width="1" alt=""/>
http://feeds.feedburner.com/brad-frosts-blog
CC-MAIN-2016-50
refinedweb
18,020
55.64
Re: Good practice when writing modules... - From: r0g <aioe.org@xxxxxxxxxxxxxxxxxx> - Date: Sat, 15 Nov 2008 06:45:27 -0500 bearophileHUGS@xxxxxxxxx wrote: r0g: a) Import all the other modules these functions depend on into the modules global namespace by putting them at the top of the module or should I... b) Include them in each function individually. This is a interesting topic, that requires some care. Generally I suggest you put them at the top, so they can be found and seen with less problems (*), it's the standard. If a function is called only once in a while (like a plotting function), and to import its module(s) it needs lot of time and memory (like matplotlib), then you may move the import inside the function itself, especially if such function isn't speed-critical. (*) I generally agree with the PEP8 but regarding this I don't follow it: I feel free to put more than one of them on the same line: import os, sys, ... I generally list the built-in modules first, then the well known one (pyparsing, numpy, etc), and then my ones or less known ones. I also put a comment after each import, with the name of the function/ class it is used into: import foo # used by baz() import bar # used by spam() ... def baz(n): return foo(n) * 10 ... def spam(k): return bar(k) - 20 Time ago I have read an interesting article that says that comments of today will often become statements and declaration and tags of tomorrow, that is stuff that the interpreter/compiler/editor understands. This means that such annotations of mine may be something fit to become a syntax of the language. I don't have ideas on how such syntax can be, if you have suggestions I'm listening. Finally in modules that contain many different things, and where each of them uses generally distinct modules, as a compromise I sometimes write code like this: import foo # used by baz() def baz(n): return foo(n) * 10 import bar # used by spam() def spam(k): return bar(k) - 20 My editor has a command that finds and lists me all the global imports (not indented) of the module. Bye, bearophile Thanks for the suggestions guys. I hadn't thought about putting comments after the imports, that's a good idea, although I guess you need to be disciplined in keeping them up to date. All the same, given the seemingly negligible performance hit and the nature of this particular module I think I will probably go with includes within the functions themselves anyway... The module I am compiling is kind of a scrapbook of snippets for my own development use, it has no coherent theme and I wouldn't be distributing it or using the whole thing in a production environment anyway, just copying the relevant functions into a new module when needed. I'm thinking having the imports inline might make that process easier when I do need to do it and once copied I can always move them out of the functions declarations. Having said that, having your editor trace these dependencies for you is an interesting idea too. Which editor are you using? (not trying to start an editor flame war anyone, shh!). Can this be done from idle or are you aware of a python script that can do this maybe? Thanks again, Roger. . - Follow-Ups: - Re: Good practice when writing modules... - From: Robert Kern - References: - Good practice when writing modules... - From: r0g - Re: Good practice when writing modules... - From: bearophileHUGS - Prev by Date: Suggest a Labs feature - Next by Date: Re: best python unit testing framwork - Previous by thread: Re: Good practice when writing modules... - Next by thread: Re: Good practice when writing modules... - Index(es):
http://coding.derkeiler.com/Archive/Python/comp.lang.python/2008-11/msg01771.html
CC-MAIN-2013-20
refinedweb
636
67.89
Hello, We are facing a parsing issue with WebProxy DSM (q1labs_sem_dsm_squid_webproxy.jar). Some squid logs are not well parsed because the source IP is set to the syslog server. My question is how can I submit a BUG to IBM ? After investigation, these logs are those with an HTTP return code to 0 : Aug 30 10:02:58 proxy5 (squid-3): 1504080178.360 2208 10.10.10.103 TCP_MISS_ABORTED/0 0 POST - SOURCEHASH_PARENT/192.168.1.219 - 1.1 30/Aug/2017:10:02:58 +0200 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:52.0) Gecko/20100101 Firefox/52.0" "fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3" 480 - Aug 30 10:02:58 proxy002 (squid-3): 1504080178.327 2117 10.10.10.215 TCP_MISS_ABORTED/0 0 POST - SOURCEHASH_PARENT/192.168.1.219 - 1.1 30/Aug/2017:10:02:58 +0200 "-" "Mozilla/5.0 (Windows NT 6.1; rv:52.0) Gecko/20100101 Firefox/52.0" "fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3" 473 - This return code is not defined in the standard : But it's defined in the squid source code as None : $ cat src/http/StatusCode.h [...] namespace Http { /** * These basic HTTP reply status codes are defined by RFC 2616 unless otherwise stated. * The IANA registry for HTTP status codes can be found at: * */ typedef enum { scNone = 0, scContinue = 100, scSwitchingProtocols = 101, scProcessing = 102, /**< RFC2518 section 10.1 */ scEarlyHints = 103, /**< draft-kazuho-early-hints-status-code */ scOkay = 200, scCreated = 201, scAccepted = 202, scNonAuthoritativeInformation = 203, scNoContent = 204, [...] The regex in DSM is set to : private static final AdaptivePattern SQUID_EVENT_PATTERN = AdaptivePattern.compile((String)"(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}) (\\w+)/\\d{3} \\d+ \\w+ \\S+\\s?\"?($|-|[\\w\\\\\\.@]+)\"?"); And should be updated to accept 0 =>... (\\w+)/\\d{1,3} ... private static final AdaptivePattern SQUID_EVENT_PATTERN = AdaptivePattern.compile((String)"(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}) (\\w+)/\\d{1,3} \\d+ \\w+ \\S+\\s?\"?($|-|[\\w\\\\\\.@]+)\"?"); I could set a new regex for these logs but the DSM should also be corrected as well. Answer by JonathanPechtaIBM (8198) | Aug 31, 2017 at 03:33 PM I looked in to this issue and you should raise a PMR with QRadar Support on this issue. Anytime that you have a value that is not parsing, you can open an investigation and we will review your issue. You can reference this post in your software ticket, but to save time you should run an event export for those events and we can replay them and update the Squid Web Proxy DSM. The procedure below is customized for you, but we have a parsing issue document here for what to gather to help move your ticket forward:. What to do 1. From the Log Activity tab, click Add Filter for Log Source Type = Squid Web Proxy and filter for Source IPs that are incorrect. 2. Alternately, in the Quick Filter search bar, type TCP_MISS_ABORTED 3. A search is displayed with the values. You might need to expand your search to locate these events. There is a +15 minute button that you can use to expand your search. 4. After you have found some examples, select Actions > Export to XML > Full Export (All Columns). 5. Download the export from QRadar. 6. Go to. 7. Describe your issue and attach the logs to the ticket. It would also be helpful to reference this forum post. NOTE: I sat down with development briefly today and we ran sample event through the unit test and believe there is an IP parsing issue here for the TCP_MISS_ABORTED events and discussed this with development. They opened a preliminary review item already and the support representative can associate your PMR# to that issue. If you have questions, let me know. Jonathan Answer by klaszlo13 (4) | Aug 30, 2017 at 11:31 AM Based on this article about APARs you should submit a Service Request / Problem Management Record (SR/PMR). 125 people are following this question. Announcement: QVM Externally Hosted Scans (March 1st - power outtage) 0 Answers Two log sources with same source identifier 2 Answers Parsing and Handling Lists within Properties or Payloads 0 Answers Does Proofpoint DSM parse Targeted Attack Protection? 5 Answers DSM Not overriding Destination Port 2 Answers
https://developer.ibm.com/answers/questions/397321/issue-dsm-webproxy.html?sort=votes
CC-MAIN-2019-22
refinedweb
725
67.15
Utilizing SIMD/SSE in Unity3D (.NET 2.0) Motivation First of all, what does SIMD actually mean? I certainly haven’t heard of it until some years into my computer science studies. It is an acronym for Single instruction, multiple data (SIMD) and is related to the architecture of CPU’s and GPU’s. Another acronym often appearing alongside SIMD is Streaming SIMD Extensions (SSE). You can read theoretical details about it in various publications and all over the internet, but I’ll try to frame it in a simplified, practical fashion: Basically for you as a coder, SIMD allows to perform four operations (reading/writing/calculating) for the price of one instruction. The cost reduction is enabled by vectorization and data-parallelism. What a deal! And you don’t even have to handle threads and race conditions to gain this parallelism. You’d better take advantage of it. The problem So from a conceptual standpoint, how can we translate that into code? Let’s assume we have an array of positions (three floats in a vector) and we would like to compute the average of all positions. To do so, we’d simply need to loop over the array, sum up all positions, and divide the result by the number of elements in the array. Speaking C# in Unity3D, the “normal” way we would implement it could look something like this: public Vector3 Average(Vector3[] positions) { Vector3 summed = Vector3.zero; for(int i=0;i<positions.Length;i++) { summed += positions[i]; } return summed / positions.Length; } Unfortunately, SIMD is not directly supported in Unity. First, I thought you can utilize the Vector4 struct instead of Vector3 provided by UnityEngine, but that doesn’t work either. Thus, you have to come up with your own solution. From my point of view, you have the following options: 1. It is common to utilize the benefits of SIMD in C++ and related math libraries. Therefore, you could code part of the algorithm in C++. You’d then have to transfer the data from Unity to a DLL, compute the result “externally” in your C++ code and transfer the data back. This can be a viable option if there is a lot of computation required, but brings the problem of transferring the data back and forth. In addition, I think this can be more error prone due to the transfer of data and can slow down the development process, as the proper function and communication between two distinct components needs to be verified. 2. You can integrate plugins into Unity and implement SIMD operations directly in C#. Since this variant is simpler and should work well for smaller samples, I’ll elaborate on this option. Solution in C# with .NET 2.0 Although Unity does not (yet?) support SIMD operations innately, you can take advantage of the Mono.simd.dll hidden in the Unity installation folder. Depending on the version you prefer, just copy the .dll from PathToUnity\Editor\Data\Mono\lib\mono\2.0over into your project. It provides 16-byte data structures like a vector with four floats or integers. Benefiting from using Mono.simd; we can change the previous code as follows: public Vector3 AverageSIMD(Vector3[] positions) { Vector4f summed = new Vector4f(0, 0, 0, 0); for (int i = 0; i < positions.Length; i++) { summed += new Vector4f(positions[i].x, positions[i].y, positions[i].z, 1.0f); } summed /= new Vector4f(positions.Length, positions.Length, positions.Length, 1.0f); return new Vector3(summed.X, summed.Y, summed.Z); } In each step of the loop, the cost of summing up three float values was reduced to a single instruction. For one step in the loop, this may seem just a bit. However, depending on the number of positions we have to sum up, the performance gain will be significant. The provided version can be optimized further if the positions array is provided with Vector4f directly at the method call, so that the loop does not have to convert each Vector3 into a Vector4f. Comparing each variant with random generated positions gave the following results: While the SIMD with Vector3 conversion takes about 66% of the original duration, the SIMD without conversion actually just takes about 22% of the original duration. Neat, isn’t it? Conclusion SIMD offers powerful ways to crank up the performance and can be implemented within Unity. This article showed how to apply it to a simple problem, but I hope you are eager to apply it to other problems as well. How could this knowledge be applied to compute various powers of a number, a dot product, or a polynomial? (Maybe I’ll demonstrate this in another article). Think about how to package float values intelligently into the provided vector structs and work with them. Profile it, and see if the change was worth it. For the future, it would be cool to find out how the performance compares to the C++ alternative mentioned earlier in this article. Furthermore, newer versions of Unity (2017+) already provide experimental support for .NET 4.6, which offers System.Numerics. A quick test exposed that these vector structs seem to be superior to the Mono.Simd equivalent. Also, the UnityEngine.Vector3 seems to compute faster with .NET 4.6, but I’d need to investigate this further. When Unity will provide stable support for .NET 4.6, the new structs will definitely be very viable options for SIMD improvements. [UPDATE 03/2018] Now that the C# Entity-Component-System, Job System, and Burst compiler will be available soon, you should consider using it: Unite Austin 2017 — Writing High Performance C# Scripts I’m a great friend of free and accessible knowledge for everyone. If my work helped you out and you feel like giving something back, you can consider supporting me by sharing this article. I deeply appreciate your support, truly. Thanks! All the best, Broman
https://medium.com/@bromanz/simd-sse-unity3d-net-2-0-70f6c911713f
CC-MAIN-2019-22
refinedweb
983
57.27
Symbolic math in python Posted March 01, 2013 at 07:07 PM | categories: symbolic, math | tags: | View Comments Updated March 03, 2013 at 12:21 PM Matlab post Python has capability to do symbolic math through the sympy package. 1 Solve the quadratic equation from sympy import solve, symbols, pprint a,b,c,x = symbols('a,b,c,x') f = a*x**2 + b*x + c solution = solve(f, x) print solution pprint(solution) >>> >>> >>> >>> >>> >>> [(-b + (-4*a*c + b**2)**(1/2))/(2*a), -(b + (-4*a*c + b**2)**(1/2))/(2*a)] _____________ / _____________\ / 2 | / 2 | -b + \/ -4*a*c + b -\b + \/ -4*a*c + b / [---------------------, -----------------------] 2*a 2*a The solution you should recognize in the form of \(\frac{b \pm \sqrt{b^2 - 4 a c}}{2 a}\) although python does not print it this nicely! 2 differentiation you might find this helpful! from sympy import diff print diff(f, x) print diff(f, x, 2) print diff(f, a) >>> 2*a*x + b 2*a >>> x**2 3 integration from sympy import integrate print integrate(f, x) # indefinite integral print integrate(f, (x, 0, 1)) # definite integral from x=0..1 >>> a*x**3/3 + b*x**2/2 + c*x a/3 + b/2 + c 4 Analytically solve a simple ODE from sympy import Function, Symbol, dsolve f = Function('f') x = Symbol('x') fprime = f(x).diff(x) - f(x) # f' = f(x) y = dsolve(fprime, f(x)) print y print y.subs(x,4) print [y.subs(x, X) for X in [0, 0.5, 1]] # multiple values >>> >>> >>> >>> >>> >>> f(x) == exp(C1 + x) f(4) == exp(C1 + 4) [f(0) == exp(C1), f(0.5) == exp(C1 + 0.5), f(1) == exp(C1 + 1)] It is not clear you can solve the initial value problem to get C1. The symbolic math in sympy is pretty good. It is not up to the capability of Maple or Mathematica, (but neither is Matlab) but it continues to be developed, and could be helpful in some situations. Copyright (C) 2013 by John Kitchin. See the License for information about copying.
http://kitchingroup.cheme.cmu.edu/blog/2013/03/01/Symbolic-math-in-python/
CC-MAIN-2017-39
refinedweb
354
70.23
.: bazel build tensorflow/tools/quantization:quantize_graph bazel-bin/tensorflow/tools/quantization/quantize_graph \ --input=/tf_files/optimized_graph.pb \ --output=/tf_files/rounded_graph.pb \ --output_node_names=final_result \ --mode=weights! Great post peter, One question: will this run on swift, using bridging or any other methods, or does it require a 100% objective C project? Thanks for a great post. I am running into an issue with building the app on macos -sierra. Have posted this on stackoverflow. I found the solution to the aforementioned issue here: Was able to run the example successfully Pingback: TensorFlow for mobile poets – Cloud Data Architect Thank you for nice tutorial! So far I have tried to retrain inception v3 model and use it in Android app which takes about 10 seconds to make predictions. Is the latency on iPhone any lower? If you’re using Bazel, try adding –android_cpu=armeabi-v7a to the command line. I just realized there’s a bug that’s preventing NEON from being enabled with that build approach. Thanks Pete! running build command with suggested parameter indeed significantly improved the performance. Actually, it takes around 10 seconds if retrained model is directly used. Then, after running command optimize_for_inference, the time was reduced to around 3 – 5 seconds. Then, by running bazel build -c opt –android_cpu=armeabi-v7a tensorflow/examples/android:tensorflow_demo, the detection time dropped to around 1.5 – 2.5 s. Hello and thank you for the steps. I use android studio(gradle+bazel) and a LG G4 to test my model. Predictions takes about 6.5 sec delay. I have those on my build.gradle: def bazel_location = ‘/home/lef/bin/bazel’ def cpuType = ‘armeabi-v7a’ def nativeDir = ‘libs/’ + cpuType I tried 3 different models: One with -> python/tools/strip_unused One with -> python/tools/optimize_for_inference And one with -> quantization/quantize_graph (after optimize) All the above have 6.5 sec delay between predictions. Original imagenet graph runs fast and smooth. Is there anything else that i can try to reduce delay? Thanks a lot What phone are you using? On iPhone 6s it goes 2-3 fps… Pingback: 在安卓上运行TensorFlow Great post! Is mmapped graph accuracy better than the accuracy of the inception v1 graph fine-tuned on the same training set? (Inception v1 is the graph used in Android and iOS examples on Tensorflow github repository) Thanks a lot for the tutorial, super interesting. I’ve followed along and everything worked fine but when I’m running the app on the phone no labels are appearing… There are some errors messages in Xcode’s console (sorry it’s a bit long) ——————– 2016-11-09 00:25:39.725551 CameraExample[11960:4661788] 0x174148140 Copy matching assets reply: XPC_TYPE_DICTIONARY { count = 2, transaction: 0, voucher = 0x0, contents = “Assets” => : { length = 1229 bytes, contents = 0x62706c6973743030d4010203040506646558247665727369… } “Result” => : 0 } 2016-11-09 00:25:39.727163 CameraExample[11960:4661788] 0x17015b070 Copy assets attributes reply: XPC_TYPE_DICTIONARY { count = 1, transaction: 0, voucher = 0x0, contents = “Result” => : 1 } 2016-11-09 00:25:39.727222 CameraExample[11960:4661788] [MobileAssetError:1] Unable to copy asset attributes 2016-11-09 00:25:39.727302 CameraExample[11960:4661788] Could not get attribute ‘LocalURL’: Error Domain=MobileAssetError Code=1 “Unable to copy asset attributes” UserInfo={NSDescription=Unable to copy asset attributes} ……. E ~/tensorflow/tensorflow/contrib/ios_examples/camera/CameraExampleViewController.mm:320] Running model failed:Not found: FeedInputs: unable to find feed output mul 2016-11-09 00:45:45.025627 CameraExample[269:8699] Finalizing CVPixelBuffer 0x17032a1e0 while lock count is 1. etc… ——————– I don’t know if it’s related but for some reasons in my frameworks folder, all the files except libprotobuf.a and libprotobuf-lit.a are appearing in red but the app compile without issues. Any ideas why? Thanks! Guillaume I’ve run into the same issue. Any idea what the solution is? One thing that’s surprising in those logs is that the input name is “mul” (all lower-case) rather than “Mul” as it should be. Is it possible there’s a typo there? I posted the bug and figured out only afterward it was indeed a typo!! :-S Not used to have a variable name with a cap letter. I’ve tried to delete the comment after I found out my mistake but it was not possible. Sorry for the trouble and thanks for answering Pete, and thanks again for this great tutorial. Pingback: Celebrating TensorFlow’s First Year - Contrado Digital Hi Pete, Thanks for the great tutorial! I followed similar steps to load a retrained model onto android. I was just curious though. Do you know what any why these specific variables are set to these numbers? “` private static final int INPUT_SIZE = 299; private static final int IMAGE_MEAN = 128; private static final float IMAGE_STD = 128; “` Thanks Pingback: The Google Brain team — Looking Back on 2016 Pingback: The Google Brain team — Looking Back on 2016 - 莹莹之色 can you make a tensorflow tutorial for android? Will be updated to work for Tensorflow r1.1 very soon! Hi Pete, would you please be so kind and help me with one issue that prevents me from moving forward. I have graph with two output layers (final_result_orig – which is basically coming form retraining example,final_result_added – my custom layer) and I am unable to strip/optimize_on_inference it in order to run on android device (on pc it runs fine) When I run: bazel-bin/tensorflow/python/tools/optimize_for_inference \ –input=/tmp/output.pb \ –output=/tmp/optimized.pb \ –input_names=Mul \ –output_names=”final_result_orig,final_result_added” Then in my android application, I get “Session was not created with a graph before Run()” error, and both inal_result_orig,final_result_added is not found. When I run: bazel-bin/tensorflow/python/tools/optimize_for_inference \ –input=/tmp/output.pb \ –output=/tmp/optimized.pb \ –input_names=Mul \ –output_names=”final_result_orig” It works fine, final_result_orig is available and works correctly, however final_result_added is obviously not found and not available for my app to use. And when I run: bazel-bin/tensorflow/python/tools/optimize_for_inference \ –input=/tmp/output.pb \ –output=/tmp/optimized.pb \ –input_names=Mul \ –output_names=”final_result_added” It does not work as well with “Session was not created with a graph before Run()” error, and both inal_result_orig,final_result_added is not found. I do not understand what I am doing wrong – what could be wrong with “final_result_added”, as it works fine on PC and not android? Otherwise, thank you very much for cool tutorial. Hi, Can you please guide me with regard to including the bounding box for the mobile tensorflow application. Where bounding box appears with image label on top of it. Pingback: Google大脑团队2016年度回顾 - 莹莹之色 Pingback: Google大脑团队2016年度回顾 – 聚合时代 Hi, Thanks for this nice article, it was of great help to reduce and export my model, in order for it to work on Android. I’m running tensorflow 1.1.0 on Android, and the optimized and quantized model is working great. But the mmapped model is raising an exception (java.io.IOException: Not a valid TensorFlow Graph serialization: Invalid GraphDef). I guess it only works on iOS ! Also, I went ahead and tried the eightbit mode, the resulting model is tiny, but it gives very bad results (random is better). I guess that mode will only work for certain models ? Thanks again Hi, First I want to thank you for an absolutely great tutorial, we had a lot of fun with it and won a second place in A.I. hackathon with an app based on your tutorial. However we are now gearing app this app for the appstore and we ran into some interesting problems. I thought maybe you will know how to solve them. First whenever we run the graph through graphdef_memmapped it is no longer working on the iOS app, no labels appear and the graph seems to be unrecognised. Second when we tried to run test to find out if the graph checks out we noticed that the test fail even on the very first unmodified graph. If we used a model without graphdef_memmapped optimisation then our app works fine, however on some phones it crashes for no reason with no stack trace on Xcode which would indicate that it’s due to the app eating memory too quickly. Is there something we are missing here? How would you go about solving this? I’m actually working on some better documentation for this process, so here’s an early draft: —————-. One advantage of this is that the OS knows the whole file will be read at once, and so can efficiently plan the loading process so it’s as fast as possible. The actual loading can also be put off until the memory is first accessed, so it can happen asynchronously with your initialization code. You can also tell the OS you’ll only be reading from the area of memory, and not writing to it. This gives the benefit that when there’s pressure on RAM, instead of writing out that memory to disk as normal virtualized memory needs to be when swapping happens, it can just be discarded since there’s already a copy on disk, saving a lot of disk writes. Since TensorFlow models can often be several megabytes in size, speeding up the loading process can be a big help for mobile and embedded applications, and reducing the swap writing load can help a lot with system responsiveness too. It can also be very helpful to reduce RAM usage. For example on iOS, the system can kill apps that use more than 100MB of RAM, especially on older devices. The RAM used by memory-mapped files doesn’t count towards that limit though, so it’s often a great choice for models on those devices. TensorFlow has support for memory mapping the weights that form the bulk of most model files. Because of limitations in the ProtoBuf serialization format, we have to make a few changes to our model loading and processing code though. The way memory mapping works is that we have a single file where the first part is a normal GraphDef serialized into the protocol buffer wire format, but then the weights are appended in a form that can be directly mapped. To create this file, you need to: “`c++ std::unique_ptr. “`c++: “`c++. ———— If this doesn’t help, drop me an email on petewarden@google.com and I’ll try to dig in further. Hi Pete, Great tutorial. I followed all the steps and got the camera app launch using the graph models and labels from my own image identification project (similar to the poets example), but I do not see the labels on my iPhone. I get the following error message. Could you give me some hints on how to get around this. I am using iPhone 5s for testing. ——————— Error message: ——————— [[Node: _arg_Mul_0_0 = _Arg[T=DT_FLOAT, index=0, _device=”/job:localhost/replica:0/task:0/cpu:0″]()]] 2017-04-26 21:21:44.808788: E /Users/amitavabhaduri/amit_devel/proj_openCV/AgShift/tensorflow_iOS/tensorflow/tensorflow/contrib/ios_examples/camera/CameraExampleViewController.mm:352] Running model failed:Not found: No registered ‘_Arg’ OpKernel for CPU devices compatible with node _arg_Mul_0_0 = _Arg[T=DT_FLOAT, index=0, _device=”/job:localhost/replica:0/task:0/cpu:0″]() . Registered: Thanks, Amit Sorry you’re hitting problems! I think this issue should have been fixed by this change: Let me know if you still see this when you sync to the latest code. Hi Pete, The camera app now works for me on my iPhone, but no matter what test image I show to the camera (using live mode or freeze frame), it always predicts a particular label over others. It’s kind of gravitating towards one label with a very high probability. Even if I show the training images, it still does not work well. On the contrary when I predict the same images using label_image in batch mode it works fine. Would you know what could be wrong? Have you encountered such behavior? Thanks, Amit Hi Pete, Is there a way to test ‘mmapped_graph.pb’ in linux? All my debug and test till the quantization step (rounded_graph.pb) worked perfectly. After I generate the ‘mmaped_graph.pb’ and port it over to iOS, I see it strongly predicting only 1 label all the time no matter any image I show to the iPhone camera. It would be great to test this model in linux as well (if possible). Have you seen this kind of behavior? Any pointers? Sorry to bug you on this, but just so keen to see tensorflow work on iOS for my project. Thanks, Amit Thank you! The Tutorial was great! Pingback: TensorFlow on Android – Deep Synapse Thanks for your great explanation on this article. It helps me a lot. But when i run with my own model (.pb and .txt), tensorflow failed to detect the object. My app doesn’t crash, the tensorflow just give the wrong result. Can you help me? Hi and thanks for the great tutorial. I managed to complete the for Poets tut, but when I cd into tensorflow and run the first bazel command above, I get the following error. I wonder what could be the cause? ERROR: /tensorflow/tensorflow/core/kernels/BUILD:2140:1: C++ compilation of rule ‘//tensorflow/core/kernels:matrix_solve_ls_op’ failed: gcc failed: error executing command /usr/bin/gcc -U_FORTIFY_SOURCE -fstack-protector -Wall -B/usr/bin -B/usr/bin -Wunused-but-set-parameter -Wno-free-nonheap-object -fno-omit-frame-pointer -g0 -O2 ‘-D_FORTIFY_SOURCE=1’ -DNDEBUG … (remaining 115 argument(s) skipped): com.google.devtools.build.lib.shell.BadExitStatusException: Process exited with status 4. gcc: internal compiler error: Killed (program cc1plus) Pingback: TensorFlow on Android – KAYENRE Technology Hey Pete, Thx for the write. Very interesting and helpfull. A question is the pool_3/_reshape op removed by this script? In other words after I have ran the script would I be able to get this operation on android and run it. Thx a lot for your help. So to answer my own question the script does not remove the pool_3/_reshape. Hi,Peter! I ran mmaped.pb on Android, the exception is : Not found: memmapped_package://InceptionResnetV2_Repeat_block35_1_Branch_0_Conv2d_1x1_weights 07-28 09:52:21.708: I/native(9233): [[Node: InceptionResnetV2/Repeat/block35_1/Branch_0/Conv2d_1x1/weights = ImmutableConst[dtype=DT_FLOAT, memory_region_name=”memmapped_package://InceptionResnetV2_Repeat_block35_1_Branch_0_Conv2d_1x1_weights”, shape=[1,1,320,32], _device=”/job:localhost/replica:0/task:0/cpu:0″]()]] Could you post a bug with full details on github.com/tensorflow/tensorflow/issues ? That will help us track down what’s going wrong. Thank you Peter! The exception is gone. I had a mistake yesterday. It works now. But, it runs on Android so slowly when recognizing the image, even uses memory mapped model. I use the retrained Inception Resnet V2 model. I’m trying to resolve that. Thanks for your reply. hey Peter, Thanks for this great tutorial. I am running into an issue with running the app in both “tf_camera_example.xcworkspace” and “tf_camera_example.xcodeproj” . – When running “tf_camera_example.xcworkspace” it tells me : Assertion failed: (error == nil), function -[CameraExampleViewController setupAVCapture], file /Users/mac/projects/tensorflow/tensorflow/examples/ios/camera/CameraExampleViewController.mm, line 70. – and When building “tf_camera_example.xcodeproj” it tells Apple Mach-O Linker Error Group clang: error: linker command failed with exit code 1 (use -v to see invocation) Any suggestions please 😦 Hi Pete Warden! How can I show the confusion matrix using this retrained model? Thanks Hi Pete, I got a very basic question. Was the original model “Inception V3” or “mobilenet_” in this tutorial? It seemed to me it may be InceptionV3. I think settling this is important to enter the right input_mean and input_std since this has to match how you process the image during _retraining_. I was following the Tensorflow Poet so I didn’t do any of the bazel build stuff, everything is said to nicely contain in the git clone of that tutorial. In the script/retrain.py, I did spot a difference between “inception_v3” and “mobilenet_*”, it is 128 vs 127.5. It is close enough, but not sure if this will degrade accuracy by few %. Pingback: What is TensorFlow? | Opensource.com – BreakingExpress Pingback: 学习笔记TF066:TensorFlow移动端应用,iOS、Android系统实践 – 1
https://petewarden.com/2016/09/27/tensorflow-for-mobile-poets/
CC-MAIN-2020-16
refinedweb
2,670
57.57
Operating systems, development tools, and professional services for connected embedded systems for connected embedded systems asyncmsg_put(), asyncmsg_putv() Send an asynchronous message to a connection Synopsis: #include <sys/asyncmsg.h> int asyncmsg_put( int coid, const void *buff, size_t size, unsigned handle), int (*call_back) ( int err, void* buf, unsigned handle )); int asyncmsg_putv( int coid, const iov_t* iov, int parts, unsigned handle, int (*call_back) ( int err, void* buf, unsigned handle )); Arguments: - coid - The ID of the connection to send the message to. - buff - (asyncmsg_put() only) A pointer to the buffer that holds the message. - size - (asyncmsg_put() only) The size of the message. - iov - (asyncmsg_putv() only) A pointer to an array of IOV buffers that hold message. - parts - (asyncmsg_putv() only) The number of elements in the IOV array. - handle - A user-defined handle that's passed to the call_back function to allow for quick identification of the message's package. - call_back - NULL, or a function to call when a message is processed. If this argument is NULL, the call_back specified in the _asyncmsg_connection_attr passed to asyncmsg_connect_attach() is called. Library: libasyncmsg Use the -l asyncmsg option to qcc to link against this library. Description: The asyncmsg_put() and asyncmsg_putv() functions send an asynchronous message to the connection identified by the coid argument: - For asyncmsg_put(), buff points to the message, and size specifies its length. - For asyncmsg_putv(), the message is stored in an I/O vector pointed to by iov, and parts specifies the number of entries in the array. You can use the handle, which is passed to the call_back function, to help you identify the message. Returns: EOK, or -1 if an error occurred (errno is set). Errors: - EBADF - The connection specified by coid doesn't exist. - EFAULT - A fault occurred when the kernel tried to access the buffers provided. - EAGAIN - The send queue is full. Classification: See also: asyncmsg_channel_create(), asyncmsg_channel_destroy(), asyncmsg_connect_attach(), asyncmsg_connect_attr(), _asyncmsg_connection_attr, asyncmsg_connect_detach(), asyncmsg_flush(), asyncmsg_free(), asyncmsg_get(), asyncmsg_malloc() Asynchronous Messaging technote
http://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/a/asyncmsg_put.html
crawl-003
refinedweb
317
54.02
Dependency Injection or Inversion? The hardest thing about being a software developer, for me, is coming up with names for things. I’ve worked out a system with which I’m sort of comfortable where, when coding, I pay attention to every namespace, type, method and variable name that I create, but in a time-box (subject to later revisiting, of course). So I think about naming things a lot and I’m usually in a state of thinking, “that’s a decent name, but I feel like it could be clearer.” And so we arrive at the titular question. Why is it sometimes called “dependency injection” and at other times, “dependency inversion.” This is a question I’ve heard asked a lot and answered sometimes too, often with responses that make me wince. The answer to the question is that I’m playing a trick on you and repeating a question that’s flawed. Dependency Injection and Dependency Inversion are two distinct concepts. The reason that I led into the post with the story about naming is that these two names seem fine in a vacuum but, used together, they seem to create a ‘collision,’ if you will. If I were wiping the slate clean, I’d probably give “dependency inversion” a slightly different name, though I hesitate to say it since a far more accomplished mind than my own gave it the name in the first place. My aim here isn’t to publish the Nth post exhaustively explaining the difference between these two concepts, but rather to supply you with (hopefully) a memorable mnemonic. So, here goes. Dependency Injection == “Gimme it” and Dependency Inversion == “Someone take care of this for me, somehow.” I’ll explain a bit further. Dependency Injection is a generally localized pattern of writing code (though it may be used extensively in a code base). In any given method or class (or module, if you want) rather than you going out and finding or making the things you need, you simply order your collaborators to “gimme it.” So instead of this: public Time getTheTime() { ThingThatTellsTime atom = new CesiumAtom(); return atom.getMeTheTimeSomehow(); } You say, “nah, gimme it,” and do this instead: public Time getTheTime(ThingThatTellsTime whatever) { return whatever.getMeTheTimeSomehow(); } It isn’t you responsible for figuring out that time comes from atomic clocks which, in turn, come from atoms somehow. Not your problem. You say to your collaborators, “you want the time, Buddy? I’m gonna need a ThingThatTellsTime, and then it’s all yours.” (Usually you wouldn’t write this rather pointless method, but I wanted to keep the example as simple as humanly possible). Dependency Inversion is a different kind of tradeoff. To visualize it, don’t think of code just yet. Think of a boss yelling at a developer. Before the ‘inversion’ this would have been straightforward. “Developer! You, Bill! Write me a program that tells time!” and Bill scurries off to do it. But that’s so pre-Agile. Let’s do some dependency inversion and look at how it changes. Now, boss says, “Help, someone, I need a program that tells time! I’m going to put a story in the product backlog” and, at some point later, the team says, “oh, there’s something in the backlog. Don’t know how it got there, exactly, but it’s top priority, so we’ll figure out the details and get it done.” The boss and the team don’t really need to know about each other directly, per se. They both depend on the abstraction of the software development process; boss has no idea which person writes the code or how, and the team doesn’t necessarily know or care who plopped the story in the backlog. And, furthermore, the backlog abstraction doesn’t depend on knowing who the boss is or the developers are or exactly what they’re doing, but those details do depend on the backlog. Okay, so first of all, why did I do one example in code and the other in anecdote, when I could have also done a code example? I did it this way to drive home the subtle scope difference in the concepts. Dependency injection is a discrete, code-level tactic. Dependency inversion is more of an architectural strategy and way of structuring (decoupling) code bases. And finally, what’s my (mild) beef with the naming? Well, dependency inversion seems a little misleading. Returning to the boss ordering Bill around, one would think a strict inversion of the relationship would be the stuff of inane sitcom fodder where, “aha! The boss has become the bossed! Bill is now in charge!” Boss and Bill’s relationship is inverted, right? Well, no, not so much — boss and Bill just have an interface slapped in between them and don’t deal with one another directly anymore. That’s more of an abstraction or some kind of go-between than an inversion. There was certainly a reason for that name, though, in terms of historical context. What was being inverted wasn’t the relationship between the dependencies themselves, but the thinking (of the time) about object oriented programming. At the time, OOP was very much biased toward having objects construct their dependencies and those dependencies construct their dependencies, and so forth. These days, however, the name lives on even as that style of OOP is more or less dead this side of some aging and brutal legacy code bases. Unfortunately, I don’t have a better name to propose for either one of these things — only my colloquial mnemonics that are pretty silly. So, if you’re ever at a user group or conference or something and you hear someone talking about the “gimme it” pattern or the “someone take care of this for me, somehow” approach to architecture, come over and introduce yourself to me, because there will be little doubt as to who is talking. Thanks — this is perfect for a presentation I’m giving next week. 🙂 Glad to hear it helped. Good luck with the presentation 🙂 Is ‘dependency inversion’ the same term as ‘inversion of control’ ? Not quite. Let’s use Erik’s Dependency Injection example. Dependency Inversion comes into play in the choice to depend on ThingThatTellsTime rather than CesiumAtom directly. The former is an abstraction, implemented by CesiumAtom and depended upon throughout the app. So we have an abstraction, and we have modules that depend on the abstraction, but something’s missing: an actual implementation of ThingThatTellsTime to make our app work. IoC is a pattern, typically supported by a generic “container” framework, to solve the general problem of fulfilling modules’ dependencies. In this particular example we would configure our IoC container to provide a CesiumAtom whenever… Read more » Nice explanation. Was thinking about adding an addendum to the post, but now I don’t have to. I disagree, IOC and DIP are interchangeable. IOC and IOC CONTAINER are 2 vastly different items. As long as you aren’t referring to a container as “IOC” all is well. IOC is a principle, an IOC container is a detail on how to implement the principle. IOC is NOT A PATTERN ” inversion of control (IoC) describes a design” it is a design or principle, not a pattern. The IOC container is not a pattern either, an IOC container is a library that leverages the service locator and factory patterns to instantiate objects. Yes. I wish that style of programming was dead! I still see objects constructing their dependencies all the time in new code, or even worse service locating a singleton. Granted, I imagine the audience for your blog probably stop doing silly things like this long ago. Another great post! Thanks — glad you liked! And yeah, thinking on it I’m fortunate because over the last several years I’ve generally been working either in green field code bases or else I’m specifically being brought in to help factor legacy code bases away from this sort of thing. It’s been quite some time since I’ve had to live with this style of programming and I don’t miss it at all. Singletons aren’t inherently bad if they’re used well. They’re even easily testable if you allow back doors into their state. The other thing is you can use poor man’s dependency injection to achieve many of the benefits of DIP/IOC without the need of a container. This is a really frictionless change that you could instill in your own code and evangelize in your organization. This will leave the simplicity of new MyService() that other team members are used to. I agree with the IOC / DIP being interchangeable. There’s a memorable post on c2.com regarding this subject which I strongly recommend. As an off-topic, I believe singletons were never intended to be used by regular app programmers (end-programmers, if you will), but to become part of infraestructure – plumbing code. I mean, how do you even know you will only need ‘one’ of a kind. What if you end up needing more than one instance? Why introduce extra stress on your app by putting such a hard constraint? Plus, you’re mixing two concerns in the same code unit,… Read more » Your take on singletons seems very much in line with mine as well. I see people say that there can be good uses of them and, while I suppose that’s theoretically true, I’ve never actually witnessed any in the wild. They always just wind up being dumping grounds for global variables and ways to have two unrelated classes talk to each other. I’ve always found them to be awkward at best (there is something deeply but subtly weird about controlling your own cardinality) and most commonly complete architectural nightmares. For what it is worth, here is when the idea of dependency inversion clicked for me. Thinking of a simple data layer above a database, it exposes CreateX(), UpdateX(), ReadX(), DeleteX() methods. It exposes them because that is what the underlying data store exposes. Rather than having the consumer depend upon the interface exposed by the data layer, the consumer exposes the interface which it expects data layers to implement. It may be something nicer like ReturnAllCustomersWhoPayBillsOnTime(). All data layers then take their dependency on that interface, and implement it. Inverting that dependency allows for several implementations of the data… Read more » That makes sense, and it’s more true to the inversion name (though the thing being inverted seems to be allowing consumers rather than producers to define APIs). This reminds me of the “magic oracle” concept I posted about once, where when writing a method and thinking of what to abstract out you think, “if I could have a magic box, what would I ask it to do for me?” Yeah, I definitely want to second Eric’s comment. In dependency inversion, consumers are the ones that define the API. I’d say if you have some “Consumer” using some “Library”, there’s two broad versions of this. First, maybe Library has some class, or set of classes, which you want to abstract. Like maybe it has a bunch of ProbabilityDistribution classes, for different types of distribution. To invert that dependency, you’d want to write your own IProbabilityDistribution (or similar) interface, then use adapters to adapt from the Library classes to your interface. If you’re following DIP then IProbabilityDistribution will be defined by… Read more » Eric, this might blow your mind but your example of ReturnAllCustomersWhoPayBillsOnTime() there is no reason for an interface. This can be a concrete class with virtual ReturnAllCustomersWhoPayBillsOnTime(). This is actually more honest. You are not going to have multiple providers of ReturnAllCustomersWhoPayBillsOnTime(), you have 1 single tightly coupled contract to your persistence of choice. The fact that you have a repository injected into your class that exposes ReturnAllCustomersWhoPayBillsOnTime() is an implementation detail and SHOULD be hidden from the public contract. If you have a detailed and specific interface that is implemented by exactly one class, that is a bad abstraction.… Read more » The simplest way to draw clarity between dependency injection and inversion is to only refer to dependency inversion as the dependency inversion principle (DIP). It becomes extraordinarily clear when you use the full name. Principles are very different from actions to follow them, such as using DI to satisfy DIP. The second way to eliminate all confusion is instead of referring to DIP you can interchangeably use IOC. That’s a good call. Thinking of the “Principle” in DIP really drives home the distinction I was drawing between strategy (DIP) and tactics (Dependency Injection). “Principles” communicate “theoretical/strategic.” Would “dependency abstraction” be a more accurate term for “dependency inversion”? Can’t really speak to better, per se, but that seems clearer… to me, anyway.
https://daedtech.com/dependency-injection-or-inversion/
CC-MAIN-2022-40
refinedweb
2,147
62.98
- ChatterFeed - 86Best Answers - 0Likes Received - 2Likes Given - 1Questions - 334Repl System.QueryException: Record Currently Unavailable Here is the exact exception: 'System.QueryException: Record Currently Unavailable: The record you are attempting to edit, or one of its related records, is currently being modified by another user. Please try again.' Class.Utility.DoCaseInsertion: line 98, column 1 I've done a lot of research on the 'UNABLE_TO_LOCK_ROW' exception and 'Record Currently Unavailable' exception and I can't seem to find a great solution to this issue. What I've tried to accomplish is a loop to attempt the insert 10 times, but I'm still getting the 'Record Currently Unavailable' exception. Does anyone else have a suggestion for this? Below is the code: Public static void DoCaseInsertion(case myCase) { try { insert myCase; } catch (System.DmlException ex) { boolean repeat = true; integer cnt = 0; while (repeat && cnt < 10) { try { repeat = false; List<Contact> contactList = [select id from Contact where id =: myCase.ContactId for update]; // Added for related contact to overcome the 'UNABLE_TO_LOCK_ROW issues' List<Account> accountList = [select id from Account where id =: myCase.AccountId for update]; // Added for related account to overcome the 'UNABLE_TO_LOCK_ROW issues' insert myCase; } catch (System.DmlException e) { repeat = true; cnt++; } } } } formula text and if condition: Syntax error I would need help with this formula. I get a message: Syntax error. Any idea, please? IF ( PotentialCOMAPABudget__c < 200.000,"Bajo", IF ( PotentialCOMAPABudget__c > 200.000 < 1.000.000, "Medio", IF ( PotentialCOMAPABudget__c > 1.000.000, "Alto", IF ( PotentialCOMAPABudget__c > 5.000.000, "Key Account", null ) ) ) ) I need a formula Please. Everything I try gives me an error: Owner:Queue.Id ISCHANGED(field) Can Someone help me create the right formula Please Adeline Help with Test Script - @isTest public class EndoraDocumentAttachment { //Method static testMethod void InsertAttachment() { Opportunity b = new opportunity(); b.RecordType.developername = 'Capital'; b.name = 'TEST'; b.Quote_Contact__c = '0033B000007Rz8X'; b.StageName = 'Attention'; b.CloseDate = (System.Today().addDays(1)); b.Account_Dept__c = 'GI/Endoscopy'; insert b; Attachment attach = new Attachment(); attach.Name='Unit Test Attachment'; Blob bodyBlob=Blob.valueOf('Unit Test Attachment Body'); attach.body=bodyBlob; attach.parentId=b.id; insert attach; List<Attachment> attachments=[select id, name from Attachment where parent.id=:b.id]; System.assertEquals(1, attachments.size()); attach = [SELECT Id, name from Attachment where parent.id=:b.id]; delete attach; b.Attachment__c = False; update b; } } Errors: System.NullPointerException: Attempt to de-reference a null object Stack Trace: Class.EndoraDocumentAttachment.InsertAttachment: line 8, column 1 restricting values of picklist to certain users belonging to same profile but without modifying in record types I have object "account" which have 5 record types, there are around 10 profiles in my org. For object "account" there are few picklist. For one of the profile, presently 5 pricklist are available out of 8. For certain users of this profile i want only 3 prickist to be avaiable. How can i i acieve my usecase ? thanks for suggestion! Setting up an Email Alert thru a Worklflow Hi Guys, Im fairly new and looking to setup some things. I want to configure an Email Alert when a Picklist field has been sitting on a value for more than a number of specified days. For example, i have a picklist field called "stage". i want the record owner to be notified thru email when the stage is at the "quoting" value for more than 2 days. Reading here i know i have to set up a workflow, im trying to make a formula but i dont find a way to come up with the right syntax. Here is what i come up with so far: Stage(TEXT(Quoting)) && TODAY - DATEVALUE(LastModifiedDate)>= 2 Please HELP Implementation of Trigger.new and Trigger.old trigger emailCheck on Employee__c (before update) { Map<Id,Employee__c> o = new Map<Id,Employee__c>(); o = trigger.oldMap; for(Employee__c n : trigger.new) { Employee__c old = new Employee__c(); old = o.get(n.Id); if(n.Email__c != old.Email__c) { n.Email__c.addError('Email cannot be changed'); } } } Thanks and Regard Rohit kumar singh Workflow formula check Stage change IF( ISCHANGED(StageName) AND ( NOT(ISPICKVAL(StageName, "Value 1")), NOT(ISPICKVAL(StageName, "Value 2")), NOT(ISPICKVAL(StageName, "Value 3")), NOT(ISPICKVAL(StageName, "Value 4")) )) Validation rule at object level to check if the entered text value contains any one of the picklist field value present on that object Thanks and Regards, Shiva RV Validation Rule to Require Attachment when Opp Reaches Probability 100% I need to create a trigger or validation rule that requires a user to attach a document/insertion order to an opportunity when it is changed to 100% or closed won. Please advise. Trigger development - Please Help Thank you, Trigger trigger Read on Web_Links__c (before update){ StatusRead.Read(Trigger.new); } Class public class StatusRead { public static void Read(List<Web_Links__c> WI){ Web_Links__c WL = [SELECT status__c FROM Web_Links__c]; if(WL.Owner_ID__c == WL.Last_user_to_view_record__c){ WL.status__c = 'Read'; update WL; } } } Update Account after Email Message is it possoble to update the Account Type after an Email Message is sent ? It tried the process builder, but it did not work. The Email is related to the Account. Thanks for your help guys ! ;) Hi Guys, I Have a requiement. Basic Question on Update fields I'hv a basic question on Updating fileds using APEX Code - rather I want to clear my concepts . The requirement is very simple. For Account Object , Update all Countries to United States of America where country is like US or USA . Now, I'm able to do it by writing below code Public Class Acc_Country_Name_Change { Public Pagereference Name_update() { List<Account> Act = [SELECT name,BillingCountry FROM Account where BillingCountry in ( 'US','USA')]; for(Account a : act) { a.BillingCountry ='united states of america'; } update act; return null; } } Now , I'hv two questions on this . Please help me to clear my concept. 1> in the For loop , its like FOR ( sObject : List ) - so why both are not List ? is it because in FOR Loop , only one row get processed at a time ? Or something else? 2> Inside the FOR loop , we are writing a.BillingCountry ='united states of america' , but in update its like Update act . Why so? Can someone please help me to clarify my doubts? I know they are very basic , But I'm kind of confused. Thanks, Tanoy Trigger to copy email from contact to account First of all, I am just the administrator, not a developer and not a coder. However today I am wearing all the hats. I am hoping someone would be kind enough to help me with the following. I am not sure of even where to start, except that I should do this by a trigger. However I know nothing of how to actually accomplish this. I am hoping that someone could provide the code necessary as I am not savvy enough to write any of it myself. Being a non-profit, we do not have the budget to enlist a professional for this and the task has fallen to me to handle. Short story - Due to a third party integration, we found that we needed an email address on the account page. However, with the one-to-one model, we are essentially unable to access the true Account record for an individual. What I would like to accomplish is this: a trigger that will copy the email address from our Contact object into our email field on the Account object. This would happen when the field is changed in any way. I hope that I've provided enough information, but if not I will gladly elaborate where needed. Thanks in advance everyone. Assistance with FeedItem Apex Trigger We have the trigger below that if an attachment is deleted from Chatter Feed, certain profiles will recieve the error message. I need assistance in updating the trigger to only fire on a certain object (Documents_Manager__c). Currently the trigger is firing on all objects. The trigger is also currently blocking the System Administrator profile for testing testing purposes but we would like to add serveral profiles to the block list as well. Thank you in advance! trigger ChatterDeleteBlocker on FeedItem (After Delete) { User u = [SELECT id,ProfileId,Profile.Name FROM User WHERE id = :UserInfo.getUserId()]; for(FeedItem fi : trigger.old) if(fi.type == 'ContentPost' && u.Profile.Name == 'System Administrator') fi.addError('You do not have the permissions to delete files from chatter feed'); } How to hide/disable the back button that appears on the top left corner, when navigating from one component to another in lightning ? Any trick on how to achieve this ? formula to auto populate lookup based off picklist Opportunity picklist fields: a1. Market b1. Vertical c1. Segment Threshold fields: a2. Market (Custom Field) b2. Record Type c2. Segment (Standard Name Field) Conceptual plan: When a1,b1,c1 are selected, update lookup to Threshold with Threshold record that matches a2,b2,c2 to a1,b1,c1 I am thinking on using process builder to set the criteria which covers {a1,b1,c1 are selected} I am stuck finding a way to create a formula that pulls the record based on matching the Opportunity picklist values. Thanks in advance! Is Rich text area supported in ischanged() in validation rule? Validation Rule with either/or required, depending on picklist value With the validation rule that I have written below, it's forcing the user to choose either checkbox REGARDLESS of the value in the picklist field. If you can help, that would be really great... AND( Modify_Existing_Report__c = FALSE, Create_New_Report__c = FALSE, OR( ISPICKVAL (SF_Request_Type__c, "Report: New/Modify (Complete Section I)"), RecordTypeId = '012a0000000AWNv')) How to create a trigger (fires every 24 hours) that fires a process created on the process builder? Can someone walk me through how build such a trigger in Apex? Thanks, Fares Alsyoufi. Bypass/workaround for Length of Formula Field I have a requirment to bulid a formula which would tweek the values selected from some picklist values. The issue is the length of my formula is '44000' and i hold sfdc allow only 3900 char in formula. If i split the formula also i may have to create nearly 10 formula field and then concatinate them. Is there any workaround for this please suggest. Thanks! Validation rule exception for dependent pick list values on Opportunity Stage How to populate a text field based on two picklist fields? Make picklist field Required based on other Picklist Field Value I have 2 custom picklist fields, Eg : abc__c(New, Open, Inprogress) and xyz__c(High, Medium, Low, Average) If the value of picklist abc__c is 'New' , xyz__c should not be 'Required' and for all rest of values on abc__c , xyz__c should be Required Need an validation rule for this.. Thanks!! Adding a lightning component created via App Builder to VF Page ? My question is , is there a way to a component created via Lightning App Builder to a VF Page ? Thank you? Process Flow Error Hi We are currently having issues in our environment where we've gone and created 2 Process Builders pulling from Task and then updating on the Contact. Error element myRule_1_A1 (FlowRecordUpdate). The flow tried to update these records: 0032A00002VB8EmQAL. This error occurred: ALL_OR_NONE_OPERATION_ROLLED_BACK: Record rolled back because not all records were valid and the request was using AllOrNone header. For details, see API Exceptions. More so, we are having issues with the flows when lead conversion happens. Error: System.DmlException: Update failed. First exception on row 0 with id 00T23000005q7a7EAA; first error: CANNOT_EXECUTE_FLOW_TRIGGER, The record couldn’t be saved because it failed to trigger a flow. A flow trigger failed to execute the flow with version ID 301230000000VKM. Flow error messages: <b>An unhandled fault has occurred in this flow</b><br>An unhandled fault has occurred while processing the flow. Please contact your system administrator for more information. Contact your administrator for help.: [] Class.leadconvert.BulkLeadConvert.updateLeadTasks: line 1279, column 1 Class.leadconvert.BulkLeadConvert.convertLead: line 127, column 1 We've concluded that we get these errors when the Lead has more than 10 or 20 tasks related. Those tasks in particular are the ones that trigger our Process Builder. We tried to diagnose the issue in Sandbox as well. Based on posts I've read it's possible that we are hitting limitations Does anyone have any suggestions aside from creating an Apex Trigger to do this? Locker Service - any recommended approach to resolve existing component issues ? Need some advice re Locker Service. As locker service will be auto enabled in Summer ’17 release, has anyone activated the LockerService critical update and test in sandbox to verify the correct behaviour of the components? If so, would be great to know what kind of issues faces and any recommended approach, best practice to resolve those? . Thanks in advance. Regards -Riti How to write validation rule on website field in account object for avoding duplicates ? Now I m using Professional edition . I need to avoide duplcate company websites in Account object. For this one how to write validation rule on this field ? How do I pass a base64 string to a apex string in controller? ex. { "first name" : "Rob, "mypdfdoc" : "AAASDECgtNhr==" } when I pass the file to JavaScript it encodes it to base64 . when I pass it back to a string type in my controller it shows [file type object] not AAASDEC... Any ideas why it is not showing it as a base64 string, instead displays [file type object] in the debug log? Basically I want to take the base64 string and use put to assign to a map thank you Validating multiple number formats in one validation rule I am trying to validate the phone number on the contact according to multiple formats. I want users to be able to enter phone numbers that match one of three formats, one starting with "+" , another starting with "00" and finaly one starting with "(". I can create all of the these as single validation rules, but when I try to combine two or more, I am struggling wiht the logic even if the syntax of the formula is correct. Any suggestions? SOQL for querying all contacts with no ralated list My requirement is that I need to quey all those records from the custom object which do not have a lookup to Contact. Please help. Thanks. Too many picklist values for IF and ISPICKVAL ?? IF(ISPICKVAL(Service_Type_del__c, "One-time"), ABS((Cost__c * Probability) / 1), IF(ISPICKVAL(Service_Type_del__c, "Yearly"), ABS((Cost__c * Probability) / 1), IF(ISPICKVAL(Service_Type_del__c, "Twice Yearly"), ABS((Cost__c * Probability) / 2), IF(ISPICKVAL(Service_Type_del__c, "Quarterly"), ABS((Cost__c * Probability) / 4), IF(ISPICKVAL(Service_Type_del__c, "Monthly"), ABS((Cost__c * Probability) / 12), IF(ISPICKVAL(Service_Type_del__c, "Twice Monthly"), ABS((Cost__c * Probability) / 12), IF(ISPICKVAL(Service_Type_del__c, "Weekly"), ABS((Cost__c * Probability) / 12), IF(ISPICKVAL(Service_Type_del__c, "Twice Weekly"), ABS((Cost__c * Probability) / 12), IF(ISPICKVAL(Service_Type_del__c, "Daily"), ABS((Cost__c * Probability) / 12), 0 ))))))))) How to implement left and right swipe functionality using lightening framework? Solution for copying Description(Long text Area datatype) field contents to Custom field I had gone through some problems working withDescription field so now I got a solution to work with "Long Text Area DataType", if you want to count() the records or other filtering things with Description. I made a custom field "Notes__c"(Text Area) in contact object and using Trigger i copied the contents of Description field to Notes__c. here is the code: trigger CopydescToNotes on Contact (before insert,before update) { for(Contact c : Trigger.new) if(c.Description!=null) c.Notes__c=c.Description; } Thanks Shephali
https://developer.salesforce.com/forums/ForumsProfile?userId=005F0000005wTn2IAE&communityId=09aF00000004HMGIA2
CC-MAIN-2021-21
refinedweb
2,558
56.05
Dear all, I am trying to put some data in a list control, but I cannot make my list accept duplicate entries. It actually accepts duplicates, but then I cannot select them anymore. Here is an example: <mx:Application xmlns: <mx:Script> <![CDATA[ import mx.collections.ArrayCollection; private var ac:ArrayCollection = new ArrayCollection(["a", "b", "a"]); private function init():void { l1.dataProvider = ac; } ]]> </mx:Script> <mx:List </mx:Application> This creates a nice list with the entries: "a", "b" and "a" -> the problem is that I cannot select the first "a" in the list - the selection goes to the last "a" !? But I need the first one also selectable ! Does anybody have a clue how to get it working ? Thanks a lot in advance ! Daniel I've noticed this before as well. Can you use distinct objects and use labelFunction to make them appear identical? Or does that also not work. That's because the flex compoenents use == to recognize elements. Since "a" == ""a" the problem will remain the same. What you can do, thought, is to wrap your Strings in objects. For instance : myAC = new ArraCollection( [ {label:"a"} , {label:"a"} ] ); This way you've got two distinct object with the right label property ("a"). The code above is quite ugly, but you can do the same with a custom class to wrap the Strings. This would give something like : myAC = new ArrayCollection( [ new MyCustomWrapper( "a" ) , new MyCustomWrapper( "a" ) ] ); And I believe you don't need to implement a label function as long as you have a label property on the custom wrapper class. The solution of msakrejda will work. Change this line of code: private var ac:ArrayCollection = new ArrayCollection([{label: "a"}, {label: "b"}, {label: "a"}]); Because 'label' is the default value of the labelField property, you don't have to set is explicitly. It's a better practice to use objects in the dataprovider instead of strings because you can store extra data into the objects. Furthermore I'm wondering why you would use identical labels in a list? Anyway, hope this solution works for you. D Your answer applied to me too, thanks. Hey Guys ! It works Thank you !
https://forums.adobe.com/thread/556669
CC-MAIN-2018-30
refinedweb
362
66.13
by Raman Sah How to build a simple, extensible blog with Elixir and Phoenix In this post, we’ll discuss how to build a boilerplate Phoenix web app with user authentication and an admin panel, along with image upload in Elixir. TodoMVC has become a de facto tool to compare various JavaScript-based MV* frameworks. Along the same lines, I feel that a blog application can be a tiebreaker in choosing a new backend or API framework. So let’s get started and build one in Phoenix. We’ll follow the default setup, that is Phoenix hooked up with Ecto running on PostgreSQL. Here are the final screens to give you an idea of what the app will look like at the end. The landing page will show all the published blogs in a card layout. A card can be clicked to view that particular post. We will have a dashboard that will show the statistics in brief. Access to this page requires admin user login. There will be a separate section that has an overview of all the posts. Here you can publish / modify / delete posts. This is the post editor layout featuring a markdown editor along with a file picker for the featured image. Note: The full working code is hosted on GitHub. There are numerous files in the project which cannot be shared in a single blog. So I have explained the specific ones which I assume are critical. Let’s keep the project’s name as CMS for now. So we’ll start with creating a new project with mix phx.new cms. Run mix deps.get to install dependencies. Generate a migration file for users and posts, respectively. # User migration file mix phx.gen.schema Auth.User users name:string email:string password_hash:string is_admin:boolean # Posts migration file mix phx.gen.schema Content.Post posts title:string body:text published:boolean cover:string user_id:integer slug:string Two tables have to be created in the database which represent users and posts. I’ve kept it rather simple, keeping only the required fields and expanding when the need arises. Subsequently, we can define changesets and additional methods in the user and post schema as well. user.ex post.ex @derive {Phoenix.Param, key: :slug} Since we want the posts to have a readable and SEO friendly URL structure, we inform route helpers to reference slug instead of id in the URL namespace. The routes are described here: Resources which are specific to the admin section are clubbed together and assigned a pipeline which forces authentication. Meanwhile, global routes are treated with passive authentication. User details are fetched if a session is present but the pages are still accessible. Login and home pages belong here. Executing mix phx.routes gives me this output: The view is divided into three logical sections: - Navigation bar - Sidebar - Main Content While the navigation bar is always visible, the sidebar appears only if an admin user is logged in. Browsing content will be inside the admin context. The links in the sidebar will grow as and when the app evolves. The Admin.Post controller follows the typical CRUD architecture and includes an action to toggle the published state of a given post. A lot of controls reside in the index page of admin’s post section. Here, posts can be deleted, published and modified. templates/admin/post/index.html.eex To keep the template uncluttered, we can define convenience view helpers like formatting time etc. separately. views/admin/post_view.ex Arc along with arc_ecto provides out of the box file upload capabilities. Since a post features a cover image, we have to define an arc configuration in our app. Each post in our blog requires two versions of cover images — original which is visible inside specific post view and a thumb version with a smaller footprint to populate the cards. For now, let’s go with 250x250 resolution for the thumb version. Coming back to the app’s landing page, it will house the cards for all the published posts. And each post will be accessible through the slug formed. controllers/page_controller.ex This project explores Phoenix — how a Phoenix app is structured and how to dismantle a Phoenix-based project. I hope you’ve learned something and enjoyed it! The full working app is on Github :. Feel free to clone 👍 and do clap if you find this blog useful 😃
https://www.freecodecamp.org/news/simple-extensible-blog-built-with-elixir-and-phoenix-61d4dfafabb1/
CC-MAIN-2019-26
refinedweb
739
65.83
#include <SPI.h>#include <WiFi.h>// the IP address for the shield:IPAddress ip(192, 168, 0, 177); char ssid[] = "yourNetwork"; // your network SSID (name)char pass[] = "secretPassword"; // your network password (use for WPA, or use as key for WEP)int status = WL_IDLE_STATUS;void setup(){ // Initialize serial and wait for port to open: Serial.begin(9600); while (!Serial) { ; // wait for serial port to connect. Needed for Leonardo only } // check for the presence of the shield: if (WiFi.status() == WL_NO_SHIELD) { Serial.println("WiFi shield not present"); while(true); // don't continue } WiFi.config(ip); //); } // print your WiFi shield's IP address: Serial.print("IP Address: "); Serial.println(WiFi.localIP());}void loop () {} The WiFi.config() now works after:1. Downloading and using Arduino 1.0.5.2. Firmware updated following Arduinos instructions: found the Arduino instructions to be rather non-intuitive, especially if you are not friendly with command prompt.Basically to upgrade the firmware on the shield you have to:1. Upgrade HDG104 named "wifi_dnld.elf"2. Upgrade T32UC3 named "wifiHD.elf" I used this tutorial to help with a non command prompt update method: hope this helps anyone with similar problems.
http://forum.arduino.cc/index.php?topic=173510.0
CC-MAIN-2015-40
refinedweb
191
60.92
The documentation of the standard classes/methods never mention of what type/class an argument must be. For example let’s look at the subscript of an Array. There is the form “array[index]”. But of what Type must the index object be? The ruby book tells me that programming ruby, types doesn’t matter so much, what matters is to what messages an object responds. Thus I tried the following, thinking that to_i is needed so an object can be used as an subscript index. #!/usr/bin/ruby class C def to_i 0 end end c = C.new a = [0,1,2] p a[c] It doesn’t work however: ./test:12:in `[]’: can’t convert C into Integer (TypeError) from ./test:12 So to which messages must C respond so objects of it can be used as array index? Flo
https://www.ruby-forum.com/t/argument-types/165507
CC-MAIN-2018-47
refinedweb
142
75.61
How to make a GUI slider to control a variable Creating a GUI slider is very simple. You simply generate a project with the GUI add on, initialize an ofxFloatSlider and gui, draw the gui, and link the slider to a specific variable. When you generate your project, include the ofxGui addon. When you open your app in xCode, you should see the gui add on source files here: in the header file (.h) Include the "ofxGui.h" file. #include "ofxGui.h" Initialize a slider and a panel. Here we will use ofxFloatSlider radius to control the size of a circle. If you wish to work with intergers, use ofxIntSlider. ofxFloatSlider radius; ofxPanel gui; in the implementation file (.cpp) Setup your panel named 'gui' and add the radius slider using gui.add(). Here we are labeling the slider with the string "radius", starting the inital value at 140, and giving the slider a range of 10 to 300. void ofApp::setup(){ gui.setup(); gui.add(radius.setup("radius", 140, 10, 300)); } For the sake of example, draw a circle in the draw() function and pass the variable 'radius' as the third parameter. void ofApp::draw(){ ofDrawCircle(ofGetWidth()/2, ofGetHeight()/2, radius); gui.draw(); } When you run the app, move the radius slider back and forth to change the size of the circle.
https://openframeworks.cc/learning/01_basics/how_to_create_slider/
CC-MAIN-2019-13
refinedweb
222
74.29
do you plan to create a CNI version your natcJavaBridge.c? do you think it is worth? .. right now I dont have much time to work on that right now, but I may help if you think that it would be worth to do.. or even do it myself entirely.. then would just need some help from you.. > Hi, > [please excuse the delay] > > I have just checked a fix into the head of the CVS. > Please use this until the next version 1.0.6 is available. > > I have tested your program against the CVS head: > gcj: 3.2.3 > php: 4.3.4 > > gcj -fjni -oJavaBridge.srv \ > --main=JavaBridge \ > php-java-bridge-1.0.5/modules/JavaBridge.class \ > test/gcjtest.java \ > -L`pwd`/php-java-bridge-1.0.5/modules -lnatcJavaBridge > > su -c "killall -9 java; rm /tmp/.php-java-bridge" > > export LD_LIBRARY_PATH=`pwd`/php-java-bridge-1.0.5/modules > > ./JavaBridge.srv /tmp/.php-java-bridge 1 "" & > php test/gcjtest.php > > ==> 48890 yes.. the my original code used a lots of space chars as padding.. but sourceforge's forum engine shrank it to just one space.. thats why the output length is different... the original class was: public class Test { public String test () { StringBuffer sb = new StringBuffer (); for (int i=0;i<10000;i++) { sb.append (i); sb.append (" "); } return sb.toString (); } } so Test.test() outputs 248890 bytes. I just checkout the cvs and tried and the same problem comes up again. now the pattern is a little different though.. the php script runs fine for like 15 times.. then the same exception comes up in JavaBridge.svr console.. and the php output is this: Warning: java.lang.StringIndexOutOfBoundsException in /home/www/test.php on line 31 then I can just run the script again and it will work.. for 2 or 3 times.. then the exception and the warning come up again.. and this pattern is repeated as many time I try the thing. I compiled with -g so the exception is a bit more verbose: java.lang.StringIndexOutOfBoundsException at java.lang.String.charAt(int) (/usr/lib/libgcj.so.4.0.0) at gnu.gcj.convert.Output_8859_1.write(java.lang.String, int, int, char[]) (/usr/lib/libgcj.so.4.0.0) at java.lang.String.getBytes(java.lang.String) (/usr/lib/libgcj.so.4.0.0) at java.lang.String.getBytes() (/usr/lib/libgcj.so.4.0.0) at JavaBridge.setResult(long, long, java.lang.Object) (/home/Downloads/php-java-bridge/cvs/php-java-bridge/server/JavaBridge.java:114) at JavaBridge.Invoke(java.lang.Object, java.lang.String, java.lang.Object[], long, long) (/home/Downloads/php-java-bridge/cvs/php-java-bridge/server/JavaBridge.java:409) at _Jv_CallAnyMethodA(java.lang.Object, java.lang.Class, _Jv_Method, boolean, java.lang.Class[], jvalue, jvalue) (/usr/lib/libgcj.so.4.0.0) at __clone (/lib/tls/libc-2.3.3.so) > Additional notes regarding gcj: > > 1. GCJ cannot currently handle multiple threads accessing the library > via JNI (CNI might work, not tested). So until this gcj bug is fixed, > please start the bridge via dl() and do *not* hard-code the socketname > option. This starts a new server process for each incoming request. ... what this mean exactly? ... apache is multithreaded.. and thus I can have two php scripts running in different threads and being called at the same time.. so they cannot access the same class.method? or cannot use the php-java-bridge lib at once at all? test ___________________________________________________________ Gesendet von Yahoo! Mail - Jetzt mit 100MB Speicher kostenlos - Hier anmelden:
https://sourceforge.net/p/php-java-bridge/mailman/php-java-bridge-users/?viewmonth=200409&viewday=27
CC-MAIN-2017-43
refinedweb
591
72.63
Received this in our mailbox last week. Thought it might amuse a few GZers... Maybe I should start a business building Faraday Cages around these people's homes! ;-) First, the double space after periods in the paragraphs. Whoever wrote this learnt to type when typewriters were a thing. Seriously, though, "radiation that 10s or 100s x greater" sounds to me they are confusing data speed with frequency, and this with radiation. Then "SafeG". They basically want cabled networking everywhere. No more wireless anyway. Yeah, nah. Same kind of people who believe in the anti-vaxx crap. And that the man did not land in the moon. Geekzone broadband switch | Backblaze backup (Geekzone aff) | Amazon (Geekzone aff) | MightyApe (Geekzone aff) | Sharesies (ref code) | My technology disclosure Give them some reading material back :) #include <std_disclaimer> Any comments made are personal opinion and do not reflect directly on the position my current or past employers may have. Mike Retired IT Manager. The views stated in my posts are my personal views and not that of any other organisation. Be it ever so humble, there is no place like home. k1w1k1d: Anyone still know semaphore? Let's setup the Clacks! GNU PTerry Anything I say is the ramblings of an ill informed, opinionated so-and-so, and not representative of any of my past, present or future employers, and is also probably best disregarded. Beware the 5G treehouse conspiracy! There are kids in the neighbourhood, right now, who are engaged in a secret plot to set up an advanced communications network between their treehouses. It is said that their mysterious devices make use of cans that have been emptied of their contents and had the lids removed. As the network consists of five children named Geoff, George, Grant, Gail and Geraldine, it is commonly referred to as ‘5G’. The techno-cans are linked in some unknown way with special string that makes it possible to convey voice waves over the distance spanning the treehouses, giving these kids an insurmountable tactical advantage in any battle involving dirt clods or water guns. Apart from the sheer unfairness of it all, the devices are known to present invisible hazards to the health and well-being of certain older members of our precious community. Not only do the sound waves contain frequencies known to set up pleasurable vibrations in the nether regions when used incorrectly by our less rigorous residents, leading to distraction and impure thoughts, they are also capable of transmitting words that can result in uncontrollable excitation and spontaneous emissions of a most disturbing nature. Some have actually been heard whispering “I love you”. It has been reported that innocent bystanders fleeing the emissions have become entangled in the hard to see web of string, rolling it around themselves and wrapping themselves in the 5G devices as they are yanked from the treehouses, hitting them on the head. Clearly these nefarious devices represent a mortal threat to the health, welfare, and moral turpitude of our exceptionally challenged population. They must be stopped! Sign our petition today! 5G must go! Ban string! We don’t need these new-fangled technological devil instruments! Shouting was good enough for our Devonian ancestors and it is good enough for us! I don't think there is ever a bad time to talk about how absurd war is, how old men make decisions and young people die. - George Clooney
https://www.geekzone.co.nz/forums.asp?forumid=42&topicid=257243
CC-MAIN-2020-16
refinedweb
568
62.78
strcmp - compare two strings #include <string.h> int strcmp(const char *s1, const char *s2); The strcmp() function compares the string pointed to by s1 to the string pointed to by s2. The sign of a non-zero return value is determined by the sign of the difference between the values of the first pair of bytes (both interpreted as type unsigned char) that differ in the strings being compared. Upon completion, strcmp() returns an integer greater than, equal to or less than 0, if the string pointed to by s1 is greater than, equal to or less than the string pointed to by s2 respectively. No errors are defined. None. None. None. strncmp(), <string.h>. Derived from Issue 1 of the SVID.
http://pubs.opengroup.org/onlinepubs/007908775/xsh/strcmp.html
CC-MAIN-2014-42
refinedweb
123
68.3
Provided by: manpages-dev_3.35-0.1ubuntu1_all NAME aio_write - asynchronous write SYNOPSIS #include <aio.h> int aio_write(struct aiocb *aiocbp); Link with -lrt. DESCRIPTION The aio_write() function queues the I/O request described by the buffer pointed to by aiocb. file offset aiocbp->aio_offset, regardless of the current file offset. If O_APPEND is set, data is written at the end of the file in the same order as aio_write() calls are made. After the call, the value of the current writing. EFBIG The file is a regular file, we want to write at least one byte, but the starting position is at or beyond the maximum offset for this file. EINVAL One or more of aio_offset, aio_reqprio, aio_nbytes are invalid. ENOSYS This function is not supported. VERSIONS The aio_write() function is available since glibc 2.1. CONFORMING TO POSIX.1-2001, POSIX.1-2008. NOTES. SEE ALSO aio_cancel(3), aio_error(3), aio_fsync(3), aio_read(3), aio_return(3), aio_suspend(3), lio_listio(3), aio(7) COLOPHON This page is part of release 3.35 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at- pages/. 2010-10-02 AIO_WRITE(3)
https://manpages.ubuntu.com/manpages/precise/en/man3/aio_write.3.html
CC-MAIN-2021-39
refinedweb
198
76.72
In this tutorial we will learn about the CharArrayWriter in Java. java.io.CharArrayWriter extends the java.io.Writer. The CharArrayWriter class writes a character buffer to the output stream. This class is acted as a Writer to write the stream. toCharArray() and toString() methods can be used to get the data. close() method of this class has no effect. Constructor Detail Method Detail Example This example is being given here for demonstrating how to use CharArrayWriter. This example explains that how to write char array to the output stream using, read the characters, and write the data to the file. For this in the example I have used the methods of CharArrayWriter such as write() method (for writing to the output stream), toCharArray() (for reading the char array data), writeTo() (for writing the data to another Writer). Source Code import java.io.CharArrayWriter; import java.io.FileWriter; import java.io.IOException; public class JavaCharArrayWriterExample { public static void main(String args[]) { char ch[] = {'a', 'b', 'c', 'd'}; CharArrayWriter caw = null; FileWriter fw = null; try { caw = new CharArrayWriter(); System.out.println(); System.out.println("*****Output*****"); System.out.println("An array of character is written successfully to the output stream"); caw.write(ch); char c[] = caw.toCharArray(); System.out.print("Data read through toCharArray is : "); for(int i =0; i<c.length; i++) { System.out.print(c[i]); } System.out.println(); fw = new FileWriter("test.txt"); caw.writeTo(fw); } catch(Exception e) { System.out.println(e); } finally { if(caw != null) { try { caw.close(); fw.close(); } catch(Exception ioe) { System.out.println(ioe); } } }// end finally }// end main }// end class Output When you will execute the above example you will get the output as below as well as you will see that a file with the name which you have specified at the time of programming is created in the specified directory.Writer Post your Comment
http://roseindia.net/java/example/java/io/chararraywriter.shtml
CC-MAIN-2016-18
refinedweb
312
51.95
On Mon, Jun 13, 2011 at 8:30 AM, Greg Ewing greg.ewing@canterbury.ac.nz, IMO. Yikes, now *there's* a radical proposal. -lots on any idea that would make: def f(): i = 0 def g1(): return i i = 1 def g2(): return i return [g1, g2] differ in external behaviour from: def f(): result = [] for i in range(2): def g(): return i result.append(g) return result or: def f(): return [lambda: i for i in range(2)] or: def _inner(): for i in range(2): def g(): return i yield g def f(): return list(_inner()) Cheers, Nick.
https://mail.python.org/archives/list/python-ideas@python.org/message/O5WWPOPX7GOLJ6LNGEV7BANLM3BYBGYO/
CC-MAIN-2022-27
refinedweb
102
65.56
Using the Gravatar API: Announcing Your Presence WEBINAR: On-Demand How to Boost Database Development Productivity on Linux, Docker, and Kubernetes with Microsoft SQL Server 2017 Grava What? Gravatar is a social presence site that specializes in storing a profile and associated avatar for you. Unlike sites such as Facebook, Linked-In, and others, Gravatar doesn't promote sharing of things. It has one simple goal, and that's to store avatars and profile information in a central place for you use in other applications or Web sites. Discus, for example, can be set up to use your Gravatar picture, so that when you make a post using any Web site that uses Discus for its comments, your photo and user name are automatically filled in, even if your not actually a member of that site. Figure 1: Gravatar's main page If you don't have a WordPress account, still click the sign-in button, but when you get to the page to enter your sign-in details, look just below the form and you'll see a link to "Create an Account." To just use Gravatar in an external application, you don't actually need a Gravatar account. All you need is the email address registered with Gravatar of the person who's account you wish to display. For this post, however, I'm going to demonstrate using the service by using my own account, and I'll assume you also have an account you can use to program against. Let's Create Some Code For this post, I'm going to use a standard WebForms ASP.NET application to demonstrate using Gravatar. I'm using WebForms purely for simplicity; for whichever platform you use, the process is the same. Fire up Visual Studio, and start a "Web application." Figure 2: An empty Web application using WebForms Add a "Web Form" called 'Index' to your project. Your Solution Explorer should look something like what's shown in Figure 3. Figure 3: Solution Explorer for our Gravatar Project Make sure that your ASPX code for the page looks as follows: <%@ Page <head runat="server"> <title>Gravatar Test Page</title> </head> <body> <form id="form1" runat="server"> <div> <asp:button <asp:image </div> </form> </body> </html> If you've called your page by a different name, remember to NOT change the first line, and only replace everything from the 'DOCTYPE' onwards. If you switch to design view for your page, you should see something similar to the following: Figure 4: Our page in design mode To get an avatar image from Gravatar for any given email address, you have to perform the following steps: - Convert the email address to all lower case. - Trim off any leading or trailing whitespace. - Calculate an MD5 hash of the email address. - Make a HTTP get request to Gravatar with the calculated hash (or use the hash to create a string for an image tag). Double-click the button in design view, and then add the following code to the button click handler in your page behind code.); } You'll also need the "GetMD5Hash" function and various using statements. Your entire code behind should look something similar to the following: using System; using System.Security.Cryptography; using System.Text; namespace GravatarTest { public partial class Index : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { }); } private string GetMD5Hash(string input) { MD5CryptoServiceProvider mD5CryptoServiceProvider = new MD5CryptoServiceProvider(); byte[] bytes = Encoding.UTF8.GetBytes(input); bytes = mD5CryptoServiceProvider.ComputeHash(bytes); StringBuilder stringBuilder = new StringBuilder(); byte[] numArray = bytes; for (int i = 0; i < (int)numArray.Length; i++) { byte num = numArray[i]; stringBuilder.Append(num.ToString("x2").ToLower()); } return stringBuilder.ToString(); } } } Change the "gravatarEmail" string to whatever the email of your Gravatar account is and, all being well, if you press F5 to run the Web page, then click the button; you should see your Gravatar profile picture appear. Figure 5: My browser showing my Gravatar Avatar The URL to construct to retrieve your avatar image couldn't be simpler. It's "" followed by the MD5 hash calculated, as described above. You don't need to authenticate, have an API token, or any complicated OAuth schemes. You simply make a regular GET call to the URL and get an image as a response. If you need to make a secure call from an HTTPS page, you simply just change "" to "secure.gravatar.com" and use HTTPS as the scheme. Following the hash, there are a couple of useful parameters you can add. 'S', or 'size', allows you to set the pixel size of the returned image. I've used it in the code above so the image size matches the size of the ASP.NET image container I created. You can have a maximum size of 2048. The 'd' parameter lets you set a default behaviour if an avatar hash is not found. If you make 'd=404', Gravatar will return a 404 error if an image doesn't exist. If you set 'd=<alternate image url>', the specified image will be used. You can also use the following: - d=mm: Mystery Man - d=identicon: Geometric pattern calculated from the provided hash - d=monsterid: Mini monster picture based on the provided hash - d=wavatar: Image similar to a mini monster, but all cool and new wave - d=retro: An 'OlkSkool' 8 bit graphic representation of the provided hash - d=blank: A simple blank image Providing the d parameter will automatically substitute the image as required if no hash can be looked up. So, even if the email you work with is not registered, it's still possible to get some kind of image. Getting Profile Information Gravatar can handle more than just an avatar image. If you've filled in the various bits that you can change on your Gravatar account, you also can make a call to "", followed by the hash calculated for your email. You'll get a mini profile page; for example, mine's at. Figure 6: My Gravatar mini profile Although this might be great if you just want to display a simple popup/iframe that's already populated, you can simply, just by adding '.json' to the end of the request, get all the information on that page returned as a JSON object. Figure 7: My Gravatar profile returned as JSON Representing the different objects returned by Gravatar is not particularly difficult. The main 'entry', for example, can be represented as follows: using System.Collections.Generic; namespace GravatarTest { public class GravatarEntry { public int Id { get; set; } public string Hash { get; set; } public string RequestHash { get; set; } public string ProfileUrl { get; set; } public string PreferredUsername { get; set; } public string ThumbnailUrl { get; set; } public List<PhotoEntry> Photos { get; set; } public NameEntry Name { get; set; } public string DisplayName { get; set; } public string AboutMe { get; set; } public string CurrentLocation { get; set; } public List<EmailEntry> Emails { get; set; } public List<ImEntry> Ims { get; set; } public List<AccountEntry> Accounts { get; set; } public List<UrlEntry> Urls { get; set; } } } The various sub objects are all just as simple. PhotoEntry.cs namespace GravatarTest { public class PhotoEntry { public string Value { get; set; } public string Type { get; set; } } } NameEntry.cs namespace GravatarTest { public class NameEntry { public string GivenName { get; set; } public string FamilyName { get; set; } public string Formatted { get; set; } } } namespace GravatarTest { public class EmailEntry { public bool Primary { get; set; } public string Value { get; set; } } } ImEntry.cs namespace GravatarTest { public class ImEntry { public string Type { get; set; } public string Value { get; set; } } } AccountEntry.cs namespace GravatarTest { public class AccountEntry { public string Domain { get; set; } public string Display { get; set; } public string Url { get; set; } public string Userid { get; set; } public string Username { get; set; } public bool Verified { get; set; } public string Shortname { get; set; } } } UrlEntry.cs namespace GravatarTest { public class UrlEntry { public string Value { get; set; } public string Title { get; set; } } } Once you have all the various structures needed, you simply can just use the HTTP web client to grab the JSON from Gravatar, then parse it into your classes using JSON.NET. For example, add the following method to your code behind. public static string GetData(string url) { string resultContent; using (var client = new HttpClient()) { client.BaseAddress = new Uri(url); client.DefaultRequestHeaders.TryAddWithoutValidation("User-Agent", quot;Mozilla/5.0 (Windows NT 6.2; WOW64; rv:19.0) Gecko/20100101 Firefox/19.0"); var result = client.GetAsync(url).Result; resultContent = result.Content.ReadAsStringAsync().Result; var code = result.StatusCode; switch (code) { case HttpStatusCode.InternalServerError: throw new ApplicationException("Server 500 Error while posting to: " + url); case HttpStatusCode.NotFound: throw new ApplicationException("Server 404 Error " + url + " was not a valid resource"); } } return resultContent; } And use NuGet to add Json.Net to your project. Before you can parse the entry returned, you need to make one more class, called GravatarProfiles.cs. The reason for this is because the Gravatar API, even though it's returning only one object, it returns it as an array of GravatarEntries. To allow Json.Net to parse the object correctly, we have to make sure that we are able to structure things so that Json.Net can work with it. GravatarProfiles.cs should look something like the following: using System.Collections.Generic; namespace GravatarTest { public class GravatarProfiles { public List<GravatarEntry> Entry { get; set; } } } Once that's in place, it's as simple as this: string gravatarProfileJson = GetData(String.Format("{0}.json", hash)) GravatarProfiles gravatarProfiles = JsonConvert.DeserializeObject<GravatarProfiles>(gravatarProfileJson); to get the Gravatar Profile into your code. Figure 8: Visual Studio Debugger showing our profile in .NET objects From this point on, you can extract the details and add them to labels, data repeaters, or anything else you want to use to display them. Found a strange API you've never seen in .NET before, or want to know if there's a .NET way of doing something? Drop me a comment in the Comments section below and I'll try to cover it in a future post. There are no comments yet. Be the first to comment!
https://www.codeguru.com/columns/dotnet/using-the-gravatar-api-announcing-your-presence.html
CC-MAIN-2018-05
refinedweb
1,660
54.22
Subject: Re: [OMPI devel] problem in the ORTE notifier framework From: Nadia Derbey (Nadia.Derbey_at_[hidden]) Date: 2009-05-28 08:12:03 On Thu, 2009-05-28 at 05:57 -0600, Ralph Castain wrote: > I agree with Terry here about being careful in pursuing this path. > What I wouldn't want to have happen is to force anyone wanting to be > notified of error events to have to also turn on peruse, which impacts > the non-error code path. Agreed, I missed that part! Regards, Nadia > > > > > > I think George's point is that we > > already have lots of hooks in place in the PML -- and > they're called > > peruse. So if we could use those hooks, then a) they're > run-time > > selectable already, and b) there's no additional cost in > performance > > critical/not-critical code paths (for the case where these > stats are > > not being collected) because PERUSE has been in the code > base for a > > long time. > > > > I think the idea is that your callbacks could be invoked by > the peruse > > hooks and then they can do whatever they want -- increment > counters, > > conditionally invoke the ORTE notifier system, etc. > > > > > > > > On May 27, 2009, at 11:34 AM, George Bosilca wrote: > > > > > What is a generic threshold? And what is a counter? We > have a policy > > > against such coding standards, and to be honest I would > like to stick > > > to it. The reason is that the PML is a very complex piece > of code, and > > > I would like to keep it as easy to understand as possible. > If people > > > start adding #if/#endif all over the code, we diverging > from this > > > goal. > > > > > > The only way to make this work is to call the notifier or > some other > > > framework in this "slow path" and let this other framework > do it's own > > > logic to determine what and when to print. Of course the > cost of this > > > is a function call plus an atomic operation (which is > already not > > > cheap). It's starting to get expensive, even for a "slow > path", which > > > in this particular context is just one insertion in an > atomic FIFO. > > > > > > If instead of counting in number of times we try to send > the fragment, > > > and switch to a time base approach, this can be solved > with the PERUSE > > > calls. There is a callback when the request is created, > and another > > > callback when the first fragment is pushed successfully > into the > > > network. Computing the time between these two, allow a > tool to figure > > > out how much time the request was waiting in some internal > queues, and > > > therefore how much delay this added to the execution time. > > > > > > george. > > > > > > On May 27, 2009, at 06:59 , Ralph Castain wrote: > > > > > > > ORTE_NOTIFIER_VERBOSE(api, counter, threshold,...) > > > > > > > > #if WANT_NOTIFIER_VERBOSE > > > > opal_atomic_increment(counter); > > > > if (counter > threshold) { > > > > orte_notifier.api(.....) > > > > } > > > > #endif > > > > > > _______________________________________________ > > > devel mailing list > > > devel_at_[hidden] > > > > > > > > > > > -- > > Nadia Derbey <Nadia.Derbey_at_[hidden]> > > > _______________________________________________ > devel mailing list > devel_at_[hidden] > > > > _______________________________________________ > devel mailing list > devel_at_[hidden] > -- Nadia Derbey <Nadia.Derbey_at_[hidden]>
http://www.open-mpi.org/community/lists/devel/2009/05/6145.php
CC-MAIN-2015-11
refinedweb
492
67.18
I'm fairly new to JAVA and OOP and I'm currently following an academic course where I'm learning data structures and algorithms in java. As I was learning about implementation of linked lists I've ran into a small problem of not understanding the code how to create a node when implementing a linked list(I'm familiar with constructors and bit of recursion ). Code of the Node class as follows public class Node { public int info; public Node next, prev; public Node (int el) { this (el,null,null); } public Node (int el,Node n,Node p){ info = el; next =n; prev=p; } } public class List { private Node head, tail; public List ( ){ head = tail = null; } public boolean isEmpty( ){ return head == null; } public void addToTail (int el) { if (!isEmpty ( )) { tail = new Node (el, null, tail); tail.prev.next = tail; } else head = tail = new Node(el); } public int removeFromTail ( ){ int el = tail.info; if (head == tail) head = tail =null; else { tail = tail.prev; tail.next = null; } return el; } } Ok let's start from the Node Class public Node next, prev; public Node (int el) { this (el,null,null); } Here the objects next and prev are references to the next and previous nodes to the current node (which is your current object (this)) this (el,null,null); It means you are creating a node which has no previous or next node. as you pass null, null for next and previous. Its similar to creating a head as head doesn't have next and previous nodes. When you create the head you will never change it but you will change the next of the head the time when you create the second element in your list When you create a tail of the linked list public void addToTail (int el) { if (!isEmpty ( )) { tail = new Node (el, null, tail); tail.prev.next = tail; } else head = tail = new Node(el); } here you first create a tail Node by tail = new Node (el, null, tail); And then get the previous of tail and set the next element of the prev as the tail by doing tail.prev.next = tail; Every time you add a new Node to the list you are calling addToTail(int e1) which updates the tail and updates the next of the old tail.
https://codedump.io/share/1wVw43VAqsn0/1/how-a-java-class-works-with-it39s-own-reference-type
CC-MAIN-2017-09
refinedweb
382
60.79
THis is what I need to do...I need help with the third point 1. Write a class OrderedList. The implementation of OrderedList must be as a linked list of Comparable elements. The list items are maintained in ascending order at all times. On this assignment, no indexing methods are allowed. The following methods are required: int size() //Return the number of items in the list boolean contains(Comparable item)//Return true iff item is stored in the OrderedList void insert(Comparable item)//Add an item to the OrderedList Comparable remove(Comparable item) //Remove and return a matching item Also, provide a default constructor and a toString() method. 2. Write a class, Record, to represent a concordance entry consisting of a word, a count, and a line-number list. The line-number list should be implemented as an ArrayList. 3. Write the Concordance class. There should be exactly 2 instance variables: the name of the text file from which the Concordance is created, and an OrderedList of Record elements. Each Record element stores the information about one word from the text file. The constructor has a parameter for the file name, and builds the Concordance by processing the words from the text file. Provide a toString() method, a method Record lookup(String word) to look up a given word in a Concordance, and a method to write a Concordance to a text file. this is what I have so far import java.util.LinkedList; public class OrderedList { private Node first; public OrderedList() { this.first = null; } public int size() { int count = 0; Node pointer = this.first; while(pointer != null) { count++; pointer = pointer.next; } return count; } public boolean contains(Comparable item) { Node pointer = this.first; while (pointer!= null && !pointer.data.equals(item)) pointer = pointer.next; return (pointer != null) && (pointer.data.equals(item)); } public void insert( Comparable newI) { Node pointer = this.first; Node newbee= new Node(newI); while(pointer != null) pointer = pointer.next; if (pointer == null) { newbee.next = this.first; this.first = newbee; } else { newbee.next = pointer.next; pointer.next = newbee; } } public Comparable remove(Comparable item) { if (this.first == null) return null; Node pointer = this.first; Node trailer = null; while(pointer.next != null) { trailer = pointer; pointer = pointer.next; } if (pointer!= null) if (trailer ==null) this.first = pointer.next; else trailer.next = pointer.next; return pointer.data; } public String toString() { String image = this.size() + " Items"; Node pointer = this.first; while (pointer != null) { image += "\n" + pointer.data; pointer = pointer.next; } return image; } private class Node { public Comparable data; public Node next; public Node(Comparable item) { this.data = item; this.next = null; } } } *************************************************** import java.util.ArrayList; public class Record { public String word = ""; public int count = 0; public ArrayList<Integer> list = new ArrayList<Integer>(); public Record (String w, int c, ArrayList<Integer> l) { this.word = w; this.count = c; this.list = l; } } ****************************************************** import java.util.Comparator; public class Concordance { public String filename = ""; public OrderedList list = new OrderedList(); public Concordance(String filename) { } public Record lookup(String word) { // Record r = ; return null; } public String toString() { String r =""; //for (Record term: list) // r = r + term; return r; } } Please I have been trying to figure it out but could not figure it out.
https://www.daniweb.com/programming/software-development/threads/274777/need-help-urgent-with-java-project
CC-MAIN-2017-34
refinedweb
522
61.83
requestFocusand requestFocusInWindowmethods. They draw a distinction between giving the focus to a component even if that requires changing the currently focused window ( requestFocus) and giving the focus to a component only if its window has the focus ( requestFocusInWindow). So if you've got a Java application that's being annoying and stealing the focus from other applications, it's likely calling requestFocuswhen it should be calling requestFocusInWindow. (This problem can be more difficult to spot than you might realize. If you have code that does some work and then requests focus, you can miss this problem if you're the sort of person who always waits for it to complete anyway, or if it usually completes so fast you can't get bored and start doing something else, or – as I'll mention later – if you're on Mac OS.) Requesting focus is made slightly more confusing and complicated than described above by platform differences. When I'm using Linux, a new window will automatically get the focus. When I'm using Solaris, with what seems like the same window manager (but presumably with some different configuration option somewhere), a new window won't get the focus. Should my application gain a call to requestFocus? On the one hand, it would make the application far more usable (it's crazy for the user to ask for a dialog to appear but not have the focus go to the new dialog), but on the other hand it seems to be deliberately going against some aspect of the user's configuration. On the other other hand, that user may, like me, not have explicitly chosen such a configuration, and may, like me, not have a clue of how to fix it. (I'm not actually certain it's quite as simple as a matter of configuration, because sometimes it works as I'd expect.) When I'm using Mac OS, there's yet another notion of focus: application focus. So requestFocusbehaves slightly differently again. It will transfer the focus to a window only if the application is currently focused. (Thanks to the screen menu bar, an application can be focused even when it has no windows open. Despite having used Mac OS since 2001-12, I still find it odd that I can close every Safari window, say, and have iTunes as the only window on the display, yet Safari still has the "focus". It's especially disconcerting if you leave the computer and come back to it in such a state.) You might think toFrontwould be the answer, but it isn't. If you ask on Apple's java-dev mailing list, as I did, how to get around this restriction, you get two kinds of response. There's the "you're evil, Mac applications shouldn't do that" response and the "you can't" response. (Strictly, there was a third, suggesting I use open(1), but I wanted something that worked for arbitrary Java programs, even ones not packaged as a .app/ directory.) Neither of these two claims are true. If you have two applications working in concert, as in my case, it's idiotic for them not to be able to transfer focus between themselves. And it turns out you can give the focus to an arbitrary program, if you're prepared to write a bit of C++. Here's the version of the code in salma-hayek at the time of writing: #include <Carbon/Carbon.h> #include <iostream> /* * On Mac OS, there's a strong notion of application. One consequence is that * toFront doesn't work on an application that isn't the one the user's * interacting with. Normally this is fine, if not preferable, but it's awkward * for a Java program that wants to be brought to the front via indirect user * request. For example, without this, it's impossible for the "edit" script * to bring Edit to the front on Mac OS. * * (This file is Objective C++ simply so that the make rules add * "-framework Cocoa", which will drag in Carbon for us.) */ static void bringProcessToFront(pid_t pid) { ProcessSerialNumber psn; OSStatus status = GetProcessForPID(pid, &psn); if (status == noErr) { status = SetFrontProcess(&psn); } if (status != noErr) { std::cout << "pid " << pid << " = " << status << std::endl; } } int main(int, char*[]) { bringProcessToFront(getppid()); return 0; } Spawn this from the Java program that wants to grab the application focus, and hey presto! You're the focused application. My editor uses this to come to the front when the user causes it to open a file from a terminal emulator.
http://elliotth.blogspot.com/2005/09/requestfocus-requestfocusinwindow.html
CC-MAIN-2017-22
refinedweb
758
58.72
Double Nickels on the Dime: An Oral History of the Foundation Grants Program This is a long-ish post summarizing my perspective on the implementation and effects (so far) of CIHR’s Foundation Grant program. I know this program is defensible from some points of view. I’m not trying to speak for anyone but myself. My main points are these: - A grant consolidation program is a good idea under some conditions. But Foundation is not a consolidation program, and we can’t afford it at any scale under the current circumstances. The opportunity costs are too high. - CIHR should stop Foundation, wind down current grants, and re-allocate as much funding as possible to Project Grants, with a target of 20% success rates as the number one priority of the agency. Project provides the broadest support to the most people. It is the best way to invest in the future of Canadian health research. It matters more — and is a better way to fulfill CIHR’s mandate — than Foundation or most strategic programs. If the Project Grants program is not robust and healthy, CIHR is a failure. - If 20% success rates can be maintained, a true consolidation program should be considered, with strict controls on eligibility and budget. Alain the farmer has 10 pigs and 4 buckets of slop per day to feed them. [Bear with me.] He notices two of the pigs — his two favorite pigs, in fact, pigs he knows are of the highest quality — tend to get a bit more slop than the others. They are good at competing for a spot at the trough, and they are good eaters! He’s had them the longest, and they have more meat on them than the others. So he thinks, “If I give those two even more slop, I’ll have the two most excellent pigs in the county, and I’ll make a killing on market day.” So he splits his single pig pen into two. He puts 8 of the pigs in one pen and feeds them 2 slop buckets per day, and puts the 2 excellent pigs in another and gives them each their own bucket. The 2 super-pigs grow faster, but not all that much faster. Often they leave uneaten slop on the floor of their pen. Even a pig can only eat so much, and they can only grow so fast. Meanwhile, the 8 other pigs stop growing. Most of them lose weight. Competition at the trough gets ugly. A few get sick. One of the younger pigs dies. On market day, he does indeed make way more than average on his two super-pigs, who are nicely fattened and beautiful to behold. No one can deny their porcine excellence, as determined by stringent, objective, expert pig review. One of them even wins a blue ribbon and gets its picture in the newspaper with the Minister of Agriculture! But his 8 other pigs (sorry, now 7) are worth far less than average. Overall, it’s a large net loss. The farmer could learn something about zero-sum resource allocation, diminishing returns, and opportunity costs. Or, he could pretend that winning a ribbon was the goal all along. A lot is happening at CIHR, now under new (albeit interim) leadership. Most notably, they have returned to face-to-face panels for peer review. I’m not saying it was easy, but this was, politically speaking, low-hanging fruit. We are going back to a peer review system that pretty much worked. But success rates will reach new lows, and fewer labs will be funded than before the Reforms. Peer review in the Reforms was bad. Some of it was surreal. But the bungled implementation of virtual peer review was not what crippled the system. It was the creation of a funding caste system in which large, long-term Foundation grants are awarded primarily based on seniority and reputation. These grants both enlarge annual budgets and extend funding durations for a small subset of scientists — a subset from which early and mid-career investigators are largely excluded by design. Reallocating a large proportion of open funds to a small group in Foundation has been paid for so far by cutting the number of normal operating grant competitions in half for 3 years. Because time did not stand still while this happened, it has had predictable effects on stability and application pressure. Each year, many more operating grants are ending than CIHR is awarding, something that was never true before the Reforms. Because the vast majority of PIs in the system have one grant, this means labs are being defunded. People are losing their jobs. Past investment is squandered. As we return (finally, maybe?) to two competitions per year, it will necessitate cutting Project success rates in half. Six of one, half dozen of the other — the effect is the same. The opportunity costs are massive, but have been ignored. CIHR initially committed 45% of open program funding to Foundation. This was clearly not something they could afford to do while keeping productive labs open and letting new investigators into the system. It was, in fact, an obscene giveaway that was duly taken advantage of with large — and largely approved — budget requests above an already dubiously generous “baseline” calculation. How the 45% allocation and individual budgeting procedures were approved by those charged with oversight of CIHR’s operations is an incomprehensible failure of due diligence that as far as I know remains unquestioned and unexplained. The various defenses of this at the time were laughable, in a not-funny sort of way. For example, a CIHR executive at the time told me on the phone that they expected ECIs and MCIs to outperform senior scientists in Project, because after all, Foundation was protecting us from having to compete with the scientific crème de la crème as they floated up to unprecedented heights in the funding distribution. At the same time, ironically, CIHR was releasing data showing that, in fact, ECIs and MCIs had generally competed just fine in the OOGP. As it happened (believe it or not), ECIs and MCIs did not outperform more experienced applicants in the chaotic and confusing Project competitions. Unlike in any competition before, quotas and supplemental funding were needed to ensure that ECIs received something approaching — but still substantially less than — the proportion of open program funding they had received in the OOGP. MCIs received no such relief nor sympathy. After all, what more natural time to cull the herd — shouldn’t you have to prove yourself? We’re a meritocracy! Never mind that “proving yourself” under today’s crisis conditions bears zero resemblance to proving yourself under the stable formats and 25–30% success rates of the 2000s. It’s always been hard. So to summarize the Reforms: CIHR took a funding program that was roughly equitable by career stage and split it into two programs that both disfavored early/mid-career applicants, one of them cartoonishly so. All the while, they claimed that life would be good in the Projects, because we were safely protected there from the apex scientists who were now in Foundation, along with half the money. Alternative take: The ladder is being pulled up on two generations of Canadian scientists. In the OOGP, about 140 people had the equivalent of 3 or more concurrent grants in annual funding. CIHR was now going to give out about 140 grants larger than that every year while funding the same number of labs. It was truly magical thinking. In fact, as in all zero-sum wealth concentration phenomena, the creation of local luxury and stability would be paid for with global scarcity and instability. Last year, CIHR finally entered the CHiPs-style 40 car pile-up on the Pacific Coast Highway phase of the Reforms. (This phase was easy to foresee, though it was left off the official transition Gantt charts.) Around this time, it became conventional wisdom that the Foundation program had some “good ideas behind it” but was “unsustainable at its current size.” So: how much could CIHR afford to allocate to a <ahem> “pilot” program like Foundation? The answer 4 years ago was probably something like 10%, assuming they were able to keep budgets under control, which they can’t seem to do. The answer today is unequivocally 0%. This is an agency overextended in every sense and still mired in operational issues. The damage done by the Reforms cannot be addressed by returning to face-to-face panels and fiddling around the edges of budget allocations. Why bring all this up? Why relitigate the Reforms when we have to look ahead? Because it is clear that there is still no will to even temporarily suspend the Foundation program, let alone do the needful and cancel it. The fact that the people charged with getting CIHR back on the right track — and given enormous authority to do so — are still thinking that a program like Foundation has any place at all in what we are all hoping will be CIHR’s recovery should be unsettling. It suggests that they share a core belief of previous leadership: that there is a subset of senior PIs who should be protected at any cost from the current funding climate. Do you “like the idea” of a “Foundation-type program?” I do. Especially when we phrase it in this exquisitely hedged and vague manner. I think we all like the idea, in isolation, in principle. So what? I “like the idea” of a lot of things we can’t afford or that have unintended consequences where harms outweigh benefits. Again and again, the appeal and reasonableness of the “idea” of a Foundation-like program has been weaponized against us and used to justify this specific program at this specific time, which is poorly-conceived and is doing enormous damage. The recent CIHR road show is indistinguishable from the Reforms marketing blitz on this topic. Don’t you think [Famous Scientist] deserves it? Don’t you like excellence? Well, Foundation is about excellence, chum. Case closed. If success rates stabilized in Project at an acceptable level, I would welcome a “Foundation-like” program that would give people who have a sustained track record of high productivity some extra freedom from the grant treadmill. My idea in a nutshell: if you have 3 or more concurrent grants that have been successfully renewed, you can consolidate them. This can only happen when Project success rates are above 20%. And, indeed, that is exactly what should be the primary goal for CIHR: get Project Grant success rates up to and sustained at 20% or higher so that we can have a functional, healthy, future-oriented funding system. Here is how: - Stop the Foundation program. Start winding down current Foundation grants now. Revisit consolidation when Project success rates are 20%. - Put 70% of the CIHR budget into Project. This will require cutting strategic programs and non-operating grant spending significantly. - Do everything possible to #SupporttheReport. Even with the above, 20% success rates in Project will require the full Naylor ask. I know influential people want these grants. Who wouldn’t? I know there is significant support for continuing Foundation. But I would hope there is even more support for the research community all being in this together, for not pulling up the ladder, for not eating our young, and for building research capacity and sustaining research careers. These, by the way, are things that are required by the CIHR Act. They cannot be accomplished with 10% success rates in Project. Prestige-based glamour grants to a small tier of scientists are, funnily enough, not mentioned in the CIHR Act. The CIHR Reforms were an extraordinary failure that has done extraordinary damage. Damage that is now chronic, no matter how you review the grants. It is always tempting to deploy half measures, to split the difference on hard choices. That’s not good enough. Extraordinary measures are needed to restore CIHR. Stopping Foundation isn’t even the extraordinary step — it’s the easiest one on the table. If we can’t get that done, I have little hope for CIHR as an agency that can support a bright future for Canadian health research.
https://medium.com/@MHendr1cks/double-nickels-on-the-dime-an-oral-history-of-the-foundation-grants-program-626549adb2da
CC-MAIN-2017-34
refinedweb
2,050
63.29
The script runs perfectly as my unprivileged user, but when i try to launch it from a cron job it never connects to the broker. I can see that it is running by grep'ing for python, but: 1. nothing appears when i try to redirect STDOUT into a log file. 2. the broker never responds with a client connection message My cron entry looks like this: Code: Select all @reboot python3 ~/sensor.py & >> log.txt Code: Select all import paho.mqtt.client as mqtt import threading import bme680 import time import json Code: Select all def connect(mqttc): mqttc.connect(MQTT_Broker, MQTT_Port, Interval) mqttc.loop_forever() def main_loop(): mqttc = mqtt.Client() T1 = threading.Thread(target=connect, args=(mqttc,)) T1.daemon = True T1.start() while True: poll_sensors() time.sleep(1200) main_loop()
https://www.raspberrypi.org/forums/viewtopic.php?f=28&t=235460&p=1731306&sid=e9685121a4d8560568f748270930e606
CC-MAIN-2021-25
refinedweb
130
62.04
I have a model to write blogs. In that I'm using a wysiwyg editor for the Blog's descritpion, through which I can insert an image within the description. class Blog(models.Model): title = models.CharField(max_length=150, blank=True) description = models.TextField() pubdate = models.DateTimeField(default=timezone.now) publish = models.BooleanField(default=False) In the admin it looks like this: What I want is to get first image inside the description of the blog, and present it in the template like this: How do get the img src from the description of the blog to use it in the template? Your help and guidance will be very much appreciated. Thank you. views.py: def blog(request): blogs = Blog.objects.all() return render(request, 'blogs.html', { 'blogs':blogs }) template: {% for blog in blogs %} <div class="blog"> <p class="blog_date"> {{blog.pubdate}} </p> <h2 class="blog_title"> {{blog.title}} </h2> <img src="{{STATIC_URL}} ###img src to be included" class="blog_image img-responsive img-thumbnail"> <a href="blog_detail.html" class="blog_read_more btn btn-info">Read more</a> <div class="container"> <p class="divider">***</p> </div> </div> {% endfor %} Here's a basic approach you can tweak for your convenience: Add a first_image field to your model: class Blog(models.Model): title = models.CharField(max_length=150, blank=True) description = models.TextField() pubdate = models.DateTimeField(default=timezone.now) publish = models.BooleanField(default=False) first_image = models.CharField(max_length=400, blank=True) Now all you have to do is populate your first_image field on save, so your model's save method should look like this: def save(self, *args, **kwargs): # This regex will grab your first img url # Note that you have to use double quotes for the src attribute img = re.search('src="([^"]+)"'[4:], self.description) self.first_image = img.group().strip('"') super(Blog, self).save(*args, **kwargs) Now simply reference it in your template. This is not a fully fledged solution, there are other things that you should take into account, like differentiating between a smiley or thumbnail or a code snippet imr src and an actual img src, but this should do for personal use, you don't want to limit other people's choice to having their first image as the cover image. Hope this helps!
https://databasefaq.com/index.php/answer/3421/django-django-templates-django-views-get-the-first-image-src-from-a-post-in-django
CC-MAIN-2019-51
refinedweb
372
58.89
Hi, I’ve got a CUDA C/C++ code that launch a kernel with 512threads per blocks and 72498037 blocks. But there is a problem… When I launche my code, it run for approximately 15min and… Then blackscreen + driver problem and the program stop. I set de Watchdog timer to an hour (3600s) So I think the problem does not come from here. I run cuda-memcheck and I didn’t notice any memory error. I check error from CUDA function like follow : gpuErrchk("cudafunction"); defined like : # define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); } inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort = true) { if (code != cudaSuccess) { printf("GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line); getchar(); exit(1); } } In main function my code is like : int main(void) { cudaMalloc -> pass cudaMalloc -> pass cudaMalloc -> pass cudaMalloc -> pass cudaMemcpy -> pass cudaMemcpy -> pass kernel <<< 614, 1024 >>> -> pass cudaStreamQuery -> pass cudaGetLastError -> pass cudaMemcpy -> pass otherkernel <<< 72498037 ,512 >>> -> pass cudaStreamQuery -> pass //cudaDeviceSynchronize -> GPUassert: unknown error "path to the main.cu" cudaGetLastError -> pass cudaFree -> GPUassert: unknown error "path to the main.cu" cudaFree -> exited cudaFree -> exited cudaFree -> exited return (0); } If I add the commented line to the main function I get the error on this line, if not on the first free. After the error the code exit. So… What’s happening ? And an other thing, I cannot launch 1024 threads on the second kernel… Don’t know why, so I’ll try to reinstall cudaToolkit and driver. And see if it works
https://forums.developer.nvidia.com/t/kernel-problem-execution-stop-after-15min/45885
CC-MAIN-2022-27
refinedweb
253
51.58
ULiege - Aerospace & Mechanical Engineering Metafor has a few integrated meshers, quite simple but usually capable of meshing parts that are not too complicated. However, 2D meshers require that the wire delimiting the domain has the right orientation. The wire must be defined with its “area to the left”, which means that when the wire is followed along its orientation (defined by the succession of its curves), the matter of the part is on the left-hand The auto-detection is working if the Sides is made of 4 edges, each of which is only made of one Curve. TransfiniteMesher2D(sideset(number)).execute(type, tri) In the example above, the auto-detection is doable. A similar function for higher degrees is: HighDegreeTransfiniteMesher2D(sideset(number), degree).execute(type, sideNode, tri) For 9-nodes mesh elements, the central node is linked to the Side. For 16-nodes mesh elements, central nodes are linked to the Side. mat = ((mat11, mat12, ..., mat1nbmax), (mat21, mat22, ..., mat2nbmax), (mat31, mat32, ..., mat3nbmax), (mat41, mat42, ..., mat4nbmax) ) TransfiniteMesher2D(sideset(number)).execute2(mat, type) where mat is a python tuple with 4 components. Each component is a tuple which contains the number of the curves which constitute each edge. Each of these four edges can be made of a different number of curves. Example: This face can be meshed by the command: TransfiniteMesher2D(sideset(1)).execute2( (1,2,(3,4),5) ) The order of the lines does not matter in 2D (in opposition to the 3D case). Therefore, the following command also works: TransfiniteMesher2D(sideset(1)).execute2( ((4,3),1,2,5) ) The mesh can be projected on the surface surfNo (whether with or without auto-detection) if this surface is associated to the Side before the meshing operation: sideset(1).setSurfaceNo(surfNo) This is done with the commands: MesherTFI2D =TransfiniteMesher2D(sideset(number)) MesherTFI2D.setEnableDistribution() MesherTFI2D.execute(type, tri) or MesherTFI2D.execute2(mat, type) when one if the edges is already discretized using a mesh elements distribution. Note : By default, the parameter of the function setEnableDistribution is set to True. This mesher is a new implementation of L. Stainer's work, which contains an offset and frontal mesher. It is usually rather disappointing. Triangles (frontal): sidset(no).frontalTriangle(d) Quadrangles (offset + frontal): sidset(no).frontalQuad(d) where d is the average length of the mesh elements edges to generate. This quadrangular mesher is based on Sarrate algorithm. This method consists in diving the domain in a recursive way until only one quadrangle remains. It is very robust, and enables efficient meshing of complex domains. Gen4 can be used to mesh: These lines are used to mesh the face #1 of the domain, whose vertices (points #1, #2, #3 and #4) densities are 0.1, 0.1, 0.2 and 0.05. from toolbox.meshers import Gen4Mesher defaultDensity = 0.1 mesher = Gen4Mesher(sideset(1), domain, defaultDensity) mesher.setPointD(3,0.2) mesher.setPointD(4,0.05) mesher.execute() A default density, used unless explicitly stated differently, can be defined. This way, a density is only defined on specific mesh points. If no value is assigned in the data set, $0.1$ is taken as default density value. CAREFUL : The face #1 must be defined in the XY plane. Create triangles on a planar face, as a function of
http://metafor.ltas.ulg.ac.be/dokuwiki/doc/user/geometry/mesh/2d
CC-MAIN-2021-43
refinedweb
545
51.14