text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
Opened 3 years ago Closed 20 months ago #7993 closed bug (worksforme) ghc 7.6 (not 7.4) sometimes hangs at child process exit on s390x Description On Debian's s390x architecture (64-bit S/390, Linux kernel), builds of several packages hang with GHC 7.6 where they did not hang with GHC 7.4. In particular, ghc itself hangs during its own build when bootstrapping with 7.6. This is quite easy to reproduce on affected systems, although it doesn't hang in exactly the same place every time. It appears that the runtime sometimes deadlocks when a subprocess exits; the strace looks like this: 7523 exit_group(0) = ? 6680 <... futex resumed> ) = ? ERESTARTSYS (To be restarted) 6680 --- SIGCHLD (Child exited) @ 0 (0) ---) [repeats forever] ghc spawns enough subprocesses (gcc etc.) that it's essentially bound to hit this sooner or later. I suspect perhaps a lack of signal-safety somewhere - at an extremely wild guess, perhaps the type of an important variable written in a signal handler happens to exceed the size of sig_atomic_t on s390x and not elsewhere - but I haven't yet been able to track this down in the time available to me. If you don't immediately recognise this as something obvious, then perhaps somebody more fluent in Haskell than I would be good enough to suggest test code that exercises this and is somewhat simpler than "build ghc"? If my analysis is at all close to the mark, then something that sits in a loop forking and reaping a trivial child process on each iteration should be enough to reproduce this. On the assumption that most non-Debian-developers don't have convenient access to S/390 machines (Debian developers can use zelenka.debian.org), I'd be happy to try things out. Change History (6) comment:1 Changed 3 years ago by nomeata - difficulty set to Unknown comment:2 Changed 3 years ago by nomeata I tried to reproduce the problem by spawning lots of processes, but import System.Process main = mapM_ (\_ -> readProcess "/bin/echo" ["hello", "world"] "") [0..10000] did not deadlock. comment:3 Changed 3 years ago by pmylund - Cc simonmar added I am experiencing the same issue, but on x86_64, and with my own application which uses GHC (7.6.2) threads. On occasion a thread will loop forever, and give the same kind of output from strace. (Is it the same issue?) Unfortunately I don't know how to begin troubleshooting. Whatever I do to try to reproduce it in a smaller test, the problem goes away if I don't use the combination of threads, STM and exception handling that I have in my larger application. I will keep trying and report back, but any input is appreciated. Update: Please ignore the above. It turns out I had become trapped in an infinite loop inside my own recursive function, and that the strace output is to be expected from applications that are actually doing something (like infinitely looping.) Sorry about that. comment:4 Changed 3 years ago by simonmar Getting a stack trace would probably help. You want to make sure that GHC itself is built with -debug: set GhcDebugged=YES in your build.mk (this will slow down the build, but you can remove it later). When the process hangs, attach to it with gdb and get a backtrace of all the threads. comment:5 Changed 20 months ago by thomie - Status changed from new to infoneeded Does this problem still occur with 7.8.3? comment:6 Changed 20 months ago by nomeata - Resolution set to worksforme - Status changed from infoneeded to closed At least ghc itself seems to build fine: I did not yet try to upload separate packages to be built with this. I guess we can close/ignore this for now, and revisit if it occurs again with 7.8. I tried to find out if -V0 helps, but unfortunately it does not.
https://ghc.haskell.org/trac/ghc/ticket/7993
CC-MAIN-2016-30
refinedweb
660
69.92
How to forecast a time series out-of-sample using an ARIMA model in Python? I have seen similar questions at Stackoverflow. But, either the questions were different enough or if similar, they actually have not been answered. I gather it is something that modelers run into often, and have a challenge solving. In my case I am using two variables, one Y and one X with 50 time series sequential observations. They are both random numbers representing % changes (they could be anything you want, their true value does not matter. This is just to set up an example of my coding problem). Here are my basic codes to build this ARIMAX(1,0,0) model. import pandas as pd import statsmodels.api as sm import statsmodels.formula.api as smf df = pd.read_excel('/Users/gaetanlion/Google Drive/Python/Arima/df.xlsx', sheet_name = 'final') from statsmodels.tsa.arima_model import ARIMA endo = df['y'] exo = df['x'] Next, I build the ARIMA model, using the first 41 observations modelho = sm.tsa.arima.ARIMA(endo.loc[0:40], exo.loc[0:40], order =(1,0,0)).fit() print(modelho.summary()) So far everything works just fine. Next, I attempt to forecast or predict the next 9 observations out-of-sample. Here I want to use the X values over these 9 observations to predict Y. And, I just can't do it. I am showing below just the one code, that I think gets me the closest to where I need to go. modelho.predict(exo.loc[41:49], start = 41, end = 49, dynamic = False) TypeError: predict() got multiple values for argument 'start' 1 answer - answered 2021-04-15 10:09 Econ_matrix This example should work. I am using your code but it is slightly changed. import pandas as pd import statsmodels.api as sm import numpy as np from statsmodels.tsa.arima_model import ARIMA generate an example data frame df = pd.DataFrame(data = {'x' : np.random.normal(12, 3,size = 332), 'y' : np.random.normal(12, 2,size = 332)}) df endo = df['y'] exo = df['x'] The order of Arima model kept as in your code, it is just for demonstration Lets leave 12 observations out of the model modelho = sm.tsa.arima.ARIMA(endo[:-12], exo[:-12], order =(1,0,0)).fit() modelho.summary() exo[-12:] modelho.predict(exog = exo[-12:], start = 320, end = 331)
https://quabr.com/67099991/how-to-forecast-a-time-series-out-of-sample-using-an-arima-model-in-python
CC-MAIN-2021-21
refinedweb
395
60.61
As I mentioned in my previous blog, during my experiments with JavaFX I needed to run certain tasks on a separate thread (e.g. calls to a remote web service via Jersey Client API). One can do it in JavaFX using JavaTaskBase class, but I wanted something simpler, something similar to what FXexperience blog suggested. So, I created a custom subclass of javafx.async.Task named AsyncTask that allowed me to make asynchronous calls as follows: AsyncTask { run: function() { // add the code you want to run asynchronously } onDone: function() { // this is executed once the "run" method finishes running } }.start(); Here is the source code for the AsyncTask class together with the helper Java class it’s using. It should be self explanatory. AsyncTask.fx import javafx.async.Task; public class AsyncTask extends Task, AsyncTaskHelper.Task { /** Function that should be run asynchronously. */ public var run: function() = null; // the helper def peer = new AsyncTaskHelper(this); // used to start the task override function start() { started = true; if (onStart != null) onStart(); peer.start(); } // don't need stop - isn't implemented override function stop() { // do nothing } // called from the helper Java class from a different thread override function taskRun() { // run the code to be run asynchronously if (run != null) run(); // send a notification (on the dispatch thread) the code finished running FX.deferAction(function() { done = true; if (onDone != null) onDone(); }); } } AsyncTaskHelper.java import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; public class AsyncTaskHelper implements Runnable { // Using a fixed threadpool to run the asynchronous task private static final ExecutorService QUEUE = Executors.newFixedThreadPool(10); // the "parent" JavaFX AsyncTask instance private final Task peer; public AsyncTaskHelper(Task peer) { this.peer = peer; } // called from AsyncTask.start() method - will add this task // to the thread pool queue public void start() { QUEUE.execute(this); } // called by the thread pool queue to start the task public void run() { peer.taskRun(); } // interface to be implemented by the "parent" JavaFX AsyncTask public static interface Task { public void taskRun(); } } nice approach. thanks man!
http://blog.alutam.com/2009/08/26/custom-asynchronous-tasks-in-javafx/
CC-MAIN-2019-09
refinedweb
330
58.79
With all the REST buzz nowadays, using SOAP/WCF services seems so ‘old school’. But the right tool for the right job, so let’s move on and focus on the problem I want to handle: sending large files over WCF(I blogged about this before with some tips, but people keep asking questions, so I describe the process step by step). - Service Contract - Service Implementation - Configuring The Service - Hosting The Service in IIS Let’s start by laying the groundwork for the WCF Service. As stated the service must enable users to upload a file to the web server which hosts it. Thus the resulting service contract only contains one method, named Upload(…). Open up Visual Studio 2010 and create a new blank solution. Next add a new Empty Web application called Sample.Services. Afterwards go to Add new item, choose the WCF service template and type FileUploadService.cs as the name. This will automatically generate some files (IFileUploadService.cs, FileUploadService.cs) and update the web.config with some default configuration settings. Open the IFileUploadService file and replace the generated code with the code in Listing 1: Listing 1 – Service Contract 1: [ServiceContract(Namespace = "")] 2: public interface IFileUploadService 3: { 4: [OperationContract] 5: UploadResponse Upload(UploadRequest uploadRequest); 6: } As you can see in Listing 1 above they mention two other classes, namely: - UploadRequest - UploadResponse These classes are both decorated with the MessageContract attribute. Note: WCF requires that the parameter that holds the data to be streamed must be the only parameter in the method. Note:. The FileInfo class specifies the structure of a SOAP envelope for a particular message. Listing 2 – FileInfo class 1: [MessageContract] 2: public class UploadRequest 3: { 4: [MessageHeader(MustUnderstand = true)] 5: public string FileName { get; set; } 6: 7: [MessageBodyMember(Order = 1)] 8: public Stream Stream { get; set; } 9: } Note:. Listing 3 displays how to setup the return value as a message contract. The response only contains a boolean value to indicate if the upload was successful. Listing 3 – FileReceivedInfo class 1: [MessageContract] 2: public class UploadResponse 3: { 4: [MessageBodyMember(Order = 1)] 5: public bool UploadSucceeded { get; set; } 6: } Now it’s time to provide an actual implementation for the service contract. Open the FileUploadService.cs file and replace the generated code with the code in Listing 4. The code is pretty straightforward. It reads the incoming stream and saves it to a file using familiar .NET code. The Upload(…) method’s return type is of the UploadResponse type. If the upload succeeds the UploadSucceeded property is set to true, if it fails this property is set to false. Listing 4 – Service Implementation 1: [ServiceBehavior(InstanceContextMode = InstanceContextMode.PerCall, 2: ConcurrencyMode = ConcurrencyMode.Single)] 3: public class FileUploadService : IFileUploadService 4: { 5: #region IFileUploadService Members 6: 7: public UploadResponse Upload(UploadRequest request) 8: { 9: try 10: { 11: 12: string uploadDirectory = 13: ConfigurationManager.AppSettings["uploadDirectory"]; 14: 15: // Try to create the upload directory if it does not yet exist 16: if (!Directory.Exists(uploadDirectory)) 17: { 18: Directory.CreateDirectory(uploadDirectory); 19: } 20: 21: // Check if a file with the same filename is already 22: // present in the upload directory. If this is the case 23: // then delete this file 24: string path = Path.Combine(uploadDirectory, fileInfo.FileName); 25: if (File.Exists(path)) 26: { 27: File.Delete(path); 28: } 29: 30: // Read the incoming stream and save it to file 31: const int bufferSize = 2048; 32: byte[] buffer = new byte[bufferSize]; 33: using (FileStream outputStream = new FileStream(path, 34: FileMode.Create, FileAccess.Write)) 35: { 36: int bytesRead = request.Stream.Read(buffer, 0, bufferSize); 37: while (bytesRead > 0) 38: { 39: outputStream.Write(buffer, 0, bytesRead); 40: bytesRead = request.Stream.Read(buffer, 0, bufferSize); 41: } 42: outputStream.Close(); 43: } 44: return new UploadResponse 45: { 46: UploadSucceeded = true 47: }; 48: } 49: catch (Exception ex) 50: { 51: return new UploadResponse 52: { 53: UploadSucceeded = false 54: }; 55: } 56: } 57: 58: #endregion 59: } Start by opening the web.config file. Next specify the directory in which the service should save the incoming files. Listing 5 – Upload directory 1: <appSettings> 2: <add key="uploadDirectory" value="C:\temp\upload" /> 3: </appSettings> Now let’s first add a new behavior to our serviceBehaviors node. Listing 6 – Service Behavior 1: <behaviors> 2: <serviceBehaviors> 3: <behavior name="FileUploadServiceBehavior"> 4: <serviceMetadata httpGetEnabled="True" httpsGetEnabled="False" /> 5: <serviceDebug includeExceptionDetailInFaults="False" /> 6: </behavior> 7: </serviceBehaviors> 8: </behaviors> The behavior specifies that the service should not propagate exception details and that it’s metadata should be shared over HTTP. Next up is the binding for the service.. Note: Configure these settings according to the needs of your application. Listing 7 – The Binding 1: <bindings> 2: <basicHttpBinding> 3: <!-- buffer: 64KB; max size: 64MB --> 4: <binding name="FileUploadServiceBinding" 5: transferMode="Streamed" 6: messageEncoding="Mtom" 7: maxReceivedMessageSize="67108864" maxBufferSize="65536" 8: closeTimeout="00:01:00" openTimeout="00:01:00" 9: 10: <security mode="None"> 11: <transport clientCredentialType="None" /> 12: </security> 13: </binding> 14: </basicHttpBinding> 15: </bindings> Last but not least is the configuration for the service itself. Listing 8: FileUploadService configuration 1: <services> 2: <service behaviorConfiguration="FileUploadServiceBehavior" 3: 4: <endpoint address="" binding="basicHttpBinding" contract="Sample.Services.IFileUploadService" 5: 6: </endpoint> 7: </service> 8: </services> Now that the service has been setup it’s time to host it. Don’t try to host it in your ASP.NET Development Server(default option), instead choose IIS or the new IIS Express option (if installed). Therefore right click on your web application and choose Properties. Go to the Web tab and change Servers option from Use Visual Studio Development Server to Use Local IIS Web server. That’s it! 23 comments: Thanks for the post. I have been wrestling with this for a few days now, and you answered many of my questions. I am having a lot of problems, however, getting my service to be hosted in IIS. I went through all of those steps, but I continuously get errors when trying to add the service reference. Am I doing something wrong there? Incredibly helpful and clear. And it works too! Many thanks indeed. Just one thing to say: thank you! The internet should have more articles like this, explaining different parts of Web.config I think you have some problem with listing 4 in line number 24. string path = Path.Combine(uploadDirectory, fileInfo.FileName); I guess It should be string path = Path.Combine(uploadDirectory, request.FileName); Thank you very much for the nice post FYI, in Figure 2, the class name used in UploadRequest instead of FileInfo as described. Thanks for this example. However, I am unable to upload files from a client that uses the service as a Service Reference. Once inside the service, the Stream is empty/corrupt (Stream.Length shows '(request.Stream).Length' threw an exception of type 'System.NotSupportedException' in the debugger). Any ideas would be helpful. Edit: I just read that Length is not supported in this case, however, when I attempt to read from request.Stream, 0 bytes are read (even though on the client, the FileStream that I am sending is a valid file, with 12726 bytes. I concur, image is valid and ends upon the local drive as zero bytes? Going to give the ByteStreamHttpBinding version a crack now. ", when I attempt to read from request.Stream, 0 bytes are read (even though on the client, the FileStream that I am sending is a valid file, with 12726 bytes." Ok Folks, it was a quick fix, a beer to think it over and had it. Before calling the service, you must set the stream position to zero: ServiceReference2.FileUploadServiceClient upl = new ServiceReference2.FileUploadServiceClient(); stream.Position = 0; upl.Upload("kaz0002.jpg", stream); upl.Close(); Easy. How to call this service from an ASPX file upload controller I'm a newby with WCF and MS Visual Studio Express 2012 and found this article very helpfull and clear. But now I would like to generate a WSDL for the client. Is that possible with MS VS Express 2012? Why Microsoft doesn't provide such examples? They have a very good advertisement but lousy tutorials. And if they provide information about their products, than those guys are Gurus or what the f.... Thanks very much for this here. Thanks for the post, I just have two issues with trying to impliment the sample: 1) VS 2010 keeps telling me that ConfigurationManager (Listing 4.13) isn't within any of the used namespaces. I tried using System.Object and System.Configuration, but to no avail. What should I do to get VS 2010 to recognize ConfigurationManager? 2)For the config-related listings (#5-8), it isn't clear to me where to add the text. Should I first delete all the default text? Should i keep the heading? What about the element? Sorry if this sounds like an oversimplistic question--I admittedly have little experience with XML or XAML, whichever one is being used here... I am facinf an iisue of bad request 400 error when file lenght is greater than 64 kb. any one help me thanks in Advance Anonymous, please check every setting and make sure there are no errors. I was able to upload/download files which were megabytes in size no problems. Good Luck Hi, I'm trying to upload a file using the attributes in the form: enctype="multipart/form-data" action="" method="post" but i'm gertting the error: 415 Cannot process the message because the content type 'multipart/form-data; boundary=----WebKitFormBoundary0XZn8zAivDWByghj' was not the expected type 'multipart/related; type="application/xop+xml"'. A brilliant example of how it should be done. Many thanks :) Hello Thanks for very nice post. I tried to upload one MB file and getting exception. Can you please give us client side configuration details as well. I am really not sure how to configure that. Thanks in advance Ashu Awesome post, I've been dealing with this issue and this solve it magically. By the way I'm using windows service host, and works great! Thanks! Do you have an example of how the client would invoke the UploadFile method? Paul Janssen - it is a web service so it is called using that technique. Rhinoman Can we transfer large data of 50-100gb using this technique? –
http://bartwullems.blogspot.com/2011/01/streaming-files-over-wcf.html
CC-MAIN-2017-26
refinedweb
1,699
57.87
:On Thu, Aug 25, 2005 at 03:09:21PM -0700, Matthew Dillon wrote: :> The entire directory tree does not need to be in memory, only the :> pieces that lead to (cached) vnodes. DragonFly's namecache subsystem :> is able to guarentee this. : :*How* can it guaranty that without reading the whole directory tree in :memory first? Unix filesystems have no way to determine in which :directories an inode is linked from. If you have /dir1/link1 and :/dir2/dir3/link2 as hardlinks for the same inode, you can't correctly :update the FSMID for dir2 without having read dir3 first, simply because :no name cache entry exists. This is true of hardlinks, yes, but if the purpose is to mirror then it doesn't really matter which path is used to get to the file. And from an auditing and security standpoint you don't have to worry about pre-existing 'random' hardlinks going to places that they shouldn't, because that's already been checked for. What you do want to know about are newly created hardlinks in places where they shouldn't exist, and that ability would not be impaired in the least. Also, directories cannot be hardlinked, only files. As problems go this one would have virtually no effect on the types of operations that we want to be able to accomplish. You can't just throw up your hands and put out a random situation that will hardly ever occur in real life (and not at all for a huge chunk of potential applications of the feature), and call that a showstopper. If it turned out that the file hardlink issue interferes with a certain type of operation that we desire to have, it is also a very solvable problem. Programs like cpdup can already deal with hardlinks, so the real issue is whether you want to take the hit of scanning the entire directory tree to find the links or whether you want to maintain a lookaside database and use the journal to keep it up to date. :> . :> :> This is not correct. It is certainly NOT enough to just be told :> when an inode changes.... you need to know where in the namespace :> the change occured and you need to know how the change(s) effect :> the namespace. Just knowing that a file with inode BLAH has been :> modified is not nearly enough information. : :The point is that the application can determine in which inodes it is :interested in and reread e.g. a directory when it has changed. There are :some edge cases which might be hard to handle without additional :information (e.g. when a link is moved outside the currently supervised :area and you want to continue it's supervision. That's an entirely :different question though. No. The problem is that the application (such as a mirroring program) could be interested in ALL THE INODES, not just some of them. Monitoring inodes doesn't help you catch situations where new files are created, nor does it help you if you want to monitor activity on an entire subtree (which could contain thousands of directories and millions of files), or any situation where you need to monitor more then a handful of inodes. The kqueue approach is just plain stupid, frankly. It is totally unscaleable and totally insufficient when dealing with terrabyte filesystems. :... :> back out of it. If it has changed, you know that something changed :> while you were processing the directory or file and you simply re-recurse :> down and rescan just the bits that now have different FSMID's. : :But it is also very limited because it doesn't allow any filtering on :what is interesting. In the worst case you just update all the FSMIDs This is incorrect. I just said in my last email that you *CAN* filter on what is interesting. Maybe not with this first commit, but the basic premise of using the namecache topology not only for monitoring but also for configuration and control is just about the only approach that will actually work with regards to implementing a filtering mechanism, because it can cover millions of files and directories with very little effort and because it can be inclusive of files or dirs that have not yet been created. What you are proposing doesn't even come close to having the monitoring and control capabilities that we need. :for nothing. It also means as long as there is no way to store them :persistenly that you can't free namecache entries without having to deal :with exactly those cases in applications. Storing them persistenly has :to deal with unrecorded changes which wouldn't be detected. Just think :about dual-booting to FreeBSD. There is nothing anyone can do about some unrelated operating system messing around with your filesystems, nor should we restrict our activities based on the possibility. This is a DragonFly feature for systems running DragonFly, not for systems running FreeBSD or Linux or any other OS. :> For example, softupdates right now is not able to guarentee data :> consistency. If you crash while writing something out then on reboot :> you can wind up with some data blocks full of zero's, or full of old :> data, while other data blocks contain new data. : :That's not so much a problem of softupdates, but of any filesystem without very :strong data journaling. ZFS is said to do something in that area, but it :can't really solve interactions which cross filesystems. The very same :problem exists for FSMIDs. This is something where a transactional database :and a normal filesystem differ: filesystems almost never have full :write-ahead log files, because it makes them awefully slow. The most :important reason is that applications have no means to specify explicit :transaction borders, so you have to assume an autocommit style usage :always. : :Joerg I have no idea what you are trying to say here, Joerg. You seem to be throwing up your hands and saying that we shouldn't implement it because it isn't perfect, but your proposal to monitor inodes (aka via kqueue) can't handle even a tenth of the types of operations I want DragonFly to be able to do. Insofar as persistent storage goes, we have several choices. My number one choice is to integrate it into UFS, because it's almost trivial to do so. A filesystem certainly does *NOT* have to be natively journaled or transactional in any way... all we have to do is update the inode with the new FSMID *after* the related data has been synchronized, and that's a very easy algorithm. It doesn't even have to sync the file, there is nothing preventing us from writing out transitional FSMID's (instead of the latest one) based on what we've synced to disk. This is a far easier situation to deal with then e.g. softupdates because we do not have to track crazy interactions within the filesystem. The FSMIDs are allowed to be 'behind' the synced data as long as the synced data does not get ahead of the high level journal. More to the point, though, it's a really bad idea to limit features simply because some filesystem written 20 years was not originally built to handle it. DragonFly is about pushing the limits, not about accomodating them. The journaling is a big leap for BSD operating systems, but there is a big gap inbetween that needs to be filled for those sysads that want to have alternative backup and auditing methodologies but who want to avoid doing continuous and full scans of their (huge) filesystems, not to mention other potential features. DATABASE TRANACTIONS PRIMER I sense that there is a fundamental misunderstanding of how database transactions can actually work here, and how FSMID's relate to the larger scheme, one that is probably shared by many people so I will endevour to explain it. If you take a high level view of a database-like transaction you basically have a BEGIN, do some work, and a COMMIT. When you get the acknowledgement from the COMMIT that is a guarentee that if a crash occured right there your transaction will still be good after the reboot. But accomplishing this does not imply that the data must be synchronized to disk through the filesystem, nor does it imply that other, later transactions which had not yet been acknowledged couldn't be written to disk. In our environment it only means that the operation must be journaled to persistent store (which is DIFFERENT from the activity going on in the filesystem), and that after a crash the system must be able to UNDO any writes that were written to the disk or to the journal that were related to UNCOMMITTED transactions. If you think about it, what this means is that the actual disk I/O we do can be a lot more flexible then our high level perception of the transaction. It's very important that people understand this. Persistent FSMIDs fit into this idea very well. When used as a recovery mechanism all we have to do is guarentee that the transactions related to the FSMID we are writing have already gotten onto the disk. Since we can delay FSMID synchronization indefinitely, this is a trivial requirement that does not need the sophistication of softupdates and does not preclude, e.g. a lookaside database file to hold the FSMIDs for filesystems that cannot store them persistently. Our high level journal can be used to accomplish tranasctional unwinding, that is to UNDO changes made to the filesystem that are not transactionally consistent. In the context of a filesystem, what this means is that we can use our high level journal to make the persistent FSMID completely consistent with the filesystem state after a crash either by undoing filesystem operations to bring the filesystem back to the state as of the stored FSMID, or by regenerating the FSMID from the high level journal. WE CAN GO BOTH FORWARDS AND BACKWARDS IN ORDER TO MAKE THE FILESYSTEM STATE SANE AGAIN AFTER A CRASH. THE ONLY REQUIREMENT for being able to accomplish this is that the filesystem operations in question not be synchronized to the disk until the related journal entry has been acknowledged. Note that I am not saying that the operations should stall, I am simply saying that they would not be synchronized to the disk... they would still be in the buffer cache, and programs would still see instant updates to the FSMID and the file data. Also remember that unlike softupdates, the FSMID we write to the disk does not have to be the latest one, so we do not get stuck in a situation where a program that is continuously writing to a file would prevent data buffers from being written out to the platter. That is not the case. All that it means is that the FSMID written to the disk may be slightly behind the FSMID stored in the journal, and both will be behind the real-time FSMID stored in system memory. Now it turns out that accomplishing this *ONE* requirement can be done solely within the high level buffer cache implementation. It does not require interactions with the filesystem. e.g. UFS does not need to have any knowledge about the interactions. On crash recovery the FSMIDs can be used by the journaling subsystem to determine not only how far back in the journal it has to go to rerun the journal, but also to help the journaling subsystem figure out which portions of the filesystem data might require an UNDO.... in the context of the current system that would prevent, e.g. the large sections of ZEROs you get in softupdates filesystems when you crash. The journal would be able to guarentee either the old data or the new data. Crash recovery after a reboot would also be able to update the stale FSMIDs in the filesystem from the journal (where they are also stored), maintaining a level of consistency across crashes that most UNIX systems cannot do today. But why limit ourselves to that? What if we want to guarentee that a high level operation, such as an 'install' command, which encompasses many filesystem operations, either succeeds in whole or fails in whole across a crash condition? With a journal and implementing this one data ordering requirement, WE CAN MAKE THAT GUARENTEE! In fact, the combination of persistent FSMIDs and journaling would allow us to implement meta transactions that could encompasses gigabytes worth of operations. It could give us a transactional capability that is visible at the coarse 'shell' level, eventually. -Matt Matthew Dillon <dillon@xxxxxxxxxxxxx>
https://www.dragonflybsd.org/mailarchive/commits/2005-08/msg00393.html
CC-MAIN-2018-05
refinedweb
2,122
57.1
>>." It's a silly proposition (Score:5, Insightful) IE's problem is not the engine, it's the shitty interface. (Ditto about Windows 8, many would say.) Re: (Score:3, Insightful) You may call it what you will, (inertia, stubbornness, laziness, unwillingness to change,) but truth is that many people just prefer it and Internet Explorer is still popular amongst a big group of users, and in the same way you and I could be called the same for not: (Score:2) Thanks. I didn't know about the control tab option. Does that work in excel? (I am not at work to test.), Insightful) AC says you're dumb - I disagree with him. Your opinion is pretty well thought out. I do, however, disagree with your assessment somewhat. Trident needs to die, and die hard. Microsoft needs to pull that abomination out of Windows completely, along with all the ActiveX controls, all it's privileges, all of it's quirks, both good and bad. I don't believe that I'll ever think that Windows is a "good" operating system, but the removal of Trident would make it one hell of a lot better. Sure, I know that many of IE's worst vulnerabilities have been "fixed", but I shall never forget how many vulnerabilities there have been, or how bad they have been. As for Webkit - I've liked it since it's debut under Google's name. Sure, I realize it's not Google's invention, but webkit is cool. If/when Microsoft shifts to Webkit, they really, really, REALLY need to install it as an unprivileged application, and make certain that it just BROWSES. It doesn't need hooks into dozens of programs, it doesn't need privileges, it doesn't need much of anything. A few plugins, addons such as Mozilla and Google offer for their own browsers. Leave it at that. A browser on Windows should be just as much, and no more than a browser on any Unix-like. The browser shouldn't even be used for updates, as Microsoft has done for all these years. A separate and distinct updating program is a requirement, with no overlap in privileges. Yes, Trident needs to die, quickly, and hard. It would be a wonderful thing if five years from now, Trident were just history, with zero support anywhere. I'd like to see websites assist people with updating from Trident simply. Just stop coding for Trident. "This site is best viewed with ANY browser that is not Internet Explorer!" Re:It's a silly proposition (Score:4, Insightful) Trident in IE 10 scores a decent in HTML 5/5.1 and CSS 3. It is not the piece of crap it once was in IE 6. Just because you have not used it in 12 years doesn't mean it is the same as in 2001. Re:It's a silly proposition (Score:5, Insightful) The problem is that if they start getting a significant share of the browser market again, they're almost guaranteed to start their old extend/extinguish trick. Microsoft needs to stay an 'also ran' in the browser market until they learn to play with others.: ) That was the argument in 2003 when we were first trying to get people to switch to Firefox. While I'm sure that's true in some places (China mostly, from what I last heard on the subject) the days of widespread SAAS are upon us and now even giant mega corps don't have a real problem upgrading. Even if the updated web apps have ignored the last several years' best practice of feature detection instead of user-agent sniffing, they're unlikely to have serious problems with how close the modern rendering engines: (Score:3, Insightful) Correct on IE, it is just using some weird design choices but I don't see how anybody can argue that Win 8 isn't wrong when this is the average user response [youtube.com] I saw at the shop. When the user needs a fricking training course to use your damned OS like its 1986 all over again? Something has gone HORRIBLY wrong. IE's biggest problem isn't the UI, its the giant fucking bullseye painted on it by hackers because they know the clueless rubes that are still running that 30 day Norton trialware from 6 years ago. IE's biggest problem isn't the UI, its the giant fucking bullseye painted on it by hackers because they know the clueless rubes that are still running that 30 day Norton trialware from 6 years ago and think that works is using IE. Add to that the fucking braindead choice to not port back to their supported OSes so that the ONLY way you can use the same browser across XP/Vista/7 is to NOT use IE and you have a browser made of fail.) People ought to know that the prefixed attributes are in beta and may change. If they ship that to production anyway, they had better be ready to change it if the standard is updated before the prefix is dropped. Fortunately none of the vendor-specific extensions are anything but minor enhancements, so they can't do any serious damage. It's not like W3C is going to redefine a pixel here.: (Score:2, Insightful) The funny thing is, the reason developers are targeting WebKit is because of the iPhone (Safari) not because of Chrome. If it works in Chrome on Windows, it will work on Safari on the iPhone, without needing to test if it actually works on the iPhone. Although that has problems too, as Chrome and Safari use different Javascript implementations, and Google uses an inherently terrible method of sandboxing that wastes extreme amounts of memory. Also Chrome has no 64-bit version on Windows which is a non-starter,) Re:Arguments of convenience (Score:4, Insightful) I do not give a shit whether it is opensource. I do give a shit whether it enslaves the web and enforces another decade of stagnationm [pcmag.com], where we can't move on to HTML 6 and corps lock a special version of Chrome from this decade to support their apps. Maybe Android 3.x will be used and corps will downgrade their phones for just that one version 10 years from now if the W3C makes changes that the current webkit does not support. Only Google's way of doing it is different. IE 5.5 was cutting edge and MS was inventing new standards and it was the best browser back then. THe problems came when w3c decided to recommend the same standards implemented differently. Then IE 6 did things one way, and Firefox rendered them in another. Open source or not I do not want to see that problem again. Re: : (Score:2, Insightful) In the past many on Slashdot argued vehemently for web standards. It's interesting that a lot of people who used to be pro-web-standard when Microsoft was non-compliant with IE are now saying "hey, we're only going to target webkit because ..." The same reasons that applied to avoiding an IE monoculture for web development apply to a webkit monoculture. Rather than bathing in schadenfreude, people should be kicking over bins just like they did with IE to ensure that the most popular implementation follows the standard, not the standard follows the most common implementation. Webkit is open source. IE was not. The people and companies working on webkit are not trying to kill Mozilla. Hell, the biggest contributor to webkit is Mozilla's largest source of revenue. Webkit is used by many browsers on many platforms from many companies (Safari on mac and iOS, Chrome on everything, RIM's blackberry browser, ...). IE was intentionally tied to a single OS. WebKit has a long history of respecting standards. There are extensions which are prototypes for future standards, but they are cle Re: (Score:3, Insightful) I would strongly disagree with this. Having a standards committee design the next step in a technical advance is one of the worst ways of working possible. What you usually end up with is a huge conglomeration of random ideas and special interests. For programming the result is frequently described as "feeping creaturitus" [wikipedia.org]. The reason for web standards is not technical, standards don't help make better mousetraps they exist so that a hundred mice can wrestle the cat into submission. So that the little). No, they simply should adhere to the standards. (Score:5, Insightful) That'll finally bring more choice to the user, in stead of the pseudo-choice now. I prefer opera and have that installed as my default browser, but still have IE and Chrome installed because some websites will only work on either of those. Between the three I can open all sites that I need, but it shouldn't be necessary if all just follow the standards, and consequently, all web sites only need to be written to that standard as well. Re: (Score:2, Insightful) Webkit browsers passed all the acid tests long before Trident ever got close to passing. Trident was the lowest scoring engine, and as far as I know, it is still the lowest scoring. Maybe Microsoft has simply given up on ever getting Trident to pass? Maybe they know that Trident can never attain all the standards implemented today, or standards that will be implemented in years to come? Face it man, MS has been working hard in recent years just to get into the same league as all the other modern browsers.:Wrong approach (Score:4, Insightful) Right....maybe they should switch from using NTOSKRNL.EXE to Linux too. After all, no one cares about the kernel; users and developers only care about the UI and APIs that sit above it. And maybe they could turn Visual C++ into a front-end to LLVM, and have .NET target the JVM. All of these changes would save Microsoft from the trouble of developing several large pieces of software. From Microsoft's point of view, of course they should keep Trident development going. I'm surprised this is even being questioned. To do otherwise would be to give control of the web over to Apple and Google. The only reason that Apple and Google care about standards right now is because Microsoft is still a big player in the game. If it was up to Google, they'd be making their own proprietary versions of HTTP, JavaScript and ActiveX ;) Then there's Apple - and even though I'm a Linux user, I'm happy that Microsoft is there to keep Apple in check!) No, and I love Webkit. (Score:4, Insightful) Trident is getting better with each major release, which is a good thing. And Microsoft still has some input towards standards as well, such as the WebRTC spec if I remember correct, or something similar that also had some features missing from it. Yeah, you could argue that things would be simpler if there was just ONE thing, the one thing that correctly interprets the specs, but it is also those incorrect spec implementations that have driven competition, driven the creation of new ideas to replace old ones and inspired so many developers to create methods to deal with them in their own ways. Not only that, without all this mess, there would be no experimentation with future specs, and all these separate browsers lead to browser prefixes being implemented, even by Microsoft recently. The main problem with web dev is most devs are terrible. Admittedly that is mainly a problem with such inconsistency in JavaScript, and HTML allowing spaghetti syntax all over the place. And lets not get started on scope. Holy crap, so many people are clueless about it. And again, that it is true globally in any form of programming. Abuse of global namespaces being the biggest headache in all programming, such things that make you want to headbutt your monitor with your fist, a physical impossibility! But damn it I will find a way and collapse the universe just so THEY don't exist! The next huge change in JS is going to bring a lot of new features, but also a bunch of changes to the way JS is executed. It is going to be a shaky decade when that comes about. But it will be for the better. I hope... Re: : (Score:3, Insightful) That was over 10 years ago. Lets go to today? Right now webkit is causing problems being this decades IE 6 [pcmag.com] in terms of mobile browsing and HTML 5 and css 3. If you own a Windows Phone (I know you do not, but bare with me ..) and go to disney.com or cnn.com will it render correctly? Nope. THey use ---webkit prefixes. HTML5Test.com is part of the problem too as Google is in a pissing match on being the best browser, but what that site doesn't tell you is that these are not implemented the same as W3C drafting Re:Ditch HTML5 for stronger web and user protectio (Score:4, Insightful) Webkit is making MS honest. Have you tried IE 10? I know the thought probably sends shiver down your spine but I have to say MS really is caring and shaking in their boots. It is a great browser. I fear webkit becoming too dominate at this point and Windows Phone users are whinning they can't view mobile sites as they cater to just webkit. I can't advocate openstandards and bash IE 6, yet fully support webkit at the same time. I would be a hypocrite otherwise. What if you want to use FirefoxOS in your next phone? Will you be screwed over? Right now, yes. IE has standard behavior now. Since IE 9 it passed all the acid tests. Just because you hate one browser doesn't mean you should support the entrenchment of another or support things like html5test that test non standard non implemented things. It encourages all the things that caused IE to be proprietary when implementations of things like the CSS box model came about locking corporate desktops up for decades.] Re:I find Trident faster than WebKit. (Score:4, Insightful) Actually, in a very real sense the engine _does_ belong to the competition. To actually get your code landed in WebKit you have to convince the current project maintainers (mostly Google and Apple) to accept it. Which means that if you want to do something that Google and Apple don't (both, often!) approve of, you have to maintain it as a separate branch and deal with the merge pain. No different from other projects where you have to collaborate with others, but a lot different from having control over the code as Microsoft does with Trident right now. Re: ]
https://tech.slashdot.org/story/13/01/12/0347256/should-microsoft-switch-to-webkit?sdsrc=next
CC-MAIN-2017-13
refinedweb
2,495
71.24
From: Gabriel Dos Reis (gdr_at_[hidden]) Date: 2002-08-17 18:59:35 Some days ago, someone was wondering on this list (sorry, I can't find the message off hand) whether the following construct typedef reverse_iterator<T> reverse_iterator; were valid or not and reported that one of his compiler rejected it. His compiler was right; somewhere, the standard says 3.3/4 func-tion templates; in this case the class name or enumeration name is hidden (3.3.7). [Note: a namespace name or a class template name must be unique in its declarative region (7.3.2, clause 14). ] Hope that helps, -- Gaby Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2002/08/34160.php
CC-MAIN-2019-22
refinedweb
127
67.86
Note: This is part 2 of four-parts Learning GatsbyJs series. This learning post is still in active development and updated regularly. In the previous post of this learning series, step-by-step procedures to install & set basic Gatsby sites (default starter & default blog) using Gatsby CLI (a npm package) were discussed. The objective of this learning series is to explore GatsbyJs to build an simple SPA blog site and document step-by-step procedures to install & setup Gatsby and overview of its building blocks. Part 1: Learning GatsbyJS – Setup & Installation Part 2: Understanding GatsbyJS Building Blocks (this post) Part 3: An Overview of Gatsby Plugins & GraphQL Part 4: Learning to Programmatically Create Pages in Gatsby In this part 2 of four-parts learning series, we will explore Gatsby main building blocks using very basic starter-hello-world (without any blots), and familiarize how contents (pages, posts & images) are added & modified in a Gatsby site. Installation of Gatsby hello-world Step by step procedure to install development tools to create a Gatsby site are discussed in the previous post. For understanding basic building blocks of Gatsby, we will set up a new site with Gatsby starter hello-world in gatsby-projects folder. Create hello-world starter site in project folder From the gatsby-projects folder create a new project named hello-blog using the “hello-world” starter template as follows: #! change directory to project folder cd gatsby-projects #! create a new site using hello-world starter gatsby new hello-blog Start local Development Server To start the development server, change to project folder (eg. hello-blog) and run gatsby develop command as shown below: #! navigate into new site directory cd hello-blog #! start development server gatsby develop If no errors displayed, the new Gatsby site is developed in local server and is available to view and interact locally at. In the browser, just “Hello World” is displayed. That it! Next, we will start exploring files, folders & directories generated by Gatsby in our project folder & start learning deeply! Exploring Site (Project) Structure The top-level file and directory structure in Gatsby project created with default starter is similar to the react app created using create-react-app. Indeed, Gatsby project files are created using react components but also contains Gatsby specific js files, for example, gatsby-browser.js, gatsby-config.js, gatsby-node.js, and gatsby-ssr.js, and others. A brief description of these files is discussed in the previous post. More detailed description is available in Gatsby project structure and under what is inside in Gatsby official starter boilerplate. Let’s deep dive into the useful folders & files in Gatsby project structure that are used to modify or customize our project. From the customization or modification perspective, main implementation folders include src/ , public/ , static/ and gatsby-config.js file. Sub-directory .cache/ This is internal cache folder automatically generated by Gatsby. Not meant for modification. Sub-directory public/ This is automatically generated by Gatsby. This folder contains output of the Gatsby build process. Not meant for modification. Sub-directory src/ This folder contains all the code related to the frontend of the site (eg., header, footer, page template, etc.). It may contain: /pages: contains components under src/pageswith path based on their file name. /posts: contains components under src/postswith path based on their file name. /templates: this is just an index React component file. index.js: this is just an index React component file. This folder may contain html.js for custom configuration of default .cache/default_html.js as described in this custom html docs. This folder may also contain standard React code structure, for example, /components, /utils inside /src. Directory static/ This directory contains automatically generated static build of your site (eg. css, images, etc.) after running gatsby develop or gatsby build terminal command. Any file inside this folder will not be processed. File gatsby-config.js The gatsby-config.js file is located at the root of the project folder and site configuration options are placed here. In this file site’s page metadata like the site title, site description and Gatsby plugins are included and configured. Additional Information: Gatsby Project Structure Extending the Default Project In this section, to get a feel of how contents (pages and posts) are created without worrying about data query with GraphQL, basic page/posts content will be created, simple styles applied and image will be added. The project structure will also be modified as necessary. 1: Modifying ‘Hello World’ Home Page Let’s make some modification to src/pages/index.js file to add some contents for header, main, and footer sections. //src/pages/index.js import React from "react" export default () => ( <section> <div style={{ background: `purple` , color: `white`, padding: '5px 0'}}> <h1>My Hello Blog</h1> </div> <div> <h1>Home Page</h1> <p>Content section. What a Gatsby world.</p> <img src="" alt="" /> </div> <div style={{ background: `MidnightBlue` , color: `white`, padding: '1px 0'}}> <p> Powered by the great Gatsby </p> </div> </section> ) In the above code example, header, main and footer are added within <div>.</div> and wrapped within <section>.</section>. Images in the pages can be added with common with <img> tag and src attributes (eg., <img src="url"> ) as shown in line 11. Note: Image in Gatsby sites are optimized & displayed using Gatsby plugins and its one of the attractive Gatsby features. Working with images in Gatsby will be discussed in detail separately. 2. Adding New Pages Gatsby is based on React components. The index.js described earlier is based on a page component. Just like in React, components are the building blocks of Gatsby too. Let’s modify the src/page/index.js page component and create two additional page components, src/page/about.js and src/page/resources.js to add to the project. //src/pages/about.js import React from "react" export default () => ( <section> <div style={{ background: `purple` , color: `white`, padding: '5px 0'}}> <h1>My Hello Blog</h1> </div> <div> <h1>About Page</h1> <p>Content for about page. A react way!</p> <img src="" alt="" /> </div> <div style={{ background: `MidnightBlue` , color: `white`, padding: '1px 0'}}> <p> Powered by the great Gatsby </p> </div> </section> ) The above code for about.js is similar, with minor modifications, to previous index.js page. Similarly, lets create resource.js page as shown below. //src/pages/resources.js import React from "react" export default () => ( <section> <div style={{ background: `purple` , color: `white`, padding: '5px 0'}}> <h1>My Hello Blog</h1> </div> <div> <h1>Resources Page</h1> <p>It's Gatsby way. Excellent resources for all !</p> <img src="" alt="" /> </div> <div style={{ background: `MidnightBlue` , color: `white`, padding: '1px 0'}}> <p> Powered by the great Gatsby </p> </div> </section> ) Next, when the site is develop with gatsby develop command and viewed in a browser, the following output is displayed: index.js, about.js& resources.jspages displayed in a browser. Refactoring With Sub-Components In the previous section, all three page components ( index.js, about.js and resources.js) contain same header & footer sections. These common sections can be broken into sub-components and re-use them calling in any page. Lets create a new directory at src/components and inside it create two files, header.js and footer.js. Now the src/ folder structure looks as follows: #! src/ folder structure ├── src │ └── page | └── components │ └── header.js | └── footer.js | └── index.js | └── about.js | └── resources.js | └── styles │ └── global.css ├── gatsby-browser.js Now create following two files, header.js and footer.js components by copying from index.js file as shown below: //src/components/header.js import React from "react" export default () => ( <div style={{ background: `purple` , color: `white`}}> <h1>My Hello Blog</h1> </div } Similarly, copy the footer section from index.js file and paste to footer.js as shown below: //src/components/footer.js import React from "react" export default () => ( <div style={{ background: `MidnightBlue` , color: `white`> <p> Powered by Gatsby</p> </div> } Next, refactor index.js, about.js and resources.js files to import header <Header /> and <Footer /> components (lines: 5 & 9) and by replacing header & footer code blocks (lines: 6-9) and (lines: 13-16), respectively from previous section as shown below: // src/pages/index.js import React from "react" export default () => ( <Header /> <p>Content section. What a world.</p> // add random an image from unsplash <img src="" alt="" /> <Footer /> ) Similarly modify the about.js file & resources.js file by replacing header & footer code blocks with <Header /> and <Footer /> components. The resulting browser out (shown above) is exactly same as seen in previous (without refactoring) section 3. Adding Basic Styles In Gatsby, style can be applied as inline or as global styles using layout component. One of the quick and dirty way of adding styles in Gatsby project is using gatsby-browser.js file. Create .css file in the project folder. While in hello-blog folder issue the following command: #!create styles/global.css cd src mkdir styles cd styles touch global.css Add some basic styles to global.css file as shown below: /* add some basic styles */ html { background-color: Biege; padding: 1em; } Navigate to project folder (eg. hello-blog) and create gatsby-browser.js file at the root of project & include the global.css style file with import statement (line 5). Alternately, it can also be added with require('./src/styles/global.css'). #! create gatsby-browser.js folder gatsby-browser.js #! import global.css import "./src/styles/global.css" This method is not generally used in real project but for learning purposes, it is described briefly here. 4. Styling with CSS Module Gatsby recommends using CSS Modules also referred as component-scoped styles to apply CSS to specific component(s) to maintain & use as self-contained styles. To quote from Gatsby CSS Module doc: “A CSS Module is a CSS file in which all class names and animation names are scoped locally by default“. Gatsby bundles css files with .module.css extension as a CSS Module. Using CSS module, styles can be applied to specific component thus ensuring class names are unique to the component and avoiding conflicts with similar class names in other component. To better understand the process, lets start with a new page with a very basic component and style it using CSS module. In this example, a css file is created at src/styles folder is named team.module.css (most rules were adopted from Gatsby tutorial). Next, create a new team.js page component (shown below) at src/pages folder and import the recently created team.module.css file (line 5). While importing module.css, give any variable name (eg. teamstyle, pagestyle etc, just no caps for variable name). In the example below, a <Team /> inline component was created as described in the Gatsby tutorial and styled using the <Team /> component specific style team.module.css which looks at localhost:8000/team as shown below: The above layout style was inspired by Gatsby’s tutorial Build a new page using CSS Modules Adding styles with Layout Component In the previous section, how global style and CSS module can be used to style pages components was discussed briefly. In this section how a Layout component can be applied to share common components (eg., header, footer, aside, global styles, etc.) to the entire project site will be discussed. To learn & better understand the use of shared Layout component, a very basic proof of concept was created from scratch and then the Layout component was used to refactor the entire hello-blog project to apply global styling & adding header, footer components to all other page components. Step 1. Creating a Basic Layout Component For this section, a simple layout.js component was created at src/components/ folder following Gatsby tutorial: //src/components/layout.js import React from "react" import "../styles/main.css" import Header from "../components/header" import Footer from "../components/footer" export default ({ children }) => ( <section> <Header /> <div className="site-main"> <div className="site-content"> {children} </div> </div> <Footer /> </section> ) In the layout.js component (shown above), css style main.css, header.js and footer.js were imported (lines: 4-6). The <Header /> and <Footer /> components were called in lines: 10 & 16. 2. Importing the Layout component into Index.js & other Pages The Layout component was imported to src/pages/index.js page as shown below: // src/pages/index.js import React from "react" import Layout from "../components/layout" export default () => ( <Layout> <div> <h1>Learning Gatsby Nested Components</h1> <p>Trying to learn and practice using Layout component to design basic page header, footer etc.</p> <img src="" alt="" /> <p> This just the beginning!</p> </div> </Layout> ) In example above, the <Layout /> component was imported into index.js page (line 4) and used as wrap around page content (lines: 7, 13). Similarly, the <Layout /> component was imported to other three page components ( about.js, resources.js, and contact.js) used as in index.js page component. The <Team /> component is imported in about.js component (line: 3) and used as called in component (line: 14) as shown below: //src/pages/about.js import React from "react" import Team from "./team" import Layout from "../components/layout" export default () => ( <Layout> <div> <h1>About pages</h1> <p>We are a small but a diverified team passately working to solve most pressing problems of our community</p> </div> <Team /> </Layout> ) 3. Adding Site Site Title & Navigation in Header component Because, the site will have the same site title and site navigation in all the pages, the <Header /> component created in previous section was reactored to include site title and navigation links between pages as described in Gatsby tutorial. //src/pages/components/header.js import React from "react" import { Link } from "gatsby" import "../styles/header.css" const ListLink = props => ( <li style={{ display: `inline-block`, marginRight: `1rem` }}> <Link to={props.to}>{props.children}</Link> </li>) export default () => ( <section className="header"> <div className="site-title wrapper"> <Link to="/"> <h3>My Hello Blog</h3> </Link> <ul className="menu"> <ListLink to="/">Home</ListLink> <ListLink to="/resources/">Resources</ListLink> <ListLink to="/about/">About</ListLink> <ListLink to="/contact/">Contact</ListLink> </ul> </div> </section> ) In the example above, Gatsby’s build in <Link /> react component was imported in line 4. Basic styles for the title and site navigation were defined in src/styles/header.css. The site title was linked using <Link /> component as <Link to="pagename">My Page</Link> as shown in lines: 14-17. An inline <ListLink /> component was defined (lines: 7-10) as described in the tutorial, and was used to link between page components as shown in lines 19-22. When viewed in browser, the site shows with a basic four page site powered by a shared <Layout /> component with global <Header /> and <Footer /> components. The site title & navigation are defined in the shared <Header /> component. Wrapping Up In this learning-note post, steps to extend & modify a bare-bone stater page by adding new pages, applying styles with global & class modules, and use of sub-component to share common components of in page (header, footers, navigation, etc.) to entire site were explored. In the next post, how to work with data in Gatsby with the use of plugins will be further explored. Next Post: An Overview of Gatsby Data & Plugins Useful References While preparing this post, I have referred the following references extensively. Please to refer original posts for more detailed information.
https://tinjurewp.com/jsblog/learning-gatsbyjs-understanding-building-blocks/
CC-MAIN-2022-05
refinedweb
2,559
58.08
February 2013 Volume 28 Number 02 Windows with C++ - Creating Desktop Apps with Visual C++ 2012 By Kenny Kerr | February 2013. A follow-up question I inevitably receive is how best to approach desktop app development on Windows and where to begin. Well, in this month’s column, I’m going to explore the fundamentals of creating desktop apps with Visual C++. When I was first introduced to Windows programming by Jeff Prosise (bit.ly/WmoRuR), Microsoft Foundation Classes (MFC) was a promising new way to build apps. While MFC is still available, it really is showing its age, and a need for modern and flexible alternatives has driven programmers to search for new approaches. This issue has been compounded by a shift away from USER and GDI (msdn.com/library/ms724515) resources and toward Direct3D as the primary foundation by which content is rendered on the screen. For years I’ve been promoting the Active Template Library (ATL) and its extension, the Windows Template Library (WTL), as great choices for building apps. However, even these libraries are now showing signs of aging. With the shift away from USER and GDI resources, there’s even less reason to use them. So where to begin? With the Windows API, of course. I’ll show you that creating a desktop window without any library at all isn’t actually as daunting as it might seem at first. I’ll then show you how you can give it a bit more of a C++ flavor, if you so desire, with a little help from ATL and WTL. ATL and WTL make a lot more sense once you have a good idea of how it all works behind the templates and macros. The Windows API The trouble with using the Windows API to create a desktop window is that there are myriad ways you could go about writing it—far too many choices, really. Still, there’s a straightforward way to create a window, and it starts with the master include file for Windows: #include <windows.h> You can then define the standard entry point for apps: int __stdcall wWinMain(HINSTANCE module, HINSTANCE, PWSTR, int) If you’re writing a console app, then you can just continue to use the standard C++ main entry point function, but I’ll assume that you don’t want a console box popping up every time your app starts. The wWinMain function is steeped in history. The __stdcall calling convention clarifies matters on the confusing x86 architecture, which provides a handful of calling conventions. If you’re targeting x64 or ARM, then it doesn’t matter because the Visual C++ compiler only implements a single calling convention on those architectures—but it doesn’t hurt, either. The two HINSTANCE parameters are particularly shrouded in history. In the 16-bit days of Windows, the second HINSTANCE was the handle to any previous instance of the app. This allowed an app to communicate with any previous instance of itself or even to switch back to the previous instance if the user had accidentally started it again. Today, this second parameter is always a nullptr. You may also have noticed that I named the first parameter “module” rather than “instance.” Again, in 16-bit Windows, instances and modules were two separate things. All apps would share the module containing code segments but would be given unique instances containing the data segments. The current and previous HINSTANCE parameters should now make more sense. 32-bit Windows introduced separate address spaces and along with that the necessity for each process to map its own instance/module, now one and the same. Today, this is just the base address of the executable. The Visual C++ linker actually exposes this address through a pseudo variable, which you can access by declaring it as follows: extern "C" IMAGE_DOS_HEADER __ImageBase; The address of __ImageBase will be the same value as the HINSTANCE parameter. This is in fact the way that the C Run-Time Library (CRT) gets the address of the module to pass to your wWinMain function in the first place. It’s a convenient shortcut if you don’t want to pass this wWinMain parameter around your app. Keep in mind, though, that this variable points to the current module whether it’s a DLL or an executable and is thus useful for loading module-specific resources unambiguously. The next parameter provides any command-line arguments, and the last parameter is a value that should be passed to the ShowWindow function for the app’s main window, assuming you’re initially calling ShowWindow. The irony is that it will almost always be ignored. This goes back to the way in which an app is launched via CreateProcess and friends to allow a shortcut—or some other app—to define whether an app’s main window is initially minimized, maximized or shown normally. Inside the wWinMain function, the app needs to register a window class. The window class is described by a WNDCLASS structure and registered with the RegisterClass function. This registration is stored in a table using a pair made up of the module pointer and class name, allowing the CreateWindow function to look up the class information when it’s time to create the window: { ... }; VERIFY(RegisterClass(&wc)); To keep the examples brief, I’ll just use the common VERIFY macro as a placeholder to indicate where you’ll need to add some error handling to manage any failures reported by the various API functions. Just consider these as placeholders for your preferred error-handling policy. The earlier code is the minimum that’s required to describe a standard window. The WNDCLASS structure is initialized with an empty pair of curly brackets. This ensures that all the structure’s members are initialized to zero or nullptr. The only members that must be set are hCursor to indicate which mouse pointer, or cursor, to use when the mouse is over the window; hInstance and lpszClassName to identify the window class within the process; and lpfnWndProc to point to the window procedure that will process messages sent to the window. In this case, I’m using a lambda expression to keep everything inline, so to speak. I’ll get back to the window procedure in a moment. The next step is to create the window: VERIFY(CreateWindow(wc.lpszClassName, L"Title", WS_OVERLAPPEDWINDOW | WS_VISIBLE, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, CW_USEDEFAULT, nullptr, nullptr, module, nullptr)); The CreateWindow function expects quite a few parameters, but most of them are just defaults. The first and second-to-last parameters, as I mentioned, together represent the key that the RegisterClass function creates to let CreateWindow find the window class information. The second parameter indicates the text that will be displayed in the window’s title bar. The third indicates the window’s style. The WS_OVERLAPPEDWINDOW constant is a commonly used style describing a regular top-level window with a title bar with buttons, resizable borders and so on. Combining this with the WS_VISIBLE constant instructs CreateWindow to go ahead and show the window. If you omit WS_VISIBLE, then you’ll need to call the ShowWindow function before your window will make its debut on the desktop. The next four parameters indicate the window’s initial position and size, and the CW_USEDEFAULT constant used in each case just tells Windows to choose appropriate defaults. The next two parameters provide the handle to the window’s parent window and menu, respectively (and neither are needed). The final parameter provides the option of passing a pointer-sized value to the window procedure during creation. If all goes well, a window appears on the desktop and a window handle is returned. If things go south, then nullptr is returned instead and the GetLastError function may be called to find out why. With all the talk about the hardships of using the Windows API, it turns out that creating a window is actually quite simple and boils down to this: WNDCLASS wc = { ... }; RegisterClass(&wc); CreateWindow( ... ); Once the window appears, it’s important that your app starts dispatching messages as soon as possible—otherwise your app will appear unresponsive. Windows is fundamentally an event-driven, message-based OS. This is particularly true of the desktop. While Windows creates and manages the queue of messages, it’s the app’s responsibility to dequeue and dispatch them, because messages are sent to a window’s thread rather than directly to the window. This provides a great deal of flexibility, but a simple message loop need not be complicated, as shown here: MSG message; BOOL result; while (result = GetMessage(&message, 0, 0, 0)) { if (-1 != result) { DispatchMessage(&message); } } Perhaps not surprisingly, this seemingly simple message loop is often implemented incorrectly. This stems from the fact that the GetMessage function is prototyped to return a BOOL value, but in fact, this is really just an int. GetMessage dequeues, or retrieves, a message from the calling thread’s message queue. This may be for any window or no window at all, but in our case, the thread is only pumping messages for a single window. If the WM_QUIT message is dequeued, then GetMessage will return zero, indicating that the window has disappeared and is done processing messages and that the app should terminate. If something goes terribly wrong, then GetMessage might return -1 and you can again call GetLastError to get more information. Otherwise, any nonzero return value from GetMessage indicates that a message was dequeued and is ready to be dispatched to the window. Naturally, this is the purpose of the DispatchMessage function. Of course, there are many variants to the message loop, and having the ability to construct your own affords you many choices for how your app will behave, what input it will accept and how it will be translated. Apart from the MSG pointer, the remaining parameters to GetMessage can be used to optionally filter messages. The window procedure will start receiving messages before the CreateWindow function even returns, so it had better be ready and waiting. But what does that look like? A window requires a message map or table. This could literally be a chain of if-else statements or a big switch statement inside the window procedure. This does, however, quickly become unwieldy, and much effort has been spent in different libraries and frameworks to try to manage this somehow. In reality, it doesn’t have to be anything fancy, and a simple static table will suffice in many cases. First, it helps to know what a window message consists of. Most importantly, there’s a constant—such as WM_PAINT or WM_SIZE—that uniquely identifies the message. Two arguments, so to speak, are provided for every message, and these are called WPARAM and LPARAM, respectively. Depending on the message, these might not provide any information. Finally, Windows expects the handling of certain messages to return a value, and this is called the LRESULT. Most messages that your app handles, however, won’t return a value and should instead return zero. Given this definition, we can build a simple table for message handling using these types as building blocks: typedef LRESULT (* message_callback)(HWND, WPARAM, LPARAM); struct message_handler { UINT message; message_callback handler; }; At a minimum, we can then create a static table of message handlers, as shown in Figure 1. Figure 1 A Static Table of Message Handlers static message_handler s_handlers[] = { { WM_PAINT, [] (HWND window, WPARAM, LPARAM) -> LRESULT { PAINTSTRUCT ps; VERIFY(BeginPaint(window, &ps)); // Dress up some pixels here! EndPaint(window, &ps); return 0; } }, { WM_DESTROY, [] (HWND, WPARAM, LPARAM) -> LRESULT { PostQuitMessage(0); return 0; } } }; The WM_PAINT message arrives when the window needs painting. This happens far less often than it did in earlier versions of Windows thanks to advances in rendering and composition of the desktop. The BeginPaint and EndPaint functions are relics of the GDI but are still needed even if you’re drawing with an entirely different rendering engine. This is because they tell Windows that you’re done painting by validating the window’s drawing surface. Without these calls, Windows wouldn’t consider the WM_PAINT message answered and your window would receive a steady stream of WM_PAINT messages unnecessarily. The WM_DESTROY message arrives after the window has disappeared, letting you know that the window is being destroyed. This is usually an indicator that the app should terminate, but the GetMessage function inside the message loop is still waiting for the WM_QUIT message. Queuing this message is the job of the PostQuitMessage function. Its single parameter accepts a value that’s passed along via WM_QUIT’s WPARAM, as a way to return different exit codes when terminating the app. The final piece of the puzzle is to implement the actual window procedure. I omitted the body of the lambda that I used to prepare the WNDCLASS structure previously, but given what you now know, it shouldn’t be hard to figure out what it might look like: wc.lpfnWndProc = [] (HWND window, UINT message, WPARAM wparam, LPARAM lparam) -> LRESULT { for (auto h = s_handlers; h != s_handlers + _countof(s_handlers); ++h) { if (message == h->message) { return h->handler(window, wparam, lparam); } } return DefWindowProc(window, message, wparam, lparam); }; The for loop looks for a matching handler. Fortunately, Windows provides default handling for messages that you choose not to process yourself. This is the job of the DefWindowProc function. And that’s it—if you’ve gotten this far, you’ve successfully created a desktop window using the Windows API! The ATL Way The trouble with these Windows API functions is that they were designed long before C++ became the smash hit that it is today, and thus weren’t designed to easily accommodate an object-oriented view of the world. Still, with enough clever coding, this C-style API can be transformed into something a little more suited to the average C++ programmer. ATL provides a library of class templates and macros that do just that, so if you need to manage more than a handful of window classes or still rely on USER and GDI resources for your window’s implementation, there’s really no reason not to use ATL. The window from the previous section can be expressed with ATL as shown in Figure 2. Figure 2 Expressing a Window in ATL class Window : public CWindowImpl<Window, CWindow, CWinTraits<WS_OVERLAPPEDWINDOW | WS_VISIBLE>> { BEGIN_MSG_MAP(Window) MESSAGE_HANDLER(WM_PAINT, PaintHandler) MESSAGE_HANDLER(WM_DESTROY, DestroyHandler) END_MSG_MAP() LRESULT PaintHandler(UINT, WPARAM, LPARAM, BOOL &) { PAINTSTRUCT ps; VERIFY(BeginPaint(&ps)); // Dress up some pixels here! EndPaint(&ps); return 0; } LRESULT DestroyHandler(UINT, WPARAM, LPARAM, BOOL &) { PostQuitMessage(0); return 0; } }; The CWindowImpl class provides the necessary routing of messages. CWindow is a base class that provides a great many member function wrappers, mainly so you don’t need to provide the window handle explicitly on every function call. You can see this in action with the BeginPaint and EndPaint function calls in this example. The CWinTraits template provides the window style constants that will be used during creation. The macros harken back to MFC and work with CWindowImpl to match incoming messages to the appropriate member functions for handling. Each handler is provided with the message constant as its first argument. This can be useful if you need to handle a variety of messages with a single member function. The final parameter defaults to TRUE and lets the handler decide at run time whether it actually wants to process the message or let Windows—or even some other handler—take care of it. These macros, along with CWindowImpl, are quite powerful and let you handle reflected messages, chain message maps together and so on. To create the window, you must use the Create member function that your window inherits from CWindowImpl, and this in turn will call the good old RegisterClass and CreateWindow functions on your behalf: Window window; VERIFY(window.Create(nullptr, 0, L"Title")); At this point, the thread again needs to quickly begin dispatching messages, and the Windows API message loop from the previous section will suffice. The ATL approach certainly comes in handy if you need to manage multiple windows on a single thread, but with a single top-level window, it’s much the same as the Windows API approach from the previous section. WTL: An Extra Dose of ATL While ATL was designed primarily to simplify the development of COM servers and only provides a simple—yet extremely effective—window-handling model, WTL consists of a slew of additional class templates and macros specifically designed to support the creation of more-complex windows based on USER and GDI resources. WTL is now available on SourceForge (wtl.sourceforge.net), but for a new app using a modern rendering engine, it doesn’t provide a great deal of value. Still, there are a handful of useful helpers. From the WTL atlapp.h header, you can use its message loop implementation to replace the hand-rolled version I described earlier: CMessageLoop loop; loop.Run(); Although it’s simple to drop into your app and use, WTL packs a lot of power if you have sophisticated message filtering and routing needs. WTL also provides atlcrack.h with macros designed to replace the generic MESSAGE_HANDLER macro provided by ATL. These are merely conveniences, but they do make it easier to get up and running with a new message because they take care of cracking open the message, so to speak, and avoid any guesswork in figuring out how to interpret WPARAM and LPARAM. A good example is WM_SIZE, which packs the window’s new client area as the low- and high-order words of its LPARAM. With ATL, this might look as follows: BEGIN_MSG_MAP(Window) ... MESSAGE_HANDLER(WM_SIZE, SizeHandler) END_MSG_MAP() LRESULT SizeHandler(UINT, WPARAM, LPARAM lparam, BOOL &) { auto width = LOWORD(lparam); auto height = HIWORD(lparam); // Handle the new size here ... return 0; } With the help of WTL, this is a little simpler: BEGIN_MSG_MAP(Window) ... MSG_WM_SIZE(SizeHandler) END_MSG_MAP() void SizeHandler(UINT, SIZE size) { auto width = size.cx; auto height = size.cy; // Handle the new size here ... } Notice the new MSG_WM_SIZE macro that replaced the generic MESSAGE_HANDLER macro in the original message map. The member function handling the message is also simpler. As you can see, there aren’t any unnecessary parameters or a return value. The first parameter is just the WPARAM, which you can inspect if you need to know what caused the change in size. The beauty of ATL and WTL is that they’re just provided as a set of header files that you can include at your discretion. You use what you need and ignore the rest. However, as I’ve shown you here, you can get quite far without relying on any of these libraries and just write your app using the Windows API. Join me next time, when I’ll show you a modern approach for actually rendering the pixels in your app’s window. Kenny Kerris
https://docs.microsoft.com/en-us/archive/msdn-magazine/2013/february/windows-with-c-creating-desktop-apps-with-visual-c-2012
CC-MAIN-2021-49
refinedweb
3,150
59.84
Trackback authentication Jacques Distler:.. Like Scott I am a bit wary about letting MT have my private key. But my objection is not because MT is a third party tool. Even GPG or PGP tools are third party. I am pretty sure not many have gone through its codes. That doesn't prevent one from trusting it. My objection is with the practice of putting any of my private keys on a publicly accessible server. I know someone will jump up and say that I could create a new key pair just for use by MT. That still doesn't solve the problem that a private key is residing on a very insecure (by nature) system.Posted by Srijith at C'mon guys! You don't use your personal key for this purpose. You create a new key-pair with the express purpose of authenticating trackbacks. Indeed, since the secret key has to be left on the server un-password-protected, you'd bloody-well better not use an "important" key for the purpose. As to the "it's on a public webserver, ergo it's insecure." Have you thought about how HTTPS works? The SSL private key is unencrypted on the server. Have you thought about how SSH works? The server's private host key is unencrypted on the server. This is a ubiquitous situation. Proper care needs to be taken, but having the private key unencrypted on the server is not a-priori insecure.Posted by Jacques Distler at Of course, if you think about it for a second, there's no reason the secret key could not reside, encrypted, on the server. Your weblogging software could prompt you for the password to unlock the secret key, and create the trackback signature when you prepare your post. Of course, the irony of this discussion is that 99% of MovableType users log into their blog by sending a password unencrypted over the 'net. Worrying about whether the server's secret key is encrypted or not seems pretty silly when you're sending the administrative password in the clear over the network.Posted by Jacques Distler at The key nut to crack there is to make it easy and painless to sign a comment. It's not particularly hard,using GPGDropThing (for MacOSX) or GPGShell (for Windows). More relevant, I think, is that a very small proportion of people have PGP keys, and the rest are not about to go to the trouble of installing PGP/GnuPGP, and creating themselves a key-pair, just so they can sign their comments on my, and Phil Ringnalda's and Krishnan Srijith's and Urs Schreiber et al's blogs. After all, the protection that PGP-signing affords the commenter only obtains if most of the blogs he comments on allow PGP-signed comments. We've ... umh ... got a ways to go before that happens.Posted by Jacques Distler at Trackback authentication based on PGP signature is of more potential and feasibility than that of comment authentication, because, - Trackback is code-to-code; - A trackbacker has an identity that can not be anonymous; Not necessarily so (even not desirably in some occation/contex) for a commenter; Coverage of PGP commenting ideaSome good coverage and discussion on the idea of PGP signing comment posts: PGP-Signed Comments - A good introduction by Jacuqes Distler on why comments should be signed. Notes...... [more] Trackback from TriNetre - The Third Eye at Signing Trackbacks isn’t any good unless the signatures are verified. In order to verify the signatures using Distler’s method, the server has to initiate HTTP GETs to arbitrary servers. And still, if the signer hasn’t been seen before, all you know is that the Trackback is signed and the signature is related to a site. You still don’t know that the alleged linking resource links to you, is not spam, has the alleged title and contains text that resembles the extracted quote. But once you accept that you have to make a connection when you receive a Trackback, you might as well accept Pingback and retrieve the alleged linking resource in order to check that it actually links to your page and in order to find out the real title of the linking page and perhaps even a extract a quote. Pingback “only” provides a URL, but what good is the additional Trackback payload when it can’t be trusted and you don’t know the character encoding of the text? With Pingback the autodiscovery is much cleaner than with Trackback, so if the protocol is changed, it makes more sense to build on Pingback. It does not really matter whether the Pingback originated from the linking content management system or was sent by a third party. If the other page links to yours, what you have is a more useful connection than what you get from a Trackback that is signed but contains a fake title and extract and refers to a resource that does not actually link to your page.Posted by Henri Sivonen at I'm not sure I understand Henri's objection, but, then, I'm not sure I understand the point of signed trackbacks. Let me posit 4 scenarios, and then we can ask which of pingbacks, conventional (unsigned) trackbacks and signed trackbacks are applicable. 1) X links to your post, and wishes to inform you of that. 2) X links to your post, and a 3rd party wishes to inform you of that. 3) X has a discussion that is relevant to your post, but no explicit link, and wishes to inform you of that. 4) X has a discussion that is relevant to your post, but no explicit link, and a 3rd party wishes to inform you of that. All three protocols would work in case 1). Pingbacks and conventional trackbacks would work in case 2), but signed trackbacks would not. Pingbacks would not work in cases 3,4). Signed trackbacks would work in case 3); conventional trackbacks would work in either. What signed trackbacks tell you is that "X sent the trackback". But, presumably, what you really want to know is whether the discussion at X is relevant, and whether the trackback "excerpt" is representative of that discussion. Neither of these is guaranteed by the fact that X signed the trackback. If, for instance, you are worried about trackback spam ("X" is the spammer's website), then just because the spammer signed the trackback does not make it any more relevant.Posted by Jacques Distler at Jacques: (3) is isomorphic to somebody placing a comment on your blog, with the exception that you are provided with the blog's name instead of the author's name. Here's a mapping (look for table 1). Henri apparently is choosing to focus on (1), and noting that if you need to fetch the page anyway in order to verify the link, why bother passing that information on the trackback? However, it is worth nothing that another piece of information, namely the comment itself (a.k.a., excerpt), is also missing. Excerpts are extremely difficult to reverse engineer from an HTML page.Posted by Sam Ruby at It is always possible for someone to leave a comment on your blog, saying "X discusses this, too." and provide a link. [That's assuming you haven't disabled comments (on that entry). I've seen many a blog entry with the statement: "Comments closed; if you want to comment, send me a Trackback."] Cases 3,4) are really no different from 1,2) in this regard. Either the author, or a third party could leave such a comment. What distinguishes case 1) is Trackback autodiscovery, which (as Phil Ringnalda has often complained) turns Trackbacks into something closely resembling Pingbacks. What makes a difference though, is, as you say, the Trackback excerpt. The only scenario I can see where signed trackbacks would be useful is in preventing a 3rd party from sending a trackback with an obnoxious "excerpt". The author of X can, of course, do whatever he wants, signatures or no signatures.Posted by Jacques Distler at I was indeed focusing on scenario (1), because I think (3) and (4) are more theoretical than practical and significantly complicate the problem space. Scenario (3) can easily be reduced to scenario (1) if X can come up with a pretext for linking to your page. When the Trackback is legitimate, finding such a pretext to link isn’t too hard. It’s even easier to link and let the CMS ping automatically than not to link and ping manually. I think scenario (4) isn’t particularly likely to come up in practice and I think scenario (4) is too prone to spam. The rest of my reasoning led to a situation where the recipient doesn’t need to distinguish between scenarios (1) and (2). But, presumably, what you really want to know is whether the discussion at X is relevant, and whether the trackback "excerpt" is representative of that discussion. Neither of these is guaranteed by the fact that X signed the trackback. Exactly. And when the title and excerpt are not trusted, it makes more sense to use Pingback, because Pingback has better autodiscovery. However, it is worth nothing that another piece of information, namely the comment itself (a.k.a., excerpt), is also missing. Excerpts are extremely difficult to reverse engineer from an HTML page. Getting an excerpt that has similar quality as the excerpts sent by MT isn’t that hard if the goal posts are adjusted a little so that the excerpt is centered around the link to your page instead of being anchored to the beginning of the post. (The beginning of the post is harder to locate the than the link to your page.) The excerpts provided by Trackback aren’t that good. First of all, the byte sequence comes without character encoding information. Secondly, the excerpt provided by Trackback is a plain string with no markup. Thirdly, the recipient cannot control the quality of the excerpts. You have to settle with whatever excerpt length and quality the sender cares to provide. From a Java-centric point of view, I suggest the following: - GET the alleged linking resource - If the Content-Length exceeds n (where n is a reasonable length limit to protect against malicious sites that would send megabytes of garbage), close the connection. In any case, limit the number of bytes actually read to n. - If the Content-Type starts with text/html, instantiate TagSoup. Else if the content type starts with application/xhtml+xml, instantiate an XML parser with an entity resolver that never fetches anything from the network. Else, close the connection. - Set to the contentHandler of the parser to a SAX filter that limits the breadth and depth of the tree and the number of attributes on a given element in order to protect against attacks that attempt to make the recipient allocate a lot of memory. - Use the SAX events to build a document tree (DOM, XOM, JDOM, dom4j, or similar). - Find an element whose namespace is the XHTML namespace, whose local name is “a” and that has an attribute “href” that contains the string that was alleged as the URL to a page on your site in the Pingback call. - Extract a quote of m characters from the document tree around that element. (As an improvement, instead of extracting only text content to, the extraction could be taken beyond Trackback and the marked up structure could be preserved.)Posted by Henri Sivonen at Whenever possible, I much prefer human authored excerpts over machine authored ones. Trackback will provide them directly (admittedly with an ambiguous encoding and format). With Pingback, I have to rely on a more indirect means... my approach to date have been to locate the feed associated with the page, and then identify the entry associated with that particular pingback. I do a similar thing with referrers. This works best if the weblog has an autodiscovery link to their feed, and if the feed provides both a summary and full content. Code: pingback, extractor.Posted by Sam Ruby at Haven't thought about this in depth, but wouldn't it be adequate for practical purposes if the target simply verified that a link was indeed made from the other end before accepting a trackback? Am I missing something? Posted by Seb at Two things, no, three. One, there are perfectly legitimate uses for pings where the pinger doesn't link to the pingee. Two, now that I've seen a lot more comment spam than I had the last time we discussed this, I would get around a requirement to link by simply linking from a text-decoration: none period, or some other way of hiding a link in plain sight if you became too cunning for links from just a piece of punctuation. And third, if you want to be reassured that I link to you, well, since I control when you get the ping and what I return to you, I'll just tell the script behind my page that for the next five minutes it needs to include a link to you. Pleeeese can I turn evil? Comment spam, crapflooding, page-widening, trackback spam, RSS aggregator exploding, it's always so much easier and more fun to be the one doing the evil, not the one trying to block it.Posted by Phil Ringnalda at I would get around a requirement to link by... Which is exactly why I don't think an automated link check is going to suffice for me. I'm still going to click through to see if your page is actually relevant. (Or, more likely, summarily delete the trackback if the URL is livenudeanything.com .) Trackback spam is, I think, intrinsically less attractive to the spammers. But it's also harder for the blog-owner to combat. I know it's retrograde, but separating trackbacks to a separate page (still the default in MovableType) with a nofollow directive for spiders would essentially render them ineffective. Pleeeese can I turn evil? Comment spam, crapflooding, page-widening, trackback spam, RSS aggregator exploding, it's always so much easier and more fun to be the one doing the evil, not the one trying to block it. Ah, but the challenge is far greater for the Good Guy. The spammers and crapflooders and other lowlifes have the advantage of playing the white pieces. You'd bore quickly of the Dark Side.Posted by Jacques Distler at I know it's retrograde, but separating trackbacks to a separate page (still the default in MovableType) with a nofollow directive for spiders would essentially render them ineffective. Rendering them ineffective does not stop them. Compare referers and robots.txtPosted by Sam Ruby at Compare referers and robots.txt I was merely suggesting that there's now pagerank boost from a link on a page with a nofollow directive (hmmm. Maybe one needs a noindex directive too.) Rendering them ineffective does not stop them. In the sense that spammers don't know or care whether spamming your particular blog will boost their pagerank? True, as long as spamming many or most blogs will be effective. It would only be a deterrent is if the big players like Six Apart made that the default configuration.Posted by Jacques Distler at I use Movable Type for my blog. As I'm sure everybody here knows, it's a 3rd party app. And while I do have the source, I don't have time to review it all. So there's no way I'm giving it access to my private keys. This seems like a good idea, but I don't trust my weblog software. Perhaps if I had written it myself, I'd feel differently.Posted by Scott Johnson at
http://www.intertwingly.net/blog/2004/03/04/Trackback-authentication
crawl-001
refinedweb
2,653
61.77
- Code: Select all from random import randint from time import sleep, time print('Your battleship is roaming the seas, suddently another battleship is nearing you, and the battle starts.') man_hits = 0 comp_hits = 0 while man_hits < 2 and comp_hits < 2: #Man shoot begin battleshpbx = randint(1, 2) battleshpby = randint(1, 2) print(battleshpbx) print(battleshpby) shootax = input('Enter the x position of your missile. From 1 to 2.') shootay = input('Enter the y position of your missile. From 1 to 2.') if shootax == battleshpbx and shootay == battleshpby: print('hit!') man_hits = man_hits + 1 print(man_hits) #Man shoot end #Comp shoot start battleshpax = input('Please enter the x position of your battleship. From 1 to 2.') battleshpay = input('Please enter the y position of your battleship. From 1 to 2.') shootbx = randint(1, 2) shootby = randint(1, 2) print(shootbx) print(shootby) if shootax == battleshpbx and shootay == battleshpby: print('computer hit!') comp_hits = comp_hits + 1 print(comp_hits) #Comp shoot end if man_hits > comp_hits: print('you win!') elif comp_hits > man_hits: print('computer wins!') else: print('it\'s a draw! try again.') Thank you!
http://python-forum.org/viewtopic.php?f=26&t=19795&p=44255
CC-MAIN-2017-26
refinedweb
178
51.65
Monads as containers From HaskellWiki Revision as of 06:47, 9 May 2006 There now exists a translation of this article into Russian! A monad is a container type together with a few methods defined on it. Monads model different kinds of computations. Like Haskell lists, all the elements which a monadic container holds at any one time must be the same type (it is homogeneous). There are a few ways to choose the basic set of functions that one can perform on these containers to be able to define a monad. Haskell generally uses a pair of functions called return and bind (>>=), but it is more natural sometimes to begin with map ( fmap), return and join, as these are simpler to understand at first. We can later define bind in terms of these. The first of these three, generally called map, (but called fmap in Haskell 98) actually comes from the definition of a functor. We can think of a functor as a type of container where we are permitted to apply a single function to every object in the container. That is, if f is a functor, and we are given a function of type (a -> b), and a container of type (f a), we can get a new container of type (f b). This is expressed in the type of fmap: fmap :: (Functor f) => (a -> b) -> f a -> f b If you will give me a blueberry for each apple I give you (a -> b), and I have a box of apples (f a), then I can get a box of blueberries (f b). Every monad is a functor. The second method, return, is specific to monads. If m is a monad, then return takes an element of type a, and gives a container of type (m a) with that element in it. So, its type in Haskell is return :: (Monad m) => a -> m a If I have an apple (a) then I can put it in a box (m a). The third method, join, also specific to monads, takes a container of containers m (m a), and combines them into one m a in some sensible fashion. Its Haskell type is join :: (Monad m) => m (m a) -> m a If I have a box of boxes of apples (m (m a)) then I can take the apples from each, and put them in a new box (m a). From these, we can construct an important operation called bind or extend, which is commonly given the symbol (>>=). When you define your own monad in Haskell, it will expect you to define just return and bind. It turns out that mapping and joining come for free from these two. Although only return and bind are needed to define a monad, it is usually simpler to think about map, return, and join first, and then get bind from these, as map and join are in general simpler than bind. What bind does is to take a container of type (m a) and a function of type (a -> m b). It first maps the function over the container, (which would give an m (m b)) and then applies join to the result to get a container of type (m b). Its type and definition in Haskell is then (>>=) :: (Monad m) => m a -> (a -> m b) -> m b xs >>= f = join (fmap f xs) -- how we would get bind (>>=) in Haskell if it were join and fmap -- that were chosen to be primitive If I have a box of apples (m a) and for each apple, you will give me a box of blueberries (a -> m b) then I can get a box with all the blueberries together (m b). Note that for a given container type, there might be more than one way to define these basic operations (though for obvious reasons, Haskell will only permit one instance of the Monad class per actual type). [Technical side note: The functions return and bind need to satisfy a few laws in order to make a monad, but if you define them in a sensible way given what they are supposed to do, the laws will work out. The laws are only a formal way to give the informal description of the meanings of return and bind I have here.] It would be good to have a concrete example of a monad at this point, as these functions are not very useful if we cannot find any examples of a type to apply them to. Lists are most likely the simplest, most illustrative example. Here, fmap is just the usual map, return is just (\x -> [x]) and join is concat. instance Monad [] where return :: a -> [a] return x = [x] -- make a list containing the one element given (>>=) :: [a] -> (a -> [b]) -> [b] xs >>= f = concat (map f xs) -- collect up all the results of f (which are lists) -- and combine them into a new list The list monad, in some sense, models computations which could return any number of values. Bind pumps values in, and catches all the values output. Such computations are known in computer science as nondeterministic. That is, a list [x,y,z] represents a value which is all of the values x, y, and z at once. A couple examples of using this definition of bind: [10,20,30] >>= \x -> [x, x+1] -- a function which takes a number and gives both it and its -- successor at once = [10,11,20,21,30,31] [10,20,30] >>= \x -> [x, x+1] >>= \y -> if y > 20 then [] else [y,y] = [10,10,11,11,20,20] And a simple fractal, exploiting the fact that lists are ordered: f x | x == '#' = "# #" |>= f >>= f >>= f >>= f = "# # # # # # # # # # # # # # # #" You might notice a similarity here between bind and function application or composition, and this is no coincidence. The reason that bind is so important is that it serves to chain computations on monadic containers together. You might be interested in how, given just bind and return, we can get back to map and join. Mapping is equivalent to binding to a function which only returns containers with a single value in them -- the value that we get from the function of type (a -> b) which we are handed. The function that does this for any monad in Haskell is called liftM -- it can be written in terms of return and bind as follows: liftM :: (Monad m) => (a -> b) -> m a -> m b liftM f xs = xs >>= (return . f) -- take a container full of a's, to each, apply f, -- put the resulting value of type b in a new container, -- and then join all the containers together. Joining is equivalent to binding a container with the identity map. This is indeed still called join in Haskell: join :: (Monad m) => m (m a) -> m a join xss = xss >>= id It is common when constructing monadic computations that one ends up with a large chain of binds and lambdas. For this reason, some syntactic sugar called "do notation" was created to simplify this process, and at the same time, make the computations look somewhat like imperative programs. Note that in what follows, the syntax emphasizes the fact that the list monad models nondeterminism: the code y <- xs can be thought of as y taking on all the values in the list xs at once. The above (perhaps somewhat silly) list computations could be written: do x <- [10,20,30] [x, x+1] and, do x <- [10,20,30] y <- [x, x+1] if y > 20 then [] else [y,y] The code for liftM could be written: liftM f xs = do a <- xs return (f a) If you understood the above, then you have a good start on understanding monads in Haskell. Check out Maybe (containers with at most one thing in them, modelling computations that might not return a value) and State (modelling computations that carry around a state parameter using an odd sort of container described below), and you'll start getting the picture as to how things work. A good exercise is to figure out a definition of bind and return (or fmap, join and return) which make the following tree type a monad. Just keep in mind what they are supposed to do. data Tree a = Leaf a | Branch (Tree a) (Tree a) For more good examples of monads, and lots of explanation see which has a catalogue of the commonest ones, more explanation as to why you might be interested in monads, and information about how they work. The question that many people will be asking at this point is "What does this all have to do with IO?". Well, in Haskell, IO is a monad. How does this mesh with the notion of a container? Consider getChar :: IO Char -- that is, an IO container with a character value in it. The exact character that the container holds is determined when the program runs by which keys the user presses. Trying to get the character out of the box will cause a side effect: the program stops and waits for the user to press a key. This is generally true of IO values - when you get a value out with bind, side effects can occur. Many IO containers don't actually contain interesting values. For example, putStrLn "Hello, World!" :: IO () That is, the value returned by putStrLn "Hello, World!" is an IO container filled with a value of type (), a not so interesting type. However, when you pull this value out during a bind operation, the string Hello, World! is printed on the screen. So another way to think about values of type IO t is as computations which when executed, may have side effects before returning a value of type t. One thing that you might notice as well, is that there is no ordinary Haskell function you can call (at least not in standard Haskell) to actually get a value out of an IO container/computation, other than bind, which puts it right back in. Such a function of type IO a -> a would be very unsafe in the pure Haskell world, because the value produced could be different each time it was called, and the IO computation could have side effects, and there would be no way to control when it was executed (Haskell is lazy after all). So how do IO actions ever get run? The IO action called main runs when the program is executed. It can make use of other IO actions in the process, and everything starts from there. When doing IO, a handy special form of bind when you just want the side effects and don't care about the values returned by the container on the left is this: (>>) :: Monad m => m a -> m b -> m b m >> k = m >>= \_ -> k An example of doing some IO in do notation: main = do putStrLn "Hello, what is your name?" name <- getLine putStrLn ("Hello " ++ name ++ "!") or in terms of bind, making use of the special form: main = putStrLn "Hello, what is your name?" >> getLine >>= \name -> putStrLn ("Hello " ++ name ++ "!") or, very primitive, without the special form for bind: main = putStrLn "Hello, what is your name?" >>= \x -> getLine >>= \name -> putStrLn ("Hello " ++ name ++ "!") Another good example of a monad which perhaps isn't obviously a container at first, is the Reader monad. This monad basically consists of functions from a particular type: ((->) e), which might be written (e ->) if that were supported syntax. These can be viewed as containers indexed by values of type e, having one spot for each and every value of type e. The primitive operations on them follow naturally from thinking this way. The Reader monad models computations which read from (depend on) a shared environment. To clear up the correspondence, the type of the environment is the index type on our indexed containers. type Reader e = (->) e -- our monad Return simply produces the container having a given value at every spot. return :: a -> (Reader e a) return x = (\k -> x) Mapping a function over such a container turns out to be nothing more than what composition does. fmap :: (a -> b) -> Reader e a -> Reader e b = (a -> b) -> (e -> a) -> (e -> b) -- by definition, (Reader a b) = (a -> b) fmap f xs = f . xs How about join? Well, let's have a look at the types. join :: (Reader e) (Reader e a) -> (Reader e a) = (e -> e -> a) -> (e -> a) -- by definition of (Reader a) There's only one thing the function of type (e -> a) constructed could really be doing: join xss = (\k -> xss k k) From the container perspective, we are taking an indexed container of indexed containers and producing a new one which at index k, has the value at index k in the container at index k. So we can derive what we want bind to do based on this: (>>=) :: (Reader e a) -> (a -> Reader e b) -> (Reader e b) = (e -> a) -> (a -> (e -> b)) -> (e -> b) -- by definition xs >>= f = join (fmap f xs) = join (f . xs) = (\k -> (f . xs) k k) = (\k -> f (xs k) k) Which is exactly what you'll find in other definitions of the Reader monad. What is it doing? Well, it's taking a container xs, and a function f from the values in it to new containers, and producing a new container which at index k, holds the result of looking up the value at k in xs, and then applying f to it to get a new container, and finally looking up the value in that container at k. The Monads as Computation perspective makes the purpose of such a monad perhaps more obvious: bind is taking a computation which may read from the environment before producing a value of type a, and a function from values of type a to computations which may read from the environment before returning a value of type b, and composing these together, to get a computation which might read from the (shared) environment, before returning a value of type b. How about the State monad? Although I'll admit that with State and IO in particular, it is generally more natural to take the view of Monads as Computation, it is good to see that the container analogy doesn't break down. The state monad is a particular refinement of the reader monad discussed above. I won't go into huge detail about the state monad here, so if you don't already know what it's for, what follows may seem a bit unnatural. It's perhaps better taken as a secondary way to look at the structure. For reference to the analogy, a value of type (State s a) is like a container indexed by values of type s, and at each index, it has a value of type a and another, new value of type s. The function runState does this "lookup". newtype State s a = State { runState :: (s -> (a,s)) } What does return do? It gives a State container with the given element at every index, and with the "address" (a.k.a. state parameter) unchanged. return :: a -> State s a return x = State (\s -> (x,s)) Mapping does the natural thing, applying a function to each of the values of type a, throughout the structure. fmap :: (a -> b) -> (State s a) -> (State s b) fmap f (State m) = State (onVal f . m) where onVal f (x, s) = (f x, s) Joining needs a bit more thought. We want to take a value of type (State s (State s a)) and turn it into a (State s a) in a natural way. This is essentially removal of indirection. We take the new address and new box that we get from looking up a given address in the box, and we do another lookup -- note that this is almost the same as what we did with the reader monad, only we use the new address that we get at the location, rather than the same address as for the first lookup. So we get: join :: (State s (State s a)) -> (State s a) join xss = State (\s -> uncurry runState (runState xss s)) I hope that the above was a reasonably clear introduction to what monads are about. Feel free to make criticisms and ask questions. -- CaleGibbard
http://www.haskell.org/haskellwiki/index.php?title=Monads_as_containers&diff=4033&oldid=4032
CC-MAIN-2013-20
refinedweb
2,726
63.63
Question: I've written seven test cases for understanding the behavior of the finally block. What is the logic behind how finally works? package core; public class Test { public static void main(String[] args) { new Test().testFinally(); } public void testFinally() { System.out.println("One = " + tryOne()); System.out.println("Two = " + tryTwo()); System.out.println("Three = " + tryThree()); System.out.println("Four = " + tryFour()); System.out.println("Five = " + tryFive()); System.out.println("Six = " + trySix()); System.out.println("Seven = " + trySeven()); } protected StringBuilder tryOne() { StringBuilder builder = new StringBuilder(); try { builder.append("Cool"); return builder.append("Return"); } finally { builder = null; } } protected String tryTwo() { String builder = "Cool"; try { return builder += "Return"; } finally { builder = null; } } protected int tryThree() { int builder = 99; try { return builder += 1; } finally { builder = 0; } } protected StringBuilder tryFour() { StringBuilder builder = new StringBuilder(); try { builder.append("Cool"); return builder.append("Return"); } finally { builder.append("+1"); } } protected int tryFive() { int count = 0; try { count = 99; } finally { count++; } return count; } protected int trySix() { int count = 0; try { count = 99; } finally { count = 1; } return count; } protected int trySeven() { int count = 0; try { count = 99; return count; } finally { count++; } } } Why builder = null is not working? Why does builder.append("+1") work whereas count++( in trySeven()) does not work? Solution:1 Once you do the return, the only way to override that is to do another return (as discussed at Returning from a finally block in Java, this is almost always a bad idea), or otherwise complete abruptly. Your tests don't ever return from a finally. JLS §14.1 defines abrupt completion. One of the abrupt completion types is a return. The try blocks in 1,2,3,4, and 7 abruptly complete due to returns. As explained by §14.20.2, if the try block completes abruptly for a reason R besides a throw, the finally block is immediately executed. If the finally block completes normally (which implies no return, among other things), "the try statement completes abruptly for reason R.". In other words, the return initiated by the try is left intact; this applies to all your tests. If you return from the finally, "the try statement completes abruptly for reason S (and reason R is discarded)." (S here being the new overriding return). So in tryOne, if you did: finally { builder = null; return builder; } this new return S would override the original return R. For builder.append("+1") in tryFour, keep in mind StringBuilder is mutable, so you're still returning a reference to the same object specified in the try. You're just doing a last minute mutation. tryFive and trySix are straight-forward. Since there is no return in the try, the try and finally both complete normally, and it executes the same as if there was no try-finally. Solution:2 Let's start with use case you'll see more often - you have a resource that you must close to avoid a leak. public void deleteRows(Connection conn) throws SQLException { Statement statement = conn.createStatement(); try { statement.execute("DELETE * FROM foo"); } finally { statement.close(); } } In this case, we have to close the statement when we're done, so we don't leak database resources. This will ensure that in the case of an Exception being thrown, we will always close our Statement before the function exits. try { ... } finally { ... } blocks are meant for ensuring that something will always execute when the method terminates. It's most useful for Exception cases. If you find yourself doing something like this: public String thisShouldBeRefactored(List<String> foo) { try { if(foo == null) { return null; } else if(foo.length == 1) { return foo.get(0); } else { return foo.get(1); } } finally { System.out.println("Exiting function!"); } } You're not really using finally properly. There is a performance penalty to this. Stick to using it when you have Exception cases that you must clean up from. Try refactoring the above to this: public String thisShouldBeRefactored(List<String> foo) { final String result; if(foo == null) { result = null; } else if(foo.length == 1) { result = foo.get(0); } else { result = foo.get(1); } System.out.println("Exiting function!"); return result; } Solution:3 The finally block is executed when you leave the try block. The "return" statement does two things, one it sets the return value of the function and two it exits the function. Normally this would look like an atomic operation but within a try block it will cause the finally block to execute after the return value was set and before the function exits. Return execution: - Assign return value - run finally blocks - exit function Example one (primitive): int count = 1;//Assign local primitive count to 1 try{ return count; //Assign primitive return value to count (1) }finally{ count++ //Updates count but not return value } Example two(reference): StringBuilder sb = new StringBuilder();//Assign sb a new StringBuilder try{ return sb;//return a reference to StringBuilder }finally{ sb.append("hello");//modifies the returned StringBuilder } Example three (reference): StringBuilder sb = new StringBuilder();//Assign sb a new StringBuilder try{ return sb;//return a reference to StringBuilder }finally{ sb = null;//Update local reference sb not return value } Example four (return): int count = 1; //assign count try{ return count; //return current value of count (1) }finally{ count++; //update count to two but not return value return count; //return current value of count (2) //replaces old return value and exits the finally block } Solution:4 builder = null and builder.append("+1") are working. It's just that they're not affecting what you're returning. The function returns what the return statement has, regardless of what happens afterward. The reason there is a difference is because builder is passed by reference. builder=null changes the local copy of builder. builder.append("+1") affects the copy held by the parent. Solution:5 Why builder = null is not working? Because you are setting the local reference to null which will not change the content of the memory. So it is working, if you try to access the builder after finally block then you'll get null. Why builder.append("+1") work? Because you are modifying the content of the memory using the reference,that's why it should work. Why count++ does not work in testFive()? It is working fine with me. It outputs 100 as expected. Solution:6 Consider what the compiler is actually doing for the return statement, for instance in tryOne(): it copies a reference to builder back to the calling function's environment. After it's done this, but before control goes back to the calling function, the finally block executes. So you have something more like this, in practice: protected StringBuilder tryOne() { StringBuilder builder = new StringBuilder(); try { builder.append("Cool"); builder.append("Return"); StringBuilder temp = builder; return temp; } finally { builder = null; } } Or, in terms of the order that statements actually get executed (ignoring possible exceptions, of course), it looks more like this: protected StringBuilder tryOne() { StringBuilder builder = new StringBuilder(); builder.append("Cool"); builder.append("Return"); StringBuilder temp = builder; builder = null; return temp; } So setting builder = null does run, it just doesn't do anything useful. However, running builder.append("something") will have a visible effect, since both temp and builder refer to the same (mutable) object. Likewise, what's really happening in trySeven() is something more like this: protected int trySeven() { int count = 0; count = 99; int temp = count; count++; return temp; } In this case, since we're dealing with an int, the copies are independent, so incrementing one doesn't affect the other. All that said, the fact remains that putting return statements in a try-finally block is quite clearly confusing, so if you've got any kind of choice in the matter, you'd be better off rewriting things so that all your return statements are outside any try-finally blocks. Solution:7 StringBuilder cannot be null it expects a string value. Null arguments are generally bad when working with strings. count++ isn't declared??? builder.append("") you are appending a string - good. count=count++; Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com EmoticonEmoticon
http://www.toontricks.com/2019/05/tutorial-understanding-finally-block.html
CC-MAIN-2020-40
refinedweb
1,342
56.96
Cryptography is the ancient science of writing in secret code, and of course, for centuries writing meant pen and paper. Today, cryptography is critically important for modern applications, but what options do we have for cryptography in Swift? Marcin Krzyżanowski outlines existing libraries for cryptography, before presenting his work in Swift. He demonstrates some basic cryptographic algorithms, alongside example implementations in Swift. We end with a call to join in, learn, and implement! You can contribute to Marcin’s project on GitHub. Early Cryptography (0:26) Cryptography itself is not bound to any programming language in particular, or even programming languages at all. It can be performed with a pen and paper, or with a mechanical machine just like Enigma. The Enigma has three wheels, and wires that make it somewhat unpredictable. The operator of the machine presses the keyboard buttons, but sometimes the same key will produce different results. This helped make the Enigma so hard to crack. Invented by a German engineer at the end of World War I, it was commercialized shortly after, and used militarily until about 1970. It was cracked several times, first by the Polish Cipher Bureau in 1932, and more famously during World War II, by the British at Bletchley Park. Today, we have computers to do the same calculations that the Enigma once did. Existing Frameworks (1:35) When it comes to software, all of the operations are handled by cryptoframeworks, or libraries. In particular, the Apple platform has CommonCrypto within the system library. There is also OpenSSL. Both of these are C libraries that can be used with Swift through C interoperability. SwiftSSL is a wrapper around OpenSSL that we can use. NaCl only does a fraction of what OpenSSL can do, but is still a very good library. CryptoSwift is an approach to implement cryptofunctions in pure Swift, which I will talk about further on. Finally, you can use JavaScript with CryptoJS. Bridging is painful, but it can be used. Get more development news like this CommonCrypto (2:48) CommonCrypto is a C library, and it is part of the system available to iOS and OS X. It is an open source project, so you can explore its internals. Thanks to C interoperability, it can be used with Swift. However, you do have to use unsafe pointers. CommonCrypto is fairly easy to use. The following example demonstrates the encryption of data using the AES cipher. CCCrypt( UInt32(kCCEncrypt), UInt32(kCCAlgorithmAES128), UInt32(kCCOptionPKCS7Padding), keyBytes, // UnsafePointer<Void>(keyData.bytes) key.count, ivBytes, // UnsafePointer<Void>(ivData.bytes) dataBytes, // UnsafePointer<Void>(data.bytes) dataLength, cryptPointer, // UnsafeMutablePointer<Void>(cryptData!.mutableBytes) cryptLength, &numBytesEncrypted ) CryptoSwift (3:44) A few months ago, I started working on this project named CryptoSwift. I created it out of curiosity, and a need to learn. In the very early days, when there was only a wrapper around OpenSSL, I wondered what was inside its MD5 hash function. I wondered if I could implement it, and I did. I continued, and started to explore new areas, learning new things about cryptography. As an engineer, I have to challenge myself constantly. CryptoSwift is a Swift framework available for iOS and OS X. My principles in building this as a pure Swift project were to avoid C code and unsafe pointers. It’s constantly improving, and there is still a lot to do. It also comes with extensions over NSData and strings, so data can be encrypted immediately. To start, I have implemented some algorithms. Hashes (5:21) The first group of algorithms are hashes. These are the functions, and they can be used to verify or check the integrity of data. They are used around the cryptographic protocols, but have no secrecy. Everyone can calculate the hash around the data. With CryptoSwift, it looks like this: import CryptoSwift "SwiftSummit".md5() "SwiftSummit".sha1() "SwiftSummit".sha512() "SwiftSummit".crc32() I implemented MD5, SHA1, and SHA2 with variants. Using the extensions around Swift strings, these can be calculated straightaway. Ciphers (6:07) The next group that I implemented were ciphers. A cipher is a well-defined algorithm. I implemented two symmetric ciphers: AES and ChaCha. Ciphers start with plaintext. During encryption, a key is applied, some operations are performed and ciphertext is the output. Decryption is a similar, but reversed process. AES, or Advanced Encryption Standard, is perhaps the most popular cipher you may have heard of, and it is mathematically proven to be safe. It is so popular that vendors have started to implement support with their hardware. It can be used with assembler to make things very, very fast. Even though it’s good, it is hard to use with Swift because we do not have access to the assembler. So, if we want to use it, we have to use C. To use it, we need a key and initialization vector iv. We have an instance of aes, and an encryption area of [1,2]. import CryptoSwift let key = "1234567890123456" // key let iv = "1234567890123456" // random if let aes = AES(key: key, iv: iv, blockMode: .CBC) { if let encrypted = aes.encrypt([1,2], padding: PKCS7()) { let data = NSData.withBytes(encrypted) } } The other cipher I implemented is ChaCha. It was invented by the mathematician programmer Daniel J. Bernstein after he tried to come up with something faster but just as secure as AES. Apple chose to use it with HomeKit, and Google uses it with Chrome. However, there is a lack of official support in OpenSSL. Using ChaCha is similar to the previous example with AES. There is an instance of chacha and an encrypt function. import CryptoSwift let key = "1234567890123456" // key let iv = "1234567890123456" // random if let chacha = ChaCha20(key: key, iv: iv) { if let encrypted = chacha.encrypt([1,2]) { let data = NSData.withBytes(encrypted) } } For the API, I created an enum Cipher. It has two options, ChaCha20 and AES, as well as two functions, encrypt and decrypt. For this, I chose to work with an array of bytes. enum Cipher { case ChaCha20(key: [UInt8], iv: [UInt8]) case AES(key: [UInt8], iv: [UInt8], blockMode: CipherBlockMode) func encrypt(bytes: [UInt8]) -> [UInt8]? func decrypt(bytes: [UInt8]) -> [UInt8]? static func randomIV(blockSize: Int) -> [UInt8] } Block Mode (9:45) When encryption happens, a cipher works on a block. When you encrypt a longer message, you have to somehow concatenate the output. This is why mathematicians invented block modes. This algorithm uses a block cipher to encrypt a large message. I have implemented four of these algorithms: ECB, CBC, CFB, and CTR. In this talk, I will show you two of these algorithms, EBC and CBC. The most naive is ECB, or Electronic Codebook. Each of the three blocks of text are encrypted independently — there is plaintext, encryption, and ciphertext for each of the blocks. The result is obtained by concatenating one to the next. But, the problem with this mode is that it somehow exposes some of the information of plaintext, because it repeatedly performs the same operation. CBC, or Cipher Block Chaining, is something of an opposite approach. This is the default block mode that exists in CryptoSwift. The difference with this mode is that the input for every block uses the output from the previous block. The encryption is done sequentially, so although it cannot be parallelized, it ends up more secure. Notice that, in this mode, we have an initialization vector, which is the only random value in the encryption. Authenticators (12:11) The last group of algorithms are authenticators. The message authentication code is a short piece of information that can provide integrity and authenticity assurances. When you receive a message, you can check that it is the file you expect, from the person you expected. The key is a part of the authentication operation. I implemented two of them: Poly, because of ChaCha, and HMAC, because of AES. The enum I used to implement has these two cases, as well as the function authenticate. HMAC has a variant due to the hash function. enum Authenticator { case Poly1305(key: [UInt8]) case HMAC(key: [UInt8], variant: HMAC.Variant) func authenticate(message: [UInt8]) -> [UInt8]? } Performance (13:24) There are currently some problems with performance. This implementation is slower than CommonCrypto, although not everywhere. It is significantly slower for AES - it takes about forty seconds to encrypt one megabyte, compared to only a couple seconds. AES uses a lot of operations and loops, so perhaps that is the reason for its poor performance. However, ChaCha is extremely fast. It takes only a second and a half for the same code. Recently, I improved performance by 40% just be reserving memory for the array using the function reserveCapacity. From what I can see, the allocation of small chunks of memory is visible. Crypto You Can Do (14:52) So, Crypto you can do. How? Well, you can certainly contribute to CryptoSwift. I would also recommend Daniel Bernstein’s site. Again, he was the author of the library NaCl. Of course, you might understand nothing at first. Keep re-reading, and at some point, try to write some code. Implement it and do tests. These ciphers are hard to keep track of in your head. When you have some code, share it and ask for feedback! The worst that can happen will be nothing; the best is that you will learn something new. There is always a lot to fix and improve, especially with a project like this. You have seen the performance issues; there are other issues, too, as well as a lot of missing pieces. I encourage you to contribute, even if all you might do is work on performance or API design. It isn’t about cryptography, but Swift. Thank you! About the content This talk was delivered live in March 2015 at Swift Summit London. The video was transcribed by Realm and is published here with the permission of the conference organizers.
https://academy.realm.io/posts/swift-summit-marcin-krzyzanowski-cryptoswift-cryptography/
CC-MAIN-2021-17
refinedweb
1,645
67.45
Mapping in BIZTALK Introduction There have been many articles in the internet that describes or outline about mapping features in BizTalk. But today in this article we are diving further to know more features that make our document mapping possible to our need. BizTalk as a windows server product integrates the heterogeneous system in a particular System architecture. Main important feature of this server product is supported by XSLT extensively. BizTalk Mapper is a graphical tool which assists in transforming data format between any number of schemas of disparate systems (or) entities. Maps also contain the business logic to support our transformation of data required by the target system. Transformation can be achieved by writing a Custom .NET components, using the external style sheet to perform our work, code driven solution with C# inline code, performing message assignment in the orchestration and driving it to the map. More preciously we can use the Rule engine to establish the common business logic transformation. Rather than encoding these rules in each end system rule should be deployed in the integration hub. Doing this way simplifies our update process.There is no need for us to customize the system to suit the other system required format. Integration layer will take care all the complex mapping process. This article is intended to focus on main and advanced concepts of Map. Beginners have to read the respective beginning articles to familiar themselves with mapping process. Mapping tool is supplied with BizTalk server product and documentation of BizTalk is a good start. Functoids String Extracting Functoids: These are the functoids that is used to extract certain number of constant strings from the Input document schema and map it to destination schema. For example if we have to extract first five letters of string then we can use the string left functoid and specify the source by simply dragging to the functoid left input and specify the number of strings to be extracted. Counting of the string value starts from 0. For eg to extract 2 letters specify 0 -1. This functoid can be used in conjection with string concatenation functoid. Any change in the number of string retrievals have bottle neck. Maps has to be recompiled. Mass Copy functoid There are many situations where the fields that might appear in the source document and their mapping with the destination document is dynamic. What should be done in order to create or add the field in schema dynamically? Use the Keyword <ANY> in the schema to achieve this. It is easier said than done. Schema containing the <ANY> element is not validated 100 percent. Validation of schema occurs by combination of namespace and root node and with <ANY> element defined it might be difficult to know the order of field occurrences in advance. In order to obtain the fine validation use validation in pipelines. From the advanced Tab drag the Mass-copy functoid and place it in the Map. After defining the order of known field create the field with the tag <ANY>.when the schema with the <ANY> element is validated they are considered success. If this schema is mapped to the other schema containing the <ANY> element then the Mass-Copy functoid should be used. Mass – copy functoid can be performed only with the node level not with the fields level. So the node containing the <ANY> field has to be mapped. Database Lookup functoid Use this functoid to retrieve the information from the database. First matched result row is obtained . In order to obtain all the rows that matches our criteria use the looping functoid. Database functoid uses the four parameters. Parameter 1: Identify the field that will be used in the search field. This is the string that would appear in the SQL WHERE clause. Connect this is to the left side of the DATABASE look up functoid. Parameter 2: Give the values for the connection strings like server name, database name, security. Remember if your SQL server is configured for trusted connection then your BizTalk host to be one of the member of this trusted connection else the connection would be refused. Parameter 3: Specify the table from which the data should be obtained after the search Parameter 4: Specify the column of the search. If the column you are specifying end up with more than one result set them using the combination key is advisable. Both the key values should be concatenated before being used as a query search criteria. Query that we specify in Db look up functoid will be converted as a dynamic SQL. It is more important to use ERROR RETURN Functoid to display in case if there is any connection issue. Value Extractor functoid should be used to concatenate the values obtained by the query to the destination schema. It is advisable to have the value extractor function name same as that of the search field to facilitate our work. Looping functoid This functiod is used when there is repeating data structure found in the source side of the mapping. These repeating structures are consolidated and mapped to the single destination structure. Connect the repeating structure to the left of the functoid and right to the destination node.This functoid functions like the For ... Each statement of our programming languages. For eg a person might order product through the company website and also through the vendor's website. To iterate through the sales we can use this functoid to consolidate the sales and map to our total sales in our destination schema. You can perform the customized consolidation using the Logical functoid. Logical functoid accepts two parameters of which one of the parameter should be true and other is one which we need to evaluate. For example we can use this functoid to find the Maximum sales by embedding the Logic Sales > 20000. Iteration functoid If we want to index through the repeating structure or perform some Business Logic function that is based on no of occurrences of particular field or value we can use iteration functoid. For example if we want to find the record count of repeating source record connect the repeating record to the left of record count functoid . result will be count of repeating source record. You can combine this with the decision logic functoid to do further filtration. Custom functoid You can create your own functoid instead of pasting inline code in each of the shapes.Create C# New library project. You have to reference Microsoft. BizTalk.BaseFunctoids.dll to write the new functoid on our own. We have to use Resource file (resX) to store the functoid configuration parameter. Code the functionality that should be exhibited by the functoid. You have to use the third party tool to attach the image. Four main properties that you need to describe and mention in your custom code logic is 1.SetName 2.SetDescription 3.SetToolTip 4.SetBitmap These 4 parameters are mandatory settings. If you are referencing the assembly externally then you have to set the SetExternalFunction attribute. Only with this setting your BizTalk can know where to find the definition for implementing the mapping process.Your implementation is identified with the help of the assembly.Class.Function name.Your Custom functoid has to be copied to the GAC .You can also set the minimum Input required by the functoid and maximum parameter required. Date & Time functoid Most critical business function that depends on time need to have field that describes the time of action performed. For example every single transaction in Banking application whether it succeeds or fails need to keep track of time of action. Date function is used in conjunction and mapped to the destination field.This functoid is very useful in Batch processings. We can also find the number of days past the action happened.You can use the Add Days to produce only day values with out time. Implementing If-Then-Else function To implement the If-Then-Else function of programming language use any of the desired logical function from the group and used it with the Value extractor functoid. Establish the connection with the value extractor functoid and Logical functoid. Field that need s to be checked for the condition is linked with the value extractor functoid.Value Extractor functoid defines the action that needs to happen when the logical condition evaluates to true. For eg if we want to map to different destination nodes based on the logical condition evaluation this functoid combination can be used. Link to the logical condition defines the information on which map will work on. Order of the Value Mapping functoid play a crucial role in determining the outcome. Sometimes order may mix up. One has to visit the configure Functoid input window to check the order of evaluation. Any changes in order should be corrected here. Value Mapping Functiod. Value Mapping functoid( Flattening) , Value Mapping functoid are the two forms available.These two fucntoid causes new record to be created in destination for each record in the source based on the logical condition evaluation to true or false. New record will be created for true evaluation, no record will be created false evaluation. Difference between these two functoid how they are mapped to destination. If value of the incoming field cannot be mapped then value mapping functoid creates empty destination node will be created where as Value Mapping functoid ( flattening) will not created a empty node. This default behaviour of Value mapping functoid can be changed with the help of other functoid and scripting. Scripting Functoids External assembly You can use scripting functoid to do number of functions. It can call the external assembly, it can map to external XSLT file,It can allow you to write c# inline code etc. To link the external assembly go to it property window select script type as External assembly from Drop down.Link to the assembly,class,function of our customized implementation. Be sure that external assembly need to be present in the GAC before compiling and using the Map else error will be thrown. Inline C# code You can use the inline script buffer to specify the inline code. Note that this embedding feature creates a maintenance bottle neck.Each method variable mentioned should be unqiue as there are mapped in CDATA tag of map.You cannot debug the code that is inlined. So before embedding them test them in the Visual studio and import it to the inline script buffer. This solves lot of confusion. Passing Orchestration variable in to MAP Create the Message schema that contains the fields to capture the information from orchestration.These fields need to be promoted or distinguished field so that it can be made available to context message Only message with the context message can be mapped. Create and Link the orchestration message to this schema Define the orchestration variable that instantiates context message in the orchestration. This variable should be of System.Xml.XmlDocument type Drag a message assignment shape in the orchestration designer and in the construct message specify MapContextMsg as the only message constructed. Open the Expression editor and load the XML string confirming ContextMessage in to the XmlDocMsgBuilder variable. Then assign this Xml variable to MapContextMsg. Once instantiated they can be used any where in orchestration. Once the context message is available mapping need to be performed to map source or destination. Now drag Transform shape and open the dialog box by double clicking it.Specify the source message to be MapContextMsg and include any other is available. Specify destination message. This steps are used becoz Mapper cannot directly access the business logic variables from orchestration directly. It can use functoids to invoke other things. However the destination message should not be added as input message for schema XSLT Scripting You can used XSLT scripting but it can be used for smaller transformation. Coding will be huge for larger tranforamtion. Instead map all the transformation in a file and import the file to the Mapper by write clicking it on the Mapping page.In contrast to it you can use XSLT Template. Main difference between scripting and Template is how sources document are passed and accessed. Inline uses XSL methods where as in templates they are passed as parameters. Templates are used where you need same logic to do the mapping transformation. Notes: Few notes about Best practice. Organising the map can be done using the Grids, Links. Links that form a particular group can be grouped together in Grid page and there can be as many grid pages depending on your requirement. Keep the map simple. Minimize the use of Business rules. Use labels where ever required. Summary In this article, I have tried to cover most important things that can be performed and used in Mapping. Mapping plays an important role for many integration to exchange the data. There are quiet few lefts will update that with.
https://www.codeproject.com/Articles/20521/BizTalk-Mapping-Part-I
CC-MAIN-2018-22
refinedweb
2,160
56.45
In [1] the authors look at applying Newton’s root-finding method to the function f(z) = zp where p = a + bi. They show that if you start Newton’s method at z = 1, the kth iterate will be (1 – 1/p)k. This converges to 0 when a > 1/2, runs around in circles when a = 1/2, and diverges to infinity when a < 1/2. You can get a wide variety of images by plotting the iterates for various values of the exponent p. Here are three examples. Here’s the Python code that produced the plots. import numpy as np import matplotlib.pyplot as plt def make_plot(p, num_pts=40): k = np.arange(num_pts) z = (1 - 1/p)**k plt.plot(z.real, z.imag) plt.axes().set_aspect(1) plt.grid() plt.title(f"$x^{{ {p.real} + {p.imag}i }}$") plt.savefig(f"newton_{p.real}_{p.imag}.png") plt.close() make_plot(0.53 + 0.4j) make_plot(0.50 + 0.3j) make_plot(0.48 + 0.3j) Note that the code uses f-strings for the title and file name. There’s nothing out of the ordinary in the file name, but the title embeds LaTeX code, and LaTeX needs its own curly braces. The way to produce a literal curly brace in an f-string is to double it. More posts on Newton’s method [1] Joe Latulippe and Jennifer Switkes. Sometimes Newton’s Method Always Cycles. The College Mathematics Journal, Vol. 43, No. 5, pp. 365-370 One thought on “Newton’s method spirals” Hi John, thanks for all your posts, which I’ve been enjoying for some months now. This is just a a tech comment about the code above. I get the graphs, but with a weird box in the centre (see ) However, if I call plt.axes().set_aspect(1) /before/ plt.plot(z.real, z.imag), instead of after it as in the original code, then the graphs come out ok. I have python 3.8.3 (on macOS 10.15.7), numpy 1.20.2 and matplotlib 3.4.1. best wishes! Toby
https://www.johndcook.com/blog/2021/01/23/newtons-method-spirals/
CC-MAIN-2022-40
refinedweb
352
76.62
How many times have you prepared your income tax returns for the previous year, only wishing you knew then what you know now so you could go back and make more advantageous tax decisions? In most cases, you are stuck with the decisions you made before the new tax year began, even though you may not have all of the relevant tax information available to assist with those decisions until several months into the new tax year. Too bad for you, says the IRS, unless you are an estate or trust. Under Section 663(b) of the Internal Revenue Code, any distribution by an estate or trust within the first 65 days of the tax year can be treated as having been made on the last day of the preceding tax year. For example, a distribution of $500 of trust income by the trustee to a beneficiary on Jan. 20, 2017, can be treated as having been made in either the 2017 tax year or the 2016 tax year. In most years, the last day to make a distribution count toward the previous tax year is March 6; but in leap years like 2016, the last day is March 5. The date will also change if the 65th day falls on a weekend, in which case it will be the next business day. The 65th day in 2017 will be March 6. The election to treat the distribution as being made in the previous tax year must be made by the fiduciary on a timely filed income tax return (including extensions) for the tax year to which the distribution is meant to apply. A fiduciary may make the election for only a partial amount of the distributions within the 65-day period, but once the election is made, it is irrevocable. The main advantage of this tax rule is it may provide an opportunity for tax savings. An estate or trust pays income taxes at graduated rates similar to individuals, but under current laws the top federal income tax rate (39.6 percent) applies to income in excess of $12,400. By comparison, married couples filing jointly pay the top rate when income exceeds $466,950 (or $415,050 for single filers). In some cases, an additional 3.8 percent Medicare surtax on the net investment income of the estate or trust may apply, resulting in a total marginal tax of 43.4 percent. To avoid paying a higher tax rate, income from the estate or trust may be distributed to a beneficiary, and the beneficiary will then pay any income tax associated with the distribution, rather than the estate or trust, at the beneficiary’s individual tax rate. (See our previous post regarding application of the Net Investment Income Tax). For example, a beneficiary who pays income taxes at a rate of 25 percent would pay less income tax on the distribution amount than a trust already paying at the top rate of 39.6 percent (or even 43.4 percent). In cases of estates or trusts with large taxable income and beneficiaries in lower tax brackets, the tax savings can be significant. State income tax consequences may also apply to distributions made from a trust or estate, and there may be limitations on the amounts of distributions a fiduciary can apply using the 65-day rule. It is recommended that you discuss all possible consequences with your tax advisor before trying to apply the rules discussed above.
https://www.lexology.com/library/detail.aspx?g=ff9c7695-e8af-4793-ac0d-09b5609fe1ee
CC-MAIN-2018-13
refinedweb
579
55.98
Post Syndicated from Bruce Schneier original Tag Archives: humor APT Horoscope Post Syndicated from Bruce Schneier original This. Nihilistic Password Security Questions Post Syndicated from Bruce Schneier original Posted three years ago, but definitely appropriate for the times. Friday Squid Blogging: T-Shirt Post Syndicated from Bruce Schneier original As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered. Read my blog posting guidelines here. Resetting Your GE Smart Light Bulb Post Syndicated from Bruce Schneier original If you need to reset the software in your GE smart light bulb — firmware version 2.8 or later — just follow these easy instructions:. Welcome to the future! Introducing Furball — Rapid Content Delivery Post Syndicated from Yev original When we first introduced Catblaze back in 2016, people called us crazy. Back then we thought we were onto something after our CAT scans filtered through our 200 petabytes of data and saw that over 50% of the material was cat pictures. Well, we couldn’t have been more right. We’re now backing up well over 750 petabytes of data and our Catblaze service accounts for almost one-third of it. Similarly to how we keep iterating on Backblaze Cloud Backup, we knew we had to keep working on Catblaze as well. Introducing the Furball With that many cat photos being uploaded to us, we saw the need to introduce a rapid cat delivery system to our Catblaze offering, which can concatenate your new cat content with your existing cat content in the cloud! We took a look at our B2 Fireball and realized that we could create a similar system that was integrated with our Restore by Mail service to deliver your cat content currently backed up to Catblaze. Introducing: the Furball! How it Works You’ve uploaded all of your cat content to Catblaze, and you feel great. But oh no! Disaster has struck when your frisky feline flipped Fresca all over your computer. Now you need some way to get all of those feline files back. Fret not! Just log in to Catblaze, navigate to the Furball page, enter your address, et voila — all of your cat content will be coughed up to the Furball and sent directly back to you! One thing to keep in mind is that Backblaze typically has 11 nines of durability, Catblaze and the Furball program are down to only 9 lives of durability, but don’t let that worry you. Furball Pricing You might be thinking that the Furball is priceless, but we’re pleased to announce that it won’t actually cost a paw and a leg! We recently increased our Restore by Mail capabilities and Furball pricing is similar at just $189 per Furball for up to 8 terabytes of frisky feline fun! *Please note that the Furball ships as soon as we can actually get the cat contents inside the box. This might sound easy but herding cats has proven tricky in the past. Also, please make sure you send us clean data —- otherwise it takes us a while to scrub it. As the old saying goes, “litterbox in, litterbox out.” Availability and Pricing Catblaze is available now for just $6/month per computer for an unlimited amount of cat-related content. We’ll also let you upload other content as well, but we know it’s not as important. Just cough up $189 and the Furball is yours — sent overnight by PetEx! Building on the success of our Restore Return Refund program, you can return your Furball to us within 30 days and we’ll refund you the money! You can try Catblaze for free by visiting: though you might find that it says Backblaze once installed. We regret this typo. The post Introducing Furball — Rapid Content Delivery appeared first on Backblaze Blog | Cloud Storage & Cloud Backup. The Maltese MacBook Post Syndicated from Roderick Bauer original — Editor It was a Wednesday and it would have been just like any other Wednesday except Apple was making its big fall product announcements. Just my luck, I had to work in the San Francisco store, which meant that I was the genius who got to answer all the questions. I had just finished helping a customer who claimed that Siri was sounding increasingly impatient answering his questions when I looked up and saw her walk in the door. Her blonde hair was streaked with amethyst highlights and she was wearing a black leather tutu and polished kneehigh Victorian boots. Brightly colored tattoos of Asian characters ran up both of her forearms and her neck. Despite all that, she wouldn’t particularly stand out in San Francisco, but her cobalt-blue eyes held me and wouldn’t let me go. She rapidly reduced the distance between the door and where I stood behind the counter at the back of the store. She plopped a Surface Pro computer on the counter in front of me. “I lost my data,” she said. I knew I’d seen her before, but I couldn’t place where. “That’s a Windows computer,” I said. She leaned over the counter towards me. Her eyes were even brighter and bluer close up. “Tell me something I don’t know, genius,” she replied. Then I remembered where I’d seen her. She was on Press: Here a while back talking about her new startup. She was head of software engineering for a Google spinoff. Angels all over the valley were fighting to throw money at her project. I had been sitting in my boxers eating cold pizza and watching her talk on TV about AI for Blockchain ML. She was way out of my league. “I was in Valletta on a business trip using my MacBook Pro,” she said. “I was reading Verlaine on the beach when a wave came in and soaked Reggie. ‘Reggie’ is my MacBook Pro. Before I knew it, it was all over.” Her eyes misted up. “You know that there isn’t an Apple store in Malta, don’t you?” she said. “We have a reseller there,” I replied. “But they aren’t geniuses, are they?” she countered. “No, they’re not.” She had me there. “I had no choice but to buy this Surface Pro at a Windows shop on Strait Street to get me through the conference. It’s OK, but it’s not Reggie. I came in today to get everything made right. You can do that for me, can’t you?” I looked down at the Surface Pro. We weren’t supposed to work on other makes of computers. It was strictly forbidden in the Genius Training Student Workbook. Alarms were going off in my head telling me to be careful: this dame meant nothing but trouble. “Well?” she said. I made the mistake of looking at her and lingering just a little too long. Her eyes were both shy and probing at the same time. I felt myself falling head over heels into their inky-blue depths. I shook it off and gradually crawled back to consciousness. I told myself that if a customer’s computer needs help, it doesn’t make any difference what you think of the computer, or which brand it is. She’s your customer, and you’re supposed to do something about it. That’s the way it works. Damn the Genius Training Student Workbook. “OK,” I said. “Let’s take care of this.” I asked her whether she had files on the Surface Pro she needed to save. She told me that she used Backblaze Cloud Backup on both the new Surface Pro and her old MacBook Pro. My instincts had been right. This lady was smart. “That will make it much easier,” I told her. “We’ll just download the backed up files for both your old Macbook Pro and your Surface Pro from Backblaze and put them on a new MacBook Pro. We’ll be done in just a few minutes. You know about Backblaze’s Inherit Backup State, right? It lets you move your account to a new computer, restore all your files from your backups to the computer, and start backing up again without having to upload all your files again to the cloud. “What do you think?” she asked. I assumed she meant that she already knew all about Inherit Backup State, so I went ahead and configured her new computer. I was right. It took me just a little while to get her new MacBook Pro set up and the backed up files restored from the Backblaze cloud. Before I knew it, I was done. “Thanks” she said. “You’ve saved my life.” Saved her life? My head was spinning. She turned to leave. I wanted to stop her before she left. I wanted to tell her about my ideas for an AI-based intelligent customer support agent. Maybe she’d be impressed. But she was already on her way towards the door. I thought she was gone forever but she stopped just before the door. She flipped her hair back over her shoulder as she turned to look at me. “You really are a genius.” She smiled and walked out of the store and out of my life. My eyes lingered on the swinging door as she crossed the street and disappeared into the anonymous mass of humanity. I thought to myself: she’ll be back. She’ll be back to get a charger, or a Thunderbolt to USB-C adaptor, or Magsafe to USB-C, or Thunderbolt 3 to Thunderbolt 2, or USB-C to Lightning, or USB-A to USB-C, or DisplayPort to Mini DisplayPort, or HDMI to DisplayPort, or vice versa. Yes, she’ll be back. I panicked. Maybe she’ll take the big fall for Windows and I’ll never see her again. What if that happened? Then I realized I was just being a sap. Snap out of it! I’ll wait for her no matter what happens. She deserves that. The post The Maltese MacBook appeared first on Backblaze Blog | Cloud Storage & Cloud Backup. xkcd on Voting Computers Post Syndicated from Bruce Schneier original OMG The Stupid It Burns SUPER game night 3: GAMES MADE QUICK??? 2.0 Post_1<<_3<<_8<<_11<<_12<<?? XKCD’s Smartphone Security System Post Syndicated from Bruce Schneier original Security Vulnerabilities in Star Wars Post Syndicated from Bruce Schneier original A fun video describing some of the many Empire security vulnerabilities in the first Star Wars movie. Happy New Year, everyone. "Santa Claus is Coming to Town" Parody Post Syndicated from Bruce Schneier original Wondermark on Security Post Syndicated from Bruce Schneier original More notes on US-CERTs IOCs Post Syndicated from Robert Graham original Yet another Russian attack against the power grid, and yet more bad IOCs from the DHS US-CERT. IOCs are “indicators of compromise“, things you can look for in order to order to see if you, too, have been hacked by the same perpetrators. There are several types of IOCs, ranging from the highly specific to the uselessly generic. A uselessly generic IOC would be like trying to identify bank robbers by the fact that their getaway car was “white” in color. It’s worth documenting, so that if the police ever show up in a suspected cabin in the woods, they can note that there’s a “white” car parked in front. But if you work bank security, that doesn’t mean you should be on the lookout for “white” cars. That would be silly. This is what happens with US-CERT’s IOCs. They list some potentially useful things, but they also list a lot of junk that waste’s people’s times, with little ability to distinguish between the useful and the useless. An example: a few months ago was the GRIZZLEYBEAR report published by US-CERT. Among other things, it listed IP addresses used by hackers. There was no description which would be useful IP addresses to watch for, and which would be useless. Some of these IP addresses were useful, pointing to servers the group has been using a long time as command-and-control servers. Other IP addresses are more dubious, such as Tor exit nodes. You aren’t concerned about any specific Tor exit IP address, because it changes randomly, so has no relationship to the attackers. Instead, if you cared about those Tor IP addresses, what you should be looking for is a dynamically updated list of Tor nodes updated daily. And finally, they listed IP addresses of Yahoo, because attackers passed data through Yahoo servers. No, it wasn’t because those Yahoo servers had been compromised, it’s just that everyone passes things though them, like email. A Vermont power-plant blindly dumped all those IP addresses into their sensors. As a consequence, the next morning when an employee checked their Yahoo email, the sensors triggered. This resulted in national headlines about the Russians hacking the Vermont power grid. Today, the US-CERT made similar mistakes with CRASHOVERRIDE. They took a report from Dragos Security, then mutilated it. Dragos’s own IOCs focused on things like hostile strings and file hashes of the hostile files. They also included filenames, but similar to the reason you’d noticed a white car — because it happened, not because you should be on the lookout for it. In context, there’s nothing wrong with noting the file name. But the US-CERT pulled the filenames out of context. One of those filenames was, humorously, “svchost.exe”. It’s the name of an essential Windows service. Every Windows computer is running multiple copies of “svchost.exe”. It’s like saying “be on the lookout for Windows”. Yes, it’s true that viruses use the same filenames as essential Windows files like “svchost.exe”. That’s, generally, something you should be aware of. But that CRASHOVERRIDE did this is wholly meaningless. What Dragos Security was actually reporting was that a “svchost.exe” with the file hash of 79ca89711cdaedb16b0ccccfdcfbd6aa7e57120a was the virus — it’s the hash that’s the important IOC. Pulling the filename out of context is just silly. Luckily, the DHS also provides some of the raw information provided by Dragos. But even then, there’s problems: they provide it in formatted form, for HTML, PDF, or Excel documents. This corrupts the original data so that it’s no longer machine readable. For example, from their webpage, they have the following: import “pe” import “hash” Among the problems are the fact that the quote marks have been altered, probably by Word’s “smart quotes” feature. In other cases, I’ve seen PDF documents get confused by the number 0 and the letter O, as if the raw data had been scanned in from a printed document and OCRed. If this were a “threat intel” company, we’d call this snake oil. The US-CERT is using Dragos Security’s reports to promote itself, but ultimate providing negative value, mutilating the content. This, ultimately, causes a lot of harm. The press trusted their content. So does the network of downstream entities, like municipal power grids. There are tens of thousands of such consumers of these reports, often with less expertise than even US-CERT. There are sprinklings of smart people in these organizations, I meet them at hacker cons, and am fascinated by their stories. But institutionally, they are dumbed down the same level as these US-CERT reports, with the smart people marginalized. There are two solutions to this problem. The first is that when the stupidity of what you do causes everyone to laugh at you, stop doing it. The second is to value technical expertise, empowering those who know what they are doing. Examples of what not to do are giving power to people like Obama’s cyberczar, Michael Daniels, who once claimed his lack of technical knowledge was a bonus, because it allowed him to see the strategic picture instead of getting distracted by details. “Only a year? It’s felt like forever”: a twelve-month retrospective Post Syndicated from Alex Bate original This weekend saw my first anniversary at Raspberry Pi, and this blog marks my 100th post written for the company. It would have been easy to let one milestone or the other slide had they not come along hand in hand, begging for some sort of acknowledgement. The day Liz decided to keep me So here it is! Joining the crew Prior to my position in the Comms team as Social Media Editor, my employment history was largely made up of retail sales roles and, before that, bit parts in theatrical backstage crews. I never thought I would work for the Raspberry Pi Foundation, despite its firm position on my Top Five Awesome Places I’d Love to Work list. How could I work for a tech company when my knowledge of tech stretched as far as dismantling my Game Boy when I was a kid to see how the insides worked, or being the one friend everyone went to when their phone didn’t do what it was meant to do? I never thought about the other side of the Foundation coin, or how I could find my place within the hidden workings that turned the cogs that brought everything together. … when suddenly, as if out of nowhere, a new job with a dream company. #raspberrypi #positive #change #dosomething 12 Likes, 1 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “… when suddenly, as if out of nowhere, a new job with a dream company. #raspberrypi #positive…” A little luck, a well-written though humorous resumé, and a meeting with Liz and Helen later, I found myself the newest member of the growing team at Pi Towers. Ticking items off the Bucket List I thought it would be fun to point out some of the chances I’ve had over the last twelve months and explain how they fit within the world of Raspberry Pi. After all, we’re about more than just a $35 credit card-sized computer. We’re a charitable Foundation made up of some wonderful and exciting projects, people, and goals. High altitude ballooning (HAB) Skycademy offers educators in the UK the chance to come to Pi Towers Cambridge to learn how to plan a balloon launch, build a payload with onboard Raspberry Pi and Camera Module, and provide teachers with the skills needed to take their students on an adventure to near space, with photographic evidence to prove it. All the screens you need to hunt balloons. . We have our landing point and are now rushing to Therford to find the payload in a field. . #HAB #RasppberryPi 332 Likes, 5 Comments – Raspberry Pi (@raspberrypifoundation) on Instagram: “All the screens you need to hunt balloons. . We have our landing point and are now rushing to…” I was fortunate enough to join Sky Captain James, along with Dan Fisher, Dave Akerman, and Steve Randell on a test launch back in August last year. Testing out new kit that James had still been tinkering with that morning, we headed to a field in Elsworth, near Cambridge, and provided Facebook Live footage of the process from payload build to launch…to the moment when our balloon landed in an RAF shooting range some hours later. “Can we have our balloon back, please, mister?” Having enjoyed watching Blue Peter presenters send up a HAB when I was a child, I marked off the event on my bucket list with a bold tick, and I continue to show off the photographs from our Raspberry Pi as it reached near space. Spend the day launching/chasing a high-altitude balloon. Look how high it went!!! #HAB #ballooning #space #wellspacekinda #ish #photography #uk #highaltitude 13 Likes, 2 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “Spend the day launching/chasing a high-altitude balloon. Look how high it went!!! #HAB #ballooning…” You can find more information on Skycademy here, plus more detail about our test launch day in Dan’s blog post here. Dear Raspberry Pi Friends… My desk is slowly filling with stuff: notes, mementoes, and trinkets that find their way to me from members of the community, both established and new to the life of Pi. There are thank you notes, updates, and more from people I’ve chatted to online as they explore their way around the world of Pi. *heart melts* By plugging myself into social media on a daily basis, I often find hidden treasures that go unnoticed due to the high volume of tags we receive on Facebook, Twitter, Instagram, and so on. Kids jumping off chairs in delight as they complete their first Scratch project, newcomers to the Raspberry Pi shedding a tear as they make an LED blink on their kitchen table, and seasoned makers turning their hobby into something positive to aid others. It’s wonderful to join in the excitement of people discovering a new skill and exploring the community of Raspberry Pi makers: I’ve been known to shed a tear as a result. Meeting educators at Bett, chatting to teen makers at makerspaces, and sharing a cupcake or three at the birthday party have been incredible opportunities to get to know you all. You’re all brilliant. The Queens of Robots, both shoddy and otherwise Last year we welcomed the Queen of Shoddy Robots, Simone Giertz to Pi Towers, where we chatted about making, charity, and space while wandering the colleges of Cambridge and hanging out with flat Tim Peake. Queen of Robots @simonegiertz came to visit #PiTowers today. We hung out with cardboard @astro_timpeake and ate chelsea buns at @fitzbillies #Cambridge. . We also had a great talk about the educational projects of the #RaspberryPi team, #AstroPi and how not enough people realise we’re a #charity. . If you’d like to learn more about the Raspberry Pi Foundation and the work we do with #teachers and #education, check out our website –. . How was your day? Get up to anything fun? 597 Likes, 3 Comments – Raspberry Pi (@raspberrypifoundation) on Instagram: “Queen of Robots @simonegiertz came to visit #PiTowers today. We hung out with cardboard…” And last month, the wonderful Estefannie ‘Explains it All’ de La Garza came to hang out, make things, and discuss our educational projects. Estefannie on Twitter Ahhhh!!! I still can’t believe I got to hang out and make stuff at the @Raspberry_Pi towers!! Thank you thank you!! Meeting such wonderful, exciting, and innovative YouTubers was a fantastic inspiration to work on my own projects and to try to do more to help others discover ways to connect with tech through their own interests. Those ‘wow’ moments Every Raspberry Pi project I see on a daily basis is awesome. The moment someone takes an idea and does something with it is, in my book, always worthy of awe and appreciation. Whether it be the aforementioned flashing LED, or sending Raspberry Pis to the International Space Station, if you have turned your idea into reality, I applaud you. Some of my favourite projects over the last twelve months have not only made me say “Wow!”, they’ve also inspired me to want to do more with myself, my time, and my growing maker skill. Museum in a Box on Twitter Great to meet @alexjrassic today and nerd out about @Raspberry_Pi and weather balloons and @Space_Station and all things #edtech ⛅🛰⛅🛰 🤖🤖 Projects such as Museum in a Box, a wonderful hands-on learning aid that brings the world to the hands of children across the globe, honestly made me tear up as I placed a miniaturised 3D-printed Virginia Woolf onto a wooden box and gasped as she started to speak to me. Jill Ogle’s Let’s Robot project had me in awe as Twitch-controlled Pi robots tackled mazes, attempted to cut birthday cake, or swung to slap Jill in the face over webcam. Jillian Ogle on Twitter @SryAbtYourCats @tekn0rebel @Beam Lol speaking of faces… Every day I discover new, wonderful builds that both make me wish I’d thought of them first, and leave me wondering how they manage to make them work in the first place. Space We have Raspberry Pis in space. SPACE. Actually space. Raspberry Pi on Twitter New post: Mission accomplished for the European @astro_pi challenge and @esa @Thom_astro is on his way home Twelve months later, this still blows my mind. And let’s not forget… - The chance to visit both the Houses of Parliment and St James’s Palace - Going to a Doctor Who pre-screening and meeting Peter Capaldi, thanks to Clare Sutcliffe There’s no need to smile when you’re #DoctorWho. 13 Likes, 2 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “There’s no need to smile when you’re #DoctorWho.” We’re here. Where are you? . . . . . #raspberrypi #vidconeu #vidcon #pizero #zerow #travel #explore #adventure #youtube 1,944 Likes, 30 Comments – Raspberry Pi (@raspberrypifoundation) on Instagram: “We’re here. Where are you? . . . . . #raspberrypi #vidconeu #vidcon #pizero #zerow #travel #explore…” - Making a GIF Cam and other builds, and sharing them with you all via the blog Made a Gif Cam using a Raspberry Pi, Pi camera, button and a couple LEDs. . When you press the button, it takes 8 images and stitches them into a gif file. The files then appear on my MacBook. . Check out our Twitter feed (Raspberry_Pi) for examples! . Next step is to fit it inside a better camera body. . #DigitalMaking #Photography #Making #Camera #Gif #MakersGonnaMake #LED #Creating #PhotosofInstagram #RaspberryPi 19 Likes, 1 Comments – Alex J’rassic (@thealexjrassic) on Instagram: “Made a Gif Cam using a Raspberry Pi, Pi camera, button and a couple LEDs. . When you press the…” The next twelve months Despite Eben jokingly firing me near-weekly across Twitter, or Philip giving me the ‘Dad glare’ when I pull wires and buttons out of a box under my desk to start yet another project, I don’t plan on going anywhere. Over the next twelve months, I hope to continue discovering awesome Pi builds, expanding on my own skills, and curating some wonderful projects for you via the Raspberry Pi blog, the Raspberry Pi Weekly newsletter, my submissions to The MagPi Magazine, and the occasional video interview or two. It’s been a pleasure. Thank you for joining me on the ride! The post “Only a year? It’s felt like forever”: a twelve-month retrospective appeared first on Raspberry Pi. John Oliver is wrong about Net Neutrality Post Syndicated from Robert Graham original Tune in now to catch @lastweetonight with @iamjohnoliver on why we need net neutrality and Title II. — EFF (@EFF) May 8, 2017 The command-line, for cybersec Post. An SQL Injection Attack Is a Legal Company Name in the UK Post Syndicated from Bruce Schneier original Someone just registered their company name as ; DROP TABLE “COMPANIES”;– LTD. Reddit thread. Obligatory xkcd comic. Election-Day Humor Post Syndicated from Bruce Schneier original This was written in 2004, but still holds true today.
https://noise.getoto.net/tag/humor/
CC-MAIN-2021-39
refinedweb
4,521
71.04
2.2. Data Preprocessing¶ So far we have introduced a variety of techniques for manipulating data that are already stored in ndarrays. To apply deep learning to solving real-world problems, we often begin with preprocessing raw data, rather than those nicely prepared data in the ndarray format. Among popular data analytic tools in Python, the pandas package is commonly used. Like many other extension packages in the vast ecosystem of Python, pandas can work together with ndarray. So, we will briefly walk through steps for preprocessing raw data with pandas and converting them into the ndarray format. We will cover more data preprocessing techniques in later chapters. 2.2.1. Loading Data¶ As an example, we begin by creating an artificial dataset that is stored in a csv (comma-separated values) file. Data stored in other formats may be processed in similar ways. # Write the dataset row by row into a csv file data_file = '../data/house_tiny.csv' with open(data_file, 'w') as f: f.write('NumRooms,Alley,Price\n') # Column names f.write('NA,Pave,127500\n') # Each row is a data point f.write('2,NA,106000\n') f.write('4,NA,178100\n') f.write('NA,NA,140000\n') To load the raw dataset from the created csv file, we import the pandas package and invoke the read_csv function. This dataset has \(4\) rows and \(3\) columns, where each row describes the number of rooms (“NumRooms”), the alley type (“Alley”), and the price (“Price”) of a house. # If pandas is not installed, just uncomment the following line: # !pip install pandas import pandas as pd data = pd.read_csv(data_file) print(data) NumRooms Alley Price 0 NaN Pave 127500 1 2.0 NaN 106000 2 4.0 NaN 178100 3 NaN NaN 140000 2.2.2. Handling Missing Data¶ Note that “NaN” entries are missing values. To handle missing data, typical methods include imputation and deletion, where imputation replaces missing values with substituted ones, while deletion ignores missing values. Here we will consider imputation. By integer-location based indexing ( iloc), we split data into inputs and outputs, where the former takes the first 2 columns while the later only keeps the last column. For numerical values in inputs that are missing, we replace the “NaN” entries with the mean value of the same column. inputs, outputs = data.iloc[:, 0:2], data.iloc[:, 2] inputs = inputs.fillna(inputs.mean()) print(inputs) NumRooms Alley 0 3.0 Pave 1 2.0 NaN 2 4.0 NaN 3 3.0 NaN For categorical or discrete values in inputs, we consider “NaN” as a category. Since the “Alley” column only takes 2 types of categorical values “Pave” and “NaN”, pandas can automatically convert this column to 2 columns “Alley_Pave” and “Alley_nan”. A row whose alley type is “Pave” will set values of “Alley_Pave” and “Alley_nan” to \(1\) and \(0\). A row with a missing alley type will set their values to \(0\) and \(1\). inputs = pd.get_dummies(inputs, dummy_na=True) print(inputs) NumRooms Alley_Pave Alley_nan 0 3.0 1 0 1 2.0 0 1 2 4.0 0 1 3 3.0 0 1 2.2.3. Conversion to the ndarray Format¶ Now that all the entries in inputs and outputs are numerical, they can be converted to the ndarray format. Once data are in this format, they can be further manipulated with those ndarray functionalities that we have introduced in Section 2.1. from mxnet import np X, y = np.array(inputs.values), np.array(outputs.values) X, y (array([[3., 1., 0.], [2., 0., 1.], [4., 0., 1.], [3., 0., 1.]]), array([127500., 106000., 178100., 140000.])) 2.2.4. Summary¶ Like many other extension packages in the vast ecosystem of Python, pandascan work together with ndarray. Imputation and deletion can be used to handle missing data. 2.2.5. Exercises¶ Create a raw dataset with more rows and columns. Delete the column with the most missing values. Convert the preprocessed dataset to the ndarrayformat.
https://www.d2l.ai/chapter_preliminaries/pandas.html
CC-MAIN-2019-47
refinedweb
664
59.3
In Python, a namespace package allows you to spread Python code among several projects. This is useful when you want to release related libraries as separate downloads. For example, with the directories Package-1 and Package-2 in PYTHONPATH, Package-1/namespace/__init__.py Package-1/namespace/module1/__init__.py Package-2/namespace/__init__.py Package-2/namespace/module2/__init__.py the end-user can import namespace.module1 and import namespace.module2. On Python 3.3, you don't have to do anything, just don't put any __init__.py in your namespace package directories and it will just work. This is because Python 3.3 introduces implicit namespace packages. On older versions, there's a standard module, called pkgutil, with which you can 'append' modules to a given namespace. You should put those two lines in both Packages. 1/namespace/__init__.py and Package-2/namespace/__init__.py: from pkgutil import extend_path __path__ = extend_path(__path__, __name__) This will add to the package's __path__ all subdirectories of directories on sys.path named after the package. After this you can distribute the 2 packages separately.
https://www.tutorialspoint.com/How-to-create-python-namespace-packages-in-Python-3
CC-MAIN-2021-43
refinedweb
185
53.78
Quoting Oren Laadan (orenl@librato.com):> +Security> +========> +> .with access mode now, actually.> .That is now possible, and this is done.> .> +However, this can be controlled with a sysctl-variable.> +> +> diff --git a/Documentation/checkpoint/usage.txt b/Documentation/checkpoint/usage.txt> new file mode 100644> index 0000000..ed34765> --- /dev/null> +++ b/Documentation/checkpoint/usage.txt> @@ -0,0 +1,193 @@> +> + How to use Checkpoint-Restart> + =========================================> +> +> +API> +===> +> +The API consists of two new system calls:> +> +* int checkpoint(pid_t pid, int fd, unsigned long flag);> +> + Checkpoint a (sub-)container whose root task is identified by @pid,> + to the open file indicated by @fd. @flags may be on or more of:> + - CHECKPOINT_SUBTREE : allow checkpoint of sub-container> + (other value are not allowed).> +> + Returns: a positive checkpoint identifier (ckptid) upon success, 0 if> + it returns from a restart, and -1 if an error occurs. The ckptid will> + uniquely identify a checkpoint image, for as long as the checkpoint> + is kept in the kernel (e.g. if one wishes to keep a checkpoint, or a> + partial checkpoint, residing in kernel memory).> +> +* int sys_restart(pid_t pid, int fd, unsigned long flags);> +> + Restart a process hierarchy from a checkpoint image that is read from> + the blob stored in the file indicated by @fd. The @flags' will have> + future meaning (must be 0 for now). @pid indicates the root of the> + hierarchy as seen in the coordinator's pid-namespace, and is expected> + to be a child of the coordinator. (Note that this argument may mean> + 'ckptid' to identify an in-kernel checkpoint image, with some @flags> + in the future).> +> + Returns: -1 if an error occurs, 0 on success when restarting from a> + "self" checkpoint, and return value of system call at the time of the> + checkpoint when restarting from an "external" checkpoint.Return value of the checkpointed (init) task's syscall at the time ofexternal checkpoint? If so, what's the use for this, as opposed toreturning 0 as in the case of self-checkpoint?> + TODO: upon successful "external" restart, the container will end up> + in a frozen state.Should clone_with_pids() be mentioned here?thanks,-serge
https://lkml.org/lkml/2009/7/23/126
CC-MAIN-2016-30
refinedweb
348
64.81
Raspberry Pi 2 review The Bottom Line - Quad-core processor - 1GB memory - Runs Linux & Windows 10 - Price - No support for Android When the original Raspberry Pi was released in 2012 it kick-started a whole movement of hobbyists, developers, and educationalists who used the platform to create, hack and teach. The Raspberry Pi succeeded for three important reasons. First, it was a full computer on a little board, it had a desktop and you could write computer programs on it; Second, it had a set of GPIO pins, similar to those find on microcontroller platforms like the Arduino; Third, it only cost $35. Three years after the original launch, the Raspberry Pi Foundation has addressed the performance issue by releasing the Raspberry Pi 2. If there was one complaint about the Pi, it was about its overall performance when running desktop applications. Now, three years after the original launch, the Raspberry Pi Foundation has addressed the performance issue by releasing the Raspberry Pi 2. It has a quad-core processor and double the RAM of the Raspberry Pi 1. I ordered a Raspberry Pi 2 just days after the launch and since its arrival I have been taking it through its paces, and this is what I found out. The Raspberry Pi isn’t the only SBC on the market today and in terms of performance and features many of the alternative SBCs beat the Raspberry Pi 1 quite easily. However, with the possible exception of the ODROID C1, the Raspberry Pi has always won on price. With the launch of the Pi 2, the Raspberry Pi Foundation has kept the same sweet price point, but has managed to boost the performance of the board. Here is a detailed look at how the Raspberry Pi 2 compares to some other SBCs: Raspberry Pi 1 and Raspberry Pi 2 Like the Raspberry Pi 1, the Pi 2 can run a variety of Linux distributions. The easiest way to install an OS for the Pi is to use the New Out Of the Box Software (NOOBS) package. This package boots the Pi and then allows you to pick which operating system you want to install. You can even install multiple operating systems and dual-boot via a boot menu. NOOBS for the Pi 2 is still maturing. At the moment it only provides Raspbian (a Linux distro based on Debian Wheezy), and OpenELEC. All the other OSes like RASPBMC, Pidora, and RISC OS currently only work on the RPi 1. However, things are moving quickly and I expect that more support for the Pi 2 will come soon. One of the big announcements that was made at the time of the RPi 2 launch was that Microsoft will be releasing a version of Windows 10 that supports the Raspberry Pi 2. This release of Windows 10 will be free through the Windows Developer Program for IoT. What yet isn’t known, is what will be included in that version. It will obviously be a cut down version, but how cut down it will be remains to be seen. Microsoft is looking at the emerging IoT market and the release announcement clearly says that Microsoft sees this developer community as “as an amazing source of innovation for smart, connected devices that represent the very foundation for the next wave of computing.” In other words, don’t expect Microsoft to give away free desktop equivalent version of Windows, so that you can sell your old PC and replace it with a Raspberry Pi. I could be wrong, time will tell. The one major operating system that the RPi 2 doesn’t support is Android. The RPi 1 didn’t support it and at the moment there is no news that the situation will change with the Pi 2. The Raspberry Pi Foundation doesn’t see Android as a priority, and there appears to be some porting difficulties due to some missing drivers from Broadcom. However, this could all change. Like the CuBox and the HummingBoard, the Raspberry Pi 1 and 2 are official platforms for OpenELEC. The Open Embedded Linux Entertainment Center (OpenELEC) is a small Linux distribution that turns the RPi 2 into a Kodi (previously XBMC) media center. Installing it is simple enough via NOOBS or via an image file available on the OpenELEC site. The distro boots quickly and the interface is smooth and responsive. I was able to use it with the Yatse, the XBMC / Kodi Remote app without any problems. The app found the RPi2 straight away and I was able to control Kodi easily. In terms of performance, I tested the power of the RPi 2’s CPU and GPU by playing two HD video files. Both files were encoded in H.264, the first at 4429 kbps, and the second at 15038 kbps. Both were full HD resolution. The good news is that both videos played fine. There was no stuttering or artifacts, and the sound played via the HDMI. The only downside was that the UI was slow when the videos were playing. Bringing up the on-screen-controls to pause, stop, etc., resulted in the mouse jerking and jumping, however the UI still actually worked. In comparison the same files on the CuBox played equally as well, and the UI remained responsive. One of the attractions of the Raspberry Pi (and in fact other SBCs) is the ability to connect hardware (LEDs, motors, servos, sensors etc) directly to the board and control/monitor that hardware within a computer program. The advantage of the Pi over a microcontroller board, like the Arduino Due or the MBED board, is that the GPIO (General Purpose Input/Output ) pins can be controlled from a variety of programming languages, and not just C or C++. In the video review I demonstrate how the Raspberry Pi 2 can be used to flash a LED. Of course, this is a very simple circuit, however it demonstrates the ability of the Raspberry Pi 2 to interact with the outside world. For those interested in getting this working with a RPi 2 then here is the Python program I used: import RPi.GPIO as GPIO import time GPIO.setmode(GPIO.BOARD) GPIO.setup(7, GPIO.OUT) while (1): GPIO.output(7, GPIO.HIGH) time.sleep(1) GPIO.output(7, GPIO.LOW) time.sleep(1) The first part imports the modules needed for working with the GPIO pins and the module needed for the sleep() function. The next bit sets pin 7 as an output, and then the loop just sets pin 7 HIGH (i.e. on) and then LOW (i.e. off) with a one second delay between each action. Since the RPi 2 is quite new then I needed to manually update RPi.GPIO before it would work. However I think the latest version of Raspbian has the updated GPIO module. But for those interested, you can find more help with updating RPi.GPIO on Adafruit’s How to Fix Error Loading RPi.GPIO Python Library On Your Brand New Raspberry Pi 2. There is also a useful primer on building the LED circuit. If you liked the Raspberry Pi 1, then you will love the Raspberry Pi 2. The performance jump from the Pi 1 to the Pi 2 is excellent, and the extra memory really helps the desktop performance. Because the Raspberry Pi Foundation has managed to keep the price the same then there is little to complain about. Android support would be nice, but the Pi has thrived so far without it, so it isn’t a deal breaker by any means. The promise of Windows 10 is intriguing and the current support for Linux is excellent. So, go buy a Raspberry Pi 2, you won’t be disappointed. - Guest - Tjaldid - lynxblaine - Tommy Peel - Tom Riddle - crutchcorn - DePanda - starklombardi - crutchcorn - starklombardi - crutchcorn - garysims - Panos_P_Greece - Peter Kula - andy - candy zhu - Me - garysims - andy
http://www.androidauthority.com/raspberry-pi-2-review-588122/
CC-MAIN-2016-40
refinedweb
1,322
70.13
HEllo, I wish to have a visual pointer on the border of the screen to indicates in which direction the target is. Does someone has already done something like this? Or can give me idea on how to solve this? HEllo, I wish to have a visual pointer on the border of the screen to indicates in which direction the target is. Does someone has already done something like this? Or can give me idea on how to solve this? Do you rather mean something like a pointer in a fixed place that simply points towards a target (like in some driving games) or something that moves along the screen edge (sometimes seen in FPSs)? something seen in spatial fps(?). there is a arrow moving along the screen edge to knows where target is, when it is no more on the screen. (I think in Xwing we got this, and in Eve online, and Black Prophecy of course). Try this: from direct.showbase.ShowBase import ShowBase def isInView(obj, camnode=None, cam=None): if camnode is None: camnode = base.camNode if cam is None: cam = base.cam return camnode.isInView(obj.getPos(cam)) def sign(number): return number / abs(number) if number != 0 else 0 class Follower(object): """Onscreen indicator for relationship of a node relative to the camera. Shows an indicator at the edge of the screen if target is not in view.""" def __init__(self, target): """Arguments: target -- nodepath to follow """ self.target = target self.pointer = loader.loadModel("smiley") self.pointer.setScale(0.1) self.pointer.setColor(1, 0, 0, 1) self.pointer.reparentTo(base.aspect2d) self.pointer.hide() self.task = taskMgr.add(self.track, "onscreen follower") self.active = True def track(self, task): if isInView(self.target): self.pointer.hide() else: self.pointer.show() x, y, z = self.target.getPos(base.cam) # y is unimportant for now if abs(x) > abs(z): z /= abs(x) if x != 0 else 1 x = sign(x) else: x /= abs(z) if z != 0 else 1 z = sign(z) x *= base.getAspectRatio() self.pointer.setPos(x, 1, z) return task.cont def toggle(self): if self.active: self.pointer.hide() taskMgr.remove(self.task) self.active = False else: self.task = taskMgr.add(self.track, "onscreen follower") self.active = True def destroy(self): taskMgr.remove(self.task) self.pointer.removeNode() class App(ShowBase): def __init__(self): ShowBase.__init__(self) target = loader.loadModel("smiley") target.reparentTo(render) target.setY(10) f = Follower(target) base.accept("space", f.toggle) App().run() After starting, move around as you would in pview. If you move the smiley out of the view, a small red sphere will pop up and indicate its direction. Press space to toggle it. Nothing to say. It is exactly what I wish to do. I was not hoping directly the code, but it is perfect. I will adapt to what I am doing. Thanks a lot!
https://discourse.panda3d.org/t/pointer-to-target/11138
CC-MAIN-2022-27
refinedweb
484
62.24
meaning that you’ll be doing this again soon with a much larger dataset. So, you decide to write a script so that it will be easy to do the analysis again. Write a Python script that: grangers_analysis_yourname.csv. This code should use functions to break the code up into manageable pieces. To help you get started here are two functions, one for importing the data from the web, and one for exporting it to a csv file. def get_file_from_web(url): """Imports a comma delimited text file from the web into a list of lists""" webpage = urllib.urlopen(url) datareader = csv.reader(webpage) data = [] for row in datareader: data.append(row) return data def export_to_csv(data, filename): """Export list of lists to comma delimited text file""" outputfile = open(filename, 'wb') datawriter = csv.writer(outputfile) datawriter.writerows(data) outputfile.close()
http://www.programmingforbiologists.org/exercises/Combining-the-basics/
CC-MAIN-2019-09
refinedweb
139
66.03
I am attempting to download a dataset for some analyses study. When I start the stream I get an error “”{“errors”:[{“message”:“SSL is required”,“code”:92}]}". I realise I have to set up a SSL certificate. But as a newbe, the documentation is of no help. Can someone explain in simple terms what AND how I set this up. Many thanks. Creating an SSL connection You don’t need to use a client certificate to connect; you just need to make sure that you’re hitting and not. Isaach, many thanks for quick response. I’m trying to use a python script that gets the Tweets and does a bit of sorting etc. The part of the code relating to the Twitter url is: def oauth_get_tweet(tid, http_method="GET", post_body='', http_headers=None): url = '' + tid consumer = oauth.Consumer(key=CONSUMER_KEY, secret=CONSUMER_SECRET) token = oauth.Token(key=ACCESS_KEY, secret=ACCESS_SECRET) client = oauth.Client(consumer, token) resp, content = client.request( url, method=http_method, body=post_body, headers=http_headers ) Does this make sense? Appreciating your assistance. Sure, it makes sense. What the error is telling you is to use https and not http in your URL. I did change the the url = ‘’ TO https://*etc and I got the same error again, so I thought there maybe something else I missed. I re-ran the script this morning with the suggested change and I’m now receiving error free tweets. Thank you for your help.
https://twittercommunity.com/t/creating-an-ssl-connection/29218
CC-MAIN-2019-09
refinedweb
241
76.93
One of the things I showed at the WPUG meeting was accessing XNA APIs from Silverlight. In some cases because it’s the only way to achieve what you need (eg access to the microphone) and in others because it makes your life easier (eg gestures). In this post I’ll cover microphone access from Silverlight. Peter Foot’s blog entry helped me a lot in getting this up and running. Access to the microphone is provided through the Microsoft.Xna.Framework.Audio namespace in the Microsoft.Xna.Framework assembly. You’ll need to add a reference to this XNA assembly from your WP7 Silverlight app if you need microphone access. The class we’re interested in is Microphone. This class provides access to all the available microphones on the system and exposes a static property – Default – that returns the Microphone instance for the current default recording device. Once we Start the microphone, it begins buffering data and, at some point, will fire the BufferReady event. At this point it’s our responsibility to empty the buffer. This goes on until we want to stop recording. To do that we simply call Stop(). Once we have this data we can use the SoundEffect class to play it back and even mess with it a little. I’ve wired the pitch parameter up to a slider in my app so I can sound like either Pinky and Perky or Don LaFontaine depending on my mood. Here’s video of the app in action: As you can see, the UI is ultra-simple: Start and Stop are just buttons to start and stop recording (for ease, stop also initiates playback). The pitch slider is on the right. using System; using System.IO; using System.Windows; using Microsoft.Phone.Controls; using Microsoft.Xna.Framework.Audio; namespace HelloSoundWorld { public partial class RecordSound : PhoneApplicationPage { MemoryStream ms; Microphone mic = Microphone.Default; // Wire up an event handler so we can empty the buffer when full // Crank up the volume to max public RecordSound() { InitializeComponent(); mic.BufferReady += Default_BufferReady; SoundEffect.MasterVolume = 1.0f; } // When the buffer's ready we need to empty it // We'll copy to a MemoryStream // We could push into IsolatedStorage etc void Default_BufferReady(object sender, EventArgs e) { byte[] buffer = new byte[1024]; int bytesRead = 0; while ((bytesRead = mic.GetData(buffer, 0, buffer.Length)) > 0) ms.Write(buffer, 0, bytesRead); } // The user wants to start recording. If we've already made // a recording, close that MemoryStream and create a new one. // Start recording on the default device. private void start_Click(object sender, RoutedEventArgs e) { if (ms != null) ms.Close(); ms = new MemoryStream(); mic.Start(); } // The user wants to stop recording. Checks the microphone // is stopped. Reset the MemoryStream position. // Play back the recording. Pitch is based on slider value private void stop_Click(object sender, RoutedEventArgs e) { if (mic.State != MicrophoneState.Stopped) mic.Stop(); ms.Position = 0; SoundEffect se = new SoundEffect( ms.ToArray(), mic.SampleRate, AudioChannels.Mono); se.Play(1.0f, (float)slider1.Value, 0.0f); } } } Run this and you’ll hit an InvalidOperationException. There’s something else we need to do: We need to add some boilerplate code to make sure the XNA Framework is happy. You can find more information here. Add a new class (in your App.xaml.cs will do):Interval; } void IApplicationService.StartService(ApplicationServiceContext context) { this.frameworkDispatcherTimer.Start(); } void IApplicationService.StopService() { this.frameworkDispatcherTimer.Stop(); } void frameworkDispatcherTimer_Tick(object sender, EventArgs e) { FrameworkDispatcher.Update(); } } And hook this up in the constructor for the App (Application) class: this.ApplicationLifetimeObjects.Add( new XNAAsyncDispatcher(TimeSpan.FromMilliseconds(50))); Which adds it as a service to the Silverlight application. XNA should now be happy. Record away! I would be great if the microphone minimum specs were specified for at least, the sample rate. The current sample rate in the emulator is 16000Hz, which is far from what's standard on other phone (48000Hz on iPhone 3G/+/4G). This spec is important when you are relying on some spectral analysis of the audio signal (or quality audio recording). Having no minimum spec requires to handle various different sample rate, meaning also that the microphone experience is not consistent across different hardware... Do you know what's the status concerning this - small - issue?
http://blogs.msdn.com/b/mikeormond/archive/2010/08/27/xna-from-silverlight-on-windows-phone-7-the-microphone.aspx
CC-MAIN-2015-32
refinedweb
702
59.9
iStreamSource Struct Reference This interface represents a stream source. More... #include <imap/streamsource.h> Detailed Description This interface represents a stream source. This can be implemented by the application to implement faster loading of data. Basically the idea is to have some kind of 'id' that represents a buffer for a mesh. This implementation of this interface can try to load the buffer given that id. Definition at line 51 of file streamsource.h. Member Function Documentation Load a buffer given an id. This will fire the callback as soon as the buffer is ready. Note that some implementations that don't support asynchronious loading may call the callback immediatelly from within this function. - Returns: - false if we can't find the buffer (early error). The error should be placed on the reporter. Save a buffer with some id. Returns false if the buffer couldn't be saved for some reason. The error should be reported on the reporter by this function. The documentation for this struct was generated from the following file: - imap/streamsource.h Generated for Crystal Space 1.4.1 by doxygen 1.7.1
http://www.crystalspace3d.org/docs/online/api-1.4.1/structiStreamSource.html
CC-MAIN-2016-18
refinedweb
188
60.92
How to tell if mysensors library starts up successfully? I would like to turn on a LED if a node cannot find the gateway (e.g. if the distance is too large) when the node is powered up. How can that be done? I understand that the send command returns a Boolean that tells whether a message reached the gateway. However, if the node cannot find the gateway on power up the sketch never proceeds beyond #include <MySensors.h>. Is it possible to derive some status from this command? you already have 3 status LED, when red led flashes there has been a transmission error. The node is a sensebender micro. I would like to use the built-in LED. I never made a node with the 3 status LEDs. It might be worthwhile for me to look more into that. Unless you want to go the rfm69 way as they are a bit more advanced and you can read signal strength, power output and also the new driver has adaptive TX power. - Boots33 Hero Member last edited by @arraWX The line of code to force the node to move on if no gateway is found is //set how long to wait for transport ready in milliseconds #define MY_TRANSPORT_WAIT_READY_MS 3000 some posts that may be of interest Synchronising Light switch Booting sensors without Gateway connection? - Lars Deutsch last edited by
https://forum.mysensors.org/topic/7858/how-to-tell-if-mysensors-library-starts-up-successfully/7?lang=en-US
CC-MAIN-2021-49
refinedweb
230
81.53
I have an object with a single material that has 6 textures assigned to it. I want to randomize the active index, while deactivating the other 5. Is this possible with bpy? Is it possible to randomize the active texture index of a given material with python. here a snippet but maybe this is for BGE or you want do in multiple materials… not very useful right now import bpy,random mat = bpy.data.materials['Material'] tex = [1,0,0,0,0,0] random.shuffle(tex) for i,b in enumerate(tex): mat.texture_slots[i].use = b Thanks, this is all I needed.
https://blenderartists.org/t/is-it-possible-to-randomize-the-active-texture-index-of-a-given-material-with-python/646845
CC-MAIN-2020-50
refinedweb
103
51.95
Wow, I remember wearing out a few of the K&R books, and for the old people like me you know what I mean. Although I was much more of a BASIC programmer, as much of my work early on was in test systems and BASIC was used a lot on those machines. I did do my time with the C Language. Funny these days, 70 seems young, especially at my age. Dennis Ritchie made a big impact on the way computers were designed, and certainly increased the employment of the semi-colon. I never met him, but his work is one of the books that is never thrown away, or at least not till the pages were well worn, and another took it’s place. K&R is still on my shelf, and I am able to easily go over to it and open it to page 6, and see some of the first code that many programmers write. Finally: A simple good bye: #include <stdio.h> main() { printf(“goodbye, dmr\n”); } #include <stdio.h> main() { printf(“goodbye, dmr\n”); } Legal Note: Restrictions:
http://blogs.msdn.com/b/devschool/archive/2011/10/17/dennis-ritchie-creator-of-c-programming-language-passes-away.aspx
CC-MAIN-2015-32
refinedweb
184
79.19
#include <CGAL/Barycentric_coordinates_2/Mean_value_coordinates_2.h> 2D mean value coordinates. This class implements 2D mean value coordinates ( [5], [2], [3] ), which can be computed at any point in the plane. Mean value coordinates are well-defined everywhere in the plane and are non-negative in the kernel of a star-shaped polygon. The coordinates are computed analytically. See more details in the user manual here. initializes all internal data structures. This class implements the behavior of mean value coordinates for 2D query points. computes 2D mean value coordinates. This function fills c_begin with 2D mean value mean value weights. This function fills weights with 2D mean value weights computed at the query point with respect to the vertices of the input polygon. If query belongs to the polygon boundary, the returned weights are normalized. The number of returned weights equals to the number of polygon vertices.
https://doc.cgal.org/latest/Barycentric_coordinates_2/classCGAL_1_1Barycentric__coordinates_1_1Mean__value__coordinates__2.html
CC-MAIN-2022-27
refinedweb
145
51.24
=0 Java count words from file Java count words from file In this section, you will learn how to determine the number of words present in the file. Explanation: Java has provides several... by using the StringTokenizer class, we can easily count the number of words count frequency of words in the string Java count frequency of words in the string. In this tutorial, you will learn... of words in a string. In the given example, we have accepted a sentence...,Integer> the words with frequency. Example: import java.util.*; public index Fortran Tutorials Java Tutorials Java Applet Tutorials Java Swing and AWT Tutorials JavaBeans Tutorials Import java IO - Java Beginners Import java IO for example i know java IO is for input and output. I am using Netbeans5.5.1. How can i see all the classes related to java IO for example; stream reader, buffer reader java including index in java regular expression including index in java regular expression Hi, I am using java regular expression to merge using underscore consecutive capatalized words e.g., "New York" (after merging "New_York") or words that has accented characters Java file line count Java file line count In the section, you will learn how to count the number of lines from the given file. Description of code: Java has provide various... of java.util.* package. In spite of using any IO stream, we have used Scanner Java reverse words in a string using only loops Java reverse words in a string using only loops In this tutorial, you will learn how to reverse words in a string without using any inbuilt methods like... are allowed. Example: public class ReverseWordsUsingLoops { public Ask java count Ask java count Good morning, I have a case where there are tables... the results: for example: | code book | name of book | sum | | b001 | Java 1 | 10 | | b002 | beginner java | 5 Convert Number To Words Convert Number To Words In this example, We are going to convert number to words. Code...: Output of this program. C:\corejava>java NumberToWords java-io - Java Beginners java-io Hi Deepak; down core java io using class in myn...:// Thanks example | Java Programming | Java Beginners Examples | Applet Tutorials... applications, mobile applications, batch processing applications. Java is used... | Linux Tutorial | Java Script Tutorial | PHP Tutorial | Java IO File - Java Beginners IO File Write a java program which will read an input file & will produce an output file which will extract errors & warnings. It shall exclude the standard errors & warnings. The standard errors & warnings you can find out index of javaprogram index of javaprogram what is the step of learning java. i am not asking syllabus am i am asking the step of program to teach a pesonal student. To learn java, please visit the following link: Java Tutorial Count instances of each word Count instances of each word I am working on a Java Project... of the words preceded by the occurrence count. My program compiles and runs, but it displays duplicate listings of the words and they are not in the correct order. io - Java Beginners . Thanks Amardeep Java program - convert words into numbers? Java program - convert words into numbers? convert words into numbers? had no answer sir Word Count Word Count This example counts the number of occurrences of a specific word.... To count it we are using countMatches() method Simple IO Application - Java Beginners Simple IO Application Hi, please help me Write a simple Java application that prompts the user for their first name and then their last name. The application should then respond with 'Hello first & last name, what Java Count Vowels Java Count Vowels In this program you will learn how to count vowels in a String. Here you... a variable- count = 0. Now, we have applied a loop here which will go up Java IO InputStream Example Java IO InputStream Example In this section we will discuss about the InputStream in Java. An abstract class InputStream is a base class of all the byte... Example : An example is being given here which demonstrates how How to index a given paragraph in alphabetical order How to index a given paragraph in alphabetical order Write a java program to index a given paragraph. Paragraph should be obtained during runtime. The output should be the list of all the available indices. For example: Input IO FilterWriter Java IO FilterWriter In this example we will discuss about the FilterWriter...(); bw.write("Java IO FilterWriter Example"); System.out.println("Data is written... an example is being given which demonstrates about how to use Java FilterWriter Java count occurrence of number from array Java count occurrence of number from array Here we have created an example that will count the occurrence of numbers and find the number which has... element : arr) { int index = list1.indexOf(element); if (index != -1 Count Active Thread in JAVA Count Active Thread in JAVA In this tutorial, we are using activeCount() method of thread to count the current active threads. Thread activeCount... of active threads in the current thread group. Example : class ThreadCount Index Out of Bound Exception :\saurabh>java Example Valid indexes are 0, 1,2,3,4,5,6 or 7 End... Index Out of Bound Exception Index Out of Bound Exception are the Unchecked Exception Count the character in java Count the character in java Write a java program to count.... Count characters by implementing thread import java.util.*; class CountCharacters { public static void count(final String str){ Runnable Java IO SequenceInputStream Example Java IO SequenceInputStream Example In this tutorial we will learn about...; started form the offset 'off'. Example : An example... or concatenate the contents of two files. In this example I have created two text count the repeated character in one string count the repeated character in one string to find the how character occurrence in one String (for example the string is -java means there is 2...;counter<third.length;counter++){ char ch= third[counter]; int count JavaScript Count Words JavaScript Count Words In this section, you will learn how to count words... to enter words of their choice. The javascript takes the value of the text area and using the regular expression, determine the number of words and finally display display co-occurrence words in a file display co-occurrence words in a file how to write java program for counting co occurred words in the file java program to convert decimal in words java program to convert decimal in words write a java program to convert a decimal no. in words.Ex: give input as 12 output must be twelve Java FileOutputStream Example Java FileOutputStream Example In this section we will discuss about the Java IO FileOutputStream. FileOutputStream is a class of java.io package which...) throws IOException Example : Here an example is being given which Difference between Java IO Class - Java Beginners Difference between Java IO Class What is the difference in function between Two set of Stream class as mention below- 1)FileInputStream... information. Thanks Count Rows - JSP-Servlet Count Rows How to count rows in Java. Thanks Hibernate count() Function . for example : count( [ distinct | all ] object | object.property) count(*); also...Hibernate count() Function In this tutorial you will learn how to use the HQL count() function count() function counts the maximum number of rows letter count problem - Java Beginners letter count problem i have a problem in my java coding to count two characters of a string. eg to count the letter "ou" in the string "How do you feel today?". The answer should be 5. but i still have error compiling Java IO OutputStreamWriter "); bw.newLine(); bw.write("Java IO OutputStreamWriter Example...Java IO OutputStreamWriter In this section we will learn about the Java..."); osw = new OutputStreamWriter(os); osw.write("Java IO Java Convert Number to Words Java Convert Number to Words In this section, you will learn how to convert a numeric value into its equivalent words. The words of the numbers(1-100... to the usage. Through the given code, you can convert the large digit numbers into StringWriter Java IO StringWriter In this section we will discussed about the StringWriter... in the Java program. In this example I have created a Java class named...[]) { String str = "Java StringWriter Example"; try OGNL Index is a expression language. It is used for getting and setting the properties of java object... properties of java object. It has own syntax, which is very simple. It make... - For example, array[0], which returns first element of current core java ,io operation,calling methods using switch cases core java ,io operation,calling methods using switch cases How to create a dictionary program,providing user inputs using io operations with switch cases and providing different options for searching,editing,storing meanings Character count by while loop Character count by while loop Write the program to count the number...]; int count=0; for ( int i=0; i<third.length; i++){ if (ch==third[i]) count++; } boolean flag=false; for(int j=counter-1;j>=0;j--){ if(ch==third[j Java IO Writer Java IO Writer In this section we will discuss about the Writer class in Java... len) throws IOException Example To demonstrate how to use Writer to write into the output stream I am giving a simple example. In this example I have count occourance no of time no between 0 to 9 given by user in java example that accepts 9 integers from the user and count the occurrence of each...count occourance no of time no between 0 to 9 given by user in java import java.io.*; class count_no { public static void main(String
http://www.roseindia.net/tutorialhelp/comment/41180
CC-MAIN-2014-41
refinedweb
1,608
54.52
By Beyang Liu on February 27, 2017 Update: Part 2 of this series is now published. Sourcegraph lets you view any line of code in your web browser with all the navigation features of an IDE and more. That includes both classic abilities — like jump-to-definition, find-references, tooltips, and symbol search — and novel superpowers like cross-repository jump-to-definition and global usage examples. The sum of these parts is a quick, frictionless way to discuss or make sense of code. Underneath the hood is a complex system that parses and analyzes source code on the fly and provides the underlying code navigation capabilities to the UI. These capabilities we collectively call “Code Intelligence.” Code Intelligence is not a marketing term. We use it to mean something very specific: Code Intelligence is the set of auto-navigation and auto-generation primitives that use a semantic understanding of code to enable a human programmer to efficiently read and write source code. Let’s break that down: In other words, Code Intelligence is shorthand for “jump-to-def + find-references + symbol-search + tooltips + autocomplete + more.” Code Intelligence is what makes the experience of using Sourcegraph magical. In a series of posts starting with this one, I’m going to explain how the magic works, starting with how the technical challenges of Code Intelligence have led us to adopt the Language Server Protocol as a key layer in our architecture. We think the Language Server Protocol is the future of Code Intelligence — both on Sourcegraph and inside every editor, and we believe that every plugin and IDE author should check it out. Pick at random a repository, a commit, and a file in one of the languages we support. (If you can’t think of one, this one will do.) Visit that file in Sourcegraph and in a matter of seconds, you will be able to jump to where things are defined, find everywhere they are used, jump across dependency boundaries, and view usage examples drawn from other projects. Instant Code Intelligence on Sourcegraph IDEs like Eclipse, IntelliJ, and Visual Studio are massively complex tools that often only support Code Intelligence well on a few languages. Sourcegraph aims to support every programming language. In order to make this gigantic undertaking tractable, we made a key architectural decision early on to define a clear protocol between the language analyzer and the code viewing frontend.. We’ve experimented with different protocols to satisfy our needs along the way. We created a protocol and library called srclib, designed for batch offline language analysis, which powered most of our Code Intelligence in the early days. But over time, as our users and customers began to rely on Sourcegraph more and more, they demanded real-time Code Intelligence in more places (code review, in their editor, etc.) across a larger number of repositories and revisions. To address their needs, we needed a protocol that allowed for real-time analysis. We found what we were looking for in the Language Server Protocol. The Language Server Protocol (LSP) is an open protocol originally created and open-sourced by Microsoft that defines a set of standard Code Intelligence capabilities for editor plugins. Here is a subset that will be familiar to any professional programmer: An example of the protocol in use, from the official README. Sourcegraph is not an editor, but we view IDE-level Code Intelligence as table stakes for any tool for viewing and making sense of code. Moreover, we want to bring that level of Code Intelligence to every language in every editor. But a multitude of IDEs and editor plugins already exist—why the need for new plugins, much less a new standard? We originally wrote about the M x N problem back in 2013, when we created srclib, our open-source offline language analysis library. It is the problem of having M editors (Emacs, Vim, Sublime, Visual Studio Code, Atom, etc.) and N programming languages (JavaScript, Java, Go, Python, Rust, TypeScript, etc.) Each of the editors has editing capabilities (buffer management, file navigation, keyboard shortcuts, look-and-feel, etc.) that should be orthogonal to choice of language. In the perfect world, you should be able to stick with your editor of choice no matter what language you work in. But that’s not the world we live in. Why not? Because to make every editor support Code Intelligence for every language, you’d need to build M x N editor plugins, each of which needs to integrate with the plugin API of a specific editor and understand the compiler-level semantics of the language. M x N LSP defines a communication protocol that sits between editor plugins and the underlying analysis libraries. You build one language server for each language and one plugin for each editor. Each editor plugin that speaks LSP will now have support for every single language server. You’ve reduced the M x N problem to an M + N problem. M + N This is, of course, all good in theory, but how many languages are actually supported by an LSP-based language server today? A lot. Another key feature of LSP is the lack of any real data model for code. The protocol has no notion of namespaces, class hierarchies, definitions, or references. How does one represent “jump to definition” with no notion of a definition? In the case of LSP, the input is merely a filename, line number, and column—the location of the reference—and the output is… a filename, line number, and column—the location of the definition. LSP does not attempt to model the semantic relationships in code at all. But if we are trying to build Code Intelligence, shouldn’t our protocol be aware of at least the basic semantic relationships in code? The answer is no for two reasons: None of this precludes building a semantic data model on top of LSP. In fact, at Sourcegraph, we’ve done exactly that for some of our more advanced features. But all of that is possible because we build on top of a layer that does not impose oversimplifying assumptions about the code. The last important feature of LSP that I’ll touch upon in this post is extensibility. The creators of LSP foresaw that in the future, people would desire new functionality out. I’ll dive into specifics in a later post, but for now, I’ll just note that it is easy to add new functionality to LSP without breaking backwards compatibility. Indeed, there is a vibrant, open-source community that continues to contribute changes to the protocol while maintaining backwards compatibility with existing LSP plugins. History has taught us that when it comes to creating technology, it’s overwhelmingly the technologies that save developers time that win. Sourcegraph’s mission revolves around saving developers time. To that end, we’ve invested deeply in Code Intelligence, because we believe it is a huge multiplier of developer productivity and, as a corollary, a multiplier of the overall rate of technological progress. We believe that LSP will bring a new wave of Code-Intelligence-powered editor plugins, IDEs, and developer tools. We hope other companies, organizations, and individuals building tools for programmers will join us in adopting and promoting this standard for the benefit of programmers everywhere. Hopefully, I’ve given you a better idea of the technical problems we’re solving at Sourcegraph and why they matter to the greater software community. In subsequent posts, I’ll dive into extensions we’ve made to LSP to enable novel Code Intelligence abilities (cross-dependency jump-to-def and global usage examples), and I’ll describe implementation details of language servers that we think will be broadly useful and interesting. If you are like us and find this interesting, start contributing and sign up for Sourcegraph. Making Code Intelligence “just work”
https://about.sourcegraph.com/blog/part-1-how-sourcegraph-scales-with-the-language-server-protocol/
CC-MAIN-2018-47
refinedweb
1,314
51.48
Hey All I want to create a simple dice golf game in C# The basic idea is: A player rolls 3 random dice until they roll a double The number of rolls taken until a double is rolled is calculated and displayed on the console This is run 18 times until all 18 holes are completed Final score is displayed on the console I started some of the code so far but i'm not really sure where i should be heading or what i should be doing.. (The code is a bit of a mess and doesn't really anything, ive just been noting down things i might need to include as i go along and havent organised them yet) I hope you can help me :) using System; namespace dicegame1 { public class RandDice { public static void Main() { Random ran = new Random(); int player1; int player2; int player1RoundScore; int player2RoundScore; bool player1RoundScore = false; bool player2RoundScore = false; int roundNumber; } // Random dice rolling // { int diceOne = ran.Next(1,7); int diceTwo = ran.Next(1,7); int diceThree = ran.Next(1,7); Console.write(ran.Next(1,7) + " "); Console.write(ran.Next(1,7) + " "); Console.write(ran.Next(1,7)); } { if (diceOne == diceTwo || diceTwo == diceThree || diceOne == diceThree) } } } }
https://www.daniweb.com/programming/software-development/threads/190551/c-dice-game
CC-MAIN-2017-17
refinedweb
204
52.26
use a unique hidden form variable to detect and prevent duplicate submission of the same form, when the consequences warrant. It works like this: # Generate the form: self.write(''' <form method="post" ...> %s ... </form> ''' % (self.postId(), ...)) # Before processing the POSTed form data: if self.isReposted(): ... duplicate submission: complain and do not process And here's the code in SitePage to implement the above: def postId(self): """ Add a unique posting identifier to the form """ id = '%s%s' % (time.time(), random.random()) return '<input type=hidden' % id def isReposted(self): """ Return true if we've already processed this submission """ id = self.request().field('__form_post_id', '') if not id: # odd... return 0 sess = self.session() did = sess.value('__forms_seen', {}) if id in did: return 1 did[id] = 1 sess.setValue('__forms_seen', did) return 0 Aaron Held wrote: > Costas Malamas wrote: >> Also, in a related thought, have you tried redirecting self.writeln() >> to stuff HTML into a cache to speed up response time? does that work? >> it's the next step in my optimization... > > > each URL is a unique page - look for the rendered page in the cache. If > not, render it from a cheetah template [ ... ] I'm using the similar approach, though a bit more sophisticated. Again, the template files are interpreted with Cheetah. class CheetahTemplateFileCache: """ Caches Cheetah's template files. Note, that classes instead of instances are stored in a cache. This is required because instances are modified by Cheetah and are not safe for re-use and thus must be created fresh on each usage. """ srcFileTag = re.compile('Source file: [^\n]+') def __init__(self): self.cache = {} self.lock = threading.Lock() def genTemplate(self, fileName): t = Template(file=fileName) temp = {} src = t.generatedModuleCode() src = self.srcFileTag.sub('XXX', src) # workaround Cheetah bug exec src in temp tklass = temp['GenTemplate'] if not tklass: raise Error("Can't construct template for %s" % fileName) return tklass def addToCache(self, fileName): tklass = self.genTemplate(fileName) self.cache[fileName] = (tklass, time.time()) return tklass def get(self, fileName): "Returns a Cheetah's Template for given file name." self.lock.acquire() try: tklass = None if not self.cache.has_key(fileName): tklass = self.addToCache(fileName) else: tklass, mtime = self.cache[fileName] newmtime = os.path.getmtime(fileName) if newmtime > mtime: tklass = self.addToCache(fileName) return tklass() finally: self.lock.release() > I have a problem. > self.application().forward(self.transaction(), servletName) > works. > self.application().forward(self.transaction(), servletName) The problem gone. I only needed extra URL path in called servlet. self.application().forward(self.transaction(), servletName + self.transaction().request().extraURLPath()) Thanx to Webware/WebKit/Testing examples. -- JZ I have a problem. servletName = 'myServlet.py' self.application().forward(self.transaction(), servletName) works. servletName = 'myServlet.py?myParam=maValue' self.application().forward(self.transaction(), servletName) DOES NOT WORK :(( -- JZ Hancock, David (DHANCOCK) wrote: > What database are you using? If it's one that supports "auto" columns or > "sequences" (automatically incremented counters, specifically for primary > keys), consider using such a function when you insert. If your database > doesn't support that, then I'm not sure what you can do. Well if there is support for transactions in the database you could maintain a table of sequence numbers and increment that as part of the transaction used for the insert. > What I just described is just good database practice. There's probably > another way to trap the case of submitting the same form twice. Could you put mutex protected flags in the session to indicate whether or not the form had been submitted? Nick Hallo, Hancock, David (DHANCOCK) hat gesagt: // Hancock, David (DHANCOCK) wrote: > What database are you using? If it's one that supports "auto" columns or > "sequences" (automatically incremented counters, specifically for primary > keys), consider using such a function when you insert. I In this specific problem this probably will not help, because only one insert is wanted. Using autoincrement keys you'd create two valid database rows, when the goal was to just get one. ciao -- Frank Barknecht _ ______footils.org__ I agree to receive quotes, newsletters and other information from sourceforge.net and its partners regarding IT services and products. I understand that I can withdraw my consent at any time. Please refer to our Privacy Policy or Contact Us for more details
https://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200312&viewday=11
CC-MAIN-2017-13
refinedweb
702
52.36
09 February 2011 10:59 [Source: ICIS news] LONDON (ICIS)--INEOS has declared force majeure on supplies of high density polyethylene (HDPE) from its facility at ?xml:namespace> INEOS declared force majeure on the evening of 8 February, the source said. The company already has a declaration of force majeure in place on HDPE supplies from its 240,000 tonne/year plant at The exact status of the Lillo HDPE plant was not known on Wednesday, but INEOS had been experiencing some major production issues throughout January. The plant had been running at reduced rates since a power outage brought both of its lines down during the first week of January. The 180,000 tonne/year HDPE line restarted after being shut down for five days. INEOS restarted the 240,000 tonne/year line three weeks later. Both production lines at Lillo were due to be taken off line for two weeks of planned maintenance work in March, the source said. Buyers have been affected by the production problems at Lavera and Lillo. “We have been struggling with deliveries this month,” said a large buyer. HDPE producers had been targeting price increases following the €25/tonne increase ($34/tonne) in the February ethylene contract price, and increases were going through. HDPE margins had been very poor in late 2010, and sometimes HDPE prices were barely above the ethylene contract level, but margins were now improving. However, HDPE was still widely regarded in the market as the PE grade with the worst margins. Net HDPE blowmoulding prices were around €1,200/tonne FD (free delivered) NWE (northwest HDPE is used in the food packaging and household goods sectors. ($1 = €0.73) For more on polyethylene
http://www.icis.com/Articles/2011/02/09/9433501/ineos-declares-force-majeure-on-hdpe-supplies-from-lillo.html
CC-MAIN-2014-35
refinedweb
284
60.35
Coding4Fun - Never miss a message again - Posted: Feb 28, 2010 at 5:09 PM - 2,229 Views - 9 Comments Something went wrong getting user information from Channel 9 Something went wrong getting user information from MSDN Something went wrong getting the Visual Studio Achievements. In this article, you'll learn how to create a screensaver with great visual effects using WPF. Your screensaver will be fun to watch and practical, and you won't have to resort to low-level graphics code. I've created screensavers before, but I haven't done much WPF work. My last screensaver had a Polaroid-like snapshot look and worked well, but relied on GDI+ for primitive graphics routines. I now know that WPF is the only way to go! For this project, you'll need Visual C#/Basic Express Edition 2008. Expression Blend can come in handy too, if you have it. There's a trial version, but it's a bit pricey for hobbyists to buy otherwise. My original concept was to create a "virtual message center" where visitors could leave audio and video recordings or written messages that would appear as Post It-style notes with slick graphics effects on the screen. Eventually, though, I gave up on the media recording and replaced the sticky notes with a more traditional stack-based ListBox. The project's main layout includes a message control for leaving and reading messages, an "I'm away from my desk" region, and a button that indicates when you're back. Photos from your pictures folders spin in the background (fun to do in XAML, but painful with GDI+). I tackled the away message first. Depending on what fields you fill out, it lets people know that you're not there, where you are, and when to expect you back. The time clock counts down, rather than just showing a static time. If you don't want to be too revealing, you don't have to fill it all the entire form. All changeable fields are set via databinding. With the exception of the countdown timer, these are static (one-time) bindings. I have the countdown field bound to a TimeSpan property, ReturnTimeRemaining. All of the properties used for databinding are contained in the UserInfo project which implements INotifyPropertyChanged. Objects that implement this interface can raise an event on property changes, and since WPF automatically subscribes to these events, the UI stays updated. XAML <StackPanel x: <StackPanel Orientation="Horizontal" HorizontalAlignment="Center"> <TextBlock FontSize="28" VerticalAlignment="Top" Text="{Binding UserName}" HorizontalAlignment="Center" /> <TextBlock FontSize="28" VerticalAlignment="Top" Text=" is away right now" HorizontalAlignment="Center" /> </StackPanel> <TextBlock FontSize="28" Text="{Binding Message}" TextAlignment="Center" VerticalAlignment="Top" HorizontalAlignment="Center" /> <StackPanel Orientation="Horizontal" HorizontalAlignment="Center"> <TextBlock FontSize="20" VerticalAlignment="Top" Text="Expected return: " HorizontalAlignment="Center" /> <TextBlock x: </StackPanel> </StackPanel> The message box is a simple interface where you can enter your name and a message. You can also click a checkbox to make your message private or not. Displayed messages are shown in the ListBox. The name and message textboxes are databound to an AwayMessage object. Since WPF/XAML supports declarative binding, you can set an association between the Text properties of the controls and the properties of a data object. The object being bound is called the DataContext, and it's independent of the bindings themselves. If you assign a new object reference to the context, the fields instantly update to their referenced properties. The ListBox is databound to the collection of AwayMessage objects. Since the AwayMessageCollection class inherits from ObservableCollection, the ListBox is made aware of additions and removals from the collection through events. No explicit code is required to update the UI to match the objects. In fact, since the AwayMessasge class implements INotifyPropertyChanged, even changes within the collection items themselves will be reflected in the list. It's powerful stuff! Visual C# public class AwayMessageCollection : ObservableCollection<AwayMessage> Visual Basic Public Class AwayMessageCollection Inherits ObservableCollection(Of AwayMessage) Effects that make the photos spin and grow, and display the pushpin are all accomplished declaratively using XAML timelines. This enables you to define the changes, such as a rotation from 0 - 360, as well as the time it should take to complete. WPF then handles those changes, taking into account graphics performance, CPU speed, etc. to make it easy for you. This happens on a background thread and will be as smooth as hardware allows without any frame-by-frame work in the main code. Even making the pushpin appear is based on setting the Visibility property as part of that timeline. To see this timeline, look toward the top of the XAML file, under <Window.Resources> for PhotoEffectsStoryboard. This creates a series of DoubleAnimation elements. A "DoubleAnimation" refers to the fact that the property type it animates is a double datatype. Each element specifies the target element name, property, the beginning and ending values, and the duration the change should take. If you omit the From property, you create an animation called a "handoff animation." This starts the animation from whatever the current value is. Handoff animations can be really useful in interactive applications where elements can be in different starting points at different times, but in this controlled application they don't make much sense. XAML <Storyboard x: <DoubleAnimation Storyboard. <DoubleAnimation Storyboard. <DoubleAnimation Storyboard. <DoubleAnimation Storyboard. <DoubleAnimation Storyboard. <DoubleAnimation Storyboard. <ObjectAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard. <DiscreteObjectKeyFrame KeyTime="00:00:00" Value="{x:Static Visibility.Hidden}"/> <DiscreteObjectKeyFrame KeyTime="00:00:02.5000000" Value="{x:Static Visibility.Visible}"/> <DiscreteObjectKeyFrame KeyTime="00:00:07.5000000" Value="{x:Static Visibility.Hidden}"/> </ObjectAnimationUsingKeyFrames> </Storyboard> Notice that the TargetName properties are set to named ScaleTransform elements, not to control elements themselves. You can animate any element with a name, so in this example, the Border object has a RenderTransform set to an instance of a TransformGroup containing a ScaleTransform and a RotateTransform object (peek at the XAML if that sounds confusing!). Naming those transforms makes it possible to animate them. XAML <Border.RenderTransform> <TransformGroup> <ScaleTransform x: <RotateTransform x: </TransformGroup> </Border.RenderTransform> After creating this, I decided I should have gone with the actual yellow sticky note look. You could achieve this by creating a user control that presents a name, note, and other fields in a colored square. Then the screen could be covered with a large ListBox with its item display template set to this custom control. Finally, you would need to modify the layout template of the ListBox to allow the notes to arrange in an unstructured way to simulate a bulletin board. Remember that screensavers must be named with a "scr" file extension, not "exe" so take the project output and rename it first. Installing a screensaver is as simple as copying the scr file and associated DLL's to your Windows folder, then choosing it in the system Screen Saver Settings dialog. You could also set your away information by opening the scr file with the /c argument, but this wouldn't be very convenient! Note that the /c argument is the standard way for a screensaver to open in configuration mode, and is what happens when you click Settings in the screen saver system applet. WPF is a major shift, but it allows a dynamic look that was nearly impossible to achieve with GDI+ for anyone but seasoned experts. Though working with the flexibility of XAML, storyboards, and nested controls can be daunting, it's well worth the time. Look at lots of samples, play around with it, and download Blend for easier XAML editing. You'll have shiny, rounded, mirrored controls in no time! did not work. Also, how can I start this without going into the Desktop properties everytime? Yeah, I tried again last night. The SCR file extension is strange. If you use the /c switch from a command prompt it opens fine, but if you create a shortcut and specify the switch then it ignores it (Windows seems to force a /S when you open it). Best option would be to make a copy and rename it as .EXE and then use a shortcut with the /c option. Sorry it's such a pain! You shouldn't need to open it from Desktop properties once you've set it as your system screensaver. From there it should just start when the system is idle. There seems to be weirdness with the SCR file extension. Make a copy of the executable using the EXE extension, then create a shortcut that specifies the "/c" parameter in the target. This will bring up the configuration dialog. As for avoid Desktop properties every time, once you've copied the SCR file and DLL's to your system folder, you can just select it as your default screensaver -- it's up to Windows to launch it based on your settings then. Thanks! I'll give this to all the employees in my company. how do you view the message that people put on there this is very nice man, keep up the good wok @Will: When you exit the screensaver the messages will be automatically showing in the configuration window. You can also start the screensaver configuration window automatically to see. I had considered sending SMS or emails when messages are left but never got to it. Maybe someone else wants to give it a try! @QuintonS you mean you'd like to do a coding4fun article or run the application? How do I post something like this? Remove this comment Remove this threadclose
http://channel9.msdn.com/coding4fun/articles/Coding4Fun-Never-miss-a-message-again
CC-MAIN-2013-48
refinedweb
1,584
55.13
YouTube API, Version 3 on Rails A while ago, I penned an article on using YouTube on Rails, explaining the basics of interacting with the YouTube API. The previous article covered how to fetch video information and use the YouTube IFrame API to manipulate video player. Later, another article, Uploading Videos to YouTube with Rails, was released showing how to create an app that allows users to upload videos directly to YouTube. The Youtube_it gem was used for both demos. Both of my posts garnered quite an bit of comments, which I appreciate. youtube_it is a great gem, however, it employs version 2 of the YouTube API, which is now officially deprecated and will no longer be supported after April 20th, 2015. Ouch. Fortunately, Claudiofullscreen saved the day by creating a new gem called yt (now that’s a short name) that utilizes version 3 of the YouTube API. In this article, we are going to build an app similar to the one that was introduced in the “YouTube on Rails” and “Uploading Videos to YouTube with Rails” posts, but make it work with version 3 of the YouTube API. The working demo is available at sitepoint-ytv3.herokuapp.com. The source code is available at GitHub. Changes in V3 There are some notable changes in v3: - Authentication. Whereas API v2 allowed authentication via OAuth, OAuth 2, AuthSub or Developer Key (to perform read-only requests), API v3 only supports OAuth 2. With OAuth 2, you can access user’s private data and manipulate it via API. Read-only access, not requiring user authentication, is also supported. You need to provide an API key that identifies your project, which I’ll show you in a bit. - Fetching videos. Fetching videos by tags and finding most-linked videos is not supported anymore. Advanced queries with boolean operators were removed as well. - No more comments for you. At least for now, as YouTube is re-working its commenting system, so comments are not currently part of the API. This means that you can’t list or manage video comments anymore. - Video responses were retired. RIP. As this announcement states, video responses were used about 0.0004% of the time, so the YouTube team decided to remove it. - Access control lists. Most of this functionality was removed; the only one that remains is embeddable, however there are reports that it does not work as well. You can refer to the Migration Guide to learn more. By the way, there is a special guide for those who are migrating from youtube_it gem to yt. Preparing the App I am going to build an app that will provide the following features: - Users should be able to add their videos that already exist on YouTube. - Videos should be displayed on the main page of the site, along with some basic information (like title). - Users should be able to upload their videos to YouTube via the app. Uploaded videos should also be saved in the app’s database. In this guide, I am going to stick with Rails 4.2.0, but the same solution (with a very few modifications) can be implemented with Rails 3 and 4.1. Start by creating a new app without the default testing suite: $ rails new YtVideosV3 -T Drop the following gems into your Gemfile: Gemfile [...] gem 'yt', '~> 0.13.7' gem 'bootstrap-sass', '~> 3.3.0.1' gem 'autoprefixer-rails' [...] The main star here is yt. I am using Bootstrap for styling purposes, but it’s not required. autoprefixer-rails is recommended for use with Bootstrap to automatically add browser vendor prefixes. Don’t forget to run $ bundle install Hook up Bootstrap: application.scss @import "bootstrap-sprockets"; @import "bootstrap"; @import 'bootstrap/theme'; Okay, now tweak the layout a bit:> </ul> </div> </div> <div class="container"> <% flash.each do |key, value| %> <div class="alert alert-<%= key %>"> <%= value %> </div> <% end %> </div> [...] Next, proceed with the model, called Video, which will store the users’ videos. It it going to contain the following attributes: link( string) – a link to the video on YouTube uid( string) – video’s unqiue identifier presented by YouTube. It is a good idea to add a database index here title( string) – video’s title published_at( datetime) – a date when the video was published on YT likes( integer) – likes count for the video dislikes( integer) – dislikes count for the video Create and apply the corresponding migration: $ rails g model Video link:string title:string published_at:datetime likes:integer dislikes:integer uid:string:index $ rake db:migrate Don’t forget to set up the routes: config/routes.rb [...] resources :videos, only: [:index, :new, :create] root to: 'videos#index' [...] Create the controller: videos_controller.rb class VideosController < ApplicationController def index @videos = Video.order('created_at DESC') end def new @video = Video.new end def create end end On the index page, display all the videos that were added by the user. There will also be a new page that presents a form to add a new video. The create action will be fleshed out in the next section. Lastly, create an index view with a single button:> Adding Videos From YouTube The next step is creating a form to add videos that have previously been uploaded to YouTube, meaning, outside our application. The only thing that we need to know is the link to the video that the user wishes to add. All other information about it will be found using the YouTube API. As such, the form is very simple: views-primary' %> <% end %> </div> The shared/_errors.html.erb partial is used here: %> Now, the create action: videos_controller.rb [...] def create @video = Video.new(video_params) if @video.save flash[:success] = 'Video added!' redirect_to root_url else render :new end end private def video_params params.require(:video).permit(:link) end [...] Really simple. Let’s also set up validation so that users cannot enter invalid links: models/video.rb class Video < ActiveRecord::Base YT_LINK_FORMAT = /\A.*(youtu.be\/|v\/|u\/\w\/|embed\/|watch\?v=|\&v=)([^#\&\?]*).*/i validates :link, presence: true, format: YT_LINK_FORMAT end Setting up the YT API Before moving on, we have to configure the yt gem so it can communicate with the YouTube API. First of all, navigate to the Google Developers Console and create a new application. Call it whatever you like, but keep in mind that users will see this name when authenticating via OAuth 2. Next, navigate to the Consent screen page (APIs & Auth section) and provide basic information about your app. Open the APIs page and enable the following: - Google+ API - YouTube Analytics API - YouTube Data API v3 If you forget to enable some of these, errors will be produced when trying to communicate with the YT API. Also note that the APIs have usage quotas, so be aware of how much your sending to the API. The last step is obtaining a server key for public API requests as, currently, we do not need any user interaction – only basic actions will be performed. Navigate to Credentials and click “Create new key” in the “Public API” access section. Then choose Server key and enter your server’s IP address so that requests cannot be sent from other IPs. If you are not sure which IP to enter, simply leave this field blank for now (which effectively means that any IP is allowed to send requests with the provided server key) – we are building a demo app after all. Lastly click “Create” – a new Key for server applications will be added. The API key value is what we need. Create an initializer file to set up yt: config/initializers/yt.rb Yt.configure do |config| config.api_key = 'your_server_key' end This API key should be kept safe – I am using an environment variable to store it. yt is now configured and can issue API requests to fetch basic information, such as a video’s title or publishing date. Querying the YT API Before the video is saved in the database, information about it should be loaded from YouTube. In the “YouTube on Rails” article, I used a before_create callback to fetch all the required info, but one of the readers noted that observer can be also used for this task, so let’s try that instead. In Rails 4, observers are not part of the framework’s core anymore, so we need a separate gem to bring them back: Gemfile [...] gem 'rails-observers' [...] Run $ bundle install and tweak the application’s configuration like this config/application.rb [...] config.active_record.observers = :video_observer [...] to register a new observer. Create the observer in the models directory: models/video_observer.rb class VideoObserver < ActiveRecord::Observer def before_save(resource) video = Yt::Video.new url: resource.link resource.uid = video.id resource.title = video.title resource.likes = video.like_count resource.dislikes = video.dislike_count resource.published_at = video.published_at rescue Yt::Errors::NoItems resource.title = '' end end The before_save method will run only before the record is saved. This method accepts a resource as an argument. Inside the method, I am using Yt::Video.new to fetch the specified video via the API by its URL. Then, we simply use yt’s methods to get all the necessary info. I am also rescuing from the Yt::Errors::NoItems error – it will occur when the requested video was not found. That’s all! You can go ahead and add some videos of your choice to check if everything is working correctly. Displaying Videos Let’s spend a couple of minutes modifying the index page so videos are being shown. Use the following layout:> <% if @videos.any? %> <div class="container"> <h1>Latest videos</h1> <div id="player-wrapper"></div> <% @videos.in_groups_of(3) do |group| %> <div class="row"> <% group.each do |video| %> <% if video %> <div class="col-md-4"> <div class="yt_video thumbnail"> <%= link_to image_tag("{video.uid}/mqdefault.jpg", alt: video.title, class: 'img-rounded'), "{video.uid}", target: '_blank' %> <div class="caption"> <h5><%= video.title %></h5> <p>Published at <%= video.published_at.strftime('%-d %B %Y %H:%M:%S') %></p> <p> <span class="glyphicon glyphicon glyphicon-thumbs-up"></span> <%= video.likes %> <span class="glyphicon glyphicon glyphicon-thumbs-down"></span> <%= video.dislikes %> </p> </div> </div> </div> <% end %> <% end %> </div> <% end %> </div> <% end %> in_groups_of is a Rails method that will divide the array of videos into groups of 3 elements. We then iterate over each group to render them. Notice the if video condition – it is required because if you have, for example, 5 elements in the array and divide them into groups of 3, the last element will be set to nil. For each video, display its thumbnail image, which YouTube generates for us. mqdefault.jpg means that we want to fetch a 320×180 image with no black stripes above and below the picture. There is also a hqdefault.jpg (480×360 image with black stripes above and below the picture) and <1,2,3>.jpg (120×90 image with different scenes from the video with black stripes above and below the picture). Each thumbnail acts as a link to the video on YouTube. In the “YouTube on Rails” article, I showed how to implement the YouTube IFrame API in order to add the player to your page (and manipulate it). This process has not changed at all, so no need to duplicate the same code here. Browse the Displaying the Videos section in that article for more information. Uploading Videos to YouTube Right now, users are able to add their videos into our app easily. What if someone has a video that is not yet uploaded to YouTube? Should we instruct this person to first use the YT Video Manager to upload the file and then use our app’s interface to share the video? It would be more convenient if users could just upload their videos directly via our app. Authenticating via Google+ First of all, we need an authentication system in place. As you may remember, YT APIv3 only allows the OAuth 2 protocol for authentication and this protocol requires us to pass a special token that is generated when a user logs in via our app. We are going to use the omniauth-google-oauth2 that provides a Google OAuth 2 strategy for OmniAuth. Add the new gem into the Gemfile: Gemfile [...] gem 'omniauth-google-oauth2' [...] and run $ bundle install Create an initializer that will contain settings for our authentication strategy: config/initializers/omniauth.rb Rails.application.config.middleware.use OmniAuth::Builder do provider :google_oauth2, 'YT_CLIENT_ID', 'YT_CLIENT_SECRET', scope: 'userinfo.profile,youtube' end We are registering a new strategy called google_oauth2. YT_CLIENT_ID and YT_CLIENT_SECRET can be obtained via the Google Developer Console, which we used a few minutes ago. Return there, open the app, that you’ve created earlier, and navigate to Credentials. Click the “Create new Client ID” button and select “Web application”. Put your site’s URL in the “Authorized JavaScript” origins field (use “” if working on a developer’s machine). For the Authorized redirect URIs, provide the site URL plus “/auth/google_oauth2/callback” (for example, “”). A new Client ID for the web application will be created. The Client ID and Client Secret fields are what you want. Once again, those keys should not be available in your version control system. The last parameter for our strategy is the scope, which specifies which actions the app would like to perform. userinfo.profile means that we want to be able to fetch basic information about the user’s account (like name, unique identifier and that stuff). youtube means that the app will be able to manage the user’s YouTube account (because we need to be able to upload new videos). Add a couple more routes: config/routes.rb [...] get '/auth/:provider/callback', to: 'sessions#create' delete '/logout', to: 'sessions#destroy', as: :logout [...] The first one is the callback route where a user is redirected after a successful authentication. The second route will be used for logging out. We need to store the user’s data somewhere, so a new table will be required. Let’s call it users. For now it will contain the following fields: name( string) – user’s name (with a surname perhaps). token( string) – token to perform API requests. uid( string) – user’s unique identifier. We are going to add an index with a uniqueness constraint here. Create the migration: $ rails g model User name:string token:string uid:string and append this line right after the create_table method: xxx_create_users.rb [...] create_table :users do |t| [...] end add_index :users, :uid, unique: true [...] We also need a new controller with two actions: sent by the server to our app (it is called the “auth hash”). Now, let’s create the from_omniauth method: models/user.rb class User < ActiveRecord::Base class << self def from_omniauth(auth) user = User.find_or_initialize_by(uid: auth['uid']) user.name = auth['info']['name'] user.token = auth['credentials']['token'] user.save! user end end end Here the find_or_initialize_by method is used. It tries to find a user with the provided uid and, if found, the record is returned as a result. If it is not found, a new object is created and returned. This is done to avoid situations when the same user is being created multiple times. We then fetch the user’s name and token, save the record, and return it. Here is a sample auth hash that you can use as a reference. It is time to create a new method to test if the user is logged in. We are going to call it current_user, which is a common idiom in Rails projects. application_controller.rb [...] private def current_user @current_user ||= User.find_by(id: session[:user_id]) if session[:user_id] end helper_method :current_user [...] It simply checks if the session contains the user_id and, if it does, try to find the user with the specified id. helper_method ensures that this method can also be used in views. Lastly, let’s expand our menu a bit: views> <% if current_user %> <li><%= link_to 'Upload Video', new_video_upload_path %></li> <% end %> </ul> <ul class="nav navbar-nav pull-right"> <% if current_user %> <li><span><%= current_user.name %></span></li> <li><%= link_to 'Log Out', logout_path, method: :delete %></li> <% else %> <li><%= link_to 'Log In', '/auth/google_oauth2' %></li> <% end %> </ul> </div> </div> [...] I’ve also added some styles to pretty-up those links: application.scss [...] .nav > li > span { display: block; padding-top: 15px; padding-bottom: 15px; color: #9d9d9d; } Uploading Great, only one step is left. We have to create another form allowing the user to select a video and provide the title and description for it. We could utilize the same VideosController, but I’ve decided to use another approach here that follows REST principles and makes validation really simple. Create a new controller: video_uploads_controller.rb class VideoUploadsController < ApplicationController def new @video_upload = VideoUpload.new end def create end end We’ll come back to the create method soon enough. For now, add the new routes: config/routes.rb [...] resources :video_uploads, only: [:new, :create] [...] and yet another menu item: views/layouts/application.html.erb [...] <ul class="nav navbar-nav"> <li><%= link_to 'Videos', root_path %></li> <% if current_user %> <li><%= link_to 'Add Video', new_video_upload_path %></li> <% end %> </ul> [...] Now, the actual form: views/video_uploads/new.html.erb <div class="container"> <h1>Upload video</h1> <% if current_user %> <%= form_for @video_upload do |f| %> <%= render 'shared/errors', object: @video_upload %> <div class="form-group"> <%= f.label :file %> <%= f.file_field :file, class: 'form-control', required: true %> </div> <div class="form-group"> <%= f.label :title %> <%= f.text_field :title, class: 'form-control', required: true %> </div> <div class="form-group"> <%= f.label :description %> <%= f.text_area :description, class: 'form-control', cols: 3 %> </div> <%= f.submit 'Upload', class: 'btn btn-primary' %> <% end %> <% else %> <p>Please <%= link_to 'sign in', '/auth/google_oauth2' %>.</p> <% end %> </div> We have to check if the user is logged in, otherwise trying to upload a video will result in an error. The form contains three fields: file, title, and description. You can expand it further, allowing users to provide, for example, tags or a category for their video (don’t forget to tweak controller and a model, accordingly). We need to think about validation. Obviously, we don’t need a separate table here because information about the uploaded video will be saved to the same videos table that already exists. As such, it seems the model is not required, but then all the validation logic has to be put into the controller, which is not the best idea: def create if params[:file].present? && params[:title].present? # ... and more checks here # upload video else # display an error (and user won't even understand what exactly is wrong) end end Instead, create a Ruby class and call it VideoUpload – all the validation logic can be put there. However, it would also be nice if this class borrowed some cool ActiveRecord features. We can do this, we have the technology. Meet active_type created by the folks from Makandra. ActiveType makes Ruby objects quack like ActiveRecord. Drop this gem into the Gemfile [...] gem 'active_type', '0.3.1' [...] and run $ bundle install Now create the video_upload.rb file in the models directory: models/video_upload.rb class VideoUpload < ActiveType::Object attribute :file, :string attribute :title, :string attribute :description, :text validates :file, presence: true validates :title, presence: true end Unfortunately there is an issue with Postgres and Rails 4.2 (maybe with some other version of Rails as well) that required me to modify the second and third lines like this: models/video_upload.rb [...] attribute :file, :varchar attribute :title, :varchar [...] This is a simple Ruby class that inherits from ActiveType::Object, whcih grants it super powers. With the help of the attribute method, we specify attributes and types. The validates method comes directly from ActiveRecord and you can use it the same way. Pretty cool! At this point, we can return to the controller video_uploads_controller.rb def create @video_upload = VideoUpload.new(title: params[:video_upload][:title], description: params[:video_upload][:description], file: params[:video_upload][:file].try(:tempfile).try(:to_path)) if @video_upload.save uploaded_video = @video_upload.upload!(current_user) # check if the video was uploaded or not redirect_to root_url else render :new end end This is just a basic controller method. Of course, calling save on the @video_upload does not actually save anything – it only runs validations. The upload! method does not exist, yet so let’s fix that: models/video_upload.rb [...] def upload!(user) account = Yt::Account.new access_token: user.token account.upload_video self.file, title: self.title, description: self.description end [...] This method creates a new yt client with the access token that we’ve received earlier. The upload_video method starts the actual upload. It accepts file and video parameters, like title and description. If you have read my “Uploading Videos to YouTube with Rails” article, then you probably noticed that the uploading process is now much easier. For YT API v2, you had to actually perform two requests: the first one returned the upload token and the second one allowed the upload to start. That was really messy and, thank Google, they’ve simplified things. The last piece of create‘s logic: [...] def create @video_upload = VideoUpload.new(title: params[:video_upload][:title], description: params[:video_upload][:description], file: params[:video_upload][:file].try(:tempfile).try(:to_path)) if @video_upload.save uploaded_video = @video_upload.upload!(current_user) if uploaded_video.failed? flash[:error] = 'There was an error while uploading your video...' else Video.create({link: "{uploaded_video.id}"}) flash[:success] = 'Your video has been uploaded!' end redirect_to root_url else render :new end end [...] Display an error if the video failed to upload. Otherwise add the video’s info to the database and redirect to the root_url. Feel free to refactor this code further. By the way, there are other status checks other than failed? – check out the examples here. Some Gotchas You should remember that YouTube will need some time to digest the video and, the longer the video is, the longer this process will take. Why should you care? Because if you try to fetch the video’s duration right after the upload process finishes, zero seconds will be returned as a result. The same applies to thumbnail images – they will not be available for a few minutes and you’ll see the default, boring grey image instead. To overcome this issue you can set up some kind of background process that periodically checks if newly uploaded videos were digested (use processed? method). If yes, fetch all their information. You may even want to hide those videos and display them on the main page only after the parsing is finished. Just don’t forget to warn your users about this fact (parsing of long videos may take more than 10 minutes). Also, don’t forget that YouType can reject videos for many reasons: it is too long, too short, duplicated, violates copyrights, has unsupported codec, etc., So, use extensive status checks. Conclusion That’s all for today, folks! I hope you find this article as useful as the original versions. The yt gem has many more fascinating features, so browse its readme! Have you already used the YT API v3? What do you think about it? Have you encountered any specific problems? Also don’t hesitate to post your questions or requests topics that you want me to cover. See you! hello mr. bodrova maybe you can help me...i got an message uninitialized constant OpenSSL::SSL::SSLErrorWaitReadable when i try to upload a video.. Hello! Its "Bodrov", but call me Ilya plz I and some other coders stumbled upon this issue and it is still unclear what is the root cause. See this: and The author of the gem said that he had when YT API was experiencing some problems and after a while it is gone. I'd recommend checking that all required APIs are enabled (check the second screenshot here). You might re-open the #103 issue and describe when the problem occurs. Hopefully this we be solved this way or another. This is a great tutorial. Thanks very much for providing it. Do you know if the Youtube API quotas apply here? If quotas apply (I would think they would for some actions), do you know if the quota is applied per the site that is calling the api or per the user that is causing the api to be called--obviously, if the quotas apply per user, that would enable a lot more activity than if they apply every time the site calls the api. It is a little confusing how using the gem maps to the different types of APIs Youtube describes, which have their own quota limits. I assume the quotas apply, but am not sure how. It looks like there are separate quotas for anyone using Youtube Data API and Youtube Analytics API (not sure about Youtube Player API, which might be most on point). There might not be a quota for simply embedding a video, but I am pretty sure there is a quota for uploading a video from a site to youtube. Seems like this could be called "insert":. If so, by the quota rules (getting pretty specific here), that costs about 1600 units, and you are alotted 50,000,000 units a day. That would translate to about 30,000 videos allowed to be uploaded per day. That is obviously a lot and not a concern for building something just for learning, but if your site starts getting a lot of traction, it could be a little bit of a concern if users could at most do 30,000 a day (3 mil a month). Any way that quota might be taken per user? (so your site could upload any amount, but each user would be limited to 30,000 a day--obviously way more than enough) And, as said, do you think the quotas apply for both just embedding an already existing quota and uploading a new video, or just uploading a new video? There are a few unanswered Stack Overflow questions on this, so I think it is a point of confusion. Thanks again! That is a really good question, but unfortunately I can't answer it 100%. Quota is really being applied for any API interaction, however it is not applied to Iframe API when you embed videos (because you are not providing any keys or tokens). If you open Google Developers Console and navigate to API section of any project, there will be Enabled APIs tab. Inside you will see quota per API. Therefore interacting with YT Analytics in not the same as interacting with YT Data API. These quotas are counted per project. What I do not know is how to set up those per user and seems like from this console there is no way to do that (moreover it seems that there is no way to do it at all, see below). I believe you can search for such topics in Google groups or contact them directly. I have heard somewhere (though I am really not sure if it was related to Google APIs) that if you need much more API calls for a project, then you may contact Google team and discuss this personally. For example, here is some info on usage limits for Google maps It clearly states that if your app generates too many traffic, Google team will contact you to discuss payment options. Some more info on billing: Money makes the world go around, as they say. Hopefully that helps and thank you for the feedback! Thanks. Your reply adds clarity to how this might work with the Google quotas--and I doubt there is much more to know without talking to Google directly. I appreciate it. Hey Ilya (@bodrovis) This tutorial was my introduction to ruby and may I say that it was very well done, everything made perfect sense and was easy to implement and all is working perfectly. However I need my system to do something a little differently, and I can't find to many articles regarding this on rails. I need my site to have one central youtube account, that all the videos are submitted too, regardless of the end-users on my site. I believe its like a content owner account but not 100%, maybe you know of a couple of links you can point me too. I know their is a lot of content copy right issues involved, but we would own the rights to the videos submitted, so that is not an issue in this case. Any assistance would be appreciated. Thank you for the feedback! Well, the only thing that I can suggest is service accounts - it might help in your case. I don't know about any other possible solutions Hey Ilya (@bodrovis) That is exactly what I'm busy with now, and once you have the access token from your service account, you can use the yt gem as per normal. Thanks a lot for your quick feedback Good luck then! @bodrovis - Hi Ilya, Thanks so much for this tutorial, incredibly helpful for the project I am currently working on. One minor point which my team got stuck on for a couple of hours together around the Google+ OAuth step. We were trying to run in local host following your exact implementation instructions but continued to get redirect_url, CSRF detected and SSL errors throughout troubleshooting. 2 things worked for us that are worth flagging to other people following: Thank you for these tips, should be helpful for other readers! @bodrovis - no problem I have one follow-up question around validating video uploads. With this structure, how would you implement file size validations for video uploads? We only want to allow short videos e.g. < 60 secs in length and to begin with want to set an arbitrary file size limit of 50MB. We have explored trying to add explicit validations within the video_upload.rb ActiveType::Object but to no avail. e.g. validates_size_of :file, maximum: 50.megabytes, message: "should be less than 50MB" I understand how you could achieve this using a third party uploader like CarrierWave but struggling to see how you could do this here. Any thoughts would be greatly appreciated! Thanks, Alex That's an interesting question. I will try to look into it and let you know the results. Judging by this yt currently loads the whole file into memory so we may work with it somehow. Thank you @bodrovis for this tutorial. Has helped me and the project I am working on tremendously. I am running into an issue with the YouTube authentication however. This is the error I am getting: According to the application trace this is coming from the upload!method in video_upload.rb; the access token seems to be inauthentic. I'm not exactly sure if this is the case or how to go about fixing it because it does work in some instances (e.g when the db is empty). Any advice would be greatly appreciated. Thank you! Hi! As far as I see you've opened this issue here so I'll monitor it and discuss there. Thanks! I've identified the problem and responded accordingly
https://www.sitepoint.com/youtube-api-version-3-rails/
CC-MAIN-2018-17
refinedweb
5,169
65.52
A package that creates and manipulates screen overlays based on tkinter. Project description Overlay A package that creates and manipulates screen overlays based on tkinter. Platforms - Mac OS (tested and works) - Linux (not tested) - Windows (tested and does not work) Installation pip install overlay Usage A basic overlay is created as such: from overlay import Window win = Window() Window.launch() The constructor of the Window class takes the following (optional) parameters: - size: tuple, the dimension (width, height) of the overlay window. - position: tuple, the position of the overlay (on screen). - transparent: bool, whether to set the overlay background transparent. - alpha: float [0, 1], the alpha (transparency) of the overlay. - draggable: bool, whether the window can be dragged Note that the parameters mentioned above can be edited any time as attributes of an instance of Window. In order to edit the content of a overlay, one needs to obtain the root of the overlay, upon which all else shall be build. import tkinter as tk from overlay import Window win = Window() label = tk.Label(win.root, text="Window_0") label.pack() Window.launch() Multiple overlays can be created just as easily: import tkinter as tk from overlay import Window win_0 = Window() label_0 = tk.Label(win_0.root, text="Window_0") label_0.pack() win_1 = Window() label_1 = tk.Label(win_1.root, text="Window_1") label_1.pack() Window.launch() The following program covers a brief explaination on other methods of the Window class. import tkinter as tk from time import sleep from overlay import Window def other_stuff(text): '''A simple demonstration. The usage of sleep is to emphasize the effects of each action.''' print(text) sleep(2) win_0.hide() # Hides the overlay. sleep(1) win_0.show() # Shows the overlay. sleep(1) win_0.focus() # Sets focus to overlay. win_1.center() # Moves the overlay to the center of the screen. sleep(1) Window.hide_all() # Hides all overlays. sleep(1) Window.show_all() # Shows all overlays. sleep(1) win_0.destroy() # Kills the overlay. sleep(1) Window.destroy_all() # Kills all overlays and ends the mainloop. '''Creates two windows.''' win_0 = Window() label_0 = tk.Label(win_0.root, text="Window_0") label_0.pack() win_1 = Window() label_1 = tk.Label(win_1.root, text="Window_1") label_1.pack() Window.after(2000, other_stuff, 'Hello World') # Identical to the after method of tkinter.Tk. Window.launch() Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/overlay/
CC-MAIN-2020-50
refinedweb
402
52.76
#include <MANCHESTER.h>#define TxPin 4 //the digital pin to use to transmit dataunsigned int ON = 1010; //the 16 bits to sendunsigned int OFF = 0000; //the 16 bits to sendvoid setup() { MANCHESTER.SetTxPin(TxPin); // sets the digital pin as output default 4}void loop() { MANCHESTER.Transmit(ON); delay(1000); MANCHESTER.Transmit(OFF); delay(1000);} #include <MANCHESTER.h>#define RxPin 4#define ledPin 0void setup() { pinMode(ledPin, OUTPUT); MANCHESTER.SetRxPin(RxPin); //user sets rx pin default 4 MANCHESTER.SetTimeOut(1000); //user sets timeout default blocks}void loop() { unsigned int data = MANCHESTER.Receive(); if (data == 1010) { digitalWrite(0, HIGH); } else { digitalWrite(0, LOW); }} Good Day Ma'am Sir!!! I'm using virtualwire for connecting 2 arduino's using 433MHz tx rx modules. However, I'm doing the "shrinkify your project" on the transmitter side so that I will not use arduino anymore and instead use ATtiny45. but the virtualwire library won't upload in the ATtiny45. How can I program this IC with VirtualWire without touching/reprogramming the receiver side ? thanks for your help Please enter a valid email to subscribe We need to confirm your email address. To complete the subscription, please click the link in the Thank you for subscribing! Arduino via Egeo 16 Torino, 10131 Italy
http://forum.arduino.cc/index.php?topic=149928.msg1140229
CC-MAIN-2015-18
refinedweb
207
59.4
import model StandardListItem { title: ListItemData.firstname + " " + ListItemData.lastname imageSource: ListItemData.image description: ListItemData.title } } // end of ListItemComponent ] } // end of ListView attachedObjects: [ GroupDataModel { id: dataModel }, DataSource { id: dataSource // Load the data from an SQL database, based on a specific query source: "sql/contacts1k.db" query: "select * from contact order by firstname, lastname" onDataLoaded: { // After the data is loaded, insert it into the data model dataModel.insertList(data); } } // end of DataSource ] onCreationCompleted: { // After the root Page is created, direct the data source to start // loading data dataSource.load(); } } // end of Page Data access using QML The direct data access classes ( JsonDataAccess, SqlDataAccess, and XmlDataAccess) are useful methods of loading and storing data that you use in your apps. In many cases, you'll need to access this data using QML and load it into a Cascades component (for example, loading it into a data model for display in a list view). You can use the DataSource class to do just that. The DataSource class lets you access the same JSON, SQL, and XML data but provides a QML component for you to use, complete with properties and signals. There are properties that let you specify the source of the data, the query to use (for SQL and XML data), and the type of data that you're accessing. There's also a property that you can use to specify whether the data source is local to the device or is located remotely. Remote XML or JSON data is loaded asynchronously using an HTTP data source URL, with the help of the QNetworkAccessManager class. After you load data using a DataSource, you can provide it directly to a data model and list view to display it in your app. You're free to use the DataSource class in C++, but it's really designed as an easy way to load data using QML. To use a DataSource in QML, you need to add it to an attachedObjects list in your app. You also need to register the class as a QML type in one of your C++ source files, and import the library in your QML file. After data is loaded successfully, the dataLoaded() signal is emitted and you can use the onDataLoaded signal handler to start working with the data (for example, by inserting it into a data model). If the data isn't loaded successfully, the error() signal is emitted and includes a DataAccessErrorType that you can use to handle the error. Here's an example of how to use a DataSource to access data in an SQL database that's located locally on the device. The source property specifies the file to load data from, and the query property specifies the SQL query to run. After the data is loaded, it's inserted into a GroupDataModel that's associated with a ListView. Not applicable Not applicable Here's how to load data from a remote source (in this case, XML data from an RSS feed). The source property specifies the URL of the remote data source, and the query property specifies the types of items to retrieve. You'll notice that the type property is also used, and accepts a value from the DataSourceType::Type enumeration. Typically, the data type is inferred from the value of the source property or query property. However, in this case, the type property is required to explicitly state that the data is XML data. // In a C++ source file in your app #include <bb/data/DataSource> ... // Register the DataSource class as a QML type so that it's accessible in QML bb::data::DataSource::registerQmlTypes(); import data // model StandardListItem { reserveImageSpace: false title: ListItemData.title description: ListItemData.pubDate } } ] } attachedObjects: [ GroupDataModel { id: dataModel // Sort the data in the data model by the "pubDate" field, in // descending order, without any automatic grouping sortingKeys: ["pubDate"] sortedAscending: false grouping: ItemGrouping.None }, DataSource { id: dataSource // Load the XML data from a remote data source, specifying that the // "item" data items should be loaded source: "" query: "/rss/channel/item" type: DataSourceType.Xml onDataLoaded: { // After the data is loaded, clear any existing items in the data // model and populate it with the new data dataModel.clear(); dataModel.insertList(data) } } ] onCreationCompleted: { // When the top-level Page is created, direct the data source to start // loading data dataSource.load(); } } Not applicable Not applicable Last modified: 2015-05-07 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
http://developer.blackberry.com/native/documentation/device_platform/data_access/using_data_source.html
CC-MAIN-2017-22
refinedweb
740
52.29
. Here are the questions So, fire on. Any insight on this matter could be valuable. Thanks in advance. _ _ _ _ (_|| | |(_|>< _| * Why is the Perl testing infrastructure so effective? Because Schwern made it that way, and it was good. :) * If I wanted to export some of the qualities of Perl testing to a non-Perl product, what should I focus on? Getting Schwern or Ovid interested in that product. That's most effective when you change the product to be World of Warcraft, I think. * What motivates a (QA) contributor? If you're asking how to get people to work for free, well, there is no answer. It's different for every person. If you're assuming that you will get work for free, I think you're already off to a bad start. iPods might be interesting, but they aren't all that special or expensive to make most people do that much work. If the product is really useful and people like it enough or think they can't live without it, they'll help with it. I contribute to open source because I'm too stupid to realize that for the same pay I could watch TV all day, get through my Netflix list, or maybe read a book. :) Update: I wrote this post just before going off to bed, and then laid in bed thinking about it and that it probably is more flippant than I mean it to be. Writers should be given free prescriptions of sleeping pills for this very reason. I think things such as Perl's testing culture congeal around a few alpha personalities. There's no scientific reason these things happen, and a lot of it is by accident. I created Test::Pod because I could. It was easy and it was fun. From that, Andy Lester got interested, and eventually created Test::Pod::Coverage, and eventually took over Test::Pod. Both of those ended up as CPANTS metrics. Although it pains me to say it, I think Malcolm Gladwell's The Tipping Point applies here. That's not a perscription for success. It just points out that some things succeed for no other reason than somebody does it and somebody else likes it enough to do it too. For every time that happens, though, many other things don't catch on. I've written plenty of test modules, but I bet most people can't name any them. Test::Pod is the one that made it. Perl's testing culture matured at a particular time. Test had been around for quite a while, although most CPAN contributors seemed to find it just as easy to print "ok $test\n"; as use a module. Test::Simple did what a lot of people were thinking they should do, but didn't: write an ok function. Schwern actually did it though, and there were plenty of other people around who were waiting to use it. IF he had made it earlier, maybe it would just be sitting there on CPAN collecting dust. Who knows. Update2 Foro stvn: sure, there are a lot of people involved with testing now, but I credit Schwern with starting the whole thing. All that stuff you mention came later. Uhm, I think you are missing a LOT of names on your list there. What about petdance and the Phalanx project? And chromatic and his test-first evangalism which he spreads through (Test Code Kata, and several articles on Test::Builder)? They have done a lot for testing in the perl community as well. And then there is nothingmuch and the many people over in #perl6 who contributed to make Test::TAP::Model and Test::TAP::HTMLMatrix. Work which is being expanded upon by several others (sorry for not knowing more specific names) to create things like: And then there is AdamK and his work on PITA, which opens up the world of ridiculously large scale testing as well. I could go on and on, the perl-qa list is quite active with many regularly contributing members. My point really is that Perl's testing culture is so strong, because Perl's testing community is so strong and very active. New tools are constantly being developed, and existing tools are constantly being improved. Keeping the community active keeps it strong, and new shiny toys are a great way of attracting even more talent. UPDATE For stvn: sure, there are a lot of people involved with testing now, but I credit Schwern with starting the whole thing. All that stuff you mention came later... One of the big changes in Perl testing came about when the pumpkings adopted a policy of applying only patches that included tests. (This of course was only possible when people started taking test failures seriously enough to fix them.) Why. I know this wasn't exactly your question, but I think this will be of help... One of the things that makes perl testing framework really nice is TAP, this means that the test can be anything that just prints "ok" or "not ok" for each test (Test::Harness actually expects that to be a Perl program, but this is easily work-aroundable (does that word exists?)). So, here is what I use to work with Test-Driven-Development in whatever-language: #!/bin/sh make clean all check && perl -MTest::Harness -e '@tests=<./test/t*>;$T +est::Harness::Switches=qw();*Test::Harness::Straps::_command_line = s +ub {return $_[1]};runtests(@tests)' [download] BTW, for C code, I also use this _test.h file... which is very helpful to me... #ifndef LOADED_TEST_H #define LOADED_TEST_H #include <stdio.h> #include <unistd.h> #define plan(numtests) printf("1..%i\n",(numtests)); #define ok(bool,testname) printf("%s - %s\n",(bool)?"ok":"not ok",test +name); #define pass(testname) printf("ok - %s\n",testname); #define fail(testname) printf("not ok - %s\n",testname); #define skip(testname,reason) printf("ok - %s # Skipped: %s\n",testnam +e,reason); #define is_int(got,expected,testname) printf("%s - %s (expected:%i, go +t:%i)\n",((got)==(expected))?"ok":"not ok",testname,expected,got); #define is_short(got,expected,testname) printf("%s - %s (expected:%hhu +, got:%hhu)\n",((got)==(expected))?"ok":"not ok",testname,expected,go +t); #define is_flt(got,expected,testname) printf("%s - %s (expected:%f, go +t:%f)\n",((got)==(expected))?"ok":"not ok",testname,expected,got); #define is_str(got,expected,testname) printf("%s - %s (expected:%s, go +t:%s)\n",(strcmp((got),(expected))==0)?"ok":"not ok",testname,expecte +d,got); #endif [download] PHP itself has a pretty decent testing package, but it's more like Test::Class than Test::More: you create a class to do the testing, lots of setup, etc. So we didn't get much in the way of buy-in. My job was to build tools to get the programmers doing tests on the search platform, across a large number of international sites. They key was Keeping It Trivial To Do. I put together a Perl application called simple_scan (available as App::SimpleScan on CPAN) that took a URL, a regex, and a 'Y' or 'N' as its input; it then generated a Test::More-based Perl program that actually did the testing. This was the first bar: no writing programs to write a test. The second bar was, for instance, running 20 queries against 20+ sites. Obviously cut-n-pasting 20 identical tests was unappealing, and I wanted to stay away from the idea that you were writing a program. So I came up with the idea of doing combinatorial substitution: define a variable that has the servers you want to test, and another one which has the queries you want to run, and simple_scan does all the work of generating the unique combinations. So 3 lines of input can now generate 400+ tests, all of which are monitored via the standard TAP tools. The lesson of all this is to make sure that you provide testing in a way that is compatible with the goals of the programmer on the ground: if a programmer finds it really easy to write and run tests, he or she will write them. If there's any friction between "I should test this" and "test is running", you'll find that there are no tests. The other lesson is that you don't have to make them write Perl to take advantage of the Perl testing tools. 1 I'm working on development tools now; simple_scan's worked out so well that I ran out of things to do for Search! A couple of thoughts here... I have in the past donated time to community projects, (not always code related) and at some point asked myself "why?" The following is the best I could come up.
http://www.perlmonks.org/index.pl?node_id=579983
CC-MAIN-2017-17
refinedweb
1,469
71.95
Mouse vs. Keyboard - Determining Click Initiator Using A jQuery Custom Event When filling out online forms, I love to use my keyword as a means to both provide information as well as to navigate from form field to form field. This works great; but from time to time, an "itchy Tab finger" causes me to accidentally hit "Enter" on an inappropriate form element (such as a Cancel link). Falling victim to this problem the other day, I wondered if there was a way to determine which device - the mouse or the keyboard - triggered the "click" event. If I could, then I thought it might provide an opportunity to confirm an action if, say, a Cancel link were triggered from the keyboard rather than the mouse. There are two ways for a user to activate a link: either clicking the link directly with the mouse; or, bringing the link into focus and then hitting the Enter key on the keyboard. Both of these actions trigger a "click" event. But, after I looked at the Event object being produced, I couldn't find a consistent, cross-browser way to determine which device initiated the event. As such, I thought I might try to create a custom jQuery Event Type - "clickwith" - that would piggyback the native click event and include keyboard conditions during the event dispatch. Before I get into the code, I should warn you that the jQuery custom Event system is a bit of mystery to me. I can get it to, "work." But, I don't necessarily have a great mental model of how it all fits together or how the underlying events are bound and dispatched. And, forget about including data and namespaces in the mix - that's already way beyond my current understanding. That said, the idea behind this custom event, clickwith, is that we'll bind some key events as well as a click event. The key events - keyup and keydown - will keep track of key activity surrounding the click event. Then, when the custom "clickwith" event is being dispatched, it can use this key-based metadata as part of the outgoing event data. The pseudo-code for the approach is as follows: - If key pressed and key is Enter, set flag to True. - If click event is raised and flag is true, announce keyboard. - If click event is raised and flag is false, announce mouse. - If key released, set flag to false. As you can see, the keyboard events simply set a flag that is used within the click event handler. Ok, let's take a look at the code: - <!DOCTYPE html> - <html> - <head> - <title>Determine Link Trigger Method With jQuery</title> - </head> - <body> - <h1> - Determine Link Trigger Method With jQuery - </h1> - <p> - <a href="#" class="link">Click me please</a>!<br /> - <a href="#" class="link">Click me please</a>!<br /> - <a href="#" class="link">Click me please</a>!<br /> - <a href="#" class="link">Click me please</a>!<br /> - </p> - <!-- Configure scripts. --> - <script type="text/javascript" src="../jquery-1.7.1.js"></script> - <script type="text/javascript"> - // Set up a special Event Type which will help us to - // determine what physical device was used to initiate the - // click event on a given element: Mouse or Keywboard. Since - // there doesn't appear to be anything inherent to the click - // event that denotes device (cross-browser), we'll have to - // use some surrounding events to setup the click event - // handler data. The default device is considered the Mouse; - // the alternate device is the keyboard. - (function( $ ){ - // When a key is depressed, we want to signal that this - // might be a keyboard-initiated click event. As such, - // we'll store a boolean to be used in the click event. - function handleKeyDown( event ){ - // Check to make sure that this key is one that is - // capable of triggering a click event (ie. the enter - // button, 13). - if (event.which === 13){ - $.data( this, "clickwithevent:keyboard", true ); - } - } - // When a key is released, we know that the click event - // will have already take place (if at all); as such, we - // set the boolean flag to false since any subsequent - // click event will be triggered by a mouse (or preceeded - // by a keydown event). - function handleKeyUp( event ){ - $.data( this, "clickwithevent:keyboard", false ); - } - // When the click event is triggered, we need to - // determine if the event was initiated by the keyboard - // or the mouse. If the boolean flag is true, it means - // that the click event was preceeded by the depression - // of the Enter key, which indicates that the click event - // was initiated by the keyboard. - function handleClick( event ){ - // Get the flag for keyboard-based click. - var isKeyPress = $.data( this, "clickwithevent:keyboard" ); - // Let's create a new event that extends the click - // event. This way, when we trigger our "clickwith" - // event, we get all of the contextual information - // associated with the click; but, we don't - // accidientally trigger click events. - var clickEvent = createEvent( "clickwith", event ) - // Tell jQuery to trigger the new event with a second - // argument that indicates the initiator of the click - // event. - $.event.handle.apply( - this, - [clickEvent, (isKeyPress ? "keyboard" : "mouse")] - ); - } - // I create a new jQuery event object using the given - // event object as the collection of properties to copy. - // This way, we can "extend" an existing Event object - // without worrying about copying data we shouldn't. - function createEvent( eventType, event ){ - // For each event object, we will try to copy all of - // the following properties that are available. - var properties = [ - "altKey", "bubbles", "button", "cancelable", - "charCode", "clientX", "clientY", "ctrlKey", - "currentTarget", "data", "detail", "eventPhase", - "metaKey", "offsetX", "offsetY", "originalTarget", - "pageX", "pageY", "prevValue", "relatedTarget", - "screenX", "screenY", "shiftKey", "target", - "view", "which" - ]; - // Create a new properties object that will be used - // to create the new event. - var eventProperties = {} - // Copy over all properties from the old event. - $.each( - properties, - function( index, property ){ - // Make sure this property is available on - // the original event. - if (property in event){ - // Copy it over to the new event property - // collection. - eventProperties[ property ] = event[ property ]; - } - } - ); - // Create and return the new event object with the - // duplicated properties. - return( - new $.Event( eventType, eventProperties ) - ); - } - // Configure the special event, "clickwith", so that - // jQuery knows how to bind and unbind event handlers. - $.event.special.clickwith = { - // I configure each element that is bound to the - // clickwith event. I am only called once per element. - setup: function( data, namespaces ){ - // Set up the key events that surround the click - // events that setup the meta data. - $( this ) - .data( "clickwithevent:keyboard", false ) - .on( "keydown.clickwithevent", handleKeyDown ) - .on( "keyup.clickwithevent", handleKeyUp ) - .on( "click.clickwithevent", handleClick ) - ; - }, - // I remove the configuration from each element that - // is bound to the clickwith event. I am only called - // oncer per element.s - teardown: function( namespaces ){ - // Remove all traces of the special event. - $( this ) - .removeData( "clickwithevent:keyboard" ) - .off( "keydown.clickwithevent" ) - .off( "keyup.clickwithevent" ) - .off( "click.clickwithevent" ) - ; - } - }; - })( jQuery ); - // -------------------------------------------------- // - // -------------------------------------------------- // - // Make sure this event works on a direct event binding. - $( "a.link" ).on( - "clickwith", - function( event, trigger ){ - console.log( "LOCAL[ " + trigger + " ]", event ); - } - ); - // Make sure this event works on a delegated event binding. - $( document ).on( - "clickwith", - "a.link", - function( event, trigger ){ - console.log( "GLOBAL[ " + trigger + " ]", event ); - } - ); - // Try manually triggering a click event (which should, in - // turn, trigger a clickwith event, using the Mouse as the - // default device trigger). - $( "a.link:first" ) - .trigger( "click" ) - ; - </script> - </body> - </html> Again, the way the special Events work in jQuery is not something that I have a strong grasp of. As such, I won't try to explain this code too much. If you look at the video above, however, you will see that this is working both for locally-bound events as well as delegated event bindings. For a very in-depth exploration of jQuery special events, take a look at this article by Ben Alman - his understanding is a world better than mine. Looking For A New Job? - Senior Developer at Quality Bicycle Products - ColdFusion Developer at WRIS Web Services - Coldfusion Developer at Cavulus - Web Developer at Townsend Communications, Inc. - Support Programming Manager at InterCoastal Net Designs Reader Comments "...I love to use my keyword as a means to both provide information as well as to navigate from form field to form field." Maverick! @Ross, Ha ha, nice catch! Best case scenario, I just fill fields in with my Mind :D what if u monkey patch the jQuery click event to return a property like let's say origin: "keyboard" or origin: "mouse"? I am curious if the new movement toward touch screens might negate this question altogether? Windows 8 is designed to move us to that environment event : mouse, keyboard, or touch. clicks that come from the enter key, these coordinates are always zero. (Nobody ever mouse-clicks at 0,0.) This essentially creates a simple hook that you can use within click event handlers to discriminate keyboard "clicks" from mouse "clicks". @Ben, Thanks dude! This fixed the problem I was having to differentiate between the keyboard or mouse that triggered the click! :)
http://www.bennadel.com/blog/2369-mouse-vs-keyboard---determining-click-initiator-using-a-jquery-custom-event.htm
CC-MAIN-2016-07
refinedweb
1,471
62.48
My problem is about get a binary stream of bitmap. I'd like to saving the bitmap into my file through std::ofstream stream.Before, I saved my bitmap to file and then read it so I got (char*)buffer which I could include to my file. But this idea is not very optimal so I want to bypass it. I tried two ways, but anyone didn't work. 1. I try with ALLEGRO_FILE. I wanted to get char* buffer of bitmap file and then saved it into my file (it is experimental, it's not about saving just to PNG file but let's suppose). int size; char* buffer = gameCore->bitmapLoader->saveBitmap_f(".png", bmpo, size); std::ofstream file("eksperyment.png", std::ios::out | std::ios::binary | std::ios::trunc); file.write(buffer, size); file.close(); char* BitmapLoader::saveBitmap_f(std::string fileType, ALLEGRO_BITMAP *bitmap, int &size) { size = al_get_bitmap_width(bitmap) * al_get_bitmap_height(bitmap) * 4; void* fBuffer = malloc(size); ALLEGRO_FILE *fp = al_open_memfile(fBuffer, size, "rwb"); if(!al_save_bitmap_f(fp, fileType.c_str(), bitmap)) return NULL; unsigned fileLength = al_ftell(fp); // I don't know exactly if better is size variable or this fileLength. But I thing is this, because is calculate from fp file, have more correctly value than size variable. I think if bitmap is saving to AL_FILE is compressing to its own format? Doesn't have just 800x800 pixels but is saved with PNG headers etc... isn't? But 2,5k KB is just 800x800x4 so it's this value like size. al_fread(fp, fBuffer, fileLength); return (char*)fBuffer; } With this code up, I get from original bitmap 800x800 px (here called "bmpo"), PNG file of weight 2500 KB and it's unreadable.On the beginning I had a trouble with function al_save_bitmap_f(fp, fileType.c_str(), bitmap), but it's probably because malloc of void pointer, I thing allocated not enough memory. Now it's work. 2. Because first thing doesn't work I found code like this. This code wasn't tested. On beginning it didn't work too, but finally I understand better this algorithm and get exactly this same unreadable file like before. std::string buffer = gameCore->bitmapLoader->BitmapBytes(bmpo); std::ofstream file("eksperyment.png", std::ios::out | std::ios::binary | std::ios::trunc); file.write(buffer.c_str(), buffer.size()); file.close(); I left comments in code, it give how I thought. Hi! I'm Pole and my english is not very well. So, please understanding to me. :) I'm surprised that the first method didn't work. For your use case, you might also implement your own file vtable (see al_create_file_handle and the implementation of the memfile addon). Have you tried saving the via al_save_bitmap and seeing how big that file was? "For in much wisdom is much grief: and he that increases knowledge increases sorrow."-Ecclesiastes 1:18[SiegeLord's Abode][Codes]:[DAllegro5]:[RustAllegro] For your use case, you might also implement your own file vtable (see al_create_file_handle and the implementation of the memfile addon). I've never had to deal with it. I must implement those functions: ? I don't know how it will be help me. In those function I write excatly the same code, how wrote before. You mean maybe it change mode of action? Have you tried saving the via al_save_bitmap and seeing how big that file was? Yes, on begin I save this bitmap as physical, temporary file with this function, but this way is not too optimal so I want to change it. In this case original bitmap PNG have 1049 KB, but bitmap PNG saved through this function have 1540 KB. I don't know how it will be help me. In those function I write excatly the same code, how wrote before. You mean maybe it change mode of action? The main difference is that you'd be able to make it be backed by std::vector so you wouldn't need to know the in-memory size of your file ahead of time and you'd also know exactly how big the file is ( al_ftell only tells if you where the last file position is, so if the writer used al_fseek for some reason, it'd tell you something weird).
https://www.allegro.cc/forums/thread/616035
CC-MAIN-2021-49
refinedweb
700
74.9
Instant Standard "When people ask me what we're doing to drive standards, I tell them to go to hell", says James Barry, the new CTO for Jabber, Inc. But there's a twinkle in his voice as he adds the URL: " Hades.jabber.org/ietf". That's where Jabber.org has posted Jabber-rfc, an informational "working document", also known in IETF lingo as an "Internet-draft". RFC more commonly means Request For Comment. IETF is the Internet Engineering Task Force. The latest draft of the document, which runs 80 pages in text format, is dated February 12. In customary open-source fashion, the Jabber folks are exposing the process and inviting participation. James Barry says this is already a "historical" document, for the simple reason that it's a public source of information to which anybody can easily refer. He also thinks it may be historic in another respect. "Not many open-source projects have made the effort to form standards", he says. "They rely on the code itself to do that. So this is a different approach. It's a great way to legitimize an open-source project. It's a forced rigor. We're documenting what we do to a high degree of accuracy and completeness, to fit the conventions of the IETF." Jabber's standards are also less a matter of code than of protocol. Here's the abstract: Jabber is a set of open, XML-based protocols for which there exist multiple implementations. These implementations have been used mainly to provide instant messaging and presence services that are currently deployed on thousands of domains worldwide and are accessed by millions of users daily. Because a standard description of the Jabber protocols is needed to describe this new traffic growing over the Internet, the current document defines the Jabber protocols as they exist today. In addition, this document describes, but does not address, the known deficiencies of the Jabber protocols, since these are being addressed through a variety of standards efforts. The document is unsparing in its description of deficiencies. For example, "At present the Jabber protocols comply only with a subset of the XML namespace specification and do not offer the full flexibility of XML namespaces. In addition it would be beneficial for the Jabber protocols to enable additional types of availability through a properly namespaced sub-element of the <presence/> data type." As with all open-source efforts, what needs to be done matters more than what's been done already. Needless to say, James Barry and other members of the Jabber development community want the document to recruit development help. And since there's a lot of overlap between Jabber and Linux development, we're eager to hear more about the subject from Linux Journal readers as well. Here are three questions that quickly come to mind: Should innovative open-source projects even bother with the standards process? (Others, such as Apache, don't.) Does open source in some ways replace the standards process or render it irrelevant? If not, are there better ways to participate? Doc Searls is Senior Editor of Linux Journal. He is also a member of the Jabber Inc. Open Source Advisory Board.: single-IM protocol vs interop protocol? status 1. Has this RFC actually been submitted to the IETF? If not, does that make it less "real"? 2. I've only glanced at the RFC, but it seems like a specification for a specific rich IM protocol. Meaning that another system like AIM couldn't conform without changing its internal architecture. Wouldn't a better approach be to define an IM interop protocol? Which might imply a more reasonable expectation of conformance by existing vendors? Right now this smells like a red herring to me. But I could certainly be wrong, I'm not well informed on the processes here... --BillSeitz, Re: single-IM protocol vs interop protocol? status Yes, it has been submitted -- I just sent the submission email to the RFC Editor. --stpeter Re: single-IM protocol vs interop protocol? status It will be submitted as an informational document this week ( We hope!) subject to the comments of the Jabber mail list contributors. We have wanted the Open Source community to finish it's comments and submit as an informational document so that it will be published with minimum comment. Then we intend to take the pieces and submit them into the various efforts in the IETF around standardization. There are many protocols IMPP, cPIM, SIMPLE, Presence for SIP, IM for SIP aand those are just the IETF submissions. The IEE, ITU, WAP Forum, IM Unified, Parlay Group, 3GPP, Wireless Village, PAM forum are others that are also working on "standards" . We like the IETF because you submit as individuals and comments are taken rather free flowing like the Open Source effort. And standards are adopted by consensus. Because many of the IM protocols are closed, an interop standard might be a real kludge without intimate help from the proprietary players. Based purely on XML, we hope that interop will come naturally, even though we know there will be issues. Re: Instant Standard Like you wrote, alot of open-source projects "rely" on becoming a defacto standard. Look at Apache. It's becoming more and more a standard because of it's versatile use and of course the open-source nature of the product. To me it is more valuable then a standard imposed by some sort of comittee. Why? It (Apache) has proven itself to be versatile and powerfull enough for people who would otherwise have chosen another product. As opposed to a standard enforced upon me where the product itself has not proven itself to be "worthy" enough. I hear alot of talk about Jabber but haven't seen any serious real world application of it. At the very least i should have noticed by now that a product was "powered by" Jabber. I'm not saying Jabber is a bad product but if Jabber becomes a standard while not being used much it has no value. A standard should be widespread and widely used before it even becomes a standard. This way, it has a track record worthy of a standard. Or at least that's my humble opinion.... Re: Instant Standard You're making somehting of an apples-to-oranges comparison here. If Apache is a standard, it's only "an industry standard" in the respect that a lot of people use it, and it was originally built upon the NCSA reference codebase. But that's a very different phenomenon than what we see with Jabber. Apache relies on a real, honest to god, IETF standard already: RFC 2616. Jabber, on the other hand, seeks to actually become an IETF standard. Don't confuse standards of the "we all agree that it should work like this type," with the "we all use this because it works well" type. Re: Instant Standard Jabber, the Open Source project, came along because several proprietary players atarted the market and did not allow interoperability. Apache came out of the need to advance a government sponsored project (NCSA) in lieu of commercial alternatives at the time. Brian, Randy and the others needed better features for their commercial sites they were building and operating, and their were no commercial alternatives for their commecial sites. Thus Apache took off before the commercial guys were ready with their goods. (I know - I was at one of the commercial alternatives at the time) Jabber started after the commercial companies had locked up the market. Like Linux it is battling well entrenched players, yet making headway. It is widely used with thousands of servers, millions of users and dozens of servers based on the Jabber codebase. But because of the high value put on messaging by a large companies an interoperable standard has not yet emerged. If Jabber had appeared before the commercial versions it would be the defacto standard. So the Jabber community is in the position of trying to united the commercial software and content players with a standards push. There are over 20 standards efforts in the Instant Messaging and Presence area, yet none seem to have taken off. And to date there is no real interoperability between services. Jabber's XML play is an attempt to offer one up. I think that Apache has become the defacto standard by being "good enough" that there is no reason to go to another product for Web servicng needs. The Apache core has not made an effort to get the http standard updated. The question is would working with the standards community to work with the http standard make the Net better? Jabber is another case of being good enough. But because it started after the commercial player got market share, I think by going the standards route, it might still become the defacto standard. I think this is a first by an Open Source effort. So we hope to lead the way in opening up the market. Are you aware of another Open Source project that has gone the standards route? Re: Instant Standard The main reason you don't see "powered by jabber" out there is because Jabber (the company) is targeting larger installations and companies which use jabber as a standard internally. You are not going to see it on mom and pop shops and smaller companies. It is out there though, go read they press releases. Re: Instant Standard Well, go.com's messenger is a Jabber system. Plus there are over 20,000 domains with Jabber servers. Seems to be getting some use. ;-) It's my primary messaging system, although I do rely on the transports some. Re: Instant Standard I negelected to say that this effort is in addition to widespread code adoption. If Jabber were not already in use all over the place -- and growing rapidly -- there would be little point to this kind of effort.
http://www.linuxjournal.com/article/5838?quicktabs_1=2
CC-MAIN-2016-26
refinedweb
1,663
64.41
NAME poll - synchronous I/O multiplexing LIBRARY Standard C Library (libc, -lc) SYNOPSIS #include <poll.h> int poll(struct pollfd fds[], nfds_t nfds, int timeout); DESCRIPTION The poll() system call examines a set of file descriptors to see if some of them are ready for I/O. The fds argument is a pointer to an array of pollfd structures as defined in #include <poll.h> (shown below). The nfds argument blocking.. This flag is always checked, even if not present in the events bitmask. If timeout is neither zero nor INFTIM (-1), it specifies a maximum interval to wait for any file descriptor to become ready, in milliseconds. If timeout is INFTIM (-1), the poll blocks indefinitely. If timeout is zero, then poll() will return without blocking. RETURN VALUES The poll() system call returns the number of descriptors that are ready for I/O, or -1 if an error occurred. If the time limit expires, poll() returns 0. If poll() returns with an error, including one due to an interrupted.
http://manpages.ubuntu.com/manpages/intrepid/man2/poll.2freebsd.html
CC-MAIN-2015-35
refinedweb
170
66.03
A transducer is a composable way of processing a series of values. Many basic transducers correspond to functions you may be familiar with for processing Lists. Transducers can be used to combine processing operations in a way that allows processing to be done more efficiently. When using List.map, it is more efficient to compose multiple functions and then map the list with the composed function than to map the list with each function independently because the list will only be traversed once. Similarly, transducers can be used to process Lists more efficiently, but it is not limited to mapping operations. filter, take, drop, and any other transducer can be efficiently composed. import List as L import Transducer as T exposing ((>>>)) slowMapChain = [1, 2, 3] |> L.map ((+) 10) |> L.map toString fastMapChain = [1, 2, 3] |> L.map ((+) 10 >> toString) slowChain = [1, 2, 3] |> L.filter ((/=) 2) |> L.map toString fastChain = [1, 2, 3] |> T.transduceList (T.filter ((/=) 2) >>> T.map toString) Transducers can be reused with many different data types. List, Array, Set, Dict are supported by this library, and you can define your own transducer processes to work with other data types. You can also define transducer processes that convert between types (for example, transducing from a List into a Set). import Maybe import String import Transducer as T exposing ((>>>)) import Result exposing (toMaybe) import Set exposing (Set) parseValidInts = T.map String.toInt >>> T.map toMaybe >>> T.filter ((/=) Nothing) >>> T.map (Maybe.withDefault 0) exampleList : List Int exampleList = T.transduceList parseValidInts [ "123", "-34", "35.0", "SDF", "7" ] exampleConvert : Set Int exampleConvert = T.transduce List.foldr Set.insert Set.empty parseValidInts [ "123", "-34", "35.0", "SDF", "7" ] Reducertype to be a -> r -> rinstead of r -> a -> r
https://package.frelm.org/repo/208/1.0.0
CC-MAIN-2019-09
refinedweb
287
60.72
Frequently Asked Questions¶ Here we try to give some answers to questions that regularly pop up on the mailing list. What is the project name (a lot of people get it wrong)?¶ scikit-learn, but not scikit or SciKit nor sci-kit learn. Also not scikits.learn or scikits-learn, which were previously used. Why scikit?¶ There are multiple scikits, which are scientific toolboxes built around SciPy. You can find a list at Apart from scikit-learn, another popular one is scikit-image. How can I contribute to scikit-learn?¶ See Contributing. Before wanting to add a new algorithm, which is usually a major and lengthy undertaking, it is recommended to start with known issues. Please do not contact the contributors of scikit-learn directly regarding contributing to scikit-learn. What’s the best way to get help on scikit-learn usage?¶ For general machine learning questions, please use Cross Validated with the [machine-learning] tag. For scikit-learn usage questions, please use Stack Overflow with the [scikit-learn] and [python] tags. You can alternatively use the mailing list. Please make sure to include a minimal reproduction code snippet (ideally shorter than 10 lines) that highlights your problem on a toy dataset (for instance from sklearn.datasets or randomly generated with functions of numpy.random with a fixed random seed). Please remove any line of code that is not necessary to reproduce your problem. The problem should be reproducible by simply copy-pasting your code snippet in a Python shell with scikit-learn installed. Do not forget to include the import statements. More guidance to write good reproduction code snippets can be found at: If your problem raises an exception that you do not understand (even after googling it), please make sure to include the full traceback that you obtain when running the reproduction script. For bug reports or feature requests, please make use of the issue tracker on GitHub. There is also a scikit-learn Gitter channel where some users and developers might be found. Please do not email any authors directly to ask for assistance, report bugs, or for any other issue related to scikit-learn. How should I save, export or deploy estimators for production?¶ See Model persistence. How can I create a bunch object?¶ Don’t make a bunch object! They are not part of the scikit-learn API. Bunch objects are just a way to package some numpy arrays. As a scikit-learn user you only ever need numpy arrays to feed your model with data. For instance to train a classifier, all you need is a 2D array X for the input variables and a 1D array y for the target variables. The array X holds the features as columns and samples as rows . The array y contains integer values to encode the class membership of each sample in X. How can I load my own datasets into a format usable by scikit-learn?¶ Generally, scikit-learn works on any numeric data stored as numpy arrays or scipy sparse matrices. Other types that are convertible to numeric arrays such as pandas DataFrame are also acceptable. For more information on loading your data files into these usable data structures, please refer to loading external datasets. What are the inclusion criteria for new algorithms ?¶ We only consider well-established algorithms for inclusion. A rule of thumb is at least 3 years since publication, 200+ citations and wide use and usefulness. A technique that provides a clear-cut improvement (e.g. an enhanced data structure or a more efficient approximation technique) on a widely-used method will also be considered for inclusion. From the algorithms or techniques that meet the above criteria, only those which fit well within the current API of scikit-learn, that is a fit, predict/transform interface and ordinarily having input/output that is a numpy array or sparse matrix, are accepted. The contributor should support the importance of the proposed addition with research papers and/or implementations in other similar packages, demonstrate its usefulness via common use-cases/applications and corroborate performance improvements, if any, with benchmarks and/or plots. It is expected that the proposed algorithm should outperform the methods that are already implemented in scikit-learn at least in some areas. Inclusion of a new algorithm speeding up an existing model is easier if: - it does not introduce new hyper-parameters (as it makes the library more future-proof), - it is easy to document clearly when the contribution improves the speed and when it does not, for instance “when n_features >> n_samples”, - benchmarks clearly show a speed up. Also note that your implementation need not be in scikit-learn to be used together with scikit-learn tools. You can implement your favorite algorithm in a scikit-learn compatible way, upload it to GitHub and let us know. We will be happy to list it under Related Projects. If you already have a package on GitHub following the scikit-learn API, you may also be interested to look at scikit-learn-contrib. Why are you so selective on what algorithms you include in scikit-learn?¶ Code is maintenance cost, and we need to balance the amount of code we have with the size of the team (and add to this the fact that complexity scales non linearly with the number of features). The package relies on core developers using their free time to fix bugs, maintain code and review contributions. Any algorithm that is added needs future attention by the developers, at which point the original author might long have lost interest. See also What are the inclusion criteria for new algorithms ?. For a great read about long-term maintenance issues in open-source software, look at the Executive Summary of Roads and Bridges Why did you remove HMMs from scikit-learn?¶ See Will you add graphical models or sequence prediction to scikit-learn?. Will you add graphical models or sequence prediction to scikit-learn?¶ Not in the foreseeable future. scikit-learn tries to provide a unified API for the basic tasks in machine learning, with pipelines and meta-algorithms like grid search to tie everything together. The required concepts, APIs, algorithms and expertise required for structured learning are different from what scikit-learn has to offer. If we started doing arbitrary structured learning, we’d need to redesign the whole package and the project would likely collapse under its own weight. There are two project with API similar to scikit-learn that do structured prediction: - pystruct handles general structured learning (focuses on SSVMs on arbitrary graph structures with approximate inference; defines the notion of sample as an instance of the graph structure) - seqlearn handles sequences only (focuses on exact inference; has HMMs, but mostly for the sake of completeness; treats a feature vector as a sample and uses an offset encoding for the dependencies between feature vectors) Will you add GPU support?¶ No, or at least not in the near future. The main reason is that GPU support will introduce many software dependencies and introduce platform specific issues. scikit-learn is designed to be easy to install on a wide variety of platforms. Outside of neural networks, GPUs don’t play a large role in machine learning today, and much larger gains in speed can often be achieved by a careful choice of algorithms. Do you support PyPy?¶ In case you didn’t know, PyPy is an alternative Python implementation with a built-in just-in-time compiler. Experimental support for PyPy3-v5.10+ has been added, which requires Numpy 1.14.0+, and scipy 1.1.0+. How do I deal with string data (or trees, graphs…)?¶ scikit-learn estimators assume you’ll feed them real-valued feature vectors. This assumption is hard-coded in pretty much all of the library. However, you can feed non-numerical inputs to estimators in several ways. If you have text documents, you can use a term frequency features; see Text feature extraction for the built-in text vectorizers. For more general feature extraction from any kind of data, see Loading features from dicts and Feature hashing. Another common case is when you have non-numerical data and a custom distance (or similarity) metric on these data. Examples include strings with edit distance (aka. Levenshtein distance; e.g., DNA or RNA sequences). These can be encoded as numbers, but doing so is painful and error-prone. Working with distance metrics on arbitrary data can be done in two ways. Firstly, many estimators take precomputed distance/similarity matrices, so if the dataset is not too large, you can compute distances for all pairs of inputs. If the dataset is large, you can use feature vectors with only one “feature”, which is an index into a separate data structure, and supply a custom metric function that looks up the actual data in this data structure. E.g., to use DBSCAN with Levenshtein distances: >>> from leven import levenshtein >>> import numpy as np >>> from sklearn.cluster import dbscan >>> data = ["ACCTCCTAGAAG", "ACCTACTAGAAGTT", "GAATATTAGGCCGA"] >>> def lev_metric(x, y): ... i, j = int(x[0]), int(y[0]) # extract indices ... return levenshtein(data[i], data[j]) ... >>> X = np.arange(len(data)).reshape(-1, 1) >>> X array([[0], [1], [2]]) >>> # We need to specify algoritum='brute' as the default assumes >>> # a continuous feature space. >>> dbscan(X, metric=lev_metric, eps=5, min_samples=2, algorithm='brute') ... ([0, 1], array([ 0, 0, -1])) (This uses the third-party edit distance package leven.) Similar tricks can be used, with some care, for tree kernels, graph kernels, etc. Why do I sometime get a crash/freeze with n_jobs > 1 under OSX or Linux?¶ Several scikit-learn tools such as GridSearchCV and cross_val_score rely internally on Python’s multiprocessing module to parallelize execution onto several Python processes by passing n_jobs > 1 as argument. The problem is that Python multiprocessing does a fork system call without following it with an exec system call for performance reasons. Many libraries like (some versions of) Accelerate / vecLib under OSX, (some versions of) MKL, the OpenMP runtime of GCC, nvidia’s Cuda (and probably many others), manage their own internal thread pool. Upon a call to fork, the thread pool state in the child process is corrupted: the thread pool believes it has many threads while only the main thread state has been forked. It is possible to change the libraries to make them detect when a fork happens and reinitialize the thread pool in that case: we did that for OpenBLAS (merged upstream in master since 0.2.10) and we contributed a patch to GCC’s OpenMP runtime (not yet reviewed). But in the end the real culprit is Python’s multiprocessing that does fork without exec to reduce the overhead of starting and using new Python processes for parallel computing. Unfortunately this is a violation of the POSIX standard and therefore some software editors like Apple refuse to consider the lack of fork-safety in Accelerate / vecLib as a bug. In Python 3.4+ it is now possible to configure multiprocessing to use the ‘forkserver’ or ‘spawn’ start methods (instead of the default ‘fork’) to manage the process pools. To work around this issue when using scikit-learn, you can set the JOBLIB_START_METHOD environment variable to ‘forkserver’. However the user should be aware that using the ‘forkserver’ method prevents joblib.Parallel to call function interactively defined in a shell session. If you have custom code that uses multiprocessing directly instead of using it via joblib you can enable the ‘forkserver’ mode globally for your program: Insert the following instructions in your main script: import multiprocessing # other imports, custom code, load data, define model... if __name__ == '__main__': multiprocessing.set_start_method('forkserver') # call scikit-learn utils with n_jobs > 1 here You can find more default on the new start methods in the multiprocessing documentation. Why does my job use more cores than specified with n_jobs under OSX or Linux?¶ This happens when vectorized numpy operations are handled by libraries such as MKL or OpenBLAS. While scikit-learn adheres to the limit set by n_jobs, numpy operations vectorized using MKL (or OpenBLAS) will make use of multiple threads within each scikit-learn job (thread or process). The number of threads used by the BLAS library can be set via an environment variable. For example, to set the maximum number of threads to some integer value N, the following environment variables should be set: - For MKL: export MKL_NUM_THREADS=N - For OpenBLAS: export OPENBLAS_NUM_THREADS=N Why is there no support for deep or reinforcement learning / Will there be support for deep or reinforcement learning in scikit-learn?¶ Deep learning and reinforcement learning both require a rich vocabulary to define an architecture, with deep learning additionally requiring GPUs for efficient computing. However, neither of these fit within the design constraints of scikit-learn; as a result, deep learning and reinforcement learning are currently out of scope for what scikit-learn seeks to achieve. You can find more information about addition of gpu support at Will you add GPU support?. Why is my pull request not getting any attention?¶ The scikit-learn review process takes a significant amount of time, and contributors should not be discouraged by a lack of activity or review on their pull request. We care a lot about getting things right the first time, as maintenance and later change comes at a high cost. We rarely release any “experimental” code, so all of our contributions will be subject to high use immediately and should be of the highest quality possible initially. Beyond that, scikit-learn is limited in its reviewing bandwidth; many of the reviewers and core developers are working on scikit-learn on their own time. If a review of your pull request comes slowly, it is likely because the reviewers are busy. We ask for your understanding and request that you not close your pull request or discontinue your work solely because of this reason. How do I set a random_state for an entire execution?¶ For testing and replicability, it is often important to have the entire execution controlled by a single seed for the pseudo-random number generator used in algorithms that have a randomized component. Scikit-learn does not use its own global random state; whenever a RandomState instance or an integer random seed is not provided as an argument, it relies on the numpy global random state, which can be set using numpy.random.seed. For example, to set an execution’s numpy global random state to 42, one could execute the following in his or her script: import numpy as np np.random.seed(42) However, a global random state is prone to modification by other code during execution. Thus, the only way to ensure replicability is to pass RandomState instances everywhere and ensure that both estimators and cross-validation splitters have their random_state parameter set. Why do categorical variables need preprocessing in scikit-learn, compared to other tools?¶ Most of scikit-learn assumes data is in NumPy arrays or SciPy sparse matrices of a single numeric dtype. These do not explicitly represent categorical variables at present. Thus, unlike R’s data.frames or pandas.DataFrame, we require explicit conversion of categorical features to numeric values, as discussed in Encoding categorical features. See also Column Transformer with Mixed Types for an example of working with heterogeneous (e.g. categorical and numeric) data. Why does Scikit-learn not directly work with, for example, pandas.DataFrame?¶ The homogeneous NumPy and SciPy data objects currently expected are most efficient to process for most operations. Extensive work would also be needed to support Pandas categorical types. Restricting input to homogeneous types therefore reduces maintenance cost and encourages usage of efficient data structures.
https://scikit-learn.org/0.21/faq.html
CC-MAIN-2022-21
refinedweb
2,619
54.12
Explore how the accuracy of your monte_carlo_pi(N) integration varies with N. To do this, you will call monte_carlo_pi(N) function 100 times for each of the following values of N [10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10000]. Then for each N value, calculate the average and standard deviations of your estimates. Write a function accuracy() that does this work, and returns an array having one row for each N value, and three columns: the first containing the N value, the 2nd should contain the average of the 100 calculations and the 3rd is the standard deviation (ddof=1). This will function may take several seconds to execute. You might want to have it print out something periodically so that you know its still running (perhaps one line per N value). While you are developing and testing this, you may want to omit the largest couple of values of N. My code for monte_carlo_pi(N) is as below: import numpy as np def monte_carlo_pi(N): x = np.random.random(N) y = np.random.random(N) count = 0 for i in range(N): if x[i]**2 + y[i]**2 < 1: count += 1 ratio = (count/N)*4 return ratio And this is what I’ve done so far: def accuracy(): #putting the N values into an array for easy manipulation: N = np.array([10, 20, 50, 100, 200, 500, 1000, 2000, 5000, 10000]) #calling the function for each N value: for n in N: row = np.array([n]) #calling the function 100 times: arr = np.array([]) for n in range(1,101): monte = monte_carlo_pi(n) arr = np.append(arr, monte) #storing the values into an array mean = np.sum(arr) / 100 std = np.std(mean, ddof=1) #appending the value into each row: row = np.append(row, mean) row = np.append(row, std) #making a new array N_arr = np.array([]) N_arr = np.append(N_arr, row) return N_arr Please help!!! I’ll love you forever!
https://discuss.codecademy.com/t/python-help/440038
CC-MAIN-2019-43
refinedweb
326
66.23
can any one help the approach how to handle better ways for Re-usable REST Request (Test Steps) in multiple test cases I have a scenarios where i need to Re-Use REST Requests in Multiples of test cases as Test Steps What is the Best approach for this solution on ReadyAPI, I am using Ready API 2.6v Solved! Go to Solution. Hi @678, Did additional suggestions help? @678 - You can try below approach:- def tCase = testRunner.testCase.testSuite.testCases["Name of test case"] def tStep = tCase.testSteps["test step you want to run"] tStep.run(testRunner, context) Thanks for your help, avidCoder! @678, is this what you are looking for? So,i need the place this REST Request steps in one testsuite and need to call from there where ever i required by using Groovy Script step ? Place the rest request in its own test case then use the Run TestCase step to call the test case. If you need to pass data you can use the property transfer step to send the data to a properties step.
https://community.smartbear.com/t5/SoapUI-Pro/Reusable-REST-Request-approach/m-p/181636
CC-MAIN-2019-39
refinedweb
180
81.33
In NLP, the pipeline is the concept of integrating various text processing components together such that, the output of one component serves as the input for the next component. Spacy provides built-in functionality of pipelines that can be set up quite easily. In this tutorial, we will take you through the features of the Spacy NLP Pipeline along with examples. Spacy NLP Pipeline Spacy NLP pipeline lets you integrate multiple text processing components of Spacy, whereas each component returns the Doc object of the text that becomes an input for the next component in the pipeline. We can easily play around with the Spacy pipeline by adding, removing, disabling, replacing components as per our needs. Moreover, you can also customize the pipeline components if required. Spacy NLP Pipeline Components The default components of a trained pipeline include tagger, lemmatizer, parser, and entity recognizers. We can improve the efficiency of this pipeline process by only enabling those components which are needed or by processing the texts as a stream using nlp.pipe and buffer them in batches, instead of one-by-one. We can initialize them by calling nlp.add_pipe with their names and Spacy will automatically add them to the nlp pipeline. Below is the list of different NLP components and their description. Below are the components that are available in Spacy Pipeline. Adding Custom Attributes In Spacy, we can add metadata in the context and save it in custom attributes using nlp.pipe. This could be done by passing the text and its context in tuples form and passing a parameter astuples=True. The output will be a sequence of doc and context. In the example below, we are passing a list of texts along with some custom attributes to nlp.pipe and setting those attributes to the doc using doc. import spacy from spacy.tokens import Doc if not Doc.has_extension("text_id"): Doc.set_extension("text_id", default=None) text_tuples = [("This is the first text.", {"text_id": "text1"}), ("This is the second text.", {"text_id": "text2"})] nlp = spacy.load("en_core_web_sm") doc_tuples = nlp.pipe(text_tuples, as_tuples=True) docs = [] for doc, context in doc_tuples: doc._.text_id = context["text_id"] docs.append(doc) for doc in docs: print(f"{doc._.text_id}: {doc.text}") text1: This is the first text. text2: This is the second text. Multiprocessing Spacy has provided a built-in multiprocessing option with nlp.pipe using the n_process. This will greatly increase the performance of the nlp pipeline. We can use this to make it a multiprocessing task or also make it multiprocessing with as many processes as CPUs can afford by passing n_process=-1 to nlp.pipe. However, this should be used with caution. We can also set our own batch_size in the nlp pipeline which is 1000 by default. For shorter tasks, it can be faster to use a smaller number of processes with a larger batch size. The optimal batch_size setting will depend on the pipeline components, the length of your documents, the number of processes, and how much memory is available. docs = nlp.pipe(texts, n_process=4, batch_size=2000) Spacy Pipeline Under the Hood Spacy pipeline package consists of three components: the weights, i.e. binary data loaded in from a directory, a pipeline of functions called in order, and language data like the tokenization rules and language-specific settings. A Spanish NER pipeline requires different weights, language data, and components than an English parsing and tagging pipeline. This is also why the pipeline state is always held by the Language class. spacy.load puts this all together and returns an instance of Language with a pipeline set and access to the binary data. import spacy nlp = spacy.load("en_core_web_sm") doc = nlp(text) When we load a pipeline, Spacy first consults the meta.json and config.cfg. The config tells Spacy what language class to use, which components are in the pipeline, and how those components should be created. - Load the language class and data for the given ID via get_lang_class and initialize it. The Language class contains the shared vocabulary, tokenization rules, and language-specific settings. - Iterate over the pipeline names and look up each component name in the [components] block. The factory tells Spacy which component factory to use for adding the component with add_pipe. The settings are passed into the factory. - Make the model data available to the Language class by calling from_disk with the path to the data directory. Sample CONFIG.CFG The pipeline’s config.cfg tells Spacy to use the language “en” and the pipeline [“tok2vec”, “tagger”, “parser”, “ner”, “attribute_ruler”, “lemmatizer”]. Spacy will then initialize spacy.lang.en.English, and create each pipeline component and add it to the processing pipeline. It’ll then load in the model data from the data directory and return the modified Language class for you to use as the nlp object. lang = "en" pipeline = ["tok2vec", "parser"] factory = "tok2vec" # Settings for the tok2vec component factory = "parser" # Settings for the parser component Spacy first tokenizes the text, loads the model data, and then calls each component in order. The component then accesses the model data to assign annotations to Doc object, token, or to the span of doc object. The modified document returned by a component is passed to the next component for processing in the pipeline. The output from one component serves as the input for another component for example part of speech tag assigned to a token server as input data for lemmatizer. However, some of the components work independently such as tagger and parser, and don’t require data from any other components. doc = nlp.make_doc("This is a sentence") # Create a Doc from raw text for name, proc in nlp.pipeline: # Iterate over components in order doc = proc(doc) # Apply each component When can get the list of processing pipelines with the help of nlp.pipeline which returns a list of tuples containing component name and component itself. Whereas nlp.pipe_names returns all the component’s names. print(nlp.pipeline) print(nlp.pipe_names) [('tok2vec', <spacy.pipeline.Tok2Vec>), ('tagger', <spacy.pipeline.Tagger>), ('parser', <spacy.pipeline.DependencyParser>), ('ner', <spacy.pipeline.EntityRecognizer>), ('attribute_ruler', <spacy.pipeline.AttributeRuler>), ('lemmatizer', <spacy.lang.en.lemmatizer.EnglishLemmatizer>)] ['tok2vec', 'tagger', 'parser', 'ner', 'attribute_ruler', 'lemmatizer'] Customizing Pipeline In Spacy, we can customize our NLP pipeline. That is we can add, disable, exclude and modify components in the pipeline. This could make a big difference in processed text and will improve loading and inference speed. For example, if we don’t need a tagger or parser we can disable or exclude them from the pipeline. Disable Component We can disable the pipeline component while loading the pipeline by using disable keyword. It will be included but disabled by default. The component and its data will be loaded with the pipeline, but it will be disabled and not run as part of the processing pipeline. However, we can explicitly enable it when needed by calling nlp.enable_pipe. For example, the trained Spacy pipeline ‘en_core_web_sm’ contains both a parser and senter that perform sentence segmentation, but the senter is disabled by default. import spacy nlp = spacy.load("en_core_web_sm", disable=["tagger", "parser"]) # Loading the tagger and parser but don't enable them. doc = nlp("This sentence wouldn't be tagged and parsed") nlp.enable_pipe("tagger") # Explicitly enabling the tagger later on. doc = nlp("This sentence will only be tagged") We can use the nlp.select_pipes context manager to temporarily disable certain components for a given block. The select_pipes returns an object that lets us call its restore() method to restore the disabled components when needed. This could be useful if we want to prevent unnecessary code indentation of large blocks. import spacy disabled = nlp.select_pipes(disable=["tagger", "parser"]) disabled.restore() doc = nlp("This senetence will be tagged as well as parsed") If we want to disable all pipes except for one or a few, we can use the enable keyword. import spacy nlp=spacy.load(enable="parser") # Enable only the parser doc = nlp("This sentence will only be parsed") Exclude Components In Spacy, we can also exclude a component by passing exclude keyword along with the list of excluded components. Unlike diable, it will not load the component and its data with the pipeline. Once the pipeline is loaded, there will be no reference to the excluded or include any components. import spacy nlp = spacy.load("en_core_web_sm", exclude=["ner"]) # Load the pipeline without the entity recognizer doc = nlp("NER will be excluded from the pipeline") We can also use the remove_pipe method to remove pipeline components from an existing pipeline, the rename_pipe method to rename them, or the replace_pipe method to replace them with a custom component entirely. nlp.remove_pipe("parser") nlp.rename_pipe("ner", "entityrecognizer") nlp.replace_pipe("tagger", "my_custom_tagger") Analyzing Components In Spacy we can analyze the pipeline components using the nlp.analyze method which returns information about the components such as the attributes they set on the Doc and Token, whether they retokenize the Doc and which scores they produce during training. It will also show warnings if components require values that aren’t set by the previous component – for instance if the entity linker is used but no component that runs before it sets named entities. Setting pretty=True will pretty-print a table instead of only returning the structured data. import spacy nlp = spacy.blank("en") nlp.add_pipe("tagger") # This is a problem because it needs entities and sentence boundaries nlp.add_pipe("entity_linker") analysis = nlp.analyze_pipes() print("output 1:") print(analysis) analysis = nlp.analyze_pipes(pretty=True) print("Output 2:") print(analysis) Output 1: { "summary": { "tagger": { "assigns": ["token.tag"], "requires": [], "scores": ["tag_acc", "pos_acc", "lemma_acc"], "retokenizes": false }, "entity_linker": { "assigns": ["token.ent_kb_id"], "requires": ["doc.ents", "doc.sents", "token.ent_iob", "token.ent_type"], "scores": [], "retokenizes": false } }, "problems": { "tagger": [], "entity_linker": ["doc.ents", "doc.sents", "token.ent_iob", "token.ent_type"] }, "attrs": { "token.ent_iob": { "assigns": [], "requires": ["entity_linker"] }, "doc.ents": { "assigns": [], "requires": ["entity_linker"] }, "token.ent_kb_id": { "assigns": ["entity_linker"], "requires": [] }, "doc.sents": { "assigns": [], "requires": ["entity_linker"] }, "token.tag": { "assigns": ["tagger"], "requires": [] }, "token.ent_type": { "assigns": [], "requires": ["entity_linker"] } } } Output 2: ============================= Pipeline Overview ============================= # Component Assigns Requires Scores Retokenizes - ------------- --------------- -------------- ----------- ----------- 0 tagger token.tag tag_acc False 1 entity_linker token.ent_kb_id doc.ents nel_micro_f False doc.sents nel_micro_r token.ent_iob nel_micro_p token.ent_type ================================ Problems (4) ================================ ⚠ 'entity_linker' requirements not met: doc.ents, doc.sents, token.ent_iob, token.ent_type Creating Custom Components In Spacy we can create our own custom pipeline component and add it to the nlp pipeline. We create a pipeline component like any other function except first describe the function as a pipeline component using @Language.component decorator. This pipeline component will be listed in the pipeline config to save, load, and train pipeline using our component. The custom components can be added to the pipeline using the add_pipe method. we can also specify the component position in the pipeline list. import spacy nlp= spacy.load("en_core_web_sm") @Language.component("my_component") # creating component. def my_component(doc): # Do something to the doc here return doc nlp.add_pipe("my_component", first=True) # adding it to the pipeline. nlp.add_pipe("my_component", before="parser") - Also Read – Tutorial for Stopwords in Spacy Library - Also Read – Complete Guide to Spacy Tokenizer with Examples Reference – Spacy Documentation
https://machinelearningknowledge.ai/spacy-nlp-pipeline-tutorial-for-beginners/
CC-MAIN-2022-33
refinedweb
1,846
50.23
The 'Trick' To Algorithmic Coding Interview Questions (dice.com) 208 Nerval's Lobster writes: Ah, the famous "Google-style" algorithmic coding interview. If you've never had one of these interviews before, the idea is to see if you can write code that's not only correct, but efficient, too. You can expect to spend lots of time diagramming data structures and talking about big O notation. Popular hits include "reverse a linked list in place," "balance a binary search tree," and "find the missing number in an array." Like it or not, a "Google-style" coding interview may stand between you and your next job, so it's in your interest to figure out how to deal with it. Parker Phinney, founder of Interview Cake, uses a Dice column to break down a variety of example problems and then solve them. But it's not just about mastering the most common kinds of problems by rote memorization; it's also about recognizing the patterns that underlie those problems. Alternate headline (Score:5, Funny) Re:Alternate headline (Score:5, Insightful) Re: (Score:2) No offense, but why the hell would you want to work there without being an engineer and getting the stock options? Re:Alternate headline (Score:5, Insightful) No offense, but why the hell would you want to work there without being an engineer and getting the stock options? Because you still get a lot of the benefits, as well, even though you explicitly don't get all of them, since companies are required to not treat contractors exactly the same as employees, including having limited terms of employment, and "air gaps" in employment history with the company. But it's not like you don't get the food, or access to most of the athletic stuff, etc.. Plus, you get to hang out with very smart people, and, if you impress them, it's possible that they will pursue you for full time employment. Even without that, however: you get to put "Google" on your resume. Re:Alternate headline (Score:5, Insightful) You're shortchanging yourself with 2nd tier work. (Score:2) Being a contractor gets your foot into the door to demonstrate your abilities No, that's the probationary period as a direct hire. To do so as a contractor is to give up any ground one might have. Also, roasted duck and mac-n-cheese on Fridays is a killer combo at one of the cafeterias. Had similar options as a directly-hired person for a certain East-Coast based media conglomerate. Re:Alternate headline (Score:4, Insightful) In my experience, stock options are a pain. Work 3 to 5 years, earn an extra ten thousand dollars overall from the options. It is rare to make a ton of money from options. Always get the salary up front. And don't let the options tie you down if you don't like the job, because I've seen people stick around depressed hoping that their big bonus will come in next year. In other words, become a second-class citizen. (Score:2) They use contractors since they think it is bad for such people to receive good benefits. Re: Alternate headline (Score:2) Re: (Score:2) Memory is only wasted if it isn't used. Since the spec didn't call out any limitations on memory useage then that point is moot. Re:Alternate headline (Score:4, Informative) what is the alternative when we need an o(1) access time? Insertion sort and infix trees work well. The allow you to make specific assumptions about the data you are traversing. TL;DR? (Score:2) Re:TL;DR? (Score:5, Informative) Learn the 40 examples in TFA off by heart I've worked at several companies that do this style of interview, and interviewed well over 100 people this way. Any question you can just Google the answer for is a stupid interview question - though is may be used for a phone screen, where the real test is: can you code at all, not can you solve it. I use questions where everyone who codes for a living will get the answer eventually, and measure how quickly it was solved, how good the code is, were errors and corner cases thought through, and so on. I use problems related to real problems I've worked on in my career. I find that's a better way to reliably sort candidates. Others use very difficult questions where they don't expect most people to solve them without hints. I don't like that approach myself. For those questions, learning the algorithms common to these questions (which go in and out of fashion) is good practice. Four I'd refresh myself on before an interview are: * Code some graph-exploration with backtracking, like a maze explorer * Remember how A* works, and code it (or at least be able to code a breadth-first search without pause) * Look up how O(n) median (or k'th element) works, and code it (median problems used to be in fashion, and array-partitioning of some sort is ever popular) * Radix sort and hash tables - it seems the sub-O(n*log(n)) sorting question and related search questions never die Questions to gauge your comfort with recursion and pointers are also common, but you really shouldn't have to practice those. (Pattern matching in strings used to be another popular question, but I haven't heard of anyone using that for a long time now). The good questions will be stuff there's no way to practice for, but I've found those four to be just generally good practice to knock the rust off the stupid algorithmic stuff that only comes up in job interviews - but practice on a whiteboard, not a keyboard. Sorting candidates. (Score:5, Insightful) I use problems related to real problems I've worked on in my career. I find that's a better way to reliably sort candidates. I find that the best way to sort candidates is to use a "sorting hat". Mostly I try to hire hire Ravenclaw. Unless it's a help desk position; then it's almost always a Hufflepuff. Re:TL;DR? (Score:4, Insightful) The last time I was asked to work an algorithm on a whiteboard during an interview, I straight up said: I'm not comfortable tackling this in a 45 minute interview. I did not get the job, but I went on to get a better job where I was given hard problems and expected to actually think them before solving them, without any need for a frenetic rush. Re:TL;DR? (Score:4, Insightful) I will never work again at a company that doesn't screen programmers with some sort of difficult coding questions during the interview process. The last time I did, the place was full of people who couldn't code for shit (but had very impressive resumes). I hate "puzzle" questions, but proving you can code something non-trivial and being judged on the quality of that code seems to me to be the most objective and fair way to judge a candidate's technical ability. Re: (Score:2) I did my own initial code with the help of a comp sci grad. Eventually, we grew to the point where I was no longer able to do everything and things got increasingly complex - more than my ability, or were working in that direction. The first couple of hires came from people with a proven track record in a related field (they were in the transportation engineering realm - similar enough). After that, I had them sit in on the interviews - that's the person they'd be working with, after all. I also figured it Re:TL;DR? (Score:5, Insightful). Re: TL;DR? (Score:2) Yup Re: (Score:2) Not necessarily. You can google pretty much any answer. But often that's not enough. In antivirus analysis, part of what you do is to disassemble trojans and find out what they do, and how they do it. You cannot google everything there. More importantly, you have to be able to spot and dissect "interesting" bits quickly because time is not on your side. And that requires you to know. Not to know where to look something up. An example I often used to screen prospective applicants was this: You find this bit of Re: (Score:2) Re: (Score:2) Well, I want to know if you will write decent code under pressure (as in, the second half of every coding project). Even small examples are enough to see whether you talked through the design and asked questions before diving in, whether errors are handled or even checked for, etc, etc. Coding style shines through even small problems (as long as they're nontrivial). What you can't measure is the stuff the IDE really does for you. There's nothing worse than "compiler trivial pursuit" -style questions, alth Re:TL;DR? (Score:5, Insightful) I used to ask the harder stuff, but I am finding extremely few people who can do simple coding. They all look good on paper though. But today's programming is all about knowing how to do function calls to pre-built libraries. Especially CS graduates, they're just awful at programming at a low level. The EE people won't know what big-Oh notation means but they know how to read and write code that implements data structures. So ya, reverse a list, it's stupidly simple but I'm amazed at how many people list C/C++ on their resume who can't figure that out. Or they say "there are libraries to do that" (ya, but what if you're core dumping in that library and it's your job to fix it quickly). We've got enough idea people who sit around doing nothing, it's good to have people who can do stuff. I mean even if someone does not know the answer, how come they can't even imagine an answer? How come they're having trouble just setting up a loop, or they miss all the obvious corner cases? These are questions that everyone who codes in C for an embedded system should know the answers to. I don't want to hire someone with 10 years of C experience only to have me end up tutoring them in C. There's a lot of resume inflation out there. They'll like 5 years of working with ARM, and yet know nothing about ARM. 5 years of writing device drivers and yet not know how to clear a bit in a register. But they'll list all 27 source code control systems they've ever used, every CPU they've ever seen, and point out that they they won the six sigma award at their previous company. Re: (Score:3) Yep - I learned long ago that no matter what's on someone's resume, never bring them in without a phone screen where they do some simple coding. So many people can't code at all. The difficulty at the low-level stuff is why Java became so popular - you can hire people who don't get pointers and bit-bashing but can still get work done. That's nothing... (Score:5, Funny)?" My interviewers laughed. I got the job and worked there for six years. I've seen game controllers and keyboards destroyed in fits of rage, but no one ever got into a fist fight out in the hallway. The correct answer to the question is to take bets. Re: (Score:2) Re:That's nothing... (Score:5, Insightful)?" You have been modded mostly Funny, but you deserve +5 Insightful. The way to respond to a provocative question like that is to ask another question that bounces it back. That makes the question go away. I heard a similar piece of advice years ago about responding to the question "How are you with handling difficult co-workers?" The suggested answer was "Are you thinking of someone in particular?" Re: (Score:2) Were those keyboards, Model M, too? Re: (Score:2) So it's ok to duke it out during lunch break? not all sets have a solution (Score:3) for systems engineers, its typically some fluff question like how to build a datacenter in literal hell, or how to handle wireless voip QoS in a flying rape crisis icecream truck. Re: (Score:2) Re: (Score:2, Funny) >> questions are all written in advance in the committee-based interview process, and anyone could potentially ask any kind of question. The twenty-two year old secretary could ask the interviewee [TOPIC], even if she has no idea what she even said Did you just tell us that you work for CNBC? Re: (Score:3) performance under stress. Why is "performance under stress" a relevant metric? I do almost all my coding alone in a quiet office, and can't imagine a realistic situation that would have someone looking over my shoulder and telling me to hurry up. When I conduct interviews, I try to remove the stress. I give the candidate a test problem, and a quiet cubicle to work in. Then I come back in 30 minutes and ask them to show me their solution. If you only test them on a whiteboard, in front of a nitpicking audience, you are just weedin Re: (Score:3) Exactly. The interview is already stress. The interviewer doesn't realize that, because it's not their ass on the line. Re: (Score:2) Exactly. The interview is already stress. The interviewer doesn't realize that, because it's not their ass on the line. And the most stressful kind of interview is a panel interview. If you're interviewing for an on-your-feet kind of job, then a panel interview might make sense. If you're interviewing for a position that requires creativity and craft, then leave someone alone for awhile with a problem. Re: (Score:2) So you've never coded something that worked fine until it went into production on that one machine that was slightly different then your dev and staging machines? Or the external webservice your code relied on decided to no longer exist or change APIs and management or your customers are dem Re:not all sets have a solution (Score:4, Insightful) So you've never coded something that worked fine until it went into production on that one machine that was slightly different then your dev and staging machines? I have been in situations like that occasionally. Never did it involve someone standing over me, shouting, or telling me to "hurry up". That is unprofessional and counter-productive. My boss knew that the problem would be fixed fastest if he gave me clear directions, a quiet place to work, and then left me alone with no interruptions. Re: (Score:2) Fixed the fastest isn't always the more important. Sometimes you really need to knnow when things are going to be fixed to plan for other things. Re: (Score:2) Why is "performance under stress" a relevant metric? At Google, everyone works at the speed of light, moves to a different cube every three months, and gains an average 27 pounds in weight from eating at the cafeteria (roasted duck and mac-n-cheese on Fridays is so good). Some people may find that stressful. Every company I worked at since Google claimed to have a faster pace work environment, but they were all slower than Google. I often find myself browsing the Internet for the rest of the day because I finished my work in the first five minutes. Re: (Score:2) Yeah? So what if your office is suddenly in the middle of a conflict zone and there's active gun fire in the office next door, huh? Yeah, I thought so... Re: (Score:2)' Re: (Score:2)'r Re: (Score:2) plenty of stressed out, angry, traders telling you to hurry up and make it work If your company's work environment includes anger and yelling, then you certainly should test for that in the interview process. But in a company that values professionalism and reliable code, I don't see any point in testing for "performance under stress". Re: (Score:3) The last interview I had, one of the interviewers kept asking more and more esoteric questions with the specific goal of forcing me to say "I don't know." (I got the job, by the way.) Re:not all sets have a solution (Score:5, Funny) That's why when I go to interviews, the first question I get I just answer "I don't know", it saves a lot of time. Re: (Score:2) Ok, I underestimated you. ... That was indeed funny! And probably even the truth Re: (Score:2) .... ...you won't ever admit you don't know something! perhaps I've been doing software for too long, but I'm Re: (Score:2) ... performance under stress ... Um... WHY? There's been a lot of studies showing that emotions dampen critical thought and vice versa. Give an engineer a problem then leave them alone until they come back with an answer. Re: (Score:2) Um... WHY? There's been a lot of studies showing that emotions dampen critical thought and vice versa. Give an engineer a problem then leave them alone until they come back with an answer. Exactly, and while people will need different amounts of time, once you do this several times with the same people to eliminate flukes, you will identify two classes: One that comes back with a working solution most of the time and one that does not. The former are the good engineers and the latter are the bad ones. Seriously, whether somebody takes a week or a month to solve a problem difficult enough that you cannot look it up does not matter much. The kicker is whether they can or cannot solve it. Coding Re: (Score:3) Not only that, when you have good data models, good interfaces, well-structured code etc. then a fix will be easy to do. Re: (Score:2) I had a job interview once where I swear the sequence of events was designed to test my reactions. The manager had two people to interview, me and someone else. The interviewer came out 20 minutes past my scheduled time and said she was sorry, but she was delayed and would need another 15 minutes. When she came back out 20 minutes later, she spoke to the other candidate and then came to me and said that the other candidate (who was to be interviewed after me) had an appointment and would I mind waiting and Re: (Score:2) Heavy sigh. (Score:3) You can expect to spend lots of time diagramming data structures and talking about big O notation. Popular hits include "reverse a linked list in place," "balance a binary search tree," and "find the missing number in an array." Ya, I hate these kind of interview "tests". I and my brain don't work like that, solving specifics in detail on the spot. From TFA: Not long ago, Max Howell, the author of Homebrew (software that basically every engineer with a Mac uses), famously quipped about being rejected from Google after being unable to invert a binary tree. Would probably be me too. Like it or not, a “Google-style” coding interview may stand between you and your dream job. So it’s in your best interest to learn how to beat. Re: (Score. Sorry to hear that. You will always dream about her. Finding new purpose is hard. I know. Re: (Score:2) yeah, my ex wife is a big part of the reason why I'm not debt-free and financially independent within my budget. Okay, I'm financially independent within MY budget. Not hers. Sorry you lost yours. Wish we could trade places. Re: (Score:2) Not long ago, Max Howell, the author of Homebrew (software that basically every engineer with a Mac uses), famously quipped about being rejected from Google after being unable to invert a binary tree. What does that even mean? You can traverse it left-first or right-first, but "inverted"? Re: (Score:3) If you aren't trying to directly marry Google and replace human emotions with feelings of corporate loyalty, you obviously have no place in the technological world of today. Google is a sloppy kisser -- and their tongue algorithm is stuck in "beta". Nerval's lobster is a Dice.com shill ... (Score:5, Informative) And once again Nerval's Lobster posts a story which links to a dice.com story. Seriously, not one story ever accepted from Nerval's Lobster doesn't point to dice.com, which pretty much means he's a paid staffer whose stories get promoted to click-whore for dice.com. Honestly, make him an editor and give us a box to block stories from him. But stop pretending he's getting accepted because of any other reason than shilling for dice. Re:Nerval's lobster is a Dice.com shill ... (Score:4, Informative) He is a paid staff writer. Nerval's Lobster:=Nick Kolakowski[0]. There's a twitter profile that links the two, which I posted a good two or more months ago now. [0] Re: (Score:2) Honestly, make him an editor and give us a box to block stories from him. It won't work. They have an "Ask Slashdot" category but the editors (Hi Timothy!) can't be bothered to or can't figure out how to post articles of that type in that category. Summary (Score:3) Be able to understand and evaluate Big-O notation and use hash maps and sets. Which if you don't already know, you should! Not just google (Score:5, Informative) Re: (Score:2) But if you do this, in what way is it a viable test of your ability to think and reason about a problem? You're copy/pasting the answer from the Internet. I guess that is the skill most-needed for software developers nowadays... Re: (Score:2) I often look stuff up in a book or on the net. Software developers should have good search skills, and the ability to use creative approaches to problems. Ideally, you'd turn down jobs at the company that does that, but sometimes you really need the paycheck. Re: (Score:3) It could be argued that the fact the solutions are available on google makes them even more useful as interview questions. They identify the potential employee as someone who walks into important meetings without even bothering to do basic preparation. Re: (Score:2) Like an idiot, I didn't google "Amazon code tests" ahead of time and pre-solve all of the possible code tests, because I was given one and sucked at it, only to later find it was one of the listed ones. So, note to the wise: google the code tests for the company you're applying for and pre-do the possible solutions. If this is all the information I have about your programming ability, I never never want to work with you. Hopefully you have other redeeming skills as a programmer.... These problems are all solvable with second year knowledge of computer science. Re: (Score:2) These problems are all solvable with second year knowledge of computer science. I think that's the problem for a lot of the candidates. They can hack some code together, but they don't have a solid computer science foundation. That's my big concern with this push for code academies. They are teaching people to code, but are they teaching the underlying mathematics and computational theories? Re: (Score:2) but are they teaching the underlying mathematics and computational theories? And let's be honest, those underlying theories are not super-hard. They take time to learn, but it's worth it because it can keep you from making some completely ridiculous programming mistakes. Re: (Score:2) Does the job description call for a programmer or a computer scientist? The two are not always interchangeable. Re: (Score:2) Re: (Score:2) Don't be so harsh. My second year is about ... oops, need a pocket calculator meanwhile to figure that ... 30 years in the past. And frankly: I never had a programming problem that was covered by any education, may it be school, university or books. (And at that time most data structures we now have in libraries where already existans and taught in schools/universities) If you want me to write an AVL tree (or black/red tree) I likely will need two days (if I can not google). Re: (Score:2) (And at that time most data structures we now have in libraries where already existans and taught in schools/universities) Donald Knuth said that when he wrote The Art of Computer Programming, programmers were amazed that they could write their own linked lists. The idea had never occurred to them (because they were provided by libraries from the computer manufacturers). That was a long time ago, and yet there was still need for custom data structures, just as there is today. Re: (Score:3) Hm, today it is the other way around. People are reinventing linked lists, not knowing that there is a library. What you call 'custom data structure' might as well be a 'domain model'. Actually I myself never stumbled over linked list libraries etc. before Rogue Wave started selling its data structure libraries and 10 years later the STL emerged. Neither during my Pascal, nor my Modula II nor during my early C times (1987 - 1995) I ever had the option to use a general purpose library of data structures. Well, th Re: (Score:2) What you call 'custom data structure' might as well be a 'domain model'. Maybe.....when I think of a domain model, I think of a higher level design, that isn't concerned with the lower level implementation details. That is, the domain model won't particularly care if you use a linked list or an array list as long as it is sufficiently performant. Re: (Score:2) Of course cause that knowledge is always easily remembered. I think so. I haven't written a linked list in something like a decade, but I'm pretty sure I can figure it out if I have to. Re: (Score:2) So they hire on the basis of pompous ass-ery now? Wise, including admitting when you don't know something, is so much better a top-tier engineering quality than fast. Fast is completely irrelevant. If you have to fix something in half an hour because your company just released some crap into production, you're all already doing it wrong. Truly sad (Score:4, Interesting) Mode of list of numbers (Score:3) The author used Python for his example and suggested using a Dict to solve the first problem, so presumably we have the Standard Python Library at our disposal: from collections import Counter def get_mode(nums): return Counter(nums).most_common(1) This will give the mode and the count. I don't think the author's solution (14 lines) would get you the job! Normally, I wouldn't make it one line, but the Slashdot Editor Window doesn't seem to support a proper code block and also doesn't support non-breakings spaces. Recognize the patterns (Score:3) So, I had an esoteric maths class once where the prof handed out all past final exams as study tools. The exam was pre-announced to be "answer 5 questions of your choosing from 9 given." The class had covered 3 concepts, 2 which I had mastered, plus Green's functions. I doubled down and bet that the prof wouldn't put 5 Green's functions questions on the test, and he didn't - exactly. 4 questions were on the 2 skills I had mastered, so I answered them quickly and easily. 4 more were explicit: solve using Greens' functions - which I skipped. The final question was a differential equation which simply asked: "What is the solution to: blah + blah / x + blah / x^2 = 0 ?" which I recognized from a past exam which solved "blah * x^2 + blah * x + blah = 0" I solved it "by inspection" and demonstrated the correctness of the solution. Still got a B in the class instead of an A, even after scoring 100% on a final exam that had a median class score below 50% - discussed it with the prof later, and he said "you still don't know how to use Green's functions, do you?" "Obviously not, didn't seem they would be required for the final." B for cleverness, for the A you'd need to learn the archaic skill that has been ground to fine talc and recorded in tables of solved differential equations that were mostly developed and published by 1900. 25 years later, still haven't had a use for Green's functions. Seems to me that places like Google are crawling with kids who have learned all the esoteric CS algorithms and theories and already applied the hell out of them. Do they really need more people with the same skillset? Homogeneity isn't competitive in the long run. Re: (Score:3) You won't find a use for a mathematical tool you don't know how to use. Doesn't this weed out the people you want? (Score:2, Interesting) These interviews seem to weed out the people you want - those who can see deeply into a problem and create an elegant solution. And select the people you don't want - those who are good at bluffing. So I don't get it. If you did this sort of interview, you'd wind up with ... the steaming pile of Android code Google has now. Oh, I get it. Maybe Google should rethink their approach? Eh? (Score:3) Problem 1: Mode Given an array of numbers, return the mode—the number that appears the most times. The article goes on to propose two blindingly stupid and overly-complicated solutions which I can't imagine anyone ever even considering, before finally proposing the bleedin' obvious correct solution. Problem 2: Missing Number Given an array of numbers where one number appears twice, find the repeat number. Well, you've just failed the "name the problem" part of the interview. Problem 3: Sorting Given an array of numbers in the range 1..1000, return a new array with those same numbers, in sorted order. There may be repeats in the input array. If there are, you should include those repeats in your sorted answer. First thought: hash maps! No! First thought: standard library functions! qsort(<arrayname>,<size>,sizeof(<elementsize>),compare_function); <?php sort($array); ?> And so on. Re: (Score:2) These questions are stupid and pointless. Unless you're an academic most programmers had to learn these algorithms once to pass a class. After that they use libraries for the respective language they're programming in. I'm not interested if you can pound out a quicksort from memory. I'm interested in whether or not you know how to use sorting and apply it correctly to do the damn job I'm hiring you for. Show me your previous work. Demonstrate a program you wrote. Show me your code and explain it. That will te Re: (Score:2) Re: (Score:2) In C++, my sorting algorithm is std::sort. I haven't had to know how to write a sort in fifteen years (when I adapted heapsort to make a modifiable priority quieue). Re: (Score:2) Re: (Score:2) death in CS for half a decade . I think you mean century. The bulk of the sort and search work was pretty well solved by then end of the 1960s. However due to the massive amounts of data being crunched by the likes of google these algorithms have undergone a bit a renaissance. In the dawn of computing the data/ram ratio was massive. We didn't have gobs of either but RAM was expensive. In the late 80's through to the early 90's the ratio shifted dramatically. Outside of certain scientific computing domains your typical large data set rarely Re: (Score:3, Insightful) Actually, if they add in "and the list of numbers is really long", then the fact that you know the numbers are all in the range from 1..1000 means that you can do a lot better than "the standard library function". The standard library function is an O(N lg N) sort, because it can't make any assumptions about the inputs (and your high school algorithms class can happily prove that N lg N is as good as it gets for arbitrary input lists which only support a comparison operator). If you know the range of poss Re: (Score:2) His analysis is wrong (Score:3, Insightful) I tried commenting on the dice article but it didn't work. His analysis is wrong here: "They have O(1)-time insertions and lookups (on average)." and therefore here: "This takes O(n) time, which is the optimal runtime for this problem! And we unlocked it by exploiting the O(1)-time insertions and lookups that dictionaries give us.". Hashing insert and find are not O(1). They are likely O(N) or O(log N) depending on the implementation. We expect constant time, but worst case is not constant. Therefore, the algorithm he's shown is O(N^2) or, maybe, O(NlogN). It is expected to run in linear time and most of the time it probably will. An explanation like this leads to people using hashing when they shouldn't -- ie. when they *require* an upper bound. I would rank a candidate that understood the distinction above one that didn't -- and since he's trying to help people, he should get it right. Re:His analysis is wrong (Score:4, Insightful) Well-spotted -- from an AC too. Actually the average or amortized time complexity of hashing insertion is much better than the worst case. In fact they're constant, provided you have enough space to make collisions rare. So the "use hashing for everything" trick is reasonable heuristic for many tasks, but of course not all of them. Knowing how to balance the concern about worst case against the concern about average case is a matter of judgment, which is frequently lacking in people who fetishize this stuff. There are times when a compact O(n^2) algorithm will outperform a complex O(n log(n)) algorithm for all relevant inputs. Re: (Score:2) Actually, the average case estimation is correct and is the most common. Your statement that hashing inserts and find are likely O(N) is O(log N) is not true at all. Two things, a perfect hash function for all integers of a given set is pretty trivial. Given than, O(1) worse case time is a given. More generally, a cryptographic hash will (provably) have very few collisions, so it's not hard to create a hash table that really performs in constant time in all cases. The tradeoff is in space complexity In this Re: (Score:2) Hashing is certainly not O(1). Everyone who has had sets larger than a couple of thousand elements should know that. It can be very efficient, though. My answers (Score:2) "reverse a linked list in place," > use a double linked list. no need for any reversing. "balance a binary search tree," > use self-balancing binary search tree "find the missing number in an array" > disallow empty slots in the array, throw a runtime exception if the caller passed null into the array These are stupid (Score:3) I went through a Google interview, and I thought the questions were really stupid. I think I gave them quite a few answers that likely were wrong in their eyes, because I had too much experience with the subjects. For example, when they asked me how to do a hash-function, clearly expecting one of the standard (pretty bad) constructs. I told them to use the functions by Bob Jenkins, or, if there was time, a full-blown crypto hash. Now, I have filled hash-tables with 100 million elements and got collision chains up to 200 elements long with the STL hash function, but only 30 with SpookyHash by Jenkins. And if there is a spinning disk access in there, the 10us or so a crypto-hash costs you is not a problem either, and the randomization will be excellent under all conditions. But my impression was that they though I was evading the question because I did not really know how this works. That s a pretty bad fail on their side. There were several more. I think the real problem was that I had actual hands-on experience with almost everything they asked me, while they expected me to work though the questions from the data they gave me. The problem hence is that these questions prefer people with some, but not too deep knowledge or actual experience. As soon as you know more, your chances of failing increase. That is really stupid. Incidentally, I know a few ex-Googlers now and I am pretty glad they did not hire me. Many people there are not nearly as smart as they think they are and the 20% time is more of a way to press even more working hours out of employees. They kept pestering me for a few years to re-interview, until I told them, sure, no problem, my daily fee is $1600 and I will be happy to do more interviews if you pay for my time. That finally go the message across. It has a place, but who checks your work? (Score:2)... [stackexchange.com] and have some objective folk critique it. Practice without feedba Re:I expect these in my next job interview but ... (Score:5, Insightful) If the interviewer is worth their salt the idea usually isn't to see if you can get to the best possible, most efficient manner, but rather to see how you approach the problem. Do you solve the actual problem, are you good at understanding the implication of your design (figuring out what is slow or less than optimal about it, understanding the impact of set size on an performance). How do you approach optimizing the function you have created, are you stuck in one mindset or are you willing to pull back and try an entirely different approach to get a better result. Some jobs require this kind of coding but you are right, most of the time you don't have to have the optimal solution, readability matters as well, usually more than ideal performance. Often that will come up as part of the discussion but for a lot of these problems, efficient solutions are often just as readable as the naive ones. How recent was your CPSC degree (Score:4, Insightful) That's really all these stupid things are assessing. I've forgotten all the details about b-tree implementation (and R+ trees for spatial data, and, and, and...) but that shouldn't matter, as long as I know general programming principles and quality aspects, and know how to methodically go about looking up the details from appropriate sources, then copying and modifying existing code. Design creativity, and pros/cons design decision tree exploration, and getting the gist of some fundamental programming concepts (like complexity, maintaining simplicity, refactoring, encapsulation, importance of good naming, importance of good comments etc) should be much more important skills than rote memorization of some 50 year old algorithm. Companies should be much more interested in what you have already programmed, when you had a month or more to do it, and time to concentrate and research and refine, than what you can program under duress before the USS Enterprise falls into the black hole right ahead. Re: (Score:2) I totally agree with you. I like a lot about the prospect of working at some of these companies, but their interviews are completely useless, imo. Still, they do narrow the field, and I imagine that's something that needs doing. Re: (Score:2) Aaaaand... you have pretty much indicated that I would never want to hire you, much less work with you. You need to jump through some hoops to get a job. The recruiting agency was trying to determine whether you'd be willing to do so. Not to mention whether you had any basic intelligence to solve an analytic problem. They have their own reputation to protect. They don't want to advance a candidate unless they show some basic skills and temperaments that the client wants. Put your ego away and play the game.
https://developers.slashdot.org/story/15/11/06/1850251/the-trick-to-algorithmic-coding-interview-questions
CC-MAIN-2017-34
refinedweb
6,864
70.84
I have looked on Stack Overflow everywhere but I cant find a solution to this problem. Given that I have a folder/file as string: "/path1/path2/path3/file" "/path1/path2/path3" "/path1/path2" ['path1', 'path2', 'path3'] "/path1/path2/path3" os.path.dirname() (doc) is the way to go. It returns the directory which contains the object pointed by the path: >>> import os.path >>> os.path.dirname('/path1/path2/path3/file') '/path1/path2/path3' In this case, you want the "grandparent" directory, so just use the function twice: >>> parent = os.path.dirname('/path1/path2/path3/file') >>> os.path.dirname(parent) '/path1/path2' If you want to do it an arbitrary number of times, a function can be helpful here: def go_up(path, n): for i in range(n): path = os.path.dirname(path) return path Here are some examples: >>> go_up('/path1/path2/path3/file', 1) '/path1/path2/path3' >>> go_up('/path1/path2/path3/file', 2) '/path1/path2' >>> go_up('/path1/path2/path3/file', 3) '/path1'
https://codedump.io/share/sLqYEgr9x29m/1/path-manipulation-in-python
CC-MAIN-2016-50
refinedweb
163
55.13
Content Count18 Joined Last visited About jasonsturges - RankMember - Birthday December 21 Contact Methods - Website URL Profile Information - GenderMale - LocationDes Moines, Iowa Recent Profile Visitors The recent visitors block is disabled and is not being shown to other users. jasonsturges reacted to a post in a topic: Remove `onComplete()` handler from shared loader jonforum reacted to a post in a topic: Large texture data on mobile with zoom pan for some feedback on best practices for handling large texture data. On desktop, it seems I can easily load large textures exceeding 8k on a side; however, mobile fails to render past 4096 pixels on the longest side. Project I'm working on loads schematic views, of sorts - there's a background image on which components are placed. Zoom / pan is implemented from Pixi-Viewport project. These are available as vector, but prototyping as SVG seems to incur a significant performance loss. Not sure if there's a mipmapping approach, or some kind of level of detail such as TileMap. Roughly following the TileMap project... background image is sliced and loaded via multiple frames. Presume source code for the "webgl: zoomin and zoomout / retina webgl: zoomin and zoomout" examples are just The canvas version doesn't appear to be working: main.js:104 Uncaught TypeError: Cannot read property 'flush' of undefined at update (main.js:104) Tiggs reacted to a post in a topic: Change cursor view in PIXI.js - Component invalidaton lifecycle jasonsturges replied to jasonsturges's topic in Pixi.js@ivan.popelyshev This makes a lot of sense. Really appreciate the insight here. Need to digest this, and might have some follow up questions. Similar situation with a complex data visualization involving a few thousand sprites. Pixi.js performance is blistering fast, solid at 60fps, but I have some reservations about patterns I'm implementing. Thanks again for all the insight and support! - - - Component invalidaton lifecycle jasonsturges posted a topic in Pixi.jsA display object's `render()` method will call every frame, right? Is there a component lifecycle for validation built in to Pixi, akin to fl.controls usage of fl.core.UIComponent? As in, a way to trigger rendering / update only when needed per display object (not the entire Pixi application) when multiple properties are set on the same frame. For example, if I have several properties that determine state of a display object, should I calculate that in `render()`; or, should I set an invalidate flag to be handled on the next frame? Pseudo-code example: class MySprite extends PIXI.Sprite { set color(value) { // ... this.dirty = true; } set selected(value) { // ... this.dirty = true; } render(renderer) { super.render(renderer); // Calculate display state here? // Is there a better lifecycle hook? // Or, manually validate using the animation frame? this.validate(); } validate() { // Validate presentation state if dirty if (this.dirty === false) return; this.dirty = false; // ...calculate state based on properties // if color is red and selected is true, texture = red-selected.p. jasonsturges reacted to a post in a topic: Change Background Color In JavaScript? -. ivan.popelyshev reacted to a post in a topic: Extending PIXI.utils - - jasonsturges started following Fabric.js select and transform suggestions in Pixi.js and Extending PIXI.utils Extending PIXI.utils jasonsturges posted a topic in Pixi.jsAre there some examples of extending PIXI.utils namespace? Taking a look at Pixi's debugging and editor tools, the Free Transform Tool appears to extend PIXI.utils - cool library, looks pretty alpha without npm or webpack. Just references PIXI in a global scope and attempts to prototype utils. If I want to extend Pixi through utilities, or maybe just provide webpack module libraries through npm, any recommendations such as good example projects to follow? ivan.popelyshev reacted to a post in a topic: Fabric.js select and transform suggestions in Pixi.js. jasonsturges reacted to a post in a topic: Fabric.js select and transform suggestions in Pixi.js?
https://www.html5gamedevs.com/profile/35353-jasonsturges/
CC-MAIN-2020-45
refinedweb
650
51.04
Can I use Flash classes in Flex?Jamal_Soueidan Jul 30, 2007 11:11 AM Hello, This content has been marked as final. Show 12 replies 1. Re: Can I use Flash classes in Flex?peterent Aug 4, 2007 6:14 AM (in response to Jamal_Soueidan)First, to attempt this you should use Flash CS3 so that you are using ActionScript 3 in both Flash and Flex. Second, don't use the Flash UI components like Button, Label, etc. which appear to be the same as their Flex counterparts, but they are not. If your Flash component loads one of those classes into the Flash Player before the Flex definition, the Flex definition won't get loaded and anything using them may not work properly as you've found out. We realize this shortcoming and are looking into normalizing these classes. Look into the Flex Component Kit for Flash CS3 on Adobe Labs. This will be more fully-featured in Flex 3. 2. Re: Can I use Flash classes in Flex?RyanORo Aug 6, 2007 10:14 AM (in response to peterent)I'm working with a pure ActionScript 3.0 project that does not use Flex. I'd like to take component assets created by an artist in Flash CS3 and use them in the pure ActionScript application. In this case, is it kosher to use the regular Flash UI component set? The problem I'm having is how to import the definitions of the fl - package classes that make up the Flash UI components. Or, would it make more sense to have the artist use the Flex components in CS3? What are the requirements for using those components in my ActionScript? I'd rather not have to get too deeply into Flex, can I just use the Flex component classes in my ActionScript without having to set up full-on Flex projects and so forth? 3. Re: Can I use Flash classes in Flex?peterent Aug 6, 2007 10:29 AM (in response to Jamal_Soueidan)If you aren't using ANY Flex components, such as mx.core.Application, and just using Flex Builder as an ActionScript editor, then you have free reign to use whatever Flash components you want. You should be able to just add import fl.controls.*; to your ActionScript 3 file and create a control and add it. 4. Re: Can I use Flash classes in Flex?RyanORo Aug 6, 2007 10:37 AM (in response to Jamal_Soueidan)Great, that's what I was hoping to hear. :) However, "import fl.controls.*" does not work for me. I get the compile time error "1172: Definition fl.controls could not be found." Do I need to add a library path or something like that? (I tried adding C:\Program Files\Adobe\Adobe Flash CS3\en\Configuration\Components\User Interface as an SWC folder, but that caused an error too.) 5. Re: Can I use Flash classes in Flex?peterent Aug 6, 2007 10:42 AM (in response to Jamal_Soueidan)In the Project Properties->Build Path, click the Library tab (not the source tab) and add the following directory (click Add SWC Folder): C:\Program Files\Adobe\Adobe Flash CS3\en\Configuration\Components\User Interface You need all of the SWCs in there. AND remove any Flex SWCs from that path. 6. Re: Can I use Flash classes in Flex?RyanORo Aug 6, 2007 10:53 AM (in response to Jamal_Soueidan)Thanks Peter... I tried that, but I get these mysterious errors in the Problems window: An internal build error has occurred. Please check the Error Log. Unable to load SWC Accordion.swc 7. Re: Can I use Flash classes in Flex?peterent Aug 6, 2007 11:29 AM (in response to Jamal_Soueidan)Blimy - I forgot about that. You need Flex 3 to do a complete build using Flex Builder and Flash CS 3. Sorry to have wasted your time on this. I ran into this myself when CS3 came out but had forgotten that until I saw the error. You can't use FB 2 for this type of thing. All-Flash projects have to be done in Flash. 8. Re: Can I use Flash classes in Flex?RyanORo Aug 6, 2007 11:59 AM (in response to Jamal_Soueidan)Ah well... when is Flex 3 going to be available? :) So if instead I wanted to use that Adobe Labs Flex Component Kit, I would have to use Flex then? I know zero about Flex... I'd appreciate it if you could point me towards any resources that would help me figure out what would be required, so I could decide whether or not it is worth it for us to pursue this path. Thanks. 9. Re: Can I use Flash classes in Flex?gomeropie Aug 17, 2007 10:15 AM (in response to RyanORo)OK, i have flex 3 and the same errors result any ideas? Thanks, Patrick 10. Re: Can I use Flash classes in Flex?eitanavgil Aug 29, 2007 1:15 PM (in response to Jamal_Soueidan)same thing here i dont think there is a way to import a UI component to AS3 project. Maybe Adobe should make a fix or maybe they should export a special package so AS3 project would be able to get some normal UI components. Right now the only way i could find a solutiond for that problem is some opensource UI projects. 11. Re: Can I use Flash classes in Flex?butcherBaker Aug 30, 2007 2:49 PM (in response to Jamal_Soueidan)I'm not using flex, but I'm getting strange issue from components. import fl.controls.ProgressBar worked a few days ago, now it won't work. the ProgressBar.as (and the swc) is right where the installer put it. but it throws an 1172 Definition not found Error every time. I've called Adobe and am waiting a response. I've trouble shot this enough that I do not think it's a bug. Even if I copy the livedocs progressBar example and paste it, it gets the 1172 plus other errors related to component definition not importing. meanwhile, an older AS3 program from last week, using the same code, imports fl.controls packages and compiles just fine. arrgghh! 12. Re: Can I use Flash classes in Flex?buabco Aug 31, 2007 10:29 AM (in response to butcherBaker)it is possible to import any UIComponent made in FLASH CS3 or even previous versions into Actionscript and use them. The process is more complex though. I'll try to explain: First make sure that when you created the UIComponent it actually exists as a class with in the flash movie, this is give it a LINKAGE NAME. Second compile this SWF or SWC and put it in the path of your flex Proyect. Third use de [EMBED] compiler command to include this class into your AS class, the way to do it is this: [Embed (source="/assets/UICOMPONENT.swf")] [Binddable] private var UI:Class; Once you do that you can use the class each time you want: var tmp:MopvieClip= new UI(); if You don't want to load the hole movie, you can add elements of it, just read the documentation on the EMBED Command. One thing I've used though is to clone child element's of the movie and use them in my class later. Some tips I've found out: 1.- The object created is a MovieClip whith one Child Element that's a Loader class. 2.- You have to implement a Listener to make sure the Embed element is loaded before use it or you'll get an error. 3.- You'll then be able to access all the elements in the movieclip created with Flash CS 3 or 2. 4.- I've noticed that there are some compiling isues when using filters in button frames, I've only used the CS2 Actionscript preview and not CS3 so I don't know if this was solved.
https://forums.adobe.com/thread/70719
CC-MAIN-2017-30
refinedweb
1,334
73.47
Imagine you are using TestComplete on a home made calculator, and you want to confirm it's computing 2+2 correctly (answer should be 4). You have a script, "Compare.py" that should looks something like: def CompareAnswer(n): if n == 4: return true // test continues return false // this should cause the test to fail/stop. How would I go about sending a value accessed from an object that I am testing back to a script such as one above? Solved! Go to Solution. I'm no Python guy... but generally speaking, doesn't matter what language. Let's say that there is a text field in your calculator application that contains that answer. Let's say, in NameMapping, it looks something like Aliases.MyCalculator.resultText Let's say that object is a pretty standard text object. That means that the value of the result would be found in the "wText" property of the object. So, with all that, given the function you defined in Python, to check 2+2 = 4, you'd perform 2+2 on your calculator and then do CompareAnswer(Aliases.MyCalculator.resultText.wText) Within your function, I'd probably convert the "n" parameter to an integer using aqConvert.VarToInt since you're comparing the parameter to the integer 4... this ensures that you are comparing apples to apples. View solution in original post You're a beast. That was quick, and well written and easily understandable.Can't thank you enough.Now it's time to start looking into name mapping. You know, if documentation were written by you I think these forums would be a ghost town 😛 Thanks for the kind words. 🙂As for the documentation.... yeah, it's a help file that tells you, basically, "These are the things that TestComplete has". There are some basic assumptions, though, that generally you have received some sort of basic training in how to use TestComplete or similar automation tool. That's why SmartBear supplies the free monthly 101 training and other similar free webinars that help you learn this kind of stuff. There was a book written a few years ago by a third party. Look for the TestComplete Cookbook. I think it's a bit out of date (written in 2013) but it MIGHT help you at least get the basics. In the meantime.... happy to help. That's what these forums are for. 🙂
https://community.smartbear.com/t5/TestComplete-General-Discussions/How-to-pass-a-value-back-to-a-TestComplete-script/m-p/141387/highlight/true
CC-MAIN-2021-21
refinedweb
399
67.45
LED blink using timer In this tutorial, you’ll try a new way to blink the LED - blink using a timer. Learning goals - Learn how to use a timer to trigger the interrupt. - Blink the LED using timer. 🔸Background What is a timer? You've known the interrupt caused by the pin state change in the previous tutorial. An interrupt will occur once a rising edge or falling edge is detected. Besides, there is also timer interrupt. It allows the interrupt to happen at every specified time interval. In your everyday life, you must have used timers. For example, you are cooking meals, and the dish still needs another 30min. You would set a timer for 30min and do many other works since the timer will remind you after 30min. It can improve your efficiency. The timer you are going to use is quite similar. It is hardware built on the microcontroller. You can set the expected time for interrupt and an ISR. When the set time has elapsed, the interrupt occurs, the microcontroller will go to execute the ISR. After it, the microcontroller goes back to continue its previous work. 🔸Circuit - LED module note The circuits above are simplified for your reference. 🔸Preparation Class Timer - this class is used to set interrupt at a specific time interval. 🔸Projects 1. LED blink using timer You will blink the LED module and onboard blue LED with different speed. Example code // Import the SwiftIO library to control input and output and the MadBoard to use the id of the pins. import SwiftIO import MadBoard // Initialize a digital pin for LED module. let led = DigitalOut(Id.D19) // Initialize the onboard blue LED. let blueLed = DigitalOut(Id.BLUE) // Initialize a timer for 1500ms. let timer = Timer(period: 1500) // Define a new function used to toggle the LED. func toggleLed() { led.toggle() } // Set an interrupt to reverse the LED state every time the interrupt occurs. timer.setInterrupt(toggleLed) // Blink onboard blue LED. while true { blueLed.high() sleep(ms: 500) blueLed.low() sleep(ms: 500) } Code analysis let timer = Timer(period: 1500) Initialize a timer. - The timer's mode is periodby default, which means the interrupt occurs every time the set time elapsed. If it it set to oneShot, the timer interrupt happens once. - The parameter periodsets the time for interrupt in milliseconds. timer.setInterrupt(toggleLed) Set the timer interrupt. Similar to the interrupt in the previous tutorial, it calls toggleLed as ISR. So every 1500ms the state of led changes. while true { blueLed.high() sleep(ms: 500) blueLed.low() sleep(ms: 500) } In the loop, the microcontroller could do other work. Here you will make the onboard blue LED blink at a faster speed. As you can see, you can control two LEDs without worrying their timing.
https://docs.madmachine.io/tutorials/swiftio-circuit-playgrounds/modules/led-timer
CC-MAIN-2022-21
refinedweb
462
69.79
@appnest/web-router@appnest/web-router A powerful web component router A router interprets the browser URL and navigates to a specific views based on the configuration. This router is optimized for routing between web components. If you want to play with it yourself, go to the playground. Go here to see a demo. 😴Lazy loading of routes 🎁Web component friendly 📡Easy to use API 🛣Specify params in the path 👌Zero dependencies 📚Uses the history API 🎉Support routes for dialogs 🛡Add guards to routes ⚓️Use the anchor tag for navigating ⚙️Very customizable 📖 Table of Contents ➤ Table of Contents➤ Table of Contents - ➤ Installation - ➤ Usage - ➤ lit-element - ➤ Advanced - ➤ ⚠️Be careful when navigating to the root! - ➤ Contributors - ➤ License ➤ Installation➤ Installation npm i @appnest/web-router ➤ Usage➤ Usage This section will introduce how to use the router. If you hate reading and love coding you can go to the playgroud to try it for yourself. 1. Add <base href="/"> To turn your app into a single-page-application you first need to add a <base> element to the index.html in the <head>. If your file is located in the root of your server, the href value should be the following: <base href="/"> 2. Import the router2. Import the router To import the library you'll need to import the dependency in your application. import "@appnest/web-router"; 3. Add the <router-slot> element The router-slot component acts as a placeholder that marks the spot in the template where the router should display the components for that route part. <router-slot> <!-- Routed components will go here --> </router-slot> 4. Configure the router4. Configure the router Routes are added to the router through the add function on a router-slot component. Specify the parts of the path you want it to match with or use the ** wildcard to catch all paths. The router has no routes until you configure it. The example below creates three routes. The first route path matches urls starting with login and will lazy load the login component. Remember to export the login component as default in the ./pages/login file like this export default LoginComponent extends HTMLElement { ... }. The second route matches all urls starting with home and will stamp the HomeComponent in the web-router. The third route matches all paths that the two routes before didn't catch and redirects to home. This can also be useful for displaying "404 - Not Found" pages. const routerSlot = document.querySelector("router-slot"); await routerSlot.add([ { path: "login", component: () => import("./path/to/login/component") // Lazy loaded }, { path: "home", component: HomeComponent // Not lazy loaded }, { path: "**", redirectTo: "home" } ]); You may want to wrap the above in a whenDefined callback to ensure the router-slot exists before using its logic. customElements.whenDefined("router-slot").then(async () => { ... }); 5. Navigate using the history API, anchor tag or the <router-link> component In order to change a route you can either use the history API, use an anchor element or use the router-link component. History APIHistory API To push a new state into the history and change the URL you can use the .pushState(...) function on the history object. history.pushState(null, "", "/login"); If you want to replace the current URL with another one you can use the .replaceState(...) function on the history object instead. history.replaceState(null, "", "/login"); You can also go back and forth between the states by using the .back() and .forward() functions. history.back(); history.forward(); Go here to read more about the history API. Anchor elementAnchor element Normally an anchor element reloads the page when clicked. This library however changes the default behavior of all anchor element to use the history API instead. <a href="/home">Go to home!</a> There are many advantages of using an anchor element, the main one being accessibility. router-link With the router-link component you add <router-link> to your markup and specify a path. Whenever the component is clicked it will navigate to the specified path. Whenever the path of the router link is active the active attribute is set. <router-link <button>Go to login page!</button> </router-link> Paths can be specified either in relative or absolute terms. To specify an absolute path you simply pass /home/secret. The slash makes the URL absolute. To specify a relative path you first have to be aware of the router-slot context you are navigating within. The router-link component will navigate based on the nearest parent router-slot element. If you give the component a path (without the slash), the navigation will be done in relation to the parent router-slot. You can also specify ../login to traverse up the router tree. 6. Putting it all together6. Putting it all together So to recap the above steps, here's how to use the router. <html> <head> <base href="/" /> </head> <body> <router-slot></router-slot> <a href="/home">Go to home</a> <a href="/login">Go to login</a> <script type="module"> import ""; customElements.whenDefined("router-slot").then(async () => { const routerSlot = document.querySelector("router-slot"); await routerSlot.add([ { path: "login", component: () => import("./path/to/login-component") }, { path: "home", component: () => import("./path/to/home-component") }, { path: "**", redirectTo: "home" } ]); }); </script> </body> </html> ➤ lit-element The web-router works very well with lit-element. Check out the example below to get an idea on how you could use this router in your own lit-element based projects. import { LitElement, html, query, PropertyValues } from "lit-element"; import { RouterSlot } from "@appnest/web-router" const ROUTES = [ { path: "login", component: () => import("./pages/login") }, { path: "home", component: () => import("./pages/home") }, { path: "**", redirectTo: "home" } ]; @customElement("app-component"); export class AppComponent extends LitElement { @query("router-slot") $routerSlot!: RouterSlot; firstUpdated (props: PropertyValues) { super.firstUpdated(props); this.$routerSlot.add(ROUTES); } render () { return html`<router-slot></router-slot>`; } } ➤ Advanced➤ Advanced You can customize a lot in this library. Continue reading to learn how to handle your new superpowers. GuardsGuards A guard is a function that determines whether the route can be activated or not. The example below checks whether the user has a session saved in the local storage and redirects the user to the login page if the access is not provided. If a guard returns false the routing is cancelled. function sessionGuard () { if (localStorage.getItem("session") == null) { history.replaceState(null, "", "/login"); return false; } return true; } Add this guard to the add function in the guards array. ... await routerSlot.add([ ... { path: "home", component: HomeComponent, guards: [sessionGuard] }, ... ]); Dialog routesDialog routes Sometimes you wish to change the url without triggering the route change. This could for example be when you want an url for your dialog. To change the route without triggering the route change you can use the functions on the native object on the history object. Below is an example on how to show a dialog without triggering the route change. history.native.pushState(null, "", "dialog"); alert("This is a dialog"); history.native.back(); This allow dialogs to have a route which is especially awesome on mobile. ParamsParams If you want params in your URL you can do it by using the :name syntax. Below is an example on how to specify a path that matches params as well. This route would match urls such as user/123, user/@andreas, user/abc and so on. The preferred way of setting the value of the params is by setting it through the setup function. await routerSlot.add([ { path: "user/:userId", component: UserComponent, setup: (component: UserComponent, info: RoutingInfo) => { component.userId = info.match.params.userId; } } ]); Alternatively you can get the params in the UserComponent by using the queryParentRouterSlot(...) function. import { LitElement, html } from "lit-element"; import { Params, queryParentRouterSlot } from "@appnest/web-router"; export default class UserComponent extends LitElement { get params (): Params { return queryParentRouterSlot(this)!.match!.params; } render () { const {userId} = this.params; return html` <p>:userId = <b>${userId}</b></p> `; } } customElements.define("user-component", UserComponent); Deep dive into the different route kindsDeep dive into the different route kinds There exists three different kinds of routes. We are going to take a look at those different kinds in a bit, but first you should be familiar with what all routes have in common. export interface IRouteBase<T = any> { // The path for the route fragment path: PathFragment; // Optional metadata data?: T; // If guard returns false, the navigation is not allowed guards?: Guard[]; // Whether the match is fuzzy (eg. "name" would not only match "name" or "name/" but also "path/to/name") fuzzy?: boolean; } Component routesComponent routes Component routes resolves a specified component. You can provide the component property with either a class that extends HTMLElement (aka a web component), a module that exports the web component as default or a DOM element. These three different ways of doing it can be done lazily by returning it a function instead. export interface IComponentRoute extends IRouteBase { // The component loader (should return a module with a default export if it is a module) component: Class | ModuleResolver | PageComponent | (() => Class) | (() => PageComponent) | (() => ModuleResolver); // A custom setup function for the instance of the component. setup?: Setup; } Here's an example on how that could look in practice. routerSlot.add([ { path: "home", component: HomeComponent }, { path: "terms", component: () => import("/path/to/terms-module") }, { path: "login", component: () => { const $div = document.createElement("div"); $div.innerHTML = `🔑 This is the login page`; return $div; } }, { path: "video", component: document.createElement("video") } ]); Redirection routesRedirection routes A redirection route is good to use to catch all of the paths that the routes before did not catch. This could for example be used to handle "404 - Page not found" cases. export interface IRedirectRoute extends IRouteBase { // The paths the route should redirect to. Can either be relative or absolute. redirectTo: string; // Whether the query should be preserved when redirecting. preserveQuery?: boolean; } Here's an example on how that could look in practice. routerSlot.add([ ... { path: "404", component: document.createTextNode(`404 - The page you are looking for wasn't found.`) } { path: "**", redirectTo: "404", preserveQuery: true } ]); Resolver routesResolver routes Use the resolver routes when you want to customize what should happen when the path matches the route. This is good to use if you for example want to show a dialog instead of navigating to a new component. If the custom resolver returns false the navigation will be cancelled. export interface IResolverRoute extends IRouteBase { // A custom resolver that handles the route change resolve: CustomResolver; } Here's an example on how that could look in practice. routerSlot.add([ { path: "home", resolve: (info: RoutingInfo) => { const $page = document.createElement("div"); $page.appendChild(document.createTextNode("This is a custom home page!")); // You can for example add the page to the body instead of the // default behavior where it is added to the router-slot. // If you want a router-slot inside the element you are adding here // you need to set the parent of that router-slot to info.slot. document.body.appendChild($page); }) } ]); Stop the user from navigating awayStop the user from navigating away Let's say you have a page where the user has to enter some important data and suddenly he/she clicks on the back button! Luckily you can cancel the the navigation before it happens by listening for the willchangestate event on the window object and calling preventDefault() on the event. window.addEventListener("willchangestate", e => { // Check if we should navigate away from this page if (!confirm("You have unsafed data. Do you wish to discard it?")) { e.preventDefault(); return; } }, {once: true}); Helper functionsHelper functions The library comes with a set of helper functions. This includes: path()- The current path of the location. query()- The current query as an object. queryString()- The current query as a string. toQuery(queryString)- Turns a query string into a an object. toQueryString(query)- Turns a query object into a string. slashify({ startSlash?: boolean, endSlash?: boolean; })- Makes sure that the start and end slashes are present or not depending on the options. stripSlash()- Strips the slash from the start and/or end of a path. ensureSlash()- Ensures the path starts and/or ends with a slash. isPathActive (path: string | PathFragment, fullPath: string = getPath())- Determines whether the path is active compared to the full path. Global navigation eventsGlobal navigation events You are able to listen to the navigation related events that are dispatched every time something important happens. They are dispatched on the window object. // An event triggered when a new state is added to the history. window.addEventListener("pushstate", (e: PushStateEvent) => { console.log("On push state", path()); }); // An event triggered when the current state is replaced in the history. window.addEventListener("replacestate", (e: ReplaceStateEvent) => { console.log("On replace state", path()); }); // An event triggered when a state in the history is popped from the history. window.addEventListener("popstate", (e: PopStateEvent) => { console.log("On pop state", path()); }); // An event triggered when the state changes (eg. pop, push and replace) window.addEventListener("changestate", (e: ChangeStateEvent) => { console.log("On change state", path()); }); // A cancellable event triggered before the history state changes. window.addEventListener("willchangestate", (e: WillChangeStateEvent) => { console.log("Before the state changes. Call 'e.preventDefault()' to prevent the state from changing."); }); // An event triggered when navigation starts. window.addEventListener("navigationstart", (e: NavigationStartEvent) => { console.log("Navigation start", e.detail); }); // An event triggered when navigation is canceled. This is due to a Route Guard returning false during navigation. window.addEventListener("navigationcancel", (e: NavigationCancelEvent) => { console.log("Navigation cancelled", e.detail); }); // An event triggered when navigation ends. window.addEventListener("navigationend", (e: NavigationEndEvent) => { console.log("Navigation end", e.detail); }); // An event triggered when navigation fails due to an unexpected error. window.addEventListener("navigationerror", (e: NavigationErrorEvent) => { console.log("Navigation failed", e.detail); }); // An event triggered when navigation successfully completes. window.addEventListener("navigationsuccess", (e: NavigationSuccessEvent) => { console.log("Navigation failed", e.detail); }); Scroll to the topScroll to the top If you want to scroll to the top on each page change to could consider doing the following. window.addEventListener("navigationend", () => { requestAnimationFrame(() => { window.scrollTo(0, 0); }); }); Style the active linkStyle the active link If you want to style the active link you can do it by using the isPathActive(...) function along with listning to the changestate event. import {isPathActive} from "@appnest/web-router"; const $links = Array.from(document.querySelectorAll("a")); window.addEventListener("changestate", () => { for (const $link of $links) { // Check whether the path is active const isActive = isPathActive($link.getAttribute("href")); // Set the data active attribute if the path is active, otherwise remove it. if (isActive) { $link.setAttribute("data-active", ""); } else { $link.removeAttribute("data-active"); } } }); ➤ ⚠️ Be careful when navigating to the root! From my testing I found that Chrome and Safari, when navigating, treat an empty string as url differently. As an example history.pushState(null, null, "") will navigate to the root of the website in Chrome but in Safari the path won't change. The workaround I found was to simply pass "/" when navigating to the root of the website instead.
https://www.npmjs.com/package/@appnest/web-router
CC-MAIN-2022-33
refinedweb
2,451
50.63
18 April 2008 11:16 [Source: ICIS news] By John Richardson SINGAPORE (ICIS news)--The consensus view is that the world will return to normal in 2012, justifying another wave of big capacity to satisfy voracious demand growth in emerging markets. ?xml:namespace> This view was very much in evidence at the recent National Petrochemical and Refiners Association (NPRA) meeting in ?xml:namespace> Senior executives at the International Petrochemical Conference (IPC) were talking about where and when to add more capacity with much of the talk focused on methanol-to-olefins (MTO) technologies and better integration between refineries and petrochemicals. Perhaps the world economy will recover from its current problems fairly quickly and the looming supply overhang will be comfortably absorbed by 2012. Hundreds of millions more people in emerging markets might also become consumers of products made from chemicals, resulting in the next up cycle delivering the kind of profitability we saw in 2003-2007. But if you’re planning scenarios, it might at least be worth taking these possibilities into account: * The * Estimates of the petrochemical supply overhang being absorbed by the early part of the next decade prove to be wrong because they was based on global growth remaining at a minimum of 3% per year. The IMF's warning in April 2008 that there was a 25% chance of growth of less than 3% in 2008-09 - amounting to a global recession - comes true. Global ethylene (C2) operating rates are driven to historic lows. The * Inflation slows the pace at which millions more people in developing nations ramp up their consumption. High food prices persist. IMF managing director Dominique Strauss-Kahn warned in April 2008 that “the food crisis poses questions about the survivability of democracy and political regimes… (and) sometimes these questions lead to war”. His warning comes true. This is frightening stuff. A colleague, who saw a draft of this article, suggested I come off the fence and say what I think. To be honest, I don’t know and so I am going to go away, talk to more people and do so some more research. Watch this space. But I am leaning strongly towards the view that high, and even higher, crude prices are here to stay - perhaps the biggest single-biggest threat to a V-shaped economic recovery. Paul Hodges, chairman of UK-based consultancy International e-Chem, argues on his ICIS blog - Chemicals and the Economy - that This is not a sign of the resilience of the Gasoline, diesel and other fuel prices are heavily subsidised in The supply side gets ever more scary. Lukoil vice-president Leonid Fedun was widely quoted in the media earlier this week as saying that last year’s Russian oil production of 10m bbl/day was the highest he would see in his “lifetime”. The International Energy Agency had estimated last July that Russian crude output would rise to 10.5m bbl/day in 2012. Non-Opec supply is in decline - for example, There are doubts over how much extra output Opec can achieve to match the shortfall between supply and demand. Supply is also threatened by booming AF Alhajji, energy economist and Associate Professor at Another short-term risk is this year’s hurricane season in the US Gulf. Meteorologists at This might make harder-to-extract reserves look more viable – for instance, the But what effect will the credit crisis have on the availability of funding for exploring these reserves, which also include deep-sea conventional crude? There is a lot of disagreement over how much the ramp-up in crude is the result of fundamentals and how much is down to speculation. Funds have poured into crude in response to the weaker dollar and the fall in equity markets and the speculators are generally accepted to have played an important role in pushing crude to record highs this week. Estimates of the “trader’s premium” on the price of a barrel of crude are between $20/bbl and $45/bbl. Similarly, calculations about where crude prices would have to be in the long term to justify these hard-to-justify reserves range from $60/bbl to $80/bbl. If, for arguments sake, you take $45 off $113.21/bbl (the June Brent price on Thursday) you get $68.21/bbl. Nobody really knows what the trader’s premium is and therefore what would happen to pricing if funds suddenly poured out of crude. So in an economic environment that is perhaps more uncertain than at any time since the 1970s oil crisis or even the Great Depression, why take the risk? And finally, how can anyone be certain that growth in An argument used to justify predictions of ever-surging demand for chemicals is that The further west you go, the lower are per capita incomes - resulting in huge infrastructure spending by the government and a drive to get lower-value industry to relocate westwards. The government is determined to narrow the income gap between coastal and inland But can the environment sustain growth at the kind of levels we have seen in Could water shortages and air pollution lead to the economy coming off the rails? Western and northern How can people continue moving to the existing big towns and cities, and how can Air pollution, no matter how efficient the technologies, is bound to get much worse as industry spreads westwards. Can This is all worth thinking about - and, as we’ve already said, even building into scenarios. But the danger of following such a set of beliefs is that you get left behind in the great petrochemicals game of building more and more capacity in order to maintain economies of scale and grow market share in developing economies. Who would want to be a CEO? Bookmark
http://www.icis.com/Articles/2008/04/18/9117202/insight-multiple-scenarios-for-global-chemicals.html
CC-MAIN-2015-06
refinedweb
968
55.27
Take a look at the example in Figure 8.3, ButtonProject.java, which is designed to show how to handle events in SWT applications. When you click the button in this project, the application catches that event and then the message “Thanks for clicking.” appears in the Text widget. This application starts by creating the Button widget it needs, giving it the caption “Click here,” and making it a push button (the allowed styles are SWT.ARROW, SWT.CHECK, SWT.PUSH, SWT.RADIO, SWT.TOGGLE, SWT.FLAT, SWT.UP, SWT.DOWN, SWT.LEFT, SWT.RIGHT, and SWT.CENTER): import org.eclipse.swt.*; ... No credit card required
https://www.safaribooksonline.com/library/view/javatm-after-hours/0672327473/0672327473_ch08lev1sec4.html
CC-MAIN-2018-34
refinedweb
106
68.36
Your First tvOS App with Fire The first time you start Fire, before opening or starting a new project, it will present you with the Welcome Screen, pictured below. You can also always open up the Welcome screen via the "Window|Welcome" menu command or by pressing ⇧⌘1. In addition to logging in to your remobjects.com account, the Welcome Screen allows you to perform three tasks, two of which you will use now. On the bottom left, you can choose your preferred Elements language. Fire supports writing code in Oxygene, C# and Swift. Picking an option here will select what language will show by default when you start new projects, or add new files to a multi-language project (Elements allows you to mix all fivwe languages in a project, if you like). This tutorial will cover all four languages. For code snippets, and for any screenshots that are language-specific, you can choose which language to see in the top right corner. Your choice will persist throughout the article and the website, though you can of course switch back and forth at any time. After picking your default language (which you can always change later in Preferences), click the "Start a new Project" button to bring up the New Project Wizard sheet: You will see that your preferred language is already pre-selected at the top right – although you can of course always choose a different language just for this one project. On the top left you will select the platform for your new application. Since you're going to build an tvOS app for Apple TV, select Cocoa. This filters the third list down to show all Cocoa templates only. Drop down the big popup button in the middle and choose the "Single View App (tvOS)" project template, then click "OK". Next, you will select where to save the new project you are creating: This is pretty much a standard Mac OS X Save Dialog; you can ignore the two extra options at the bottom for now and just pick a location for your project, give it a name, and click "Create Project". You might be interested to know that you can set the default location for new projects in Preferences. Setting that to the base folder where you keep all your work, for example, saves you from having to find the right folder each time you start a new project. Once the project is created, Fire opens its main window, showing you a view of your project: Let's have a look around this window, as there are a lot of things to see, and this windows is the main view of Fire where you will do most of your work. The Fire Main Window At the top, you will see the toolbar, and at the very top of that you see the name MyFirstApp.sln. Now, MyFirstApp is the name you gave your project, but what does .sln mean? Elements works with projects inside of a Solution. You can think of a Solution as a container for one or more related projects. Fire will always open a solution, not a project – even if that solution might only contain a single project, like in this case. In the toolbar itself are buttons to build and run the project, as well as a few popup buttons that let you select various things. We'll get to those later. The left side of the Fire window is made up by what we call the Navigation Pane. This pane has various tabs at the top that allow you to quickly find your way around your project in various views. For now, we'll focus on the first view, which is active in the screenshot above, and is called the Project Tree. You can hide and show the Navigation Pane by pressing ⌘0 at any time to let the main view (wich we'll look at next) fill the whole window. You can also bring up the Project Tree at any time by pressing ⌘1 (whether the Navigation Pane is hidden or showing a different tab). The Project Tree The Project Tree shows you a full view of everything in your project (or projects). Each project starts with the project node itself, which is larger, and selected in the screenshot above, as indicated by its blue background. Because it is selected, the main view on the right shows the project summary. As you select different nodes in the Project Tree the main view adjusts accordingly. Each project has three top level nodes. Settings gives you access to all the project settings and options for the project. Here you can control how the project is built and run, what exact compiler options are used, etc. The project settings are covered in great detail here. References lists all the external frameworks and libraries your project uses. As you can see in the screenshot, the project already references all the most crucial libraries by default (we'll have a look at these later), and you can always add more by right-clicking the References node and choosing "Add Reference" from the context menu. You can also drag references in directly from the Finder, as well as, of course, remove unnecessary references. Please refer to the References topic for more in-depth coverage. Files, finally, has the meat of your application. This is where all the files that make up your app are listed, including source files, images and other resources. The Main View Lastly, the main view fills the rest of the window (and if you hide the Navigation Pane, all of the window), and this is where you get your work done. With the project node selected, this view is a bit uninspiring, but when you select a source file, it will show you the code editor in that file, and it will show specific views for each file type. When you hide the Navigation Pane, you can still navigate between the different files in your project via the Jump Bar at the top of the main view. Click on the "MyFirstApp" project name, and you can jump through the full folder and file hierarchy of your project, and more. Your First tvOS Project Let's have a look at what's in the project that was generated for you from the template. This is already a fully working app that you could build and launch now – it wouldn't do much, but all the basic code and infrastructure is there. First of all, there are two source files, the AppDelegate and a ViewController, with file extensions matching your language. And there's a handful of non-code files in the Resources folder. If you are already used to iOS development, then a lot of this will be very familiar to you already. The Application Delegate The AppDelegate is a standard class that pretty much every iOS, tvOS and Mac app implements. You can think of it as a central communication point between the Cocoa (Touch) frameworks and your app, the go-to hub that Cocoa will call when something happens, such as your app launching, shutting down, or getting other external notifications. There's a range of methods that the AppDelegate can implement, and by default the template provides four of them to handle application launch, shutdown, suspension (when the app moves into the background) and resuming, respectively. For this template, they are all empty, because what gets your app off the ground and running happens elsewhere, as we'll see in a second. If you wanted to add your own code to run when the app starts, you would add that to the implementation of the application:didFinishLaunchingWithOptions: body: method AppDelegate.application(application: UIApplication) didFinishLaunchingWithOptions(launchOptions: NSDictionary): Boolean; begin result := true; end; public BOOL application(UIApplication application) didFinishLaunchingWithOptions(NSDictionary launchOptions) { return true; } func application(_ application: UIApplication!, didFinishLaunchingWithOptions launchOptions: NSDictionary!) -> Bool { return true } As it stands, the method just returns true to let Cocoa know that everything is A-OK and the app should start normally. Another thing worth noticing on the AppDelegate class is the UIApplicationMain Attribute that is attached to it. You might have noticed that your project has no main() function, no entry point where execution will start when the app launches. The UIApplicationMain attribute performs two tasks: (1) it generates this entry point for you, which saves a whole bunch of boilerplate code and (2) it lets Cocoa know that the AppDelegate class (which could be called anything) will be the application's delegate class. type [UIApplicationMain, IBObject] AppDelegate = class(IUIApplicationDelegate) //... end; [UIApplicationMain, IBObject] class AppDelegate : IUIApplicationDelegate { //... } @UIApplicationMain @IBObject public class AppDelegate : IUIApplicationDelegate { //... } The View Controller The second class, and where things become more interesting, is the ViewController. Arbitrarily named so because it is the only view controller for your application (you started with the "Single View" app template, after all), this is where the actual logic for your application's view will be encoded. It too is fairly empty at this stage, with two placeholder methods that get called when the view loads ( viewDidLoad) and when there is a shortage of memory ( didReceiveMemoryWarning), respectively. You'll fill this class up with real code in a short while. But first let's move on to the other files in the project. The Resources There are four resource files in the project, nested in the Resources subfolder. This is pure convention, and you can distribute files within folders of your project as you please. The Assets.xcassetsfile is an Xcode Asset Catalog – essentially a collection of images (and possibly other assets) in various sizes. Like most resource files, you can edit this file in Xcode (we'll see how in a few moments). By default, it only contains the application icon, in the various sizes to support iPhones and iPads at different resolutions. Main.storyboardcontains the real UI of your application that will show once it is launched. This file will also be designed in Xcode, and it will have connections to your code, so that it can interact with the application logic you will write. Finally, Info.plistis a small XML file that provides the operating system with important parameters about your application. The file provides some values that you can edit (such as the name of the Launch Screen and Main Storyboard above, or what types of devices your app will run on), and as part of building your application, Elements will expand it and add additional information to the file before packaging it into your final app. You can read more about this file here. <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" ""> <plist version="1.0"> <dict> <key>UIMainStoryboardFile</key> <string>Main</string> <key>UIDeviceFamily</key> <array> <integer>1</integer> <integer>3</integer> </array> ... The Main Storyboard As stated earlier, no startup code is necessary in the AppDelegate method, because Cocoa already knows how to get your app up and running. And that is via the UIMainStoryboardFile entry in the Info.plist file. As you can see above, it points to Main, which is the Main.storyboard file in your project. Let's have a look. Editing .storyboard files is done in Xcode, using Apple's own designer for Mac and iOS. For this step (and for many other aspects of developing for tvOS) Xcode needs to be installed on your system. Xcode is available for free on the Mac App Store, and downloading and running it once is enough for Fire to find it, but we cover this in more detail here in the Setup section. When you select the .storyboard file, the main view will populate with information about the storyboard, including its name, build action ("Storyboard"), and a helpful "Edit in Xcode" button: Clicking this button will generate/update a small stub project that will open in Xcode and give you access to all the resources in your project. Instead of using the button, you can also either right-click a resource file and select "Edit in Xcode" from the menu, or right-click the project node and choose "Edit User Interface Files Xcode". Xcode will come up and look something like this: On the left, you see all your resource files represented – there's the Main storyboard and the Asset Catalog. Selecting an item will open it on the right, as is happening with the Main.storyboard in the screenshot above. To make more room, let's hide the file list (⌘0 just like in Fire) and the Document Outline (via the small square button in the bottom left). Also, press ⌥⌘1 to show the Utility pane on the right instead: The view for the screenshots is a bit constraint, but you should see the 1080p main view for your application, alongside an arrow pointing to it from the left, which indicates that this is the initial screen for your app that Cocoa will show when it loads your storyboard. If you select the view controller by clicking its title bar and then select the third tab in the Utility View (⌥⌘3), you will see that its "Class" property is set to "ViewController". This is the ViewController class in your code. So let's go back to Fire and start adding some (very simple) functionality to this view controller. You can close Xcode, or leave it open and just switch back to Fire via ⌘Tab – it doesn't matter. Back in Fire, select the ViewController source file, and let's start adding some code. We're going to create a really simple app for now, just a button and a label. Clicking the button will update the label with a greeting message. Let's start by adding a property, so that we can refer to the label from code: public [IBOutlet] property label: UILabel; [IBOutlet] public UILabel label { get; set; } @IBOutlet var label: UILabel Note the IBOutlet attribute attached to the property. This (alongside the IBObject attribute that's already on the ViewController class itself) lets the compiler and Xcode know that the property should be made available for connections in the UI designer ( IB is short for Interface Builder, the former name of Xcode's UI designer). Next, let's add a method that can be called back when the user presses the button: [IBAction] method ViewController.sayHello(sender: id); begin label.text := 'Hello from Oxygene'; end; [IBAction] public void sayHello(id sender) { label.text = "Hello from C#"; } @IBAction func sayHello(_ sender: Any?) { label.text = "Hello from Swift" } Similar to the attribute above, here the IBAction attribute is used to mark that the method should be available in the designer. Note: If you are using Oxygene as a language, methods need to be declared in the interface section and implemented in the implementation section. You can just add the header to the interface and press ^C and Fire will automatically add the second counterpart for you, without you having to type the method header declaration twice. Now all that's left is to design the user interface and hook it up to the code you just wrote. To do that, right-click the Main.storyboard and choose "Edit in Xcode" again. This will bring Xcode back to the front and make sure the UI designer knows about the latest change you made to the code. (Tip: you can also just press ⌘S at any time to sync.) Now drag a label and a button from the bottom-right class palette onto the View, and align them so that they look something like this: (Tip: you can zoom the view by right-clicking or control-clicking into the empty white space of the designer and choosing a scale factor, if you cannot fit the whole view on your screen.) Then select both controls via ⌘-clicking on each one in turn, press the first button in the bottom right, check "Horizontally Center in Container" and click "Add 2 Constraints". This configures Auto Layout, so that the controls will automatically be centered in the view, regardless of screen size. In a real app, you would want to set more constraints to fully control the layout, such as the spacing between the controls, but for this simple app, just centering them will do. Finally, it's time to connect the controls to your code. There are two connections to be made in total – one from the property to the UITextField and UILabel, and one from the UIButton back to the action. For the first, click on the yellow view controller icon (which represents your ViewController class) while holding down the Control (^) key, and drag onto the text field, as shown below: When you let go, a small popup appears, listing all properties that match up for connecting. In this case, it will be the view property (which links the base view for the controller, and you don't want to mess with that), and the label one you defined earlier. Click on "label" to make the connection. Connecting the action works the same way, just in reverse. This time, Control-drag from the button back to the yellow view controller icon. Select "sayHello:" from the popup, and you're done. And with that, the app is done. You can now close Xcode, go back to Fire, and run it. Running Your App You're now ready to run your app in the Simulator. Earlier on we looked at the top toolbar in Fire. In the middle of the toolbar, you will find a popup button that is the device selector, which should by default be set to "tvOS Device". That is a placeholder that, when selected, tells Fire to build your application for a real, physical Apple TV device, and is available whether you have an actual Apple TV connected or not. However, because "tvOS Device" does not represent a real device, you cannot run your app on it. If you open up the popup, you will see a menu with more options. For one, you will see the tvOS Simulator; for another, you will also see any real Apple TV devices you have connected to your Mac (such as "Apple TV 4" in the screenshot below), along with their device type and OS version. For now just select the simulator. When done, you can hit the "Play" button in the toolbar, or press ⌘R. Fire will now build your project and launch it in the simulator. You will notice that while the project is compiling, the Jump Bar turns blue and shows the progress of what is happening in the top right of the window. The application icon will also turn blue for the duration; this allows you to switch away from Fire while building a more complex/slow project, and still keep an eye on the progress via the Dock. If there were any errors building the project, the Jump Bar and application icon would turn red, and you could press ⌥⌘M or use the Jump Bar to go to the first error message. But hopefully your first iOS project will build ok, and a few seconds later, the iOS Simulator should come to the front, running your app. Press "Enter" to press the button (the Apple TV Simulator will not respond to mouse clicks, only cursor key and Enter. Or you can pair an Apple Remote control with your Mac and use that with the Simulator, as well), and the label will update, as per your code. Running on your device should be just as easy, assuming you have your Apple developer account set up and registered with Xcode as discussed here. Simply select your real device from the device picker popup, hit ⌘R again, and you will see your app get deployed to and launch on your Apple TV. You will see two warnings being emitted when building for a device. That is because your project has no Provisioning Profile and Developer Certificate selected yet. Fire will pick the default profile and certificate (assuming one is present), so that you can run your application without hassle, but the two warnings will remind you that, eventually, you will want to explicitly select the exact profile and certificate you want to use for this app. To do so, simply select the "Settings" (⌘I) node on the Project Tree, which brings up the Project Settings Editor: You can select both your Certificate and your Provisioning Profile from the list (once again, see here if your profiles or certificates don't show up as you expect them to).
https://docs.elementscompiler.com/Tutorials/FirstApp/Fire/tvOS/
CC-MAIN-2020-45
refinedweb
3,428
67.08
>>? People buy that stuff? (Score:1) The masquerading program for Windows is $160 for 20 client boxes. My lord. I could throw together an old 486 for hardly a penny more and have it performing the same task with a _real_ OS. Re:100kb Microkernel? (Score:1) Total: 22,392K Paged: 14,624K Nonpaged: 7,768K On a WinNT server in the next room, SP4: Total: 18,556K Paged: 14,556K Nonpaged: 4,000K I suspect that these numbers vary a bit from machine to machine, and the numbers on the server were changing as I was trying to get them. Michael Koehn -- I'm working on my boss. Already got permission to set up one Linux box as a print server (Yay!) NT clone: Already been done. (Score:1) Linux is still a better choice even for this kind of application for many reasons. I've worked with both and I'd choose Linux in a heartbeat. Re:Clueless about NT Operating System as usual. (Score:1) NT's design. (Score:1) NT's kernel is ntoskrnl.exe. The microkernel itself is just a small portion of ntoskrnl.exe. (I think it's about 60k.) Some other stuff that's not actually part of the microkernel is bound into the same file, for example the namespace manager, and the security manager. hal.dll contains the Hardware Abstraction Layer. This contains most of the processor specific code in the system. The NT kernel sits on top of this. ntdll.dll is a user mode DLL that contains the syscall interface for user mode programs to invoke the NT native interface. It's mainly a bunch of wrappers that do an int2e, which invokes NT's syscall handler. Parts of Win32 sits on top of this DLL. This DLL is not part of the kernel. kernel32.dll, user32.dll, and gdi32.dll are the user-mode client side DLLs that implement Win32 itself. They're not part of the kernel at all, and kernel32.dll sits on top of ntdll.dll. csrss.exe is the user-mode server-side process that implements the Win32 subsystem and parts of GDI. This is not part of the kernel either. win32k.sys is the kernel-mode part of csrss.exe that implements the GDI graphics engine and some other stuff. This is not part of the kernel, though it does execute in kernel mode. NT's design is not all that bad in my opinion. Re:100kb Microkernel? (Score:2) Putting the Win32 API directly into the kernel is short sighted, and implies that Win32 API is all that this kernel is capable of running. That means it's already nearly obsolete before it's even out the door. In a sense, it's the equivalent of calling a kernel which has the BASH shell (and almost nothing more) directly into a lightweight kernel and claiming that it is a new lean-mean Linux. I wonder if the doj could open win32 (Score:2) After this the win32 will be everwhere though and be bad for possix. But we would have choices and if all these different distros of windows (linux, be, ect)and if posix is included perhaps win32 would die. Another great thing could happen with apple. Apple would relise that win32 is the thing after this new wave of windows clones and would add win32 api support into mac osx so non computer people could have access to a stable OS thats way easier and supperior to use then windows. I truly hope that the doj will force ms to release the win32 api. WinNT API != Win32 API (Score:1) On top of this there are various drivers which allow executables with different `personalities' (not sure if that is the correct terminology). Win32 is one of these personalities, POSIX is another - or would be if the driver was complete and correct. Presumably the Win16/WindowsOnWin32 stuff is another. So have these people implemented the WinNT API (probably a realistically small task), or the Win32 API -- which is huge and constantly expanding? Re:Linux is real POSIX (Score:1) Re:Why not WINE? (Score:1) They could still sell support, a la Red Hat... Alex Bischoff --- Re:WINE works for me... (Score:1) WINE is certainly not useless. I've been running Quicken 6.0 for the past four months via Wine for all of our home finances. Sure, there are rough spots and some missing functionality, but it works fine for us. Anyway, I just wanted to thank the Wine team for their great work. I do agree with you about the suckitude of Windows command line apps. Re:quicken and wine (Score:1) Well, I am running 16 bit Quicken with a version of Wine from March 1999. I have found that newer vesions made 16 bit support worse but 32 bit support better. Versions of wine after March 1999 tended to crash and burn upon Quicken startup, but this might have changed in the past month with all the progress the Wine team has been making. Using -managed and -winver win31, I am able to run all the basics and create charts and graphs. Loans and auto-completion work correctly, if slowly. I have not tried the net functionality (don't need it), and I haven't configured wine for printing yet so I don't know if it works. Of course, I back up every time, but I have not had corrupted data yet, and I use Quicken/Wine several times per week. I even have a Windomaker dock app configured so that my wife can run it easily. PetrOS... for games? (Score:2) Re:eat it, Bill (Score:1) Re:Believe it when you see it (Score:1) ...phil Re:eat it, Bill (Score:1) Why is it that you M$ moles are so easy to spot? LK Re:eat it, Bill (Score:1) >Anybody who doesn't leap right into the gang rape is a Microsoft mole? The attitude is what gives it all away. Characterizing this as a gang rape bolsters my position that M$ moles are usually easy to spot. M$ is the most powerful corporation in the world. Pointing out their anti-competitive and often illegal practices is NOT a gang rape. LK It fits in the cache! (Score:1) In my experience, the fastest code can be the biggest code, at least in independent testing. Code that requires looping can often be sped up by unrolling the loop when there is a fixed small number of iterations. But this leaves out what may be the most important part: the cache. If your kernel is big, then regardless of how optimized it is, it will waste clock cycles getting into the CPU to do its stuff. Any OS that takes several MB between kernel and needed services will always take a huge penalty. The whole point of a 100K kernel is that even on the most pathetic systems it will remain continuously in the cache. It would almost be like having the kernel embeded in the CPU. If the services (disk, net, etc.) don't take much room, then you get another huge boost. It's really cool that memory is cheap now, but even a gig of ram will never make up for a small cache. That's why Xeon processors cost so much. (of course if your machine is doing any disk swapping to make up for not enough memory, then you're dead meat Yes, the versions of Unix that have huge kernels can still get fantastic performance, but at what cost: they don't have 512K caches, they have several MB. Ouch!!! I'll take a small kernel and small services (thus a _much_ cheaper machine with the same performance) any day. Three cheers for Trumpet Software! (assuming it works and they can get past Microsloth) Re:eat it, Bill (Score:1) >Microsoft is most assuredly NOT the most powerful corporation in the world. It's possibly the most powerful corporation in the geek-world that so many Slashdot readers inhabit. M$ controls the OS of approx 90% of the world's personal computers. M$ makes over 33 million dollars per day. M$ is in a position where they could control the way most people access the internet. To control the exchange of information is power. You know it, I know it, and Chairman Gates knows it. >Some of us, who might be attacked as "supporters of Microsoft" are really just people who can't stand it when we see the losers trying to take down a successful business because they can't compete in the market. M$ needs to play by the rules, just like everyone else. You can't do certain things which M$ is accused of doing. It's dishonest to steal someone else's idea and pretend that it was yours all along. You can't steal the source code for someone else's compression program and pass it off as your own. You can't use your position in the market to force people to not use your competition's products. It would be like GM designing their cars to break if you attempt to install after market products on them from a certain manufacturer. We don't want to destroy M$, but we do want them to play by the rules. LK Re:eat it, Bill (Score:1) >You're taking this just a BIT too seriously... Not at all, this is a serious issue. Whoever controls the way we exchange information, coltrols everything. >Windows is popular, but if MS tried to do something REALLY f'd up with it people could either use Linux or not upgrade to the latest version. Like intentionally holding back bug fixes to their old OS so that people are pressured to buy the new one? The average computer buyer today doesn't even know that linux is. They know what windows is. I've had people who were thinking about buying a Macintosh come up to me and ask "So, does this run Windows 95 or what?". The average consumer is buying a computer to keep up with the Jones', not because they want a new tool or toy to use. If M$ decided that to use windows you were going to have to pay them a $100 per year renewal fee for your software license, most people would have no choice but to pay it. There are morons out there who would pay anything as long as they got to use AOL and M$ Office. LK You think NT is so great? (Score:1) You think NT is so great? Just try changing the permissions on a file from the command line. The greatest OS in the world is worthless if it is built in such a way that you can't use it. Re:100kb Microkernel? (Score:2) Using VMWARE, I tried installing Windows NT Workstation 4.0 with varying memory settings. Here's the results. 8 MB = Refused to Install 12 MB = Refused to Install 16 MB = Installed, ran slowly 32 MB = Installed, ran much better than 16MB. Then after I installed with 32MB, I started reducing the RAM on the already installed NT. 32 MB = Booted fine, as expected. 16 MB = Booted fine, but slower. 12 MB = Booted fine, but really really slow. 8 MB = Blue Screen of Death on bootup. I thought it was interested that the installation program wouldn't let you install with 12MB, but that NT would boot with 12MB. Re:api (Score:1) Anyone who had a Win32 clone would probably be more than happy to devote however many programmers were necessary to maintain compatibility with MS releases, because the market for a Windows clone is HUGE. If you've already got one, then you're looking at a not-small slice of a multi-billion-dollar market, which can pay for ALOT of programmers verifying every single MS API call for every single MS release An interesting approach... (Score:3) It sounds like he's concentrated on getting the command line programs working and doesn't have a GUI yet. Since (I'm guessing) the GUI is the bulk of the work, this hardly counts as a Windows clone. But, I actually like the approach. I wonder if the Wine folks wouldn't have made faster progress by following the same strategy. As it is now, there are lots of programs that "sort of do something" under Wine, but few useful ones that really work 100%. If the command line stuff worked WELL it might draw more developers to finish the job. Re:An interesting approach... (Score:1) Re:100kb Microkernel? (Score:1) Re:Don't get too excited (Score:1) Re:Speaking of NTFS... (Score:1) What does bug me is the inability to boot "single user" off a cd. MS has to do something about that. _damnit_ Re:100kb Microkernel? (Score:1) For example, the Mach microkernel is not an operating system in and of itself. It provides services such as network and disk access. Operating systems such as BSD, Linux, and NeXT were built to utilize its services. There is one thing one can not say about the Mach microkernel -- it is small. Neither PetrOS nor NT are microkernels, although, they may utilize a microkernel-like architecture by creating Ring O level services. In this case, the folks at Trumphet have usurped a computer science concept for marketing purposes. X for an Interface? (Score:1) Re:An interesting approach... (Score:1) Re:I thought the Win32 API had GUI in it (Score:1) Re:An interesting approach... (Score:2) Although I'd love to see _anything_ that got Wine working better than it does now - right now it's completely useless. VMWare is going to kick it's butt all over the shop. Matt. perl -e 'print scalar reverse q(\)-: Re:100kb Microkernel? (Score:3) Matt. perl -e 'print scalar reverse q(\)-: Re:Kernel size has nothing to do with being slow. (Score:1) Re:Why not WINE? (Score:1) Re:They obviously misspelled the name... (Score:1) quicken (Score:1) -- Re:VMWare vs Wine. (Score:1) 2) Resources: the only real hogging that VMWare does is memory, and most of that is the memory given to the guest OS; it's a simple tunable trade-off. It only hogs CPU when it's actually doing something. 3) Second box: I can't afford to do that. What I *will* do is add another 128M to my existing box and give NT 96M insted of 64. I have yet to try WINE, so I will not comment, except to say that I think it's a wonderful project and exactly the kind of thing that shows that there is no great mystery to Windows. Best of luck. Paul. Re:hmm.. i wonder if it will be open source? (Score:2) A more accurate comparision would be from a fresh boot what is the graph of memory consumption of each OS while running this script in SuperWizzyWorks 2000? Re:Clueless about NT Operating System as usual. (Score:1) I think that this is embarrassing for NT... Re:People buy that stuff? (Score:1) Re:Clueless about NT Operating System as usual. (Score:1) Re:yup, no UNIX equivalent of WaitForMultipleObjec (Score:1) But maybe I use threads threads more easily because I mainly work in Java, and Java makes it very comfortable to work with threads... Re:yup, no UNIX equivalent of WaitForMultipleObjec (Score:1) Actually... (Score:1) The Win32 API is a patchwork quilt of conflicting, broken APIs strung together by one OS implementation- it's why Wine's not quite as far as it could be and TWIN got open sourced... It's far, far better to start writing apps using something other than the Win32 API. Re:GUI (Score:1) Re:This is a great Idea!!!! - score this down NOW (Score:1) -- Barry de la Rosa, Senior Reporter, PC Week (UK) Work: barry_delarosa[at]vnu.co.uk, tel. +44 (0)171 316 9364 Clueless about everything as usual. (Score:1) [API's listed] the core of the OS is amazingly well thoughtout and designed by experienced software engineers. I wouldn't think of it! Please, the embarassment is all yours. - ...and then come back and feel embarrassed... Kook of the year (Score:1) Too bad you don't have a clue subsystem in which to install a clue. Re:The market votes with their wallets. (Score:1) So what is your point exactly? I've seen an NT box crash. I've never seen a Linux box crash. I've seen OS/2's GUI lock up (face it, who hasn't?) but I've never seen the machine actually go down. My point? NT sucks. You said OS/2 didn't have a security subsystem. It does and it's flexible. That's the argument. OS/2 is more scalable than NT in a variety of ways as is UNIX. In terms of security, they are less flexible architecturally. That may or may not be a problem. NT does suck, though. sphincter? (Score:1) Trumpet Software (Score:3) Re:I wonder if the doj could open win32 (Score:1) Even with Win32 APIs out there, it would be a major challenge to develop a OS from scratch. Since the Win32 APIs don't make any programs up, you would have to write a whole new desktop enviroment from sratch. For one, it would be usless in Mac OS X, since Mac OS X is big endian PowerPC OS [mainly], while the win32 API is are mainly oriented to little endian x86. (Yes there was a Windows NT 3.5.1 port to CHRP PowerPC [running in little endian mode] a few years back, but it failed in general, because x86 binary programs could not run on the PowerPC. Would it kill posix?: OF Course NOT. Posix is a set of APIs for *nix-like systems, designed for scalblity, power, and stablity. Win32 APIs are designed to bring Windows a stable set of 32-bit APIs. Most *nix-like OSs rely heavly on posix APIs, so they will be in use for year and years and years. At any rate, the main benfit of releasing win32 APIs, Windows would be more stable, faster (since everybody knew about the APIs). Also it would greatly help out projects like WINE. ANd the obvious joke would be... (Score:1) You mean there's a multiuser mode?--Joe -- I wish! (Score:1) I'm not using any extravaganza funky applications, but rather mostly MS office suite and Visual Studio, but after two 8h days of usage, the kernel takes almost 100M of my 128M memory, which is almost unusable. This problem has nothing to with bloat in the applications, but rather somewhere deep inside the kernel. Re:But IBM and Sun would love some windows clones (Score:1) Think. Re:The space shuttle (off-topic) (Score:1) I was under the impression that the shuttle uses 6502's (well, later models) because they were the only CPU's that are currently manufactured to withstand the heat generated by reentry. But it's been a while, and I could be wrong. Yeah, that's a lot worse than windows (Score:1) which won't let you do any of the above when a game crashes and takes the keyboard/screen with it. you get real warm and fuzzy with the reset button. If your POS brand-name machine even has one anymore. But that was a design feature, right? Re:The obstacle to Microkernels (Score:1) Goodness gracious (Score:3) If this is legal (and you can bet MS will be trying hard to prevent it from being) then we may just have hit the point where even OS-specific software and drivers aren't OS-specific any more. Of course the obvious MS response is to immediately make some incompatible API changes that break this new micro-OS, and patent them so far up their asses that a programmer couldn't extract them without reaching down their mouths with a plumber's snake. We'll have to see how the legal side of this evolves. Re:Score 1? (Score:1) But he's down to negative one anyway (and that was a moderating point well spent). Re:100kb Microkernel? (Score:1) Now get in line. Re:100kb Microkernel? (Score:1) ... spammage [/usr/src/linux]# ls -al vmlinux -rwxrwxr-x 1 root root 1278562 Jul 8 17:11 vmlinux That's a very much modular kernel too without any extra gunk my hardware doesn't support, or things I don't use, like routers. That aint tiny either. They obviously misspelled the name... (Score:1) mark "sorry, too much userfriendly, I s'pose..." Re:Goodness gracious (Score:1) VMWare vs Wine. (Score:2) 1) You need a copy of windows to run. To do it legally costs $$$, especially NT. 2) Running a whole second OS is a serious resource hog. 3) It's effectively running on a second (virtual) computer, in its own little sealed box. Why not just get a second computer and a monitor/keyboard/rat switch? Wine provides the Win32 system calls to a Linux process, allowing things like a windows CGI program to do credit card validation to be spawned from Linux' Apache. It may never run every windows program in existence, but: 1) Neither does any one version of Windows. 2) I don't own every windows program in existence. I only care about the ones I have (which these days, are mostly games, half of which actually run under DOS.) 3) This is legacy support. 50% of the legacy windows programs out there aren't Y2K compliant anyway, and an amazing number of people are limping along with "good enough for now" 3.1 installs left over from the 1980's for their daily word processing and checkbook balancing/payroll. (Sheesh, last year I helped a friend of a friend copy his comic book store inventory system from an old 386 SX with a 100 meg hard drive to an old 386 DX with a 200 meg drive. Only reason he left the old system was he'd tried Dos 6 doublespace and the drive started to eat itself.) We don't HAVE to support the latest and greatest Windows apps, those companies are still around and we can lobby for a native version as we penetrate farther and farther into "grandma" land and our usage numbers go up with drool-proof interfaces like Gnome and automatic install/configuration and pre-installs. And we ALREADY support a lot of the old stuff, and creep farther every day. The Wine people are adding new APIs faster than Microsoft is. They're better at it. Someday, they'll catch up. Rob Re:100kb Microkernel? (Score:1) Re:PetrOS - Server OS, not desktop. (Score:1) (Don't even think of mentioning that web server test...my desktop machine is not a webserver. And I don't have multiple T1s to handle that bandwidth anyway. Or four network cards. I knew which is faster, I used to run Windows on this overclock PPro with 32 megs of RAM.) Re:Hey moderators. Lets test the new moderation he (Score:1) My last moderation point was gobbled up. so it should have docked him another point... (I'm posting as AC because, of course, I can't post to a topic I've moderated...) Re:Hey moderators. Lets test the new moderation he (Score:1) Wake up... (Score:1) VC sucks in comparison. Linux IS a better code development enviroment I have to admit. I hate UNIX, but I had to leave NT for sane developement.. Give it a try. Your code will be better (partly because egcs 1.1.2 is a better standard C++). You obviously.. (Score:1) functions in it hmm.. i wonder if it will be open source? (Score:1) now if we all can convince them to open up the dev project this would be damn cool.. expand wine to run native nt products alongside reqular windoze apps. -lordvdr "Linux is not portable" - Linus Torvalds Re:Speaking of NTFS... (Score:2) I haven't seen it yet, but apparently NT5 has a "single user mode" that's command line only. -- Re:People buy that stuff? (Score:2) Much less troublesome than the Trumpet Winsock was the Microsoft 32-bit winsock built in to Windows for Workgroups. (It's essentially the same 32-bit networking that's built in to W95). -- Re:An interesting approach... (Score:2) My understanding is that this is a different approach than wine is taking. wine is trying to emulate the entire sprawling Win32 API, whereas this thing only emulates the "Native" WinNT kernel API. One can imagine a project that translates native WinNT kernel calls to POSIX/Linux API calls. (Another Poster mentioned that there are only 40 or so native API calls, so this is probably several orders of magnitude easier than emulating Win32.) Then you just get all the DLLs, etc from your "licenced" version of WinNT, and bam - Windows programs are running on Linux. The only problem I see is that the graphics wouldn't be over X, but that maybe could be solved with a Win Teminal server client approach. -- Playing it straight (Score:2) Actually, there is (WTS). -- Why not WINE? (Score:1) I dunno... (Score:1) Re:People buy that stuff? (Score:1) Back in the old days (before 1995), Trumpet's Windows 3.1 stack was the best thing going in the market. Even if it's been surpassed since then, it was good stuff, it fit on a floppy, and it did the job. Most, if not all of the other Win31 stacks were serious payware, less flexible, etc. Re:They obviously misspelled the name... (Score:1) Re:Speaking of NTFS... (Score:2) The times I've dealt with video capture on NT, I've given the capture software/hardware a raw AV scsi drive to play with... anything less really isn't worth your time unless your just fooling around. NT Native API (Score:3) Inside the Native API [sysinternals.com] Inside Native Applications [sysinternals.com] Just out of curiosity, I took a look at native.exe (from the applications article) - the only dependency is on NTDLL.DLL, which weighs in at 347kb on my NT4 SP4 machine. Keep in mind that ntdetect.com, ntldr, hal.dll Though I have to admit the exports for it look a little weird... it looks like it implements a good chunk of the standard C library, and I want to know who thought exporting functions like "PropertyLengthAsVariant" were absolutely vital to the kernel... Re:100kb Microkernel? (Score:2) This sounds a lot like saying that Linux is capable of running a web server, X windows, Netscape, Emacs, yadda-yadda, and it can fit on a floppy too. Note, not at the same time, but it can. The floppy sized piece is a small part of the whole that can do wonderful things. I'm sure that the Trumpet people rely on other kernel mode services to provide a system that can run anything at all. To their credit though, the Trumpet people couldn't take functionality OUT of the mukernel to reduce it's size to ~100K, so that size is a result of tweaks. But then again, we don't know how large that functionally comparable piece of M$-NT is per their distribution of it. Re:100kb Microkernel? (Score:2) I've used/developed for QNX in a real-time environment, and I was very impressed. But, the thing to remember is that small size comes at the cost of functionality and performance. After reading your link and some of the ones from there on, I'm under the impression that beyond a bootable POSIX, browser and web server, there's not much there on that floppy. And I noticed that it uses a two stage boot process to get going. Step one bootstraps a decompressor, and step two loads the decompressed system into memory. That OS, off the floppy, is probably on the order of 4MB+... The QNX installation I worked with included a full OS (complete with those bells and whistles like grep, awk and vi), the full Photon windowing system (not just the GUI support for the browser) the developer support for TCP/IP, and Photon, and a nuts-to-the-wall C/C++ compiler from Watcom. The install was about 100MB+, and still wouldn't run Quake.:) It's nice to have a 45K mukernel, but it is more important to have the code for the whole system efficient and fast. Even if the mukernel is half a meg, it must be fast before anything else - except where size trully matters, like on a satellite. Re:100kb Microkernel? MS kernel size numbers. (Score:2) If anyone is interested in learning about the NT kernel go to. Learn more about our enemy.... Close, but no cigar.. (Score:2) Re:100kb Microkernel? (Score:2) The coolest thing about this is that with a 200kb NT, it would be possible to use it as an NT emulator, making it possible to load NT device drivers under other OS's. A little linux-NT bridge could easily be built, where the drivers would get all of the NT services they expect. This would be very helpful for getting "alternative" OS's like BeOS, Linux, MacOS, OS/2, (and now, PetrOS) etc. running on currently unsupported hardware. -m Kernel size has nothing to do with being slow. (Score:2) Re:hmm.. i wonder if it will be open source? (Score:4) Easter eggs. If you hold down QCKRTISO whilst saying the Lord's Prayer backwards and tipping milk into your keyboard, it displays random pictures from Bill's family photo album. This is why stuff like GIF decoders have to be in kernel space under Windows NT; the "photo album" Easter egg requires them to work. Don't get too excited (Score:2) CreateFile, CloseHandle, etc. - Minimal file operations VirtualAlloc, GlobalAlloc, etc - Minimal memory management Plus a half a dozen misc functions. They state in the article that they haven't even started on the GUI, perhaps the hardest part. You can't just clone a few bits kernel32.dll and winnt.dll and say you have a windows clone. They also make no mention of how they plan to implement DDK which, IMO, would be the whole point of making a windows clone. Without device drivers what good is an OS? The WINE project is *way* beyond this. Also WINE benefits tremendously by having a linux core and thus a solid device driver base behind it. Having said that, there are 2 problems with Wine. The first will probably never be surmounted, and that it will never be able support hardware that has win32 only drivers, and many of the APIs Microsoft has developed don't exist under linux so even if someone was willing and able to port, they couldn't. Take Direct3d for example. The best you could hope is to make a D3D->GL layer inside WINE, but it's not a very good mapping. Then there are weirder things like : CryptoAPI, Telephony API, etc.. where there is nothing at all like it under linux. The second problem with WINE is that it is a single process solution. It makes no attempts to emulate the entire system, just the current process. This means you can't : debug a process, drag and drop, and other forms of IPC that many programs depend on. I believe this can be fixed, but will require a fairly big change to WINE. Another project to look at that is very interesting is the FX!32 system by DEC. This system actually runs under NT, so they didn't have to write APIs except to thunk from 64->32->64. But it can run native intel binaries with very little slow down by doing dynamic code translation. (wow, I just noticed "Linux" is not the Microsoft Spell Checker) 100kb Microkernel? (Score:2) Re:100kb Microkernel? (Score:2) Re:100kb Microkernel? (Score:2) NT's Kernel is fine...there are no problems there. The bloat comes in at the interface and application levels, for the most part. That is why NT fared so well in those benchmarks against Linux...they didn't install crap like MS Office on those boxes, it really was OS vs. OS. As to the size of the Kernel 2MB is about right. As to what you can install it on ? I installed Windows NT Advanced Server 3.1 on a 486/33 w 8MB!!! I had to turn off networking during install, and then install networking after I had NT running...but it ran. I installed NT Workstation 4.0 on a Compaq P90 with 8MB. It was unusable but ran. I later upgraded that machine to a second HD which I used solely for the swap file...it was usable barely with MSoffice 95. Things were much better when I moved the machine to 24 MB and upgraded to 2MB video memory. I find the NT 3.5x OS to be VERY stable, much more so than pre 2.0.x Linux. NT 4 is as stable or more stable than Linux as a workstation. When something goes bad you can kill services and restart them. Just like any reasonable OS If the GUI goes though...you have to reboot. That said the GUI is much more stable than X/KDE or X/Gnome. NT is NOT as bad as Linux folk think. NT is MUCH worse than MS thinks. NT bears NO RELATION to what MS marketing says. NT is the best general purpose workstation available right now. I have great expectations for MacOS X. [See Mac OS Rumors [macosrumors.com] for why. if you don't already know.] Linux is really coming along here, way ahead of even a year ago. It'll be a while yet. I think MacOS X will give a good example of what to aim for/above in the future of Linux interfaces. Sun is the best enterprise server solution. I use Linux for small and medium business sized servers and light database applications. The availability of Oracle and IBMDB2 is making me think of using it for larger databases, maybe I'll ask the next client to try it out. I use Sun and Linux for special purpose workstations. I always prefer Linux for this if the application is available. (Sometimes they really want Autocad OK ?) I ran into a bank that needs a supercomputer, I still don't really understand thier application. I am going to try to fit the app to Beowulf. I know this went a bit off topic, nonetheless I hope it was thoughtful, if not neccessarily useful. Re:Clueless about NT Operating System as usual. (Score:2) Afaik that doesn't do more than waiting for multiple objects to finish. In Unix, you could simply wait for each single one to terminate without much overhead (pthread_join). MsgWaitForMultipleObjects A design mistake (of Win32) ReadFileEx/WriteFileEx man aio PulseEvent You do know how to use message passing or other forms of IPC? The event functions could be easily replaced by pipes, for example. Yes, I admit that Unix wasn't designed with multithreading in mind. In contrast, if you look at the recent standards formulated by POSIX and implemented by many vendors, you will notice that developing your application will not be limited by the API. In practice, being used to work with Microsoft "solutions" becomes a limiting factor. Clueless about NT Operating System as usual. (Score:2) WaitForMultipleObjects MsgWaitForMultipleObjects ReadFileEx/WriteFileEx (async i/o) PulseEvent (some of the event stuff is really cool) and then come back and feel embarrassed for being an ignorant Linux would be all your life. The applications may or may not be poor in your opinion. However the OS is fantastic. Some subsections of it are problematic (I don't like the registry as a device for instance, and it's support for multiple consoles is poor, and networked GUI), however the core of the OS is amazingly well thoughtout and designed by experienced software engineers. Cheers Re:100kb Microkernel? (Score:2) That said; "She's a witch - throw her in the river, if she floats she's a witch, if she drows, she's not! Well, Ducks float... So? So do other things... wood So, witches are made of wood?" - A summation of a Python sketch. Proving that 2+2 doesn't always equal 4. On this logic, we could say (using simple chaining methodology...) that if a: In order to know something, you must experience it (Win kernel, big), otherwise, no matter how valid the source, it is only assumed/presumed. Therefore, people are just assuming that NT has a hideous, huge kernel - when in fact it may be gorgeous and petite, with the "bloat" being caused by all the other stuff... Long winded I know, but I'm simple... Mong. * Paul Madley Re:100kb Microkernel? (Score:2) moitz: i used to be somebody ReactOS (was: Re:WinNT API != Win32 API) (Score:2) There is also a GPL'ed implementation of that microkernel: its name is ReactOS. It is planned a Win32 server on top of it and probably a POSIX+ one in the future. This project borrows some code from Wine [winehq.com]. You can download the pre-alpha code (no GUI yet!) from [reactos.com].
https://slashdot.org/story/99/07/08/1224215/petros---nt-alternative
CC-MAIN-2018-13
refinedweb
6,300
74.49
I have a dictionary I wish to turn into a list containing lists so that I can write into a csv, but what ever I do, it doesn't work. I used sorted(dllist.items()) to sort them as [(key1, value1), (key2, value2), ... ,(keyN, valueN)] dict = [('aaa', [5787, 40, 1161, 1222]), ('aab', [6103, 69, 810, 907]), ('aac', [3081, 41, 559, 638]), ('aae', [1011000, 191, 411, 430])] list = [(aaa, 5787, 40, 1161, 1222), ('aab',6103, 69, 810, 907), ('aac', 3081, 41, 559, 638), ('aae', 1011000, 191, 411, 430)] Given my assumption about what your dictionary looks like, this should do the conversion you want and write the output to a CSV file: import csv d = { 'aaa': [5787, 40, 1161, 1222], 'aab': [6103, 69, 810, 907] } rows = [[k] + v for k, v in sorted(d.items())] with open("out.csv", "w") as out: writer = csv.writer(out) for row in rows: writer.writerow(row) # out.csv: # aaa,5787,40,1161,1222 # aab,6103,69,810,907
https://codedump.io/share/wBWBTvGYADIu/1/dictionary-to-list-with-lists-inside-and-print-to-csv
CC-MAIN-2017-04
refinedweb
164
87.15
Encaps. The namespace declaration URI as a constant. The value is,. The XML Namespace URI as a constant. The value is as defined in the "Namespaces in XML" * recommendation. This is the Namespace URI that is automatically mapped to the "xml" prefix. Create a new Namespace support object. Declare a Namespace prefix. All prefixes must be declared before they are referenced. For example, a SAX driver (parser) would scan an element's attributes in two passes: first for namespace declarations, then a second pass using you must not declare a prefix after you've pushed and popped another Namespace context, or treated the declarations phase as complete by processing a prefixed name.. Return an enumeration of all prefixes declared in this context. The empty (default) prefix will be included in this enumeration; note that this behaviour differs from that of getPrefix(String) and getPrefixes(). getPrefixes() getURI(String) Return one of the prefixes mapped to a Namespace "". Return an enumeration of all prefixes for a given URI whose declarations are active in the current context. This includes declarations from parent contexts that have not been overridden. "".DeclaredPrefixes() getURI(String) Look up a prefix and get the currently-mapped Namespace URI. This method looks up the prefix in the current context. Use the empty string ("") for the default Namespace. getPrefix(String) getPrefixes() Returns true if namespace declaration attributes are placed into a namespace. This behavior is not the default. Revert to the previous Namespace context.() Process a raw XML qualified name, after all declarations in the current context have been handled by. Start a new Namespace context. The new context will automatically inherit the declarations of its parent context, but it will also keep track of which declarations were made within this context.() Reset this Namespace support object for reuse. It is necessary to invoke this method before reusing the Namespace support object for a new session. If namespace declaration URIs are to be supported, that flag must also be set to a non-default value. setNamespaceDeclUris(boolean) Controls whether namespace declaration attributes are placed into the NSDECL namespace by processName(). This may only be changed before any contexts have been pushed.
http://developer.android.com/reference/org/xml/sax/helpers/NamespaceSupport.html
CC-MAIN-2015-32
refinedweb
360
50.33
Introduction Here I will explain how to get title & meta description of url in asp.net using C# and VB.NET or get page title and description from page url in asp.net using HTMLAgilityPack in C# and VB.NET. Description In previous articles I explained jQuery Get Current Page url & title, Get url parameter values using jQuery, jQuery Dropdown menu with CSS, Asp.net Interview questions, Send values from one page to another page using QueryString, Highlight Gridview records based on search and many articles relating to Gridview, SQL, jQuery,asp.net, C#,VB.NET. Now I will explain how to get title & description of url in asp.net using C# and VB.NET. To get title & description of url we need to write the following code in aspx page Now in code behind add the following namespaces C# Code Here I added new HtmlAgilityPack namespace by using this reference we can parse out HTML pages you can get this dll reference from attached sample folder or from this url Now add below code in code behind VB.NET Code Demo Download Sample Code Attached 6 comments : d System.IO.FileNotFoundException: Could not find 1. file 'C:\WINDOWS\system32\'. and 2. System.Net.WebException: The remote name could not be resolved: How to Resolve these errors.... Thanks for this HtmlAgilityPack; namespace which files dll need to be added. i downloaded from codeplex it has so many folder are available. In which dll i need to include.. Thanks, Ramanathan.N Hii Sir. Suresh I am Dhiraj a Student of class XI.I want to make my own website in .net and i found your website as the best website on Internet for learners. Sir, Can you post a new article, in that "a user can post his questions and at bottom a button to comment on that considering the user is logged in or not if not then login using fb." This was a really great contest and hopefully I can attend the next one. It was alot of fun and I really enjoyed myself.. buy real fb likes google not allowimg meta tags any more ! how to get description now ??
http://www.aspdotnet-suresh.com/2013/05/get-title-and-meta-description-of-url.html
CC-MAIN-2018-13
refinedweb
361
73.98
Dear community, once again I’d like to start by offering my thanks to everyone for their patience through an unclear situation. We’re grateful for the dedication and commitment many of you have shown through all this and it once again serves as another reminder of just how awesome our community is. Like everyone else who’s excited about mobile development, and in particular using Unity to create awesome games for the iPhone, we’ve been following the development of the iOS 4.0 Terms of Service closely.. And all along we’ve continued to invest heavily in our Unity iPhone line, including a number of new features that will be coming soon in the Unity 3.0 release. But as soon as the new terms of service were revealed we also started working on a contingency plan, just in case Apple decides to stop approving Unity-based games. Allow me to explain that contingency plan so everyone out there knows what “plan B” looks like. As you probably know, Unity is mostly written in optimized C++ with assembly optimizations and Objective-C wrappers thrown in for good measure. Game logic is written by the developer, using C# and JavaScript, both of which are running on top of .NET. The beauty of this scheme is that we’ve been able to sidestep the old scripting-versus-native question as .NET provides for very rapid development (and almost near-instant compilation), while at the same time generating highly optimized code. And on the iPhone we actually ahead-of-time compile .NET to completely static machine code for speed and conformance with the old iOS Terms of Service. Also, on the iPhone it’s easy to drop into Objective-C code to access fresh APIs like the Game Center, Core Motion, etc. This is truly a case of best of both worlds. Since Unity’s .NET support may conflict with the new terms of service, we are working on a solution where entire games can be created without any .NET code. In this proposed scenario all the scripting APIs will be exposed to and can be manipulated from C++. This is of course not ideal as there are thousands of code examples, snippets, and extensions created by the community can no longer be copied into your project, .NET assemblies can’t simply be dropped in, and C++ is more complex than JavaScript or even C#. But honestly, it’s not as bad as one might imagine. One still has the full benefit of the asset pipeline, the shader language, an array of tools and of course the engine and its optimizations. We are also working on maintaining the elegant workflows of the JavaScript and C# in Unity: “scripts” will still be able to be edited live, variables will still be shown in the inspector, and a number of other sweet features that one doesn’t usually associate with C++ development. Essentially we are creating a .NET based C++ compiler that will allow us to write purely managed C++ code in the Web Player and other Platforms. On iOS C++ code will be compiled by Apple’s Xcode tools. This indeed is a very powerful combination. In the Unity Editor, you have fast compilation times and a completely sandboxed environment. On the device you have native C++ performance and low memory overhead. This combines the key strength of scripting languages and C++ code. When you combine those with the fact that when it comes to straightforward game logic, C++ really isn’t as complex as it’s often made out to be (and as it can be) hopefully you can see that life won’t be so bad after all. To help demonstrate my point, let’s look at a few different examples. Here is a simple JavaScript function to rotate an object around the world origin: And now here is that same bit of code written in C++: As you can see the code required isn’t all that different in simple case scenarios, but what about a more complex example? Here is a bit of JavaScript that peeks accelerometer and looks for user touch input on an iOS device, then uses that to fly a craft and fire a missile in the game: And again, here is that same set of code in C++: Again, the code doesn’t get that much more complicated just by writing it in C++ versus JavaScript, and the difference is even smaller compared to C#. We continue to be excited about the iPhone, iPod touch and iPad as platform targets for Unity developers. While we don’t think C++ is the best language to write game code, using C++ as a scripting language has memory and performance advantages on low-end devices. This is a great feature to have for developers who want to squeeze the last ounce of memory & performance out of their games. We still can’t believe Apple will force developers into choosing a specific language for development. And as mentioned, Apple is still approving every Unity-based game we know of. In case the situation changes, rest assured that we are working this Plan B. We’ll be ready to talk more about this as well as share some time-line information with you soon, while of course waiting to find out if any of this will actually be necessary. Benjamin Fedder Jensen9月 24, 2010 12:20 pm I’d love to see C++ as a language to interface with the engine. Albeit I do C#, Java and Python often, I always fall back to C++ and implement it there albeit harder. I just don’t like Garbage Collectors, although they are meant to make memory management cleaner; I’m very tidy with my memory, and it haven’t been a problem, even in old C. But then again, I’m weird, and a control freak, and I don’t like no GC touching my allocations :| Not that I never make memory leaks. But you learn to look out for them. irshad khan9月 20, 2010 12:32 am Nice source code & easily create game thnkx for Untiy Vivek9月 15, 2010 11:25 pm if Google could go as big as it did on opensource then why cant THE INTERIM i-of-Apple take the middle ground and be known the i-Position. Unity u rock. U too Apple. Let everyone who knows C++ or C# or JavaScript or Java rock too. Let GREAT GAMES BE MADE. The “i” of Apple and the “U” of Unity makes the u-n-i. Lets build on the u-n-i architecture. Make growing in the hearts more than growing in the pockets a reality. the iReality. Terry9月 9, 2010 6:52 pm Please let us know what the details of Apple’s decision to drop restrictions on what programming tools developers can use to create iOS apps mean for Unity: David8月 21, 2010 5:00 am I think this shows some poor understanding of software architecture. You dont expose lower level code to solve higher level problem – very simple basic sw architecture fundamental. People here are obviously indie/non-professional developers. Having working in the software industry for some 25 years as a software engineer on games, simulators, plc’s, robotics and many other platforms I have see how bad C++ exposure can go. It is the wrong solution for high level abstraction. To those who think no great games use scripting, my god you are so wrong. WOW.. for a single massive example is almost entirely built with Lua. But almost _all_ AAA games titles have some form of HLL driving the game systems. From Naught Dog’s LISP.. to Unreals script, and Crysis’s script. You think exposing C++ is a good idea, but in all my life I have NEVER seen it used wisely. There is a whole argument about C++ and OO that I really dont want to get into, but the facts are out there (google the C++ OO failures – there are some fantastic ones, from Netscape to IBM. And very little long term success). For those that talk about compiling and building .. you are not talking about C++ but ANSI C. THAT is the true great cross platform toolset. C++ has a myriad of variants, and is a nightmare on many platforms to cross compile (unless you are lucky and have multiple gcc targets). I would heavily recommend not going down this path. Choose _any_ other scripting system, any, there are hundred’s of great ones out there. Your product will suffer with this sort of change. Joe7月 29, 2010 12:10 pm I’d just like to add to this what I imagine a lot of people have already said: Regardless of whether or not the new TOS require it, I think allowing C++ Unity development should happen. Hell, I’d even like to see the Unity Engine in SDK form as well. Lets face it, a proper AAA game is fairly unlikely to be made using C# scripts, unless the game logic is fairly light. Bach7月 26, 2010 8:50 am Sounds like an excellent solution. When should we be expecting this alternative? And most importantly, has anyone released a game yet after the TOS have taken place? did it get accepted/rejected? are there any cases been heard off yet? Nikko7月 18, 2010 8:46 am I’m programming for +24 years and from my experience C/C++ is a programming language that will last forever, is owned by no corporate and thus is an EXCELLENT choice. Some code I have programed in the early 90’s are still compiling fine today (compression for example) .NET is a Microsoft owned technology and who knows if they will drop it or not. Ask to anyone who worked with MFC, or Visual Basic how they feel now that Microsoft simply dropped these technology after millions of people learned and used them, or invested in expensive certificates… now these are not supported by Microsoft. Better use C++ for everything, it is the most solid and lasting programming language ever. Finn7月 18, 2010 3:48 am Why is Apple being such a jerk about this? What possible advantage do they get forcing us to use C++? We are still dependent on Unity as a third party solution either way, it just means my games will be developed slower. Grr. TimViana7月 14, 2010 2:47 pm So, will it stop decompilers? Matthias7月 9, 2010 10:32 am Being able to use C++ to create something which runs in the Unity web player would exactly be the thing I’m looking for :) Joachim7月 8, 2010 2:48 pm @Superwaugi: “we have to code a game only for the Iphoneplattform and code it again for other plattforms. One big advantage so far of Unity is, that you can use most of the code for different plattforms.” We are planning to have a managed C++ compiler. This has two effects: 1. You can deploy the same code on all platforms Unity supports, including the web player 2. You have quick iteration times and a full sandbox RocketBunny7月 8, 2010 6:28 am Wouldn’t it be great if Android phones were a bigger market than Apple… can only dream of that day. I love my Apple products but damn, I hate being held at gunpoint like this with their TOS. I love C# and I love Unity. I’m going to pray to the Gods that Apple sees the light in favor of Unity. Superwaugi7月 7, 2010 11:47 am It`s good to know, that there is a Plan B. Beeing an artist myself it would not be easy to learn C++, because Java is much easier for someone coming from the modelling side. Anyway: If we have to code for Iphone in C++. we have to code a game only for the Iphoneplattform and code it again for other plattforms. One big advantage so far of Unity is, that you can use most of the code for different plattforms. Also it wouldn´t be great to recode our works in progress…. But again: thank you for having a Plan B. All the best to you! Ashkan7月 7, 2010 11:02 am @people who they don’t know UT unity technologies is the most honest company that you can find in this industry. they did not and will not change it because this is their key to success so stop saying: i think unity … apple … i can swear that if you don’t know so David don’t know too and when he know, you will know after few hours. i think UT will improve AOT compiler and mono touch guys are working on it too so it will become better. as i said before C++ surely is a good option for many different reasons from academy to our hurts but there are many other important features to add. if UT was 1000 people then i would say that it’s a most have but now it’s just nice to have. David Helgason7月 7, 2010 7:53 am @Ryan: we will likely only do this if forced to, and we will only ever disable JavaScript if Apple absolutely forces us to. This isn’t sure (or even likely), but we are not in full control of this ourselves. We certainly won’t go and disable great features for fun. @proponents of C++: no promises as this is a pretty big chunk of work, but we do hear that there’s more people than we thought who like C++ (but then this comment thread isn’t going to be representative, and we’re obviously not going to use it as some kind of public vote). Thus, no need to add “+1” comments. @everyone: there’s been some speculation that we have some information that we’re not sharing. All I can say is that that’s far from the case, and we’ve promised to share any updates we get as soon as we possess them. Ryan N.7月 7, 2010 7:44 am Javascript is my 4th. Lol, ok I’m stopping now :) Ryan N.7月 7, 2010 7:43 am Pardon my English. It’s not my first language. My 3rd actually. Ryan N.7月 7, 2010 7:39 am Wait wait wait… just to be clear, Javascript will be dropped? Think of the children!!! The reason I got Unity Pro/iPhone Pro is because of the ease of scripting. That was the selling point for me. I’m an artist who just started to taste a little of the programming world. C++ looks alien to me. :( Is it just me or the lines written in C++ is more than the js example? How is that good when this easily increments quickly as you put in more codes? I hope C++ is just an added option. Not a requirement. Hate to see Unity to be only for 3l337 users. I love the way the workflow is right now. Should it be C++ only, I guess artists will look elsewhere. I’d hate for that to happen. Hopefully it doesn’t become like Apple. The “Hey, you can only use XCODE for this!” attitude. Worst case: I’ll just drop Unity iPhone and just focus on making desktop games as long as I can make develop easily and faster with javascripts. I’m happy for you C++ enthusiasts. Worried and concerned for fellow artists/js users like me though. Koblavi7月 6, 2010 11:15 pm I dunno about y’all but I think this thread is quickly turning from “David: Oh Folks I’m sorry you may have to start using C++” to “Unity Users: Wow awesome we want to start using C++ roll it out!!” @David & @Joachim… are u surprised?? and that makes the 100th comment!! Joe7月 6, 2010 4:17 pm Will you be open sourcing this .NET C++ compiler? I know many, MANY, people would be interested in this. There are some which are Windows only, C++/CLI, and some that are cross platform but incomplete, lcc. months7月 6, 2010 3:44 pm Multi threading is a lot easier in C# then C++. Especially in the future once, we get simple syntax for delightful loops. Just spin your code off into a new thread and use C# it will end up faster. If you don’t like OOP then by all means continue to use C++. mimiboss don’t know what kind of math your doing but C# does not support vectorization well. Although mono does add some extensions. Sven Myhre7月 6, 2010 11:15 am Hm… quite a few typos in my last post. Sorry about that : ( Sven Myhre7月 6, 2010 11:12 am I think there is no question that C# and JavaScript are extremely efficient for doing the typical trigger and control scripts needed for respond to actions. BUT, if you try to use Unity for more heavy-weight changes – like implementing a new terrain system. The current bitmap based terrain system is very good for plain vanilla shooters. But let’s say you wanted to have deformable terrain – maybe using voxels or something. Yes, IT IS possible to write code in C++ that you can call for generating the mesh … BUT … for these kind of changes, you really need keep and access custom C++ specific datatypes – and use them in all kinds of scripts. So, you really need your camera, your AI, your path-finding, your collision-system – basically EVERYTHING – to access data types only available in C++. OR – if I look at my own railroad project (which has been written in C# – but I would have saved months writing it in C++, and the code would have been clearner), where I have a terrain system that covers the entire US. The basic terrain node is a hex and each hex define a terrain based upon curved surfaces (bezier-patches). (and yes, it runs great on an iPad/iPhone) This is a major system component that needs to be accessed by everything from the input-touch-tracking to cameras, to collision, etc. If you for instance look at a hex node – it has 6 directions – but instead of having an array of 6 directions and loop over them – in C# I need to handle each direction separately and duplicate code – because if I wanted to keep everything in arrays (points, control points, normals, directions, fdds, etc) it would cause each hex node to perform a dozen array allocations – whereas C++ can keep these as part of the type definition and be a part for the core hex node structure. And when you have millions of these hex node in memory, you really need to resort to extreme compression and custom datatypes. Again, this is just one tiny example – but my point is that if you want to fundamentally change a “default” engine component in Unity – while keeping the excellent editor, the asset pipeline, the prefabs, the webplayer and lots of target platforms in addition to all the other cool stuff – the only language that really is capable of doing that, is C++. I can see that my case is not very typical, and you could even argue that Grome or some other low-level C++ engine/renderer would be a better fit for a project like mine – but I really LOVE Unity and environment, but Unity is so much more than just a renderer, and for me it was instant love the second I got Unity 1.6 a few years ago. AND … I guess the fact that I have managed to complete a project with such fundamental components being fitted, and keeping everything C# is a testament to that we really don’t need C++ – but I would be completed the project many months ago in C++, the codebase would been 50% smaller, I would have been able to drop many layers of caching, I could have had even more trains and details in C++. So, please – don’t disregard the importance of C++ if you really want to extend Unity into ways your really never thought possible : ) Kevin7月 6, 2010 6:35 am mimiboss, That’s a curious result, but I believe you. It does make me wonder what the fundamental difference really is. When doing ahead-of-time compilation, mathematical expressions should result in roughly the same code. Perhaps GCC is just plain capable of optimizations that mono isn’t? I’d love to see someone do an in-depth analysis of what the issue is. Phil7月 6, 2010 6:19 am Having read the news and comments it gets clear to me that Apple granted Unity guys a period of grace to implement a solution based on C++ and once implemented, the period will be over and C++ will turn out to be the only way to make it for the Apple Store. The more scripting languages you support, more difficult will be to mantain core code. Besides, imho, C# made Unity stand out in comparison to other solutions like Torque and Shiva3D. Tonio Loewald7月 6, 2010 6:16 am Seems to me based on your examples that a JavaScript or C# to C++ compiler wouldn’t be out of the question ;-) I don’t think plan B actually bypasses the TOS though. You’d need to either (a) not do any of the Cocoa setup for us or (b) open source your runtime. AngryAnt7月 6, 2010 2:23 am Personally I would like to see the cost/benefit analysis that convinced so many of the above commenters that C++ game development > C# game development – given that the scenario is game logic and not core development. rfdowden7月 5, 2010 11:08 pm While the idea of more choice seems like a good idea at first, I’m not so sure I would push for ubiquitous C++ scripting support. While I applaud the company for providing a Plan B in case Apple brings the hammer down, I prefer this option only be a Plan B. More choice brings complexity and confusion. There is a balance that must be struck and I would say Unity has already struck that balance. By introducing further scripting choices they will spread themselves thinner and either the quality will go down, competetivess will be reduced because time between updates will increase or the price will have to go up so they can hire more people, etc. One of the biggest problems I forsee is the further dilution of scripting examples and documentation. While this would become a necessary evil if Apple forces their hand, I see it as a terrible blow to the ease-of-use of Unity in general. As an iOS developer I see the need to placate Apple (wish it weren’t so) but I don’t want to lose the speed of developtment and simplicity that Unity currently has achieved. If one needs C++, then use plugins. There is no need to script everything in C++ especially if it is managed. The future will be managed code for the vast majority of programming anyway. Low level programming like the Unity engine itself makes sense in C++ but game mechanics and behavior scripting does not require a BS in computer science and we shouldn’t move in that direction. I want to be able to hire and use less skilled workers to do simple scripting, not require everyone feel comfortable with C++ if they don’t have to be. (And if lots of code examples become only available in C++… well then everyone needs to understand it.) As it is now, Unity is ahead of its time in allowing simple game programming tasks be done with high level scripting languages. C++, while a powerful and useful language in its own right, is not best suited for the future of simpler programming tasks. All in all… I’m just saying, be careful what you wish for. Laurent7月 5, 2010 6:31 pm C++ support for gameplay scripting is definitely a good feature. mimiboss7月 5, 2010 12:28 pm @Kevin “… but I don’t yet buy the performance argument.” I had to rewrite some math for “Crash Course” title. Typically c++ vs c# (using Unity Math) speed increase is 200% to 670%. For example cross product is 2x faster, but lets try to use the same c++ code compiled in c#. Nintari7月 5, 2010 11:13 am If C++ is not required, I’d rather see the Unity team put their efforts in other areas than spend the time and resources adding and supporting C++ as an option. Sure it might be nice to have it, but if it’s that big of a hassle, there’s a laundry list of other features I’d put before it. Just my $0.02. Kevin7月 5, 2010 8:11 am Sven, are you quite sure C# doesn’t support stack allocations and arrays of value types? I’m pretty certain it does. Kevin7月 5, 2010 8:09 am My feelings are mixed on this. On one hand, as a long time C++ developer, I’m quite comfortable with it. On the other hand, C# is just easier and elegant for many things. I think Unity is doing the right thing by adding it as protection for both them and us. We really need to appreciate how awesome that is that they can pull this off. Still, what exactly are the scenarios where C++ will outperform AOC .NET overall? I think C++ is great to hedge on Apple, but I don’t yet buy the performance argument. Pete7月 5, 2010 3:11 am Thanks guys for the update. It would be a pity if C# couldn’t be used any longer. I haven’t used C++ for ages now, and I don’t want to switch back to it. If I had to, maybe I would, but I prefer not to. I’m pretty comfortable with C#. Rischkong7月 5, 2010 1:21 am Perfect c++ for Unity. Is there a Release Date for Plan b? ;). Thank you! stew7月 4, 2010 11:02 pm managed C++ / CLI is a dead language as far as microsoft is concerned. Ashkan7月 4, 2010 7:42 am @sven myhre you can use C++ libraries to do your calculations in C++ and just call them and get the results in unity to display them. @joachim you can implement unity’s C++ support in a way that people jcan use unity as a library that can load scenes that made by unity editor and without editor support for things like public variables inside editor inspector and … it’s ok for C++ programmers and their habits. i mean, most IDEs like unity are for scripting languages. engine’s that use C++ don’t have these kind of IDEs. take a look at source engine. i think scripting languages like are the best option for game logic scripting but C++ is good for critical code wich is possible now in unity so let’s use currently available languages. if it was possible to use .net with unity in windows, it might be possible to use managed C++ with unity and have it all work out of the box :) Charles Tam7月 4, 2010 5:54 am Hi, thank for your Plan B, I think we’ll need it given Apple’s authority on the iOS platform. ZJP7月 4, 2010 4:45 am PS : @David Helgason CEO and Coder?!. I like this. I wish you much success as Bill Gates ;) ZJP7月 4, 2010 4:38 am “…I don’t know C++ but using Unity has given me confidence that I/you who use it and get these amazing ideas in our brains onto a computer screen that pull users down ….” +1 Go! go! go! for Plan “B”. ;) C++ for ALL platforms. Bluster7月 4, 2010 1:44 am Unity is training the next gen of game programmers by proxy. C++ has a million+ libraries on the web. Hex grid generators, University/guvmnt/miltary researched AI, graphics and rendering libraries, sound, the frakkin Universe mapping. the human genome, dynamics simulations, megasim frameworks blibbatty blabbitty blue.. I don’t know C++ but using Unity has given me confidence that I/you who use it and get these amazing ideas in our brains onto a computer screen that pull users down our tangents of thought using artworks and scripts can use the same logics they used deducing their first masked raycasts (goldanged bitshifting binary syntactical bltherfrest if ya ask me:)) to produce quality C++ code with the requisite guidance from the minds at Unity (documentation, documentation, documentation), who no doubt struggled and triumphed through the gates of learning to simulate via code and graphical representations of the results of that code just the same as everybody here writing comments to one degree or another. Beyond the extensive libraries and research presented as C++ code out there, this is a valuable “stay-employed” skillset alongside the knowledge of 3D meshing, rendering, rigging, dynamics and all the other disciplines that are necessary to compete effectively under the moniker of game developer/studio/team. And back to the original premise.Unity training the next gen of game programmers. All of these folks will not end up as a CEO/CTO of their own gaming studio but conversely will be working for others. To give them the best foot forward would be to add this to the “curriculum” for all. The pros at it will surely assist on the forums and soon the whole community will be kicking butt with a size 16 cowboy boot in that arena. Gregory Pierce7月 3, 2010 6:39 pm As I suspected, most people would actually like to use C++ regardless of Apple’s TOS issues. Many professional game developers are already writing their code in C/C++ so this is almost a welcome change regardless of the actual impact of Apple’s move. Godstroke7月 3, 2010 12:04 pm Great news! I’d prefer c++ for all platforms too. This should not only be a workaround. C++ should be the standard main coding language of unity3d. Sven Myhre7月 3, 2010 11:51 am Brilliant! For simulations, like the train simulator I am working on now, I really miss the efficiency of C++. In such simulations with hundreds and potentially thousands of trains running in realtime across a nation, large datasets needs to be simulated even if not visible on screen. More than anything, in C#, I miss array definitions being part of the data type, like in C++ – so one can allocate arrays of structs in one single alloc, use dozens and even hundred of small temporary arrays in quick methods and even put them on stack with no memory alloc – or – being able bake them into larger structs. Being able to use C++ would also make it easier to reuse the same logic on game-servers, where C# and JavaScript is often not an option. Even though 3.0 is a fantastic piece of work – I would trade everything new in 3.0 for being able to use C++ for scripting :) I really hope this would make it into Unity – it is almost so that I hope for Apple to ban .Net, so this project can take first priority within Unity :) mift7月 3, 2010 10:52 am “In practice this means the workflow of exposing properties in scripts will work perfectly and effortless in C++ too. Awesome.” Well, thats just cool. Though, reflection wouldnt work, am I wrong? Not that I need it, just a question :) . What other .NET features, apart from the garbage collector, would be missing? I mean, .NET has alot of handy classes which would be missing then, right? Tony7月 3, 2010 10:51 am I don’t think they will ever enforce it – unless they see a product that they do not want on their platform (flash anyone?). With flash withdrawing, I think that Apple will just sit tight with the gun in their pocket. Bryzi7月 3, 2010 10:50 am @stew: I agree and I’d quite like to see unmanaged C++ available for advanced developers anyway, even if apple don’t require it. @Sean: Surely the flash packager was written in C+ or objective C. Its the flash files that it plays that are not. This is similar to unity in that its core will be native and it plays the unity data files. Personally i think Apple are nuts and should have simply come out and say “Flash is banned in iPhone”. An alternative rule would have been for them to simply require that applications reach a set level of performance and responsiveness. Most flash apps would probably fail that anyway and if a unity app failed that then make it run faster!! Console games must pass a similar rule or Sony/MS will throw them out. nikemerlino7月 3, 2010 8:41 am GOOOOOOO with Plan BBBBBBBBBB !!!!!!! cmonkey7月 3, 2010 4:15 am Dear David and Unity, Thank you so much for sharing “Plan B”! I think its an excellent plan! I hope that Unity will proceed with implementing “Plan B” regardless of Unity Apps currently being approved because it would guarantee compliance with the iPhone TOS, and also because of the memory and performance advantages that you noted. I strongly suspect that Apple will get tougher with their TOS as time goes by, and their “grace period” on the TOS has ended. Thanks David and Unity! stew7月 3, 2010 3:47 am *plug in stew7月 3, 2010 3:47 am Managed c++ is not going to be faster than c#. If you read, what they say is that it will be managed c++ on web and computer and unmanaged c++ only on idevice. If you want unmanaged c++ you will still need to use a pull in Rob7月 3, 2010 3:09 am I’m really glad that i decided to dive into the world of unity last year. The current update about the Apple’s ToS and unity shows once more that UT won’t stop moving even if it means to break through heavy walls. Best thanks to the Team! Koblavi7月 3, 2010 2:59 am @David…thanks for the honesty and thinking about us. I use unity Free to design vehicle simulations (engine, transmission etc) for a driving school and some arcade games. I don’t have pro let alone the Iphone license so none of this really affects me. But I’m a huge fan of UT because and Personally I am glad you guys have already thought of plan B. However I think UT is too young to take on this Huge challenge and may loose focus of it’s vision of DEMOCRATIZING game development for the rest of us. I see a number of the pple on this thread are thrilled about c++ and some even think it should be included across all platforms. Has any of you stopped to think about the implications for UT and it Business Model?? -Re engineering unity Iphone by a staff of only 60? (not all 60 of program) -A possible increase in the price of the Iphone license to factor in new development costs. …just to mention a few. Yes sure C++ comes with all the goodies, I just don’t think UT should be investing there yet. Like Dave said it is a back up plan (which I hope they never have to implement). I just Hope Sean’s theory flies GotCakes7月 3, 2010 12:45 am @David. Okay thanks for the reply David. David Helgason7月 2, 2010 11:59 pm @GotCakes: as we’ve promised before, we’ll let you know when we know. If we’d been told that Unity would be banned, we wouldn’t pretend not to know. That’s a promise. GotCakes7月 2, 2010 11:56 pm That may well be true Sean, but when David went for talks with Apple (check further down the company blog) over whether Unity was affected, did Apple just respond with ‘well its obvious’, the lack of a publicised result from that meeting suggests it was more like ‘yes’. Sean Baggaley7月 2, 2010 11:14 pm It’s quite possible that the reason Apple haven’t given a formal reply yet is because they consider the answer to be “obvious” in some way. As far as I can see from the new ToS, Unity doesn’t actually break the rules. It uses XCode as a key part of its build process for iPhone apps and generates an XCode project file. Also, Unity *was* “originally written in Objective-C, C, C++…” as specified in the agreement, and it is also true that “only code written in C, C++, and Objective-C” is being compiled and directly linked against the Documented APIs. Which is what the agreement requires. What’s running on the iPhone is *always* Unity. It’s the database bundled with the app that changes, not Unity itself. Code is data. But it’s that use of XCode in the Unity for iPhone tool-chain which is the clincher: Adobe’s Flash is primarily a web browser plugin. How are users supposed to install such things on an iPhone? There’s no mechanism for installing plugins. Apple would have to build one, and add an irritating “Uninstall Plugins” app too, since plugins wouldn’t appear on any of the user’s home pages. What if Apple bundled it into Safari? That would open the floodgates to all sorts of other web plugins. Flash doesn’t support WMV, for example. Would Apple also have to bundle Flip4Mac’s plugin too? Where do they draw the line? Whoever Apple slammed the door on would turn around and sue. Better to just ignore all third-party plugins and avoid all that expense. It’s not as if Flash makes Apple any money anyway; it’s just a nice-to-have, not an essential feature. (And, of course, the fact that there wasn’t a proper, full-fat version of Flash available for the iPhone-like devices until *very* recently also helps in that decision.) Adobe’s cancelled Flash for iPhone app builder did NOT touch XCode or the iPhone SDK directly. It must, therefore, use a statically-linked set of libraries which would require updating whenever iOS itself was updated. This would be a support nightmare for Apple, as well as the developers using this technology. And an expensive one at that. Given that the various Stores are not major revenue streams for Apple, it’s not fair to expect them to shoulder this burden while *also* making the iDevice user experience notably worse for end users. Apple only loses if they support this development path. There is no benefit for them at all. In summary: That Apple haven’t stopped accepting Unity apps seems to support my hypothesis that Unity isn’t actually affected by section 3.3.1 of the new agreement. (I’m not a lawyer, etc.) Higor Bimonti7月 2, 2010 9:50 pm Does Apple have a way to know what language it was developed? They decompile the thing just to check it? I think C++ support would interesting, BUT, would need special attention on documentation from Unity, just to make the migration easier to us. Psychoz Interactive CEO7月 2, 2010 8:34 pm Unity, Should continue to implement this Plan B. Cause we can maybe be fine now, but what make you think that Apple will never add more ToS restrictions? To be honest, even if Unity app are now being accepted on the App Store, theses apps are not ToS compliant. Unity should continue to implement C++ support, no matter what, there’s a lot of reasons why: – Avoid any conflict with new ToS in the future. Or simply avoid any future rejection from Apple. – More speed, less memory eat. I think that plan B is not an option, it is a requirement and a very safe move made by Unity to stay on track! :P davidb7月 2, 2010 8:19 pm Actually I would love to have the option to write all my game logic in c++ across the board regardless of weather Apple require it or not. I am more interested in using the web player atm but I do want to push out onto the iPhone and Android at some point. danien7月 2, 2010 7:22 pm Thanks for the update. While we are still waiting for an official yes/no regarding Apple’s ToS for Unity, this is the next most important piece of information we needed in order to plan for our product and to decide whether we would continue our investment in using Unity. mindlube7月 2, 2010 6:59 pm David, thanks for the update & good to know that at least C++ is an option as plan “B”. Meanwhile my Chickie Dominos 1.4 was approved and in the app store just last week, and runs fine on iOS4 :) GotCakes7月 2, 2010 6:55 pm I’m still not entirely taken with this, I thought there was a meeting between yourself (David) and ‘Apple’ quite literally months ago, and yet this article still implies that you do not know if you may or may not conflict with Apples iOS4 TOS. Surely it’s a yes or no? I still don’t think we’ll be committing any time on IPhone projects in the near future with Unity now to be honest. TimViana7月 2, 2010 6:34 pm @tau I know how to program C++ ( I read several books about it, including “Effective and more effective C++” ), just because of that I know several people using Unity will have several problems to create code to that language, you know, there are lots of developer that aren’t PROgrammers… I hope it doesn’t mean a stop on Android support development, after Android support release I hope Apple will be afraid with all new boom of Android quality games, and, as consequence, loss of mobile market share, wich will made Apple review their ToS or continue with their philosophy of high quality and small market share…. drichner7月 2, 2010 6:21 pm I think it would be a great addition in any case. I would totally use it… LJC7月 2, 2010 6:06 pm @Robert Brackenridge: I have to agree, as well. I love the fact that UT have an excellent “Plan B” ready, but I still see it as a “Plan B”. I can only imagine the amount of work it would be to fully implement this; work taken away from other, more important features. Plus, having to support four languages instead of three, and having to write all-new documentation for a new language. We, already, have issues with trying to find a tutorial or example for something in the “right” language (i.e. “I found it in Js, but I need it in C#”). Adding yet another would just add a whole new layer of separation. I look at everything being added in 3.0, and I am amazed. I, then, imagine taking any of that away just to add C++, and I just don’t think the trade-off would be worth it. DallonF7月 2, 2010 5:46 pm Heck, I’d be interested to see C++ as an option for regular Unity… though I probably wouldn’t use it. Robert Brackenridge7月 2, 2010 5:28 pm I’m with some of the other comments. I would hate to see a development requirement get in the way of future feature additions… I know how taxing this could be to a small company from a support perspective. There are many areas within the existing platform which are in need of attention and detail. But, I am quite happy that your team has taken the time to lay out a contingency. tau7月 2, 2010 4:19 pm @TimViana – it’s time for you to go and learn some C++ basic memory management and STL usage ;) You cannot compare buggy C++ code to C+/Java implementation… mimiboss7月 2, 2010 4:08 pm Another vote for native c++ support! 30fps on iPod 2nd gen. (high details) and unstable 20-25fps on iPhone 3G (medium details) … Jesse Dorsey7月 2, 2010 3:08 pm I agree with Tony, this would make it far easier for the rest of my programming team to adopt Unity. Tony7月 2, 2010 3:03 pm I have to give another vote for putting this into Unity 3.0 even if it’s not technically needed. I would love to be able to code Unity games in C++. tino7月 2, 2010 2:56 pm This is poo. Apple should be sued for monopoly and fascist behavior towards their developers. All developer should be free to chose tools they want. Will they in the future demand certain image manipulation software over another? Will you update your tutorials and scripting reference manual from jscript to c++, for the new developers ? And how can apple know what scripting language you use if you compile it ahead of time and run trough xcode ? TimViana7月 2, 2010 2:51 pm Actually C++ can be memory innefficient and slower than java and c# in some cases, because it has no garbage collector and are not optimized in some ways, you have to optimize it or your program will be slow and memory consuming, an example of bad C++ code, it doesn’t destroy unused objects instances: C++: #include #include using namespace std; class Teste { public: string nome; Teste(string nome) { this->nome=nome; }; }; int main() { time_t start,end; double dif; time (&start); for (int x = 0;x < 9999; x++) { for (int i = 0;i < 999; i++) { Teste* a = new Teste("teste"); } } time (&end); dif = difftime (end,start); cout.precision(15); cout << fixed << dif <<endl; system("pause"); return 0; } *********** java class Teste { public String nome; public Teste(String nome) { this.nome = nome; } } public class Main { public static void main(String[] args) { long start = System.currentTimeMillis(); for (int x = 0; x < 9999; x++) { for (int i = 0; i < 999; i++) { Teste a = new Teste("teste"); } } double dif = System.currentTimeMillis()-start; //converte para segundos; dif=dif/1000; System.out.println(dif); } } CocoaPod7月 2, 2010 2:42 pm 1. Can this C++ implementation also support Objective-C++? 2. Can we also code/tweak in Xcode directly as an option or does it have to go through Unity’s editor to parse “C++ scripts” and generate raw C++ source for Xcode to compile? 3. Having said that coroutines are not supported at the moment, and as per question 2, is it possible to use Grand central dispatch now that iOS 4 supports it? Big Pig7月 2, 2010 2:25 pm @Joachim Without coroutines my code’s going to get really messy… it’ll be a real challenge to port to C++ this way. I guess I’ll start looking into fibers/coroutines in C++ in my spare time. gritche7月 2, 2010 2:23 pm Terrific, i like this “Plan B” ! C++ seem a perfect choice for performances and compiling from Unity sound really great. Also, from C# the move is really small ;) Bryzi7月 2, 2010 2:23 pm Just to clarify, would the C++ code be compiled for all platforms or are you planning to create some sort of interpreted C++ for windows, osx etc? One method which would take less effort would be to provide a mechanism for native C++ components to be added which expose parameters and interact with other components. This would be less flexible but very powerful. Elliott Mitchell7月 2, 2010 2:17 pm Thanks David, Formulating a “Plan B” is a great idea. I’m surprised Apple has left you hanging in the wind for so long now. I’m not a true programmer (more a hacker) who is a 3D / FX artist and game developer. I don’t know the nuances between programming languages. Thanks for providing code examples. I pray Apple will clarify it’s ToS immediately so we can all get to making games without worrying. Joachim Ante7月 2, 2010 2:12 pm “On the iPhone version then XCode will compile the “scripts” to native code. But what about the references set in the editor?” In short: It will just work. As you know C++ doesn’t support reflection natively, which makes automatic serialization of properties not a core feature of a GCC based compiler. The great thing about having a .NET based C++ compiler in the editor is that we can actually parse the C++ code while the user writes it. This means we can automatically generate additional boiler plate C++ code when making a build for the iPhone, essentially automatically generating all serialization code for the user. In practice this means the workflow of exposing properties in scripts will work perfectly and effortless in C++ too. Awesome. Joachim Ante7月 2, 2010 2:04 pm @Big Pig: Coroutines will not be supported. I am not going to say never, but it’s a significant engineering effort and definately not part of the first steps. virtual void Update () for the win :) tau7月 2, 2010 1:36 pm Is it UNMANAGED/NATIVE C++ or just a .NET implementation? I’d love to have unmanaged C++ for performance reasons and to remove bloated .NET libraries from an iPhone/Android builds – probably will save up to 8Mb. bayon7月 2, 2010 12:55 pm I Need a converter c# -> C++ lol Great news RCL7月 2, 2010 12:40 pm Great news. I preferer C++ to everything else, even if it means slower iterations. Mika7月 2, 2010 12:31 pm As someone who has been pondering using Unity instead of rolling out own engine in C++, but really does not like the restrictions and performance disadvantages of C#/etc, it would be great news to have C++ as an option. Please consider doing this, necessary or not! NPoleBomb7月 2, 2010 12:03 pm IMPRESSIVE! I love C++! it would be a very useful feature. for development of game, C/C++ is absolute choice ever! so, it’s clear now that Unity3D does avoid ToS perfectly! Nuno Afonso7月 2, 2010 11:55 am I also am for the addition of C++ to the engine. My reasons for wanting this are: – C# programming “dulls your senses”, it is not really a games industry standard, so even if you work for 10 years on Unity and do great projects, you can’t really apply to work that requires C++ knowledge; – Some times doing “simple stuff” in C# is a pain, ppl may not like working with pointers but they sure are very helpful; – Mono is NOT really optimized… Although it is getting better, it is still REALLY slow! It doesn’t do a great job optimizing (ie inlining) that .NET does, and when you have intensive engine components believe me it will show its problems. It would be great to work with C++, hopefully it would have a nice integration with a debugger! XD And it would be great if it had at least the same level of integration that you guys are adding to MonoDevelop to XCode (especially since the new version does seem much better). Looking forward for this to be added no matter what! ;-] Ugur7月 2, 2010 11:43 am I love it that you have a Plan B just in case and share it with us, it shows once again how much you guys care, thanks a lot for that. I strongly disagree with those saying they´d love to have this option of using managed C++ no matter if its needed or not though. Yeah, an additional language choice always sounds good in theory, even more so if it comes with in theory better performance as end result. But i imagine how much work it would be to pull all of this off and i´d MUCH prefer UT working on improving existing functionality and adding new functionality other than support for another language instead. I love unity but there are of course always things that could be improved and added (i have a long list of feature requests myself of which i post chunks on the various channels for that quite regularly), so yeah, i´d be really sad if instead of on a good chunk of those ressources instead get spent on managed C++ implementation. So yeah, bottomline for me: Awesome UT thinks of a Plan B, but i hope it never has to be used. Ashkan7月 2, 2010 11:22 am @David thank you so much for your honesty again! unity guys are always honest and it’s great! you are the only engine that don’t have a fake stupid comparison on your website that says unity is perfect! torque, shiva, gamestudio A7 and many others have them and show their own good features. as a student i used C++ in university and love this powerful language. i use C# in unity and web development. having a C++ API is nice but UT can use the time to develop more great features.(if they don’t have to do this). having C++ is nice because integration of some C++ class libraries like (physx softbodies) will be alot easier! i love to have this feature just like many others but i think there are more important features for professional development. take a look at networking part of unity please! not that bug free! liverol7月 2, 2010 11:15 am if i don’t want to make games on iphone,just for PC,do i need to use c++?will it be the only script language??? Luca7月 2, 2010 10:40 am I’d like to see c++ support too. Can you tell us if this feature will be implemented in Unity 3.0? mift7月 2, 2010 10:28 am What about the SendMessage functionality? does it still work? c++ doesnt have introspection, AFAIK. You say that a managed c++ compiler for mono can make c++ “scripts” possible, ok. Those scripts wont be faster because they are compiled into CLR bytecode. On the iPhone version then XCode will compile the “scripts” to native code. But what about the references set in the editor? Arto Koistinen7月 2, 2010 10:01 am If you decide to implement this even if Apple does not require it, would it be possible to mix C++ with C#/JS? That would solve some problems with the Mono memory management that have been reported before. Lars Steenhoff7月 2, 2010 9:50 am Thanks for sharing this with us! Tinus7月 2, 2010 9:28 am +1 for desiring C++ support even if not required for iOS4; but yes, I understand the extra workload is not something you guys would want. skygunner7月 2, 2010 9:00 am Thanks for the heads up! If the C++ path becomes unavoidable, if you could prepare plenty of Unity C++ code examples, documentation and tutorials available, that would really help those (me) who aren’t as comfortable with the language. :-) David Helgason7月 2, 2010 8:47 am @Big Pig: I mist admit that I got good help from Joachim Ante and others on this (but hey I used to be a coder, and there’s nothing foreign in these code example above). I can’t comment on the tricky bits like coroutines yet. Not because it’s secret, we just don’t know yet. Big Pig7月 2, 2010 8:46 am “and iff this ever becomes a part of the product” Please make it a part of the product! Just the additional 10Mb of RAM we’d gain are a sufficient reason to offer this as an option. Big Pig7月 2, 2010 8:44 am David, Thanks so much for this update. It’s rare to see a CEO dwelve into code examples :-) Just one question, that has been already asked but not answered yet: will we have Coroutines in C++? I know of some Coroutine implementations in C++ and ObjectiveC, but it’d be nice to have Unity’s support on this. Alex David Helgason7月 2, 2010 8:40 am @bronxbomber92 and others: there’s a lot of detail questions like these left to answer. We have been working on the technical bits, and iff this ever becomes a part of the product (which is not certain at this point), we’ll have to figure them all out :) bronxbomber927月 2, 2010 8:21 am If Plan B did become necessary, how would that affect the C/C++/Objective-C Plugins Support available in Unity Pro and Unity iPhone Advance? Would that just “automatically” become a feature of both iPhone Basic and iPhone Advance and remain a Unity Pro feature? Or would you guys still try to restrict linking to static or dynamic libraries (or frameworks if you prefer) for the iPhone Advance? Anyways, great to see you guys are fervently working on a backup plan! *Hopefully* the hard work is all for not (it’s rare one ever means to say, heh)! Lorenzo Cambiaghi7月 2, 2010 8:15 am Wow!!!! I regulary program in java,c#,cocoa,as3 but i love c++….for me is the best language. Jonathan Moore7月 2, 2010 7:47 am While C++ might not be the best choice for scripting Logic, if all platforms supported it as an option I think it would strengthen Unity’s stance as a great engine to learn game development on. Computer Science students looking to get into the games industry like to get as much practice in C++ as possible (including myself), but working on projects around busy class schedules might not accommodate rolling their own engine in C++. So I know from that perspective, C++ support would be great. David Helgason7月 2, 2010 7:11 am @jackpoz: Lua doesn’t solve the problem with ToS 3.3.1. If anything it’s worse (not to mention a bunch of other reasons why it’s not optimal). @MattCarr and others: C++ “scripting” is a somewhat compelling feature, but it’s a lot of work (and maintenance) so it’s not certain that we’d do it unless we have to. But we’ll see. @ImaginaryHuman: translating C# / JavaScript to C++ is not trivial, and by the way doesn’t actually solve the ToS 3.3.1 problem, since it’s prescribed that an application has to be written “originally” in certain languages. But we’re considering that too. @Daniel and drawcoder: you can already call into C/C++ libraries everywhere but in the browser. Yesterday I saw camera input and augmented reality on Unity Android, and that was based on an external library. It’s quite a straightforwards process. Paris Buttfield-Addison7月 2, 2010 7:03 am Great stuff guys – this is why we can rely on Unity as a platform to build a business off. marty7月 2, 2010 6:27 am And all of this with coroutines, right? ;-) drawcoder7月 2, 2010 6:20 am Love this and the C++ option. Script could very well be auto generated into C++. This is great though because I would like to be able to use other C/C++ libs in my Unity projects especially on mobile. Keeping scripting as C# is also great and probably a bit better than Lua setups. Mike7月 2, 2010 6:19 am Awesome! Just plain awesome. Amazing work. I would love to see what the performance difference and memory footprint reductions are. Superb work guys! Daniel Rodríguez7月 2, 2010 6:18 am Agreed, scripting in C++ sounds like a great idea anyway as it will allow code written for games in C++ to run inside Unity. Kevin Hopcraft7月 2, 2010 6:14 am Even if you do not need to keep this. Please let it be in the engine anyways. It would be pretty epic. ImaginaryHuman7月 2, 2010 6:05 am Why don’t you continue to let developers write code in UnityScript or C# and then convert the sourcecode itself to C++ automatically before compilation? This way we won’t even have to learn C++ at all, and to be honest, even though the examples seem *fairly* simple, there is still a learning curve and different/more complex syntax to get your head around. Why expose this to the developer at all when you could just translate existing code into C++? FierceForm7月 2, 2010 6:04 am I want this C++ implementation on all platforms. Then I would just be able to use C++ for everything, which I would prefer. Mark7月 2, 2010 5:58 am Agreed, if it turns out that on iOS writing your game code in C++ is significantly faster than C#, I would switch. If this R&D reaches fruition, even if not necessary to ship games on the App Store, I’d be really interested to see some benchmarks on how it compares to C#/Javascript. MattCarr7月 2, 2010 5:55 am Will this C++ implementation be developed to completion if it’s discovered it’s not needed before it’s finished. Having C++ as an option would be a nice feature (even if the majority of the Unity user base never uses it) in any case. jackpoz7月 2, 2010 5:47 am may i suggest to use Lua? Adams Immersive7月 2, 2010 5:43 am Thanks for the detailed explanation! I’m sure a LOT of work has been going into this stuff behind the scenes. While learning a new syntax and rules would be a creative stumbling block, and I really hope Apple simply continues permitting Unity games as always, having a clear backup plan is very comforting!
https://blogs.unity3d.com/jp/2010/07/02/unity-and-ios-4-0-update-iii/
CC-MAIN-2019-35
refinedweb
10,308
69.52
When learning about a new framework we often see trivial demos depicting the framework’s basic features, for example the well-known TodoMVC Application. And that’s great — I mean who doesn’t like Todo apps, right? Well today, we’re going to take a slightly different tack. We’re going to shun the generic and instead focus on one of the unique core features of the Aurelia framework: visual composition. Aurelia, the new kid on the block, has already been introduced in a previous article, along with it’s capabilities of extending HTML. By the end of this article we should get a better understanding of how composition helps to assemble complex screens out of small resuable components. To do so we’re going to create a report builder app. You can find a demo of the app here and find the full source code here. What Is Visual Composition? The basic idea of composition in computer science is to take small entities, in the case of object composition, simple objects/data types, and combine them into bigger and more complex ones. The same thing applies to function composition, where the result of one function is passed as the attribute to the next and so on. Visual composition shares this fundamental concept by allowing one to aggregate multiple distinct sub-views into a more complex view. An important thing to consider when talking about visual composition is the difference between heterogeneous and homogeneous sub-items. In order to understand this, lets look at the following figure. Comparison of visual composition types On the left side we see an example of homogeneous composition. As the name suggests, this is all about rendering items which have the same type and only varying content. This type of composition is used in most frameworks when creating repeated lists. As the example depicts, imagine a simple list of items being rendered sequentially one after another. On the right side we can see an example of heterogeneous composition. The major difference is the assembly of items which have different types and views. The example demonstrates a page consisting of several building blocks with different content and purpose. A lot of frameworks offer that functionality via router-views, where specific view-regions are placed on the screen and different route endpoints are loaded up. The obvious drawback of this method is that the application requires a router. Besides that, creating complex view compositions can still become quite a tedious task, especially if you take nested compositions into account. Aurelia on the other hand offers, in addition to the router-view, an alternative approach by exposing visual composition as a first-class feature via a custom element. That way it enforces the separation of concerns even on a visual level and thus leads the developer towards the creation of small and reusable components. The result is increased modularity and the chance to create new views out of already existing ones. Using Aurelia’s Compose Element In order to make use of visual composition within Aurelia, we can utilize the predefined compose custom element. It operates on one of Aurelia’s key conventions, the view and view-model (VM) pairs (which this article will also be referring to as a page). In short, compose allows us to include a page at any particular position inside another view. The following snippet demonstrates how to use it. At the position we’d like to include the Hello World page, we simply define the custom element and set the value of its view-model attribute to the name of the file containing the VM definition. <template> <h1>Hello World</h1> <compose view-</compose> </template> If we need to pass some additional data to the referenced module, we may use the model attribute and bind a value to it. In this case we pass on a simple object, but could also reference a property from the calling VM. Now the HelloWorld VM can define an activate method, which will get the bound model data passed as an argument. This method may even return a Promise, e.g. in order to get data from the backend, which will make the composition process wait until it’s resolved. export class HelloWorld { constructor() { } activate(modelData) { console.log(modelData); // --> { demo: 'test' } } } Besides loading the VM, the corresponding HelloWorld view will also be loaded and its contents placed into the compose element. But let’s say that we don’t want to follow that default convention of VM and view pairs. In this case we can use the additional attribute view and point it to the HTML file we’d like to use as a view. <compose view-</compose> In this case the VM will still be loaded, but instead of loading hello-world.html the composition engine will insert the contents of alternative-hello-world.html into the compose element. Now what if we need to decide dynamically which view should be used? One way we can accomplish this is to bind the view attribute to a property of the calling VM, whose value will be determined by some logic. // calling VM export class App {</compose> This is fine but might not fit each use case. What if the HelloWorld VM needs to decide itself which view it wants to show? In that case we simply let it implement a function called getViewStrategy which has to return the name of the view file as a string. An important thing to note is, that this will be called after the activate function, which allows us to use the passed on model data, to determine which view should be displayed. export class HelloWorld { constructor() { } activate(modelData) { this.model = modelData; } getViewStrategy() { if( this.model.demo === 'test' ) return 'alternative-hello-world.html'; else return 'hello-world.html'; } } Preparing the Project Setup Now that we’ve seen how the compose element does its magic, lets get a look at the report builder application. In order to kick start the development, we’ve built it upon the Skeleton Navigation App. Some parts, such as the router, have been stripped off since this application is using just a single complex view composed of other sub-views. To get started, either visit our GitHub repo, download the master branch and extract it to a folder, or clone it locally by opening a terminal and executing following command: git clone To complete the installation, please follow the steps listed under “Running The App” in the project’s README. Creating the Report View Our app’s entry point is the page app.html (located in the src folder). The VM ( app.js) is just an empty class, pre-loading Twitter Bootstrap. The view, as depicted in the snippet below, acts as the main app’s container. You’ll notice that it composes the screen out of two separate pages called toolbox and report. The first acts as our container for various draggable tools whereas the second is the sheet you place those widgets on. <template> <div class="page-host"> <h1 class="non-printable">Report Builder</h1> <div class="row"> <compose class="col-md-2 non-printable" view-</compose> <compose class="col-md-10 printable" view-</compose> </div> </div> </template> Looking at toolbox.html we see that the view is outputting a list of available widgets alongside the buttons to print or clear the report. <template> <h3>Toolbox</h3> <ul class="list-unstyled toolbox au-stagger" ref="toolboxList"> <li repeat. <i class="fa ${widget.icon}"/> ${widget.name} </li> </ul> <button click. Print</button> <button click. Clear Report</button> </template> The toolbox VM exposes those widgets by declaring an identically named property and instantiating it inside its constructor. This is done by importing the widgets from their respective locations and passing their instances — created by Aurelia’s dependency injection — to the widgets array. In addition an EventAggregator is declared and assigned to a property. We’ll get to this a bit later. import {inject} from 'aurelia-framework'; import {EventAggregator} from 'aurelia-event-aggregator'; import {Textblock} from './widgets/textblock'; import {Header} from './widgets/header'; import {Articles} from './widgets/articles'; import {Logo} from './widgets/logo'; @inject(EventAggregator, Textblock, Header, Articles, Logo); export class Toolbox { widgets; constructor(evtAgg, textBlock, header, articles, logo) { this.widgets = [ textBlock, header, articles, logo ]; this.ea = evtAgg; } ... } So what do those widgets contain? Looking at the project structure, we can find all of them inside the sub-folder src/widgets. Lets start with a simple one: the logo widget. This widget simply shows an image inside its view. The VM follows a default pattern by implementing the properties type, name and icon. We’ve seen those being used in the toolbox repeater block. // logo.html <template> <img src="images/main-logo.png" /> </template> // logo.js export class Logo { type = 'logo'; name = 'Logo'; icon = 'fa-building-o'; } Looking at the textblock widget we see an additional activate method, accepting initial model data from the composition engine // textblock.js export class Textblock { type = 'textblock'; name = 'Textblock'; icon = 'fa-font'; text = 'Lorem ipsum'; activate(model) { this.text = model; } } In order to see how that model is made available to the view, lets take a look at the report page. What we see in its view is a mix of both homogeneous and heterogeneous composition. The report, essentially an unordered list, will output any widgets added to it — this is the homogeneous part. Now each widget itself has a different display and behavior which constitutes the heterogeneous part. The compose tag passes on the initial model, as well as the name of the sub-views’ view-model. Additionally, a remove icon is drawn which can be used to remove a widget from the report sheet. <template> <ul class="list-unstyled report" ref="reportSheet"> <li repeat. <compose model.</compose> <i class="remove-widget fa fa-trash-o col-md-1 non-printable" click.</i> </li> </ul> </template> The removal is carried out by looking for the respective widget’s id and splicing it from the report.widget array. Aurelia’s repeater will take care of updating the view to actually remove the DOM-Elements. removeWidget(widget) { let idx = this.widgets.map( (obj, index) => { if( obj.id === widget.id ) return index; }).reduce( (prev, current) => { return current || prev; }); this.widgets.splice(idx, 1); } Inter-Component-Communication via Events We’ve mentioned that the toolbox has a “Clear Report” button, but how does that trigger the clearance of all the widgets added to the report page? One possibility would be to include a reference to the report VM inside the toolbox and call the method this would provide. This mechanism would however, introduce a tight coupling between these two elements, as the toolbox wouldn’t be usable without the report page. As the system grows, and more and more parts become dependent on each other, which will ultimately result in an overly-complex situation. An alternative is to use application-wide events. As shown in the figure below, the toolbox’s button would trigger a custom event, which the report would subscribe to. Upon receiving this event, it would perform the internal task of emptying the widgets list. With this approach both parts become loosely coupled, as the event might be triggered by another implementation or even another component. Events used to create the clear all feature To implement this we can use Aurelia’s EventAggregator. If you look at the toolbox.js code snippet above, you can see that the EventAggregator has already been injected into the toolbox VM. We can see it in action in the clearReport method, which simply publishes a new event with the name clearReport. clearReport() { this.ea.publish('clearReport'); } Note that we could also pass an additional payload with the data, as well as have events identified via custom types instead of strings. The report VM then subscribes to this event inside its constructor and, as requested, clears the widgets array. import {inject} from 'aurelia-framework'; import {EventAggregator} from 'aurelia-event-aggregator'; import sortable from 'sortable'; @inject(EventAggregator) export class Report { constructor(evtAgg) { this.ea = evtAgg; this.ea.subscribe('clearReport', () => { this.widgets = []; }); } ... Use External Code via Plugins So far we haven’t looked at the actual drag & drop feature, which we’re going to use to drag widgets from the toolbox onto the report sheet. Of course one could create the functionality via native HTML5 Drag and Drop, but why go reinventing the wheel when there are already a bunch of nice libraries such as Sortable out there to do the work for us. A common pattern when developing applications is thus to rely on external code bases which provide out-of-the-box features. But not only 3rd party code might be shared that way. We can do the same with our own reusable features by leveraging Aurelia’s plugin system. The idea is the same. Instead of rewriting code for each application, we create a custom Aurelia plugin, hosting the desired functionality and exporting it with simple helpers. This is not limited to pure UI components but might be used as well for shared business logic or complex features like authentication/authorization scenarios. Leverage Subtle Animations In that vein, let’s take a look at Aurelia Animator CSS, a simple animation library for Aurelia. Aurelia’s animation library is built around a simple interface which is part of the templating repository. It acts as a kind of generic interface for actual implementations. This interface is called internally by Aurelia in certain situations where built-in features work with DOM-Elements. For example, the repeater uses this to trigger animations on newly inserted/removed elements in a list. Following an opt-in approach, in order to make use of animations, it is necessary to install a concrete implementation (such as the CSS-Animator) which does its magic by declaring CSS3 animations inside your stylesheet. In order to install it we can use the following command: jspm install aurelia-animator-css After that, the final step is to register the plugin with the application, which is done during the manual bootstrapping phase in the main.js file of our report builder example. export function configure(aurelia) { aurelia.use .standardConfiguration() .developmentLogging() .plugin('aurelia-animator-css'); // <-- REGISTER THE PLUGIN aurelia.start().then(a => a.setRoot()); } Note: The plugin itself is just another Aurelia project following the convention of having an index.js file exposing a configure function, which receives an instance of Aurelia as a parameter. The configure method does the initialization work for the plugin. For example, it might register components such as custom elements, attributes or value converters, so that they can be used out-of-the-box (as with the compose custom element). Some plugins accept a callback as a second parameter which can be used to configure the plugin after initialization. An example of this is the i18n plugin. The report builder makes use of subtle animations during the composition phase and to indicate the removal of a widget from the report. The former is done within the toolbox view. We add the class au-stagger to the unordered list to indicate that each item should be animated sequentially. Now each list-item needs the class au-animate, which tells the Animator that we’d like to have this DOM-Element animated. <ul class="list-unstyled toolbox au-stagger" ref="toolboxList"> <li repeat. <i class="fa ${widget.icon}"/> ${widget.name} </li> </ul> We do the same for the reports view widget-repeater: <li repeat. As mentioned, the CSS-Animator will add specific classes to elements during the animation-phase. All we need to do is to declare those in our stylesheet. Adding Drag & Drop As for including 3rd party libraries, we can take advantage of Aurelia’s default package manager JSPM. To install the previously mentioned library, Sortable.js, we need to execute following command, which will install the package under the name sortable. jspm install sortable=github:rubaxa/sortable@1.2.0 After installation, JSPM will automatically update the file config.js and add its package mappings: System.config({ "map": { ... "sortable": "github:rubaxa/sortable@1.2.0", ... } }); Now that the package is installed we can use it inside our toolbox VM by fist importing it and then registering the drag & drop feature for our widgets list inside the attached hook. It’s important to do it at this time, since this is when the view is fully generated and attached to the DOM. import sortable from 'sortable'; ... export class Toolbox { ... attached() { new sortable(this.toolboxList, { sort: false, group: { name: "report", pull: 'clone', put: false } }); } } You might wonder where this.toolboxListis coming from. Take a look at the refattribute of the toolboxview in the animation section above. This simply creates a mapping for an element between the view and the VM. The final part is to accept the dropped elements inside the report VM. To do this, we can leverage the onAdd handler of Sortable.js. Since the dragged list element itself is not going to be placed inside the report but rather the referenced widget composed by the view, we first have to remove it. After this, we check the type of the widget and in case of a textblock, we initialize a prompt for the text, which will be used as the widget’s model data. Finally, we create a wrapper object including the widget’s id, type and model, which will be used by the report view to compose the widget. attached() { new sortable(this.reportSheet, { group: 'report', onAdd: (evt) => { let type = evt.item.title, model = Math.random(), newPos = evt.newIndex; evt.item.parentElement.removeChild(evt.item); if(type === 'textblock') { model = prompt('Enter textblock content'); if(model === undefined || model === null) return; } this.widgets.splice(newPos, 0, { id: Math.random(), type: type, model: model }); } }); } Conclusion And that’s it. We’ve seen how Aurelia’s compose element can help us to create a complex visual composition and nicely separate all of our components into small reusable parts. On top of that, I’ve demonstrated the concept of Aurelia Plugins, to share code between multiple projects as well as how to use 3rd party libraries. We, the Aurelia Team, hope you’ve enjoyed reading this article and would be happy to answer any questions, either here in the comments or on our Gitter channel.
https://www.sitepoint.com/composition-aurelia-report-builder/?utm_source=sitepoint&utm_medium=articletile&utm_campaign=likes&utm_term=javascript
CC-MAIN-2017-47
refinedweb
3,048
56.05
Smalltalk had something called blocks. Blocks were nothing more then to be able to create an object containing some statements, just like the statements you put between '{' and '}'. To put it in Java syntax: Block myBlock = { System.out.println("This is a test"); // do something else // put code here whatever you like System.out.println("Last statement executed."); } The example above created a Block-object and stores it into the variable myBlock. Important is that this doesn't execute the statements in the block. Execution of the statements would look like this: myBlock.execute(); or let's use it as a parameter public void myMethod(Block b){ b.execute(); } Above statements causes the blocks to be executed. Now what is the advantage of such a feature? Well, in Smalltalk block's could be paramaterized. Take a look at this example: Block printBlock = (Object printMe){ System.out.print(printMe); } The code above created a block which prints it's parameter printThis which should be a String. The example below uses this block. for(Iterator i = c.iterator();i.hasNext();){ printBlock.execute(i.next()); } In the case above the printBlock get executed with each element of the collection. As we see block can have parameters. In addition, blocks (like any other object) can be used as parameters too. THIS MAKES BLOCKS EXTREMLY USEFULL AND REDUCES A LOT OF SYNTAX!!! Take a look at the example below, where the collection class has a new method: public forEach(Block b) This method executes the block b for each of the elements it has. Thus the collection class uses an iterator (or whatever it likes) to iterate each element and executes the block b with the element as the parameter. We van use this method as follows: Collection collection; // add elements to c collection.forEach(printBlock); Of course block do not have to be assigned to a variable first. Let's use them directly: collection.forEach( (Object o){ Task t = (Task)o; System.out.println("Executing task: " + t); t.perform(); System.out.println("Finished task: " + t); } ); See, blocks make it very usefull for iterating. In fact, all looping constructs like for and while are obsolete with blocks. This greatly reduces the syntax. Blocks look like anonymous inner classes, which were commonly used for to provide event-listeners. Blocks can have return types too. Lets look at another example. This block has the return type String and takes a String as parameter: Block decisionBlock = String (Object o){ return ((String)o).toUppercase(); } the collection class is extended and has the method collect(Block b): Collection uppercased = collection.collect(decisionBlock); making it very usefull to transform one collection into another. Some other examples: // select: takes a subset of a collection // it executes the block for each of its elements and creates and answers a new collection with only those elements for which the block-execution return true collection existingFiles = collection.select( boolean (Object o){ File f = (File)o; return f.exists(); } ) also think of providing exception handlers: public void performTask(Task t, Block errorBlock){ try{ t.perform(); } catch(Exception e){ errorBlock.
http://archive.oreilly.com/cs/user/view/cs_msg/25737
CC-MAIN-2017-17
refinedweb
513
51.24
21 August 2012 17:08 [Source: ICIS news] LONDON (ICIS)--Low water levels in the river Rhine in Germany have led ship owners and charterers to achieve higher freight rates on barge journeys, sources said on Tuesday.Continued hot weather across Europe has seen water levels on the ?xml:namespace> “On current water levels, a 3,000 tonne barge is able to take around 39% to 44% of its capacity to the upper With barge owners not fully utilising their capacities and bunker fuel costs remaining at high levels, another broker said that; “charterers and owners continue to push for higher rates to recoup on losses made on these inefficient journeys.” “Many have achieved success in gaining higher rates,” he added. According to data from the German waterways authority, levels in Kaub, With rainfall forecasts in the region lower than expected for this time of year, sources expect rates to increase further as water levels continue to fall. Last week, sources in the methanol market said transport barges on the river Rhine had been forced to reduce their loads along certain stretches because of low water levels, caused by recent warm weather and a lack of rain. Additional reporting by Ross Ye
http://www.icis.com/Articles/2012/08/21/9588794/barge-rates-increase-on-low-river-rhine-water-levels.html
CC-MAIN-2014-49
refinedweb
202
51.41
Multiplatform. A simple podcast aggregator. Podget is a simple podcast aggregator optimized for running as a scheduled background job (i.e. cron), with support for categories & folders, importing servers from OPML lists & iTunes PCAST files, exporting an OPML file, automatic playlist creation and cleanup. Updated in Version 0.8 on June5, 2016 to include support for ATOM feeds in additions to RSS feeds.. Apcupsd is a program for monitoring UPSes and performing a graceful computer shutdown in the event of a power failure. It runs on Linux, Mac OS/X, Win32, BSD, Solaris, and other OSes. Environment for extending the usability of Oracle SQLPlus SqlPlusConfig enhances the usability of Oracle SQLPlus in interactive mode. Characteristics: simplifies connection to database, configuring session parameters after connecting, convenient command history and autocompletion, auto-adjust width of columns for data output. Oracle SQLPlus and the same packages of Cygwin are required. Includes cygwin/linux and emacs user configuration scripts. This project includes cygwin/linux and emacs user configuration scripts.. It is a shell library to import other shell scripts as libraries. As a library we consider a simple shell script containing function and/or variable definitions. Libinclude offers you the possibility to use those kind of libraries in a comfortable manner. The usage is very close to the syntax of python's import statement. Functions (FUNC) and variables (VAR) in a library (MYLIB) can be simply included, but also imported in their own namespace (MYLIB_VAR, MYLIB_FUNC). It also provides the import MYLIB as ALIAS and the from MYLIB import FUNC statement, like in python. To ensure the usability of the libraries across different OS and/or software configurations, libinclude provides some features to be utilized in the libraries. One can define dependencies with 3ed party programs, which will be checked while importing and returns a meaningful error message if not met. To adopt the library to different environments, conditional comments provide the possibility to include code blocks depending on defined conditions, or run code on file inclusion. regutils - Win9x registry tools in Unix C & INI file tools in Perl Regutils is a collection of programs to help in managing the configuration of Windows software and systems. The utilities can be used to apply user and machine specific customizations on the fly as users log in or as machines are booted. They can also be used to identify and correct similarities and differences between software configurations. These may be helpful in debug situations or when consistency or differences need to be maintained. The regutils package was initially created and maintained by Michael Rendell at through version 0.10, but is now maintained here. Binary package download of the INI Diff and Patch utilities alone without the older registry utils can be found here: envbot is an advanced modular IRC bot in bash. envbot supports SSL, IPv6, module loading/unloading on the fly, advanced access control and many more features. Mpge Mpge is a wrapper of meterpreter (msfconsole, msfpayload and msfencode) of Metasploit Framework directly integrated with Mac OS X Snow Leopard 10.6.8 and with OS X Mavericks 10.9.. I used three real Mac OS X: Attacker: MacBook with Snow Leopard 10.6.8 Target: Mac iBook PowerPC G4 with Mac OS X10.3.5 Panther and after MacBook and iMac Mac OS X Mountain Lion 10.8.1. All Mac OS X were connected on intranet lan of an italian ISP. The attacker MacBook is in listening and expected the reverse shell from the target Mac iBook PowerPC G4 that receive a package and when user click on file .pkg and insert the user password, the attacker receive a reverse shell of target. For more details read Features and User Reviews. WordBash is a WordPress clone written in GNU Bash DEVELOPMENT IS ON HOLD WordBash is a Bash CGI script that looks and acts just like WordPress with many of the basic WordPress features - posts, pages, sidebar, comments, tags, categories, etc. WordBash is a 1300 lines of shell script. It only uses one call to an external program in a limited way - everything else, viewing, saving/filtering comments, etc. is done with just Bash code - No AWK, TR, etc. It's all Bash. WordBash is nearly identical to WordPress with the twenty eleven theme. WordBash has Object Oriented aspects, a Hierarchical File Database, Caching Factory based Web Templates and an Adaptive Extensible Record format. It does not have all WordPress features but it has most of the basics. It looks and feels just like WordPress of 20MB of code! But WordBash is 1300 lines of shell script. More work needs to be done - and I have not released the Admin code (but I will). The design has some flaws but they are easily fixed. The code is "designed for change.". This.) RTTI for Python Source Files based on inspect The 'pysourceinfo' package provides basic runtime information on executed sourcefiles based on 'inspect' and additional sources.
https://sourceforge.net/directory/language%3Ashell/os%3Awindows/?sort=update&page=6
CC-MAIN-2018-17
refinedweb
823
55.84
This command is used to put objects onto or off of the active list. If none of the five flags [-add, -af, -r, -d, -tgl] are specified, the default is to replace the objects on the active list with the given list of objects. When selecting a set as in select set1, the behaviour is for all the members of the set to become selected instead of the set itself. If you want to select a set, the -ne/noExpandflag must be used. With the advent of namespaces, selection by name may be confusing. To clarify, without a qualified namespace, name lookup is limited to objects in the root namespace :. There are really two parts of a name: the namespace and the name itself which is unique within the namespace. If you want to select objects in a specific namespace, you need to include the namespace separator :. For example, ‘select -r foo*’ is trying to look for an object with the fooprefix in the root namespace. It is not trying to look for all objects in the namespace with the fooprefix. If you want to select all objects in a namespace (foo), use ‘select foo:*’. Note: When the application starts up, there are several dependency nodes created by the system which must exist. These objects are not deletable but are selectable. All objects (dag and dependency nodes) in the scene can be obtained using the lscommand without any arguments. When using the -all, adn/allDependencyNodesor -ado/allDagObjectsflags, only the deletable objects are selected. The non deletable object can still be selected by explicitly specifying their name as in select time1;. instead, the selection is cleared if the selection mod is replace (the default); otherwise, it does nothing Derived from mel command maya.cmds.select Example: import pymel.core as pm # create some objects and add them to a set pm.sphere( n='sphere1' ) # Result: [nt.Transform(u'sphere1'), nt.MakeNurbSphere(u'makeNurbSphere1')] # pm.sphere( n='sphere2' ) # Result: [nt.Transform(u'sphere2'), nt.MakeNurbSphere(u'makeNurbSphere2')] # pm.sets( 'sphere1', 'sphere2', n='set1' ) # Result: nt.ObjectSet(u'set1') # # select all dag objects and all dependency nodes pm.select( all=True ) # clear the active list pm.select( clear=True ) # select sphere2 only if it is visible pm.select( 'sphere2', visible=True ) # select a couple of objects regardless of visibilty pm.select( 'sphere1', r=True ) pm.select( 'sphere2', add=True ) # remove one of the spheres from the active list (using toggle) pm.select( 'sphere1', tgl=True ) # remove the other sphere from the active list pm.select( 'sphere2', d=True ) # the following selects all the members of set1 pm.select( 'set1' ) # this selects set1 itself pm.select( 'set1', ne=True ) # Some examples selecting with namespaces: # create a namespace and an object in the namespace pm.namespace( add='foo' ) # Result: u'foo' # pm.namespace( set='foo' ) # Result: u'foo' # pm.sphere( n='bar' ) # Result: [nt.Transform(u'foo:bar'), nt.MakeNurbSphere(u'foo:makeNurbSphere1')] # # 'select bar' will not select "bar" unless bar is in the # root namespace. You need to qualify the name with the # namespace (shown below). pm.select( 'foo:bar' ) # select all the objects in a namespace pm.select( 'foo:*' )
http://www.luma-pictures.com/tools/pymel/docs/1.0/generated/functions/pymel.core.general/pymel.core.general.select.html#pymel.core.general.select
crawl-003
refinedweb
526
59.09
JavaScript. import { Chart, StepLineSeries, DateTime } from '@syncfusion/ej2-charts'; Chart.Inject(Chart, StepLineSeries, DateTime); let chart: Chart = new Chart({ primaryXAxis: { valueType: 'DateTime' }, series:[{ type: 'StepLine', dataSource: [ { x: new Date(1975, 0, 1), y: 16 }, { x: new Date(1980, 0, 1), y: 12.5 }, { x: new Date(1985, 0, 1), y: 19 }, { x: new Date(1990, 0, 1), y: 14.4 }, { x: new Date(1995, 0, 1), y: 11.5 }, { x: new Date(2000, 0, 1), y: 14 }, { x: new Date(2005, 0, 1), y: 10 }, { x: new Date(2010, 0, 1), y: 16 }], xName: 'x', yName: 'y', }, ], }, '#Chart'); <!DOCTYPE html> <html> <head></head> <body style = 'overflow: hidden'> <div id="container"> <div id="Chart"></div> </div> <style> #control-container { padding: 0px !important; } </style> </body> </html> Stepline Chart User Guide Learn the available options to customize JavaScript stepline chart. Stepline Chart API Reference Explore the JavaScript stepline chart APIs.
https://www.syncfusion.com/javascript-ui-controls/js-charts/chart-types/stepline-chart
CC-MAIN-2020-45
refinedweb
147
80.72
Meaningful VC Exits Can your startup generate venture-scale returns? The purpose of this post is, with the help of some venture math and a bunch of frequently made assumptions, to derive the size in term of revenues that a company has to achieve in a 5–7 years time window to be regarded as attractive for VC investment. We will do that for different fund sizes (micro $50mm-$100mm, traditional $350mm and mega $1b VC funds) and different startup business models (SaaS, marketplaces, e-commerce). This might also be helpful for the company and its investors to plan revenue targets and required growth rates for the years ahead, from first investment to exit. Defining a Meaningful Exit Let’s start with some venture math assumptions: - An early-stage venture fund is going to invest in 20 companies - The fund aims to get a 3x gross return (which, with the usual 2–20 fee estructure, translates into something close to a 2x net return and a 15%-25% IRR depending on the actual timing of the cash flows) - The expected distribution of outcomes is: 1/3 losses (7 companies with 0x returns), 1/3 money-back (7 companies with 1x returns), 1/3 successes (6 companies with substantial returns) - The expected distribution of the successful outcomes is: 1 home-run (a company returning the entire fund) + 5 meaningful exits (5 companies returning the amount required up to a 3x gross fund return) Considering all the above, we are in the position to mathematically define a meaningful exit: a company able to return (3–1–1/3)/5 = (5/3)/5 = 1/3 of the fund. Don’t get lost in the math: From our 3x target gross return I have subtracted 1x (the expected return of the home-run) and 1/3 (the expected return of the money-back companies) and then I have divided the result by 5 (the expected number of meaningful exits). Determining the Size of a Meaningful Exit Once that we know what a meaningful exit is, let’s introduce one more assumption: that the VC is going to construct a 20% ownership in a successful company over time, which, by the way, is not easy at all. With that in mind, we are now equipped to tell for how much a company must be sold in order to be considered a meaningful exit, for different fund sizes: For example, for a $50mm micro VC fund like ours, a company must generate at least $17mm in returns to be considered meaningful, which with a 20% ownership position for the VC at the time of the exit, means that the company has to be sold for at least $83mm. For the same $50mm micro VC fund, the thing gets a bit more complicated to achieve a home-run return: the company has to sell for at least $250mm. Here you can also see the importance of ownership targets: if the VC had only 10% of the company instead of 20%, the company had to sell for $167mm instead of $83 just to be meaningful or for $500mm to be a home-run. I hope that at this point it is also clear that, the bigger the fund, the bigger the exit required to be considered meaningful. Remember that when assessing the right investor for you and your company. The bigger the fund, the bigger the exit required to be considered meaningful. If you are following the reasoning through here, you might think that a way to reduce the size of a meaningful exit for a VC would be to increase the portfolio size. And you would be right: if instead of investing in 20 companies the VC invested in 30 (assuming the same outcome distribution), a meaningful exit would be (3–1–1/3)/9=(5/3)/9=0.19x the fund size. The problem with that is that the portfolio size is constrained, not only by the number of attractive investment opportunities the VC is able to find during the investment period of the fund, but also by the number of companies the VC firm is able to properly monitor/add value to (for instance by sitting on the board of directors). Also, I think it is worth noting that there are two schools of thought regarding the best strategy for selecting VC investments (and both of them have been proven successful): some VCs will only aim at potential home-runs, knowing that some of those investments will most likely end up being “just” meaningful exits, and some VCs will mainly aim at potential meaningful exits, expecting that 1 or 2 of those investments will end up being home-runs. We are in the second camp. Determining the Revenue Targets for Meaningful Exits Before we can determine the revenue targets, again, we need some more assumptions: - A SaaS company with 75% gross margin can be sold for 5x ARR - A marketplace company with 15% take rate can be sold for 1x the last twelve months (LTM) gross merchandise value (GMV) - An e-commerce company with 30% gross margin can be sold for 2x the LTM revenues - In all cases, Net Debt (debt minus cash) at the time of exit equals zero, so Enterprise Value (EV) = Equity Value - In all cases, the same growth rate is assumed (Note: I hope you see the correspondence between the revenue multiples and gross margins above) Now that we know the size of the exits we need for different fund sizes (e.g. $83mm for the $50mm micro VC fund) and the expected revenue multiples for different business models (e.g. 5x ARR for SaaS), we can determine the revenue targets for the different combinations of them: This means that for a $50mm micro VC fund, a SaaS company has to be able to generate $17mm in ARR in 5–7 years; or $83mm in GMV if it is a marketplace; or $42mm in revenues if it is an e-commerce business. Can I grow fast enough? The next question is: “Given my current ARR/GMV/revenues, can I/how do I grow fast enough to become a meaningful exit, or even better, a home-run for the venture fund I am speaking to?” As an example, let’s continue with the the micro VC $50mm fund and the SaaS company. Now consider that some of the best SaaS companies have followed the following revenue trajectory: T2D3 (triple, triple, double, double, double). It might also be useful to remember that to triple the ARR is equivalent to growing at a monthly compound rate of 10% and that to double it is equivalent to growing at a monthly compound rate of 6%. (No need to mention that if you present a business plan in which you need to grow faster than that in order to become a meaningful exit/home-run you won’t get much credibility from the investor). This means that, if your current ARR is $200k ($17k MRR) and you follow that revenue trajectory, you might become a successful exit for the VC. If your current ARR is $700k ($58k MRR) and you follow that revenue trajectory, you might become a home-run. I don’t think it is wise to assume that every company will follow that growth path, so adjust accordingly (e.g. decrease the growth rates and increase the starting required ARR). Once you have your revenue target, budget the corresponding sales & marketing costs plus any other expenses required to achieve that goal and assess if the overall picture is achievable. Remember that everything seems possible (and even easy) in a spreadsheet but unfortunately it is not so in reality. Final Remarks - Not every company can grow quickly enough to generate venture-scale returns. There is absolutely nothing wrong with it. But a VC should only invest in the companies with that potential. - The fund size (and the venture math assumptions used) determines what a meaningful exit is. - Some investors aim only at potential home-run exits. Others don’t. Both strategies have been proven successful over time. - Your business model determines mainly your gross margin. Your gross margin and your growth rate are some of the most important factors to determine the multiple you get in an exit. Use that multiple to find your revenue target. - Work backwards to find out the growth rates required to hit that target given your current level. Compare those rates to those of the most successful companies to have a feeling of its feasibility. - There are many possible values for the assumptions used — although I think the ones presented here are reasonable. Adjust them to your liking and do your own math! JME Venture Capital is an early-stage tech VC firm based in Madrid investing in the best Spanish founders everywhere. JME VC manages two venture funds with €60mm in assets under management. So far, JME VC has invested in 19 companies across two funds, including Flywire (formerly peerTranfer), WorldSensing, Redbooth (formerly Teambox), Jobandtalent, Minube, OnyxSolar and Playspace. - Send us your business plan: backme@jme.vc - Read our blog posts:
https://medium.com/jme-venture-capital/meaningful-vc-exits-2bb5702776e2
CC-MAIN-2019-26
refinedweb
1,522
51.41
Created attachment 69830 [details] full journald log Fedora 18 (systemd-195-2.fc18) running on OLPC XO laptops. We ship a udev rule that renames eth* network interfaces when they are found. KERNEL=="eth*", PROGRAM="olpc_eth_namer", NAME="%c" This rule is no longer working, the renames fail with error: systemd-udevd[258]: error changing net interface name eth0 to eth1: Device or resource busy The journalctl logs (attached) show that NetworkManager is activating the device before udev rules have run. Here is the trimmed sequence of events: First the device appears and NM starts doing stuff with it, including bringing the interface up: Nov 09 16:38:37 xo-93-20-8d.localdomain kernel: asix 1-1.2:1.0: eth0: register 'asix' at usb-d4208000.usb-1.2, ASIX AX88772 USB 2.0 Ethernet, 00:1c:49:01:05:e9 Nov 09 16:38:38 xo-93-20-8d.localdomain NetworkManager[371]: <info> (eth0): carrier is OFF Nov 09 16:38:38 xo-93-20-8d.localdomain NetworkManager[371]: <error> [1352479118.338790] [nm-device-ethernet.c:454] update_permanent_hw_address(): (eth0): unable to read permanent MAC address (error 0) Nov 09 16:38:38 xo-93-20-8d.localdomain NetworkManager[371]: <info> (eth0): new Ethernet device (driver: 'asix' ifindex: 2) Nov 09 16:38:39 xo-93-20-8d.localdomain kernel: IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready Nov 09 16:38:39 xo-93-20-8d.localdomain NetworkManager[371]: <info> (eth0): preparing device. Then my udev rule PROGRAM gets run which outputs these lines to /dev/kmsg: Nov 09 16:38:39 xo-93-20-8d.localdomain : eth namer Nov 09 16:38:39 xo-93-20-8d.localdomain : eth namer eth1 Its trying to rename the new device to eth1. But that then fails, because the network interface is up. Nov 09 16:38:40 xo-93-20-8d.localdomain systemd-udevd[258]: error changing net interface name eth0 to eth1: Device or resource busy I checked the NM code, it uses libgudev to become aware of new network devices, and ones that are available at startup. So this seems like a udev bug - it should not be advertising these devices to libgudev clients before udev itself has finished applying the rule-driven configuration. Similar problem here (Fedora 18, systemd-195-2.fc18.x86_64). I have an old machine with two network adapters and a 70-persistent-net.rules as follows: ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="XX:XX:XX:XX:XX:XX", NAME="eth0" ACTION=="add", SUBSYSTEM=="net", ATTR{address}=="YY:YY:YY:YY:YY:YY", NAME="eth1" These rules do not work anymore. The rename, when it is needed, fails. systemd-udevd[282]: error changing net interface name eth0 to eth1: File exists systemd-udevd[284]: error changing net interface name eth1 to eth0: File exists Lennart commented on fedora-devel that we should use biosdevname instead. I would be happy using it, but unfortunately this BIOS does not support SMBIOS 2.6, so I need the rules working. journalctl output attached. Created attachment 69901 [details] journalctl output Short version: The rule will not work any more, and cannot be made working with systemd. The rules need to use names now which do not use kernel names like ethX. Biosdevname *should* work fine, even without "BIOS support" in that sense. It should be able to calculate a predictable name based on the physical location of the hardware, at least if PCI/USB hardware is used. Long version: We do no longer support renaming network interfaces in the kernel namespace. Interface names are required to use custom names that can never clash with the kernel created ones. We do not support swapping names; we cannot win any race against the kernel creating new interfaces at the same time. We do no longer support the creation of udev rules from inside the hotplug path. It was pretty naive to ever try this in the first place, it all is a problem that cannot be solved properly, and which creates many more new problems than it solves. The entire udev-based automatic persistent network names is all just a long history of failures, it pretended to be able to solve something; but it couldn't deliver. We completely stopped pretending that now, and need to move on to something that can work out in a reliable and predictable manner. Predictable network interface names require a tool like biosdevname, or manually configured names, which do not use the kernel names. Well, it does not work here: # biosdevname -d BIOS device: Kernel name: eth1 Permanent MAC: 00:16:76:8C:E9:04 Assigned MAC : 00:16:76:8C:E9:04 ifIndex: 3 Driver: 8139too Driver version: 0.9.28 Firmware version: Bus Info: 0000:02:02.0 PCI name : 0000:02:02.0 PCI Slot : Unknown Index in slot: 0 BIOS device: Kernel name: eth0 Permanent MAC: 1C:7E:E5:26:32:3A Assigned MAC : 1C:7E:E5:26:32:3A ifIndex: 2 Driver: r8169 Driver version: 2.3LK-NAPI Firmware version: Bus Info: 0000:02:03.0 PCI name : 0000:02:03.0 PCI Slot : Unknown Index in slot: 0 biosdevname bug? So probably I have a crappy system like this: For now I will use names that do not conflict with the ones used by the kernel. Thanks for the tip Kay. I imagine you have probably resolved Marcos's case above. But the original issue reported when I opened the bug still stands, and is not related to the target name being used. (I updated the rules to rename the device to foo1 to remove any doubt - still doesn't work) In my case, the issue is that udev presents the device to NetworkManager before it has applied the relevant udev rules. NetworkManager immediately brings the device up (e.g. ifconfig eth0 up) which then means when udev tries to rename it shortly after, it fails, because you can't rename an interface that is up. This is reproducible on every boot. I've investigated further: On startup, NetworkManager starts listening for uevents via libgudev (to be informed of new network devices that get added later), and then enumerates all existing network devices: devices = g_udev_client_query_by_subsystem (priv->client, "net"); for (iter = devices; iter; iter = g_list_next (iter)) { net_add (self, G_UDEV_DEVICE (iter->data)); This is hitting a race. libudev's enumeration works directly with /sys, without consulting udevd. In this case, udevd has not finished reading/applying all the relevant rules to the device, but libudev finds it anyway, and hands it over to NetworkManager. I added some debug messages in NM and udevd and can confirm that the following is happening: 1. system boots 2. Network device is detected 3. NetworkManager starts, queries available network devices, finds the device 4. NetworkManager brings the network device up 5. udevd starts processing of the network device 6. udevd tries to rename the network device, fails, its in use 7. udevd announces presence of the network device to libudev listeners I can't imagine this is the only race caused by the fact that libudev's enumeration doesn't seem to synchronise with udevd before presenting devices. What options do we have to solve this? maybe this? Ah, that looks promising. So, the NM enumeration part would start checking udev_device_get_is_initialized() before processing devices. If that returns 0, it would skip the device, on the basis that udev is still setting it up, and we should expect it to arrive via a uevent later. Does that logic sound sensible? Could be, that this is enough: g_udev_enumerator_add_match_is_initialized() Yes, and also the udev_enumerate_add_match_is_initialized() documentation agrees with the logic above. Seems to have solved the problem. Glad that there wasn't a gaping hole after all. Thanks! Use of freedesktop.org services, including Bugzilla, is subject to our Code of Conduct.
https://bugs.freedesktop.org/show_bug.cgi?id=56929
CC-MAIN-2017-26
refinedweb
1,304
65.22
. Introduction. Phaser is a hot new HTML5 game development framework with a dedicated community, so if you haven't heard about it yet you should definitely give it a try! Related Posts What is Phaser? Phaser is a framework for building both desktop and mobile HTML5 games. It was created by Photon Storm. The framework is written in pure JavaScript, but also contains TypeScript definition files, in case you're into that. Phaser code is heavily based on the Flash gamedev platform Flixel, so Flash developers can feel right at home. Under the hood, it uses the Pixi.js engine to take care of rendering everything on screen using Canvas, or WebGL if possible. It's quite new, but growing rapidly fast with the help of the active community at HTML5GameDevs forums. There are already many tutorials and articles available, and you can also check the official documentation, and a large collection of examples that can be very helpful during development. It is open sourced and freely available on GitHub, so you can dive directly into the source code and learn from it. The latest stable build of Phaser, at the time of writing, is version 2.0.7. What is Monster Wants Candy? When I start working on a game, I think about the core idea first and try to quickly set up a working prototype. In this case study, we start with a fairly simple demo of a game called Monster Wants Candy. Instead of working from a prototype, I will show you the structure of the project first, so you can understand the whole idea. We will then go through the chronological steps of our game: from loading the assets to creating the main menu and the actual game loop. You can check out the Monster Wants Candy demo right now to see what we'll be working on together. The coding was taken care of by Andrzej Mazur from Enclave Games (that's me!), and all the graphical assets were created by Robert Podgórski from Blackmoon Design. The story of Monster Wants Candy is simple: an evil king has kidnapped your love and you have to collect enough candy to get her back. The gameplay is also simple: the sweets are falling down and you can tap them to eat them. The more points you gain from eating the candy, the better. If you miss any and they fall off the screen, you'll lose a life and the game will be over. As you can see, it is a very simple game, but the structure is complete. You'll find that the most important use of the framework is for tasks like loading images, rendering sprites, and detecting user activity. It also makes for a good starting point from which you can copy the code, start fiddling with it, and build your own game. Project Setup and Structure You can read this handy article from the framework author himself about how to get started with Phaser, or you can copy the phaser.min.js file from the GitHub repo into your project directory and start working from scratch. You don't need an IDE—you can simply launch the index.html file in your browser and instantly see the changes you made in the source code. Our project folder contains the index.html file which includes the HTML5 structure and all the necessary JavaScript files. There are two subfolders: img, which stores all of our graphic assets, and src, which stores the source code of the game. Here's how the folder structure looks: In the src folder, you'll find the JavaScript files—this is where the magic happens. In this tutorial, I will describe the purpose and the contents of every file in that folder. You can see the source for each file in the GitHub repo for this tutorial. Index.html Let's start with the index.html file. It looks like a basic HTML5 website, but instead of adding the text and lots of HTML elements, we initialize the Phaser framework, which will render everything to a Canvas element. <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <title>Monster Wants Candy demo</title> <style> body { margin: 0; background: #B4D9E7; } </style> <script src="src/phaser.min.js"></script> <script src="src/Boot.js"></script> <script src="src/Preloader.js"></script> <script src="src/MainMenu.js"></script> <script src="src/Game.js"></script> </head> <body> <script> (function() { var game = new Phaser.Game(640, 960, Phaser.AUTO, 'game'); game.state.add('Boot', Candy.Boot); game.state.add('Preloader', Candy.Preloader); game.state.add('MainMenu', Candy.MainMenu); game.state.add('Game', Candy.Game); game.state.start('Boot'); })(); </script> </body> </html> We define the usual structure of the HTML document with the doctype and some information in the <head>: charset encoding, title of the page, and CSS style. Usually, we would reference the external CSS file where we put all the styling, but we don't need it here—as I mentioned earlier, everything will be rendered on a Canvas element, so we won't have any HTML elements to style. The last thing to do is to include all of our JavaScript files: from the phaser.min.js file with the source code of the Phaser framework, to all of our own files containing the game's code. It is good to minify the number of requests in the browser by combining all the JavaScript files into one, so that your game loads faster, but for the purpose of this tutorial we will simply load them separately. Let's move to the contents of the <body> tag, where we initialize the framework and start our game. The code is inside a self-invoking function; the first line looks like this: var game = new Phaser.Game(640, 960, Phaser.AUTO, 'game'); This code will initialize Phaser with some defaults: 640 is the game's Canvas width in pixels and 960 is the game's height. Phaser.AUTO informs the framework how we want our game to be rendered to the Canvas. There are three options: CANVAS, WEBGL and AUTO. The first runs our game in the 2D context of the Canvas; the second uses WebGL to render it where possible (mostly desktop right now, but the mobile support is getting better); and the third leaves this decision to the framework, which will check whether WebGL is supported and decide whether the game can be rendered in this context—if it can't, then the 2D Canvas rendering will be used. The framework initialization will be assigned to the single object called game, which we will use when referencing the Phaser instance. The next lines are all about adding states to our game: game.state.add('Boot', Candy.Boot); 'Boot' is a state name and Candy.Boot is an object (defined in the next steps) that will be executed when we start that state. We're adding states for Boot (configuration), Preloader (loading assets), MainMenu (you guessed it; the main menu of our game) and Game (the main loop of the game). The last line, game.state.start('Boot'), starts the Boot state, so that the proper function from the Candy.Boot object will be executed. As you can see, there's one main JavaScript game object created, with many others assigned within for special purposes. In our game we have Boot, Preloader, MainMenu, and Game objects which will be our game states, and we define them by using their prototypes. There are a few special function names inside those objects reserved for the framework itself ( preload(), create(), update(), and render()), but we can also define our own ( startGame(), spawnCandy(), managePause()). If you're not sure you understand all of this, then don't worry—I'll explain everything using the code examples later. The Game Let's forget about the Boot, Preloader and MainMenu states for now. They will be described in detail later; all you have to know at the moment is that the Boot state will take care of the basic config of the game, Preloader will load all the graphic assets, and MainMenu will show you the screen where you'll be able to start the game. Let's focus on the game itself and see how the code of the Gamestate looks. Before we go through the whole Game.js code, though, let's talk about the concept of the game itself and the most important parts of the logic from a developer's point of view. Portrait mode The game is played in portrait mode, meaning that the player holds their mobile vertically to play it. In this mode, the screen's height is greater than its width—as opposed to the landscape mode, where the screen's width is greater than its height. There are types of games that work better in portrait mode (like Monster Wants Candy), types that work better in landscape mode (including platformer games like Craigen), and even some types that work in both modes, although it's usually a lot harder to code such games. Game.js Before we go through the source code of the game.js file, let's talk about its structure. There is a world created for us, and there is a player character inside whose job it is to grab the candy. Game world: The world behind the monster is static. There's an image of the Candyland in the background, we can see the monster in the foreground, and there's also a user interface. Player character: This demo is intentionally very simple and basic, so the little monster is doing nothing besides waiting for the candy. The main task for the player is to collect the candy. Candy: The core mechanic of the game is to catch as much candy as possible. The candies are spawned at the top edge of the screen, and the player must tap (or click) them as they're falling down. If any candy falls off the bottom of the screen, it's removed, and the player character receives damage. We don't have a lives system implemented, so after that the game instantly ends and the appropriate message is displayed. Okay, let's look at the code structure of our Game.js file now: Candy.Game = function(game) { // ... }; Candy.Game.prototype = { create: function() { // ... }, managePause: function() { // ... }, update: function() { // ... } }; Candy.item = { spawnCandy: function(game) { // ... }, clickCandy: function(candy) { // ... }, removeCandy: function(candy) { // ... } }; There are three functions defined in the Candy.Game prototype: create()takes care of the initialization managePause()pauses and unpauses the game update()manages the main game loop with every tick We will create a handy object called item to represent a single candy. It will have some useful methods: spawnCandy()adds new candy to the game world clickCandy()is fired when a user clicks or taps on the candy removeCandy()removes it Let's go through them: Candy.Game = function(game) { this._player = null; this._candyGroup = null; this._spawnCandyTimer = 0; this._fontStyle = null; Candy._scoreText = null; Candy._score = 0; Candy._health = 0; }; Here, we're setting up all the variables that we will be using later in the code. By defining this._name, we're restricting the use of the variables to the Candy.Game scope, which means they can't be used in other states—we don't need them there, so why expose them? By defining Candy._name, we're allowing the use of those variables in other states and objects, so, for example, Candy._score can be increased from the Candy.item.clickCandy() function. The objects are initialized to null, and the variables we need for calculations are initialized with zeros. We can move on to the contents of Candy.Game.prototype: create: function() { this.physics.startSystem(Phaser.Physics.ARCADE); this.physics.arcade.gravity.y = 200; this.add.sprite(0, 0, 'background'); this.add.sprite(-30, Candy.GAME_HEIGHT-160, 'floor'); this.add.sprite(10, 5, 'score-bg'); this.add.button(Candy.GAME_WIDTH-96-10, 5, 'button-pause', this.managePause, this); this._player = this.add.sprite(5, 760, 'monster-idle'); this._player.animations.add('idle', [0,1,2,3,4,5,6,7,8,9,10,11,12], 10, true); this._player.animations.play('idle'); this._spawnCandyTimer = 0; Candy._health = 10; this._fontStyle = { font: "40px Arial", fill: "#FFCC00", stroke: "#333", strokeThickness: 5, align: "center" }; Candy._scoreText = this.add.text(120, 20, "0", this._fontStyle); this._candyGroup = this.add.group(); Candy.item.spawnCandy(this); }, At the beginning of the create() function, we set up the ARCADE physics system—there are a few available in Phaser, but this is the simplest one. After that, we add vertical gravity to the game. Then we add three images: the background, the floor on which the monster is standing, and the score UI's background. The fourth item we add is the Pause button, Note that we're using the Candy.GAME_WIDTHand Candy.GAME_HEIGHT variables, which are defined in Candy.Preloader() but are available throughout the whole game code. Then we create a monster, the player's avatar. It's a sprite with frames—a spritesheet. To have it look like he's standing and breathing calmly, we can animate him. The animations.add() function creates an animation from the available frames, and the function takes four parameters: - the name of the animation (so we can reference it later) - the table with all the frames we want to use (we can use only some of them if we want) - a framerate - a flag to specify whether to loop the animation and play it indefinitely. If we want to start our animation, we have to use the animations.play() function with the name specified. We then set the spawnCandyTimer to 0 (getting ready to count up) and the health of the monster to 10. Styling the Text The next two lines let us show some text on the screen. The this.add.text() function takes four parameters: left and top absolute positions on the screen, the actual text string and the config object. We can format the text accordingly using the CSS-like syntax in that config object. The config for our font looks like this: this._fontStyle = { font: "40px Arial", fill: "#FFCC00", stroke: "#333", strokeThickness: 5, align: "center" }; In this case, the font is Arial, it's 40 pixels tall, the color is yellow, there's a stroke defined (with color and thickness), and the text is center-aligned. After that, we define candyGroup and spawn the first candy. Pausing the Game The pause function looks like this: managePause: function() { this.game.paused = true; var pausedText = this.add.text(100, 250, "Game paused.\nTap anywhere to continue.", this._fontStyle); this.input.onDown.add(function(){ pausedText.destroy(); this.game.paused = false; }, this); }, We change the state of this.game.paused to true every time the pause button is clicked, show the appropriate prompt to the player, and set up an event listener for the player's click or tap on the screen. When that click or tap is detected, we remove the text and set this.game.paused to false. The paused variable in the game object is special in Phaser, because it stops any animations or calculations in the game, so everything is frozen until we unpause the game by setting it to false. The Update Loop The update() function name is one of the reserved words in Phaser. When you write a function with that name, it will be executed on every frame of the game. You can manage calculations inside it based on various conditions. update: function() { this._spawnCandyTimer += this.time.elapsed; if(this._spawnCandyTimer > 1000) { this._spawnCandyTimer = 0; Candy.item.spawnCandy(this); } this._candyGroup.forEach(function(candy){ candy.angle += candy.rotateMe; }); if(!Candy._health) { this.add.sprite((Candy.GAME_WIDTH-594)/2, (Candy.GAME_HEIGHT-271)/2, 'game-over'); this.game.paused = true; } } Every tick in the game world, we add the time elapsed since the previous tick to the spawnCandyTimer variable to keep track of it. The if statement checks whether or not it's time to reset the timer and spawn new candy onto the game world—we do this every second (that is, every time we notice that the spawnCandyTimer has passed 1000 milliseconds). Then, we iterate through the candy group with all the candy object inside (we could have more than one on screen) using a forEach, and add a fixed amount (stored in the candy object's rotateMe value) to the candy's angle variable, so that they each rotate at this fixed speed while falling. The last thing we do is check if the health has dropped to 0—if so, then we show the game over screen and pause the game. Managing the Candy Events To separate the candy logic from the main Game, we use an object called item that contains the functions we will use: spawnCandy(), clickCandy() and removeCandy(). We keep some of the variables related to candy in the Game object for easier use, while others are defined only in the item functions for better maintainability. spawnCandy: function() { var dropPos = Math.floor(Math.random()*Candy.GAME_WIDTH); var dropOffset = [-27,-36,-36,-38,-48]; var candyType = Math.floor(Math.random()*5); var candy = game.add.sprite(dropPos, dropOffset[candyType], 'candy'); candy.animations.add('anim', [candyType], 10, true); candy.animations.play('anim'); game.physics.enable(candy, Phaser.Physics.ARCADE); candy.inputEnabled = true; candy.events.onInputDown.add(this.clickCandy, this); candy.checkWorldBounds = true; candy.events.onOutOfBounds.add(this.removeCandy, this); candy.anchor.setTo(0.5, 0.5); candy.rotateMe = (Math.random()*4)-2; game._candyGroup.add(candy); }, The function begins by defining three values: - a randomized x-coordinate to drop the candy from (between zero and the width of the Canvas) - the y-coordinate to drop the candy from, based on its height (which we determine later on based on the type of candy) - a randomized candy type (we have five different images to use) We then add a single candy as a sprite, with its starting position and image as defined above. The last thing we do in this block is set a new animation frame that will be used when the candy spawns. Next, we enable the body of the candy for the physics engine, so that it can fall naturally from the top of the screen when the gravity is set. Then, we enable the input on the candy to be clicked or tapped, and set the event listener for that action. To be sure that the candy will fire an event when it leaves the screen boundaries we set checkWorldBounds to true. events.onOutOfBounds() is a function that will be called when our candy exits the screen; we make it call removeCandy() in turn. Setting the anchor to our candy in the exact middle lets us rotate it around its axis, so that it will spin naturally. We set the rotateMe variable here so we can use it in the update() loop to rotate the candy; we choose a value between -2 and +2. The last line adds our newly created candy to the candy group, so that we can loop through them all. Let's move on to the next function, clickCandy(): clickCandy: function(candy) { candy.kill(); Candy._score += 1; Candy._scoreText.setText(Candy._score); }, This one takes one candy as a parameter and uses the Phaser method kill() to remove it. We also increase the score by 1 and update the score text. Resetting the candy is also short and easy: removeCandy: function(candy) { candy.kill(); Candy._health -= 10; }, The removeCandy() function is fired if the candy disappears below the screen without being clicked. The candy object is removed, and the player loses 10 points of health. (He had 10 at the beginning, so missing even one piece of falling candy ends the game.) Prototypes and Game States We've learned about the game mechanics, the core idea, and how the gameplay looks. Now it's time to see the other parts of the code: scaling the screen, loading the assets, managing button presses, and so on. We already know about the game states, so let's see exactly how they look, one after the other: Boot.js Boot.js is the JavaScript file where we will define our main game object—let's call it Candy (but you can name it whatever you want). Here's the source code of the Boot.js file: var Candy = {}; Candy.Boot = function(game) {}; Candy.Boot.prototype = { preload: function() { this.load.image('preloaderBar', 'img/loading-bar.png'); }, create: function() { this.input.maxPointers = 1; this.scale.scaleMode = Phaser.ScaleManager.SHOW_ALL; this.scale.pageAlignHorizontally = true; this.scale.pageAlignVertically = true; this.scale.setScreenSize(true); this.state.start('Preloader'); } }; As you can see, we're starting with var Candy = {} which creates a global object for our game. Everything will be stored inside, so we won't bloat the global namespace. The code Candy.Boot = function(game){} creates a new function called Boot() (used in index.html) which receives the game object as a parameter (also created by the framework in index.html). The code Candy.Boot.prototype = {} is a way to define the contents of Candy.Boot using prototypes: Candy.Boot.prototype = { preload: function() { // code }, create: function() { // code } }; There are a few reserved names for functions in Phaser, as I mentioned before; preload() and create() are two of them. preload() is used to load any assets and create() is called exactly once (after preload()), so you can put the code that will be used as a setup for the object there, such as for defining variables or adding sprites. Our Boot object contains these two functions, so they can be referenced by using Candy.Boot.preload() and Candy.Boot.create(), respectively. As you can see in the full source code of the Boot.js file, the preload() function loads a preloader image into the framework: preload: function() { this.load.image('preloaderBar', 'img/loading-bar.png'); }, The first parameter in this.load.image() is the name we give to the loading bar image, and the second is the path to the image file in our project structure. But why are we loading an image in the Boot.js file, when Preload.js is supposed to do it for us anyway? Well, we need an image of a loading bar to show the status of all the other images being loaded in the Preload.js file, so it has to be loaded earlier, before everything else. Scaling Options The create() function contains a few Phaser-specific settings for input and scaling: create: function() { this.input.maxPointers = 1; this.scale.scaleMode = Phaser.ScaleManager.SHOW_ALL; this.scale.pageAlignHorizontally = true; this.scale.pageAlignVertically = true; this.scale.setScreenSize(true); this.state.start('Preloader'); } The first line, which sets input.maxPointers to 1, defines that we won't use multi-touch, as we don't need it in our game. The scale.scaleMode setting controls the scaling of our game. The available options are: EXACT_FIT, NO_SCALE and SHOW_ALL; you can enumerate through them and use the values of 0, 1, or 2, respectively. The first option will scale the game to all the available space (100% width and height, no ratio preserved); the second will disable scaling completely; and the third will make sure that the game fits in the given dimensions, but everything will be shown on the screen without hiding any fragments (and the ratio will be preserved). Setting scale.pageAlignHorizontally and scale.pageAlignVertically to true will align our game both horizontally and vertically, so there will be the same amount of free space on the left and right side of the Canvas element; the same goes for top and bottom. Calling scale.setScreenSize(true) "activates" our scaling. The last line, state.start('Preloader'), executes the next state—in this case, the Preloader state. Preloader.js The Boot.js file we just went through has a simple, one-line preload() function and lots of code in the create() function, but Preloader.js looks totally different: we have lots of images to load, and the create() function will just be used to move to another state when all the assets are loaded. Here's the code of the Preloader.js file: Candy.Preloader = function(game){ Candy.GAME_WIDTH = 640; Candy.GAME_HEIGHT = 960; }; Candy.Preloader.prototype = { preload: function() { this.stage.backgroundColor = '#B4D9E7'; this.preloadBar = this.add.sprite((Candy.GAME_WIDTH-311)/2, (Candy.GAME_HEIGHT-27)/2, 'preloaderBar'); this.load.setPreloadSprite(this.preloadBar); this.load.image('background', 'img/background.png'); this.load.image('floor', 'img/floor.png'); this.load.image('monster-cover', 'img/monster-cover.png'); this.load.image('title', 'img/title.png'); this.load.image('game-over', 'img/gameover.png'); this.load.image('score-bg', 'img/score-bg.png'); this.load.image('button-pause', 'img/button-pause.png'); this.load.spritesheet('candy', 'img/candy.png', 82, 98); this.load.spritesheet('monster-idle', 'img/monster-idle.png', 103, 131); this.load.spritesheet('button-start', 'img/button-start.png', 401, 143); }, create: function() { this.state.start('MainMenu'); } }; It starts similarly to the previous Boot.js file; we define the Preloader object and add definitions for two functions ( preload() and create()) to its prototype. Inside the Prototype object we define two variables: Candy.GAME_WIDTH and Candy.GAME_HEIGHT; these set the default width and height of the game screen, which will be used elsewhere in the code. The first three lines in the preload() function are responsible for setting the background color of the stage (to #B4D9E7, light blue), showing the sprite in the game, and defining it as a default one for the special function called setPreloadSprite() that will indicate the progress of the loading assets. Let's look at the add.sprite() function: this.preloadBar = this.add.sprite((640-311)/2, (960-27)/2, 'preloaderBar'); As you can see, we pass three values: the absolute left position of the image (the center on screen is achieved by subtracting the width of the image from the width of the stage and halving the result), the absolute top position of the image (calculated similarly), and the name of the image (which we loaded in the Boot.js file already). The next few lines are all about using load.image() (which you've seen already) to load all of the graphic assets into the game. The last three are a little different: this.load.spritesheet('candy', 'img/candy.png', 82, 98); This function, load.spritesheet(), rather than loading a single image, takes care of a full collection of images inside one file—a spritesheet. Two extra parameters are needed for telling the function the size of a single image in the sprite. In this case, we have five different types of candy inside one candy.png file. The whole image is 410x98px, but the single item is set to 82x98px, which is entered in the load.spritesheet() function. The player spritesheet is loaded in a similar manner. The second function, create(), starts the next state of our game, which is MainMenu. This means that the main menu of the game will be shown just after all the images from the preload() function have been loaded. MainMenu.js This file is where we will render some game-related images, and where the user will click on the Start button to launch the game loop and play the game. Candy.MainMenu = function(game) {}; Candy.MainMenu.prototype = {); }, startGame: function() { this.state.start('Game'); } }; The structure looks similar to the previous JavaScript files. The prototype of the MainMenu object doesn't have a preload() function, because we don't need it—all the images have been loaded in the Preload.js file already. There are two functions defined in the prototype: create() (again) and startGame(). As I mentioned before, the name of the first one is specific to Phaser, while the second one is our own. Let's look at startGame() first: startGame: function() { this.state.start('Game'); } This function takes care of one thing only—launching the game loop—but it's not launched automatically or after the assets are loaded. We will assign it to a button and wait for a user input.); }, The create() method has three add.sprite() Phaser functions that we are familiar with already: they add images to the visible stage by positioning them absolutely. Our main menu will contain the background, the little monster in the corner, and the title of the game. Buttons There's also an object we've already used in Game state, a button: this.startButton = this.add.button(Candy.GAME_WIDTH-401-10, Candy.GAME_HEIGHT-143-10, 'button-start', this.startGame, this, 1, 0, 2); This button looks like it is more complicated than over methods we've seen so far. We pass eight different arguments to create it: left position, top position, name of the image (or sprite), the function to execute after the button is clicked, the context in which this function is executed, and indices of the images in the button's spritesheet. This is how the button spritesheet looks, with the states labelled: It's very similar to the candy.png spritesheet we used before, except arranged vertically. It's important to remember that the last three digits passed to the function— 1, 0, 2—are the different states of the button: over (hover), out (normal), and down (touch/click) respectively. We have normal, hover and click states in the button.png spritesheet, respectively, so we change the order in the add.button() function from 0, 1, 2 to 1, 0, 2 to reflect that. That's it! You now know the basics of Phaser game framework; congratulations! The Finished Game The demo game used in the article has evolved into a full, finished game that you can play here. As you can see, there are lives, achievements, high scores, and other interesting features implemented, but most of them are based on the knowledge you've already learned by following this tutorial. You can also read the short "making of" blog post to learn about the origins of the game itself, the story behind it, and some fun facts. Resources Building HTML5 games for mobile devices has exploded in the last few months. The technology is getting better and better, and there are tools and services popping out almost every single day—it's the best time to dive in to the market. Frameworks like Phaser gives you the ability to create games that run flawlessly on a variety of different devices. Thanks to HTML5, you can target not just mobile and desktop browsers, but also different operating systems and native platforms. There are lots of resources right now that could help you get into HTML5 game development, for example this HTML5 Gamedev Starter list or this Getting Started With HTML5 Game Development article. If you need any help you can find fellow developers on the HTML5GameDevs forums or directly at #BBG channel on Freenode IRC. You can also check the status of the upcoming book about Firefox OS and HTML5 games, but it's still in the early stages of writing. There's even a Gamedev.js Weekly newsletter that you can subscribe to, to keep up to date with the latest news. Summary This was a long journey through every line of code of the Monster Wants Candy demo, but I hope it will help you learn Phaser, and that in the near future you will create awesome games using the framework. The source code used in the article is also freely available on GitHub, so you can fork it or just download it and do whatever you want. Feel free to modify it and create your own games on top of it, and be sure to visit the Phaser forums if you need anything during the development. Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post Envato Market has a range of items for sale to help get you started.
http://gamedevelopment.tutsplus.com/tutorials/getting-started-with-phaser-building-monster-wants-candy--cms-21723?WT.mc_id=Tuts+_website_relatedtutorials_sidebar
CC-MAIN-2016-22
refinedweb
5,342
64.81
Home › Forums › WinForms controls › Other WinForms controls › Text Selection on Focus in WinTextBox - AuthorPosts - User (Old forums)MemberSeptember 13, 2006 at 10:37 amPost count: 23064 Hi, we converted our project to .NET2.0 and found out that TextBoxes behave differently than in 1.1… when tabbing to them, the contents are no longer selected. The XCeed WinTextBox.TextArea.SelectOnFocus seems to be the solution to this problem, but it is by default false. We have a large number of textboxes in our application – all of them would have to be converted by hand and every new TextBox would need this flag – if someone forgets this, the box would behave differently than the others – our customers already complained about it. So my question: is there any possibility to globally set SelectOnFocus for our application? Or is there any other solution for our problem? Thanks in advance, Mike Imported from legacy forums. Posted by mike_t (had 4334 views)User (Old forums)MemberSeptember 14, 2006 at 2:33 pmPost count: 23064 The WinTextBox has the same behavior in both .NET1.1 and .NET 2.0 with our latest version (3.2.6403.0). The WinTextBox.TextBoxArea.SelectOnFocus is false by default. We just tested it. So it is possible that you have a version that, for some reason, had a bug which made the behavior differ from one version of .NET to the other. Unfortunately, in your case, you will have to set the SelectOnFocus to true everywhere. The only other solution is to create a custom class deriving from WinTextBox, override the CreateTextBoxArea, set the SelectOnFocus to true in there, and then do a CTRL-SHIFT-H in your project, and replace all the Xceed.Editors.WinTextBox (and/or WinTextBox) with the name of this new class. i.e : <i> using System; using System.Collections.Generic; using System.Text; using Xceed.Editors; namespace WindowsApplication29 { public class AutoSelectWinTextBox : WinTextBox { protected override TextBoxArea CreateTextBoxArea() { TextBoxArea textBoxArea = base.CreateTextBoxArea(); textBoxArea.SelectOnFocus = true; return textBoxArea; } } } </i> Imported from legacy forums. Posted by André (had 275 views)User (Old forums)MemberSeptember 15, 2006 at 3:02 amPost count: 23064User (Old forums)MemberApril 4, 2007 at 12:38 amPost count: 23064 A rew more WinTextBox questions: 1. If you have TextBoxArea.SelectOnFocus set to false, why won’t a WinTextBox keep the SelectionStart and SelectionLength state when you tab away then back to it? The standard Windows TextBox does. 2. Why does the WinTextBox always position the caret to the end of the text when it gets focus via the keyboard? 3. If you have TextBoxArea.SelectOnFocus set to true, all the text gets selected not only when you focus the control via a keystroke, but also when you give it focus via a mouse click. Standard windows behavior is to select all text only when focusing the control via the keyboard (I mean on data-entry type forms, not the few exceptions like the address bar in IE). Any work-arounds for these? Imported from legacy forums. Posted by Glenn (had 461 views)User (Old forums)MemberApril 16, 2007 at 2:24 pmPost count: 23064 - AuthorPosts - You must be logged in to reply to this topic.
https://forums.xceed.com/forums/topic/Text-Selection-on-Focus-in-WinTextBox/
CC-MAIN-2021-25
refinedweb
530
56.86
Programming. Data forms an integral part of the lives of Data Scientists. From the number of passengers in an airport to the count of stationary in a bookshop, everything is recorded today in form of digital files called databases. Databases are nothing more than electronic lists of information. Some databases are simple, and designed for smaller tasks while others are powerful, and designed for big data. All of them, however, have the same commonalities and perform a similar function. Different database tools store that information in unique ways. Flat files use a table, SQL databases use a relational model and NoSQL databases use a key-value model. In this article, we will focus only on the Relational Databases and accessing them in Python. We will begin by having a quick overview of the Relational databases and their important constituents. Relational Database: A Quick Overview A Relational database consists of one or more tables of information. The rows in the table are called records and the columns in the table are called fields or attributes. A database that contains two or more related tables is called a relational database i.e interrelated data. The main idea behind a relational database is that your data gets broken down into common themes, with one table dedicated to describing the records of each theme. i) Database tables Each table in a relational database has one or more columns, and each column is assigned a specific data type, such as an integer number, a sequence of characters (for text), or a date. Each row in the table has a value for each column. A typical fragment of a table containing employee information may look as follows: The tables of a relational database have some important characteristics: - There is no significance to the order of the columns or rows. - Each row contains one and only one value for each column. - Each value for a given column has the same type. - Each table in the database should hold information about a specific thing only, such as employees, products, or customers. By designing a database this way, it helps to eliminate redundancy and inconsistencies. For example, both the sales and accounts payable departments may look up information about customers. In a relational database, the information about customers is entered only once, in a table that both departments can access. A relational database is a set of related tables. You use primary and foreign keys to describe relationships between the information in different tables. ii) Primary and Foreign Keys Primary and foreign keys define the relational structure of a database. These keys enable each row in the database tables to be identified and define the relationships between the tables. - Primary Key The primary key of a relational table uniquely identifies each record in the table. It is a column, or set of columns, that allows each row in the table to be uniquely identified. No two rows in a table with a primary key can have the same primary key value. Imagine you have a CUSTOMERS table that contains a record for each customer visiting a shop. The customer’s unique number is a good choice for a primary key. The customer’s first and last name are not good choices because there is always the chance that more than one customer might have the same name. - Foreign Key A foreign key is a field in a relational table that matches the primary key column of another table. The example above gives a good idea of the primary and foreign keys. Database Management Systems The Database management system (DBMS) is the software that interacts with end users, applications, and the database itself to capture and analyze data. The DBMS used for Relational databases is called Relational Database Management Systems(RDBMS). Most commercial RDBMSes use Structured Query Language (SQL), a declarative language for manipulating data, to access the database. The major RDBMS are Oracle, MySQL , Microsoft SQL Server, PostgreSQL , Microsoft Access, and SQLite . We have barely scratched the surface regarding databases here. The details are beyond the scope of this article.However, you are encouraged to explore the database ecosystem since they form an essential part of a data scientist’s toolkit. This article will focus on using python to access relational Databases. We will be working with a very easy to use database engine called SQLite. SQLite SQLite is a self-contained, high-reliability, embedded, full-featured, public-domain, SQL database engine. SQLite is… SQLite is a relational database management system based on the SQL language but optimized for use in small environments such as mobile phones or small applications. It is self-contained, serverless, zero-configuration and transactional. It is very fast and lightweight, and the entire database is stored in a single disk file. SQLite is built for simplicity and speed compared to a hosted client-server relational database such as MySQL. It sacrifices sophistication for utility and complexity for size. Queries in SQLite are almost identical to other SQL calls. Python sqlite3 module SQLite can be integrated with Python using a Python module called sqlite3. You do not need to install this module separately because it comes bundled with Python version 2.5.x onwards. This article will show you, step by step, how to work with an SQLite database using Python. Before starting I would highly recommend you all to install DB Browser for SQLite. The browser can be downloaded from their official page easily. DB Browser for SQLite is a high quality, visual, open source tool to create, design, and edit database files compatible with SQLite. It will help us to see the databases being created and edited in real time. Since everything is in place, let us get to work. Contents: - CONNECTING to a Database - CREATING a Table - INSERTING records in a TABLE - SELECTING records from the TABLE - UPDATING Records in the TABLE - DELETE Operation - Example walktrough Connecting to a Database - Open any Python IDE of your choice and type in the following commands. You can even use Jupyter Notebook for the same. In general, the only thing that needs to be done before we can perform any operation on a SQLite database via Python’s sqlite3module is to open a connection to an SQLite database file: import sqlite3 conn = sqlite3.connect('my_database.sqlite') cursor = conn.cursor() print("Opened database successfully") The above Python code enables us to connect to an existing database using connection object conn.If the database does not exist, then it will be created and finally, a database object will be returned. A cursor object is our interface to the database, that allows running any SQL query on our database. If everything goes well, the following lines will be returned on running the script: Opened database successfully Let us now open and view the newly created database in the DB browser. Indeed a new database named my_database.sqlite is created which is currently empty. Before going further, there are two more things that are worth mentioning. If we are finished with our operations on the database file, we have to close the connection via the .close() method: conn.close() And if we performed an operation on the database other than sending queries, we need to commit those changes via the .commit() method before we close the connection: conn.commit() conn.close() We should always remember to commit the current transaction. Since by default Connector/Python does not autocommit, it is important to call this method after every transaction that modifies data for tables that use transactional storage engines. If you don’t call this method, anything you did since the last call to commit() will not be visible from other database connections. 2. Creating a Table Now we will create a table in the previously created database. Type the following code in the IDE. cursor.execute('''CREATE TABLE SCHOOL (ID INT PRIMARY KEY NOT NULL, NAME TEXT NOT NULL, AGE INT NOT NULL, ADDRESS CHAR(50), MARKS INT);''') cursor.close() The routine conn.execute executes the SQL statement. Here we create a table called with SCHOOL the fields: ID, NAME, AGE, ADDRESS and MARKS We also designate as ID Primary Key and then close the connection. Let us see these details in the DB Browser. 3. INSERTING records in the TABLE Let us now INSERT records of students in the table SCHOOL created in the above example. import sqlite3 conn = sqlite3.connect('my_database.sqlite') cursor = conn.cursor() cursor.execute("INSERT INTO SCHOOL (ID,NAME,AGE,ADDRESS,MARKS) \ VALUES (1, 'Rohan', 14, 'Delhi', 200)"); cursor.execute("INSERT INTO SCHOOL (ID,NAME,AGE,ADDRESS,MARKS) \ VALUES (2, 'Allen', 14, 'Bangalore', 150 )"); cursor.execute("INSERT INTO SCHOOL (ID,NAME,AGE,ADDRESS,MARKS) \ VALUES (3, 'Martha', 15, 'Hyderabad', 200 )"); cursor.execute("INSERT INTO SCHOOL (ID,NAME,AGE,ADDRESS,MARKS) \ VALUES (4, 'Palak', 15, 'Kolkata', 650)"); conn.commit() conn.close() When the above program is executed, it will create the given records in the table SCHOOL. 4. SELECTING records from the TABLE Let us say we want to select some particular records from the table i.e only, ID, NAME and MARKS. We can do this easily with the command import sqlite3 conn = sqlite3.connect('my_database.sqlite') cursor = conn.cursor() for row in cursor.execute("SELECT id, name, marks from SCHOOL"): print("ID = ", row[0]) print("NAME = ", row[1]) print("MARKS = ", row[2], "\n") conn.commit() conn.close() When the above program is executed, it will produce the following result. We can see only that the address and age have not been returned. ID = 1 NAME = Rohan MARKS = 200 ID = 2 NAME = Allen MARKS = 150 ID = 3 NAME = Martha MARKS = 200 ID = 4 NAME = Palak MARKS = 650 5. UPDATING Records in the TABLE Let us see how to use the command UPDATE to update any record and then fetch and display the updated records from the table SCHOOL. Here we will update Martha’s marks from 200 to 250 and will again fetch the records. import sqlite3 conn = sqlite3.connect('my_database.sqlite') cursor = conn.cursor() conn.execute("UPDATE SCHOOL set MARKS = 250 where ID = 3") conn.commit() for row in cursor.execute("SELECT id, name, address, marks from SCHOOL"): print("ID = ", row[0]) print("NAME = ", row[1]) print("MARKS = ", row[2], "\n") conn.commit() conn.close() When the above program is executed, marks for Martha would change from 200 to 250. ID = 1 NAME = Rohan MARKS = 200 ID = 2 NAME = Allen MARKS = 150 ID = 3 NAME = Martha MARKS = 250 ID = 4 NAME = Palak MARKS = 650 6. DELETE Operation We can use the operation DELETE to delete any record from the table SCHOOL. Let us say Allen has left the school permanently and we want to delete his records from the database. Now let’s fetch the details of all other students from the records. import sqlite3 conn = sqlite3.connect('my_database.sqlite') cursor = conn.cursor() conn.execute("DELETE from SCHOOL where ID = 2") conn.commit() for row in cursor.execute("SELECT id, name, address, marks from SCHOOL"): print("ID = ", row[0]) print("NAME = ", row[1]) print("ADDRESS = ", row[2]) print("MARKS = ", row[3], "\n") conn.commit() conn.close() When the above program is executed, it will produce the following result. ID = 1 NAME = Rohan ADDRESS = Delhi MARKS = 200 ID = 3 NAME = Martha ADDRESS = Hyderabad MARKS = 250 ID = 4 NAME = Palak ADDRESS = Kolkata MARKS = 650 The same can be seen in the DB Browser. The second record has been deleted. In the above section, we learned how to create a database and perform various operations on it. In this section, let’s work with a real database example to see how we can incorporate the basics we have just learned. Example Walkthrough: Soccer database We will be working with the soccer database from Kaggle. It is the ultimate Soccer database for data analysis and machine learning and the entire details can be accessed from kaggle. The database contains 8 tables. Pre-requisites: A basic knowledge of Python and libraries like pandas will come in handy. Download the dataset in SQLite format from Kaggle and save it in the same directory as your jupyter notebook. Importing basic libraries import sqlite3 import pandas as pd import numpy as np import seaborn as sns import matplotlib.pyplot as plt %matplotlib inline Accessing the Database with the sqlite3 Package # Creating the connection to database con = sqlite3.connect('soccer.sqlite') cursor = con.cursor() Getting a list of all the tables saved into the database for row in cursor.execute("SELECT name FROM sqlite_master WHERE type='table';"): print(row) ('sqlite_sequence',) ('Player_Attributes',) ('Player',) ('Match',) ('League',) ('Country',) ('Team',) ('Team_Attributes',) Reading all the TABLES with pandas library country_table = pd.read_sql_query("SELECT * FROM Country", con) league_table = pd.read_sql_query("SELECT * FROM League", con) match_table = pd.read_sql_query("SELECT * FROM Match", con) player_table = pd.read_sql_query("SELECT * FROM Player", con) player_att_table = pd.read_sql_query("SELECT * FROM Player_Attributes", con) team_table = pd.read_sql_query("SELECT * FROM Team", con) team_att_table = pd.read_sql_query("SELECT * FROM Team_Attributes", con) Exploratory Data Analysis We will only analyse Player table here. But feel free to analyse all the remaining tables as well. Player Table # Dimensions player_table.shape (11060, 7) # player_table.info() Data columns (total 7 columns): id 11060 non-null int64 player_api_id 11060 non-null int64 player_name 11060 non-null object player_fifa_api_id 11060 non-null int64 birthday 11060 non-null object height 11060 non-null float64 weight 11060 non-null int64 dtypes: float64(1), int64(4), object(2) Accessing the first 5 records of Player table player_table.head() Now we have a pandas dataframe, and we can easily work with this to get desired information e.g: Finding all the players with height > 150 cm. height_150 = pd.read_sql_query("SELECT * FROM Player WHERE height >= 150 ", con) Similarly you can explore all the other tables further to get other meaningful insights. Please find the code in the Jupyter notebook below. The code is self explanatory. Conclusion In this tutorial, we have seen how easy it is to to get started with SQLite database operations via Python. The module sqlite3 is very simple to use and comes in very handy when dealing with large data systems. I hope you found this article useful. Let me know if you have any doubt or suggestion in the comment section below.
https://medium.com/analytics-vidhya/programming-with-databases-in-python-using-sqlite-4cecbef51ab9
CC-MAIN-2019-22
refinedweb
2,365
56.86
Pandas-based Data Handler for VCF, BED, and SAM Files Project description pdbio Pandas-based Data Handler for VCF, BED, and SAM Files Installation $ pip install -U pdbio Python API Example of API call from pprint import pprint from pdbio.vcfdataframe import VcfDataFrame vcf_path = 'test/example.vcf' vcfdf = VcfDataFrame(path=vcf_path) pprint(vcfdf.header) # list of header pprint(vcfdf.samples) # list of samples print(vcfdf.df) # VCF dataframe vcfdf.sort() # sort by CHROM, POS, and the other print(vcfdf.df) # sorted dataframe Command-line interface Example of commands # Convert VCF data into sorted TSV data $ pdbio vcf2csv --sort --tsv test/example.vcf # Convert VCF data into expanded CSV data $ pdbio vcf2csv --expand-info --expand-samples test/example.vcf # Sort VCF data by CHROM, POS, and the other $ pdbio vcfsort test/example.vcf Run pdbio --help for more information. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/pdbio/
CC-MAIN-2020-05
refinedweb
165
52.15
Using the Caché Gateway for .NET Gateway Architecture [Back] Server: docs1 Instance: LATEST User: UnknownUser [ ] > [ Caché Language Bindings and Gateways ] > [ Using the Caché Gateway for .NET ] > [ Gateway Architecture ] Go to: Using Proxy Classes Using Wrapper Classes with .NET APIs Creating and Running a Gateway Server Importing Proxy Classes The .NET Gateway API Search : The Caché Object Gateway for .NET (which this book will usually refer to as simply the .NET Gateway ) provides an easy way for Caché to interoperate with Microsoft .NET Framework components. The .NET Gateway can instantiate an external .NET object and manipulate it as if it were a native object within Caché. Note: The .NET Gateway can also be used in an Ensemble production (see the Ensemble document Using the Object Gateway for .NET ). You can create and test your .NET Gateway classes in Caché, as described in this book, and then add them to a production using the Ensemble .NET Gateway business service. Using Proxy Classes The external .NET object is represented within Caché by a proxy class. A proxy object looks and behaves just like any other Caché object, but it has the capability to issue method calls out to the .NET Common Language Runtime (CLR), either locally or remotely over a TCP/IP connection. Any method call on the Caché proxy object triggers the corresponding method of a .NET object inside the CLR. The following diagram offers a conceptual view of Caché and the .NET Gateway at runtime. .NET Gateway Operational Model Instances of the .NET Gateway Server run in the CLR. Caché and the CLR may be running on the same machine or on different machines. The numbered items in the .NET Gateway Operational Model diagram point out the following relationships: A Caché namespace accesses an instance of the .NET Gateway Server. Access is controlled by an instance of the Caché %Net.Remote.Service class. Each Caché session is connected to a separate thread within the Gateway server. The connection is controlled by an instance of the Caché %Net.Remote.Gateway class. Each proxy object communicates with a corresponding .NET object. A call to any Caché proxy method initiates the following sequence of events: Caché sends a message over the TCP/IP connection to the .NET Gateway worker thread. The message consists of the method name, parameters, and occasionally some other information. The .NET Gateway worker thread finds the appropriate method or constructor call and invokes it using .NET reflection. The results of the method invocation (if any) are sent back to the Caché proxy object over the same TCP/IP channel, and the proxy method returns the results to the Caché application. Note: You can access a proxy class with code written in either Caché Basic or ObjectScript. The examples in this document use ObjectScript. Using Wrapper Classes with .NET APIs In most cases, you will use the .NET Caché. Ability to use legacy DLLs ActiveX DLLs cannot be used directly in a 64bit Windows environment, and a DLL that was not written as a .NET assembly cannot be used with the .NET Gateway even in a 32bit environment. However, it is possible to create a wrapper that the .NET Gateway can use to call a DLL indirectly. Creating and Running a Gateway Server Before you can use the .NET Gateway, you must start an instance of the .NET Gateway Server and tell Caché the name of the host on which the server is running. Once started, a server runs until it is explicitly shut down. Once the .NET Gateway server is running, each Caché session that needs to invoke .NET class methods must create its own connection to the server, as shown in the following diagram: Connecting to a .NET Gateway Worker Thread. As long as it remains connected, the assigned port for the connection stays in use and is unavailable for use in other connections. See Setting Gateway Server Properties for a detailed description of how to create a .NET Gateway Server property definition, and Running a Gateway Server for details on how to start, connect, disconnect, and stop a server. Importing Proxy Classes Caché proxy classes are generated by sending a query to the .NET Gateway Server, which returns information about the methods for which proxy classes are required. The imported method information is then used to construct the proxy classes, as shown in the following diagram: Importing .NET Classes The Caché session sends an import request. Upon receiving the request, the .NET Caché side, the .NET Gateway worker thread returns the results of the introspection to the Caché session, which uses the information to generate new proxy classes. See Creating Proxy Classes for details on how to generate proxy classes. The .NET Gateway API The following classes provide most of the functionality used by your Caché .NET Gateway applications: %Net.Remote.ObjectGateway an ObjectGateway object contains the property settings required to run and monitor an instance of the .NET Gateway Server. See Defining a Gateway Server for a detailed description. %Net.Remote.Service a Service object controls the interface between a Caché namespace and an instance of the .NET Gateway Server. See Running a Gateway Server . %Net.Remote.Gateway a Gateway object controls the connection between a Caché session and a worker thread within an instance of the .NET Gateway Server, and provides methods to generate proxy classes. See Connecting to a Server and Generating Proxy Classes Programmatically . %Net.Remote.ImportHelper the ImportHelper class provides some extra class methods for inspecting assemblies and generating proxy classes. See Generating Caché Proxy Classes . See the Caché class library documentation for the most complete and up to date information on each of these classes. © 1997-2017, InterSystems Corp. [Back] [Top of Page] Build: Caché v2017.1 (792) Last updated: 2017-03-20 19:02:18 Source: BGNT_arch.xml
http://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=BGNT_arch
CC-MAIN-2017-17
refinedweb
967
59.6
! Import Platform API Let’s start with making some small code changes - then our app will “just work” when we deploy it to a device. Import the Ionic Platform API into photo.service.ts, which is used to retrieve information about the current device. In this case, it’s useful for selecting which code to execute based on the platform the app is running on (web or mobile): import { Platform } from '@ionic/angular'; export class PhotoService { public photos: Photo[] = []; private PHOTO_STORAGE: string = "photos"; private platform: Platform; constructor(platform: Platform) { this.platform = platform; } // other code } Platform-specific Logic First, we’ll update the photo saving functionality to support mobile. In the readAsBase64() function, check which platform the app is running on. If it’s “hybrid” (Capacitor or Cordova, two native runtimes), then read the photo file into base64 format using the Filesystem readFile() method. Otherwise, use the same logic as before when running the app on the web: private async readAsBase64(cameraPhoto: CameraPhoto) { // "hybrid" will detect Cordova or Capacitor if (this.platform.is('hybrid')) { // Read the file into base64 format const file = await Filesystem.readFile({ path: cameraPhoto.path }); return file.data; } else { // Fetch the photo, read as a blob, then convert to base64 format const response = await fetch(cameraPhoto.webPath); const blob = await response.blob(); return await this.convertBlobToBase64(blob) as string; } } Next, update the getPhotoFile() method. When running on mobile, return the complete file path to the photo using the Filesystem API. When setting the webviewPath, use the special Capacitor.convertFileSrc() method (details here). private async getPhotoFile(cameraPhoto, fileName) { if (this.platform.is('hybrid')) { // Get the new, complete filepath of the photo saved on filesystem const fileUri = await Filesystem, head back over to the loadSaved() function we implemented for the web earlier. On mobile, we can directly set the source of an image tag - <img src=”x” /> - to each photo file on the Filesystem, displaying them automatically. Thus, only the web requires reading each image from the Filesystem into base64 format. Update this function to add an if statement around the Filesystem code: public async loadSaved() { // Retrieve cached photo array data const photos = await Storage.get({ key: this.PHOTO_STORAGE }); this.photos = JSON.parse(photos.value) || []; // Easiest way to detect when running on the web: // “when the platform is NOT hybrid, do this” if (!this.platform.is('hybrid')) { //.759.8.1>${readFile.data}`; } } } At the bottom of the addNewtoGallery() function, update the Storage API logic. If running on the web, there’s a slight optimization we can add. Even though we must read the photo data in base64 format in order to display it, there’s no need to save in that form, since it’s already saved on the Filesystem: Storage.set({ key: this.PHOTO_STORAGE, value: this.platform.is('hybrid') ? JSON.stringify(this.photos) : JSON.stringify(this.photos.map(p => { // Don't save the base64 representation of the photo data, // since it's already saved on the Filesystem const photoCopy = { ...p }; delete photoCopy.base64; return photoCopy; })) Finally, a small change to tab2.page.html is required to support both web and mobile. If running the app on the web, the base64 property will contain the photo data to display. If on mobile, the webviewPath will be used: <ion-col <ion-img </ion-img> </ion-col> Our Photo Gallery now consists of one codebase that runs on the web, Android, and iOS. Next up, the part you’ve been waiting for - deploying the app to a device.
https://ionicframework.com/jp/docs/es/angular/your-first-app/5-adding-mobile
CC-MAIN-2020-10
refinedweb
576
57.47
Querying Views¶ View Object¶ - class couchbase.views.iterator. View[source]¶ __init__(parent, design, view, row_processor=None, include_docs=False, query=None, streaming=True, **params)[source]¶ Construct Bucket object. for result in View(c, "beer", "brewery_beers"): print("emitted key: {0}, doc_id: {1}" .format(result.key, result.docid)) Execute a view with extra query options: # Implicitly creates a Query object view = View(c, "beer", "by_location", limit=4, reduce=True, group_level=2) Execute a spatial view: from couchbase.views.params import SpatialQuery # .... q = SpatialQuery() q.start_range = [ -119.9556, 38.7056 ] q.end_range = [ -118.8122, 39.7086 ] view = View(c, 'geodesign', 'spatialview', query=q) for row in view: print('Location is {0}'.format(row.geometry)))) __iter__()[source]¶ Returns a row for each query. The type of the row depends on the row_processorbeing used. Attributes¶ -interface. By default, it is an instance of RowProcessoritself. - couchbase.views.iterator. raw¶ The actual couchbase.bucket.HttpResultobject. Note that this is only the last result returned. If using paginated views, the view comprises several such objects, and is cleared each time a new page is fetched. Row Processing¶ - class couchbase.views.iterator. RowProcessor[source]¶ - class couchbase.views.iterator. ViewRow¶ This is the default class returned by the RowProcessor value¶ The value emitted by the view’s mapfunction (second argument to emit). If the view was queried with reduceenabled, then this contains the reduced value after being processed by the reducefunction. docid¶ This is the document ID for the row. This is always Noneif reducewas specified. Otherwise it may be passed to one of the getor setmethod to retrieve or otherwise access the underlying document. Note that if include_docswas specified, the docalready contains the document Query Object¶ - class couchbase.views.params. Query. update(copy=False, **params)¶ Chained assignment operator. This may be used to quickly assign extra parameters to the Queryobject. Example: q = Query(reduce=True, full_sec=True) # Someplace later v = View(design, view, query=q.update(mapkey_range=["foo"])) Its primary use is to easily modify the query object (in-place). View Options¶. Result Range and Sorting Properties¶ The following properties allow you to - Define a range to limit your results (i.e. between foo and bar) - Define a specific subset of keys for which results should be yielded - Reverse the sort order - class couchbase.views.params. Query[source] mapkey_range¶ Specify the range based on the contents of the keys emitted by the view’s mapfunction.to.country, doc.state, doc.city], doc.event) } } Then you may query for all events in a specific state by using: q.mapkey_range = [ ["USA", "NV", ""] ["USA", "NV", q.STRING_RANGE_END] ] While the first two elements are an exact match (i.e. only keys which have ["USA","NV", ...]in them, the third element should accept anything, and thus has its start value as the empty string (i.e. lowest range) and the magic q.STRING_RANGE_ENDas its lowest value. As such, the results may look like: ViewRow(key=[u'USA', u'NV', u'Reno'], value=u'Air Races', docid=u'air_races_rno', doc=None) ViewRow(key=[u'USA', u'NV', u'Reno'], value=u'Reno Rodeo', docid=u'rodeo_rno', doc=None) ViewRow(key=[u'USA', u'NV', u'Reno'], value=u'Street Vibrations', docid=u'street_vibrations_rno', doc=None) # etc. dockey_range¶ Specify the range based on the contents of the keys as they are stored by upsert(). These are returned as the “Document IDs” in each view result. You must use this attribute in conjunction with mapkey_rangeoption. Additionally, this option only has any effect if you are emitting duplicate keys for different document IDs. An example of this follows: Documents: c.upsert("id_1", { "type" : "dummy" }) c.upsert("id_2", { "type" : "dummy" }) # ... c.ups. mapkey_single¶function can return more than one result with the same key, you may still get more than one result back. mapkey_multi¶"] ] inclusive_end¶ Declare that the range parameters’ (for e.g. mapkey_rangeand dockey_range) end key should also be returned for rows that match it. By default, the resultset is terminated once the first key matching the end range is found. Reduce Function Parameters¶ These options are valid only for views which have a reduce function, and for which the reduce value is enabled - class couchbase.views.params. Query[source] reduce¶ Note that if the view specified in the query (to e.g. couchbase.bucket.Bucket.query()) does not have a reduce function specified, an exception will be thrown once the query begins. group¶ Specify this option to have the results contain a breakdown of the reducefunction based on keys produced by map. By default, only a single row is returned indicating the aggregate value from all the reduceinvocations. Specifying this option will show a breakdown of the aggregate reducevalue based on keys. Each unique key in the result set will have its own value. Setting this property will also set reduceto True group_level¶ This is analoguous to group, except that it places a constraint on how many elements of the compound key produced by mapshould be displayed in the summary. For example if this parameter is set to 1then the results are returned for each unique first element in the mapped keys. Setting this property will also set reduceto True Pagination and Sampling¶ These options limit or paginate through the results - class couchbase.views.params. Query[source] skip¶ Warning Consider using mapkey_rangeinstead. Using this property with high values is typically inefficient. Control Options¶ These do not particularly affect the actual query behavior, but may control some other behavior which may indirectly impact performance or indexing operations. - class couchbase.views.params. Query[source] stale¶may be used instead. update_after Return stale indexes for this result (so that the query does not take a long time), but re-generated the index immediately after returning. The constant STALE_UPDATE_AFTERmay be used instead. A Boolean Type may be used as well, in which case Trueis converted to "ok", and Falseis converted to "false" connection_timeout¶ This parameter is a server-side option indicating how long a given node should wait for another node to respond. This does not directly set the client-side timeout. Boolean Type¶ Options which accept booleans may accept the following Python types: - Standard python booltypes, like Trueand False - Numeric values which evaluate to booleans - Strings containing either "true"or "false" Other options passed as booleans will raise an error, as it is assumed that perhaps it was passed accidentally due to a bug in the application. Numeric Type¶ Options which accept numeric values accept the following Python types: - int, longand floatobjects - Strings which contain values convertible to said native numeric types It is an error to pass a bool as a number, despite the fact that in Python, bool are actually a subclass of int. JSON Value¶). JSON Array¶ String¶ Options which accept strings accept so-called “semantic strings”, specifically; the following Python types are acceptable: - strand unicodeobjects - intand longobjects Value¶. Unspecified Value¶. Convenience Constants¶ These are convenience value constants for some of the options Circumventing Parameter Constraints¶. Geospatial Views¶ Geospatial views are views which can index and filter items based on one or more independent axes or coordinates. This allows greater application at query-time to filter based on more than a single attribute. Filtering at query time is done though _ranges_. These ranges contain the start and end values for each key passed to the emit() in the map() function. Unlike Map-Reduce views and compound keys for startkey and endkey, each item in a spatial range is independent from any other, and is not sorted or evaluated in any particular order. See `GeoCouch`_<> for more information. Creating Geospatial Views¶ Creating a geospatial view may be done in a manner similar to creating a normal view; except that the design document defines the spatial view in the spatial field, rather than in the views field. ddoc = { 'spatial': { 'geoview': ''' if (doc.loc) { emit({ type: "Point", geometry: doc.loc }, doc.name); } ''' } } cb.bucket_manager().design_create('geo', ddoc) The above snippet will create a geospatial design doc ( geo) with a single view (called geoview). Querying Geospatial Views¶ To query a geospatial view, you must pass an instance of SpatialQuery as the query keyword argument to either the View constructor, or the Bucket.query() method. from couchbase.views.params import SpatialQuery q = SpatialQuery(start_range=[0, -90, None], end_range=[180, 90, None]) for row in bkt.query(query=q): print "Key:", row.key print "Value:", row.value print "Geometry", row.geometry - class couchbase.views.params. SpatialQuery. start_range¶ The starting range to query. If querying geometries, this should be the lower bounds of the longitudes and latitudes to filter. Use None to indicate that a given dimension should not be bounded. q.start_range=[0, -90] end_range¶ The upper limit for the range. This contains the upper bounds for the ranges specified in start_range. q.end_range[180, 90] skip¶ See Query.skip limit¶ See Query.limit stale¶ See Query.stale
https://pythonhosted.org/couchbase/api/views.html
CC-MAIN-2022-27
refinedweb
1,465
57.16
Balanced match source code analysis text 0. Basic information 0.1 Usage The goal of the balanced match library is very simple: match the first pair of strings that meet the conditions, and disassemble them into three parts: front, middle and back 0.2 Version: v2.0.0 This library is relatively stable, and there is nothing to change This paper studies the latest version: v2.0.0 0.2 Doc Relevant documents are written in READEME, which is relatively concise Portal: balanced-match - npm 1. Source code analysis 1.0 source code project structure The whole project is very small, just an index.js 1.1 main entrance - index.js (reading notes: / index.js/0_structure.js) 'use strict'; function balanced(a, b, str) {} function maybeMatch(reg, str) {} balanced.range = range; function range(a, b, str) {} module.exports = balanced; There are three methods in the whole project, and two methods are exported: balanced and balanced.range 1.2 balanced Next, let's look at the details of the main entrance - index.js (reading notes: / index.js/1_balanced.js) /** * @param {string | RegExp} a * @param {string | RegExp} b * @param {string} str */ function balanced(a, b, str) { // Adapt regexp input if (a instanceof RegExp) a = maybeMatch(a, str); if (b instanceof RegExp) b = maybeMatch(b, str); // Search scope const r = range(a, b, str); // Results before and after calculation return ( r && { start: r[0], end: r[1], pre: str.slice(0, r[0]), body: str.slice(r[0] + a.length, r[1]), post: str.slice(r[1] + b.length), } ); } The function allows both incoming strings and regular expressions at the same time, so it first adapts with the maybeMatch method, then calls the range method to get the range, and finally extracts the result from the original string. 1.3 maybeMatch - index.js (reading notes: / index.js/2_maybeMatch.js) /** * @param {RegExp} reg * @param {string} str */ function maybeMatch(reg, str) { // match[0] is the first matching string const m = str.match(reg); return m ? m[0] : null; } The author doesn't use too many strange techniques for regular expressions. In fact, he goes directly to the original string to find whether there is a match, and then directly converts it back to the string 1.4 range The range function can be said to be the core of the library, which is to find the first non nested pair according to a and b. It is divided into several steps below - index.js (reading notes: / index.js/3_range.js) /** * @param {string} a * @param {string} b * @param {string} str */ function range(a, b, str) { let begs, beg, left, right, result; let ai = str.indexOf(a); let bi = str.indexOf(b, ai + 1); let i = ai; // There is at least one pair of results if (ai >= 0 && bi > 0) { // Are they equal if (a === b) { return [ai, bi]; } begs = []; left = str.length; At the beginning, ensure that there is at least one pair. At the same time, if the same subscript is matched, it will be returned directly (there can be no other nested pairs between the front and back) while (i >= 0 && !result) { // Collect all subscripts that match a if (i === ai) { begs.push(i); ai = str.indexOf(a, i + 1); } else if (begs.length === 1) { // Output the result when there is only one begs left result = [begs.pop(), bi]; } else { beg = begs.pop(); if (beg < left) { // For each pop-up a, record the last pair of result subscripts left = beg; right = bi; } bi = str.indexOf(b, i + 1); } i = ai < bi && ai >= 0 ? ai : bi; } The next step is the loop process. First, collect all strings conforming to a; Then pop up a step by step to match the next b, and finally return the first qualified a and b pairs // For the case that the begs are not used up (a occurrence times > b) if (begs.length) { result = [left, right]; } } return result; } When a occurs more than b, find the last qualified pair of a and b (that is, the first pair of strings in the outermost layer) according to the left and right recorded above Other resources Reference connection Reading notes reference
https://programmer.group/balanced-match-source-code-analysis.html
CC-MAIN-2021-49
refinedweb
690
73.98
My question is: How can I write an IPython cell magic which has access to the namespace of the IPython notebook? IPython allows writing user-defined cell magics. My plan is creating a plotting function which can plot one or multiple arbitrary Python expressions (expressions based on Pandas Series objects), whereby each line in the cell string is a separate graph in the chart. This is the code of the cell magic: def p(line, cell): import pandas as pd import matplotlib.pyplot as plt df = pd.DataFrame() line_list = cell.split('\n') counter = 0 for line in line_list: df['series' + str(counter)] = eval(line) counter += 1 plt.figure(figsize = [20,6]) ax = plt.subplot(111) df.plot(ax = ax) def load_ipython_extension(ipython): ipython.register_magic_function(p, 'cell') import pandas as pd ts = pd.Series([1,2,3]) %%p ts * 3 ts + 1 NameError: name 'ts' is not defined p ts * 3\n ts + 1 ts p ts Use the @needs_local_scope decorator decorator. Documentation is a bit missing, but you can see how it is used, and contributing to docs would be welcome.
https://codedump.io/share/h0NdK4BKhGM8/1/ipython-notebook-how-to-write-cell-magic-which-can-access-notebook-variables
CC-MAIN-2018-05
refinedweb
181
66.64
The Absent Yet Present Link August 14, 2002. As is often the case, however, reaction to a new W3C specification, even a very early draft, exposed a venerable, enduring fault line in the XML world, namely, the split between XML users and XML core developers. In this case, we'll let the former be represented by the weblogging community, the latter by the XML-DEV list. Of course, this division is mostly a fiction, a little heuristic I'm using to make a larger point, but it's not entirely divorced from reality. In trying to measure reaction to the XHTML 2.0 draft, my Web wanderings made two things plain. First, the weblogging community focused its attention equally on the big XHTML 2.0 changes, the community being, on average, about as interested in navigation lists as in the deprecation of img and the apotheosis of href. That's just about what one might expect: many webloggers still write a considerable amount of HTML by hand, and a respectable percentage of them are committed to public standards, are early adopters of XHTML, and, in general, junkies of all things Web. Any significant changes to XHTML, including major deprecations and additions, are going to get the attention of this crowd, and for good reason. Second, for the hard-core XML-specification junkies -- the ones who not only know and regularly use, but often are the very first to use all those specification acronyms -- the nonpareil XHTML 2.0 issue is the absence of XLink. In fact, a kind of strange trick played itself out: the absence of XLink was so conspicuous, was felt so strongly by so many, that XLink became a kind of absent presence. Because it was absent from the XHTML 2.0 draft, discussion about that absence, what it might signify or portend, dominated the discussion of XHTML 2.0 on the XML-DEV list. Thus, its absent presence provoked many of XML development community's familiar concerns about specification coherence, interdependence, refactoring, and the like. The Question of XLink Perhaps the most common reaction to the possible ubiquity of href was simply to wonder why the XHTML Working Group hadn't decided to use XLink. Many XML developers seemed to be wondering whether, if it wasn't suitable or appropriate for use in XHTML 2.0, XLink had any future at all. Andrew Watt was one of the first people to raise the issue: "XLink 1.0 already defines linking attributes, which can be placed on any XML element, not just XHTML elements. We... appear to have two... hyperlinking technologies intended for use on some or all XML elements". Uche Ogbuji suggested that one reason XML junkies are not getting behind XHTML 2.0 is that "layered technologies are a good thing, and that a lack of layering is reason for loud challenge". That is to say, XHTML 2.0 has provoked a loud response because of "its refusal to layer with XLink". No XML-DEV conversation would be complete without the inclusion of namespaces. In the case of XHTML 2.0 versus XLink, some developers took the position that since XHTML already uses namespaces, the objection that using a namespaced href ( xlink:href) over a "bare" href wasn't a consistent objection. The typical response pointed out that linking is in some way intrinsic to HTML, that namespaces are a fine implementation tool, but that many, if not most potential XHTML 2.0 users will find them off-putting. Namespaces are off-putting to most potential users of XHTML 2.0; we would do well to remember that XHTML 1.0's well-formedness constraints are part of the obstacle to its wider acceptance. It stands to reason, then, that if namespaces prove to be an even larger obstacle than well-formedness, the widespread success of XHTML 2.0 really is a long-term proposition. But that conjecture alone, no matter how well-grounded, isn't sufficient to decide the issue of whether href should be a bare (because its part of XHTML's CAC) or namespaced (because its taken from XLink) attribute in XHTML 2.0. Further, members of the XHTML Working Group object that the namespaced href is not the reason it isn't using XLink. They say that, in part, they're not using XLink because rather than describing linking constructs for other markup languages, XLink became the linking constructs. As the HTML Working Group put it way back in 2000 in its XLink Last Call comments: XLink does not meet the basic requirements it set itself, nor of its 'customers', and as such is insufficient for the needs of the future Web. Any linking proposal that requires documents to be changed in order to use linking is not suitable. So, what's the technical way forward? Norm Walsh, in a comprehensive and elegant post, suggested that there are seven possibilities. First, abandon the idea of a universal linking language; or, in Norm's words, give up "the ability to write applications that unambiguously (and without heuristics) recognize links in vocabularies about which they do not have domain-specific knowledge". Second, take XHTML's href as the universal sign of a link and hope for the best, a kind of capitulation to the status quo. Third, bite the bullet and use XLink as is, even if that means XHTML does not use it. The next four possibilities are all variations on the "fix XLink" theme. So, fourth, add an xlink:href-name attribute to XLink. Fifth, add xlink:???-name attributes to XLink, in which case each link attribute that XLink gets can be renamed per element. Sixth, "fix" XLink by tossing it out the window and starting over from scratch, "using elements instead of attributes", as Norm says. Last, still throw XLink out the window and do something altogether different: perhaps, as Norm suggests in another post, supplement the infoset with linking information. Lying behind or beside the technical issues may well be a matter of internal W3C politics or, even more frustrating, a big pile of hurt feelings. There is some indication that the XHTML Working Group purposefully avoided the use of XLink because the XLink Working Group abandoned its mandate to support HTML 4.0 linking constructs, this in addition to the aforementioned objections to XLink. The point isn't to make either working group solely responsible for the present mess. There is blame enough to go around. As is often the case, the technical and the social interpenetrate each other in a kind of warp and weft, which can make things hard to sort out.. Resources - XLink 2.0 Requirements: XML-DEV thread. - XML Needs Credible Linking: XML-DEV thread. - XLink Olden Days: XML-DEV thread. - When Should I Use XLink? - xlink:href: By way of background, a www-tag thread from early July 2002.
http://www.xml.com/pub/a/2002/08/14/deviant.html
CC-MAIN-2017-13
refinedweb
1,146
63.19
get different alternatives for OTA programming with the ESP32 boards. For example, in the Arduino IDE, under the Examples folder, there is the BasicOTA example (that never worked well for us); the OTA Web Updater (works well, but it is difficult: - Iclude the AsyncElegantOTA, AsyncTCP and ESPAsyncWebServer libraries in the platformio.ini file of your project; - Include AsyncElegantOTA library at the top of the code: the serial port. This sketch should contain the code to create the OTA Web Updater so that you are able . /* <Arduino.h> _3<<. required lines of code to handle ElegantOTA: #include <AsyncElegantOTA.h> AsyncElegantOTA.begin(&server); AsyncElegantOTA.loop();. Elegant. Copy the following code to your main.cpp file. /*. ElegantOTA. Watch the Video Demonstration Wrapping Up In this tutorial, you’ve learned how to add OTA capabilities to your Async Web Servers using the AsyncElegantOTA library. This library is super everything you need to know about building web servers with the ESP32: Learn more about the ESP32 with our resources: Thanks for reading. 31
https://randomnerdtutorials.com/esp32-ota-over-the-air-vs-code/
CC-MAIN-2021-25
refinedweb
168
51.18
A c# implementation of duck typing: - An interface defined by me to express my expected behaviors of duck, IMyDuck. - An object from others that I want to consume. Let’s call its type OtherDuck. - A function that checks if Duck has all the members of IMyDuck. - A function (a proxy factory) that generates a proxy that implements IMyDuck and wrap around OtherDuck. - In my software, I only bind my code to IMyDuck. The proxy factory is responsible for bridging IMyDuck and OtherDuck.: - When consuming libraries from companies that implement the same standard. - When different teams in a large enterprise each implement the company’s entities in their own namespace. - When a company refactors software (see the evolution of asp.net for example). - When calling multiple web services. Note that: - I noticed the impromptu-interface project that does the same thing after I completed by project. Also, someone mentioned TypeMock and Castle in the comment of Eric’s log. So I am not claiming to be the first person with the idea. I am still glad to have my own code generator because I am going to use it to extend the idea of strongly-typed wrapper as comparing to data-transfer-objects. - Also, nearly all Aspect-Oriented-Programming (AOP) and many Object-Relational-Mapping (ORM) frameworks have the similar proxy generator. In general, those in AOP frameworks are more complete because ORM frameworks primary concern with properties. I did looked at the Unity implementation when I implemented mine (thanks to the Patterns & Practices team!). My implementation is very bare-metal and thus easy to understand.
http://weblogs.asp.net/lichen/a-c-implementation-of-duck-typing
CC-MAIN-2015-18
refinedweb
265
58.69
C Loops: For, While, Do While, Break, Continue with Example What are Loops? In looping, a program executes the sequence of statements many times until the stated condition becomes false. A loop consists of two parts, a body of a loop and a control statement. The control statement is a combination of some conditions that direct the body of the loop to execute until the specified condition becomes false. In this tutorial, you will learn- - What are Loops? - Types of Loops - While Loop - Do-While loop - For loop - Break Statement - Continue Statement - Which loop to Select? Types of Loops Depending upon the position of a control statement in a program, a loop. The control conditions must be well defined and specified otherwise the loop will execute an infinite number of times. The loop that does not stop executing and processes the statements number of times is called as an infinite loop. An infinite loop is also called as an “Endless loop.” Following are some characteristics of an infinite loop: 1. No termination condition is specified. 2. The specified conditions never meet. The specified condition determines whether to execute the loop body or not. ‘C’ programming language provides us with three types of loop constructs: 1. The while loop 2. The do-while loop 3. The for loop While Loop A while loop is the most straightforward looping structure. The basic format of while loop is as follows: while (condition) { statements; } It. After exiting the loop, the control goes to the statements which are immediately after the loop. The body of a loop can contain more than one statement. If it contains only one statement, then the curly braces are not compulsory. It is a good practice though to use the curly braces even we have a single statement in the body. In while loop, if the condition is not true, then the body of a loop will not be executed, not even once. It is different in do while loop which we will see shortly. Following program illustrates a while loop: #include<stdio.h> #include<conio.h> int main() { int num=1; //initializing the variable while(num<=10) //while loop with condition { printf("%d\n",num); num++; //incrementing operation } return 0; } Output: 1 2 3 4 5 6 7 8 9 10 The above program illustrates the use of while loop. In the above program, we have printed series of numbers from 1 to 10 using a while loop. - We have initialized a variable called num with value 1. We are going to print from 1 to 10 hence the variable is initialized with value 1. If you want to print from 0, then assign the value 0 during initialization. - In a while loop, we have provided a condition (num<=10), which means the loop will execute the body until the value of num becomes 10. After that, the loop will be terminated, and control will fall outside the loop. - In the body of a loop, we have a print function to print our number and an increment operation to increment the value per execution of a loop. An initial value of num is 1, after the execution, it will become 2, and during the next execution, it will become 3. This process will continue until the value becomes 10 and then it will print the series on console and terminate the loop. \n is used for formatting purposes which means the value will be printed on a new line. Do-While loop A do-while loop is similar to the while loop except that the condition is always executed after the body of a loop. It is also called an exit-controlled loop. The basic format of while loop is as follows: do { statements } while (expression); As we saw in a while loop, the body is executed if and only if the condition is true. In some cases, we have to execute a body of the loop at least once even if the condition is false. This type of operation can be achieved by using a do-while loop. In the do-while loop, the body of a loop is always executed at least once. After the body is executed, then it checks the condition. If the condition is true, then it will again execute the body of a loop otherwise control is transferred out of the loop. Similar to the while loop, once the control goes out of the loop the statements which are immediately after the loop is executed. The critical difference between the while and do-while loop is that in while loop the while is written at the beginning. In do-while loop, the while condition is written at the end and terminates with a semi-colon (;) The following program illustrates the working of a do-while loop: We are going to print a table of number 2 using do while loop. #include<stdio.h> #include<conio.h> int main() { int num=1; //initializing the variable do //do-while loop { printf("%d\n",2*num); num++; //incrementing operation }while(num<=10); return 0; } Output: 2 4 6 8 10 12 14 16 18 20 In the above example, we have printed multiplication table of 2 using a do-while loop. Let’s see how the program was able to print the series. - First, we have initialized a variable ‘num’ with value 1. Then we have written a do-while loop. - In a loop, we have a print function that will print the series by multiplying the value of num with 2. - After each increment, the value of num will increase by 1, and it will be printed on the screen. - Initially, the value of num is 1. In a body of a loop, the print function will be executed in this way: 2*num where num=1, then 2*1=2 hence the value two will be printed. This will go on until the value of num becomes 10. After that loop will be terminated and a statement which is immediately after the loop will be executed. In this case return 0. For loop A for loop is a more efficient loop structure in ‘C’ programming. The general structure of for loop is as follows: for (initial value; condition; incrementation or decrementation ) { statements; } - The initial value of the for loop is performed only once. - The condition is a Boolean expression that tests and compares the counter to a fixed value after each iteration, stopping the for loop when false is returned. - The incrementation/decrementation increases (or decreases) the counter by a set value. Following program illustrates the use of a simple for loop: #include<stdio.h> int main() { int number; for(number=1;number<=10;number++) //for loop to print 1-10 numbers { printf("%d\n",number); //to print the number } return 0; } Output: 1 2 3 4 5 6 7 8 9 10 The above program prints the number series from 1-10 using for loop. - We have declared a variable of an int data type to store values. - In for loop, in the initialization part, we have assigned value 1 to the variable number. In the condition part, we have specified our condition and then the increment part. - In the body of a loop, we have a print function to print the numbers on a new line in the console. We have the value one stored in number, after the first iteration the value will be incremented, and it will become 2. Now the variable number has the value 2. The condition will be rechecked and since the condition is true loop will be executed, and it will print two on the screen. This loop will keep on executing until the value of the variable becomes 10. After that, the loop will be terminated, and a series of 1-10 will be printed on the screen. In C, the for loop can have multiple expressions separated by commas in each part. For example: for (x = 0, y = num; x < y; i++, y--) { statements; } Also, we can skip the initial value expression, condition and/or increment by adding a semicolon. For example: int i=0; int max = 10; for (; i < max; i++) { printf("%d\n", i); } Notice that loops can also be nested where there is an outer loop and an inner loop. For each iteration of the outer loop, the inner loop repeats its entire cycle. Consider the following example, that uses nested for loops output a multiplication table: #include <stdio.h> int main() { int i, j; int table = 2; int max = 5; for (i = 1; i <= table; i++) { // outer loop for (j = 0; j <= max; j++) { // inner loop printf("%d x %d = %d\n", i, j, i*j); } printf("\n"); /* blank line between tables */ }} Output: 1 x 0 = 0 1 x 1 = 1 1 x 2 = 2 1 x 3 = 3 1 x 4 = 4 1 x 5 = 5 2 x 0 = 0 2 x 1 = 2 2 x 2 = 4 2 x 3 = 6 2 x 4 = 8 2 x 5 = 10 The nesting of for loops can be done up-to any level. The nested loops should be adequately indented to make code readable. In some versions of ‘C,’ the nesting is limited up to 15 loops, but some provide more. The nested loops are mostly used in array applications which we will see in further tutorials. Break Statement The break statement is used mainly in in the switch statement. It is also useful for immediately stopping a loop. We consider the following program which introduces a break to exit a while loop: #include <stdio.h> int main() { int num = 5; while (num > 0) { if (num == 3) break; printf("%d\n", num); num--; }} Output: 5 4 Continue Statement When you want to skip to the next iteration but remain in the loop, you should use the continue statement. For example: #include <stdio.h> int main() { int nb = 7; while (nb > 0) { nb--; if (nb == 5) continue; printf("%d\n", nb); }} Output: 6 4 3 2 1 So, the value 5 is skipped. Which loop to Select? Selection of a loop is always a tough task for a programmer, to select a loop do the following steps: - Analyze the problem and check whether it requires a pre-test or a post-test loop. - If pre-test is required, use a while or for a loop. - If post-test is required, use a do-while loop. Summary - Looping is one of the key concepts on any programming language. - It executes a block of statements number of times until the condition becomes false. - Loops are of 2 types: entry-controlled and exit-controlled. - ‘C’ programming provides us 1) while 2) do-while and 3) for loop. - For and while loop is entry-controlled loops. - Do-while is an exit-controlled loop.
https://www.thehackingcoach.com/c-loops-for-while-do-while-break-continue-with-example/
CC-MAIN-2022-40
refinedweb
1,808
70.73
“Scope” has got to be one of the most confusing words in all of programming language design. People seem to use it casually to mean whatever is convenient at the time; I most often see it confused with lifetime and declaration space. As in “the memory will be released when the variable goes out of scope”. In an informal setting, of course it is perfectly acceptable to use “scope” to mean whatever you want, so long as the meaning is clearly communicated to the audience. In a more formal setting, like a book or a language specification, it’s probably better to be precise. The difference between scope and declaration space in C# is subtle. The scope of a named entity is the region of program text in which it is legal to refer to that entity by its unqualified name. There are some subtleties here. The implication does not “go the other way” — it is not the case that if you can legally use the unqualified name of an entity, that the name refers to that entity. Scopes are allowed to overlap. For example, if you have: class C { int x; void M() { int x; } } class C then the field is in scope throughout the entire body text of C, including the entirity of M. Local variable x is in scope throughout the body of M, so the scopes overlap. When you say “x”, whether the field or the local is chosen depends on where you say it. A declaration space, by contrast, is a region of program text in which no two entities are allowed to have the same name. For example, in the region of text which is body of C excluding the body of M, you’re not allowed to have anything else named x. Once you’ve got a field called x, you cannot have another field, property, nested type, or event called x. Thanks to overloading, methods are a bit of an oddity here. One way to characterize declaration spaces in the context of methods would be to say that “the set of all overloaded methods in a class that have the same name” constitutes an “entity”. Another way to characterize methods would be to tweak the definition of declaration space to say that no two things are allowed to have the same name except for a set of methods that all differ in signature. In short, scope answers the question “where can I use this name?” and declaration space answers the question “where is this name unique?” Lifetime and scope are often confused because of the strong connection between the lifetime and scope of a local variable. The most succinct way to put it is that the contents of a local variable are guaranteed to be alive at least as long as the current “point of execution” is inside the scope of the local variable. “At least as long” of course implies “or longer”; capturing a local variable, for example, extends its lifetime. > The most succinct way to put it is that the contents of a local variable are guaranteed to be alive at least as long as the current "point of execution" is inside the scope of the local variable. AFAIK not really, unless you use GC.KeepAlive(). Some versions of .NET 2.0 even managed to collect "this" in the middle of a method if you didn’t use any fields later, which led to a really ugly problem with handles, as your Dispose could release this.someIntPtr right between it being ldfld’ed and passed to some unmanaged method. Of course now we have SafeHandle which closes the handle itself, and with some pretty dark magic of CriticalFinalizerObject. Hi Eric, I’m really enjoying this "What’s The Difference" series of posts, it’s really cool that you’re taking the time to dispel these common mis-conceptions. It’s also very cool to learn a new thing or two from reading your blog. How many more posts will there be in this series? Hope you’re enjoying your holiday. I wouldn’t mind seeing some photographic evidence of these Beaver Sharks that we’ve heard so much about 😉 >the current "point of execution" is inside the scope Be careful not to mix up "lifetime" and "extent". 😉 This is why we refactored a large vb.net 1.x project to declare variables at the innermost scope instead at the head of the function. This helped reduce bugs greatly and force the application to clean up resources as soon as possible instead of at the end of each method. For some reason, the original offshore development team put in a large number of 500+ line functions via cut and paste code. VS at some future version should allow refactoring to push varialbes to innermost scope. "no two things are allowed to have the same name except for a set of methods that all differ in signature" Alternatively, you could consider the method "name" to be a combination of the name and signature. For example, given: void M() void M(int) void M(string) void M(int, string) you could assume that the method names look something like: M M@Int32 M@String M@Int32@String As I recall, the C++ compiler does something like this for overloading exported functions. > force the application to clean up resources as soon as possible instead of at the end of each method Neither scope nor declaration space of your locals have any effect on when GC will clean up resources. If you look at the IL generated from your code, you’ll see that. Pavel, That’s a consequence of the fact that scope is property of identifiers, and lifetime is not (it’s a property of storage). Storage can be identified by a different name in a different scope (with ref parameters), or even in the same scope (C++ references, or C# methods with multiple ref parameters with the same type). At least one resource is reclaimed (although not by the GC) when execution leaves the defining (as opposed to aliasing) scope — stack space. Then in a GC environment such as .NET, instances of reference types have storage and lifetime completely distinct from the scope of handles to them. But these handles are not identifiers for the instance’s storage in the same way a ref parameter is, they can be reassigned to point to different storage. Once new programmers learn the difference between storage and the identifier that names it, the distinction between scope and lifetime becomes straightforward. @paul >. Forgetting to cleanup allocated resources / variables is a symptom of poorly written code. This covers obvious problems like not cleaning up open files, open sockets, … to much harder ones like failing to properly deallocate a memory block returned by a thinly wrapped win32 api call. Developers that rely on the GC to cleanup introduce many expensive bugs into their code. Moving variables to innermost scope greatly helps larger scale refactorings like extract method. We used this many times to extract the inner body of an overly complicated method and make that extracted method static. This resulted in much easier code to test, debug and verify it functions correctly. Essentially, it reduces to: "Reduce the context needed to understand a given line of code." – Reduce nesting – Reduce the need to know the state of an object before calling a method (i.e., use static methods when possible) – Reduce the lines of code you need to read to get from the start of a method to any given line in that function – check for and return from errors before handling the non-error case (reduce nesting, ensure error handling is done for resource allocation) – don’t declare variables at top of function, declare them inline where they are needed – precompute things that are used over and over (i.e., don’t have 20 lines of object.array(25).methodz(1,2,3).propertyX called over and over just to access the properties) – eliminate unused methods and properties (e.g., fold properties into the constructor if they are only used when constructing an object) . "Alternatively, you could consider the method "name" to be a combination of the name and signature. For example, given:" It’s rather complex because it, in your example includes only the name and return type of the signature for the purposes of treating them as one entity with multiple parts. Anything with a different return type in that situation is considered a clash within the declarative space. However when you do it with methods in the base class as well public class Bottom { public int Blah() {return 0;} } public class Bar : Bottom { public double Blah() {return 0.0;} } Then this is considered acceptable (albeit with a compiler warning without new) as you are allowed to shadow, thus cover up one declarative space with another. Of course the moment you make Bottom’s Blah private this issue goes away so the *maximum* access level in the base class(es) alters the declarative space for sub classes but has no effect on the space *within* the class. but wait. it gets worse 🙂 public interface IFoo { int Blah(); } public class Bottom : IFoo { int IFoo.Blah() {return 0;} } public class Bar : Bottom { public double Blah() {return 0.0;} } suddenly the compiler warning goes away because we have pushed the declarative space into the interface and, despite Bar being usable as an IFoo and truly implementing it, it is only accessible from variables where you stop being able to treat it as a Bar, hence the spaces cannot clash. Good that the compiler is clever about this but you begin to see the complexity of it. I’d hate to have to implement a compiler, though it’s probably a source of combined maddening and marvellous all at once 🙂 > The scope of a named entity is the region of program text > in which it is legal to refer to that entity by its *unqualified name.* > … > [x] is in scope throughout the entire body text of C, including the > entirity of M. So just to clarify, within the method M() of your example, saying "this.x" does not count as qualifying the name? What does unqualified mean, exactly? Hobbes i think many people might arrive at this website because they are hitting the 'variable name is already defined' and such like errors. I would point out to those people wishing to re-use a variable name that it is perfectly ok as long as the scopes do not form a parent-child relation ship. In other words it is fine to have foreach and for loops use the same variable names as long as the same name does not also exist in the enclosing scope For example foreach (string s in list) Console.WriteLine(s) foreach (string s in anotherlist) Console.WriteLine(s) is fine as long as s in not also declared in the enclosing scope of the loops @Hobbes: No, this.x would count as qualifying the name. However, it is still *legal* to refer to the field x by its unqualified name (that is, just with x) from within the method M(). In other words, there's no *rule* that you can't refer to a field of a class with its unqualified name from within a method of that class. However (and as Eric points out) just because it's *legal* to refer to the field x in this way from within the method, does not guarantee that the name actually *will* refer to that entity. In this case, there is a local variable x whose scope overlaps with that of the field x, and within the body of M() it is this local variable that is *actually* referred to by x. But this doesn't change the "legality" of referring to the field with an unqualified name, it just means something else trumps it. Similarly, it's *legal* to call a method public foo(object o) with the call foo(5), but if you also have a method public foo(int i) in the same class, then the method that takes an object won't be the *actual* one called by foo(5). In contrast, you can't refer to the field x from outside C (or classes derived from C) by using the unqualified name, even if x is public. This is true regardless of the presence or absence of other entities named x. It is simply illegal to use the unqualified name of the field x in that part of the code. That part of your code is outside the scope of x. Or in short: the rules for determining scope don't depend on if other entities have the same name. See here: msdn.microsoft.com/…/aa691132(v=VS.71).aspx When there are multiple entities with the same name and overlapping scope, you have name hiding, as described in the subsequent sections of that specification.
https://blogs.msdn.microsoft.com/ericlippert/2009/08/03/whats-the-difference-part-two-scope-vs-declaration-space-vs-lifetime/
CC-MAIN-2016-40
refinedweb
2,165
66.57
C# Constants - Constants are not variables but these are similar to variables. - These variable’s values cannot be changed throughout its lifetime. - We prefix a variable with the [const] keyword to make it a constant variable. - We cannot assign any value of a variable (non-constant) to a constant. Example: using System; namespace csharpBasic { // start class definition / declaration. class Program { // static main method void type declaration. static void Main(string[] args) { // String type constant variable declaration and initialization. const string programingLanguage = "c#"; // Re-initialization of already declared constant. programingLanguage = "Asp.net"; // Compile-time error! // Print constant. Console.WriteLine(programingLanguage); Console.ReadKey(); } // End of main method definition. } // End of class. } Remember: - Constants cannot be prefixed with static modifier because these are static by default. - They must be initialized at its declaration time. - Once we have assigned a value to them, than that value cannot be overwritten or changed. - Constants initialized at compile time. Advantages of using constants: - Constants makes program easier to read. - It helps us to prevent mistakes in our program. If we assign another value to already declared and initialized constant than the compiler generates an ERROR.
https://tutorialstown.com/csharp-constants/
CC-MAIN-2018-43
refinedweb
188
52.97
User:RAHB/Talk Archive) Poo Lit Surprise I'm sending you this because you are signed up to judge the Poo Lit Surprise. If you no longer want to judge or are incapable of, please tell me as soon as possible. If you're still good to go,, or if these rules are not cognizant within you. Thank you again for your valued participation in the balletic train wreck that is the Poo Lit Surprise! ~ Fnoodle (talk) (my creator) 00:28, 15 July 2008 (UTC) Because you helped me... ...you get a cookie! Thanks for the formatting help, RAHB, I needed that. - Damn, that's a lot bigger than the average cookie people give around here. I think people are gonna start doing nice things for you just to get one of those massive cookies....mmmmmm, cooooookiiiieees. -RAHB 02:08, 17 July 2008 (UTC) User:Cajek/Polar Express Hey RAHB, you seem to be on the fast-track to adminhood. You know more about the PLS than I do, and I have some questions about my latest userspace article. Would you be willing to oblige me a few answers? • <5:40, 17 Jul 2008> - Ask away, I'll tell you whatever I can. -RAHB 06:01, 17 July 2008 (UTC) - There's no Polar Express article (as far as I can tell), so maybe it should go in the "mainspace" category. But it's written like a book, so many "alternate" namespace? Is this the kind of stuff that PLS accepts? (P.S., I hope you're doing okay, RAHB!) • <6:29, 17 Jul 2008> - It looks like 100% book to me. I'd put it in the alternate namespaces category. I can't imagine what a mainspace article for it would be like, but so far yours looks like it would fit right in the UnBooks namespace (interesting fact, UnBooks isn't technically a namespace at all, it's just a fake one made up in the mainspace, which you probably already know from your research on wikia numbers and all). So yeah, unless you change the format around, keep it in alternate, but I think I like it just how it is. (P.S. After PLS, can I audio it? I really want to do the jovial black guy and little kids voices. I was reading it like that the whole time.) (P.S.S. Don't worry about me, man. We all have to go through some shit sometime in our lives, better I get mine out of the way now. It's unfortunate, but "a man's gotta do what a man's gotta do..." or something.) -RAHB 06:34, 17 July 2008 (UTC) - Okay, it's in the alternate mainspace category now. And, yes, I was definitely thinking of you to do the audio. That would be really cool! By the way, did you notice I tricked out that car picture all by myself? I hope you enjoyed reading that stupid article, and thanks for the advice! (P.S.S. I think that, as long as you can laugh at your situation, you'll do good. And you? You've got a flexible, ha-ha, type of personality. Please don't become "hardened" by this experience, okay RAHB? You're too fun a guy.) • <6:43, 17 Jul 2008> - Haha, that's pretty good, but only if that light area around the lights is tape. If it isn't, well then, your MS Paint is showing. Either way, I totally didn't notice it was photoshopped. Also, consider the audio put on my to-do list then, right after that one for Burninator. (Parentheses: Don't worry man, if anything, I can come out of it with the knowledge that afterward I can progress and do things the way I want to. Nothing's gonna "harden" me, unless of course we're talking in strictly sexual terms....oooh yeah, looking for cheap houses gets me so turned on. Ahhhh yeeeeeeeah..) -RAHB 06:59, 17 July 2008 (UTC) - It's a TERRIBLE picture, and no, I used the Mac version of MS Paint called Appleworks, which you've probably never heard of. (Yeah, definitely use sexual jokes on whoever is selling their houses. It'll catch 'em off guard!) Okay, I'm goin' to bed! • <7:06, 17 Jul 2008> - Maybe I wasn't looking at it in the right light....I'll try to turn my monitor on its side next time I get the time...which will be never. Oh man, I can see it now: "Hey, this For Sale sign...can I buy special condoms for that at like, the grocery store or something? I'm just wondering, you know, not that I'm for contraception or anything, I just want to minimize the chance of splinters, and stuff....." -RAHB 07:11, 17 July 2008 (UTC) - "For sale? This house would make a TERRIBLE xtra large, splinter flavored condom!" • <12:41, 17 Jul 2008> - "Exactly how many phallic appliances are there inside the house? Uh...more importantly, how large are they?" -RAHB 15:26, 17 July 2008 (UTC) - "Could you point out the places where my dick would fit? It's for my cousin. Nah, I'm kidding, it's for me." • <15:47, 17 Jul 2008> - "What can you tell me about the whole 'indecent exposure' thing around here? Do people, like....care if I'm fucking the water meter in broad daylight? Or do I have to wait 'till night? What if I do it behind the tree in the front yard?" -RAHB 15:52, 17 July 2008 (UTC) - "On an unrelated note, where is the animal shelter? Where, in this neighborhood, are the fuckable animals? Is there any ordinance regarding animal-fucking that I should know about?" • <15:58, 17 Jul 2008> - "You guys do the whole Weekend Animal Fucking thing, right? My old neighborhood did that, before the accident I mean. That's not why I moved out, I loved those animals, you know. But I figured this place would have them too. You guys do have them, right?" -RAHB 16:00, 17 July 2008 (UTC) - "Do you guys have an annual Animal Fucking Festival like they did in my old neighborhood? It wasn't official, but it gave my old neighborhood a certain charm. Do you have anything like that? Maybe a zoo or something? Seriously. You've gotta try it." • <16:04, 17 Jul 2008> - "Oh no, don't worry, it's cool if you guys aren't into that kind of thing, it's fine really. I know it takes some time to adjust to that sort of thing, especially when you're so far away from the zoo....so anyways, I'll set up the pigpen right over here. I figure after a few weeks I might rally up a few of the locals and try to get a public animal-human crossover kind of park going. Wouldn't that be so awesome?" - We totally need to write My Old Neighborhood. -RAHB 16:09, 17 July 2008 (UTC) - OMG, is there a collaboration section in PLS?? • <16:12, 17 Jul 2008> - No =( Collaborations used to be allowed, but they got rid of it this time because the rules for collabs were too complex and nobody ever did them. -RAHB 16:13, 17 July 2008 (UTC) - Well, we don't have to do PLS to do a collab! Maybe we should give it a shot? • <16:46, 17 Jul 2008> - It sure sounds enticing. And we may have just written half of the content already. Or just the basic idea. Whatever. Anyways, we're both working on our PLS stuff, but I am moving later this month. Since I don't know whether I'll have internet or not, I guess I'll have to keep you updated by the day. But if we end up finishing our entries early (I plan on finishing my mainspace one today), I think we could probably go for it right away. If that's good with you. -RAHB 17:01, 17 July 2008 (UTC) - I'm doing my mainspace article right now (inspired by your observational humor one), and I have an essay to write (*gulp*) so maybe this evening? It's good we're both in the same time zone. And the same metaphorical neighborhood. The one with the Fuckable Animals. • <17:10, 17 Jul 2008> - I love inspiring things. If mine can't be good, at least somebody can get an idea out of it. And yeah, this evening sounds pretty good, I don't think I've got anything going on today (if today is, in fact, Thursday, I'm losing track of days). I'll meet you over by the "petting" zoo, at around...well, I'll probably be around most of the night, just look for me I guess. -RAHB 17:23, 17 July 2008 (UTC) This is brilliant. I wish I'd thought of it! Can't wait to see the finished product. -OptyC Sucks! CUN17:35, 17 Jul - Wow, thanks man. I guarantee the finished product will be a lot nicer. Right now it's just me babbling for the most part, though I've been plotting it out for weeks. Glad you like it. -RAHB 17:38, 17 July 2008 (UTC) I think you have told me to do this... “If I don't have it done by Wednesday you can burn me at the stake.” Well, if you insist. Honestly, I'm not in a rush, so this isn't a warning or anything. Just doing what you told me to do....maybe next time tell me to do something nice for you. It'll totally end up better for one of us. (Sorry about the 5th degree burns.) The Woodburninator (woodtalk) (woodstalk) 21:59, 17 July 2008 (UTC) - Man, I'd totally ask you for a blowjob next time too, but these 5th degree burns have scorched the nerve sensors off of my body. Ow. In other news, I'm sorry it's taken so long, I think I'll be having time tomorrow for it, I do want to make sure it's really good, so you need not worry about quality. But it will get done, thanks for reminding me (and for the shiny template). -RAHB 00:31, 18 July 2008 (UTC) - I do what I'm asked. But honestly, if you're writing stuff for PLS or something, I can wait. It's not a huge rush. The Woodburninator (woodtalk) (woodstalk) 03:07, 18 July 2008 (UTC) - Oh, not at all. Well, I mean yes at all, I am writing for the PLS, but that has nothing to do with taking up my audio time. Mainly because I usually do audio in the mid-to-late afternoon, and I usually write in the very early-to-early mornings. But I've had some other things going on, as well as obvious laziness and occasional forgetfulness factoring in. Still, I think I can say it'll be done tomorrow. -RAHB 03:11, 18 July 2008 (UTC) - Wow. Very very good sir. It sounds awesome. Thanks for the help. I feel like I should give you something, but I'm sure the earlier template will suffice. But, You Rock. The Woodburninator (woodtalk) (woodstalk) 06:11, 19 July 2008 (UTC) I'm BACK!!! Hey RAHB, I'm back! Sorry to scare you, I just took a break for a few days, or weeks. I made some edits on my article I had sex with your wife to add a new section, without taking any of your advice, but I'll get that in later, lol. :) thanks! --Liz muffin 22:24, 17 July 2008 (UTC) Aw man, not more audio requests Hey RAHB, I was looking at Riddle and I thought: There's RAHB. Especially since you nominated it, I think! If you're running out of audios, that would be cool. And THEN in AUGUST after PLS, you can do Polar Express! Meh, just throwing some ideas at you. Like rotten tomatoes. Also, we should eventually write My Old Neighborhood once I get back to normal. This summer was crazy, so as soon as I feel more awake, we can get started. Alright, nice talking to you, RAHB! Hope everything's okay! • <17:05, 19 Jul 2008> - I can give it a shot when I'm a little more back to normal myself. After I get all my PLS entries in, I'm gonna be doing less in the way of contributions that aren't site maintenance until I get more of an idea of what all is gonna be happening with my housing situation. Other than that, I'll definitely be recording Polar Express, Riddle, and redoing the one for Serious eventually as well. And we're definitely writing My Old Neighborhood. I'm thinking something like mid-August for all of these, all depending on things, but I think it's a feasible goal. I'll be sure to keep you updated on everything. Thanks for always giving me things to do, ya crazy. -RAHB 17:20, 19 July 2008 (UTC) HD,IB! Meesa back now, what a long and boring trip. ~ Mgr.ReadMeSoon!? 23:49, 12 August 2008 (UTC) - Where'd you go? -RAHB 23:52, 19 July 2008 (UTC) - Mataguay Scout Camp. That place sucks. ~ Mgr.ReadMeSoon!? 23:49, 12 August 2008 (UTC) - What?! NO FUCKING WAY! I used to go there every summer when I was in the Boy Scouts! That place does really suck though, but still. -RAHB 00:14, 20 July 2008 (UTC) - Rahb was in the boy scouts? Jeez. NOW he's gonna teach us about how animal fucking festivals are immoral. • <0:21, 20 Jul 2008> - Oh no man, I was the worst Boy Scout ever. I half-assed everything, somehow still got promoted to Life Scout, was always fucking around or laying around being lazy. Never did any of the physical merit badges, except biking which was really fucking easy. I sucked at Boy Scouting. I was in Boy Scouts when I was a *shudder* Mormon, too. It was Boy Scouts led by the Mormons. That was something. But please, by all means Cajek, fuck the hell out of those animals. The festival wouldn't be the same without the 8 o'clock Cajek-Caribou Showdown! -RAHB 00:24, 20 July 2008 (UTC) - Yeah, I am star right now and have been in for 1 and a half years or some crap. The place is a classic, but a run-down peice-of-shit classic just the same. ~ Mgr.ReadMeSoon!? 23:49, 12 August 2008 (UTC) This boy scouts revelation definitely explains a number of things about RAHB's version of heterosexual living. Lets just say that the salute wasn't the only thing involving three fingers. --THINKER 05:01, 20 July 2008 (UTC) - The "merit badges" I "pin on people's chests" aren't exactly orthodox Boy Scout regulation either. -RAHB 02:34, 21 July 2008 (UTC) UnSignpost: July 17th, 2008 May contain traces of humor! July 17th, 2008 • Eleventh Issue • Telling You Stuff You Already Knew, But With Different Words! Howto: Inject Rats into your bloodstream I'm thinking of writing a new article that may even be better than the last. How about Howto: Inject rats into your bloodstream? Help me if you think its worth it =)--Liz muffin 00:25, 21 July 2008 (UTC) - Wow, that sounds like a great one. I love absurd humor like that, if done properly it sounds like it could be pretty good. I say go for it, if you need any help, just ask. -RAHB 00:28, 21 July 2008 (UTC) Another article to help with RAHB, you can help with this UnNews if you want. I didn't really make it the funny type, but I think it is a great idea. ~ Mgr.ReadMeSoon!? 23:49, 12 August 2008 (UTC) - I may take a look at it. For the time being, I'm mainly just doing site maintenance for the next couple weeks until my whole housing situation clears up. I haven't had much time to write, or gotten much inspiration. -RAHB 04:56, 23 July 2008 (UTC) Monsieur RAHB How would you like to become the deputy master of Unnews? Seeing as Zim is not on lately, we need someone to take care of Unnews - mainly with the lead articles update, categories and basically pushing that place back to its former glory. What do you think? ~ 11:25, 23 July 2008 (UTC) - Yeah, sure I'll do it. I guess. Pffft. I guess. -CAJK 11:48, 23 July 2008 (UTC) - Don't mind Cajek. But seriously, fuck yeah I'll do it. The only problem I have right now is that I'm not sure what my schedule is looking like until about the 31st. I'm moving sometime this month, but I should be ready to go again in August. But if you need some help with it right now, I can still offer my services until the time comes for me to move. -RAHB 11:55, 23 July 2008 (UTC) - Thing is, I can't get around to do it at all with the general maintenance. And no other admin is looking after the place currently. I figured since you know Unnews very well you'll do a good job :) ~ 12:19, 23 July 2008 (UTC) - Yeah, I think I'm pretty well versed in the goings on with it. I'd be glad to take on the job. So just clarify for me once more, exactly everything that I need to work on. Categories, Lead Article templates, welcoming to UnNews I suppose if Zim won't be doing that anymore either (I haven't bothered to check just now whether he still does). Anything else I should be looking after? -RAHB 12:30, 23 July 2008 (UTC) - On second thought, I can't. I'm too busy right now. Got a lotta stuff on my plate, like finding a new neighborhood. -CAJK 12:35, 23 July 2008 (UTC) - Cajek, stop trying to confuse Mr. Dillo (and me). Now let the adults talk about their adult things. You can come back later and we'll....ahem..."play" with the animals...if you know what I mean...and I think you do...what I mean is that we're going to have wild animal sex with the animals...and apparently underage sex as well since I just inferred that you were a child in this hypothetical, role-playing-ish situation...and you very well might be because I don't know your actual age...but we'll still get to fuck the animals anyways....in the new neighborhood....if you know what I mean.... -RAHB 12:43, 23 July 2008 (UTC) - I'd love to write that article with you Cajek, but my house had a rat "problem". You see, they were running up and down "holes" in my "house" all willy-nilly, and now I have to move out. As far as animal fucking goes, it's all good in my book, so long as it has a spine. And none of those marsupials with their "pouches". It's too easy. -CAJK 13:06, 23 July 2008 (UTC) - I hope it all works out for you RAHB, in the meantime I'll be writing 74 articles a day with one hand while I fuck this kangaroo here with the other (marsupials rule man, what are you talking about?). We can write that article and have an animal fucking party and everything. Also, hopefully your new house doesn't have the rat problem. The problem of course being that the rats are not very fuckable. It's a pity. - ~ - I'm too tired to try and figure out who do I need to ban here O_O. Anyway, RAHB, this seems right. Also maybe help pushing forward with the audio section. I wouldn't say Zim is away for good, it's probably just a hiatus. So generally - keep the place clean, update the lead articles on regular basis, maintain categories, help people around. That's it more or less. I'll leave Zim a message so he wouldn't freak out when he sees you sit around in his living room putting your feet on the brand new sofa. ~ 22:01, 23 July 2008 (UTC) - Mordillo, ban RAHB. I MEAN CAJEK!! I mean RAHB. -CAJK 00:43, 24 July 2008 (UTC) Really? RE the hiding edits thing for PLS... I knew users with oversight could remove edits from the history, but I did not know it was possible to stop changes appearing in RC.:09, Jul 23 - I think that's what I heard. From a totally reputable source...I think. I'd ask about it to be sure, I just thought of it on a hunch. -RAHB 12:15, 23 July 2008 (UTC) UnSignpost: July 24th, 2008 May contain traces of humor! July 24th, 2008 • Twelfth Issue • Now On Time? Thanks For the vote for n00b of the month! --mrmonkey72 16:23, 27 July 2008 (UTC) - No problem, you've really been earning it lately. Keep up the good work. -RAHB 22:44, 27 July 2008 (UTC) Let me be the first person you ban. Since you're gonna be an admin soon, the first thing you should do is ban me for 10 minutes. Serious. Freakazoid and Alice Cooper suck. That should be enough. --MegaPleb • Dexter111344 • Complain here 02:31, 30 July 2008 (UTC) - You can just call me "The Baninator". -RAHB 02:35, 30 July 2008 (UTC) - Thank you Mr. "The Baninator"! --MegaPleb • Dexter111344 • Complain here 02:36, 30 July 2008 (UTC) - No problem, Mr. "The Baninated". -RAHB 02:37, 30 July 2008 (UTC) - That name sounds familiar... The Woodburninator (woodtalk) (woodstalk) 05:14, 30 July 2008 (UTC) We'll have milk and cookies, afterwards. Sir Modusoperandi Boinc! 08:22, 31 July 2008 (UTC) Rocking, and it's relative merits - Yes, I named her "Evelyn The Modified Dog"... that's how reverend zim_ulator rolls. It's one of my favorite Zappa songs. Evie likes when I sing it to her, but ignores FZ when he's on the stereo. She also likes another song I made up called "(I have a) Cookie for the puppy, fuck the kitties!", soon to be release as an UnTune. My new PC fixin's are on their way, and I'm hoping to have a decent studio again. As it it, I can't even listen to music any more! - Also, thanks for all of your help around UnNews. I feel a bit like an evil godparent to UnNews, and I'm happy to see it lives through the herculean efforts of other gits like me... to wit, yourself. I delare you to rock awesome:13, 31 July 2008 (UTC) - You sir, have excellent taste, as well as excellent awesome. As for UnNews, think nothing of it, I'm glad to help out anywhere I can, UnNews being a particularly worthy endeavor. I think I'll be starting to do some audios for it again myself, keep the spirit alive and all that. Best of luck to you on your computer fixing and your studio stuffs. We'll all be waiting with open arms, anticipating the return of Zim. -RAHB 13:32, 31 July 2008 (UTC) VANDAL! BAN BAN BAN! XD XD XD! LOL LOL LOL! RAPE!! hey...wait a minute... 01:54, 1 August 2008 (UTC) - ah HA! It was I, the masked vandal! You won't get me alive ADMINS!! ~ <01:58, 1 August 2008> - Lol -RAHB 02:20, 1 August 2008 (UTC) - So, you're just sitting around refreshing waiting for the maintenance stuff to pop up? :) ~ 11:47, 1 August 2008 (UTC) - Oh dear, was I not supposed to do that? -RAHB 11:48, 1 August 2008 (UTC) - No, no, not at all - I just imagine you sitting there hitting F5 over and over and mumbling "damn, I need to huff something, damn I need to huff something". It looks so vivid because I've done the same bloody thing :) ~ 11:52, 1 August 2008 (UTC) - Hahahaha. Well, I've got other things to keep me busy. I actually just keep juggling between QVFD, Ban Patrol, Maintenance, New Pages, RC. All the normal stuff I suppose. But yes...that's essentially exactly what I'm doing. -RAHB 11:54, 1 August 2008 (UTC) gimme a while to fix that thing the list of snitches, i can fix it, just give me a few minutes... i know there are some people who will like it. i just forgot how to do lists for a second, thats all... - A list of snitches isn't funny. Perhaps you'd like to flesh out a full article? Otherwise, it just looks like vanity to me. And we don't allow that here. -RAHB 21:49, 1 August 2008 (UTC) Corny!OK, I'll stop, I'll try not to make anymore templates such as and Template:Homer Eating. I know it sounds corny. - Yeah, that would be great man. It's just we already have so many as it is, is all. Thanks. -RAHB 17:52, 2 August 2008 (UTC) RAHB!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! </namedrop> ^_^ - 04:15, 4 August 2008 (UTC) Do Me A Favour RAHB 6 hour ban please. Need to do work today, need temptation removing.... -, Aug 4 - Ok. Effective as of now? -RAHB 08:14, 4 August 2008 (UTC) - Y, Aug 4 Help me with this please I think the content of the article A candle for a sin movement was pretty good. I'm even proud of it. ;). What can I do to improve it? I am recently making a link for "Dr. Seigh Tunn" to Satan. I wantedt to add a picture of the hindenburg burning, but I'm not good at navigation in uncylco yet, so please help me out. —The preceding unsigned comment was added by Chez Anthony (talk • contribs) --Chez Anthony 11:25, 4 August 2008 (UTC) - I know you're smart enough to look at my user talk, but I'm too dumb to post there, so here goes... - I'm not so sure about the picture stuff. I'm a contributor in a minor language wikipedia and I'm not as good there in navigation either. Can you post at least just one picture. just the hindenburg burning. please! i noticed the poo lit surprise writing competition notice above. I'm betting my ACFASM article for it. please! I need it posted on the mainspace as soon as possible. —The preceding unsigned comment was added by Chez Anthony (talk • contribs) - So how do i post it there?--Chez Anthony 12:18, 4 August 2008 (UTC) - I meant the pee review. How do i post it there? as i've mentionted i'm not that good at navigating around. —The preceding unsigned comment was added by Chez Anthony (talk • contribs) - I have posted it yesterday. But I don't see it there now. why? Do me a solid, RAHB Delete Dr. Skullthumper for 6 hours: He needs a time out after all the banning he's done. ...? I'm bored! • <20:38, 04 Aug 2008> - Hmmm. I dunno about six hours...how about the expiry time of....A TURNIP! -RAHB 21:37, 4 August 2008 (UTC) UnSignpost: July 31st, 2008 May contain traces of humor! July 31st, 2008 • Lucky Thirteenth Issue • Now with 20% more ninjas! Dude, you should make a userpage. You've been here long enough, ya:37, Aug 6 - Damn auto-QVFD script. I'm gonna put some white shit over its location. Yeah, that'll work. -RAHB 00:58, 6 August 2008 (UTC) - But what about when you scroll down? How will you read words at the top of your screen? How, I ask:10, Aug 6 Also...Idea: Ok, you know html goatse, right? And you and Skull are being all userpagey vandally, right? Ok, well, here's the plan: Html Uncyclopedians. We get one for you, then maybe for me, and UnIdiot, then DrS...then the whole website! Muahahaha! -, Aug 6 - Like....html pictures of ourselves, or what? -RAHB 02:27, 6 August 2008 (UTC) - Html, man, html! Html EVERYWHERE!!! -:31, Aug 6 From Ethine: You, sir, are an asshole. Someday, I WILL get you back for that, as I know you had something to do with it, despite your denials. Just wait. Love, - I didn't. I'm perfectly serious. It went something like this: - <DrSkullthumper> I'm lucky RAHB didn't put something in the sitenotice about it - <RAHB> meh, I don't know how to do mediawiki - <TheLedBalloon> Neither do I - <TheLedBalloon> BUT THAT'S NEVER STOPPED ME BEFORE! - ... - <TheLedBalloon> - <RAHB> Oh god... - <DrSkullthumper> Dammit... - And that's how it went. Totally 100% honest. Besides, you probably have plenty else to be mad at me about. ;) -RAHB 07:43, 6 August 2008 (UTC) hey rahb, just wondering if you had any comment for the unsignpost as far as becoming an admin and your plans from this point onward. 19:40, 6 August 2008 (UTC) - I know his plans! World Domination Being a Dick n00bing it upStaying cool? Whatever, I was just here to say Thanks for the Biopic, Gerry! Now I'll go back to doing what I was doing earlier. The Woodburninator (woodtalk) (woodstalk) 19:47, 6 August 2008 (UTC) - Well, I'll go with Woodburninator's response. In a stunning moment of actual seriousness, because I can't think of a funny way to answer it that isn't the cliched "world domination" response, I suppose I'll just keep going about my business, keeping it cool. Oh, and vandalizing Skullthumper's userpage on a regular basis. And banning Cajek of course, but I've left that mostly up to Skull. Oh, and one of these days, you won't expect it, because I'll wait for just the right moment; One of these days, I'm going to put {{:Main Page}} in the sitenotice. That'll be ultimate. -RAHB 19:55, 6 August 2008 (UTC) Oi Scrote! What's this "Thinker and Me Haz A Blog" thing? I'm not pulling my weight so I don't get a) - Shit man, I'm sorry. I'll go ahead and put your name in, I guess when my mind started typing "Thinker and Me" just flowed out better. But hey, not to say I wouldn't be happy to see your next post ;) -RAHB 20:35, 7 August 2008 (UTC) - No worries. I'm hoping to get my inspiration flowing again. Managed one or two new articles on Uncyclopedia recently, so maybe my muse has returned. :) Have kept up with yours and Thinkers additions though...think its expanding into something really) Something Yes pleas I would like people to add on to it! Sorry about the gay bashing, it just seemed that thats what most people where doing here, I'm really not homophobic, I'd post it on your user talk page but I can't! So sorry. If no one has added onto it in like a week I guess delete it —The preceding unsigned comment was added by Truthiness899 (talk • contribs) Illegal words Hello RAHB. I'm quite new in uncyclopedia, as in i only did my first article last night. This article was called Illegal words, I wasn't sure why the article was huffed (or what huffed means for that manner). I created that article because I thought people would laugh, and people did, I showed it to two people without telling them that I wrote it and they both found it funny, so I was confused when this morning I decided to work on it some more, and found it to be huffed. True, some of it was kind of tacky, but it had a 100% success rate. I wasn't sure on how to do some of the more fancy stuff like giving it a big title thing, or how to say it wasn't complete. If it was given more time and careful editing, I think it could be a popular article. So please reconsider the huff on it. Thank you in advance. Babyblaster 13:20, 9 August 2008 NiggaHomeGirl Hey RAHB! What's Up! Hey I was wondering if you say the opening ceremony to the Olympics Last Night? n_n - Nah, I didn't. I don't actually watch TV that much. But the olympics just sort of slipped my mind. -RAHB 22:27, 9 August 2008 (UTC) Oh! Samething with me. I only saw a little bit of it last Night. Heh-Heh! :D --70.161.7.58 22:35, 9 August 2008 (UTC)NiggaHomeGirl UnSignpost: August 7th, 2008 May contain traces of humor! August 7th, 2008 • Fourteenth Issue • Just like Grandma used to make! Ask and ye shall receive... First off, thank you for kind vote for The Last Bachelor Party on VFP. Critique was on the money and the image fixed, some light shadowing, per Modus, et. al. was added and Saint Mathew was given a crisp twenty to stuff into her thong as well. I don't think that the picture is going to cause me to go to Hell, but if I had been able to pull off The Last Lap Dance, then the mear mention of that idea would have certainly sealed my fate. Hugs! Dame GUN PotY WotM 2xPotM 17xVFH VFP Poo PMS •YAP• 18:08, 13 August 2008 (UTC) - Beautiful. Great job on it, I'll look forward to seeing it on the front page soon. -RAHB 19:15, 13 August 2008 (UTC) Will you adopt me? Hey RAHB, will you adopt me? I need to know how to be t3h funneh. Thanks. (oh, and if it's worth anything to ya, there might be another cookie for ya, wink wink, nudge nudge, grin grin, hummingbird). --Velosi-T 00:12, 14 August 2008 (UTC) oh, more audios/articles Hey RAHB, you still wantin' to do the audio for UnBooks:Polar Express? I think also maybe there was another one you liked, maybe Riddle? That's a lot, so you tell me what, if any, you have time to do. We also need to write My Neighborhood, with all the fuckable plants and fuckable Animals. Tell me what yer schedule is, I guess! • <6:33, 14 Aug 2008> - Ah, well right now I'm working on moving out, packing, all that good stuff. I'm probably going to put off the bigger things until September begins, and just do normal maintenance in the meantime. But I may be taking a short wiki-break within the coming weeks as well. So I'll let you know about all of them at the beginning of September I suppose, then I'll be situated and such. I still do want to do the audios and My Old Neighborhood still sounds good to do. So I guess I'll have to let you know when I found out about all the new setup and stuff. -RAHB 07:09, 14 August 2008 (UTC) Huffed Page The Page you huffed I meant to be a Sandbox but I forgot to put the username in front - User:Bloroninblorchspit Sorry... Sorry for causing much trouble... Hetelllies 02:41, 15 August 2008 (UTC) - I don't know who caused what, and I'm not going to point any fingers at anyone. I do know that the guy who was impersonating you was being disruptive, and has now been silenced. I hope this whole mess hasn't discouraged you as far as editing Uncyclopedia goes. -RAHB 02:52, 15 August 2008 (UTC) - Don't worry... I'll do a take on a Japanese phenomenon called Enjo Kosai. Hmm, I wonder if I should reinvent myself.Hetelllies 03:21, 15 August 2008 (UTC) Huffed lean Errm I see you Huffed Lean manufacturing - because "Uncyclopedia is not Wikipedia". You do realise that none of it was true? Henry Ford, Charlie Chaplin - Charles Dickens. Oh and it wasn't created by German industrialists in the 1920s :-) Okay I'm just a noob but looking at "Uncyclopedia's Five Pliers" - one of them is "Uncyclopedia is an anti-encyclopaedia" and "incorporating elements of general encyclopaedias, specialised encyclopaedias, and almanacs and generally turning them on their head." I thought the article qualified as bona fide nonsense. Oh well ..... Plain Peasant 03:46, 15 August 2008 (UTC) - I can restore it for you if you'd like. It all looked pretty straight to me. Either way though, it didn't seem to be very satirical or anything like that. From what I see, it's just, as you say, a bunch of lies. While lies can be funny, they're not funny in and of themselves, they usually need some sort of angle on them to make them funny. Just saying something like "Bill Gates is a fisherman from New Zealand who likes horses" is not grounds for an article. And Uncyclopedia's purpose isn't to produce complete nonsense, so I'm very sorry if you were misinformed. But anyways, like I said, say the word and I'll restore it for you, though probably with a maintenance tag on it, since it was rather short. Also, if you'd like to, check out Illogicopedia, one of our sister projects. That site specializes more in "bona fide nonsense" if that is in fact what you're more interested in. That's in no way a "go away to this dump on the side for people like you" sort of thing. But the sites do serve two different purposes, and Illogicopedia tends to be more focused towards outright lies and such like that. Well, there's me rambling, but yeah, if you'd like, I'll restore it. -RAHB 03:57, 15 August 2008 (UTC) - Okay - tell you what, if you could restore and tag it with whatever and I'll look at making it more satirical :-) Plain Peasant 04:23, 15 August 2008 (UTC) query hey rahb, i've been trying to keep track of images in VFD articles that are only linked to in said article, to QVFD them if the article gets deleted. then i took a look at the unused image catalog - and there are over 1000 (i think) images there. is it worth it for me to QVFD the images, will they eventually get deleted in some sort of monthly unused images purge, or does nobody care? 15:13, 15 August 2008 (UTC) - Well, you'll notice unused images don't get QVFD'd much. I'm not sure what happens to them, but I've heard of admins going through them at times and deleting large amounts. I think we keep them around for a while just in case anybody ends up needing them or something, though if you'd like to QVFD duplicates, I think those are fine when you find them (just make sure you relink the pages they're on to the other version, if they're on other pages). But yeah, last time I started QVFDing unused images, they told me it's alright, they're all on the unused images list and that I didn't have to put them up. So I can only assume there's some way we get rid of them or put them to use, one or the other. -RAHB 01:26, 16 August 2008 (UTC) - okay, that's what i imagined happening. i could almost see zombiebaron wading through a river of unused images, about to unleash his wrath, a hellishly gleeful smirk on his face... 06:29, 16 August 2008 (UTC) Congratulations! You are the recipient of the Mhaille Award For Excellence for the month of July 2008. I know its not "up there" with the great awards of Uncyclopedia, but its a way for me to show my own support and appreciation for what people are doing out there to make this place better. For all of the hard work that you have put in, which looks to increase now you have the added burden of "sysoph) - Many thanks Mhaille. It may not be "up there" as it were, but it still means a lot to be awarded something like it from someone as prestigious as yourself, and whose opinion I can say I find to be sound and educated. It'll hold a special place in my trophy cabinet...well, actually it'll probably just sit next to the other ones, but each place is "special" in itself I suppose. I consider it just as good as any "official" award on the site. And don't worry, with my adminhood I plan to do so many great things. For example, one day I plan on putting {{:Main Page}} in the sitenotice, and seeing how long it takes before anybody finds out why every page looks like the main page. It will be excellent. =) -RAHB 01:31, 16 August 2008 (UTC)
http://uncyclopedia.wikia.com/wiki/User:RAHB/Talk_Archive_6
CC-MAIN-2015-18
refinedweb
6,872
83.25
Tips for using Eclipse with Jython After you hopefully read the article about setting up Eclipse for Maximo Jython development, there are some Tips & Tricks to get the best results from using Eclipse in combination with Jython. A short summary can be found here: Undefined Variables You often use so called implicit variables when you program in context of a launchpoint. The best known variable is mbo which is used quit often. If you use this variable in context of Eclipse it will throw an error in the GUI, because it has never been declared in context of the script. My best practice is to assign these variables at the beginning of the script to another variable even using a more meaningful variable name (or even the same name) and using the @UndefinedVariable tag. The above sample would now looks like this and will not throw any error: workorderMbo = mbo # @UndefinedVariable workorderMbo.getString("wonum") Prevent Using Script In-/Out Variables Maximo provides you the opportunity to define In-/Out Variables in context of a launchpoint. Variables defined in that way can be accessed as regular variables in Jython but will seamlessly read/change the content of an MBO attribute. This looks very handy at the beginning, but will make reading of scripts much harder, because you can never be sure which attribute a variable is really bound to. In addition you get much more issues with Undefined Variables in eclipse. So in fact of clarity I am a big fan of not using In-/Out Variables. I will show you the programming alternative to this: In-Variables assetNum = mbo.getString(“ASSETNUM”) Out-Variables mbo.setValue(“ASSETNUM”,assetNum) Now you could say that the Maximo mechanism will provide you the benefit to assign different values to a variable based on its launchpoint. The situation can also be handled quit simple using some script code: launchPoint = launchPoint # @UndefinedVariable if launchPoint == ‘MYLAUNCHPOINT1’: assetNum = mbo.getString(“ALTASSETNUM”) So I hope you will not see any uncovered situations but your script has now all clarity at one place. Let Eclipse organize your Imports Have you ever seen a Jython script which starts like this: from java.io import * from java.rmi import * from psdi.mbo import * This is really bad style because you will import the complete classes from which you only need a small piece. Much better: from java.io import files from java.rmi import RemoteException from psdi.mbo import MboConstants But how can you easily maintain the shown list? Quit easy: Don’t care about – Eclipse will help you. Never define an Import manually! Just write your code using an method from a class library. When you use an method not yet imported Eclipse will help you with an error. Just place your mouse over the error and you will get a hint like this: Now press the CTRL-1 one key and Eclipse will show you how it might solve the issue: By selecting the first option “Import MXServer (psdi.server)” Eclipse will create the import statement for you: from psdi.server import MXServer On the other hand side, when you no longer need an import Eclipse will show a warning on that import: You can just delete the line. Even when you delete a line to much you now know an easy way to restore the imports. Use a Revision Control System I think it is agreed best practice to have a Revision Control System in place when you develop more than some small scripts. But sometimes you do not have a choose and there is no tool available in your environment. For that case there is a cool, but not well known function within Eclipse. The Team function provides a local history of your edits within the last 7 days. You do not have to activate it, it’s just there! To get an older file version right click on the File and select Team > Show Local History… You can now right click an entry and either open it directly or compare an old version with the current version. That is a very nice feature. How about the 7 days? You may want a longer retention period for your files. Select: Window > Preferences in the menu. In the tabs on the left select: General > Workspace > Local History In this configuration page you can either completely remove time limits for this function or extend the number of days and size for the history.
https://www.maximoscripting.com/2015/07/
CC-MAIN-2022-27
refinedweb
743
62.98
Run-length encoding in Python Recently I discussed run-length encoding and DEFLATE compression. I never actually showed a Python implementation of a run-length encoder, so here’s one now. import itertools as its def ilen(it): '''Return the length of an iterable. >>> ilen(range(7)) 7 ''' return sum(1 for _ in it) def runlength_enc(xs): '''Return a run-length encoded version of the stream, xs. The resulting stream consists of (count, x) pairs. >>> ys = runlength_enc('AAABBCCC') >>> next(ys) (3, 'A') >>> list(ys) [(2, 'B'), (3, 'C')] ''' return ((ilen(gp), x) for x, gp in its.groupby(xs)) The decoder is equally simple. Itertools.repeat expands a (count, value) pair into an iterable which will generate count elements. Itertools.chain flattens these iterables into a single stream. def runlength_dec(xs): '''Expand a run-length encoded stream. Each element of xs is a pair, (count, x). >>> ys = runlength_dec(((3, 'A'), (2, 'B'))) >>> next(ys) 'A' >>> ''.join(ys) 'AABB' ''' return its.chain.from_iterable(its.repeat(x, n) for n, x in xs) If you haven’t seen itertools.chain.from_iterable() yet, it was introduced at Python 3.0/2.6. The important feature here is that it lazily works its way through a single iterable argument. If instead we’d written: def runlength_dec(xs): .... return its.chain(*(its.repeat(x, n) for n, x in xs)) then our run-length decoder would need to consume all of xs before yielding results (which is why we must interrupt the interpreter’s execution below). >>> xs = its.cycle((3, 'A'), (2, 'B')) >>> runlength_dec(xs) C-c C-cTraceback (most recent call last): File "<stdin>", line 1, in <module> File "<string>", line 25, in runlength_dec File "<string>", line 25, in <genexpr> KeyboardInterrupt Named tuples for clarity Streams of pairs (as shown above) are perfectly Pythonic. If we run-length encode a stream of numbers, clients will just have to read the manual and remember that item[0] is a repeat count and item[1] is a value. If this seems fragile, a new-ish member of the collections module can give the pair more structure. >>> from collections import namedtuple >>> Run = namedtuple('Run', 'count value') >>> run1 = Run(count=10, value=2) >>> run2 = Run(value=2, count=10) >>> run1 Run(count=10, value=2) >>> run2 Run(count=10, value=2) >>> run1.count 10 >>> run1[0] 10 Here’s how we’d change runlength_enc() to use the new type. def runlength_enc(xs): '''Return a run-length encoded version of the stream, xs. >>> ys = runlength_enc('AAABBCCC') >>> next(ys) Run(count=3, value='A') >>> list(ys) [Run(count=2, value='B'), Run(count=3, value='C')] ''' return (Run(ilen(gp), x) for x, gp in its.groupby(xs))
http://wordaligned.org/articles/runlength-encoding-in-python
CC-MAIN-2016-07
refinedweb
449
56.76
US5235551A - Memory addressing scheme - Google PatentsMemory addressing scheme Download PDF Info - Publication number - US5235551AUS5235551A US07638688 US63868891A US5235551A US 5235551 A US5235551 A US 5235551A US 07638688 US07638688 US 07638688 US 63868891 A US63868891 A US 63868891A US 5235551 A US5235551 A US 5235551A - Authority - US - Grant status - Grant - Patent type - - Prior art keywords - address - memory - plane - data - character - 1. Field of the Invention This invention relates to the field of memory addressing schemes. In particular, this invention relates to a memory addressing scheme for laser printer font or program cartridges. 2. Background Art Computer systems provide visual output on computer displays, such as a cathode ray tube (CRT), or to a printer to produce hard copy. A typical use of a printer on a computer system is to provide hard copy of text output from a word processor or other text editing program. One type of printer used with computer systems is known as a "laser printer." In a laser printer, a laser beam is used to scan across the surface of a photo-sensitive material to form patterns of electrical charge. The photosensitive material, which may be disposed on a drum, normally has a negative charge. Wherever the laser strikes the drum, a relatively positive "dot" of charge is created. These dots combine to form patterns, images and letters on the surface of the drum. Negatively charged toner particles are attracted to the positively charged dots on the surface of the drum. This toner pattern is then transferred to a sheet of paper passed adjacent to the drum. The toner image, which is now printed on the paper, is fused by passing the paper and toner through a heated roller, and the printed page exits from the printer. A laser printer is not limited in the type of pattern or image that can be created on the drum. Therefore, laser printers are used to provide a variety of typefaces or "fonts" for displaying text characters. A font is a collection of character representations that are of a specific typeface (such as Courier, Times, etc.) combined with a specific style (Bold, Italic, Bold & Italic, etc.). One type of font utilized by computer systems and laser printers is referred to as a "scalable outline font." In a scalable outline font, each character of the font is made up of a series of curves. The curves are defined as extending between a number of control points on the borders of a font character (hence the name "outline" fonts). Because the outline of each character is a mathematical description of a character shape, an outline font character can be scaled to virtually any size and still maintain its shape. When printed on a laser printer, an outline font character is translated to a "bit map" (series of dots) that fills the outline defined by the mathematical description of the character. Printers that use scalable outline fonts come with a specific set of "built in" outline font data, referred to as "resident" font data, stored in the printer. The resident font data is stored in a non-volatile memory, such as a read only memory (ROM). The ROM includes character description and information for each character of each resident font. One area of the ROM is reserved as a "header" that identifies the resident fonts available to a computer user using the printer and the addresses of the character information for each character of a font. Often, computer users desire to use fonts in addition to those resident fonts provided by a laser printer. Some printers provide mechanisms for adding additional scalable outline fonts, to provide a larger number of scalable outline fonts to choose from. One such mechanism provided is the "font cartridge," which can be inserted into the printer. The cartridge is normally composed of a fixed amount of readable memory that can be accessed by the printer's internal microprocessor. The printer's internal program is designed to recognize the presence of the font cartridge, and add any scalable outline fonts in the cartridge to the fonts available for use. The font cartridge includes header information several hundred bytes in size and describes information common to all the characters described in the font, along with the locations in the cartridge of the "character" data for each of the several hundred characters described in the font. Each instance of "character" data, identified in the common header, describes the appearance of a single character within the font, and is several hundred bytes in size. The majority of cartridge space is taken up by character data. A disadvantage of laser printer font cartridges is that they are "address limited," that is, the number of address lines that can be used to address memory locations is fixed. In typical embodiments, only 2 megabytes of memory per cartridge can be addressed. This imposes a 2 megabyte limit on the amount of data that can be built into a scalable font cartridge, which effectively limits the cartridge to about twenty-five (25) scalable outline fonts. A number of prior art attempts have been made to provide expanded memory configurations for address limited systems. Nielsen, U.S. Pat. No. 4,368,515 is directed to a bank switchable memory system. Nielsen describes a method of expanding the number of ROM memory locations that can be addressed by a computer system without requiring additional address lines. The scheme of Nielsen is used in connection with a game cartridge ROM for a video game system. The scheme of Nielsen uses an address decode logic block coupled to a 12-bit address line to search for certain predefined addresses. When these pre-defined addresses are detected, a flip-flop is enabled that adds an extra bit to the most significant bit (MSB) of the address lines. This effectively provides a 13-bit address line instead of the 12-bit address line provided by the computer system. Selection between the two groups of memory is achieved by designating a specific predefined 12-bit address as a "decode A" address and a second predefined 12-bit address as a "decode B" address. When the address decode logic block detects decode A or decode B, the appropriate output of a flip-flop is enabled so that subsequent 12-bit addresses access the appropriate memory locations in the selected memory bank. Balaska, U.S. Pat. No. 4,485,457, is also directed to a memory system for use as a video game system that has a limited number of address lines. The system of Balaska uses a decoding circuit coupled to a data bus and an address bus. The data bus is coupled to a random access memory and the address communicates to a read-only memory. A decoding circuit is coupled to the data bus and to the address bus. The decoding circuit detects a predetermined address and a predetermined status of selected data lines and produces a signal that selects one of the plurality of ROM segments of memory locations. The device of Balaska provides three banks of ROM that are addressed by three particular 12-bit ROM read signals. Lushtak, U.S. Pat. No., 4,503,491 is directed to a computer with an addressing capability greater than the number of addresses that can be generated by its address lines. The device of Lushtak provides a plurality of memory banks and bank selection is effected by supplying a first address which enables a bank-select decoding logic. The device of Lushtak requires predefined addresses to select a memory location. Kummer, U.S. Pat. No. 4,609,996 is directed to a microcomputer that uses a memory expansion module. The memory system associated with a central processing unit (CPU) comprises a base memory and an add-on expansion memory. When both memories are installed, one memory has even-numbered addresses and another has odd-numbered addresses. The CPU accesses individual address locations, but a subsystem addresses even addresses to obtain data from the even address memory bank and the next higher odd address. The least significant bit of the address bits is used to indicate which memory bank (even or odd) is to be accessed. If the address is an even address, the address is provided to the memory bank storing even addresses. If the incoming address is odd, the last bit is dropped and a more significant bit is added to the address and the memory bank containing the odd number of bits is accessed. Yokoyama, U.S. Pat. No. 4,866,671 describes a system for expanding character generator ROM and buffer RAM space in a printer by searching for empty areas in an address space of a printer memory and utilizing the empty space for additional storage. Nakagawa, U.S. Pat. No. 4,926,372, describes a memory cartridge in which a large capacity ROM is provided. Storage area of the ROM is divided into a plurality of banks having memory addresses accessible by a CPU. Bank enable data is stored in each of the memory banks. When an access is made to a particular bank, data stored in that bank is read out to the processor. If that data includes a data work pointing to another memory bank, that memory bank is then enabled. Prior art solutions to limited memory addressing capabilities have a number of disadvantages. One particular disadvantage is the fact that memory bank switching information must be predefined and made known to the CPU or other device accessing the memory. In other cases, additional address lines must be added to provide expanded addressability. None of the prior art solutions provide a scheme that expands memory address space while remaining transparent to the "host system" and without requiring additional address lines. The present invention provides a method for increasing the addressable memory space of an addressed line limited computer system. In the present invention, at least two memory planes, plane zero and plane one, are provided. Each memory plane contains the maximum number of addresses that can be addressed by the available address lines. The present invention constrains the starting addresses of individual character data that are valid in each memory plane. For example, if the addresses of the memory planes are configured in hexidecimal, memory plane zero contains valid starting addresses only at those locations having a least significant nibble of "0" or "8". Memory plane one is constrained to have valid starting addresses, for example, at those addresses having a least significant nibble of "4" or "C". A processing means is provided to determine when a starting address is provided to the memory. The processor determines which memory plane can accept the starting address as a valid starting address and enables the address lines to communicate with that memory plane. More memory planes can be defined by reviewing more bits in the starting address. For example, if two bits are reviewed, four memory planes can be defined. For a given number of address lines N reviewed, the maximum number of memory planes is 2N. FIG. 1 is a block diagram of a computer/printer system. FIG. 2 is a flow diagram of printer operation. FIG. 3 illustrates two memory planes of the preferred embodiment of this invention. FIG. 4 is a block diagram of the preferred embodiment of the font cartridge of the present invention. FIG. 5 is a flow chart illustrating the operation of the present invention. FIG. 6 is a flow chart illustrating the assignment of starting addresses in a plurality of memory planes. A memory addressing scheme is described. In the following description, numerous specific details, such as number of address lines, number of memory planes, etc., block diagram of a computer/printer system is illustrated in FIG. 1. A host computer 10 communicates to a printer (indicated by dashed line 12), on bus 11. The host computer sends print commands to computer interface 13. The computer interface 13 communicates with printer firmware 15 on bus 14. The printer firmware 15 includes control algorithms to govern the operation of the printer 12. The printer firmware provides address data and control information to the resident font 16 and font cartridge 17 on bus 18. The printer firmware 15 also communicates with print hardware 20 on bus 19. In operation, the host computer 10 provides a print command to the printer 12, requesting a certain font character. The computer interface 13 provides this command to the printer firmware 15. The firmware determines whether the character is found in the resident font block 16 or the font cartridge 17. An appropriate enable signal is provided to the resident font 16 or font cartridge 17, depending on the location of the desired character information. The appropriate font character is then accessed and output data is provided on bus 18 to printer firmware 15. The printer firmware 15 then uses this information to drive the print hardware 20, so that the desired character is printed. The operation of a prior art print cycle is illustrated in FIG. 2. At step 21, a printer is initialized or turned on and examines its resident fonts. Resident fonts are stored in a read only memory (ROM). The ROM is divided into a first "header" area and a second "character information area". The header contains information about the available fonts and address information, such as the starting address of each character in the available fonts. At decision block 22, the printer determines if it contains an additional font cartridge. If there is an additional font cartridge, the system proceeds to step 23 and the header of the font cartridge is examined to determine what fonts are available and the address of each character of the available fonts. If there is no font cartridge available, the system proceeds to step 24 where the printer receives a print command from the host computer, requesting a certain font character. At step 25, the printer firmware determines the location (resident font or additional cartridge) of the requested character and reads the entire character sequentially. At step 26, the character is rasterized and printed or saved. The disadvantage of the system described in FIGS. 1 and 2 is a limited number of address lines on address and control bus 18 that lead to the font cartridge 17. Therefore, the addressable memory space at font cartridge 17 is limited. The present invention provides a system that allows a larger memory space to be addressed without adding address lines. In addition, the system of the present invention is transparent to the host computer as well as the printer firmware. The present invention takes advantage of the fact that each font character has a specific starting address, and, once a font character is accessed, the address accesses are sequential (except for occasional repeated accesses to the same word address). The present invention provides multiple "planes" of font character data. The starting address of a font character to be accessed determines the appropriate memory plane to be accessed. The system of the present invention first identifies a starting address of a font character, selects the appropriate memory plane and maintains access to that plane until another starting address is received. At that time, a determination is made whether to switch memory planes or remain in the present memory plane. Each memory plane contains identical memory address locations as every other memory plane. However, each memory plane is constrained to specific valid starting addresses. The configuration of these memory planes is illustrated by example in FIG. 3. A block diagram of a font cartridge of the present invention having two memory planes is illustrated in FIG. 3. The use of two memory planes is by way of example only, as the present invention supports a plurality of memory planes that can be accessed by a given number of address lines. Referring to FIG. 3, a font cartridge of the present invention includes memory plane zero and memory plane one. Each memory plane includes hexidecimal address space 000000 to 1FFFFF. A portion of each memory plane is reserved for header information. In the preferred embodiment of the present invention, the header information in each memory plane is identical and identifies all the available fonts as well as the starting address locations of each character of each font on both memory planes. Memory plane zero includes a separate area for character data of fonts resident on memory plane zero. Memory plane one includes a separate character data area with descriptive information of font characters resident on memory plane one. The present invention limits the valid starting address of memory plane zero and one. For example, in one embodiment of the present invention, memory plane zero can only have valid starting addresses for those addresses whose least significant nibble is 0 or 8. Memory plane one is limited to starting addresses having a least significant nibble of 4 or C (in hexidecimal format). Referring to memory plane zero, starting addresses for five font characters are shown at address location 0008,0010,0020,0028 and 0030. Each starting address is followed by one or more words of character data at sequential address locations. If necessary, the character data may extend beyond the next valid starting address. For example, for the font character having a start address of 0010, the character data extends to address 001A, and includes address 0018, normally a valid starting address. Memory plane one has font character data at start addresses 0004,000C, 0014,001C, 0024 and 0034. As is the case with memory plane zero, character data may extend past the next valid starting address. For example, the font character at address 0024 extends to address 002E, past valid starting address 002C. Still referring to FIG. 3, it can be seen that addresses that are starting addresses in one memory plane can still be used for character data in the other memory plane. For example, memory address 0008 is a start address in memory plane zero and contains character data in memory plane one. FIG. 4 is a block diagram illustrating the preferred embodiment of the font cartridge of the present invention. The cartridge communicates with the control and data bus 18 through a printer interface 30. The printer interface provides address and control information to address and control bus 36 through buffer 34. The address and control information is also provided to in/out latch 37 coupled to bus 36. Latch 37 is coupled to adder/comparator 38. Comparator 38 is also coupled to address and control bus 36. Comparator 38 provides an output 43 to selector logic block 39. Selector logic block 39 is also coupled to address and control bus 36. The selector logic block 39 provides enable outputs 40,41 and 42 to data plane zero, data plane one and data plane N, respectively. Data plane zero, data plane one and data plane N receive address information from address and control bus 36 and provide data onto data bus 35. Data bus 35 provides data to printer interface 30 through buffer 33. The latch 37 captures the state of the address bus at the end of each printer access cycle. This captured address is compared with the current state of the address bus at the beginning of each access cycle by the adder/comparator block 38. The output 43 of this block is active if the current address is equal to the captured address, or is one word location greater than the captured address (thus indicating a sequential or repeated data word access). The output 43 is inactive otherwise. The output 43 of comparator 38 is examined by the selector logic block 39. An inactive output 43 of comparator 38 is an indication that the current printer access cycle may require switching of the data plane. An active signal 43 indicates that the access cycle does not require a data plane switch. When the signal 43 is inactive the selection of the data plane by selector logic block 39 is determined by examining the pattern of the lower address bits. The receipt of a starting address does not necessarily require a change in data planes. The starting address may be to another character in the same memory plane as is currently selected. After the appropriate data plane is selected, the address is provided to the data plane and the information at that memory address is provided on data bus 35 through buffer 33 to the printer interface 30. Each instance of character data begins at an address that, when specified for reading by the microprocessor, will result in a specific pattern in the lower address lines (the address lines corresponding to the least significant bits in the address value). This specific pattern in the lower address lines is determined by which "plane" the data is expressed in, as seen by the switching hardware (latch 37, adder/comparator 38). The function of the switching hardware is to detect when the reading of a character is beginning. At these times, based on the pattern in the lower address lines, the hardware switches the corresponding "plane" in so that the appropriate data is seen by the microprocessor. The switching hardware does this by detecting breaks in the sequential/repeat access pattern seen when reading character data words. A break in the pattern is detected whenever a word address in the cartridge is accessed that is not the same address as, or the address sequentially following, the address of the word last read by the microprocessor. In such "break" cases, the lower address lines are used to determine the appropriate memory plane and to switch the memory plane, if necessary. These sequential/repeat access "breaks" can occur under the following conditions: 1. The microprocessor is not reading character data, but something else (common font header, etc.). Since all data other than character data is duplicated on each 2 megabyte plane, appropriate values are found, regardless of which plane is selected by the switching hardware. 2. The microprocessor is reading the first or second word of character data. In this case, because of the choice of starting addresses for al instances of character data, the appropriate "plane" is selected by the switching hardware. As the microprocessor continues to read the character data, no access will cause the "break" condition to be detected, so the appropriate 2 megabyte plane will remain selected and the microprocessor will read valid data for the character. A flow diagram illustrating the operation of the preferred embodiment of the present invention is illustrated in FIG. 5. At step 50, the address bus is examined, using the selection hardware (latch and address comparator). At decision block 51, the argument "new address equal to or one greater than old address?" is made. If the argument is false, this means that a new starting address has been detected and the system proceeds to step 52. At step 52, the lower address bits are examined in the selector logic. At step 53, the lower address bits are used to select the appropriate data plane. This may or may not involve switching from the current data plane. The system then proceeds to step 54. If the argument at decision block 51 is true, no new starting address is detected and the system proceeds to step 54. At step 54, the data at the current address location is enabled onto the data bus. The system then latches the old address from the address bus at step 55 ending the bus cycle. The maximum number of 2 megabyte planes of data is determined by the number of lower address lines used by the switching hardware. When the switching hardware detects a break, and is selecting the appropriate plane, the lower address lines are used as a binary value for which plane to select. If one address line is used, then the hardware selects from planes zero and one. If two address lines are used, then the choices are 0, 1, 2 and 3. If N address lines are used, then planes 0. . . ((2**N)-1) are available. For a given number of address lines (e.g., N), the maximum number of planes is 2N. In the preferred embodiment, address line A2 (binary 4's place) corresponds to the least significant bit of the binary value for the memory plane's index. If more than one address line is used, A3 is next, and A4 and so on. So the address lines [A(N+1). . . A2] are used to form the binary plane index value. The starting address for character data in a given plane index (say P) is such that: (Address) mod 4=0 ((Address) mod (2.sup.(N+2)))/4=P (Plane Index) Put another way, for N address lines used, and plane index P, the "eligible" starting addresses in the 2 megabyte space are calculated as follows: First eligible starting address at P times 4. Successive starting addresses occur every (2.sup.(N+2)) byte locations. In the preferred embodiment of the present invention, there are rules that govern the selection of starting addresses for font character data. The starting addresses are generated one plane at a time. For the first plane, the valid starting addresses are selected. However, because the character data can extend beyond the next starting address, some valid starting addresses are removed from the list of eligible starting addresses as font characters are assigned to memory address locations. After the first memory plane is filled, valid starting addresses for the next memory plane are determined. First, all potential starting addresses (based on the low order of bits) are determined. Any potentially valid starting address that does not have a gap of at least one address location between itself and the ending address of any character in this or any previously filled memory plane is removed from the list of valid starting addresses. When character data is entered in the second memory plane, an "end check" is performed. A character cannot end on the same address, one prior address or one subsequent address as another character's starting address in a previously filled plane. If this test is not passed, the next eligible starting address is tested. A flow diagram illustrating a method of assigning starting addresses, filling a plane with character data and performing conflict checks is illustrated in FIG. 6. At step 101, the first memory plane is opened and at step 102 a font character is selected for storage in the memory plane. At step 103, the valid starting addresses for the current plane are obtained, (for example, in memory plane zero of FIG. 3, only addresses having a least significant nibble of 0 or 8 are valid starting addresses). At decision block 104 the argument "Start location available?" is made. This argument determines if any start addresses are available in that memory plane, or if they have all been used or disqualified. If the argument at block 104 is false, the system proceeds to step 105 and the current plane is closed. At decision block 106, the argument "All planes filled?" is made. If the argument is true, no more character data can be stored and the system exits the flow chart. If the argument is false, the system proceeds to step 107. At step 107, the next available memory plane is opened and the system returns to step 103. If the argument at decision block 104 is true, the system proceeds to decision block 108 and the argument "Will character fit?" is made. This is to determine if the storage of the character will extend beyond the end of the memory plane. If the argument is false, the system proceeds to step 105. If the argument at decision block 108 is true, the system proceeds to step 109. At step 109, the tentative end address of the character is calculated. The system then proceeds to step 110 to execute a validity check of the end address of the character. The steps beginning at step 110 are repeated for each prior plane that has been used to store character data. At decision block 111, the argument "Another prior plane?" is made. If the argument is true, the system proceeds to step 112 to perform a validity check for each character in the prior plane. At decision block 113, the argument "Another prior character?" is made. If the argument is false, the system returns to decision block 111. If the argument is true at decision block 113, the system proceeds to decision block 114 and the argument "Start location conflict?" is made. This is to determine if the starting address of the character to be stored conflicts with the ending addresses of any previously stored characters. The starting address of a new character cannot be the same address as the ending address of a prior character or one address greater than or one address less than the ending address of a prior character. If the argument at decision block 114 is false, (no starting address conflict), the system proceeds to decision block 115 to perform an end address conflict check. If the argument at decision block 114 is true (starting address conflict detected), the system proceeds to step 120, where that start address is disqualified and the system returns to step 104 to obtain a new potential start location. At decision block 115, the argument "End location conflict?" is made. This is to determine if the end address of the character to be stored conflicts with the starting addresses of any previously stored characters. A character cannot end on the same address, one prior address or one subsequent address as another character's starting address in a previously filled plane. If the argument at decision block 115 is true (end address conflict detected), the system proceeds to step 120 and the start address is disqualified. If the argument at decision block 115 is false (no conflict detected), the system returns to decision block 113 to determine if other prior characters must be checked against the character to be stored. After all characters in a plane have been checked against the character to be stored (argument at block 113 is false), the system returns to decision block 111 to determine if other planes need to be checked. If all planes have been checked (argument at block 111 is false), the system proceeds to step 116. At step 116, the character is stored beginning at the start address and that start address is removed as an available start address. The system then proceeds to decision block 117 and the argument "Another character to insert?" is made. If the argument is false, the system proceeds to step 119, and the current plane is closed. This occurs when all characters have been stored. If the argument at decision block 117 is true, the system proceeds to step 118, a new character is obtained and the system returns to block 104. The following code illustrates one example of a method for implementing the operations described in FIG. 6. The code is given by way of example only and other means of implementing the methods of validating starting addresses may be used without departing from the scope of this invention. ______________________________________Conflict Detection Code______________________________________Conflict detection for tentative insertion of data item.Global variables and data structures hold current state of system.Passed tentative starting address and ending address (addressof last word with data) for insertion into the current plane.Returns non-zero (TRUE) if a conflict is present.If no conflict, returns zero (FALSE). Does not modifythe state variables to reflect insertion.static int .sub.-- conflict (start, end)long start; /*byte address of first word with data*/long end; /*byte address of last word with data*/ /*. . . so size if ((end-start) + 2) bytes*/int n, nent, inx, diff;fol.sub.-- ptr fptr;long emin, emax, smin, smax, *1ptr;/*----- if on first plane (c.sub.-- place index == 0), noconflict------*/if (! c.sub.-- plane) return (0);/*----- set min and max conflicting ending address forother planes -*/emin = end - con.sub.-- play;emax = end + 2;/*----- set min and max conflicting start address forother planes --*/smin = start - 2;smax = start + con.sub.-- play;/*----- for each plane of lower index than the currentplane. . . -----*/for (n = c.sub.-- plane, fptr = fol.sub.-- list; n; --n, ++fptr) {/*----- local de-reference of max entries to check thisplane -*/nent = fptr->n.sub.-- entries;/*----------------------------------------------------------------------*/1/*----- prior checks may have excluded conflicts withlower ---*//*----- address entries, so start there and advance. ----*//*----- Here advance in start address list until in orbeyond -*//*----- conflict range due to candidate endingaddress -----*/for ( 1ptr - fptr->start. cl, inx = pftr->start. inx, diff = 0; (*1ptr < emin) && (inx < nent); ++inx, ++1ptr, ++diff);/*----- if advanced any, update to optimize next timecalled --*/if (diff) { fptr->start. cl = 1ptr; fptr->start. inx = inx; }/*----- if starting address stopped within range,conflict -----*/if ((inx < nent) && (*1ptr <= emax)) return (1);/*----------------------------------------------------------------------*/1/*----- No conflicts in this plane due to the candidatedata --*/*----- ending address. Now perform same method forchecking --*//*----- conflicts due to its starting address ----------*/for ( 1ptr = fptr->end. cl, inx = fptr->end. inx, diff = 0; (*1ptr < smin) && (inx < nent); ++inx, ++1ptr, ++diff );/*----- if advanced any, update to optimize next timecalled --*/if (diff) { fptr->end. cl = 1ptr; fptr->end. inx = inx; }/*----- if ending address stopped within range,conflict -----*/if ((inx < nent) && (*1ptr <= smax)) return (1);}/*----- if we survive until here, no conflicts in any planes, so -----*//*----- return zero (FALSE) ------------------------------*/return (0);}Discovers first available starting location in currentplane for a data item of the given length. If all remaining startinglocations in the current plane are exhausted, closes the currentplane and opens the next plane for filling, unless all planes havebeen exhausted.Each eligible address is examined for conflicts with datain order planes by calling the .sub.-- conflict function above.If a valied starting address is located, returns that offsetand pointer to the (different if opened new plane) FILE pointer.The function return value will be non-zero (TRUE) to indicatethat the data can be accommodated.If last available plane is exhausted, returns zero(FALSE) to indicate no more data can be accommodated.int fol.sub.-- next.sub.-- addr (clen, osret, fnret)int clen; /*length of data item in bytes*/long *osret; /*if found, return starting address here*/FILE **fnret; /*if found, return FILE pointer here*/{long os, nos, eos;int fits;/*----- continue until find starting point in plane, orexhausted -*do {/*----- from end of plane's file, round to eligiblestart -----*/nos = ((os = fb.sub.-- tell (fnow)) & ol.sub.-- bic) + plane.sub.-- add:/*----- need to round upward to eof or greater -----*/if (nos < os) nos += o1.sub.-- rep;/*----- for statistics, record filler space not used -----*/if (nos>os) align-pad += (nos = os);/*----- byte address of last word with data -----*/eos = nos + clen - 2;/*----- loop until find starting address that doesn't -----*//*----- or until run out of room on the currentplane -----*/while (fits = eos < data.sub.-- end) /*still fits*/ && .sub.-- conflict (nos, eos) /*in conflict, get next*/ ) { /*----- advance start and end address to next eligible --*/ nos += o1.sub.-- rep; eos += o1.sub.-- rep;/*----- and update stats on conflict hits and fill --*/++n conflicts;con.sub.-- pad += o1.sub.-- rep;}/*----- If (fits == TRUE) we've found good address; else -----*//*----- we've fallen out of current plane ---------------------*/if (! fits){ /*----- close current plane ---------------*/ .sub.-- fol.sub.-- eplane (): /*----- if that was last plane, return (FALSE) for full -*/ if ((c.sub.-- plane + 1) >=n.sub.-- planes) return (0); else { /*----- increment plane index and start new one -----*/ ++c.sub.-- plane; .sub.-- fol.sub.-- splane (); }}}while (! fits);/*----- if we fall out of loop, we have valid address for dataitem -*//*----- update the state variables to show new data itemresiding ---*/*f1.sub.-- stat++ = nos;*f1.sub.-- end++ = eos;/*----- increment index of elements in plane, range check -----*/if (++f1 inx >= f1.sub.-- max) bail ("plane overlay overflow");/*----- if padding needed, do it so caller can write directly -----*/if (nos > os) pad.sub.-- zero (fnow, nos);/*----- report offset and (possibly new) file pointer -----*/*osret = nos;*fnret = fnow;return (1);}/**********************************************************/long fol.sub.-- room.sub.-- left (){return (data.sub.-- end - fb.sub.-- tell (fnow));}static void .sub.-- fol.sub.-- splane (){char bf[20];fol.sub.-- ptr xfol;int n:cfol = fol.sub.-- list + c.sub.-- plane;fl.sub.-- inx = cfol->n.sub.-- entries = 0;fl.sub.-- start =cfol->start. list = (long*) fb.sub.-- malloc (plane.sub.-- a);fl.sub.-- end = cfol->end. list = (long*) fb.sub.-- malloc (plane.sub.-- a);for (xfol = fol.sub.-- list, n = c.sub.-- plane; n; --n, ++xfol){ xfol->start. inx = 0; xfol->start. cl = xfol->start. list; xfol->end. inx = 0; xfol->end. cl = xfol->end. list;}sprintf (bf, "%s.%d", bname, c.sub.-- plane);fnow = fb.sub.-- open (bf, "wb");pad.sub.-- zero (fnow, d.sub.-- start);}static void .sub.-- fol.sub.-- eplane (){pad.sub.-- zero (fnow, d.sub.-- break);fclose (fnow);fnow = NULL;cfol->n.sub.-- entries = f1.sub.-- inx;plane.sub.-- add += granularity;}void fol.sub.-- close (fth, np.sub.-- ret)FILE *fth;int *np.sub.-- ret;{int i;char bf[20]; if (fnow ! = NULL).sub.-- fol.sub.-- eplane (); for (1 = 0; 1 <= c.sub.-- plane; ++1) { sprintf (bf, "%s.%d", bname, i); fnow = fb.sub.-- open (bf, "rb+"); fb.sub.-- seek (fth, OL); zxfer (fth, fnow, d.sub.-- start); fclose (fnow);}*np.sub.-- ret = c.sub.-- plane + 1;}void fol stats (ar, cr, nr)long *ar;long *cr;long *nr;{*ar = align.sub.-- pad;*cr = con.sub.-- pad;*nr = n.sub.-- conflicts;______________________________________ This technique of data organization and switching hardware as describe herein is not limited to font cartridges but is applicable to other types of data as well. The general criteria for data organization in the present invention are: 1. Data reading by device microprocessor begins within some number of words from the beginning of the data. 2. Data is read in sequential order, except for repeated access to the same word address. Sequential order means one greater word address each read. 3. Data is read to within some number of words to the end of the data. 4. The above reads occur with no intermediate reads by the microprocessor to any other word in the cartridge address space. For the example of the font cartridge of the preferred embodiment of this invention, the criteria are: 1. Reads begin within one word of data beginning. 2. Data is read in sequential order, except for repeated access to the same word address. Sequential order means one greater word address each read. 3. Reads continue through last word of data. 4. The above reads occur with no intermediate reads by the microprocessor to any other word in the cartridge address space. Thus, a method for increased memory space in an address limited system is described.
https://patents.google.com/patent/US5235551A/en
CC-MAIN-2018-26
refinedweb
6,446
53.71
Jifty::API - Manages and allow reflection on the Jifty::Actions that make up a Jifty application's API # Find the full name of an action my $class = Jifty->api->qualify('SomeAction'); # Logged users with an ID greater than 10 have restrictions if (Jifty->web->current_user->id > 10) { Jifty->api->deny('Foo'); Jifty->api->allow('FooBar'); Jifty->api->deny('FooBarDeleteTheWorld'); } # Fetch the class names of all the allowed actions my @actions = Jifty->api->actions; # Check to see if an action is allowed if (Jifty->api->is_allowed('TrueFooBar')) { # do something... } # Undo all allow/deny/restrict calls Jifty->api->reset; You can fetch an instance of this class by calling "api" in Jifty in your application. This object can be used to examine the actions available within your application and manage access to those actions. Creates a new Jifty::API object. Don't use this, see "api" in Jifty to access a reference to Jifty::API in your application. Returns the fully qualified package name for the given provided action. If the ACTIONNAME starts with Jifty:: or ApplicationClass::Action, simply returns the given name; otherwise, it prefixes it with the ApplicationClass::Action. Resets which actions are allowed to the defaults; that is, all of the application's actions, Jifty::Action::Autocomplete, and Jifty::Action::Redirect are allowed; everything else is denied. See "restrict" for the details of how limits are processed. Takes a list of strings or regular expressions, and adds them in order to the list of limits for the purposes of "is_allowed". See "restrict" for the details of how limits are processed. Takes a list of strings or regular expressions, and adds them in order to the list of limits for the purposes of "is_allowed". See "restrict" for the details of how limits are processed. Method that "allow" and "deny" call internally; POLARITY is either allow or deny. Allow and deny limits are evaluated in the order they're called. The last limit that applies will be the one which takes effect. Regexes are matched against the class; strings are fully /qualify and used as an exact match against the class name. The base set of restrictions (which is reset every request) is set in "reset", and usually modified by the application's Jifty::Dispatcher if need be. If you call: Jifty->api->deny ( qr'Foo' ); Jifty->api->allow ( qr'FooBar' ); Jifty->api->deny ( qr'FooBarDeleteTheWorld' ); ..then: calls to MyApp::Action::Baz will succeed. calls to MyApp::Action::Foo will fail. calls to MyApp::Action::FooBar will pass. calls to MyApp::Action::TrueFoo will fail. calls to MyApp::Action::TrueFooBar will pass. calls to MyApp::Action::TrueFooBarDeleteTheWorld will fail. calls to MyApp::Action::FooBarDeleteTheWorld will fail. Returns true if the CLASS name (which is fully qualified if it is not already) is allowed to be executed. See "restrict" above for the rules that the class name must pass. Lists the class names of all of the allowed actions for this Jifty application; this may include actions under the Jifty::Action:: namespace, in addition to your application's actions. Jifty, Jifty::Web, Jifty::Action Jifty is Copyright 2005-2006 Best Practical Solutions, LLC. Jifty is distributed under the same terms as Perl itself.
http://search.cpan.org/~jesse/Jifty/lib/Jifty/API.pm
CC-MAIN-2015-06
refinedweb
531
56.35
In 2020, two significant IT platforms converge. On the one hand, Spark 3 becomes available with the support of Kubernetes as a scheduler. On the other hand, VMware releases project Pacific which is an industry-grade Kubernetes that is natively integrated with the VMware vSphere 7 hypervisor. In this session, we present a reference architecture that integrates these two platforms. With the integration of Spark 3 and VMware Pacific, Spark clusters get deployed on the same Kubernetes + virtual machines platform that is used by dozens of thousands of companies across the world. These are some of the main benefits: Session elements: – Hi and welcome to this talk on Spark Kubernetes and VMware vSphere. My name is Justin Murray and I’ll be introducing my co-speaker in one second and we’re very glad to be here at the Spark AI Summit 2020. Our title today is “Simplify and Boost Apache Spark “Deployments with Hypervisor-Native Kubernetes.” Well that’s quite a mouthful but we’re going to be talking here about a very tight link between Kubernetes and the VMware hypervisor as a basis for running Apache Spark. So my co-speaker on the next slide is Enrique Corro and I’ll ask Enrique to introduce himself briefly here. – Thank you Justin. Hello everyone. Thank you for joining us today. My name is Enrique Corro. I work for the office of CTO at VMware as a data science engineer. I’m super happy to be here with Justin and all of you, thank you. – Thanks again. And I belong to the Cloud Services Business Unit within VMware which is actually running vSphere technology, our core hypervisor technology, on VMware cloud on AWS which we’ll has really served the needs of the IT administrator to be quite frank. We’ve given scalable infrastructure, hybrid cloud infrastructure and by that I mean both on-premises and in the cloud on VMA of vSphere Enrique to describe that in more detail. Enrique. – Thank you Justin. Okay. I’m going to talk… I’m going to start by talking about VMware vSphere with kubernetes which is a venue VMware platform designed to bridge the gap between infrastructure and application development. We have that from version 7 basically incorporates kubernetes as a series of negative processes within the hypervisor. This allows increasing infrastructure attributes such as performance, security, availability, cost and troubleshooting. At the same time, DevOps teams get cell service environments that allow them to call, test and deploy and support modern applications with great agility. Let’s consider the container orchestration approach offered by kubernetes also applies to spark which is officially supported, sorry which is officially supporting kubernetes as an orchestrator for spark 3 version. Now I will talk about VMware Tanzu, a new platform designed to build, run and manage modern applications such as spark on top of properly managed Enterprise rate kubernetes platforms. At the heart of VMware Tanzu, we have the Tanzu kubernetes breed also known as TKG. The Tanzu kubernetes breed provides a consistent upstream compatible implementation of kubernetes which gets tested, signed and supported by VMware. You can deploy tons of kubernetes grid across your vSphere clusters and also across Amazon EC2 instances. We are working to extend TKG support for multi public cloud providers besides AWS. We also are planning to support multi kubernetes flavors in the future. The Tanzu kubernetes grid has a native of ordinance of multi cluster paradigms and this allows you to manage any number of kubernetes cluster from a centralized location which has many administration advantages. Here’s an illustration of how IT operations teams can manage their tanzu kubernetes cluster from the description and user interface. On the Left panel you can see the hierarchical organization of the data center. Following a top-down order, we find the physical hosts grouped by bhisma clusters. Inside is, we see a new grouping component called namespaces. You can think about a namespace as a pool of resources dedicated to one or multiple tanzu kubernetes clusters. The right panel shows 7 and tanzu kubernetes clusters, VMware exclude all the infrastructure pieces together within the hyper platform called VMware Cloud Foundation. Here, a bird eye view of the physical architecture of the platform you can deploy a cloud foundation on a wide range of supported vendors. In the past two years, we have worked within to develop a hard cloud data analytic solution that leverages different interrelation technologies for machine learning and big data. Spark may greatly benefit from these horrible components to see incremental performance gains. How a foundation integrates a computing, networking and storage layers of the hybrid cloud infrastructure following a standardized validated architecture. This architecture gets automatically deployed and lifecycle using the series of management components included with the solution. The left side of the… The left side of the picture, we see the operations module of cloud foundation called the Management Domain. From that point, IT operations gets all the tools needed to operate a hybrid cloud environment including the tanzu kubernetes clusters. As shown on the right side of the picture development teams such as data engineering and data science can take control of the kubernetes resources using a standard API’s. Here we see a typical view of an end-to-end analytics pipeline with Apache Spark at the core. With kubernetes clusters available for developers, it is possible to deploy many open-source applications using the Bitnami Helm charts. If you are not familiar with Helm, you can think of it as an open-source package management solution for kubernetes. Helm charts allow you to deploy and remove software using very simple command line instructions. As a compliment, Bitnami continuously monitors and updates a catalog of more than 103 open-source applications to ensure development stacks kubernetes in the vSphere interface designed to manage kubernetes resources. Then we will deploy a new kubernetes cluster using the command line interface and we will verify the status of this newly created cluster. Then we will explore the bitnami’s Helm charts catalog which includes a part, a chart for Apache Spark. Next we’ll deploy Apache cluster using the Helm chart. Finally we’ll verify the functionality of the newly created spark cluster. Okay. Let’s explore the new kubernetes capability incorporated in the vSphere 7 management interface. Here a view of the cloud infrastructure components. At the top we have data center objects and the typical resources they manage. Within the data center object, we see a new element called namespaces which integrates the kubernetes clusters. From view, you can monitor the status of the kubernetes components, the number of cluster deployments and the resource capacity that the kubernetes clusters are consuming. Now let’s deploy kubernetes cluster named “k8-for-spark” using just one TKG comment. Here we see the nginx pods on it. It is time to use the cube control command to deploy nginx from our jumbofile. Once script control gets executed, we get confirmation that the nginx pods got deployed. Then we use view controller a couple of times to grid the nginx pods to status until they get reported as running. Now let’s meet the bitnami catalog of Helm charts which includes charts for Apache Spark. Bitnami provides a catalog of curated containers and Helm charts for thousands of open-source applications with Apache Spark included. Here we see the options available to deploy spark either on docker or on kubernetes. If you click on the file it takes us to the Github repository for the Spark Helm chart. Here we can see an example of the two Helm comments required to deploy Spark on kubernetes. We can also see that the deployment can be customized by modifying the spark charts configuration parameters. The list of parameters includes things like the image registry, the network service port numbers, CPU memory operations for the master and workers and the number of worker replicas. There is a total of 97 parameters available, they tale of the deployment to your needs. Now let’s deploy Apache Spark on the kubernetes cluster previously created for this purpose. We will for install Spark using only two Helm comments. We start by adding the bitnami’s charts respository to the local Helm records. Next we proceed to run the film’s stored command to make a new deployment called spark k8. After several seconds we get confirmation that spark got deployed. We are given some references about how to launch the web UI and also how to submit jobs. Next we use cube control to verify that the spark boards are working. We keep doing this until we see that the master and the workers are all up and running. Then we switch the web UI to verify the sparks state from this interface. Skip the second. We see that no applications are running and not completed because the cluster is new and we have to confirm that the cluster status is alive. Finally let’s verify that the spark cluster deployed in kubernetes is operational by executing a job. Here we use the cube control execute command to submit a pile verify the status of the last application. We click on the app ID. I’m verifying standalone, that is spark running outside test we were running spark on kubernetes and we were trying to find would there be any impact on performance and also trying to see what benefits do spark master but to the API server in kubernetes which is now acting as the resource manager and we run the spark driver a little bit different to this diagram. We run the spark driver on the same virtual machine as a spark as the kubernetes master but the executors were being spun up on the fly on the spark submit comma. So you’ll see this a bit more in the next slide. So this is just the same picture blown up. So you can choose whether your spark driver runs in a pod in your kubernetes cluster or in the spark cluster or you can run your driver on the client side, that’s called client mode and we actually use client mode here but the functionality was the same. Client mode allows you to execute remotely from your kubernetes cluster and driver cluster mode would allow you to run the driver within your cluster and have everything together. So the communication that you’re going on here to say schedule a pod et cetera, that’s all being done within the same virtual machine in our kubernetes case here but the executors are running in pods and they’re being fired up on the fly here. So next slide. So this is the architecture at the hardware level and at the software level, all in one. And the four rows here, host one to host four represent four second generation Intel Xeon “Cascade Lake” servers, quite powerful servers with two sockets in each one, Intel Platinum 8260 four spark worker virtual machines and on the first toast we ran the spark master and spark driver together. As I mentioned the spark driver is now outside the cluster to some extent. So spark… For the spark master VM we had eight virtual CPUs and 64 gigs of memory, quite a small virtual machine actually and for the spark workers we gave them a little more power. They had 16 virtual CPUs or V CPUs and 120 gigs of memory each and so in total on the first host, we had four times 120 that’s 480 gigs for the workers and another 64 for the spark master making 544 gigs allocated on that first host. Now we’re going to fill those empty slots on the host two, three and four in when we deploy kubernetes on to this and that’s going to be the next picture that you’ll see. Remember, the same hosts, the same virtual machines in all cases, it’s just now that instead of being just a spark worker, the individual VM’s, four VMs that are look-alike on each host are now kubernetes workers. So same hardware but this time we have three kubernetes masters. This is to simulate highly available system and we have an HAProxy running on post for there in the first VM. So we had three extra… We have three extra virtual machines in this case in the first slot on each host and these kubernetes workers, same sized VMs, the Masters had eight virtual CPUs, the workers had 16, very simple approach to doing this for uniformity across the two environments. So that’s how we set this up. Now a few notes on the next one. The spark-submit which we typically supply to the spark master it can call a kubernetes master instead of a spark master by putting k8s as the prefix to the URL or your URI you’re given. What we did in preparation for that was create a private namespace, just as you do in regular kubernet or back for your kubernetes cluster. Nothing unusual here. So we used cluster mode here which is the spark driver runs in the cluster. We also used climbing mode, another experiment. So both worked fine on vSphere. Next slide please. So these are the results of the tests and this was ResNet 50 which is an image classification test running on top of spark with Intel big DL libraries and a program written using Intel big DL as the driver. Enrique mentioned some Intel software at the start. We work closely with Intel on increasing performance. Both running on the same machines with a varying number of virtual machines higher is better on these charts and the blue represents spot stand alone, the orange represents spark and kubernetes. As you can see, they’re within 1% of each other. Now the number of images per second here is very low because this is not GPU enhanced deep learning, this is regular CPU-based deep learning and that’s an experiment to drive a lot of traffic through this rather than a test of deep learning. It’s trying to saturate the system as much as you can, we could and you will see that when we go to the next one but my main point in this section is that performance is roughly the same whether you’re on spark stand-alone, just running in virtual machines or spark running in kubernetes and virtual machines. Okay.% and above and also that you can use a standard kubernetes dashboard to look at your virtualized kubernetes just as you would if it was running elsewhere. We also have a console of our own called Tanzu Mission Control and the Tanzu brand that Enrique mentioned at the beginning is a whole family of products including Tanzu Mission Control that can look at your kubernetes clusters whether they’re running on VMware vSphere or running in the cloud on AWS or running on VMware cloud on AWS. Any of those be controlled by tons of Mission Control. Okay. Let’s go to the next one. So having done that performance test, now we wanted to go back into training and say could we use spark for training on VMware? And we took an example of a tool here which does training and took the output from that tool which is a Java object and you see this set up here. Actually this is in VMware cloud on AWS and the user interface although I am using the bright background rather than the dark background that Enrique was using, you can tell this is VMware cloud on AWS because right in the center of the screen it shows you the domain in which we’re operating which is US West and then on the top left hand side, the address mentioned is vmwarevmc.com which means this is VMware running on the public cloud on AWS hardware and those six machines on the top left-hand side of the navigation with their IP addresses $10 et cetera, those are physical machines in an AWS data center running VMware vSphere. But the reason that I highlighted on VMware cloud on AWS as well. So here’s the user interface from that tool. It’s a very nice user interface. I’m not going to go through it in detail. This is H2O.ai go through the details of the training here instead we’re going to hit the deploy button in the middle of the top there and generate a Java object from this training session and deploy it into spark. So when we hit deploy, we get a Java object which is in a stores terminal, terminology called model optimized Java object or a mojo. Having got that pipeline, that mojo you see it on the third line of the docker file on your right hand side there, we’re going to copy that pipeline model optimized Java object, mojo, we’re gonna copy that into our container and then we’re going to run a rest server in which this is going to execute,, part of the Tanzu family and then we tested that docker container on its own by simply doing a docker run. But more interesting than that was deploying that same thing, that same container image into kubernetes and you can see a kubernetes kop cut will apply there on the second from last line and the k. N Now let’s go back to spark and h2o happens to have a flavor of their technology that works with spark is called sparkling water, sparkling’s sparkling water and standalone spark. And finally here, what came out of that predictor or that score was the set of rows that you see in the middle of the screen and the set of rows you see in the bottom of the screen, they both have default payment next dot zero and dotO capable of vSphere. Many, many thousands of companies run VMware vSphere to support Helm charts, Enrique showed you that in his demo and then we went on to move to testing performance of spark on VMs, on to’s a general blog site at VMware called blogs.vmware.com/apps/ml for machine learning. You can find tons and tons of information there about how to use GPUs with VMware, how to do spark on VMware and we’ve also done a lot of testing of Hadoop and spark together on VMware as well as the standalone spot that you saw earlier and we’ve got many papers written about Big Data Enrique Corro Fuentes from VMware’s office of the CTO and Justin Murray here, thank you very much for your time and we’ll get your questions coming up. VMware Enrique Corro has worked for VMware since 2006. Currently, he acts as a Staff Engineer focused on Data Science at the VMware's Office of the CTO. Enrique is part of the team that drives new types of integrations between VMware and other IT industry-leading companies to facilitate the adoption of Machine Learning and Artificial Intelligence by companies of any size and industry. Enrique is currently undergoing a Masters Degree Program in Data Science with the University of Illinois. VMware Justin Murray works as a Technical Marketing Manager at VMware . Justin creates technical material and gives guidance to customers and the VMware field organization to promote the virtualization of big data workloads on VMware's vSphere platform. Justin has worked closely with VMware's partner ISVs (Independent Software Vendors) to ensure their products work well on vSphere and continues to bring best practices to the field as the customer base for big data expands.
https://databricks.com/session_na20/simplify-and-boost-spark-3-deployments-with-hypervisor-native-kubernetes
CC-MAIN-2021-21
refinedweb
3,321
57.71
import re p = re.compile("[a-z]") for m in p.finditer('a1b2c3d4'): print(m.start(), m.group()) Taken from span() returns both start and end indexes in a single tuple. Since the match method only checks if the RE matches at the start of a string, start() will always be zero. However, the search method of RegexObject instances scans through the string, so the match may not start at zero in that case. >>> p = re.compile('[a-z]+') >>> print p.match('::: message') None >>> m = p.search('::: message') ; print m <re.MatchObject instance at 80c9650> >>> m.group() 'message' >>> m.span() (4, 11) Combine that with: In Python 2.2, the finditer() method is also available, returning a sequence of MatchObject instances as an iterator. >>> p = re.compile( ... ) >>> iterator = p.finditer('12 drummers drumming, 11 ... 10 ...') >>> iterator <callable-iterator object at 0x401833ac> >>> for match in iterator: ... print match.span() ... (0, 2) (22, 24) (29, 31) you should be able to do something on the order of for match in re.finditer(r'[a-z]', 'a1b2c3d4'): print match.span()
https://pythonpedia.com/en/knowledge-base/250271/python-regex---how-to-get-positions-and-values-of-matches
CC-MAIN-2020-29
refinedweb
177
78.85
01 July 2010 11:16 [Source: ICIS news] MOSCOW (ICIS news)--Russia’s Sibur has begun expandable polystyrene (EPS) sales contract talks ahead of the material being produced at one of its subsidiaries, it said on Thursday. A new 50,000 tonne/year EPS facility at Sibur-Khimprom, using ?xml:namespace> Sibur-Khimprom was also building a new 220,000 tonne/year ethylbenzene unit. It needs this extra capacity to expand styrene production, which in turn would serve the EPS facility. This was due to come on stream by the end of the year as well, Sibur said. Based in the town
http://www.icis.com/Articles/2010/07/01/9372724/russias-sibur-in-eps-contract-sales-talk-ahead-of-unit-start-up.html
CC-MAIN-2015-06
refinedweb
102
72.87
While learning C programming language one of the most exciting parts is writing and reading a file. Because these operations create something on the operating system we can see which is different from other examples. In this tutorial, we will look at different aspects of file operations. stdio.h Library As we know C provides different types of features with libraries. Input and output related features are provided by the library named stdio.h. In order to run related file operations, we should include this library as below. We generally put include like to the start of the code file like below. #include stdio.h Opening File The first step to work with a file is opening it. Files can be opened by using fopen function. fopen function generally gets the filename and mode parameters. fopen ("test.txt", "w+"); fopen function returns a handler back where we use FILE type variable for this. FILE * fp; Below we will create a file pointer named fp and open file named test.txt with w+ write and read mode. #include <stdio.h> int main() { FILE * fp; fp = fopen ("test.txt", "w+"); return(0); } Closing File In the previous part, we have opened a file with fopen function. But the code provided there is not an efficient code because the file handler does not closed which means the file is not closed. Not closing a file can create a performance or write problems. So after our operation is completed we should close the file with fclose function. fclose(fp); and complete code will be like below. #include <stdio.h> int main() { FILE * fp; fp = fopen ("test.txt", "w+"); fclose(fp); return(0); } Reading File One of the fundamental steps for file operation is reading a file. There are different ways and modes to read a file but in this step, we simply read a line. We will put this in a while loop and read it to the end of the file. Because we will read file so we will use read mode while opening the file with fopen function. We will provide the variable, str , we want to put the grabbed string and the size to read which is 80 and the last one is the file pointer fp fgets(str,80,fp) And here fully working code where we use while to read line by line to the end of the file. If the end of the file arrived the NULL value will be returned. #include <stdio.h> int main() { FILE * fp; char str[80]; fp = fopen ("test.txt", "r"); while((fgets(str,80,fp))!=NULL) printf("%s",str); fclose(fp); return(0); } Writing File In previous steps, we have learned how to open and close files. But the ultimate goal is not opening and closing files. We generally read or write to file. There are different ways to write a file but in this tutorial, we will simply put some line to the file. We will use fputs function by providing the string and file pointer like below. fputs("Hi this is an example",fp); We can see the whole working example below. #include <stdio.h> int main() { FILE * fp; fp = fopen ("test.txt", "w+"); fputs("Hi this is an example",fp); fclose(fp); return(0); }
https://www.poftut.com/c-file-operations-open-write-close-files/
CC-MAIN-2022-27
refinedweb
545
82.54
Even since I was a kid, I loved puzzles. One of the reasons I got involved with maths and computers was probably this strange attraction to the "puzzled" state of mind - stress, frustration, and then... the incredible elation of figuring it out... One of the geometry puzzles I've found that was particularly enjoyable was the crossing ladders puzzle: As shown in figure 1, two ladders are standing on opposite walls. They meet each other somewhere in the middle, 30 meters above the ground. The first one is 119 meters long, while the second one is 70 meters long. What is the distance of the walls? ...and I could keep writing a lot more. What to do then? How to solve such a non-linear system of equations? The first thing I tried was to look for a way to create an equation with only one unknown - combining two or more of the equations above. After a fair amount of head scratching, I eventually found a "path": Starting from equation (3) ... Midpoint subdivision is easy: starting from two places where the above function has values with opposite signs, you just subdivide and recurse until you get to something close enough to 0: import sys def f(x): return x**4-60*(x**3)-(119**2-70**2)*(x**2)+ \ 60*(119**2-70**2)*x-900*(119**2-70**2) def midpoint(a, b): assert(f(a)*f(b)<0) m = (a+b)/2 if abs(f(m))<10e-10: print "Solution:", m sys.exit(0) if f(m)*f(a)<0: midpoint(a, m) else: midpoint(m, b) # initial borders found from within python shell, through search: # >>> f(10) # -2779688 # >>> f(17) # -1776368 # >>> f(254) # 2714410480L midpoint(17, 254) #include <stdio.h> #include <stdlib.h> #include <math.h> #include <string.h> #include <gsl/gsl_errno.h> #include <gsl/gsl_roots.h> #include <gsl/gsl_poly.h> int main(void) { printf("\nSolving using polynomial functions...:-)\n"); double coeffs[5] = { -900.0*189.0*49.0, 60.0*189.0*49.0, -189.0*49.0, -60.0, 1.0 }; double roots[4*2]; gsl_poly_complex_workspace * w = gsl_poly_complex_workspace_alloc(5); if (GSL_SUCCESS != gsl_poly_complex_solve (coeffs, 5, w, roots)) { printf("Poly solving function failed...:(\n"); } else { int i; gsl_poly_complex_workspace_free (w); for (i = 0; i < 4; i++) { printf("z%d = %+.18f %+.18f i\n", i, roots[2*i], roots[2*i+1]); } } } Solving using polynomial functions...:‑) z0 = +26.974598914402541538 +8.637179057706713792 i z1 = +26.974598914402541538 -8.637179057706713792 i z2 = +105.000000000000056843 +0.000000000000000000 i z3 = -98.949197828805026234 +0.000000000000000000 iSo: two complex numbers, a negative one, and just as Python said... 105. a = [ 1, -60, -9261, 555660, -8334900 ] roots(a)The same roots as in GSL's results: two complex numbers, one negative, and 105. Symbolic solutions aside, how about numerical methods? Perhaps an iterative approach, like the Newton method for solving polynomials - but since this is a system of equations, let's try something different: 1. Change each equation into a form "left side = 0" 2. Create one function from each of the equations, by creating the form: f(x,y,z,...)=0 3. Set the unknowns to any set of initial values, close to our problem space (ladders are 70 and 119 units long, so use something close to that range). 4. Calculate the functions - they won't produce zero (not so lucky, are we? If they did, we have the solution!) 5. Accumulate the absolute values of the "error" of each function. 6. Now let's see if we can improve this error, by making it smaller 7. For each unknown, try moving its value by a small value (delta) 8. Does the total error - accumulated over all functions - get any smaller? because of the change? 9. If yes, keep moving the variable in the same direction, until the error starts increasing again 10. Then switch to the next unknown.How much can we lower the "error" with this approach? #!/usr/bin/env python var_a = var_b = var_c = var_d = var_e = var_f = 1.0 def main(): delta = 1 e = error() # Initial error while e>1e-3: # Try moving each var_iable in turn by delta # to see if error diminishes... for name in globals().keys(): if not name.startswith('var_'): continue oldValue = globals()[name] globals()[name] += delta # Moving up... ne = error() if ne < e: break globals()[name] -= 2*delta # Moving down ne = error() if ne < e: break # Restoring, going to next variable globals()[name] = oldValue # If no variable helped, make delta 10 times smaller and retry if ne >= e: delta /= 10.0 print "Switching to lower step", delta if delta < 1e-4: # unless delta is very small, in which case stop print "Best result attained." return e continue print [x+"="+str(y) for x, y in globals().items() if x.startswith('var_')] print "Error:", ne e = ne if __name__ == "__main__": main() ['var_e=2.0', 'var_f=1.0', 'var_d=1.0', 'var_c=1.0', 'var_b=1.0', 'var_a=1.0'] Error: 21151.4188406 ['var_e=3.0', 'var_f=1.0', 'var_d=1.0', 'var_c=1.0', 'var_b=1.0', 'var_a=1.0'] Error: 21142.5855072 ... ... ['var_e=68.8477', 'var_f=1.012', 'var_d=96.336', 'var_c=4.4298', 'var_b=30.0171', 'var_a=75.1'] Error: 39.1177788151 Switching to lower step 1e-05 Best result attained.Well... the error ended up being significantly lower than what it was at the beginning (from 21151.42 down to 39.1), but not low enough, not 0... Perhaps if we try changing the initial values of the unknowns? var_a = var_b = var_c = var_d = var_e = var_f = 35.0The error now goes lower... ... ['var_e=26.397', 'var_f=34.8976', 'var_d=101.9999', 'var_c=33.8079', 'var_b=46.02', 'var_a=39.96'] Error: 3.66166319392 Switching to lower step 1e-05 Best result attained.The correct solution for d (see above) is 105 - we're almost there! But we seem to have a problem: the initial values seem to have a tremendous impact on how close we come to finding the solution. Here's why: It appears that the "N-space" of our N variables (here, N=6) has many "local" minimums, that don't allow the algorithm to "escape" once it descends in one of them. That's why once we ended up to 3.66, we couldn't get any lower, no matter which variable we tried to nudge: any movement causes the error to move up. It's like being blind and trying to find sea water (level = 0) when all you have is a device that reports height: you can end up getting "lured" to a mountain lake, and never be able to get any lower than the lake's bottom... Perhaps it would be best to start setting up all "initialValues" in a range from 1 to 200. Why? Well, the two ladders are 70 and 119 units long, the "neighborhood" of 1-200 seems like a good place to start. Let's also make our steps smaller, to avoid jumping from one local minimum "lake" to another; we'll drop delta from 1 to 0.1. While we're at it, to make things run faster, we'll use the Python psyco module: ... if __name__ == "__main__": try: import psyco psyco.profile() except: print 'Psyco not found, ignoring it' bestError = 1e10 for startValue in xrange(1, 200): # Reset the initial values of all variables to startValue for name in globals().keys(): if not name.startswith('var_'): continue globals()[name] = startValue # Run the algorithm... e = main() # Did it improve in comparison to the best previous run? if e < bestError: print "(", startValue, ") Error=", e, \ [x+"="+str(y) for x, y in globals().items() if x.startswith('var_')] bestError = e ... ( 41 ) Error= 3.22073538976 ['var_f=27.9442', 'var_e=28.8', 'var_d=104.5997', 'var_c=40.989', 'var_b=41', 'var_a=41.5865']Well, the algorithm ended up even lower: 3.22. Before it stopped, it seemed to converge to the correct solution: the target value of d (as we saw in the previous section) was 105, our solution gave 104.6 - c should have been 42, it is 41. The other unknowns however are far from their correct values: f should be 40, it is 28; e should be 16; it is almost 29. In other words, we just ended up in a different lake. It appears that the 6-space of our variables (var_a to var_f) is FULL of local minimums for our error function... It is very easy for the algorithm to descend into one of these and never get out. Let's modify the algorithm: instead of "exhausting" one variable first and then moving to the next one, we will try nudging all of them, and we will pick the one that causes the error to drop the most. In our analogy of local minimums, the blind person is "feeling" the ground, and choosing the direction that causes the "steepest descent" route: def main(): delta = 0.01 e = error() # Initial error while e>1e-3: # Try moving each var_iable in turn # by delta to see if error diminishes... beste = e move = None for name in [x for x in globals().keys() if x.startswith('var_')]: oldValue = globals()[name] globals()[name] += delta # Moving up... upe = error() if upe < beste: beste = upe move = (name, delta) globals()[name] = oldValue - delta # Moving down downe = error() if downe < beste: beste = downe move = (name, -delta) # Restoring, going to next variable globals()[name] = oldValue # If no variable helped, make delta 10 times smaller and retry if beste >= e: delta /= 10.0 print "Switching to lower step", delta if delta < 1e-4: # unless delta is very small, in which case stop print "Best result attained." return e continue globals()[move[0]] += move[1] e = error() print [x+"="+str(y) for x, y in globals().items() if x.startswith('var_')] print "Error:", e Do we have an error in the functions, a bug when moving from pure mathematics to Python? Let's replace variables with their correct values... (obtained from the symbolic-space solution we did first): First, var_d is replaced with 105.0. Result: we get stuck in a "lake" of depth 2.86: ( 33 ) Error= 2.85886323868 ['var_f=30.88', 'var_e=25.12', 'var_c=42.0', 'var_b=43.0532', 'var_a=39.1282']Next is var_c - we set it to 42.0... ( 35 ) Error= 3.35048762477 ['var_f=28.02', 'var_e=27.98', 'var_b=41.0502', 'var_a=41.0229']An even higher lake, at 3.35... Removing var_b (50.0): ( 40 ) Error= None ['var_f=40', 'var_e=16.0', 'var_a=34.0']And there it is, finally... The solution for the three last variables is indeed found by our algorithm! So it wasn't that we made any mistake in transcribing the equations - no. It is just that the N-space is filled with "traps" - both the "steepest descent" and the original "exhaust each variable in turn" algorithms are easily trapped into these "pits" and can't find the correct solution. Another thing we could do is to split the N-space in many "areas" and "hunt" down inside each of them. The scanning above was done by reseting ALL variables to a startValue, from 1 to 200. This is a sampling of a "line" in the 6-space; apparently we need much more than a line to locate the "lake" that leads all the way down to zero error. Time to mimic nature... It mimics nature! When swarms of insects (e.g. ants) search for food, they face a challenge that shares some similarities to our own: each ant hunts for food, and each one may (a) get trapped in a hole and never be able to get out, (b) find nothing or (c) find a great source of food. In the case of (c), and depending on how plenty the food is, the lucky ant "calls out" to the ants in its neighborhood, and they come to help - calling out to their neighbours in turn, and pretty soon, all the nest is visiting the land-of-plenty, carrying the treasures back to the nest. We can do the same thing! Instead of doing a "lonely" search, and get trapped inside a local-minimum in our search-space, we can create a "swarm" of "particles", that search different areas on their own: each time that a particle moves to improve its error (decrease its "height"), it chooses to move by a random amount... The Wikipedia article explains it much better than I do, and allowed me to write the following code in less than an hour: #!/usr/bin/env python import sys import random var_a = var_b = var_c = var_d = var_e = var_f = 1.0 # First version of the functions: using divisions # was not a good idea, since divisions cause "extreme" # and "sudden" movements in the optimization effort # # So these were transformed... # ... into the ones below, which are equivalent, and use # multiplication instead of division - and are therefore # much more stable, numerically speaking:*var_d - 30.0*119.0 def fun6(): return var_b*var_c - 30.0*70.0 def fun7(): return var_f*var_a - (119-var_a)*var_e def fun8(): return var_f*(70-var_b) - var_b*var_e def fun9(): return var_d*var_e - var_f*var_c # This extra one was added, to make sure the solutions will # be positive and not negative values... it returns big # values (errors!) for negative numbers, and 0 for positive # ones... def fun10(): return 1e6*( abs(var_a)-var_a + \ abs(var_b)-var_b + \ abs(var_c)-var_c + \ abs(var_d)-var_d + \ abs(var_e)-var_e + \ abs(var_f)-var # Particle swarm... # How many particles? g_particleNo = 100 # Best value so far... g_best = 1e100 # obtained from these variable values: g_bestX = [0, 0, 0, 0, 0, 0] # And this is the order of the mapping of variables to indexes: g_vars = ('a', 'b', 'c', 'd', 'e', 'f') # Limit the particle speeds (the amount by which each dimension is # allowed to change in one step) to this value: g_vmax = 4.0 class Particle: def __init__(self, idx): self._x = [] self._v = [] self._idx = idx # Initialize the particle to a random place from 50 to 200 # (the ladders are 70 and 110, so the solution is around there) for i in g_vars: # foreach of my variables, add a random value self._x.append(random.randint(50, 200)) self._v.append(0) # Particle speed initialized to 0 self._best = 1e20 # The "best-so-far-for-this-particle" self._bestX = self._x[:] # and where it was found. def checkAndUpdateBest(self): global g_best, g_bestX for name, y in zip(g_vars, self._x): globals()['var_'+name] = y e = error() if e<1e-3: # New place solves the problem! print "Solution found:\n", self._x, "\nerror is", e sys.exit(1) if e<self._best: # New place improves this particle's best value! self._best = e self._bestX[:] = self._x[:] if e<g_best: # New place improves the global best value! print "(global error)", e, "\n", self._x g_best = e g_bestX[:] = self._x[:] def live(self): # For each variable (dimension) for i in xrange(0, 6): # update the speed, using the Particle Swarm Optimization: self._v[i] = \ 0.95*self._v[i] + \ 0.7*random.random()*(self._bestX[i] - self._x[i]) + \ 0.7*random.random()*(g_bestX[i] - self._x[i]) # clamp the speed to -g_vmax .. g_vmax if self._v[i] > g_vmax: self._v[i] = g_vmax if self._v[i] < -g_vmax: self._v[i] = -g_vmax # Update the particle's current dimension self._x[i] += self._v[i] # Check to see if we found a solution or improved the "best so far" self.checkAndUpdateBest() def main(): swarm = [] for i in xrange(0, g_particleNo): # Use "g_particleNo" particles swarm.append(Particle(i+1)) while True: # For all eternity, for each particle for p in swarm: # move around the search space, look for the solution! p.live() if __name__ == "__main__": try: import psyco psyco.profile() except: print 'Psyco not found, ignoring it' main() bash$ python ./ladders-swarm.py ... (global error) 0.00108106412256 [33.999999108625637, 49.999998838801204, 42.000000232576404, 105.0000031754703, 15.99999769902716, 39.999997468849152] (global error) 0.0010070313707 [33.999999174902875, 49.999997599804843, 41.999999367745644, 105.00000241198516, 15.999998594051929, 39.999996695289852] Solution found: [33.999999697035015, 49.999997875663965, 42.000000045975234, 105.00000250106874, 15.9999988649258, 39.999997431681408] error is 0.000843222146955The only additional thing I had to do - compared to the suggestions of the Wikipedia article - was that I had to change the values of C1 and C2 (the "weight" factors that influence the moves) from the original suggestion of 2.0 to a much smaller 0.7. It makes the "speed" of the particle movements slower, and it was necessary to do so, for some reason - otherwise the process never found the solution (apparently, it kept jumping over it). Judging from other articles I read about PSO, C1 and C2 are in fact tunable parameters, that have to be tweaked on a per-problem basis. Finally, success! But... I am still not satisfied... The numerical methods needed manual tweaking... They still require a human's help to find the solution! The whole point of this was to see a computer doing it all on its own, without any assistance from my side... In[1]:= Reduce[(e+f)^2+d^2 == 119^2 && (e+f)^2+c^2 == 70^2 && e^2+30^2 == a^2 && f^2+30^2 == b^2 && a/30 == 119/d && b/30 == 70/c && f/e == (119-a)/a && f/e == b/(70-b) && d/c == f/e && e>0 && f>0,{e,f},Reals] Out[1]= d==105 && c==42 && b==50 && a==34 && e==16 && f==40There - now that's what I wanted. No help from my side, no clue whatsoever - as close to magic as we'll ever get. But... I'd really prefer doing this with an open-source tool. I don't want to depend on a closed source product that might someday disappear... (%i1) eq1: (e+f)^2+d^2 = 119^2$ (%i2) eq2: (e+f)^2+c^2 = 70^2$ (%i3) eq3: e^2+30^2 = a^2$ (%i4) eq4: f^2+30^2 = b^2$ (%i5) eq5: a/30 = 119/d$ (%i6) eq6: b/30 = 70/c$ (%i7) eq7: f/e = (119-a)/a$ (%i8) eq8: f/e = b/(70-b)$ (%i9) eq9: d/c = f/e$ (%i10) solve([eq1, eq2, eq3, eq4, eq5, eq6, eq7, eq8, eq9], [a,b,c,d,e,f]);...which gives... (%o10) []]This was what I wanted, all along. An open-source tool that solves a non-linear set of equations without any hints from me. And it's not the only one... Since it's main application uses the Python interpreter, I started with dir(), and noticed that a solve function was available in the inventory. help(solve), and 5 minutes later... bash$ cd /work/sage-2.8.15-debian32-i686-Linux bash$ ./sage sage: a,b,c,d,e,f = var('a,b,c,d,e,f') sage: solutions=solve([(e+f)^2+d^2 == 119^2,(e+f)^2+c^2 == 70^2,e^2+30^2 == a^2,f^2+30^2 == b^2,a/30 == 119/d,b/30 == 70/c,f/e ==(119-a)/a,f/e == b/(70-b) ,d/c == f/e ], a,b,c,d,e,f) sage: print solutions...which gave... [ ... ] ] Is it able to cope with non-linear systems? Yes, it is... bash$ cat > /var/tmp/ladders.m function F=myfun(x) a = x(1); b = x(2); c = x(3); d = x(4); e = x(5); f = x(6); F(1) = (e+f)^2+d^2 - 119^2; F(2) = (e+f)^2+c^2 - 70^2; F(3) = e^2+30^2 - a^2; F(4) = f^2+30^2 - b^2; F(5) = a*d - 30*119; F(6) = b*c - 30*70; F(7) = f*a - e*(119-a); F(8) = f*(70-b) - b*e; F(9) = d*e - f*c; endfunction; x = fsolve("myfun", [100 100 100 100 100 100]) (Ctrl-D) bash$ octave /var/tmp/ladders.mAnd the results: GNU Octave, version 2.1.73 (i486-pc-linux-gnu). Copyright (C) 2006 John W. Eaton.). x = 34.000 50.000 42.000 105.000 16.000 40.000Notice that for Octave's solver to work, I modified the equations to translate divisions into multiplications... more stable numerically, and easier to differentiate (Octave's numerical solvers are trying to approximate the function derivatives, so it is better to help by providing functions that are easier to differentiate). from z3 import * a,b,c,d,e,f = Ints('a b c d e f') s = Tactic('qfnra-nlsat').solver() s.add( a>0, a<200, b>0, b<200, c>0, c<200, d>0, d<200, e>0, e<200, f>0, f<200, (e+f)**2 + d**2 == 119**2, (e+f)**2 + c**2 == 70**2, e**2 + 30**2 == a**2, f**2 + 30**2 == b**2, a*d == 119*30, b*c == 70*30, a*f - 119*e + a*e == 0, b*e - 70*f + b*f == 0, d*e == c*f) print s.check() # solve the problem print s.model() # print the solution What's more important, is that the methods we tried (especially Maxima and Sage) can be used to address any non-linear system of equations. And that's far more important than solving funny, entertaining puzzles :‑)
http://users.softlab.ntua.gr/~ttsiod/ladders.html
CC-MAIN-2014-42
refinedweb
3,537
74.49
Fugue Icons This is a set of web icons, created by Yusuke Kamiyamane and provided by him on a Creative Commons Attribution 3.0 license. The icons have been merged together into a single file using the script included in this package, to make them easier to use as CSS sprites in web applications. Also, some glue code that makes it easier to include the icons in your application has been provided. General usage To use those icons in any web application, just links the generated fugue-icons.css file in your HTML header like this: - . code-block:: html - <link rel="stylesheet" type="text/css" - Be sure to use an URL at which the files are actually served, which is specific to your application. Now, to include an icon in your HTML template, just add apropriate classes to the element that is supposed to display that icon: - . code-block:: html - <p>This is an abacus: <span class="fugue-icon fugue-abacus"></span>.</p> You might need to add some additional CSS rules to adapt to your existing stylesheets. The first class, fugue-icon, sets some basic common styles for the icon, while the second class, fugue-abacus, scrolls the sprite sheet image to the right position that displays the "abacus" icon. Use with Flask applications You can use this package directly with your Flask application, by registering the provided blueprint. - . code-block:: python import flask import fugue_icons.blueprint import fugue_icons app = flask.Flask(__name__) app.register_blueprint(fugue_icons, url_prefix='/fugue-icons') Then, in your template's HTML header include the link to the CSS file: - . code-block:: html - <link rel="stylesheet" type="text/css" - Use with Django applications Add this package to INSTALLED_APPS in your django settings: - . code-block:: python - INSTALLED_APPS = ( - 'django.contrib.auth', 'django.contrib.admin', - ... - 'fugue_icons', ) In your template's HTML header include the link to the CSS file: - <link rel="stylesheet" type="text/css" - Remember to run manage.py collectstatic when deploying your application.
https://bitbucket.org/thesheep/fugue-icons/src
CC-MAIN-2019-04
refinedweb
326
55.34