What will we cover in this tutorial? We will look at how the Birthday Paradox is used when estimating how collision resistant a hash function is. This tutorial will show that a good estimate is that an n-bit hash function will produce a collision by chance after roughly 2^(n/2) random hash values.

Step 1: Understand a hash function

A hash function is a one-way function with a fixed output size. That is, the output always has the same size, and it is difficult to find two distinct input chunks that give the same output. A hash function is any function that can be used to map data of arbitrary size to fixed-size values. Probably the best-known example of a hash function is MD5. It was designed to be used as a cryptographic hash function, but has been found to have many vulnerabilities. Does this mean you should not use the MD5 hash function? That depends. If you use it in a cryptographic setup, the answer is: do not use it. On the other hand, hash functions are often used to calculate identifiers. For that purpose, whether you should use it depends on the context. This is where the Birthday Paradox comes in.

Step 2: How are hash functions and the Birthday Paradox related?

Good question. First recall what the Birthday Paradox states.

…in a random group of 23 people, there is about a 50 percent chance that two people have the same birthday

How can that be related to hash functions? There is something about collisions, right? Given 23 people, we have a 50% chance of a collision (two people with the same birthday). Hence, suppose our hash function maps data to a day in the calendar year. That is, it maps hash(data) -> [0, 364]. Then given 23 hash values, we have a 50% chance of a collision. But you also know that our hash functions map to more than 365 distinct values. Actually, MD5 maps to 2^128 distinct values.

An example would be appreciated now. Let us make a simplified hash function, call it MD5′ (md5-prime), which maps like MD5, but only uses the first byte of the result.
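As an aside, the 23-people figure can be checked with the usual birthday-bound approximation: after k draws from N equally likely values, the chance of a collision is roughly 1 - exp(-k(k-1)/(2N)). A quick sketch (the function name is my own, not from the tutorial):

```python
import math

def collision_probability(k: int, n: int) -> float:
    """Approximate probability that k uniform draws from n values collide."""
    return 1.0 - math.exp(-k * (k - 1) / (2 * n))

# 23 people, 365 possible birthdays: roughly a coin flip
print(round(collision_probability(23, 365), 2))  # 0.5
```

The same formula with N = 2^128 shows why full MD5 collisions are out of reach by chance: you would need on the order of 2^64 hashes before a collision becomes likely.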
That is, we have MD5′(data) -> [0, 255]. Surely, by the pigeonhole principle we would run out of possible values after 256 distinct outputs of MD5′, and the 257th distinct input would be guaranteed to collide.

import hashlib
import os

lookup_table = {}
collision_count = 0
for _ in range(256):
    random_binary = os.urandom(16)
    result = hashlib.md5(random_binary).digest()
    result = result[:1]  # keep only the first byte
    if result in lookup_table:
        print("Collision")
        print(random_binary, result)
        print(lookup_table[result], result)
        collision_count += 1
    else:
        lookup_table[result] = random_binary
print("Number of collisions:", collision_count)

The lookup_table is used to store the already seen hash values. We iterate 256 times (one fewer than the number of inputs that would guarantee a collision). In each iteration we take some random data, hash it with MD5, and keep only the first byte (8 bits). If the result already exists in lookup_table we have a collision; otherwise we add it to lookup_table. For a random run of this I got 87 collisions. Expected? I would say so.

Let us try to use the Birthday Paradox to estimate how many hash values we need to get a collision with our MD5′ hash function. A rough estimate that is widely used is that the square root of the number of possible outcomes will give a 50% chance of collision (see Wikipedia for the approximation). That is, for MD5′(data) -> [0, 255] it is sqrt(256) = 16. Let’s try that.

import hashlib
import os

collision = 0
for _ in range(1000):
    lookup_table = {}
    for _ in range(16):
        random_binary = os.urandom(16)
        result = hashlib.md5(random_binary).digest()
        result = result[:1]
        if result not in lookup_table:
            lookup_table[result] = random_binary
        else:
            collision += 1
            break
print("Number of collisions:", collision, "out of", 1000)

Which gives something like this.

Number of collisions: 391 out of 1000

That is in the lower end, but still a reasonable approximation.

Step 3: Use a correct data structure to look up in

Just to clarify.
We will not find collisions on the full MD5 hash function, but we will try to see if the estimate of collision is reasonable. This requires a lot of calculations, and we want to ensure that we do not create a bottleneck by using the wrong data structure. The Python dict should be a hash table with expected O(1) insert and lookup. Still, the worst case is O(n) for these operations, which would be a big overhead to carry along the way. Hence, we will first test that the dictionary has O(1) insert and lookup time for the use cases we have of it here.

import time
import matplotlib.pyplot as plt

def dict_size(size):
    start = time.time()
    table = {}
    for i in range(size):
        if i in table:
            print("HIT")
        else:
            table[i] = 0
    return time.time() - start

x = []
y = []
for i in range(0, 2**20, 2**12):
    performance = dict_size(i)
    x.append(i)
    y.append(performance)

plt.scatter(x, y, alpha=0.1)
plt.xlabel("Size")
plt.ylabel("Time (sec)")
plt.show()

Resulting in something like this. What does that tell us? That the total time for n inserts and lookups in a Python dict grows approximately linearly in n, which means each individual operation takes O(1) on average. There is some overhead at certain sizes, presumably where the dict resizes its internal table. It is not exactly linear, but close enough that we should not expect an exponential run time. This step is not strictly necessary, but it is nice to know how the function grows in time when we want to check for collisions. If the above time complexity grew exponentially (or worse than linearly), then it could suddenly become hard to estimate the runtime when we run over a bigger space.

Step 4: Validating whether the square root of the space size is a good estimate for collision

We will continue our journey with our modified MD5′ hash function, where the output space will be reduced. We will then, for various output space sizes, see if the estimate for a 50% chance of collision is decent. That is, whether we need approximately sqrt(space_size) hash values to have an approximately 50% chance of a collision. This can be done by the following code.
import hashlib
import os
import time
import matplotlib.pyplot as plt

def main(bit_range):
    start = time.time()
    collision_count = 0
    # Each hex character of the digest carries 4 bits,
    # hence we keep bit_range//4 characters.
    space_size = bit_range // 4
    for _ in range(100):
        lookup_table = {}
        # Search the square root of the space for a collision:
        # sqrt(2**bit_range) = 2**(bit_range//2)
        for _ in range(2**(bit_range//2)):
            random_binary = os.urandom(16)
            result = hashlib.md5(random_binary).hexdigest()
            result = result[:space_size]
            if result in lookup_table:
                collision_count += 1
                break
            else:
                lookup_table[result] = random_binary
    return time.time() - start, collision_count

x = []
y1 = []
y2 = []
for i in range(4, 44, 4):
    performance, count = main(i)
    x.append(i)
    y1.append(performance)
    y2.append(count)

_, ax1 = plt.subplots()
plt.xlabel("Size")
plt.ylabel("Time (sec)")
ax1.scatter(x, y1)
ax2 = ax1.twinx()
ax2.bar(x, y2, align='center', alpha=0.5, color='red')
ax2.set_ylabel("Collision rate (%)", color='red')
ax2.set_ylim([0, 100])
plt.show()

The estimated collision rate is very rough, as it only runs 100 trials for each space size. The results are shown in the graph below. Interestingly, it seems to be in the 30-50% range for most cases. As a note, it might be confusing that the run-time (the dots) does not seem to be linear. That is because for each bit we add, we double the space; hence, the x-axis is effectively a logarithmic scale.

Step 5: What does all that mean?

This has a high impact on using hash functions for creating unique identifiers. If you want a short identifier with the fewest bits, then you need to consider the Birthday Paradox. Assume you created the following service.

import hashlib
import base64

def get_uid(text):
    result = hashlib.md5(text.encode()).digest()
    result = base64.b64encode(result)
    return result[:4]

uid = get_uid("my text")
print(uid)

If the input text can be considered random, how resistant is the get_uid(…) function against collision? Well, it returns 4 base64 characters.
That is 6*4 = 24 bits of information (each base64 character carries 6 bits of information). The rough estimate is that if you use it sqrt(2^24) = 2^12 = 4,096 times, you will have a high risk of collision (roughly a 50% chance). Let’s try.

import hashlib
import os
import base64

def get_uid(text):
    result = hashlib.md5(text).digest()
    result = base64.b64encode(result)
    return result[:4]

lookup_table = {}
for _ in range(4096):
    text = os.urandom(16)
    uid = get_uid(text)
    if uid in lookup_table:
        print("Collision detected")
    else:
        lookup_table[uid] = text

It does not give a collision every time, but run it a few times and you will get:

Collision detected

Hence, the estimate seems to be valid. The above code was run 1000 times and gave a collision 497 times, which is close to 50% of the time.
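The numbers throughout the tutorial line up with the standard birthday computation. As a closing sanity check, here is a short sketch (the helper names are my own, not from the tutorial) that computes the exact collision probability for 16 draws from the 256 one-byte MD5′ values, and the draw count that gives a true 50% chance, which works out to sqrt(2 ln 2 * N) ≈ 1.18 * sqrt(N) rather than exactly sqrt(N):

```python
import math

def exact_collision_probability(k: int, n: int) -> float:
    """Exact probability that k uniform draws from n values contain a collision."""
    p_no_collision = 1.0
    for i in range(k):
        p_no_collision *= (n - i) / n
    return 1.0 - p_no_collision

def draws_for_half_chance(n: int) -> int:
    """Approximate number of draws from n values giving a 50% collision chance."""
    return math.ceil(math.sqrt(2 * math.log(2) * n))

# 16 draws from 256 values: about 38%, matching the 391-out-of-1000 run in Step 2
print(round(exact_collision_probability(16, 256), 2))
# draws needed for a 50% chance with the 24-bit identifier from Step 5
print(draws_for_half_chance(2 ** 24))
```

Note that draws_for_half_chance(365) gives 23, recovering the original Birthday Paradox statement, so the sqrt(N) rule of thumb is a slight underestimate of the true 50% point.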
https://www.learnpythonwithrune.org/2020/09/
Q: I can only read the entire file into the BufferedReader (code below).

A: No, you can do whatever you want. All you've done so far is read the entire file in. Now attempt to do what you WANT to do and post issues (if you have them) here.

Q: What I need to do is find specific points in the text file and extract the text between two points (e.g. <abstract> text </abstract>) to a new file. It is a big file with 162,000 abstracts. I have a version of the file using ASCII and one using XML (1.5 GB). I have never used XML before, but am wondering if it would be smart to use it here.

A: Probably.

Q: I was thinking about using String.compareTo to find the points, but since they are in plain text I don't know how. Wondering if it would be a good idea to use a StringTokenizer? Also considered using indexOf(String), but I would have multiple identical strings. Is it possible to delete passages in the original file as you go along? If so, how?

A: You can do all these things. You can also use regular expressions. Basically you're parsing the file. There might be characteristics of the file that allow you to simplify the parsing.
index = str.indexOf("<abstract>");
System.out.println("Index is " + index);

import java.io.File;
import javax.xml.parsers.SAXParser;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

public class SaxTest {

    public static void main(String[] argv) {
        try {
            SAXParserFactory fac = SAXParserFactory.newInstance();
            SAXParser parser = fac.newSAXParser();
            SimpleHandler sh = new SimpleHandler();
            parser.parse(new File(argv[0]), sh);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    private static class SimpleHandler extends DefaultHandler {

        private boolean doOutput = false;

        public void startElement(String uri, String localName, String qName, Attributes atts) throws SAXException {
            if ("abstract".equalsIgnoreCase(qName)) {
                doOutput = true;
            } else {
                doOutput = false;
            }
        }

        public void endElement(String uri, String localName, String qName) throws SAXException {
            doOutput = false;
        }

        public void characters(char[] ch, int start, int length) throws SAXException {
            if (doOutput) System.out.println(new String(ch, start, length));
        }
    }
}

and the KMP class

import com.edison.library.kmp.*;
import java.io.*;
import java.util.regex.*;

public class Fred704 {

    private static final Pattern encodingExtractionRegex = Pattern.compile("encoding *?= *?\"([^\"]+)\"");

    public static String getXMLFileEncoding(File xmlFile) throws IOException {
        // By default an XML document uses UTF-8 character encoding
        // so assume that for the moment.
        String encoding = "UTF-8";
        BufferedReader reader = null;
        try {
            // The first line of an XML file has to be able to be read as ASCII
            // so read the first line as ASCII
            reader = new BufferedReader(new InputStreamReader(new FileInputStream(xmlFile), "ASCII"));
            final String firstLine = reader.readLine();
            reader.close();
            // Now look for the encoding
            // and extract it if found.
            final Matcher matcher = encodingExtractionRegex.matcher(firstLine);
            if (matcher.find()) {
                encoding = matcher.group(1).trim();
            }
        } finally {
            if (reader != null) reader.close();
        }
        return encoding;
    }

    public static void main(final String[] args) throws Exception {
        final File source = new File(System.getProperty("user.home") + "/work/dev/stow-longa/church/graves.xml");
        final File destination = new File(System.getProperty("user.home") + "/xxxx.txt");
        final String encoding = getXMLFileEncoding(source);
        final String beginPattern = "<line>";
        final String endPattern = "</line>";
        final ReaderToWriterKMP beginKMP = new ReaderToWriterKMP(beginPattern);
        final ReaderToWriterKMP endKMP = new ReaderToWriterKMP(endPattern);
        final Reader reader = new BufferedReader(new InputStreamReader(new FileInputStream(source), encoding));
        final Writer writer = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(destination), encoding));
        for (boolean eof = false; !eof;) {
            eof = beginKMP.search(reader);
            if (!eof) {
                writer.write(beginPattern);
                eof = endKMP.search(reader, writer);
                writer.write("\n");
            }
        }
        writer.close();
        reader.close();
    }
}

import java.io.*;

/**
 * A KMP implementation that copies chars from a Reader to a Writer until
 * a match is found or EOF is found.
 */
public class ReaderToWriterKMP {

    private final char[] pattern_;
    private final int[] next_;

    /**
     * Constructs a ReaderToWriterKMP for a given pattern to match.
     *
     * @param pattern the pattern to match in the Reader.
     */
    public ReaderToWriterKMP(String pattern) {
        pattern_ = pattern.toCharArray();
        next_ = new int[pattern_.length];
        next_[0] = -1;
        for (int len = next_.length - 1, i = 0, j = -1; i < len; next_[++i] = ++j) {
            while ((j >= 0) && (pattern_[i] != pattern_[j])) {
                j = next_[j];
            }
        }
    }

    /**
     * Copies from the reader until a match is obtained
     * or EOF is found.
     *
     * @param reader the reader from which to read the characters.
     * @return 'true' if EOF found before a match.
     * @throws IOException if there is a read or write error
     */
    public boolean search(final Reader reader) throws IOException {
        return search(reader, null);
    }

    /**
     * Copies from the reader to the writer until a match is obtained
     * or EOF is found.
     *
     * @param reader the reader from which to read the characters.
     * @param writer the writer to which to write the characters.
     * @return 'true' if EOF found before a match.
     * @throws IOException if there is a read or write error
     */
    public boolean search(final Reader reader, final Writer writer) throws IOException {
        for (int j = 0; j < next_.length; j++) {
            final int ch = reader.read();
            if (ch == -1) return true;
            if (writer != null) writer.write(ch);
            while ((j >= 0) && (ch != pattern_[j])) {
                j = next_[j];
            }
        }
        return false;
    }
}

You should be able to use this main program with minimal changes for the XML file, and with a few more changes for the text file.

Message was edited by: sabre150

Q: Sabre150: I get a compilation error since I don't have com.edison.library.kmp.*. I tried (unsuccessfully) to Google for it but could not find it. Do you know where to find it? Or how to create it?

A: That is because that is the package the ReaderToWriterKMP class is in on my system. You can put it in any package you like!
https://community.oracle.com/thread/1224046?tstart=125055
Answered by: Unable to start debugging. Unable to start program... The system cannot find the file specified

Hello, when I try to debug my C++ program in .NET, I press F5 (or click the debug arrow from the main menu) and I get the above error. The directory in the above message is wrong; it points to an old project that no longer exists. I checked the Command value in the project properties and it is correct (by right-clicking the project in the Solution Explorer). If I right-click the project and hit debug from the pop-up menu, it does execute the right program, just not from the main menu. Can anyone help? I have tried changing properties of everything, installed Framework 2.0, searched the registry for references to the old project, deleted the pdb and debug directory, and done a full rebuild, but no joy. Any help would be greatly appreciated. Thanks.

Hello there. Well, I have the same problem as dumbmonkey, and selecting what you told him to select does not help me out. My error says: Unable to start program... The system cannot find the file specified. Same deal. I right-clicked on the project and selected "Set as StartUp Project" and nothing happened, even when compiled once again. Thanks!

Hey Wayouk, well, I tried to do the same as dumbmonkey and I got the same thing that you did. I didn't realize that I forgot something basic: I forgot the using namespace std; line. You should double check the beginning too. However, I still have the same problem with most of my programs, even ones that have run perfectly before. So, hopefully this helps. Hey, maybe you can help me out with this too.

It's not working for me. It warns me that test - Debug Win32 is outdated even if I click "Set as StartUp Project". Then it says I have errors, even though the syntax is fine and nothing is highlighted. Then it gives me the error the original poster had. Is there anything more specific I have to do?
I've encountered the same problem when I entered a wrong executable name. Here is how I solved the problem: I was trying to debug a solution with 7 projects in it. Each time I tried to compile, it was asking for a name for the executable. If you give the absolute path of your project (...\Visual Studio 2008\Projects\yourProjectName\Debug\yourProjectName.exe), it works fine! Hope that helps.

I was using Visual Studio 2010 and I opened a header file in the source file folder in the project explorer and used it to write my main() function... I copied the code, then opened a new source file (.cpp), restored the code to the new source file, and it worked perfectly. (Well, not perfectly; I had to correct a bunch of errors, but the debugging worked.)

My solution is really behaving strangely. I have built the solution, set the StartUp Project as well as the Start Page, and have checked all my form declarations, and still I am getting this same error being discussed. Strangely enough, the app won't run when using debugging (F5), but when I use Ctrl + F5 it shows the same error but opens up the browser and displays the site. Could this be due to an OS update? I'm running Windows 8 and am using Visual Studio 2010. Let me know if there are any other things I could try to resolve the problem.
https://social.msdn.microsoft.com/Forums/vstudio/en-US/3b8ffe5b-7b71-4218-a8fd-b8e8cb5e6168/unable-to-start-debugging-unable-to-start-programthe-system-cannot-find-the-file-specified?forum=vsdebug
Dear all,

Here is part 2 of the AD review, from section 4.21 on. Regarding part 1, thanks Andy for addressing all comments in version 17.

- section 4.22 "Data Correlation". Not sure what you mean by the section title and "Data can be correlated in various ways"? Which data? YANG modules, YANG objects, object instances, from different YANG servers, etc. I guess I miss a sentence or two regarding this "correlation" objective and which guidelines this section is going to provide to "authors and reviewers of Standards Track specifications containing YANG data model modules". Note: I read that section multiple times.

- section 4.22. Isn't it clarified with NMDA? It's not in line with 4.23.2, which says:

   Designers SHOULD describe and justify any NMDA exceptions in detail, such as the use of separate subtrees and/or separate leafs.

... and I guess confusing in light of the real guidelines in 4.23.3. Btw, why is this paragraph in 4.22 and not in 4.23?

- section 4.23

   Operational state is now modeled using YANG according to the new NMDA,

Please add a reference to the draft.

- section 4.26 "YANG 1.1 Guidelines". I'm confused by the title. The entire document is about 1.1, right? I guess you want to express something such as "Guidelines for YANG 1.1-specific Constructs".

- section 4.26.1

   Multiple revisions of the same module can be imported, provided that different prefixes are used.

Reading this, any contradiction? Then reading:

   This MAY be done if the authors can demonstrate that the "avoided" definitions from the most recent of the multiple revisions are somehow broken or harmful to interoperability.

"avoided" definitions? I simply don't understand this sentence.

- section 4.26.4

   The NETCONF Access Control Model (NACM) [RFC6536] does not support parameter access control for RPC operations.

Let's use draft-ietf-netconf-rfc6536bis.

- Appendix B

   YANG Module Registry: Register the YANG module name, prefix, namespace, and RFC number, according to the rules specified in [RFC7950].
I guess this is [RFC6020] in this case. Indeed, the "YANG Module Names" registry is specified in RFC 6020. See for example

- Appendix B

   References -- verify that the references are properly divided between normative and informative references, and that RFC 2119 is included as a normative reference if the terminology defined therein is used in the document.

Refer to RFC 8174.

- Appendix B (and maybe some more text somewhere else). To refer to Tom Petch's latest email to NETMOD, we would need a few words about: if a YANG module has a Reference or Description clause specifying an I-D, and the I-D is listed as an Informative Reference.

Regards, Benoit

_______________________________________________
netmod mailing list
netmod@ietf.org
https://www.mail-archive.com/netmod@ietf.org/msg07841.html
Multiple File Upload with Flash and Ruby on Rails

I’m going to prefix this tutorial by declaring my love for AJAX to preempt a Flash vs Ajax war. I use AJAX daily, it’s a great tool, the sun shines out its arse, etc. But sometimes Flash is just the better tool for the job. I’m going off on a tangent here, but many problems with AJAX, e.g. breaking the back button, were the exact same arguments people made about Flash years ago. AJAX is cool, but can you play MP3’s with it? No. Can you play video with AJAX? No. Upload files with AJAX? No*. And most importantly, can you make a 3D spinning LED thingy in AJAX? No. AJAX file upload is impossible for 99.99% of web users, but Flash file upload is available to 90% of users (the rest are using Gopher on Linux so funk ‘em).

As an extra special bonus this example will upload multiple files through Flash, but wait, there’s more… it will also include the upload status/progress for each file.

files awaiting upload
uploading with progress
done!

The server side will be Rails, but .NET or PHP would work as well. The source for the tutorial is here. Flash files are in the ‘fla’ directory.

As usual, start off with a new Rails app. Since we’ll be uploading files we’ll need somewhere to store them, preferably in a unique folder so as not to mess anything else up. Change into the ‘upload’ app directory (not the public/uploads directory) so we can generate models and controllers. Create a ‘File’ model; ‘File’ is a reserved word for Rails so we’ll call it ‘DataFile’.

In the model file (app/model/data_file.rb) we’re going to create a custom save method. It will have three arguments: ‘data’ the file data, ‘name’ what to name the file, and ‘directory’ where to put the file.

end

Next create an ‘Upload’ controller. In the controller file (app/controllers/upload_controller.rb) create an index method that takes file information from Flash and passes it to the ‘DataFile’ model.
Because there’s no view associated with this controller we add ‘‘ to avoid any errors. We set the upload directory to the ‘uploads’ directory we created earlier " ".

end

Almost done. After bashing AJAX earlier, it’s time to bash Flash a bit. Flash file upload is broken when used with Rails. From the BubbleShare folk: "the Windows version of Flash Player 8 sends a multipart request with content-length set to zero (0), before sending the real request, if the file you are uploading is bigger than 10K". Long story short, it no worky with Rails. Big ups to the bubblers, they figured out how to fix it; download the fix here and place it in the Rails plug-in directory (vendor/plugins). That’s it for the Rails side of things (easy peasy lemon squeezy).

Using the source files you can start a WEBrick/lighttpd server and test the multipleUpload.fla file in the Flash IDE.

Now on to Flash… The multipleUpload.fla file has three components on stage: a DataGrid (files_dg), and two Buttons, one for browsing the file system (browse_btn) and one for uploading (upload_btn). The actions for the .fla are pretty short, just creating an instance of a ‘MultipleUpload‘ class (described below) and hooking up the components to it.

import com.vixiom.utils.MultipleUpload

System.security.allowDomain("*.localhost.com");
var MU:MultipleUpload = new MultipleUpload(this.files_dg, this.browse_btn, this.upload_btn);

Below is the ‘MultipleUpload.as’ file. Here’s a quick rundown of what’s going on: a FileReferenceList object is created with a listener that ‘listens’ for various user events dealing with files (onSelect, onCancel, onOpen, onProgress, onComplete, onHTTPError, onIOError, onSecurityError). The major ones are onSelect (when a user selects files with the file browser), onProgress (called repeatedly while a particular file is being uploaded), and onComplete (when a file has finished uploading).
To show the user the status of our file uploads we’re using the DataGrid’s built-in ‘editField‘ method to change what the status cells display.

// delegate
import mx.utils.Delegate;
// ui components
import mx.controls.DataGrid
import mx.controls.Button
// file reference
import flash.net.FileReferenceList;
import flash.net.FileReference;

class com.vixiom.utils.MultipleUpload {

    private var fileRef:FileReferenceList;
    private var fileRefListener:Object;
    private var list:Array;
    private var files_dg:DataGrid;
    private var browse_btn:Button;
    private var upload_btn:Button;

    // Constructor (files_dg, browse_btn, upload_btn)
    public function MultipleUpload(fdg:DataGrid, bb:Button, ub:Button) {
        // references for objects on the stage
        files_dg = fdg;
        browse_btn = bb;
        upload_btn = ub;
        // file list references & listener
        fileRef = new FileReferenceList();
        fileRefListener = new Object();
        fileRef.addListener(fileRefListener);
        // setup
        iniUI();
        inifileRefListener();
    }

    // iniUI
    private function iniUI() {
        // buttons
        browse_btn.onRelease = Delegate.create(this, this.browse);
        upload_btn.onRelease = Delegate.create(this, this.upload);
        // columns for dataGrid
        files_dg.addColumn("name");
        files_dg.addColumn("size");
        files_dg.addColumn("status");
    }

    private function browse() {
        trace("// browse");
        fileRef.browse();
    }

    private function upload() {
        trace("// upload");
        // upload the files
        for (var i:Number = 0; i < list.length; i++) {
            var file = list[i];
            trace("name: " + file.name);
            trace(file.addListener(this));
            file.upload("");
        }
    }

    // inifileRefListener
    private function inifileRefListener() {
        fileRefListener.onSelect = Delegate.create(this, this.onSelect);
        fileRefListener.onCancel = Delegate.create(this, this.onCancel);
        fileRefListener.onOpen = Delegate.create(this, this.onOpen);
        fileRefListener.onProgress = Delegate.create(this, this.onProgress);
        fileRefListener.onComplete = Delegate.create(this, this.onComplete);
        fileRefListener.onHTTPError = Delegate.create(this, this.onHTTPError);
        fileRefListener.onIOError = Delegate.create(this, this.onIOError);
        fileRefListener.onSecurityError = Delegate.create(this, this.onSecurityError);
    }

    // onSelect
    private function onSelect(fileRefList:FileReferenceList) {
        trace("// onSelect");
        // list of the file references
        list = fileRefList.fileList;
        // data provider list so we can customize things
        var list_dp = new Array();
        // loop over original list, convert bytes to kilobytes
        for (var i:Number = 0; i < list.length; i++) {
            list_dp.push({name:list[i].name, size:Math.round(list[i].size / 1000) + " kb", status:"ready for upload"});
        }
        // display list of files in dataGrid
        files_dg.dataProvider = list_dp;
        files_dg.spaceColumnsEqually();
    }

    // onCancel
    private function onCancel() {
        trace("// onCancel");
    }

    // onOpen
    private function onOpen(file:FileReference) {
        trace("// onOpen name: " + file.name);
    }

    // onProgress
    private function onProgress(file:FileReference, bytesLoaded:Number, bytesTotal:Number) {
        trace("// onProgress with bytesLoaded: " + bytesLoaded + " bytesTotal: " + bytesTotal);
        for (var i:Number = 0; i < list.length; i++) {
            if (list[i].name == file.name) {
                var percentDone = Math.round((bytesLoaded / bytesTotal) * 100)
                files_dg.editField(i, "status", "uploading: " + percentDone + "%");
            }
        }
    }

    // onComplete
    private function onComplete(file:FileReference) {
        trace("// onComplete: " + file.name);
        for (var i:Number = 0; i < list.length; i++) {
            if (list[i].name == file.name) {
                files_dg.editField(i, "status", "complete");
            }
        }
    }

    // onHTTPError
    private function onHTTPError(file:FileReference, httpError:Number) {
        trace("// onHTTPError: " + file.name + " httpError: " + httpError);
    }

    // onIOError
    private function onIOError(file:FileReference) {
        trace("// onIOError: " + file.name);
    }

    // onSecurityError
    private function onSecurityError(file:FileReference, errorString:String) {
        trace("onSecurityError: " + file.name + " errorString: " + errorString);
    }
}

That’s it! I’d love to be proven wrong, but that’s impossible to do with AJAX. And if you could do it with AJAX, who would want to deal with all the cross-browser testing anyways?

*I’ve actually seen a couple of examples of attempts at AJAX file uploading, but there’s always some caveat like "only works with Firefox if the user does A, B, and C".

[...] A Flash "applet" for uploading files to your RoR application. [...]
Pingback by Mi viaje en tren » Blog Archive » Avistamientos #5 — September 15, 2006 @ 4:48 pm

You are a godsend! I’ve spent a week battling with Java applets and was getting discouraged after seeing all of the (wonderfully attractive) Flash/ColdFusion uploaders. Who would have known that I could find a Rails/Flash uploader - exactly what I was looking for. Anyway - thanks.

Comment by Alex — September 18, 2006 @ 5:00 pm

I was just looking for a way to upload multiple files using ajax and found: Is this missing something that you’re requiring? What I would like to be able to do is put in a form something like /home/username/pics/*.jpg, and be able to upload multiple files that way. Gmail has been doing file uploading for some time, where you choose a file to attach to your email, and while you are working on your email, it is uploading the file in the background. Daniel

Comment by Daniel S. — September 22, 2006 @ 12:24 pm

The stickman example uses ajax to add additional file input fields, but the actual file uploading isn’t done with ajax (similar to uploading multiple files in Basecamp). I think I remember reading that Google uses a hidden iframe and javascript to upload files, which is a work-around I used for flash uploading before Flash had the FileReference object. Your /home/username/pics/*.jpg uploader would be pretty sweet; I’d love to be able to just drag a folder of files onto the browser and have the files upload. A buddy of mine developed a system that uploads a zip file of images which are then unzipped and sorted.

Comment by KreeK — September 22, 2006 @ 12:45 pm

Is the Flash for this tutorial Flash 8 or Flash MX (2004)? Thanks!!

Comment by David Beckwith — September 23, 2006 @ 9:03 pm

Can you explain how to deploy this? Sorry, I’m a newbie at both Flash and RoR ^^; . . . .

Comment by David Beckwith — September 23, 2006 @ 10:09 pm

Hi, I changed 0.0.0.0:3000/upload to my absolute url: I took out :3000 because we’re using FastCGI. Is that wrong?
My swf file is here: thanks. d

Comment by David Beckwith — September 23, 2006 @ 11:58 pm

Hi David,

1. [Is the Flash for this tutorial Flash 8 or Flash MX (2004)?] Flash 8 was the first version to implement file uploading.

2. [Can you explain how to deploy this?] I wish I had the time, but there are a lot of different variables for deployment, including multiple ways of hosting Rails. The things to check are: is your 'uploads' directory writeable in the production version? Have you tested a simple file upload with a regular RHTML form?

3. [I changed 0.0.0.0:3000 to my absolute url...] That's correct; ':3000' is a port number that's used when testing in development. I tried out your uploader; it looks like it's uploading the file but not saving it, so definitely check your file permissions (you'll have to get your hands dirty with Unix commands). Try chmod'ing the uploads directory: > chmod 755 uploads. BTW the tutorial uses no security checks; in its current state it is NOT safe for a public uploader. It could be used for a password-protected CMS, but I would still check that only certain types of files are being uploaded.

Comment by KreeK — September 24, 2006 @ 9:52 am

I am a very new user of Flash 8 and want to use file upload on my site. Can you tell me how to use this feature? How do I access this from a webpage? Where should files be stored on the server?

Comment by Tarpan — September 29, 2006 @ 6:53 am

Very nice tutorial. To follow up on KreeK's comment about security: you can add a simple client-side security check by passing a FileFilter to the browse method.
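KreeK's warning above (check that only certain types of files are being uploaded) can be sketched server-side. This is an illustrative Ruby helper, not part of the tutorial; the method name and extension list are assumptions:

```ruby
# Hypothetical server-side whitelist; the tutorial itself ships no checks.
ALLOWED_EXTENSIONS = %w[jpg jpeg png gif pdf].freeze

def allowed_upload?(filename)
  # File.extname returns ".jpg" (or "" when there is no extension)
  ext = File.extname(filename.to_s).delete(".").downcase
  ALLOWED_EXTENSIONS.include?(ext)
end
```

A Rails upload action could call this on params[:Filedata].original_filename and refuse the request before writing anything to disk; remember the client-side FileFilter is cosmetic and easily bypassed.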
For example, fileRef.browse([new FileFilter("Image Files", "*.jpg;*.png;*.gif;*.bmp")]); You can check out a simple example of this here: Derek

Comment by Derek Wischusen — September 29, 2006 @ 7:24 pm

Hi, I have downloaded and run this file and it's working fine, but the list shows only a single file. I want to view and upload more than one file at the same time. Please help me.

Comment by saravana — October 3, 2006 @ 3:06 am

I want to add one more file from a different folder, so I can see all the browsed files and finally upload them. Thanking you, bye, saravana

Comment by saravana — October 3, 2006 @ 3:12 am

Has anyone got a solution to Flash passing the session info for the upload action? I use a :before_filter login_required and Flash doesn't seem to have the same cookie headers.

Comment by Rex — October 4, 2006 @ 3:46 am

[...] The first one is doing Multiple File Uploading with Flash and Ruby on Rails. This one has a nice interface and seems workable if you don't mind using Flash in your project. [...]

Pingback by Multiple File Uploads with Rails and Progress Bars — October 8, 2006 @ 4:18 pm

I think there is a small "bug": if you upload bigger files, which is time-consuming (depending on your bandwidth), then after some time (30 secs) you get a "A script in this movie is causing Flash Player to run slowly" alert. See here: is there a way to fix this? Maybe in "function onProgress"?

Comment by flyset — October 9, 2006 @ 3:12 am

Hi! One of the solutions to select and upload multiple files at once, or all files within a folder, is the Flash upload control. It works in all browsers and OSs and just needs the Flash player on the client side. See example here:

Comment by Gregory — October 16, 2006 @ 2:25 am

This is very interesting. Thanks. I'm just wondering why you need a database in the example. It doesn't seem to be used. I would also be interested to know how I can prevent people accessing upload.swf if they are not logged in to my Rails application. Any ideas?
Comment by Mischa — October 30, 2006 @ 8:44 am “I’m just wondering why you need a database in the example. It doesn’t seem to be used.” I can’t remember where I found the Rails upload part of it, in this case the file is the ‘Model’ so the ’save’ method is being overwritten to save it in the file system rather than a db. If you wanted to you could move the code that writes the file to the controller. “I would also be interested to know how I can prevent people accessing upload.swf if the are not logged in to my rails application. Any ideas?” Yeah that’s a problem (for those that don’t know SWFs can be accessed directly like images), you could use Flash Remoting () or XMLsocket () to check with Rails that the user is logged in. Flash Remoting is a bit more complicated than XMLsocket but has the ability to pass Ruby objects directly into Flash as native ActionScript objects, while XMLsocket brings everything in as a string. Comment by KreeK — October 30, 2006 @ 10:15 am You can still keep the code in the model, just don’t inherit from ActiveRecord. At least that’s what I was thinking Thanks for your suggestion. I will have a look at it. Comment by Mischa — October 30, 2006 @ 12:37 pm I’m coming at this from the Flash side of things so I’m always looking to get better at Ruby/Rails How would you have a model that didn’t inherit from ActiveRecord? Would it just be a vanilla Ruby class that was in ‘app/models’? (without the ‘< ActiveRecord::Base’) Comment by KreeK — October 30, 2006 @ 12:54 pm Exactly. If you don’t inherit from ActiveRecord there’s no need for a database table corresponding to the class. As you’re not storing anything in the database this would make the most sense imho. Comment by Mischa — October 30, 2006 @ 1:27 pm i don’t know if my last comment got submitted so i’ll try once more. sorry if i’m double posting. i was curious if there’s a resolution to flyset’s question (number 15). 
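Mischa's suggestion above (a plain Ruby class in app/models that does not inherit from ActiveRecord::Base) might look like the sketch below. The class body and directory are illustrative assumptions, not the tutorial's actual data_file.rb:

```ruby
require "tmpdir"

# A plain Ruby "model": no ActiveRecord::Base, so no database table needed.
# UPLOAD_DIR points at a temp dir here so the sketch runs anywhere; a real
# app would use something like RAILS_ROOT/uploads instead.
class DataFile
  UPLOAD_DIR = File.join(Dir.tmpdir, "uploads")

  # upload is whatever arrives as params[:Filedata]; anything that responds
  # to original_filename and read will do.
  def self.save(upload)
    return false if upload.nil?
    Dir.mkdir(UPLOAD_DIR) unless File.directory?(UPLOAD_DIR)
    name = File.basename(upload.original_filename) # drop any client-side path
    File.open(File.join(UPLOAD_DIR, name), "wb") { |f| f.write(upload.read) }
    name
  end
end
```

Because nothing is persisted to the database, this keeps the file-writing logic in the model layer without requiring a migration or table.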
The only way I've been able to alleviate the issue is by completely commenting out the onProgress calls. With the onProgress calls, it seems to slow the script during larger files (20mb or so). Any ideas? Thoughts?

Comment by anthony — November 12, 2006 @ 5:24 pm

I am confused. I uploaded your project to the server and chmod-ed the /uploads directory… I can select files, but when I click the UPLOAD button it tries to connect to 0.0.0.0. I changed the System.security.allowDomain variable in the .fla file to my domain with port number and exported as upload.swf to the server, replacing the original file… I changed the fileUpload line of the /fla/com/vixiom/utils/MultipleUpload.as file to my_domain/upload. Still it tries to connect to 0.0.0.0; I restarted Firefox and emptied the cache… As far as I understand, /fla/com/vixiom/utils/MultipleUpload.as is not used, because it is not in the public directory, so how is it loaded? Any help?

Comment by mirec — November 16, 2006 @ 3:45 am

mirec (Q23) I don't know if you've solved this by now, but MultipleUpload.as is loaded by the import statement in the .fla file (the dots specify the path). As far as setting the domain, those two places (.fla line 3 and .as line 76) should be the only two. I would guess you're running an un-updated .swf file. Try running it straight out of Flash 8 (ctrl+enter). Hope that helps! Thanks again, KreeK, for this excellent demo!

Comment by A.Boltz — November 19, 2006 @ 7:46 pm

[...] Tutorial Multiple File Upload with Flash and Ruby on Rails [...]

Pingback by .bootstrap » Beautiful and Efficient File Upload using Flash and Ruby on Rails — November 20, 2006 @ 10:59 am

Where do I put the path to my upload.php?
I'm not using Ruby on Rails, so the Flash should know to call my PHP file…

Comment by alon — November 30, 2006 @ 10:04 am

In the upload function of com.vixiom.utils.MultipleUpload there's a line (sorry, no line numbers) file.upload("…"); replace the "…" URL with the URL of your PHP script. The original upload example I modified for Rails used PHP; it's in the Flash documentation (do a search for file.upload).

Comment by KreeK — November 30, 2006 @ 8:53 pm

So, what you're telling me is that I should trust a totally unknown source with a flash component and allow it full-on access to my hard disk? That's what you're saying, right? Flash with no security model, no signing, no public/private key support? Not gonna happen on this system, and I think most people will have the same opinion if they value the contents of their hard drives.

Comment by Greekkit — December 1, 2006 @ 12:13 am

Flash is no more and no less safe than you make it. Flash has a security model as of version 8. This example is no more than a glorified file input field, and you should obviously check the files with Ruby/PHP/.NET to make sure nothing malicious is being uploaded.

Comment by KreeK — December 1, 2006 @ 12:41 am

It's very instructive, thanks. I have a problem: there are multiple image thumbnails in JavaScript, and when we click on a thumbnail button, the large copy of the thumbnail should play in Flash. Is it possible with one unique piece of code for multiple thumbnails? Please help me, I am in trouble. Thank you. Ajit.

Comment by Ajit Chougule — December 6, 2006 @ 11:03 pm

If you're using Rails and want an image upload and processor I'd suggest looking at "Acts as Attachment"; there are some tutorials on this page. For help with JavaScript communicating with Flash look at or hope that helps!
Comment by KreeK — December 7, 2006 @ 7:20 am

This probably qualifies as the longest amount of time ever to reply to a comment (other than never). The answer to flyset's question (#15), if you're uploading large files and getting "A script in this movie is causing Flash Player to run slowly": the answer is here…

Comment by KreeK — December 17, 2006 @ 9:28 pm

[...] When uploading a file through Flash, every file comes through with its .content_type as 'application/octet-stream', so you can't check it using traditional methods*. [...]

Pingback by Vixiom Axioms » Flash image upload security with Ruby on Rails — December 18, 2006 @ 7:29 am

Hey guys, great script!! I'm developing a photo upload program, but am having trouble getting Firefox to upload the photos. IE works great, but one Firefox browser I tested didn't even open the file window when I hit "Browse", and another seemed to work fine (showed 'upload 100%'), but the photos weren't placed anywhere. Are there some settings in Firefox that I need to adjust? Thanks, Dan

Comment by danny — December 20, 2006 @ 8:29 am

Could someone give me a link to a .PHP uploader that I can use for file.upload? Thanks

Comment by jun — December 20, 2006 @ 3:19 pm

Here's a PHP upload script

Comment by KreeK — December 20, 2006 @ 3:31 pm

Any idea why this crashes and burns when changing RewriteRule ^(.*)$ dispatch.cgi [QSA,L] to RewriteRule ^(.*)$ dispatch.fcgi [QSA,L] in .htaccess?

Comment by Heist — December 21, 2006 @ 12:03 pm

The short answer is no. I use mongrel (not lighttpd or mod_fastcgi) for local dev and production sites; I switched after all kinds of fcgi problems. What is the log saying?

Comment by KreeK — December 21, 2006 @ 12:46 pm

Absolutely nothing, that's the weird part about it. It's like the server never gets the request, but it does… because it causes it to freeze without logging/showing anything in the prompt.
Comment by Heist — December 21, 2006 @ 1:04 pm

Charles is very good at debugging stealth errors. Also, if the fcgi version is on a live site, be sure that you set System.security.allowDomain("*.localhost.com"); in ActionScript. You might need a crossDomain.xml file in your Rails public directory too (example of one here). I'm pretty sure it's a problem of Flash not connecting, but just in case, try testing the upload controller with a multi-part HTML form. Hope one of those helps.

Comment by KreeK — December 21, 2006 @ 1:25 pm

Anyone have any examples of integrating this method of upload with a FileColumn field or ImageMagick? Also, I am not sure of the best way of associating the uploaded file with an existing model. My first thought is to pass the model's id in the query string to the flash script (is this even possible? (never developed with flash before)). The flash script would then pass that id in the url to the upload controller, which would handle adding the association. That seems kind of hackish though and not secure. My other thought is to have flash call a javascript function once the upload is complete, with the name of the uploaded file. That javascript function could then make an ajax call which would handle validation, resize the image, and associate it with the correct model. Any thoughts / input are much appreciated. Thanks

Comment by Matt — December 31, 2006 @ 5:44 pm

Hey Matt, I haven't tried it with FileColumn, I use Acts as Attachment. I could get files uploaded with AaA, but for some reason the resizing that comes with AaA wouldn't fire (I eventually built my own custom script). Passing the id in the URL string works the same in Flash as HTML/Javascript; it will be available in Rails as params[:id].
file.upload("" + id);

To get the id into Flash you use the 'FlashVars' parameter in the SWF file's Object/Embed tags (the code that places the flash file in an HTML page). When you pass a variable into Flash this way it's available at the _root level of your Flash file (_root.id). There are many other ways to pass variables to/from Flash, including XML and Flash Remoting ( ). Hope that helps, Alastair

Comment by KreeK — January 1, 2007 @ 6:22 pm

Thanks Alastair, I got it working with FileColumn (I am not using AaA) without any difficulty. All you need to do is set your FileColumn field to params[:Filedata] before saving the object. To handle security I ended up passing a token to the flash file (a SHA1 hash of the user's login information). I then pass the token to the Upload controller, which handles validating permissions. I also added functionality to detect when all of the files are done uploading in the ActionScript. When the uploads are complete the AS fires a javascript function which makes an AJAX call to update the user's display with the newly uploaded (and resized) images. If anyone is interested in the code for this shoot me an email, I probably won't get around to posting it online.

Comment by Matt Kull — January 1, 2007 @ 10:56 pm

I am still having problems with the "script is running too slowly" errors. I tried creating an onEnterFrame function but it doesn't appear to be firing. Would anyone mind posting the updated code with this fix in place? (flash newb) Also a tip: if your file appears to upload but nothing happens, make sure to monitor your server logs to see if the Upload action was called and if it threw an error or not. Thanks!

Comment by Matt — January 4, 2007 @ 9:51 pm

Matt, I'd love to see the additions you made to the code. I'm looking to do exactly the same thing, but I can't get my rjs to work. I've poked around, but can't find an email address for you either, so I'm posting here in the hopes that you will see this.
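Matt's token scheme (a SHA1 hash passed to Flash and re-checked by the upload controller) could be sketched like this. Since he didn't post his code, the secret and the exact hash recipe below are assumptions:

```ruby
require "digest/sha1"

# Illustrative shared secret; a real app would keep this out of source control.
SECRET = "change-me"

# Token the embedding page computes and hands to Flash (e.g. via FlashVars).
def upload_token(user_id)
  Digest::SHA1.hexdigest("#{user_id}:#{SECRET}")
end

# The upload controller recomputes the token and compares before saving.
def valid_token?(user_id, token)
  token == upload_token(user_id)
end
```

Flash would append both user_id and token to the upload URL, and the controller would reject any request where valid_token? fails; this works around the SWF getting a different session than the rest of the app.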
Please contact me at prog _AT_ keithwoody _DOT_ com. Thanks in advance. Comment by Keith — January 9, 2007 @ 4:37 pm u madafaka… I codebreaked this uploader finaly… I needed to learn Ruby On Rails, but it’s ok..:) one more thing I know now.. questions : how to integrate this app with mysql (info of files to be stored in the db )? I see 001_create_data_files.rb file in db folder, so did you do this allready, if Yes could you paste sql query for tables, example etc…? I runned on little problem while uploading big (over 25mb) files. script makes flash player to run slowly, CPU goes to 100%, and uses all of Virtual Memory..to intensive.. solution for this ? and BIG THANK YOU for this ! great work, I dl around 20 flash uploaders, and NONE is working. Only one example using CFM runs perfectly, and this app. Licence for CFM is 1400$+, so I decided for this one (: lol poz ob1 Comment by ob1y2k — January 16, 2007 @ 7:05 am This uploader appears to upload multiple files at the same time, but according to Flash’s help files and posts I’ve seen elsewhere, files can only be uploaded one at a time. When I do a Test Movie and look at the traces in Output, it indeed appears to be starting additional uploads before completing others. Is it actually uploading multiple files simultaneously or is this some sort of trick of when certain events fire? Perhaps its attempt to upload everything simultaneously is what is bogging it down when hit with multiple large files? :/ Comment by Mike — January 18, 2007 @ 2:07 pm It’s uploading files simultaneously, but you could rework it to have each file wait its turn. The onProgress listener is the cause of the problem, as each file gets its own onProgress listener multiple large files can make it bug out. I’d be interested to know if having files wait their turn fixes that issue, let me know once you build it! I’ve had success with medium sized files up to 10mb using this fix Flash also has a built in limit of 100mb. 
You can check the file size before upload: if ((file.size/1024 > maxFileSize) || (isNaN(file.size))) {} Comment by KreeK — January 18, 2007 @ 3:14 pm Hi…. Thanks for the code. I too was following the same before. Have you noticed, when you upload multiple Shortcut files, you will get the file size as 0 for everything. Please try to test it. And please let me know the solution. with best regards Shibu M Comment by Shibu — January 24, 2007 @ 12:02 am [...] Vixiom Axioms » Multiple File Upload with Flash and Ruby on Rails [...] Pingback by mutterings » Blog Archive » Multiple File Upload with Flash and Ruby on Rails — January 26, 2007 @ 2:12 am KreeK:[I’ve had success with medium sized files up to 10mb using this fix] can you please post the piece of code that of this fix? Comment by redron — January 29, 2007 @ 8:10 am Kreek, thanks so much for this site. I’m also having trouble getting the onEnterFrame hack to stop the alert that ’script is making this run slowly….’ I’ve spent a couple hours looking for how to do this, but I can’t figure it out. I also tried the OSflash irc room with no luck. Would you mind emailing me how you got around this problem? thanks a million, Mike Comment by Mike Michelson — February 3, 2007 @ 11:48 pm no matter what i try, i keep getting httpError: 500. on the webrick output, it says POST /upload HTTP/1.1″ Comment by rich — February 9, 2007 @ 12:05 pm nm, fixed it.. Comment by rich — February 9, 2007 @ 2:17 pm I am still not able to get the ’script is making this run slowly….’ message to go away… if anyone finds a working solution please email me the code they used. kaine0[AT]geemail.com (gmail) Comment by Matt Kull — February 12, 2007 @ 10:07 pm Hi This is just what I need. I have downloaded the files and created a project upload. However, it does not work when i use the url /localhost:3002/upload/index. (data etc is nil). When I use /localhost:3002/ the flash interface is displayed but it does not work . 
What about the usage of the part "import com.vixiom.utils.MultipleUpload .." in your tutorial? I can't find that in any of your files. Need some help, thanks!! (uses rails locally on windows xp)

Comment by Hans — February 15, 2007 @ 4:43 am

Hi there, I'm having some problems here. I browse for a file and select it. Then… I browse for a second file and select it, but it doesn't add to the list, it simply replaces the first one. How can I fix this problem? Thanks for your help

Comment by Antonie Potgieter — February 15, 2007 @ 8:26 pm

Antonie, Right now it selects multiple files at once, but replaces the list if you browse again. 'onSelect' is the function that fires when you select files; you'd need to change it so 'var list_dp' wasn't wiped each time (declare it in the class, not the method), then push files to it.

Comment by KreeK — February 15, 2007 @ 8:52 pm

Hans, com.vixiom.utils.MultipleUpload is in upload/fla/com/vixiom/utils/MultipleUpload; if Flash wasn't finding it you'd get an ActionScript error. Not sure what the problem could be. Do you have Flash Player 8 or greater?

Comment by KreeK — February 15, 2007 @ 9:00 pm

Is there any way to get flash to use the same session when doing file upload? I've done pretty much the same thing as you have here, but my file upload action gets a different session than the rest of my flash calls to rails. I need to somehow pass the name of the file back to my flash app, but have no way to do this - if I render something in the action it is ignored; if I save it to the session and call another action I get a different session object. Any help would be appreciated.

Comment by John — February 16, 2007 @ 8:15 am

Hey John, I didn't think that would be the case, but you're right, the upload controller has a different session. I thought I could be tricky and use a session from the application controller, but that also gave different session objects depending on which controller was calling it.
I'm in the middle of extending this uploader into an image manager similar to the one for Slide Show Pro (image upload, then the ability to reorder your images with acts_as_list). The way I'm tracking things is to pass the ID of the image's parent to Flash (using Flash params in its object/embed code), then I use Flash Remoting to load any previous images, upload any new ones, and when the upload is finished (which flash knows), I call Flash Remoting again to update my images. Could that work for you? thanks, Alastair

Comment by KreeK — February 16, 2007 @ 11:27 am

I'm trying to get this to work with PHP but I am having trouble. I have changed the line in the fla to be System.security.allowDomain("*.mydomain.com"); and I have changed the line in the .as file to be file.upload(""); which is the path to my php file. I continually get an IOError when I run the uploader directly from flash. I know that the php file will process (move it to the desired directory) an uploaded file because I tested the php file from a standard html form. Is there any way to get a clearer understanding of what the general IOError means? Or is it clear to any of you why I am continuously getting the IOError? Thanks.

Comment by Chris — February 22, 2007 @ 1:39 pm

Hey Chris, An IOError is an Input/Output error, which most likely means it's having trouble writing the file. At first I thought it might be a permissions issue, but if it works with an HTML form then it's not that. Remember that file data from flash comes through as the param 'Filedata', where in HTML you can call your file input anything you want, so make sure that's the name you're using in your process php script. thanks, Alastair PS I'm having my own Flash upload headache right now; it seems that with Firefox on Macs flash likes to append port 80 to your upload source (…) which the site I'm working on doesn't use.

Comment by KreeK — February 22, 2007 @ 1:58 pm

Alastair - thanks for the quick reply.
The process file is expecting the param to be ‘Filedata’ - That is how I tested it with my html form as well. Any other thoughts? Should my php file be returning (echoing) any response to the flash file? Comment by Chris — February 22, 2007 @ 2:17 pm @KreeK Thank you for the comments on preventing the “list_dp” wipe. I removed the line in the method : “var list_dp = new Array();” …and defined the variable in the class itself as you said. Meaning…I added “private var list_dp:Array;” below “private var list:Array;” For some odd reason it doesn’t work anymore. Do you have any advice on this? Thank you very much for your assistance. All the best, Antonie Comment by Antonie Potgieter — February 23, 2007 @ 10:48 pm Ok…it works…but it doesn’t seem to put the rows into the grid. After selecting files, I don’t see them in the flash file, but after uploading, they actually upload. Is there something else I can use other than “push”? Comment by Antonie Potgieter — February 23, 2007 @ 10:54 pm Antonie, This is an issue with the data grid not updating, which the list_dp is used for (fileReference.list is the list of files to upload). Make sure you have: files_dg.dataProvider = list_dp; after each time you push a new item to the list_dp array. Another solution: You don’t have to use a dataProvider to populate a dataGrid you can also use dataGrid.addItem hope that helps, Alastair Comment by KreeK — February 24, 2007 @ 1:08 am Hi, I was trying to save the data of the directory and the filename in a table in the database. The “upload” database contains a table “data_files” with the fields (id, name, and file) I got to this and it doesn’t work: @data_file = DataFile.new @data_file.save(data, name, directory) Comment by Japo — February 26, 2007 @ 11:52 pm I am having the same problem as rich no matter what i try, i keep getting httpError: 500. on the webrick output, it says POST /upload HTTP/1.1″does anyone know how this was fixed? 
thanks for any help

Comment by rafael — March 4, 2007 @ 12:33 am

xxxxxxx

Comment by Ganesh Iyer — March 22, 2007 @ 6:59 am

[...] Opencard Me and Welby went to the Academie voor popcultuur today to do a bit of card-testing among the students. We didn't get as many people as we wanted, but we did find out some interesting results. I think we'll be using some or all of them, as far as possible of course. Upload system Today has also been about finding a way to upload files to a website, because it's one of the items we're going to use in the website for the Academie voor popcultuur. I found this very useful; here is the link. [...]

Pingback by Mike's blog » Blog Archive » Opencard testing and Upload systems — March 27, 2007 @ 10:38 am

For me the browse button works and shows files. When I select some files and click Upload, nothing happens. The following appears in the log:

TypeError (can't convert nil into String):
/app/models/data_file.rb:5:in `join'
/app/models/data_file.rb:5:in `save'

Looks like data and file are both blank; could be because params is empty. Any suggestions to make this work? I have flash 9,0,20,8 – sunds2

Comment by sunds2 — April 8, 2007 @ 7:47 pm

Hi, I am unable to figure out how the flash code will work with the rails application, and is there a way by which we don't have to hard-code the url used in the .fla and .as files?

Comment by Neha — April 20, 2007 @ 4:35 am

There are a number of ways of passing a variable (in this case a url string); these links show how with a query string () or with FlashVars (). The variables will be available at the root of the main movie, so if you pass a variable called 'url_string' it will be accessible as '_root.url_string'

Comment by KreeK — April 20, 2007 @ 10:32 am

I get the same error as comment 72. File upload works out of Flash. I changed the domain to my ip address in the fla and the upload to my address as well.
What I also did is I created an uploads folder in the rails root directory, because the rails app is trying to save there and not in public/uploads (before that I was getting 500 errors with uploading from Flash). Now uploading from Flash works fine, but if I try in FF or IE nothing happens and I see in the webrick log that a file called crossdomain.xml is needed but not found. Anybody an idea on this? Thx Comment by sashthebash — April 21, 2007 @ 11:12 am Hey guys I was able to fix the “A script in this movie is causing Flash Player to run slowly” problem by using an onEnterFrame event and have posted the updated source code on my blog at Comment by Matt Kull — April 22, 2007 @ 12:59 pm Hi, I have a security problem as well, but I see in firebug the movie tries to load the crossdomain.xml from even if my host is my life system. Where can I set this ip? Comment by Kai Rautenberg — May 5, 2007 @ 1:55 pm @Kai Put the crossdomain.xml file in the root of your rails app, this is Adobe’s crossdomain file. In your case have the domains be ‘0.0.0.0:3000′ (which is the same as localhost:3000) and then also the domain for your live site. You usually only have rails running at 0.0.0.0:3000 for development, when you deploy an app to a production site any calls to files would be from the domain. Comment by KreeK — May 6, 2007 @ 11:49 am Great work! Comment by Helmut — May 30, 2007 @ 10:12 am I’m trying to figure out how to integrate the flash uploader w/ a PHP script and I’m having some major headaches. It might just be my lack of knowledge on how this works. I know nothing about Ruby, hence why I’m trying w/ PHP. The problem I keep running into or can’t seem to figure out is what does the flash script pass variable-wise? I get the idea of linking to the PHP script so that it can call on that to do the actual uploading, but what variable goes into the PHP script as the file data so it knows what to upload? 
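For reference, a minimal crossdomain.xml of the kind KreeK describes might look like the sketch below, placed in the Rails public directory so it is served from the site root. The hosts are placeholders, and whether a port such as :3000 can appear in the domain attribute depends on the player version, so treat the exact syntax as an assumption to verify against Adobe's policy-file documentation:

```xml
<?xml version="1.0"?>
<!DOCTYPE cross-domain-policy SYSTEM "http://www.macromedia.com/xml/dtds/cross-domain-policy.dtd">
<cross-domain-policy>
  <!-- placeholder hosts: the local dev server and a stand-in production domain -->
  <allow-access-from domain="0.0.0.0" />
  <allow-access-from domain="*.example.com" />
</cross-domain-policy>
```

Keep the allowed domains as narrow as possible; a wildcard-everything policy lets any SWF on the web make requests against your app.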
I apologize if these are newb-ish questions, but I'm stuck and would love to use this uploader!

Comment by Tim — June 4, 2007 @ 7:19 pm

Hey Tim, The variable coming across will be called '$_FILES['Filedata']'; there's an example PHP script on Adobe's site

Comment by Alastair — June 4, 2007 @ 9:04 pm

Did anybody figure out how this works? I am getting the interface loaded OK and am able to browse and add files (it replaces the previous one, so it seems to be one file at a time), but when clicking on the upload button nothing happens. How do you debug this thing? I am using IE

Comment by stan — June 4, 2007 @ 9:31 pm

Hello! I just came to think of this: let's say your solution will be in an application which requires login (thus generating a session id). Will this session id be carried through the flash upload, in order to separate several users logged in at the same time, so that they upload to their respective upload directories (named via their separate session ids)? Sinc Kalle Johansson

Comment by Kalle Johansson — June 13, 2007 @ 12:07 am

@Kalle The uploader will have a separate session; however, you can pass a user_id to flash which can then be sent along with the upload params to make sure the uploaded files are saved with the correct user… just append the user_id to the upload url file.upload("");

Comment by Alastair — June 13, 2007 @ 12:22 am

I tried to use this flash uploader with Java (jakarta.apache.org/commons/fileupload), but it gives an error when used with this Flash multi-file uploader: "Processing of multipart/form-data request failed. Stream ended unexpectedly". Can anybody help me? Has anybody tried this with Java/J2EE?

Comment by kasun111 — June 29, 2007 @ 2:57 am

[...] Vixiom Axioms » Multiple File Upload with Flash and Ruby on Rails Useful tutorial for beginning to get flash and ruby speaking with each other. (tags: flash rails ruby coding programming tutorial) [...]
Pingback by links for 2007-07-07 « things i am or once was — July 10, 2007 @ 2:44 pm

I just 'fixed' the Flash 8 issue with some carefully crafted Apache2 rewrite rules, if anyone is curious. Enjoy!

Comment by Jason Boxman — July 16, 2007 @ 10:13 pm

Hey Jason, nice work

Comment by Alastair — July 16, 2007 @ 11:10 pm

Hi, I've started learning Ruby on Rails. I got this error while trying this exercise:

TypeError in UploadController#index
can't convert nil into String
RAILS_ROOT: ./script/../config/..
Application Trace | Framework Trace | Full Trace
#{RAILS_ROOT}/app/models/data_file.rb:5:in `join'
#{RAILS_ROOT}/app/models/data_file.rb:5:in `save'
#{RAILS_ROOT}/app/controllers/upload_controller.rb:10:in `index'
Request Parameters: None
Show session dump
flash: !map:ActionController::Flash::FlashHash {}
Response Headers: {"cookie"=>[], "Cache-Control"=>"no-cache"}

Comment by SweZin — July 21, 2007 @ 7:55 pm

Hi, I've tried it on my personal computer and it works great, but when I run it on my notebook, every time I click on the upload button it closes my browser (I've tried reinstalling flash, shockwave, installing firefox, safari). This error happens with all the browsers. I suppose it's a DLL problem, but I have no idea where to start. Any suggestions???? Thanks in advance

Comment by Ivan — July 23, 2007 @ 8:56 am

Excellent guide, thank you very much. How would you recommend extending this to support queued upload (ie, one file at a time)?

Comment by Rocktansky — August 2, 2007 @ 2:47 pm

[...] Vixiom Axioms » Multiple File Upload with Flash and Ruby on Rails: file upload using Flash and Rails (tags: flash flex file upload) [...]

Pingback by links for 2007-08-08 — August 8, 2007 @ 9:25 am

I am not able to configure the application on my PC. Can you please explain the steps to get the application up and running? Please help, as I need to get it running :(.
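SweZin's trace ("can't convert nil into String" inside data_file.rb:5's File.join) is what happens when the upload action runs without a Filedata param, for example when /upload/index is opened directly in a browser. A defensive sketch, with illustrative names and a temp directory standing in for the app's uploads folder:

```ruby
require "tmpdir"

# Guard before touching File.join: File.join("uploads", nil) raises the
# TypeError seen in the trace above, so bail out when no file was posted.
def save_upload(params, dir = Dir.tmpdir)
  upload = params[:Filedata]
  return :no_file if upload.nil? # request came in without a file
  path = File.join(dir, File.basename(upload.original_filename))
  File.open(path, "wb") { |f| f.write(upload.read) }
  :saved
end
```

In a controller this lets the action render a friendly message for bare GET requests instead of a 500 page.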
Regards, Amit Comment by Amit Yadav — August 9, 2007 @ 8:39 am Hi Amit, The solution is to buy a mac Could you provide more information on what is not working? Comment by Alastair — August 9, 2007 @ 8:52 am I’m using this to upload files to a Java servlet, which uses Apache commons FileUpload package. I observe that Flash Player 9 works just fine but version 8 has a problem. Uploads would reach 100% but never change into “completed” state. I’m not sure what the cause is. Anybody has any suggestion? Comment by Ken — August 25, 2007 @ 5:15 pm I have the file upload dialog coming up. But clicking on Upload button nothing happens. Nothing is getting logged into either Mongrel or Webrick. I saw someone suggesting to update url in .fla and .as files. I Changed it in .as file to but .fla to me appears to be a binary file. Any assistance would be appreciated. Comment by sunds2 — September 25, 2007 @ 8:53 pm Flash files must be compiled through the Flash IDE to implement any changes to an associated .as file (I’m guessing you don’t have the IDE as the .fla is a binary file for you). Comment by Alastair — September 25, 2007 @ 11:02 pm Sounds greek to this newbie. I know nothing about flash except the fact I was impressed that it can be used in my rails project to upload files to my site. I Have I downloaded this rails plugin correctly? I downloaded multiFlashRailsFileUpload.zip and unzipped to a folder and installed this as a plugin. Should I try again? Where do I Get the Flash IDE? I have Aptana IDE - can I use that. From the Flash IDE I should compile .as file (after making changes) and create .fla and .swf? Comment by sunds2 — September 26, 2007 @ 2:45 pm To get a 30 day trial of Flash go to , you could also use Flex but this example uses ActionScript 2.0 (AS2) and you would have to convert it to AS3 for Flex. 
I have a Flex/AIR example which uses Merb (similar to Rails) here. Unlike Flash you can get Flex for free, but Flex Builder (the Flex IDE) makes working with Flex much easier. I don't want to discourage you, but if you've never used Flash/Flex it can be a pretty steep learning curve. However, if you know JavaScript then you should be able to pick up ActionScript pretty quickly. Comment by Alastair — September 26, 2007 @ 4:02 pm Thanks Alastair. Comment by sunds2 — September 26, 2007 @ 6:57 pm Hey, I put it up and replaced 0.0.0.0 with the absolute directory in which I want it to upload. I press upload and it goes up to 100%, but the file isn't in the directory I typed in; in fact it's not anywhere on the site. Comment by Tammam — September 27, 2007 @ 9:31 pm Sorry, I forgot to mention: your screen shot says "complete", mine just says "100%" and stops… Comment by Tammam — September 27, 2007 @ 9:34 pm HOW CAN I MAKE IT UPLOAD ONE FILE AT A TIME? I get SIMULTANEOUS UPLOADS. I need them to be in order by name or so.. is there a way? Comment by Lorenzo Gonzlez — October 2, 2007 @ 4:02 pm Hi, I have the same problem as stan: when selecting files they are replacing each other rather than getting added to the file list with Flash Player 9,0,47,0. Any thoughts? Cheers, Neil Comment by Neil — October 3, 2007 @ 3:46 am Sorry, didn't read the comments - answered by KreeK at comment 58 () Neil Comment by Neil — October 3, 2007 @ 3:50 am @Chris re: comment #62 (and others), The IO error 403 seems to be related to mod_security issues with Apache servers, which block Flash headers. A workaround is to add the following to the .htaccess file in your root:
SecFilterEngine Off
SecFilterScanPOST Off
No idea how that affects security though? Comment by Richard — October 31, 2007 @ 2:33 am [...]
Searching with Google, I found countless people asking the same question. Eventually, in the comments of this article introducing batch upload implemented with Flex, I found a self-answered post by a guy called "Timothee Groleau" which finally solved the mystery: when Flash Player triggers and executes user-defined script (i.e. the ActionScript you wrote), it resets the "script timeout" (the 15 seconds mentioned above). So we can bind an onEnterFrame event to a dummy MovieClip and have it execute continuously (frame by frame). The simplest way to do this is: [...] Pingback by ActionScript 3 Lover » upload-causing-flashplayer-slowly — November 1, 2007 @ 3:03 am I am trying to create a remove function so a user can remove selected files, but I can not get it to work. function removeClick(){ var numItems:Number = files_gd.length; for(var i=0; i Comment by Charles — November 21, 2007 @ 10:53 pm Quote comment 84: The uploader will have a separate session, however you can pass a user_id to flash which can then be sent along with the upload params to make sure the uploaded files are saved with the correct user… just append the user_id to the upload url file.upload(""); ************** How can I go about doing this? I thought that any change to the url in the actionscript would have to be compiled into the flash binary file? Could you explain how I can go about dynamically passing a user_id to the flash uploader so that I can stick a user_id in the DB when I save the uploaded files? Thanks arfon Comment by Arfon — December 11, 2007 @ 8:58 am @Arfon you're right that an actionscript change has to be compiled; you'll need the Flash IDE to make this change. If you don't have it, Adobe has a 30 day trial version. To get the user id into Flash you pass it in the embed code (the code that places flash on the html page); the best way to do this is with SWFObject (): you add a line like so.addVariable("variableName", "variableValue"); Then 'variableName' will be available at the root of the Flash file. Comment by Alastair — December 11, 2007 @ 9:18 am Thanks. That's really helpful.
So just to clarify, once I've passed the variables to the flash in the embed code, do I have to access them in the original actionscript file, then re-compile it with the changes that I have written? I basically want to build the url that flash uses like this: file.upload(""); How do I build the variables into the url? i.e. once they are available at the root of the flash file how do I access them? (I'm a complete newbie to flash/as) Thanks for all your help Comment by Arfon — December 11, 2007 @ 9:47 am The original AS file (that you download from this page) doesn't have these variables set up yet. But when you are editing the file (to be recompiled by Flash) the variables will be available as _root.variableName. ActionScript is like JavaScript, so to build the upload url it would be: file.upload("" + _root.userid + "&var2=" + _root.var2); When using Flash you can test your file without putting it on the server (files will upload), however with testing you won't have those variables set up yet because they come from SWFObject's embed code, so you'll have to set temp vars: just put _root.variableName = variableValue; in MultipleUpload.as's constructor (of course remove it for the final version). Comment by Alastair — December 11, 2007 @ 10:49 am Alastair you're a star. Thanks so much, it's all working perfectly now. Comment by Arfon — December 11, 2007 @ 10:58 am Can you please correct your bullshit to correctly capitalise Ajax? It's not an Acronym young Luke. Second, Flash is NOT better than Ajax because Flash does not have Usability in mind; what happens if I don't have JavaScript enabled? It works; but with no Flash, your flaky bullshit fails, FAILS! Anyways, I think people should have better things to do than listen to somebody who can't even correctly capitalise Ajax :). Comment by EnvyGeeks — December 11, 2007 @ 9:08 pm Can you publish a merb version of the same tutorial? Thx.
Comment by mike — December 11, 2007 @ 9:14 pm @EnvyGeeks you have much to learn my young Padawan. A.J.A.X stands for Asynchronous JavaScript And XML. That said, it has now become a noun, so yes you can use it U&LC and your ignorance will be excused (U&LC stands for 'upper and lowercase' if you don't know). AJAX works if you don't have JS? Yes, simple stuff works, but complex AJAX won't. Those in the know use SWFObject with Flash, which lets you replace Flash seamlessly if no player is found. BTW (that's 'By The Way') javascript is turned on in 94% of browsers and Flash 8 (used in this example) has 98.4% penetration, so even if AJAX could do multiple file upload (which it can't) Flash would still be the better choice. I think people should have better things to do than listen to somebody with two assholes cause I just tore through your argument and gave you a new one Comment by Alastair — December 11, 2007 @ 10:52 pm @mike for merb it's pretty much the same as this, but sub Flex/AIR for Flash. Comment by Alastair — December 11, 2007 @ 10:55 pm Consider Merb if you need authentication support for your uploads . . . Comment by Jon — January 15, 2008 @ 2:50 pm So there is no way of getting this to work in a Rails setting in which authentication (Basic Authentication) is required? Comment by Harm — January 16, 2008 @ 9:10 am Hi there! Could one of you guys tell me how to use this? The zip contains 48 folders and 68 files??? I'm ONLY interested in integrating the upload functionality into my asp.net 2.0 web site… /Pablo Comment by Pablo — January 28, 2008 @ 4:50 pm We have a weird issue where we get a 406 error from our uploads. Tons of forums talk about this with PHP, blaming it on mod_security. That does not seem to be the issue with Rails though, obviously (though we made the fixes anyway). Here is where it gets even cooler - we do not get 406s on Safari, but we do get them from Firefox and IE.
Comment by Noah Horton — January 29, 2008 @ 9:09 pm hello, First of all, thank you for putting up the tutorial. It is great. I think I am getting very close but upload doesn't work for me. I have downloaded "Instant Rails", set it up, ran the scripts & created the "upload" project as instructed, turned the debugger on, ran the movie & .as file in Flash, and browsed for a file. But when I click on 'upload', it gives an error. Console output is: // onOpenName: user_flow.txt // onIOError: user_flow.txt In the debugger, I can see the flow comes to the line: file.upload(""); The flow went to the functions onOpenName and onIOError as indicated by the console output. After the IOError, nothing happens. I put these two lines in the $RAILS_DIR\rail_apps\upload\public\.htaccess file, but it didn't make a difference:
SecFilterEngine Off
SecFilterScanPOST Off
Here is some further info: Instant Rails version: v2.0 File to be uploaded: c:/…/user_flow.txt (a dir outside of the Flash workspace) If I leave the IP as 0.0.0.0, it doesn't make a difference either. Can someone help?? Thanks in advance! -Ivy Comment by Ivy — January 30, 2008 @ 8:45 pm I have been using the FlashFixes plugin to fix the Flash 8 problems but have found some issues. It causes my Mongrel to hang on certain requests from Safari/Mac, lately also through Picnik integration. It causes the Mongrel process to cap at 100% CPU, needing a restart. Taking it out seems to solve the issue. Has anyone seen this? Comment by Alex Kira — February 11, 2008 @ 8:56 am Quick update: I think FlashFixes does not have the patch for a DOS issue that has been applied to CGI.rb. Since it redefines a method in CGI.rb, we lose any upgrades (for example, security upgrades) to that method that have occurred. Here is info on the DOS patch. FlashFixes is missing the c.empty? check that CGI.rb has been updated with, along with some other things.
Given this, I'm not sure if it is such a good idea to redefine this method in a plugin unless updates are kept current… Comment by Alex Kira — February 11, 2008 @ 10:36 am My company just released a Flash multiple-file upload applet called Multi Bit Shift, available at multibitshift.com. We make an LGPL version that is pretty similar to this software available as a free download. In addition, we also have a Rails plugin available that makes it really easy to integrate into an existing form with a very railsy helper, multi_bit_shift_field, which acts pretty much exactly how you would expect it to. All the text in the applet is customizable, as are the colors, with CSS. We even provide a form on the website with which the CSS can be easily compiled to Flash, so you don't need to deal with Flex at all. In addition, we also offer a commercial version that's designed with images in mind and will let your users see what files are on the server. I hope this isn't considered spam; the plugin and the basic version are LGPL after all. Comment by Justin Cunningham — February 24, 2008 @ 9:48 am If you want to use this with web2py, the web2py code equivalent to the Ruby code above is:
db.define_table('DataFile', SQLField('Filename'), SQLField('Filedata', 'upload'))
def index(): return SQLFORM(db.DataFile).accepts(request.vars, formname=None)
The web2py code provides additional security. The Ruby code above is vulnerable to directory traversal attacks. Comment by Massimo — April 16, 2008 @ 9:18 am Hi, the link Matt (comment 76) posted was broken. Has anyone else fixed the ActionScript-running-slowly popup? Many many thanks jJ Comment by jJ — May 8, 2008 @ 9:11 am While searching for a multiple file upload tutorial, I found this blog on Google. Reading here, it seems it's better if I try to do it in Ajax. I'm completely new to Ajax. Usually I do programming for Windows. Where should I start learning Ajax?
I mean any tutorials you can recommend Comment by KB — May 27, 2008 @ 11:04 pm So it looks like everyone has to recompile the Flash themselves so that it uploads to the correct location? Why on earth didn't you just make the upload destination a parameter to the flash file, so we could just use the demo one? I've put a wildcard crossdomain.xml on my local site, but still get nothing besides a message from Firefox that it's "Transferring data from 0.0.0.0″. Running on port 3000 as expected… any ideas? Comment by Kevin — July 29, 2008 @ 2:23 « Panduramesh's Weblog — September 19, 2008 @ 4:20 am [...] with Java « anil4it — September 30, 2008 @ 4:23 am [...] Alastair Dawson :: Multiple File Upload with Flash and Ruby on Rails [...] Pingback by FLEX CODING « welcome nandhu — October 22, 2008 @ 7:52 « Rameshgoud's Flex Weblog — October 25, 2008 @ 3:14 am The solution for the "A script in this movie is causing Flash Player to run slowly" alert is the following:
1) Open MultipleUpload.as
2) Edit the upload function, adding this line:
private function upload() {
    trace("// upload");
    // This will prevent Flash from reporting a slow-running script
    startSomeNonsense();
    ….
}
3) Add this function somewhere:
function startSomeNonsense(){
    var count:Number = 0;
    _root.onEnterFrame = function(){
        count += 1;
    }
}
4) Publish the .fla file with Flash and use your new MultipleUpload.swf copy.
That's it. Comment by Carlo — November 24, 2008 @ 6:56 am [...] reading Multiple File Upload with Flash and Ruby on Rails and Merb on AIR - Drag and Drop Multiple File Upload I decided to create my own version of [...] Pingback by AIR on Rails at Kiichigo Blog — December 16, 2008 @ 1:34 pm Did anyone ever get a php version of this working… a link or a rough guide would be ideal Comment by Emissions — December 17, 2008 @ 6:14 am I use your Rails app as a server and my own Flex uploader with the URLRequest class.
It should work, but when I send a file to the server I get: Processing UploadController#index (for 127.0.0.1 at 2008-12-29 15:44:22) [POST]–c6aef7c6c66b984c2b6f90a511aab81987f03184 Parameters: {"Filename"=>"me.jpg", "action"=>"index", "Upload"=>"Submit Query", "controller"=>"upload", "file"=>#} NoMethodError (You have a nil object when you didn't expect it! The error occurred while evaluating nil.read): Any idea what the nil object can be? THX Comment by koncat — December 29, 2008 @ 8:37 am Ok, first off I love this tutorial and setup!! So thank you. I am using it via PHP, so anyone who needs help with it, you just have to change your upload.php file to the following: This will create a directory for you and give it read/write privileges, for any novices. Also, for the novices, your file first transfers to a temp folder on the SERVER; once it is verified it will get moved to the directory you chose, in this case "gallery". For anyone who wants to REMOVE a file that has been added to the list: this one was tough. Copy the Browse button from the multipleUpload.fla file and paste it so you have two of them. Change the name to "Remove" (or whatever you want it to be) and make sure you change the instance name to remove_btn. Now onto the MultipleUpload.as file… Add a new private variable with the other ones:
private var remove_btn:Button;
Add rb:Button to the following:
public function MultipleUpload(fdg:DataGrid, bb:Button, ub:Button, rb:Button)
Set it equal like the rest:
remove_btn = rb;
Inside the iniUI function add in:
remove_btn.onRelease = Delegate.create(this, this.remove);
Now create a brand new function exactly like the following:
private function remove() {
    trace("// remove");
    var t:Number = files_dg.selectedIndex;
    list.splice(t, 1);
    files_dg.dataProvider.removeItemAt(files_dg.selectedIndex);
}
Since the file list is an Array we can just alter the array. We first grab the index that was selected by the user and set that number to t.
We then splice it (remove it from the array). We finally delete it off of the visual list. Not sure if anyone else made something like this, but I thought I would share… I will check back if anyone has questions… Comment by Andrew L — January 8, 2009 @ 7:28 pm You really suck!! AJAX can do that and more!!! Comment by AJAX FREAK — February 13, 2009 @ 8:18 am Hello, I'm Hiro Nakamura. Could somebody guide me on how to put in the file and code for "multiFlashRailsFileUpload"? I am really unclear about it. Many thanks to anybody who would help me. I need the root information on where and how to put the directory / folder for this multiFlashRailsFileUpload file. Great thanks to you all Comment by headache men — April 20, 2009 @ 12:07 am
http://blog.vixiom.com/2006/09/08/multiple-file-upload-with-flash-and-ruby-on-rails/
2012-11-21 08:24:49 8 Comments When you have server-side code (i.e. some ApiController) and your functions are asynchronous - so they return Task<SomeObject> - is it considered best practice that any time you await functions you call ConfigureAwait(false)? I had read that it is more performant since it doesn't have to switch thread contexts back to the original thread context. However, with ASP.NET Web Api, if your request is coming in on one thread, and you await some function and call ConfigureAwait(false), that could potentially put you on a different thread when you are returning the final result of your ApiController function. I've typed up an example of what I am talking about below:
public class CustomerController : ApiController
{
    public async Task<Customer> Get(int id)
    {
        // you are on a particular thread here
        var customer = await SomeAsyncFunctionThatGetsCustomer(id).ConfigureAwait(false);
        // now you are on a different thread! will that cause problems?
        return customer;
    }
}
@Stephen Cleary 2012-11-21 13:40:46 Update: ASP.NET Core does not have a SynchronizationContext. If you are on ASP.NET Core, it does not matter whether you use ConfigureAwait(false) or not. For ASP.NET "Full" or "Classic" or whatever, the rest of this answer still applies. Original post (for non-Core ASP.NET): This video by the ASP.NET team has the best information on using async on ASP.NET.
This is true with UI applications, where there is only one UI thread that you have to "sync" back to. In ASP.NET, the situation is a bit more complex. When an async method resumes execution, it grabs a thread from the ASP.NET thread pool. If you disable the context capture using ConfigureAwait(false), then the thread just continues executing the method directly. If you do not disable the context capture, then the thread will re-enter the request context and then continue to execute the method. So ConfigureAwait(false) does not save you a thread jump in ASP.NET; it does save you the re-entering of the request context, but this is normally very fast. ConfigureAwait(false) could be useful if you're trying to do a small amount of parallel processing of a request, but really TPL is a better fit for most of those scenarios. Actually, just doing an await can do that. Once your async method hits an await, the method is blocked but the thread returns to the thread pool. When the method is ready to continue, any thread is snatched from the thread pool and used to resume the method. The only difference ConfigureAwait makes in ASP.NET is whether that thread enters the request context when resuming the method. I have more background information in my MSDN article on SynchronizationContext and my async intro blog post. @Aliostad 2012-11-21 16:41:06 My answer got deleted so I cannot answer you there. But I am not confusing contexts here; I do not know about you. What is meant by context is Thread Storage Area data. The context does not flow in ContinueWith by default - period. TSA data does not get copied - please prove me wrong if you think otherwise. You can check this by looking at HttpContext.Current. That is why we go through hoops and hoops to flow that. @Stephen Cleary 2012-11-21 17:15:47 Thread-local storage isn't flowed by any context. HttpContext.Current is flowed by the ASP.NET SynchronizationContext, which is flowed by default when you await, but it's not flowed by ContinueWith.
OTOH, the execution context (including security restrictions) is the context mentioned in CLR via C#, and it is flowed by both ContinueWith and await (even if you use ConfigureAwait(false)). @Arash Emami 2012-11-28 02:08:08 Thank you Stephen, I marked your post as the answer. I had to read it a few times to get it, but it seems like the only time it would ever be useful to call ConfigureAwait(false) is in a desktop/mobile app, where you make an asynchronous call (like an HttpWebRequest) and would rather do the processing of the result off the UI thread. Otherwise, it is not worth cluttering up the code for any small performance gains made when using ASP.NET. @NathanAldenSr 2014-05-08 19:36:27 Wouldn't it be great if C# had native language support for ConfigureAwait(false)? Something like 'awaitnc' (await no context). Typing out a separate method call everywhere is pretty annoying. :) @Stephen Cleary 2014-05-08 20:32:10 @NathanAldenSr: It was discussed quite a bit. The problem with a new keyword is that ConfigureAwait actually only makes sense when you await tasks, whereas await acts on any "awaitable." Other options considered were: Should the default behavior discard context if in a library? Or have a compiler setting for the default context behavior? Both of these were rejected because it's harder to just read the code and tell what it does. @Royi Namir 2014-05-10 08:46:20 @StephenCleary I don't understand your line: "If you disable the context capture using ConfigureAwait(false), then the thread just continues executing the method directly." Are you saying that the thread is not back in the threadpool, but still waits till the operation is finished, and when it does, it continues the callback (with the same thread)? Or are you saying that when the async operation finishes, another/the same thread from the threadpool comes back to continue the callback, but it just doesn't enter the execution context?
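The context-capture behavior discussed in the answer above can be sketched in code. This is a minimal illustration, not code from the original posts; the method and URL parameter are made-up names:

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class PageFetcher
{
    // Helper/library-style method: nothing after the awaits touches
    // HttpContext, so there is no need to resume inside the ASP.NET
    // request context.
    public static async Task<string> FetchPageAsync(HttpClient client, string url)
    {
        // Without ConfigureAwait(false): the awaiter captures
        // SynchronizationContext.Current, and the continuation re-enters
        // the request context before running.
        // With ConfigureAwait(false): the continuation simply runs on the
        // thread-pool thread that completed the task.
        var response = await client.GetAsync(url).ConfigureAwait(false);
        return await response.Content.ReadAsStringAsync().ConfigureAwait(false);
    }
}
```

In a controller action, by contrast, you would typically leave the default (no ConfigureAwait call) so that code after the await can still use the request context.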
@Stephen Cleary 2014-05-11 01:13:51 @RoyiNamir: When the async operation finishes, a thread from the thread pool is used to actually complete the task. If you use ConfigureAwait(false), then that same thread resumes executing the async method without entering the request context. (BTW, this is an implementation detail; this behavior is undocumented). @Anshul Nigam 2014-05-27 09:18:12 @StephenCleary, so does this mean that in a webapi controller we should not use ConfigureAwait(false)? @Stephen Cleary 2014-05-27 12:28:59 @AnshulNigam: You should use ConfigureAwait(false) whenever you don't need the request context. @Anshul Nigam 2014-05-28 09:21:35 @StephenCleary, but isn't it very unlikely not to use the request context, because for every request one needs to send a response, something like this.Request.CreateResponse(HttpStatusCode.xxx)? @Stephen Cleary 2014-05-28 10:53:47 @AnshulNigam: Which is why controller actions need their context. But most methods that the actions call do not. @Jonathan Roeder 2014-09-09 18:30:05 @StephenCleary, isn't it worth mentioning your other points on deadlocks? stackoverflow.com/questions/13140523/… @Stephen Cleary 2014-09-09 18:57:18 @JonathanRoeder: Generally speaking, you shouldn't need ConfigureAwait(false) to avoid a Result/Wait-based deadlock because on ASP.NET you should not be using Result/Wait in the first place. @Eric J. 2015-10-13 19:15:21 @StephenCleary: Do I understand correctly from your blog ("A good rule of thumb is to use ConfigureAwait(false) unless you know you do need the context") that using ConfigureAwait(false) will give a performance edge because the current context (e.g. ASP.Net context) is not flowed, but leaving ConfigureAwait(false) out should cause no functional impact? @Stephen Cleary 2015-10-13 19:34:14 @EricJ.: In some cases it can give you better performance. It'll never give you worse performance. There's no functional impact unless someone is using ConfigureAwait(false) as part of a sync-over-async hack.
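The "sync-over-async hack" and the Result/Wait deadlock mentioned in these comments have a classic shape on pre-Core ASP.NET. The following is a simplified sketch with illustrative names (ReportController and ComputeAsync are not from the original posts):

```csharp
using System.Threading.Tasks;
using System.Web.Http;

public class ReportController : ApiController // hypothetical controller
{
    static async Task<int> ComputeAsync()
    {
        // Without ConfigureAwait(false), this await captures the
        // ASP.NET request context and needs it again to resume.
        await Task.Delay(1000);
        return 42;
    }

    public int Get()
    {
        // .Result blocks the request thread while it holds the request
        // context; ComputeAsync's continuation is queued to that same
        // context and can never enter it: classic deadlock.
        // Fixes: make this action async and await ComputeAsync(), or put
        // ConfigureAwait(false) on the await inside ComputeAsync.
        return ComputeAsync().Result;
    }
}
```

This is why the advice in the thread is twofold: don't block on async code at all, and use ConfigureAwait(false) in code that doesn't need the context so that a caller's blocking mistake doesn't deadlock.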
@user4205580 2015-11-14 18:12:31 @StephenCleary I didn't want to ask it here in the comments: stackoverflow.com/questions/33711136/… @Stephen Cleary 2015-11-14 19:04:04 @user4205580: You've already got two good answers. The reason it's not behaving as you expect is that your inner "asynchronous" method is actually synchronous (and the compiler explicitly warns you about this). Make it truly asynchronous (e.g., add an await Task.Delay(200);), and you'll see the thread returned to the thread pool and, 200ms later, a new thread taken from the thread pool to resume the method. @neleus 2016-08-18 15:42:57 @StephenCleary, As I can see, the root cause is improper use of Result/Wait, where ConfigureAwait(false) acts as a workaround. So why do so many folks suggest putting this workaround everywhere instead of just fixing the cause? For example, using Task.Run(async () => { await ...}).Wait(); will do the trick, and this is easier than putting ConfigureAwait(false) 100 times everywhere in library code. Why does no one suggest it? @neleus 2016-08-18 15:47:57 This is an example resharper-plugins.jetbrains.com/packages/ConfigureAwaitChecker @Alexander Derck 2016-08-30 13:02:00 @StephenCleary I'm confused about your comments: "You should use ConfigureAwait(false) whenever you don't need the request context.", but earlier you say "HttpContext.Current is flowed by the ASP.NET SynchronizationContext, which is flowed by default when you await". So any method has access to the request context regardless? When I test it, I can always access HttpContext.Request in my methods, no matter if I call ConfigureAwait(false) or not. @Stephen Cleary 2016-08-30 13:15:41 The "by default" means "unless you use ConfigureAwait(false)". HttpContext.Request is not going through HttpContext.Current; HttpContext.Current is a static property. Also, note that HttpContext is not safe for multithreaded use; if you use ConfigureAwait(false), you can access HttpContext.Request, but you definitely shouldn't.
@Alexander Derck 2016-08-30 13:20:21 @StephenCleary Oh, I misinterpreted your comment, my bad. Thanks for clearing it up @eglasius 2017-01-12 12:29:23 @StephenCleary any thoughts on what neleus suggested as an alternate workaround? @Stephen Cleary 2017-01-12 14:14:45 @eglasius: It only works if the code executed by the Task.Run doesn't depend on the current context (e.g., HttpContext.Current, any ASP.NET APIs - some of which implicitly depend on an ASP.NET context, dependency injection resolution that is scoped to a request, etc). And keep in mind that "works" in this scenario means "wastes a thread". await is still the best solution. @eglasius 2017-01-12 14:29:08 Yes, but if you are running any async code that needs the context and thus wouldn't have ConfigureAwait(false), then you are hit by the deadlock anyway. Both approaches don't work in that case, or am I missing something? @Stephen Cleary 2017-01-12 15:16:13 @eglasius: The best way to avoid a deadlock is to not block on async code at all. @eglasius 2017-01-12 15:35:09 I couldn't agree more. After re-reading the whole thread I can see I didn't set the scenario well. What I get now is: the reason to put it everywhere is to avoid unnecessarily restoring the context, which can give you some (small) gain in performance (so no for the deadlocks). Some people use it for deadlocks when calling it from non-async code (typically in big code-base upgrades where it is not possible to fully move to async in one go). It is for this latter case that I was comparing ConfigureAwait(false) everywhere to a single Task.Run, as it is easy for it to be missed and end up wrong. @JB's 2017-02-02 05:56:25 I had the same issue with my ASP.NET webform app. My question states the problem but I can't find any solution. I used ConfigureAwait(false) with every await and it worked, but when the application runs for the first time the page loads as expected; if we request the page again, the page never loads again.
@StephenCleary any help on this matter will be fruitful. Thanks @Stephen Cleary 2017-02-02 14:01:16 @MuhammadIqbal: I don't know if WebMethod supports async. I've never used asmx. (Note: WebMethod is asmx, not WebForms). @JB's 2017-02-02 17:19:04 I have defined WebMethod in my code-behind file; every aspx page has tons of WebMethods in its code-behind file. [WebMethod] public static async Task<List<XXX>> GetXXX() => await new BusinessLogic().GetXXX().ConfigureAwait(false); @StephenCleary Thank you very much for the input, but as I am in this scenario, can you please give me any recommendations? @Stephen Cleary 2017-02-02 18:05:53 @MuhammadIqbal: My recommendation would be to move from asmx to WebAPI. @JJS 2017-05-31 14:13:13 @StephenCleary please update this post to include content from your recent post blog.stephencleary.com/2017/03/… @Mick 2017-06-22 02:55:47 One thing missing from this answer is globalisation and culture. Using ConfigureAwait(false) loses the web.config system.web/globalization settings. See my answer below for more details @Mehdi Dehghani 2019-02-25 06:32:11 @StephenCleary what about Xamarin? Is it the same as ASP.NET Core in this case? @Stephen Cleary 2019-02-25 21:37:37 @MehdiDehghani: No. UI frameworks including Xamarin have a synchronization context, and code must be in that context to access UI elements. @JBoothUA 2019-07-02 00:58:40 the whole topic is so vague and misleading. even @stephen @JBoothUA 2019-07-02 00:59:40 you all need to step back and realize there are far too many other scenarios that can cause deadlock and this is still a huge issue. @StephenCleary is confident but the entire framework is whack. in .net core or not. shame @JBoothUA 2019-07-02 01:00:26 this thread should be deleted @Aliostad 2012-11-21 13:23:29 I have some general thoughts about the implementation of Task. ConfigureAwait was introduced in 4.5. Task was introduced in 4.0.
With Task.ContinueWith they do not, b/c it was realised that a context switch is expensive, so it is turned off by default. I have got a few posts on the subject, but my take - in addition to Tugberk's nice answer - is that you should turn all APIs asynchronous and ideally flow the context. Since you are doing async, you can simply use continuations instead of waiting, so no deadlock will be caused since no wait is done in the library, and you keep things flowing so the context is preserved (such as HttpContext). The problem is when a library exposes a synchronous API but uses another asynchronous API - hence you need to use Wait()/Result in your code. @Stephen Cleary 2012-11-21 13:48:36 1) You can call Task.Dispose if you want; you just don't need to the vast majority of the time. 2) Task was introduced in .NET 4.0 as part of the TPL, which did not need ConfigureAwait; when async was added, they reused the existing Task type instead of inventing a new Future.
And even if you're not confused yourself (but I think you are; I believe there is no context that used to flow with Threads but doesn't anymore with ContinueWith()), this makes your answer confusing to read. @Aliostad 2012-11-22 09:05:42 @StephenCleary yes, a lib dev should not need to know; it is down to the client. I thought I made it clear, but my phrasing was not clear. @Aliostad 2012-11-22 09:08:32 Item 2: I do not agree with. async has got nothing to do with this; you could make a decision to flow or not in 4.0 all the same. @Aliostad 2012-11-22 09:16:21 @StephenCleary thanks for the link. Perhaps I was confusing them :) @osexpert 2018-12-27 23:48:33 One more general thought: ConfigureAwait does not belong where it is today. Imagine you have an async method X which is calling 100+ async methods with await and ConfigureAwait(Y). This is plain stupid. Y is common for the method X, so this ConfigureAwait "thing" belongs in a method attribute on X. @enorl76 2019-01-03 02:00:27 They should've invented the Future, and gone with the Thenable spec. Promises in JavaScript are so much easier to deal with. Now as a library developer, all of my async calls are potential deadlocks if I don't pepper MY code with ConfigureAwait(false). All to make your little examples using await and async keywords look nice. @Mick 2017-06-22 02:49:01 The biggest drawback I've found with using ConfigureAwait(false) is that the thread culture is reverted to the system default. If you've configured a culture, e.g. ..., and you're hosting on a server whose culture is set to en-US, then you will find before ConfigureAwait(false) is called CultureInfo.CurrentCulture will return en-AU and after you will get en-US. i.e. If your application is doing anything which requires culture-specific formatting of data, then you'll need to be mindful of this when using ConfigureAwait(false). @Stephen Cleary 2017-06-22 12:49:33 Modern versions of .NET (I think since 4.6?)
will propagate culture across threads, even if ConfigureAwait(false) is used. @Mick 2017-06-23 01:15:41 Thanks for the info. We are indeed using .NET 4.5.2. @tugberk 2012-11-21 09:01:21 Brief answer to your question: No. You shouldn't call ConfigureAwait(false) at the application level like that. TL;DR version of the long answer: If you are writing a library where you don't know your consumer and don't need a synchronization context (which you shouldn't in a library, I believe), you should always use ConfigureAwait(false). Otherwise, the consumers of your library may face deadlocks by consuming your asynchronous methods in a blocking fashion. This depends on the situation. Here is a bit more detailed explanation on the importance of the ConfigureAwait method (a quote from my blog post): Also, here are two great articles for you which are exactly for your question: Finally, there is a great short video from Lucian Wischik exactly on this topic: Async library methods should consider using Task.ConfigureAwait(false). Hope this helps. @casperOne 2012-11-21 15:15:36 "The GetAwaiter method of Task looks up SynchronizationContext.Current. If the current synchronization context is not null, the continuation that gets passed to that awaiter will get posted back to that synchronization context." - I'm getting the impression that you're trying to say that Task walks the stack to get the SynchronizationContext, which is wrong. The SynchronizationContext is grabbed before the call to the Task and then the rest of the code is continued on the SynchronizationContext if SynchronizationContext.Current is not null. @tugberk 2012-11-21 16:53:08 @casperOne I intended to say the same. @binki 2014-12-30 03:43:28 Shouldn't it be the responsibility of the caller to ensure that SynchronizationContext.Current is clear, or that the library is called within a Task.Run(), instead of having to write .ConfigureAwait(false) all over the class library?
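The blocking-on-async deadlock this answer warns about is not specific to C#. As an illustration only (Java has no ConfigureAwait; here a single-threaded ExecutorService plays the role of the captured SynchronizationContext, and thenApplyAsync plays the role of the continuation posted back to it), a thread that blocks while its own continuation is queued behind it can never finish:

```java
import java.util.concurrent.*;

class ContextDeadlockSketch {

    // Returns "deadlocked" when the continuation scheduled back onto the
    // single-threaded "context" cannot run because that thread is blocked.
    static String blockOnContext() {
        ExecutorService context = Executors.newSingleThreadExecutor();
        try {
            Future<String> outer = context.submit(() -> {
                CompletableFuture<String> inner = CompletableFuture
                        .supplyAsync(() -> "done")          // work finishes elsewhere
                        .thenApplyAsync(s -> s, context);   // continuation posted to "context"
                try {
                    // The context thread blocks here waiting for a continuation
                    // that can only run on this very thread: the classic deadlock.
                    return inner.get(500, TimeUnit.MILLISECONDS);
                } catch (TimeoutException e) {
                    return "deadlocked";
                }
            });
            return outer.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            context.shutdownNow();
        }
    }

    public static void main(String[] args) {
        System.out.println(blockOnContext()); // prints "deadlocked"
    }
}
```

Replacing thenApplyAsync(s -> s, context) with a plain thenApply, roughly the analogue of ConfigureAwait(false), lets the continuation run off the context, so the blocking get() would complete with "done" instead.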
@ToolmakerSteve 2015-09-21 19:27:19 @binki - on the other hand: (1) presumably a library is used in many applications, so doing the effort one time in the library to make it easier on applications is cost-effective; (2) presumably the library author knows he has written code that has no reason to require continuing on the original context, which he expresses by those .ConfigureAwait(false)s. Perhaps it would be easier for library authors if that were the default behavior, but I would presume that making it a little bit harder to write a library correctly is better than making it a little bit harder to write an app correctly. @Keith Robertson 2016-09-06 19:20:26 Another way to put it is that ConfigureAwait(false) is not for use in CustomerController.Get() (or high-level application code, including UI event handlers), which, if it starts with a SynchronizationContext, almost certainly needs that context later in the method. It is for implementing library code like SomeAsyncFunctionThatGetsCustomer() which does not need the application context. @Michael Parker 2016-10-07 15:36:59 Is it possible to simply use ConfigureAwait at the top level, or does it literally need to be on every call all the way down the stack? It would suck to forget to do it on one call and then have that deadlock the main thread. @tugberk 2016-10-10 09:15:41 @MichaelParker you need it on every call that you await at the library level. @Quarkly 2018-07-25 15:56:10 Why should the author of a library coddle the consumer? If the consumer wants to deadlock, why should I prevent them? @masterwok 2019-03-22 13:31:17 @DonaldAirey I'm wondering this same question and I completely agree with you. What have you found that works best for you? I'm currently thinking it would be best to ConfigureAwait(false) at the WebApiController level.
@Filip Cordas 2019-04-24 21:46:31 @masterwok I started looking around for a good answer to this question, but from what I can see all the answers are a copy-paste of someone's opinion on how to do HTTP calls in WPF applications. Best answer: use ConfigureAwait(false) when you need to; don't do cargo-culting for no reason. @tugberk 2019-04-25 21:42:49 @FilipCordas I guess you have never come across a deadlock situation with this? If you read all the answers, you will see that there is a reason behind this; people don't usually write extra code because it's fun. @Filip Cordas 2019-04-27 00:01:02 @tugberk no never, nor should anyone. You should not be trying to do async work in synchronous functions; it's not that hard to avoid, nor is it ever unavoidable. It's silly to do something without a good reason. @tugberk 2019-04-28 17:22:31 @FilipCordas I don't think you are reading things correctly. At the time when this question was asked, which was in 2012, there were legitimate cases where you HAD TO run async code in a synchronous fashion (e.g. due to ASP.NET MVC limitations on child actions, etc.). Like this one: github.com/tugberkugurlu/Bloggy/blob/… @tugberk 2019-04-28 17:24:00 As you can see there, I had to use AsyncHelper.RunSync as the RavenDB client didn't handle it correctly at the library level, which resulted in this PR: github.com/ravendb/ravendb/pull/545 @tugberk 2019-04-28 17:24:32 @FilipCordas So, my advice to you is to try to understand the context next time before making further judgement calls. @Filip Cordas 2019-04-29 20:23:11 @tugberk Well, AsyncController was there from 2013, so there was no "had to"; it was just simpler. My problem is with "you should always use ConfigureAwait(false)" when in most cases you should not be doing it. You should use it if you don't need the SynchronizationContext, not always.
And if you are writing a library you should be writing test cases that make sure your library works and performs well, not just calling methods because someone put them in a blog post. @Filip Cordas 2019-04-29 20:28:55 Also, if you want people to use your library in a sync way, provide a sync API; don't make it simpler for people to follow bad practices. @tugberk 2019-05-01 14:35:43 @FilipCordas you still don't get the case: child actions DIDN'T support async back then. It doesn't matter whether you have AsyncController or not.
- Learn - Survival Shooter tutorial - Scoring points. Checked with version: 4.6 - Difficulty: Beginner. This is part 8 of 10 of the Survival Shooter tutorial, in which you will program the player's score, updating it with each kill. Transcript - 00:01 - 00:02 So far we have a game - 00:02 - 00:05 where we can shoot a singular enemy - 00:05 - 00:07 and you can be killed by - 00:07 - 00:09 that very same enemy. - 00:09 - 00:11 But currently - 00:11 - 00:14 there is no way to score points. - 00:14 - 00:16 So we want to add more to our UI - 00:16 - 00:18 and we want to add the ability to - 00:18 - 00:20 score points and represent that inside - 00:20 - 00:21 our UI as well. - 00:21 - 00:23 Okay, so, what I want you guys to do - 00:23 - 00:26 is to take a look back at your hierarchy - 00:27 - 00:30 and we are going to look at the HUD Canvas. - 00:30 - 00:32 So the HUD Canvas is our - 00:32 - 00:34 UI canvas, as you remember earlier we placed - 00:34 - 00:38 in the Health UI which gives us the slider for our health - 00:38 - 00:39 and the little heart icon - 00:39 - 00:41 as well as the damage image. - 00:41 - 00:45 But this time we are going to create a score text. - 00:45 - 00:47 So what I'm going to do very quickly is just at the top - 00:47 - 00:50 of the scene view click the 2D button - 00:50 - 00:52 to switch back to 2D mode and I'm just - 00:52 - 00:54 going to zoom right out - 00:54 - 00:56 or what I can do is double click my HUD Canvas - 00:56 - 00:59 to frame it and then zoom back in. - 00:59 - 01:01 What I'm doing is selecting my - 01:01 - 01:03 rect tool because whenever I work on - 01:03 - 01:06 UI stuff I want that 5th tool. - 01:06 - 01:09 Show us the 2D button really quick while you're zoomed in. - 01:09 - 01:11 There it is, 2D button.
- 01:13 - 01:15 Once we're in 2D mode we can then - 01:15 - 01:17 go ahead and create some more UI - 01:17 - 01:19 so this time we're going to make a - 01:19 - 01:21 child object of the HUD Canvas. - 01:21 - 01:23 Therefore I'm going to right click it - 01:23 - 01:25 go to UI and Text. - 01:26 - 01:28 So these UI things are basically a - 01:28 - 01:30 collection of ready-made objects that - 01:30 - 01:32 you can start working with. - 01:32 - 01:33 All of the things that are in the UI system, - 01:33 - 01:35 much like the rest of Unity, - 01:35 - 01:37 are actually components, so what we're really doing - 01:37 - 01:39 is creating a new game object - 01:39 - 01:41 with a text component attached to it. - 01:41 - 01:43 By default when you make a new text component - 01:43 - 01:47 you made something that is default Arial text - 01:47 - 01:49 and it is in grey so that it will - 01:49 - 01:51 work neutrally on light or dark backgrounds. - 01:51 - 01:53 So we're going to rename this - 01:53 - 01:55 The first thing we're going to do is call Text - 01:55 - 01:59 ScoreText, so capital S and T. - 01:59 - 02:02 So rename Text to ScoreText. - 02:02 - 02:04 Then what I'm going to do is to - 02:04 - 02:06 re-anchor this to the top centre of the screen. - 02:06 - 02:08 So if you remember we learnt about - 02:08 - 02:12 rect transform's anchor presets. - 02:12 - 02:13 And the way that we're going to do that is just - 02:13 - 02:16 to set the anchor rather than all of it - 02:16 - 02:19 to the top centre. - 02:19 - 02:21 So it's this preset here. - 02:23 - 02:24 We don't need to Alt, we don't need to Shift, - 02:24 - 02:26 we just need to click that singularly. - 02:26 - 02:28 And what that does is moves our anchors - 02:28 - 02:30 to the top, so you can see our - 02:30 - 02:33 little flower pattern thing is now - 02:33 - 02:34 sat at the top. - 02:34 - 02:36 And from there we can then adjust - 02:36 - 02:37 the positions as appropriate. 
- 02:37 - 02:39 You'll now notice that because I've moved those anchors - 02:39 - 02:44 the Y position is -220, so the centre of the game view - 02:44 - 02:48 is -220 pixels or units from the top. - 02:48 - 02:54 So now I can say the Y position is going to be -55 - 02:54 - 02:57 and I'll make sure that my X is also on 0. - 02:57 - 03:00 That moves the text in relation to the anchor. - 03:00 - 03:03 Yeah, so if I set that to 0 - 03:03 - 03:06 you can see that the pivot is 0.5, 0.5, in the centre. - 03:06 - 03:08 But if I drag this down - 03:10 - 03:13 you can see that I'm moving it a negative value. - 03:13 - 03:15 So I'll put that around -55. - 03:17 - 03:19 The next thing we're going to do is set up the width, - 03:19 - 03:21 I'll set that to 300. - 03:22 - 03:24 And I'm going to set the height to 50. - 03:25 - 03:27 And I'll set the color to white. - 03:27 - 03:30 so in the text component you have all the controls - 03:30 - 03:32 for how the text displays - 03:32 - 03:34 and I'm going to drag in the color picker - 03:34 - 03:37 so that my color for the text is white. - 03:39 - 03:42 Then because we don't want it to just be Arial - 03:42 - 03:45 and very small we're going to set the font. - 03:45 - 03:47 And we're going to use the circle select and - 03:47 - 03:49 choose LuckiestGuy, so that's a font - 03:49 - 03:51 that we've included in this. - 03:51 - 03:53 If you're not used to doing any kind of UI - 03:53 - 03:56 work in Unity, because Unity is authoring another game - 03:56 - 03:58 or application effectively - 03:58 - 04:02 you need to include that font within your project. - 04:02 - 04:06 So we have the truetype file for LuckiestGuy within that.
- 04:11 - 04:13 That means that when we export it will have - 04:13 - 04:15 the font and use it, it doesn't work like - 04:15 - 04:17 word processors or Photoshop, it won't - 04:17 - 04:19 just be able to pick from your library, you - 04:19 - 04:22 have to create a copy of the truetype within your project. - 04:22 - 04:24 So our score text has that font - 04:24 - 04:27 and we're going to set the font size to 50. - 04:27 - 04:30 And we're going to use the alignment under paragraph - 04:30 - 04:32 to centre and middle. - 04:33 - 04:36 So centre and middle and font size to 50. - 04:36 - 04:40 And you should see that we have new text written in there. - 04:40 - 04:42 Obviously we don't want it to say new text, - 04:42 - 04:44 we want to see what our actual score will look like. - 04:44 - 04:46 So in the Text field I'm going to type - 04:46 - 04:50 in Score: 0. - 04:50 - 04:52 That's the default that it's going to look like - 04:52 - 04:54 when we start the game. - 04:54 - 04:56 Also important to note that we don't have to - 04:56 - 04:59 set the text to say Score: 0. - 04:59 - 05:01 Our script is actually going to - 05:01 - 05:03 write what it is that text should be - 05:03 - 05:05 however it's really hard to tell what - 05:05 - 05:07 this is going to look like when we're playing our - 05:07 - 05:08 game without putting some value in there. - 05:08 - 05:10 So you might say later 'why did we set that text - 05:10 - 05:12 when the script is already doing it?'. - 05:12 - 05:14 The reason is so that we can visually see - 05:14 - 05:16 'okay, that looks pretty good' - 05:16 - 05:19 now let's go ahead and apply our scripts and do the rest. - 05:19 - 05:21 So it's just a placeholder. - 05:21 - 05:24 Now that we've done this I'm going to save my scene. - 05:25 - 05:26 So File - Save. 
- 05:26 - 05:28 And the next thing I'm going to do is put a - 05:28 - 05:30 slight drop shadow, so there are some - 05:30 - 05:32 effects that come with the UI system and - 05:32 - 05:34 we can add them as a separate component. - 05:34 - 05:36 We can keep the ScoreText selected, - 05:36 - 05:39 go to Add Component and just type the word Shadow - 05:39 - 05:41 and it will immediately find that component - 05:41 - 05:43 and you can hit Return. - 05:44 - 05:46 That will just give you a slight drop shadow. - 05:46 - 05:48 I'm going to make it a bit more obvious - 05:48 - 05:54 by changing the Effect Distance to 2, -2 in the X and Y axis. - 05:55 - 05:57 It's also important to keep that - 05:57 - 05:59 Use Graphic Alpha checked, - 05:59 - 06:01 otherwise if you change the alpha - 06:01 - 06:04 of the text the shadow won't also change. - 06:04 - 06:06 What you'll notice about this is if I change the - 06:06 - 06:08 alpha of the text itself the shadow - 06:08 - 06:10 underneath is also fading out. - 06:10 - 06:12 Whereas if it's not checked - 06:13 - 06:14 we can fade this and then the shadow - 06:14 - 06:17 will get left behind which is not desirable. - 06:19 - 06:21 And then we need something to set - 06:21 - 06:23 the score, something to be managing - 06:23 - 06:26 the score, updating the text component's - 06:26 - 06:28 text value with Score 10, Score 20, - 06:28 - 06:30 whatever happens in the game. - 06:30 - 06:32 And the way that we're going to do this is - 06:32 - 06:34 by adding a Manager script. - 06:34 - 06:36 So what I'd like you to do is look in - 06:36 - 06:39 the Scripts - Managers folder - 06:39 - 06:42 and you will find our ScoreManager. - 06:42 - 06:44 We're going to drag and drop this on to the - 06:44 - 06:47 ScoreText game object. - 06:49 - 06:51 Then once you've applied it you should see - 06:51 - 06:53 it at the bottom of the list of components - 06:54 - 06:55 right underneath the shadow - 06:55 - 06:57 and we can double click to open that up.
- 06:59 - 07:01 So at the start we again have our public variables. - 07:01 - 07:04 You'll notice there's a new keyword there - 07:04 - 07:06 Static. - 07:06 - 07:10 So a static variable doesn't belong to the instance - 07:10 - 07:13 of the class, it belongs to the class itself. - 07:13 - 07:14 So let me explain. - 07:14 - 07:17 Whenever we're dragging on EnemyHealth - 07:17 - 07:19 or PlayerHealth or PlayerMovement on to an object - 07:19 - 07:22 we're creating an instance of that class - 07:22 - 07:24 and applying it to the game object, - 07:24 - 07:26 so they are all instances of a class. - 07:27 - 07:29 And so all of the variables, - 07:29 - 07:31 they're instance variables, - 07:31 - 07:35 each enemy has its own health, - 07:35 - 07:38 each player has its own speed, etcetera. - 07:38 - 07:42 Static variables do not belong to an instance, - 07:42 - 07:44 they belong to the class itself. - 07:44 - 07:46 So what that means is, - 07:46 - 07:49 in order to reference the score there - 07:50 - 07:54 we don't need to go ScoreManager variable - 07:54 - 07:59 GetComponent ScoreManager then use it, we just say - 07:59 - 08:03 ScoreManager type . score. - 08:03 - 08:06 So we don't need to create a variable to use it - 08:06 - 08:09 we're just going to use it through the type itself. - 08:09 - 08:12 So it only effectively exists in one place - 08:12 - 08:14 we're not going to address a bunch of instances where - 08:14 - 08:17 this exists, we're changing it in 1 place. - 08:17 - 08:19 We could still have multiple - 08:19 - 08:21 instances of ScoreManager, - 08:21 - 08:23 we could drag multiple ones on to - 08:23 - 08:27 a game object, on to different game objects, doesn't matter. - 08:27 - 08:30 We're not going to, because that would break everything. - 08:30 - 08:34 But if we did, all of them would share the same score, - 08:34 - 08:36 because it belongs to the type - 08:36 - 08:38 not to the instance.
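The static-versus-instance distinction described in the transcript is not Unity-specific. A minimal Java sketch of the same idea (class and field names are illustrative, not from the tutorial):

```java
class Enemy {
    // static: belongs to the class itself; every Enemy shares this one counter.
    static int score = 0;

    // instance: each Enemy object carries its own copy.
    int health;

    Enemy(int health) {
        this.health = health;
    }
}

class StaticVsInstanceDemo {
    public static void main(String[] args) {
        Enemy bunny = new Enemy(10);
        Enemy bear = new Enemy(50);

        // Accessed through the type; no instance, no GetComponent-style lookup.
        Enemy.score += 10;

        System.out.println(Enemy.score);   // shared by all enemies
        System.out.println(bunny.health);  // per-instance
        System.out.println(bear.health);   // per-instance
    }
}
```

Changing Enemy.score affects the single shared counter, while each enemy's health stays independent, exactly the behaviour the transcript describes for ScoreManager.score.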
- 08:38 - 08:42 So the next thing is we need a reference to our Text component. - 08:42 - 08:43 In Awake we're going to set up that reference - 08:43 - 08:45 to the text component. - 08:45 - 08:47 Then we need to reset the score - 08:47 - 08:49 because if we die we want - 08:49 - 08:51 the game to reset, so, - 08:51 - 08:53 we need to set the score back to 0. - 08:53 - 08:55 And in our update function - 08:55 - 08:59 what we're doing there, the text.text is - 08:59 - 09:01 we're changing the text property - 09:01 - 09:03 of the text component. - 09:03 - 09:05 Okay, so the text component that we have - 09:05 - 09:10 that string that we said Score: 0, - 09:10 - 09:13 that was the Score text, - 09:13 - 09:16 that was the text property of the component. - 09:16 - 09:19 So what we're doing is we're setting that - 09:19 - 09:21 to a completely new string, we're not changing that, - 09:21 - 09:24 we're just setting it completely afresh. - 09:24 - 09:27 We're changing it to Score: - 09:27 - 09:29 and then that number will be the score. - 09:30 - 09:32 So very simply that's our ScoreManager and - 09:32 - 09:34 if you happen to save it it'll ask you to convert - 09:34 - 09:36 the line endings, it's no big deal. - 09:37 - 09:40 Okay, we'll need to continue scoring points - 09:40 - 09:42 and I'm going to select my - 09:42 - 09:44 Zombunny in the hierarchy - 09:44 - 09:48 and locate the EnemyHealth script. - 09:48 - 09:51 And we're going to open the EnemyHealth script. - 09:52 - 09:55 and have a look down at the very bottom - 09:55 - 09:58 at the StartSinking function.
- 09:58 - 10:01 James mentioned the public static - 10:01 - 10:03 integer score earlier, - 10:03 - 10:05 he promised you very kindly that you could - 10:05 - 10:07 indeed say the name of - 10:07 - 10:11 the class, ScoreManager.score, - 10:11 - 10:13 so without saying GetComponent - 10:13 - 10:16 or creating an instance of the script and assigning it to this part of the script, - 10:16 - 10:20 we can very simply just say ScoreManager.score. - 10:20 - 10:22 So we're going to re-enable that by deleting - 10:22 - 10:24 the 2 // comments. - 10:24 - 10:27 And what we're doing there is adding to it - 10:27 - 10:30 the value of ScoreValue. - 10:30 - 10:36 So scoreValue within this particular script is - 10:36 - 10:39 a public variable that we can change. - 10:39 - 10:44 This enemy has a value of 10, so that when you kill it you get 10 points. - 10:44 - 10:46 This way we can apply this - 10:46 - 10:48 EnemyHealth script to different enemies - 10:48 - 10:50 and have different score values. - 10:50 - 10:52 So if you wanted to make - 10:52 - 10:54 killing the Hellephant worth a lot then you could - 10:54 - 10:56 change that value, you don't need to go in to the - 10:56 - 10:58 script and change it, it's a public value - 10:58 - 11:00 so it appears in the inspector. - 11:00 - 11:03 I just want to make a quick point about static variables. - 11:03 - 11:07 So you know it's a lot easier to do it with static there, - 11:07 - 11:10 we didn't have to create an instance variable, - 11:10 - 11:12 we didn't have to assign it in Awake - 11:12 - 11:14 and then use it, we just used it just like that. - 11:14 - 11:17 So why don't we use it for everything like that? - 11:17 - 11:18 That would be so much easier? - 11:18 - 11:22 It's because we have multiple enemies - 11:22 - 11:24 and if we wanted multiple players then - 11:24 - 11:25 we'd have more of those as well. - 11:25 - 11:27 So if we wanted to change the health - 11:27 - 11:30 of one player all of the players' health would change.
- 11:30 - 11:32 So we can't do it most of the time. - 11:32 - 11:34 It's just very specific circumstances - 11:34 - 11:36 where you'd only have one score - 11:36 - 11:38 so we can make that static - 11:38 - 11:40 to make it easier for ourselves. - 11:40 - 11:42 So we're going to save this, it's going to ask you - 11:42 - 11:45 to convert line endings, just choose Convert. - 11:45 - 11:47 This project was made on PC and then - 11:47 - 11:49 moved between PC and Mac so the files - 11:49 - 11:51 get confused but it's no big deal. - 11:52 - 11:54 So we should now go ahead and try it out. - 11:54 - 11:56 So if you save your scene and press play - 11:56 - 11:58 at the top of the interface. - 12:01 - 12:05 There we go, 10 points for a Zombunny. - 12:05 - 12:09 A very important point to make right now about prefabs - 12:09 - 12:12 is that they are incredibly useful when you want to - 12:12 - 12:14 spawn more than one object. - 12:14 - 12:16 So some people might use it for rockets in a game - 12:16 - 12:18 and you might use it for enemy spawning in a game. - 12:18 - 12:20 You can use it for really anything you want to - 12:21 - 12:23 But we want to use that for our enemies, - 12:23 - 12:25 there's going to be 3 types of enemy, - 12:25 - 12:28 and you guys have gone and created that first enemy. - 12:28 - 12:30 What we don't want is just the enemy to be - 12:30 - 12:32 sat next to the player when the game starts. - 12:32 - 12:36 So what we need to do is to save him as a prefab. - 12:36 - 12:38 So everybody make sure you've stopped play, - 12:38 - 12:42 so play is no longer on, it should be black at the top. - 12:42 - 12:44 No more blue buttons. - 12:44 - 12:45 What we're going to do is to select our - 12:45 - 12:47 Prefabs folder in the project - 12:47 - 12:50 and then grab the Zombunny in the hierarchy - 12:50 - 12:53 and drag and drop it in to the project panel - 12:53 - 12:55 either in the empty space or drop it on - 12:55 - 12:56 to the Prefabs folder.
- 12:57 - 12:59 Both of those will create the same effect, - 12:59 - 13:01 you will get a Zombunny prefab, - 13:01 - 13:03 which looks like this. - 13:05 - 13:07 And you will have all of the same settings - 13:07 - 13:09 that you had on the version that's in the scene. - 13:10 - 13:12 So that version in the scene now belongs - 13:12 - 13:14 to that prefab parent. - 13:14 - 13:16 And even if we delete the version in - 13:16 - 13:18 the scene, in the hierarchy, - 13:18 - 13:20 then the version in the project is saved, - 13:20 - 13:22 and that's very crucial. - 13:23 - 13:25 Everybody check that you've got your Zombunny - 13:25 - 13:27 in the project, it's very important. - 13:27 - 13:30 Then in the hierarchy we want to get rid of it, - 13:30 - 13:31 so I'm going to select it there and - 13:31 - 13:33 on Mac Command Backspace, - 13:33 - 13:35 on PC just the delete key. - 13:35 - 13:37 Remove it from the scene. - 13:37 - 13:40 And then save your scene. - 13:40 - 13:42 Switch off 2D mode and double click the - 13:42 - 13:44 player to zoom back in to the action - 13:44 - 13:46 so you can see the player. - 13:47 - 13:50 Okay, so that is the end of phase 8.

ScoreManager code snippet:

import UnityEngine.UI;

static var score : int;        // The player's score.
private var text : Text;       // Reference to the Text component.

function Awake ()
{
    // Set up the reference.
    text = GetComponent (Text);

    // Reset the score.
    score = 0;
}

function Update ()
{
    // Set the displayed text to be the word "Score" followed by the score value.
    text.text = "Score: " + score;
}

Related tutorials
- Variables and Functions (Lesson)
- Statics (Lesson)
About Nicholas Ladefoged - Rank: Newbie

Need help with hello world :) Nicholas Ladefoged replied to Nicholas Ladefoged's topic in For Beginners: thanks for the replies. what i imagine is a browser opening when i want to launch what i wrote. but maybe that's just a bit too far out of my league yet; i'm very new at programming, so i still have a "hard time" understanding how it works in reality.

Need help with hello world :) Nicholas Ladefoged replied to Nicholas Ladefoged's topic in For Beginners: followed your first advice, i got my file with a java icon on it. i double click it, and nothing happens? the only possible thing that might be wrong, that i can see, is maybe the java program i use to launch isn't the right one? i use "java platform se binary". you got any suggestions? actually something does happen really fast: a cmd opens and closes really fast. captured what it said with a screenshot: Error: could not find or load main class c:\users\nico\desktop\var1.jar. btw thanks for your reply. exporting it, so obvious. i just thought i saved it, and tried to run it as the saved file from eclipse.

Need help with hello world :) Nicholas Ladefoged posted a topic in For Beginners: i am using eclipse to make my first java program. the script i made looks like this:

public class hallo1 {

    /**
     * @param args
     */
    public static void main(String[] args) {
        // TODO Auto-generated method stub
        System.out.print("Hello world");
    }
}

or maybe just make it open my cmd? if that's possible.

Completely new on game programming Nicholas Ladefoged replied to Nicholas Ladefoged's topic in For Beginners: That is a really good guide. you nailed it. thank you

Completely new on game programming Nicholas Ladefoged posted a topic in For Beginners: Actually i just want to ask: how to start? been reading about it for some time now, what i understood was that c++ should be like..
when u know a little bit. so, where do i start? i know a little bit of "computer language": i can do most html, but that's about it, and making webpages is just not my thing, so i'm trying a bit of gaming now. i know there might be different opinions on how to start; just come with yours, maybe some links to a good tutorial, or just the name of a language or program. thank you.
At work recently I've been doing an informal review of our code base, in an attempt to put together a potential code refactoring work package. I say informal because I haven't got any official time allocated to it – currently it's taking place in lunchtimes and during slack that I haven't told management about. Along with tools like "checkstyle" and "QStudio pro" I've been able to pick up some issues which will exist in any organisation which, like ours, has had no real coding standards or review process. In addition I've been carrying out a more general design review, and one issue that cropped up was that of singleton classes and static classes. Those of you familiar with Java will probably have seen implementations of the singleton pattern such as this:

public class Singleton {
    private static Singleton me;
    private Object singletonData;

    private Singleton() {
    }

    public static synchronized Singleton instance() {
        if (me == null) {
            me = new Singleton();
        }
        return me;
    }

    public void doSingletonMethod() {
        //Do something with singleton data
    }
}

The Singleton has a private constructor – the only way an instance of the class can be created is by using the static instance() method – this method enforces that within a single JVM, only a single instance of this class may be created. Let's look at a static class with one simple method:

public class StaticClass {
    private static Object staticData;

    public static void doStaticMethod() {
        //do something with static data
    }
}

So what's the difference? Firstly, to use doSingletonMethod() on the Singleton class, you have to obtain an instance of the class using the instance() method. To call doStaticMethod() we simply invoke it via the class: StaticClass.doStaticMethod(). So, which one should you use?
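Before answering, the calling difference can be shown concretely. A minimal, self-contained sketch (simplified names, not the post's exact classes):

```java
class Single {
    private static Single me;

    private Single() {
    }

    // Lazily creates the one and only instance; synchronized so that
    // concurrent first calls cannot create two instances.
    public static synchronized Single instance() {
        if (me == null) {
            me = new Single();
        }
        return me;
    }
}

class Util {
    // A pure utility: no state beyond its parameters and result.
    static int twice(int x) {
        return x * 2;
    }
}

class CallStyleDemo {
    public static void main(String[] args) {
        // The singleton must be obtained through instance(); every call
        // hands back the same object.
        System.out.println(Single.instance() == Single.instance()); // true

        // The static method is invoked straight off the class.
        System.out.println(Util.twice(21)); // 42
    }
}
```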
A clear example of a static class would be a set of utility functions, whose only real commonality is the fact that they are all utility functions – no data is associated with the methods beyond the parameters passed in, the results of the methods, or perhaps some constant static data. A clear example of a Singleton class would be a manager class, which stores other objects and manipulates their state. Why the difference? Well, the Singleton class "feels" like it's an actual thing, whereas a static class is really just a convenient collection of related functions. Anyway, these thoughts led me to add four guidelines to my own unwritten style guide:

- If the only data it uses that isn't passed in is all constant (read: final) data, it's a static class
- If it has NO internal data at all, it's a static class
- If it has non-constant data and it only makes sense to have one of them, then it's a Singleton
- If it doesn't match any of the above, it's a normal class

Now to apply these rules at work…. _updated 05/09/2003_: Many thanks to R.J. and Payton for picking up some problems with my example code – that's what you get for hand coding example code rather than simply pasting in a real life example 🙂 20 Responses to "Of Singletons and Static classes" Another thing to keep in mind is that a "static class" cannot implement an interface, since interface methods can't be static. This especially comes into play when you have a singleton whose implementation varies (e.g., according to platform-dependent factors). But your staticData object can be an interface, so it gives you the flexibility you need when implementing instance(). btw, the lazy loading of a static is unnecessary, the VM will only construct a static member when it is first accessed anyway. That, and the singleton shown here isn't thread safe.
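The interface point raised in the comments can be made concrete. A hedged sketch (illustrative names, not from the original post): because a singleton is an object, it can implement an interface and be handed around as that abstraction, which a bag of static methods cannot do since interface methods cannot be static.

```java
interface Greeter {
    String greet(String name);
}

// The singleton implements Greeter; callers can depend on the interface
// and the concrete implementation can be swapped later (e.g. per platform).
class GreeterSingleton implements Greeter {
    private static final GreeterSingleton me = new GreeterSingleton();

    private GreeterSingleton() {
    }

    // Note the return type: callers only ever see the interface.
    public static Greeter instance() {
        return me;
    }

    public String greet(String name) {
        return "Hello, " + name;
    }
}
```

A caller can now write Greeter g = GreeterSingleton.instance(); and never mention the concrete class again.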
Related to the interface implementation by Adam, if you give the Singleton class a default or protected constructor, it can be sub-classed, allowing you to override methods in the future; the client would only need to change their Singleton.instance() call to SuperSingleton.instance(). I prefer deciding between a static class and a Singleton class as follows: “If the class maintains no state, and you cannot conceive of it maintaining state in the future, it’s static. Otherwise it’s a Singleton”. BTW, you forgot the “return me;” at the end of instance(), and you’d better change the method signature to “synchronized”, or you have a race condition if the code is run in a multi-threaded environment.

Aha! Thanks for the catch Payton – that’s what I get for hand coding the code example rather than pasting actual code 🙂 Also your summation of what should or shouldn’t be a singleton class is MUCH more elegant than mine – my list amounts to pretty much the same thing, but yours is a lot neater and much easier to understand. As for the lazy loading of the static field, are you sure on this Jed? It seems like one of those things that could vary between JVM implementations. Now I’m actually going to fix my singleton example to stop me looking like such an idiot 🙂

The lazy load may or may not be necessary, depending on the situation. You could have the following:

    private static Singleton me = new Singleton();

In which case the race condition is completely avoided and the singleton is created when the class is loaded. However, if you do any processing in the constructor that you may want delayed until later, you’ll want the lazy instantiation. As far as synchronization goes, check this article to see why you shouldn’t use double-checked locking on the instantiation:

Another benefit of a singleton is the ability to use it in JSP tags. Tag support for static methods is pretty slim, so it’s often easier to pass the singleton into whatever tag you’re using.
Plus, it’s always fun when a rather clueless coworker comes up to you and asks how you can have a private constructor.

I have to admit, this has been one of the best threads I’ve seen about this topic. Time to print out Payton’s line and hang it in my cubicle.

I have to agree Jay – this thread has been informative enough that I’ve realised it really deserves a more detailed discussion. Expect a more detailed piece from me in the next few days.

I prefer to use an Object any day of the week for my singletons:
1) Objects can scale better.
2) Objects are more useful/dynamic/polymorphic.
1) I sometimes use birth control as an analogy. Would you prefer a vasectomy or castration? Objects can reproduce like rabbits if you decide you need more – reverse the operation. A createRabbit() factory method can always be added later. *Question: How do classes reproduce?
2) Dynamic proxies / mock objects / remote stubs can be used with objects. Objects are also easier to unit test: public class Singleton implements Remote… { *Answer: They do not (unless you are cutting & pasting – very slow & error prone)

Would it be possible to make each instance field variable start with an “m_” prefix? This makes it easier to identify which are instance variables and which are local variables. Do you have any comment about this? Or could you present this as a standard in your coding. Thank you and GOD bless you!

Instance variables starting with “m_”… That brings me back to my Visual C++ days. In my code I tend to use an underscore prefix for all instance variables – I have seen some people using underscores as a suffix. Using m_varname as opposed to _varname is a lot uglier in my opinion, but as always beauty is in the eye of the beholder!

Guess what is printed by Main:

    // Main.java
    class A {
        // A is a singleton. The getInstance() method is even synchronized.
        private static A instance;

        public synchronized static A getInstance() {
            return (instance == null) ? (instance = new A()) : instance;
        }

        // A uses B
        private B b;

        private A() {
            b = new B();
        }
    }

    class B {
        // B caches the only instance of A
        public static A a = A.getInstance();
    }

    class Main {
        public static void main(String[] args) {
            System.out.println(A.getInstance() == B.a);
        }
    }

False. That’s a good one. Because the ‘B’ class is loaded in the ‘A’ constructor, ‘B’ statically initializes its ‘A’ variable. The singleton object hasn’t been created yet (it hasn’t returned from the original call to the constructor), therefore ‘A’s instance variable is still null. The end result is you get two ‘A’ objects in memory. The better way to do it is like this:

    ....
    private static A instance = new A();

    public static A getInstance() {
        return instance;
    }
    ....

You still get a “false” printing out, because the references are different. However, only one object is created – this preserves the singleton.

Who would have thought there could be so many nuances to the Java implementation of one of the simplest Java patterns of all? Personally I’m hoping for a Singleton metadata keyword!

Yes, the lazy loading is required. Well, technically, it is “lazy initialization” that is required… the JVM is free to load the classfile into memory at any time, as long as it defers throwing any errors or exceptions until the correct “logical” moment, but none of the static data in the class can be initialized until the class is first accessed. This is called out in Section 12.4.1 of the Java 2 Language Spec, which also defines “first access” as being creation of an instance (“new”), usage of any static method, or access to any non-constant static field. Certain reflective operations can also cause class initialization. Per the spec, “A class or interface will not be initialized under any other circumstance.” Section 12.4.1 can be found at

I am a bit puzzled by this discussion. I would have thought the “guidelines” would be much simpler.
If the design calls for “at most one instance” in the environment, use the Singleton pattern. If the design calls for a class (with possibly multiple instances) closely associated with the enclosing class (or the programmer is too lazy to create another file), use “static”. A class that is both would be a “static Singleton”. The internal data type distinctions above are possible “proprietary” guidelines but somewhat artificial to me. What am I missing?

One thing about static classes: I suppose (tell me if I’m wrong!) that you cannot use an InvocationHandler with a Proxy class on a static class. This characteristic is useful when you want to intercept method calls on a specific class. charles.

Reading this today … is this correct: “To call doStaticMethod we simply invoke it via the class: SingletonClass.doStaticMethod()” Should it not be: “To call doStaticMethod we simply invoke it via the class: StaticClass.doStaticMethod()”? I mean, not SingletonClass there, but StaticClass?

“To call doSingletonMethod we simply invoke it via an instance of the Singleton class: SingletonClass.getInstance().doSingletonMethod()”
“To call doStaticMethod we simply invoke it via the class: StaticClass.doStaticMethod()”
In this tutorial we will learn how to read frames from a webcam and save them in a video file, using Python and OpenCV.

Introduction

For a detailed tutorial on how to get frames from a camera, please check here. This tutorial was tested on Windows 8.1, with version 4.1.2 of OpenCV. The Python version used was 3.7.2. The code from this tutorial is based on the example from the OpenCV documentation, which I encourage you to check.

The code

We will start by importing the cv2 module.

    import cv2

Then, we will create an object of class VideoCapture. This object will allow us to get frames from the camera. As input of the constructor, we need to pass the index of the device we want to use. If we have a single webcam connected to the computer, we should pass the value 0.

    capture = cv2.VideoCapture(0)

After this, we need to create an object of class VideoWriter. This object will allow us to write a video file from the frames captured from the camera. One of the parameters of the constructor of this class is the fourcc code. It is a sequence of 4 bytes [1] that specifies the video codec. We will be using the Xvid codec, so we call the VideoWriter_fourcc function, passing as input the 4 characters of the codec code. It will return an integer representing the codec.

    fourcc = cv2.VideoWriter_fourcc('X','V','I','D')

Going back to the VideoWriter object instantiation, it receives the following parameters:

- The name of the file (including the path of the location where you want to save it and the extension);
- An integer representing the fourcc code;
- The frame rate, in frames per second;
- A tuple with the dimensions of the video.

For the first parameter, I will be creating an .avi file called video. You can check the difference between the file format and the video codec here. As the second parameter we will pass the output of the VideoWriter_fourcc function.
As the third parameter, we will pass a frame rate of 30 frames per second. As the fourth and final parameter I’ll be passing a tuple with the value (640,480), which corresponds to the dimensions of the frame obtained with my camera.

    videoWriter = cv2.VideoWriter('C:/Users/N/Desktop/video.avi', fourcc, 30.0, (640,480))

After instantiating all the needed objects, we will start obtaining the frames from the camera in an infinite loop that will break when the user presses the ESC key.

    while (True):
        # capture and save frames
        if cv2.waitKey(1) == 27:
            break

To get a frame, we call the read method on our VideoCapture object. This method takes no arguments and returns a tuple. The first returned value of the tuple is a Boolean indicating if the frame was read correctly (True) or not (False). The second value is a ndarray representing the frame.

    ret, frame = capture.read()

In case the frame was correctly captured, we will show it in a window and also write it to the file. To write the frame to the file, we need to call the write method on our VideoWriter object, passing as input the frame.

    if ret:
        cv2.imshow('video', frame)
        videoWriter.write(frame)

In case the infinite loop breaks, it means the capture of frames should finish and we should end our program. Thus, we call the release method on the VideoCapture object, to release the camera, and the release method on the VideoWriter object, to close it.

    capture.release()
    videoWriter.release()

We will also call the destroyAllWindows function, to destroy the window we opened to show the frames. The final code can be seen below.
    import cv2

    capture = cv2.VideoCapture(0)

    fourcc = cv2.VideoWriter_fourcc('X','V','I','D')
    videoWriter = cv2.VideoWriter('C:/Users/N/Desktop/video.avi', fourcc, 30.0, (640,480))

    while (True):

        ret, frame = capture.read()

        if ret:
            cv2.imshow('video', frame)
            videoWriter.write(frame)

        if cv2.waitKey(1) == 27:
            break

    capture.release()
    videoWriter.release()
    cv2.destroyAllWindows()

Testing the code

To test the code, simply run it in a tool of your choice. I’ll be using PyCharm, a Python IDE. During the video recording, you should see a window with the obtained frames, like shown in figure 1. After pressing the ESC key, you should have a file with the video obtained from the camera, like shown in figure 2.

References

[1]
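As an aside on the fourcc value used above: the integer returned by VideoWriter_fourcc is simply the four ASCII character codes packed, little-endian, into 32 bits. A pure-Python sketch of that packing (the helper names here are ours, for illustration):

```python
def fourcc(c1, c2, c3, c4):
    # Pack four ASCII characters into one little-endian 32-bit integer,
    # the same layout OpenCV's VideoWriter_fourcc produces.
    return ord(c1) | (ord(c2) << 8) | (ord(c3) << 16) | (ord(c4) << 24)


def fourcc_to_str(code):
    # Reverse the packing to recover the four-character codec tag.
    return ''.join(chr((code >> (8 * i)) & 0xFF) for i in range(4))


print(fourcc('X', 'V', 'I', 'D'))  # 1145656920
```

Seeing the round-trip makes it obvious why a mismatched codec tag shows up as a strange integer in error messages rather than as readable text.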
How To Make A Drag-and-Drop File Uploader With Vue.js 3

    <template>
      <div @drop.prevent="onDrop">
        <slot></slot>
      </div>
    </template>

    <script setup>
    import { onMounted, onUnmounted } from 'vue'

    const emit = defineEmits(['files-dropped'])

    function onDrop(e) {
      emit('files-dropped', [...e.dataTransfer.files])
    }

    function preventDefaults(e) {
      e.preventDefault()
    }

    const events = ['dragenter', 'dragover', 'dragleave', 'drop']

    onMounted(() => {
      events.forEach((eventName) => {
        document.body.addEventListener(eventName, preventDefaults)
      })
    })

    onUnmounted(() => {
      events.forEach((eventName) => {
        document.body.removeEventListener(eventName, preventDefaults)
      })
    })
    </script>

Next, the component gets an active state that is shared with the scoped slot:

    <template>
      <!-- add `data-active` and the event listeners -->
      <div
        :data-active="active"
        @dragenter.prevent="setActive"
        @dragover.prevent="setActive"
        @dragleave.prevent="setInactive"
        @drop.prevent="onDrop"
      >
        <!-- share state with the scoped slot -->
        <slot :dropZoneActive="active"></slot>
      </div>
    </template>

    <script setup>
    // make sure to import `ref` from Vue
    import { ref, onMounted, onUnmounted } from 'vue'

    const emit = defineEmits(['files-dropped'])

    // Create `active` state and manage it with functions
    let active = ref(false)

    function setActive() {
      active.value = true
    }
    function setInactive() {
      active.value = false
    }

    function onDrop(e) {
      setInactive() // add this line too
      emit('files-dropped', [...e.dataTransfer.files])
    }
    </script>

If you test this, you’ll notice the active state flickers whenever you drag over a child element inside the drop zone. Why is it doing that? When you drag something over a child element, it will “enter” that element and “leave” the drop zone, which causes it to go inactive. The dragenter event will bubble up to the drop zone, but it happens before the dragleave event, so that doesn’t help. Then a dragover event will fire again on the drop zone, which will flip it back to active, but not before flickering to the inactive state.

To fix this, we’ll add a short timeout to the setInactive function to prevent it from going inactive immediately. Then setActive will clear that timeout so that if it is called before we actually set it as inactive, it won’t actually become inactive.
Let’s make those changes:

    // Nothing changed above

    let active = ref(false)
    let inActiveTimeout = null // add a variable to hold the timeout key

    function setActive() {
      active.value = true
      clearTimeout(inActiveTimeout) // clear the timeout
    }
    function setInactive() {
      // wrap it in a `setTimeout`
      inActiveTimeout = setTimeout(() => {
        active.value = false
      }, 50)
    }

    // Nothing below this changes

You’ll note a timeout of 50 milliseconds. Why this number? Because I’ve tested several different timeouts and this feels the best. I know that’s subjective but hear me out. I’ve tested much smaller timeouts and 15ms was about as low as I went where I never saw a flicker, but who knows how that’ll work on other hardware? It has too small a margin of error in my mind. You also probably don’t want to go over 100ms because that can cause perceived lag when a user intentionally does something that should cause it to go inactive. In the end, I settled somewhere in the middle that is long enough to pretty much guarantee there won’t be any flickering on any hardware and there should be no perceived lag.

That’s all we need for the DropZone component, so let’s move on to the next piece of the puzzle: a file list manager.

File List Manager

I guess the first thing that needs to be done is an explanation of what I mean by the file list manager. This will be a composition function that returns several methods for managing the state of the files the user is attempting to upload. This could also be implemented as a Vuex/Pinia/alternative store as well, but to keep things simple and prevent needing to install a dependency if we don’t need to, it makes a lot of sense to keep it as a composition function, especially since the data isn’t likely to be needed widely across the application, which is where the stores are the most useful.
You could also just build the functionality directly into the component that will be using our DropZone component, but this functionality seems like something that could very easily be reused; pulling it out of the component makes it easier to understand the intent of what is going on (assuming good function and variable names) without needing to wade through the entire implementation.

Now that we’ve made it clear this is going to be a composition function and why, here’s what the file list manager will do:

- Keep a list of files that have been selected by the user;
- Prevent duplicate files;
- Allow us to remove files from the list;
- Augment the files with useful metadata: an ID, a URL that can be used to show a preview of the file, and the file’s upload status.

So, let’s build it in src/compositions/file-list.js:

    import { ref } from 'vue'

    export default function () {
      const files = ref([])

      function addFiles(newFiles) {
        let newUploadableFiles = [...newFiles]
          .map((file) => new UploadableFile(file))
          .filter((file) => !fileExists(file.id))
        files.value = files.value.concat(newUploadableFiles)
      }

      function fileExists(otherId) {
        return files.value.some(({ id }) => id === otherId)
      }

      function removeFile(file) {
        const index = files.value.indexOf(file)
        if (index > -1) files.value.splice(index, 1)
      }

      return { files, addFiles, removeFile }
    }

    class UploadableFile {
      constructor(file) {
        this.file = file
        this.id = `${file.name}-${file.size}-${file.lastModified}-${file.type}`
        this.url = URL.createObjectURL(file)
        this.status = null
      }
    }

We’re exporting a function by default that returns the file list (as a ref) and a couple of methods that are used to add and remove files from the list. It would be nice to make the returned file list read-only to force you to use the methods for manipulating the list, which you can do pretty easily using the readonly function imported from Vue, but that would cause issues with the uploader that we’ll build later.
Note that files is scoped to the composition function and set inside it, so each time you call the function, you’ll receive a new file list. If you want to share the state across multiple components/calls, then you’ll need to pull that declaration out of the function so it’s scoped and set once in the module. In our case we’re only using it once, so it doesn’t really matter, and I was working under the thought that each instance of the file list would be used by a separate uploader, with any state passed down to child components rather than shared via the composition function.

The most complex piece of this file list manager is adding new files to the list. First, we’re making sure that if a FileList object was passed instead of an array of File objects, then we convert it to an array (as we did in the DropZone when we emitted the files; this means we could probably skip that transformation, but better safe than sorry). Then we convert the file to an UploadableFile, which is a class we’re defining that wraps the file and gives us a few extra properties. We’re generating an id based on several aspects of the file so we can detect duplicates, a blob:// URL of the image so we can show preview thumbnails, and a status for tracking uploads. Now that we have the IDs on the files, we filter out any files that already exist in the file list before concatenating them to the end of the file list.

Possible Improvements

While this file list manager works well for what it does, there are a number of upgrades that can be done. For one thing, instead of wrapping the file in a new class and then having to call .file on it to access the original file object, we could wrap the file in a proxy that specifies our new properties, but then will forward any other property requests on to the original object, so it is more seamless.
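To make that proxy idea concrete, here is one possible sketch (not code from the article; the url property via URL.createObjectURL would be added the same way as id, and is left out here to keep the example browser-independent):

```javascript
// Hypothetical proxy-based alternative to the UploadableFile class:
// our extra properties (id, status) live on a wrapper object, and any
// other property access falls through to the underlying File untouched.
function makeUploadableFile(file) {
  const extras = {
    id: `${file.name}-${file.size}-${file.lastModified}-${file.type}`,
    status: null,
  }
  return new Proxy(extras, {
    get(target, prop) {
      if (prop in target) return target[prop]
      const value = file[prop]
      // Bind methods (e.g. slice) so they keep the File as `this`.
      return typeof value === 'function' ? value.bind(file) : value
    },
    set(target, prop, value) {
      target[prop] = value // writes such as status updates stay on the wrapper
      return true
    },
  })
}
```

With this approach you’d write file.name instead of file.file.name, at the cost of a little extra indirection on every property access.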
As an alternative to wrapping each file in an UploadableFile, we could have provided utility functions that return the ID or URL for a given file, but that’s slightly less convenient and would mean that you’re potentially calculating these properties multiple times (for each render, and so on). That shouldn’t really matter unless you’re dealing with people dropping thousands of images at once, in which case you can try memoizing it.

As for the status, that isn’t pulled straight from the File, so a simple utility function like the others wouldn’t be possible, but you could store the status of each file with the uploader (we’ll be building that later) rather than directly with the files. This might be a better way of handling it in a large app so we don’t end up filling the UploadableFile class with a bunch of properties that just facilitate a single area of the app and are useless elsewhere.

Note: For our purposes, having the properties available directly on our file object is by far the most convenient, but it can definitely be argued that it isn’t the most appropriate.

Another possible improvement is allowing you to specify a filter so that it only allows certain file types to be added to the list. This would also require addFiles to return errors when some files don’t match the filter, in order to let the user know they made a mistake. This is definitely something that should be done in production-ready applications.

Better Together

We’re far from a finished product, but let’s put the pieces we have together to verify everything is working so far. We’re going to be editing the /src/App.vue file to put these pieces in, but you can add them to whatever page/section component you want. If you’re putting it inside an alternate component, though, ignore anything (like an ID of “app”) that would only be seen on the main app component.
    <template>
      <div id="app">
        <DropZone class="drop-area" @files-dropped="addFiles" #default="{ dropZoneActive }">
          <div v-if="dropZoneActive">
            <div>Drop Them</div>
          </div>
          <div v-else>
            <div>Drag Your Files Here</div>
          </div>
        </DropZone>
      </div>
    </template>

    <script setup>
    import useFileList from './compositions/file-list'
    import DropZone from './components/DropZone.vue'

    const { files, addFiles, removeFile } = useFileList()
    </script>

If you start with the script section, you’ll see we’re not doing a whole lot. We’re importing the two files we just finished writing and we’re initializing the file list. Note, we’re not using files or removeFile yet, but we will later, so I’m just keeping them there for now. Sorry if ESLint is complaining about unused variables. We’ll want files at the very least so we can see if it’s working later.

Moving on to the template, you can see we’re using the DropZone component right away. We’re giving it a class so we can style it, passing the addFiles function for the “files-dropped” event handler, and grabbing the scoped slot variable so our content can be dynamic based on whether or not the drop zone is active. Then, inside the drop zone’s slot, we create a div showing a message to drag files over if it’s inactive and a message to drop them when it is active.

Now, you’ll probably want some styles to at least make the drop zone larger and easier to find. I won’t be pasting any here, but you can find the styles I used for App.vue in the repo.

Now, before we can test the current state of the app, we’ll need the beta version of Vue DevTools installed in our browser (the stable version doesn’t support Vue 3 quite yet). You can get Vue DevTools from the Chrome web store for most Chromium-based browsers or download Vue DevTools here for Firefox. After you’ve installed that, run your app with npm run serve (Vue CLI), npm run dev (Vite), or whatever script you use in your app, then open it in your browser via the URL given in the command line. Open up the Vue DevTools, then drag and drop some images onto the drop zone.
If it worked, you should see an array of however many files you added when you view the component we just wrote (see screenshot below).

    <span v-if="dropZoneActive">
      <span>Drop Them Here</span>
      <span class="smaller">to add them</span>
    </span>
    <span v-else>
      <span>Drag Your Files Here</span>
      <span class="smaller">
        or <strong><em>click here</em></strong> to select files
      </span>
    </span>
    <input type="file" id="file-input" multiple @

    <li v-
    {{ :
    <img :
    </component>
    </template>

    <script setup>
    defineProps({
      file: { type: Object, required: true },
      tag: { type: String, default: 'li' },
    })

    <FilePreview v-
    </ul>

Also, we need to import our new component in the script section:

    import FilePreview from '.

    @×</button>

Then add this line somewhere in the script section:

    defineEmits(['remove'])

    ('file', file.file)
    // track status and upload file
    file.status = 'loading'
    let response = await fetch(url, {
      method: 'POST',

    './compositions/file-uploader'
    const { uploadFiles } = createUploader('YOUR-

    In Progress</span>
    <span class="status-indicator success-indicator" v-Uploaded</span>
    <span class="status-indicator failure-indicator" v!
C#

1-20 of 44

- Programmatically Retrieve a System's Logical Drive Information with C# (Deepak Choudhari): Use C# and the System.Management namespace to retrieve information about logical drives in a given system.
- Convert a Delimited String to a Generic List<string> (Srinath MS): Split the string at the delimiter and then add each item to a new string list.
- Call PowerShell Cmdlets from C# (Cody Batt): Take advantage of pre-written cmdlet functionality in your own applications.
- A Quick Way to Generate Properties in Visual Studio C# Projects (Srinath MS): Don't write property setters and getters by hand--generate them!
- Quick C# Project Documentation in Visual Studio (Srinath MS): Here's a quick way to document your .NET projects.
- Incrementing a C# Variable Efficiently (Bashir Nabeel): Avoid postfix operators when incrementing composite objects.
- Using a Built-In Shortcut to Add Properties to a Class (Bill Harmon): After declaring your class, suppose you want to add individual element properties to that class. This process has been simplified in Visual Studio when working in C#.
- Serialize and Deserialize an Object to an XML File in C# 2.0 (Deepak Choudhari): The example in this tip uses an ArrayList object to serialize, deserialize, and store itself in an XML file.
- Another Way to Escape Sequences in .NET Resource Files (Nick Piazza): There is actually another way to insert escape sequences, such as newline characters—or actually any Unicode character, directly into the Resource string—without resorting to Replace or similar manipulations.
- Make a Textbox Display Characters in a Particular Case (Srinath MS): Learn how to use the CharacterCasing property to make a textbox display characters in a certain case.
- Recursive Function Finds a Control on a Form (Yuriy Bas): This recursive function finds a control on a form by its name.
- Using Multiline Strings in .NET Resource Files (Guy Ronen): Displaying strings from .NET resource files may be a puzzle if the strings include escape sequences. Here is an easy solution.
- Escape Sequences in .NET Resource Files (Boris Eligulashvili): Learn how to escape sequences in .NET resource files.
- How to Display ASP.NET DataGrid Data in Excel (Naveen Lanke): This tip shares code that allows you to display ASP.NET DataGrid data in Excel.
- Adding an Existing Windows Form to Another Project (Boris Eligulashvili): When you add an existing Windows Form to another (target) project, it may fail to be opened by the VS.Net Designer in your target project. This tip explains why and how to prevent this from happening.
- Access the ValueMember Item in a Bound CheckedListBox (Evan Stone): This tip shows you how to retrieve items in a list that have been checked and simply output them to the Output window.
- How to Hide a ContextMenu (Boris Eligulashvili): Find out how to hide a ContextMenu.
- How to Create a Web Service in C# (Sachin Kainth): Learn how to create a Web service in C#.
Intel is revamping its enterprise SSD lineup with two new offerings that feature TLC 3D NAND and a new SSD controller that brings a range of new features, including increased performance and simplified management capabilities. We've seen the client SSD market transition to 3D TLC NAND en masse, and we expect the same trajectory for the enterprise space, too. 3D NAND offers plenty of advantages, such as increased endurance and reduced power consumption, but most importantly, it offers lower pricing and increased density. The Intel DC P4600 and P4500 employ Intel's first generation 32-layer TLC NAND with 384Gb die. Intel's vertically integrated design employs the company's own NAND, controller, firmware, and components. The new fourth-generation controller features 12 channels (four CE per channel), whereas previous-generation controllers employed 18 channels. Paring back the channel count confers reduced power consumption, but in this case, it also yields a net performance gain. The performance increase is surprising in light of the transition to TLC NAND, which typically results in lower performance. Intel employs a dual-PCB design to house the hefty allotment of NAND, but it hasn't specified which capacity points leverage a daughterboard. The company also claims the new controller is more scalable than previous generations, so we could see higher capacities in the future with the same platform. Endurance is also a notably impressive metric in light of the TLC NAND. The DC P4500 features 10% overprovisioning, while the DC P4600 steps up to 30%. Aside from the fancy new heatsink (which is the same as the 3D Xpoint-powered Intel DC P4800X), the DC P4600 brings up to 702,000/257,000 read/write IOPS, which is a big improvement over the previous "standard" Intel enterprise SSDs. In fact, only the DC P3608, which is two SSDs melded onto one PCB, offers more random read IOPS, and it pales in comparison with random write performance. 
Notably, the increased performance offers superior IOPS-per-TB metrics, which is an important factor in the data center. Sequential throughput metrics are also impressive for a single-ASIC design. That all comes down to firmware and controller optimization. One of the more notable aspects of the series is the step up to more submission/completion queues. These queues spread out amongst the processor cores to offer increased performance and consistency. Intel has also enabled "snap reads," which allow the controller to read only a NAND page instead of wasting time processing the entire block. The company also added the capability to suspend in-flight background operations, such as garbage collection, to prevent interference that can impact read performance. Intel also coalesces TRIM commands and can suspend them to prevent interference with time-sensitive operations. The culmination of the techniques improves both performance and consistency. We don't have more fine-grained specifications, such as detailed QoS metrics, due to Intel's new policy of not releasing product manuals. This notable departure from the norm is the result of an unnamed competitor duplicating some of Intel's SMART implementation. That's unfortunate. As such, Intel customers will not get access to the product manual without a signed NDA. Intel did provide the basic 99.99th percentile metric of 500 microseconds with a 4K QD1 workload, which is an impressive 8x improvement compared to the DC P3700. Intel stepped up to NVMe 1.2, which offers new management features and performance optimizations for large deployments, and still employs the PCIe 3.0 x4 connection. The increased capabilities of the NVMe Management Interface (NVMe-MI) allow the company to offer increased telemetry for important metrics, such as latency distributions, SSD health, and temperature monitoring. The new out-of-band management capabilities also allow for OS-agnostic management and firmware updates. 
The SSD also comes with the expected end-to-end data path and power loss protection features. Support for multiple namespaces also makes its way into the arsenal, which is important for carving up devices into logically separate volumes. The new Intel data center SSDs are in production with top cloud service providers. General availability begins in June. The new SSDs feature competitive price points and Intel's standard five-year warranty.

You do need a modern computer (Kaby Lake I believe), but the coolest thing is that you can buy a $45 version and drop it in as a fast cache for a system that only has an HDD. Just buy an SSD you might say... sure, but for a lot of people that can lead to confusion and even severe problems. If I had a relative with a compatible computer with ONLY an HDD I'd probably just drop in an M.2 Optane device (based on the results from places like PCPER).

New controller or not. Looking at specs - everything is rated "up to", not sustained. Willing to bet these new drives have a healthy SLC cache and performance would drop drastically once the data tested is beyond the cache size.
Intel Optane looks awesome, but only for Kaby Lake AFAIK. Ryzen is interesting, and I'm looking for at least 8C/16T, but no Optane there. Newer Intel CPUs coming with better value (i.e. i9) and more than 8C/16T? We may see other memory changes in the near future.
https://www.tomshardware.com/news/intel-ssd-p4550-p4600-dc,34301.html
Java bitwise operators are the ones that can easily be applied to integer values: long, short, byte, and char. They help in the manipulation of individual bits of an integer. In Java, the user has 4 bitwise and 3 bit-shift operators to perform bitwise operations. These operators are useful in specific situations rather than in everyday code. Types of Java bitwise operators: 1.) Bitwise OR: It is a binary operator and returns the bit-by-bit OR of its operands. If either of the input bits is 1, the result bit will be 1; otherwise it will be 0. It is denoted by the '|' sign, which compares the two inputs. OR(0 | 1) = 1 OR(1 | 0) = 1 OR(0 | 0) = 0 OR(1 | 1) = 1 Let us take 2 integers x and y. x = 2; y = 5; Binary of 2 = 0010; Binary of 5 = 0101; 0010 | 0101 The output will be 0111 2.) Bitwise AND: It is a binary operator and returns the bit-by-bit AND of its operands. We denote it by the '&' sign, which compares the two inputs. If either of the input bits is 0, the result bit will be 0; otherwise it will be 1. What is the difference between & and &&? & in Java is a bitwise operator which compares each bit of its inputs; it always evaluates both sides of an expression. However, && in Java is a logical operator which compares boolean values; it first evaluates the left side of the condition, and only if that is true does it evaluate the right side (short-circuiting). AND(0 & 1) = 0 AND(1 & 0) = 0 AND(0 & 0) = 0 AND(1 & 1) = 1 Let us take 2 integers x and y. x = 2; y = 5; Binary of 2 = 0010; Binary of 5 = 0101; 0010 & 0101 The output will be 0000 3.) Bitwise Complement: It is a unary operator and performs its operation on only 1 input. We denote it using the '~' sign. It inverts the bit pattern, which means it changes all the 0 bits to 1 and all the 1 bits to 0. ~(0) = 1 ~(1) = 0 Let us take an integer x. x = 2; Binary of 2 = 0010; ~0010 The output will be 1101 (showing only the low 4 bits; since a Java int is 32 bits wide, ~2 actually evaluates to -3). 4.)
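The 4-bit patterns above are illustrations only; a Java int is 32 bits wide, so a 0xF mask (used here purely for display) is needed to reproduce them at the console. A quick sketch:

```java
public class BitBasics {
    public static void main(String[] args) {
        int x = 2; // binary 0010
        int y = 5; // binary 0101
        // OR and AND match the truth tables above
        System.out.println(Integer.toBinaryString(x | y)); // 111 (i.e. 0111)
        System.out.println(x & y);                         // 0   (0000)
        // A Java int is 32 bits, so the complement of 2 prints as -3
        System.out.println(~x);                            // -3
        // Masking to the low 4 bits recovers the article's 1101 illustration
        System.out.println(Integer.toBinaryString(~x & 0xF)); // 1101
    }
}
```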
Bitwise XOR: It is a binary operator and returns the bit-by-bit XOR of its operands. We denote it by the '^' sign, which compares the two inputs. It gives the output as 1 when the two input bits differ; otherwise it returns 0 when the inputs match. XOR(0 ^ 1) = 1 XOR(1 ^ 0) = 1 XOR(0 ^ 0) = 0 XOR(1 ^ 1) = 0 Let us take 2 integers x and y. x = 2; y = 5; Binary of 2 = 0010; Binary of 5 = 0101; 0010 ^ 0101 The output will be 0111 We also have bitwise shift operators. They help in shifting the bits of a number to the left or right, which is equivalent to multiplying or dividing the number by a power of 2. The types of shift operators in Java are: 5.) Bitwise Right Shift operator: The operator shifts the bits to the right. We denote it by '>>'. The sign of the number decides the bits filled in on the left, so the sign is preserved. It is equivalent to dividing the number by a power of 2. (Java also has '>>>', the unsigned right shift, which always fills the vacated bits with 0.) 6.) Bitwise Left Shift operator: The operator shifts the bits to the left and fills the void on the right with 0 in the output. It is denoted by '<<'. It is equivalent to multiplying the number by a power of 2. Where are Bitwise Operators Used? The bitwise operators are mostly used in places where manipulation of individual bits is required. They support any integer type, and users can apply them in tasks such as the update and query operations of a binary indexed tree. Java Program to explain Bitwise Operators public class DeveloperHelps { public static void main(String[] args) { int a = 2; int b = 6; System.out.println("a&b = " + (a & b)); System.out.println("a|b = " + (a | b)); System.out.println("a^b = " + (a ^ b)); System.out.println("~a = " + ~a); a &= b; System.out.println("a= " + a); } } The output of the above Java program will be: a&b = 2 a|b = 6 a^b = 4 ~a = -3 a= 2
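The demo program above does not exercise the shift operators. A small sketch (the values are chosen only for illustration) shows >>, <<, and Java's third shift operator, >>>, which always fills with zeros:

```java
public class ShiftDemo {
    public static void main(String[] args) {
        int n = 20;
        System.out.println(n << 2);    // 80 : shifting left twice multiplies by 2^2
        System.out.println(n >> 2);    // 5  : shifting right twice divides by 2^2
        System.out.println(-20 >> 2);  // -5 : >> preserves the sign bit
        System.out.println(-20 >>> 28); // 15 : >>> fills with zeros regardless of sign
    }
}
```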
https://www.developerhelps.com/java-bitwise-operators/
A digital dashboard is a portal composed of Web components (called Web Parts) that can be combined and customized to meet the needs of individual users. Web Parts are reusable components that wrap Web-based content such as XML, HTML, and scripts with a standard property schema that controls how Web Parts are rendered in a digital dashboard. This chapter explains how to build a digital dashboard that contains interactive Web Parts that respond to events generated by other parts in the same dashboard. (This chapter assumes you are familiar with Microsoft® SQL Server™ 2000, XML, scripting, and Web application development.) Dashboards support part integration through a set of services provided by the Digital Dashboard Services Component (DDSC). The DDSC includes Part Discovery, Part Notification, Session State Management, and Item Retrieval. There is an underlying object model that you can use to program the services into your code. When building an integrated or interactive dashboard, Part Notification provides the most relevant service. The Part Notification service refers to event notification and a corresponding response. Understanding how this service works is key to building interactive Web Parts. This chapter describes how to deploy this service in the context of building a simple dashboard. A dashboard can be an arbitrary container for unrelated parts (for example, a collection of your favorite Web sites or applications arranged into a personal dashboard for easy access), or it can be a container of parts that work together by sharing, summarizing, or filtering the same data set. In the latter case, the dashboard operates more like an application, with features and functionality distributed across multiple parts. This chapter describes the basic techniques you need to build exactly this kind of dashboard.
The objective of this chapter is to show you the process of creating an interactive dashboard and how to retrieve sample data from the Northwind database using the XML features in SQL Server 2000. Specifically, this chapter teaches you how to: Create parts that get and transform XML-based data from SQL Server. Reference an HTC file that defines HTML behaviors in your dashboard. Use the Digital Dashboard Service Component (DDSC) to raise and respond to events occurring at the part level. Create isolated frames that enable DDSC events to occur on the client, eliminating round trips to the server and improving security. To illustrate these points, a Customer Information dashboard that contains two parts is created. The first Web Part presents a list of customers retrieved from Northwind. The second Web Part is a bar chart that shows order volume by year for a specific customer that you select. When the user clicks a value in the customer list in the first part, the DDSC raises an event that causes the second part to get and display summarized order data about that customer. The actual dashboard and Web Part definitions will be created by you. The code samples included with this chapter provide the Web-based content that you use to create the Web Parts. Code for this chapter is provided on the SQL Server 2000 Resource Kit CD-ROM. Each step of the process is explained, and the tools and software you need to perform each step are identified. To follow the steps in this chapter, you must have SQL Server 2000 running on Microsoft Windows® 2000, and the Digital Dashboard Resource Kit (DDRK) 2.01. From the DDRK, you must also install the SQL Server sample Administration digital dashboard. The sample dashboard provides a way to create dashboards and parts. The sample Administration dashboard is used to define the dashboard and parts described in this chapter.
In the process of creating the dashboard, you will need to do the following:
Ensure that your SQL Server 2000 installation supports SQL Server authentication.
Install the DDRK 2.01.
Install the SQL Server sample Administration dashboard from the DDRK.
Create virtual and physical directories to store the code sample files.
Copy the files to the directories you created in the previous step.
Edit the files to correct server name and path information.
Define a dashboard using the sample Administration digital dashboard.
Define a Customer List Web Part.
Define a Customer Order Chart Web Part.
Code samples provide the content of the Web Parts you will create. Web Part content can be XML, HTML, or scripts that get and transform data or that define events and behaviors. You can put the content in separate files that you reference or you can type it directly into the Web Part definition. For this exercise, the content is provided in files. Note that a single Web Part can use multiple files to supply functionality. Code samples provided with this chapter include the following:
Customerlist.htm (provides content for the Customer List Web Part).
Customerlist.xml (contains an XML-based SQL Server query. This query gets a list of company names from the Customers table in Northwind).
Customerlist.xsl (transforms the company names in the Customer List Web Part).
Customerlist.htc (defines mouseover, mouseout, and click events for the Customer List Web Part).
Orderchart.htm (provides content for the Order Chart Web Part).
Orderchart.xsl (transforms order data for a specific customer).
The code sample files are commented to help you interpret the purpose and intent of the code. Snippets from these files appear in this chapter to illustrate key points. Note: Code samples require editing before you can use them. Many of the files contain placeholder values for your Microsoft Internet Information Services (IIS) server and virtual directories.
Where indicated in the instructions, you need to replace the placeholder values with values that are valid for your computer. This chapter requires Microsoft SQL Server 2000, Microsoft Windows 2000, Internet Explorer 5.0 or later, and the Digital Dashboard Resource Kit (DDRK) 2.01. SQL Server 2000 is required because it includes XML support for exposing relational data as XML. In the sample dashboard you create, you access Northwind as XML from your Web browser by way of a virtual directory. SQL Server 2000 provides a tool for configuring a virtual directory for the Northwind database. Instructions for configuring this directory are covered later in this chapter. If you install the DDRK on the same computer as SQL Server, your SQL Server installation needs to support SQL Server authentication. Hosting a dashboard and a SQL Server on the same computer means that the Web server (IIS) and SQL Server need to talk to each other. Having both the Web server and SQL Server use the same integrated authentication mode results in a security violation; the Web server will be prevented from issuing a query to SQL Server when both servers reside on the same computer. To be able to query Northwind from your development computer, you need to use SQL Server authentication. Note that if SQL Server authentication is not enabled, you may need to reinstall SQL Server, selecting SQL Server authentication during the install process. If SQL Server and the DDRK are installed on different computers, you can use whatever authentication mode you like. For more information about supported platforms and installation, see SQL Server Books Online. For this chapter, Windows 2000 and IIS 5.0 are required on the server hosting the digital dashboard. This means that the computer on which you install the DDRK must be running some edition of Windows 2000 Server. Clients do not require Windows 2000. Client platforms include any edition of Windows 2000, Windows NT®, and Windows 98.
Viewing the dashboard and processing the underlying XML requires Internet Explorer 5.0 or 5.5. Dashboard development starts with the DDRK, which provides the design-time framework and run-time components you need to deploy dashboards and parts. The DDRK 2.01 provides information and development resources. To learn about dashboards, you can read white papers, reference material, and overviews. Development resources include sample Administration digital dashboards that you can analyze to further your understanding of dashboard functionality. More important, the sample Administration digital dashboards offer real functional value: installing a sample dashboard simultaneously installs digital dashboard components, such as the dashboard factory, the DDSC, and dashboard storage support. The sample Administration digital dashboards also provide a user interface for creating new and modifying existing dashboards and parts, as well as the ability to set properties that control access and presentation. The DDRK contains several sample Administration digital dashboards. For this chapter, we assume you are using the SQL Server Sample Administration Digital Dashboard. You will use this dashboard to create your own dashboard as well as define the Customer List and Order Chart parts. You can download and install the DDRK 2.01 from. To install the SQL Server Sample Digital Dashboard, open the DDRK and go to Building Digital Dashboards. Choose Install the Microsoft SQL Server 7.0 Sample Digital Dashboard (note that this sample dashboard is fully compatible with SQL Server 2000). During installation, you will be asked to create a new SQL Server database to store the dashboards and parts you create. When defining the login to this database, use sa for the user name and leave the password blank. After installation completes, the Welcome page of the SQL Server Sample Administration Digital Dashboard appears (note the HTTP address for future reference).
Click Administration to open the Administration page. This is the page you will use later to define a new dashboard and Web Parts. This section explains how to get files into the right places and configure virtual directories. The code samples for this chapter are available on the SQL Server 2000 Resource Kit CD-ROM in the folder, \ToolsAndSamples\DigitalDashboard. There are six files altogether. In the next several steps, we will tell you where to place the files and which files need editing. Use Windows Explorer to create a physical directory in your Default Web Site directory. By default, the path is C:\Inetpub\Wwwroot. To this path, you can add a subdirectory named Tutorial, resulting in this path: C:\Inetpub\Wwwroot\Tutorial. Into this directory, copy the following code sample files: Customerlist.htm Customerlist.htc Orderchart.htm. Use Internet Services Manager to create a new virtual directory under Default Web Site for your HTM and HTC files. In Windows 2000, this tool is located in the Administrative Tools program group. To create a virtual directory, right-click Default Web Site, and then click New Virtual Directory. To match the path names used in the code samples, name your virtual directory Tutorial. To issue an SQL query through HTTP, you need to configure Northwind as a virtual directory. To do this, you use the Configure SQL XML Support in IIS tool, located in the Microsoft SQL Server program group. Instructions that describe this process in detail are provided in the topic "Creating the nwind Virtual Directory" in SQL Server Books Online. You should follow the instructions exactly. When you are finished, you should have the following physical directories: \Inetpub\Wwwroot\nwind \Inetpub\Wwwroot\nwind\schema \Inetpub\Wwwroot\nwind\template For each physical directory, you should have a corresponding virtual directory of the same name. 
Into the \Inetpub\Wwwroot\nwind\template directory, copy the following code sample files: Customerlist.xml Customerlist.xsl Orderchart.xsl Note: The nwind virtual directory is accessed by SQL Server when it retrieves data. The application virtual directory that you use to store the HTM and HTC files is accessed by the dashboard. This is why you need separate directories for each group of files. After you copy all the files, you can adjust the server name and paths in the code sample files. In all cases, replace < your server name > with the name of your IIS server, correcting the virtual path names if necessary. Use the proper name rather than localhost for the server name. Using localhost results in permission denied errors when you add Web Parts later in the tutorial. Open Customerlist.htm from the Tutorial folder using an HTML or text editor. Edit the path in the IFRAME element: <IFRAME ID="CustFrame" SRC="http://< your server name >/nwind/template/customerlist.xml". Save and close the file. Open Orderchart.htm from the Tutorial folder using an HTML or text editor. Edit the path in the SRC property of the ChartFrame object: document.all.ChartFrame.src = "http://< your server name >/Nwind?xsl=…". Open Customerlist.xsl from the Template folder using an HTML or text editor. Edit the path in the TD style element: td {behavior:url(http://< your server name >/tutorial/customerlist.htc)}. This section tells you how to use the Administration sample dashboard to define a new dashboard and the parts that go in it. A dashboard is a container for Web Parts. It is defined by a schema and supports properties that determine dashboard appearance and behavior. To create the Customer Information dashboard, you start by defining a new dashboard. In your browser, open the Administration page of the SQL Server Sample Digital Dashboard.
The default address is http://<your server name>/Dashboard/Dashboard.asp?DashboardID=http://<your server name>/Sqlwbcat/Welcome/Administration. In the Dashboard View pane, select Sqlwbcat, and then click New to define a new dashboard. Sqlwbcat is the default name of both the SQL Server database and IIS extension that manages dashboard and part storage. The dashboard that you define will be stored and managed by Sqlwbcat. In the Dashboard Properties pane, replace the default name NewDashboard1 with CustomerInfo, and then replace the default title New Dashboard with Customer Information Dashboard. If you wish, choose a different predefined stylesheet. Click Save. The CustomerInfo dashboard is added to the list of dashboards for Sqlwbcat. To test your progress so far, open your browser and paste this Address: http://<your server name>/Dashboard/Dashboard.asp?DashboardID=http://<your server name>/Sqlwbcat/CustomerInfo. You should see an empty dashboard, correctly titled and styled, with the Content, Layout, and Settings items in the top right corner. Save this URL in your Favorites list so that you can view the changes as you add each part. The Customer List Web Part contains a list of customers, identified by Company Name. The content for this Web Part is an HTM file. In your browser, open the Administration page of the SQL Server Sample Digital Dashboard. In the Dashboard View pane, select the CustomerInfo dashboard. Scroll down to the Web Part List pane, and then click New to define a new part. In the General tab of Web Part Properties, do the following four things: Replace the default name NewPart1 with CustomerList. Replace the default title NewPart1 with Customer List. Select Left Column for the position on the page. Set Fixed Size to a fixed height of 500 pixels. This shows more rows in the Customer List. Click the Advanced tab. Choose HTML for the Content Type. In Content Link, type the following: http://<your server name>/tutorial/customerlist.htm Click Save. 
Note that if you subsequently change any properties (for example, to adjust the part position or change the title), the values you entered for fixed height will migrate to the fixed width fields. This bug will be fixed in a subsequent release. The workaround for now is to redo the fixed height, and then click No to disable the fixed width. To test your progress so far, open or refresh the Customer Information dashboard in your browser. The Customer List Web Part should appear in the dashboard. The Order Chart Web Part is an HTML file that contains summarized order data for the customer selected in the Customer List Web Part. In your browser, open the Administration page of the SQL Server Sample Digital Dashboard, then select the CustomerInfo dashboard. Replace the default name NewPart1 with OrderChart. Replace the default title NewPart1 with Order Chart. Select Right Column for the position on the page. Set Fixed Size to a fixed height of 350 pixels to give the part more room. In Content Link, type the following: http://<your server name>/tutorial/orderchart.htm After you add the two parts, the dashboard is ready to use. Open the Customer Information dashboard in your browser. Click a Company Name in the Customer List Web Part. The Order Chart Web Part responds by querying Northwind for order information about the customer, and then aggregating that information into a set of values that can be represented by a bar chart. The name of the customer you select appears above the chart. The following sections detail the events and actions occurring behind the scenes that create the appearance and behavior you see in this dashboard. This section highlights the more interesting aspects of the code samples. Each file is discussed separately. The following table describes the role of each file.
Customerlist.htm: Creates a structure for the part.
Customerlist.xml: Gets customer data.
Customerlist.xsl: Transforms data by selecting it and applying HTML.
Customerlist.htc: Adds dynamic HTML behaviors, including definitions for the onclick event used to raise an event notification. This notification is received by the Order Chart Web Part.
Orderchart.htm: Creates a basic structure for the part; gets data by building a query that includes a Company Name passed through the onclick event defined in Customerlist.htc.
Orderchart.xsl: Transforms the data by selecting it and applying HTML. The bars in the bar chart are dynamically sized based on the amount of annual orders. Two different functions are used to calculate these values.
This HTML file provides the content for the Customer List Web Part. It contains a reference to the Customerlist.xml file, which in turn contains a reference to Customerlist.xsl, which references the Customerlist.htc file. The Customerlist.htm file defines an isolated frame to contain Customer data from Northwind. Although you can isolate Web Parts in the Web Part definition, using this approach (that is, manually creating IFRAME elements) offers more security and allows you to invoke the DDSC at the part level. Invoking the DDSC at the part level means that you can control other Web Parts (in this case, the Order Chart Web Part) from script inside an IFRAME. To do this, you create a variable named DDSC in the IFRAME content and then set its value equal to the DDSC that exists outside of the frame (that is, the DDSC instance for the dashboard). You can then use the DDSC variable to communicate with other parts. In this example, a DDSC variable is declared in the source for the IFRAME (that is, in the Customerlist.xsl file, which in turn is referenced by the Customerlist.xml file, which provides the content to the IFRAME element). This approach works because a parent can access an IFRAME (note that the reverse case of IFRAMEs accessing parents is not true).
In this case, the DDSC instance at the dashboard level can access the IFRAME content you define and participate in the script that you associate with a given IFRAME element. In the code snippet below, the IFRAME ID attribute is defined so that you can reference the frame in script. Next, the IFRAME SRC attribute specifies the XML template file containing the Northwind query. This file is used to populate the frame with a scrollable list of Company Names. The names are retrieved from Northwind when the dashboard loads. Note that UTF-16 encoding is needed to accurately display foreign language characters in the data. Finally, the IFRAME HEIGHT and WIDTH attributes expand the frame so that it occupies all of the available space of the Web Part.
<IFRAME ID="CustFrame" SRC="http://<server>/nwind/template/customerlist.xml?contenttype=text/html&outputencoding=UTF-16" HEIGHT="100%" WIDTH="100%"> </IFRAME>
Further on in this file, you find a script block that instantiates a DDSC instance at the frame level, using the value of the IFRAME ID. The DDSC is one of the objects used to implement the Part Notification service. It exposes methods that both raise and respond to event notifications.
CustFrame.ddsc = DDSC;
This XML template file issues an SQL SELECT statement through IIS using the nwind virtual directory you configured earlier. Specifying the nwind virtual directory is equivalent to specifying the Northwind database (recall that this specification is part of the value for the IFRAME SRC attribute in Customerlist.htm). The root element defines a namespace and the XSL file used to transform the result set. The query statement is a child of the root element.
<root xmlns:sql="urn:schemas-microsoft-com:xml-sql" sql:xsl="customerlist.xsl">
<sql:query> SELECT CompanyName FROM Customers FOR XML AUTO </sql:query>
</root>
This XSL file transforms the XML result set so that it appears in the page. It defines a template pattern that finds all Customer nodes and gets the value of the Company Name.
The Company Name is inserted into a TD element in the order returned by the query. In the code snippet below, the STYLE element defines CSS styles for TH and TD elements. The STYLE TH element is styled with a gray background color. The STYLE TD element calls an HTC file that combines style attributes with script to produce dynamic HTML for the content in each TD element. <STYLE> TH {background-color:#CCCCCC} TD {behavior:url(http://<server>/tutorial/customerlist.htc)} </STYLE> This file also declares a variable for DDSC. This variable is used in the Customerlist.htm file to invoke the DDSC object for an IFRAME element. Note that this declaration was discussed previously, in the Customerlist.htm section. <script language="JScript"> var DDSC; </script> The Customer List Web Part is programmed for three events: onmouseover, onmouseout, and onclick. Onmouseover and onmouseout define rollover behavior. Through the Click function, the onclick event instantiates the DDSC object at the part level. Clicking a company name raises an event (that is, broadcasts an event notification to other parts in the same dashboard). The RaiseEvent method is a method of the DDSC object. function Click() { ddsc.RaiseEvent("URN:Customer", "SelectCustomer", this.innerHTML); } The URN:Customer parameter is a user-defined namespace that you can create to provide a context for the event. For example, in any given application you may have multiple Click functions. Using a namespace provides a way to distinguish between click events that occur in an Employee form, a Customer list, or an Order bar chart. The SelectCustomer parameter is an event name. This is a user-defined name that identifies the event to other Web Parts that respond to this event. Script attached to the responding Web Part (that is, the Order Chart) refers to the same event name when registering for the event. The this.innerHTML parameter is an event object. This is the object upon which the function operates.
In this case, it is a specific Company Name that the user clicks on. This value is passed as part of the event notification, making it available to other parts that want to use it. This file provides the content for the Order Chart Web Part. The file contains an SQL SELECT statement issued through IIS using the nwind virtual directory you configured earlier. The query is multipart, using a combination of fixed strings and a Company Name value that is passed in as a parameter. The data that is returned is total order volume for a single customer, grouped by year. Clicking a different customer in the Customer List issues another query against the database, using new values that correspond to the selected customer. The return values are used to update the contents of the Order Chart. The code that relates the Order Chart to the Customer List Web Part is the following: DDSC.RegisterForEvent("URN:Customer", "SelectCustomer", this.innerHTML); The SelectCustomer parameter is the event name, and this.innerHTML is the event object. As with the Customer List, an isolated frame is used to contain the data. The IFRAME element is defined as follows: <IFRAME ID="ChartFrame" WIDTH="100%" FRAMEBORDER="0" NORESIZE></IFRAME> The onSelectCustomer function provides the code that creates the multipart query. (Note that the first several lines of this function are used to search and replace special characters like ampersands and apostrophes with XML or HTTP equivalents.) The query is specified through the SRC parameter of the IFRAME element by way of the document object model.
document.all.ChartFrame.src = "http://<server>/nwind?xsl=template/orderchart.xsl&contenttype=text/html&outputencoding=UTF-16&sql=Select+datepart(year,%20Orders.OrderDate)+as+Year,Sum([order%20details].UnitPrice*[order%20details].Quantity)+as+OrderTotal+from+[order%20details]+inner+join+Orders+on+[order%20details].OrderID=Orders.OrderID+inner+join+Customers+on+Orders.CustomerID=Customers.CustomerID+where+customers.companyname='" +customerName +"'+group+by+datepart(year,%20Orders.OrderDate)+FOR+XML+RAW&root=root"; In this query, an XSL file and encoding attribute are specified before the SELECT statement. The SELECT statement itself is articulated in HTTP syntax. Because the query contains a dynamic element (CustomerName, which is the value passed in as "this.innerHTML" and which varies each time the user clicks a Company Name), a static XML template file could not be used. Passing the SQL query as a string provides a way to combine static and dynamic elements together. This file transforms the XML result set returned for the Order Chart, creating the bar chart and displaying customer information based on an SQL query. This file is referenced in the HTTP statement for the SRC parameter. The bar chart is simple HTML (in this case, TD elements in a table) and it shows differences among annual order volumes for a specific customer. To get differences in bar color and size, different attributes on the TD element are set. These attributes are BACKGROUND-COLOR and WIDTH. WIDTH is an XSL attribute (name=style) that is attached to the TD element. The value of WIDTH is calculated through script. Color coding is based on the year (year values are detected through XSL). Because there are only three years' worth of data in the Northwind database, we get by with XSL test cases that detect 1996, 1997, and 1998.
<xsl:attribute name="style">width:<xsl:eval>getOrderPercent(this)</xsl:eval>;
<xsl:choose>
<xsl:when test=".[@Year = '1996']">background-color:red;</xsl:when>
<xsl:when test=".[@Year = '1997']">background-color:blue;</xsl:when>
<xsl:otherwise>background-color:purple;</xsl:otherwise>
</xsl:choose>
</xsl:attribute>
Sizing is based on order volume. In Northwind data, order volumes vary from two-digit to five-digit values. The wide range makes it difficult to scale the bars using fixed values (a bar chart based on pixels would need to accommodate bars that are 42 pixels long and 64,234 pixels long). To work around this, we use percentages. Percentage values show relative rather than absolute differences in the order volumes. For a specific customer, each annual volume (for 1996, 1997, or 1998) is some percentage of the combined three-year volume. To get the three different WIDTH values needed for the three bars in the bar chart, we use two functions. The getOrderPercent function calculates the value of the TD WIDTH attribute by dividing an Order Total by the sum of all Order Totals. This function is called from an xsl:eval element (as shown in the first line of the previous code snippet). The getOrderTotal function sums the Order Totals into one lump sum. This sum becomes the denominator in the getOrderPercent function. Both functions are reproduced here in their entirety:
var nTotal = 0;
function getOrderPercent(nNode)
{
    var nPercent;
    if (nTotal == 0)
        nTotal = getOrderTotal(nNode.parentNode);
    nPercent = Math.round((nNode.getAttribute("OrderTotal") / nTotal) * 100) + '%';
    return nPercent;
}
function getOrderTotal(nNode)
{
    var sum = 0;
    var rows = nNode.selectNodes("row");
    for (var i = rows.nextNode(); i; i = rows.nextNode())
        sum += parseInt(i.getAttribute("OrderTotal"));
    return sum;
}
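The percentage math performed by getOrderPercent and getOrderTotal is easy to verify outside the dashboard. This Java sketch (not part of the chapter's code; the sample totals are hypothetical) reproduces the same round(orderTotal / grandTotal * 100) calculation used to size the bars:

```java
public class BarScale {
    // Returns each value's rounded percentage of the combined total,
    // mirroring getOrderPercent/getOrderTotal in Orderchart.xsl.
    static int[] toPercentWidths(int[] orderTotals) {
        int total = 0;
        for (int t : orderTotals) total += t;
        int[] widths = new int[orderTotals.length];
        for (int i = 0; i < orderTotals.length; i++)
            widths[i] = Math.round(100f * orderTotals[i] / total);
        return widths;
    }

    public static void main(String[] args) {
        // Hypothetical annual order totals for 1996-1998
        int[] widths = toPercentWidths(new int[] {42, 15000, 64234});
        for (int w : widths) System.out.println(w + "%");
    }
}
```

As the chapter notes, percentages sidestep the scaling problem that absolute pixel widths would create for totals ranging from 42 to 64,234.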
http://technet.microsoft.com/en-us/library/cc917654.aspx
crawl-002
refinedweb
4,591
56.55
I am using Red Hat Enterprise Linux 3 and trying to open files of 2GB or greater using the following example syntax.

#include <iostream>
#include <fstream>
#include <errno.h>
#include <string.h>

int main(int argc, char** argv)
{
    std::fstream inFile;
    inFile.open("./2GB_file.txt", std::ios::in | std::ios::binary);
    if (inFile.good())
    {
        std::cout << "File opened OK" << std::endl;
    }
    else
    {
        std::cout << "File failed to open" << std::endl;
        std::cout << "Error is " << errno << " : " << strerror(errno) << "." << std::endl;
    }
    inFile.close();
}

I compile with large file support (flags -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE -D_LARGEFILE64_SOURCE). However the files still fail to open, giving the error:

File failed to open
Error is 27 : File too large.

Has anyone come across this problem and found a solution? The old C file calls (fopen) are able to open a file greater than 2GB. The problem appears specific to C++.

Jon.
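A hedged sketch, not a confirmed RHEL 3 fix: since the poster notes that C-style fopen() does work, one workaround is to do the open and seek through C stdio compiled for 64-bit offsets. open_large_file is a hypothetical helper name, not part of any standard API:

```cpp
// With _FILE_OFFSET_BITS=64, off_t is 8 bytes wide, so fseeko/ftello
// can address offsets past the 2 GB boundary that trips std::fstream
// in the post above. The macro must precede every system header.
#define _FILE_OFFSET_BITS 64
#include <cstdio>
#include <sys/types.h>

// Opens the file with C stdio and reports its size via a 64-bit off_t.
// Returns false if the file could not be opened at all.
bool open_large_file(const char* path, off_t* size_out)
{
    FILE* fp = fopen(path, "rb");
    if (!fp)
        return false;
    fseeko(fp, 0, SEEK_END);   // 64-bit-safe seek
    *size_out = ftello(fp);    // 64-bit-safe tell
    fclose(fp);
    return true;
}
```

The std::fstream in the original post could then be replaced by this C-level access for the large files, while the rest of the program stays in C++.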
http://fixunix.com/linux/7371-files-over-2gb-c-rhel3-print.html
CC-MAIN-2015-11
refinedweb
144
67.25
I am trying to finish the code for a simple program that will break a number down and give you the factors of that number. I have the program working except for one part. Right now it returns something like "2*2*2*5*5*" and instead I want it to say 2^3*5^2. Can anyone tell me what I would need to have to do this? I need to use some type of loop I think, and a While statement maybe?

Code Java:

import java.util.*;

class Factor
{
    String factor(int n)
    {
        String S = "";
        int f = 2;
        while (n > 1)
            if (n % f == 0)
            {
                S += f + "*";
                n /= f;
            }
            else
                f++;
        return S;
    }

    public void setN(int n)
    {
        n = 0;
    }

    public void getN(int n)
    {
    }
}

class FactorDemo // main class
{
    public static void main(String[] args)
    {
        Scanner in = new Scanner(System.in);
        Factor ML = new Factor();
        System.out.print("Enter a positive integer (-1 to stop): ");
        int n = in.nextInt();
        while (n > 0) // 0 or a negative number to stop the program is called a sentinel
        {
            System.out.println(ML.factor(n));
            System.out.print("Enter a positive integer (-1 to stop): ");
            n = in.nextInt();
        }
        System.out.println("Thanks for using my program.");
    }
}
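One way to get the exponent form (a sketch of my own, not the thread's accepted answer — and assuming the desired output is "2^3*5^2", since 200 = 2^3 * 5^2): count how many times each prime divides n before moving on, instead of appending the prime once per division. ExponentFactor is a hypothetical class name for illustration:

```java
// One possible rewrite of factor() that groups repeated primes into
// exponents, so 200 becomes "2^3*5^2" instead of "2*2*2*5*5*".
class ExponentFactor {
    static String factor(int n) {
        StringBuilder s = new StringBuilder();
        int f = 2;
        while (n > 1) {
            int count = 0;
            while (n % f == 0) {   // count how many times f divides n
                count++;
                n /= f;
            }
            if (count > 0) {
                if (s.length() > 0) s.append("*");
                s.append(f);
                if (count > 1) s.append("^").append(count);
            }
            f++;
        }
        return s.toString();
    }
}
```

The inner while loop is the key change: it consumes all copies of one factor at once, which also fixes the trailing "*" in the original output.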
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/5976-trying-write-factoring-numbers-code-please-help-printingthethread.html
CC-MAIN-2017-34
refinedweb
203
65.52
Xerces 2.4 / namespace error
Discussion in 'Java' started by edwinek, Oct 8.
http://www.thecodingforums.com/threads/xerces-2-4-namespace-error.127418/
CC-MAIN-2015-48
refinedweb
113
52.57
Slider

Type: React component

How to get Slider?

JavaScript
import { Slider } from 'fds/components';

Slider creates a full size container (with flex: 1) in which child Slide components should be placed. It shows a single Slide at once, based on the activeSlideIndex prop. Whenever that prop changes after the initial render, a dynamic CSS animation is used to translate each slide on the x-axis.

Props

activeSlideIndex (Required)
Type: Number
The (child) index of the <Slide /> component you want to show. Zero-based.

maxTransitionDurationInMs (Optional)
Type: Number
The maximum duration in ms of transitioning between two arbitrary slides. Normally, the transition between two adjacent slides is 250ms; this is multiplied by the number of slides to transition, and capped at whatever you specify here. Defaults to 1000.

onTransitionEnd (Optional)
Type: Function
Callback that is called whenever a slide transition ends and after Slider has re-rendered with isTransitioning set to false. Useful to focus a ref in a slide that has just transitioned.

paddingSize (Optional)
Type: Number | String | Object
The amount of padding rendered by the component. This should be either a single size value, one of the following values:

0 (zero, no padding at all)
's' (small)
'm' (medium)
'l' (large)

to set the paddingSize for the top, right, bottom and left side to a single shared value. Or you can pass an object whose properties determine a specific padding size for either:

'horizontal': %single size value% to set the 'left' and 'right' padding to the given size value (one of the values listed above)
'vertical': %single size value% to set the 'top' and 'bottom' padding

Or you can pass any combination of 'bottom', 'horizontal', 'left', 'right', 'top', 'vertical', where the more specific keys override the generic (eg. setting 'left': 's' and 'horizontal': 'm' would result in a padding size of 's' for the left side and a size of 'm' for the right side).
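The slide-to-slide duration rule described for maxTransitionDurationInMs can be sketched as a tiny helper. This is an illustration of the documented behaviour only, not the actual fds/components implementation, and transitionDuration is a hypothetical name:

```typescript
// Transition is 250 ms per slide crossed, capped at the configured max.
// (Sketch of the documented rule, not the real Slider internals.)
function transitionDuration(
  fromIndex: number,
  toIndex: number,
  maxTransitionDurationInMs: number = 1000
): number {
  const slidesToCross = Math.abs(toIndex - fromIndex);
  return Math.min(slidesToCross * 250, maxTransitionDurationInMs);
}
```

Moving from slide 0 to slide 5 would therefore animate for the capped 1000 ms rather than 1250 ms.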
https://documentation.fontoxml.com/latest/slider-e49f93a6a208
CC-MAIN-2021-31
refinedweb
332
56.18
Forms control the following objects and activities:

Layout and display characteristics of the page. Forms are composed of fields. Visible field types include simple text boxes, radio buttons, and selection boxes with multiple values. Fields can also have values based on other fields and can be either read-only or hidden from view.

Data that is used on the page. Data can be captured dynamically from a resource or be calculated from other fields. With the Identity Manager expression language, called XPRESS, field data can be calculated, concatenated, and logically evaluated.

Data that is coming into the system. Forms can be the interface from web pages as well as from noninteractive systems such as ActiveSync resources. In this role, the form has no visual fields, but still provides rules to set default values and other field values.

For example, the Full Name field might not be visible to the administrator using the page, but can be set based on the values that the user enters into the First Name, Middle Name, and Last Name fields. Populating fields from other fields reduces the data entry that users and administrators must perform, consequently reducing potential data entry errors. Likewise, by providing option menus in place of text input fields, an administrator can select a department from a list instead of entering the department name.

For information on the specific HTML components that define the default Identity Manager forms, see Chapter 7, HTML Display Components.

Identity Manager background processing. Forms are also used within Identity Manager background processing. For example, forms can work in conjunction with resource adapters to process information from an external resource before storing it in the Identity Manager repository. When creating forms to manipulate data in the background, you focus primarily on encoding logic because appearance is irrelevant in forms that are not visible to users.
For more information on using hidden (nonvisible) components, see the section titled Using Hidden Components.

When customizing Identity Manager, you will probably edit one of the following forms:

End User Menu Form
Anonymous User Menu Form
Tabbed User Form
End User Form
Approval Form
Change Password Forms

These forms control the creation and modification of users and the display of the main menu that the user sees. They are described in greater detail in the following sections.

During view and form interactions through the Administrator Interface JSPs for launching requests (before workflow launch), the view is edited directly. Consequently, the form runs in the namespace specified by the form attribute. Typical attribute namespaces include:

accounts[*].*
waveset.*
accountInfo.*
:display.session (session for admin; does not apply to approval pages)

By default, there are two implementations of the Change Password forms:

End User Change Password – This form is the default password change form. It presents a simple set of fields with which the user can change their password. The password policies for all resources that are assigned to the user are aggregated and summarized, and Identity Manager applies the password change to all assigned resources.

Basic Change Password – This form is present in both the Administrator and User Interfaces. It provides information about the resources that are assigned to the user and allows the user to individually select the resources on which Identity Manager will change the password.

Both Change Password forms support the use of the RequiredChallenge form property. When this property is set to true, the user is prompted to enter the old password after specifying the new password. See Adding a Password Confirmation Challenge for more information.

End User Menu Form controls the display of the main menu in the Identity Manager User Interface.
Typically, this form contains links for changing the user's password, editing account attributes, and changing answers to authentication questions. You can customize End User Menu Form to add links that launch special workflow processes accessible to the user (for example, a process to request access to a system).

You can set the RequiresChallenge property in the End User Interface Change Password Form to require users to reenter their current password before changing the password on their account. For an example of how to set this property, see the Basic Change Password Form in enduser.xml.

For example, to present the End-User Test Process as a link to click from the end-user pages, add the entries shown in the following code example:

The system re-evaluates this form's <Default> expressions whenever the page is refreshed. You can disable this forced regeneration of the form by adding the doNotRegenerateEndUserMenu property (set to true) on the End User Menu form as follows:

<Properties>
    <Property name='doNotRegenerateEndUserMenu'>
        <Boolean>true</Boolean>
    </Property>
</Properties>

Anonymous User Menu Form controls the display of the main menu in the Anonymous User Interface.

Tabbed User Form is the default form used for user creation and modification in the Identity Manager Administrator Interface. You can customize a copy of this form by extending it with a form of your design.

Do not directly edit the Tabbed User Form. Instead, Sun recommends that you make a copy of this form, give it a unique name, and edit the renamed copy. This will prevent your customized copy from being overwritten during service pack updates and upgrades.

Customize your copy of Tabbed User Form to:

Restrict the number of attributes that are displayed on the Edit User page. By default, this page displays every attribute that is defined on the schema map for a resource, which can result in an overwhelming list of attributes for a hiring manager to fill out.
Set the default field types to more helpful select boxes, checkboxes, and multivalue fields. By default, every attribute defined on a resource assigned to a user will appear on the Create User and Edit User pages as a text box (or as a checkbox for Boolean values).

Include additional forms to allow common forms to be used on multiple pages.

Tabbed User Form contains these fields:

accountId
role
organization
resource list
application list
MissingFields

Do not use the MissingFields element in a production environment. It is provided for educational purposes only. When creating or customizing a User form from the Tabbed User Form, you must replace the MissingFields element with explicit references to each individual attribute that can be pushed to the assigned resource. You must provide this replacement to avoid common pitfalls that can result from using the global namespace too heavily. (For example, your workflows will not populate resources unless they use global syntax.)

(The MissingFields field is not actually a field. It is an element that indicates to the form generator that it should automatically generate text fields in the global namespace for all attributes that can be pushed to the assigned resources that are not explicitly declared in the Tabbed User Form.)

By default, every attribute defined on a resource that is assigned to a user appears on the Create User and Edit User pages as a text box (or checkbox for Boolean values).

End User Form controls the page that the system displays when a user selects Change Other Attributes from /user/main.jsp in the Identity Manager User Interface. From this page, a user can change his password, authentication questions, and email address. You can customize End User Form to grant users control over other fields, such as those that handle phone numbers, addresses, and physical office locations.
Approval Form controls the information that is presented to a resource, role, or organization owner when he is designated an approver of user requests. By default, this page displays a set of read-only fields that contain the name of the administrator that started the process. It also displays information about the user, including the account ID, role, organization, and email address. This form ensures that the resource owner gets a last chance to change a user value before the user is created. By default, approving a user displays all the user attributes in read-only fields.

You can customize Approval Form to:

Add and remove information about a user.

Assign the approver the ability to edit this information so that he can modify the information entered on the initial user form.

Create your own approval forms for different purposes. For example, you can create different approval forms for use when an administrator or resource owner initiates account creation or deletes a user.

How the system processes a form helps determine the behavior of the form in the browser. All form-driven pages are processed similarly, as described below:

1. A page is requested from the Identity Manager User or Administrator Interface. The interface requests a view from the server. A view is a collection of named values that can be edited. Each view is associated with a form that defines how the values in the view are displayed to the user.

2. The server assembles a view by reading data from one or more objects in the repository. In the case of the user view, account attributes are also retrieved from resources through the resource adapter.

3. Derivation expressions are evaluated. These expressions are used to convert cryptic, encoded values from the resource into values that are more meaningful to the user. Derivations are evaluated when the form is first loaded or data is fetched from one or more resources.

4. Default expressions are evaluated. These fields are set to the default value if the field is null.

5. HTML code is generated. The system processes view data and the form to produce an HTML page. During this processing, the allowedValues properties within expressions are evaluated to build Select or MultiSelect HTML components.

6. The page is presented in the browser, and the user can edit the displayed values.

7. During editing, the user typically modifies fields, which can result in a refresh or recalculation of the page. This causes the page to be regenerated, but the system does not yet store the edited data in the repository.

8. Modified values are assimilated back into the view. When a refresh event occurs, the interface receives values for all the form fields that were edited in the browser.

9. Expansion expressions are evaluated. This can result in additional values being placed into the view. Expansion rules are run whenever the page is recalculated or the form is saved.

10. The view is refreshed. The interface asks the server to refresh the view and provides the current set of edited values. The server may insert more values into the view by reading data from the repository or the resources.

11. Derivation expressions are evaluated. Typically, derivation expressions are not evaluated when a view is refreshed. In some complex cases, the system can request derivations after the refresh.

12. The system processes the refreshed view and form and builds another HTML page, which is returned to the browser.

13. The user sees the effects of the refresh and continues editing. The user can cause the view to be refreshed any number of times (repeating steps 7 through 12 each time) until the user either saves or cancels the changes.

14. If the edit is canceled, all the data accumulated in the view is discarded, and the server is informed. As a result, the server can release any repository locks, and control passes to a different page.

15. If the edit is saved, the interface receives the values that have been modified and assimilates them into the view (see step 8).

16. Validation expressions are evaluated. If field values do not meet required specifications, then an error is presented and the field values can be corrected. Once the changes have been made, the process returns to step 13.

17. Expansion expressions are evaluated one last time (see step 9).

18. If the server saves the view, this typically results in the modification of one or more objects in the repository. With user views, resource accounts may also be updated.

Several of the preceding steps require iteration over all the fields in the form. These include the evaluation of Derivation expressions, the evaluation of Default and Validation expressions, the generation of HTML, and the evaluation of Expansion expressions. During all field iterations, Disable expressions are evaluated to determine if this field should be processed. If a Disable expression evaluates to true, the field (and any nested fields it contains) is ignored. See Defining Field Names in this chapter for more information on these special types of expressions.
http://docs.oracle.com/cd/E19225-01/820-5821/6nh6l8uej/index.html
CC-MAIN-2017-13
refinedweb
2,055
53
Setting up basic prow on the IBM Cloud

Scope
This will walk you through specifics on getting prow working on the IBM Cloud. A lot of this is taken from the getting started guide, but there are some specifics you need for the IBM Cloud. You can also take this as an example for a generic Kubernetes cluster, minus the IBM Cloud commands.

Create a Bot
The first thing you should do is create a GitHub bot. It'll be the account that will work from your prow instance. If you don't know how or where, use this signup page. Be sure to add the bot to the repositories you expect it to watch; it should be an administrator/owner, to make sure it can see and do what it needs with the repository.

Next create a GitHub Personal Access Token for the bot account, adding the following scopes:
- Must have the public_repo and repo:status scopes
- Add the repo scope if you plan on handling private repos
- Add the admin:org_hook scope if you plan on handling a GitHub org

Place this API key in a file like github_token or the like.

prow code
Next you need the prow code so you can install it on your Kubernetes cluster. Go ahead and check out the code with the following command.

git clone git@github.com:kubernetes/test-infra.git

Change directory into the test-infra directory you created, and continue to the next step.

Create cluster role bindings
You'll need to create a clusterrolebinding for the user you log in as. On the IBM Cloud, your username is IAM@IBMid; as an example, below is mine.

export USER="IAM#jja@ibm.com"
kubectl create clusterrolebinding cluster-admin-binding-${USER} --clusterrole cluster-admin --user="${USER}"

This will make sure that when you apply the manifests the prow instance can create all the different things that are required on your Kubernetes cluster.

Create a GitHub Secret
Next you need to create the secrets for your instance of prow.
Using the following commands you can create your main secret to send to the Webhook endpoint. Use the github_token you created earlier for the 3rd command.

openssl rand -hex 20 > ./secret
kubectl create secret generic hmac-token --from-file=hmac=./secret
kubectl create secret generic oauth-token --from-file=oauth=../github_token

Apply starter.yaml
Now that you have the majority set up, you need to deploy the actual manifests for prow. The following command will push and start installing your instance.

kubectl apply -f config/prow/cluster/starter.yaml

Now verify the deployments; everything should be in READY state and AVAILABLE.

kubectl get deployments
NAME             READY UP-TO-DATE AVAILABLE AGE
deck             2/2   2          2         2m
hook             2/2   2          2         2m
horologium       1/1   1          1         2m
plank            1/1   1          1         2m
sinker           1/1   1          1         2m
statusreconciler 1/1   1          1         2m
tide             1/1   1          1         2m

Setting up Ingress correctly
Verify the ingress is set up via this next command. The default ingress controller needs some editing, but to make sure that it is "up" the following command will verify it.

kubectl get ingress ing

If everything looks OK, you can now change the ingress via the next steps, specific to the IBM Cloud. Using the IBM Cloud you get an ingress controller for free. Run the following command to get the needed information, where k8s.asgharlabs.io is the name of your cluster:

$ ibmcloud ks cluster get --cluster k8s.asgharlabs.io
Retrieving cluster k8s.asgharlabs.io...
OK
Name:                 k8s.asgharlabs.io
ID:                   brfakb8d0dlm8ddhq91g
State:                normal
Created:              2020-06-08T21:15:21+0000
Location:             dal13
Master URL:           Public Service Endpoint URL:
Private Service Endpoint URL: -
Master Location:      Dallas
Master Status:        Ready (1 day ago)
Master State:         deployed
Master Health:        normal
Ingress Subdomain:    k8sasgharlabsio-706821-0e3eA_FAKE_HASH1e8aa6fe01f33bfc4-0000.us-south.containers.appdomain.cloud
Ingress Secret:       k8sasgharlabsio-706821-0e3eA_REALLY_REALLY_FAKE_SECRETf33bfc4-0000
Ingress Status:       healthy
Ingress Message:      All Ingress components are healthy
Workers:              3
Worker Zones:         dal13
Version:              1.18.3_1514
Creator:              jja@ibm.com
Monitoring Dashboard: -
Resource Group ID:    5eb57fd577b64b51beb832c2e9d5287a
Resource Group Name:  Default

Take note of your Ingress Subdomain and Ingress Secret for the next step.

Updating Ingress to work
Go ahead and take the following yaml and change it for your subdomain, and change the prow prefix to something else if you desire.

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  namespace: default
  name: ing
spec:
  tls:
  - hosts:
    - prow.k8sasgharlabsio-706821-0e3eA_FAKE_HASH1e8aa6fe01f33bfc4-0000.us-south.containers.appdomain.cloud
    secretName: k8sasgharlabsio-706821-0e3eA_REALLY_REALLY_FAKE_SECRETf33bfc4-0000
  rules:
  - host: prow.k8sasgharlabsio-706821-0e3eA_FAKE_HASH1e8aa6fe01f33bfc4-0000.us-south.containers.appdomain.cloud
    http:
      paths:
      - path: /hook
        backend:
          serviceName: hook
          servicePort: 8888
      - path: /
        backend:
          serviceName: deck
          servicePort: 80

Conclusion
Go to that address in a web browser and verify that the "echo-test" job has a green check-mark next to it. At this point you have a prow cluster that is ready to start receiving GitHub events!
https://practicaldev-herokuapp-com.global.ssl.fastly.net/jjasghar/ibm-cloud-and-prow-setup-2p56
CC-MAIN-2021-43
refinedweb
825
54.83
I am making a mock financial aid account for a project and I wanted to know why my program is not working. Can anyone help me out?? After creating my class, I am trying to access my getName(student_name) function in my main(). It says: "error C2065: 'student_name' : undeclared identifier". How can it be undeclared when I did #include "Financial_Aid_.h" in my main_program.cpp file?? Can someone answer this for me please? Thank You guys and gals for looking at my post. Here is my code.

This is my Financial_Aid_.h header file:

#include <iostream>
#include <cmath>
#include <vector>
#include <algorithm>
#include <ctime>
#include <conio.h>
#include <string>
using namespace std;

class Financial_Aid_Data
{
private:
    string name;
    float student_id, scholarship_amt;
    int awards;

public:
    Financial_Aid_Data() /* constructor */
    {
        name = "unknown";
        student_id = 123456;
        scholarship_amt = 0;
        awards = 0;
    }

    string getName(string student_name)
    {
        cout << "What is your name? ";
        getline(cin, student_name);
        cout << endl << endl;
        name = student_name;
        return name;
    }

    float getIdNumber(float student_id_num)
    {
        cout << "What is your student id number? ";
        cin >> student_id_num;
        student_id = student_id_num;
        cout << endl << endl;
        return student_id;
    }

    int getAwardAmount(int amt_of_awards)
    {
        cout << "How many awards do you have? ";
        cin >> amt_of_awards;
        awards = amt_of_awards;
        cout << endl << endl;
        return awards;
    }

    float getScholarship_amount(int awards, float my_scholarship)
    {
        vector<float> num_of_awards;
        for (int i = 0; i < awards; i++)
        {
            cout << "How much is this scholarship worth? ";
            cin >> my_scholarship;
            num_of_awards.push_back(my_scholarship);
            cout << endl << endl;
        }
        for (float x = 0; x <= num_of_awards.size(); x++)
        {
            scholarship_amt += x;
        }
        cout << "Your amount is: " << scholarship_amt << endl;
        return scholarship_amt;
    }

    void Display()
    {
        cout << "Your name is: " << name << endl << endl;
        cout << "Your id is: " << student_id << endl << endl;
        cout << "Your scholarship amount is: $" << scholarship_amt << " from " << awards << " awards." << endl;
    }
};

And this is in my main_program.cpp:

#include "Financial_Aid_.h"

void Continue()
{
    char c;
    cout << "Press any key to continue... ";
    c = _getch();
}

int main()
{
    Financial_Aid_Data Student_Data;
    Student_Data.getName(student_name);
    Continue();
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/435078/class-problem
CC-MAIN-2017-26
refinedweb
300
59.19
Advanced Registration Forms Overview Advanced registration forms let you build multi-step custom registration and admin user management forms for your applications. What Are Advanced Registration Forms Advanced registration forms let you build powerful, multi-step, custom registration experiences with no coding required. You might be interested in this feature if you use the FusionAuth themed login pages for your application, and the default self service registration form doesn’t meet your needs. Whether you want to break a form up into multiple steps for a better user experience, gather user consents, or have the user provide app specific data, advanced registration forms can help. If you are building your own login and registration pages using the APIs, you can still use the form builder in the administrative user interface, but you will have to generate the user facing HTML from the configured form data and recreate any front end logic. You may want to consider using the themeable hosted login pages instead. How Do I Use Advanced Registration Forms? This feature is only available in paid editions of FusionAuth. Please visit our pricing page to learn more about paid editions. Here’s a video showing setup and use of the advanced registration forms feature. To use advanced registration forms, you must: Create any needed custom form fields. Assemble predefined and custom form fields into steps, and steps into a form. Configure an application to use the form for self service registration. Theme the form (optional, but highly recommended). What is the Difference Between Advanced and Basic Registration Forms FusionAuth has two types of registration forms: basic and advanced. Both of these options allow you to enable self service registration in your application. The basic option is available in all editions of FusionAuth, including Community. Basic registration is limited to a single step and offers minimal configuration. 
You may mix and match from the following user data fields: Birthdate Full name Middle name Mobile phone Any displayed fields can be required for successful registration. You can choose to use a username or an email for your login identifier. A password field is displayed and required. This is a solid registration page; you can collect information and at the end the user will be associated with the application in FusionAuth and be able to sign in. The look and feel of the registration form can be themed. Validation is limited to having fields be required, though you can also implement additional validation in theme managed client side javascript. Basic registration forms have a subset of the functionality of advanced registration forms. With advanced registration forms, in addition to registering a user to an application, you can also: Collect additional profile data and store it in FusionAuth. Validate any field on the server in a variety of ways, including matching a regular expression. Use more complicated fields, such as consents and confirmation fields. Break a registration process into a series of less imposing steps. Set Up To use advanced registration forms, you must have a valid license key. Please visit our pricing page to review paid edition options and buy a license. Next, you need to activate the license. Before that, ensure that your FusionAuth instance has outbound network access. To activate, follow the steps outlined in the Reactor documentation. Building an Advanced Form Registration Flow Let’s create a form for a fictional real estate application. When someone registers, the application should collect the minimum home price and maximum home price that the user is looking at. You’ll also need to collect other, more typical, data, such as an email address. 
This guide will walk through creating a form to collect the following profile information:

Phone number
Free form geographic area where they are looking to buy
Minimum house price
Maximum house price

Some of these fields are available in every FusionAuth installation, but some are custom. Before you create a form, first create any non-standard form fields.

Create Form Fields

The following fields are available by default:

Full name
Mobile phone
Birthdate
Username
Middle name

If you need additional fields, you must create them. To do so, navigate to the form fields section of the administrative user interface. You'll see a list of the above default fields, any existing custom fields, and a button to create new ones. You can mix and match any fields listed here on a form. If what you need is already defined, there's no need for any custom form field creation. But if not, create a new form field.

Custom Form Fields

The real power of advanced registration forms comes when you add custom fields. You can add as many of these as you'd like. You may store data in any of the predefined user fields such as user.fullName. But you can also use the data field on both the registration and the user objects to store data.

user.data is the right place to store information related to a user's account which is not application specific. If you wanted information that multiple applications might use, such as a current mailing address, that would be best stored in the user.data field. Store data related to a user's account and specific to an application in registration.data. As a reminder, a registration is a link between a user and an application defined in FusionAuth.

Since you are building a real estate app, the minimum house hunting price point of the user is only useful to this application. Therefore, storing the data in registration.data is the right approach. If you were later to build a mortgage application, there'd be different fields, such as loan amount sought, associated with that registration.
Now that you have decided where to store the custom profile data, you should create the fields. First, add a minimum price field. Configure the form field to have a data type of number and a text form control. The user's minimum price point is really useful information. Make it required so that a new user can't complete registration without providing a value. Here's what it will look like before saving the configuration:

Add a maximum price field by duplicating the minprice field. Use a key of maxprice; keys must be unique within the data object, registration.data in this case. Change the name as well. All other settings can be the same as those of the minprice field.

Finally, add a geographic search area custom field. The purpose of this field is to capture where the new user is looking to buy. It'll be a string, but make it optional. Potential users might not have a good idea of where they're interested in looking at homes.

After saving the above custom fields, if you view the list of fields, you'll see the three new fields. They are now available for the advanced registration form you'll build next. These custom fields can be used for future forms as well.

Create a Form

The next step is to assemble the form from the form fields. You can mix and match any of the standard, predefined form fields and your custom form fields. Fields may appear in any order on the form. Arrange them in whatever order makes the most sense for your potential users. You may also add as many steps as make sense. It's a good idea to group similar types of fields together into the same step.

When you create a new form, you'll see a name field and a button to add steps. There are a few rules about advanced registration forms. Each form must have:

At least one step
Either an email or a username field in one of the steps
A password field in one of the steps
At least one field on each step

To begin building this real estate application form, navigate to the forms section and click the + button to create a new form.
Add the first step with the email, password, first name, and phone number fields. Create a second step. Add your custom house hunting parameter fields: the geographic area of interest, the minimum house search price, and the maximum house search price.

After you've added these fields to the form, feel free to rearrange the form fields within each step by clicking the arrows to move a field up or down. The form configuration specifies steps and field display order within those steps. If you need to move a field between steps, delete it from one step and add it to another. To change field validation, return to the form fields section and make your changes. When you're done tweaking the form to your liking, save it.

Associate a Form With an Application

Once you've created an advanced registration form, the next step is to specify which applications should use this form. Forms can be reused in any application and any tenant. In addition to specifying the registration form, you'll need to configure a few other options. Assuming you are creating a new FusionAuth application, navigate to the applications section and add one. If you aren't, you'll need to tweak the settings of your existing application.

You must configure a redirect URL; this is where the user is sent when registration succeeds. Navigate to the OAuth tab of your application and enter a valid redirect URL. Though the specifics depend on your application settings, such as whether you require email verification, a user will typically be authenticated at the end of the registration process.

You must configure the application to allow users to register themselves. Otherwise, no users will be allowed to create their own accounts, which means they'll never see the registration form. Navigate to the Registration tab and enable Self service registration. You configure the application to use your registration form by checking the advanced option and selecting the form you created above.

Return to the list of applications. Your form is ready to go.
Once you have the registration URL, your users can sign up.

User Registration

To find the registration URL, view the application you created and copy the Registration URL. Now that you have the URL, open up an incognito window or a different browser and navigate to it. The first screen asks for your first name, email address, password and phone number. Each screen also shows how many registration steps there are. The second screen displays the custom fields: the minimum and maximum home prices and your area of geographic interest. Click Register to complete your sign up. You'll be sent to the configured redirect URL and be signed in.

The Admin View

Sign into the administrative user interface and navigate to the users section. You should see a new account added with the data you filled out. If you go to the user data tab on the new user's account details page, you'll see the custom data as well.

Theming

The form you built has a few rough user interface elements. You can create a better user experience by theming the form.

Theming Setup

While you can make the changes outlined below in the administrative user interface, you can also manipulate the theme via the FusionAuth API. To do so, create an API key with access to, at a minimum, the /api/theme endpoint. Next, create a new theme, since the default theme is read-only. Themes are assigned on a tenant by tenant basis, so you may either change the theme for the default tenant or create a new tenant and assign a new theme to it. This guide will do the former: create a new theme named Real Estate Application, then edit the default tenant and set its Login theme setting to the Real Estate Application theme.

Customizing a Theme

Customizing the theme gives you full control over what the user sees. As a reminder, here's what the first step of the registration flow looked like with no theming:

You are going to add placeholders and labels, but there's a lot more you can do; check out the theming documentation for more information. Note the Id of your new theme (for example, 42968bbf-29af-462b-9e83-4c8d7c2d55cf); you'll need it for the API calls below.
Modifying a Theme Via API

To change placeholders or other messages shown to users, such as validation errors, you must modify the messages attribute of a theme. These are stored in a Java properties file format by FusionAuth. You might want to use the API, as opposed to the administrative user interface, to change these messages if you plan to version control them or use automated tooling. Scripts can help manage updating the messages via the API. The shell scripts below assume a particular FusionAuth base URL; if yours differs, adjust the endpoints accordingly. These scripts are also available on GitHub. To use them, you must have jq and python3 installed locally.

Retrieving a Theme File For Local Editing

To modify these messages, you will first retrieve the messages and store them in a text file. Below is a shell script which converts the JSON response from the API into a newline delimited file:

API_KEY=<your api key> # created above
THEME_ID=<your theme id>

curl -H "Authorization: $API_KEY" ''$THEME_ID \
  | jq '.theme.defaultMessages' \
  | sed 's/^"//' \
  | sed 's/"$//' \
  | python3 convert.py > defaultmessages.txt

The convert.py script turns embedded newlines into real ones:

import sys

OUTPUT = sys.stdin.read()
formatted_output = OUTPUT.replace('\\n', '\n')
print(formatted_output)

Running this script after updating the API key and theme ID will create a defaultmessages.txt file in the current directory. This script downloads only the messages file, but could be extended to retrieve other theme attributes. The defaultmessages.txt file contents look like this:

#
# Copyright (c) 2019-2020, FusionAuth, All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the \"License\");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# ...
#

# Webhook transaction failure
[WebhookTransactionException]=One or more webhooks returned an invalid response or were unreachable.
Based on your transaction configuration, your action cannot be completed.

The file is approximately 200 lines in length, so the above is an excerpt. Open it in your favorite text editor.

Modifying the Messages File

You are going to add both placeholders for the text input boxes as well as custom validation messages. To add the placeholders, add values to the Custom Registration section. Maintaining sections in this file isn't required, since it's not a .ini file. However, it's a good idea to change only what is needed and not restructure the entire file; upgrades to FusionAuth will add more properties and you will have to merge your changes in. Search for the section starting with Custom Registration forms. The keys of the messages file must match the field keys for the registration form. To add the placeholders for the custom and default input fields, add these lines:

# ...
user.firstName=Your first name
user.mobilePhone=Your mobile phone num
registration.data.minprice=Minimum home price
registration.data.maxprice=Maximum home price
registration.data.geographicarea=Where are you looking?
# ...

To add validation messages, search for # Custom Registration form validation errors. You'll add the error messages there. Each error message takes the form [errortype]fieldname. Look at the Default validation errors section to see the list of valid `errortype`s. The field name is the key name for the field. For example, to display a user friendly error message when required price range information is omitted or invalid, add these properties:

[invalid]registration.data.minprice=Please enter a number
[invalid]registration.data.maxprice=Please enter a number
[missing]registration.data.minprice=Minimum home price required
[missing]registration.data.maxprice=Maximum home price required

These messages are displayed to the user when the minimum or maximum prices are invalid.
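Since a typo in a message key silently breaks the placeholder or error text, it can help to sanity-check the messages file against your form field keys before uploading it. Here is a short sketch; the parsing is simplified (it ignores the full Java properties escaping rules) and the field keys are the ones used in this guide.

```python
# Sketch: verify that every registration form field key has a placeholder
# entry in the messages file before uploading it to the theme.

messages = """
user.firstName=Your first name
user.mobilePhone=Your mobile phone num
registration.data.minprice=Minimum home price
registration.data.maxprice=Maximum home price
registration.data.geographicarea=Where are you looking?
[invalid]registration.data.minprice=Please enter a number
[missing]registration.data.minprice=Minimum home price required
"""

# Parse "key=value" lines, skipping comments and blank lines.
entries = {}
for line in messages.splitlines():
    line = line.strip()
    if not line or line.startswith("#"):
        continue
    key, _, value = line.partition("=")
    entries[key] = value

form_field_keys = [
    "registration.data.minprice",
    "registration.data.maxprice",
    "registration.data.geographicarea",
]

# Every form field should have a placeholder entry...
missing_placeholders = [k for k in form_field_keys if k not in entries]
assert missing_placeholders == []
# ...and required fields benefit from [missing]/[invalid] messages.
assert "[missing]registration.data.minprice" in entries
```

In practice you would read defaultmessages.txt from disk instead of an inline string; the check is the same.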
Because these fields have the number data type, they are invalid any time the user input is not a number, but missing when the empty string is provided.

If any of the values added to defaultmessages.txt contain a double quote, escape it: \". Since the file will eventually be turned into a quoted JSON attribute and sent to the API, an unescaped double quote is invalid JSON and will cause the API call to fail.

Updating the Messages

After defaultmessages.txt has been changed, it needs to be converted to JSON and sent to FusionAuth. The following script updates a FusionAuth theme's defaultMessages attribute:

API_KEY=<your api key>
THEME_ID=<your theme id>
FILE_NAME=out.json$$

awk '{printf "%s", $0"\\n"}' defaultmessages.txt \
  | sed 's/^/{ "theme": { "defaultMessages": "/' \
  | sed 's/$/"}}/' > $FILE_NAME

STATUS_CODE=`curl -XPATCH -H 'Content-type: application/json' -H "Authorization: $API_KEY" ''$THEME_ID -d @$FILE_NAME -o /dev/null -w '%{http_code}' -s`

if [ $STATUS_CODE -ne 200 ]; then
  echo "Error with patch, exited with status code: "$STATUS_CODE
  exit 1
fi

rm $FILE_NAME

To load the new messages, run this script in the directory with the modified defaultmessages.txt file. Visit the registration URL in your incognito browser and see the changes:

Adding Form Labels

You can customize your field display more extensively by modifying the macros used to build the registration form. You can edit these directly in the administrative user interface. Navigate to your theme and find the customField FreeMarker macro. The macro is a series of if/then statements executed against every custom field as the user interface is generated. The macro examines each field definition and creates the correct HTML element.
For instance, a password field will be rendered as an HTML input field with the type password. To add a label to each field, after [#assign fieldId = field.key?replace(".", "_") /], add this:

<label for="${fieldId}">${theme.optionalMessage(field.key)}:</label>

Open an incognito window and go through the registration flow again. You should see labels for both steps. These label values are pulled from your message bundles. This gives you a glimpse of the full flexibility of FusionAuth themes. You can use the power of Apache FreeMarker, ResourceBundles, CSS, and JavaScript to customize and localize these pages. Check out the theme documentation for more.

Reading the Data

The registered user's profile data is available via the FusionAuth APIs, in the standard user fields, user.data, and registration.data. It is also available for viewing, but not editing, in the administrative user interface. To enable users to modify their profile data, you'll have to build a profile management application. The application will let users log in or register. After a user has been authenticated, it will display their profile information. Because the application profile data, such as the home price range, isn't standard, you can't use an OAuth or OIDC library to retrieve it. Instead, you must use the FusionAuth APIs. To do so, you'll need to create an API key and then use either the API or one of the client libraries to access it. This interface should be integrated with the rest of your application, but this guide will build an example in Python and Flask. You can view the example code here.

Creating an API key

Go to the API keys settings and create an API key. Configure these endpoints to be allowed:

/api/user/registration: all methods
/api/form: GET only
/api/form/field: GET only

Here's the relevant section of the example application:
# ...
@app.route('/', methods=["GET"])
def homepage():
    user = None
    registration_data = None
    fields = {}
    if session.get('user') != None:
        user = session['user']
        fusionauth_api_client = FusionAuthClient(app.config['API_KEY'], app.config['FA_URL'])
        user_id = user['sub']
        application_id = user['applicationId']
        client_response = fusionauth_api_client.retrieve_registration(user_id, application_id)
        if client_response.was_successful():
            registration_data = client_response.success_response['registration'].get('data')
            fields = get_fields(fusionauth_api_client)
        else:
            print(client_response.error_response)
    return render_template('index.html', user=user, registration_data=registration_data, fields=fields)
# ...

This home page route examines the user object, which was returned from the successful authentication. It pulls off the sub attribute, which is the user identifier and looks something like 8ffee38d-48c3-48c9-b386-9c3c114c7bc9. It also retrieves the applicationId. Once these are available, the registration object is retrieved using a FusionAuth client. The registration object's data field is placed into the registration_data variable and passed to the template for display. The helper method, to be examined below in more detail, is also called, and whatever it returns is made available to the template as the fields variable. Here's the get_fields helper method:

# ...
def get_fields(fusionauth_api_client):
    fields = {}
    client_response = fusionauth_api_client.retrieve_form(app.config['FORM_ID'])
    if client_response.was_successful():
        field_ids = client_response.success_response['form']['steps'][1]['fields']
        for id in field_ids:
            client_response = fusionauth_api_client.retrieve_form_field(id)
            if client_response.was_successful():
                field = client_response.success_response['field']
                fields[field['key']] = field
    else:
        print(client_response.error_response)
    return fields
# ...

This function looks at the form and retrieves the ids of all fields on the second step: ['form']['steps'][1].
It then retrieves the configuration of each field. The code then adds that form field configuration information to a dictionary, with a key of the field key. A field key looks like registration.data.minprice. This dictionary is used to build attributes of the update form, which is created later. This helper would need to be modified to loop over multiple steps if you had more than one step collecting profile data. Here's the update form processing route:

# ...
@app.route("/update", methods=["POST"])
def update():
    user = None
    error = None
    fields = []
    fusionauth_api_client = FusionAuthClient(app.config['API_KEY'], app.config['FA_URL'])
    if session.get('user') != None:
        user = session['user']
        user_id = user['sub']
        application_id = user['applicationId']
        client_response = fusionauth_api_client.retrieve_registration(user_id, application_id)
        if client_response.was_successful():
            registration_data = client_response.success_response['registration'].get('data')
            fields = get_fields(fusionauth_api_client)
            for key in fields.keys():
                field = fields[key]
                form_key = field['key'].replace('registration.data.', '')
                new_value = request.form.get(form_key, '')
                if field['control'] == 'number':
                    registration_data[form_key] = int(new_value)
                else:
                    registration_data[form_key] = new_value
            patch_request = {'registration': {'applicationId': application_id, 'data': registration_data}}
            client_response = fusionauth_api_client.patch_registration(user_id, patch_request)
            if client_response.was_successful():
                pass
            else:
                error = "Unable to save data"
            return render_template('index.html', user=user, registration_data=registration_data, fields=fields, error=error)
    return redirect('/')
# ...

This code retrieves the user's registration object. It updates the data object with new values from the profile update form, perhaps transforming a field from a string to a different data type if required. Currently only the number type is transformed, but the code could be extended to handle boolean or other data types.
After the object has been updated, a PATCH request is made. This updates only the data field of the user registration. Here's an image of the portal in action:

You can view the example code here, which includes templates and the login and registration links as well as the above profile modification code.

Editing User Data In the Admin UI

Available Since Version 1.20.0

Advanced user registration forms add custom data to your users' profiles. However, what happens when that profile data needs to be modified? You can write code against the APIs to modify it, but using a custom admin form is easier. You don't have to write any code, only configure a form or two.

There are two types of profile data:

- User data, which is associated with the user. This could be in standard fields such as mobilePhone or custom data fields in user.data.
- Registration data, associated with the user's registration to an application. This could be in standard fields such as roles or custom data in registration.data.

Each of these types of profile data has an admin form associated with it. Admin user forms are associated with the tenant, and admin registration forms are associated with an application. The default user and registration editing forms ship with FusionAuth and are implemented using this functionality. They can easily be replaced by your own custom forms suited to your business needs.

How To Use A Custom Admin Form

There are a few steps to using custom admin forms:

- Determine if you are going to create a custom registration form, a custom user form, or both.
- Consider data sources for each field of the profile. It could be user registration, API calls from other systems, or data manually entered by an admin. Should some data be protected from admin modification?
- Create custom fields if needed. This is where you'd set the data type, form control and validation rules.
- Assemble the fields into a form, including possibly organizing them into sections.
Update the tenant or application, as appropriate, to use the form.

Let's walk through each of these.

Custom Registration Form or Custom User Form

To determine whether you should create a custom admin registration form, a custom admin user form, or both, think about where the data should be stored. If the profile information is useful for more than one application which the user might log in to, then the data should be stored on the user. You could put it in a custom field (user.data.somefield) or repurpose one of the standard user fields. For instance, for the real estate search application built above, a boolean value indicating that someone is a current client would be good to store on the user. Data stored as user.data.currentClient would be helpful for many applications. Some examples of functionality you might build based on this value:

- For a search application, display additional information to current clients or add a CTA for past clients.
- For a mortgage application, display additional interest rate or program information.
- Trigger a welcome email when someone becomes a client.

If, on the other hand, your data is useful only to a specific application, associate it with a registration. An example of that is the registration.data.minprice field created above. The minimum and maximum price points someone is searching for only apply to the real estate search application. Such data won't be useful for other applications.

Profile Data Sources

Next, consider where the data will come from. You have three main sources of profile data:

- The user registration, when a new user signs up.
- API calls which modify the profile data using the User APIs or Registration APIs.
- The admin forms, used in the FusionAuth backend.

Think about what profile data will come from each source. For example, the user's email address will typically come from their registration. The date of their closing might come from an external scheduling system.
And the current client status should be set by a customer service rep or realtor. When you know where each field is coming from, you can consider what kind of administrator modifications should be allowed. Some of this user profile data will be submitted by the end user and should be read-only for admin users. An example of this would be a data sharing setting; does someone want their data shared with brokerage affiliated companies? Depending on your business rules, you may not want to expose this setting to your admins in the backend FusionAuth user interface. However, more typically, you'll want to allow your admin users to modify most profile data.

There'll also be fields which contain profile data not created at user registration time. Some of these may be created or updated by automated processes. Others, for example a user.data.notes field, will be manually updated by admin users. This field can be used to capture information about a user's real estate needs and should be updated by customer service reps. New clients certainly won't be providing this information about themselves on registration.

Create Custom Fields

If you want any fields in your custom admin forms which are not part of a user registration form you've created, such as the notes field mentioned above, you'll need to create a custom field for that data. To do so, navigate to the form fields section and add them. Here's an example of adding a user.data.notes field:

You may use any supported validation rules, form controls or data types. Don't forget to add the field names for any new fields to your theme's messages file. Otherwise users in the administrative user interface will see a field name like user.data.notes instead of User notes.

Build the Forms

Next, build the forms. You can use any of the custom or standard form fields previously created. First, let's add an admin user form. Navigate to the forms section and add a new form, making sure Type is set to Admin User. Add your form fields and order them as needed.
Multiple sections help organize the data if you have a large number of fields and want them logically grouped. Here we'll just add the fields we added for user registration as well as our new, admin only, notes field. You'll end up with a form looking similar to this:

Next up, let's create a custom admin registration form. This will only apply to the real estate search application. To add this form, navigate to the forms section and add a new form, making sure Type is set to Admin Registration. Add your form fields, ordering them as needed. Add multiple sections if desired. Again, this is typically a good idea if you have a large number of fields and want to logically group them. Below, the three custom registration fields added for user registration have been added to this form: Geographic Area, Minimum Price and Maximum Price. If you are doing the same, you'll end up with a form looking similar to this:

Next up, you'll need to associate the admin user form with the tenant, and the admin registration form with the application.

Associate the Form(s)

If your form is an admin user form, modify the tenant settings. If, on the other hand, it is an admin registration form, modify the application. In both cases, you can use either the administrative user interface or the API to make the updates. Below you'll see how to make the changes with the administrative user interface.

To change the form used to edit users, edit your tenant and choose your custom form as the Admin user form. To change the admin registration form, edit your application and choose your custom form as the Form value.

Once you've configured these forms, see how it looks for an admin user to edit a user or a registration by, well, editing a user or registration. These forms will be used both for editing users or registrations as well as adding new users or registrations. Any time you are accessing a user or registration from the FusionAuth administrative user interface, the specified form is used.
Limiting User Access to the FusionAuth UI

Using custom admin forms lets users access the FusionAuth web interface to manage custom user and registration data. But perhaps you don't want to expose all of the FusionAuth administrative user interface and configuration settings to employees who only need to be able to update user profile data? FusionAuth roles to the rescue! The FusionAuth application has over 25 roles which offer fine grained control over its functionality. Whether you want to let people only manage themes or webhooks, consents or lambdas, roles let you lock down access.

Let's build a user account which will only have user management access. After adding the account, if needed, edit the account's FusionAuth registration and give it the user_manager role. With this role, the user will be able to add and update users, but nothing else. To prevent privilege escalation, they won't be able to modify their own role or anyone else's, however. Check the user_manager checkbox and save the user. Next time they log in to the FusionAuth interface, they'll see only what they have permissions for.

Log out of your admin account and sign into this user manager account. When you edit a user, you can see the edit screen shows the fields you added to the form above. The same is true for adding or editing a registration for the application. If a user manager edits the URL to try to access other admin areas, they'll see a message letting them know that access is not authorized.

Using the API to Manage Forms

You can use the form fields and forms APIs to manage advanced registration forms. Using the API allows for migration of form configuration between environments as well as the dynamic creation of registration forms for new applications. For instance, if you had a private labelled application, you might want to allow an administrator to control which fields were required at registration without allowing them access to the FusionAuth administrative interface.
Building a custom interface and calling the FusionAuth APIs to assemble the registration form and associate it with the application would accomplish this.

Consents

To associate an existing consent with a field, select a field of Self consent. See the Consent APIs for more information on user consents. Consents are rendered as a checkbox to the user in the registration form. The consent field will have a name automatically generated based on the consent identifier, for example: consents['dd35541d-e725-4487-adba-5edbd3680fb8']. However, it can be referenced in the theme files. To add a label for the above consent, add this line to your messages file:

consents['dd35541d-e725-4487-adba-5edbd3680fb8']=I consent to sharing my data with affiliated companies

Form Fields and Validation

Making sure user registration data meets your quality requirements is important. FusionAuth provides multiple ways to validate user input during the registration process. Any validation failure will prevent the user from moving past the current registration step. The theme controls the location and display of error messages. All validation for advanced registration forms is either browser native or server side. If you'd like to add client side validation, you may inject JavaScript validation libraries and code into your login templates.

Form Control

If your field uses a form control with a limited set of options, such as a radio button or select dropdown, the user will be forced to choose from that set of options. Form field control options are documented in the form field API documentation.

Data Type

You can configure a form field to use one of the non-String data types. Doing so means the form field will require the user to enter data acceptable to that data type. For instance, if a form field has a data type of Number, any non-numeric value will result in an error message. Form field data type options are thoroughly documented in the form field API documentation.
The Required Attribute

If a field is configured to be required, a valid value must be provided. Otherwise, an empty string is a valid value.

The Confirm Value Attribute

If a field is configured to have a Confirm value, a second input field of the same type and control will be added to the form. This confirmation field will be displayed just below the original field, but the location can be customized by modifying the theme. The form will fail validation unless the same value is entered in both fields.

Regular Expression Validation

If Validation is enabled, a regular expression must be specified. The user input will be matched against the regular expression, and validation will fail if it doesn't match. See the Java Regular Expression documentation for more information on how to build such a regular expression.

Special Considerations

Searching on User Data

All data stored in the registration.data and user.data fields is indexed if you are using the Elasticsearch search engine. You may use the User Search API to search against these values. For example, if you wanted to find all the users with a minprice value between 50000 and 100000, you could use this Elasticsearch query as the query parameter to search for users with a given house hunting price range:

{
  "bool": {
    "must": [{
      "nested": {
        "path": "registrations",
        "query": {
          "range": {
            "registrations.data.minprice": {
              "gte": 50000,
              "lte": 100000
            }
          }
        }
      }
    }]
  }
}

Adding Required Fields Later

Once you enable self service registration, the authentication flow is:

Authorize -> Complete Registration -> Redirect

Every time a user authenticates using the hosted login pages, FusionAuth ensures their registration is complete. If you add a required field to the application's registration form after users have registered, the next time one of those users authenticates using the hosted login pages, they'll be sent to the registration form to fill out the required field. The OAuth complete registration template will be used in this scenario.
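To get a feel for the query's structure, here is a short sketch that builds the same nested range query in Python and checks its shape. The exact request envelope for the User Search API may differ from the one sketched here, so consult the User Search API documentation before sending it; the network call itself is omitted.

```python
import json

def minprice_range_query(gte, lte):
    """Build the nested Elasticsearch query shown above for a
    registration.data.minprice range search."""
    return {
        "bool": {
            "must": [{
                "nested": {
                    "path": "registrations",
                    "query": {
                        "range": {
                            "registrations.data.minprice": {"gte": gte, "lte": lte}
                        }
                    }
                }
            }]
        }
    }

query = minprice_range_query(50000, 100000)

# Sanity checks before the query would be embedded in a search request.
assert query["bool"]["must"][0]["nested"]["path"] == "registrations"
# The query must round-trip through JSON intact; assuming (not confirmed
# here) the search API accepts the query as a JSON string parameter.
payload = json.dumps(query)
assert "registrations.data.minprice" in payload
```

Building the query with a function like this keeps the price bounds parameterized, so the same search can serve different house hunting ranges.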
Modifying an Existing Form Field

You cannot change the underlying field, control, or data type of an existing form field; other attributes may be modified. If you need to change the data type or form control of a field, create a new one. Duplicate the form field and update the form to use the duplicate. Changing data types for the same underlying key in registration.data or user.data is problematic if you are using Elasticsearch and may require manual updates of the index. It is recommended that you change the key name if you must change the data type of a form field. For example, if you wanted to modify the real estate search form to have the minimum price be a drop down instead of a numeric input field, duplicate the existing form field and modify the control. Then update the form to use the new form field.

Registration With Other Identity Providers

If you have an advanced registration form, but allow a user to register with an external identity provider, such as Facebook or Active Directory, FusionAuth will drop the user into the registration flow after the external provider returns. Assume you've enabled the Facebook identity provider and allowed for registration with that provider. Also, assume you've created a registration form with three steps. The first step contains optional fields, and the second step contains required fields. After a user signs up with Facebook, they'll be dropped back into the registration flow on the second step. They'll be required to complete the registration form from the second step onward before they are fully registered.
https://fusionauth.io/docs/v1/tech/guides/advanced-registration-forms/
XML Parser Notes

Axis needs an XML parser implementing the JAXP 1.1 specification. It needs to be a fairly complete parser, as SOAP uses XML Schema and namespaces everywhere. The parser that Axis is primarily built and tested against is Xerces. Other implementations of the XML parser APIs may look the same to many apps, but cause Axis to choke with obscure errors. We have seen this with the Crimson, Caucho, Aelfred and Oracle parser implementations. Java 1.4 includes Crimson. If you are having problems with Axis and are using any parser other than Xerces, try switching to Xerces to see if the problem goes away.

Also, Axis wants only one parser on the classpath. If you have multiple implementations of an XML parser in the path, you are going to get in trouble. This goes for many other XML applications. Errors like class cast exceptions, or complaints about missing methods in org.w3c classes, are common symptoms. Track down where your parsers are coming from, and settle on the one parser that you want. Xerces, preferably.
https://wiki.apache.org/ws/FrontPage/Axis/Install/Diagnostics/XMLParsers
CONFESSIONS OF A POOR, INSANE, FUTURE MILLIONAIRE
This blog is for working class people interested in becoming millionaires, making money, investing in stocks, real estate, currencies or the bond market. It's about the pursuit of money and what it takes to "GET RICH".

No One Really Cares

The sooner you learn that no one cares, the better off you will be. When I say no one cares, it doesn't mean about you or your well-being, but about you and your dreams. It applies to entrepreneurs. It doesn't apply to Olympic athletes. It doesn't apply to help you need for your 9-to-5 job, if you're sick, or if your car breaks down. It applies specifically to your entrepreneurial vision.

HOW COME NO ONE CARES? I DON'T GET IT
No one cares because they can't see what you see. If they could, they wouldn't be slaving away at a job they hate every day of their lives. If they cared, they would be trying to help you, asking how things are going, and calling you up with suggestions.

It's not their fault. It doesn't mean they don't love you; it's just that your dream is not tangible to them.

WHAT'S TANGIBLE VERSUS INTANGIBLE
If you were working toward a promotion at your 9-to-5, that's tangible. As a society we are conditioned to that behavior. Work hard and get a crappy 5% raise, maybe 7%, or 2% in this economy. People believe that will happen. They are trained to believe that if you work more hours, you will most likely get a raise. We are even trained that 5% is good. There is no problem getting help and support for that.

However, if it's a situation that society is not conditioned to, like starting your own business, most people don't get it, and some people who do just want to see you fail. HARSH, but true.

If you are creating a website of some sort, don't expect anyone to be on board until it takes off and starts bringing in revenue. If you are starting a brick-and-mortar, don't expect anyone to pay attention until you land a client, get a big check, or do something drastic like mortgage the house to pay for your first year's expenses. Oh, I bet people are all ears now.

If you think I'm joking, let me prove it to you. Go right now to Facebook, write about what you are currently working on, and see how many people send you something encouraging. Maybe 1 or 2, if you are lucky. Then wait a few hours and post a meme (a funny viral picture): half of your friends will be hitting like, sharing that mess, and making all types of dumb comments. You get more support for a shaved cat than for launching your own enterprise.

Again, on Facebook: if you are launching a website and you write, "Hey guys, I'm launching my new site today, check it out," you might get 10 or 15 hits. But if you put, "Wow, just stumbled onto this site and it's off the hook!", and there is no clear association to you, so people realize you are not benefiting from it, then they will visit it, bookmark it, and visit that site every day.

Believe it or not, subconsciously people won't visit your site because they don't want to see you get ahead. They don't even realize they are doing it. SCARY!

HOW SHOULD YOU ACT ON THIS?
The reason for this post is to help you focus more and conserve energy. As a hustler you need everything you can muster to sustain your workload and keep yourself focused. If you are working a full-time 9-to-5, you need your energy even more. This means you can't waste energy trying to convince and prove to those around you how good your idea is, how hard you're working, or just how things are coming along. You have to stop wasting time trying to rationalize to them why you need to work, or how much work is left to be done. They won't understand because they don't want to understand.

Those who do understand, keep them close, and use them as fuel. Keep them updated on your milestones and progress. Value those people, because they are rare and they care.

WHAT IF YOUR SPOUSE OR LOVED ONE DOESN'T GET THE VISION?
This is a doozy! This is easily the 2nd biggest problem for an entrepreneur. Money is first; a significant other's or family's support is 2nd. If your significant other gets your vision and believes in you and your goals wholeheartedly, then you already have a distinct advantage. You have your Michelle Obama and are ready to take over the world.

If not, my first suggestion is to try to bring them on board. Tell them and show them what you are working on, and explain your long-term vision and how it will benefit you, him or her, and your kids. Try again and again.

HOWEVER...
If they can't see the vision, my advice is to stop talking about it as much. Your wife or husband may see your entrepreneurial endeavors, aka "your hustle," as "time away from the family," "you getting to do what you want to do," or you "playing games."

If he or she understood how serious this was, and how, if it worked out, it would benefit the whole family, they would be on board. There would be no convincing. When the topic of support is brought up, you wouldn't hear things like, "Well, I don't say anything when you sit on the computer for hours." That's called tolerating. Tolerating and supporting are not the same thing.

Support is when you are working and they ask if you need anything. They ask if you are hungry; they ask if there is something they can do to help. Supporting is not whining for you to come to bed, or complaining that they don't get to fall asleep next to you. That is called making you choose between them and your dream.

Do what I've done: simply cut them out. Until you get to a point where they understand your vision and how big it is, don't talk about it. It will only drive a wedge between you two. You want, more than anyone in the world, for this person to believe in you and be on board, but it can't happen yet, so don't force it.

It will be hard at first, because this is all you are about. You are sleeping, eating, and sh@#tting your dreams, and you can't share them. But you will see that once you refrain from talking about it and just quietly execute your mission, they will grow interested. Then you only give them a little. Play that game all the way to the bank.

I'm speaking from experience. Ever since I stopped talking about my projects with my fiancée, we have found peace. Once she sees how much money everything is earning, she will then believe, and start to support instead of tolerate. Then I will be stronger than ever.
-MJL

Arnold's World is the Key to Success

A huge part of your success in becoming a future millionaire will be your ability to separate the Real World from Arnold's World.

HOW I CAME ACROSS ARNOLD'S WORLD
Arnold's World is a place that I have just recently entered. A reader of this blog, Deuce Carter, has a Tumblr site with some interesting motivational pictures. One of them is a screen saver of Arnold Schwarzenegger. Despite his recent issues, here is a man that has conquered everything he's ever tried: physically, mentally and politically. A man who barely spoke English ended up being the Governor of this country's largest state. In Arnold's World, physical and mental perfection along with financial freedom were mandatory. Once I removed my computer background and replaced it with Arnold, I conceptually entered Arnold's World.
In order to stay motivated, you must separate Arnold's World from the Real World.

LIVING IN ARNOLD'S WORLD
If you are doing what it takes to push yourself to the next level, you should feel special, but unfortunately that "specialness" won't translate to the real world until you have reached real-world success. Confused?

In Arnold's World, people are rated on a different scale. People are rated on their ability to push themselves to the limit in all aspects of their lives. In Arnold's World, when a person gets home from their 9 to 5, they are expected to further their education, take care of themselves physically, and strive for financial freedom. Those are the expectations. In fact, that should be the criteria for reading this blog.

Let's list those again, shall we: AWE (Arnold's World Expectations)

1. Further your education
By no means does this mean grad school, college, etc. This could mean learning about your business, reading forums, magazines, anything that will make you better at whatever you will end up doing for your financial freedom. Learning about your craft.

2. Take care of yourself physically
This doesn't mean hitting the gym every day, but it does mean doing some physical activity a few times a week, and eating healthy. A healthy body goes hand in hand with a healthy mind.

3. Strive for financial freedom
Basically, working to be a future millionaire. That's why we are here, right? Google the term "Future Millionaire". Guess who comes up in the number two spot in the search results??? If you are here, you are at the right place, because there is no place on the internet where the struggle is understood better than here. Period.

You can remember the three expectations by simply keeping in mind: work harder on yourself than you do on your job.

HOW DOES ARNOLD'S WORLD COMPARE TO THE REAL WORLD?
To be great in Arnold's World, you are doing items 1 through 3 to the best of your ability, until you reach success. Constant effort over time! Now compare yourself to your fat, out-of-shape co-worker. You and he are in the little kitchenette area at work, you know, that annoying little place with the microwave and toaster oven and the overpacked refrigerator with everyone's labeled lunch bags. You are there heating up vegetables and chicken, and your co-worker is heating up a leftover cheesesteak and fries.

When he went home last night, he watched 4 hours of TV, ordered 2 cheesesteaks from the delivery joint down the street, talked on the phone to his buddy about the weekend for an hour, then played Star Wars: The Old Republic for 3 hours and crashed for the night.

When you got home last night, you hit the gym, showered, cooked dinner for yourself and made lunch for today. Then you started working on your main project until about 10:00. Then you attended a webinar for an hour, and then Skyped with your programmers in India until 2:00 AM.

In Arnold's World, you are like the Mayor! You are revered because you are able to stay focused, eat right, hit the gym and survive on almost no sleep. Your co-worker is seen as a lazy piece of crap. He's the equivalent of someone who is unemployed, on welfare, and refuses to even look for a job, because welfare is covering his bills. He's everything that Arnold's World hates!

If you translate that into the real world, you and your co-worker are even. Huh? Doesn't sound fair, does it? Although you accomplished a lot last night, you are not earning money with your projects. In fact, you are losing money, because you are investing time and resources. Your co-worker is shoving money without thinking into his 401(k). He drives a better car than you, and hangs out after work with the rest of your co-workers, so in fact his status is higher than yours.

That is your challenge. Day in and day out, you have to accept that in society, until your bank account has grown to the KMAM (Kiss My Ass Mark), you are on par with, or below, the guy who does the same job as you at work. It's a hard pill to swallow, but it's also why you know that you need to be doing more. Let that thought fuel you to push yourself to succeed. If you quit, it will all be for nothing. You won't fail unless you quit.

DANGER ZONE
Being able to separate the worlds is key. You are no better a person because you are in Arnold's World. Not everyone wants the same things you do. It doesn't make you better than them, and they don't deserve less respect. It's a choice. Just because your mom doesn't go to the gym doesn't mean you respect her less. We live in the Real World, and everyone deserves respect until they un-earn it. (But in Arnold's World, YOU ARE BETTER THAN THEM.)

If you are reading this and don't have a "project": GET ONE!

Like the great Jim Rohn says:

"If you don't design your own life plan, chances are you'll fall into someone else's plan. And guess what they have planned for you? Not much."

Re-reading of Rich Dad Poor Dad

As a future millionaire, re-reading Rich Dad Poor Dad is something that you should do often, mainly because it's a hard concept to embrace. It tells you that everything you ever learned about making money is false, similar to what I do here on this blog.
When you get through the first two chapters, you start feeling one of three ways:

1. Robert Kiyosaki is full of sh*t.
2. Why didn't I read this sooner? I feel so dumb.
3. Glad to know I'm on the right track.

In no uncertain terms, Mr. Kiyosaki basically says that going to work every day makes you a jackass. He portrays it as having a carrot dangled in front of you while you pull the cart. You identify yourself with that donkey, pulling the corporate American cart. I shouldn't say that going to work makes you a jackass, but going to work and expecting to make it something it's not makes you a jackass.

That's not where you want to be.

In the middle of the book, you start to think, "I want to be the farmer with the carrot."

That's not totally true either. You are still working every day. You still have to get up in the morning, load the cart, put the carrot on the string, and dangle and walk for hours.

Yes, you have all the carrots you can eat, so you are better off than the mule, but you still had to get up at the butt crack of dawn to make that happen.

WHERE DO YOU REALLY WANT TO BE?
You want to be the guy who owns all the farms. You buy and work the first farm. You get it to the point where it is making $2,000 per month. You streamline the process and pay someone $1,500 per month to run it.

Move to the next farm. You get it earning $2,000 per month; you pay someone $1,500 to run it. Move to the third farm and the fourth farm. You are good at this, so you do it with less work in less time. Now you have four farms, with four farmers working them. Each one is happy to make $1,500 per month, and you are making the same $2,000 you were making before, but you get up at noon now.

You play golf and spend time with your family. You have four farmers with their mules handing out carrots.

The Owner, the Farmer, the Mule. Don't kid yourself: you may have an MBA, a big office and a good job, but as long as you have to go to work every morning, you can still be a jackass!
-MJL

Must Be Crazy to Dream Big

I realized the bigger your dreams are, the crazier you must be. Period.

Over the past week, week and a half, I have been reading and watching about Hitler and Steve Jobs. Not to compare the two, because as we all know, comparing anyone to Hitler will get you kicked off of ESPN; however, there are similarities. Ouch!! Wait... hear me out.

There was a show on the History Channel that dove into Hitler and the architecture that surrounded him in the 1940s. That man was crazy. You couldn't tell him what to do or what not to do. He wanted a dome over his capital building that was so big you could fly a mini helicopter in it. He would not take no for an answer. Steve Jobs, same thing. He wanted to do away with the keyboard, and I think if you gave him five more years, he would have done it. I have to imagine Thomas Edison and Leonardo da Vinci were the same; these guys were visionaries, all cut from the same cloth.

HOW DOES THIS APPLY TO US?
The world pushes you into what it thinks you should be, what it thinks you should do, how it thinks you should act. The more in line you are with that, the less crazy you have to be. If your dad went to college and became an engineer and you want to be an engineer, that's not crazy. That's normal.

If you want to be a lawyer and your dad was an engineer, that's ambitious but not crazy.

If you want to be a rock star and tour Europe, that's crazy. If you want to be a multi-billionaire, that's considered crazy. Get it?

CRAZY IS GOOD
If you are not the right level of crazy, then you won't make it. The bigger your dream, the crazier you must be. I want a net worth of 1.4 billion dollars. So how crazy am I? I'm downright nuts. I sleep, eat, and crap this dream. I don't have regular friends anymore, I don't watch Sunday football, I sleep 3 to 4 hours per night. I'm insane. I won't take NO for an answer. You WILL NOT tell me this dream is not going to happen. Period.

I'm INSANE! You need to be INSANE. If you want a promotion at work, you just have to be a little crazy, so there is no reason for you not to get it. If you want to quit your job and start your own company, YOU HAVE TO BE CRAZY. The amount of sacrifice and hard work cannot be summed up in a book. You will be working hard with no payoff for a long time. Saving money, going without sleep, suffering setbacks, all while you are pushing ahead. You must not take NO for an answer. Move smart, but move crazy. You will know when you can quit your day job. You can lay out a plan that is safe, but crazy to execute.
Then GET CRAZY and execute it.<br /><br />There will be more to come on this.<br />-MJL</span>An FTB Bloggers Blog Time in Corporate America for the Entrepreneur<a href=""><img id="BLOGGER_PHOTO_ID_5589597078811747890" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 260px; CURSOR: hand; HEIGHT: 208px" alt="" src="" border="0" /></a> <br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;">“Surviving” is a difficult task because it is a conflicts with our current goal and mind set.<span style="mso-spacerun: yes"> </span>We spend most of our time plotting our escape, that we don’t actually focus on “how to survive” (until we can escape).<span style="mso-spacerun: yes"> </span>This is an essential ingredient because by surviving well, we expend less energy, that we can use to then better plot our escape.<span style="mso-spacerun: yes"> </span><?xml:namespace prefix = o /><o:p></o:p></span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;"><strong>Got it?<span style="mso-spacerun: yes"> </span><o:p></o:p></strong></span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;">The thought actually came to me while watching a prison show on “Hulu” that was on The National Geographic Channel.<span style="mso-spacerun: yes"> </span>They were profiling an inmate who was a shear mastermind.<span style="mso-spacerun: yes"> </span>He was your standard white guy, baby faced, imprisoned with Blacks and Hispanics (race will play a part in just a second, that’s why I’m mentioning it).<span style="mso-spacerun: yes"> </span>He looked like he couldn’t hurt a fly.<span style="mso-spacerun: yes"> </span>They showed him in the dining area eating, showed him in the yard playing softball, depicting him as a pillar of the prison 
community.<span style="mso-spacerun: yes"> </span>In every scene he actually appeared genuinely happy.;">Then they hold up pictures of some guy that was bludgeoned almost to death.<span style="mso-spacerun: yes"> </span>Then they show another, and the voiceover states that the Baby-faced white guy was responsible for every one of those beatings.<span style="mso-spacerun: yes"> </span>WHAT!?!?!!<o:p></o:p></span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><o:p><span style="font-family:Calibri;"></span></o:p><span style="font-family:Calibri;">Then they pan to a shot of this guy in his cell.<span style="mso-spacerun: yes"> </span>On top of his bunk he has a variety of snacks.<span style="mso-spacerun: yes"> </span>Honey Buns, BBQ chips, Reeses, Snickers and any other snack you could think of. <span style="mso-spacerun: yes"></span>He proceeds to explain how he gets his cell floors swept and waxed for the price of a few honey buns.<span style="mso-spacerun: yes"> </span>He gets his prison uniform cleaned and pressed, for a few bags of BBQ chips and a Snickers bar.<span style="mso-spacerun: yes"> </span>That night he was hosting a dinner party in his cell where they were having Mexican food.<span style="mso-spacerun: yes"> </span>They found a way to cook tortillas right in the cell.<span style="mso-spacerun: yes"> </span>This guy was hosting dinner parties in prison!!!</span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;">How did he do this?<span style="mso-spacerun: yes"> </span>All by the power of persuasion, keeping his emotions in control and bending a seemingly rigid prison system, to cater to his needs.;">That’s how you survive in Corporate America.<span style="mso-spacerun: yes"> </span>As an Entrepreneur your goals have changed.<span style="mso-spacerun: yes"> </span>You don’t want to be a CEO.<span style="mso-spacerun: yes"> </span>You don’t necessarily want to be moved up.<span 
style="mso-spacerun: yes"> </span>You just want to earn your check – do an adequate job, and get home to work on your other endeavors.<span style="mso-spacerun: yes"> </span>Instead of letting the place frustrate you with its policies, procedures and politics, use them to your advantage and mold them to help you in your journey out of there.<span style="mso-spacerun: yes"> </span></span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;"></span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;"><strong>WANT EXAMPLES? “Sure”</strong></span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><strong><span style="font-family:Calibri;"></span></strong></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;">Although the place I work is frustrating and the people are frustrating there are a few really bright, really experienced people who work there. <span style="mso-spacerun: yes"></span>Gain the trust of these select few and talk with them frequently.<span style="mso-spacerun: yes"> </span>Run ideas and business strategies by them.<span style="mso-spacerun: yes"> </span>It’s like free counsel. Trust me, if they are really intelligent, they will welcome the discussion and began to vicariously live through you, thus really wanting to see you achieve your goals.;"><strong>Printers, Xerox Machines, Fax machines<o:p></o:p></strong></span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;">They are all accessible to you to a certain extent. 
Use them after or before hours to give things you are working on a certain professional flare.<span style="mso-spacerun: yes"> </span></span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;"><strong>Company laptop<o:p></o:p></strong></span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;">It’s a laptop.<span style="mso-spacerun: yes"> </span>Great!!! Now you can postpone buying one and use this one for generating your business plans and excel spread sheets for your ideas.<span style="mso-spacerun: yes"> </span>Use it for a change of scenery when you are home working, and take it to the local coffee shop so you can keep working.<span style="mso-spacerun: yes"> </span>Keep grinding, keep getting it in!<o:p></o:p></span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;"><strong>HAVE FUN<o:p></o:p></strong></span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;">You are launching a new business; you have a great idea that’s going to take off in just a few months, or years (whatever your particular sentence is).<span style="mso-spacerun: yes"> </span>Knowing that should put you automatically in a good mood.<span style="mso-spacerun: yes"> </span>It’s as if, someone told you that in January, you would be inheriting 800,000.00.<span style="mso-spacerun: yes"> </span>Not enough money to quit, but it’s a nice chunk of change!<span style="mso-spacerun: yes"> </span>That would make coming to work a lot easier, you would be a lot happier, and everyone around you would notice. You wouldn’t stress out over reports, presentations, or reviews.<span style="mso-spacerun: yes"> </span>You would just make it happen to the best of your ability and let the chips fall where they may. 
</span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;">Hopefully this makes sense, and it can help you survive the day to day at your 9 to 5. It’s frustrating to work on a spreadsheet when you are secretly planning to launch the next Facebook, but until you do, it’s necessary.</span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;">Make the best out of your prison and save that energy for “The Launch” of the next best thing!</span></p><br /><p class="MsoNoSpacing" style="MARGIN: 0in 0in 0pt"><span style="font-family:Calibri;">-MJL</span></p>An FTB Bloggers Blog The Apprentice - A Lesson In Boardroom War by Richard Hatch<a href=""><img id="BLOGGER_PHOTO_ID_5582219024538493730" style="FLOAT: left; MARGIN: 0px 10px 10px 0px; WIDTH: 273px; CURSOR: hand; HEIGHT: 128px" alt="" src="" border="0" /></a> As some of you know, I tried out for The Apprentice twice (both times getting callbacks for the remaining 500 slots). I remain of the opinion that had I gotten picked for the show, Donald Trump would have held me back. So in retrospect, I’m quite happy for the experience of trying out and kind of glad that I didn’t get picked. <br /><br />I wasn’t a fan of Celebrity Apprentice because the desperation to succeed hadn’t always been there. This season, however, they have picked just the right lineup of characters who want to be seen, need to be seen, and would also sincerely like to earn money for their charities. <br /><br />As great leaders, it’s important to watch these competitive reality shows and see how some of the greatest minds of our time work. 
They become a lesson in people management, persuasion, body language, and numerous other attributes that, when mastered, can help on the road to success. <br /><br />In comes Richard Hatch. This guy is a manipulator extraordinaire. He makes Russell from Survivor appear humble. In his mind, he has already won. In some ways dangerous, but in some ways necessary. <br /><br />Richard had no qualms about stepping up to be the leader of the first challenge. He is surrounded by some of the greatest names in music, acting, and sports, and he was not impressed. In fact, it was as if he was resentful of them and their fame. He shouted orders out like a drill sergeant. The interesting part is that I think some of them not only didn’t mind it, but felt they needed it.<br /><br />There is no doubt that his arrogance got the best of him when he pushed David Cassidy during the pizza challenge. The other thing that was obvious was that Donald Trump wanted him to stay in the game. When Richard had to pick two people to come back into the boardroom, he chose Canseco and David Cassidy. When questioned for the third or fourth time about physically contacting David, his tune changed slightly; he said, “If I did push David, then I’m sorry and I apologized”. That’s a lot different than “I did not push David”. No one called him out on that. Or at least it appeared that no one called him out on it (not sure what was left on the cutting room floor). That’s no different than David and Jose Canseco both agreeing that he didn’t take any breaks, and then stating a moment later that he took 2 breaks… and he was called out.<br /><br />Richard also picked up on the fighting part of the argument. Young Don says that David did not possess the energy to fight to stay in the game. As soon as “energy” was mentioned, Richard’s body language changed sharply to portray a new sense of energy – to convince everyone that young Don was right. 
At that point you can even hear him stating how badly he wanted to be there.<br /><br />As a Project Manager who essentially got his ass handed to him, he did a remarkable job staying in the game. I bet he also learned a lot about the people he’s playing with from that one instance.<br /><br />Kudos to Mark Burnett and his casting team. This cast of characters should make for a great season of Celebrity Apprentice.<br /><br />MJL<br /><div><span class="fullpost"></span></div>An FTB Bloggers Blog as Interpreted by The Greats<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href=""><img style="float: left; margin: 0pt 10px 10px 0pt; cursor: pointer; width: 300px; height: 207px;" src="" alt="" id="BLOGGER_PHOTO_ID_5575943522279365666" border="0" /></a>Every issue of Esquire comes with a section where they ask famous people their meanings of life. It's one of the most thought-provoking yet profound reads and is the key to many of my thoughts and motives on obtaining ultimate success.<br /><br />Here are my favorite three from an interview with David Brown.<br /><br />DAVID BROWN<br />Producer, interviewed in June 2001 by Cal Fussman.<br /><ol><li>WORK YOURSELF to death. It's the only way to live. </li><li>GOOD HEALTH is beautifully boring. </li><li>NEVER BE the first to arrive at a party or the last to go home, and never, ever be both.</li></ol>Those are some good thoughts to start the week right.<br /><br />-MJLAn FTB Bloggers Blog Do what you love, I get it now!OK, it’s been since October since my last post. But if anyone knows anything, they know I have been around the internet. This Blogger blog was my baby, and it still is, therefore I have to return to where it all started. Right?<br /><br />So what have I been up to? I have been carving a piece of the net out for myself. I have started an Internet marketing company and I’m making it happen. I have about 40 websites and still counting. 
Some are what I call premium brands and some are just boosters to the premium brands.<br /><br />I'VE SEEN THE LIGHT!!<br />Recently the light went on. I realize now what people like Oprah, Dr. Phil, Jim Rohn, and all these super successful people mean when they say, “You have to do something you love first”. I used to think, “That’s easy for you to say – you’re making millions doing what you love”. Then I realized, if not for money, then for your sanity – you should do what you love. But maybe take a two-pronged approach. First do what you have to do to get a check, then do something you love and try to monetize it. This is actually critical to living the quality of life that you deserve. Critical? Sounds severe, doesn't it? Well, it is!<br /><br />Let’s take the 97% of you out there who don’t like their jobs. Let’s split up your day and see how much of the 24 hours you really, truly get to enjoy.<br /><ul><li>Sleep: 7 hrs.</li><li>Working a job you hate: 9 hrs.</li><li>Commuting and preparing for work: 1.5 hrs.</li><li>Gym (exercise): 20 minutes at a minimum</li><li>Eating: 1 hr.</li><li>Chores for living (dishes, bill pay, ironing, etc.): 40 min</li></ul>That’s 19.5 hrs out of 24 that you are not enjoying yourself. What do you do with the other 4.5?<br /><br />How scary is that? Over 81% of your day is spent doing things you don't want to do. Sounds like a prison sentence.<br /><br />You have 4.5 hrs to enjoy yourself per weekday. Now, if you enjoy your job, that goes up to 13.5 – 3 times more. The numbers don’t lie.<br /><br />If you throw children into the equation, all the numbers change drastically. The 4.5 can go down to 1 depending on the age of your children. One after-school sport, help with homework, and a bedtime story can turn that 4.5 hours into 45 minutes. Then the guy working a job he loves is enjoying life 18 times more than you are.<br /><br />That’s what I call a better quality of life. 
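For the skeptical, the time budget above checks out. Here is a quick back-of-the-envelope sketch of the same arithmetic (the numbers come from the list; the variable names are my own):

```python
# Back-of-the-envelope check of the daily time budget from the post.
sleep = 7.0            # hrs
work = 9.0             # a job you hate
commute_prep = 1.5     # commuting and preparing for work
gym = 20 / 60          # 20 minutes, at a minimum
eating = 1.0
chores = 40 / 60       # dishes, bill pay, ironing, etc.

committed = sleep + work + commute_prep + gym + eating + chores
free = 24 - committed

print(round(committed, 1))           # hours spoken for each weekday
print(round(free, 1))                # hours left to actually enjoy
print(round(committed / 24 * 100))   # percent of the day you don't control
```

Run it and you get the same 19.5 committed hours, 4.5 free hours, and roughly 81% of the day spent on things you don't want to do.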
Think about it and take action.<br />If that doesn't motivate you to take control of your life then nothing will.<br /><br />I’m not back – I never left.<br /><br />-MJLAn FTB Bloggers Blog Let Tiger's Third Swing Change Be Inspiration for Your Self ImprovementAs a sports fan (not a fanatic), I tend to equate many of life's trials and tribulations to those that occur in sports on the playing field.<br /><br />Golf is no exception. Observing Tiger struggle through his 3rd swing change, I thought to myself, "That is true commitment to self improvement".<br /><br />Let's look at this a little further. In 1998 Woods only won one PGA Tour event. This was the first "Tiger Slump" we heard of. He was with his old coach Butch Harmon at that time, and was going through the first of what would be 3 swing changes. After the first change...well you know, the rest is history. He had a record year in 2000, continued his dominance over the sport, and was crowned athlete of the decade.<br /><br />Compare that to you, in your career. In preparation for the next level of your career, you attend a seminar or training class that really impacts you and helps you reach that next level. In this new management position, with your new swing, you hit a rhythm and start to perform well. This new rhythm is clearly at a level above your previous performance, so you begin to feel comfortable with the success and praise that you are receiving. This feeling of comfort is "The Tipping Point". 85% of us love this comfort zone and will stay here forever. 10% will fall out of the comfort zone; for some reason or other they cannot sustain this new level of performance. The last 5% will dominate this zone, get bored, and look for new challenges.<br /><br />WHO BENEFITS THE MOST FROM CORPORATE TRAINING?<br /><br />We have all seen people go through various corporate funded training classes and come out totally unaffected. 
Some of the meanest, nastiest SOBs I know have been to "Play to Win" training over 5 times.<br /><br />Human Resources and management at your job do not know what percentage you fall into. They might have an idea, but it's hard to pick the upper 5% out of the pack of Eighty-five Percenters. They don't know if you will stay, fall, or continue to rise, so they send your entire department to a new training class. After this training you are faced with a decision: "Do you want to reach the next level and make a change, or stay exactly where you are?"<br /><br />Most of us are comfortable and will choose to stay put. For example, many of us play golf. We shoot in the 80's or 90's. In order to go lower, we would have to change something drastic. Little tweaks and tips from your buddies are not going to work anymore. In order to shoot into the 70's we would need to possibly change our grip, or perhaps change our stance – things we have done over and over for years! These things are incredibly hard to change and take dedication and practice, with the risk that you might not reach your goal and possibly screw up what you already knew.<br /><br />Do you see the parallels? All these things are necessary if we want to get to that next level. Tiger is now in the process of doing this for the 3rd time! His dedication to self improvement is driving him to make a third swing change in the middle of the PGA Tour, the FedEx Cup, and the Ryder Cup. He knows that if he pulls this off, he will be unstoppable. Not to mention, he couldn't live with himself if he didn't try.<br /><br />Now back to you and your training class. You've learned something in this new class that could make you a superstar. In order to implement these changes it will take a huge effort on your part. To modify your behavior takes dedication and practice, drive and focus. Every day. This is what's necessary for you to reach your next level. 
Sure, it's easier to stay where you are, but you have to ask yourself, "Am I a pussy (cat) or a Tiger?"<br /><br />Self improvement is a difficult, never-ending journey, and few people are willing to make that trip – or they start it, but decide not to continue after a certain point. That's why the road to the top is a pyramid. There are very few of us at the top. Only some of us are brave enough to correct our swings, and a few crazy enough to do it 3 times.<br /><br />*****************************<br />They wouldn't let me post this in Ezine articles because they didn't like the way I was promoting my website in the signature. Well guess what?? This site is mine, and I can promote it any way I want:<br /><a href="">Baby Bjorn Travel Crib </a><br /><a href="">Baby Bjorn Travel Crib</a><br /><a href="">Kindercare</a><br /><a href="">Partition Magic<br /></a><br /><br />So There!<br />-MJL<br /><br /><br /><br /><span class="fullpost"></span>An FTB Bloggers Blog A TYPE A PERSONALITY IN THE WORKPLACE<h3>By: Eric Smitts<br /><br /></h3><p>One of the keys to success when working in Corporate America is learning how to work with and deal with other types of personalities. If you are a Type A personality, then it’s extremely hard to balance sensitivity with your own personal objectives. The most famous Type A personality that we know can be seen in the actor Jeremy Piven as his character Ari Gold. If you watch Entourage Season 7 online, you can see all of the issues that having a Type A personality creates in the workplace. Most of his issues stem from personal relationships ruined by running over people in pursuit of professional goals.<br /></p><p>BE AWARE OF OTHER TYPES<br />In a balanced work environment you will find 8 basic types: Reformers, Directors, Motivators, Inspirers, Helpers, Supporters, Coordinators, and Observers. Reading these types, I am sure you can see where you fit in. 
Depending on your role in your organization, some types are perfect matches, and these people tend to perform better and like their jobs more because they are in roles that they were born to do. </p><p>If you are in R&D, then you are more of an “Observer”. If you are in Human Resources, then you are more of a Supporter or Helper. Most aggressive leaders tend to be Directors. Once you are aware of your “type”, you will have to adjust your behaviors when dealing with others, based on their types. </p><p>EXAMPLES:<br />As a Type A personality you have little patience and very little tolerance for excuses. You like things brief and you like to be in control. Now, to get the best performance out of your co-workers, employees, or even your supervisor, being aware of and controlling your style is extremely important. Type A’s love being in control, but when dealing with your boss, you should be aware of this, and say to yourself, “I’m giving my boss control, so I can have it later”. This will help you accept direction. </p><p>Helpers are more concerned with helping other people, and with the softer things that you couldn’t care less about. Showing outwardly that you don’t care is very discouraging to the Helper; it will turn this person off, and you might not get their full cooperation. Being aware of your impatience and lack of sensitivity will allow you to “grin and bear” the conversation with a smile, so you can in turn solidify a relationship that you may need to leverage over and over. </p><p>It’s playing a part to get what you want. Hopefully this will only need to happen in your professional life, but there are many instances where you will have to play the part in your personal life as well. </p><p>As a Type A, you have to remember that accomplishing your goal is the main objective, and doing whatever it takes – even if it means not being yourself – is sometimes necessary. 
Start practicing with the small everyday situations, so that when you really need to leverage this skill set, it’s already dialed in. </p><p>To take a quiz to find your type, Google "Myers Briggs Test". </p><b>Author Resource:-></b> The author has been earning a full time income online for 3 years, read his latest Review on the <a href="">Britax Marathon Car Seat</a> and visit his newest enterprise for <a href="">Free iphone Apps.</a><br /><br />For more on Jeremy Piven's character Ari Gold <a href="">Watch Entourage Season 7 Online. </a><br /><br /><br /><b>Article From</b> <a href="">Articlear.com</a>An FTB Bloggers Blog YOU’RE NOT GREAT, UNTIL YOU ARE GREAT - Introducing NSG Tiger Woods<a href=""><img id="BLOGGER_PHOTO_ID_5503445534745971730" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 225px; CURSOR: hand; HEIGHT: 153px" alt="" src="" border="0" /></a>So I watched Tiger play the British Open & The Bridgestone Invitational, and finally it was a different Tiger. He didn’t win, or even get a top 10, but it was a Tiger who had finally accepted that he was no longer GREAT, and with that acceptance he also accepted the challenge to rise back to Greatness.<br />Let me explain……Tiger knows that he is the Best, possibly the best that has ever played the game! He is used to showing up and having people so afraid that they stop playing their game and play his. He was crushing the competition, but a lot of that was because things went his way. Officials called things his way, the people in the galleries rooted for him and remained quiet when he was teeing off (not always, but mostly), and tee times and practice times seemed to be all in his favor. He hosted half of the tournaments and every Sunday was his day. His only job was concentrating on putting that ball in the hole; this is a mental luxury that only a few golfers ever have.<br />This is very similar to a company CEO. When they walk in, the tide moves with them. 
They don’t have to ask people 8 times to do something, or use some kind of angle to get people to do what they need them to do. They just do it. There is no pressure to get the job done a certain way; they just need to make sure they get the job done. It doesn’t matter how they format charts, or how they display information, if they have a late lunch, if they pissed somebody off, if they are talking on the phone too much. It just doesn’t matter, and without all those distractions, the CEO can focus on only one thing: running the company. This makes his job, in some cases, much easier than being in middle management.<br /><br /><strong>HOW THIS APPLIES TO US</strong> <a href=""><img id="BLOGGER_PHOTO_ID_5503445426116909666" style="FLOAT: left; MARGIN: 0px 10px 10px 0px; WIDTH: 224px; CURSOR: hand; HEIGHT: 162px" alt="" src="" border="0" /></a><br />We have to look at ourselves like CEOs that aren’t CEOs yet, or better yet, CEOs that have been laid off and must start from the management level. We know we are CEO material because we already did the job, but now we have to do it all over again. This is similar to where Tiger is now. The important part is knowing that we can and will get to where we belong.<br /><br />He’s the “Not So Great” Tiger.<br />The NSG Tiger went through a learning phase his first few tournaments back, and has learned that everybody isn’t scared of him anymore. Now he has to deal with all the bullcrap that everyone else has to, and STILL win. When he played in his first three tournaments this year, he thought he could just walk in and be back on top. During the British Open he realized that he has to go back to 1999 and grind his way to the top again.<br /><br />YOU ARE NOT GREAT, IF YOU ARE NOT GREAT!<br />His demeanor during the British Open demonstrated that he understood the challenge and was ready to face it head on. The acknowledgement was there that he knows now that he isn’t Great, unless he’s great. No one is going to give him anything. 
He must prove himself all over again. All his past accomplishments are just fairy tales for YouTube. The respect, money, and power will only come again with accomplishment.<br /><br />So that’s where I am. That's where most of us are. Entrepreneurs stuck at our 9 to 5's. It’s not going to be easy. I’m going to have to show everybody and grind my way to the top. You are not Great until you are great. Until you are a CEO, you are not a CEO. It’s just that simple. The key is to “know” that it’s a process, be patient, and “know” in your heart that it’s just a matter of time.<br /><br />Tiger might have it easier because he's been there before, so there is very little doubt (at least in his mind and mine) that he will get there again, but there is also very little doubt that I will be there. Trump went bankrupt and rose right back to the top, because once you are there on top and then fall – yes, you have to go through that initial realization process that you have to start again, but it happens quicker because you know exactly how to get there. Becoming a millionaire is great, not because of the money you make but because of what it makes of you to get to that point – the person you become and the lessons you learn getting there. You have learned the skills required to be successful. You can lose the money, but the skills are much more valuable and much harder to lose. The tricky part is knowing when you must use them. Using them is not easy, and you might have thought you would never have to use some of those skills ever again. Don't worry, part of being successful is knowing when to dust off those skills and "Do Work". <br /><br />The other alternative is to be like most people and say how great you are, and never really push yourself to do a damn thing. 
Talk is cheap; that’s why there is plenty of it.<br /><br />Very few can afford to pay the price to be great!<br />Me, I can’t afford not to!<br /><br />-MJL<br /><span style="font-size:x-small;"><em>An FTB Bloggers Blog</em></span>An FTB Bloggers Blog I Can't Fail - Not: I Will Try to Succeed - Big Difference<p align="left">On my way to the book store this weekend I saw a huge garage sale. Hardcover books for 50 cents. Why not? I went and took a look, and found a book called Power Thinking. It basically gives insight into how the mind and our thought processes set us up to fail before we even start playing the game. </p><p align="left"><a href=""><img style="CLEAR: all; FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a></p><p align="left">One of the stories that really stuck with me was that of George Dantzig. From some internet snooping, I discovered this cat was a pioneer mathematician that did his thing. There are reports that some of Good Will Hunting was based on his life. George passed away last year; he was known as the inventor of the simplex algorithm and is considered the father of linear programming.</p><p align="left">All that's nice, but the best story comes from his college days. Georgie boy was a student at Berkeley, right at the peak of the Great Depression. Soup lines and hard times for everyone. He was poised to graduate with a mathematics degree. George knew that it would be impossible for a mathematician to find gainful employment. He learned that the person who scored the highest in his mathematics class would get a job as an assistant teacher. Now this was motivation for "Dat Ass". All he had to do was kill this course and he would have an instant job. </p><p align="left">Now Georgie boy knew he wasn't the brightest kid in the class, but he had what we all have: the "Grind" factor. He was going to grind out the best grade possible. 
He grinded every night until he felt comfortable for the exam. In fact, he put in so much work that he lost track of time and ran late for the test. He ran to his desk and looked at the test. It had eight problems. "That's What's Up," he thought to himself (or "Golly Gee Willikers, 8 questions" – I forgot he was a nerd). He knocked all eight questions out the box. Then he noticed that there were two additional questions on the blackboard. </p><p><br /><a href=""><img style="CLEAR: all; FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a> He wrote them down and started working them. After trying to solve the first one he was at an impasse, so he started working the second equation. No luck, and time was running out. When the bell rang, he went to the professor and asked if he could have a little more time. "Sure G, you have until four o'clock on Friday, but your paper must be in by then." </p><p>George knew that there were crazy smart people in his class and that he had to solve these two problems in order to stand a chance at the teaching gig. He took the paper home and started grinding. He knew that he had no choice if he didn't want to be in the unemployment line. Tuesday & Wednesday he had no luck. Thursday afternoon, BOOOOYAH, he solved the first one. This gave him crazy confidence, and by Friday morning he nailed the second. He turned his paper in on time and went home wondering if he was going to have a job. </p><p>Early Sunday morning, George heard a knock on the door. "What the fu...", "Who Dat?" He opened the door and it was his Professor. "What up Doc?" The Professor said, "George! George! You made mathematics history". George said, "What you talkin' bout Willis". The Professor said, "George, I was thinking as I came over here. You came to class late for the test, didn't you?" 
He continued, "Eight problems were on the test paper that you picked up off my desk. You solved them all correctly. The two problems I had written on the board were not a part of the test! I told everyone that if they had a love for mathematics to keep playing with these two famous unsolved problems for a lifetime of fun. Then I put those two problems on the blackboard. Even Einstein, to his death, played with those problems and couldn't solve them. You solved them, you solved them both. Not only have you made history, you also have the job". George gave a fist pump and started thinking about his future dough stacks. </p><p>That story shows you how we mentally beat ourselves. If George had heard the statements made at the beginning of the exam, not only would he not have solved the problems, he probably wouldn't even have attempted them. That's huge. I'm sure you can apply this to something in your past or present. Believe in yourself. You can achieve much more than you think. It's OK to fail, but try again. It's OK to be disappointed, but don't be discouraged by it. Use it as motivation. If you have never been disappointed, you were probably aiming too low or moving too slow. </p><p>-MJL</p><p><em><span style="font-size:85%;">An FTB Bloggers Blog</span></em></p><p><a href=""><img style="CLEAR: all; FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a></p><p>EINSTEIN WAS A PIMP</p><p><strong>ARE THOSE 22 INCH SPINNERS? </strong></p><p><strong>IS THAT THE NEW ROCAWEAR BUTTON UP? 
</strong></p><p><strong>ALBERT, HOLLA AT YOUR BOY!</strong></p><span class="fullpost"></span>An FTB Bloggers Blog Feed the Need to Succeed<div class="separator" style="CLEAR: both; TEXT-ALIGN: center"></div><div class="separator" style="CLEAR: both; TEXT-ALIGN: center"><a style="CLEAR: left; FLOAT: left; MARGIN-BOTTOM: 1em; MARGIN-RIGHT: 1em; cssfloat: left" href="" imageanchor="1"><img src="" border="0" qu="true" /></a></div><div style="BORDER-RIGHT: medium none; BORDER-TOP: medium none; BORDER-LEFT: medium none; BORDER-BOTTOM: medium none"><span class="fullpost">Some days I go from wanting to give up to wanting to work harder and push myself further. I’m not sure what accounts for the severe swings, but they do happen. The important thing is that my desire to succeed easily outweighs the desire to give up. It’s important to take account of how you are feeling and monitor it closely. What just happened to make you feel a certain way? If it resulted in a bad feeling, how do you counter it so those negative feelings last as short a time as possible? I want to make sure that I starve the negative and feed the Need to Succeed.<br /><br />In my world things need to be extreme; if you are not extreme, then you are just like everybody else, and your results will be just like everybody else’s. If you are extreme, you will achieve extreme results.<br /><br /><strong>STARVING THE QUITTER</strong></span></div><div style="BORDER-RIGHT: medium none; BORDER-TOP: medium none; BORDER-LEFT: medium none; BORDER-BOTTOM: medium none"><span class="fullpost">1. I tell myself, “You haven’t lost or failed until you Quit”. Did you know that ants can’t quit?<br />Have you ever read the famous story of Tamerlane, the Asian conqueror who once was going to quit? He was hiding out, contemplating surrendering, when he saw an ant. The ant was trying to get a piece of corn up over a wall. The kernel of corn was much larger than the ant, and try as he might, the ant could not get the kernel over the wall. 
So Tamerlane began to count the futile efforts of this ant, to see how many times he would try until he gave up. He counted 10, 20, 40, 60, 69, but on the 70th attempt the ant succeeded. Sixty-nine times the ant tried to carry it up over the wall, and sixty-nine times he failed. But on the seventieth try he succeeded and pushed the grain of corn over the top. If an ant has 70 tries in him, then I at least have 100 in me.<br />2. If I quit, I will always have regrets – I hate regrets!<br />3. If I quit, then I am no longer worthy of the 272 posts that I have listed in this blog.</span><span class="fullpost"><br /><strong>FEED THE NEED TO SUCCEED</strong><br />1. I actually look at people who are the opposite of what I want to be in life. I look at fat, lazy, unmotivated people and critique them to myself. This might sound harsh, it might sound extreme, but you are in this by yourself, fighting for your goals every day. You have one life to live, so you must give it your all. If you feed your mind and body right, they will take you where you need to go. Looking at people that you DON’T want to be like is a reality check. It helps you remember where you came from spiritually, mentally, and monetarily on a daily basis, and to be thankful for where you are now. It helps you stay hungry.<br /><br />2. Healthy living, eating well, working out.<br />We see sexy people on TV and subconsciously relate the perfect body with success. If you are working out, getting yourself in shape, and eating right, you will subconsciously relate yourself with successful people. This might not be healthy mentally, but...</span></div><a name='more'></a><div style="BORDER-RIGHT: medium none; BORDER-TOP: medium none; BORDER-LEFT: medium none; BORDER-BOTTOM: medium none"><a style="CLEAR: right; FLOAT: right; MARGIN-BOTTOM: 1em; MARGIN-LEFT: 1em; cssfloat: right" href="" imageanchor="1"><img height="200" src="" width="123" border="0" qu="true" /></a> I’m already crazy. 
If my goal is to have a body like Tyson Beckford and I fall short, I’m still doing well. If my goal is to have a body like Al Roker and I fall short, I’m a health risk. So why not shoot for the stars and feel like a winner? </div><div style="BORDER-RIGHT: medium none; BORDER-TOP: medium none; BORDER-LEFT: medium none; BORDER-BOTTOM: medium none"><br /></div><div style="BORDER-RIGHT: medium none; BORDER-TOP: medium none; BORDER-LEFT: medium none; BORDER-BOTTOM: medium none">3. Act as if.</div>This one is huge. Act as if you already reached your goal. How would you treat people? What would your attitude be like every day? Have you ever seen a millionaire, a CEO, an actor who is the nicest person in the world? Immediately your response is, “I would be nice too if I had that kind of money”. Well, then that’s what you need to be NOW! Would you be singing and dancing on your way home if you were rich, if you were a millionaire? Then sing and dance now. Would you greet everyone warmly, bring in bagels for the crew in the morning? Then do that stuff now! Don’t wait, because you are already there; time just has to catch up with you. Treat your car like it’s the Ferrari that you have always wanted. Treat your apartment like it’s the 8000 sq ft mansion you’ve always known you would live in. Clean it, care for it, as if. The same goes for everything. If you were rich beyond your wildest dreams, you would be eating healthy because you would want this life to last forever. Act as if.<br /><br />Watch and read about successful people. What makes them tick? The common thread between all of them isn’t intelligence, it isn’t socio-economic status or race… it’s the fact that they didn’t quit!<br /><br />Just posting this, I’m already back to the succeed side of my pendulum swing. As a matter of fact, I have 4 other posts just like this to keep me focused, as well as 15 or 20 bookmarks of other blogs. 
You can’t talk as much s@#t as I do and be a quitter.<br /><br />In fact I’ve already succeeded – I just have to wait for my bank account to catch up with my attitude.<br /><span class="fullpost"><br />My swag is back on full – Get some!</span><br /><span class="fullpost"><br />-MJL<br /><em><span style="font-size:85%;">An FTB Bloggers Blog</span></em></span>An FTB Bloggers Blog, Don’t suffer from B.H.S. - Battered Housewife Syndrome<div class="separator" style="border: medium none; clear: both; text-align: center;"><a href="" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="195" qu="true" src="" width="200" /></a></div><div style="border: medium none;"><u>Battered Housewife Syndrome</u><br /><br />I used to work with this older guy who had seen it all and done it all…..twice, in fact. He used to say, "At the end of the day, all this is, is a bunch of guys playing factory." </div><div style="border: medium none;"><br /></div><div style="border: medium none;">That is a true statement. Part of what holds people in their current surroundings is the fact that they get caught up in all of the hype. They build their 9 to 5 jobs into something that absorbs all of their time and energy. The job becomes this enigma that is used to impress your family and friends. At my job people walk around like they are ER doctors! Their families think they are out saving the world. We make suntan lotion, for crying out loud. People are wheeling in bags filled with binders and laptops; we have teleconferences, nightly conference calls and enough meetings to design the next iPad. </div><div style="border: medium none;"><br /></div><div style="border: medium none;">Part of me understands the hype. 
I mean really, who wants to come home at the end of the day and tell their wives that they spent the day filling out paperwork, doing expense reports, typing emails and copying and pasting regurgitated crap into PowerPoint presentations? Ha ha…..I just summed up 85% of my job right there.</div><br /><div style="border: medium none;">A real man wants to say, "Honey, I rolled out my new corporate restructuring program and the board loved it. I just got the corner office with the bathroom and a 10% increase.”</div><div style="border: medium none;"><br /></div><div style="border: medium none;">Our job as future millionaires is not to get caught up in our jobs. This doesn't mean don’t kill it. You still have to be the best <a href="">(don’t be a scape g.o.a.t),</a> but don't waste unnecessary time obsessing over it or expelling extra emotional energy that can be used to achieve your dreams. </div><div style="border: medium none;"><a href="" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" qu="true" src="" /></a>During a recent interview, Puff Daddy explained how he used to clean bathrooms during his unpaid internship. At the end of that statement, he said that he was "the best" at it. That's the mind frame that we have to keep. That's the perfect analogy. He knew he would move on to bigger and better things, but he still had to be the best at what he did. I'm sure at the end of the day, he didn't call himself a Sanitation Engineer or a commercial disinfectionist. He wanted to diminish the effect that this job had on his life. He was about to start Bad Boy Records and the Sean John clothing line, produce a slew of number 1 albums and TV shows, and become one of the most famous people in entertainment. Do you think he let that job be any bigger in his mind than it needed to be? 
Yeah, the bathrooms stunk, it was a filthy, $hitty job (pun intended) and he hated it, but all the complaining and getting angry would be a waste. He was a kid cleaning bathrooms and gave the job the energy it deserved, not an ounce more. His energy was spent on bigger and better things. </div><br /><b>THE BATTERED WIFE</b><br /><div style="border: medium none;">Honestly, I find myself taking crap from the job I have every day. Although I make a decent living, it's far below my capabilities, so I gave up caring to the point where I don't let the job affect me one ounce more than it needs to. I gave my last job everything I had, and what did it get me…..a pink slip. A foot in the ass out the door. A one-way trip to the parking lot, without so much as a severance check. All the extra hours, the skipped vacations and the thinking about the well-being of everyone in that place, just to get laid off. </div><br /><div style="border: medium none;"><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="155" qu="true" src="" width="200" /></a>If I let that happen again, I would be just like a battered wife. (I realize I'm a man, but stay with me for the analogy). Just because I’m getting a check now, I’m supposed to be romanced back into thinking that I need this man named Job….thinking that I love this (man)job, thinking that I’m nothing without this (man)job. Then the moment that I think everything is...<br /><a name='more'></a> OK…..SLAP! What happens? I get (abused) fired again, and the cycle starts over. </div><br /><b>THE BATTERED WIFE’S INDEPENDENCE</b><br />But I’m much smarter than that. Slap me once and I call the cops and cut your balls off! You dig?<br /><div style="border: medium none;">I’m like the once-battered housewife who has come to her senses. I understand that I’m independent and the only way to completely free myself is to work outside of the job and slowly develop myself. 
So now my job sees me every day and thinks that I love him. I play the role, but I’m emotionally detached. He doesn’t have a clue. I use him for his money, his benefits and material comforts. Now when I get yelled at, berated or made to do chores that I don’t agree with, I’m laughing inside because I’m secretly plotting. I’ve actually stepped up my game so he is not suspicious of anything. He’s keeping me so busy, how could I ever be doing anything else or seeing anyone else? He’s so cocky now he doesn’t even think that I would ever consider leaving him. But while no one is looking – I’m studying, I’m developing, I’m learning, I’m training, reading and working hard. I’m up until 3:00 AM making it happen and I’m awake in the morning with everyone else. It’s the emotional detachment from the job and the emotional attachment to my independence that keeps the fire burning. All of the energy that I used to waste getting frustrated and complaining to my friends, family and co-workers about my job is now converted into action. The closer I get, the hotter the fire and desire burn, until one day – I have enough money, knowledge and security to break free. Without any notice – I’m gone! I’m not worried about the job; he will find some other poor bastard to take my place, and I feel sorry for that person, but I’m gone. I will send the next poor guy a link to this post.</div><div style="border: medium none;"><br /></div><div style="border: medium none;"><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" qu="true" src="" width="200" /></a>Project X is my freedom, FTB Inc. is my security, and I can taste it. 
</div><br />Let’s Go!!<br />-MJL<br /><i><span style="font-size: x-small;"><a href="">An FTB Bloggers Blog</a></span></i><br /><br /><br /><a href=""><span style="font-size: x-small;">Project X</span></a><span style="font-size: x-small;"> is a software program that I’m developing while trying to run and maintain all my other lines of business: </span><a href=""><span style="font-size: x-small;">FTB Bloggers</span></a><span style="font-size: x-small;">, Wash and Roll, </span><a href=""><span style="font-size: x-small;">Nuff Said Outfitters</span></a><span style="font-size: x-small;">; </span><a href=""><span style="font-size: x-small;">Money Saving Mikes Online Gifts</span></a><span style="font-size: x-small;">. Tune in weekly for updates on my progress and for words of inspiration!</span><br /><span class="fullpost"></span><span class="fullpost"> </span>An FTB Bloggers Blog Work - Part Deux<br />I don't know why I'm on this "hard work kick", but while I'm here I figured I might as well blog about it.<br /><br /><strong>THE TRAINING WHEELS ARE OFF</strong><br />I think people actually forget that it takes hard work to reach your goals...the more amazing your goals are, the harder you will have to work to achieve them. I say we forget because once upon a time we <span style="font-weight: bold;">knew </span>this. We lived it, and we are where we are because of it. You are where you are today either from the lack of hard work, or because of your hard work.<br /><br />The problem is, now, there is no structure to guide you through the “work toward your goals process”, which could be good or bad. Good because there are no boundaries, and bad because there is no coaching. 
The training wheels have been taken off and your dad has just removed his hand from the seat post; it's all up to you now.<br /><br />When you're out with your child or someone younger and you see someone who is not doing well in life, maybe a homeless person or a panhandler, you immediately say, "That person did not work hard in school" or “See <a href=""><img alt="" id="BLOGGER_PHOTO_ID_5475341126192184578" src="" style="float: left; height: 200px; margin: 0px 10px 10px 0px; width: 198px;" border="0" /></a>what happens when you don’t work hard?” <span style="font-weight: bold;">Exactly!!</span> …..but it was easy to work hard in school for a few reasons:<br />One: You were surrounded by people doing the same thing as you, which always makes things a lot easier (like going to the gym, it's always easier with a partner).<br /><br />Two: You fail many times in school and don't give up. You get C's, D's and sometimes F's, and you are pushed to do better, and of course with hard work, you do improve and do better. These should be life lessons, not just classroom lessons.<br /><br />Once you are in the real world with no boundaries, if you fail, no one is pushing you to try again. In school the mindset is....<br /><a name='more'></a> that you <strong><u>WILL</u></strong> graduate. In more affluent households the mindset is that <strong><u>YOU WILL GO TO COLLEGE</u></strong>.<br />In 1992, 65% of kids whose parents had college degrees enrolled in a 4-year college after high school, versus 21% of kids whose parents had a high school diploma or less. <a href="">(From a 2001 Condition of Education Report)</a><br /><br /><strong>YOU WILL SUCCEED - YOU HAVE NO CHOICE</strong><br />In affluent households, parents don't give their children a choice to think about failure. That's the mindset that you need to have. You are GOING to reach your goal. Yes, you will fail and have to work harder. 
Yes, you will have to sacrifice, but if you are focused you increase your chances of success every day. <br /><br />So just like those who studied hard on Fridays and sacrificed by not hanging out, smoking and drinking, just like those who got after-school jobs to buy that bike, you have to use your free time wisely. After spending family time, working your 9 to 5, and going to the gym, there is little time left for sleeping and pursuing your dreams. The time is there...it’s just hard to come by. If you make your dreams a priority, you will find the time.<br /><br />Remember there are no boundaries; that means there are no limits to your dreams, how fast you can reach them, and how hard you can fail and come back. You can score lower than an F and you can graduate in 6 months. There are no rules. You will need coaching, but you have found it: through this blog, I am a coach. Blogs, books, and anything motivational are like your parents pushing you to reach your goals. Then use your small successes as promotions from one grade to the next. Since I started this blog, I have started and continue to run 3 businesses, overcame unemployment in 16 weeks, and have increased my annual earnings by over $60K. That’s Hard Work!<br /><br /><strong>More Proof – Forever 21 CEO Do Won Chang</strong><br />Chang's entrepreneurial bent became apparent in Korea. Three years before moving to California, he says, he<a href=""><img alt="" id="BLOGGER_PHOTO_ID_5475341130011128306" src="" style="float: left; height: 163px; margin: 0px 10px 10px 0px; width: 200px;" border="0" /></a>...<br />Today that business is one of the U.S.' fastest-growing retailers, Forever 21, known for its trendy, affordable fashions that often replicate styles of more expensive brands. He is chief executive, and his wife is chief merchandising officer. 2009 estimates put sales at $1.7 billion and net income at $135 million for the fiscal year ending February 2009, up 37% and 25% respectively. 
The retailer has 460 stores in 13 countries, including one in Korea and one in China that both opened last year.<br /><br />That's what hard work is all about.<br />Let's Go!<br />-MJL<br /><em><span style="font-size:85%;">An FTB Bloggers Blog</span></em><br /><br /><a href=""><br /></a>An FTB Bloggers Blog Hard? No one cares until you are a Success!<a href=""><img id="BLOGGER_PHOTO_ID_5054814052378313330" style="FLOAT: left; MARGIN: 0px 10px 10px 0px; CURSOR: hand" alt="" src="" border="0" /></a> Sometimes I would like to think that I work hard. Sometimes I would like to think that I work harder than the normal guy, that guy who thinks he wants to be rich. At the end of every day, I go to bed when I can't even think straight. I barely brush my teeth, and then I pass out. I wake up the second,….. no, the millisecond that I think I have enough sleep, and again try to outperform the entire world. <br /> <br />As I talk to people, I start to hear things like, "I'm just sooo busy", or "I just had a 12-hour day". <br /> <br />What does that mean? <br /> <br />These people are actually sincere and do believe that they have worked hard. In their minds they have just killed it. They have done more than the average person, and have "left it all on the court", as they say in basketball. <br /> <br />Then I look in the mirror at myself. At the end of the day, you turn to yourself, because only you know what is real and what is not. I ask myself, "Self......are you like all of the other people who think they were so busy?" Am I really working that hard? Do I really give my all, every day, or have I just gotten so accustomed to saying so that I actually believe it? You tell yourself you are skinny long enough, you start to believe it, until someone shows you a picture from last summer. Am I like a rapper who rants and raves that he’s the best and richest when he knows the record company owns his cars, his jewelry and his girl? 
<br /> <br />I'm no longer going to say how busy I am, or how hard I work. People will never understand, and it's only frustrating to explain. There are certain things that cannot be captured in words: passion, aggression, greed, anger. You cannot get anyone to feel these emotions by just telling them how you feel. The worst thing to hear after you have gone through something is for some insensitive idiot to come and say, "Yeah, me too, I have been working real hard lately too". Agghh! <br /> <br />I say, do what you have to do, and your results will speak for themselves. I am tired of justifying why I can’t go places or do things with my money. I am tired of trying to explain what s-a-c-r-i-f-i-c-e means. I’m either going to do something or not. Go to an event or not. No one cares what I have to do or how busy I am. No one cares if my money is tied up in investments or business ventures. No one cares if my businesses are losing money. So aside from my blog, and the short list of people who actually do care, I keep it to myself. Only once you are rich do people care how you got there; that's why the Road to Riches is not only lengthy and filled with many perils, it's lonely as hell!! <br /> <br />If someone tells you that they have been working really hard and are really busy, pull out a stack of $100 bills, smack them in the face and yell, "MEEE TOO"! 
<br /> <br /> <br />-MJL <br /> <br /></span>An FTB Bloggers Blog for Success: Disgust??<div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="clear: left; cssfloat: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="" tt="true" width="149" /></a></div>I’m often asked two questions: “What motivates you?” and “Why do you call yourself insane?” <br /><br />And the answer is simple: my insanity is what motivates me, and my disgust makes me insane. As I delve into and post about the complexities that will be involved with launching Project X, I am hoping that you will get to see a more in-depth analysis of what makes me tick: my insanity and my disgust. Of course it all revolves around money and success. When people meet me they are blown away by my upbeat attitude and zest for life, but that is all fueled by this weird sort of disgust. 
I’m disgusted by all things I want to be and all things that I am not.<br /><br />For instance, when I see someone driving a new Range Rover, I’m disgusted. They have one; I don’t. When I’m trying to get to work, because I have to be one of the first people there, and there is a sweet dear old man cruising in front of me, I’m disgusted. At that moment I don’t want anything more than to be at work with my first cup of coffee, kicking ass and taking names. Instead I’m behind Grandpa, who’s cruising to the CVS for a pack of Depends! As cruel and harsh as that sounds, it’s honest, and that’s how I’m thinking at that time. I want nothing more than to take a shoulder-holstered bazooka and blow his damn PT Cruiser out of my way. I’m that bastard that blows by you during rush hour. That’s me. I’m disgusted with myself because of it, but it is me. One day I will be that old guy and need adult diapers, but the difference is, I will be rich, a millionaire, and my Huggies will be super-absorbent Gucci diapers! I have a kind, gentle heart, but that is often overshadowed in my mind by my will to be the best at everything. That is the insanity. My compassion is overshadowed by my passion; my passion to be better than great. <br /><br />When I go to the gym in the morning, I’m disgusted by how many people were able to get there before me. I’m disgusted by the fact that there is a guy my height, my build, that can out-lift me. WTF!!! How can this “muscle head” out-lift me? Bastard! That in turn makes me lift harder. Maybe he can out-lift me, but he won’t be as intense as I am, and I will train harder than him, and in a month he will be playing catch-up to me. That’s how I think about everything. It even disgusts me, but that’s what makes me tick, and therefore I embrace it. The more Mark Zuckerbergs, Larry Pages, Sergey Brins and Jeremy Schoemakers of the world, the more disgusted with myself I am on a daily basis. 
<br /><br />That’s the crazy-man fuel that allows me to sleep 3 hours a night, get up and go to the gym at 4:45, work 16 hours per day, 7 days per week, and still have a great attitude. I’m disgusted with myself, with what I don’t have, and with those that have done more. People don’t want to embrace their disgust, but I embrace it, nourish it, and attack it.<br /><br /><strong>EMBRACE YOUR INNER DISGUST</strong><br />I suggest looking at yourself in the mirror when you get a chance. Look in your own eyes at everything you could be, and everything that you want to be and are not. Take responsibility for that, because you are the person responsible, no one and nothing else. Do you feel that? That’s disgust. Embrace that, don’t ignore it like you have been doing. Promise yourself that each time you look in the mirror you will be less and less disgusted with yourself. Start now. To wait is disgusting. <br /><br />I just re-read this post and I’m disgusted with myself!<br />-MJL<br /><a href=""><img alt="" border="0" src="" style="display: block; height: 84px; margin: 0px auto 10px; text-align: center; width: 500px;" /></a><br /><br /><span class="fullpost"></span>An FTB Bloggers Blog THAT PAPER<div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="160" src="" tt="true" width="200" /></a></div><div style="border: medium none;"><div style="text-align: center;"><span style="font-size: small;">Wallpaper of the Week <b style="color: blue;">"Gordon Gekko"</b></span></div><div style="text-align: center;"><span style="font-size: small;">If you read this blog then you are going to appreciate the Gordon Gekko Wall Street 20th Anniversary</span></div><div style="text-align: center;"><span style="font-size: small;">Wallpaper. Happy Friday</span><br /><div style="text-align: left;"><span style="font-size: small;">1. 
Double Click</span></div><div style="text-align: left;"><span style="font-size: small;">2. Right Click</span></div><div style="text-align: left;"><span style="font-size: small;">3. Save as Background</span></div><div style="text-align: left;"><span style="font-size: small;">4. Enjoy!</span></div><div style="text-align: left;"><span style="font-size: small;">-MJL</span></div><div style="text-align: left;"><i><span style="font-size: x-small;">An FTB Blog</span></i></div><div style="text-align: left;"><br /></div><br /><br /><br /></div></div>An FTB Bloggers Blog BIRTH OF PROJECT X<br />Now that I'm on the fast track to being debt-free (14 months tops!), it's time to get my hands dirty again, and I figure this is the best time to chronicle the experience. This is.....as Fred Sanford would say, "the big one". <br />Let's recap my current endeavors:<br />1. Owner of the FTB blog network and affiliates <br />2. Owner of <a href="">Wash and Roll Mobile Wash & Detailing divisions</a><br />3. Co-owner of <a href="">Nuff Said Outfitters</a><br />4. Owner-operator of <a href="">Money Saving Mikes Online Gifts</a><br />5. Owner of FTB Vending<br />6. Middle Management (Some Place in Corporate America)<br /><br />Now it's time to launch Project X. 
Project X has been in development for about five years, and it's now or never. Ever since my layoff recovery I've become an avid reader of Fast Company and INC magazines; guys like Mark Zuckerberg, Mint.com's Aaron Patzer and online marketing guru Jeremy Schoemaker make me want to resign and just hustle for myself full time...but I'm no dummy. <br /><div style="border: medium none;"><a href="" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="190" src="" tt="true" width="200" /></a>So after the failure to launch Igarage (I ran out of money and repairpal.com beat us to market, read the post<a href=""> here</a>), I'm throwing my hat into the software arena again. I've come up with another software idea and it's going to take me to the promised land. </div><div style="border: medium none;"><br /></div><div style="border: medium none;">The first part of my execution is to partner up. I think I have found the right person and I have pitched the idea. Preliminarily he is on board, and as the vision becomes clear he will be just as excited as I am. </div><div style="border: medium none;"><br /></div><div style="border: medium none;"><b>Why a partner?</b></div><div style="border: medium none;">Just because. No really good reason; I would just like to share the experience with someone. Why not get rich together? It's fun, and that's what life is all about. Another good reason is, you need someone that you respect to hold you accountable, to call you on your bullshzit and keep you motivated. That someone also has to have the balls to tell you that "your baby is ugly". 
</div><div style="border: medium none;">If my potential business partner wants in, if she shares this vision with me and understands how much work is involved, and is willing to do it, you will be witnessing the birth of Project X .</div><div style="border: medium none;"><br /></div><div style="border: medium none;">My palms are itching just thinking about it. </div>Let's Go!!<br />-MJLAn FTB Bloggers Blog to Get Paid What You Are Worth.One of the hardest things to embrace is the fact that we get paid what we are worth; you are exactly where you are supposed to be. <br /><div style="border: medium none;">That’s a rough thing to hear; depending on where you are in life. We hate hearing the truth. Like most things , there are exceptions but the majority of us are exactly where we are supposed to be in life. The salaries we make are what we deserve. This is a hard fact and when you forget this fact, you could be setting yourself back. </div><div style="border: medium none;"><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="" width="200" wt="true" /></a>There are a lot of people who feel that they should be the <br /> CEO of the company they work for. They will tell you 100 times per day that they could run this place, or that they should be the boss; <b>BUT THEY ARE NOT.</b> The answer why is simple: They haven’t done enough to be the boss. Now the “enough” is the tricky part. The “enough” isn’t really quantifiable. It’s easy to simplify things and think that being the boss, or being wealthy, means knowing the most and working the hardest. That sounds nice, but look <br /> around you. How many people are making more money then you, working less hours then you and are about as smart as a box of rocks! The best example is your parents. 
<br /> Some of us had the smartest moms and dads around, and no one works harder than my pops, and he has about $20 G’s and an old-ass Lincoln Town Car for his retirement. <br /><div style="border: medium none;"><a href="" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="200" src="" width="133" wt="true" /></a></div><div style="border: medium none;"><br /></div></div><div style="border: medium none;"></div><div style="border: medium none;">ARE YOU FEELING HELPLESS AFTER READING THAT? </div><div style="border: medium none;">LET’S TALK ABOUT THE MISSING INGREDIENT, SHALL WE? </div><div style="border: medium none;"><br /></div><div style="border: medium none;"><br /></div><div style="border: medium none;">DETERMINATION. Blind, dumbfounding, insane-in-the-membrane Determination. </div><div style="border: medium none;"><div style="border: medium none;"><a href="" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="200" src="" width="200" wt="true" /></a>This is a skill that really can’t be taught. It’s a skill that can be identified, and hopefully you can hone in on your inner grit and tenacity and harness it in the right direction. The perfect example of this is Survivor. We can learn a lot from Mark Burnett's shows. We are all familiar with Survivor. “The Tribe has Spoken” and all that jazz. When some of the contestants are voted off the show, they will often say, “I’m the best Survivor”. Really? They might be the most skilled athlete, the best puzzle solver, the hardest-working tribe member, but that doesn’t make them the best Survivor. The moment you think it does is when you go home. The game is to Survive, so the best Survivor is the person who is there at the end. Get it? This person can be the laziest, most un-athletic person in the world, but if he or she is there in the end, then they deserve to be. 
Your manager could make Homer Simpson look like Einstein, but he managed to get to that position and that’s really all that matters. He is getting paid what he thinks, no….what he knows he is worth. </div></div><div style="border: medium none;"><br /></div><div style="border: medium none;">Maybe he knew somebody – does that make it wrong? No. They knew someone and you didn’t. Maybe someone got fired above them and they were at the right place at the right time. You have to play the game from all angles, and that means playing your position well. If you get out of position, you expose yourself and you could be fired. You have to play the game to get where you want to be. How you play the game is up to you; you just have to identify where you are, where you want to be, and start working to get there. Try not to screw anyone over in the process, but that doesn’t mean not to be relentless. I will guarantee that if you don’t let anything get in your way, then you won’t be stopped. ….That’s funny. Of course I can guarantee that, because if you overcome your obstacles you won’t be stopped anyway. </div><div style="border: medium none;"><a href="" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" height="150" src="" width="200" wt="true" /></a>I hate to pick on former Governor Rod Blagojevich, but two weeks ago on Celebrity Apprentice, the former Governor of Illinois was crippled by his own ineptness. Not only could the man not turn on a laptop, but he couldn’t text on a cell phone or send digital pictures. After watching him struggle with the things that we do like breathe, walk, and blink, I simply dismissed it to the fact that he’s been so busy and working so hard that he hasn’t had time to learn how to use these items. I figured, this man was Governor of Illinois, home to our nation’s third largest city (2.8 million people). 
He will show the world and, more importantly, Donald Trump how great he is by delegating and leading his team to victory. Well, after spending valuable time napping on the plane, he starts this victory march by putting Brett Michaels in charge. Really? Trump fired his ass. How can a man like this have run Illinois, you ask? How? Because he knew what he was before he became governor, played his position, and then didn’t stop until he got what he thought he was worth. </div><div style="border: medium none;"><br /></div><div style="border: medium none;">If someone is where you want to be, getting paid what you want to get paid, then you can do it too! When it gets tough you have to ask yourself one question: “How bad do you want it?” If it’s bad enough, then you will do everything in your power to make it happen. </div><div style="border: medium none;">-MJL</div><br /><div class="wlWriterEditableSmartContent" id="scid:0767317B-992E-4b12-91E0-4F059A8CECA8:a2768043-19cb-4efc-9912-3f73e15e9cc1" style="display: inline; float: none; margin: 0px; padding: 0px;"><div style="border: medium none;"><br /></div><div align="center" style="border: medium none;"><b>CLICK "READ MORE" TO JOIN OUR MLM</b><br /><b>PROGRAM </b><br /><div style="border: medium none;"><a name='more'></a></div></div></div><a href=""><img alt="" border="0" id="BLOGGER_PHOTO_ID_5451975457207385458" src="" style="display: block; height: 124px; margin: 0px auto 10px; text-align: center; width: 400px;" /></a><br /><b><br /></b>An FTB Bloggers Blog from 2006! Man, I knew what I was talkin' About!!<br />This week I am away on travel, so I figured I would pull from my 4 years of posts to find something inspirational and appropriate.<br /><br />Since things are starting to pick up along with the economy and it's spring....I bring to you the post entitled:<br /><div align="left"><br /><b><span style="font-size: 130%;">WHEN THE SUN IS SHINING, PREPARE FOR THE RAIN</span></b><br /><br />I know it sounds a little depressing, but it's true. People definitely do not do this enough.<br /><br />When faced with adversity, we dig in deep, kick ass, take names and prevail. Once we are in the winner's circle we forget the things that got us there. We forget the dedication, the focus, the struggle. 
We get complacent and end up in the same situation we've just fought our way out of. When you have just won the first battle of the war you must continue to fight just as hard until the war is over.<br /><br />Let’s look at two real life examples:<br /><br />EXAMPLE ONE: PERSONAL LIFE<br />You get one credit card paid off and you’re feeling good. You find yourself at Borders bookstore buying a book, or magazine (a budgeted expense) and you see a night light for sale. You think to yourself, “It's only $16.99, it's only a night light!"<br /><b><i>ARE YOU NUTZ!</i></b><br />You shouldn’t even be considering that. Are all of your credit cards paid off? You have gone 20 something years without this dumb-ass night light, yet all of a sudden you need it. STOP FOOLING YOURSELF. This is the trap that got you in debt to begin with. Get back on........<br /><a name='more'></a> the grind. Remember every penny counts. That night light is costing you 21%; in fact every penny not put down on your credit card is costing you 21%.<br /><br />Once you have refrained from spending, go harder. Take another look at your budget, and your savings, and turn it up a notch or two. You knew nothing about your finances when you first established your budget; now you are an expert. Take another look with your experienced eyes and tweak another $10 per week out.<br /><b>EXAMPLE TWO: BUSINESS</b><br />Your business is finally doing well. You feel like you can take $500 bucks and go buy some new company T-shirts.<br /><b><i>ARE YOU KIDDING ME!</i></b><br />You should take any extra money and prepare for repairs, maintenance, better equipment, or pay down overhead. You owe it to your business and your employees to do everything in your power to establish a solid foundation. Once that is completed, you then work on making sure your top performers are happy. 
Then and only then can you worry about T-shirts.<br /><br />Then...you guessed it....GO HARDER!<br />While things are going well for your business take this opportunity to step up your game. Get better at what you do. Now that you are not worrying about profits, over achieve to over satisfy your customer base. Use that extra energy to think of ways to improve quality and cut costs. Do product research, implement processes and procedures that will ensure the continuance of quality work. Stay hungry.<br /><br />Keep this tip in mind. When you treat the good times not so good, then the bad times won’t really be that bad. Rich people have great times and not so great times. There are no bad times. (Health issues excluded.)<br /><br />So when the rain comes you are prepared to weather the storm. Why shouldn't you be? You knew it was coming.<br /><br />-MJL </div><div align="left">An FTB Blogger Blog</div><span class="fullpost"><a href=""><img alt="" border="0" src="" style="display: block; height: 84px; margin: 0px auto 10px; text-align: center; width: 500px;" /></a></span>Don't be a scape G.O.A.T<img alt="" border="0" id="BLOGGER_PHOTO_ID_5451974089803319922" src="" style="cursor: hand; float: left; height: 118px; margin: 0px 10px 10px 0px; width: 150px;" />Hello all of my over achieving readers. This post falls into the “How do I manage a 9-5 and still advance myself for myself?” category.<br /><br />Super aggressive entrepreneurs who still work for corporate America often suffer from the need to be the GOAT. What’s the GOAT? It’s the Greatest Of All Time.<br /><br />I want to be the greatest of all time at making money. That’s how I’m going to get my millions. Not by being the greatest blogger, not by being the greatest Program Manager, the greatest online marketer or small business owner, but by managing my time and being one of the greatest business minds of all time. 
Losing my job last year helped me see that trying to be the GOAT at everything isn’t necessary to accomplish my goals.<br /><a href=""><img alt="" border="0" id="BLOGGER_PHOTO_ID_5451974889252634114" src="" style="cursor: hand; float: right; height: 200px; margin: 0px 0px 10px 10px; width: 192px;" /></a><br />You have to look at this like one of those strongman competitions; sometimes the little guy is at a disadvantage when it comes to putting the big stones on the wall, but his shorter arms help him with all of the overhead lifts. In order to be the Greatest Strong Man he doesn’t have to be the greatest at all the events, just good enough at most of them, and great at a few. That is the recipe for our success. Good at all, and great at a few.<br /><br /><strong>YOUR 9 TO 5 MIND SET</strong><br />At my last job, this was my big mistake. I thought that you should go into work every day and want to be the Greatest of All Time, the best that ever did what you do. It doesn’t really matter what the position is, just be the best ever. When I worked at McDonald’s, I wanted to be the best cashier and cook that ever did it. I wanted to be ranked Worldwide Number 1 on fries. The Tiger Woods of cashiers!! My last job, the same thing. I wanted to not only be the best Program Manager in that place, but the best that anyone has ever seen for any company, ever!!!<br /><br />That is all wrong.<br />That is the quickest way to burn yourself out. Corporate America is not set up to reward Tiger Woods. It is set up to reward decent performance. When you show up and you are the best that ever did it, you shake things up too much. The organization isn’t ready for it and it will take a lot of mental tenacity to put so much effort out and receive little reward in return. 
You might even be ridiculed, resented and possibly let go, as you might pose a threat to those you work for.<br /><br /><strong>WHAT SHOULD YOU DO?</strong><br />You should do whatever it takes to outperform your peers. When you have outperformed them on a consistent basis, do no more. The reward for being the GOAT and for barely outperforming your PEERS will be the same. That’s the secret<br /><a name='more'></a> that companies won’t tell you.<br /><br />Yep, I’m telling you to work below your potential. I realize that goes against what we teach here at CIFM (Confessions of an Insane Future Millionaire) but there is a purpose. You do enough so that your boss realizes that you’re one of the best – best Project Coordinator, best artist, best Engineer in the group – and that’s it.<br /><br /><a href=""><img alt="" border="0" id="BLOGGER_PHOTO_ID_5451975241093781858" src="" style="cursor: hand; float: left; height: 202px; margin: 0px 10px 10px 0px; width: 279px;" /></a>It’s a simple calculation. Being the GOAT requires all your effort and energy, and just outperforming the guy next to you takes a little effort but results in the same thing. In essence you are “over” working for the same results. It’s the equivalent of overpaying for something. If I tell you that Subway has $5.00 footlongs and you start giving me $1.00 bills for the sandwich: 1…2…3…4…$5. At this point I’m ready to hand you the sandwich; why would you continue to give me money if you only get one sandwich? Only a fool would pay $10.00 for a $5 sandwich. That’s exactly what I’ve been doing all these years at my 9 to 5. For my 4% raise, I’ve been paying $10.00 when the guy next to me has been paying $5.00.<br /><br />They almost tell you not to overpay. Think about your performance reviews. Does anyone ever get the highest mark? There’s Meets Expectations, Exceeds Expectations and Exceptional. They almost tell you, “No one gets Exceptional”. 
Hence getting an Exceeds Expectations is your highest obtainable mark.<br /><br />Got it?<br /><br />NOW…..REAP THE REWARDS, SILLY!<br />Now use this extra (time, mental freedom, etc.) to focus on your next steps. Performing to the $5.00 level means you can leave when you have barely outperformed your peers. The GOAT, he or she opens and closes the place every day. The GOAT is there on the weekends, working at night, thinking about improvements to company systems, how to increase profit margins and decrease the company’s carbon footprint. That’s not you anymore; you have all that time to focus on something much more important….YOU!!!<br /><br />Being the GOAT for yourself is what really pays. When you are working for yourself, being the GOAT is directly proportional to your financial reward and stability. In other words, you can’t overpay, because the more output you give the more you get for it. If you give $10, now you get 2 $5 footlongs. If you give $20 you get 4. You can do this until you have enough $5.00 footlongs to last a lifetime. That’s called retirement, baby!!!<br /><br />That’s why everyone says you have to be in business for yourself. 
Being the GOAT pays off if you are a CFO, CEO or President of a company.<br />So if you must overachieve, let’s make sure the payoff is worth it.<br /><br />At the end of the day, I want to be the only one who gets my GOAT!!<br /><br />-MJL<br /><a href=""><img alt="" border="0" id="BLOGGER_PHOTO_ID_5451975457207385458" src="" style="cursor: hand; display: block; height: 124px; margin: 0px auto 10px; text-align: center; width: 400px;" /></a><br /><br /><br /><span class="fullpost"><br /></span>Unemployment Anniversary and Becoming Fearless!<a href=""></a><div><br /><br /></div><a href=""><img id="BLOGGER_PHOTO_ID_5448633319794581906" style="FLOAT: left; MARGIN: 0px 10px 10px 0px; WIDTH: 200px; CURSOR: hand; HEIGHT: 194px" alt="" src="" border="0" /></a><br />This week makes it a year since I was laid off. Wow, what a difference a year makes. I am so much smarter than I was then….and back then I thought I knew it all.<br /><br />There were a lot of things that I thought about during my 16 weeks of “unenjoyment”. The lessons that I assumed I would learn, I really didn’t. In fact, it has done nothing but re-instill a lot of the stuff that I already thought. I went from thinking to knowing.<br /><br />I gave that job all I had, no vacation, 70 hrs per week, just to get cut with no severance. They just don’t give a crap about you. I used to think getting laid off was for non-performers, but when you are the Business Unit Manager for HUMMER and there is no HUMMER, then there is no position for you, no matter how good you think you are.<br /><br />I think I rebounded pretty swiftly; two job offers in 16 weeks – literally the roughest 16 weeks of my life, mentally, physically and emotionally. Here’s what I re-learned, because a lot of this stuff I already knew:<br /><br /><strong>WORK HARD AT WORK</strong><br />You would think that I would be against this, but let me tell you, I lasted a lot longer than most people in companies that were struggling. 
The automotive industry has been laying people off for the last 5 years, and up until 2009 I was getting raises. The hard work that I’ve done is what enabled me to find a job in the worst job market that my generation will ever face, and at least hold on to the job I had as long as I had it.<br /><br /><strong>WORK HARD OUTSIDE OF WORK<br /></strong>Don’t let your job be all that you have. I happen to want to be rich, so outside of work I work on making that happen. Whatever your aspirations are, when you get home from that 9-5, don’t hit the couch every night, hit your dreams. The best way to ensure your dreams don’t come true is to not even try.<br /><a href=""><img id="BLOGGER_PHOTO_ID_5448633426217007714" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 200px; CURSOR: hand; HEIGHT: 200px" alt="" src="" border="0" /></a><br /><strong>FAMILY OUTRANKS MONEY – (YEP I SAID IT!!)</strong><br />I’ve seen husbands getting their company cars stripped away from them and returning home with their tails between their legs; tantamount to castration. When they get home they have the support of their loving wife and their kids still think they are the greatest! Family is key; not to mention, if the wife is working, an unemployment check can go a long way!<br /><br /><strong>YOU CAN DO ANYTHING THAT YOU PUT YOUR MIND TO – REALLY!!</strong><br />When I was out of work, I wanted nothing more than to get a job. I ate, slept, drank employment. I looked for a job like it was my job....(click below to Read More)<span class="fullpost">. I divided my day into four parts. I woke up, showered and went to Panera Bread every day. One bagel and all the coffee my bladder could hold. From 8:30 to noon, I searched and applied for every job that fit the 6 different job descriptions I fit into. 
From 12:30 to 3:00 I worked the phones, calling headhunters, friends and friends of friends, and then after 4:00 I looked outside the box, emailing people I didn’t even know on Facebook and LinkedIn to ask if their companies were hiring. I watched interviews on YouTube, reviewed resume writing tips, looked for job fairs, and studied job fair tips. Then I spent the rest of the night doing self hypnosis, listening to Jim Rohn, Tony Robbins and Earl Nightingale. I knew that if after 3 weeks of this I didn’t get a call, I would start to feel discouraged. It was true. Then 3 weeks turned to 4 and 4 to 8. I tweaked my routine a little but stayed at it. If there were 20 people doing the same thing, when 19 of them quit I would still be around, ready to get that job.<br /><br />There was no choice. I was able to keep all my businesses running and pay all my bills. I lay awake at night dreaming about the day I would get the call for an interview. When I got an interview I went over every possible question in my head; I went to Barnes and Noble and read interview books until they kicked me out.<br /><br />Some people who were laid off knew people in the right places and were able to obtain gainful employment super quick; some are still unemployed. I was prepared to work for it until I got it. How bad do you want it?<br /><br /><strong>FEARLESS!</strong></span><br /><span class="fullpost">That’s why I know I am going to hit the million dollar mark. If you asked me a year and one week ago what my biggest fear was, honestly, I would have told you losing my job; by far my biggest fear.<br /><br />I’m no longer afraid of that. Now if you asked me what my biggest fear is, I honestly couldn’t answer you.<br />I’ve already failed at many things many times, I’ve been so close to losing everything I’ve ever had many times, I’ve been forced to move out of my home because I couldn’t pay my bills, I’ve been divorced, I’ve been fat, I’ve been so many things except the one thing I am now. 
Fearless! (Fear of God doesn’t count). Getting laid off has removed one of my biggest fears….and for that, I’m most thankful.<br /><br />So thanks to that crappy company, with crappy management, and my crappy boss for doing what I thought couldn’t be done….removing my fear!! Now it’s time to do work! <a href=""><img id="BLOGGER_PHOTO_ID_5448633633172264098" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 200px; CURSOR: hand; HEIGHT: 200px" alt="" src="" border="0" /></a></span><span class="fullpost"><br /></span><span class="fullpost"><br /><a href=""><img id="BLOGGER_PHOTO_ID_5448633542530388674" style="FLOAT: left; MARGIN: 0px 10px 10px 0px; WIDTH: 200px; CURSOR: hand; HEIGHT: 160px" alt="" src="" border="0" /></a><br /><br />P.S. When you think Fearless – think Jet Li, not Taylor Swift!<br /><br />Either way, I've attached links to wallpapers of the post. Click them and make these your inspirational wallpapers of the week.<br /><br /><br />HAPPY ANNIVERSARY TO ME!!!<br />-MJL<br /><img id="BLOGGER_PHOTO_ID_5451876331788964882" style="DISPLAY: block; MARGIN: 0px auto 10px; WIDTH: 400px; CURSOR: hand; HEIGHT: 124px; TEXT-ALIGN: center" alt="" src="" border="0" /><br /></span>Being a super hero 101 - Part 2 of a Series.....<a href=""><img id="BLOGGER_PHOTO_ID_5443819462696140194" style="FLOAT: left; MARGIN: 0px 10px 10px 0px; WIDTH: 200px; CURSOR: hand; HEIGHT: 174px" alt="" src="" border="0" /></a><br /><div><div><div><div><span style="font-family:times new roman;">We find ourselves back at super hero school! Question: If you are a superhero, do you have a lunchbox of yourself? Ha ha.<br /><br />Welcome to class. This is “Being a super hero 101”.<br />In order to be eligible for this class, you have to have already completed “I am a Superhero”. 
If you need to be refreshed, click the Super hero badge to go back to the first part of this series.<a href=""></div><img id="BLOGGER_PHOTO_ID_5443817359384816706" style="DISPLAY: block; MARGIN: 0px auto 10px; WIDTH: 83px; CURSOR: hand; HEIGHT: 83px; TEXT-ALIGN: center" alt="" src="" border="0" /></a><br />Let’s get down to business…….<br /><br /><strong>UNLOCKING YOUR SUPER ABILITY</strong><br />Unlocking your superpowers is not easy! As a matter of fact, it’s so hard that only a few of us will ever have the courage to do it. That’s where I come in.<br /><br />Let’s take a normal day in your life. Not a terrible day, a typical day. If you have no kids and are in your 20’s, then you are a Super Duper Hero. A typical day for you might be:<br />7:30 AM Wake up<br />9:00 AM Work an 8 to 10 hour day<br />7:00 PM Go to the gym<br />8:30 PM Shower, eat dinner, watch some TV.<br /><br />Life is good – but you are not using your super powers.<br /><br />Now unlocking your super powers depends on what your goals are. Hopefully you have some goals that you want to reach. Things that you want to accomplish, no matter what the scale. Let’s pick a hypothetical goal or a dream. It must be somewhat realistic, but it’s OK if it’s a stretch. For instance, if you are a decent bowler and want to go on the Pro tour, if you want to be a motorcycle mechanic and own your own shop, or if you want to be CEO of a Fortune 500 company.<br /><br /><strong>THE POWER OF MENTAL <span style="color:#ff0000;"><u>D-DUCTION</u></span>!!!</strong><br />OK, so let’s pick one. A Tour Pro bowler, on the PBA Tour. Let’s say you can easily bowl a 200 game. In order to qualify for the PBA, you must carry a 200 or better average for 36 games in a USBC (United States Bowling Congress) sanctioned league.<br />You have just used your super powers of deduction to make your problem simpler – add 20 points to your game. 
<a href=""><img id="BLOGGER_PHOTO_ID_5443818340023991058" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 300px; CURSOR: hand; HEIGHT: 225px" alt="" src="" border="0" /></a><br /><br />Now this might seem impossible to most people, but you are a super hero! Difficult takes a day, impossible a week. We do the impossible, you had a bowl of impossibles for breakfast. Ha!<br /><br />If this is really what you want to do, you go after it! PERIOD!!! You start by going to the library and reading how the greats got started, reading Bowlers Weekly, Bowlers Digest, reading anything you can get your hands on. Are you in good enough physical shape to be on the PBA tour? You tailor your workouts to get you in Pro Bowling shape (which is round). You stop drinking your paycheck away and upgrade your equipment and hire a Pro-trainer. You look for a better bowling league to join so you are surrounded by better players. You spend your morning and nights at the bowling alley. Instead of taking that ski vacation, you take a week and enroll in one of the best Pro Bowling schools. You EAT, BREATH, SLEEP bowling. Do you get it?<br /><br />Your super powers allow you to do this. You have the mental capacity and the physical capacity to make yourself better.<br /><br />No need to pray to God for it…he’s already given you everything you need to Go GET IT. You should start thanking God already for giving it to you, and act like your dream is already come true. It's a a matter of how bad do you want it!!<br /><br /><strong>A REALISTIC EXAMPLE</strong><br />If you have a family, living paycheck to paycheck – you are a superhero but just not like “Super Single Twenty Man”, instead you are “Family Man! “<br /><br />Let’s take a more realistic example, one that is very real to me. 
I have a buddy who<a href=""><img id="BLOGGER_PHOTO_ID_5443819037937869602" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 200px; CURSOR: hand; HEIGHT: 130px" alt="" src="" border="0" /></a> is working as a security guard and he wants to be a motorcycle mechanic. The school that will certify him costs what seems like a ton of money and he barely makes enough to pay his bills as is. After talking to him, I find out that once he’s completed his first year of school he can get a job that pays more than the security job and they will pay for the rest of his school. The more we talked the simpler the problem became. At the end of the conversation, after using our powers of D-duction, all he needed was $5000. That’s it. His dreams were on hold for a mere $5000. How bad does he want it?<br /><br />The solution is there; you work 3 jobs, collect cans, eat franks and beans, do what you need to do to get the $5000. If I had the money would I give it to him?<br />It depends on what he was doing. Was he going out to the club every weekend spending $30 at the bar? Yep, I said $30. If your dreams are on hold for $5000, $30 is .6% of your dream. $50 is 1% of your dream. If I called him on Saturday and instead he rented a movie and cooked dinner, staying in, trying to save money, then I would easily give him the money. Does he need to pray to God for the money? NO. He needs to thank God! God has already given it to him. He lives in America, he’s healthy, he’s intelligent, i.e. he’s a super hero!!! Use your powers and make it happen.<br /></span></div><div><span style="font-family:times new roman;"><strong>IS THIS CONCEPT EXTREME?</strong><br />I know it sounds extreme, but life is extreme. Extreme people make it happen. Extreme people have uncovered their super powers. Anyone or anything that is not helping you is hurting you. Attack them by trying harder, pushing harder, being more clever. 
Finding out ways around your obstacles, using everything that God has already given you.<br /><br />Hopefully you get it. You are a super hero – it’s just up to you if you want to use your powers or not.<br /><br />Next lesson we will review actual, seemingly impossible, real life examples of superheroes doing their thang!!<br /><br />-MJL<br /><br /></span><span class="fullpost"><br /></span></div></div></div>TRUE SECRET TO SUCCESS - REVEALED!!! I am reposting this for three reasons:<br />1. It will eventually fall in line with this month's superhero theme.<br />2. Because sometimes I need to be reminded of certain things.<br />3. It's a powerful piece, so this is its 2nd re-posting.<br />4. Because I wrote it dammit, and I can do that! THIS IS MY HOUSE!!! (Like my dad would say!)<br /><br />Re-read and enjoy!<br /><br /><strong><u>THE TRUE SECRET TO SUCCESS - REVEALED!!!</u></strong><br /><br /><br />I think I have found one of the key ingredients that make successful people<a href=""><img id="BLOGGER_PHOTO_ID_5260545462977720242" style="FLOAT: right; MARGIN: 0px 0px 10px 10px; WIDTH: 163px; CURSOR: hand; HEIGHT: 166px" alt="" src="" border="0" /></a> successful. This is such a profound thought, because in all of the numerous mind-numbing literature I have read about success, self improvement, leadership etc., I have never been privy to this information.<br /><br />Now I’m passing it on to my readers free of charge.<br /><br />Here goes:<br /><br />How many people actually give 100% of themselves at anything? You can always say you tried hard, but can you really say that you gave 100%? 100% is usually reserved for a select group of people like Olympians, recovering addicts, and professional athletes.<br /><br />We tried to dissect ourselves to try and rationalize the reason that 100% effort is not given. The only thing that we could come up with is the fact that the payoff is not guaranteed. 
I think the other reason why 100% isn't given is that on almost all occasions 85% is more than enough to propel you to society’s definition of success.<br /><br />If someone told you that they would give you a million dollars to run a 6 minute mile 3 months from now, your life would change immediately. If you seriously signed a contract and the money was put right under your nose, you would put down the large fries and start training. Per the agreement you are not allowed.....<br /><span class="fullpost"> to quit your job, or spend any less time with your family, but you can do anything else to prepare for your six minute mile. Somehow, you would make time to train. You would replace all your reading material with running information. You would have sweat suits in your car for a run before you got home and before work. You would replace all of your internet browsing bookmarks with running sites, message boards and blogs. You would be fitness crazy. Only a fool wouldn’t give 100%.<br /><br />This is the mind game<a href=""><img id="BLOGGER_PHOTO_ID_5260545800833253218" style="FLOAT: left; MARGIN: 0px 10px 10px 0px; WIDTH: 142px; CURSOR: hand; HEIGHT: 200px" alt="" src="" border="0" /></a> that some world-class athletes are brainwashed into when they are young. A perfect example of this is in the Spike Lee movie "He Got Game". It's about a basketball prodigy, Jesus Shuttlesworth (played by Ray Allen), and his relationship with his father, Jake Shuttlesworth (played by Denzel Washington). During the movie they show flashbacks of the basketball prodigy and his father. They are shown on the basketball court at night, doing running drills. Jake is running with a young Jesus, telling him, "The only two people up right now are you and Michael Jordan, and what is Michael doing? Training." Then he begins to instill the guarantee of success in his child: "What are you gonna buy your mama? What kind of house?" 
So Jesus is brainwashed into thinking that if he gives 100% he will be the best basketball player in the US. There is no doubt, so that 100% is not hard to give.<br /><br />Tiger Woods had this same advantage<a href=""><img id="BLOGGER_PHOTO_ID_5260545804977232434" style="FLOAT: left; MARGIN: 0px 10px 10px 0px; WIDTH: 200px; CURSOR: hand; HEIGHT: 138px" alt="" src="" border="0" /></a> since he was a kid and continues to enjoy it. He can give 100% daily because he knows that he will win and become the best in the world. Players like Phil Mickelson find it hard to give 100% working on their game because Tiger has already defeated them mentally. Why give 100% when you know you will never be better than Tiger? Phil can give 85% and still be one of the top players in the world. For fat Phil, that is good enough.<br /><br /><br /><br /><strong><u>What does all this mean?</u></strong><br /><br />The reason that we don't give 100% is fear of failure. Fear that we would give everything we possibly have to reach a goal and not hit it. We would rather give 85% and fail, saying at least I tried, than go hard for 100%.<br /><br />This applies to me and my resistance to give 100% to my outside endeavors. What am I afraid of? I will try and concentrate on the reward rather than the failure. If I see the reward as concrete then I should be able to go 100%.<br /><br />Sounds good in theory, but can I apply it? 85% has been good enough to make me moderately successful in the past. Am I all talk and no action?<br />Knowing this information has put the responsibility for my success on me, not on any outside excuses.<br /><br />If you are reading this, you know it’s true, and your success has now been placed in your hands and no one else’s. I have chosen to accept this responsibility because I need to be successful, by <strong><u>our</u></strong> definition of success. There is no other alternative for me. 
Rich or nothing.<br /><br />LET’S GO!<br /><br /><br />-MJL<br /><br />FOR FUNNY A$$ SHIRTS CLICK <a href="">HERE</a><br />FOR COOL A$$ GIFTS CLICK <a href="">HERE</a><br /><br /><br /></span>
System Design Development This chapter aims to develop an Intranet system in order to provide as many resources as possible, and also a way for users to communicate with one another, especially for educational purposes. Besides the implementation, this chapter also provides some description of the technical processing involved. The objective of this installation is shown in Figure 7.1. In this chapter we introduce, not in detail but only briefly, how to install and implement Windows Server 2003 as the Web Server operating system. We also install and implement some necessary Windows components, such as IIS 6.0 and DNS, to make the Web Server meaningful. Besides Windows components, we also need to install and configure some software in order to make the web site for the Web Server more colorful (for instance, MySQL and PHP). Because this case study has a tight time limit, we cannot develop our own portal for system testing. Instead, we are going to use a free Content Management System (CMS) web portal for system testing. Figure 7.1: The concept of system design 7.2 Hardware implementation 7.2.1 Network History In the early 1980s, when desktop computers began to proliferate in the business world, the intent of their designers was to create machines that would operate independently of each other. The computing ideal was summed up with the phrase “One User, One Computer,” which meant that individuals were free to manage information on their own desktops any way they liked. This attitude was a reaction to the business-information environment of the time, based on large mainframe computers controlled by technical specialists and programmers. If you wanted information – a report on the aging of your accounts receivable, for instance – you made a request to the Information Services (IS) department, who would program the computer to provide the report for you. 
The report could take any length of time to produce, depending on its complexity, and your only choice was to wait while IS massaged your report out of the mainframe. Once you got the report, if you didn't like its format or if the information in it wasn't clear for any reason, you would make another request to IS, wait some more, and hope the revised report was useful. The market for desktop computers exploded, and dozens of hardware and software vendors joined in fierce competition to exploit the open opportunity for vast profits. The competition spurred intense technological development, which led to increased power on the desktop and lower prices. Desktop computers were soon outperforming older, slower mainframe applications, accomplishing what appeared to be miracles in desktop publishing, graphics, computer-aided drafting, more powerful databases, and sophisticated user interfaces. Small businesses in particular were able to benefit from information management services that, a few years earlier, had been available only to wealthy corporations. Marketplace competition created large numbers of computers from different manufacturers and vendors, large numbers of applications, and an unimaginably vast amount of information stored in desktop systems. With the large volume of information now being handled, it was impossible to pass along paper copies of information and ask each user to reenter it into their own computer. Copying files onto floppy disks and passing them around was a little better but still took too long and was impractical when individuals were separated by great distances. And you could never know for sure that the copy you received on a floppy disk was the most current version of the information—the other person might have updated it on their computer after the floppy was made. For all its speed and power, the desktop computing environment was sadly lacking in the most important element: communication among the members of a business team. 
The obvious solution was to link the desktop computers together and link the group to a shared central repository of information. The problem was that desktop computers were not designed with this capacity in mind, and there were by then thousands of these machines in the marketplace, representing billions of dollars in business assets. Few users were willing to scrap their desktop machines altogether and replace them with new, redesigned machines (and new software) that would communicate with each other this way. Besides, computer manufacturers were quite clever, and they were able to create additional components that users could attach to their desktop computers, allowing them to share data among themselves and access centrally located sources of information. Unfortunately, the early designs for these networks were slow and tended to break down at critical moments. The desktop computer continued to evolve. As it became faster and more powerful, capable of addressing much larger amounts of processing memory and thereby incorporating more sophisticated and complex features, communication between desktop computers gradually became more reliable. The idea of a Local Area Network (LAN) became a practical reality for businesses. Computer networks, with all their promise and power, are more complicated to maintain than simple stand-alone machines. They require ongoing attention from managers whose job it is to oversee the networks and keep them running smoothly. Ironically, this arrangement looks a lot like the old mainframe paradigm, where a specialized cadre of technical insiders held near-absolute power over making information available. In some ways, it now appears as if business computing has come full circle, from IS to desktop and back to IS again. Over the past few years, Internet technology has become more cost-effective and easier to use, and Internet access sites have proliferated.
As the Internet has grown and evolved in just a few years, it has become host to the World Wide Web, a community of thousands of business, educational, and personal information sites. Users can tap into these web sites using special software called a Web browser (e.g. Internet Explorer, Mozilla Firefox, Netscape, Opera, etc.). Networks now take a whole variety of forms: they can exist within a single room, an entire building, a city, a country, or the world. There are networks of networks, and there are networks that access each other at will, or at the whim of individual users who can contact them at any time over telephone lines.
- Benefits of computer networking
- Networks allow more efficient management of resources. For example, multiple users can share a single top-quality printer, rather than putting duplicate, possibly lesser-quality printers on individual desktops. Also, a network software license can be less costly than separate, stand-alone licenses for the same number of users.
- Networks help keep information reliable and up-to-date. A well-managed, centralized data storage system allows multiple users to access data from different locations, and to limit access to data while it is being processed.
- Networks allow workgroups to communicate more efficiently. Electronic mail and messaging are a staple of most network systems, in addition to scheduling systems, project monitoring, online conferencing, and groupware. All these things help work teams be more productive.
- Networks help businesses serve their clients more effectively. Remote access to centralized data allows employees to serve clients in the field, and clients to communicate directly with suppliers.
- Networks greatly expand a business's marketing and customer service capacity.
Using Internet technology, a business can automate its ability to inform customers about its products and services, take orders directly from customers, and provide up-to-the-minute facts and figures to be accessed at the customer's whim, any time of day or night.
- Network Protocols
- What Are Protocols? Protocols are the agreed-upon ways in which computers exchange information. A computer needs to know exactly how a message will arrive from the network so it can make sure the message gets to the right place. It needs to know how the network expects the message to be formatted (for instance, which part of the message is data and which part identifies the recipient) so the network can convey the data to its destination.
- Hardware Protocols: Hardware protocols define how hardware devices operate and work together. For example, a hardware protocol defines how 10BaseT Ethernet devices will exchange information and what they will do if it is improperly transmitted or interrupted. It determines such things as voltage levels and which pairs of wires will be used for transmission and reception. There is no program involved; it is all done with circuitry.
- The Hardware-Software Interface: Whenever a program in a computer needs to access hardware, such as when a message has arrived from the network and is now waiting in the adapter card's memory ready to be received, the program uses a predefined hardware-software protocol. This basically means that the program can expect the data to always be in the same place; that certain registers on the card will indicate what is to be done with it; and that when other registers are accessed in the proper order, the card will do something logical, such as receive another message or send a message out.
- Software Protocols: Programs communicate with each other via software protocols. Network client computers and network servers both have protocol packages that must be loaded to allow them to talk to other computers.
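The point that both sides must agree in advance on the message layout, as described under "What Are Protocols?" above, can be sketched with a small example. The frame layout, field sizes, and function names below are invented purely for illustration; they do not correspond to any real network protocol:

```python
import struct

def encode_message(recipient: str, data: bytes) -> bytes:
    # Agreed-upon layout: 2-byte recipient length, recipient name,
    # 4-byte payload length, then the payload ("!" = network byte order).
    name = recipient.encode("utf-8")
    return struct.pack("!H", len(name)) + name + struct.pack("!I", len(data)) + data

def decode_message(frame: bytes):
    # The receiver can only make sense of the bytes because it follows
    # exactly the same layout rules as the sender.
    (name_len,) = struct.unpack("!H", frame[:2])
    recipient = frame[2:2 + name_len].decode("utf-8")
    (data_len,) = struct.unpack("!I", frame[2 + name_len:6 + name_len])
    data = frame[6 + name_len:6 + name_len + data_len]
    return recipient, data

frame = encode_message("server1", b"hello")
print(decode_message(frame))  # -> ('server1', b'hello')
```

If the sender and receiver disagreed on even one field width, the receiver would read bytes from the wrong offsets, which is exactly why protocols must be agreed upon in advance.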
These packages contain the protocols that a computer needs to access a certain network device or service. Common types of network protocols:
+ NetBIOS and NetBEUI: Back when IBM first started marketing their PC Network, they needed a basic network protocol stack, which is an implementation of a board driver, transport protocol, and redirector. The Network Basic Input/Output System (NetBIOS) is just 18 commands that can create, maintain, and use connections between PCs on a network. IBM soon extended NetBIOS with the NetBIOS Extended User Interface (NetBEUI), which is basically a refined set of NetBIOS commands. However, over time the names NetBEUI and NetBIOS have taken on new meanings.
- NetBEUI now refers to the actual transport protocol. It has been implemented in many different ways by different vendors, to the point where, in some ways, it is the fastest transport protocol for small networks.
- NetBIOS now refers to the actual set of programming commands that the system can use to manipulate the network; the technical term for such a set of commands is an Application Programming Interface (API).
+ IPX/SPX: The most popular local-area network type in the world is Novell NetWare. When the Novell folks were building NetWare, they decided to build their own protocol rather than use an existing one. The Novell protocol is named IPX/SPX, for Internetwork Packet Exchange/Sequenced Packet Exchange. Since it is the protocol used most often on NetWare networks, and since Microsoft wanted its software to be somewhat compatible with NetWare networks, Microsoft designed Windows 98 and later versions to include an IPX/SPX implementation.
+ AppleTalk is the name given to the protocol suite designed for Apple Macintosh machines to communicate with each other.
Apple Computer began the development of AppleTalk in 1983.
+ TCP/IP (Transmission Control Protocol/Internet Protocol): This is the most famous protocol and the most widely implemented in computer networking. It was created over many years by the U.S. government. The protocol is actually a protocol stack, called the TCP/IP suite. The TCP/IP suite is a very efficient, easy-to-extend protocol whose main strength has been in wide-area networking; it glues together dissimilar networks and brings together similar networks that are separated by distance and low-speed connections. It is one of the best-supported and best-designed internetworking protocols around.
- Network Operating System (NOS): The NOS is the computer software that runs on the network server and offers file, printer, application, and other services to the clients. It acts as the director that keeps the network running smoothly. Popular NOSs include MS Windows NT, MS Windows 2000 Server, MS Windows Server 2003, Unix, Linux, Sun Solaris, and Novell NetWare. There are two major types of NOS:
Peer-to-Peer: Peer-to-peer network operating systems allow users to share resources and files located on their computers and to access shared resources found on other computers. However, they do not have a file server or a centralized management source. In a peer-to-peer network, all computers are considered equal; they all have the same ability to use the resources available on the network. Peer-to-peer networks are designed primarily for small to medium local area networks. Examples include AppleShare and Windows for Workgroups.
Client/Server: Client/server network operating systems allow the network to centralize functions and applications in one or more dedicated file servers. Examples include Novell NetWare and Windows Server.
- Server Implementation
- MS Windows 2003 overview
Microsoft has put an immense amount of time and effort into building Windows Server 2003.
It's not fair to say that this operating system is an entirely new product, because it still retains a great deal of core code from Windows 2000 and even Windows NT. MS Windows Server 2003 is a large, complicated, and very powerful operating system. To use it effectively, you have to understand how it works and how to make it do what you want it to do. There are four editions of MS Windows Server 2003, available in both 32-bit and 64-bit versions: Standard Edition, Enterprise Edition, Datacenter Edition, and Web Edition.
- MS Windows Server 2003, Standard Edition: See Table 7.3.1.
Table 7.3.1: Minimum system requirements for MS Windows Server 2003, Standard Edition. Source:
- MS Windows Server 2003, Enterprise Edition: Designed for business-critical applications. Windows Server 2003 Enterprise Edition builds on the standard features found in the Windows Server 2003 family by adding features designed to increase the reliability, scalability, security, and manageability of enterprise applications. These features apply to mission-critical applications such as e-mail, databases, and business applications. See Table 7.3.2.
Table 7.3.2: Minimum system requirements for MS Windows Server 2003, Enterprise Edition. Source:
- MS Windows Server 2003, Datacenter Edition: Designed for the highest levels of scalability and reliability, Windows Server 2003, Datacenter Edition, supports mission-critical solutions for databases; enterprise resource planning software; high-volume, real-time transaction processing; and server consolidation. Windows Server 2003, Datacenter Edition, is available in both 32-bit and 64-bit versions through original equipment manufacturers (OEMs).
See Table 7.3.3.
Table 7.3.3: Minimum system requirements for MS Windows Server 2003, Datacenter Edition. Source:
- MS Windows Server 2003, Web Edition: Designed around the Microsoft .NET Framework to make it easier to build and host Web applications, Web pages, and XML Web services. See Table 7.3.4.
Table 7.3.4: Minimum system requirements for MS Windows Server 2003, Web Edition. Source:
- Installing MS Windows 2003 Server Standard Edition
7.3.2.1 Planning and Preparing:
- Prepare the hardware: We need to know the minimum hardware requirements of the product and have the product software CD at hand.
- Prepare the BIOS: Set the BIOS so that the CD-ROM drive has first boot priority, so the computer can boot from the installation CD.
- Partitioning: You should partition your hard drive into at least one drive for the system and, if possible, another for data. The minimum size for the system drive is 4 GB.
- File system: Choose the right file system for your system. NTFS is the recommended file system for Windows 2003 Server.
- Server name: Choose an appropriate name for your server, to identify the machine in later use. The server name is conventionally written in upper case, e.g. FILESERVER, SERVER, FIREWALL. A name can be up to 63 characters long. Note: you can easily change the name of the server later.
- Network connection and options: Not knowing your network configuration ahead of time isn't usually going to be a showstopper, but knowing it can save you time.
7.3.2.2 Installing:
This procedure focuses on a new installation only, not on upgrading. There are two phases to the installation.
Phase 1: Text-based setup. This part is very similar to the text-based setup of MS Windows 2000 or MS Windows XP. As soon as your machine boots into the text-based portion of Setup, you may notice a prompt at the bottom of the screen that tells you to press F6 if you need to install additional SCSI or RAID drivers.
If you don't want these additional drivers, just wait a few seconds and the prompt will go away. But if your system has a SCSI or RAID controller that you know isn't going to initialize without an OEM-provided driver, you'll need to watch this part of Setup closely and press F6. The install starts off with a Welcome to Setup screen. You have the choice to set up Server 2003, repair an existing Server 2003 installation, or quit. The Press F3 to Quit option remains available throughout this phase of the setup; if at any time during this phase you decide you want to abort your setup attempt, this is your escape route. Upon exit, your system will be rebooted, but be aware that your boot.ini file will not have been changed. Next comes the Disk Partitioning and Installation Location Selection screen. Be careful here; there are two things to do. The most obvious is the selection of the partition in which you want Server 2003 installed: highlight the partition where you would like Server 2003 installed and press Enter. Let's take this a step further. Beneath this screen is a very handy disk-partitioning utility. From here, you can completely redo your partitioning scheme: you can delete existing partitions, create new partitions out of unpartitioned space, and format partitions with either the NTFS or the FAT file system.
Phase 2: Graphical setup. As soon as you boot into the graphical phase of the install, Server 2003 will run a Plug-and-Play detection phase to configure all your hardware.
This can take quite a while. Once the disk formatting, file copying, and Plug-and-Play detection are done, you can start answering the wizard's questions, such as:
- Plug and Play, Regional and Language, and Name and Organization screens
- Product Key and Windows Product Activation
- Licensing, Names, and Password
- Time and Time Zone
- Network Settings
- Computer account
After finishing Phase 2, the computer will reboot automatically and start Windows 2003 Server.
7.3.2.3 Post-installation procedure
After the installation is complete, there are still a few more steps to perform to finalize the server and prepare it for production:
- On the first reboot, the Manage Your Server page will pop up automatically. It will identify the last few steps that must be completed to configure your server, based on the additional network components you installed. It will also ask you some questions about your existing network to help you determine whether you want to install Active Directory. I personally find the Manage Your Server page unhelpful and tend to just close it, but that's just my opinion; try it if you like. I'll show you how to manage your server with as few of those wizards as possible!
- Check Device Manager for undetected or nonfunctioning hardware components. If you removed any hardware prior to the install due to conflicts, add it back in now. Before you are truly done with the install, every piece of hardware should work properly.
- Finalize your disk partitions. In many clean-install scenarios, you may have unpartitioned space left on your hard drive.
- For most new installations using TCP/IP, a DHCP address will be in effect. This may not be standard practice for production servers. If necessary, acquire and configure the appropriate static TCP/IP information.
- In many larger network environments, certain services, utilities, tools, or other programs are loaded on all servers. For example, some sites may use enterprise management tools that require an agent running on the server to collect information and pass it up to a management console. Most likely, some sort of backup software will need to be installed as well. Find out what additional software is needed and install it now.
- Run through the Control Panel applets to set all server configurations the way they should be for the long haul. Especially noteworthy are the System Control Panel settings for the page file and the maximum Registry size.
- At this point, you may get the urge to walk away. Hold on just a minute. Too many times, people make some last-minute changes, like the Control Panel settings, and leave it at that. Even though you were never told to reboot the system (your changes were instantly accepted), there may be some unexpected side effects the next time you reboot. Just in case, give it another reboot now, before your users begin counting on the server being available.
- If the system is a dual-boot machine, which is usually not the case on a server, boot into all operating systems to make sure system integrity is intact and all data is available from all required operating systems.
- Once the system itself is complete, create an automated recovery disk. As an extra safeguard, you may also want to run a full backup.
- Finally, a step we rarely perform: documenting the server. Ask yourself whether anyone else could take care of the server should you decide to take a week off for a golf vacation. If there are any special things you have to do, like restarting a service every day, they should be documented. This is a step you must take before you can consider your operating system "installed". See the chapter covering preparing for and recovering from server failures for more details.
7.4 Installing MS Windows 2003 Server components
Some components of MS Windows 2003 that we need for the Web Server are not installed by default after the OS installation is complete.
- Internet Information Services (IIS)
Internet Information Services (IIS) is really a suite of TCP/IP-based services all running on the same system. Although some of the services rely on shared components, they are functionally independent of one another. Just as an electrician has different tools for different jobs, IIS has different Internet capabilities to help meet different needs. With the release of Windows Server 2003, Microsoft has reached version 6 of Internet Information Services. The following sections briefly discuss some of the web application server functionality included with IIS 6:
- World Wide Web (HTTP) Server: IIS includes an HTTP server so that you can publish data to the World Wide Web quickly and easily. IIS's Web service is easily configurable and reliable, and it supports security and encryption to protect sensitive data. You can use IIS's Web service to host a Web site for your own domain or multiple domains, an intranet, and the Internet, and even allow users to pass through your IIS Web server to access HTML documents on machines within your organization.
- File Transfer Protocol (FTP) Server: Although the File Transfer Protocol is not the only way to send a file from one location to another, it is by far the most widely supported as far as the Internet is concerned. FTP was one of the original means of copying files from one location to another on the Internet, long before the days of graphical browsers, HTTP, and Web sites. Since the protocol has been around for so long, support is available on almost any platform, including midrange and mainframe systems that might not typically support HTTP. In IIS 6, the FTP service now includes support for individual user directories.
This feature can be used to permit access to private directories while preventing users from seeing or writing to directories other than their own.
- Network News (NNTP) Server: Sometimes referred to as Usenet, the Network News Transfer Protocol (NNTP) is worth mentioning simply because of the great functionality it provides. By using Internet standards (RFC 977), the NNTP service can be used as a means of maintaining a threaded conversation database on an IIS server, just like the Usenet groups on the Internet. Users with properly configured newsreader programs can navigate through and participate in these conversation databases. Although services like Google Groups have made Usenet better known, it still isn't as widely used as something like HTTP. That's unfortunate; NNTP represents a great cross-platform protocol for managing threaded conversation databases. Let's hope the inclusion of NNTP with IIS will increase the use of this capability.
- Email Services: Microsoft included an SMTP service with IIS version 5; however, it was not sufficient to act as a full-blown e-mail server for an organization. The SMTP service included with IIS 5 was only meant to support the other services within IIS, namely HTTP and NNTP. The SMTP service included with IIS 5 was missing an important component: a POP3 or IMAP service. POP3 or IMAP is the means by which clients retrieve their messages from their mailbox on a mail server. SMTP provides a "store-and-forward" service for mail but does not support individual user mailboxes.
Setting up the IIS service
Install the IIS service: Control Panel > Add or Remove Programs > Add/Remove Windows Components > select Application Server > click the Details… button > select Internet Information Services > click the Details… button > check Internet Information Services Manager > click the OK button twice > then click the Next button to begin the installation.
Insert the MS Windows 2003 Server CD that you used for the installation into the computer. Then wait until the IIS component has been installed successfully. See Figure 7.4.1.1.
Figure 7.4.1.1: IIS installation
Note: What we have done here is the minimum configuration. With IIS installed, we can keep the Default Web Site as the default site, so we do not need to create a new one.
- Domain Name System (DNS)
This section mostly covers system configuration; some concepts will be included, but not in detail. For more about DNS servers, we recommend reading Mastering Windows Server 2003, written by Mark Minasi, 2003.
7.4.2.1 DNS Fundamentals
DNS began in the early days of the Internet, when the Internet was a small network created by the Department of Defense for research purposes. Before DNS, computer names, or hostnames, were manually entered into a file located on a centrally administered server. Each site that needed to resolve hostnames had to download this file. As the number of computers on the Internet grew, so did the size of this HOSTS file, and the amount of traffic generated by downloading it. The need for a new system offering features such as scalability, decentralized administration, and support for various data types became more and more obvious. The Domain Name System (DNS), introduced in 1984, became this new system. With DNS, the hostnames reside in a database that can be distributed among multiple servers, decreasing the load on any one server and providing the ability to administer this naming system on a per-partition basis. DNS supports hierarchical names and allows registration of various data types in addition to the hostname-to-IP-address mapping used in HOSTS files. Because the DNS database is distributed, its size is unlimited, and performance does not degrade much as more servers are added.
7.4.2.2 What DNS Does
DNS translates between computer hostnames and IP addresses.
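The flat HOSTS-file mapping described above, which DNS replaced, can be sketched in a few lines. The file contents below are hypothetical (real HOSTS files live at /etc/hosts on Unix or under %SystemRoot%\system32\drivers\etc on Windows); the khzone.com entry reuses the example name and address from this chapter:

```python
# A hypothetical HOSTS file: each line is an IP address followed by one
# or more names, with '#' starting a comment.
HOSTS_TEXT = """
# sample entries
127.0.0.1       localhost
192.168.1.201   server1.khzone.com   server1
"""

def parse_hosts(text: str) -> dict:
    """Build a name -> IP lookup table, the way a HOSTS file is used."""
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            table[name.lower()] = ip
    return table

hosts = parse_hosts(HOSTS_TEXT)
print(hosts["server1.khzone.com"])  # -> 192.168.1.201
```

A scheme like this works for a handful of machines, but every site must copy the whole file and keep it current, which is exactly the scaling problem that motivated DNS.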
DNS works at the Application layer of the OSI reference model and uses TCP and UDP at the Transport layer. The DNS model is pretty plain: clients make requests ("what's the IP address for this name?") and get back answers ("64.233.183.147"). If a particular server can't answer a query, it can forward it to another, presumably better-informed, server.
7.4.2.3 Introduction to Domain Naming
The Domain Name System is composed of a distributed database of names that establishes a logical tree structure called the domain name space. Each node, or domain, in that space has a unique name. Therefore, bluesun.com and cambodia.bluesun.net are two different domains, and they can contain subdomains, such as sales.bluesun.com and pp.cambodia.bluesun.net. A domain name identifies the domain's position in the logical DNS hierarchy in relation to its parent domain by separating each branch of the tree with a period. Figure 7.4.2.1 shows the domain hierarchy, where the Microsoft domain fits, and a host called server1 within the khzone.com domain. If someone wanted to contact that host, they would use the fully qualified domain name (FQDN) server1.khzone.com. Each domain is associated with a DNS name server. In other words, for every domain registered in the DNS, there is some server that can give an authoritative answer to queries about that domain. For example, the chellis.net domain is handled by a name server at an Internet provider. This means that any resolver or name server can go straight to the source if it can't resolve a query by looking in its own cache.
Figure 7.4.2.1: The Public DNS Hierarchy
7.4.2.4 DNS and the Internet
You're undoubtedly familiar with how DNS works on the Internet; if you've ever sent or received Internet e-mail or browsed web pages on the Net, you've got firsthand experience using DNS. Internet DNS depends on a set of top-level domains that serve as the root of the DNS hierarchy.
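The query, forwarding, and caching behaviour described in section 7.4.2.2 can be modelled with a toy example. The class, the zone contents, and the two-server arrangement are all invented for illustration (the khzone.com entry reuses this chapter's example address; the other address is from a reserved documentation range):

```python
# Toy model of DNS query handling: answer from our own zone data if we
# can, otherwise from the cache, otherwise forward to another server.
class ToyDNSServer:
    def __init__(self, records, forwarder=None):
        self.records = dict(records)   # name -> IP for zones this server holds
        self.cache = {}                # answers learned from earlier queries
        self.forwarder = forwarder     # server to ask when we cannot answer

    def query(self, name):
        if name in self.records:
            return self.records[name]          # authoritative answer
        if name in self.cache:
            return self.cache[name]            # answer from cache
        if self.forwarder is not None:
            answer = self.forwarder.query(name)
            if answer is not None:
                self.cache[name] = answer      # remember for next time
            return answer
        return None                            # nobody knows this name

upstream = ToyDNSServer({"www.example.com": "203.0.113.80"})
local = ToyDNSServer({"server1.khzone.com": "192.168.1.201"}, forwarder=upstream)

print(local.query("server1.khzone.com"))  # answered from the local zone
print(local.query("www.example.com"))     # forwarded upstream, then cached
```

Real DNS resolution adds record types, time-to-live values, and a hierarchy of referrals, but the answer-or-forward pattern is the same.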
These top-level domains and their authoritative name servers are managed by the Internet Network Information Center. The top-level domains are organized in two ways: by organization and by country.
- Top-Level Domains: Below the root are the top-level domains. The ones we tend to think of most around the world are .com, .net, .info, and .org, as they have become sort of the worldwide "catch-all" domains. In addition, each country has its own top-level domain: the United States has .us, Canada has .ca, Cambodia has .kh, and so on. In November 2000, the Internet Corporation for Assigned Names and Numbers (ICANN) created several new top-level domains. Table 7.4.2.1 gives a sample of the top-level domains.
- Second-Level Domains: A second-level domain can be created with the permission of the owner of the parent domain. To create the second-level domain, the parent domain only has to do one thing: "delegate" name responsibility for the second-level domain to some machine. That is an important concept.
- Subdomains or Child Domains: From the second-level domains you can divide further into subdomains and delegate again. If your namespace is large enough, you may need to divide it, too. Typically, you divide the domain that corresponds to your network number into subdomains that correspond to your subnets. How that works depends on the type of network you have and on your network's subnet mask.
Note: On the Internet you can only create second-level domains and child domains; you cannot create a top-level domain (they are predefined).
Table 7.4.2.1: The common DNS top-level domain names. Source: Wikipedia.org
- Servers, Clients, and Resolvers: There are a few terms and concepts you will need to know before managing a DNS server. Understanding these terms will make it easier to understand how the Windows Server 2003 DNS server works:
- DNS servers: Any computer providing domain name services is a DNS server. That being said, not all DNS servers are alike.
Earlier implementations of DNS (for example, early versions of the popular Berkeley Internet Name Domain, or BIND) were originally developed for Unix, and they handled a fairly small and simple set of RFCs (Requests for Comments: official documents of the Internet Engineering Task Force (IETF) that specify the details of the protocols in the TCP/IP family). A secondary DNS server holds a read-only copy of the zone data, obtained from a primary server through zone transfers (discussed later, in the section "Zone Transfers"). The secondary DNS server can resolve queries from this read-only copy but cannot make changes or updates. A single DNS server may contain multiple primary and secondary zones (more on zones in a minute). Any DNS server implementation supporting Service Location resource records (see RFC 2052) and Dynamic Updates (RFC 2136) is sufficient to provide name service for Windows 2000 and newer computers. However, because Windows Server 2003 DNS is designed to take full advantage of the Windows Active Directory service, it is the recommended DNS server for any networked organization with a significant investment in Windows, or with extranet partners using Windows-based systems.
- Clients: A DNS client is any machine issuing queries to a DNS server. The client hostname may or may not be registered in a name server (DNS) database. Clients issue DNS requests through processes called resolvers.
7.4.2.5 DNS Server installation and configuration:
- Install the DNS component: To install the DNS component in MS Windows 2003 Server, go to Control Panel > Add or Remove Programs > Add/Remove Windows Components > select Networking Services > click the Details… button > check Domain Name System (DNS) > click the OK button. Insert the MS Windows 2003 Server CD that you used for the installation into the computer. Then wait until the DNS component has been installed successfully.
- Point the DNS server to itself: On the DNS server machine, configure the DNS client settings to point to the server itself.
This way, when the server wants to resolve a DNS query, it will ask itself to resolve the name.
- To do this, go to Network Connections, right-click the NIC's name (e.g. Local Area Connection), choose Properties, click the General tab, then click Internet Protocol (TCP/IP) and the Properties button, then click the radio button labeled Use the Following DNS Server Addresses and fill in the DNS server's IP address (e.g. 192.168.1.201).
- Configure the DNS server: Before we continue configuring the DNS server, let us talk briefly about the primary DNS suffix.
- First, right-click My Computer > choose Properties > choose the Computer Name tab > click the Change… button > click the More… button; the DNS Suffix dialog box will appear, as shown in Figure 7.4.2.2. As described above, we use DNS to translate between IP addresses and names so that they are easy for users to remember, so in this case we need to assign the DNS suffix for the server's IP address. Since we decided from the beginning to assign the DNS name khzone.com to the IP address 192.168.1.201, just type khzone.com in the box. This is the actual domain name that we have. Together, the NetBIOS computer name and the DNS domain suffix form the fully qualified domain name (FQDN), the full name of the machine (server1.khzone.com). If we did not configure this, the machine would not be part of khzone.com, and when we create a forward lookup zone we could not use khzone.com as the DNS name. Click OK and continue configuring the DNS server.
Figure 7.4.2.2: DNS Suffix and NetBIOS Computer Name
- Go to Administrative Tools > DNS (see Figure 7.4.2.3).
Figure 7.4.2.3: DNS server configuration
- Configure a new forward lookup primary zone for the DNS server, which maps names to IP addresses: right-click Forward Lookup Zones > choose New Zone… > click the Next button.
- As this is the first time we configure a DNS server on this machine, we need to choose Primary zone in the top option and then click the Next button. See Figure 7.4.2.4.
Figure 7.4.2.4: Choose the zone type - Since we decided on khzone.com as the DNS name, type khzone.com for the zone name as shown in Figure 7.4.2.5, then click Next, keep the default settings, and click Next > Next > Finish. Figure 7.4.2.5: Choose the DNS name khzone.com - The new primary zone has now been created. As we only need a basic setup for the web server, this configuration should be enough for our purposes. 7.4.3 MySQL 7.4.3.1 Introduction MySQL is one of several relational database management systems (RDBMSs) and is used on many types of websites, such as portals, e-commerce sites, and educational sites. Its main advantage is speed, an important factor for every developer. Although it offers fewer features than major commercial competitors like Oracle, it has enough for a large group of developers, and this leaner design makes MySQL easier to install and use; together with its price, this has made it more and more popular. (Valade, 2004) MySQL is developed, marketed, and supported by MySQL AB, a Swedish company. The company licenses it in two ways: - Open source software: MySQL is available via the GNU GPL (General Public License) for no charge. Anyone who can meet the requirements of the GPL can use the software for free. If you're using MySQL as a database on a Web site (the subject of this book), you can use MySQL for free, even if you're making money with your Web site. - Commercial license: MySQL is available with a commercial license for those who prefer it to the GPL. If a developer wants to use MySQL as part of a new software product and wants to sell the new product, rather than release it under the GPL, the developer needs to purchase a commercial license. The fee is very reasonable.
7.4.3.2 Advantages of MySQL With its speed, size, and price, MySQL is one of the most suitable databases for the majority of web developers. According to Valade (2004), this RDBMS has several advantages: - It's fast. The main goal of the folks who developed MySQL was speed. Consequently, the software was designed from the beginning with speed in mind. - It's inexpensive. MySQL is free under the open source GPL license, and the fee for a commercial license is very reasonable. - It's easy to use. You can build and interact with a MySQL database by using a few simple statements in the SQL language, which is the standard language for communicating with RDBMSs. - It can run on many operating systems. MySQL runs on a wide variety of operating systems: Windows, Linux, Mac OS, most varieties of Unix (including Solaris, AIX, and DEC Unix), FreeBSD, OS/2, Irix, and others. - Technical support is widely available. A large base of users provides free support via mailing lists. The MySQL developers also participate in the e-mail lists. You can also purchase technical support from MySQL AB for a very small fee. - It's secure. MySQL's flexible system of authorization allows granting some or all database privileges (for example, the privilege to create a database or delete data) to specific users or groups of users. Passwords are encrypted. - It supports large databases. MySQL handles databases up to 50 million rows or more. The default file size limit for a table is 4GB, but you can increase this (if your operating system can handle it) to a theoretical limit of 8 million terabytes (TB). - It's customizable. The open source GPL license allows programmers to modify the MySQL software to fit their own specific environments. 7.4.3.3 Setup and configuration of MySQL for Windows To install MySQL on Windows, follow these steps: - Download the latest version available for production use from the MySQL website. - Double-click the file you just downloaded to start the installation.
- When the first screen of the installation wizard appears, click Next. - Because we need to control which components are installed, click Custom, as shown in Figure 7.4.3.1, then Next > Next > Next > Install. When the MySQL Sign-Up dialog box appears, choose the Skip Sign-Up option and then click Next to continue to configuration. See Figure 7.4.3.2. Figure 7.4.3.1: Select the installation type Figure 7.4.3.2: Finish the setup and continue to configuration - Once the configuration wizard pops up, click Next. See Figure 7.4.3.3. Figure 7.4.3.3: Configure MySQL 5.0 for Windows Figure 7.4.3.4: Detailed configuration - Because we are sticking with the detailed configuration, keep that option selected. Figure 7.4.3.4. - Read the options and click the one that is best for you. If you install MySQL on a desktop machine for development tasks, the first option is fine; but since we are installing for a web server, option two is the best choice here, so choose option two as shown in Figure 7.4.3.5. Figure 7.4.3.5: Choose the server type Figure 7.4.3.6: Choose the storage technique - Choose Non-Transactional Database Only and click Next. Figure 7.4.3.6. - Choose the third option, change the number to 100, and then click Next. Figure 7.4.3.7. Figure 7.4.3.7: Choose the number of concurrent users Figure 7.4.3.8: Configure the port and security settings - As long as you don't have another copy of MySQL on your system, the default port number 3306 is fine. Keep Enable Strict Mode on, as shown in Figure 7.4.3.8, and click Next. - This step is a tricky setting. It would seem best to choose option two, UTF8, as the default character set, since it supports different languages. The problem is that PHP, up to and including version 5.1, does not have strong Unicode support built in, so most PHP code assumes MySQL uses the Latin1 character set to communicate. Therefore choose option one (Standard Character Set) and click Next. See Figure 7.4.3.9.
Figure 7.4.3.9: Choose the character set Figure 7.4.3.10: Configure MySQL 5.0 for Windows - Check Install As Windows Service. Even though MySQL can nowadays be managed through a graphical user interface, the command line is still needed for some configuration, so keep the option Include Bin Directory in Windows PATH on and click Next. Figure 7.4.3.10. - Disable the Modify Security Settings option, click Next, then Execute and Finish. After the configuration finishes, we need to restart the service to make MySQL work properly. See Figures 7.4.3.11 and 7.4.3.12. Figure 7.4.3.11: Disable the user and password Figure 7.4.3.12: Configure MySQL 5.0 for Windows - Once MySQL is set up and running, the first order of business is to set a username and password for security reasons. To do so, find the command line client: go to the Start menu > All Programs > MySQL > MySQL Server 5.0 > MySQL Command Line Client. The command line is the most primitive, but sometimes also the most effective, way to interact with a MySQL database. When the Command Line Client appears, it will ask for a password. As we have not set a password yet, just press Enter. - After pressing Enter, the MySQL prompt appears and waits for a command. Figure 7.4.3.13: Set up the username and password - To set a password for the root user, type the following command: mysql> UPDATE mysql.user SET Password=PASSWORD('mypass') WHERE User='root'; - Note that names in MySQL can be case sensitive, and every SQL statement must be terminated with a semicolon ";". When the command is correct, press Enter. - To make the new password take effect, we need to enter one more command: mysql> FLUSH PRIVILEGES; 7.4.4 Install PHP on IIS 7.4.4.1 Introduction PHP is a scripting language designed specifically for use on the Web.
It is a tool for creating dynamic Web pages, in use on over 13 million domains worldwide (according to the Netcraft survey, 2004), and growing every day. PHP was originally developed by Rasmus Lerdorf as a set of Personal Home Page tools. When it developed into a full-blown language, the name was changed to be more in line with its expanded functionality. The syntax of the language is similar to the syntax of C, which makes it easy for programmers already familiar with C to adapt to PHP. Furthermore, PHP was designed for web development, so its syntax does not need to be as strict as C's, which means it avoids some of the more difficult concepts of C. PHP syntax also does not include the low-level programming capabilities of C. PHP is particularly strong in its ability to interact with databases. PHP supports pretty much every database you've ever heard of (and some you haven't). PHP handles connecting to the database and communicating with it. The popularity of PHP is growing rapidly because of its many advantages: - It's fast. Because it is embedded in HTML code, the response time is short. - It's inexpensive (free, in fact). PHP is proof that free lunches do exist and that you can get more than you paid for. - It's easy to use. PHP contains many special features and functions needed to create dynamic Web pages. The PHP language is designed to be included easily in an HTML file. - It can run on many operating systems. It runs on a wide variety of operating systems: Windows, Linux, Mac OS, and most varieties of Unix. - Technical support is widely available. A large base of users provides free support via e-mail discussion lists. - It's secure. The user does not see the PHP code. - It's designed to support databases. PHP includes functionality designed to interact with specific databases. It relieves you of the need to know the technical details required to communicate with a database.
- It's customizable. The open source license allows programmers to modify the PHP software, adding or modifying features as needed to fit their own specific environments. 7.4.4.2 Install PHP on IIS 6.0 - Download the latest Windows binaries zip package from the PHP web site. - Extract the contents of the file you just downloaded to a directory, say c:\php. - Find the files called libmysql.dll and php5ts.dll in the directory you just extracted. - Copy those two files to the system32 folder of your system. - Find the file called php.ini-dist in the PHP directory you extracted and make a copy of it. Rename the copy from php.ini-dist to php.ini. - Move the php.ini file to your Windows directory. - Open the php.ini file with Notepad (the one in the Windows directory, not the one in the PHP directory). - Find the line extension_dir in your php.ini and change its value to the directory where you installed PHP, appending \ext. See Figure 7.4.4.1. This tells PHP where to find the library extensions that can be switched on and off to provide various functionality; one of these extensions allows PHP to communicate with a MySQL database. Figure 7.4.4.1: Configure the extension directory for PHP - Scroll down to find the line ;extension=php_mysql.dll and remove the semicolon ";" from the start of the line. Make a new line and type extension=php_mysqli.dll. - Finally, scroll down until you find the sessions options and the line session.save_path = "/tmp". This is where PHP stores temporary information about visitors to the site. Uncomment this line and point it to a convenient directory on your system. Save and close the file. Configure PHP for IIS 6.0 In this configuration we assume that IIS 6.0 is already installed on the system. - Open the IIS Manager console from Administrative Tools > navigate to the default web site and choose Properties. - When the Properties window appears, choose the ISAPI Filters tab and click Add… See Figure 7.4.4.2.
Figure 7.4.4.2: Configure the IIS properties for PHP - Type PHP for the filter name, and for the executable click Browse… - Go to your PHP directory, select the file php5isapi.dll, and click Open. Figure 7.4.4.3. Figure 7.4.4.3: Choose the ISAPI filter file for PHP - Now we have PHP loaded as a filter, so it can respond to requests to the server. But we still need to tell IIS which files should be handled by the PHP script engine. - Go to the Home Directory tab and click Configuration. See Figure 7.4.4.4. Figure 7.4.4.4: Configure the home directory - Click Add… to add a new application mapping; for the executable choose php5isapi.dll and for the extension type ".php", then click OK. Figure 7.4.4.5. Figure 7.4.4.5: Configure the PHP extension - Finally, go to the Documents tab. The Documents tab controls the file names that IIS searches for when the browser requests only a directory name on the server. To allow PHP scripts to serve as default documents, add index.php to the list and move it to the top if you work mostly with PHP, then click OK. We also need to restart the server for PHP to work successfully. Figure 7.4.4.6. Figure 7.4.4.6: Add index.php to IIS References: - This section is adapted from Valade, J. (2004), PHP & MySQL For Dummies, 2nd Edition. Indiana: Wiley Publishing, pages 12-14. - Meloni, J. (2007), Sams Teach Yourself PHP, MySQL and Apache: All in One, Third Edition. Indianapolis: Sams. Chapter on installing MySQL on Windows.
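The php.ini edits described above (uncommenting the extension lines) are plain text substitutions, so they can also be scripted. A minimal sketch in Python; the function name and sample content are hypothetical, not part of the original tutorial:

```python
def enable_extension(ini_text, ext_line):
    """Uncomment `;<ext_line>` if present; otherwise append <ext_line>."""
    out, found = [], False
    for line in ini_text.splitlines():
        if line.strip() == ";" + ext_line:
            line = ext_line  # drop the leading semicolon to enable it
            found = True
        out.append(line)
    if not found:
        out.append(ext_line)  # e.g. adding extension=php_mysqli.dll
    return "\n".join(out)

ini = ";extension=php_mysql.dll"
print(enable_extension(ini, "extension=php_mysql.dll"))
```

In a real script you would read the file, transform it, and write it back, but the transformation itself is exactly this.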
http://www.ukessays.com/essays/information-technology/system-design-development.php
== — Compares two values for equality. In the above conditional, a and b are first compared. If the indicated relation is true (a is equal to b), then the conditional expression has the value of v1; if the relation is false, the expression has the value of v2. (For convenience, a sole "=" will function as "==".)
Example 32. equals.csd
<CsoundSynthesizer>
<CsOptions>
; -o equals.wav -W ;;; for file output any platform
</CsOptions>
<CsInstruments>
sr = 44100
ksmps = 32
nchnls = 2
0dbfs = 1

instr 1
ienv = p4 ;choose envelope in score
if (ienv == 0) then
  kenv adsr 0.05, 0.05, 0.95, 0.05 ;sustained envelope
elseif (ienv == 1) then
  kenv adsr 0.5, 1, 0.5, 0.5 ;triangular envelope
elseif (ienv == 2) then
  kenv adsr 1, 1, 1, 0 ;ramp up
endif
aout vco2 .1, 110, 10
aout = aout * kenv
outs aout, aout
endin
</CsInstruments>
<CsScore>
i1 0 2 0
i1 3 2 1
i1 6 2 2
e
</CsScore>
</CsoundSynthesizer>
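The score passes p4 to instrument 1, and the == comparisons above select one of the three envelopes. The same branching can be sketched outside Csound in ordinary Python (an illustration of the conditional logic only, not real Csound code):

```python
def pick_envelope(ienv):
    # Mirror the instrument's if/elseif chain; each branch returns the
    # (attack, decay, sustain level, release) arguments given to adsr.
    if ienv == 0:
        return (0.05, 0.05, 0.95, 0.05)  # sustained envelope
    elif ienv == 1:
        return (0.5, 1, 0.5, 0.5)        # triangular envelope
    elif ienv == 2:
        return (1, 1, 1, 0)              # ramp up
    raise ValueError("unknown envelope index")

# The three score lines (p4 = 0, 1, 2) select these in turn:
print(pick_envelope(1))
```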
http://www.csounds.com/manual/html/equals.html
Odoo Help
About deprecated models in v7
In v7, I found that there are some models which are deprecated. I found three deprecated models in osv. This means that from now on we should not use osv.osv, osv.osv_memory and osv.osv_abstract. But I was surprised when I looked into the addons: osv.osv and osv.osv_memory are still used. So I am wondering why those models are deprecated?

Hello, osv and osv_memory are just aliases to Model and TransientModel. They are still here for backward compatibility and will be suppressed in the next release.
# deprecated - for backward compatibility.
osv = Model
osv_memory = TransientModel
osv_abstract = AbstractModel # ;-)
(in the osv code)

So in my scripts I should use Model.osv when defining a new model? class my_object(osv.osv): should be: class my_object(Model.osv):

Yes @patrick, but it is the other way around: you should use osv.Model instead of osv.osv and osv.TransientModel instead of osv.osv_memory. In fact, the recommended solution is:
from openerp.osv import orm, fields, ...
then do:
class X(orm.Model):
class Y(orm.TransientModel):

Yes. Instead of importing from osv or from tools, etc., we should use from openerp.osv or from openerp.tools, etc.

+1 for nice question.
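The snippet quoted from the osv module works because assigning a class to a new name in Python creates an alias, not a copy. A standalone sketch shows why old-style osv.osv declarations keep working (the class bodies are stand-ins, not the real OpenERP code):

```python
class Model:
    """Stand-in for openerp.osv.orm.Model (the new-style base class)."""

class TransientModel(Model):
    """Stand-in for the wizard base class."""

class AbstractModel(Model):
    """Stand-in for the abstract base class."""

# deprecated - for backward compatibility (the trick used in osv)
osv = Model
osv_memory = TransientModel
osv_abstract = AbstractModel

# Old-style declarations keep working because `osv` *is* `Model`:
class my_object(osv):
    pass

print(osv is Model)                  # True
print(issubclass(my_object, Model))  # True
```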
https://www.odoo.com/forum/help-1/question/about-deprecated-models-in-v7-8112
http://www.roseindia.net/tutorialhelp/comment/90543
Borislav Hadzhiev Last updated: Mar 6, 2022
To re-export values from another file in TypeScript, export the named exports as export {myFunction, myConstant} from './another-file' and the default export as export {default} from './another-file'. The values can then be imported from the file that re-exported them. Here is an example of a file that has 2 named exports and a default export.
// 👇️ named export
export function getEmployee() {
  return { id: 1, salary: 100 };
}

// 👇️ named export
export const greeting = 'hello world';

// 👇️ default export
export default function sum(a: number, b: number) {
  return a + b;
}
Here is how you would re-export the exported members of another-file.ts from a file called index.ts.
export { getEmployee, greeting, default } from './another-file';
The example above directly re-exports the 2 named exports and the default export. We haven't actually used getEmployee, greeting or the default export in the index.ts file, because we haven't imported them: we directly re-exported them. If you have to use the values in the file, you would also have to import them.
// 👇️ import (only if you need to use them in index.ts)
import sum, { getEmployee, greeting } from './another-file';

// 👇️ re-export
export { getEmployee, greeting, default } from './another-file';

console.log(sum(10, 15));
console.log(getEmployee());
console.log(greeting);
The syntax for re-exporting members of another module is:
// 👇️ re-export NAMED exports
export { getEmployee, greeting } from './another-file';

// 👇️ re-export DEFAULT export
export { default } from './another-file';
The two lines above can be combined into a single export statement only when re-exporting members of the same file; when the members live in different files, keep separate lines:
// 👇️ re-export NAMED exports
export { getEmployee, greeting } from './first-file';

// 👇️ re-export default export
export { default } from './second-file';
You could then import the re-exported members from the same module.
import sum, { getEmployee, greeting } from './index';

console.log(sum(100, 50));
console.log(getEmployee());
console.log(greeting);
The pattern you often see is to re-export members of different files from a file called index.ts. The name index.ts is important, because you don't have to explicitly specify the name index when importing. For example, assuming that third-file.ts and index.ts are located in the same directory, I could import from index.ts like so:
import sum, { getEmployee, greeting } from './'; // 👈️ implicit

console.log(sum(100, 50));
console.log(getEmployee());
console.log(greeting);
This is useful when you group your code in directories with descriptive names, because you would be importing from ./utils, rather than ./utils/index or ./utils/nested1, ./utils/nested2, etc. Many of the files you use might make use of multiple utility functions that have been extracted into separate files, and you might not want five lines of imports just for utility functions or constants. This is when re-exporting from an index.ts file comes in handy.
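For comparison, Python packages support the same re-export pattern through a package's __init__.py, which plays the role of index.ts. A minimal runnable sketch that builds a throwaway package on disk (the package and symbol names are made up for the example):

```python
import os
import sys
import tempfile

# Build a tiny package whose __init__.py re-exports a member of a
# submodule: the Python analogue of `export { ... } from './file'`.
base = tempfile.mkdtemp()
pkg_dir = os.path.join(base, "reexport_demo")
os.makedirs(pkg_dir)

with open(os.path.join(pkg_dir, "another_file.py"), "w") as f:
    f.write("GREETING = 'hello world'\n")

with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("from .another_file import GREETING\n")  # the re-export

sys.path.insert(0, base)
import reexport_demo  # import the package, not the submodule

print(reexport_demo.GREETING)  # hello world
```

Just as with index.ts, callers import from the package name and never need to know which submodule actually defines the value.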
https://bobbyhadz.com/blog/typescript-export-from-another-file
Consuming .NET Components from COM-Aware Clients: A Simple Tutorial Sometimes, it's useful to create COM components in C# and access them from unmanaged code (such as C++, VB, and JavaScript). The .NET runtime seamlessly allows unmanaged COM-aware clients to access .NET components through the COM interop. This ensures that COM-aware clients can talk to .NET components. In this article, we are going to have a C++ client talk to a C# component. using System; using System.Runtime.InteropServices; public interface IHelloDotNet { int GetAge(); } /* end interface IHelloDotNet */ [ClassInterface(ClassInterfaceType.None)] public class HelloDotNet : IHelloDotNet { public HelloDotNet() { } public int GetAge() { return 20; } } /* end class HelloDotNet */ Please note that the above is only one way to write components in C#. We could actually get .NET to auto-generate the interface for the class for us, but manually writing the interface and deriving the class from it is the best (if most tedious) method of creating C# components. Because COM interfaces are immutable, having interfaces auto-generated would lead to versioning problems; any class change would break unmanaged COM clients. The first thing to note in this C# component is the interop attribute ClassInterface. This is one of many interop service attributes. Note that we set the attribute to ClassInterfaceType.None. This means that no class interface is generated for the class; this is correct because we have manually created our own interface. Going back to having this interface automatically created, we would use ClassInterfaceType.AutoDual. This would create a dual interface for the class and also make typeinfo available to the type library. Now, to compile the component to a DLL, issue the following from a command prompt: csc /target:library /out:HelloDotNet.dll HelloDotNet.cs This generates a .NET assembly that, at the moment, COM-aware clients are clueless about.
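The pattern above (declare the contract as a separate, immutable interface and derive the class from it) is not specific to C#. A Python sketch of the same shape using abstract base classes; the names mirror the article's example, but the Python code is only an analogy, not part of the COM interop story:

```python
from abc import ABC, abstractmethod

class IHelloDotNet(ABC):
    """The contract, declared separately so it can stay immutable."""
    @abstractmethod
    def get_age(self) -> int:
        ...

class HelloDotNet(IHelloDotNet):
    """Concrete class deriving from the hand-written interface."""
    def get_age(self) -> int:
        return 20

print(HelloDotNet().get_age())  # 20
```

Clients program against the interface type, so the class can evolve internally without breaking callers, which is exactly the versioning argument made above for COM.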
We need to get some COM-friendly type information from it so that our C++ client will be happy to play around with it. What we need to do is take in a .NET assembly and generate a type library out of it so it is usable from a COM-aware client. The .NET framework provides a couple of tools to do this. We are going to use the RegAsm tool; this will register the assembly and generate the type library in one go. Type in the following:

regasm HelloDotNet.dll /tlb:HelloDotNet.tlb

Now, let's get down to consuming this .NET component! I wrote a quick C++ MFC dialog application to do this, so fire up Visual C++ and create an MFC dialog application, throw a button on the dialog, and create a handler for it. Enter the following code behind the button handler:

::CoInitialize(NULL);
HRESULT hr = spHelloNet.CreateInstance(__uuidof(HelloDotNet));
long Age = spHelloNet->GetAge();

Simple enough, but where do spHelloNet and HelloDotNet come from? Easy: the type library we generated for the .NET component. So, go to the top of your dialog code and enter the following:

#import "HelloDotNet.tlb" no_namespace

IHelloDotNetPtr spHelloNet = NULL;

The #import directive automatically generates the IHelloDotNetPtr smart pointer for us. Don't worry about it, just type it in and forget about it. Okay, almost done. What you need to do now is drop the type library file into your client source area and copy the HelloDotNet DLL into the executable area (for example, the debug or release folder). We have to do this because we have not made the component shareable; by this, I mean able to be seen by any application. At the moment it is a private component. Now, put a breakpoint on the line that reads spHelloNet->GetAge() and run the program. When you step over this line, the result should be 20! Voilà: we have called a .NET component from an unmanaged C++ application. Groovy!

GAC: Making Our Component Globally Available

I'm getting fed up with copying the type library and DLL across. How can I prevent this?
There is a way of registering the component for global use. Yet again, Microsoft has thought of everything. We want our .NET assembly to be used in multiple applications; thus, it must be registered in what is called the Global Assembly Cache (GAC). This applies not only to .NET components used from .NET, but also to .NET components used from COM. Before adding assemblies to the GAC, they need to be strongly named. That sounds strange?! A strong name consists of the assembly identity (name, version, and culture), plus a public key and digital signature. The strong name guarantees three things:

- Uniqueness
- Version protection
- Code integrity

The first of these is the important one. It assures us that no two assembly names will ever be the same. Okay, enough dribble. How do I create this strong name? We use the sn utility, an example of which is shown below:

sn -k HelloDotNet.snk

This generates a key pair and stores it in the file named HelloDotNet.snk. At this point, the assembly knows nothing about the strong name. Let us change that by adding the following line to the C# code:

[assembly:AssemblyKeyFile("HelloDotNet.snk")]

You may need to put the correct path to the .snk file in the above as well. Now, we need to install the assembly into the GAC. We are going to use yet another utility, called gacutil. Enter the following:

gacutil /i HelloDotNet.dll

That is it; the component is now global and can be used with any application! I hope this article has been of some use. I've tried not to go into any gritty details, just mainly the bare bones. Any comments and criticisms are welcome.

Comments

- Microsoft God (03/08/2013): "What a lot of crap! This is not a published paper, it's a forum!"
- JamesVespa (05/22/2008): "Great Article!"
- Bhagyashree (11/19/2003): "Cool Article. For ATL, how to sink .NET events?"
- Dejun Y. (08/23/2003): "For ATL, how to sink .NET events? Thanks."
- Martin (08/01/2003): "What about .NET Windows Forms controls? Can they be called?"
http://www.codeguru.com/csharp/csharp/cs_misc/com/article.php/c4263/Consuming-NET-Components-from-COMAware-ClientsmdashA-Simple-Tutorial.htm
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.

Hello there! I think I've found a bug in libstdc++, but I'm not sure. And I'm not sure how to isolate it. Could someone help me? What I am doing is basically this: I want to have non-blocking input from stdin. Of course, C++ doesn't make this possible, so I use a combination of POSIX and C++. What I do is this: I have a function, InputWaiting(), that returns true if there is unprocessed input on stdin, and false if there isn't:

bool InputWaiting()
{
    if (cin.rdbuf()->in_avail())
        return true;

    fd_set readfds;
    FD_ZERO(&readfds);
    FD_SET(fileno(stdin), &readfds);

    struct timeval tv;
    tv.tv_sec = 0;
    tv.tv_usec = 0;

    select(16, &readfds, 0, 0, &tv);
    int data = FD_ISSET(fileno(stdin), &readfds);
    return (data != 0);
}

The idea is that data is either waiting in cin's buffer, or in the stdin file of the OS, which is checked by select. As I see it, the data can't be anywhere else.

From another function, I check if input is waiting, and if it is, I fetch a whole line. If a whole line is not yet available, this will be blocking. But due to the nature of my application, the rest of the line will come quickly, so this is ok:

void ProcessOneInputLine()
{
    if (currentLineParsed)
        getline(cin, currentLine);
    // ParseInput returns false if the line couldn't be parsed, yet.
    // This means we have to abort what we are doing.
    currentLineParsed = ParseInput(currentLine);
}

void ProcessWaitingInput()
{
    while (!currentLineParsed || InputWaiting()) {
        ProcessOneInputLine();
        if (!currentLineParsed)
            // The line couldn't be parsed while we're busy, so we abort,
            // and the line gets processed elsewhere.
            abortSearch = true;
        if (abortSearch)
            return;
    }
}

This works on all sorts of platforms and compilers I've tried (Sun, Intel, gcc 2.95.x with its libstdc++2), but it hasn't worked since gcc 3.0 and libstdc++3. Is my code incorrect? Or does libstdc++3 have a bug?
What happens, apparently, is that sometimes, InputWaiting() will return false, even if there is input. And it will keep doing so from then on. It happens, I think, for instance when more than one line is sent at a time on stdin. For instance if this is coming on stdin: foo\nbar\n It doesn't happen when just foo\n is sent and then a moment later bar\n is sent. I've tried both cin.setf(ios::unitbuf), and cin.rdbuf()->setbuf(NULL,0) to make cin unbuffered. But this doesn't help. Any ideas? /David
http://gcc.gnu.org/ml/libstdc++/2002-09/msg00260.html
NAME

scalb, scalbf, scalbl - multiply floating-point number by integral power of radix (OBSOLETE)

SYNOPSIS

#include <math.h>

double scalb(double x, double exp);
float scalbf(float x, float exp);
long double scalbl(long double x, long double exp);

Link with -lm.

Feature test macro requirements for glibc (see feature_test_macros(7)):

scalb():
    /* Since glibc 2.19: */ _DEFAULT_SOURCE
    || /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE

scalbf(), scalbl():
    /* Since glibc 2.19: */ _DEFAULT_SOURCE
    || /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE

DESCRIPTION

These functions multiply their first argument x by FLT_RADIX (probably 2) to the power of exp, that is:

    x * FLT_RADIX ** exp

The definition of FLT_RADIX can be obtained by including <float.h>.

RETURN VALUE

On success, these functions return x * FLT_RADIX ** exp.

ERRORS

See math_error(7) for information on how to determine whether an error has occurred when calling these functions. On an error, errno is set to EDOM and an invalid floating-point exception (FE_INVALID) is raised.

BUGS

Before glibc 2.20, these functions did not set errno for domain and range errors.

SEE ALSO

ldexp(3), scalbln(3)

COLOPHON

This page is part of release 5.10 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://manpages.debian.org/unstable/manpages-dev/scalb.3.en.html
Closures in Ruby: Blocks, Procs and Lambdas

Jeff Kreeftmeijer

Blocks

In Ruby, blocks are snippets of code that can be created to be executed later. Blocks are passed to methods, which execute them between the do and end keywords. One of the many examples is the #each method, which loops over enumerable objects.

[1,2,3].each do |n|
  puts "#{n}!"
end

[1,2,3].each { |n| puts "#{n}!" } # the one-line equivalent

In this example, a block is passed to the Array#each method, which runs the block for each item in the array and prints it to the console.

def each
  i = 0
  while i < size
    yield at(i)
    i += 1
  end
end

In this simplified example of Array#each, yield is called in the while loop to execute the passed block for every item in the array. Note that this method has no arguments, as the block is passed to the method implicitly.

Implicit Blocks and the yield Keyword

A method can receive a block implicitly even if its signature doesn't mention one. If the method never calls yield, the passed block is simply ignored:

irb> "foo bar baz".split { p "block!" }
=> ["foo", "bar", "baz"]

If the called method does yield, the passed block is found and called with any arguments that were passed to the yield keyword.

def each
  return to_enum(:each) unless block_given?
  i = 0
  while i < size
    yield at(i)
    i += 1
  end
end

This example returns an instance of Enumerator unless a block is given. The yield and block_given? keywords find the block in the current scope. This allows passing blocks implicitly, but prevents the code from accessing the block directly as it's not stored in a variable.

Explicitly Passing Blocks

A block can also be received explicitly, by prefixing the last method parameter with an ampersand:

def each_explicit(&block)
  return to_enum(:each) unless block
  i = 0
  while i < size
    block.call at(i)
    i += 1
  end
end

When a block is passed like this and stored in a variable, it is automatically converted to a proc.

Procs

A "proc" is an instance of the Proc class, which holds a code block to be executed, and can be stored in a variable. To create a proc, you call Proc.new and pass it a block.

proc = Proc.new { |n| puts "#{n}!"
}

Since a proc can be stored in a variable, it can also be passed to a method just like a normal argument. In that case, we don't use the ampersand, as the proc is passed explicitly.

def run_proc_with_random_number(proc)
  proc.call(rand(10))
end

proc = Proc.new { |n| puts "#{n}!" }
run_proc_with_random_number(proc)

Instead of creating a proc and passing that to the method, you can use Ruby's ampersand parameter syntax that we saw earlier and use a block instead.

def run_proc_with_random_number(&proc)
  proc.call(rand(10))
end

run_proc_with_random_number { |n| puts "#{n}!" }

Ruby can also convert symbols to procs by calling #to_proc on them, which makes these three lines equivalent:

[1,2,3].map(&:to_s)
[1,2,3].map { |i| i.to_s }
[1,2,3].map { |i| i.send(:to_s) }

A simplified implementation of Symbol#to_proc shows how that works: the resulting proc sends the symbol itself as a message to its argument.

class Symbol
  def to_proc
    Proc.new { |i| i.send(self) }
  end
end

Lambdas

Lambdas are procs that behave like methods. Unlike a regular proc, a lambda enforces its arity, so calling a one-argument lambda without arguments throws an ArgumentError.

irb> lambda { |a| a }.call
ArgumentError: wrong number of arguments (given 0, expected 1)
	from (irb):8:in `block in irb_binding'
	from (irb):8
	from /Users/jeff/.asdf/installs/ruby/2.3.0/bin/irb:11:in `<main>'

The second difference is how the return keyword behaves.

def return_from_proc
  a = Proc.new { return 10 }.call
  puts "This will never be printed."
end

This function will yield control to the proc, so when it returns, the function returns. Calling the function in this example will never print the output and return 10.

def return_from_lambda
  a = lambda { return 10 }.call
  puts "The lambda returned #{a}, and this will be printed."
end

When using a lambda, it will be printed. Calling return in the lambda will behave like calling return in a method, so the a variable is populated with 10 and the line is printed to the console.

Blocks, procs and lambdas

Now that we've gone all the way into both blocks, procs and lambdas, let's zoom back out and summarize the comparison.

- Blocks are used extensively in Ruby for passing bits of code to functions. By using the yield keyword, a block can be implicitly passed without having to convert it to a proc.
- When using parameters prefixed with ampersands, passing a block to a method results in a proc in the method's context.
- Procs behave like blocks, but they can be stored in a variable.
- Lambdas are procs that behave like methods, meaning they enforce arity and return as methods instead of in their parent scope.
https://dev.to/appsignal/closures-in-ruby-blocks-procs-and-lambdas-2hk9
Zeeshan Sheikh wrote:
Here is the complete solution. Hope this helps.

Ryan Sykes wrote:
PS - if you have learnt about ternary operators, then this would be a more succinct way of expressing this part of your code:

if (integer % 2 == 0)
    return true;
else
    return false;

return (integer % 2 == 0) ? true : false;

Heather Hamrick wrote:
JOptionPane.showInputDialog("Enter integer, enter -1 to quit");

// >>> how do I call the boolean method so that it runs?
public static boolean isEven(int integer)
{
    String message1 = "";
    String message2 = "";
    while (integer != -1)
} // <<< error says it is "missing a return statement", I'm not sure what to do here.

Zeeshan Sheikh wrote:
As per your method signature, it should always return a value; in your case, use return true after the while block.

return true;

Heather Hamrick wrote:
so instead of doing "return true;" what kind of return should I use?

fred rosenberger wrote:
next...there will probably be some discussion about me using a returnVal variable. There is a lot of debate over whether a method should ever have more than one return statement. Many people feel there should only ever be one, and this is the idiom for doing that. I could have just as easily written the method without it, and done this:

...snip
if (integer % 2 == 0) {
    return true;
} else {
    return false;
}
...snip

fred rosenberger wrote:
public static boolean isEven(int myInteger)
{
    boolean returnVal = false; // this is the variable I will return, need to set it to something by default.
    return returnVal;
}

Jeff Verdegan wrote:
I'm gonna have to go ahead and disagree with you there. In this particular case you don't need to set it to something by default, and in general, you should not.

Heather Hamrick wrote:
its messed up... help

Heather Hamrick wrote:
Alrighttt, I have rewritten my entire program

Heather Hamrick wrote:
I have a while loop because that's what my professor has told us to do within our program, so I guess that should be in my main?
Heather Hamrick wrote:
the purpose of the loop is to have the program keep running until -1 is entered.

Heather Hamrick wrote:
Within the while loop should be an if/else statement that tests something like...

if (myAnswer == true)
    JOptionPane(print even)
else
    JOptionPane(print odd)

...I'm not sure what should precede the while loop? ...I guess I would say you would have to call the isEven

Heather Hamrick wrote:
myAnswer would come from calling the method as "boolean myAnswer;", and then you would have to call the method by doing "myAnswer = ???" to determine if true or false is returned. Is that how you get into the loop?
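Putting the thread's advice together, here is one way the finished program could look. This is a sketch, not Heather's actual assignment: the JOptionPane prompt is replaced by a hard-coded array (with -1 as the sentinel) so it runs without a GUI:

```java
// Sketch of the program discussed in the thread: a loop in main()
// keeps running until the sentinel -1 is entered, calling isEven()
// for each value. Input is hard-coded so the example runs headless.
public class Main {

    public static boolean isEven(int number) {
        return number % 2 == 0; // every code path returns a value
    }

    public static void main(String[] args) {
        int[] inputs = {4, 7, -1}; // stands in for user input
        for (int value : inputs) {
            if (value == -1) {
                break; // sentinel ends the loop
            }
            boolean myAnswer = isEven(value);
            if (myAnswer) {
                System.out.println(value + " is even");
            } else {
                System.out.println(value + " is odd");
            }
        }
    }
}
```

Because isEven() returns on every path, the "missing a return statement" error from the original post cannot occur, and main() owns the loop-until-minus-one logic, as fred and Jeff suggested.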
http://www.coderanch.com/t/568092/java/java/beginning-methods
HTML::Mason::Request - Mason Request Class

version 1.54

$m->abort (...)
$m->comp (...)
etc.

The methods Request->comp, Request->comp_exists, and Request->fetch_comp take a component path argument. Component paths are like URL paths, and always use a forward slash (/) as the separator, regardless of what your operating system uses.

The $m->cache API to use: either the Cache::Cache-based API or the CHI-based API.

File name used for dhandlers. Default is "dhandler". If this is set to an empty string ("") then dhandlers are turned off entirely.

The maximum recursion depth for the component stack, for the request stack, and for the inheritance stack. An error is signalled if the maximum is exceeded. Default is 32.

An array of plugins that will be called at various stages of request processing. Please see HTML::Mason::Plugin for details.

All of the above properties have standard accessor methods of the same name. In general, no arguments retrieves the value, and one argument sets and returns the value. For example:

my $max_recurse_level = $m->max_recurse;
$m->autoflush(1);

This method is syntactic sugar for calling clear_buffer() and then abort(). If you are aborting the request because of an error, you will often want to clear the buffer first so that any output generated up to that point is not sent to the client.

Returns the current base component. Here are the rules that determine base_comp as you move from component to component; initially, it is the requested component ($m->request_comp()). This may return nothing if the base component is not yet known, for example inside a plugin's start_request_hook() method, where we have created a request but it does not yet know anything about the component being called.

$m->cache returns a new cache object with a namespace specific to this component. The parameters to and return value from $m->cache differ depending on which data_cache_api you are using. Beyond that, cache_options may include any valid options to the new() method of the cache class, e.g.
for FileCache, valid options include default_expires_in and cache_depth. See HTML::Mason::Cache::BaseCache for information about the object returned from $m->cache. chi_root_class specifies the factory class that will be called to create cache objects. The default is 'CHI'. driver specifies the driver to use, for example Memory or FastMmap. The default is File in most cases, or Memory if the interpreter has no data directory. Beyond that, cache_options may include any valid options to the new() method of the driver, e.g. for the File driver, valid options include expires_in and depth. An expiration time may also be passed to $cache->set, e.g. '10 sec', '5 min', '2 hours'. See the DATA CACHING section of the developer's manual for more details on how to exercise finer control over caching.

A synonym for $m->callers(1), i.e. the component that called the currently executing component.

This method allows a component to call itself so that it can filter both its output and return values. It is fairly advanced; for most purposes the <%filter> tag will be sufficient and simpler. $m->call_self takes four arguments, all of them optional. $m->call_self is useful when the desired filtering would have to span multiple code sections and the main component body.

<%init>
my ($output, $error);
if ($m->call_self(\$output, undef, \$error)) {
    if ($error) {
        # check $error and do something with it
    }
    $m->print($output);
    return;
}
...

Clears the Mason output buffer. Any output sent before this line is discarded. Useful for handling error conditions that can only be detected in the middle of a request. clear_buffer is, of course, thwarted by flush_buffer.

Evaluates the content (passed between <&| comp &> and </&> tags) of the current component, and returns the resulting text. Returns undef if there is no content.

Returns true if the component was called with content (i.e. with <&| comp &> and </&> tags instead of a single <& comp &> tag).
This is generally better than checking the defined'ness of $m->content because it will not try to evaluate the content.

Returns the number of this request, which is unique for a given request and interpreter.

Returns the arguments passed to the current component. When called in scalar context, a hash reference is returned. When called in list context, a list of arguments (which may be assigned to a hash) is returned.

Returns the current component object.

Returns true or undef indicating whether the specified $err was generated by decline. If no $err was passed, uses $@.

Returns the current size of the component stack. The lowest possible value is 1, which indicates we are in the top-level component.

Given a comp_path, returns the corresponding component object or undef if no such component exists.

Returns the next component in the content wrapping chain, or undef if there is no next component. Usually called from an autohandler. See the autohandlers section of the developer's manual for usage and examples.

Returns a list of the remaining components in the content wrapping chain. Usually called from an autohandler. See the autohandlers section of the developer's manual for usage and examples.

Returns the contents of filename as a string. If filename is a relative path, Mason prepends the current component directory.

<%filter> blocks will process the output whenever the buffers are flushed. If autoflush is on, your data may be filtered in small pieces.

This class method returns the HTML::Mason::Request currently in use. If called when no Mason request is active it will return undef. If called inside a subrequest, it returns the subrequest object.

Returns the Interp object associated with this request.

Returns a Log::Any logger with a log category specific to the current component. The category for a component "/foo/bar" would be "HTML::Mason::Component::foo::bar".

A synonym for $m->print.
Returns the arguments originally passed to the top level component (see request_comp for definition). When called in scalar context, a hash reference is returned. When called in list context, a list of arguments (which may be assigned to a hash) is returned.

Returns the component originally called in the request. Without autohandlers, this is the same as the first component executed. With autohandlers, this is the component at the end of the $m->call_next chain.

Returns the current size of the request/subrequest stack. The lowest possible value is 1, which indicates we are in the top-level request. A value of 2 indicates we are inside a subrequest of the top-level request, and so on.

Like comp, but returns the component output as a string instead of printing it. (Think sprintf versus printf.) The component's return value is discarded.

This method creates a new subrequest with the specified top-level component and arguments, and executes it. This is most often used to perform an "internal redirect" to a new component such that autohandlers and dhandlers take effect.

Returns the interpreter's notion of the current time (deprecated).

These additional methods are available when running Mason with mod_perl and the ApacheHandler.

Returns the ApacheHandler object associated with this request.

Returns the Apache request object. This is also available in the global $r.

This additional method is available when running Mason with the CGIHandler module. Returns the Apache request emulation object, which is available as $r inside components. See the CGIHandler docs for more details.

This method is available when Mason is running under either the ApacheHandler or CGIHandler modules. Given a url, this generates a proper HTTP redirect for that URL. It uses $m->clear_and_abort to discard any output generated so far and then abort the request.

This software is copyright (c) 2012 by Jonathan Swartz. This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself.
http://search.cpan.org/~jswartz/HTML-Mason-1.54/lib/HTML/Mason/Request.pm
by Haardik How to build a blockchain network using Hyperledger Fabric and Composer A tutorial for new blockchain developers Before I begin, Hyperledger Fabric only runs on Unix-based operating systems. As a result, it will not be able to run on Windows and you’ll have restrictions on what you can do. I suggest setting up a virtual machine if you are running Windows before continuing. This article assumes some knowledge of Javascript. It isn’t a tutorial aimed at beginner programmers, but rather at programmers who are beginners in the blockchain space. What are we building? So, you want to build a blockchain application but have no idea where to start? Don’t worry. Through this tutorial, we will set up a trading cards network. Different Traders who own TradingCards of Baseball, Football, and Cricket players, will be able to trade cards among themselves. We’ll also set up a REST API server to allow client side software to interact with our business network. Finally, we will also generate an Angular 4 application which uses the REST API to interface with our network. You can find the full final code of what we are about to build on this Github repo Are you ready to get started? Table of Contents - Introduction to Hyperledger Fabric and related applications - Installing the prerequisites, tools, and a Fabric runtime - Creating and deploying our business network - Testing our business network - Generating a REST API server - Generating an Angular application which uses the REST API Introduction to Hyperledger Fabric and related applications. Hyperledger Composer is a set of Javascript based tools and scripts which simplify the creation of Hyperledger Fabric networks. Using these tools, we can generate a business network archive (BNA) for our network. 
Composer broadly covers these components: - Business Network Archive (BNA) - Composer Playground - Composer REST Server Business Network Archive — Composer allows us to package a few different files and generate an archive which can then be deployed onto a Fabric network. To generate this archive, we need: - Network Model — A definition of the resources present in the network. These resources include Assets, Participants, and Transactions. We will come back to these later. - Business Logic — Logic for the transaction functions - Access Control Limitations — Contains various rules which define the rights of different participants in the network. This includes, but is not limited to, defining what Assets the Participants can control. - Query File (optional) — A set of queries which can be run on the network. These can be thought of as similar to SQL queries. You can read more on queries here. Composer Playground is a web based user interface that we can use to model and test our business network. Playground is good for modelling simple Proofs of Concept, as it uses the browser’s local storage to simulate the blockchain network. However, if we are running a local Fabric runtime and have deployed a network to it, we can also access that using Playground. In this case, Playground isn’t simulating the network, it’s communicating with the local Fabric runtime directly. Composer REST Server is a tool which allows us to generate a REST API server based on our business network definition. This API can be used by client applications and allows us to integrate non-blockchain applications in the network. Installing the prerequisites, tools, and a Fabric runtime 1. Installing Prereqs Now that we have a high level understanding of what is needed to build these networks, we can start developing. Before we do that, though, we need to make sure we have the prerequisites installed on our system. An updated list can be found here. 
- Docker Engine and Docker Compose
- Node.js and NPM
- Git
- Python 2.7.x

2. Installing the development tools

Run the following commands in your Terminal, and make sure you're NOT using sudo when running npm commands:

npm install -g composer-cli composer-rest-server generator-hyperledger-composer yo

composer-cli is the only essential package. The rest aren't core components, but will turn out to be extremely useful over time. We will learn more about what each of these does as we come across it.

3. Installing a local Hyperledger Fabric runtime

Let's go through the commands and see what they mean. First, we make and enter a new directory. Then, we download and extract the tools required to install Hyperledger Fabric. We then specify the version of Fabric we want; at the time of writing we need 1.2, hence hlfv12. Then, we download the Fabric runtime and start it up. Finally, we generate a PeerAdmin card. Participants in a Fabric network can have business network cards, analogous to real-life business cards. As we mentioned before, Fabric is a base layer for private blockchains to build upon. The holder of the PeerAdmin business card has the authority to deploy, delete, and manage business networks on this Fabric runtime (aka YOU!).

Basically, what we did here was just download and start a local Fabric network. We can stop it using ./stopFabric.sh if we want to. At the end of our development session, we should run ./teardownFabric.sh

NOTE: This local runtime is meant to be frequently started, stopped, and torn down for development use. For a runtime with more persistent state, you'll want to deploy the network outside the dev environment. You can do this by running the network on Kubernetes or on managed platforms like IBM Blockchain. Still, you should go through this tutorial first to get an idea.

Creating and deploying our business network

Remember the packages yo and generator-hyperledger-composer we installed earlier?
yo provides us a generator ecosystem where generators are plugins which can be run with the yo command. This is used to set up boilerplate sample applications for various projects. generator-hyperledger-composer is the Yo generator we will be using as it contains specs to generate boilerplate business networks among other things. 1. Generating a business network Open terminal in a directory of choice and type yo hyperledger-composer You’ll be greeted with something similar to the above. Select Business Network and name it cards-trading-network as shown below: 2. Modeling our business network The first and most important step towards making a business network is identifying the resources present. We have four resource types in the modeling language: - Assets - Participants - Transactions - Events For our cards-trading-network , we will define an asset type TradingCard , a participant type Trader , a transaction TradeCard and an event TradeNotification. Go ahead and open the generated files in a code editor of choice. Open up org.example.biznet.cto which is the modeling file. Delete all the code present in it as we’re gonna rewrite it (except for the namespace declaration). This contains the specification for our asset TradingCard . All assets and participants need to have a unique identifier for them which we specify in the code, and in our case, it’s cardId Also, our asset has a GameType cardType property which is based off the enumerator defined below. Enums are used to specify a type which can have up to N possible values, but nothing else. In our example, no TradingCard can have a cardType other than Baseball, Football, or Cricket Now, to specify our Trader participant resource type, add the following code in the modeling file This is relatively simpler and quite easy to understand. We have a participant type Trader and they’re uniquely identified by their traderIds. 
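The embedded code listings from the original post did not survive extraction. Based on the description above, at this point the model file would look roughly like this (the extra String fields are illustrative guesses, not from the article):

```
namespace org.example.biznet

asset TradingCard identified by cardId {
  o String cardId
  o String description   // illustrative extra field
  o GameType cardType
  o Boolean forTrade
}

enum GameType {
  o Baseball
  o Football
  o Cricket
}

participant Trader identified by traderId {
  o String traderId
  o String firstName     // illustrative
  o String lastName      // illustrative
}
```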
Now, we need to add a reference from our TradingCards to their owner, so we know who each card belongs to. To do this, add the following line inside your TradingCard asset:

--> Trader owner

This is the first time we've used -->, and you must be wondering what it is. This is a relationship pointer. o and --> are how we differentiate between a resource's own properties and a relationship to another resource type. Since the owner is a Trader, which is a participant in the network, we want a reference to that Trader directly, and that's exactly what --> gives us. Finally, add the code in the modeling file which specifies what parameters will be required to make a transaction and to emit an event.

3. Adding logic for our transactions

To add logic behind the TradeCard function, we need a JavaScript logic file. Create a new directory named lib in your project's folder and create a new file named logic.js.

NOTE: The decorator in the comments above the transaction function is very important. Without the @param {org.example.biznet.TradeCard} trade decorator, the function has no idea which transaction from the modeling language the code refers to. Also, make sure the parameter name being passed (i.e. trade) is the one you're passing along in the function definition right after.

This code basically checks if the specified card has forTrade == true and updates the card's owner in that case. Then, it fires off the TradeNotification event for that card.

4. Defining permissions and access rules

Add a new rule in permissions.acl to give participants access to their resources. In production, you would want to be more strict with these access rules. You can read more about them here.

5. Generating a Business Network Archive (BNA)

Now that all the coding is done, it's time to make an archive file for our business network so we can deploy it on our local Fabric runtime.
To do this, open Terminal in your project directory and type this: composer archive create --sourceType dir --sourceName . This command tells Hyperledger Composer we want to build a BNA from a directory, which is our current root folder. NOTE: The BNA name and version come from the package.json file. When you add more code, you should change the version number there to deploy unique archives capable of upgrading existing business networks.

6. Install and Deploy the BNA file
We can install and deploy the network to our local Fabric runtime using the PeerAdmin user. To install the business network, type composer network install --archiveFile cards-trading-network@0.0.1.bna --card PeerAdmin@hlfv1 To deploy the business network, type composer network start --networkName cards-trading-network --networkVersion 0.0.1 --networkAdmin admin --networkAdminEnrollSecret adminpw --card PeerAdmin@hlfv1 --file cards-trading-admin.card The networkName and networkVersion must be the same as specified in your package.json, otherwise it won't work. --file takes the name of the file to be created for THIS network's business card. This card then needs to be imported to be usable by typing composer card import --file cards-trading-admin.card Amazing. We can now confirm that our network is up and running by typing composer network ping --card admin@cards-trading-network --card this time takes the admin card of the network we want to ping. If everything went well, you should see something similar to this:

Testing our Business Network
Now that our network is up and running on Fabric, we can start Composer Playground to interact with it. To do this, type composer-playground in Terminal, open it in your browser, and you should see something similar to this: Press Connect Now under the admin@cards-trading-network card. The Define page is where we can make changes to our code, deploy those changes to upgrade our network, and export business network archives.
Head over to the Test page from the top menu, and you'll see this: Select Trader from Participants, click on Create New Participant near the top right, and make a new Trader similar to this: Go ahead and make a couple more Traders. Here's what my three traders look like, with the names Haardik, John, and Tyrone. Now, let's make some Assets. Click on TradingCard from the left menu and press Create New Asset. Notice how the owner field is particularly interesting here, looking something like this: This is a relationship. This is what the --> means. We specify the exact resource type followed by their unique identifier and voila, we have a relationship pointer. Go ahead and finish making a TradingCard, something similar to this: Notice how the owner field points to Trader#1, aka Haardik for me. Go ahead and make a couple more cards, and enable a couple to have forTrade set to true. Notice how my Card#2 has forTrade == true? Now for the fun stuff, let's try trading cards :D Click on Submit Transaction on the left and make card point to TradingCard#2 and newOwner point to Trader#3 like this: Press Submit and take a look at your TradingCards; you'll see that Card#2 now has owner Trader#3 :D

Generating a REST API Server
Doing transactions with Playground is nice, but not optimal. We have to make client-side software that provides users a seamless experience; they don't even necessarily have to know about the underlying blockchain technology. To do so, we need a better way of interacting with our business network. Thankfully, we have the composer-rest-server module to help us with just that. Type composer-rest-server in your terminal, specify the card admin@cards-trading-network, select never use namespaces, and continue with the default options for the rest as follows: Open it in your browser and you'll be greeted with a documented version of an automatically generated REST API :D

Generating an Angular application which uses the REST API
Remember the yo hyperledger-composer generator?
It can do more than generate a business network. It can also create an Angular 4 application running against the REST API we created above. To create the Angular web application, type yo hyperledger-composer in your Terminal, select Angular, choose to connect to an existing business network with the card admin@cards-trading-network, and connect to an existing REST API as well. (Edit: Newer versions of the software may ask for the card file instead of just the name of the card.) This will go on to run npm install ; give it a minute, and once it's all done you'll be able to load up the app and be greeted with a page similar to this: (Edit: Newer versions of the software may require you to run npm install yourself and then run npm start.) You can now play with your network from this application directly, which communicates with the network through the REST server running on port 3000. Congratulations! You just set up your first blockchain business network using Hyperledger Fabric and Hyperledger Composer :D You can add more features to the cards trading network, such as setting prices on the cards and giving a balance to every Trader. You can also add more transactions which allow the Traders to toggle the value of forTrade . You can integrate this with non-blockchain applications and allow users to buy new cards which get added to their account, which they can then further trade on the network. The possibilities are endless; what will you make of them? Let me know in the comments :D

KNOWN BUG: Does your Angular web app not handle Transactions properly?
At the time of writing, the angular generator has an issue where the purple Invoke button on the Transactions page doesn't do anything. To fix this, we need to make a few changes to the generated angular app.

1. Get a modal to open when you press the button
The first change we need to make is to have the button open the modal window.
The code already contains the required modal window; the button is just missing the (click) and data-target attributes. To resolve this, open up /cards-trading-angular-app/src/app/TradeCard/TradeCard.component.html The file name can vary based on your transaction name. If you have multiple transactions in your business network, you'll have to make this change across all the transaction resource type HTML files. Scroll down till the very end and you shall see a <button> tag. Go ahead and add these two attributes to that tag: (click)="resetForm();" data-target="#addTransactionModal" so the line looks like this: <button type="button" class="btn btn-primary invokeTransactionBtn" data-toggle="modal" (click)="resetForm();" data-target="#addTransactionModal">Invoke</button> The (click) attribute calls resetForm(); which sets all the input fields to empty, and data-target specifies the modal window to be opened upon click. Save the file, open your browser, and try pressing the Invoke button. It should open this modal:

2. Removing unnecessary fields
Just getting the modal to open isn't enough. We can see it requests transactionId and timestamp from us even though we didn't add those fields in our modeling file. Our network stores these values, which are intrinsic to all transactions, so it should be able to figure out these values on its own. And as it turns out, it actually does. These are spare fields and we can just comment them out; the REST API will handle the rest for us. In the same file, scroll up to find the input fields and comment out the divs responsible for those input fields inside addTransactionModal Save your file, open your browser, and press Invoke. You should see this: You can now create transactions here by passing data in these fields. Since card and newOwner are relationships to other resources, we can do a transaction like this: Press Confirm, go back to the Assets page, and you will see that TradingCard#2 now belongs to Trader#1: Congratulations!
You have successfully built and deployed a blockchain business network on Hyperledger Fabric. You also generated a REST API server for that network and learnt how to make web apps which interact with that API. If you have any questions or doubts, drop them in the comments and I will get back to you.
https://www.freecodecamp.org/news/how-to-build-a-blockchain-network-using-hyperledger-fabric-and-composer-e06644ff801d/
In this post I'll show what dependency inversion looks like in the context of a dynamically typed language like Python. But first, I'm going to introduce the concept of dependency inversion and what it means in a statically typed language like Java so that we can see the differences between the two types of languages.

Dependency Inversion
According to Wikipedia, the dependency inversion principle states that:
- High-level modules should not depend on low-level modules. Both should depend on abstractions (e.g. interfaces).
- Abstractions should not depend on details. Details (concrete implementations) should depend on abstractions.
Put another way, it's saying that if a component needs to be able to switch specific implementations (details), it should not have a source code dependency on those implementations. By removing the source code dependency and replacing it with a dependency on an interface, you are "inverting" the dependency between the high level component and the low level details.

Java
In Java, this is an example without dependency inversion.

class Cat {
    public void greet() {
        AngryGreeter greeter = new AngryGreeter();
        greeter.greet();
    }
}

The Cat class is our component here. It is directly referencing the class AngryGreeter, which contains the specific details of greeting. In many cases, this source code dependency runs in the same direction as the flow of control. This isn't too surprising, because if Cat needs to invoke methods on instances of AngryGreeter, it needs to have a reference to a greeter object, and the most common way to do that is to just use the new keyword to construct the object where it's needed. The consequence of this relationship through construction is that it creates tight coupling between the two modules. More specifically, our Cat depends on AngryGreeter. If there is no need to change greeting behavior at all, this relationship is perfectly fine. If we want to add new behaviors, we would have to modify Cat every time.
In order to do that, we need to:
- Introduce an interface that concrete greeters implement
- Pass concrete greeters as arguments into Cat (dependency injection)

Here's what dependency inversion looks like in Java for this simple example:

public interface Greeter {
    void greet();
}

class AngryGreeter implements Greeter {
    public void greet() {
        System.out.println("YOWL!");
    }
}

class Cat {
    private final Greeter greeter;

    public Cat(Greeter greeter) {
        this.greeter = greeter;
    }

    public void greet() {
        this.greeter.greet();
    }
}

Now, rather than Cat depending directly on AngryGreeter, both Cat and AngryGreeter depend on the interface Greeter. What we have now is a dependency that points in the opposite direction of the control flow; hence it's inverted. Now, we can add new greeters without ever touching the Cat component. Previously, Cat had a direct, hard coded reference to AngryGreeter. Now it has a direct reference to the interface instead, which will only change when the greeting API changes (and not when new greeting behavior that uses the same API is added).

Python
Now let's look at the first example implemented in Python:

class AngryGreeter:
    def greet(self):
        print("YOWL!")

class Cat:
    def greet(self):
        greeter = AngryGreeter()
        greeter.greet()

Since Python is dynamically typed, there is no need to declare an interface in the source code the same way as in Java. At run time, objects either can do something or they can't. So if we want to invert the dependencies, we typically just make the name of the instance something generic and make it an argument:

class AngryGreeter:
    def greet(self):
        print("YOWL!")

class Cat:
    def greet(self, greeter):
        greeter.greet()

While this is really easy to do in Python (or really any dynamically typed language), there is one glaring drawback when we do this: we don't know exactly what the interface is without looking at what methods are actually being called.
So while inverting dependencies is easy because interfaces are implicit, the implicitness of interfaces means that:
- Cat may crash if given a greeter object that does not implement the expected interface (a missing method, for example)
- You need to spend a lot more time reading existing source code (mostly existing concrete classes) in order to infer what the interface is

Abstract Base Classes
Python 3 introduced ABC classes to solve these problems (these are more akin to Java's abstract classes than its interfaces, since they can contain implementation).

import abc

class Greeter(abc.ABC):
    @abc.abstractmethod
    def greet(self):
        pass

class AngryGreeter(Greeter):
    def greet(self):
        print("YOWL!")

class HappyGreeter(Greeter):
    pass

class Cat:
    def greet(self, greeter):
        greeter.greet()

if __name__ == "__main__":
    c = Cat()
    c.greet(AngryGreeter())
    c.greet(HappyGreeter())

Now, you'll get an exception when HappyGreeter is instantiated without the greet method. In terms of understanding what methods are expected, we don't have to go digging through multiple classes - we can just look at the interface that the greeters implement.
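To see the difference concretely, here is a self-contained rerun of the classes above. The failure now happens at instantiation time, as a TypeError, rather than as an AttributeError deep inside Cat.greet:

```python
import abc

class Greeter(abc.ABC):
    @abc.abstractmethod
    def greet(self):
        ...

class AngryGreeter(Greeter):
    def greet(self):
        print("YOWL!")

class HappyGreeter(Greeter):
    pass  # forgot to implement greet()

AngryGreeter().greet()  # complete subclass: works fine

try:
    HappyGreeter()      # incomplete subclass: rejected before any call
except TypeError as err:
    print("rejected:", err)
```

Because the check runs when the object is constructed, the bad greeter never even reaches Cat.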
https://www.linisnil.com/articles/python-dependency-inversion-principle/
While the JDO2 spec introduced being able to specify a 1-1 relation as having the related object as embedded, it doesn't appear to handle how to persist inherited embedded objects.

public class A { @Embedded B b; }
public class B {...}
public class C extends B {...}

So we embed an object that may be a B or may be a C. Obviously, thinking in an RDBMS context, it would be desirable to have a discriminator column stored for the "b" field so we can extract the right type when reading it. The only problem is that <embedded> doesn't allow specification of such a thing (obviously we could default to some column name, but it would be nice to specify it). I'd propose adding <discriminator> as a subelement of <embedded> so column name etc. can be defined (and something equivalent for annotations). -- Andy DataNucleus
http://mail-archives.apache.org/mod_mbox/db-jdo-dev/201111.mbox/%3C201111211149.16414.andy@datanucleus.org%3E
Today I soldered a PIR sensor to my Pi! Basically, I want it to detect movement and turn on an LCD screen, then turn the screen off again after a minute of no movement. So when I walk into a room, the screen turns on, and when I leave, the screen turns off.

Equipment
- My Pi (See earlier posts)
- PIR Sensor ($10)
- Soldering Iron (Aoyue 937+ is about $63 on Amazon)
- Solder ($8.16 Amazon Prime)

Solder
First thing, I looked up the pinout for the Raspberry Pi. The below diagram comes from elinux.org. We care about one of the 5V pins, ground, and GPIO25.
- Solder the sensor's red cable to either 5V.
- Solder the black cable to ground.
- End by soldering the yellow line to GPIO25.
Your results should be similar to my picture below.

Next, I used this guy's pir.py script. The script requires the Python library RPi.GPIO. I installed this by downloading the library from here; the direct link is here. To untar or unzip the file I used the following command:

tar -xvf RPi.GPIO-0.5.4.tar.gz

Before installing it, make sure you have python-dev installed.

apt-get install python-dev

With that necessary package, install RPi.GPIO.

cd RPi.GPIO-0.5.4
python setup.py install

Now you can run the pir.py script. I made some slight changes to his code. I didn't feel the need to call separate scripts to run a single command, so I made the following edits. I changed

import subprocess

to

import os

and

def turn_on():
    subprocess.call("sh /home/pi/photoframe/monitor_on.sh", shell=True)

def turn_off():
    subprocess.call("sh /home/pi/photoframe/monitor_off.sh", shell=True)

to

def turn_on():
    os.system("chvt 2")

def turn_off():
    os.system("chvt 2")

Run the script and test it out! The screen will turn off after a minute of no movement and on again once the sensor detects something. I ended by setting my script to run on startup.
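For reference, the detect-and-timeout behaviour described above can be sketched hardware-free by injecting the sensor read and screen commands. The function and parameter names here are mine, not from the original pir.py, so treat this as an illustration of the logic rather than the actual script:

```python
import time

def run_motion_loop(read_sensor, screen_on, screen_off,
                    timeout=60.0, poll=0.5,
                    clock=time.monotonic, sleep=time.sleep,
                    max_iterations=None):
    """Turn the screen on when motion is seen and off after
    `timeout` seconds of stillness. All I/O is injected, so the
    timing logic can run (and be tested) without a Pi."""
    last_motion = clock()
    screen_is_on = True  # assume the screen starts on
    iterations = 0
    while max_iterations is None or iterations < max_iterations:
        iterations += 1
        if read_sensor():                 # PIR reports movement
            last_motion = clock()
            if not screen_is_on:
                screen_on()
                screen_is_on = True
        elif screen_is_on and clock() - last_motion >= timeout:
            screen_off()
            screen_is_on = False
        sleep(poll)

if __name__ == "__main__":
    # Simulated run: one blip of motion, then stillness.
    log = []
    now = [0.0]
    readings = iter([1, 0, 0, 0, 0, 0])
    run_motion_loop(lambda: next(readings, 0),
                    lambda: log.append("screen on"),
                    lambda: log.append("screen off"),
                    timeout=1.0, poll=0.5,
                    clock=lambda: now[0],
                    sleep=lambda d: now.__setitem__(0, now[0] + d),
                    max_iterations=6)
    print(log)  # → ['screen off']
```

On the Pi, you would pass real callables, roughly run_motion_loop(lambda: GPIO.input(25), turn_on, turn_off) after GPIO.setmode(GPIO.BCM) and GPIO.setup(25, GPIO.IN), with turn_on/turn_off issuing the chvt commands above.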
I need to put a picture in the frame to act as background to the pi…

3 Replies to "PIR Sensor on the Pi"

can you please help i am lost: python setup.py install >> after installing, which file do i edit to make it look similar to your code, and how do i add it to run on start up? Thank You

where can i find the pir.py file

I used this guy's script as a base with the adjustments shown in this post. Sorry for the super late reply!
http://somethingk.com/main/?p=741
Using Timer Queues

The following example creates a timer routine that will be executed by a thread from a timer queue after a 10 second delay. First, the code uses the CreateEvent function to create an event object that is signaled when the timer-queue thread completes. Then it creates a timer queue and a timer-queue timer, using the CreateTimerQueue and CreateTimerQueueTimer functions, respectively. The code uses the WaitForSingleObject function to determine when the timer routine has completed. Finally, the code calls DeleteTimerQueue to clean up. For more information on the timer routine, see WaitOrTimerCallback.

#include <windows.h>
#include <stdio.h>

HANDLE gDoneEvent;

VOID CALLBACK TimerRoutine(PVOID lpParam, BOOLEAN TimerOrWaitFired)
{
    if (lpParam == NULL)
    {
        printf("TimerRoutine lpParam is NULL\n");
    }
    else
    {
        // lpParam points to the argument; in this case it is an int
        printf("Timer routine called. Parameter is %d.\n", *(int*)lpParam);
        if(TimerOrWaitFired)
        {
            printf("The wait timed out.\n");
        }
        else
        {
            printf("The wait event was signaled.\n");
        }
    }
    SetEvent(gDoneEvent);
}

int main()
{
    HANDLE hTimer = NULL;
    HANDLE hTimerQueue = NULL;
    int arg = 123;

    // Use an event object to track the TimerRoutine execution
    gDoneEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    if (NULL == gDoneEvent)
    {
        printf("CreateEvent failed (%d)\n", GetLastError());
        return 1;
    }

    // Create the timer queue.
    hTimerQueue = CreateTimerQueue();
    if (NULL == hTimerQueue)
    {
        printf("CreateTimerQueue failed (%d)\n", GetLastError());
        return 2;
    }

    // Set a timer to call the timer routine in 10 seconds.
    if (!CreateTimerQueueTimer( &hTimer, hTimerQueue,
            (WAITORTIMERCALLBACK)TimerRoutine, &arg , 10000, 0, 0))
    {
        printf("CreateTimerQueueTimer failed (%d)\n", GetLastError());
        return 3;
    }

    // TODO: Do other useful work here
    printf("Call timer routine in 10 seconds...\n");

    // Wait for the timer-queue thread to complete using an event
    // object. The thread will signal the event at that time.
    if (WaitForSingleObject(gDoneEvent, INFINITE) != WAIT_OBJECT_0)
        printf("WaitForSingleObject failed (%d)\n", GetLastError());

    CloseHandle(gDoneEvent);

    // Delete all timers in the timer queue.
    if (!DeleteTimerQueue(hTimerQueue))
        printf("DeleteTimerQueue failed (%d)\n", GetLastError());

    return 0;
}
https://msdn.microsoft.com/en-us/library/windows/desktop/ms687003(v=vs.85).aspx
Java tutorial program to print a rectangle using a special character like star, dollar, etc.: In this tutorial, we will learn how to print a rectangle in Java using any special character. For example, take the below rectangle:

$$$$$$
$    $
$    $
$    $
$    $
$    $
$$$$$$

The height of this rectangle is 7 and the width is 6. Also, we are using '$' here, but we can use any character to print. The algorithm we are going to use in this example is as below:

Algorithm:
- Take the height and width of the rectangle from the user.
- Also, take the character the user wants to print the rectangle with.
- Run one 'for' loop. This will run as many times as the height of the rectangle.
- On each run of this 'for' loop, run one inner loop. This inner loop will run as many times as the width.
- For the first run of the outer loop, print the character across the whole width, because this will be the first row of the rectangle.
- For the second to (height - 1)th runs of the outer loop, print only the first and the last element of each row as the character.
- For the last run of the outer loop, print characters the same as in the first run, because the last row will also contain a full row of characters.
Let’s take a look into the example program below for better understanding : Java Program : import java.util.Scanner; public class Main { /** * Utility function to print */ private static void println(String str) { System.out.println(str); } private static void print(String str) { System.out.print(str); } private static void printRectangle(int height, int width, String c) { for (int i = 0; i < height; i++) { if (i == 0 || i == height - 1) { //for first line and last line , print the full line for (int j = 0; j < width; j++) { print(c); } println(""); //enter a new line } else { //else for (int j = 0; j < width; j++) { if (j == 0 || j == width - 1) { //print only the first and last element as the character print(c); } else { //else print only blank space for the inner elements print(" "); } } println(""); //enter a new line } } } public static void main(String[] args) { Scanner scanner = new Scanner(System.in); print("Enter the height of the rectangle : "); int height = scanner.nextInt(); print("Enter the width of the rectangle : "); int width = scanner.nextInt(); print("Enter the character you want to print the rectangle : "); String c = scanner.next(); printRectangle(height, width, c); } } Sample Output : Enter the height of the rectangle : 7 Enter the width of the rectangle : 6 Enter the character you want to print the rectangle : $ $$$$$$ $ $ $ $ $ $ $ $ $ $ $$$$$$ Similar tutorials : - Java program to find the counts of each character in a String - Java program to print a square using any character - Java Program to find the last non repeating character of a string - Java program to count the occurrence of each character in a string - Java program to swap first and last character of a string - 4 different ways to Sort String characters Alphabetically in Java
https://www.codevscolor.com/java-program-to-print-rectangle-using-character
I have a project to have a file read using argc and argv, then sort it and do some other things. I'm having trouble with the very first step: loading the file. This is what I have so far. Any help would be great.

#include <stdio.h>
#include <stdbool.h>

void openFile(int argc, char *argv[]);

int main(int argc, char *argv[])
{
    openFile(argc, argv);
    printf("\n\n");
    return 0;
}

void openFile(int argc, char *argv[])
{
    int i;

    printf("\nThe number of arguments is %d", argc);
    printf("\nThe name of the program is %s", argv[0]);
    for ( i = 1; i < argc; i++)
        printf("\nUser value No. %d: %s", i, argv[i]);
}
https://www.daniweb.com/programming/software-development/threads/327524/reading-files-with-argc-argv
LOCKF(3) OpenBSD Programmer's Manual LOCKF(3)

NAME
     lockf - record locking on files

SYNOPSIS
     #include <unistd.h>

     int lockf(int filedes, int function, off_t size);

DESCRIPTION
     The lockf() function allows sections of a file to be locked with advisory-mode locks. Calls to lockf() from other processes which attempt to lock the locked file section will either return an error value or block until the section becomes unlocked. All the locks for a process are removed when the process terminates.

     The argument function is a control value which specifies the action to be taken; the permissible values are F_ULOCK (unlock locked sections), F_LOCK (lock a section for exclusive use), F_TLOCK (test and lock a section for exclusive use), and F_TEST (test a section for locks by other processes).

     The section to be locked or unlocked starts at the current offset in the file and extends forward for a positive size or backward for a negative size (the preceding bytes up to but not including the current offset). The F_LOCK and F_TLOCK requests differ only by the action taken if the section is not available: F_LOCK blocks the calling process until the section is available, while F_TLOCK makes the function fail if the section is already locked by another process. File locks are released on first close by the locking process of any file descriptor for the file.

     F_ULOCK requests release (wholly or in part) one or more locked sections controlled by the process. Locked sections will be unlocked starting at the current file offset through size bytes, or to the end of the file if size is 0. A request whose section extends to the maximum file offset, when the process holds a lock of size 0 that includes the last byte of the requested section, is treated as a request to unlock from the start of the requested section with a size equal to 0. Otherwise an F_ULOCK request will attempt to unlock only the requested section.

     A potential for deadlock occurs if a process controlling a locked region is put to sleep by attempting to lock the locked region of another process.

RETURN VALUES
     Upon successful completion, a value of 0 is returned. Otherwise, a value of -1 is returned and the global variable errno is set to indicate the error.

SEE ALSO
     fcntl(2), flock(2)

STANDARDS
     The lockf() function conforms to X/Open Portability Guide Issue 4.2 (``XPG4.2'').

OpenBSD 2.6                      December 19, 1997
http://www.rocketaware.com/man/man3/lockf.3.htm
wfspy is a tool that helps you to view properties of any Windows Forms control in the system. I originally needed a small utility that would give me the managed control type name and the assembly name from a window handle, but gradually the tool became sophisticated enough to show all properties of a managed control (and optionally modify them). The only feature currently missing is the spying of events, which I plan to add later on. wfspy uses Windows hooks to do its job. There are 3 assemblies in the project. The tool itself is pretty simple. The main window shows all the managed windows and their hierarchy, with the desktop window as the root. Any unmanaged window which is not a parent of a managed window, directly or indirectly, is not shown in the tree. The managed windows are shown using a slightly different icon. You can view the properties of a managed window by clicking on the Details button in the main form. This brings up the dialog box with the property grid as shown in the second screenshot. The properties can even be modified in the grid. The next few sections discuss some important aspects of the implementation.

To enumerate managed windows, the standard Win32 API function EnumChildWindows is used. Unmanaged windows are filtered out from the tree if they don't parent any managed windows. In order to find out whether a window is managed or not, its class name is inspected. Any managed window that derives from System.Windows.Forms.Control has a class name of the form WindowsForms10.<character sequence>.app<hash code of appdomain>. The character sequence in the middle is the class name of the window that is being superclassed, like Button, SysListView32, etc. The following code determines whether a class name is managed or not.
private static Regex classNameRegex =
    new Regex(@"WindowsForms10\..*\.app[\da-eA-E]*$", RegexOptions.Singleline);

public static bool IsDotNetWindow(string className)
{
    Match match = classNameRegex.Match(className);
    return (match.Success);
}

The technique to inject a .NET assembly into another process is slightly tricky: unlike functions in a regular Win32 DLL, .NET functions are compiled into native code at run time, so their addresses are not static. This problem can be overcome by using managed C++, which allows managed global functions to be exported from an assembly. The exported function is actually a thunk that, at runtime, points to the code generated by the JIT compiler. Thus the exported function can be used as a hook procedure. That function can be used to load another assembly. I will cover this technique in detail in another article.

Given an HWND, the managed Control object can be found with the Control.FromHandle method. The properties of the control object can then be viewed in a property grid. There is a problem here: the property grid should be created in the process where the control object belongs. Luckily, Windows allows a process to create child windows of a parent window belonging to another process. This technique is used to create the property grid from the target process, as a child of a form in the wfspy process. In order to do this, the property grid control is placed in a user control. The CreateParams property of the user control is overridden.
protected override CreateParams CreateParams
{
    get
    {
        System.Windows.Forms.CreateParams cp = base.CreateParams;
        cp.Parent = parentWindow;
        RECT rc = new RECT();
        UnmanagedMethods.GetClientRect(parentWindow, ref rc);
        cp.X = rc.left;
        cp.Y = rc.top;
        cp.Width = rc.right - rc.left;
        cp.Height = rc.bottom - rc.top;
        return cp;
    }
}

The parentWindow is actually the handle of a form belonging to the wfspy process. This enables an editable property view of a control in the wfspy application.

The included wfspy works only for .NET version 1.0.3705. If you want the tool to work with .NET 1.1, please build the code using VS.NET 2003. If you use wfspy to view a window in an application that uses a different .NET version, strange results may occur. Suggestions are welcome on how to fix this bug. I intend to update the article soon with the ability to spy on control events.
http://www.codeproject.com/Articles/4814/A-simple-Windows-forms-properties-spy/?fid=16563&df=90&mpp=10&sort=Position&tid=2588414
hi, this application came to my mind to see how long would it take to type the string "que" (the purpose was a joke but seeing how it isn't behaving how i thought it should i decided to post it here to learn why) this is the Gui Class Code java: import javax.swing.*; public class Gui extends JFrame { private JTextField field; private JTextArea area; private int sec=0; private Long currentTime = 0L; private Long elapsedTime = 0L; private String s; public Gui(){ super("Word typing timer"); setLayout(null); field = new JTextField("0"); field.setBounds(10,10,200,30); field.setEditable(false); add(field); area = new JTextArea(); area.setBounds(10,50,400,300); add(area); s = area.getText().trim(); } public void tempori(){ try{ currentTime = System.currentTimeMillis(); while(true){ area.requestFocus(); while(!s.equalsIgnoreCase("que")){ if (!s.equalsIgnoreCase("")){ elapsedTime = System.currentTimeMillis() - currentTime; if (elapsedTime==1000){ elapsedTime=0L; sec+=1; } field.setText(String.format("%d:%d",sec,elapsedTime)); } s = area.getText().trim(); } } }catch(Exception ex){ System.out.println(ex.getMessage()); } } } This is the main Code java: import javax.swing.*; public class Main { public static void main(String[] args){ Gui form = new Gui(); form.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); form.setBounds(0,0,540,400); form.setLocationRelativeTo(null); form.setVisible(true); try{ form.tempori(); }catch(Exception ex){ ex.printStackTrace(); } } } What the code should do: if the JTextArea is equals to nothing or the string "que "then stop the timer process and if not then to continue. I have several problems: 1- sometimes the program just won't run, throwing a "null" exception for who knows what reason... i need to close it and re-run it for it to work, i have swear i have no idea why it does that. 2-Code java: if (elapsedTime==1000){ elapsedTime=0L; sec+=1; } it just doesn't do it, it never resets to 0L. 
if I change from == to >= 1000, then the entire format goes to hell and the sec variable in the JTextField goes crazy. 3- If I change the line currentTime = System.currentTimeMillis(); to be right below area.requestFocus();, the sec variable literally jumps to a random number. I have tried to debug it, but I swear I have no idea what is going on. Can someone tell me what the error is?
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/31599-problem-timer-printingthethread.html
Manages the suspension of another process.

#include "util/win/scoped_process_suspend.h"

While an object of this class exists, the other process will be suspended. Once the object is destroyed, the other process will become eligible for resumption. If this process crashes while this object exists, there is no guarantee that the other process will be resumed.

Informs the object that the suspended process may be terminating, and that this should not be treated as an error. Normally, attempting to resume a terminating process during destruction results in an error message being logged for STATUS_PROCESS_IS_TERMINATING. When it is known that a process may be terminating, this method may be called to suppress that error message.
https://crashpad.chromium.org/doxygen/classcrashpad_1_1ScopedProcessSuspend.html
LinuxQuestions.org > Programming - Undefined reference, why?

george_mercury 07-10-2004 06:26 AM

Undefined reference, why?

Hi, I'm working on a project to interface a simple USB chip, but that really isn't important in this case. The problem is that I cannot compile, because I get an "Undefined reference to 'ftdi_init'" error. ftdi_init is a function. Here's my case. I've got two libraries, one called the ftdilib library, which needs the usblib library. I've installed both libraries in the standard way (tar, configure, make, make install). Here is what doesn't compile:

#include <ftdi.h>
#include <usb.h>

int main(int argc, char *argv[])
{
    ftdi_init(&variable);
    return 0;
}

If I compile this I get an error: Undefined reference to 'ftdi_init'. I've checked ftdi.h and the ftdi_init declaration is in it. The ftdi.c file also contains the code for the ftdi_init function. However, I couldn't find any #include <ftdi.c> in ftdi.h. So what are the most frequent causes for errors like mine?

George Mercury

keefaz 07-10-2004 06:42 AM

Maybe you have to explicitly link the library, like:

gcc -lftdi -o program program.c

george_mercury 07-13-2004 03:47 PM

Thanks, that worked! It compiled successfully. However, I've got one more problem. The library install (configure, make, make install) does not seem to work. When I compile and run the program I get an error: error loading shared libraries libftdi.so.0: cannot open shared library object: No such file or directory. So what do I need to do for Linux to see the libraries?

Thanks for all the help,
George Mercury

shishir 07-14-2004 01:08 AM

Umm, where is this library being installed? You have to specify this at the gcc command line with the -L option, to tell the linker where it has to look for the library in addition to the standard library locations like /usr/lib, /lib, et al. Try to put your library in the standard path, assuming it has been compiled and generated properly after make, make install. This is how you might need the command to look:

gcc -L <where libftdi resides> -lftdi <xyz.c>

joseluiselaprendizcol_ 05-07-2009 12:15 AM

Similar problem

Hello, I have a similar problem; can someone help me with this question? Is it possible to end up with a shared library that has errors? That is, does g++ stop the build of a shared library when it finds an error, or can it continue building the shared library with errors? Thanks in advance. If the answer is yes, I have a big problem, because g++ doesn't show where the library has errors.

All times are GMT -5. The time now is 08:17 AM.
http://www.linuxquestions.org/questions/programming-9/undefined-reference-why-203329-print/
If you're like me, then you probably don't check your email often, just because you forget or don't want to waste your time if there are no new emails. Then all your emails pile up and you have to go through all of them at once. Well, today that stops. I am going to show you, step by step, how to run a Python script I wrote that will run every hour through Windows Task Scheduler and open up Gmail if you have any unread messages. It should take about 20 minutes, so if you have time now, let's get to it.

Step 1: Download Python
Download it from here

Step 2: Download PyCharm
Download the community version here

Step 3: In PyCharm, Create a .py File
It should be under File, New, then click Python file. Name it whatever you like. I named mine hi.

Step 4: Put Code Into File and Run
REPLACE USERNAME with the part of your email address before the @ and PASSWORD with your password.

import imaplib
import webbrowser

obj = imaplib.IMAP4_SSL('imap.gmail.com', '993')
obj.login('USERNAME', 'PASSWORD')
obj.select()
unread = str(obj.search(None, 'UNSEEN'))
print(unread)
print(len(unread) - 13)
if (len(unread) - 13) > 0:
    webbrowser.open('')

RUN the file by clicking Run, then the file name. If it fails, go on to the next step. If not, go to the step after the next step.

Step 5: Turn Down Security
Go to this page, then set access to less secure on.

Step 6: Run It Again
This time run the script and there should be no errors.

Step 7: Open Windows Task Scheduler

Step 8: Create a Task
Follow these instructions here to create the task. Sorry, I don't know how to explain this part....

Step 9: Done
Now, every hour or whatever interval you set, the program will run and open Gmail if you have unread messages. Here is a video of it in action.

2 Discussions

3 years ago
Which interpreter do you use for your project? I use Python 3.5 (64 bits), and I think that is why it doesn't work. Thank you.

Reply 3 years ago
My interpreter was already there when I downloaded PyCharm, but the interpreter shouldn't matter, because it is just there to print out what I tell it to print, which doesn't affect execution of the code. Make sure you have an unread message, then try it again. Thanks, and reply back if it still doesn't work.
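A note on the len(unread) - 13 trick in Step 4: it counts characters in the stringified reply, so it only behaves while the mailbox reply stays a fixed width. A sturdier approach is to parse what imaplib's search() actually returns — a (status, data) tuple whose data holds one bytes string of space-separated message IDs. (count_unread is my name, not from the original tutorial; the tuples below just mimic imaplib's return shape.)

```python
def count_unread(search_result):
    """Count message IDs in an imaplib search() reply."""
    status, data = search_result
    if status != 'OK' or not data or not data[0]:
        return 0
    return len(data[0].split())

# Replies shaped like imaplib's (these tuples are made up):
print(count_unread(('OK', [b'4 8 15'])))  # prints 3
print(count_unread(('OK', [b''])))        # prints 0
```

With this, the script's condition becomes if count_unread(obj.search(None, 'UNSEEN')) > 0: ..., with no magic offset to maintain.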
https://www.instructables.com/id/Python-Gmail-Checker/
This is just a quick intro to React to show how easy it is on a very basic level. React is often compared to Angular, but the two are very different: Angular is more of a framework, whereas React is more of a library. So, with React, we can make Components, and in so doing, we can intersperse plain JavaScript to instill behavior. This article is not showing (or using) best practices, or a recommended structure. Its purpose is only to show how easy the basic mechanics of React are.

Let's grab the getting-started CLI from React's page. Then:

npm install -g create-react-app
create-react-app my-app
cd my-app
npm start

After this is done, and you have the project displayed in your browser, let's experiment. A boilerplate header we can use for each new class can be as simple as:

/src/Foo.js

import React, { Component } from 'react';

class Foo extends Component {
  render() {
    return (
    );
  }
}

export default Foo;

So, all that we need to change to get started is the name of the class you want to create (replace Foo with that name). And following the convention, the name of the file containing our class is the name of the class, and they're both capitalized. Also note the function, render(). All classes based on React require this method. From this method, a single DOM element is returned.

Let's create one more class:

/src/Bar.js

import React, { Component } from 'react';
import Foo from './Foo';

class Bar extends Component {
  render() {
    return (
    );
  }
}

export default Bar;

The thing to note here is, I am having Bar include Foo. The reason for this is because Bar is going to render a Foo Component. The structure will be: the main component of our application, "App.js", will contain our component, "Bar.js". And "Bar.js" will contain our component, "Foo.js".
Let’s flesh out Foo: /src/Foo.js import React, { Component } from 'react'; class Foo extends Component { constructor() { super(); this.user = { firstName: 'Moe', lastName: 'Bettah' }; } displayName() { return this.user.firstName + ' ' + this.user.lastName; } render(){ return( <h1>Hello, {this.displayName()}!</h1> ); } } export default Foo; So this simplistic class renders, “Hello, Moe Bettah!”. You can see that the larger portion of the class is plain old Javascript. The React portion of the class is just the render() function. Every render() must return a single element. It can be a complex element, like a div containing numerous other elements, but in that case, we are still just returning a single div (and its contents). Now, let’s flesh out Bar: /src/Bar.js import React, { Component } from 'react'; import Foo from './Foo'; class Bar extends Component { render() { return ( <h3>Here's a greeting...</h3> <Foo </> ); } } export default Bar; Here, notice that our Bar class file includes Foo. Bar is just a plain container component. It renders a div, which contain’s our Foo component. The Foo component through the magic of React, can now be referred to as, <Foo /> , and so that is how we refer to it in Bar. Notice that we also add some content to Bar’s returned div, just above our Foo element. So we have Bar, which renders a bit of text and also renders Foo. All that remains is to add this to our App. We need to add an import to the top App.js: import Bar from './Bar'; And finally, we need to add our component. Note that App follows the rule of only rendering a single element. In this case, a div with the css class of “App”. There can be any amount of content within this singly returned div, and here there is indeed content added. Now, just place our tag just above the closing div in App’s returned div. As an aside, note that the class attribute’s are referred to as className in camel-case. 
There are some simple rules in attribute naming that React follows, which you can read in the docs.

Finally, let's add a couple of gratuitous tests to the single test that is autogenerated for us. Open "App.test.js" and have a look at the file. We're going to add to this, to make testing easier. On the command line, within the top directory, my-app, enter:

npm i --save-dev enzyme

Enzyme allows us to easily test React components. See the included test in App.test.js? Have a look at how easy it is with enzyme:

it('renders without crashing', () => {
  const app = shallow(<App />);
});

That's it! So, let's make two gratuitous tests, just to get an idea of how to access and test components we build. At the top of "App.test.js" add the following:

import { shallow } from 'enzyme';
import App from './App';
import Foo from './Foo';
import Bar from './Bar';

We're including enzyme and all three of our classes. The App test is already written; let's make a couple of tests against Foo and Bar:

it('shows bar\'s text, "Here\'s a greeting..."', () => {
  const bar = shallow(<Bar />);
  const test_content = <h3>Here's a greeting...</h3>;
  expect(bar.contains(test_content)).toEqual(true);
});

it('the name Moe is included in the display', () => {
  const foo = shallow(<Foo />);
  const test_content = <h1>Hello, Moe Bettah!</h1>;
  expect(foo.contains(test_content)).toEqual(true);
});

Here, in each case we're grabbing the component we want to test using enzyme's shallow() method. Within our components, we want to test for included HTML elements. In the case of Bar, we're checking the static text within its <h3> element. In Foo, we're checking that the dynamic text we are inserting is actually present. All very easy stuff.

That's it. Super basic, super easy. There is a great deal more to know of course, but this is a friendly starter to show that it's not complex to begin tackling React.
https://nicksardo.wordpress.com/2017/05/22/react-simplicity/
At 03:34 PM 5/12/2002, webware-discuss@... wrote:
> I have upgraded to Webware 0.7 from 0.7b3, and now my webkit is
> consistently throwing the errors --
>
> WARNING: Cannot get request.timeStamp for activity log.
> WARNING: Cannot get transaction.duration for activity log.
>
> -- is this a path problem, or .. ?

OK, I am now 2 for 2 in terms of low-quality questions to the list. :) The problem was in overwriting an old Webware directory instead of starting from scratch during the upgrade.

David.
------------------------------------------------------------------
David Casti, Managing Partner, Neosynapse

Hello, I have upgraded to Webware 0.7 from 0.7b3, and now my webkit is consistently throwing the errors --

WARNING: Cannot get request.timeStamp for activity log.
WARNING: Cannot get transaction.duration for activity log.

-- is this a path problem, or .. ?

Thanks, David.
------------------------------------------------------------------
David Casti, Managing Partner, Neosynapse

Ian Bicking wrote:
> > Note that Mozilla now supports Digest Authentication.
> >
> > Digest Apache module here:
> > and
>
> I don't feel like you can offer both possibilities to the client --
> though I don't really know. Ideally, I'd offer both, and if the client
> supported digest it'd use that, and if not it'd use basic. But there's
> not nearly enough support for digest to do it exclusively.

I dunno. This was just to add to your comment on the page that only IE5, Opera & something else offer Digest authentication. Mozilla is a big add. Netscape 6 doesn't seem to support Digest though. The SSL route is probably how people will continue to go.

-- Bill Eldridge, Radio Free Asia, bill@...

Ian Bicking wrote:
> I'm not sold on many of the features of the modules... it's so much
> easier and more powerful to use Python, and you can be so much more
> expressive in Python code than you can in the conf file. Simple
> features, like mod_gzip, could be done in Webware just fine if we just
> did it.

I was more interested in getting various variables out of Apache and passing them on, rather than trying to do anything in an Apache module. Even the rewrite stuff seems like it'd be much more straightforward and controllably dynamic (without restarting that server) if done in Python. ... as one alternative.

-- Bill Eldridge, Radio Free Asia, bill@...

Kendall Clark wrote:
> I have a piece that is due to be published on O'Reilly's Linux DevCenter
> site this week about "daemon monitoring daemons"

This is an interesting topic, and I'm looking forward to it. I'd be happy if you could make a short announcement here or privately when the text arrives at DevCenter.

ciao,
-- Frank Barknecht _ _______footils__

On Tue, May 07, 2002 at 12:46:37PM +0200, Frank Barknecht muttered something about:
> Hi all,
>
> my AppServer process died, and I didn't recognize this for one or two
> days. How do you guys make sure, the AppServer doesn't die, or if
> it dies, that you get to know it or that it gets restarted
> automatically?

I have a piece that is due to be published on O'Reilly's Linux DevCenter site this week about "daemon monitoring daemons", which can be a good solution to this problem, as the people who mentioned daemontools later in this thread point out. I review the 4 or 5 most commonly used DMDs, including daemontools' supervise, and offer reasons for choosing to implement one or the other. I'm using monit as a DMD to monitor (and restart) Apache, Webware, named, sshd, and a few others. Setup is simple.

Best,
Kendall Clark

Remember also that a module gets executed the first time it is called from anywhere. So modules are a very convenient place for system-wide info. In my config.py I have:

datapool = DBPool(PgSQL, 5, 'localhost::xxxxxx')

and then elsewhere I:

from config import datapool
conn = datapool.getConnection()
....

-Aaron

----- Original Message -----
From: "Edmund Lian" <elian@...>
To: "Bobby Kuzma" <bobby@...>
Cc: <webware-discuss@...>
Sent: Saturday, May 11, 2002 12:15 PM
Subject: Re: [Webware-discuss] Calling dbPool on AppServer startup

> Bobby wrote:
>
> >> Where would I put the call to dbPool to establish its connections so
> that it 1) starts when I start AppServer, and 2) is available to whatever
> modules get loaded in afterward. <<
>
> You could put this call into one of the context __init__.py files. Any
> contextInitialize(appServer, path) method there gets called on context
> initialization. The appserver has a hook to shut stuff down prior to exit
> too. Look at Application.addShutdownHandler().
>
> ...Edmund.
http://sourceforge.net/p/webware/mailman/webware-discuss/?viewmonth=200205&viewday=12
Menus

To let users navigate within your apps and access options that you provide, you can add menus to the screens of your app. A menu contains a set of actions that a user might want to do on a specific screen in your app. For example, an app that displays a list of sports scores can include actions to view the details of a particular game, change the league that's displayed, or change the order that the scores are displayed in.

Menus let you hide certain options until they're needed. You probably don't want to show your users every option that's available in your app at the same time, because users might need an option only at a specific time. By using menus to organize the options, you can keep the main UI of your app neat and tidy.

There is a set of icons for BlackBerry 10 that you can use for menus in your apps. To download these icons, visit the UI Guidelines for BlackBerry 10.

Adding an action menu

You may have already seen an example of a menu in the Cascades framework. When you add actions to a screen in your app, the actions are placed automatically in the action menu (as well as on the action bar, if you specify the appropriate property). This menu appears on the right side of the action bar, and users can tap the menu to display all of the actions for the screen. To learn more about actions and the action menu, see Adding actions.

Adding a context menu

A context menu (sometimes known as the cross-cut menu) is displayed when a user touches and holds a UI control in your app. This menu displays actions that are associated with that control. In an app that displays sports scores, you could add a context menu to each game that's displayed and include an action to view the details of that game. You can add a context menu to any Cascades component that extends the Control class. This includes custom components that you create by extending the CustomControl class (because CustomControl inherits from Control).
When the user touches and holds a control that has an associated context menu, the context menu is partially displayed. This partial menu appears from the right side of the screen and shows only the icons that are associated with each action. The user can then swipe the partial menu to the left to display the entire context menu.

Users don't need to expand the partial context menu to select an action. They can simply select one of the icons that's displayed in the partial menu to perform the associated action. When a user touches an action icon in the partial menu, the selected action expands to show its title. When the user releases the touch, the selected action is executed. Users can also touch and hold a control to display the context menu, and then drag their finger directly to an action in the partial context menu to select that action. This behavior lets users interact quickly with actions on the context menu; a single touch interaction can be used to both open the context menu for a control and select an action from the menu.

You add actions to the context menu of a control by using an ActionSet. This class represents a group of actions, and contains a list of ActionItem objects. Each ActionItem represents an action in the context menu, and you can respond to the selection of an ActionItem in any way you want. You can also specify a title and subtitle for the ActionSet, and these are displayed at the top of the context menu when the full menu is displayed.

There isn't any inheritance of context menus for controls that are children of other controls. For example, if you add a context menu to a button in your app, and the button's parent container also includes a context menu, only the button's context menu is displayed when the user touches and holds the button.

Creating a context menu in QML

To add a context menu to a control in QML, you use the contextActions list property. You add the ActionSet that contains your ActionItem objects to this list.
When you add your ActionItem objects to the ActionSet, you use the actions list property to hold the actions. Here's how to create a blue Container with a context menu that includes three actions:

import bb.cascades 1.0

Page {
    content: Container {
        Container {
            preferredWidth: 200
            preferredHeight: 200
            background: Color.Blue
            contextActions: [
                ActionSet {
                    title: "Action Set"
                    subtitle: "This is an action set."
                    actions: [
                        ActionItem {
                            title: "Action 1"
                        },
                        ActionItem {
                            title: "Action 2"
                        },
                        ActionItem {
                            title: "Action 3"
                        }
                    ]
                } // end of ActionSet
            ] // end of contextActions list
        } // end of blue Container
    } // end of top-level Container
} // end of Page

The actions list property of ActionSet is a default property, meaning that you don't need to include the actions list property explicitly. You can simply start adding ActionItem objects to the ActionSet directly, allowing you to save some space and indents in your code. Here's what the ActionSet from the example above looks like without the actions list property. Note that when you use this approach, you no longer need to separate the ActionItem objects with commas.

ActionSet {
    title: "Action Set"
    subtitle: "This is an action set."
    ActionItem {
        title: "Action 1"
    }
    ActionItem {
        title: "Action 2"
    }
    ActionItem {
        title: "Action 3"
    }
}

The ActionItem objects that you add to an ActionSet are the same ones that you can add as actions on a screen. So, you can do similar things when a user selects an action. You can change properties of other controls, display new screens, and so on. You can also add images to the actions, and these images are visible when the context menu is partially displayed. Here's how to create a context menu for an ImageView that contains three actions. Each action includes a custom image and starts a different animation for the control.
import bb.cascades 1.0

Page {
    content: Container {
        layout: DockLayout {}
        ImageView {
            // Use layout properties to center the image on the
            // screen
            layoutProperties: DockLayoutProperties {
                horizontalAlignment: HorizontalAlignment.Center
                verticalAlignment: VerticalAlignment.Center
            }
            imageSource: "asset:///images/evil_smiley_small.png"
            animations: [
                // A translation animation that moves the image
                // downwards by a small amount
                TranslateTransition {
                    id: translateAnimation
                    toY: 150
                    duration: 1000
                },
                // A rotation animation that spins the image
                // by 180 degrees
                RotateTransition {
                    id: rotateAnimation
                    toAngleZ: 180
                    duration: 1000
                },
                // A scaling animation that increases the size
                // of the image by a factor of 2 in both the
                // x and y directions
                ScaleTransition {
                    id: scaleAnimation
                    toX: 2.0
                    toY: 2.0
                    duration: 1000
                }
            ]
            contextActions: [
                ActionSet {
                    title: "Animations"
                    subtitle: "Choose your animation"
                    // This action plays the translation animation
                    ActionItem {
                        title: "Slide"
                        imageSource: "asset:///images/slide_action.png"
                        onTriggered: {
                            translateAnimation.play();
                        }
                    }
                    // This action plays the rotation animation
                    ActionItem {
                        title: "Spin"
                        imageSource: "asset:///images/spin_action.png"
                        onTriggered: {
                            rotateAnimation.play();
                        }
                    }
                    // This action plays the scaling animation
                    ActionItem {
                        title: "Grow"
                        imageSource: "asset:///images/grow_action.png"
                        onTriggered: {
                            scaleAnimation.play();
                        }
                    }
                } // end of ActionSet
            ] // end of contextActions list
        } // end of ImageView
    } // end of Container
} // end of Page

Creating a context menu in C++

In C++, you create ActionItem objects and add them to an ActionSet. Then, you call Control::addActionSet(), which adds the ActionSet to the control as a context menu. Here's how to create a Container with a context menu. The context menu includes an ActionSet with three actions.
// Create the Container and ActionSet
Container* contextContainer = new Container();
ActionSet* actionSet = ActionSet::create()
    .title("Context menu")
    .subtitle("Select an action.");

// Create the ActionItem objects and add them to the ActionSet
ActionItem* action1 = ActionItem::create()
    .title("First action");
ActionItem* action2 = ActionItem::create()
    .title("Second action");
ActionItem* action3 = ActionItem::create()
    .title("Third action");
actionSet->add(action1);
actionSet->add(action2);
actionSet->add(action3);

// Add the ActionSet to the Container
contextContainer->addActionSet(actionSet);

Adding an application menu

You might want to include options in your apps that aren't associated with any particular screen or UI control, but that apply to the entire app and can be accessed no matter where a user is within the app. For example, you might want to provide a settings option for users to specify application-wide preferences, or a help option for users to learn how to use your app. To display these types of options, you can use the application menu. The application menu is displayed when a user swipes down from the top of the screen. You should consider using this menu for options that are important but seldom used.

Here's how to add an application menu that contains three actions, in QML. You use a property called Menu.definition and specify its value using a MenuDefinition object. This object contains the actions that you want to include in the application menu, all of which are included in the actions list property. In this code sample, each action changes the text in a TextField that's displayed on the screen. Note that if you don't specify your own image for an action, a default image is provided for you.
import bb.cascades 1.0

Page {
    // Add the application menu using a MenuDefinition
    Menu.definition: MenuDefinition {
        // Specify the actions that should be included in the menu
        actions: [
            ActionItem {
                title: "Action 1"
                imageSource: "images/actionOneIcon.png"
                onTriggered: {
                    textField.text = "Action 1 selected!"
                }
            },
            ActionItem {
                title: "Action 2"
                imageSource: "images/actionTwoIcon.png"
                onTriggered: {
                    textField.text = "Action 2 selected!"
                }
            },
            ActionItem {
                title: "Action 3"
                onTriggered: {
                    textField.text = "Action 3 selected!"
                }
            }
        ] // end of actions list
    } // end of MenuDefinition

    Container {
        // Add a text field to display which action is selected
        TextField {
            id: textField
            text: "No action selected."
        }
    }
} // end of Page

As a best practice, you should add an application menu to the top-level control in your app. This approach makes sense because the application menu applies to your entire app instead of just a single screen or view. In the code sample above, the menu is added to the top-level Page, but it could just as easily be added to a NavigationPane or TabbedPane.

import bb.cascades 1.0

NavigationPane {
    Menu.definition: MenuDefinition {
        actions: [
            ActionItem {
                ...
            },
            ActionItem {
                ...
            }
        ]
    }
    Page {
        ....
    }
}

Here's how to add an application menu in C++. You create a Menu object and add ActionItem objects to it. Then, you retrieve an instance of the application by using Application::instance() and call setMenu() to set the menu.

// Create the application menu
Menu *menu = new Menu;

// Create the actions and add them to the menu
ActionItem *actionOne = ActionItem::create()
    .title("Action 1");
ActionItem *actionTwo = ActionItem::create()
    .title("Action 2");
menu->addAction(actionOne);
menu->addAction(actionTwo);

// Set the menu of the application
Application::instance()->setMenu(menu);

You can display a maximum of five actions on the application menu. If you add more than five actions, the extra actions aren't displayed.
By default, when you add actions to the application menu, the first action that you add appears on the left side of the menu, and the second action appears on the right side. Any remaining actions appear in the center of the menu.

Using the Help action and Settings action

When you include an application menu in your app, it's considered a best practice for the left-most action on this menu to provide some type of help or other information about your app. It's also typical for the right-most action to provide application settings or options. Cascades makes it easy to follow this convention by providing special classes for each of these types of actions. You can use a HelpActionItem to provide access to help or other information, and you can use a SettingsActionItem for application-wide settings. Each of these classes has a default image and title that's displayed if you don't specify an image or title for the action, and they both appear automatically in their respective locations on the menu (the Help action on the left and the Settings action on the right).

Here's how to add a Help action and Settings action to the application menu, in QML, by using the helpAction and settingsAction properties of a Menu. If you include both the Help action and Settings action, you can only have a maximum of three additional actions on the menu.

import bb.cascades 1.0

Page {
    Menu.definition: MenuDefinition {
        // Add a Help action
        helpAction: HelpActionItem {}

        // Add a Settings action
        settingsAction: SettingsActionItem {}

        // Add any remaining actions
        actions: [
            ActionItem {
                title: "Action 1"
            }
        ]
    }
}

Here's how to accomplish the same thing in C++.
// Create the application menu
Menu *menu = new Menu;

// Create the actions to add to the application menu
HelpActionItem *help = new HelpActionItem;
SettingsActionItem *settings = new SettingsActionItem;
ActionItem *actionOne = ActionItem::create()
    .title("Action 1");

// Create the application menu and add the actions
menu->setHelpAction(help);
menu->setSettingsAction(settings);
menu->addAction(actionOne);

// Set the menu of the application
Application::instance()->setMenu(menu);

Adding a custom menu

The predefined menu types that are included in Cascades, such as the action menu and context menu, let you easily create menus, populate them with items, and add the menus to your apps. These menus use the same visual style and behavior as menus in core BlackBerry 10 apps, making it easy to match the style and presentation of your app with these other apps. However, you might want a bit more control of your menus and how they're presented in your app. You can create a menu with a custom look and feel, and this menu can take advantage of context-sensitive logic that performs actions based on a specific type of data. For example, you might want to display your menu in a radial style and include actions that invoke other apps (such as the Calendar application or BBM). To learn more about invoking other apps, see App integration.

To create a custom menu, you can use the MenuManager class. You specify the data that you want the menu to apply to, and MenuManager creates the menu automatically and populates it with relevant actions for that data. Then, you can retrieve the menu that was created, access its items, set additional properties, and so on. This class is available only in C++.

You can specify the data that you want the menu to apply to by using three primary types of information: the URI, the MIME type, or the raw data itself. Each of these types has its own setter function in MenuManager (setUri(), setMimeType(), and setData()).
For example, if you specify a URI of "file://<file_path>", MenuManager creates a menu with actions that apply to a file on the device's file system. Or, if you specify a MIME type of "image/jpeg", the MenuManager creates a menu with actions that apply to a JPEG image. When you specify the data that the menu applies to, you need to provide either the URI or the MIME type (or both). The raw data is usually optional, depending on the MIME type that you provide. If you choose to provide the data, it's used to populate a MenuItemInvokeParams object that contains all of the required information to invoke a target application. You can use this MenuItemInvokeParams to invoke the target application right away, with the data to act on. In some cases, the data that you provide is used to create the menu. For example, if you specify a MIME type of "application/vnd.blackberry.string.phone" (which represents a phone number), you should include the selected phone number as the raw data. This data is used to determine if the phone number is related to a contact. If so, contact-related actions (such as "View Contact") are added to the menu. In addition to the primary types, there are other types of information that you can provide to a MenuManager to help it create a suitable menu for your data. For example, you can use the setTargetTypes() function to indicate which types of invocation targets should be considered when building the menu. Invocation targets can be processes such as applications, viewers, or services. To learn more about invocation targets, see App integration. Here's how to create a simple menu using MenuManager that's designed to apply to MPEG video data. The populateMenu() function is used to request that the menu be populated with items, and the MenuManager emits the finished() signal after this population is complete. You can retrieve the created menu by calling menu(). 
// Create the menu manager and specify the type of data
// that the menu should apply to
bb::system::MenuManager *manager = new MenuManager;
manager->setMimeType("video/mpeg");

// Connect the manager's finished() signal to a slot function.
// Make sure to test the return value to detect any errors.
bool connectResult = QObject::connect(manager, SIGNAL(finished()),
                                      this, SLOT(onFinished()));

// If any Q_ASSERT statement(s) indicate that the slot failed to
// connect to the signal, make sure you know exactly why this has
// happened. This is not normal, and will cause your app to stop
// working!!
Q_ASSERT(connectResult);

// Indicate that the variable connectResult isn't used in the rest
// of the app, to prevent a compiler warning.
Q_UNUSED(connectResult);

// Request that the menu is populated with relevant items
if (manager->populateMenu() != true) {
    // Handle any errors that occurred while the menu was
    // being populated
}

// Create a slot function to handle the finished() signal
Q_SLOT void onFinished()
{
    // Check for any errors
    if (manager->error() != MenuManagerError::None) {
        // Handle the error
    } else {
        // Retrieve the menu
        bb::system::Menu *theMenu = manager->menu();
    }
}

After you populate a menu and retrieve it using menu(), you receive a Menu object that contains relevant menu items for the data you specified. Each menu item is represented by a MenuItem object, and you can retrieve a list of these items by calling items(). You can also retrieve the title and subtitle of the menu by calling title() and subtitle(), respectively.

Each MenuItem object in the menu represents an action to take when that item is selected. Selecting a MenuItem can result in one of the following:

- An invocation target is invoked using the specified data, MIME type, or URI. You can call invoke().isValid() to determine if a MenuItem represents an invocation target, and you can use the MenuItemInvokeParams class to determine the appropriate invocation parameters to pass to the target.
- A submenu is displayed that provides additional menu items to choose from. You can call subMenu().isValid() to determine if a MenuItem contains a submenu.

Some of the predefined menu types in Cascades actually use an underlying MenuManager to populate their items. For example, consider the Music app on a device that's running BlackBerry 10. If you touch and hold a song in that app, a context menu is displayed that includes a Share action. This action is provided by an underlying MenuManager, and when you choose this action, a list of targets for the Share action is displayed.

If you're creating a Cascades app, it's usually a good idea to use the predefined menu types as much as possible. If you use the MenuManager to create a custom menu, you receive a Menu that's populated with relevant MenuItem objects, but it's up to you to determine how to present the menu to users in your app.

Last modified: 2013-12-21
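Putting the retrieval pieces together, a handler for the populated menu would typically walk items() and branch on invoke().isValid() versus subMenu().isValid(). The sketch below is illustrative only: it assumes the bb::system menu classes from the BlackBerry 10 SDK (exact header paths and return types may differ), it is not buildable outside that SDK, and walkMenu is my own hypothetical helper name.

```
// Hypothetical helper: walk a populated menu and dispatch each item.
// Assumes the bb::system menu API described above (BB10 SDK only).
void walkMenu(const bb::system::Menu &theMenu)
{
    foreach (const bb::system::MenuItem &item, theMenu.items()) {
        if (item.invoke().isValid()) {
            // The item invokes a target: inspect the invocation parameters
            bb::system::MenuItemInvokeParams params = item.invoke();
            // ... hand params to an invocation request here ...
        } else if (item.subMenu().isValid()) {
            // The item opens a submenu: recurse into its items
            walkMenu(item.subMenu());
        }
    }
}
```

How you render each item (radial layout, list, and so on) is left to your app, as the section notes.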
https://developer.blackberry.com/native/documentation/cascades/ui/navigation/menus.html
CC-MAIN-2015-11
refinedweb
3,230
53.1
Introduction — by Sai gowtham, 2 min read

What is React Router?

React Router is a routing library built on top of React which is used to create routing in React apps.

Is React Router static or dynamic?

- Before React Router v4 it was static; from v4 onward it is dynamic.

In single-page apps there is only a single HTML page; we reuse that same page to render different components based on the navigation. But in multi-page apps, you get an entirely new page when you navigate.

Getting started

Note: if you get stuck anywhere in this tutorial, please refer to the code repository.

How to install React Router?

We are using create-react-app to create the app.

npx create-react-app routing
cd routing

To install React Router you need to download the react-router-dom package by running the following commands.

npm install react-router-dom
npm start   # to run the dev server

Now open the routing folder in your favorite code editor. If you navigate to public/index.html you will see a single HTML file, which looks like this:

<html>
  <body>
    <div id="root"></div>
  </body>
</html>

Currently, in our app there is only a single App component. Let's create two more components.

users.js

import React from 'react'

class Users extends React.Component {
  render() {
    return <h1>Users</h1>
  }
}

export default Users

contact.js

import React from 'react'

class Contact extends React.Component {
  render() {
    return <h1>Contact</h1>
  }
}

export default Contact

app.js

import React from 'react'

class App extends React.Component {
  render() {
    return (
      <div>
        <h1>Home</h1>
      </div>
    )
  }
}

export default App

Now our app has three components: one is App and the other two are Users and Contact.
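The chunk above stops before the components are wired to routes. With react-router-dom v4/v5, the next step would typically look like the sketch below (this is my own illustration, not part of the original tutorial; the `/users` and `/contact` paths and the index.js placement are assumptions, and JSX like this needs the create-react-app build toolchain to run):

```jsx
// index.js — wiring the three components to routes (react-router-dom v4/v5)
import React from 'react'
import ReactDOM from 'react-dom'
import { BrowserRouter, Route, Link } from 'react-router-dom'
import App from './app'
import Users from './users'
import Contact from './contact'

ReactDOM.render(
  <BrowserRouter>
    <div>
      <nav>
        <Link to="/">Home</Link> | <Link to="/users">Users</Link> |{' '}
        <Link to="/contact">Contact</Link>
      </nav>
      {/* exact keeps "/" from also matching "/users" and "/contact" */}
      <Route exact path="/" component={App} />
      <Route path="/users" component={Users} />
      <Route path="/contact" component={Contact} />
    </div>
  </BrowserRouter>,
  document.getElementById('root')
)
```

With this in place, navigating between the links swaps the rendered component without reloading the single HTML page, which is exactly the single-page behavior described above.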
https://reactgo.com/reactrouter/introduction/
CC-MAIN-2020-16
refinedweb
289
58.79
Per MSDN, System.Diagnostics.Tracing.EventSource.WriteEvent has 14 overloads, but Mono implemented only 4. This causes the following error:

Missing method System.Diagnostics.Tracing.EventSource::WriteEvent(int,object[]) in assembly /usr/local/lib/mono/4.5/mscorlib.dll, referenced in assembly /home/azyobuzin/foobar.dll
Error: System.Reflection.TargetInvocationException
Exception has been thrown by the target of an invocation.

My provisional fix is here:

Mono JIT compiler version 3.12.0 ((detached/de2f33f 2015年 2月 24日 火曜日 01:39:43 JST)
TLS:           __thread
SIGSEGV:       altstack
Notifications: epoll
Architecture:  amd64
Disabled:      none
Misc:          softdebug
LLVM:          supported, not enabled.
GC:            sgen

## Additional context for the Mono developer team

Although at first glance it might seem odd to add _empty_ method overloads that contain no implementation, adding these empty overloads will be helpful in a few scenarios. In addition to preventing the error mentioned in comment 0, these overloads will allow Xamarin.iOS and Xamarin.Android customers to use the linker with applications that reference the default version of the "Microsoft TPL Dataflow" NuGet package (see [1]). Apparently the code paths in the Microsoft TPL Dataflow library that reference "System.Diagnostics.Tracing" are rarely used (or fail silently and do not break the primary intended behaviors of the library).

In short, I think it would be helpful to add these empty overloads. To help keep this bug narrowly focused, I have also filed a separate enhancement request (Bug 34890) to track any additional work that the Mono team might eventually do to implement _non-empty_ methods in the "System.Diagnostics.Tracing" namespace.

[1] -

Thanks!

## Related Xamarin.Android and Xamarin.iOS linker errors (for reference)

### Linker error message on Xamarin.Android

>.

### Linker error message on Xamarin.iOS

>"

*** Bug 34610 has been marked as a duplicate of this bug. ***

## Status update for any users CC'd on the bug

These empty overloads have now been added in the "master" branch of Mono:

### Those new changes are _not_ yet included in Xamarin.iOS or Xamarin.Android

The latest "Cycle 6 – Service Release 1" builds of Xamarin.iOS or Xamarin.Android are based on Mono commit 996df3c. That commit still uses the older version of `EventSource` [1]. The new code in the "master" branch will in theory be included into Xamarin.Android and Xamarin.iOS the next time they are "branched from master," which would by default happen for the upcoming "Cycle 7" feature release. The (very rough) initial estimate is that the first previews for Cycle 7 might be available in February.

[1]

This will make it into C7 as per comment 5, but not C6SR2. Updating milestone.

I verified the commit is in the C7 branch (mono-4.4.0-branch) now.
I have checked this issue with the latest C7 build:

MonoFramework-MDK-4.4.0.162.macos10.xamarin.universal_f3253a0ba5991411dd6def16c2374cf75816664b
XamarinStudio-6.0.0.5156_6bb41168165682b4ed22a94364bf0cf24e6b1d5c

and I am observing the same behavior reported in the bug description: System.Diagnostics.Tracing.EventSource.WriteEvent has 14 overloads, but Mono implemented only 4. Here is the screencast for the same:

Hence reopening this issue. Thanks!

It looks like the contracts (reference assemblies) weren't updated when the commit was merged; that's why you're seeing the missing overloads in the Assembly Browser. They should be there at runtime.

@Marek: what's the process for regenerating the reference assemblies?

We fixed that for .NET 4.6, which is our default target framework. Please test with that. The API was not updated for the previous versions 4.0 and 4.5 and won't be updated that late in the cycle, especially since the new code does nothing.
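For readers unfamiliar with the API in question: the overload reported missing is the params-array form, `WriteEvent(int, object[])`, which an EventSource subclass falls back to when no matching fixed-arity overload exists. A minimal, illustrative EventSource that binds to it explicitly might look like this (the class name, event source name, and event ID are my own placeholders, not from the bug report):

```csharp
using System.Diagnostics.Tracing;

// Illustrative only: exercises the WriteEvent(int, object[]) overload
// that comment 0 reports as missing from Mono's mscorlib.
[EventSource(Name = "Example-Demo")]
sealed class MyEventSource : EventSource
{
    public static readonly MyEventSource Log = new MyEventSource();

    [Event(1)]
    public void Request(string url, int attempt)
    {
        // Passing an object[] explicitly binds to the params-array
        // overload rather than any fixed-arity fast path.
        WriteEvent(1, new object[] { url, attempt });
    }
}
```

On a Mono build without the fix, calling `MyEventSource.Log.Request(...)` from a linked app is the kind of call site that produces the "Missing method ... WriteEvent(int,object[])" error above.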
https://bugzilla.xamarin.com/show_bug.cgi?id=27337
CC-MAIN-2019-09
refinedweb
656
51.24
Can we use Amazon SQS as the broker backend of Celery? There's the SQS transport implementation for Kombu, which Celery depends on. Lack of documentation hasn't let me configure SQS with Celery — is there a way to do it?

Yes, Celery can be set up with SQS. The latest versions of Kombu and Celery are fairly simple to operate with. Refer to this pseudo-code:

BROKER_TRANSPORT = 'sqs'
BROKER_TRANSPORT_OPTIONS = {
    'region': 'us-east-1',
}
BROKER_USER = AWS_ACCESS_KEY_ID
BROKER_PASSWORD = AWS_SECRET_ACCESS_KEY

There you go.

When I was using Celery 3.0, I got deprecation warnings while launching the worker with the BROKER_USER / BROKER_PASSWORD settings. I took a look at the SQS URL parsing in kombu.utils.url._parse_url and it is calling urllib.unquote on the username and password elements of the URL. So, to work around the issue of secret keys with forward slashes, I was able to successfully use the following for the BROKER_URL:

import urllib
BROKER_URL = 'sqs://%s:%s@' % (urllib.quote(AWS_ACCESS_KEY_ID, safe=''),
                               urllib.quote(AWS_SECRET_ACCESS_KEY, safe=''))

When I configured Celery with Amazon SQS, it seems I achieved a small success. Patching Kombu was needed, so I wrote some patches and there is my pull request as well. You can configure Amazon SQS by setting a BROKER_URL with the sqs:// scheme in Celery on the patched Kombu. For example:

BROKER_URL = 'sqs://AWS_ACCESS:AWS_SECRET@:80//'
BROKER_TRANSPORT_OPTIONS = {
    'region': 'ap-northeast-2',
    'sdb_persistence': False
}
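The URL-quoting trick in the answer above is the part most worth getting right, since an AWS secret key containing "/" corrupts a naive broker URL. Here is the same idea in Python 3, with urllib.parse.quote; the credentials below are fake placeholders for illustration only.

```python
from urllib.parse import quote

# Fake credentials for illustration -- note the '/' in the secret,
# which would break the broker URL if left unquoted.
aws_access_key_id = "AKIAEXAMPLE"
aws_secret_access_key = "abc/def+ghi"

# safe='' forces '/' (normally left alone) to be percent-encoded too.
broker_url = "sqs://{}:{}@".format(
    quote(aws_access_key_id, safe=""),
    quote(aws_secret_access_key, safe=""),
)

print(broker_url)  # sqs://AKIAEXAMPLE:abc%2Fdef%2Bghi@
```

The resulting URL contains no raw "/" in the credential part, so Kombu's URL parser splits it correctly and unquotes the secret back to its original value.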
https://www.edureka.co/community/979/can-celery-be-used-with-amazon-sqs
CC-MAIN-2021-49
refinedweb
352
61.02
We first review some of the basic C++ constructs used in the rest of this chapter, so as to make subsequent material understandable to readers familiar with C but not C++. Readers familiar with C++ can skip this section.

With a few exceptions, C++ is a pure extension of ANSI C. Most valid ANSI C programs are also valid C++ programs. C++ extends C by adding strong typing and language support for data abstraction and object-oriented programming.

ANSI standard C introduced function prototypes to the C language. A function prototype defines the type of each function argument and the function's return value (the function's signature). For example:

/* A forward declaration to a_function */
int a_function(float b, double c);

/* The definition of a_function */
int a_function(float b, double c)
{
    /* Function body */
}

C++ requires that function prototypes be provided for all functions before they are used and enforces consistent function use between program files. Thus, it is possible to distinguish between functions that have the same name but different signatures. C++ uses this capability to allow function names to be overloaded. That is, more than one function can be defined with the same name; the compiler compares function call arguments with function signatures to determine which version to use.

In C programs, the library routines malloc and free are used for dynamic memory allocation. C++ defines two additional operators, new and delete, as illustrated in the following code fragments.
struct S {
    /* Structure body */
};

S *sPtr = new S;           /* Allocate instance of S */
delete sPtr;               /* Delete instance of S */

int *iPtr = new int[25];   /* Allocate array of integers */
delete [] iPtr;            /* Delete array of integers */

Notice that new is given a description of the type of the data object to be allocated; it returns a pointer to dynamically allocated data of that type. The delete operator is used to release dynamically allocated storage. The programmer must indicate when an array of objects is being deleted.

The most significant feature that C++ adds to C is the concept of classes. A class can be thought of as a generalization of a C structure. In C, a structure groups together data elements of various types under a single name; in C++, structures can also contain member functions. Like data elements of a C structure, member functions of a class can be accessed only through a reference to an object of the appropriate type. In C++, a class defines a scope in which names referring to functions and data can be defined. Classes can be introduced using the C keywords struct and union or the C++ keyword class.

Program 5.1 illustrates various features of the C++ class mechanism. This program defines a class named Datum containing a data member x, a member function get_x, and two constructor functions. (Notice the C++ single-line comments; anything after a double slash // is a comment.) These terms are defined in the following discussion.

The syntax Datum::get_x() is used to name a member function get_x of Datum. This name, called a qualified name, specifies that we are referring to a function defined in the scope of Datum. If we do not qualify the name, we are defining the global function get_x(), which is a different function. Notice that within the definition of Datum::get_x() we can refer to the data member x directly, because x and get_x are defined in the same scope.
We also could incorporate the definition for function get_x directly in the class definition, as follows.

public:
    int get_x() { return x; }
    ...

The two member functions named Datum are constructor functions for the Datum class. A constructor has the same name as the class to which it applies and, if defined, is called whenever an object of the appropriate type is created. Constructor functions are used to perform initialization and can be overloaded.

The function test in Program 5.1 creates and uses three Datum objects, two of which are declared in the first two lines in the function body. Notice that the class name Datum can be used directly; in C we would have to write struct Datum. In the third line, the new operator is used to allocate the third Datum object.

Because constructors have been defined for Datum, they will be called whenever Datum objects are created. The constructor with no arguments, called a default constructor, is called when a_datum is created, thereby initializing the field x of a_datum to zero. The declaration of another_datum and the new operator both specify an integer argument and hence use the second constructor, thereby initializing the variable x to 23 in these two cases.

Recall that in C, the fields of a structure are accessed by using the dot operator ( struct.fieldname), while the fields of a structure accessible via a pointer are accessed with the arrow operator ( structptr->fieldname). As illustrated in the function test, these same mechanisms can be used to refer to the member functions of a C++ class.

The C++ class mechanism also supports protection. Members of a C++ class can be designated as being either public or private. A public class member can be used without restriction by any program that has a reference to an instance of a class. Public data members can be read or written, and public member functions may be called. In contrast, private members can be accessed only from within the class object. For example, the variable x in the Datum class is a private variable and hence can be accessed by the member function get_x but cannot be referenced directly as a_datum.x.
Private data members can be accessed only by a class member function, and private member functions can be called only from within another member function of the class.

The final C++ feature described here is inheritance. As in C, a class or structure can be included as a member of another class, hence defining a has-a relationship between the two classes. In C++, inheritance is used to create an alternative relationship between classes, an is-a relationship. If a class D inherits from class B, then all public members of B are also members of D. We say that D is derived from B, and that D is a derived class while B is a base class. D includes all public members of B and may also include additional members, which are defined in the usual way. We can view D as being a specialized version of a B, hence the is-a relationship.

Program 5.2 illustrates the use of inheritance. The syntax for inheritance is to specify a list of base classes after the derived class name. The base class list is separated from the derived class name by a colon. The keywords public and private are associated with the base class names to specify whether the inherited members are to be public or private members of the derived class.

Members of the base class can be redefined in the derived class. For example, in Program 5.2 class D redefines func2. When func2 is called from an object of type B, we access the version of func2 defined in B. If func2 is called from an object of type D, we get the version of func2 defined in D.

In some situations, we may want a base class to call functions that are defined in a derived class. This facility is supported by a C++ mechanism called virtual functions. A function declared virtual in a base class can be defined in a derived class.
This feature, which allows a programmer to specialize a generic base class for a specific application, is used in Section 5.8.2 to build a reusable parallel library.

© Copyright 1995 by Ian Foster
http://www.mcs.anl.gov/~itf/dbpp/text/node52.html
CC-MAIN-2015-40
refinedweb
1,317
54.22
This concept demo below will scan the proto-XML and escape out chars in the elements that are supposed to be literal. I thought about using Parse::RecDecent, or other parsing technology, but it should be a simple problem. I'm wondering if this general idea, of using cascaded RE's with a continuing "pos", can be improved. use strict; use warnings; sub is_literal ($$) { my ($name, $attrs)= @_; return ($name eq 'listing') || ($name eq 'signature'); # simple demo +. # change this to analyse $name and $attrs to decide whether to treat +this literally. } sub escape_out ($) { my $passage= shift; $passage =~ s/&/&/g; $passage =~ s/</</g; return "[[[* $passage *]]]"; # [[[]]] to visibly show that the right + "bite" was taken. } sub scan ($) { my @passages; my $line= shift; # first pass: note what sections need treatment, without actually mod +ifying the string. # modifying the string would mess up the "pos" used by the RE's. while ($line =~ m/<\s*(\w+)([^>]*)>/g) { # for every start tag... my $startpos= pos($line); my $name= $1; if (is_literal ($name, $2)) { # if targeted, find the matching end tag using simple pattern ( +ignoring other stuff). # this skips that passage for the continued search of all start $line =~ m/<\/$name>/g; my $endpos= pos($line); unshift @passages, [$startpos, $endpos-(length($name)+3)]; } } # second pass: process the sections noted above, from right-to-left s +o # positions don't change. foreach my $range (@passages) { my ($start, $end)= @$range; my $length= $end-$start; substr($line, $start, $length)= escape_out (substr($line, $start, +$length)); # is there an easier way to do that without substr'ing twice? } print $line; } my $testdata= <<'EOF'; <method name="mainloop"> <signature virtual="1">int mainloop (ratwin::message::MSG&)</sig +nature> <P>This is the canonocal logic of the message pump. It looks ap +roximatly like this:</P> <listing> use & and <things> in here. 
MSG msg;
while ( GetMessage(msg) ) {
    if (msg.hwnd == 0) thread_message (msg);
    else {
        if (!pre_translate (msg)) {  // check IsDialog, TranslateAccelerator
            if (!translate_key_even(msg))  // Win32 TranslateMessage
                DispatchMessage(msg);
        }
    }
}
return (msg.wParam);
    </listing>
    <P>Override this if you need to customize this beyond the point provided for by the virtual functions provided for the individual steps.</P>
</method>
EOF

scan ($testdata);

Well, rather than comment on the potential fragility in such parsing schemes, I'll suggest a simplification to your scan() routine (reducing the loc by half+):

sub scan ($) {
    my $line = shift;
    while ($line =~ m/<\s*(\w+)([^>]*)>/g) {
        next unless is_literal($1,$2);
        my $start = pos($line);
        my $len = index($line,"</$1>",$start) - $start;
        my $passage = \substr($line,$start,$len);
        $$passage = escape_out($$passage);
        pos($line) = $start;
    }
    print $line;
}

This uses assignment to pos() at the end of the loop to reset to where we left off so we may continue our match after modifying the string. Also, this uses a reference to the substr() function ... this is a reference to an Lvalue so assigning through the reference changes the substring being pointed to (perhaps a wee bit obfu for production use, but that's your decision :-)

Of course, if the data doesn't follow exactly according to your expectations (a closing </listing > tag for example won't be found because we didn't allow for a trailing space in the closing tag, nor did we check that index() found a closing tag, ...), then all bets are off for your preprocessor (OK, so I did make a fragility comment).
\nEOF construct is "fragile" too, as is forgetting to escape out a slash in a RE. Either follow the rules or get an error when things don't match up. —John

I hate to look like an XML ayatollah but I think you are going down a slippery path. XML is XML, and what you want is not XML. XML gives you native ways to encode your "literal" chunks so the parser is happy with them. You should use them.

If you want a different format then you should use a pre-processor, to turn your quasi-XML into real XML. As the XML parser will never see the original file you can just have a special marker for the beginning and end of literal code, you don't need to use attributes on existing tags. You can basically use anything; I would use something illegal in XML and unlikely to happen in your literal text, &&& for example, or a tag if you really want to. Your pre-processor would then be as simple as this:

#!/usr/bin/perl -w
use strict;

my $literal_tag= "literal";

{
    local $/;   # slurp mode
    for (<DATA>) {
        # tag version; the &&& version would be even simpler:
        # s{&&&(.*?)&&&}{xml_escape($1)}ges;
        s{<\s*$literal_tag\s*>(.*?)<\s*/\s*$literal_tag>}
         {xml_escape($1)}geso;
        print;
    }
}

sub xml_escape {
    my $literal= shift;
    $literal=~ s/&/&amp;/g;
    $literal=~ s/</&lt;/g;
    return $literal;
}

__DATA__
<doc>
<p>A regular para</p>
<code><literal>there you put the code you want, including & and <> and all</literal>
http://www.perlmonks.org/index.pl?node_id=124247
CC-MAIN-2017-39
refinedweb
953
59.03
Taking into account the behavior of the free() function, it is a good practice to set your pointer to NULL right after you free it. By doing so, you can rest assured that in case you accidentally call free() more than one time on the same variable (with no reallocation or reassignment in between), no bad side-effects will happen (besides any logical issues that your code might be dealing with).

You can include free() from malloc.h and it will have the following signature:

extern void free(void *__ptr);

Description of operation: Free a block allocated by malloc, realloc or calloc. The free() function frees the memory space pointed to by ptr, which must have been returned by a previous call to malloc(), calloc(), or realloc(). Otherwise, if free(ptr) has already been called before, undefined behavior occurs. If ptr is NULL, no operation is performed.

Working examples:

#include <stdio.h>
#include <malloc.h>

int main() {
    printf("Hello, World!\n");
    void * c = malloc (sizeof(char) * 10);
    free(c);
    c = NULL;
    free(c);
    return 0;
}

#include <iostream>
#include <cstdlib> /* for malloc/free in the C++ example */

int main() {
    std::cout << "Hello, World!" << std::endl;
    void * c = malloc (sizeof(char) * 10);
    free(c);
    c = NULL;
    free(c);
    return 0;
}
https://bytefreaks.net/programming-2/c/cc-a-small-tip-for-freeing-dynamic-memory
CC-MAIN-2018-34
refinedweb
207
65.01
This blog shows an example of how to build an ASP.NET Web API ApiController that asynchronously talks to multiple other Web APIs in parallel without blocking a thread on the server. Btw, if you have detected a certain theme in these blogs around using Tasks with ASP.NET Web API then you are indeed on to something 🙂

Asynchronous programming is an important part of building scalable, robust, and responsive Web applications regardless of whether on client side or server side. Asynchronous programming has traditionally been very complicated leaving it to only the most dedicated to implement it but with the new Task model even complex patterns such as dealing with multiple asynchronous requests in parallel are manageable without braking too much of a sweat.

The sample is written using Visual Studio 11 Beta but can be modified to run on Visual Studio 2010 as well.

Scenario

In the scenario we have an ApiController that exposes a query operation with a single token (in the sequence diagram below the sample query token is “microsoft”). The ApiController then turns around and issues a query to digg and delicious respectively for stories on the query token. When both results are complete, the results are accumulated into a single response that is then sent back to the client. The goal is to do the entire ApiController request asynchronously and to issue the queries to digg and delicious in parallel so that we don’t ever block a thread and optimize network utilization.

The response we send back to the client is a collection of “stories” that simply look like this:

public class Story
{
    // The source (either digg or delicious)
    public string Source { get; set; }

    // The description of the story
    public string Description { get; set; }
}

Creating the ApiController

Writing the controller involves three things: two helpers for processing the requests for digg and delicious respectively and then the controller to put it all together.

Executing digg Query

First we write a helper for processing the digg query. Here we submit a request, wait for the response, and then process it as JsonValue to build a set of Story instances as defined in the scenario above. As there is no instance state involved the query processing can be done as a static method.
Executing digg Query First we write a helper for processing the digg query. Here we submit a request, wait for the response, and then process it as JsonValue to build a set of Story instances as defined in the scenario above. As there is no instance state involved the query processing can be done as a static method. 1: private static async Task<List<Story>> ExecuteDiggQuery(string queryToken) 2: { 3: List<Story> result = new List<Story>(); 4: 5: // URI query for a basic digg query -- see 6: string query = string.Format("{0}", queryToken); 7: 8: // Submit async request 9: HttpResponseMessage diggResponse = await _client.GetAsync(query); 10: 11: // Read result using JsonValue and process the stories 12: if (diggResponse.IsSuccessStatusCode) 13: { 14: JsonValue diggResult = await diggResponse.Content.ReadAsAsync<JsonValue>(); 15: foreach (var story in diggResult["stories"] as JsonArray) 16: { 17: result.Add(new Story 18: { 19: Source = "digg", 20: Description = story["title"].ReadAs<string>() 21: }); 22: } 23: } 24: 25: return result; 26: } Executing delicious Query Then we write a similar helper for processing the delicious query. It follows the exact same pattern and also uses JsonValue to read the response and create a set of Story instances. Again, as there is no instance state involved the query processing can be done as a static method. 
private static async Task<List<Story>> ExecuteDeliciousQuery(string queryToken)
{
    List<Story> result = new List<Story>();

    // URI query for a basic delicious query -- see
    string query = string.Format("{0}", queryToken);

    // Submit async request
    HttpResponseMessage deliciousResponse = await _client.GetAsync(query);

    // Read result using JsonValue and process the stories
    if (deliciousResponse.IsSuccessStatusCode)
    {
        JsonArray deliciousResult = await deliciousResponse.Content.ReadAsAsync<JsonArray>();
        foreach (var story in deliciousResult)
        {
            result.Add(new Story
            {
                Source = "delicious",
                Description = story["d"].ReadAs<string>()
            });
        }
    }

    return result;
}

Writing the Controller

Now we can write the actual ApiController. First we look for a valid query token (we don’t want it to contain any ‘&’ characters). Then we kick off the two helpers in parallel and wait for them to complete (but without blocking a thread, using the Task.WhenAll construct). Finally we aggregate the two results and return them to the client.

We of course use HttpClient to submit requests to digg and delicious, but as HttpClient can handle requests submitted from multiple threads simultaneously we only need one instance for all requests. This means that the same HttpClient instance is reused for all requests across all ApiController instances.
public async Task<List<Story>> GetContent(string topic)
{
    List<Story> result = new List<Story>();

    // Check that we have a topic or return empty list
    if (topic == null)
    {
        return result;
    }

    // Isolate topic to ensure we have a single term
    string queryToken = topic.Split(new char[] { '&' }).FirstOrDefault();

    // Submit async query requests and process responses in parallel
    List<Story>[] queryResults = await Task.WhenAll(
        ExecuteDiggQuery(queryToken),
        ExecuteDeliciousQuery(queryToken));

    // Aggregate results from digg and delicious
    foreach (List<Story> queryResult in queryResults)
    {
        result.AddRange(queryResult);
    }

    return result;
}

That’s all we need for the controller – next is just to host it and then run it.

Hosting the Controller

As usual we use a simple command line program for hosting the ApiController and it follows the usual pattern seen in the other blogs:
To the left we can see the two requests to digg and delicious respectively, and in the lower right corner you can see the result, with parts coming from digg and parts coming from delicious.

Have fun!

Henrik

It seems the generic ReadAsAsync() method is not present in the .NET 4.5 Beta version of System.Net.Http. Why is there a difference between that and the version in ASP.NET Web API?

It is part of the System.Net.Http.Formatting package, which is also available on NuGet, see nuget.org/…/System.Net.Http.Formatting. You can see what NuGet packages are part of ASP.NET Web API on the blog "ASP.NET Web API and HttpClient Available on NuGet" at blogs.msdn.com/…/asp-net-web-api-and-httpclient-available-on-nuget.aspx

Hi Henrik. My question is: are the async operations (above) using ASP.NET worker threads, IOCP, or other threads? And what is the difference with the AsyncController? Thank you in advance.

How does it dispatch to the GetContent method of the controller? Do I need to do anything to host this controller in IIS 7? What is the purpose of server.OpenAsync().Wait(); ?

Stelios,

By default, threads used by Task-oriented programming come out of the thread pool. The ApiController is different from the AsyncController in that the former is used to write Web APIs and the latter is used to write MVC controllers. The ApiController supports Task as a first-class concept, so it doesn't need an "async" equivalent: it already is async. For more information on ASP.NET Web API you can go to

Thanks, Henrik
https://blogs.msdn.microsoft.com/henrikn/2012/03/03/async-mashups-using-asp-net-web-api/
MLPutReal32()

This feature is not supported on the Wolfram Cloud.

Details

- The argument x is typically declared as float in external programs, but must be declared as double in MLPutReal32() itself in order to work even in the absence of C prototypes.
- MLPutReal32() returns 0 in the event of an error, and a nonzero value if the function succeeds.
- Use MLError() to retrieve the error code if MLPutReal32() fails.
- MLPutReal32() is declared in the MathLink header file mathlink.h.

Examples

Basic Examples (1)

#include "mathlink.h"

/* send the number 3.4 to a link */
void f(MLINK lp)
{
    float numb = 3.4;
    if(! MLPutReal32(lp, numb))
    {
        /* unable to send 3.4 to lp */
    }
}
http://reference.wolfram.com/language/ref/c/MLPutReal32.html
In this challenge, the task is to debug the existing code to successfully execute all provided test files. Python supports a useful concept of default argument values. For each keyword argument of a function, we can assign a default value which is going to be used as the value of said argument if the function is called without it. For example, consider the following increment function:

def increment_by(n, increment=1):
    return n + increment

The function works like this:

>>> increment_by(5, 2)
7
>>> increment_by(4)
5
>>>

Debug the given function print_from_stream using the default value of one of its arguments. The function has the following signature:

def print_from_stream(n, stream)

This function should print the first n values returned by the get_next() method of the stream object provided as an argument. Each of these values should be printed in a separate line. Whenever the function is called without the stream argument, it should use an instance of the EvenStream class defined in the code stubs below as the value of stream. Your function will be tested on several cases by the locked template code.

Input Format

The input is read by the provided locked code template. In the first line, there is a single integer denoting the number of queries. Each of the following lines contains a stream_name followed by an integer n, and it corresponds to a single test for your function.

Constraints

Output Format

The output is produced by the provided and locked code template. For each of the queries (stream_name, n), if the stream_name is even then print_from_stream(n) is called. Otherwise, if the stream_name is odd, then print_from_stream(n, OddStream()) is called.

Sample Input 0

3
odd 2
even 3
odd 5

Sample Output 0

1
3
0
2
4
1
3
5
7
9

Explanation 0

There are 3 queries in the sample. In the first query, the function print_from_stream(2, OddStream()) is executed, which leads to printing the values 1 and 3 in separate lines, as the first two non-negative odd numbers.
In the second query, the function print_from_stream(3) is executed, which leads to printing the values 0, 2 and 4 in separate lines, as the first three non-negative even numbers. In the third query, the function print_from_stream(5, OddStream()) is executed, which leads to printing the values 1, 3, 5, 7 and 9 in separate lines, as the first five non-negative odd numbers.
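A sketch of a passing solution is shown below. The EvenStream and OddStream classes here are a guess at the locked stubs (the statement only tells us they expose get_next() and produce even and odd numbers); the essential fix is to avoid giving stream a default of EvenStream() directly, because such a default is evaluated once at definition time and the same instance would then be shared across every even query:

```python
class EvenStream:
    def __init__(self):
        self.current = 0

    def get_next(self):
        to_return = self.current
        self.current += 2
        return to_return


class OddStream:
    def __init__(self):
        self.current = 1

    def get_next(self):
        to_return = self.current
        self.current += 2
        return to_return


def print_from_stream(n, stream=None):
    # Defaulting to None and creating the instance inside the body
    # gives every call without a stream a fresh EvenStream, so a
    # second even query starts again from 0 instead of continuing
    # where the previous one stopped.
    if stream is None:
        stream = EvenStream()
    for _ in range(n):
        print(stream.get_next())
```

With this version, calling print_from_stream(3) twice prints 0, 2, 4 both times, matching the expected output above.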
https://www.hackerrank.com/challenges/default-arguments/problem
In this first chapter, we will introduce the three deep learning artificial neural networks that we will be using throughout the book. These deep learning models are MLPs, CNNs, and RNNs, which are the building blocks of the advanced deep learning topics covered in this book, such as Autoencoders and GANs. Together, we'll implement these deep learning models using the Keras library in this chapter. We'll start by looking at why Keras is an excellent choice as a tool for us. Next, we'll dig into the installation and implementation details of the three deep learning models. This chapter will:

- Establish why the Keras library is a great choice to use for advanced deep learning
- Introduce MLPs, CNNs, and RNNs – the core building blocks of most advanced deep learning models, which we'll be using throughout this book
- Provide examples of how to implement MLPs, CNNs, and RNNs using Keras and TensorFlow
- Along the way, start to introduce important deep learning concepts, including optimization, regularization, and loss functions

By the end of this chapter, we'll have the fundamental deep learning models implemented using Keras. In the next chapter, we'll get into the advanced deep learning topics that build on these foundations, such as Deep Networks, Autoencoders, and GANs.

Keras [Chollet, François. "Keras (2015)." (2017)] is a popular deep learning library with over 250,000 developers at the time of writing, a number that is more than doubling every year. Over 600 contributors actively maintain it. Some of the examples we'll use in this book have been contributed to the official Keras GitHub repository. Google's TensorFlow, a popular open source deep learning library, uses Keras as a high-level API to its library. In the industry, Keras is used by major technology companies like Google, Netflix, Uber, and NVIDIA. In this chapter, we introduce how to use the Keras Sequential API.
We have chosen Keras as our tool of choice to work with in this book because Keras is a library dedicated to accelerating the implementation of deep learning models. This makes Keras ideal for when we want to be practical and hands-on, such as when we're exploring the advanced deep learning concepts in this book. Because Keras is intertwined with deep learning, it is essential to learn the key concepts of deep learning before someone can maximize the use of the Keras library.

Note: All examples in this book can be found on GitHub.

Keras is a deep learning library that enables us to build and train models efficiently, in a significantly smaller number of lines of code. By using Keras, we'll gain productivity by saving time in code implementation, which can instead be spent on more critical tasks such as formulating better deep learning algorithms. We're combining Keras with deep learning, as it offers increased efficiency when introduced with the three deep learning networks that we will introduce in the following sections of this chapter.

Likewise, Keras is ideal for the rapid implementation of deep learning models, like the ones that we will be using in this book. Typical models can be built in a few lines of code using the Sequential Model API. However, do not be misled by its simplicity. Keras can also build more advanced and complex models using its Functional API and its Model and Layer classes, which can be customized to satisfy unique requirements. The Functional API supports building graph-like models, layer reuse, and models that behave like Python functions. Meanwhile, the Model and Layer classes provide a framework for implementing uncommon or experimental deep learning models and layers.

Keras is not an independent deep learning library. As shown in Figure 1.1.1, it is built on top of another deep learning library or backend. This could be Google's TensorFlow, MILA's Theano or Microsoft's CNTK. Support for Apache's MXNet is nearly completed.
We'll be testing the examples in this book on a TensorFlow backend using Python 3. This is due to the popularity of TensorFlow, which makes it a common backend. We can easily switch from one backend to another by editing the Keras configuration file .keras/keras.json in Linux or macOS. Due to the differences in the way low-level algorithms are implemented, networks can often have different speeds on different backends. On hardware, Keras runs on a CPU, GPU, and Google's TPU. In this book, we'll be testing on a CPU and NVIDIA GPUs (specifically, the GTX 1060 and GTX 1080Ti models).

Figure 1.1.1: Keras is a high-level library that sits on top of other deep learning libraries. Keras is supported on CPU, GPU, and TPU.

Before proceeding with the rest of the book, we need to ensure that Keras and TensorFlow are correctly installed. There are multiple ways to perform the installation; one example is installing using pip3:

$ sudo pip3 install tensorflow

If we have a supported NVIDIA GPU, with properly installed drivers, and both NVIDIA's CUDA Toolkit and cuDNN Deep Neural Network library, it is recommended that we install the GPU-enabled version since it can accelerate both training and prediction:

$ sudo pip3 install tensorflow-gpu

The next step for us is to then install Keras:

$ sudo pip3 install keras

The examples presented in this book will require additional packages, such as pydot, pydot_ng, vizgraph, python3-tk and matplotlib. We'll need to install these packages before proceeding beyond this chapter.

The following should not generate any error if both TensorFlow and Keras are installed along with their dependencies:

$ python3
>>> import tensorflow as tf
>>> message = tf.constant('Hello world!')
>>> session = tf.Session()
>>> session.run(message)
b'Hello world!'
>>> import keras.backend as K
Using TensorFlow backend.
>>> print(K.epsilon())
1e-07

A warning message about SSE4.2 AVX AVX2 FMA, similar to the one below, can be safely ignored.
To remove the warning message, you'll need to recompile and install TensorFlow from source code.

tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.2 AVX AVX2 FMA

This book does not cover the complete Keras API. We'll only be covering the materials needed to explain the advanced deep learning topics in this book. For further information, we can consult the official Keras documentation.

We've already mentioned that we'll be using three advanced deep learning models. They are:

- MLPs: Multilayer perceptrons
- RNNs: Recurrent neural networks
- CNNs: Convolutional neural networks

These are the three networks that we will be using throughout this book. Despite the three networks being separate, you'll find that they are often combined together in order to take advantage of the strengths of each model. In the following sections of this chapter, we'll discuss these building blocks one by one in more detail. MLPs are covered together with other important topics such as the loss function, optimizer, and regularizer. Afterward, we'll cover both CNNs and RNNs.

A multilayer perceptron, or MLP, is a fully connected network. You'll often find it referred to as either a deep feedforward network or a feedforward neural network in some literature. Understanding these networks in terms of known target applications will help us get insights about the underlying reasons for the design of the advanced deep learning models. MLPs are common in simple logistic and linear regression problems. However, MLPs are not optimal for processing sequential and multi-dimensional data patterns. By design, MLPs struggle to remember patterns in sequential data and require a substantial number of parameters to process multi-dimensional data.
For sequential data input, RNNs are popular because the internal design allows the network to discover dependency in the history of data that is useful for prediction. For multi-dimensional data like images and videos, a CNN excels in extracting feature maps for classification, segmentation, generation, and other purposes. In some cases, a CNN in the form of a 1D convolution is also used for networks with sequential input data. However, in most deep learning models, MLPs, RNNs, and CNNs are combined to make the most out of each network.

MLPs, RNNs, and CNNs do not complete the whole picture of deep networks. There is a need to identify an objective or loss function, an optimizer, and a regularizer. The goal is to reduce the loss function value during training since it is a good guide that a model is learning. To minimize this value, the model employs an optimizer. This is an algorithm that determines how weights and biases should be adjusted at each training step. A trained model must work not only on the training data but also on a test or even on unforeseen input data. The role of the regularizer is to ensure that the trained model generalizes to new data.

The first of the three networks we will be looking at is known as a multilayer perceptron, or MLP for short. The MNIST dataset is often considered the Hello World! of deep learning and is a suitable dataset for handwritten digit classification. Before we discuss the multilayer perceptron model, it's essential that we understand the MNIST dataset. A large number of the examples in this book use the MNIST dataset. MNIST is used to explain and validate deep learning theories because the 70,000 samples it contains are small, yet sufficiently rich in information:

Figure 1.3.1: Example images from the MNIST dataset. Each image is 28 × 28-pixel grayscale.

MNIST is a collection of handwritten digits ranging from 0 to 9; the preceding figure shows 25 random digit images drawn from it. Listing 1.3.1, mnist-sampler-1.3.1.py.
Keras code showing how to access the MNIST dataset, plot 25 random samples, and count the number of labels for the train and test datasets:

import numpy as np
from keras.datasets import mnist
import matplotlib.pyplot as plt

# load dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# count the number of unique train and test labels
unique, counts = np.unique(y_train, return_counts=True)
print("Train labels: ", dict(zip(unique, counts)))
unique, counts = np.unique(y_test, return_counts=True)
print("Test labels: ", dict(zip(unique, counts)))

# sample 25 mnist digits from the train dataset
indexes = np.random.randint(0, x_train.shape[0], size=25)
images = x_train[indexes]

# plot the 25 mnist digits in a 5 x 5 grid
plt.figure(figsize=(5, 5))
for i in range(len(indexes)):
    plt.subplot(5, 5, i + 1)
    plt.imshow(images[i], cmap='gray')
    plt.axis('off')
plt.savefig("mnist-samples.png")
plt.show()
plt.close('all')

The mnist.load_data() method is convenient since there is no need to load all 70,000 images and labels individually and store them in arrays. Executing python3 mnist-sampler-1.3.1.py on the command line produces the preceding figure, Figure 1.3.1.

Before discussing the multilayer perceptron classifier model, it is essential to keep in mind that while MNIST data are 2D tensors, they should be reshaped accordingly depending on the type of input layer. The following figure shows how a 3 × 3 grayscale image is reshaped for MLP, CNN, and RNN input layers:

Figure 1.3.2: An input image similar to the MNIST data is reshaped depending on the type of input layer. For simplicity, the reshaping of a 3 × 3 grayscale image is shown.

The proposed MLP model shown in Figure 1.3.3 can be used for MNIST digit classification. When the units or perceptrons are exposed, the MLP model is a fully connected network, as shown in Figure 1.3.4. It will also be shown how the output of the perceptron is computed from the inputs as a function of weights, wi, and bias, bn, for the nth unit. The corresponding Keras implementation is illustrated in Listing 1.3.2.

Figure 1.3.3: MLP MNIST digit classifier model

Figure 1.3.4: The MLP MNIST digit classifier in Figure 1.3.3 is made up of fully connected layers. For simplicity, the activation and dropout are not shown. One unit or perceptron is also shown.
Listing 1.3.2, mlp-mnist-1.3.2.py shows the Keras implementation of the MNIST digit classifier model using MLP:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
from keras.utils import to_categorical, plot_model
from keras.datasets import mnist

# load mnist dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# compute the number of labels
num_labels = len(np.unique(y_train))

# convert to one-hot vectors
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# compute image dimensions, then resize and normalize
image_size = x_train.shape[1]
input_size = image_size * image_size
x_train = np.reshape(x_train, [-1, input_size]).astype('float32') / 255
x_test = np.reshape(x_test, [-1, input_size]).astype('float32') / 255

# network parameters
batch_size = 128
hidden_units = 256
dropout = 0.45

# model is a 3-layer MLP with relu and dropout after each dense layer
model = Sequential()
model.add(Dense(hidden_units, input_dim=input_size))
model.add(Activation('relu'))
model.add(Dropout(dropout))
model.add(Dense(hidden_units))
model.add(Activation('relu'))
model.add(Dropout(dropout))
# output layer is a 10-dim one-hot vector
model.add(Dense(num_labels))
model.add(Activation('softmax'))
model.summary()
plot_model(model, to_file='mlp-mnist.png', show_shapes=True)

# loss function for one-hot vector
# use of adam optimizer
# accuracy is a good metric for classification tasks
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# train the network
model.fit(x_train, y_train, epochs=20, batch_size=batch_size)

# validate the model on test dataset to determine generalization
loss, acc = model.evaluate(x_test, y_test, batch_size=batch_size)

Hard-coding num_labels = 10 is also an option. But, it's always a good practice to let the computer do its job. The code assumes that y_train has labels 0 to 9. At this point, the labels are in digits format, 0 to 9. This sparse scalar representation of labels is not suitable for the neural network prediction layer that outputs probabilities per class. A more suitable format is called a one-hot vector, a 10-dim vector with all elements set to 0, except for the index of the digit class.

In Keras, the MNIST data is stored in tensors. The term tensor applies to a scalar (0D tensor), vector (1D tensor), matrix (2D tensor), and a multi-dimensional tensor. From this point, the term tensor is used unless scalar, vector, or matrix makes the explanation clearer. The rest of the code computes the image dimensions, the input_size of the first Dense layer, and scales each pixel value from the range 0 to 255 to the range 0.0 to 1.0. Although raw pixel values can be used directly, it is better to normalize the input data. After reshaping and normalization, the train and test images have dimensions [60000, 28 * 28] and [10000, 28 * 28], respectively.

Each of the first two Dense layers is followed by relu activation and dropout. 256 units are chosen since 128, 512 and 1,024 units have lower performance metrics. At 128 units, the network converges quickly, but has a lower test accuracy.
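The one-hot conversion that to_categorical performs can be sketched in plain NumPy. The helper below is hypothetical (not the Keras implementation), but it produces the same kind of 10-dim vectors described above:

```python
import numpy as np

def to_one_hot(labels, num_labels):
    # row i is all zeros except for a 1 at column labels[i]
    labels = np.asarray(labels)
    one_hot = np.zeros((labels.size, num_labels))
    one_hot[np.arange(labels.size), labels] = 1
    return one_hot

y = to_one_hot([2, 0, 9], 10)
```

For the label 2, the equivalent one-hot vector is [0, 0, 1, 0, 0, 0, 0, 0, 0, 0]: only the index of the digit class is set to 1.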
Adding more units, to 512 or 1,024, does not increase the test accuracy significantly. As shown in Listing 1.3.2, the classifier model is implemented using the Sequential Model API of Keras. This is sufficient if the model requires one input and one output processed by a sequence of layers. For simplicity, we'll use this in the meantime; however, in Chapter 2, Deep Neural Networks, the Functional API of Keras will be introduced to implement advanced deep learning models.

Since a Dense layer is a linear operation, a sequence of Dense layers can only approximate a linear function. The problem is that the MNIST digit classification is inherently a non-linear process. Inserting a relu activation between Dense layers enables MLPs to model non-linear mappings. relu, or Rectified Linear Unit (ReLU), is a simple non-linear function. It's very much like a filter that allows positive inputs to pass through unchanged while clamping everything else to zero. Mathematically, relu is expressed in the following equation and plotted in Figure 1.3.5:

relu(x) = max(0, x)

Figure 1.3.5: Plot of the ReLU function. The ReLU function introduces non-linearity in neural networks.

There are other non-linear functions that can be used, such as elu, selu, softplus, sigmoid, and tanh. However, relu is the most commonly used in the industry and is computationally efficient due to its simplicity. The sigmoid and tanh functions are used as activation functions in the output layer and are described later. Table 1.3.1 shows the equation for each of these activation functions:

Table 1.3.1: Definition of common non-linear activation functions

A neural network has the tendency to memorize its training data, especially if it contains more than enough capacity. In such a case, the network fails catastrophically when subjected to the test data. This is the classic case of the network failing to generalize. To avoid this tendency, the model uses a regularizing layer or function. A common regularizing layer is referred to as dropout. Given a dropout rate (here, dropout=0.45), the Dropout layer randomly removes that fraction of units from participating in the next layer. Dropout is only active during training; it is not present during prediction.
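The effect of a dropout layer during training can be sketched in NumPy. The "inverted dropout" rescaling below is one common implementation choice, shown here for illustration rather than as the exact Keras internals:

```python
import numpy as np

def dropout_layer(x, rate, training=True, seed=42):
    # at prediction time, dropout is inactive: pass x through unchanged
    if not training:
        return x
    # during training, zero out a `rate` fraction of the units and
    # rescale the survivors so the expected output stays the same
    rng = np.random.default_rng(seed)
    mask = rng.random(x.shape) >= rate
    return x * mask / (1.0 - rate)

x = np.ones(10000)
y = dropout_layer(x, rate=0.45)
```

Roughly 45% of the entries of y are zero, while its mean stays close to 1.0 thanks to the rescaling, so the following layer sees activations of the same expected magnitude during training and prediction.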
There are regularizers that can be used other than dropout, like l1 or l2. In Keras, the bias, weight and activation output can be regularized per layer. l1 and l2 favor smaller parameter values by adding a penalty function. Both l1 and l2 enforce the penalty using a fraction of the sum of absolute (l1) or square (l2) of parameter values. In other words, the penalty function forces the optimizer to find parameter values that are small. Neural networks with small parameter values are more insensitive to the presence of noise from within the input data. As an example, an l2 weight regularizer with fraction=0.001 can be implemented as:

from keras.regularizers import l2
model.add(Dense(hidden_units,
                kernel_regularizer=l2(0.001),
                input_dim=input_size))

No additional layer is added if l1 or l2 regularization is used. The regularization is imposed in the Dense layer internally. For the proposed model, dropout still has a better performance than l2.

The output layer has 10 units followed by softmax activation. The 10 units correspond to the 10 possible labels, classes or categories. The softmax activation can be expressed mathematically as shown in the following equation:

softmax(x_i) = e^(x_i) / Σ_j e^(x_j) (Equation 1.3.5)

The equation is applied to all N = 10 outputs, x_i. After softmax, the sum of all outputs is 1.0, and each output can be interpreted as the probability of the corresponding class. For example, a predicted output tensor may look like:

[ 3.57351579e-11 7.08998016e-08 2.30154569e-07 6.35787558e-07 5.57471187e-11 4.15353840e-09 3.55973775e-16 9.99995947e-01 1.29531730e-09 3.06023480e-06]

There are other output activation layers, like linear, sigmoid, and tanh. The linear activation is an identity function. It copies its input to its output. The sigmoid function is more specifically known as a logistic sigmoid. This will be used if the elements of the prediction tensor should be mapped between 0.0 and 1.0 independently. The summation of all elements of the predicted tensor is not constrained to 1.0, unlike in softmax. For example, sigmoid is used as the last layer in sentiment prediction (0.0 is bad to 1.0, which is good) or in image generation (0.0 maps to pixel value 0 and 1.0 maps to pixel value 255). The tanh function maps its input to the range [-1.0, 1.0]; its output can be rescaled to the range [0.0, 1.0].
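Going back to the softmax output layer, Equation 1.3.5 can be checked numerically. The max-subtraction below is a standard trick for numerical stability and does not change the result:

```python
import numpy as np

def softmax(x):
    # e^(x_i) / sum_j e^(x_j), computed stably by shifting by max(x)
    e = np.exp(x - np.max(x))
    return e / e.sum()

logits = np.array([1.0, 2.0, 5.0])
probs = softmax(logits)
```

The outputs are all positive, sum to 1.0, and the largest logit gets the highest probability, which is why the 10 softmax outputs of the classifier can be read as per-class probabilities.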
The following graph shows the sigmoid and tanh functions. Mathematically, the sigmoid can be expressed by the following equation:

sigmoid(x) = 1 / (1 + e^(-x)) (Equation 1.3.6)

How far the predicted tensor is from the one-hot ground truth vector is measured by the loss function, which quantifies the difference between target and prediction. In the current example, we are using categorical_crossentropy. It's the negative of the sum of the product of the target and the logarithm of the prediction. There are other loss functions that are available in Keras, such as mean_absolute_error and binary_crossentropy. The choice of the loss function is not arbitrary but should be a criterion that the model is learning. For classification by category, categorical_crossentropy or mean_squared_error is a good choice after the softmax activation layer. The binary_crossentropy loss function is normally used after the sigmoid activation layer, while mean_squared_error is an option for tanh output.

With optimization, the objective is to minimize the loss function. The idea is that if the loss is reduced to an acceptable level, the model has indirectly learned the function mapping input to output. Performance metrics are used to determine if a model has learned the underlying data distribution. The default metric in Keras is loss. During training, validation, and testing, other metrics such as accuracy can also be included. Accuracy is the percent, or fraction, of correct predictions based on ground truth. In deep learning, there are many other performance metrics. However, it depends on the target application of the model. In the literature, the performance metrics of the trained model on the test dataset are reported for comparison with other deep learning models.

In gradient descent (GD), the idea is to walk downhill, moving opposite the gradient, until the bottom is reached. The GD algorithm is illustrated in Figure 1.3.7. Let's suppose x is the parameter (for example, weight) being tuned to find the minimum value of y (for example, loss function). Starting at an arbitrary point of x = -0.5, the gradient dy/dx is computed. The GD algorithm imposes that x is then updated to x = x - lr * (dy/dx).
The new value of x is equal to the old value, plus the opposite of the gradient scaled by lr, which refers to the learning rate. If lr = 0.01, then the new value of x = -0.48. GD is performed iteratively. At each step, y will get closer to its minimum value. At x = 0.5, GD has found the absolute minimum of y; the gradient there is zero and recommends no further change in x. If the learning rate is too large, the minimum may be repeatedly overshot. On the other hand, too small a value of lr may take a significant number of iterations before the minimum is found. In the case of multiple minima, the search might get stuck in a local minimum.

Figure 1.3.7: Gradient descent

In the case of the function in Figure 1.3.8, there may need to be sufficient momentum in the gradient descent to overcome the hill at x = 0.0. In deep learning practice, it is normally recommended to start at a bigger learning rate (for example, 0.1 to 0.001) and gradually decrease it as the loss gets closer to the minimum.

Figure 1.3.8: Plot of a function with 2 minima, x = -1.51 and x = 1.66. Also shown is the derivative of the function.

Gradient descent is not typically used in deep neural networks since you'll often come upon millions of parameters that need to be trained. It is computationally inefficient to perform a full gradient descent. Instead, SGD is used. In SGD, a mini-batch of samples is chosen to compute an approximate value of the descent. The parameters (for example, weights and biases) are adjusted by the following equation:

θ = θ - lr * g (Equation 1.3.7)

In this equation, θ and g are the parameter and gradient tensors of the loss function, respectively. The gradient g is computed from partial derivatives of the loss function. The mini-batch size is recommended to be a power of 2 for GPU optimization purposes. In the proposed network, batch_size=128.

Equation 1.3.7 computes the last layer parameter updates. So, how do we adjust the parameters of the preceding layers? This is done by backpropagation, which propagates the gradients from the output layer back through the preceding layers. In Keras, this happens automatically when model.fit() is called; the number of parameter update steps per epoch is the size of the train dataset divided by the batch size, plus 1 to compensate for any fractional part.

At this point, the model for the MNIST digit classifier is now complete. Performance evaluation will be the next crucial step to determine if the proposed model has come up with a satisfactory solution.
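Before moving on, the parameter update of Equation 1.3.7 can be demonstrated on a toy loss, y = x**2, a stand-in for the functions in the figures, chosen because its gradient 2x is trivial to write down:

```python
def gradient(x):
    # derivative of the toy loss y = x**2
    return 2.0 * x

x = -0.5   # arbitrary starting point
lr = 0.1   # learning rate
for _ in range(100):
    # theta = theta - lr * g, applied to a single scalar parameter
    x = x - lr * gradient(x)
```

After the loop, x is very close to 0.0, the minimum of the toy loss. In SGD the gradient is merely approximated from a mini-batch of samples, but the update rule itself is identical.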
Training the model for 20 epochs will be sufficient to obtain comparable performance metrics. The following table, Table 1.3.2, shows the different network configurations and corresponding performance measures. Under Layers, the number of units is shown for layers 1 to 3. For each optimizer, the default parameters in Keras are used. The effects of varying the regularizer, optimizer and number of units per layer can be observed. Another important observation in Table 1.3.2 is that bigger networks do not necessarily translate to better performance. Increasing the depth of this network shows no added benefit in terms of accuracy, and dropout still has better performance than l2. The table demonstrates a typical deep neural network performance during tuning. The example indicates that there is a need to improve the network architecture. In the following section, another model using CNNs shows a significant improvement in test accuracy.

Using the Keras library provides us with a quick mechanism to double check the model description by calling:

model.summary()

The number of parameters in Listing 1.3.2 can be computed layer by layer:

- From the input to the first Dense layer: 784 × 256 + 256 = 200,960
- From the first Dense to the second Dense layer: 256 × 256 + 256 = 65,792
- From the second Dense to the output layer: 10 × 256 + 10 = 2,570

The total is 269,322.

We're now going to move onto the second artificial neural network, Convolutional Neural Networks (CNNs). In this section, we're going to solve the same MNIST digit classification problem, this time using CNNs. Figure 1.4.1 shows the CNN model that we'll use for the MNIST digit classification, while its implementation is illustrated in Listing 1.4.1. Some changes to the previous model will be needed to implement the CNN model.
Instead of having an input vector, the new input is an image tensor.

Figure 1.4.1: CNN model for MNIST digit classification

Listing 1.4.1, cnn-mnist-1.4.1.py shows the Keras code for the MNIST digit classification using CNN:

import numpy as np
from keras.models import Sequential
from keras.layers import Activation, Dense, Dropout
from keras.layers import Conv2D, MaxPooling2D, Flatten
from keras.utils import to_categorical, plot_model
from keras.datasets import mnist

# load mnist dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()

# compute the number of labels and convert to one-hot vectors
num_labels = len(np.unique(y_train))
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# reshape into image tensors and normalize
image_size = x_train.shape[1]
x_train = np.reshape(x_train, [-1, image_size, image_size, 1]).astype('float32') / 255
x_test = np.reshape(x_test, [-1, image_size, image_size, 1]).astype('float32') / 255

# network parameters
input_shape = (image_size, image_size, 1)
batch_size = 128
kernel_size = 3
pool_size = 2
filters = 64
dropout = 0.2

# model is a stack of CNN-ReLU-MaxPooling
model = Sequential()
model.add(Conv2D(filters=filters, kernel_size=kernel_size,
                 activation='relu', input_shape=input_shape))
model.add(MaxPooling2D(pool_size))
model.add(Conv2D(filters=filters, kernel_size=kernel_size, activation='relu'))
model.add(MaxPooling2D(pool_size))
model.add(Conv2D(filters=filters, kernel_size=kernel_size, activation='relu'))
model.add(Flatten())
# dropout added as regularizer
model.add(Dropout(dropout))
# output layer is a 10-dim one-hot vector
model.add(Dense(num_labels))
model.add(Activation('softmax'))
model.summary()
plot_model(model, to_file='cnn-mnist.png', show_shapes=True)

# loss function for one-hot vector
# use of adam optimizer
# accuracy is a good metric for classification tasks
model.compile(loss='categorical_crossentropy',
              optimizer='adam',
              metrics=['accuracy'])

# train the network
model.fit(x_train, y_train, epochs=10, batch_size=batch_size)

loss, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print("\nTest accuracy: %.1f%%" % (100.0 * acc))

The major change here is the use of Conv2D layers. The relu activation function is already an argument of Conv2D; it can instead be put in a separate Activation layer, for example when a batch normalization layer is included in the model. Batch normalization is used in deep CNNs so that large learning rates can be used without causing instability during training.

A Conv2D kernel can be thought of as a window that slides through the whole image, from left to right and from top to bottom. This operation is called convolution. It transforms the input image into feature maps, which are a representation of what the kernel has learned from the input image. The feature maps are then transformed into succeeding feature maps in the following layers. To keep the dimensions unchanged after the convolution, Conv2D will accept the option padding='same'. The input is padded with zeroes around its borders to keep the dimensions unchanged after the convolution:

Figure 1.4.3: The convolution operation shows how one element of the feature map is computed

The last change is the addition of a MaxPooling2D layer with the argument pool_size=2. MaxPooling2D compresses each feature map. Every patch of size pool_size × pool_size is reduced to one pixel. The value is equal to the maximum pixel value within the patch. The significance of MaxPooling2D is the reduction in feature map size, which translates to increased kernel coverage. For example, after MaxPooling2D(2), the 2 × 2 kernel is now approximately convolving with a 4 × 4 patch. The CNN has learned a new set of feature maps for a different coverage.

The output of the last Conv2D is a stack of feature maps. The role of Flatten is to convert the stack of feature maps into a vector format that is suitable for either Dropout or Dense layers, similar to the MLP model output layer.
Figure 1.4.5 shows the graphical representation of the CNN MNIST digit classifier. Table 1.4.1 shows the different CNN network configurations and corresponding performance measures. Listing 1.4.2 shows a summary of the CNN MNIST digit classifier model:

Figure 1.4.5: Graphical description of the CNN MNIST digit classifier

We're now going to look at the last of our three artificial neural networks, recurrent neural networks, or RNNs. RNNs are a family of networks that are suitable for learning representations of sequential data, like text in Natural Language Processing (NLP). In the following listing, Listing 1.5.1, rnn-mnist-1.5.1.py shows the Keras code for MNIST digit classification using RNNs:

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Activation, SimpleRNN
from keras.utils import to_categorical, plot_model
from keras.datasets import mnist

# load mnist dataset and convert labels to one-hot vectors
(x_train, y_train), (x_test, y_test) = mnist.load_data()
num_labels = len(np.unique(y_train))
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)

# each 28 x 28 image is treated as a sequence of 28 vectors of 28-dim
image_size = x_train.shape[1]
x_train = np.reshape(x_train, [-1, image_size, image_size]).astype('float32') / 255
x_test = np.reshape(x_test, [-1, image_size, image_size]).astype('float32') / 255

# network parameters
input_shape = (image_size, image_size)
batch_size = 128
units = 256
dropout = 0.2

# model is an RNN with 256 units, input is a 28-dim vector at 28 timesteps
model = Sequential()
model.add(SimpleRNN(units=units, dropout=dropout, input_shape=input_shape))
model.add(Dense(num_labels))
model.add(Activation('softmax'))
model.summary()
plot_model(model, to_file='rnn-mnist.png', show_shapes=True)

model.compile(loss='categorical_crossentropy',
              optimizer='sgd',
              metrics=['accuracy'])

# train the network
model.fit(x_train, y_train, epochs=20, batch_size=batch_size)

loss, acc = model.evaluate(x_test, y_test, batch_size=batch_size)
print("\nTest accuracy: %.1f%%" % (100.0 * acc))

There are two main differences between RNNs and the previous two models. First, the input is a sequence: here, each image is fed as a sequence of 28 vectors, each of 28 dimensions. Second, the SimpleRNN layer computes its output ht at each time step from both the current input xt and the previous output ht-1 using its kernels:

ht = tanh(b + W ht-1 + U xt) (Equation 1.5.1)

In this equation, b is the bias, while W and U are called the recurrent kernel (weights for the previous output) and the kernel (weights for the current input), respectively. The subscript t is used to indicate the position in the sequence. For a SimpleRNN layer with units=256, the total number of parameters is 256 + 256 × 256 + 256 × 28 = 72,960, corresponding to the b, W, and U contributions.

The following figure shows the diagrams of both SimpleRNN and RNN that were used in the MNIST digit classification. What makes SimpleRNN simpler than RNN is the absence of the output values Ot = V ht + c before the softmax is computed:

Figure 1.5.2: Diagram of SimpleRNN and RNN

RNNs might be initially harder to understand when compared to MLPs or CNNs. In MLPs, the perceptron is the fundamental unit. Once the concept of the perceptron is understood, MLPs are just a network of perceptrons. In CNNs, the kernel is a patch or window that slides through the feature map to generate another feature map. In RNNs, the most important concept is the self-loop: the cell output is fed back as input at the next time step. Figure 1.5.3 shows the graphical description of the RNN MNIST digit classifier. The model is very concise.
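The parameter counts quoted in this chapter can be verified with a little arithmetic. The helper functions below are hypothetical, but they count weights and biases the same way Keras does for these layer types:

```python
def dense_params(input_dim, units):
    # one weight per input per unit, plus one bias per unit
    return input_dim * units + units

def simple_rnn_params(input_dim, units):
    # bias b, recurrent kernel W (units x units), kernel U (input_dim x units)
    return units + units * units + input_dim * units

# MLP of Listing 1.3.2: 784 -> 256 -> 256 -> 10
mlp_total = (dense_params(784, 256)
             + dense_params(256, 256)
             + dense_params(256, 10))

# SimpleRNN of Listing 1.5.1: 28-dim inputs, 256 units, then Dense(10)
rnn_total = simple_rnn_params(28, 256) + dense_params(256, 10)
```

mlp_total comes out to 269,322, as computed in the MLP section, and rnn_total is 72,960 + 2,570 = 75,530, matching the model summary shown next.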
Table 1.5.1 shows that the SimpleRNN has the lowest accuracy among the networks presented.

Listing 1.5.2, RNN MNIST digit classifier summary:

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
simple_rnn_1 (SimpleRNN)     (None, 256)               72960
_________________________________________________________________
dense_1 (Dense)              (None, 10)                2570
_________________________________________________________________
activation_1 (Activation)    (None, 10)                0
=================================================================
Total params: 75,530
Trainable params: 75,530
Non-trainable params: 0

Figure 1.5.3: The RNN MNIST digit classifier graphical description

Table 1.5.1: The different SimpleRNN network configurations and performance measures

In many deep neural networks, other members of the RNN family are more commonly used. For example, Long Short-Term Memory (LSTM) networks have been used in both machine translation and question answering problems. LSTM networks address the problem of long-term dependency, or remembering relevant past information for the present output. Unlike RNNs or SimpleRNN, the internal structure of the LSTM cell is more complex. Figure 1.5.4 shows a diagram of LSTM in the context of MNIST digit classification. LSTM uses not only the present input and past outputs or hidden states; it also introduces a cell state, st, that carries information from one cell to the next. More details on LSTM can be found in the literature.

The LSTM() layer can be used as a drop-in replacement for SimpleRNN(). If LSTM is overkill for the task at hand, a simpler version called Gated Recurrent Unit (GRU) can be used. GRU simplifies LSTM by combining the cell state and hidden state together. Increasing the number of units will also increase the capacity of the model. However, another way of increasing the capacity is by stacking the RNN layers.
You should note though that as a general rule of thumb, the capacity of the model should only be increased if needed. Excess capacity may contribute to overfitting, and as a result, both longer training time and slower performance during prediction. This chapter provided an overview of the three deep learning models – MLPs, RNNs, CNNs – and also introduced Keras, a library for the rapid development, training and testing those deep learning models. function. For ease of understanding, these concepts were presented in the context of the MNIST digit classification. Different solutions to the MNIST digit classification using artificial neural networks, specifically MLPs, CNNs, and RNNs, which are important building blocks of deep neural networks, were also discussed together with their performance measures. With the understanding of deep learning concepts, and how Keras can be used as a tool with them, we are now equipped to analyze advanced deep learning models. After discussing Functional API in the next chapter, we'll move onto the implementation of popular deep learning models. Subsequent chapters will discuss advanced topics such as autoencoders, GANs, VAEs, and reinforcement learning. The accompanying Keras code implementations will play an important role in understanding these topics. LeCun, Yann, Corinna Cortes, and C. J. Burges. MNIST handwritten digit database. AT&T Labs [Online]. Available:. lecun. com/exdb/mnist 2 (2010).
***** [Update 11/25/2016: I humbly invite you to check out my InternetworkExpert class on Learn SDN with OpenFlow and Ryu Controller] *****

The OpenFlow Tutorial is simply awesome. It is hands down the best way to gain some hands-on experience with OpenFlow. Everything is encapsulated into a VirtualBox VM and you simply need to bootstrap it. Everything you need for the network component is included in Mininet, which is another awesome tool in itself. The Wireshark build has the OpenFlow dissector included already. And to run simple OpenFlow tests you can use the built-in dpctl. I do, however, think that when it comes to actually constructing a Controller + Mininet network, the tutorial needs more examples and hand-holding. Let's face it, for those of us coming from a Cisco / Juniper background with some Python chops, it is still a very steep learning curve when:

[If this is helpful to you, please leave me a note. If enough people respond, we can add a few items to the tutorial wiki. Otherwise I will just keep it here on my blog.]

1. You don't know which portion is OpenFlow, which is Python, and which is the Controller. For example, when the tutorial references of.ofp_flow_mod() and says "this instructs a switch to install a flow table entry", it includes all three components: POX, OpenFlow, and Python. The flow_mod is just an OpenFlow message, while it is implemented by POX via a Python function. But it was not until I read the POX Wiki that I realized the function is implemented in the "pox.openflow.libopenflow_01" module, and the example probably skipped the portion where 'import pox.openflow.libopenflow_01 as of' goes at the top of the file.

2. Most of the pointers later on are task oriented; there are no end-to-end examples to build from.

So here I aim to pick up at the "Create Learning Switch" section and provide a more detailed example using POX as the controller.
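Before touching any POX code, it helps to keep in mind what an OpenFlow flow table actually is: a prioritized list of match/action entries that the switch consults for every packet, falling back to the controller on a miss. Here is a standalone plain-Python sketch of that idea (no POX or OpenFlow libraries involved; the field names and table contents are only illustrative):

```python
# Standalone sketch of an OpenFlow-style flow table (illustrative only).
# Each entry pairs a match (field -> required value) with an action list.

def lookup(flow_table, packet):
    """Return the actions of the highest-priority matching entry, or None.

    An empty match acts as a wildcard entry, the way unset fields do
    in a real OpenFlow match structure.
    """
    candidates = [
        entry for entry in flow_table
        if all(packet.get(field) == value
               for field, value in entry["match"].items())
    ]
    if not candidates:
        return None  # table miss -> a real switch sends a packet-in to the controller
    return max(candidates, key=lambda e: e["priority"])["actions"]

table = [
    {"match": {"dl_dst": "00:00:00:00:00:03"}, "priority": 100,
     "actions": ["output:2"]},
    {"match": {}, "priority": 1, "actions": ["flood"]},  # catch-all entry
]

print(lookup(table, {"dl_src": "00:00:00:00:00:02",
                     "dl_dst": "00:00:00:00:00:03"}))  # ['output:2']
print(lookup(table, {"dl_dst": "ff:ff:ff:ff:ff:ff"}))  # ['flood']
```

Everything POX does below — flow_mod, packet_out — is ultimately just manipulating entries like these on the switch.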
[Update 04/06/2013: If you are using the new VM with Mininet 2.0 (added Feb 2013) in the tutorial, the s1 switch port 1 and port 2 now connect to h1 and h2 instead of h2 and h3. The section below used the old image, which has h2 and h3 at the first two ports. Please adjust accordingly if you use the new image.

Old:
openflow@openflowtutorial:~$ sudo mn --topo single,3 --mac --switch ovsk --controller remote
*** Adding links:
(s1, h2) (s1, h3) (s1, h4)
***
mininet> net
s1 <-> h2-eth0 h3-eth0 h4-eth0
mininet>

New:
mininet@mininet-vm:~$ sudo mn --topo single,3 --mac --switch ovsk --controller remote
*** Adding links:
(h1, s1) (h2, s1) (h3, s1)
***
mininet> ]

1. Clone the POX controller from the Git repository; Git is already pre-installed on the VM:

openflow@openflowtutorial:~$ git clone
Cloning into pox…

2. Verify and change into the directory:

openflow@openflowtutorial:~$ ls
mininet nox oflops oftest openflow openvswitch pox
openflow@openflowtutorial:~$ cd pox
openflow@openflowtutorial:~/pox$ ls
COPYING debug-pox.py doc ext pox pox.py README setup.cfg tests tools
openflow@openflowtutorial:~/pox$

3. Start a Mininet topology with one switch, three hosts, and a remote controller:

openflow@openflowtutorial:~$ sudo mn --topo single,3 --mac --switch ovsk --controller remote
*** Adding controller
*** Creating network
*** Adding hosts:
h2 h3 h4
*** Adding switches:
s1
*** Adding links:
(s1, h2) (s1, h3) (s1, h4)
*** Configuring hosts
h2 h3 h4
*** Starting controller
*** Starting 1 switches
s1
*** Starting CLI:
mininet>

4. Test your POX controller by starting POX and looking for the switch registration message. You can optionally open up Wireshark to verify:

openflow@openflowtutorial:~/pox$ ./pox.py
INFO:openflow.of_01:[Con 1/1] Connected to 00-00-00-00-00-01
Ready.
POX>
POX>

5. If you go back to the Mininet window and try to have h2 ping h3, nothing will happen because the controller is not doing anything. This is pretty boring.

mininet> h2 ping h3
<nothing>
<ctrl+c to kill>

6. At this point, exit the interactive prompt with exit().
You can leave the Mininet topology running or kill it in another window. I find it easier to kill it and just 'up arrow + enter' every time to see my controller messages.

POX> exit()

7. The way POX initiates components is to use the component name as the argument. To use a pre-built L2 learning switch component, you can pass 'forwarding.l2_learning' as the first argument:

openflow@openflowtutorial:~/pox$ ./pox.py forwarding.l2_learning
DEBUG:forwarding.l2_learning:Connection [Con 1/1]

8. At this point, if you go back to the Mininet window, you can see that with a learning switch in place, h2 can now ping h3:

mininet> h2 ping -c5 h3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=17.4 ms
64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.298 ms
64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.041 ms
64 bytes from 10.0.0.3: icmp_req=4 ttl=64 time=0.050 ms
64 bytes from 10.0.0.3: icmp_req=5 ttl=64 time=0.000 ms
--- 10.0.0.3 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 0.000/3.559/17.407/6.924 ms
mininet>

Your POX controller prompt should show the controller installing the flows:

POX>
DEBUG:forwarding.l2_learning:installing flow for 00:00:00:00:00:03.2 -> 00:00:00:00:00:02.1
DEBUG:forwarding.l2_learning:installing flow for 00:00:00:00:00:02.1 -> 00:00:00:00:00:03.2
DEBUG:forwarding.l2_learning:installing flow for 00:00:00:00:00:03.2 -> 00:00:00:00:00:02.1
DEBUG:forwarding.l2_learning:installing flow for 00:00:00:00:00:03.2 -> 00:00:00:00:00:02.1
DEBUG:forwarding.l2_learning:installing flow for 00:00:00:00:00:02.1 -> 00:00:00:00:00:03.2
POX>

You can also use the dpctl tool to verify that the flows are installed:

openflow@openflowtutorial:~/pox$ dpctl dump-flows tcp:127.0.0.1:6634

Notice the first ICMP reply is really slow, waiting for the flow to be installed.
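That slow first reply is the controller round trip: the switch punts the unknown destination up to POX, which learns the source MAC and installs a flow. The learning logic itself is tiny. Here is a standalone plain-Python sketch of the MAC-to-port table an L2 learning component maintains (a simplification for illustration, not the actual l2_learning code):

```python
# Standalone sketch of L2 learning: record source MAC -> ingress port,
# then forward by destination MAC, flooding on a table miss.

FLOOD = "flood"

def handle_frame(mac_table, src_mac, dst_mac, in_port):
    mac_table[src_mac] = in_port          # learn where the sender lives
    return mac_table.get(dst_mac, FLOOD)  # known dst -> its port, else flood

mac_table = {}
# h2 (port 1) pings h3 (port 2), mirroring the topology above:
print(handle_frame(mac_table, "00:00:00:00:00:02", "00:00:00:00:00:03", 1))  # flood
print(handle_frame(mac_table, "00:00:00:00:00:03", "00:00:00:00:00:02", 2))  # 1
print(handle_frame(mac_table, "00:00:00:00:00:02", "00:00:00:00:00:03", 1))  # 2
```

The first frame floods because h3's port is still unknown; once the reply is seen, both directions resolve to a single port and can be installed as flows.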
But later packets are fast.

stats_reply (xid=0xad1245f1): flags=none type=1(flow)
cookie=0, duration_sec=4s, duration_nsec=993000000s, table_id=0, priority=32768, n_packets=1, n_bytes=42, idle_timeout=10,hard_timeout=30,arp,nw_proto=2,tp_src=0,tp_dst=0,actions=output:2
cookie=0, duration_sec=9s, duration_nsec=1000000s, table_id=0, priority=32768, n_packets=7, n_bytes=686, idle_timeout=10,hard_timeout=30,icmp,icmp_type=8,icmp_code=0,actions=output:2
cookie=0, duration_sec=4s, duration_nsec=994000000s, table_id=0, priority=32768, n_packets=1, n_bytes=42, idle_timeout=10,hard_timeout=30,arp,nw_proto=1,icmp_type=0,icmp_code=0,actions=output:1
cookie=0, duration_sec=10s, duration_nsec=32000000s, table_id=0, priority=32768, n_packets=9, n_bytes=882, idle_timeout=10,hard_timeout=30,icmp,icmp_type=0,icmp_code=0,actions=output:1
openflow@openflowtutorial:~/pox$

Next, clone the poxstuff example repository:

openflow@openflowtutorial:~/pox$ git clone
Cloning into poxstuff...
remote: Counting objects: 70, done.
remote: Compressing objects: 100% (41/41), done.
remote: Total 70 (delta 39), reused 59 (delta 28)
Unpacking objects: 100% (70/70), done.
openflow@openflowtutorial:~/pox$

9. The POX controller will automatically look in the ext/ directory for components. You can copy the of_sw_tutorial.py file to the ext/ folder if you want the POX controller to be able to find it automatically.

There is also an interactive version of the same file, of_sw_tutorial_oo.py, that is listed in the POX Wiki. It did not work for me, though; it gave me the error below. I also prefer to do the 'dumb' thing and just type out code to help me learn. Just FYI.

openflow@openflowtutorial:~/pox$ ./pox.py of_sw_tutorial_oo
POX 0.0.0 / Copyright 2011 James McCauley
DEBUG:ext.of_sw_tutorial_oo:Initializing switch SW_IDEALPAIRSWITCH.
Traceback (most recent call last):
  File "./pox.py", line 352, in main
    if doLaunch():
  File "./pox.py", line 152, in doLaunch
    f(**params)
  File "/home/openflow/pox/ext/of_sw_tutorial_oo.py", line 327, in launch
    core.Interactive.variables['MySwitch'] = MySwitch
  File "/home/openflow/pox/pox/core.py", line 346, in __getattr__
    raise AttributeError("'%s' not registered" % (name,))
AttributeError: 'Interactive' not registered
openflow@openflowtutorial:~/pox$
openflow@openflowtutorial:~/pox$ cp poxstuff/of_sw_tutorial.py ext/

10. To build muscle memory, I manually create a Python file and start typing it in line-by-line in order to learn. Here is the link I look at,.

openflow@openflowtutorial:~/pox$ ./pox.py of_sw_tutorial
POX 0.0.0 / Copyright 2011 James McCauley
INFO:ext.of_sw_tutorial

11. For example, to build a dumb hub, follow the code and only write out what is required. Here it is block-by-block.

The pox.core and pox.openflow.libopenflow_01 packages are used by the functions, and we grab a logger for output:

#!/usr/bin/env python

# Copy and paste
# step by step to understand the process

from pox.core import core
import pox.openflow.libopenflow_01 as of

log = core.getLogger()

table = {}

This is a base function to allow sending packets out:

def send_packet(event, dst_port = of.OFPP_ALL):
    msg = of.ofp_packet_out(in_port=event.ofp.in_port)
    if event.ofp.buffer_id != -1 and event.ofp.buffer_id is not None:
        msg.buffer_id = event.ofp.buffer_id
    else:
        if event.ofp.data is None:
            return
        msg.data = event.ofp.data
    msg.actions.append(of.ofp_action_output(port = dst_port))
    event.connection.send(msg)

Here is the dumb hub function itself:

def _handle_dumbhub_packetin(event):
    packet = event.parsed
    send_packet(event, of.OFPP_ALL)

    log.debug("Broadcasting %s.%i -> %s.%i" %
        (packet.src, event.ofp.in_port, packet.dst, of.OFPP_ALL))

Here is the launch() function to specify which handler to use:

# launch whichever implementation you want via function
def launch():
    core.openflow.addListenerByName("PacketIn", _handle_dumbhub_packetin)

    log.info("Switch Tutorial is running.")

12. Let's start our simple dumb hub controller:

openflow@openflowtutorial:~/pox$ ./pox.py of_sw_tutorial_myTest
POX 0.0.0 / Copyright 2011 James McCauley
INFO:ext.of_sw_tutorial_myTest

mininet> h2 ping -c5 h3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=74.3 ms
64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=33.4 ms
64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=44.9 ms
64 bytes from 10.0.0.3: icmp_req=4 ttl=64 time=21.6 ms
64 bytes from 10.0.0.3: icmp_req=5 ttl=64 time=6.10 ms
--- 10.0.0.3 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4026ms
rtt min/avg/max/mdev = 6.101/36.093/74.358/23.053 ms
mininet>

Notice the packet times are longer on the first packet, and subsequent packets also take longer compared to the L2 learning switch. You will also see the broadcast packets on the POX console:

POX>
DEBUG:ext.of_sw_tutorial_myTest:Broadcasting 00:00:00:00:00:02.1 -> ff:ff:ff:ff:ff:ff.65532
DEBUG:ext.of_sw_tutorial_myTest:Broadcasting 00:00:00:00:00:03.2 -> 00:00:00:00:00:02.65532
DEBUG:ext.of_sw_tutorial_myTest:Broadcasting 00:00:00:00:00:02.1 -> 00:00:00:00:00:03.65532

13. You can now start to experiment with the functions.
Say, you can insert a print statement to see what the parsed packet looks like:

def _handle_dumbhub_packetin(event):
    packet = event.parsed
    print packet
    send_packet(event, of.OFPP_ALL)

    log.debug("Broadcasting %s.%i -> %s.%i" %
        (packet.src, event.ofp.in_port, packet.dst, of.OFPP_ALL))

Restart the controller, and you will now see the packet:

POX>
INFO:openflow.of_01:[Con 1/1] Connected to 00-00-00-00-00-01
[00:00:00:00:00:02>00:00:00:00:00:03:IP]|([v:4hl:5l:84t:64]ICMP cs:26a5[10.0.0.2>10.0.0.3]){t:ECHO_REQUEST c:0 chk:23e7}{id:2835 seq:3}
DEBUG:ext.of_sw_tutorial_myTest:Broadcasting 00:00:00:00:00:02.1 -> 00:00:00:00:00:03.65532
[00:00:00:00:00:03>00:00:00:00:00:02:IP]|([v:4hl:5l:84t:64]ICMP cs:dfc5[10.0.0.3>10.0.0.2]){t:ECHO_REPLY c:0 chk:2be7}{id:2835 seq:3}
DEBUG:ext.of_sw_tutorial_myTest:Broadcasting 00:00:00:00:00:03.2 -> 00:00:00:00:00:02.65532
[00:00:00:00:00:02>00:00:00:00:00:03:IP]|([v:4hl:5l:84t:64]ICMP cs:26a5[10.0.0.2>10.0.0.3]){t:ECHO_REQUEST c:0 chk:4fd1}{id:2835 seq:4}

14. Or if you want to look at the packet-out message:

def send_packet(event, dst_port = of.OFPP_ALL):
    msg = of.ofp_packet_out(in_port=event.ofp.in_port)
    print msg
    if event.ofp.buffer_id != -1 and event.ofp.buffer_id is not None:
        msg.buffer_id = event.ofp.buffer_id
    else:
        if event.ofp.data is None:
            return
        msg.data = event.ofp.data
    msg.actions.append(of.ofp_action_output(port = dst_port))
    event.connection.send(msg)

POX>
ofp_packet_out
  header:
    version: 1
    type: 13 (OFPT_PACKET_OUT)
    length: 8
    xid: None
  buffer_id: -1
  in_port: 1
  actions_len: 0
  actions:
DEBUG:ext.of_sw_tutorial_myTest:Broadcasting 00:00:00:00:00:02.1 -> 00:00:00:00:00:03.65532
ofp_packet_out
  header:
    version: 1
    type: 13 (OFPT_PACKET_OUT)
    length: 8
    xid: None
  buffer_id: -1
  in_port: 2
  actions_len: 0
  actions:
DEBUG:ext.of_sw_tutorial_myTest:Broadcasting 00:00:00:00:00:03.2 -> 00:00:00:00:00:02.65532

15.
Let's try the lazyhub function; just create the lazyhub function and change the launch() function:

def _handle_lazyhub_packetin(event):
    packet = event.parsed

    msg = of.ofp_flow_mod()
    msg.idle_timeout = 10
    msg.hard_timeout = 30
    msg.actions.append(of.ofp_action_output(port = of.OFPP_ALL))
    event.connection.send(msg)

    log.debug("Installing %s.%i -> %s.%i" %
        ("ff:ff:ff:ff:ff:ff", event.ofp.in_port, "ff:ff:ff:ff:ff:ff", of.OFPP_ALL))

# launch whichever implementation you want via function
def launch():
    core.openflow.addListenerByName("PacketIn", _handle_lazyhub_packetin)

    log.info("Switch Tutorial is running.")

mininet> h2 ping -c5 h3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_req=2 ttl=64 time=0.674 ms
64 bytes from 10.0.0.3: icmp_req=3 ttl=64 time=0.060 ms
64 bytes from 10.0.0.3: icmp_req=4 ttl=64 time=0.041 ms
64 bytes from 10.0.0.3: icmp_req=5 ttl=64 time=0.286 ms
--- 10.0.0.3 ping statistics ---
5 packets transmitted, 4 received, 20% packet loss, time 4014ms
rtt min/avg/max/mdev = 0.041/0.265/0.674/0.255 ms
mininet>

POX>
POX> INFO:openflow.of_01:[Con 1/1] Connected to 00-00-00-00-00-01
DEBUG:ext.of_sw_tutorial_myTest:Installing ff:ff:ff:ff:ff:ff.1 -> ff:ff:ff:ff:ff:ff.65532
POX>

I am not sure if this is helpful for others, but at least for me, this provides a good entry point to start understanding the different aspects of the POX controller. Cheers. Leave me comments to let me know how to improve.

HI Eric Chou, i have some question, hope you'll help me:
-when i created topo like you first: & sudo mn --topo single,3 --mac --switch ovsk --controller remote , i received message: Unable to contact the remote controller at 127.0.0.1:6633
-when i tried connect to POX: ~/pox$ ./pox.py, i only saw display: INFO:openflow.of_01:[Con 1/1] Connected to 00-00-00-00-00-01 and no more.
I saw you have POX> in below, i think this is place to make code and can controlle in POX. This is 2 my question. I'm the beginner at mininet. my email: tuanhai.bk@gmail.com

Hi Hai Pham, thanks for reading! I took down my VM at the time to save some laptop resources. But both of those messages you saw are normal. The Mininet topology is created with 1 switch and 3 hosts, and the switch is trying to connect to the kernel switch at port 6633. If you haven't started the POX controller at port 6633, then it would not connect. When the POX controller is started, the switch immediately connects to the controller. Mininet sets the switch MAC address as 00:00:00:00:00:01 (hosts are 0::02, 0::03, 0:004, etc IIRC). But by default, there are no applications associated with POX, so you should proceed to step 6, stop the POX controller after you make sure the switch can register. Then proceed to use the L2 learning switch app to make sure everything is working. If you are happy with the boilerplate applications (dumb hub, L2 learning switch), then you can stop there. But if you are interested in learning how to make your own application, then you should proceed. Hope it helps. :)

Hey Eric, I have a quick question: This is referring to step 7. So I have POX running simple l2_forwarding. I ping from h1 to h3. I can see the flows being added from the Debug of the POX CLI, but I don't see any flows being dumped when I use dpctl. Here is the output:

mininet> h1 ping h3
PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
64 bytes from 10.0.0.3: icmp_req=1 ttl=64 time=15.9 ms

POX>
DEBUG:forwarding.l2_learning:installing flow for 00:00:00:00:00:03.3 -> 00:00:00:00:00:01.1
DEBUG:forwarding.l2_learning:installing flow for 00:00:00:00:00:01.1 -> 00:00:00:00:00:03.3
DEBUG:forwarding.l2_learning:installing flow for 00:00:00:00:00:03.3 -> 00:00:00:00:00:01.1
DEBUG:forwarding.l2_learning:installing flow for 00:00:00:00:00:03.3 -> 00:00:00:00:00:01.1
DEBUG:forwarding.l2_learning:installing flow for 00:00:00:00:00:01.1 -> 00:00:00:00:00:03.3

mininet@mininet-vm:~/pox/pox$ sudo dpctl dump-flows tcp:127.0.0.1:6634
stats_reply (xid=0xf4171fd9): flags=none type=1(flow)
mininet@mininet-vm:~/pox/pox$ sudo ovs-ofctl dump-flows tcp:127.0.0.1:6634
NXST_FLOW reply (xid=0x4):
mininet@mininet-vm:~/pox/pox$ sudo ovs-ofctl dump-flows s1
NXST_FLOW reply (xid=0x4):
mininet@mininet-vm:~/pox/pox$ sudo ovs-ofctl dump-flows c0
ovs-ofctl: c0 is not a bridge or a socket

Any help would be greatly appreciated Thanks, Daniel

Hi Daniel, sounds like the flows are installed and you can see the ping flowing between the hosts, just not from the other monitoring tools. I think there are two things that we can try:
1. if you enable X forwarding as taught by the OpenFlow tutorial, clear the flow tables, and run tcpdump on all hosts, do you see what you would expect? (i.e. broadcast first packet then just h1-h3).
2. this is an assumption, but worth checking: prior to enabling any controller, the tutorial explained how to install flows manually with dpctl; it is probably worth checking again that dpctl itself actually works.
I have also found that the VM / Mininet sometimes would give a hiccup; for me it happens mostly when the VM is still running and I hibernate the machine. When I come back to it later there seem to be some small quirks and I have to restart Mininet. I don't know if this just happens to me and my machine or not. Let me know what you find out. tcpdump is really the next step for verification in my humble opinion.
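On the subject of dpctl verification: when eyeballing dump-flows output gets tedious, a few lines of Python can split each entry into fields. A standalone sketch, assuming the comma-separated key=value layout shown in the dumps above (not an official parser):

```python
def parse_flow_line(line):
    """Split one dpctl dump-flows entry into a field dictionary.

    Assumes the 'key=value, key=value, ...' layout seen in the dumps,
    taking only the first '=' in each chunk as the separator.
    """
    fields = {}
    for chunk in line.strip().split(","):
        if "=" in chunk:
            key, _, value = chunk.partition("=")
            fields[key.strip()] = value.strip()
    return fields

sample = ("cookie=0, duration_sec=4s, n_packets=1, n_bytes=42, "
          "idle_timeout=10,hard_timeout=30,actions=output:2")
flow = parse_flow_line(sample)
print(flow["actions"])     # output:2
print(flow["n_packets"])   # 1
```

From there it is easy to, say, sum n_packets across entries or grep for a particular output port programmatically.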
HI Eric, now i want using Pox to control my topo. Example, when i connected to POX, i can ping h1,h2. But now i want h1 cant ping h2, can you tell me how can i do that by coding on POX, i dont know working on POX now.thanks!!! Hi Hai Pham, if I understand correctly, you have successfully implement whatever pre-made packages from POX to control your topology such as the dumb hub and L2 learning switch, now you would like to make 1 small tweak to make sure you understand how everything ties together? If the answer is yes, the short answer is I was in the same boat when I started to explore POX and my goal for the blog is to share my experience so others can build on top of it. So for example, in step 13, you can see the packet structure that was sent to the controller. So to do what you want to do, you can simply insert an if statement to look for (ip.src==10.0.0.2 and ip.dst==10.0.0.3) and of action to drop. Then everything besides those packets will pass, but the packets matching your description will drop, just like an access list. :) Hope it helps. Thanks very much, Eric can you send me your mail to my inbox, im studying at python code. If i have your mail, its very easily to contact you when i have a trouble.My mail: tuanhai.bk@gmail.com @Daniel, doing more work with OF / Mininet again. Wondering if you still cant see the flows in dpctl? Just tried it again with the new image with Mininet 2.0: 1. Initial state with no flow: mininet@mininet-vm:~$ date Sat Apr 6 18:25:08 PDT 2013 mininet@mininet-vm:~$ dpctl dump-flows tcp:127.0.0.1:6634 stats_reply (xid=0x906ef4d2): flags=none type=1(flow) mininet@mininet-vm:~$ mininet@mininet-vm:~/pox$ date Sat Apr 6 18:25:19 PDT 2013 mininet@mininet-vm:~/pox$ 2. Start POX in debug mininet@mininet-vm:~/pox$ ./pox.py --verbose forwarding.l2_learning POX 0.1.0 (betta) / Copyright 2011-2013 James McCauley, et al. DEBUG:core:POX 0.1.0 (betta) going up... 
DEBUG:core:Running on CPython (2.7.3/Sep 26 2012 21:51:14)
DEBUG:core:Platform is Linux-3.5.0-17-generic-x86_64-with-Ubuntu-12.10-quantal
INFO:core:POX 0.1.0 (betta) is up.
DEBUG:openflow.of_01:Listening on 0.0.0.0:6633
INFO:openflow.of_01:[00-00-00-00-00-01 1] connected

3. h1 pings h2:
mininet> h1 ping -c 3 h2
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_req=1 ttl=64 time=6.88 ms
64 bytes from 10.0.0.2: icmp_req=2 ttl=64 time=39.4 ms
64 bytes from 10.0.0.2: icmp_req=3 ttl=64 time=0.456 ms
--- 10.0.0.2 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2005ms
rtt min/avg/max/mdev = 0.456/15.608/39.481/17.083 ms
mininet>

4. Flow gets installed:
DEBUG:forwarding.l2_learning:Connection [00-00-00-00-00-01 1]
DEBUG:forwarding.l2_learning:Port for 00:00:00:00:00:02 unknown -- flooding
DEBUG:forwarding.l2_learning:installing flow for 00:00:00:00:00:02.2 -> 00:00:00:00:00:01.1
DEBUG:forwarding.l2_learning:installing flow for 00:00:00:00:00:01.1 -> 00:00:00:00:00:02.2
DEBUG:forwarding.l2_learning:installing flow for 00:00:00:00:00:02.2 -> 00:00:00:00:00:01.1
DEBUG:forwarding.l2_learning:installing flow for 00:00:00:00:00:01.1 -> 00:00:00:00:00:02.2

5.
dpctl shows flows:
mininet@mininet-vm:~$ dpctl dump-flows tcp:127.0.0.1:6634
stats_reply (xid=0xcdc223b6): flags=none type=1(flow)
cookie=0, duration_sec=6s, duration_nsec=220000000s, table_id=0, priority=65535, n_packets=3, n_bytes=294, idle_timeout=10,hard_timeout=30,icmp_tos=0x00,icmp_type=0,icmp_code=0,actions=output:1
cookie=0, duration_sec=5s, duration_nsec=185000000s, table_id=0, priority=65535, n_packets=2, n_bytes=196, idle_timeout=10,hard_timeout=30,icmp_tos=0x00,icmp_type=8,icmp_code=0,actions=output:2
cookie=0, duration_sec=1s, duration_nsec=181000000s, table_id=0, priority=65535, n_packets=1, n_bytes=42, idle_timeout=10,hard_timeout=30,arp_proto=1,actions=output:1
cookie=0, duration_sec=1s, duration_nsec=179000000s, table_id=0, priority=65535, n_packets=1, n_bytes=42, idle_timeout=10,hard_timeout=30,arp_proto=2,actions=output:2
mininet@mininet-vm:~$ date
Sat Apr 6 18:25:48 PDT 2013
mininet@mininet-vm:~$

Have you tried it again?

how to check flow entry is full or not in the pox controller

thanks Eric! :)

Hi Eric, First of all it was very helpful in understanding it and cleared many doubts. I have one question. Inside the misc.of_tutorial.py file I wrote a parser which uses Scapy:

def parseEachPacket(pkt):
    if(pkt.haslayer(Ether)):
        print "Ethernet::"
        ether = pkt[Ether]
        print " Src:: " + str(ether.src)
        print " Dst:: " + str(ether.dst)

and i have imported the required scapy packages. but when I run it this error pops up:

AttributeError: 'ethernet' object has no attribute 'haslayer'

Any idea ? please help

Hi, I am new to Scapy and OF as well, just a fair warning.. :) How are you calling the function? Because the error seems to indicate that pkt in the function is not understood as a Scapy packet.
Here is a simple program that I took from the Scapy tutorial with your function:

** begin **
#!/usr/bin/env python
from scapy.all import *

def parsePacket(pkt):
    if pkt.haslayer(Ether):
        print "Found Ethernet packet"
        print str(pkt[Ether].dst)

if __name__ == "__main__":
    sniff(prn=parsePacket, store=0)
** end **

mininet@mininet-vm:~/pox$ sudo ./scapyTest.py
WARNING: No route found for IPv6 destination :: (no default route?)
Found Ethernet packet
:ac
Found Ethernet packet
:5a
....

The haslayer() function resides under packet.Packet:

In [1]: from scapy.all import *
WARNING: No route found for IPv6 destination :: (no default route?)
In [2]:
In [2]: packet.Packet.ha
packet.Packet.hashret packet.Packet.haslayer
In [2]:

So perhaps searching for a way to make pkt into a Scapy packet object would be the logical next step? Also, I wanted to mention that it almost seems easier to parse out the packet in OF and then reconstruct the packet in Scapy. For example, in step 13, you already know the IP source, so you can use Scapy to construct a new packet with the same source:

from scapy.all import IP
ipSource = packet.src
newPacket = IP(src=ipSource)

Something like that might fit your needs?

i want to start up a dumb hub. however, when i type it out it gives me this "Module not found: of_sw_tutorial_myTest" how can i solve this?

By default it searches the ~/ext directory, maybe check if the file is in that directory?

This is very helpful, many thanks!

Thanks for reading Yen. :)

Thanks Eric for this tutorial. It is very helpful.
I've this question when I run:

./pox.py flow_stats l3_learning #### flow_stats.py from poxstuff/

with mn --controller remote I've faced this error:

POX> INFO:openflow.of_01:[Con 1/1] Connected to 00-00-00-00-00-01
Task caused exception and was de-scheduled
Traceback (most recent call last):
  File "/home/mininet/pox/pox/lib/recoco/recoco.py", line 276, in cycle
    rv = t.execute()
  File "/home/mininet/pox/pox/lib/recoco/recoco.py", line 94, in execute
    return self.gen.send(v)
  File "/home/mininet/pox/pox/lib/recoco/recoco.py", line 751, in run
    rv = self._callback(*self._args,**self._kw)
  File "/home/mininet/pox/ext/flow_stats.py", line 42, in _timer_func
    connection.send(of.ofp_stats_request(body=of.ofp_port_stats_request()))
  File "/home/mininet/pox/pox/openflow/of_01.py", line 572, in send
    data = data.pack()
  File "/home/mininet/pox/pox/openflow/libopenflow_01.py", line 2136, in pack
    packed += struct.pack("!HH", self.type, self.flags)
error: cannot convert argument to integer
DEBUG:ext.flow_stats:FlowStatsReceived from 00-00-00-00-00-01: []
INFO:ext.flow_stats:Web traffic from 00-00-00-00-00-01: 0 bytes (0 packets) over 0 flows

Hi Ali, I am afraid that might be a bit out of my knowledge based on the information given. Is flow_stats your own code or one of the stock ones? Have you tried some of the email aliases such as openflow-discuss@ and mininet-discuss-requests@lists.stanford.edu? I find the list to be very helpful and knowledgeable, sometimes answered by the creators of the tutorials/tools themselves. Best of luck! :)

hey Eric, Great thanks for your great job! It is a pity that there are no references to this tutorial out there; I was having a tough time trying to understand the code before I got here. I think a small link to this tutorial in the POX wiki and the OpenFlow tutorial would be really helpful for newcomers. Nevertheless, I hope I am not too late to a party!
:) I am trying to do a simple VLAN-translation function that would assign VLAN tags depending on given criteria. It would be very interesting to hear your opinion and maybe suggestions on the realization of such a function. Looking forward to hearing from you! Regards, Abe

Hi Abe, thank you for reading and the encouragement. :) I am not sure I qualify to answer that VLAN-translation question. I think in today's implementation, it is already a bit complex to determine packet forwarding based on the existing fields; having the promise to swap headers is certainly possible, but would take more work. It is certainly fun to have this option, right? :)

Hi Eric, actually I am new to python and POX. This tutorial helped me so much to understand the basic functions of POX and how it is working. would you mind telling me, for understanding these codes in Python, which reference would be useful? and another question is about lazyhub. I changed the dumbhub to lazyhub but it gives me an error. i will be thankful if you can help me. the error is as follows:

POX> INFO:openflow.of_01:[Con 1/1] Connected to 00-00-00-00-00-01
ERROR:core:Exception while handling OpenFlowNexus!PacketIn...
Traceback (most recent call last):
  File "/home/mininet/pox/pox/lib/revent/revent.py", line 233, in raiseEventNoErrors
    return self.raiseEvent(event, *args, **kw)
  File "/home/mininet/pox/pox/lib/revent/revent.py", line 280, in raiseEvent
    rv = event._invoke(handler, *args, **kw)
  File "/home/mininet/pox/pox/lib/revent/revent.py", line 158, in _invoke
    return handler(self, *args, **kw)
  File "/home/mininet/pox/ext/of_sw_tutorial_myTest.py", line 33, in _handle_lazyhub_packetin
    ("ff:ff:ff:ff:ff:ff", event.ofp.in_port, "ff:ff:ff:ff:ff:ff", of.OFPF_ALL))
AttributeError: 'module' object has no attribute 'OFPF_ALL'

thanks a lot for providing this tutorial for others

Thank you for the kind words, Farzaneh. :)

hi eric, I have just started using pox can you please guide how to handle "www" request if there is no DNS server?
thanks Hi Gurjot, sorry for the late reply, I have been thinking about doing a Part 2 of the tutorial. The OpenFlow project has come very far and POX has a ton of new features as well. Let me bundle this in with other questions for the Part 2. Stay tuned! :) I'm working with forwarding_l2.multi.. 1. Whether forwarding.l2_multi.py is for switch only? 2. That uses discovery.py to know the topology of the network and picks the shortest one, so where is the role of controller? Waiting eagerly for your reply... Thank you... Bondankit07@gmail.com Hi bondankit07, thank you for reading. I have been thinking about doing a part 2 for the tutorial with how different both OpenFlow and POX are now. I will probably bundle in the questions I have received and answer them together. Stay tuned! :) hi eric, I am new in SDN. Can you please help me how to block ip addresses in mininet using pox controller? thanks Hi Divam, sorry for the late reply, I have been thinking about doing a part 2 of the tutorial with so much change in OpenFlow, SDN, and POX. I will bundle in the questions I have received and answer them together. Stay tuned! :) Hi Eric, Great job! It helps me a lot trying to understand the differences and limits between OpenFlow/POX/Python, I've been struggling a while the original tutorial. Any clues on how to start designing and implementing a basic load balancer? Thanks a lot!! Hi Rous, thanks for reading! Yeah, when I first looked at POX the documentation were a bit lacking so I wrote this one after I bump my head many, many times. Glad it was helpful to you. To be honest, for basic load balancer, I will probably take a serious look at Google's project Seesaw,. It is not Python, but I have heard good things about Go so it would be probably be a good opportunity to play with Go, I guess. :) Hi Eric, great article for beginners It helped me a lot. 
I've a confusion, how can I build something in a way that there're 6 hosts, one switch and one controller, where 3 hosts (Hx) and the other 3 hosts (Gx) can only ping each other but not the others. Like h1 ping g1 shouldn't work but h1 ping h2 or g1 ping g2 should work. Any help is appreciated Thanks

Hi, thanks for reading. I think so.. you just put in the rules for the traffic that you want to go thru but not the ones you don't. With OpenFlow that is the level of control you have. Have you tried it and it does not work somehow? :)

how to check network performance in pox or mininet. i want to check performance metrics like throughput, delay, packet loss etc.

Hi Mohammed, at least for BigSwitch controllers they report the switch stats at the controller level. Some of the switches also expose SNMP (I think). Do you have any particular vendor in mind?

This is a great tutorial. Help me a lot. Can anyone help me i want to drop packets from these two sources. but i am unable to run this. i am also new to python

def _handle_dumbhub_packetin(event):
    packet = event.parsed
    print packet
    src_mac = packet.src
    dst_mac = packet.dst
    print src_mac
    print dst_mac
    if (packet.src == "00:00:00:00:00:01") or (packet.dst == "00:00:00:00:00:03"):
        print "hello"
        # drop();
    else:
        send_packet(event, of.OFPP_ALL)
        log.debug("Broadcasting %s.%i -> %s.%i"%(packet.src, event.ofp.in_port, packet.dst, of.OFPP_ALL))

The if condition is not going to execute correctly.

Hi Sincer, just a note that I will look at this question when I spin up my OpenFlow lab again.. although I am not sure when I will get to it... :(

Hi, can you tell me how I can use POX with wireless interfaces ?

Yeah, if you use the OpenWRT54G Linksys with the OpenFlow 1.0 load, here is the instruction: edit the /etc/config/wireless file, etc.

HI Eric, Can you tell me how can I communicate between two SDN controllers?

Hi Sonal, from what I can tell the orchestration of controllers has always been fragmented and differs from implementer to implementer.
The switch can always connect to multiple controllers; it is the automatic determination of master/slave/dual-head roles that requires some sort of controller-to-controller communication. This was defined in OF 1.2 and carried over to 1.3, but as you can see there is not much in terms of standardization. So... in short, if I were to do it, I would probably build my own using the API provided by the controller itself instead of relying on the standards body. I think POX has not moved beyond OF 1.0, which was why I chose Ryu for my INE course. That would be my suggestion: use Ryu and see if you think the 1.3 spec fits your need, else use its included HTTP API to build your own. Cheers and good luck. :)
Hi Eric, when I run sudo python ip_loadbalancer.py in my terminal I get:

File "ip_loadbalancer.py", line 23, in
from pox.core import core
ImportError: No module named pox.core

Please help.
Hi Nitesh, core.py is still there, per the project code and documentation. If your ip_loadbalancer.py is your own code outside of the pox directory, then you need to add the POX directory to your Python module search path. This can be done using 'import sys' and 'sys.path.append("")'.
Hi Eric, none of my POX programs are running. POX is installed in /home/pox. All my code is in folders under /home/pox/pox, for example /home/pox/pox/forwarding/hub.py. So what should I add in sys.path.append("")?
Hi Eric, I am working on an overlay topology (where all switches are connected) and I have a controller connected to one of the switches (an in-band controller). In my controller, I need to know the IPs of the hosts (not only the switches). Is there a way to know that at the controller level, or should I import these IPs as an input to the controller? Thanks in advance.
Can you please tell how to communicate between two POX controllers?
I have h1 and h2 as hosts and c1 as controller. I want to change the POX controller so that if I ping between h1 and h3, it is h2 that receives the packets sent, because h3 does not exist. This is my problem; I would like ideas on that. Thank you
Hi Eric, can you help me please? I created a topology which includes two mobile stations and one access point, and I want to see the messages received by the controller. I followed the instructions but I always get this line: [dump:00-00-00-00-00-01|4096] [ethernet][ipv4][udp][dhcp]. I don't understand why POX works fine with the predefined cmd "sudo mn --wifi --topo linear,5 --controller=remote,ip=10.0.2.15,port=6633" but not if I create my own file.py.
Thanks for this excellent post. I'm working on attack and normal traffic generation using Scapy. I created the SDN topology but am having issues with traffic generation and traffic statistics collection. Could you please help me out?
https://blog.pythonicneteng.com/2013/02/openflow-tutorial-with-pox.html
Rahul, 25, is a young IT professional who earns about Rs. 3 Lakhs a year. He should start saving a portion of his earnings right from the beginning. This will enable him to tap the power of compounding with his savings in the long term.
Long Term Investments
He can choose debt investment instruments that offer a tax rebate as part of a long term investment plan. He could also explore options like ELSS and opt for an SIP.
Life Insurance
Rahul is now 30, earning an income of Rs 6 Lakhs. A term plan, for a premium paying tenure of 20 years, will have a monthly premium of Rs 11000/- on average, which is adequate in this circumstance.
Health Insurance
Health insurance for husband as well as wife must be opted for right after marriage. Rahul can take a joint plan with a cover of about Rs. 3 Lakhs, which will require an annual premium of about Rs. 4000, providing facilities such as cashless treatment and pre- and post-hospitalization cover.
Rahul, 35, is now the proud father of a 2 year old daughter and has an annual income of Rs. 8 Lakhs. He has accumulated about Rs 7 Lakhs in savings and has Rs 2 Lakhs in his PF account. Rahul now has 2 dependants, and this is the time for him to start looking to buy a house. Additionally, he must plan for his daughter's future along with consolidating his retirement plans.
Life Insurance
At this stage the cover needed for Rahul will be around Rs. 50 Lakhs, so that his wife and daughter can maintain the same lifestyle in any unfortunate case of his sudden demise. A term insurance for this amount will require an annual premium of Rs. 15000/-.
Health Insurance
A comprehensive family insurance of Rs. 5 Lakhs will require an annual premium of approximately Rs. 5000, providing all the requisite facilities.
Purchase of House
This is the perfect time for Rahul to buy a house. With an annual income of Rs. 8 Lakhs he can easily get a home loan of 5 x 7 (annual income) = Rs 35 Lakhs. However, Rahul should go for a house of about Rs. 25 Lakhs, for which his contribution will be about Rs. 5 Lakhs as down payment, which can be drawn from the savings he has made so far. The remaining Rs. 20 Lakhs can be taken as a home loan with an EMI of about Rs. 21000.
Daughter's Future
Planning for his daughter's future should be at the top of the agenda for Rahul now. This includes the cost of higher education, which is about 16 years away and will require about Rs. 15 Lakhs.
Retirement
He has Rs. 2 Lakhs in PF, to which he may be contributing Rs. 20000 annually; this will grow to about Rs. 28 Lakhs in the next 25 years at an 8% growth rate. However, to keep up his lifestyle until about age 80 even after retirement, he will need about Rs 1.3 Cr. Thus he will have to increase his contribution towards this goal through other retirement tools to about Rs. 80,000 per year.
Investment Planning for Rahul:
By the age of 35 Rahul should have started dedicated investments to create the corpus required in later stages of life. He should expect returns averaging at least 8-12% on his investments. Therefore all his monthly savings of around Rs 8000 can be invested through SIPs. Additionally, he can consider moving about Rs. 2 Lakhs from his low-return (3.5%) bank savings account to a more liquid option that provides around 6% annually. The balance of Rs. 1 Lakh left in the savings account should be invested as a lump sum in some debt/equity fund, which will earn a better annual return.
Tax planning:
Rahul's total tax liability is about Rs. 170000 annually. He must make sure to avail himself of the Section 80C tax benefits of up to Rs. 1 Lakh. This will save about Rs. 30,000 in taxes. He is already contributing Rs. 20000.
By BankBazaar.com
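The retirement projection above can be sanity-checked with a short compound-interest sketch (not from the article; it assumes end-of-year contributions of Rs 20,000 and a flat 8% annual return):

```python
# Sanity check for the projection: Rs 2 lakh already in PF plus Rs 20,000
# added at the end of each year, compounding at 8% for 25 years.
def future_value(corpus, annual_contribution, rate, years):
    for _ in range(years):
        corpus = corpus * (1 + rate) + annual_contribution
    return corpus

fv = future_value(200000, 20000, 0.08, 25)
print(round(fv))  # about 28.3 lakh, in line with the "about Rs 28 lakhs" figure
```

Under these assumptions the corpus lands close to the article's Rs 28 lakh estimate; actual PF rates and contribution timing would shift the number somewhat.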
http://www.financialexpress.com/archive/financial-planning-for-young-professionals-making-the-best-of-earnings/1173011/0/
8.1. Sequence Models¶
Imagine that you are watching movies and rating them. Movie ratings are anything but stationary. There is anchoring, based on someone else's opinion. For instance, after the Oscar awards, ratings for the corresponding movie go up, even though it is still the same movie. This effect persists for a few months until the award is forgotten. [Wu et al., 2017] showed that the effect lifts ratings by over half a point. There is hedonic adaptation, where humans quickly adapt to accept an improved (or a bad) situation as the new normal. For instance, after watching many good movies, the expectations that the next movie is equally good or better are high; hence even an average movie might be considered a bad movie after many great ones. Temporal dynamics like these were exploited in [Koren, 2009] to recommend movies more accurately. Predicting beyond the known observations is called extrapolation, whereas estimating between existing observations is called interpolation; extrapolation is the much harder problem.
8.1.1. Statistical Tools¶
In short, we need statistical tools and new deep neural network architectures to deal with sequence data. To keep things simple, we use the stock price illustrated in Fig. 8.1.1 as an example. Let's denote the prices by \(x_t \geq 0\), i.e., at time \(t \in \mathbb{N}\) we observe some price \(x_t\). For a trader to do well in the stock market on day \(t\), he will want to predict \(x_t\) via
\(x_t \sim p(x_t \mid x_{t-1}, \ldots, x_1).\)
8.1.1.1. Autoregressive Models¶
In order to achieve this, our trader could use a regressor such as the one we trained in Section 3.3. There is just one major problem: the number of inputs, \(x_{t-1}, \ldots, x_1\), varies with \(t\). The question thus is how to estimate \(p(x_t \mid x_{t-1}, \ldots, x_1)\) efficiently. In a nutshell it boils down to two strategies:
Assume that the potentially rather long sequence \(x_{t-1}, \ldots, x_1\) is not really necessary, and content ourselves with a window of some fixed length \(\tau\), using only \(x_{t-1}, \ldots, x_{t-\tau}\). Such models are called autoregressive models, as they literally perform regression on themselves.
The second strategy, shown in Fig. 8.1.2, is to try and keep some summary \(h_t\) of the past observations, and at the same time update \(h_t\) in addition to the prediction \(\hat{x}_t\). This leads to models that estimate \(x_t\) with \(\hat{x}_t = p(x_t \mid x_{t-1}, h_{t})\) and moreover updates of the form \(h_t = g(h_{t-1}, x_{t-1})\). Since \(h_t\) is never observed, these models are also called latent autoregressive models. LSTMs and GRUs are examples of this.
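As a rough illustration (not the book's code), a single latent autoregressive update with arbitrary placeholder weights can be sketched in plain Python:

```python
import math
import random

# Sketch of a latent autoregressive model: keep a hidden summary h of the
# past, update it from (h_prev, x_prev), and predict the next value from h.
# All weights are random placeholders, not trained parameters.
random.seed(0)
H = 4  # size of the hidden summary
W_h = [[random.gauss(0, 1) for _ in range(H)] for _ in range(H)]
W_x = [random.gauss(0, 1) for _ in range(H)]
w_out = [random.gauss(0, 1) for _ in range(H)]

def step(h_prev, x_prev):
    # h_t = g(h_{t-1}, x_{t-1})
    h = [math.tanh(sum(W_h[i][j] * h_prev[j] for j in range(H)) + W_x[i] * x_prev)
         for i in range(H)]
    # the prediction depends on the summary h_t, not on the full history
    x_hat = sum(w_out[i] * h[i] for i in range(H))
    return h, x_hat

h = [0.0] * H
for x_prev in [0.1, 0.3, -0.2]:
    h, x_hat = step(h, x_prev)
print(len(h), x_hat)
```

The point of the sketch is only the shape of the computation: the fixed-size summary replaces the growing history, which is exactly what trained RNN cells such as LSTMs and GRUs do with learned weights.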
Both cases raise the obvious question of how to generate training data. One typically uses historical observations to predict the next observation, given the ones up to right now. Obviously we do not expect time to stand still. However, a common assumption is that while the specific values of \(x_t\) might change, at least the dynamics of the sequence itself will not. This is reasonable, since novel dynamics are just that, novel and thus not predictable using data that we have so far. Statisticians call dynamics that do not change stationary. Regardless, we will thus get an estimate of the entire sequence via \(p(x_t \mid x_{t-1}, \ldots, x_1)\).
8.1.1.2. Markov Models¶
Recall the approximation that in an autoregressive model we use only \(x_{t-1}, \ldots, x_{t-\tau}\) instead of the full history. Whenever this approximation is exact we say that the sequence satisfies a Markov condition. Such models are particularly nice whenever \(x_t\) assumes only a discrete value, since in this case dynamic programming can be used to compute values along the chain exactly. For instance, we can compute \(p(x_{t+1} \mid x_{t-1})\) efficiently using the fact that we only need to take into account a very short history of past observations:
\(p(x_{t+1} \mid x_{t-1}) = \sum_{x_t} p(x_{t+1} \mid x_t) \, p(x_t \mid x_{t-1}).\)
Going into details of dynamic programming is beyond the scope of this section, but we will introduce it in Section 9.4. Control and reinforcement learning algorithms use such tools extensively.
8.1.1.3. Causality¶
In principle, there is nothing wrong with unfolding \(p(x_1, \ldots, x_T)\) in reverse order; after all, by conditioning we can always write it the other way around. Often, however, there exists a natural direction for the data, namely going forward in time, and it ought to be easier to estimate \(p(x_{t+1} \mid x_t)\) rather than \(p(x_t \mid x_{t+1})\). For instance, [Hoyer et al., 2009] show that in some cases we can find \(x_{t+1} = f(x_t) + \epsilon\) for some additive noise, whereas the converse is not true. This is great news, since it is typically the forward direction that we are interested in estimating. For more on this topic see e.g., the book by [Peters et al., 2017]. We are barely scratching the surface of it.
8.1.2. A Toy Example¶
After so much theory, let's try this out in practice. Let's begin by generating some data. To keep things simple we generate our time series by using a sine function with some additive noise.

%matplotlib inline
import d2l
from mxnet import autograd, np, npx, gluon, init
from mxnet.gluon import nn
npx.set_np()

T = 1000  # Generate a total of 1000 points
time = np.arange(0, T)
x = np.sin(0.01 * time) + 0.2 * np.random.normal(size=T)
d2l.plot(time, [x])

Next we need to turn this time series into features and labels that our network can train on. Based on the embedding dimension \(\tau\) we map the data into pairs \(y_t = x_t\) and \(\mathbf{z}_t = (x_{t-1}, \ldots, x_{t-\tau})\). The astute reader might have noticed that this gives us \(\tau\) fewer data points, since we do not have sufficient history for the first \(\tau\) of them; here we simply discard those terms. We kept the architecture fairly simple: a few layers of a fully connected network, ReLU activation and \(\ell_2\) loss.
Since much of the modeling is identical to the previous sections when we built regression estimators in Gluon, we will not delve into much detail.

tau = 4
features = np.zeros((T - tau, tau))
for i in range(tau):
    features[:, i] = x[i: T - tau + i]
labels = x[tau:]

batch_size, n_train = 16, 600
train_iter = d2l.load_array((features[:n_train], labels[:n_train]),
                            batch_size, is_train=True)
# Hold out the data after n_train for testing
test_iter = d2l.load_array((features[n_train:], labels[n_train:]),
                           batch_size, is_train=False)

# Vanilla MLP architecture
def get_net():
    net = gluon.nn.Sequential()
    net.add(nn.Dense(10, activation='relu'),
            nn.Dense(1))
    net.initialize(init.Xavier())
    return net

# Least mean squares loss
loss = gluon.loss.L2Loss()

Now we are ready to train.

def train_net(net, train_iter, loss, epochs, lr):
    trainer = gluon.Trainer(net.collect_params(), 'adam',
                            {'learning_rate': lr})
    for epoch in range(1, epochs + 1):
        for X, y in train_iter:
            with autograd.record():
                l = loss(net(X), y)
            l.backward()
            trainer.step(batch_size)
        print('epoch %d, loss: %f' % (
            epoch, d2l.evaluate_loss(net, train_iter, loss)))

net = get_net()
train_net(net, train_iter, loss, 10, 0.01)

epoch 1, loss: 0.036264
epoch 2, loss: 0.032119
epoch 3, loss: 0.029296
epoch 4, loss: 0.028024
epoch 5, loss: 0.026428
epoch 6, loss: 0.026475
epoch 7, loss: 0.026243
epoch 8, loss: 0.026159
epoch 9, loss: 0.025360
epoch 10, loss: 0.026161

8.1.3. Predictions¶
Since both training and test loss are small, we would expect our model to work well. Let's see what this means in practice. The first thing to check is how well the model is able to predict what happens in the next timestep.

estimates = net(features)
d2l.plot([time, time[tau:]], [x, estimates],
         legend=['data', 'estimate'])

This looks nice, just as we expected it. Even beyond 600 observations the estimates still look rather trustworthy. There is just one little problem with this: if we observe data only until timestep 600, we cannot hope to receive the ground truth for all future predictions.
Instead, we need to work our way forward one step at a time. In other words, we will have to use our own predictions to make future predictions. Let's see how well this goes.

predictions = np.zeros(T)
predictions[:n_train] = x[:n_train]
for i in range(n_train, T):
    predictions[i] = net(
        predictions[(i - tau):i].reshape(1, -1)).reshape(1)
d2l.plot([time, time[tau:], time[n_train:]],
         [x, estimates, predictions[n_train:]],
         legend=['data', 'estimate', 'multistep'],
         figsize=(4.5, 2.5))

As the above example shows, this is a spectacular failure. The estimates decay to a constant pretty quickly after a few prediction steps. Why did the algorithm work so poorly? Ultimately, the errors build up: after each step the input contains our own (erroneous) earlier predictions rather than true observations, so the accuracy declines rapidly. We will discuss methods for improving this throughout this chapter and beyond. Let's verify this observation by computing the \(k\)-step predictions on the entire sequence.

k = 33  # Look up to k - tau steps ahead
features = np.zeros((k, T - k))
for i in range(tau):  # Copy the first tau features from x
    features[i] = x[i:T - k + i]
for i in range(tau, k):  # Predict the (i - tau)-th step
    features[i] = net(features[(i - tau):i].T).T

steps = (4, 8, 16, 32)
d2l.plot([time[i:T - k + i] for i in steps],
         [features[i] for i in steps],
         legend=['step %d' % i for i in steps],
         figsize=(4.5, 2.5))

This clearly illustrates how the quality of the estimates changes as we try to predict further into the future. While the 8-step predictions are still pretty good, anything beyond that is pretty useless.
8.1.4. Summary¶
Sequence models require specialized statistical tools for estimation. Two popular choices are autoregressive models and latent-variable autoregressive models. As we predict further in time, the errors accumulate and the quality of the estimates degrades, often dramatically. There is quite a difference in difficulty between interpolation and extrapolation.
Consequently, if you have a time series, always respect the temporal order of the data when training, i.e., never train on future data. For causal models (e.g., time going forward), estimating the forward direction is typically a lot easier than the reverse direction.
8.1.5. Exercises¶
Improve the above model. Incorporate more than the past 4 observations? How many do you really need? How many would you need if there was no noise? Hint: you can write \(\sin\) and \(\cos\) as a differential equation. Can you incorporate older features while keeping the total number of features constant? Does this improve accuracy? Why? Change the neural network architecture and evaluate its performance. When might a latent autoregressive model be needed to capture the dynamics of the data?
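Regarding the exercise hint about writing \(\sin\) and \(\cos\) as a differential equation: in the noise-free case a sampled sinusoid obeys an exact two-term linear recurrence, so two past observations already determine the next one. A quick numerical check (a sketch, not part of the book):

```python
import math

# For x_t = sin(omega * t), the identity sin(a+w) + sin(a-w) = 2*sin(a)*cos(w)
# gives the exact recurrence x_{t+1} = 2*cos(omega)*x_t - x_{t-1}.
omega = 0.01
x = [math.sin(omega * t) for t in range(1000)]
c = 2 * math.cos(omega)
max_err = max(abs(c * x[t] - x[t - 1] - x[t + 1]) for t in range(1, 999))
print(max_err)  # numerically ~0: two lagged values suffice without noise
```

With additive noise, more lags help only insofar as they let the model average the noise out, which is one way to think about the first exercise.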
https://d2l.ai/chapter_recurrent-neural-networks/sequence.html
11 July 2012 05:57 [Source: ICIS news] SINGAPORE (ICIS)--China's Hebi Coal is running its butanediol (BDO) plant at around 50% of capacity, a company source said. The company restarted the BDO plant around 22 June after a one-week maintenance and increased operating rates to 50% from early last week, according to the source. The company has started to offer some small volumes to a few key customers - including downstream makers and traders - at about yuan (CNY) 15,000/tonne ($2,355/tonne) DEL (delivered), the source added. The company achieved on-spec product at the plant in late May this year, according to an earlier story from ICIS. Hebi Coal Group is a subsidiary of Henan Coal Chemical
http://www.icis.com/Articles/2012/07/11/9577109/chinas-hebi-coal-runs-its-bdo-plant-at-around-50.html
On Thu, 5 Aug 2010, Christoph Lameter wrote:
> > I bisected this to patch 8 but still don't have a bootlog. I'm assuming
> > in the meantime that something is kmallocing DMA memory on this machine
> > prior to kmem_cache_init_late() and get_slab() is returning a NULL
> > pointer.
>
> There is a kernel option "earlyprintk=..." that allows you to see early
> boot messages.

Ok, so this is panicking because of the error handling when trying to create sysfs directories with the same name (in this case, :dt-0000064). I'll look into why this isn't failing gracefully later, but I isolated this to the new code that statically allocates the DMA caches in kmem_cache_init_late().

The iteration runs from 0 to SLUB_PAGE_SHIFT; that's actually incorrect since the kmem_cache_node cache occupies the first spot in the kmalloc_caches array and has a size, 64 bytes, equal to a power of two that is duplicated later. So this patch tries creating two DMA kmalloc caches with 64-byte object size, which triggers a BUG_ON() during kmem_cache_release() in the error handling later.

The fix is to start the iteration at 1 instead of 0 so that all other caches have their equivalent DMA caches created and the special-case kmem_cache_node cache is excluded (see below).

I'm really curious why nobody else ran into this problem before, especially if they have CONFIG_SLUB_DEBUG enabled so struct kmem_cache_node has the same size. Perhaps my early bug report caused people not to test the series...

There're a couple more issues with the patch as well:

 - the entire iteration in kmem_cache_init_late() needs to be protected by
   slub_lock. The comment in create_kmalloc_cache() should be revised since
   you're no longer calling it only with irqs disabled.
   kmem_cache_init_late() has irqs enabled and, thus, slab_caches must be
   protected.

 - a BUG_ON(!name) needs to be added in kmem_cache_init_late() when
   kasprintf() returns NULL. This isn't checked in kmem_cache_open() so
   it'll only encounter a problem in the sysfs layer.
Adding a BUG_ON() will help track those down.

Otherwise, I didn't find any problem with removing the dynamic DMA cache allocation on my machines.

Please fold this into patch 8.

Signed-off-by: David Rientjes <rientjes@google.com>
---
diff --git a/mm/slub.c b/mm/slub.c
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2552,13 +2552,12 @@ static int __init setup_slub_nomerge(char *str)
 __setup("slub_nomerge", setup_slub_nomerge);
 
+/*
+ * Requires slub_lock if called when irqs are enabled after early boot.
+ */
 static void create_kmalloc_cache(struct kmem_cache *s, const char *name,
 		int size, unsigned int flags)
 {
-	/*
-	 * This function is called with IRQs disabled during early-boot on
-	 * single CPU so there's no need to take slub_lock here.
-	 */
 	if (!kmem_cache_open(s, name, size, ARCH_KMALLOC_MINALIGN,
 			flags, NULL))
 		goto panic;
@@ -3063,17 +3062,20 @@ void __init kmem_cache_init_late(void)
 #ifdef CONFIG_ZONE_DMA
 	int i;
 
-	for (i = 0; i < SLUB_PAGE_SHIFT; i++) {
+	down_write(&slub_lock);
+	for (i = 1; i < SLUB_PAGE_SHIFT; i++) {
 		struct kmem_cache *s = &kmalloc_caches[i];
 
-		if (s && s->size) {
+		if (s->size) {
 			char *name = kasprintf(GFP_KERNEL,
 				"dma-kmalloc-%d", s->objsize);
 
+			BUG_ON(!name);
 			create_kmalloc_cache(&kmalloc_dma_caches[i],
 				name, s->objsize, SLAB_CACHE_DMA);
 		}
 	}
+	up_write(&slub_lock);
 #endif
 }
https://lkml.org/lkml/2010/8/17/23
#GTMTips: 10 Useful Custom JavaScript Tricks
I recently published a #GTMTips guide called 10 Useful CSS Selectors, and it was very well received. Inspired by the feedback, here's the next instalment. This time, we're going over some useful JavaScript tips and tricks that you can use to make your Google Tag Manager deployment even more efficient. I've written a lot about JavaScript in this blog, and I intend to keep on doing so in the future. As always, if JavaScript is somewhat of a mystery to you, I strongly recommend you take the Codecademy (free) course on JS, and take a look at the other available web technology tracks while you're there!
Tip 54: 10 Useful Custom JavaScript Tricks
You can deploy these tricks in Custom HTML Tags or Custom JavaScript Variables, since they are the only contexts within Google Tag Manager where you can execute arbitrary JavaScript. Note that some of the tricks are just code snippets, so you will need to understand enough of how Google Tag Manager and JavaScript mesh together to be able to deploy them successfully. Before adding any of these to your deployments, remember to use caniuse.com to check for browser compatibility, and the MDN JavaScript Reference to find alternative ways (AKA polyfills) for writing the unsupported methods.
1. String methods
String methods are utilities that you can use to modify any given string. Here are some of the most useful ones, in my opinion.

// Use .trim() to strip leading and trailing whitespace from a string.
" Oh no! Leading AND trailing whitespace!! ".trim();
// Result: "Oh no! Leading AND trailing whitespace!!"

// Use .replace() to replace characters or regular expressions with something else.
// .replace() without a regular expression replaces the first instance.
"Food".replace('o', 'e');
// Result: "Feod"
"Food".replace(/o/g, 'e');
// Result: "Feed"

// Use .toUpperCase() and .toLowerCase() to change the case of the entire string
"MixED CaSe String".toLowerCase();
// Result: "mixed case string"

// Use .substring() to return only part of the string.
"?some-query-key=some-query-value".substring(1);
// Returns: "some-query-key=some-query-value"
"id: 12345-12345".substring(4,9);
// Returns: "12345"

// Use .split() to split the string into its constituents
"get the second word of this sentence".split(' ')[1];
// Returns "the"

Naturally, you can combine these in inventive ways. For example, to capitalize the first letter of any string you could do this:

var str = "capitalize the first letter of this string, please!";
str = str.replace(/^./, str.substring(0,1).toUpperCase());

Here we first identify the first letter of the string using a regular expression, after which we replace it with the first letter of the string that has been converted to upper case.
2. Array methods
Array methods are really powerful in any programming language. Mastering methods such as filter() and forEach() is critical if you want to make your JavaScript more compact and often more readable.
filter()
filter() goes through each element in the Array, and returns a new Array with every element that passes the check you provide in the callback. Here's the syntax:

someArray.filter(function(eachItem) {
  return eachItem === someCondition;
});

So eachItem is the variable where the iterator stores each member of the Array as it is processed. If the callback returns true, the item is added to the returned, new Array. If it returns false, it's dropped. Here's an example:

window.dataLayer = window.dataLayer || [];
window.dataLayer.push({
  'event' : 'addMe!'
},{
  'event' : 'doNotAddMe!'
});
var newArray = window.dataLayer.filter(function(item) {
  return item.event === 'addMe!';
});
// Returns: [{'event' : 'addMe!'}]

The iterator checks every single item for the property event, and returns true if that property has the value addMe!. Thus the returned array only has those elements that have the key-value pair "event" : "addMe!".
forEach()
Remember the clumsy for-loop for iterating over an Array? Yuck! Instead, you can use the forEach() iterator. forEach() receives each item in the array one-by-one, and you can then do whatever you wish with this item. The syntax is very simple and intuitive, and thus should be preferred over the confusing for-loop.

var array = ["I", 4, 2, true, "love", [1,2,3], {chocolate: 'too'}, "you"];
var newArray = [];
array.forEach(function(item) {
  if (typeof item === 'string') {
    newArray.push(item);
  }
});
newArray.join(" ");
// Result: "I love you"

As you can see, it's more readable than a for-loop, as you don't have to access the original array in the iterator.
map()
The map() method iterates over each member in the array, again, but this time the code in the callback is executed against each member of the array, and a new array is returned with the results. Here's how to set it up:

array.map(function(item) {
  return doSomething(item);
});

In other words, you are mapping each element in the array against the result of the callback function. Here are some examples:

var array = [1,2,3,4,5];
array.map(function(item) {
  return item * 2;
});
// Result: [2,4,6,8,10]

var array = [" please ", " trim", " us "];
array.map(function(item) {
  return item.trim();
});
// Result: ["please", "trim", "us"]

reduce()
The reduce() method is often the most complex one, but it actually has a very simple principle: you provide the function with an accumulator, and each member of the array is then operated against this accumulator. You can also provide an initial value to the accumulator. Here's what the basic structure looks like:

array.reduce(function(accumulator, item) {
  accumulator.doSomethingWith(item);
  return accumulator;
}, initialValue);

This time, it's definitely easiest to learn via examples:

// Example: calculate the sum of all even numbers in the array
var array = [1,6,3,4,12,17,21,27,30];
array.reduce(function(accumulator, item) {
  if (item % 2 === 0) {
    accumulator += item;
  }
  return accumulator;
}, 0);
// Returns: 52

// Example: concatenate a string of all product IDs in array
var array = [{
  "id" : "firstId",
  "name" : "T-shirts"
},{
  "id" : "secondId",
  "name" : "Pants"
},{
  "id" : "thirdId",
  "name" : "shoes"
}];
array.reduce(function(accumulator, item) {
  accumulator.push(item.id);
  return accumulator;
}, []).join();
// Returns: "firstId,secondId,thirdId"

3. Ternary operator
The ternary operator is just a very simple shorthand for running conditional checks in JavaScript. Here's an example:

// BEFORE:
if (something) {
  somethingElse();
} else {
  somethingDifferent();
}

// AFTER:
something ? somethingElse() : somethingDifferent();

The ternary operator is thus used to combine an if-statement into a simple expression. First you provide an expression that evaluates to a truthy or falsy value, such as me.name() === "Simo". Then you type the question mark, after which you write an expression that is executed if the first item evaluates to a truthy value. Finally, you type the colon :, after which you type the expression that is executed if the first item evaluates to a falsy value.

// BEFORE:
if (document.querySelector('#findThisId') !== null) {
  return document.querySelector('#findThisId');
} else {
  return "Not found!";
}

// AFTER:
return document.querySelector('#findThisId') ? document.querySelector('#findThisId') : "Not found!";

// EVEN BETTER:
return document.querySelector('#findThisId') || "Not found!";

As you can see, sometimes there are even more efficient ways to process JavaScript statements than the ternary operator. Especially when working with simple binary checks (if value exists, return it), it might be better to just use basic logical operators instead of complex statements or expressions.
4. return {{Click URL}}.indexOf({{Page Hostname}}) > -1
This is very Google Tag Managerish. It's a simple Custom JavaScript Variable that returns true if the clicked element URL contains the current page hostname, and false otherwise. In other words, it returns true if the clicked link is internal, and false if it takes the user away from the website.

function() {
  return {{Click URL}}.indexOf({{Page Hostname}}) > -1;
}

5. return {{Click URL}}.split('/').pop()
Again, a simple Custom JavaScript Variable. This is especially useful when tracking file downloads, as it returns the actual filename of the downloaded item. It does this by returning whatever is in the clicked URL after the last '/'.

function() {
  // Example:
  return {{Click URL}}.split('/').pop();
  // Returns: download_me.pdf
}

6. Create a random, unique GUID
Every now and then it's useful to create a random ID in GTM. For example, if you want to measure session IDs, or if you want to assign a unique identifier to each page hit, you can achieve this with the following Custom JavaScript Variable. The variable creates a GUID string ("Globally Unique Identifier"), and even though uniqueness isn't guaranteed, it's still very likely. There's only a microscopically small chance of collision. This solution is gratefully adapted from this StackOverflow post.

function() {
  return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
    var r = Math.random()*16|0, v = c == 'x' ? r : (r&0x3|0x8);
    return v.toString(16);
  });
}

7. Return an ISO-formatted timestamp
This is one of my favorite solutions, as it lets you convert the current client time to a proper, readable timestamp. In addition, it has the timezone offset included, so you'll know just how much the users' local times differ from your own timezone. I send this to Google Analytics with every single hit, so that I can create a timeline of events when analyzing the data. This solution is gratefully adapted from this StackOverflow post.

function() {
  var now = new Date();
  var tzo = -now.getTimezoneOffset();
  var dif = tzo >= 0 ? '+' : '-';
  var pad = function(num) {
    var norm = Math.abs(Math.floor(num));
    return (norm < 10 ? '0' : '') + norm;
  };
  return now.getFullYear() +
    '-' + pad(now.getMonth()+1) +
    '-' + pad(now.getDate()) +
    'T' + pad(now.getHours()) +
    ':' + pad(now.getMinutes()) +
    ':' + pad(now.getSeconds()) +
    '.' + pad(now.getMilliseconds()) +
    dif + pad(tzo / 60) +
    ':' + pad(tzo % 60);
  // Returns, for example: 2017-01-18T11:58:32.977+02:00
}

8. .matches() polyfill
When working with the Document Object Model (DOM), being able to identify elements is crucial. We already have a bunch of wonderful CSS selectors at our disposal, but now we just need a method we can use to check if any given element matches one of these selectors. Well, there's the Element.matches(someSelector) method that you can use, but it doesn't have stellar browser support, even with prefixes. With this solution, you can always use .matches() without having to worry about browser support. This trick is called a polyfill, as it patches lack of feature support with a workaround using JavaScript that is universally supported. First, here's how the method works in general:

// Check if the parent of the clicked element has ID #testMe
var el = {{Click Element}};
console.log(el.parentElement.matches('#testMe'));
// RESULT: true or false, depending on if the parent element matches the selector.

To implement the polyfill, either ask your developers to add it to the site JavaScript as early as possible in the page load sequence, or use Google Tag Manager. In Google Tag Manager, you'll need a Custom HTML Tag that fires as early as possible in the container load sequence (i.e. All Pages trigger with a high tag priority).
The code you need to add to the Custom HTML Tag is gratefully adapted from this MDN reference. The polyfill modifies the actual prototype of the Element object, which all HTML and DOM elements inherit from. After modifying the prototype, you can use the matches() method with confidence in all your GTM and site JavaScript.

9. DOM traversal

Sometimes it's necessary to climb up (or down) the Document Object Model. For example, if you're using a Click / All Elements trigger, it always targets the actual element that was clicked. But that's not always necessarily the element you want to track! Say you have an HTML structure like this:

<a href="takemeaway.html">
  <button id="clickMe">
    <span>Click Me!</span>
  </button>
</a>

Now, if you use a Click / All Elements trigger, the element that is captured in the click is the <span/>. But I'm guessing you actually want to use the <a href="takemeaway.html"> element, since you're more interested in knowing what happens after the click. So, you can use this Custom JavaScript Variable to return the nearest link above the clicked element in the DOM tree:

function() {
  var el = {{Click Element}};
  while (!el.matches('a') && !el.matches('body')) {
    el = el.parentElement;
  }
  return el.matches('a') ? el : undefined;
}

NOTE! This relies on the matches() method, so don't forget to implement the polyfill from above first! This Custom JavaScript Variable climbs up the DOM until it reaches the first link element it finds ('a'), after which it returns this element. If it doesn't find a link, it returns undefined instead.

10. Set browser cookies with ease

Cookies are a great, if somewhat outdated, way of storing information in the browser. Since Google Tag Manager operates in the context of a web page, it is essentially stateless. Thus any information you want to persist from one page to another must be stored either in the server or the browser itself.
The latter is far easier to do, and with browser cookies it's just a question of adding a couple of lines of code to your GTM deployment. First, you need a Custom JavaScript Variable. You can name it {{Set Cookie}}, for example.

This Custom JavaScript Variable returns a function that takes five parameters:

name (required): the name of the cookie (string)
value (required): the value of the cookie (string)
ms: expiration time of the cookie in milliseconds. If unset, defaults to a Session cookie (expires when the browser is closed).
path: the path of the cookie. If unset, defaults to the current page path.
domain: the domain of the cookie. If unset, defaults to the current domain.

To use the cookie, you invoke it with:

{{Set Cookie}}('test', 'true', 10000, '/', 'simoahava.com');

The code above, when run in GTM, sets a cookie with name "test", value "true", and an expiration time of ten seconds, and it's set on the root of the simoahava.com domain. With this helper, setting cookies is a breeze. Remember that you can then use the handy 1st Party Cookie variable in GTM to retrieve values from set cookies.

Summary

Here I listed 10 JavaScript tricks that I use (almost) all the time. There's plenty more to JavaScript, but with these methods you can get started on making your clunky Google Tag Manager deployment a thing of the past.

Do you have any favorite methods, tips, or tricks you want to share? Please do so in the comments below.
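As a footnote to tip 10: the body of the {{Set Cookie}} variable itself did not survive here, so below is a reconstruction that matches the parameter list described above. It is my sketch, not necessarily Simo's original code; the document stub exists only so the snippet runs outside a browser, and in GTM the variable body would be function () { return setCookie; }.

```javascript
// Stub so the example runs outside a browser; drop this on a real page.
var document = { cookie: '', location: { pathname: '/current/page' } };

// Sketch of a cookie-setting helper matching the five parameters above.
var setCookie = function (name, value, ms, path, domain) {
  if (!name || !value) { return; }          // name and value are required
  var cookie = name + '=' + value + ';';
  if (ms) {                                 // expiration in milliseconds
    var d = new Date();
    d.setTime(d.getTime() + ms);
    cookie += 'expires=' + d.toUTCString() + ';';
  }
  cookie += 'path=' + (path || document.location.pathname) + ';';
  if (domain) { cookie += 'domain=' + domain + ';'; }
  document.cookie = cookie;                 // real browsers merge, not overwrite
};

// The invocation from the article:
setCookie('test', 'true', 10000, '/', 'simoahava.com');
```

Note that assigning to document.cookie in a real browser adds or updates a single cookie rather than replacing the whole cookie string; the stub above only captures the last value written.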
https://www.simoahava.com/gtm-tips/10-useful-custom-javascript-tricks/
Hi Christian!

On 30 Oct 2006, at 12:20, Christian Holm Christensen wrote:

Sorry for not being crystal clear: without Qt I did not observe any error messages or problems when running with unset ROOTSYS.

> Try unsetting the ROOTSYS environment variable (and LD_LIBRARY_PATH for
> that matter). These variables are _not_ needed.

Without the Qt plugin it gave no errors, but with it I got:

> So, if you unset ROOTSYS, it works? Is that correctly understood?

rkuhn@rk:~$ root
Error in <TUnixSystem::ExpandFileName>: input: $ROOTSYS/bin/root.exe, output: $ROOTSYS/bin/root.exe
Xlib: extension "XInputExtension" missing on display "localhost:10.0".

> I guess this has to do with the remote X server set-up. It's probably
> harmless.

I figured.

Failed to get list of devices

> Does the pointer and keyboard work?

Yes.

** $Id: TGQt.cxx,v 1.33 2006/10/04 16:08:48 antcheva Exp $ this=0xb12a70

> The issue is that the Qt code does not take into account that one might
> install ROOT without the use of the ROOTSYS environment variable (sigh!).
> I'll open a bug report on savannah.cern.ch and add a fix.

Okay, thanks.

To check e.g. library loading I tried 'gSystem.Load("libPhysics.so")' and got again two errors with Qt and none without. The library seems to work, though.

> That's because the Qt backend adds wrong stuff to the library path.

Yup.

I'm doing some stress tests now.

> Great. As far as I can tell, they all work, right?

Yes. Well, it complains that $ROOTSYS/tutorials/hsimple.root does not exist, which is not a location I'd have write access for. Something like below might help:
--- /usr/share/doc/root/test/stressGraphics.cxx 2006-09-12 13:24:40.000000000 +0000
+++ stressGraphics.cxx 2006-10-30 13:13:49.000000000 +0000
@@ -143,10 +143,21 @@
    // Check if $ROOTSYS/tutorials/hsimple.root exists
    gErrorIgnoreLevel = 9999;
+   if (verbose) printf("searching hsimple.root\n");
    gFile = new TFile("$ROOTSYS/tutorials/hsimple.root");
    if (gFile->IsZombie()) {
-      printf("File $ROOTSYS/tutorials/hsimple.root does not exist. Run tutorial hsimple.C first\n");
-      return;
+      if (verbose) printf("  not found in $ROOTSYS/tutorials\n");
+      delete gFile;
+      gFile = new TFile("hsimple.root");
+      if (gFile->IsZombie()) {
+         if (verbose) printf("  not found in current directory\n");
+         printf("File hsimple.root does not exist. Run tutorial hsimple.C first\n");
+         return;
+      } else {
+         if (verbose) printf("  found in current directory\n");
+      }
+   } else {
+      if (verbose) printf("  found in $ROOTSYS/tutorials\n");
    }
    gErrorIgnoreLevel = 0;

=====

Side note: the options parsing of stressGraphics.cxx is a bit weird, right?

> ... to some other user's directory. There you execute `ls'. Now, what you
> don't know, is that the other user has put a malicious version of `ls' in
> her directory, say something that forks and runs some bad stuff in the
> background, while executing `/bin/ls' in the foreground.

Yep, that's why I said I shouldn't be forced to ;-)

Looks reasonable. But it might be even better to use the dirname of argv[0], unless that's the empty string.
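In plain C++ terms, stripped of ROOT's TString, the dirname-of-argv[0] idea amounts to the following sketch (function name and fallback behavior are my own choices):

```cpp
#include <string>

// Derive the directory a binary was started from (argv[0]), falling back
// to "." when argv[0] carries no path component, as the patch does.
std::string prefix_from_argv0(const std::string& argv0) {
    std::string::size_type slash = argv0.rfind('/');
    if (slash == std::string::npos || slash == 0) {
        // "stressHepix" -> no path at all; "/stressHepix" -> root directory
        return slash == 0 ? "/" : ".";
    }
    return argv0.substr(0, slash);   // "./test/stressHepix" -> "./test"
}
```

The ROOT patch below does the same thing with TString::Last('/') and TString::Resize().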
The following Works For Me (tm) and also fixes a linker problem with libEvent.so (had to set LD_LIBRARY_PATH before):

diff -ur /usr/share/doc/root/test/Makefile test/Makefile
--- /usr/share/doc/root/test/Makefile 2006-09-30 18:17:14.000000000 +0000
+++ test/Makefile 2006-10-30 12:58:35.000000000 +0000
@@ -17,7 +17,7 @@
 ifeq ($(PLATFORM),win32)
 EVENTLIB = libEvent.lib
 else
-EVENTLIB = $(EVENTSO)
+EVENTLIB = $(shell pwd)/$(EVENTSO)
 endif
 MAINEVENTO = MainEvent.$(ObjSuf)

diff -ur /usr/share/doc/root/test/stressHepix.cxx test/stressHepix.cxx
--- /usr/share/doc/root/test/stressHepix.cxx 2006-09-15 10:05:30.000000000 +0000
+++ test/stressHepix.cxx 2006-10-30 12:47:33.000000000 +0000
@@ -71,7 +71,14 @@
 void runTest(const char *atest, int estimate)
 {
    printf("Running : %s, (takes %d RT seconds on the ref machine) \n",atest,estimate);
-   gSystem->Exec(Form("%s >>stressHepix.log",atest));
+   TString cmdname(gROOT->GetApplication()->Argv(0));
+   TString prefix(".");
+   Ssiz_t offset;
+   if ((offset = cmdname.Last('/')) != kNPOS) {
+      cmdname.Resize(offset);
+      prefix = cmdname;
+   }
+   gSystem->Exec(Form("%s/%s >>stressHepix.log",prefix.Data(),atest));
 }

 int main(int argc, char **argv)

=====

This patch is badly whitespace-mangled, I don't know how to make Apple Mail behave in this respect, so please accept my apologies and if you want to actually apply this, use the attached version.

Ciao, Roland

Attachment: patch
Description: Binary data

--
TU Muenchen, Physik-Department E18, James-Franck-Str., 85748 Garching
Telefon 089/289-12575; Telefax 089/289-12570
--
CERN office: 892-1-D23 phone: +41 22 7676540 mobile: +41 76 487 4482
--
Any society that would give up a little liberty to gain a little security------

Attachment: PGP.sig
Description: This is a digitally signed message part
https://lists.debian.org/debian-science/2006/10/msg00052.html
It's been a bit over a week since KubeCon EU, and whether you went to the conference and are looking to review the experience, or you didn't get to go and want to catch up, now is the perfect time to take a look back. Oracle had a major presence at the conference with a Diamond sponsorship, a large booth, and lots of talks, podcasts, and other events happening throughout the conference. In this post, Kaslin Fields, Cloud Advocate from the Cloud Native Labs Team, will walk you through some of the highlights of KubeCon/CloudNativeCon 2019 in Barcelona!

P.S. Make sure to check out our (less than 2 minute) KubeCon Recap video in the tweet below!

What an amazing week we had at #KubeCon + #CloudNativeCon Europe 2019. Thanks to everyone for participating in the #OpenSource community, attending our sessions and visiting our booth. Adios, Barcelona! pic.twitter.com/2WxTl6j2b8 — Oracle Cloud (@OracleCloud) May 24, 2019

The fun began even before the conference itself. While KubeCon/CloudNativeCon didn't officially open until Tuesday, many co-located and related events happened on Monday. Two of the biggest pre-con events were the Kubernetes Contributor Summit and the KubeCon/CloudNativeCon lightning talks. As the name implies, the Kubernetes Contributor Summit is geared toward contributors to the open source Kubernetes project, and toward those who would like to become contributors. While the summit offered a variety of opportunities for contributors to various parts of Kubernetes (members of SIGs, Special Interest Groups) to get together and collaborate, it also celebrated another important segment of the community: new contributors. With its 101 and 201 New Contributor Workshops, the Contributor Summit embraced the responsibility of maintaining the Kubernetes community by supporting new members.
At the 101 New Contributor Workshop, attendees were welcomed to the community with a day-long workshop that covered both how the community itself functions, and the basics of how to get started as a contributor. With everyone from software engineers to program managers to marketing professionals in attendance, the workshop leaders Guinevere Saenger and Tim Pepper emphasized the importance of making sure that everyone felt welcome and included in the Kubernetes Community. A key point that came up over and over again throughout the entire KubeCon/CloudNativeCon conference, was that Kubernetes needs contributions of all kinds, not just code! For example, release management is its own challenge which requires a unique skill set. The Kubernetes Documentation writing community is a welcoming group where you can learn about Kubernetes and then share your knowledge through the documentation, without ever needing to write a line of code. Artistic skills such as video, visual art creation, and writing can also be useful for functions like marketing through blog posts, articles and other media. So whatever your background may be, the Kubernetes community would be excited to have you! If you are interested in contributing to Kubernetes, one of the best ways to get started is to join the Kubernetes Slack. With the emphasis on community and inclusion, even newbies are encouraged to ask questions here. Many Kubernetes contributors will tell you that the best way to get started, is to start asking questions. Eventually, as a contributor to such a large and complex project, you will need to choose an area of specialty. Within the Kubernetes Slack, you can find individual channels for each SIG within the Kubernetes Community. 
Examples of SIGs include: SIG Storage - focused on improving Kubernetes' storage capabilities, SIG Testing - focused on creating and maintaining tests which verify the quality and functionality of Kubernetes, SIG PM - which helps manage the Kubernetes project as project managers, and there are many more. For a complete list of Kubernetes SIGs, you can check out the GitHub page here.

The largest single event on the day before the conference is the lightning talks. While the lightning talks are part of the official KubeCon/CloudNativeCon schedule, they are held the evening before the conference starts, and they are all done on one stage all in a row. The lightning talks at KubeCon/CloudNativeCon are a collection of five-minute talks which can cover a huge range of categories. If you're looking to catch up, the lightning talks are a great place to start, as you can hear about a wide range of topics covered at the conference in a relatively short time.

This year, Oracle Cloud Native Labs' own Kaslin Fields (that's me!) presented a lightning talk explaining containers and VMs as cookies. This talk came from Kaslin trying to find a way to explain containers that was easy to understand, memorable, and technically accurate/useful. Originally introduced on Oracle Cloud Infrastructure's Kickin' It With Karan YouTube series, Kaslin uses a cookie analogy to help make the concepts stick. You can check out her 4.5-minute talk here.

Other lightning talks worth a look include the tips-and-tricks talk for the Certified Kubernetes Administrator exam, "Ready, Steady, CKA!" by Olive Power from VMware, and the cautionary tale of "Oh Sh*t! The Config Changed!" by Joel Speed from Pusher.

From Tuesday morning through Thursday afternoon, KubeCon/CloudNativeCon was in full swing. With a huge number of exhibitors on the show floor, exciting big-name keynotes, and more breakout sessions than you could shake a stick at, it's easy to see why this is one of the must-see tech events of the year!
The keynotes at KubeCon/CloudNativeCon EU were huge. With over 7,000 attendees, each keynote reached a huge audience of Kubernetes enthusiasts with varying backgrounds and interests. Naturally, as the largest stage at the conference, it also held the talks with some of the largest messages and speakers with great presence! I'll highlight just a few of the many great talks here. While the lightning talks will give you great breadth over several short talks, the keynotes will give you big, meaningful talks while taking a bit more time.

The first keynote I will highlight here will have to be Oracle Cloud's own VP of Developer Relations, Bob Quillin! Yes, Oracle had a keynote on the main stage at KubeCon EU! VP Bob Quillin emphasized the need for companies to embrace open source initiatives to enable customers to run workloads in the way that works for them. Building an inclusive tech industry where classic legacy applications and modern open source services can coexist is key to the future of tech. And tech giants like Oracle must play a key role in facilitating this new world. While I said watching the keynotes would take up more of your time, Bob's is only about the length of a lightning talk - less than six minutes! So you may as well check it out since you're here...

Some other KubeCon/CloudNativeCon EU keynotes that got a lot of attention, and that you should check out if you have the time, are:

"Reperforming a Nobel Prize Discovery on Kubernetes" - in this keynote by CERN engineers Ricardo Rocha & Lukas Heinrich, you'll see how using Kubernetes makes doing the calculations to discover the Higgs Boson particle look easy by splitting up the parts and completing the task faster than when it was done originally. This was one of the top talks of the conference, so definitely check it out!

Another crowd favorite was "Getting Started in the Kubernetes Community" by Lucas Kaldstrom and Nikhita Raghunath.
Lucas has been a star of the Kubernetes community for years - though he only just graduated high school this year! Nikhita is a new college grad who got involved with the Kubernetes community during her schooling. These two share their stories of how they became involved with Kubernetes and - perhaps even more importantly - why they're still with the community today. And why they think you should get involved too! If you're interested in contributing to Kubernetes, this is a can't-miss keynote! While these keynotes will take some time, you'll find they're well worth the watch! The sponsor showcase is always one of the coolest parts of any conference, and the same is true for KubeCon/CloudNativeCon EU. On the showcase floor you could find booths for sponsors all the way from small startups to all the major cloud players. Companies set out eye-catching displays, gave awe-inspiring lightning talks, and did giveaways of every shape and size. As a Diamond Sponsor, Oracle was well represented. Attendees had the opportunity to learn about Oracle's Cloud and in particular our Cloud Native services (like our managed Kubernetes service - OKE!) directly from Oracle's experts. But that wasn't all they could do! With a cleverly designed multi-sided booth, attendees also had the opportunity to check out lightning talks from Oracle experts and our partners (like Sauce Labs!), and to test out Oracle Cloud Infrastructure with some hands-on labs right there on the showcase floor (well, at a table but you get the idea)! KubeCon/CloudNativeCon received an incredible number of submissions for talks, but only a relative few could be chosen. And those relative few are still more than any one person could possibly keep up with! With such a tough selection process, you can rest assured that sessions at KubeCon/CloudNativeCon were of the highest quality. If you're particularly interested in what Oracle was up to at KubeCon/CloudNativeCon, you're in luck! 
The Oracle team proudly gave two official sessions at KubeCon.

The panel discussion "Democratizing HPC & AI: Startups Scale Up with Cloud Native" was led by Oracle's own Emily Tanaka-Delgado, with a group of awesome panelists including Charlie Davies from iGeolise, Priya Shah from Sauce Video, Ant Kennedy from Gapsquare, and Alfonso Santiago from ELEM. This panel shone the spotlight on the things these startups have been doing in the HPC space. An Oracle startup partner, Sauce Video, also gave some lightning talks in the Oracle booth.

If you're interested in serverless and Ruby, you might check out "Ouch! What I Learned From Being Hit by a Serverless, Ruby Boomerang!" by Oracle Cloud Architect Ewan Slater. This is a tale of the great power and responsibility that comes with involvement in open source. As it says in the abstract, "Asking an open source project to put more effort into supporting your favourite language (Ruby in my case) is asking to be hit by your own boomerang - that's a great idea, why don't you get started?"

Some of the other popular talks include:

"Helm 3: Navigating to Distant Shores" with Microsoft tech advocates Bridget Kromhout and Jessica Dean. The imminent release of Helm 3 was of great excitement to the community at KubeCon EU, with the keynote announcement "Helm 3 will no longer use Tiller" coming to great applause on the keynote stage! In this session, Bridget and Jessica do a great job of explaining the major (and they are major) changes we'll be seeing in the release of Helm 3.

"Sharing is Caring: Your Kubernetes Cluster, Namespaces, and You" by Amy Chen and Eryn Muetzel of VMware. You might recognize Amy from her popular YouTube channel "The Amy Code." This fun and approachable talk serves as a deep dive into Kubernetes namespaces: an oft-overlooked, but in reality critically useful, component of any Kubernetes architecture.
Ian Coldwater of Heroku's "Crafty Requests: Deep Dive Into Kubernetes CVE-2018-1002105" may sound boring, but it received rave reviews for making this security vulnerability relatable and interesting. And although I didn't get to attend it, I'll certainly be checking out Phil Estes (IBM) in "Let's Try Every CRI Runtime Available for Kubernetes. No, Really!"

These are just a few of the many exciting talks from KubeCon/CloudNativeCon EU. If there's a topic you're interested in, be sure to look for it on YouTube - you just might find exactly the information you've been looking for.

So another KubeCon/CloudNativeCon has come and gone, with loads of useful information shared, connections made, and lessons learned. Some of my favorite takeaways from the conference are:

You, yes you, can contribute to Kubernetes! Your skills are valuable and needed; all you need to do is reach out and find out where you can lend a hand.

Excitement around Kubernetes and Cloud Native, open source technologies continues to grow. And although the breadth of knowledge out there may be intimidating, a few good talks can take you a long way toward learning what you need to know!

I hope you'll take this opportunity to learn something new about Kubernetes and Cloud Native technologies. Until next time, the Cloud Native Labs team wishes you happy learning!

(Pictured: the Oracle Cloud Native Labs team, from left to right: Karthik Gaekwad, Jesse Butler, Mickey Boxell, and Kaslin Fields.)

If you would like to try out Oracle's Cloud for yourself, check out our free trial! Are you a hands-on learner trying to gain knowledge on Cloud Native topics? You can learn about a variety of Cloud Native technologies by checking out the tutorials available from Oracle Cloud Native Labs: Learn.
https://blogs.oracle.com/cloudnative/kubeconcloudnativecon-eu-recap-oracle-cloud-native-labs
Line Canvas MIDlet Example
In this example, we are going to draw two different lines which cross each other at the center of the mobile window using the Canvas class.

Rectangle Canvas MIDlet Example
... of rectangle in J2ME. We have created a CanvasRectangle class in this example that extends the Canvas class to draw the given types of rectangle. In this figure canvas class at different locations on the screen. Given are the methods

Image Icon Using Canvas Example
This example is used to create the image at different locations using the Canvas class. In this example, to create the image we are using

J2ME Canvas Repaint
In J2ME, repaint is a method of the Canvas class, and is used to repaint the entire canvas. To define the repaint method in your midlet

Co-ordinates MIDlet Example
In this example the CoordinatesCanvas class extends the Canvas... import javax.microedition.lcdui.*; public class Coordinates extends MIDlet

Creating MIDlet Application For Login in J2ME
This example shows how to create the MIDlet... TextField Ticker In this example we will create a MIDlet

Immutable Image using Canvas Class
This is the immutable image example which shows how to create an immutable image using Canvas. In this example the ImageCanvas class

Align Text MIDlet Example
With the help of the Canvas class, we can draw as many graphics as we... to the text. In this J2ME MIDlet we are going to set the text at different locations.

Draw Font Using Canvas Example
This example is used to draw the different types of font using the Canvas class. The following line of code is used to show the different style

Text Example in J2ME
In the J2ME programming language the Canvas class is used to paint and draw... in our show text MIDlet example. We have created a class called CanvasBoxText...

Creating Canvas Form Example
This example shows how to use the Canvas class in a Form. In this example we take two fields in which an integer number is passed from the form.

Audio MIDlet Example
This example illustrates how to play audio songs in your mobile application by creating a MIDlet. In the application we have created

Draw String Using Canvas
This example is used to draw a string at different locations which...*; public class DrawString extends MIDlet{

Draw Clip Area Using Canvas
This example is going to draw a clip with SOLID line...: setStrokeStyle(Graphics.DOTTED) Methods, that are used in our example code

Get Help MIDlet Example
This example illustrates how to take help from any other text file which is stored in the res folder in your midlet. In this example we are creating

J2ME Key Codes Example
... key pressed on the canvas. In this application we are using the keyPressed... is created in the KeyCodeCanvas class, in which we inherited the Canvas class.

Media MIDlet Example
Creating more than one player in a MIDlet. Here we have... with object 'key' and object 'value' which maps the keys to values. In this example we

Canvas Layout Container in Flex4
The Canvas layout container is used... The tag of the Canvas Layout Container is <mx:Canvas>. In this example the colored area shows the Canvas container. Example: <?xml version="1.0...

J2ME Display Size Example
In the given J2ME MIDlet example, we are going to display the size of the screen. Like image 1 given below, the midlet will print a few items.

J2ME Vector Example
In this example we are using the Vector class in the canvas form. The vector class... ;VectorMIDlet extends MIDlet{ private

Date Field MIDlet Example
This example illustrates how to insert a date field in your form. We...; extends MIDlet{ private Form form; private
http://www.roseindia.net/discussion/22873-Rectangle-Canvas-MIDlet-Example.html
Program to validate Email-Id using regex in C++

In this tutorial, we will be learning how to validate an email id using C++. A regular expression can be used to search for a specific pattern in a string. In C++ we need to include the <regex> header in the code.

C++ program to validate Email-Id using regular expression

Firstly, we need to write a regular expression to check whether the email id is valid or not. The regular expression is:

"(\\w+)(\\.|_)?(\\w*)@(\\w+)(\\.(\\w+))+"

Here,
- The \w+ matches one or more word characters (letters, digits, and underscore) in any case.
- Then \.|_ , made optional by the ? , matches a dot or underscore if one is present 0 or 1 times.
- Then \w* matches zero or more word characters.
- Then @ matches the @ in the email.
- Then we again match word characters for the domain, followed by a '.' and a word, which must be present one or more times.

Program:

#include<iostream>
#include<regex>
#include<stdio.h>
using namespace std;

bool Email_check(string email)
{
    const regex pattern("(\\w+)(\\.|_)?(\\w*)@(\\w+)(\\.(\\w+))+");
    return regex_match(email, pattern);
}

int main()
{
    string str;
    cout << "Enter your Email-Id:" << endl;
    cin >> str;
    if (Email_check(str))
        cout << "Your Email-Id is valid" << endl;
    else
        cout << "Your Email-Id is invalid" << endl;
    return 0;
}
//BY DEVIPRASAD D MAHALE

Output 1:
Enter your Email-Id:[email protected]
Your Email-Id is valid

Output 2:
Enter your Email-Id:[email protected]+com
Your Email-Id is invalid

Hope you have understood the concept and the program to validate an email id using regex in C++. Feel free to comment.

Also read,
Difference between delete and free() in C++ with an example
Program to calculate percentile of an array in C++
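To see which strings the pattern accepts without typing them in interactively, the check can be wrapped in a small self-contained function (the sample addresses below are my own, not from the tutorial):

```cpp
#include <regex>
#include <string>

// Same pattern as in the tutorial, wrapped for non-interactive testing.
bool is_valid_email(const std::string& email) {
    static const std::regex pattern("(\\w+)(\\.|_)?(\\w*)@(\\w+)(\\.(\\w+))+");
    // regex_match requires the WHOLE string to match, not just a substring.
    return std::regex_match(email, pattern);
}
```

Note that the pattern is deliberately simple: it accepts multi-label domains like a@b.co.uk, but rejects strings with two @ signs or with no dot after the domain name.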
https://www.codespeedy.com/program-to-validate-email-id-using-regex-in-cpp/
Martijn Moeling wrote:
> Hi all,
>
> I started the development of a specific kind of CMS over 2 years ago,
> and due to sparse documentation I am still puzzled about a few things.
>
> Basically it consists of one .py and a mysql database with all the data
> and templates; ALL pages are generated on the fly.
>
> First of all I am confused about PythonInterpPerDirectory and
> PythonInterpPerDirective in the way I use mod_python.
>
> My apache has no configured virtual hosts since my CMS can handle this
> on its own by looking at req.host.
>
> On one site which is running on my system ( <> ) we use different
> subdomains, so basically the page to be built is derived from the
> req.host (e.g. xxx.yyy.mkbok.nl) where xxx and yyy are variable.
>
> This is done by DNS records like * A ip1.ip2.ip3.ip4

You don't actually state what problem here. ;)

> My next problem seems mysql and mysqldb.
>
> Since I do not know which website is requested (multiple are running on
> that server) I open a database connection, do my stuff and close the
> connection.
>
> Again the database selection comes from req.host, and here the
> domain name is used for database selection.
>
> The system runs extremely well, but once in a while the webserver
> becomes so busy that it does not respond to page requests anymore.
>
> So logging in to the UPS remotely and powering down the system by virtually
> unplugging the cable is the only (and BAD) solution.

Ouch. Maybe you can run a cron job every 5 minutes to check the load and try to catch the problem before you hit 100%? I'm not suggesting this is a permanent solution, just do it until you can track down the cause.

Is there a chance that mysql is hitting its connection limit? (although I'm not sure if that would cause the behaviour you describe).
> So what is best practice when you have to connect to mysql with mysqldb
> in a mod_python environment, keeping in mind that the database connection
> has to be built every time a visitor requests a page? Think in terms of
> a "globally" available db or db.cursor connection.

I don't think the performance penalty for creating a connection to mysql is too great - at least compared to some other databases. You might want to google for more information.

> Since global variables are troublesome in the .py containing the handler
> I use a class from which an instance is created every time a client
> connects, and the DB connection is global to that class. Is that wrong?

This looks OK.

> What happens if mod_python finds an error before my mysqldb connection
> is closed (not that this happens a lot, but it does happen, sorry)

It depends on how you handle the exception. This is why you should close the connection in a registered cleanup function, which will always run.

> Also I do not understand the req.register_cleanup() method. What is
> cleaned up and what not?

Whatever function you register is run during the cleanup phase (unless mod_python segfaults - but then you've got other problems). The cleanup phase occurs after the response has been sent and anything registered is guaranteed to run, regardless of what happens in prior phases. Typical usage looks like this:

def handler(req):
    conn = MySQLdb.connect(db='blah', user='blah', passwd='blah')
    req.register_cleanup(db_cleanup, conn)
    ... do your request handling ...
    return apache.OK

def db_cleanup(connection):
    connection.close()

Jim
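Neither mod_python nor MySQLdb can be imported outside Apache, but the guarantee Jim describes can be illustrated with a toy stand-in (all the Fake* names below are mine, for illustration only): the registered cleanup still closes the connection even when the handler raises.

```python
# Toy stand-in for mod_python's request object and a DB connection,
# illustrating why register_cleanup() always closes the connection.

class FakeConnection:
    """Stands in for a MySQLdb connection."""
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True


class FakeRequest:
    """Stands in for mod_python's request object."""
    def __init__(self):
        self._cleanups = []
        self.conn = None

    def register_cleanup(self, func, data=None):
        self._cleanups.append((func, data))

    def run_cleanup_phase(self):
        # Apache runs registered cleanups after the response,
        # no matter what happened in earlier phases.
        for func, data in self._cleanups:
            func(data)


def db_cleanup(connection):
    connection.close()


def handler(req):
    req.conn = FakeConnection()   # would be MySQLdb.connect(...)
    req.register_cleanup(db_cleanup, req.conn)
    raise RuntimeError("boom")    # simulate the handler failing mid-request


req = FakeRequest()
try:
    handler(req)
except RuntimeError:
    pass
finally:
    req.run_cleanup_phase()       # the cleanup phase still runs

print("connection closed:", req.conn.closed)
```

The point of the pattern is exactly this: because the close happens in the cleanup phase rather than inside the handler, an unhandled exception in the handler cannot leak the connection.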
http://modpython.org/pipermail/mod_python/2006-July/021629.html
To know more about the IIS request process, here is one of my articles: How IIS Process ASP.NET Request.

Identify Worker Process in IIS 7.0

From IIS 7.0 you need to run the IIS command tool (appcmd):

• Start > Run > Cmd
• Go to Windows > System32 > Inetsrv
• Run appcmd list wp

This will show you the list of worker processes that are running on IIS 7.0, in a format similar to IIS 6.0.

List of IIS Articles Published by Me

Beginner's Guide : Exploring IIS 6.0 With ASP.NET
Debug Your ASP.NET Application that Hosted on IIS : Process Attach and Identify which process to attach
Remote IIS Debugging : Debug your ASP.NET Application which is hosted on "Remote IIS Server"
How IIS Process ASP.NET Request

Can you help me how to identify the Worker Process in IIS 5.1?

For IIS 5.1 the process name is aspnet_wp.exe.

Good one buddy…. 🙂

Thanks !!

excellent dude!!

Thanks Ram !!

Nice one dude. I always love reading your articles. Keep it up and make me always busy learning about ASP.Net… 🙂 Cheers, Kunal

Thanks Kunal !! Nice to know you liked it !!

Hi, Very good article! It solved my problem with SharePoint Foundation 2010 – I could not use the SharePoint lib from VS without getting the error "file not found…?". thanks Lars

Hi, Could you please provide some more details on the error? What is the exact error and when are you getting it? Thanks ! Abhijit

Hi Abhijit, Here is my problem: I have made a .NET ASP project using sharepoint.dll (SharePoint Foundation 2010), but when I run the code I get "filenotfound". If I write the url in my browser I have no problem accessing the site. I only get the problem when I call SPSite from an asp.net app and run it from Visual Studio; if I deploy it to my IIS it works fine. I can also make a winform app and run it from Visual Studio calling the same code with no problem, so it's only when I'm running the asp.net project from VS that I get the problem. br. Lars

i'm using windows7 64bit, visual studio 2010, and sharepoint foundation
and IIS 7 my code: default.aspx.cs: using System; using System.Collections.Generic; using System.Linq; using System.Web; using System.Web.UI; using System.Web.UI.WebControls; using Microsoft.SharePoint; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { SPSite _MySite = new SPSite(“”); SPWeb _MyWeb = _MySite.OpenWeb(); } } ———–Default.aspx——————- Rest of the code ———–Default.aspx——————- ” “ Sorry but can’t post the Html – but can mail it to you if needed Abijit articles from you are always best. Just keep doing greart work. Thanks Raj Thanks man !!! Hi, Really nice article! And I appreciate that you covered both IIS 6 and 7. If you want, take a look at a VS 2010 Extension I have written: WADA. What it does: It finds the running W3WP processes and their application pools + allows to directly attach to that process for debugging purposes. You can find a description on my blog: Hi Martin, Great to know you liked it. Reagarding the extension, I have had started to developing the same few months back after posting this blog post. This is quite difficult when we are having multiple app pool running over IIS and need to attached a particular porcess. My plan was to enable some features when i can show the particular worker process that is related to that web site. I am also most done with the work. I am now just looking for which is the best place to put my extension in visual studio. I hope to share it very soon. Cheers Abhijit Thanks a lot its very useful to learn for us like a freshers.. For IIS7 you can just use the GUI. IIS manager–>Server–>Worker Processes there is no need to enter the command line. Thanks. I have talked about the same over here, hey abhijit ..nice articles…m new to .net n windows azure…i hav a problem cited below hope u can help me wid this. After i create a cloud project and add a asp.net web role, i run the project and i get a blank page in the browser. 
I then changed the http web activation thing in d windows features on or off place…still i hav been facing this problem since two days…hope u can help me out 🙂 I always reading your articles. also i am getting so many knowledge in your tip and trick. You are genius and exalant . I am from Orissa(Bhubaneshow) Thanks & Regards Tofan Nayak Mostly i can get details around Worker Process but what exactly it is am unable to get it. If you let me know it would be very helpful Thanks for sharing your info on this. Wouldn’t it be even better if IIS can assign a title to each worker process? It could take names from application pools. Hi Abhijit, Can you tell me what is worker process???????????? Hi Abhijit, i am just beginner with iis and i am following you and ……..your articals helping me very well. Thanks:- Mohit Bansla run this command ‘appcmd list wp’ in elevated command prompt Good job abhi .. super like! Nice Article Pretty! This was a really wonderul post. Thank you for supplying this info.
https://abhijitjana.net/2010/07/15/identifying-worker-process-w3wp-exe-iis-6-0-and-iis-7-0-for-debugging-asp-net-application/
Hey guys, I've been working on a project recently. It's pretty simple and a few friends helped me out with it, but I wanna try to improve it a little bit. I'm trying to add a sound file whenever a player wins/loses. Being brand new to Pygame, I'm not completely familiar on how to use it. So far I have as follows:

import random
import pygame
import pygame.mixer

pygame.init()
pygame.mixer.init()
sound = pygame.mixer.Sound('hey.wav')
sound.play()

MAXBLOCKS=100
MINBLOCKS=0
userBlocks=50
dealCount=0

print "The goal of the game is to achieve, or go over, 100 points."
print "The computer will automatically choose a value of 1 through 10."
print "The tower's height is based upon the value of each card drawn."
print "Even valued cards add blocks, while odd valued blocks remove blocks."
print "Card values are listed as follows:"
print " Card value - Block Height"
print "______________________________\n"
print " 1,2 - +/- 05"
print " 3,4 - +/- 10"
print " 5,6 - +/- 15"
print " 7,8 - +/- 20"
print " 9,10 - +/- 25\n"
print "You start with", userBlocks, "blocks.\n"
print "Test your luck!"
dummy = raw_input("Hit enter to attain your cards...\n")
print "\n\n"

while(userBlocks<MAXBLOCKS and userBlocks>MINBLOCKS):
    card = random.randrange(1,10)
    print "Your card for this deal is:", card
    blockValue=0;
    if ( card==1 or card==2):
        blockValue=5
    elif (card==3 or card==4):
        blockValue=10
    elif (card==5 or card==6):
        blockValue=15
    elif (card==7 or card==8):
        blockValue=20
    else:
        blockValue=25
    print blockValue, "points have been",
    if(card%2==0):
        print "added to",
        userBlocks+=blockValue
    else:
        print "subtracted from",
        userBlocks-=blockValue
    print "your tower."
    if(userBlocks>MINBLOCKS and userBlocks<MAXBLOCKS):
        print "You now have", userBlocks, "blocks in your tower!\n"
    dealCount+=1

print "You lost after", dealCount, "deals!"
if userBlocks <= MINBLOCKS:
    print "You lost all of your blocks! You lost."
else:
    print "You won! Your tower is", userBlocks, "high!"
Whenever I try to run it, no sound is played whatsoever. I saved the Python file to a folder on my desktop, and put the sound file in the folder with it. As seen here. Any ideas? Thanks!
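Separately from the sound problem, the chained if/elif that maps each card to a multiple of five can be factored out and checked in isolation. A sketch of that logic; the function names here are ours, not from the original script:

```python
def block_value(card):
    """Cards 1,2 -> 5; 3,4 -> 10; 5,6 -> 15; 7,8 -> 20; 9,10 -> 25."""
    return ((card + 1) // 2) * 5

def apply_card(blocks, card):
    """Even cards add their block value to the tower, odd cards remove it."""
    delta = block_value(card)
    return blocks + delta if card % 2 == 0 else blocks - delta
```

Pulling the rule into one arithmetic expression replaces the five-branch if/elif and makes the game logic testable without running the full loop.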
https://www.daniweb.com/programming/software-development/threads/438188/problem-with-importing-sound-through-pygame
Mesh to Shape

Description

Arch MeshToShape converts a selected Mesh (Mesh Feature) object into a Shape (Part Feature) object. This tool is optimized for objects with flat faces (no curves). The corresponding tool Part ShapeFromMesh from the Part Workbench might be better suited for objects that contain curved surfaces.

Usage

- Select a mesh object.
- Press the Mesh to Shape entry in Arch → Utilities → Mesh to Shape.

Properties

Limitations

Scripting

See also: Arch API and FreeCAD Scripting Basics.

This tool can be used in macros and from the Python console by using the following function:

new_obj = meshToShape(obj, mark=True, fast=True, tol=0.001, flat=False, cut=True)

The above code snippet converts the given obj (a mesh) into a shape, joining coplanar facets.

- If mark is True, non-solid objects will be marked in red.
- If fast is True, it uses a faster algorithm by building a shell from the facets and then removing splitters.
- tol is the tolerance used when converting mesh segments to wires.
- If flat is True, it will force the wires to be perfectly planar to be sure they can be converted into faces, but this might leave gaps in the final shell.
- If cut is True, holes in faces are made by subtraction.

Example:

import Arch, Mesh, BuildRegularGeoms
Box = FreeCAD.ActiveDocument.addObject("Mesh::Cube", "Cube")
Box.Length = 1000
Box.Width = 2000
Box.Height = 1000
FreeCAD.ActiveDocument.recompute()
new_obj = Arch.meshToShape(Box)
https://wiki.freecadweb.org/Arch_MeshToShape/pt-br
On 05/01/06, David Menendez <zednenem at psualum.com> wrote: > Cale. My main concern is that mplus means fairly different things in different monads -- it would be good to be able to expect instances of MonadPlus to satisfy monoid, left zero, and left distribution laws, and instances of MonadElse to satisfy monoid, left zero and left catch. It's just good to know what kinds of transformations you can do to a piece of code which is meant to be generic. > > >. Well, me too :) Of course, this sort of thing (especially with the inclusion of PointedFunctor and Applicative) brings us back to wanting something along the lines of John Meacham's class alias proposal. I remember there was a lot of commotion about that and people arguing about concrete syntax. Was any kind of consensus reached? Do we like it? How does it fare on this hierarchy? I think we need some provision for simplifying instance declarations in this kind of situation. It seems to be a common scenario that you have some finely graded class hierarchy, and you really want to be able to declare default instances for superclasses based on instances for subclasses in order to not force everyone to type out a large number of class instances. Another idea I've had for this, though I haven't really thought all of the consequences out, (and I'm looking forward to hearing about all the awful interactions with the module system) is to allow for default instances somewhat like default methods, together with potentially a little extra syntax to delegate responsibility for methods in default instances. 
The start of your hierarchy could look something like:

----
class Functor f where
  map :: (a -> b) -> f a -> f b

class Functor f => PointedFunctor f where
  return :: a -> f a
  instance Functor f where
    require map -- this explicitly allows PointedFunctor instances to define map

class PointedFunctor f => Applicative f where
  ap :: f (a -> b) -> f a -> f b
  lift2 :: (a -> b -> c) -> f a -> f b -> f c
  ap = lift2 ($)
  lift2 f a b = map f a `ap` b
  instance Functor f where
    require map
  instance PointedFunctor f => Functor f where
    map = ap . return
  instance PointedFunctor f where
    require return

class Applicative m => Monad m where
  join :: m (m a) -> m a
  (>>=) :: m a -> (a -> m b) -> m b
  join m = m >>= id
  m >>= f = join (map f m)
  instance Functor m where
    require map
  instance PointedFunctor m where
    require return
    map f x = x >>= (return . f)
  instance PointedFunctor m => Applicative m where
    ap fs xs = do f <- fs
                  x <- xs
                  return (f x)
----

This is a little verbose, but the intent is to have classes explicitly declare what default instances they allow, and what extra function declarations they'll need in an instance to achieve them. Here I used 'require' for this purpose. To be clear, in order to define an instance of Monad with the above code, along with all its superclasses, we could simply define return and (>>=) as we would in Haskell 98. Since a definition of return is then available, an instance of PointedFunctor would be inferred, with the default instance providing an implementation of map which satisfies the requirement for the default Functor instance for PointedFunctor. We then get a default implementation of Applicative for free due to the instance of PointedFunctor which we've been able to construct. An alternate route here would be to just define return, map, and join. The declaration for map leads to an instance of Functor.
The default instance of PointedFunctor requiring return is used, but the default implementation of map there is ignored since the user has already provided an explicit implementation of map. Bind is defined by the default method provided using join and map, then an instance of Applicative is constructed using bind and return. Some things to note: * The goal should always be to take implementations as early as they become available, that is, if the user provides something it's taken first, then each subclass gets a shot at it starting with the bottom-most classes in the partial order and working up. We'll likely run into problems with multiple inheritance, but I think that simply reporting an error of conflicting instance declarations when two incomparable default instances would otherwise be available is good enough. Such situations are rare, and the user will always be able to simply provide their own instance and resolve ambiguities. * Class constraints on classes (like the Applicative constraint on the Monad class above) are used to deny instances only after it's been determined what instances are really available via defaulting. Since the class definition for Monad provides a default instance for Applicative, the user can avoid the trouble of declaring one, provided that an instance for PointedFunctor can be constructed. * What extra methods might be needed, and what defaults are available are specified directly in the classes, rather than simply allowing any instance to come along and mess with the class hierarchy by providing implementations of arbitrary superclass methods. - Cale
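[Editorial note for context, not part of the original thread: the boilerplate this proposal aims to eliminate is exactly what modern GHC requires today, where Applicative did become a superclass of Monad. A minimal sketch with an Identity monad shows the repetition that default instances would remove:]

```haskell
import Control.Monad (ap, liftM)

newtype Identity a = Identity { runIdentity :: a }

-- Each instance below is mechanically derivable from the Monad instance,
-- yet must be written out by hand.
instance Functor Identity where
  fmap = liftM

instance Applicative Identity where
  pure  = Identity
  (<*>) = ap

instance Monad Identity where
  return = pure
  Identity x >>= f = f x
```

Under the proposal sketched above, defining return and (>>=) alone would be enough, with the Functor and Applicative instances inferred.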
http://www.haskell.org/pipermail/haskell/2006-January/017254.html
Name | Synopsis | Description | Return Values | Errors | Usage | Attributes | See Also

Synopsis

#include <sys/time.h>

int gettimeofday(struct timeval *tp, void *tzp);
int settimeofday(struct timeval *tp, void *tzp);

Description

The gettimeofday() function gets and the settimeofday() function sets the system's notion of the current time. The current time is expressed in elapsed seconds and microseconds since 00:00 Universal Coordinated Time, January 1, 1970. The resolution of the system clock is hardware dependent; the time may be updated continuously or in clock ticks.

The tp argument points to a timeval structure, which includes the following members:

long tv_sec;  /* seconds since Jan. 1, 1970 */
long tv_usec; /* and microseconds */

If tp is a null pointer, the current time information is not returned or set. The TZ environment variable holds time zone information. See TIMEZONE(4). The tzp argument to gettimeofday() and settimeofday() is ignored.

Only privileged processes can set the time of day.

Return Values

Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error.

Errors

The settimeofday() function will fail if:

The structure pointed to by tp specifies an invalid time.

The {PRIV_SYS_TIME} privilege was not asserted in the effective set of the calling process.

The gettimeofday() function will fail for 32-bit interfaces if:

The system time has progressed beyond 2038, thus the size of the tv_sec member of the timeval structure pointed to by tp is insufficient to hold the current time in seconds.

Usage

If the tv_usec member of tp is > 500000, settimeofday() rounds the seconds upward. If the time needs to be set with better than one second accuracy, call settimeofday() for the seconds and then adjtime(2) for finer accuracy.

Attributes

See attributes(5) for descriptions of the following attributes:

See Also

adjtime(2), ctime(3C), gethrtime(3C), TIMEZONE(4), attributes(5), privileges(5), standards(5)
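As a usage sketch of the interface above: since tzp is ignored, a null pointer is passed for it, and the two timeval fields can be combined into a single microsecond count. The helper name below is ours, not part of the API:

```c
#include <stddef.h>
#include <sys/time.h>

/* Return microseconds elapsed since the Epoch, or -1 on failure.
   Only gettimeofday() itself comes from the interface documented above. */
long long micros_since_epoch(void) {
    struct timeval tv;
    if (gettimeofday(&tv, NULL) != 0)   /* tzp is ignored, so pass NULL */
        return -1;
    return (long long)tv.tv_sec * 1000000LL + (long long)tv.tv_usec;
}
```

The cast to long long avoids the 32-bit tv_sec overflow the Errors section warns about when arithmetic is done on the seconds value.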
http://docs.oracle.com/cd/E19082-01/819-2243/gettimeofday-3c/index.html
In this Google Flutter code example we are going to learn how to use the IntrinsicWidth widget in Flutter.

intrinsicwidth.dart

import 'package:flutter/material.dart';

class BasicIntrinsicWidth extends StatelessWidget {
  // A widget that sizes its child to the child's intrinsic width.
  // This class is useful, for example, when unlimited width is available and
  // you would like a child that would otherwise attempt to expand infinitely
  // to instead size itself to a more reasonable width.
  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: Text("IntrinsicWidth Widget")),
      body: Center(
        child: IntrinsicWidth(
          child: Container(
            height: 40.0,
            width: 150.0,
            color: Colors.red,
            child: Container(color: Colors.brown, width: 70.0),
          ),
        ),
      ),
    );
  }
}

If you have any questions or suggestions kindly use the comment box or you can contact us directly through our contact page below.
https://inducesmile.com/google-flutter/how-to-use-intrinsicwidth-widget-in-flutter/
Now let's cover how to visually show the results, try some more examples, and then talk about some ideas for moving forward. Here is the modified script to include matplotlib, and I will also copy and paste everything, so this is the full script for this series:

from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
import time
from collections import Counter
from matplotlib import style
style.use("ggplot")

def createExamples():
    numberArrayExamples = open('numArEx.txt','a')
    numbersWeHave = range(1,10)
    for eachNum in numbersWeHave:
        for furtherNum in numbersWeHave:
            imgFilePath = 'images/numbers/'+str(eachNum)+'.'+str(furtherNum)+'.png'
            ei = Image.open(imgFilePath)
            eiar = np.array(ei)
            eiarl = str(eiar.tolist())
            lineToWrite = str(eachNum)+'::'+eiarl+'\n'
            numberArrayExamples.write(lineToWrite)

def threshold(imageArray):
    balanceAr = []
    newAr = imageArray
    for eachPart in imageArray:
        for theParts in eachPart:
            # for the reduce(lambda x, y: x + y, theParts[:3]) / len(theParts[:3])
            # in Python 3, just use: from statistics import mean
            # then do avgNum = mean(theParts[:3])
            avgNum = reduce(lambda x, y: x + y, theParts[:3]) / len(theParts[:3])
            balanceAr.append(avgNum)
    balance = reduce(lambda x, y: x + y, balanceAr) / len(balanceAr)
    for eachRow in newAr:
        for eachPix in eachRow:
            if reduce(lambda x, y: x + y, eachPix[:3]) / len(eachPix[:3]) > balance:
                eachPix[0] = 255
                eachPix[1] = 255
                eachPix[2] = 255
                eachPix[3] = 255
            else:
                eachPix[0] = 0
                eachPix[1] = 0
                eachPix[2] = 0
                eachPix[3] = 255
    return newAr

def whatNumIsThis(filePath):
    matchedAr = []
    loadExamps = open('numArEx.txt','r').read()
    loadExamps = loadExamps.split('\n')
    i = Image.open(filePath)
    iar = np.array(i)
    iarl = iar.tolist()
    inQuestion = str(iarl)
    for eachExample in loadExamps:
        try:
            splitEx = eachExample.split('::')
            currentNum = splitEx[0]
            currentAr = splitEx[1]
            eachPixEx = currentAr.split('],')
            eachPixInQ = inQuestion.split('],')
            x = 0
            while x < len(eachPixEx):
                if eachPixEx[x] == eachPixInQ[x]:
                    matchedAr.append(int(currentNum))
                x+=1
        except Exception as e:
            print(str(e))
    x = Counter(matchedAr)
    print(x)
    graphX = []
    graphY = []
    ylimi = 0
    for eachThing in x:
        graphX.append(eachThing)
        graphY.append(x[eachThing])
        ylimi = x[eachThing]
    fig = plt.figure()
    ax1 = plt.subplot2grid((4,4),(0,0), rowspan=1, colspan=4)
    ax2 = plt.subplot2grid((4,4),(1,0), rowspan=3, colspan=4)
    ax1.imshow(iar)
    ax2.bar(graphX,graphY,align='center')
    plt.ylim(400)
    xloc = plt.MaxNLocator(12)
    ax2.xaxis.set_major_locator(xloc)
    plt.show()

whatNumIsThis('images/test.png')

I encourage you to open up paint or something similar, create an 8 x 8 square, and draw your own numbers. You could use some of the training set, but this is highly improper. Try drawing numbers and then shifting them a bit left or right, or up and down. You should find some pretty decent success here, though there are obviously many problems moving forward from here. So far, we normalized images to be either white or black, but everything was 8x8. You won't always be this lucky, but you can indeed resize. The thickness of our characters was also very standard. The main objective of this series was to teach you that, while image recognition is a somewhat complex topic in layers, each problem can be broken down and solved with very simple, easily understood code. I hope you enjoyed. That is the end of this series. Want more tutorials? Head to the
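The threshold step in the script reduces to: average each pixel's RGB values, average those averages to get a balance point, then snap every pixel to white or black around that balance. The same logic on plain nested lists, free of PIL and numpy (the function names here are ours):

```python
def mean(values):
    """Average of a sequence of numbers."""
    return sum(values) / float(len(values))

def threshold_simple(image):
    """image: rows of (r, g, b) tuples -> rows of 0/255 brightness values."""
    # Overall balance point: the mean of every pixel's mean brightness.
    pixel_means = [mean(px[:3]) for row in image for px in row]
    balance = mean(pixel_means)
    # Snap each pixel to white (255) or black (0) around the balance.
    return [[255 if mean(px[:3]) > balance else 0 for px in row]
            for row in image]
```

This is only a sketch of the idea, not a drop-in replacement: the original mutates the numpy array in place and keeps the alpha channel, while this returns a fresh list of brightness values.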
https://pythonprogramming.net/testing-visualization-and-conclusion/
I decided to dig into Indigo.. err, Windows Communication Foundation… for awhile this weekend. I have done demos before and run through samples using TCP, but I have never used HTTP with it yet. Here are some of the most obvious stumbling blocks that I think other developers are going to run into when using Visual Studio 2005 and WCF for the first time.

- If you try to create a client proxy for a file-based ASMX service using svcutil, you are going to get a MessageSecurityException with the message "The HTTP request is unauthorized with client authentication scheme 'Anonymous'. The server authentication schemes are 'NTLM'". When you create a file-based ASMX service, you are running it using the ASP.NET Development Server (aka Cassini). Steve Maine's post on Indigo and Cassini clears this one up.
- If you are generating a client proxy for an ASMX service using svcutil, use the /uxs switch to use the XmlSerializer when generating the client proxy.
- To control the namespace binding for wsdl:types, specify the namespace property in the DataContract attribute. To control the namespace binding for wsdl:definitions, specify the namespace property on the ServiceContract attribute. This will get you away from the tempuri.org namespace binding.
- If you have been watching Steve Swartz' and Don Box' demos on Indigo, you will have noticed that the config data generated by svcutil is considerably more verbose than the address, binding, and contract samples they use. You can either use the config info that svcutil generates in output.config (noting Steve's advice above), or you can trim it up considerably by using the basicHttpBinding (which also forces you to look at bindings more closely):

<client>
  <endpoint address="…"
            binding="basicHttpBinding"
            configurationName="ServiceSoap"
            contract="ServiceSoap" />
</client>

I was going to work on some WSE 3.0 interop scenarios tonight, but can't seem to get WSE 3.0 installed with the WinFX September CTP.
Instead, I am doing some hacking to see what weird scenarios I can make work with WCF and WSE 2.0. Next up is Workflow.
https://blogs.msdn.microsoft.com/kaevans/2005/09/24/wcf-indigo-baby-steps/
Introduction: Bluetooth Control RC Tank + Android + Arduino

May peace be upon you. This is my first instructable. It is about controlling your RC tank via Bluetooth instead of radio frequency. There are many projects like this, but this is my way. hahaha

I started to think that I am too lazy to bring the controller of the RC tank, so I got the idea to make one that can be controlled by my smartphone. The big advantage is that you can add whatever feature you want to your tank, and the smartphone is the thing you bring anywhere, anytime. Here are the main things that I used:

1) Bluetooth module HC-06
2) L298N motor driver
3) Arduino Uno or Nano
4) RC tank
5) Smartphone with Bluetooth
6) Android controller app

Step 1: Preparing the Android Application

Using MIT App Inventor, I have built a simple application that sends a command in ASCII to the Arduino via Bluetooth. The app makes use of canvas, image, label, button, clock, and Bluetooth connectivity components. It consists of 13 different command images (canvases). As we press and hold a command image (canvas), the data is sent until we release it. The data is then compared in the Arduino code. If the data matches, an action is taken; if not, no action is taken.

Download the app below and install it on your phone. If you want to add some more features, you can download the .aia format below, then open MIT App Inventor and import it.

Step 2: Remove the Existing Circuit

Carefully remove the circuit inside the tank, but be sure that you label the wires first, as they might be used in the future. To make it easier, take a picture of the circuit connections.

Step 3: The Arduino Circuit

Note that, as I read on some websites, the RX pin of the Arduino should be connected through a voltage divider first to get a value of about 3.3V before connecting it to the TX of the Bluetooth module. But I have connected them directly together and until now no problem has happened yet... wahahha

For testing, you can replace the motors with LEDs or other outputs.
Step 4: Pairing the Smartphone With the Bluetooth Module

After connecting all the wires, turn on the Arduino. You will notice that the Bluetooth module is now flashing. This is because it is not connected to another device yet. Then go to your smartphone, turn ON Bluetooth, and search for the module. The name of the module is "HC-06". Pair the module, and if it asks for a password, it is "1234".

Step 5: Uploading the Arduino Code

Copy and paste the code into the Arduino IDE. Now it is time to upload it to the Arduino. Don't forget to disconnect the RX and TX wires during uploading; if not, the upload will fail. After uploading is done, reconnect the wires.

Step 6: Testing

Now you have reached the final stage. Activate the Bluetooth on your smartphone. Then open the app that you installed earlier. Click the "select device" button. A list of Bluetooth devices will appear; choose your Bluetooth module, in this case HC-06. Wait a moment, and if the connection is successful, the Bluetooth module will stay lit and the app will pop up "the device is connected". Press any of the 13 command images. You will see an action take place on your RC tank. Test all of them.

Step 7: Put Them Nicely in Your RC Tank

This is the way to make your RC tank neat and tidy. It depends on your creativity. I used recycled items as much as possible, such as:

- aluminum plate remnants
- a female USB connector and an internal hard disk casing from a damaged laptop
- male and female audio jacks from a broken radio and microphone
- some jumper wire from broken electrical appliances

20 Discussions

I am not able to get the L298N Motor Driver Module. Please help......

You can buy it online on the website above, but I suggest you go to your nearest electronics shop. Just give them the module number L298N and they will find it for you.

Hey jeneral... can you give me information on how to match a button in the canvas (App Inventor) to a pinOUT (Arduino)? Because I want to add some buttons for switching relays.. thanks..
I see that w = forward d = backward q = left e = right how to setting the STATE ? i wonder where u bough that tank... its look very nice! owh.. that tank i bought it online from somewhere in malaysia. it is heng long brand made in china. cool, can u share the link?-... but they only ship within malaysia Sir please can you please send me fritzing diagram of your connection As i am a beginer this diagram i am able understand it I am very sorry for late reply because I am busy on my works lately. I think the drawing i made using microsoft word is already straight forward. One more thing, i am not familiar with fritzig software. Hahaha I am really looking forward to making this! Dont forget to share it here.. :) I was able to jump back into this project after being busy with other things for a few months. I was able to get the lower half working via the usb cable to the Arduino. Next step is to hook up the upper half, then the bluetooth module. Step by step it is coming together. I wanted to add that I am using L298 Motor Driver Modules instead of just the chip. It makes hooking up the Arduino and motors easier. I am lucky enough to have a local supplier. You can get them here: I got it to work. I had trouble with the communication between my Arduino and the Bluetooth. Fixed that by combining your sketch with the Sparkfun Bluetooth test sketch by Jim Lindblom. I am putting together my documentation and will be posting it on instructables. You were a great inspiration. Hi, i am making a slightly modified version of this (basically a car without the turret and horn, just a basic forward, backward, left and right function car). I tried to slightly change your code for the arduino uno but... Arduino gives me the following errors... I have no previous experience with this type of coding however i can easily get a general feeling of they work. 
"sketch_feb10b.ino:1:21: error: pitches.h: No such file or directory sketch_feb10b.ino: In function 'void loop()': sketch_feb10b:23: error: 'state' was not declared in this scope sketch_feb10b:26: error: 'state' was not declared in this scope" the changes i made to the code are: " #include "pitches.h" int motor1Pin1 = 12; int motor1Pin2 = 11; int motor2Pin1 = 10; int motor2Pin2 = 9; void setup() { // sets the pins as outputs: pinMode(motor1Pin1, OUTPUT); pinMode(motor1Pin2, OUTPUT); //pinMode(enable1Pin, OUTPUT); pinMode(motor2Pin1, OUTPUT); pinMode(motor2Pin2, OUTPUT); Serial.begin(9600); } void loop() { //if some date is sent, reads it and saves in state if(Serial.available() > 0){ state = Serial.read(); } // forward if (state == 'w') { digitalWrite(motor1Pin1, HIGH); digitalWrite(motor1Pin2, LOW); digitalWrite(motor2Pin1, LOW); digitalWrite(motor2Pin2, HIGH); } // left else if (state == 'e') { digitalWrite(motor1Pin1, HIGH); digitalWrite(motor1Pin2, LOW); digitalWrite(motor2Pin1, HIGH); digitalWrite(motor2Pin2, LOW); } // right else if (state == 'q') { digitalWrite(motor1Pin1, LOW); digitalWrite(motor1Pin2, HIGH); digitalWrite(motor2Pin1, LOW); digitalWrite(motor2Pin2, HIGH); } // backward else if (state == 'd') { digitalWrite(motor1Pin1, LOW); digitalWrite(motor1Pin2, HIGH); digitalWrite(motor2Pin1, HIGH); digitalWrite(motor2Pin2, LOW); } // Stop else { digitalWrite(motor1Pin1, LOW); digitalWrite(motor1Pin2, LOW); digitalWrite(motor2Pin1, LOW); digitalWrite(motor2Pin2, LOW); noTone(13); } delay(100); }" Could you please help me where i have gone wrong this is for a school project thanks >.< sorry for late reply.. I have uploaded the new tank.ino code.. If you dont want to use horn function, you can remove "#include "pitches.h" Peace. Good call using the MIT app inventor. I had no prior exposure. I would really like to use (and modify) your .aia file. When I follow the link, however I am offered an application which doesn't match your example. 
Could you update the .aia link or perhaps offer a link to a different site (like github). Much appreciated! Paul Oh my mistake. I have updated the latest link. Thank you for telling me. Well done! That was a lot of new components to cram back into the body of the tank, but it looks like it turned out great. Thanks for sharing this! You are welcome sir.. actually there is a lot of empty space in the belly of the tank. But I decided to let it empty so that I can run the tank on shallow water (half sink). That is why the turret is full of components and the battery is put outside.
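The direction-decoding pattern used in the sketches above (one Bluetooth byte in, four L298N input levels out) can be factored into a plain function and unit-tested off the board. A sketch only: the struct, the function name, and the exact HIGH/LOW polarities are ours for illustration, since real polarity depends on how the motors are wired.

```cpp
// Map one received command byte to the four L298N inputs (IN1..IN4).
// true stands for HIGH, false for LOW. Polarities are illustrative only.
struct MotorPins {
    bool in1, in2, in3, in4;
};

MotorPins decode(char state) {
    switch (state) {
        case 'w': return {true,  false, false, true };  // forward
        case 'd': return {false, true,  true,  false};  // backward
        case 'q': return {false, true,  false, true };  // turn left
        case 'e': return {true,  false, true,  false};  // turn right
        default:  return {false, false, false, false};  // stop
    }
}
```

On the Arduino side, loop() would read one byte from Serial, call decode(), and digitalWrite() each field to its pin; keeping the mapping in one function makes it easy to add commands without duplicating digitalWrite blocks.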
http://www.instructables.com/id/Bluetooth-Control-RC-Tank-Android-Arduino/
Hi,

Thanks for your request. I logged your request in our defect tracking system. We will consider adding the requested feature. Maybe in your case, you can just read DocVariables from the document. Best regards.

Hi there,

Thanks for your request. Unfortunately there is no way to achieve this at the moment using Aspose.Words. We will look into providing this feature sometime in the future. We will keep you informed when it becomes available. Thanks.

The issues you have found earlier (filed as WORDSNET-3714) have been fixed in this Aspose.Words for .NET 19.6 update and this Aspose.Words for Java 19.6 update.

With the latest version of Aspose.Words for .NET 20.5, you can read and write macros from/to the document. Please read the following article about working with VBA macros: Working with VBA Macros.
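[As an illustrative sketch of what reading macros can look like in recent Aspose.Words for .NET releases. The member names below (VbaProject, Modules, Name, SourceCode) are quoted from memory and should be verified against the current Aspose.Words API reference before use:]

```csharp
using Aspose.Words;

class Program
{
    static void Main()
    {
        // Load a macro-enabled document and dump the source of each VBA module.
        // Member names are assumptions; check the Aspose.Words API docs.
        Document doc = new Document("Macros.docm");
        if (doc.VbaProject != null)
        {
            foreach (var module in doc.VbaProject.Modules)
            {
                System.Console.WriteLine(module.Name);
                System.Console.WriteLine(module.SourceCode);
            }
        }
    }
}
```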
https://forum.aspose.com/t/how-to-access-macros-from-the-document-using-net/63631
Tutorial How To Customize React Components with Props The author selected Creative Commons to receive a donation as part of the Write for DOnations program. Introduction In this tutorial, you’ll create custom components by passing props to your component. Props are arguments that you provide to a JSX element. They look like standard HTML props, but they aren’t predefined and can have many different JavaScript data types including numbers, strings, functions, arrays, and even other React components. Your custom components can use props to display data or use the data to make the components interactive. Props are a key part of creating components that are adaptable to different situations, and learning about them will give you the tools to develop custom components that can handle unique situations. After adding props to your component, you will use PropTypes to define the type of data you expect a component to receive. PropTypes are a simple type system to check that data matches the expected types during runtime. They serve as both documentation and an error checker that will help keep your application predictable as it scales. By the end of the tutorial, you’ll use a variety of props to build a small application that will take an array of animal data and display the information, including the name, scientific name, size, diet, and additional information. Note: The first step sets up a blank project on which you will build the tutorial exercise. If you already have a working project and want to go directly to working with props, start with Step 2. Prerequisites following this tutorial, you will use Create React App. You can find instructions for installing an application with Create React App at How To Set Up a React Project with Create React App. This tutorial also assumes a knowledge of React components, which you can learn about in our How To Create Custom Components in React tutorial. 
You will also need to know the basics of JavaScript, which you can find in How To Code in JavaScript, along with a basic knowledge of HTML and CSS. A good resource for HTML and CSS is the Mozilla Developer Network. To start, make a new project. In your command line, run the following script to install a fresh project using create-react-app: - npx create-react-app prop-tutorial After the project is finished, change into the directory: - cd prop-tutorial Start the project and open it in your browser. If you are running this from a remote server, use that server’s address instead. Your browser will load with a simple React application included as part of Create React App. You will be building a completely new set of custom components, so start by clearing out the boilerplate. First, open src/App.js in a text editor: - nano src/App.js Replace the default component with a valid page that returns nothing, then remove the logo file, since it is no longer needed. In the terminal window type the following command: - rm src/logo.svg Next, create a directory to hold your custom components: - mkdir src/components Each component will have its own directory to store the component file along with the styles, images if there are any, and tests. Create a directory for App: - mkdir src/components/App Move all of the App files into that directory. Use the wildcard, *, to select any files that start with App., regardless of file extension. Then use the mv command to put them into the new directory. - mv src/App.* src/components/App Finally, update the relative import path in index.js, which is the root component that bootstraps the whole process. - nano src/index.js The import statement needs to point to the App.js file in the App directory, so make the following highlighted change: Step 2 — Building Dynamic Components with Props In this step, you will create a component that will change based on the input information called props. Props are the arguments you pass to a function or class, but since your components are transformed into HTML-like objects with JSX, you will pass the props like they are HTML attributes.
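Before moving on, it can help to picture props without any JSX at all. A component is essentially a function that receives a single props object, and the props can be any JavaScript values. The sketch below runs in plain Node; the animalCard function and its return string are made up for illustration and are not part of the tutorial's code.

```javascript
// A component is just a function that receives one `props` object.
// Props can hold any JavaScript value: strings, arrays, objects, functions.
function animalCard(props) {
  return `${props.name} (${props.scientificName}) eats: ${props.diet.join(', ')}`;
}

// "Passing props" is simply calling the function with an object argument.
const line = animalCard({
  name: 'Lion',
  scientificName: 'Panthero leo',
  diet: ['meat'],
});

console.log(line); // "Lion (Panthero leo) eats: meat"
```

JSX hides this call behind attribute-like syntax, but the data flow is the same: one object in, rendered output back.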
Unlike HTML elements, you can pass many different data types, from strings, to arrays, to objects, and even functions. Here you will create a component that will display information about animals. This component will take the name and scientific name of the animal as strings, the size as an integer, the diet as an array of strings, and additional information as an object. You’ll pass the information to the new component as props and consume that information in your component. By the end of this step, you’ll have a custom component that will consume different props. You’ll also reuse the component to display an array of data using a common component. Adding Data First, you need some sample data. Create a file in the src/components/App directory called data.js. - touch src/components/App/data.js Open the new file in your text editor: - nano src/components/App/data.js Next, add an array of objects you will use as sample data: export default [ { name: 'Lion', scientificName: 'Panthero leo', size: 140, diet: ['meat'], }, { name: 'Gorilla', scientificName: 'Gorilla beringei', size: 205, diet: ['plants', 'insects'], additional: { notes: 'This is the eastern gorilla. There is also a western gorilla that is a different species.' } }, { name: 'Zebra', scientificName: 'Equus quagga', size: 322, diet: ['plants'], additional: { notes: 'There are three different species of zebra.', link: '' } } ] The array of objects contains a variety of data and will give you an opportunity to try a variety of props. Each object is a separate animal with the name of the animal, the scientific name, size, diet, and an optional field called additional, which will contain links or notes. In this code, you also exported the array as the default. Save and exit the file. Creating Components Next, create a placeholder component called AnimalCard. This component will eventually take props and display the data.
First, make a directory in src/components called AnimalCard then touch a file called src/components/AnimalCard/AnimalCard.js and a CSS file called src/components/AnimalCard/AnimalCard.css. - mkdir src/components/AnimalCard - touch src/components/AnimalCard/AnimalCard.js - touch src/components/AnimalCard/AnimalCard.css Open AnimalCard.js in your text editor: - nano src/components/AnimalCard/AnimalCard.js Add a basic component that imports the CSS and returns an <h2> tag. import React from 'react'; import './AnimalCard.css' export default function AnimalCard() { return <h2>Animal</h2> } Save and exit the file. Now you need to import the data and component into your base App component. Open src/components/App/App.js: - nano src/components/App/App.js Import the data and the component, then loop over the data returning the component for each item in the array: import React from 'react'; import data from './data'; import AnimalCard from '../AnimalCard/AnimalCard'; import './App.css'; function App() { return ( <div className="wrapper"> <h1>Animals</h1> {data.map(animal => ( <AnimalCard key={animal.name}/> ))} </div> ) } export default App; Save and exit the file. Here, you use the .map() array method to iterate over the data. In addition to adding this loop, you also have a wrapping div with a class that you will use for styling and an <h1> tag to label your project. When you save, the browser will reload and you’ll see a label for each card. Next, add some styling to line up the items. Open App.css: - nano src/components/App/App.css Replace the contents with the following to arrange the elements: .wrapper { display: flex; flex-wrap: wrap; justify-content: space-between; padding: 20px; } .wrapper h1 { text-align: center; width: 100%; } This will use flexbox to rearrange the data so it will line up. The padding gives some space in the browser window. justify-content will spread out the extra space between elements, and .wrapper h1 will give the Animal label the full width. 
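The .map() loop used above can be pictured in plain JavaScript: each item in the data array produces exactly one element description, and the key comes from the data itself. This is only an illustrative sketch (plain objects standing in for JSX elements), not real React code.

```javascript
// Stand-in for the animal data imported from data.js.
const data = [{ name: 'Lion' }, { name: 'Gorilla' }, { name: 'Zebra' }];

// One element description per data item; the `key` lets React track each one
// across re-renders.
const cards = data.map(animal => ({ component: 'AnimalCard', key: animal.name }));

console.log(cards.length); // 3
console.log(cards[0].key); // "Lion"
```

Because .map() returns a new array of the same length, three animals in the data file always yield three AnimalCard elements on the page.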
Save and exit the file. When you do, the browser will refresh and you’ll see some data spaced out. Adding Props Now that you have your components set up, you can add your first prop. When you looped over your data, you had access to each object in the data array and the items it contained. You will add each piece of the data to a separate prop that you will then use in your AnimalCard component. Open App.js: - nano src/components/App/App.js Add a prop of name to AnimalCard. import React from 'react'; ... function App() { return ( <div className="wrapper"> <h1>Animals</h1> {data.map(animal => ( <AnimalCard key={animal.name} name={animal.name} /> ))} </div> ) } export default App; Save and exit the file. The name prop looks like a standard HTML attribute, but instead of a string, you’ll pass the name property from the animal object in curly braces. Now that you’ve passed one prop to the new component, you need to use it. Open AnimalCard.js: - nano src/components/AnimalCard/AnimalCard.js All props that you pass into the component are collected into an object that will be the first argument of your function. Destructure the object to pull out individual props: import React from 'react'; import './AnimalCard.css' export default function AnimalCard(props) { const { name } = props; return ( <h2>{name}</h2> ); } Note that you do not need to destructure a prop to use it, but that this is a useful method for dealing with the sample data in this tutorial. After you destructure the object, you can use the individual pieces of data. In this case, you’ll use the name in an <h2> tag, surrounding the value with curly braces so that React will know to evaluate it as JavaScript. You can also use a property on the prop object using dot notation. As an example, you could create an <h2> element like this: <h2>{props.name}</h2>. The advantage of destructuring is that you can collect unused props and use the object rest operator. Save and exit the file.
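The object rest operator mentioned above is worth a quick standalone sketch: destructuring pulls named props out, and `...rest` collects whatever remains. The describe function here is hypothetical, purely to show the mechanics.

```javascript
// `name` is pulled out by destructuring; `...rest` gathers every other prop.
function describe({ name, ...rest }) {
  return `${name} has ${Object.keys(rest).length} other prop(s)`;
}

console.log(describe({ name: 'Lion', size: 140, diet: ['meat'] }));
// "Lion has 2 other prop(s)"
```

This pattern is handy when a component cares about a few props itself and forwards the rest to a child element.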
When you do, the browser will reload and you’ll see the specific name for each animal instead of a placeholder. The name property is a string, but props can be any data type that you could pass to a JavaScript function. To see this at work, add the rest of the data. Open the App.js file: - nano src/components/App/App.js Add a prop for each of the following: scientificName, size, diet, and additional. These include strings, integers, arrays, and objects. import React from 'react'; ... function App() { return ( <div className="wrapper"> <h1>Animals</h1> {data.map(animal => ( <AnimalCard additional={animal.additional} diet={animal.diet} key={animal.name} name={animal.name} scientificName={animal.scientificName} size={animal.size} /> ))} </div> ) } export default App; Since you are creating an object, you can add them in any order you want. Alphabetizing makes it easier to skim a list of props especially in a larger list. You also can add them on the same line, but separating to one per line keeps things readable. Save and close the file. Open AnimalCard.js. - nano src/components/AnimalCard/AnimalCard.js This time, destructure the props in the function parameter list and use the data in the component: import React from 'react'; import './AnimalCard.css' export default function AnimalCard({ additional, diet, name, scientificName, size }) { return ( <div> <h2>{name}</h2> <h3>{scientificName}</h3> <h4>{size}kg</h4> <div>{diet.join(', ')}.</div> </div> ); } After pulling out the data, you can add the scientificName and size into heading tags, but you’ll need to convert the array into a string so that React can display it on the page. You can do that with join(', '), which will create a comma separated list. Save and close the file. When you do, the browser will refresh and you’ll see the structured data. You could create a similar list with the additional object, but instead add a function to alert the user with the data.
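The join(', ') conversion can be tried in isolation. Here it runs on the gorilla's diet array from the sample data:

```javascript
const diet = ['plants', 'insects'];

// join(', ') turns the array into one comma-separated display string.
console.log(diet.join(', ') + '.'); // "plants, insects."
```

A single-element array like ['meat'] simply produces "meat." with no separator, so the same code handles every animal in the data file.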
This will give you the chance to pass functions as props and then use data inside a component when you call a function. Open App.js: - nano src/components/App/App.js Create a function called showAdditional that will convert the object to a string and display it as an alert. import React from 'react'; ... function showAdditional(additional) { const alertInformation = Object.entries(additional) .map(information => `${information[0]}: ${information[1]}`) .join('\n'); alert(alertInformation) }; function App() { return ( <div className="wrapper"> <h1>Animals</h1> {data.map(animal => ( <AnimalCard additional={animal.additional} diet={animal.diet} key={animal.name} name={animal.name} scientificName={animal.scientificName} showAdditional={showAdditional} size={animal.size} /> ))} </div> ) } export default App; The function showAdditional converts the object to an array of pairs where the first item is the key and the second is the value. It then maps over the data converting each key-value pair to a string. Then it joins them with a line break (\n) before passing the complete string to the alert function. Since JavaScript can accept functions as arguments, React can also accept functions as props. You can therefore pass showAdditional to AnimalCard as a prop called showAdditional. Save and close the file. Open AnimalCard: - nano src/components/AnimalCard/AnimalCard.js Pull the showAdditional function from the props object, then create a <button> with an onClick event that calls the function with the additional object: import React from 'react'; import './AnimalCard.css' export default function AnimalCard({ additional, diet, name, scientificName, showAdditional, size }) { return ( <div> <h2>{name}</h2> <h3>{scientificName}</h3> <h4>{size}kg</h4> <div>{diet.join(', ')}.</div> <button onClick={() => showAdditional(additional)}>More Info</button> </div> ); } Save the file. When you do, the browser will refresh and you’ll see a button after each card.
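The Object.entries/map/join transformation inside showAdditional can be exercised on its own. This sketch returns the string instead of calling alert() so it can run outside the browser; otherwise the logic matches the function above.

```javascript
// Convert an object into "key: value" lines joined by newlines,
// exactly as showAdditional does before calling alert().
function formatAdditional(additional) {
  return Object.entries(additional)
    .map(([key, value]) => `${key}: ${value}`)
    .join('\n');
}

console.log(formatAdditional({ notes: 'Three species.', link: 'example.org' }));
// notes: Three species.
// link: example.org
```

Note the destructured callback parameter ([key, value]) is an equivalent, slightly tidier form of the tutorial's information[0]/information[1] indexing.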
When you click the button, you’ll get an alert with the additional data. If you try clicking More Info for the Lion, you will get an error. That’s because there is no additional data for the lion. You’ll see how to fix that in Step 3. Finally, add some styling to the animal card. Add a className of animal-wrapper to the div in AnimalCard: import React from 'react'; import './AnimalCard.css' export default function AnimalCard({ ... return ( <div className="animal-wrapper"> ... </div> ) } Save and close the file. Open AnimalCard.css: - nano src/components/AnimalCard/AnimalCard.css Add CSS to give the cards and the button a small border and padding: .animal-wrapper { border: solid black 1px; margin: 10px; padding: 10px; width: 200px; } .animal-wrapper button { font-size: 1em; border: solid black 1px; padding: 10px; background: none; cursor: pointer; margin: 10px 0; } This CSS will add a slight border to the card and replace the default button styling with a border and padding. cursor: pointer will change the cursor when you hover over the button. Save and close the file. When you do, the browser will refresh and you’ll see the data in individual cards. At this point, you’ve created two custom components. You’ve passed data to the second component from the first component using props. The props included a variety of data, such as strings, integers, arrays, objects, and functions. In your second component, you used the props to create a dynamic component using JSX. In the next step, you’ll use a type system called prop-types to specify the structure your component expects to see, which will create predictability in your app and prevent bugs. Step 3 — Creating Predictable Props with PropTypes and defaultProps In this step, you’ll add a light type system to your components with PropTypes. PropTypes act like other type systems by explicitly defining the type of data you expect to receive for a certain prop.
They also give you the chance to define default data in cases where the prop is not always required. Unlike most type systems, PropTypes is a runtime check, so if the props do not match the type, the code will still compile, but will also display a console error. By the end of this step, you’ll add predictability to your custom component by defining the type for each prop. This will ensure that the next person to work on the component will have a clear idea of the structure of the data the component will need. The prop-types package is included as part of the Create React App installation, so to use it, all you have to do is import it into your component. Open up AnimalCard.js: - nano src/components/AnimalCard/AnimalCard.js Then import PropTypes from prop-types: import React from 'react'; import PropTypes from 'prop-types'; import './AnimalCard.css' export default function AnimalCard({ ... } Add PropTypes directly to the component function. In JavaScript, functions are objects, which means you can add properties using dot syntax. Add the following PropTypes to AnimalCard.js: import React from 'react'; import PropTypes from 'prop-types'; import './AnimalCard.css' export default function AnimalCard({ ... } AnimalCard.propTypes = { additional: PropTypes.shape({ link: PropTypes.string, notes: PropTypes.string }), diet: PropTypes.arrayOf(PropTypes.string).isRequired, name: PropTypes.string.isRequired, scientificName: PropTypes.string.isRequired, showAdditional: PropTypes.func.isRequired, size: PropTypes.number.isRequired, } Save and close the file. As you can see, there are many different PropTypes. This is only a small sample; see the official React documentation to see the others you can use. Let’s start with the name prop. Here, you are specifying that name must be a string. The property scientificName is the same. size is a number, which can include both floats such as 1.5 and integers such as 6. showAdditional is a function (func).
diet, on the other hand, is a little different. In this case, you are specifying that diet will be an array, but you also need to specify what this array will contain. In this case, the array will only contain strings. If you want to mix types, you can use another prop called oneOfType, which takes an array of valid PropTypes. You can use oneOfType anywhere, so if you wanted size to be either a number or a string you could change it to this: size: PropTypes.oneOfType([PropTypes.number, PropTypes.string]) The prop additional is also a little more complex. In this case, you are specifying an object, but to be a little more clear, you are stating what you want the object to contain. To do that, you use PropTypes.shape, which takes an object with additional fields that will need their own PropTypes. In this case, link and notes are both PropTypes.string. Currently, all of the data is well-formed and matches the props. To see what happens if the PropTypes don’t match, open up your data: - nano src/components/App/data.js Change the size to a string on the first item: export default [ { name: 'Lion', scientificName: 'Panthero leo', size: '140', diet: ['meat'], }, ... ] Save the file. When you do, the browser will refresh and you’ll see an error in the console. Error index.js:1 Warning: Failed prop type: Invalid prop `size` of type `string` supplied to `AnimalCard`, expected `number`. in AnimalCard (at App.js:18) in App (at src/index.js:9) in StrictMode (at src/index.js:8) Unlike other type systems such as TypeScript, PropTypes will not give you a warning at build time, and as long as there are no code errors, it will still compile. This means that you could accidentally publish code with prop errors. Change the data back to the correct type: export default [ { name: 'Lion', scientificName: 'Panthero leo', size: 140, diet: ['meat'], }, ... ] Save and close the file.
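To see why such a warning can only appear at runtime, here is a minimal sketch of a type check in the spirit of PropTypes. It is not the real prop-types library, just an illustration of checking a value's actual type against an expected one after the code is already running:

```javascript
// Toy runtime prop check (not the prop-types library): compare the actual
// JavaScript type of a value against the expected type name.
function checkPropType(propName, value, expected) {
  const actual = Array.isArray(value) ? 'array' : typeof value;
  if (actual === expected) return null; // no warning
  return `Warning: Failed prop type: Invalid prop \`${propName}\` of type \`${actual}\`, expected \`${expected}\`.`;
}

console.log(checkPropType('size', '140', 'number')); // warning string
console.log(checkPropType('size', 140, 'number'));   // null
```

Since the check inspects live values, it can never fail a build; it can only log after the component renders, which is exactly the behavior described above.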
Open up AnimalCard.js: - nano src/components/AnimalCard/AnimalCard.js Every prop except for additional has the isRequired property. That means that they are required. If you don’t include a required prop, the code will still compile, but you’ll see a runtime error in the console. If a prop isn’t required, you can add a default value. It’s good practice to always add a default value to prevent runtime errors if a prop is not required. For example, in the AnimalCard component, you are calling a function with the additional data. If it’s not there, the function will try to modify an object that doesn’t exist and the application will crash. To prevent this problem, add a defaultProp for additional: import React from 'react'; import PropTypes from 'prop-types'; import './AnimalCard.css' export default function AnimalCard({ ... } AnimalCard.propTypes = { additional: PropTypes.shape({ link: PropTypes.string, notes: PropTypes.string }), ... } AnimalCard.defaultProps = { additional: { notes: 'No Additional Information' } } You add the defaultProps to the function using dot syntax just as you did with propTypes, then you add a default value that the component should use if the prop is undefined. In this case, you are matching the shape of additional, including a message that there is no additional information. Save and close the file. When you do, the browser will refresh. After it refreshes, click on the More Info button for Lion. It has no additional field in the data so the prop is undefined. But AnimalCard will substitute in the default prop. Now your props are well-documented and are either required or have a default to ensure predictable code. This will help future developers (including yourself) understand what props a component needs. It will make it easier to swap and reuse your components by giving you full information about how the component will use the data it is receiving.
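The fallback behavior of a default prop can be sketched like this. The resolveAdditional helper is hypothetical, standing in for what React does when it sees an undefined prop with a matching entry in defaultProps:

```javascript
// Default value, matching the shape declared in propTypes.
const defaultAdditional = { notes: 'No Additional Information' };

// React substitutes the default only when the prop is undefined.
function resolveAdditional(additional) {
  return additional === undefined ? defaultAdditional : additional;
}

console.log(resolveAdditional(undefined).notes);                // "No Additional Information"
console.log(resolveAdditional({ notes: 'Custom note' }).notes); // "Custom note"
```

This is why clicking More Info for the Lion no longer crashes: the missing additional field resolves to a well-formed object before showAdditional ever runs.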
Conclusion In this tutorial, you have created several components that use props to display information from a parent. Props give you the flexibility to begin to break larger components into smaller, more focused pieces. Now that you no longer have your data tightly coupled with your display information, you have the ability to make choices about how to segment your application. Props are a crucial tool in building complex applications, giving you the opportunity to create components that can adapt to the data they receive. With PropTypes, you are creating predictable and readable components that will give a team the ability to reuse each other’s work to create a flexible and stable code base. If you would like to look at more React tutorials, take a look at our React Topic page, or return to the How To Code in React.js series page.
https://www.digitalocean.com/community/tutorials/how-to-customize-react-components-with-props
Liberate Your Search in Rails with Tags Search is an essential part of any website, even more so for a content-centric website. A good search system is a fast one that provides accurate results. Search and filter are mostly the same depending on how you look at it. For this article, I’ll treat search and filter as the same kind of function. There are many existing methods to implement search using the tools we already have, like PostgreSQL full-text search, which you can use in Rails. Today, however, we’ll see how to implement tag-based filtering in a Rails app using PostgreSQL. Let’s get started. About the Implementation Let’s see how the tag based implementation works. Assume we have a table called companies and it has the following relation association chain: companies -> BELONGS_TO -> cities -> BELONGS_TO -> states -> BELONGS_TO -> countries The companies table has columns such as yearly_revenue, employee_strength, started_year, etc. Say we have a page for companies and on that page are filters. We now want to filter all the companies that are in a certain country. We could achieve this by joining the tables and filtering based on the country ID provided, easy enough. But what about scenarios where the filters are combined and get more complicated? For example, we want the following filter: Companies that are in New York and started before 2001 and have revenue of more than $200 million and employ more than 1000 people. It’s still achievable, but the query gets extended and, in all likelihood, a bit uglier. The real problem is facilitating the filter options for the users. One way is putting relevant elements (text boxes and selects) for each filter we allow the user to perform, which becomes a problem when we want to include more and more filters. If we want to do 50 different kinds of filters on the information, it won’t be a pretty sight using one element per filter.
Also, on the query part, what if there are relations from companies going in 10 different directions, each nesting its own set of associations? I am sure you can see how fast this gets ugly. This is where a tagging solution is a much better answer. In a tag-based implementation, we basically add a tag column with an array datatype to the table in which we want to perform the filtering. In our case, this is the companies table. The column consists of an array of Universally Unique Identifiers (UUIDs) which correspond to the IDs of tags to which the record relates. We’ll have a separate table called tags consisting of the tag id, name, and type. Now, the search and filter is as easy as searching on the tags table, finding the UUIDs, and then filtering the companies based on those tag IDs. Let’s implement this with help of an example. Example Rails App Our Rails app will have five tables - companies - cities - states - countries Begin by creating the Rails app along with the necessary models: rails new tag-example -d postgresql For the tag implementation to work best, we’ll stick to using a UUID as the primary key in all the tables, so let’s create a migration to enable PostgreSQL’s uuid-ossp extension: rails g migration enable_uuid_ossp After generating the migration file, add the following command to the file: enable_extension "uuid-ossp" Run the migration: rake db:create && rake db:migrate Create our models, starting with companies: rails g scaffold company name founding_year:integer city_id:uuid As I mentioned above, we need to maintain the id field as a UUID for the tables. Head over to the create_companies migration file and modify the create_table line to specify that the id column should be a uuid. PostgreSQL will take care of auto-generating the UUIDs. Also, add a line to add a tags column to the companies table, since including it in a scaffold will make it display to the user.
create_table :companies, id: :uuid do |t| t.string :tags, array: true, default: [] Next up, create the cities, states, and countries models, each making the ID a UUID, as mentioned above, by modifying the migrations: rails g model city name state_id:uuid rails g model state name country_id:uuid rails g model country name rails g model tag name tag_type After you run the migrations, quickly establish the relations in the model files: ## company.rb belongs_to :city ## city.rb has_many :companies belongs_to :state ## state.rb has_many :cities belongs_to :country ## country.rb has_many :states The example will be clearer if we generate some seed data. For this exercise, I’m using the faker gem to generate the company, city, and state information. Here is the seed (db/seeds.rb) file I’ve used, you can use this for reference: (1..10).each do |i| country = Country.create(name: Faker::Address.country) (1..10).each do |j| state = State.create(name: Faker::Address.state, country: country) (1..10).each do |k| city = City.create(name: Faker::Address.city, state: state) end end end City.all.each do |city| (1..10).each do |count| company_name = Faker::Company.name Company.create(name: company_name, founding_year: rand(1950..2015), city: city) p "Saved - #{company_name} - #{count}" end end Now, let’s add our before_save callback to the companies model which will generate the tags for the company every time it’s saved. This takes care of creating the entry in the tags table, too.
In models/company.rb add the following code: class Company < ActiveRecord::Base belongs_to :city before_save :update_tags def save_location_tags [city, city.state, city.state.country].map do |loc| (Tag.it({id: loc.id, name: loc.name, tag_type: 'LOCATION'}).id rescue nil) end end def update_tags self.tags = [ save_location_tags, (Tag.it({name: founding_year, tag_type: 'COMPANY_FOUNDING_YEAR'}).id rescue nil) ].flatten.compact end end The above code actually re-generates and updates the company tags every time it’s saved. In app/models/tag.rb, add the following: class Tag < ActiveRecord::Base def self.it content Tag.where(content).first_or_create! rescue nil end end You can add as many tags as you want to the update_tags method. We’re all set from the data update standpoint. Let’s now implement the tag search and filter. Tag Filter We’ll have a page to display the list of companies and have an autocomplete on the input for the tag names. Also, the code will invoke the filter every time the user makes a selection, displaying the currently selected filters as we go. In app/controllers/companies_controller.rb add the lines below to the index action: def index if params[:tags] @companies = Company.tagged(params[:tags]) else @companies = Company.all.limit(10) end end In app/models/company.rb, add the tagged scope as follows: scope :tagged, -> (tags) {where('companies.tags @> ARRAY[?]::varchar[]', [tags].flatten.compact)} This query accepts an array of tags and filters all the records where the given array is present in its tags column. We’re now going to use the pg_trgm extension to create the autocomplete for tags.
Just like the other extension, we’ll need a migration to enable it: enable_extension "pg_trgm" Let’s create a tags controller and add an autocomplete endpoint to it, which we’ll make use of from the front-end to get the matching tag IDs for user queries: rails g controller tags autocomplete Add the following code to the autocomplete method in tags controller: def autocomplete results = Tag.select("*, name <-> #{ActiveRecord::Base.sanitize(params[:q])} as distance").order('distance').limit(5) render json: results end Now we have a complete working set of tag-based filter endpoints and tag autocomplete endpoints. You can check them out by starting the Rails server (rails s) and trying the below endpoints: - Tag autocomplete example: - Filtered companies based on tag example:[]=ANYUUIDFROMTAGSTABLE I’ll leave you to implement the autocomplete in the front-end, since there are many ways and many good tutorials out there on how to achieve this. Combine it with our above filter set and you’ll have a powerful filtering system for your data. Conclusion With that we have come to the conclusion of our tutorial. We now have a powerful and efficient filtering and search system readily available to be integrated to any number of tables. It is easy to use it in a new project or to integrate into an existing one. All the code shown in the example is available on GitHub. Thanks for reading through and I hope you’ve learned something today.
https://www.sitepoint.com/liberate-your-search-in-rails-with-tags/
> cdx.zip > IOCTL.HPP

/*C4*/
//****************************************************************
//  Author:  Jethro Wright, III       TS : 1/18/1994 9:36
//  Date:    01/01/1994
//
//  ioctl.hpp :  definitions/declarations for an
//               ioctl-oriented command interface for audio
//               cd access via mscdex
//
//  History:
//      01/01/1994  jw3     also sprach zarathustra....
//****************************************************************/

#if ! defined( __IOCTL_HPP__ )
#define __IOCTL_HPP__

#include "types.h"
#include "device.h"

// the principle error codes we're concerned w/ after each
// ioctl operation.  the entire status word from the
// last ioctl operation is returned from each ioctl cmd....

#define IOS_ERROR                   0x8000  // == some sort of failure
#define IOS_BUSY                    0x200   // == play audio is engaged

// device status info bits from the GetDeviceStatus() fn....

#define STATUS_DOOR_OPEN            0x01
#define STATUS_DOOR_UNLOCKED        0x02
#define STATUS_COOKED_RAW_READS     0x04
#define STATUS_READ_WRITE           0x08
#define STATUS_DATA_AUDIO_CAPBL     0x10
#define STATUS_RESERVED             0x20
#define STATUS_PREFETCH             0x40
#define STATUS_AUDIO_MANIP          0x80
#define STATUS_DUAL_ADDRESSG        0x100
#define STATUS_NO_DISK              0x800

//
// to use the IOCTL class, one simply creates an instance of the
// class, specifying the mscdex drive to be accessed during
// subsequent calls.  since most pc systems are likely to have only
// a sgl cdrom drive, the defalts are setup to go for the 1st drive
// in the target system.  the IOCTL instance shud be retained
// for the entire lifetime of the object that uses it (or at least
// until it no longer needs to issue cdrom cmds.
//
// maybe someone else will adapt this to make it more generic, but
// so far the only other possible application for an ioctl-based
// interface
//

class IOCTL
{
    //
    // the public mbr fns of this class correspond to all of conventional
    // (read, anticipated) cmds one would want to perform on an audio cd
    // via mscdex.  the c code which inspired this system, while
    // adequately coded, was less than organized in its approach to
    // the subject.  this was probably true bec the mscdex spec doesn't
    // lend itself to understanding the means by which one gets audio out
    // of a cdrom.  hence the design of this system, which reflects the
    // notion of encapsulating functionality where required.  if add'l
    // capabilities are needed, it shud be fairly obvious how one would
    // supplement the classes in this kit.  as stated above and in other
    // places in this kit, the goal of this proj was to
    //
    // anyway, when one of the these fn is invoked, it immed dispatches
    // a service request (a cmd) to the device driver and retns the
    // status word from the request header, for interrogation by the
    // caller....
    //

    public:
        IOCTL( int, int, VFP, VFP ) ;
        ~IOCTL() ;

        WORD Play( DWORD, DWORD, BYTE ) ;
        WORD Stop( Boolean ) ;
        WORD LockDoor( Boolean ) ;
        WORD EjectDisk( void ) ;
        WORD CloseTray( void ) ;
        WORD ResetDisk( void ) ;
        WORD GetUPC( struct UPCCommand far * ) ;
        WORD GetTrackInfo( struct TrackInfoCmd far * ) ;
        WORD GetQInfo( struct QChannelInfoCmd far * ) ;
        WORD GetDiskInfo( struct DiskInfoCmd far * ) ;
        WORD GetDeviceStatus( DWORD far * ) ;
        WORD GetPlayStatus( struct PlayStatusCmd far * ) ;

    private:
        IOCTLSvcReq theRequest ;

        // these are the unit and sub-unit nbrs passed via the cstor.
        // normally, we req svc for the sub unit of the desired device, but
        // since it's only an extra couple of byts, let's hold onto the
        // unit nbr as well, in case we need it in the future....

        int unitNbr, subUnitNbr ;

        VFP devStrategy ;           // == addr of the device strategy fn
                                    //    for this drive
        VFP devInterrupt ;          // == addr of its device interrupt fn

        void DoCmd( void ) ;        // dispatch the cmd to the device
                                    //    driver

        inline void Setup( WORD sCmd, VFP sData, int sCnt )
        {
            theRequest.rqh.command = sCmd ;
            theRequest.rqh.unit = subUnitNbr ;
            theRequest.rqh.len = sizeof( IOCTLSvcReq ) ;
            theRequest.xferBufr = sData ;
            theRequest.byteCnt = sCnt ;
            theRequest.media = theRequest.sector = 0 ;
            theRequest.volumeId = 0 ;
        } ;
};

#endif      // end:  if ! defined( __IOCTL_HPP__ )
http://read.pudn.com/downloads/sourcecode/scsi/1449/IOCTL.HPP__.htm
Make sure that somewhere in the initialization code of your application or library Nepomuk is initialized via:

Nepomuk::ResourceManager::instance()->init();

One often needs the URI of a specific class or a specific property in one's code, and not all ontologies are provided by the very convenient Soprano::Vocabulary namespace. The solution is rather simple: create your own vocabulary namespaces by using Soprano's own onto2vocabularyclass command line tool. It can generate convenient vocabulary namespaces for you. The Soprano documentation shows how to use it manually, or, even simpler, with a simple CMake macro.

When using Nepomuk one creates a lot of RDF statements in the Nepomuk RDF storage. It is often of interest to check which data has been created, whether statements have been created correctly, or simply to look at existing data. Soprano provides a nice command line client to do all this called sopranocmd. It provides all the features one needs to debug data: it can add and remove statements, list and query them, import and export whole RDF files, and even monitor for statementAdded and statementRemoved events.

To access the Nepomuk storage one would typically use the D-Bus interface:

<parameters>

If one wanted to list all the resources that have been tagged with the tag whose resource URI is nepomuk:/foobar one would use the following command:

"" "" "<nepomuk:/foobar>"

or one would use a SPARQL query (sopranocmd supports the standard URI prefixes out of the box):

"select ?r where { ?r nao:hasTag ?tag . \
 ?tag nao:prefLabel 'foobar'^^xsd:string . }"

# sopranocmd --help is your friend for all details...
https://techbase.kde.org/index.php?title=Development/Tutorials/Metadata/Nepomuk/TipsAndTricks&oldid=42731
The Good, The Bad, and the DOM
2 replies on 1 page. Most recent reply: Jun 21, 2003 9:46 PM by Mike Champion

Bill Venners (Posts: 2248, Nickname: bv, Registered: Jan, 2002)
Posted: Jun 17, 2003 9:27 PM

Elliotte Rusty Harold says: "There's a phrase, 'A camel is a horse designed by committee.' That's a slur on a camel. A camel is actually very well adapted to its environment. DOM, on the other hand, is the sort of thing that that phrase was meant to describe." Read this Artima.com interview with Elliotte Rusty Harold, in which he discusses the problems with the DOM API, and the design lessons he learned from DOM: What do you think of Elliotte Rusty Harold's comments?

Erik Price (Posts: 39, Nickname: erikprice, Registered: Mar, 2003)
Posted: Jun 18, 2003 6:59 AM

I agree wholeheartedly. So far I've been lucky, in that the only XML structures I've had to work with have been relatively simple. In one project I've been able to use the simple-as-pie Digester (from ), and in the other it wasn't so complex that I couldn't just use SAX. Although I've not heard of XOM before, I have gone through the JWSDP and I think that I would far prefer to use JDOM than DOM in an actual project. But I think it's nice that we do have a horse designed by a committee, for some strange situations that I just can't think of where a language-neutral API is helpful*. Perhaps it's nice just to have DOM to put on a shelf and say "well, we can always fall back on that, but I hope we don't have to". Maybe without DOM we'd never have the more developer-friendly APIs like JDOM.
* Taking into account DOM's use in JavaScript, where I do not know of an alternative.

Mike Champion (Posts: 2, Nickname: mchampion, Registered: Apr, 2003)
Re: The Good, The Bad, and the DOM
Posted: Jun 21, 2003 9:46 PM

[disclaimer: I was the principal editor of the Level 1 DOM and was on the working group from 1997 to 2002, so take any whiny defensiveness below with as many grains of salt as you wish!]

First, I have to agree with the "camel designed by a committee" gibe. DOM is an ugly beast in a lot of ways, and most of them stem from the fact that when a consensus-driven group has to make a decision between Option A and Option B, "A and B both" is usually the result. I would differ from Harold in one way: DOM is reasonably well suited and actually quite successful in the environments it was designed for -- HTML/XML Web browsers and XML authoring systems. I'd argue that, like a camel, its perverseness is sufficiently constant across environments that it's a good thing to ride on in a trip through uncharted terrain. On the other hand, it *does* make a terrible racehorse, and it is too prone to bite the unwary to be suitable for novices. That hasn't stopped people from using camels.

Also, it's important to understand that DOM was not really intended as a high-level API for ordinary Dynamic HTML authors or people just trying to tweak some XML. It's better thought of as an "assembly language for the XML Infoset". The "complexity" of the API comes largely from the fact that it (at least originally) tried to confine itself to the most basic operations on an XML tree and include only the most obviously universally useful "convenience methods" (getElementsByTagName() is the example I remember from our 1997 discussions). The expectation was that libraries of other "convenience methods" would emerge to make life tolerable for ordinary users. I'm not sure why JDOM came to life as a whole different API rather than a "convenience library" on top of the DOM.
I totally agree that it is silly to ask ordinary people to create a "Hello XML" DOM tree by laboriously creating and linking together the DOM nodes (Harold's "Java and XML" book has a great example of how much easier this is in JDOM than DOM, IIRC). Is it *that* much less efficient to implement such things as a sequence of DOM calls (collected into libraries) rather than define a whole new API? I may be missing something profound here... but the obvious solution (that I use in my own work) is to package up a set of utilities that alleviate the pain of the DOM's low-level orientation in whatever environment I'm working in. By now, I would have expected these utility libraries to be commoditized / standardized on top of DOM rather than fragmented into contending APIs. What am I missing here?

I pretty strongly disagree with the points about its language neutrality and being defined as an interface rather than a concrete class. I find it useful and even comforting to know that the DOM is *roughly* the same (if you stick close to the actual Recommendation) in Javascript, Java, Python, PHP, C#, and probably several more languages. I don't know much about PHP, for example, but I can figure out how to do things with the DOM without a whole lot of trouble. I'm sure that PHP geeks are just as appalled by DOM as Java geeks are, but someone just trying to get an XML processing script running in a world where someone has decreed that PHP is the platform of choice is not likely to care.

Similar point about interfaces and classes. With DOM you can write code that works with any implementation, and switch (e.g. from one JAX implementation to another) if one is better suited to a particular application. With JDOM and XOM, your code works if you can link in your library of choice, or some application uses that library ... and if not, you get to rewrite the code.
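The node-by-node "Hello XML" construction discussed here, and the "convenience layer over DOM" idea, can both be sketched with Python's standard library, which happens to ship a W3C DOM binding (xml.dom.minidom) alongside a friendlier tree API (xml.etree.ElementTree):

```python
from xml.dom.minidom import getDOMImplementation
import xml.etree.ElementTree as ET

# W3C-style DOM: every node is created and linked together by hand
impl = getDOMImplementation()
doc = impl.createDocument(None, "greeting", None)
doc.documentElement.appendChild(doc.createTextNode("Hello XML"))
dom_xml = doc.documentElement.toxml()

# a convenience tree API expressing the same document in two lines
elem = ET.Element("greeting")
elem.text = "Hello XML"
et_xml = ET.tostring(elem, encoding="unicode")

print(dom_xml)  # <greeting>Hello XML</greeting>
print(et_xml)   # <greeting>Hello XML</greeting>
```

Both produce the same serialization; the DOM version goes through an implementation object and explicit node linking, which is exactly the low-level orientation being debated in this thread.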
One can (and I have) worked with essentially the same DOM application-level code simultaneously to integrate across three environments (e.g., XMetal's DOM implementation, the MS DOM implementation, and Tamino's DOM interface). Maybe that's a corner case for most people, but that kind of application integration is my world! In any event, the original POINT of the DOM was to be an abstraction of the data structures and internal APIs used in the different browsers and different XML authoring tools. I suppose that could have been done with classes, but interfaces seemed like the "textbook correct" approach at the time. Perhaps that seems a bit quaint today when DOM-like data structures and APIs are built deeply into most XML products. I don't know ... I suspect that interfaces will rescue us once again as performance becomes critical and the underlying data structures become less and less like "trees of nodes" that are expensive to construct, and more like flat text buffers or optimized "binary" data for which Nodes are created on demand. Again, I suppose this could be done with classes, but the Interface and Factory design patterns seem like the obvious approaches, and that's what the DOM ran with. Definitely more hassle for the application programmer that does not need that level of abstraction, but it offers immensely more flexibility to the power user. Again, the obvious solution seems to be for the application programmers to use convenience libraries that hide the Factory and Interface stuff behind nice classes rather than rebuild the whole API on a class foundation ... but again I may be missing something profound here.

I agree that namespaces in DOM are a bit of an abomination, but the namespace spec is a bit of an abomination IMHO -- it is (almost certainly by design) oriented completely toward XML syntax and parse-time implementation rather than a post-parse data model orientation that any reasonable read/write API requires.
It is quite hard to model namespaces in a read-write environment; XOM and JDOM do a better job than DOM because they have no pre-namespace legacy to support. (XPath does a *much* better job, but it is not a read/write data model!) Should DOM just toss out non namespace-aware processing ("Level 1")? That would make life easier for geeks, but as even the most casual reader of the xml-dev mailing list knows too well, there is a substantial amount (perhaps a majority) of real XML processing code that ignores or merely pays lip service to namespaces. The DOM working group made a conscious choice: "Do we force all those Dynamic HTML scripts to either break or become namespace aware, or do we make the Level 2 DOM a bit kludgy and keep those scripts legal?" Lots of geeks flame the resulting inelegance, but I'm not sure even in hindsight whether that was a bad decision.

All that said, the DOM is approaching its 5 year anniversary as a W3C Recommendation. I wish that the W3C had some sort of "sunset law" making Recommendations subject to reconsideration / refactoring after 5 years. XML is more than 5 now and long overdue for a vigorous application of Occam's Razor, and the DOM needs the same treatment. Some of the really pointless stuff that Harold points out (e.g. the use of the 'short' type) could be polished out at the same time. In any event, I welcome efforts such as XOM, dom4j, etc. that attempt to shave away the consensus-driven cruft. When the time comes to refactor this stuff (by W3C, the JCP, the unholy Microsoft-IBM alliance, or whomever) they should take XOM *extremely* seriously.
http://www.artima.com/forums/flat.jsp?forum=32&thread=5379&start=0
AGS Pointers for Dummies

If you're reading this, then you've come looking for answers. What are pointers? How do I use them? What makes them better than just using integers? These questions, amongst others, shall be answered. You're not alone in the confusion caused by pointers. Many people who are new to scripting, or who have never used a language that supports pointers, don't understand them right away. That's why I'm here to help.

Contents
- 1 What Are Pointers?
- 2 Defining A Pointer
- 3 Assigning A Pointer A Value
- 4 Testing A Pointer's Value
- 5 Null Pointers
- 6 What Pointers Do
  - 6.1 Pointer System Versus Integral System
  - 6.2 Extending The Built-In (Managed) Types
  - 6.3 Dynamic Arrays Are Pointers Too
- 7 Closing
- 8 Notes

What Are Pointers?

So what exactly are pointers? To understand what a pointer is, we must first understand the more basic idea of variables. If you already understand variables, you can skip over this section.

Variables

A variable, in the context of scripting, is a way to represent a value. In AGS, there are five different data types which can be used to represent a variable: char, short, int, String[1], and float. A char-type variable is one that can hold only a single character (i.e., 'A', 'B', etc.) or a number within the range 0 to 255. A short-type variable can store integers within the range -32768 to 32767. An int-type variable can store integer values within the range -2147483648 to 2147483647. A String-type variable can hold a string of characters (i.e., "this is some text!") of virtually[2] infinite length. A float-type variable can store floating-point decimals within the range -2147483648.0 to 2147483647.0, and has precision[3] up to approximately 6 decimal places, though this will vary based on the actual number. For information on defining and assigning values to variables, read the entry in the manual here.
Pointers

Okay, so now that we understand what a variable is, we can begin to understand what a pointer does. The basic idea of a pointer is that instead of creating a new variable, we are simply going to point to a variable that is already stored in the memory. This can have several uses in scripting, and AGS even has some special ones.

Defining A Pointer

And, how do I use them? In AGS, you can only create pointers to certain data types. These are called the managed types[4].

Managed Types

AGS has certain managed types[4] that you can not create an instance (variable declaration) of, but you can create pointers to[5]. All of the variables of managed types are managed by AGS. These include the types: AudioChannel, AudioClip, Button, Character, DateTime, Dialog, DialogOptionsRenderingInfo, DrawingSurface, DynamicSprite, File, Game, GUI, GUIControl, Hotspot, InventoryItem, InvWindow, Label, ListBox, Maths, Mouse, Object, Overlay, Parser, Region, Room, Slider, TextBox, and ViewFrame.

Working With Managed Types

You can work with these managed types through pointers. You define a pointer by typing the name of the managed data type, then a space, an asterisk (*)[6], and finally the name of the pointer. So, if we want to create a GUI pointer (this is expressed as GUI*) called GUIPointer, we could type the following:

GUI *GUIPointer;

This creates a pointer that can point to any GUI stored in the memory. However, until it is assigned a value, it is an empty, or null pointer. We'll first discuss how to assign pointers a value, then we'll discuss null pointers.

Array of Pointers

It should be noted here that when defining pointers, you can also create an array of pointers. When you create an array you are simply defining a set of variables (or in this case, pointers) which all have the same name. You access each one individually using an index between brackets ([ and ]).
Defining an array of pointers works the same way as defining any other array does, so to define an array of GUI*s called myguis to hold 5 GUI*s, you would type:

GUI *myguis[5];

With arrays you can't assign initial values, and the valid indices are from 0 to the size of the array minus one (in this case, 0 to 4). You treat an array of pointers just like you would ordinary pointers.

Dynamic Array of Pointers

As of AGS 3.0, you can have dynamic arrays of the built-in types, including the managed types. The assignment here works a little differently:

GUI *daguis[] = new GUI[5];

Notice that we don't use an asterisk after the new keyword. Keep that in mind if you plan to use dynamic arrays of pointer types.

Assigning A Pointer A Value

To make a pointer point to a GUI, you assign it the value of the GUI you want it to point to (with the assignment operator, =). So to make GUIPointer point at the GUI named MYGUI, you would type:

GUIPointer = gMygui;

As long as the pointer isn't global (i.e., the pointer is defined inside of a function), then you can also assign it an initial value when you create it, like this:

GUI *GUIPointer = gMygui;

Global pointers can't have an initial value assigned though, so this will only work if you define the pointer inside of a function. When defining more than one pointer of the same type at once, it is necessary to use an asterisk for every pointer. So, if you want MyGUIPointer to point to MYGUI, and OtherGUIPointer to point to OTHERGUI, you can do this:

GUI *MyGUIPointer = gMygui, *OtherGUIPointer = gOthergui;

If you forget an asterisk then it will try to create a new instance (create a new variable) of the type GUI. AGS doesn't allow the user to create new instances of managed types, so this would crash your game. So, it's always important to remember your asterisks.

A More Useful Assignment

This type of assignment is rather pointless however, unless you just want a new alias for your GUIs.
A more useful assignment makes use of the function GUI.GetAtScreenXY. This function returns a GUI* to the GUI at the specified coordinates. So, if you wanted to see what GUI the mouse was over, you could do this:

GUI *GUIUnderMouse = GUI.GetAtScreenXY(mouse.x, mouse.y);

Testing A Pointer's Value

If you want to see what a pointer is actually pointing to, you can use the boolean operators == (checks if two things are equal) and != (checks if two things are not equal). So, to see if GUIUnderMouse is MYGUI or not, you could do this:

if (GUIUnderMouse == gMygui) Display("MYGUI is under the mouse!");
else if (GUIUnderMouse != gMygui) Display("MYGUI is not under the mouse!");

Null Pointers

If a pointer isn't pointing to anything, it is known as a null pointer. It will actually hold the value null. Operations on null pointers will cause the game to crash, so you should always be sure that your pointer is non-null before using it. You check if a pointer is null or not the same way you would normally check a pointer's value:

if (GUIPointer == null) { /* the pointer is null */ }
else { /* the pointer is non-null */ }

What Pointers Do

Okay, so we can create pointers, assign them a value, and test their value, but what do pointers do? We've discussed already that pointers point to variables stored in the memory to prevent having to reproduce the data, but we haven't actually discussed in depth how this can be used to our advantage.

Pointer System Versus Integral System

We've seen how a GUI* (remember that this is a GUI-pointer or pointer-to-GUI) can help us find out what GUI is on the screen at certain coordinates, but we could do this with an integral system, such as:

int gui = GetGUIAt(mouse.x, mouse.y);
if (gui == MYGUI) Display("MYGUI is under the mouse!");

So if we could already do this, why change to a pointer system and cause all the confusion? But let's not get ahead of ourselves. Pointers aren't designed with the sole purpose of causing confusion.
And they are actually quite useful once you understand them.

The String Type

The String type isn't one of AGS's managed types, nor can you create a pointer to it. So why then am I bringing it up? The fact is, the String type is actually internally defined as a pointer, which is how it is able to have its virtually infinite maximum length. Prior to AGS 2.71, AGS used the now deprecated string type. The string type was internally defined as an array of 200 characters (chars). This meant that strings had a maximum length of 200 characters themselves. With the introduction of AGS 2.71 came the new String type which removed that limit. And how did it do it? It used a pointer. Not an AGS-style pointer, but a pointer nonetheless. In programming languages such as C and C++, a pointer-to-char (char*)[7] creates a special type of pointer. Instead of just pointing to one single variable, a char* can point to a virtually infinite number of chars in the form of what is known as a string-literal (such as "this is some text"). Since AGS uses a special type of pointer for managing the String type, it can still hold the value null (this is what Strings are initialized to), and when used as a function parameter, can be made optional in the same manner (see the section on optional parameters for more information).

Script O-Names

Script o-names are another example of a pointer system versus an integral one. Basically the way a script o-name is defined is like this:

// pseudo-AGS-internal-code
GUI *gMygui = gui[MYGUI];

For all we know the gui array itself could be an array of pointers to something stored deeper within the bowels of AGS, but it's not really important as in the end they would still both point to the same GUI, and this is just an example anyway. Using an integral system you would have to access the gui array any time you wanted to perform any operations on the GUI[8].
So, if we wanted to move MYGUI to (30, 120), in an integral system we could do this:

gui[MYGUI].SetPosition(30, 120);

In a pointer system we would do this:

gMygui.SetPosition(30, 120);

So it makes our code a bit shorter then, but it's essentially the same. All-in-all not a particularly convincing example. So let's take a look at another built-in pointer: player.

Player Keyword

The player keyword provides a much simpler method for performing operations directly on the character. In an integral system we could use something like this:

character[GetPlayerCharacter()].Move(20, 100);

With the player keyword we now simply type:

player.Move(20, 100);

This also provides advantages when working with the player's active inventory item.

Player.ActiveInventory

In an integral system, to access the player character's active inventory, you would have to do something like this:

character[GetPlayerCharacter()].activeinv

In a pointer system you do this:

player.ActiveInventory

But what about when we actually want to do something with that? Say, for example, changing its graphic to slot 42:

// integral system
inventory[character[GetPlayerCharacter()].activeinv].graphic = 42;
// pointer system
player.ActiveInventory.Graphic = 42;

Again, it makes the code shorter, but the second snippet is also easier to read, and it is more obvious what you're trying to do.

File*

Another example can be seen if we look at the File type. In an integral system, you would access an external file like this:

int handle = FileOpen("temp.tmp", FILE_WRITE);
if (handle == 0) Display("Error opening file.");
else {
  FileWrite(handle, "test string");
  FileClose(handle);
}

You have to store the file's handle when you open it, and you later have to be sure to close that file using the same handle. In-between this time you have to be sure that the value of the handle doesn't change or get lost.
In a system with pointers, you can do this instead:

File *output = File.Open("temp.tmp", eFileWrite);
if (output == null) Display("Error opening file.");
else {
  output.WriteString("test string");
  output.Close();
}

You create a File* which points to the opened file. You still have to close the file using the File*, but it's simpler since you are using a specifically created File* instead of just a generic int variable.

Pointers as Function Parameters

Pointers can also be used for function parameters. Those who have used versions of AGS prior to 2.7 know that integers used to be passed as parameters for several functions which have now been made into OO (object-oriented) functions, such as MoveCharacter (now known as Character.Move). The old MoveCharacter function took three parameters: CHARID, int x, and int y. CHARID was an integer parameter which held the character's ID (this is the same as the new Character.ID property). But what if we had the MoveCharacter function in a pointer-implemented, non-OO system? The parameter list would probably be something like this: Character *Char, int x, int y. The first parameter, a Character*, would allow us to pass a Character* instead of just an int, which helps make clearer what the code is trying to accomplish. It also ensures that the parameter is valid (to an extent). An integer parameter could have any value passed into it, which the function would then have to check. A Character* helps to ensure the value is valid, though since it is a pointer it could still be null.

Optional Parameters

As of AGS 2.7 you can make function parameters optional by assigning them a default value when you import the function. For example, to make a function with an optional int parameter, you can define the import like this:

import myfunc(int param1, int param2=5);

That would make PARAM2 optional, with the default value of 5. This import doesn't necessarily have to be placed in a script header (which is where most of your imports will be).
If you don't want the function to be globally accessible but you want an optional parameter, you can just put this import in your script before you define the function, and it will allow you to have a non-global function with an optional parameter.

Optional Pointer Parameters

Okay, that's nice, but how does it apply to pointers? I tried assigning my Character* parameter a default value and it didn't work. AGS doesn't currently allow you to assign non-integer default values. This means that to make a parameter optional we will have to give it an integer value. But what integer value can we use with pointers? Though it is not normally recommended (and in most cases won't work), we can substitute the value 0 for null in this case. This does of course mean that you would have to have some means of handling null values for that parameter. Perhaps, for a Character*, it could default to the player character. You could do this by checking your parameter, and if it is null, reassigning it to the player character:

if (Char == null) Char = player;

Also, you may remember my mentioning that the String type uses pointers? You can make a String parameter optional in the same way you make any other pointer parameter optional, by assigning it the value of 0. This will cause the parameter to default to a null String.

Extending The Built-In (Managed) Types

Now that we've seen what pointers are, how they are used, how they relate to AGS, and some basic uses of them, let's take a look at a different kind of usage. In AGS we can create our own custom-defined datatypes using struct. You define a struct like this:

struct MyStruct {
  int IntMember;
  String StringMember;
  import int do_something();
};

That would create a new datatype called MyStruct which would have two data members and one member function. You could then create a variable of this type, and do all sorts of fun things with it. Though its uses don't end there.
You can also make a pointer a member of a struct[9], which provides some interesting possibilities. With a pointer as a member, you can essentially extend built-in datatypes (i.e., the managed types).

Extending the Character Type

We can extend the built-in Character type using a Character* as a member of one of our structs. So, let's look at how we can do this:

struct CharStats {
  int Health;
  int Mana;
  Character* Char;
};

We've given this new datatype CharStats three members: Health, Mana, and Char. So how does this help us to extend the built-in datatypes then? By assigning a value to Char, we can access all the properties of that Character through our new datatype. First we have to assign the pointer a value, so let's look at that:

// global script
CharStats stEgo; // cEgo with Health and Mana properties
export stEgo; // this makes stEgo global to all scripts, requires an import in the header

// game_start function
stEgo.Health = 100; // set Ego's Health
stEgo.Mana = 80; // set Ego's Mana
stEgo.Char = cEgo; // set Ego's Char

And now for the remainder of your game you can use stEgo.Char any place you would normally use cEgo. This way you can put all of your properties and functions for working with Ego into one convenient place! You can extend any of the managed types that you can create pointers to in this manner.

Extending the Character Type for AGS 3.0+

The above example was originally written and designed around the 2.7x branch of AGS. As of AGS 3.0 however, we have the ability to use extender methods. For this particular example, I would recommend adding global extenders such as Character.GetHealth and Character.SetHealth instead of using a separate structure globally. You could still use the structure inside of your script (perhaps in a different script, where the extenders would actually be defined), but it would make it simpler to integrate your extensions into existing scripts by using extenders instead. Check them out if you're using a 3.x version of AGS!
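For readers coming from mainstream languages, the CharStats idea (a struct holding a pointer to a managed object) behaves much like ordinary object references in Python. A rough analogy, with hypothetical Character and CharStats classes standing in for the AGS types:

```python
# rough Python analogy of the AGS CharStats struct; Character and
# CharStats here are illustrative stand-ins, not real AGS types
class Character:
    def __init__(self, name):
        self.name = name

class CharStats:
    def __init__(self, char, health, mana):
        self.char = char      # a reference, like the Character* member
        self.health = health
        self.mana = mana

ego = Character("Ego")
st_ego = CharStats(ego, health=100, mana=80)

# st_ego.char and ego are the same object, so a change made through one
# is visible through the other, just as stEgo.Char aliases cEgo
st_ego.char.name = "Roger"
print(ego.name)  # Roger
```

The key point carried over from the article: the wrapper does not copy the character, it points at it, so the extended properties travel with the one shared object.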
Dynamic Arrays Are Pointers Too

In addition to the managed types, there is another type of pointer you should be aware of: dynamic arrays. You can create a dynamic array of the base types (such as int) or of pointers to a managed type (such as Character*). For creating a dynamic array of pointers, see Dynamic Array of Pointers. Unlike the managed types, you use the new keyword to create a new array dynamically. The name you give it is treated as a pointer. The manual gives us this example:

int characterHealth[];
characterHealth = new int[Game.CharacterCount];

Initially, characterHealth, just as with other pointers, holds the value of null. When you assign its value on the second line, you are telling it, as with other pointers, to point to the array that you've newly created. This is particularly important if you pass a dynamic array as a function parameter. If you change the value of a dynamic array passed as a function parameter, it will change the value of the array itself. Keep in mind that the parameter is pointing to the same array as what you passed into the function. Very useful if it's what you want, but it can be confusing if you're not aware why it's happening.

Closing

So you came to me with questions, and I hope I've answered some of them at least. In any case I hope I answered the ones you had about pointers and their usage in AGS. If you have any questions or comments you can PM me on the AGS forums, or email me at monkey.05.06@gmail.com any time. Thanks for reading my article, and I hope you've enjoyed it as much as I enjoyed writing it.

monkey_05_06

Notes

1. ^ The String type is only defined as of AGS v2.71 and higher. Older versions use the now deprecated string type.
2. ^ The length for Strings is limited by your computer's physical memory. A String will take up 4 bytes of memory, plus 1 byte for each character it contains.
3. ^ Floating-point decimals won't always evaluate as you might expect when doing certain mathematical operations (this is due to their precision levels). See the manual entry on data types for more information.
4. ^ AGS's managed types are those listed here. You cannot create a new managed type within AGS's scripts; to create a new managed type you would need to write a plugin. Some module writers may use the keyword managed to prevent users from creating instances of structs that are meant to be used statically. This does not however make the type managed. Only AGS's built-in managed types and any managed types created via plugins are actually managed, and therefore are the only types that can have pointers to them.
5. ^ Not all of the managed types are meant to have pointers to them. Game, Maths, Parser, and Room do not need pointers (you can't even assign them a value).
6. ^ The asterisk doesn't necessarily have to be attached to the name of the pointer, such as "GUI *MyGUIPointer"; it can also be attached to the data type itself, such as "GUI* MyGUIPointer". However, it will still be compiled as if it is attached to the name of the pointer, not the data type, so if you define multiple pointers at once, you will still need an asterisk for each pointer.
7. ^ In AGS you can't create a char*, as char isn't one of AGS's managed types. This type of pointer is used in scripting languages like C and C++. For storing string-literals AGS uses the String type (or the string type for AGS versions prior to 2.71).
8. ^ I have taken the liberty here of envisioning an integral system set up much as AGS 2.7+ is set up, only since it is an integral system it uses integers instead of pointers. In this example AGS structs still have member functions, and all other non-pointer-related functionality of AGS is the same.
9. ^ Structs can only have pointers as members in AGS 2.71 and later.
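The by-reference behavior of dynamic arrays described in the article has a direct counterpart in Python, where a list passed into a function is not copied; a small sketch of the same semantics:

```python
def raise_health(health):
    # 'health' points at the same list the caller passed in, just as an
    # AGS dynamic array parameter points at the caller's array
    for i in range(len(health)):
        health[i] += 10

character_health = [50, 75, 100]
raise_health(character_health)
print(character_health)  # [60, 85, 110]
```

The caller's list changes even though the function never returns anything, which is exactly the "very useful if it's what you want, confusing if you're not aware why" behavior the article warns about.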
https://www.adventuregamestudio.co.uk/wiki/AGS_Pointers_for_Dummies
Chapter 3. Fixed Income Securities - Timothy Bennett

Financial instruments: bills, notes, bonds, annuities, futures contracts, mortgages, options, ...; assortments that are not real goods, but they carry value through the promises they represent.

Securities: financial instruments that are traded in well-developed markets.

Fixed income securities: securities that promise definite cash flow streams.

The market for future cash

The only uncertainty in holding a fixed income security is that the issuer may default. There are various forms of fixed income securities.

Savings deposits. Certificate of deposit (CD): issued in standard denominations such as $10,000. Large CDs can be traded in the market.

Money market instruments: short-term (1 year or less) loans by corporations and banks.
- Commercial paper: unsecured (without collateral) loans.
- Banker's acceptance: if A sells goods to B, and B promises to pay within a fixed time, some bank may accept the promise by promising to pay the bill on behalf of B. A can then sell the banker's acceptance at a discount before expiration.
- Eurodollar deposits: deposits denominated in dollars but held in a bank outside the US.

US government securities
- Treasury bills: issued in denominations of $10,000 or more with fixed terms to maturity of 13, 26, and 52 weeks.
- Treasury notes: maturities of 1 to 10 years, sold in denominations as small as $1,000. The owner of notes also receives a coupon payment every 6 months until maturity.
- Treasury bonds: maturities of more than 10 years.
- Treasury inflation-protected securities (TIPS): the principal value changes with the Consumer Price Index (CPI), but the coupon rate does not change in time.
- Treasury strips: each coupon payment is sold as a separate security. Treasury strips are also known as zero-coupon bonds.

Example 3.3.
Terms to learn: APR (Annual Percentage Rate); points (the percentage of the loan amount charged for providing the mortgage, not including other possible fees and expenses).

A typical mortgage broker advertisement lists, for each product, the rate, points, term, maximum loan amount, and APR, for 30-year and 15-year loans with maximum amounts of $203,150 or $600,000. The row used below is: rate 7.625%, 1 point, 30-year term, maximum amount $203,150, APR 7.883%.

What does the advertisement say about the mortgage expenses? Using the factor (A/P, 7.883%/12, 30 x 12) we can compute the monthly payment on a loan amount of $203,150 to be $1,474. Now, what is the implied expense? If we use the annual rate of 7.625% for the monthly payment of $1,474, then the principal would have been $1,474 x (P/A, 7.625%/12, 30 x 12) = $208,253. The difference, $208,253 - $203,150 = $5,103, is the total cost. The loan fee itself is 1% x $203,150, about $2,032. Therefore, the other expense is $5,103 - $2,032 = $3,071.

Other bonds
- Municipal bonds: issued by agencies of state and local governments.
- Corporate bonds: issued by corporations. Some are traded on an exchange; many are traded over-the-counter.
- Callable bonds: a feature of bonds which allows the bond issuer to purchase back the bond at a specific price within a period.

Mortgages
- Adjustable-rate mortgage: the interest rate is adjusted periodically.
- Mortgage-backed securities: individual mortgages are bundled into large packages and traded among institutions.

Details of a bond
- Face value (or par value).
- Coupon payments.
- The bid price: the price at which the bond can be sold. The ask price: the price at which the bond can be bought.
- Accrued interest: AI = (number of days since last coupon / number of days in current coupon period) x coupon amount.
- Quality rating: Moody's (Aaa, Ba, etc.); Standard & Poor's (AAA, BB, etc.).

Example of accrued interest: suppose we purchased on May 8 a U.S. Treasury bond. The coupon rate is 9% per year, paid on February 15 and August 15 each year. Counting 83 days since the February 15 coupon out of 182 days in the coupon period, AI = (83/182) x (coupon amount). This value will be added to the quoted price. If the face value is $1,000, the semiannual coupon is $45, and about $20.50 would be added to the quoted price.
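Returning to Example 3.3, the mortgage arithmetic can be verified with a short script. This is a sketch only: the annuity factors (A/P) and (P/A) are written out directly, and the payment is rounded to whole dollars as in the example.

```python
def annuity_pv_factor(r, n):
    """(P/A, r, n): present value of n level payments of 1 at periodic rate r."""
    return (1 - (1 + r) ** -n) / r

loan = 203_150
n = 30 * 12                                   # 30-year monthly mortgage

# Monthly payment implied by the 7.883% APR (rounded to whole dollars)
payment = round(loan / annuity_pv_factor(0.07883 / 12, n))

# Principal that this payment would service at the stated 7.625% rate
implied = payment * annuity_pv_factor(0.07625 / 12, n)

total_cost = implied - loan                   # about $5,103
loan_fee = 0.01 * loan                        # 1 point, about $2,032
other_cost = total_cost - loan_fee            # about $3,071
print(payment, round(total_cost), round(other_cost))
```

The gap between the APR-implied principal and the actual loan amount is exactly the total cost of obtaining the loan, of which the 1-point fee is only part.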
Quality Ratings

Rating classifications:

                     Moody's      Standard & Poor's
  High grade         Aaa, Aa      AAA, AA
  Medium grade       A, Baa       A, BBB
  Speculative grade  Ba, B        BB, B
  Default danger     Caa, Ca, C   CCC, CC, C, D

Yield of a bond: its internal rate of return (IRR). Consider a bond with face value F, coupon payment C per annum paid in m installments per year, maturing in n/m years (n coupon periods), and purchase price P. Then its yield is the value λ satisfying

  P = F/(1 + λ/m)^n + sum_{k=1}^{n} (C/m)/(1 + λ/m)^k
    = F/(1 + λ/m)^n + (C/λ) { 1 - 1/[1 + (λ/m)]^n }.

The yield is not explicitly computable in general. One exceptionally simple case is when C = 0 (a zero-coupon bond). In that case,

  λ = m [ (F/P)^{1/n} - 1 ].

Another interesting case is when C/F = λ (i.e. the coupon rate is exactly the yield). Then P = F, and the bond is said to be at par.

In general, the price-yield curve is convex. The steepness of the curve appears to be related to the length of the period. A table of prices of 9% coupon bonds (rows: times to maturity, starting at 1 year; columns: yields of 5%, 8%, 9%, 10%, and 11%) makes this visible: as time to maturity increases, the price of the bond tends to depend more sensitively on the change of yield.

Duration: for a cash flow {(F_1, ..., F_n): F_k occurs at t_k, k = 1, ..., n}, its duration is the weighted average of the payment dates,

  D = [ sum_{k=1}^{n} PV(F_k) t_k ] / PV.

Macaulay duration of a bond:

  D = [ sum_{k=1}^{n} (F_k/(1 + λ/m)^k) (k/m) ] / [ sum_{k=1}^{n} F_k/(1 + λ/m)^k ].

Explicitly, for a bond with per-period coupon rate c, paid m times per year, and per-period yield y = λ/m:

  D = (1 + y)/(my) - [1 + y + n(c - y)] / { mc[(1 + y)^n - 1] + my }.

Example: consider a 7% bond with 3 years to maturity, and suppose that the yield is 8%. The duration can be computed by tabulating, for each payment date, the payment, the discount factor, the present value, and the weight, and then summing the weighted payment dates.

Example 3.7. Consider a 10%, 30-year bond with 6-month coupons. Suppose it is at par (the yield is 10%).
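As a quick check of the pricing formula and of this bond's at-par premise, here is a minimal sketch, assuming a $100 face value and semiannual coupons (the function name is mine, not from the text):

```python
def bond_price(face, coupon_rate, ytm, years, m=2):
    """P = F/(1+λ/m)^n plus the sum of the discounted per-period coupons C/m."""
    y = ytm / m                      # per-period yield
    n = years * m                    # number of coupon periods
    coupon = face * coupon_rate / m  # payment each period
    return face / (1 + y) ** n + coupon * (1 - (1 + y) ** -n) / y

par_price = bond_price(100, 0.10, 0.10, 30)   # coupon rate equals yield
premium = bond_price(100, 0.10, 0.09, 30)     # yield below coupon rate
print(round(par_price, 2), round(premium, 2))
```

When the coupon rate equals the yield the price comes out at the face value (at par), and a lower yield prices the bond above par.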
We then compute from

  D = (1 + y)/(my) - [1 + y + n(c - y)] / { mc[(1 + y)^n - 1] + my }

that, since c = y for a bond at par,

  D = [(1 + y)/(my)] [1 - 1/(1 + y)^n] = (1.05/0.10) [1 - 1/(1.05)^60] = 9.94.

If we denote

  P(λ) = sum_{k=1}^{n} F_k/(1 + λ/m)^k,

then

  D(λ) = -[P'(λ)/P(λ)] (1 + λ/m).

We call

  D_M(λ) = -P'(λ)/P(λ)

the modified duration. The duration measures the sensitivity of the price relative to the change of the interest rate. The most sensitive bond is a zero-coupon bond with a long maturity period.

Example. Consider a 30-year, 10% coupon bond, which is at par with price $100. The duration is D = 9.94. Hence D_M = 9.94/1.05 = 9.47. The slope of the price-yield curve at that point is dP/dλ = -947. The straight-line approximation suggests that if the yield changes to 11%, then the change in price is

  ΔP = -D_M x 100 x Δλ = -9.47 x 100 x 0.01 = -9.47.

Hence the estimated new price is $90.53. On the other hand, if the bond does not carry any coupons, then we have D = 30 and D_M is about 27. If the yield changes to 11%, the estimated new price will be about $73, which is a big change!

It is important to control the risk of a portfolio with respect to the interest rate risk. Let us consider what happens if we hold two bonds, A and B, in a portfolio. We have

  D^A = [ sum_k PV(F^A_k) t_k ] / P^A   and   D^B = [ sum_k PV(F^B_k) t_k ] / P^B.

Observe that the total PV is

  P = sum_k [ PV(F^A_k) + PV(F^B_k) ] = P^A + P^B.

The duration of the portfolio is

  D = [ sum_k (PV(F^A_k) + PV(F^B_k)) t_k ] / (P^A + P^B)
    = [ P^A/(P^A + P^B) ] D^A + [ P^B/(P^A + P^B) ] D^B.

In other words, it is a convex combination of the two!

In general, if we have m fixed income securities, each with price P_i and duration D_i, i = 1, 2, ..., m, then the portfolio has price P and duration D given by

  P = P_1 + P_2 + ... + P_m,
  D = w_1 D_1 + w_2 D_2 + ... + w_m D_m,   where w_i = P_i/(P_1 + P_2 + ... + P_m).

Immunization: managing the interest rate risk.

Example. The X Corporation has an obligation to pay $1 million in 10 years. It wishes to invest in some bonds in order to meet this obligation.
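Before turning to the bond data for this example, the duration machinery above can be reproduced directly from the cash flows. A minimal sketch (assuming a $100 face value and semiannual periods; the helper name is mine) recovers the 9.94 duration of the 30-year par bond and shows the straight-line price estimate sitting just below the exact repriced value:

```python
def price_and_duration(face, coupon_rate, ytm, years, m=2):
    """Return (price, Macaulay duration in years), summed cash flow by cash flow."""
    y = ytm / m
    n = years * m
    cash = [face * coupon_rate / m] * n
    cash[-1] += face                                  # final coupon plus principal
    pvs = [c / (1 + y) ** k for k, c in enumerate(cash, start=1)]
    price = sum(pvs)
    dur = sum(pv * k / m for k, pv in enumerate(pvs, start=1)) / price
    return price, dur

price, dur = price_and_duration(100, 0.10, 0.10, 30)  # the 30-year par bond
mod_dur = dur / 1.05                                  # D_M = D / (1 + λ/m)
estimate = price - mod_dur * price * 0.01             # straight-line estimate at 11%
exact, _ = price_and_duration(100, 0.10, 0.11, 30)    # exact repricing at 11%
print(round(dur, 2), round(estimate, 2), round(exact, 2))
```

The exact price exceeds the straight-line estimate because the price-yield curve is convex, which is exactly what the second-order (convexity) correction accounts for.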
The following three bonds are under consideration (all quoted at a 9% yield; the prices follow from the pricing formula):

  Bond     Rate   Maturity   Price    Yield
  Bond 1   6%     30 yrs     69.04    9%
  Bond 2   11%    10 yrs     113.01   9%
  Bond 3   9%     20 yrs     100.00   9%

We calculate that D_1 = 11.44, D_2 = 6.54, D_3 = 9.61, and that the present value of the obligation is PV = $414,643. We decide to combine Bond 1 and Bond 2, and set

  PV = V_1 + V_2,
  10 PV = D_1 V_1 + D_2 V_2,

leading to V_1 of about $292,789 and V_2 of about $121,854.

Immunization results: revaluing the portfolio and the obligation at yields of 9%, 8%, and 10%, the difference between the portfolio value and the obligation value is -$19, $1,562, and $1,162, respectively, so the obligation stays covered under moderate yield shifts.

Convexity of a bond: it is possible to improve the immunization by using a second-order approximation. Let

  C = P''(λ)/P(λ).

In the case of a cash flow with payments c_k,

  C = [ sum_{k=1}^{n} c_k k(k + 1)/(1 + λ/m)^k ] / [ m^2 P (1 + λ/m)^2 ].

We then have

  ΔP = -D_M P Δλ + (CP/2)(Δλ)^2   (approximately).

It is possible to use a combination of bonds to fit the PV, the duration, and the convexity of the obligation. If we hold a bond portfolio (P_1, ..., P_m), then its convexity is

  C = w_1 C_1 + ... + w_m C_m,   where w_i = P_i/(P_1 + ... + P_m).

Back to the problem of the X Corporation: if the convexity is to be matched as well, then we can consider the following equations:

  PV = V_1 + V_2 + V_3,
  D PV = D_1 V_1 + D_2 V_2 + D_3 V_3,
  C PV = C_1 V_1 + C_2 V_2 + C_3 V_3.

One will need three bonds to do the matching.
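The two-bond immunization reduces to a 2-by-2 linear system, which can be solved in a few lines. The durations and the 9% semiannual discounting are taken from the example; the dollar answers are approximate reconstructions of the truncated figures above.

```python
obligation = 1_000_000
pv = obligation / 1.045 ** 20     # $1M due in 10 years, 9% yield, semiannual
d1, d2 = 11.44, 6.54              # durations of Bond 1 and Bond 2
target = 10                       # obligation duration in years

# Solve  V1 + V2 = PV  and  d1*V1 + d2*V2 = target*PV  for V1, V2
v1 = pv * (target - d2) / (d1 - d2)
v2 = pv - v1
print(round(pv), round(v1), round(v2))
```

Matching both present value and duration makes the net position insensitive to small parallel yield shifts, which is what the immunization results reflect.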
http://docplayer.net/22492684-Chapter-3-fixed-income-securities.html
# WSL 2 is now available in Windows Insiders

We’re excited to announce starting today you can try the Windows Subsystem for Linux 2 by installing Windows build 18917 in the Insider Fast ring! In this blog post we’ll cover how to get started, the new wsl.exe commands, and some important tips. Full documentation about WSL 2 is available on [our docs page](https://docs.microsoft.com/en-us/windows/wsl/wsl2-index).

![](https://habrastorage.org/webt/fr/v9/lc/frv9lcrbvw6fouzkcyimrk0urwc.gif)

Getting Started with WSL 2
==========================

We can’t wait to see how you start using WSL 2. Our goal is to make WSL 2 feel the same as WSL 1, and we look forward to hearing your feedback on how we can improve. The [Installing WSL 2](https://docs.microsoft.com/en-us/windows/wsl/wsl2-install) docs explain how to get up and running with WSL 2. There are some user experience changes that you’ll notice when you first start using WSL 2. Here are the two most important changes in this initial preview.

Place your Linux files in your Linux root file system
-----------------------------------------------------

Make sure to put the files that you will be accessing frequently with Linux applications inside of your Linux root file system to enjoy the file performance benefits. We understand that we have spent the past three years telling you to put your files into your C drive when using WSL 1, but this is not the case in WSL 2. To enjoy the faster file system access in WSL 2 these files must be inside of the Linux root file system. We have also made it possible for Windows apps to access the Linux root file system (like File Explorer! Try running `explorer.exe .` in the home directory of your Linux distro and see what happens), which will make this transition significantly easier.
Access your Linux network applications with a dynamic IP address in initial builds
----------------------------------------------------------------------------------

WSL 2 includes a huge architecture change using virtualization technology, and we are still working on improving the networking support. Since WSL 2 now runs in a virtual machine, you will need to use that VM’s IP address to access Linux networking applications from Windows, and vice versa you will need the Windows host’s IP address to access Windows networking applications from Linux. We aim to include the ability for WSL 2 to access network applications with `localhost` as soon as we can! You can find full details and steps on how to do this in our documentation [here](https://docs.microsoft.com/en-us/windows/wsl/wsl2-ux-changes#accessing-network-applications).

To read more about the user experience changes please see our documentation: [User Experience Changes Between WSL 1 and WSL 2](https://docs.microsoft.com/en-us/windows/wsl/wsl2-ux-changes).

New WSL Commands
================

We’ve also added some new commands to help you control and view your WSL versions and distros.

* `wsl --set-version <Distro> <Version>`: Use this command to convert a distro to use the WSL 2 architecture or use the WSL 1 architecture.
  * `<Distro>`: the specific Linux distro (e.g. “Ubuntu”)
  * `<Version>`: 1 or 2 (for WSL 1 or 2)
* `wsl --set-default-version <Version>`: Changes the default install version (WSL 1 or 2) for new distributions.
* `wsl --shutdown`: Immediately terminates all running distributions and the WSL 2 lightweight utility virtual machine. The VM that powers WSL 2 distros is something that we aim to manage entirely for you, and so we spin it up when you need it and shut it down when you don’t. There could be cases where you would want to shut it down manually, and this command lets you do that by terminating all distributions and shutting down the WSL 2 VM.
* `wsl --list --quiet`: Only list the distribution names.
This command is useful for scripting since it will only output the names of distributions you have installed, without showing other information like the default distro, versions, etc.

* `wsl --list --verbose`: Shows detailed information about all the distributions. This command lists the name of each distro, what state the distro is in, and what version it is running. It also shows which distribution is the default with an asterisk.

Looking ahead and hearing your feedback
=======================================

You can expect to get more features, bugfixes, and general updates to WSL 2 inside of the Windows Insiders program. Stay tuned to their experience blog and this blog right here to learn more WSL 2 news. If you run into any issues, or have feedback for our team, please file an issue on our Github at: [github.com/microsoft/wsl/issues](https://github.com/microsoft/wsl/issues), and if you have general questions about WSL you can find all of our team members that are on Twitter on [this twitter list](https://twitter.com/craigaloewen/lists/wsl-team-members/members).
https://habr.com/ru/post/456202/
This is a cool feature of BizTalk 2004! It allows you to map many documents into one or map one document into many. In either case, your inputs will not equal your outputs. This type of mapping does require some up-front planning and careful consideration.

First off, this type of mapping is only allowed inside the Orchestration. Basically, what the mapper does is create a kind of multi-part message for either the input, the output, or both. The parts of the multi-part message are the different input/output messages. This allows the mapper to take in or produce multiple messages.

I have only been able to create these types of maps by creating new maps using the Transform Shape inside the Orchestration. I have tried to manually create them, but I have not been able to create/replicate the multi-message behavior. Also, I have not been able to find the mystery schema that the Orchestration generates when creating these new maps. Surely it has to be someplace…

How to create maps with multiple messages?

1. Create Orchestration Messages inside your Orchestration. Let's say Input1, Input2, Output1, and Output2.
2. Add a Transform Shape to your Orchestration.
3. Select Output1 and Output2 inside the Construct Shape as Constructed Messages.
4. Click on the Transform Shape. It will look like the figure below.
5. Add the needed input messages under Source and output messages under Destination. Each message needs to go on a new line inside the Transform Shape. In this case, Input1 and Input2 are added to Source and Output1 and Output2 are added to Destination.
6. Once completed, make sure "Launch the BizTalk Mapper" is checked and hit OK.
7. The mapper will open and you will see multiple messages on the Source and Destination. These are nothing more than a new schema that includes the schemas from the input and output messages.
Note: Be mindful of namespaces, as I *think* they are required on all the multiple messages used by the new map. The results would look something like this.

8. Map as needed.

You can modify an existing map to use multiple messages, but it will break any existing links. If you want to modify an existing map, it has to be in the same project in order to modify the map to allow multiple messages. This is done by opening the Transform Shape and adding additional messages on a new line under Source or Destination. Once modified, the map can be moved to another project.

Note: Moving the map might mess up all the namespaces and references, so manual editing of the XSLT may be required. So be careful! Moving maps works best if all the schemas used inside the map are referenced from an external schemas project.

Overall: mapping with multiple messages is an extremely powerful feature when used inside the Orchestration, although it can be difficult to convert existing maps into maps that use multiple messages.

Posted on Friday, February 04, 2005 1:23 AM
http://geekswithblogs.net/sthomas/archive/2005/02/04/21969.aspx
hello, I have tried this code but it worked only on localhost... and I want to send email to the clients which are registered on my site, for a welcome message, i.e. an auto-generated message from the SMTP server through my web site.... please give me a link or the code for this example... regards, yamini

I am attempting to send an email message with frustrating results. The problem is that the message will not send immediately; it only sends when I exit the application. This is happening in a larger app, but I have duplicated it in a very simple example. Created a new Windows Forms Application in VB2005. Dropped a button into the middle of the form with the following code:

    Imports System.Net.Mail

    Public Class Form1
        Private Sub Button1_Click(ByVal sender As System.Object, ByVal e As System.EventArgs) Handles Button1.Click
            Try
                Dim mailSMTPClient As New SmtpClient("mailserver")
                Dim mailFromAddress As New MailAddress("test@domain.com")
                Dim mailToAddress As New MailAddress("user@domain.com")
                Dim mailMessage As New MailMessage(mailFromAddress, mailToAddress)
                With mailMessage
                    .Subject = "Test"
                    .Body = "Hello! This is a test."
                End With
                With mailSMTPClient
                    .DeliveryMethod = SmtpDeliveryMethod.Network
                    .Send(mailMessage)
                End With
            Catch ex As Exception
                MsgBox("Exception: " & ex.Message)
            End Try
        End Sub
    End Class

The example couldn't be more simple, but I get the same result. The message will not send until I exit the application. What am I missing? Thanks in advance for any help.

System.Web.Mail works every time from the same project without changing any configuration settings, despite warnings that System.Web.Mail is obsolete. I have now written a new sendMail procedure with System.Net.Mail, which failed with "Mailbox unavailable. The server response was: 5.7.1 Unable to relay..." I have made changes to the SMTP server on IIS, which I shouldn't have had to. Now it sends the messages but I can see .eml files in C:\Inetpub\mailroot\queue\ going nowhere! What gives?
How do we use System.Net.Mail the way System.Web.Mail worked? Changing IIS SMTP settings and messing about with smtpHost (whether localhost or 127.0.0.1) should be optional. Eti

In case anyone is interested, I tracked down the cause of the problem I mentioned above with System.Net.Mail not sending outgoing messages until the application ends. It turned out to be related to Symantec AntiVirus (full version 9.0.0.338) that I have running on my machine. I was able to eliminate the problem by turning off the "Internet E-Mail Auto-Protect" feature. Turning it back on resulted in the same send delay. Using the System.Web.Mail namespace does not exhibit this problem (despite the IDE's warnings about it being obsolete). I would love to use the System.Net.Mail namespace, but I can't assume client workstations will not have Symantec Anti-Virus running.

Scott and everyone, please do help me.. there are not too many ASP.NET techies here in KSA and I am coming from a CFM environment. My problem is that I have a FormView that I use for inserting data to a DB. After insertion it sends the form, BUT I don't know how to embed the content of the user's input from the FormView into the sent message. I am using one page only for inserting and sending.. thanks..

Hi Scott & everyone, I am using .NET 1.1 and sending mail using System.Web.Mail. On my machine McAfee is running. When I send the mail I turn off McAfee, so no problem with sending the mail. When I send the mail to addresses within my domain they are sent. The From address uses my mail address. But it won't be used in production, so I got a new email address created in my domain for the From address. Now I am using this new From address. Now when I send the mail it shows as successfully sent (I debugged and checked too) but I don't receive the mail (since I am using my address as the recipient for testing). I also checked the folders at c:\inetpub\mailroot\ but there was no mail in any of the folders.
I also cross-checked using mail2web whether this new address was permitted to relay or not, but it can both send and receive mails. So I don't understand where the problem lies. Can you guys please tell me what the heck may be going on? Thanks & appreciation in advance

Hi all, please tell me what System.Net.Mail.SmtpFailedRecipientException checks before it throws exceptions. That is, I want to confirm whether, in addition to DNS checking, it also checks that the host is not in the SMTP server database. Moreover, I want to know the specific record name the mail server uses to validate the email ID of the recipient. E.g., through MX records we get the mail server name, but from where does the mail server validate the host of its domain? Please do help me or guide me how to validate the email ID of my recipient. (Note: not validating the DNS or mail server, but the email address on the mail server.) Thanks in advance for all your advice and help. Rajesh Yadav

Hi Rajesh, I believe it only checks that the email address is in a valid format. When you send an email using that API it simply sends it to your local SMTP server, which then later forwards it to the destination address. As such, it can't know at the time the email is sent whether a user exists on the remote domain.

Hi Scott, thanks for the reply, but please, if you have any idea how I can validate the email through the host DNS server list. I only know that this can be done, as there are lots of products in the market which do the same thing. Please do help me! Thanks

Thanks, and excellent.

My company is using Lotus Notes Domino server to send emails, and I am using ASP.NET 1.1 to build a website which needs to send emails as a part of its functionality. Please help. Ashwar

At the end I am getting the error "Failure Sending Mail". Checking the InnerException error: {"Unable to read data from the transport connection: net_io_connectionclosed."}. Actually I gave Host as "localhost".

I also discovered that Symantec anti-virus was the problem.
Again, I cannot rely on this because the clients' machines will most definitely have virus protection turned on. Additionally, I noticed I can no longer initialise a new System.Net.Mail.MailMessage() without including the To and From addresses! Do you have any idea why this is so? I would rather not include them at the initialisation stage since I have a list of addresses to send to in a comma-delimited string and would rather not "bodge" it.

Hi Scott, a few months ago I was able to send mail with the code below. But since yesterday I am getting a "System.Net.Mail.SmtpException: The operation has timed out" error. I cannot figure out what the problem is. Can you help me out? Thanks, Irfan

Web.config code:

    <system.net>
      <mailSettings>
        <smtp from="irfan.a.khan@capgemini.com">
          <network host="BOMEX002.corp.capgemini.com" port="25" userName="irfkhan" password="expert!1234" defaultCredentials="true" />
        </smtp>
      </mailSettings>
    </system.net>

Code for sending mail:

    MailMessage message = new MailMessage();
    message.From = new MailAddress("irfan.a.khan@capgemini.com");
    message.To.Add(new MailAddress("irfan.a.khan@capgemini.com"));
    message.Subject = "This is a test mail";
    message.Body = "This is the content";
    SmtpClient client = new SmtpClient();
    client.Send(message);

also having trouble :( I'm also having trouble using:

    SmtpMail.SmtpServer = "mail.server.dk";
    SmtpMail.Send(Sender, Receiver, strSubject, strBody);

It turns out that .NET 2 requires a username and password for the SMTP server. If I use the same code in .NET 1.1 it works without any problems. Any ideas for sending via an SMTP server without providing username/pwd???

I'm using the new System.Net.Mail class methods to compose email using ASP.NET 2.0, but when I execute the code to send email, I don't get any errors, but it doesn't send the email either. I'm specifying the SmtpClient host, username and password for the domain in the code. Any help will be appreciated. Thanks.
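One detail worth checking in configs like the one above (this is general System.Net.Mail configuration behavior, not a diagnosis of this specific timeout): when defaultCredentials="true" is set, the explicit userName/password attributes on the <network> element are ignored and the process's Windows identity is used instead. A sketch of the element with the explicit credentials actually in effect — host and account values are placeholders:

```xml
<system.net>
  <mailSettings>
    <smtp from="sender@example.com">
      <!-- defaultCredentials="false": the userName/password given here are used.
           With defaultCredentials="true" those attributes are ignored and the
           current Windows identity is presented to the SMTP server instead. -->
      <network host="smtp.example.com"
               port="25"
               userName="someUser"
               password="somePassword"
               defaultCredentials="false" />
    </smtp>
  </mailSettings>
</system.net>
```

If the SMTP server expects the named account rather than the machine/user identity, mixing defaultCredentials="true" with explicit credentials can produce relay or authentication failures that look unrelated to the code.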
Hi Kishore, usually when no errors happen but no email gets sent, it means that the mail was sent to the local mail server, but it wasn't configured to forward it correctly. Can you check your SMTP service to see if that might be the case?

Is there a way to force a delay when sending messages with System.Net.Mail? For example, can I add a message to a queue and have it sent x hours/days later? Thanks.

We have an Exchange server that is separate from the production web server. I have an account set up for sending email on the Exchange server. Internal mail works fine, but external mail does not work. Could you point me in some direction? Steve

Shiver my (me) timbers... Sending email asynchronously is not working.
1. Page directive is correct: (<%@ Page Async="true" %>)
2. Callback method runs but "e.Error.Message.ToString()" reports: "Failure sending mail."
3. Hence, email doesn't arrive. :<(
4. I guess I am stuck using the synchronous method because it works!

Snippet:

    // SmtpClient
    SmtpClient smtp = new SmtpClient();
    smtp.EnableSsl = true; // Gmail requirement
    if (smtp != null)
    {
        // Asynchronously
        object userState = mail;
        smtp.SendCompleted += new SendCompletedEventHandler(SmtpClient_OnCompleted);
        smtp.SendAsync(mail, userState);

        // Synchronously
        smtp.Send(mail);
        Alert.Show(Global.FormsCompleteMsg, "Home.aspx");
    }

I had implemented code to send emails like your code, but I am getting the error "Request for the permission of type 'System.Security.Permissions.EnvironmentPermission, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.".
The code snippet is:

    message.From = new MailAddress(strFrom);
    message.To.Add(new MailAddress(strTo));
    message.Subject = strSubject;
    message.Body = strBody;
    SmtpClient client = new SmtpClient();
    client.Send(message);

    <system.net>
      <mailSettings>
        <smtp from="admin@sales1up.com">
          <network host="mrelay.perfora.net" port="25" userName="m43938458-admin" password="admin?2006" defaultCredentials="true" />
        </smtp>
      </mailSettings>
    </system.net>

the uploaded site url is :. Can you please suggest the way to solve the error?

I am still having the same problem with Hotmail not receiving my emails from SmtpClient.Send(). I am using the following code, which someone says works for them (although they use a POP server instead of localhost). All other addresses I send to (including Yahoo) receive the mails.

    System.Net.Mail.SmtpClient client = new System.Net.Mail.SmtpClient("localhost");
    System.Net.Mail.MailMessage msg = new System.Net.Mail.MailMessage("someone@somewhere.com", "someone@hotmail.com", "Test Mail Subject", "Test Mail Body");
    client.Send(msg);

Would it be possible for someone to test this code and see if they can get it to send to Hotmail? I would be very grateful as this is causing massive problems.

Thanks for all your help, but I have finally found the problem. My sent-from address was name@blueyonder.co.uk and my server was completely different, ie… If I change the sent-from address to postmaster@website.com it gets to Hotmail. Hotmail must have extra security matching the host to the sender.
hi all, I am trying to make a "contact me" page based on tutorial 8 of ASP.NET. The code to send email is:

    Private Sub SendMail(ByVal from As String, ByVal body As String)
        Dim mailServerName As String = "SMTP.yahoo.com"
        Dim message As MailMessage = New MailMessage(from, "tony@yahoo.com", "feedback", body)
        Dim mailClient As SmtpClient = New SmtpClient
        mailClient.Host = mailServerName
        mailClient.Send(message)
        message.Dispose()
    End Sub

This is the error message I get (the error page points at these lines):

    Line 16: mailClient.Host = mailServerName
    Line 17: mailClient.Send(message)
    Line 18: message.Dispose()
    Line 19: End Sub

Do I need to configure my SMTP? By default servername=127.0.0.1 and port=25. Thanks

I solved the problem; it was the mail server name.

We are using Chilkat's Email DLL to be more flexible; however, many of our sent emails get rejected because when sending the email via SMTP a RECEIVED header is logged in the email header. The receiving email server (Hotmail, Yahoo) then tells us "No relay permitted". That is because the email server and the webserver do not share the same IP address. Does this also happen with ASP.NET System.Net.Mail?

hi, I have a problem again. I can run my program on my friend's computer but on my computer it shows this error message: Unable to read data from the transport connection: net_io_connectionclosed. Exception Details: System.IO.IOException: Unable to read data from the transport connection: net_io_connectionclosed. Please help me. I have tried to search for the answer and to configure the SMTP, however it didn't work.

Found the problem thanks to a previous message in this blog which gave me a clue. The mail was going to the junk box, which I checked, but I did not observe that it was not sorted by receive date. As an aside: I did notice that the HTML was being converted to text. Does anyone know if this can be avoided in Outlook 2003?
Using System.Net.Mail my application sends a message successfully the first time Send is called, but then throws a timeout exception (SmtpException with a StatusCode of GeneralFailure {-1}) on any subsequent attempt. Any suggestions why this is happening? Please send me an e-mail if you find a solution... I'll probably not be able to find this site again since I found it with Google! uzzy_net@yahoo.com Best regards!

I switched from System.Web.Mail to System.Net.Mail; however, I am now reconsidering that move because if I send an email to an email address in the form john.doe@yahoo.com, I receive the error message "The specified string is not in the form required for an e-mail address". I have conducted further testing and evidently the SmtpClient class Send method does not like the period "." between the "john" and "doe". If I conduct a test with an email address in the form jdoe@yahoo.com... everything works fine. This is a showstopper for me if I am using System.Net.Mail to inform e-commerce customers of post-purchase activity. I can never predict when I may encounter a non-conforming email address. Philip

Is it possible to send an email using System.Net.Mail to a public folder in Exchange? I tried to do this using code that works when sending to a regular SMTP address, but it was returned as undeliverable. The error message was "The message reached the recipient's e-mail system, but delivery was refused." I don't know if this is a permission problem or how to accomplish this.

Hi Scott & all, just want to find out if there's any difference in sending out mails using either the Network or the Pickup SmtpDeliveryMethod? We're leveraging the W2K3 SMTP services to do so, and also an email alias to track the responses to the email (including bounced mail).
Another issue we realised is that when mails are received in MS Outlook, the display name on the From field is appended with an email alias address and the wording "on behalf of" the sender's actual email address (e.g., marketing_3_2061@domainname.com on behalf of Marketing [Marketing@domain.com]). However, when retrieving it in webmail (like Yahoo) there are no such issues; it will display the sender's email address. Any idea how this can be resolved?

I'm trying to send an HTML message, but the message ends up having no formatting; it just shows the source code. Has anyone had success with this?

    msg = New System.Net.Mail.MailMessage()
    msg.Subject = sSubject
    sAddr = LoadSettings.Item("to")
    aAddrs = Split(sAddr, ";")
    For Each sAddr In aAddrs
        msg.To.Add(New MailAddress(Trim(sAddr)))
    Next
    msg.IsBodyHtml = True
    oRead = System.IO.File.OpenText(ApplicationPath & sFileName)
    EntireFile = oRead.ReadToEnd()
    msg.Body = EntireFile
    client.Send(msg)

Sorry, that was due to a mistake on my part. Now it displays HTML content. But I have another problem: this is a report that is exported to HTML from Crystal Reports. I did not have any problems with Outlook 2003, but due to new formatting issues with Outlook 2007 it gets disfigured in 2007. The formatting is a problem when I export from Crystal to HTML, so now I am planning to export as DOC or PDF and send that in the message. Does anyone know if it's possible to attach a DOC or PDF inline?

Hi, I need to send an activation email to every registered user. The email should include a link that will direct the user to the activation address, and the user ID should be inside the link so that the activation page can get the user ID and activate the user.. thx for any help... :)

Note (translated from Chinese): I originally planned to use Sina's SMTP (smtp.sina.com), but after testing I found it very unstable, so I switched to GMail's SMTP service, which feels very stable and fast. Recording this for those who come later! Method1------...
Pingback from Ozzie Perez » Sending E-mail with ASP.Net 2.0 and System.Net.Mail in C#
Pingback from Programming Links Of The Day, 3/19 « 36 Chambers - The Legendary Journeys
Pingback from AspMail en .Net | hilpers
Pingback from SMTP server: The transport failed to connect to the server | DYERPROJECTS
Pingback from Richard Dingwall » TDD: How to supersede a single system library call
http://weblogs.asp.net/scottgu/archive/2005/12/10/432854.aspx
47 posts in this topic

What's wrong with this picture (when displayed with GUICtrlCreatePic)? By timmy2

Here's a simple script I'm testing the ScreenCapture function with.

    #include <ScreenCapture.au3>
    #include <WindowsConstants.au3>
    #include <GUIConstantsEx.au3>

    ; Capture region
    _ScreenCapture_Capture(@ScriptDir & "\image.bmp", 290, 18, 1465, 652, 0)

    $form = GUICreate("Test", 1175, 634, 290, 18, BitOR($WS_SYSMENU, $WS_POPUP), $WS_EX_COMPOSITED)
    $PSimage = GUICtrlCreatePic(@ScriptDir & "\image.bmp", 0, 0, 1175, 634)
    GUISetState(@SW_SHOW)
    MsgBox(0, "", "Test of screen cap and display.")

The above script captures an area of the screen (the SciTE editor with this script open), writes it to image.bmp, and displays image.bmp. The problem I'm having is that when image.bmp is displayed using GUICtrlCreatePic, the result is fuzzy. Here's a SnagIt screen capture of what GUICtrlCreatePic displays. Attached is image.bmp, which is sharp and equivalent to seeing the actual script open in SciTE on-screen. Any suggestions for displaying an accurate rendition of ScreenCapture's output? (Must be borderless, no chrome, etc. -- equivalent to what my test script yields.)

image.bmp
https://www.autoitscript.com/forum/topic/141892-maze-generator/?page=2
Can't pretty print json from python

Whenever I try to print out json from python, it ignores line breaks and prints the literal string "\n" instead of new line characters. I'm generating json using jinja2. Here's my code:

    print json.dumps(template.render(**self.config['templates'][name]))

It prints out everything in the block below (literally - even the quotes and "\n" strings):

    "{\n \"AWSTemplateFormatVersion\" : \"2010-09-09\",\n \"Description\" : ... (truncated)

I get something like this whenever I try to dump anything but a dict. Even if I try json.loads() then dump it again, I get garbage. It just strips out all line breaks. What's going wrong?

Answers

I'm not sure if it is what you're looking for, but this is what I use for pretty-printing json objects:

    def get_pretty_print(json_object):
        return json.dumps(json_object, sort_keys=True, indent=4, separators=(',', ': '))

    print get_pretty_print(my_json_obj)

json.dumps() also accepts parameters for encoding, if you need non-ascii support.
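A likely explanation of the symptom above (assuming template.render returns the JSON document as a string, which jinja2 templates do): json.dumps applied to a str serializes it as a single JSON string literal, escaping the quotes and newlines — exactly the "\n"-filled output described. Parsing the rendered text back into a dict first, then dumping with indentation, gives the pretty output. A minimal sketch:

```python
import json

# Assumption: template.render(...) already returns JSON *text* (a str).
# json.dumps on a str produces an escaped JSON string literal; parse it
# back into a dict first, then dump with indentation.
rendered = '{"AWSTemplateFormatVersion": "2010-09-09",\n "Description": "demo"}'

obj = json.loads(rendered)  # str -> dict
pretty = json.dumps(obj, sort_keys=True, indent=4, separators=(',', ': '))
print(pretty)
```

The same applies to the "json.loads() then dump it again" attempt in the question: that round-trip does work, as long as the loads is applied to the rendered string and the dumps is applied to the resulting dict, not to the original string.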
http://unixresources.net/faq/16318543.shtml
Screencast #37: Java EE 6 with NetBeans and GlassFish - Webinar Replay and Q&A By arungupta on Jan 21, 2011

The replay of the Java EE 6 with NetBeans and GlassFish webinar is now available. This video can also be seen in full-screen HD mode. The complete source code built during this webinar can be downloaded here. And here is a transcript of the Q&A session from the webinar:

Q. How can I, using NetBeans, generate an entity class with a SequenceGenerator annotation for a PostgreSQL table?
A. If you use NetBeans to generate entities from the database, it should do the right thing, based on the SQL types of your DB columns. If not, please file a bug report against NetBeans. Thanks.

Q. Do I need to have a good grasp of Java EE 6 to attend this conference?
A. It would help a bit, but NetBeans is doing a great job at helping you discover Java EE 6 with GlassFish.

Q. Is there an EOL for GlassFish?
A. There is no EOL for GlassFish as a product. As the reference implementation for Java EE, GlassFish is a strategic product for Oracle. As with any software product, specific versions of GlassFish will EOL over time, as newer releases come out. For example, Oracle GlassFish Server 2.x will be supported until 2014, and we are working now on releasing GlassFish Server 3.1 in the near future.

Q. What is the link to file a bug report for NetBeans IDE?
A. This should help:

Q. Can we use Eclipse instead of NetBeans?
A. Yes. The GlassFish Eclipse Plugin is available. Screencast #36 shows how to use Eclipse for Java EE 6 development and deployment with GlassFish.

Q. I would love to see a GF 3.0.2 fixing the known memory leaks. Any plans on that, or will it be GF 3.1? (And when?)
A. Fixes for GlassFish Server 3.0.1 will be available either through a support contract (patch), or you can always publish an issue on the issue tracker at, and it will get addressed in the trunk (and 3.1 releases).

Q. Hi, my name is Andrew.
I'd like to know all the possible ways to pass values between different JSF pages, where the value which needs to be passed is dynamic.
A. The easiest way is to use the new Flash scope in JSF 2.0. Failing that, you can put it in session and remove it. You could also use @ConversationScoped from CDI.

Q. Why do we have no GlassFish RPM packages, even for Oracle Enterprise Linux?
A. We have to support many Unix/Linux variants, so we provide one shell script for all flavors including OS X. In the next release we provide topology creation in the installer, which makes an RPM less viable. Zip installs are also available.

Q. Can you repeat? What blog?
A. The blog we mentioned is hosted by Arun Gupta, blogs.sun.com/arungupta - and a link to all webinar content will always be available at

Q. How do we do JUnit testing without depending on the GlassFish server? I want my JUnit tests to be totally independent of the runtime.
A. If you want to test EJB and CDI code, you will need the embeddable container, which is server-specific. If you want to only test utility classes or JPA, this can be server-independent.

Q. Are there any online classes on Java EE 6 with NetBeans and GlassFish?
A. This is a great starting page:. See for example the 5-part video screencast.

Q. it is possible.. they dont have any support
A. If you are asking about support for the products, yes - support is available from Oracle for GlassFish Server, and for NetBeans you can go to You can also obtain incident support for NetBeans from Oracle at If you would like more information, please email glassfish_ww@oracle.com

Q. What kind of support is NetBeans providing for REST frameworks like Spring-RS?
A. NetBeans has very good and extensive REST support for Jersey (the reference implementation of JAX-RS) and GlassFish 3, which contains Jersey. For Spring-RS you would need to register it as an external library...
I am not sure how well it is tested with GlassFish, since there is no need to use an external RS implementation when one is provided in the Java EE 6 runtime.

Q. I feel like this is just a heavy adaptation of the Ruby on Rails scaffolding capability, don't you think?
A. Yes, Rails definitely had a good influence on Java server-side development. Many good things in Java are inspired by Rails.

Q. Where can I download NetBeans?
A. netbeans.org

Q. What is the difference between a singleton EJB and an EJB with max-beans-in-pool = initial-beans-in-pool = 1?
A. A quick answer is that the @Singleton annotation is defined by the Java EE 6 specification, is much more readable by developers, and is less error-prone than editing container configuration (or other historical workarounds such as static fields). You can also ask at the GlassFish users forum at

Q. My NetBeans 6.9.1 does not have Servers or GlassFish on the Services tab; do I need NetBeans 7 to bring up Servers?
A. Which edition of NetBeans do you have? If you have "all-in-one", then the Java EE development features may not be activated. When you start creating a web project, these features will be activated and you will see Servers. You need the "Java" or "all-in-one" edition.

Q. What is the Oracle strategy to move forward with NetBeans and JDeveloper?
A. Both will continue as supported IDEs at Oracle. JDeveloper is usually for ADF development and other Fusion development. NetBeans is great for cutting-edge Java SE/EE/FX/ME development. Large teams continue to work on both products. They are both Swing-based IDE tools -- JDeveloper may start to incorporate select NetBeans features in the future.

Q. I understand the NetBeans IDE, but what is the benefit of GlassFish?
A. GlassFish offers a lightweight, modular Java EE 6 runtime. It offers rapid development features such as saving HTTP session state on redeploy.

Q. What would make me move to GlassFish, since there are lots of IDE tools?
A.
GlassFish offers a lightweight, modular, and productive runtime for Java EE 6.

Thank you everybody for attending the webinar! The complete list of webinars (replays and upcoming ones) is listed at glassfish.org/webinars. Technorati: conf webinar javaee6 glassfish netbeans

Thanks for such a great post with such a useful video and questionnaire. I saved this and would like to visit this site again.

Posted by java ecommerce on January 21, 2011 at 08:42 PM PST #

Mr. Gupta, do you have an alternate download URL for this video, one that allows me to use some sort of download accelerator? I'm from the extreme north of Brazil and our internet link here is very, very slow, like a dial-up one. Thanks for your great job.

Posted by Davi Shibayama on January 22, 2011 at 10:48 AM PST #

At 29:42, when you select the friendEJB.create and it is highlighted, I am not getting that highlight and the ability to jump to create(). As a consequence my create page gives a java.lang.ClassCastException. Can't get over this. Suggestions?

Posted by Keith Smith on January 26, 2011 at 03:52 AM PST #

Compliments. Do you already have a tutorial on Desktop Application Database: Client and Embedded? Especially one that has a sample of network database connections and that can handle multiple requests to the database at a time. An example is the CRUD sample on NetBeans () but I am having a challenge with that particular sample. So, can you point to more samples to help in resolving the issue? Thanks for this piece, it has been very helpful.

Posted by John Okewole on January 26, 2011 at 12:48 PM PST #

Davi, This tutorial can be viewed at as well. There are external tools available that will allow you to download the video for offline viewing.
Posted by Arun Gupta on January 28, 2011 at 07:22 AM PST #

John, Please post your questions on that tutorial to nbusers@netbeans.org; more details about contacting the NetBeans community are at:

Posted by Arun Gupta on January 28, 2011 at 07:24 AM PST #

Keith, Does your EJB have that method defined?

Posted by Arun Gupta on January 28, 2011 at 07:27 AM PST #

I have tried this but am getting errors; the only change I made was the Friend entity to a Person entity. When I try to run the TestServlet part through the EJB I get an IllegalStateException. The database is the samples database and it is connected. The error seems to be coming from:

    return em.createNamedQuery("Person.findAll").getResultList();

Any ideas? And what can I do about it?

Posted by adrianm on February 04, 2011 at 08:04 PM PST #

adrianm, Does the Person entity have the Person.findAll NamedQuery? What is the exact error message? Output from server.log?

Posted by Arun Gupta on February 08, 2011 at 06:52 AM PST #

Hi Arun, Thanks for that. I did track down the problem; it was due to the templates not having all of these in them...

    @Stateless
    @Named
    public class PersonSessionBean {
        @PersistenceContext
        EntityManager em;
        @Inject
        Person person;

and...

    @Named
    @RequestScoped
    @Entity
    @Table(name = "PERSON", catalog = "", schema = "APP")
    @XmlRootElement

When I added all the persistence annotations into the files it started working.

Posted by adrianm on February 09, 2011 at 04:31 AM PST #

Posted by guest on May 16, 2011 at 02:11 AM PDT #

Posted by Arun Gupta on May 16, 2011 at 02:36 AM PDT #

I am having problems with NetBeans. I have tried installing and uninstalling it several times but am still getting the error. What to do?

Posted by guest on July 12, 2011 at 04:41 AM PDT #

What error are you getting?
Posted by Arun Gupta on July 12, 2011 at 10:28 AM PDT #

Well appreciated, but still interested in having the link for the download of the video and the Java EE 6 with NetBeans and GlassFish PDF tutorial. tnx

Posted by nnaemeka on September 26, 2011 at 02:58 PM PDT #
https://blogs.oracle.com/arungupta/entry/screencast_37_java_ee_6
A speech recognition plugin for Flutter using the Baidu SDK. See the changelog for more information about its functions.

Add this to your package's pubspec.yaml file:

    dependencies:
      baidu_speech_recognition: 0.1.3

    import 'package:baidu_speech_recognition/baidu_speech_recognition.dart';

    BaiduSpeechRecognition _speechRecognition = BaiduSpeechRecognition();

    // initialize
    _speechRecognition.init().then((value) => print(value));

    // start long speech recognition
    _speechRecognition.startLongSpeech().then((value) => print(value));

    // start speech recognition, 60s long
    _speechRecognition.start().then((value) => print(value));

    // cancel recognition
    _speechRecognition.cancel().then((value) => print(value));

You can add a listener:

    _speechRecognition.speechRecognitionEvents.listen((String value) {
      // TODO do something with the value
    });

The return value is a JSON string:

    {
      "type": "The recognition result type",
      "value": "The result"
    }

The type has the following values:

Go to Baidu ASR (百度ASR) and download the SDK for iOS, then copy BDSClientLib and BDSClientResource to the same directory as your Flutter project; the file structure looks like this:

    ----------------
    |
    |--Your Flutter Project/
    |
    |--BDSClientLib/
    |
    |--BDSClientResource/

Then open your iOS project in Xcode and add the Baidu speech SDK library and some resources:

Add BDSClientLib/libBaiduSpeechSDK.a to your project group as "create groups".
Add BDSClientResource/ASR/BDSClientResources to your project group as "create folder references".
Add BDSClientResource/ASR/BDSClientEASRResources to your project group as "create groups".

Add the following frameworks to your project:

Finally, add the Microphone Usage privacy key to your Info.plist file.

Open your project with Xcode and go to Pods, select the baidu_speech_recognition TARGETS, then select the Build Settings tab and change Mach-O Type to Static Library. Then go to Build Phases and make sure all the Headers are Public.

If you have any problems or errors, please file an issue.
First become a Baidu developer, then follow the guide to add the permissions and files you need.

example/README.md

Demonstrates how to use the baidu_speech_recognition plugin. For help getting started with Flutter, view our online documentation.

Add this to your package's pubspec.yaml file:

    dependencies:
      baidu_speech_recognition: ^0.1.3

You can install packages from the command line with Flutter:

    $ flutter pub get

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more.

Now in your Dart code, you can use:

    import 'package:baidu_speech_recognition/baidu_speech_recognition.dart';
https://pub.dev/packages/baidu_speech_recognition
Azure Quickstart Templates

Deploy Azure resources through Azure Resource Manager with community-contributed templates, and do more. Deploy, learn, fork, and contribute back. 533 quickstart templates are currently in the gallery.

- Join a VM to an existing domain — This template demonstrates domain join to a private AD domain hosted in the cloud.
- Scalable Umbraco CMS Web App — This template provides an easy way to deploy an Umbraco CMS web app on Azure App Service Web Apps.
- Create Azure SQL Servers and Database with Failover Group — Creates two Azure SQL servers, a database, and a failover group.
- Provides a single view of the jobs' status across multiple VMM instances, which helps you gain insight into the health and performance of these jobs.
- Create an Event Hubs namespace and enable auto-inflate — This template enables you to deploy an Event Hubs Standard namespace, an Event Hub, and a consumer group. It also turns on the auto-inflate feature on your namespace.
- Create an Application Gateway — This template creates an application gateway with redirect functionality in a virtual network and sets up load balancing and redirect rules (basic and path-based).

Provided by partners:

- CloudBees Jenkins – Docker Datacenter Quick Start — This quick start launches a stack that allows you to build, run, and ship containerized applications using Docker Datacenter and CloudBees Jenkins. This integrated stack is a ready-to-use pre-production environment.
https://azure.microsoft.com/zh-cn/resources/templates/
Hopefully somebody could point out where I am going wrong...

I am using WAR deployment under Tomcat 3.2.1 on a WinNT4 box. I have placed a file (me.txt) in the WEB-INF directory, and have written the following servlet, which tries to read the file:

```java
public class MercuryServlet extends HttpServlet {

    public void init() throws ServletException {
        InputStream inputStream = getServletConfig().getServletContext()
                .getResourceAsStream("me.txt");
        if (inputStream == null) {
            System.out.println("NULL");
        } else {
            System.out.println("NOT NULL");
        }
    }
}
```

However, inputStream is always null! Am I approaching this the wrong way, or is this a bug in Tomcat? (I have searched the bug database and failed to find anything, so I suspect my understanding of the Servlet spec is faulty.) I thought that the ServletContext object was used to access the WEB-INF directory...?

Thanx in advance,
- Ch.
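For reference, the Servlet spec resolves paths passed to `ServletContext.getResourceAsStream` relative to the web application root, and the path must begin with a `/` — so a file stored under WEB-INF is addressed with the full context-relative path. A minimal sketch of the adjusted call (a fragment, not a complete servlet):

```java
// Sketch: use a context-relative path including the WEB-INF prefix.
// Returns non-null only if the WAR actually contains WEB-INF/me.txt.
InputStream inputStream = getServletContext()
        .getResourceAsStream("/WEB-INF/me.txt");
```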
http://mail-archives.apache.org/mod_mbox/tomcat-users/200106.mbox/%3C41FBBB1B17AFD4119ECE000347088836A0DE47@wbnsmail.gtl.com%3E
Refactoring in Eclipse

Last modified: December 8, 2019

1. Overview

On refactoring.com, we read that "refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior."

Typically, we might want to rename variables or methods, or we may want to make our code more object-oriented by introducing design patterns. Modern IDEs have many built-in features to help us achieve these kinds of refactoring objectives and many others.

In this tutorial, we'll focus on refactoring in Eclipse, a free, popular Java IDE. Before we start any refactoring, it's advisable to have a solid suite of tests so as to check that we didn't break anything while refactoring.

2. Renaming

2.1. Renaming Variables and Methods

We can rename variables and methods by following these simple steps:

- Select the element
- Right-click the element
- Click the Refactor > Rename option
- Type the new name
- Press Enter

We can also perform the second and third steps by using the shortcut key, Alt+Shift+R.

When the above action is performed, Eclipse will find every usage of that element in that file and replace them all in place. We can also use an advanced feature to update the reference in other classes by hovering over the item when the refactor is on and clicking on Options:

This will open up a pop-up where we can both rename the variable or method and have the option to update the reference in other classes:

2.2. Renaming Packages

We can rename a package by selecting the package name and performing the same actions as in the previous example. A pop-up will appear right away where we can rename the package, with options like updating references and renaming subpackages.

We can also rename the package from the Project Explorer view by pressing F2:

2.3. Renaming Classes and Interfaces

We can rename a class or interface by using the same actions or just by pressing F2 from Project Explorer.
This will open up a pop-up with options to update references, along with a few advanced options:

3. Extracting

Now, let's talk about extraction. Extracting code means taking a piece of code and moving it. For example, we can extract code into a different class, superclass or interface. We could even extract code to a variable or method in the same class. Eclipse provides a variety of ways to achieve extractions, which we'll demonstrate in the following sections.

3.1. Extract Class

Suppose we have the following Car class in our codebase:

```java
public class Car {

    private String licensePlate;
    private String driverName;
    private String driverLicense;

    public String getDetails() {
        return "Car [licensePlate=" + licensePlate + ", driverName=" + driverName
          + ", driverLicense=" + driverLicense + "]";
    }

    // getters and setters
}
```

Now, suppose we want to extract out the driver details to a different class. We can do this by right-clicking anywhere within the class and choosing the Refactor > Extract Class option:

This will open up a pop-up where we can name the class and select which fields we want to move, along with a few other options:

We can also preview the code before moving forward. When we click OK, Eclipse will create a new class named Driver, and the previous code will be refactored to:

```java
public class Car {

    private String licensePlate;
    private Driver driver = new Driver();

    public String getDetails() {
        return "Car [licensePlate=" + licensePlate + ", driverName=" + driver.getDriverName()
          + ", driverLicense=" + driver.getDriverLicense() + "]";
    }

    // getters and setters
}
```

3.2. Extract Interface

We can also extract an interface in a similar fashion.
Suppose we have the following EmployeeService class:

```java
public class EmployeeService {

    public void save(Employee emp) {
    }

    public void delete(Employee emp) {
    }

    public void sendEmail(List<Integer> ids, String message) {
    }
}
```

We can extract an interface by right-clicking anywhere within the class and choosing the Refactor > Extract Interface option, or we can use the Alt+Shift+T shortcut key command to bring up the menu directly:

This will open up a pop-up where we can enter the interface name and decide which members to declare in the interface:

As a result of this refactoring, we'll have an interface IEmpService, and our EmployeeService class will be changed as well:

```java
public class EmployeeService implements IEmpService {

    @Override
    public void save(Employee emp) {
    }

    @Override
    public void delete(Employee emp) {
    }

    public void sendEmail(List<Integer> ids, String message) {
    }
}
```

3.3. Extract Superclass

Suppose we have an Employee class containing several properties that aren't necessarily about the person's employment:

```java
public class Employee {

    private String name;
    private int age;
    private int experienceInMonths;

    public String getName() {
        return name;
    }

    public int getAge() {
        return age;
    }

    public int getExperienceInMonths() {
        return experienceInMonths;
    }
}
```

We may want to extract the non-employment-related properties to a Person superclass. To extract items to a superclass, we can right-click anywhere in the class and choose the Refactor > Extract Superclass option, or use Alt+Shift+T to bring up the menu directly:

This will create a new Person class with our selected variables and methods, and the Employee class will be refactored to:

```java
public class Employee extends Person {

    private int experienceInMonths;

    public int getExperienceInMonths() {
        return experienceInMonths;
    }
}
```

3.4. Extract Method

Sometimes, we might want to extract a certain piece of code inside our method to a different method to keep our code clean and easy to maintain.
Let's say, for example, that we have a for loop embedded in our method:

```java
public class Test {
    public static void main(String[] args) {
        for (int i = 0; i < args.length; i++) {
            System.out.println(args[i]);
        }
    }
}
```

To invoke the Extract Method wizard, we need to perform the following steps:

- Select the lines of code we want to extract
- Right-click the selected area
- Click the Refactor > Extract Method option

The last two steps can also be achieved by the keyboard shortcut Alt+Shift+M. Let's see the Extract Method dialog:

This will refactor our code to:

```java
public class Test {
    public static void main(String[] args) {
        printArgs(args);
    }

    private static void printArgs(String[] args) {
        for (int i = 0; i < args.length; i++) {
            System.out.println(args[i]);
        }
    }
}
```

3.5. Extract Local Variables

We can extract certain items as local variables to make our code more readable. This is handy when we have a String literal:

```java
public class Test {
    public static void main(String[] args) {
        System.out.println("Number of Arguments passed =" + args.length);
    }
}
```

and we want to extract it to a local variable. To do this, we need to:

- Select the item
- Right-click and choose Refactor > Extract Local Variable

The last step can also be achieved by the keyboard shortcut Alt+Shift+L. Now, we can extract our local variable:

And here's the result of this refactoring:

```java
public class Test {
    public static void main(String[] args) {
        final String prefix = "Number of Arguments passed =";
        System.out.println(prefix + args.length);
    }
}
```

3.6. Extract Constant

Or, we can extract expressions and literal values to static final class attributes.
We could extract the 3.14 value into a local variable, as we just saw:

```java
public class MathUtil {
    public double circumference(double radius) {
        return 2 * 3.14 * radius;
    }
}
```

But, it might be better to extract it as a constant, for which we need to:

- Select the item
- Right-click and choose Refactor > Extract Constant

This will open a dialog where we can give the constant a name and set its visibility, along with a couple of other options:

Now, our code looks a little more readable:

```java
public class MathUtil {

    private static final double PI = 3.14;

    public double circumference(double radius) {
        return 2 * PI * radius;
    }
}
```

4. Inlining

We can also go the other way and inline code. Consider a Util class that has a local variable that's only used once:

```java
public class Util {

    public void isNumberPrime(int num) {
        boolean result = isPrime(num);
        if (result) {
            System.out.println("Number is Prime");
        } else {
            System.out.println("Number is Not Prime");
        }
    }

    // isPrime method
}
```

We want to remove the result local variable and inline the isPrime method call. To do this, we follow these steps:

- Select the item we want to inline
- Right-click and choose the Refactor > Inline option

The last step can also be achieved by the keyboard shortcut Alt+Shift+I:

Afterward, we have one less variable to keep track of:

```java
public class Util {

    public void isNumberPrime(int num) {
        if (isPrime(num)) {
            System.out.println("Number is Prime");
        } else {
            System.out.println("Number is Not Prime");
        }
    }

    // isPrime method
}
```

5. Push Down and Pull Up

If we have a parent-child relationship (like our previous Employee and Person example) between our classes, and we want to move certain methods or variables among them, we can use the push/pull options provided by Eclipse.

As the name suggests, the Push Down option moves methods and fields from a parent class to all child classes, while Pull Up moves methods and fields from a particular child class to the parent, thus making those members available to all the child classes.
For moving methods down to child classes, we need to right-click anywhere in the class and choose the Refactor > Push Down option:

This will open up a wizard where we can select items to push down:

Similarly, for moving methods from a child class to a parent class, we need to right-click anywhere in the class and choose Refactor > Pull Up:

This will open up a similar wizard where we can select items to pull up:

6. Changing a Method Signature

To change the method signature of an existing method, we can follow a few simple steps:

- Select the method or place the cursor somewhere inside it
- Right-click and choose Refactor > Change Method Signature

The last step can also be achieved by the keyboard shortcut Alt+Shift+C.

This will open a pop-up where we can change the method signature accordingly:

7. Moving

Sometimes, we simply want to move methods to another existing class to make our code more object-oriented. Consider the scenario where we have a Movie class:

```java
public class Movie {

    private String title;
    private double price;
    private MovieType type;

    // other methods
}
```

And MovieType is a simple enum:

```java
public enum MovieType {
    NEW, REGULAR
}
```

Suppose also that we have a requirement that if a Customer rents a movie that is NEW, it will be charged two dollars more, and that our Customer class has the following logic to calculate the totalCost():

```java
public class Customer {

    private String name;
    private String address;
    private List<Movie> movies;

    public double totalCost() {
        double result = 0;
        for (Movie movie : movies) {
            result += movieCost(movie);
        }
        return result;
    }

    private double movieCost(Movie movie) {
        if (movie.getType().equals(MovieType.NEW)) {
            return 2 + movie.getPrice();
        }
        return movie.getPrice();
    }

    // other methods
}
```

Clearly, the calculation of the movie cost based on the MovieType would be more appropriately placed in the Movie class and not the Customer class.
We can easily move this calculation logic in Eclipse:

- Select the lines you want to move
- Right-click and choose the Refactor > Move option

The last step can also be achieved by the keyboard shortcut Alt+Shift+V:

Eclipse is smart enough to realize that this logic should be in our Movie class. We can change the method name if we want, along with other advanced options.

The final Customer class code will be refactored to:

```java
public class Customer {

    private String name;
    private String address;
    private List<Movie> movies;

    public double totalCost() {
        double result = 0;
        for (Movie movie : movies) {
            result += movie.movieCost();
        }
        return result;
    }

    // other methods
}
```

As we can see, the movieCost method has been moved to our Movie class and is being used in the refactored Customer class.

8. Conclusion

In this tutorial, we looked into some of the main refactoring techniques provided by Eclipse. We started with some basic refactoring like renaming and extracting. Later on, we saw how to move methods and fields around between different classes.

To learn more, we can always refer to the official Eclipse documentation on refactoring.
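Following the tests-first advice from the overview, a quick behavioral check confirms that the Move refactoring preserved the total-cost logic. This is a minimal, self-contained sketch; the constructors and getters are assumptions filled in for illustration (the article elides them as "// other methods"):

```java
import java.util.List;

enum MovieType { NEW, REGULAR }

class Movie {
    private final String title;
    private final double price;
    private final MovieType type;

    Movie(String title, double price, MovieType type) {
        this.title = title;
        this.price = price;
        this.type = type;
    }

    // the method moved here from Customer by Refactor > Move
    double movieCost() {
        if (type == MovieType.NEW) {
            return 2 + price;  // NEW rentals cost two dollars more
        }
        return price;
    }
}

class Customer {
    private final List<Movie> movies;

    Customer(List<Movie> movies) {
        this.movies = movies;
    }

    double totalCost() {
        double result = 0;
        for (Movie movie : movies) {
            result += movie.movieCost();
        }
        return result;
    }
}

public class MoveRefactoringCheck {
    public static void main(String[] args) {
        Customer customer = new Customer(List.of(
            new Movie("Inception", 3.0, MovieType.NEW),      // 3.0 + 2.0 surcharge
            new Movie("Casablanca", 3.0, MovieType.REGULAR)  // 3.0
        ));
        System.out.println(customer.totalCost()); // 8.0
    }
}
```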
https://www.baeldung.com/eclipse-refactoring