Visual Studio - Use Multiple Visual Studio Project Types for Cloud Success
By Patrick Foley | January 2011
As you’ve probably noticed, there are many different project types in Visual Studio these days. Which one do you choose? All have strengths that help solve problems in different situations. Even within a single business problem, there are often multiple use cases that can best be solved by different Visual Studio project types.
I was confronted with a real-world example of such a problem recently while building out the infrastructure for a cloud-based program I lead that’s designed to highlight success stories: Microsoft Solutions Advocates (microsoftsolutionsadvocates.com). I used several different Visual Studio project types to build my solution, and in this article, I’ll walk through a simplified example of my project called “Customer Success Stories,” or CSS.
CSS has three distinct use cases:
- Anonymous users read success stories on a public Web site.
- Users who belong to the program log in to a private Web site to create and edit their own success stories.
- Administrators (like me) log in to an administrative Web site to manage and edit all the data, including minutiae such as lookup tables.
The approach I settled on combined three Microsoft .NET Framework technologies:
- ASP.NET MVC for the public site
- WCF RIA Services for the private, customer-edit site
- ASP.NET Dynamic Data for the administrative site
Any of these technologies could’ve been used to create the whole solution, but I preferred to take advantage of the best features of each. ASP.NET MVC is an ideal technology for creating a public Web site that will work everywhere, because it emits standard HTML, for one thing. The public site has a marketing purpose, so I’ll eventually engage a designer to polish the appearance. Working with a designer adds complexity, but ASP.NET MVC has a straightforward view implementation that makes it easy to incorporate a designer’s vision. Making the site read-only and separating it from the other use cases helps isolate the scope of the designer’s involvement.
Although ASP.NET MVC could also be used to implement the customer-editing functionality, WCF RIA Services is an even better fit. (Conversely, WCF RIA Services could be used to build a great-looking public Web site, but Silverlight isn’t supported on some devices, such as iPhones and iPads, and I wanted the greatest reach for the public use case.) Silverlight is here to stay, and it’s perfect for creating a rich editing experience with very little programming, so long as it’s reasonable to expect users to have it or install it, as would be the case with customers collaborating on a success-stories site.
ASP.NET Dynamic Data provides a handy way to build an administrative solution without too much work. The administrative site doesn’t need to be fancy; it simply needs to provide a way to manage all of the data in the solution without having to resort to SQL Server Management Studio. As my solution evolves, the ASP.NET Dynamic Data site could conceivably be subsumed by the WCF RIA Services site. Nevertheless, it’s useful at the beginning of a data-centric development project such as this one, and it costs almost nothing to build.
Targeting Azure
Again, this example is based on a real-world problem, and because the solution requires a public Web site, I’m going to target Azure and SQL Azure. Windows Server and SQL Server might be more familiar, but I need the operational benefits of running in the cloud (no need to maintain the OS, apply patches and so on). I barely have time to build the solution—I certainly don’t have time to operate it, so Azure is a must for me.
To work through this example and try it on Azure, you need an account. Various options and packages can be found at microsoft.com/windowsazure/offers. MSDN subscribers and Microsoft partners (including BizSpark startups—visit bizspark.com to learn more) have access to several months of free resources. For prototyping and learning (such as working through this example), you can use the “Introductory Special.” It includes three months of a 1GB SQL Azure database and 25 hours of an Azure small compute instance, which should be enough to familiarize yourself with the platform. I built this example using the Introductory Special without incurring any additional charges.
‘Customer Success Stories’ Overview
This example is presented in the form of a tutorial. Several important aspects of a real-world implementation are out of scope for this article, including user-level security, testing, working with a designer and evolving beyond an extremely simplified model. I’ll attempt to address these in future articles or on my blog at pfoley.com.
The steps are presented at a high level and assume some familiarity with Visual Studio and with the technologies involved. Code for the entire solution can be downloaded from code.msdn.microsoft.com/mag201101VSCloud, and click-by-click instructions for each step can be found at pfoley.com/mm2011jan.
Step 1: Create a Project for the Entity Framework Data Model
The Web technologies used in this example all use the Entity Framework effectively, so I chose to integrate the three use cases by having them all share a common Entity Framework model. When working with an existing database, you have to generate the model from the database. Whenever possible, I prefer to create the model first and generate the database from it, because I like to think about my design more at the model level than the database level. To keep things simple, this example uses just two entities: Company and Story.
The Entity Framework model will be created in its own project and shared across multiple projects (I learned how to do this from Julie Lerman; see pfoley.com/mm2011jan01 for more on that). I call best practices like these “secret handshakes”—figuring it out the first time is a challenge, but once you know the secret, it’s simple:
- Create the Entity Framework model in a “class library” project.
- Copy the connection string into any projects that share the model.
- Add a reference to the Entity Framework project and System.Data.Entity in any projects that share the model.
To start, create a Blank Solution in Visual Studio 2010 named “Customer Success Stories” and add a Class Library project named “CSSModel.” Delete the class file and add an empty ADO.NET Entity Data Model item named “CSSModel.” Add Company and Story entities with an association between them as in Figure 1 (when you right-click on Company to add the association, make sure that “Add foreign key properties to ‘Person’ Entity” is checked on the ensuing dialog—foreign key properties are required in future steps).
Figure 1 Adding Company and Story Entities to the Visual Studio Project
The model is now ready to generate database tables, but a SQL Azure database is needed to put them in. When prototyping is completed and the project is evolving, it’s useful to add a local SQL Server database for testing purposes, but at this point, it’s less complicated to work directly with a SQL Azure database.
From your SQL Azure account portal, create a new database called “CSSDB” and add rules for your current IP address and to “Allow Microsoft Services access to this server” on the Firewall Settings tab. Your SQL Azure account portal should look something like Figure 2.
Figure 2 Configuring Settings in the SQL Azure Portal
In Visual Studio, right-click on the design surface and select “Generate Database from Model.” Add a connection to your new SQL Azure database and complete the wizard, which generates some Data Definition Language (DDL) and opens it in a .sql file, as shown in Figure 3.
Figure 3 Generating the Database Model
Before you can execute the SQL, you must connect to the SQL Azure database (click the Connect button on the Transact-SQL Editor toolbar). The “USE” statement is not supported in SQL Azure, so you must choose your new database from the Database dropdown on the toolbar and then execute the SQL. Now you have a SQL Azure database that you can explore in Visual Studio Server Explorer, SQL Server Management Studio or the new management tool, Microsoft Project Code-Named “Houston” (sqlazurelabs.com/houston.aspx). Once you build the solution, you have an Entity Framework project that you can use to access that database programmatically, as well.
Step 2: Create the ASP.NET Dynamic Data Project
An ASP.NET Dynamic Data Web site provides a simple way to work with all the data in the database and establishes baseline functionality to ensure the environment is working properly—all with one line of code.
Add a new ASP.NET Dynamic Data Entities Web Application project to the solution and call it “CSSAdmin.” To use the data model from the first step, copy the connectionStrings element from App.Config in CSSModel to web.config in CSSAdmin. Set CSSAdmin as the startup project and add references to the CSSModel project and System.Data.Entity.
There are lots of fancy things you can do with ASP.NET Dynamic Data projects, but it’s surprisingly useful to implement the default behavior that comes by simply uncommenting the RegisterContext line in Global.asax.cs and changing it to:
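The line itself didn't survive extraction. In a Dynamic Data Entities project the uncommented registration typically ends up looking something like the sketch below; the CSSModelContainer type name is an assumption here, so use whatever container class the Entity Framework designer generated for CSSModel:

```csharp
// Register the shared Entity Framework context and scaffold every table.
DefaultModel.RegisterContext(
    typeof(CSSModel.CSSModelContainer),
    new ContextConfiguration() { ScaffoldAllTables = true });
```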
Build and run the project, and you have a basic site to manage your data. Add some test data to make sure everything’s working.
Step 3: Create the Azure Services Project
The result of the previous step is a local Web site that accesses a database on SQL Azure—the next step is to get that Web site running on Azure.
From your Azure account portal, create a Storage Service called “CSS Storage” and a Hosted Service called “CSS Service.” Your Azure account portal should look similar to Figure 4.
Figure 4 Creating Services in the Azure Portal
In Visual Studio, add a new Azure Cloud Service project to your solution called “CSSAdminService” (you must have Azure Tools for Visual Studio installed), but don’t add additional “cloud service solutions” from the wizard. The cloud service project adds the infrastructure necessary to run your application in a local version of the “cloud fabric” for development and debugging. It also makes it easy to publish to Azure interactively. This is great for prototyping and for simpler Azure solutions, but once you get serious about developing on Azure, you’ll probably want to use Windows PowerShell to script deployment, perhaps as part of a continuous integration solution.
Right-click on the Roles folder in CSSAdminService, then select “Add | Web Role Project in solution” to associate the CSSAdmin project with the cloud service project. Now when you compile and run the solution, it runs in the development fabric. At this point, the solution doesn’t look any different than it did running on IIS or Cassini, but it’s important to run on the dev fabric anyway to catch mistakes such as using unsupported Windows APIs as you evolve an Azure solution.
Deploy to your Azure account by right-clicking on the CSSAdminService project and selecting Publish. The first time you do this, you’ll need to add credentials (follow the instructions to copy a certificate to your Azure account). Then select a “Hosted Service Slot” and a “Storage Account” to deploy your solution to. There are two options for the hosted service slot: production and staging. When updating a real-world solution in production, deploy first to staging to make sure everything works and then promote the staging environment into production. While prototyping, I prefer to deploy straight to production because I’m not going to leave the solution running anyway. Click OK to deploy to Azure, which can take several minutes. Once complete, run your Azure application using the Web site URL shown on the service page, which should look similar to Figure 5.
Figure 5 An Azure Deployed Service
After verifying that the service works, suspend and delete the deployment to avoid charges (consumption-based plans are billed for any time a service package is deployed, whether or not it’s actually running). Don’t delete the service itself, because doing so returns the Web site URL back into the pool of available URLs. Obviously, when you’re ready to flip the switch on a real production solution, you’ll have to budget for running Azure services nonstop.
When a service deployment fails, it doesn’t always tell you exactly what’s wrong. It usually doesn’t even tell you something is wrong. The service status just enters a loop such as “Initializing … Busy … Stopping … Initializing …” When this happens to you—and it will—look for problems such as attempting to access local resources (perhaps a local SQL Server database) or referencing assemblies that don’t exist on Azure. Enabling IntelliTrace when you deploy the package (see Figure 6) can help you pinpoint problems by identifying the specific exceptions that are being thrown.
Figure 6 Enabling IntelliTrace While Publishing to Azure
Step 4: Create the ASP.NET MVC Project
The solution so far consists of an administrative Web site (albeit with no user-level security) that runs on Azure and accesses a SQL Azure database, all with one line of code. The next step is to create the public Web site.
Add a new ASP.NET MVC 2 Web Application project to the solution named “CSSPublic” (don’t create a unit test project while working through this example). If you’re already quite experienced with ASP.NET MVC, you might prefer to start with an ASP.NET MVC 2 Empty Web Application, but I prefer to start from a Web site structure that already works and modify it bit by bit to make it do what I want.
Right-click on CSSPublic to make it your startup project, and then run it to see what you’re starting with. The public site for CSS is read-only and anonymous, so remove all login and account functionality with these steps:
- Delete the “logindisplay” div from Site.Master.
- Delete the ApplicationServices connection string and authentication element from the main Web.config.
- Delete AccountController.cs, AccountModels.cs, LogOnUserControl.ascx and the entire Account folder under Views.
- Run it again to make sure it still works.
- Copy the connection string from the CSSModel App.Config into the CSSPublic Web.config and add references to CSSModel and System.Data.Entity as before.
- Select all the references for CSSPublic and set the Copy Local property to true.
I think it makes sense to add separate controllers (with associated views) for Companies and Stories, while keeping the Home controller as a landing page. These are important decisions; a designer can make the site look good, but the information architecture—the site structure—has to be right first.
Naming is relevant in a Model View Controller (MVC) project. Use plural controller names (Companies, not Company) and matching view folders. Use the standard conventions of Index, Details and so on for controller methods and view names. You can tweak these conventions if you really want to, but that adds complexity.
Right-click on the Controllers folder to add a new controller named “CompaniesController.” This site won’t implement any behavior—it’s read-only—so there’s no need for an explicit model class. Treat the Entity Framework model container itself as the model. In CompaniesController.cs, add “using CSSModel” and change the Index method to return a list of companies as follows:
To create the view, create an empty Companies folder under Views and right-click on it to add a view called “Index.” Make it strongly typed with a View data class of CSSModel.Company and View content of List.
In Site.Master, add a list item in the menu to reference the new Controller:
Run the app and click the menu to see the list of Companies. The default view is a good starting point, but remove the unnecessary “Id” field. Because this site is intended to be read-only, delete the ActionLink entries for “Edit,” “Delete” and “Create.” Finally, make the company name itself a link to the Details view:
Your Companies list should now look similar to what is shown in Figure 7.
Figure 7 A List of Companies in ASP.NET MVC Web Site
To implement Details, add a method to CompaniesController:
The id parameter represents the integer following Companies/Details/ in the URL. That integer is used to find the appropriate company using a simple LINQ expression.
Add the Details view underneath the Companies folder as before, but this time select “Details” as the View content and name the view “Details.”
Run the project, navigate to Companies, and then click one of the company names to see the default Details view.
Adding a controller and views for Stories is similar. The views still require a fair amount of tweaking before being ready for a designer, but this is a good start, and it’s easy to evolve.
To verify this project will work on Azure, create a cloud service project called “CSSPublicService” and add the CSSPublic role to it. Run the service locally in the dev fabric and then publish the site to Azure and run it from the public URL. Don’t forget to suspend and delete the deployment when you’re done to avoid being billed.
Step 5: Create the WCF RIA Services Project
The solution at this point contains ASP.NET MVC and ASP.NET Dynamic Data Web sites running (or at least runnable) on Azure, using a shared Entity Framework model to access a SQL Azure database. Very little manual plumbing code has been required to obtain a decent amount of functionality. Adding a WCF RIA Services Web site adds another dimension: a rich editing experience.
Add a Silverlight Business Application project named “CSSCustomerEdit” (no spaces) and Visual Studio adds two projects to your solution: a Silverlight client (CSSCustomerEdit) and a Web service (CSSCustomerEdit.Web). Run the solution to see what you’re starting with. Open ApplicationStrings.resx in the CSSCustomerEdit project and change the value for ApplicationName to “Customer Success Stories” to make it look nice.
In CSSCustomerEdit.Web, copy connectionStrings from CSSModel into Web.config, add references to CSSModel and System.Data.Entity and set Copy Local to true for all the references. Then right-click on the Services folder and add a new Domain Service Class item called “CSSDomainService.” Make sure the name of this class ends in Service—without a number—to gain the full benefit of the tooling between the two projects (another “secret handshake”). Click OK to bring up the Add New Domain Service Class dialog and check all the entities along with Enable editing for each (Figure 8).
Figure 8 Adding a Domain Service Class
Notice that “Generate associated classes for metadata” is grayed out. This illustrates a tradeoff with the approach I’m espousing here. In a Silverlight Business Application, metadata classes can be used to add additional validation logic such as ranges and display defaults. However, when the Entity Framework model is in a separate project from CSSCustomerEdit.Web, the toolkit doesn’t let you add these metadata classes. If this feature is important to you or if you know you’re going to invest most of your energy in the Silverlight Business Application part of your solution, you might want to create your Entity Framework model directly in the “.Web” project instead of a separate project. You could still reference CSSCustomerEdit.Web to share the Entity Framework model in another project.
As mentioned, authentication and authorization are out of scope for this article, but it’s possible to punt and still be precise. In the CSSDomainService class, add a placeholder property named “myCompany” to represent the company the user is authorized to edit. For now, hardcode it to 1, but eventually the login process will set it to the right company for the authenticated user.
Edit the CSSDomainService class to reflect the specific use case for the project: the user can update companies but not insert or delete them (an administrator does that in the ASP.NET Dynamic Data Web site), so remove those service methods. Also, the user can only edit the single company they work for, not a list of companies, so change GetCompanies to GetMyCompany. Similarly, change GetStories to GetMyStories and ensure that the user creates stories whose CompanyId is equal to myCompany:
private int myCompany = 1; // TODO: set myCompany during authentication

public Company GetMyCompany()
{
    return this.ObjectContext.Companies.Single(c => c.Id.Equals(myCompany));
}
...
public IQueryable<Story> GetMyStories()
{
    return this.ObjectContext.Stories.Where(s => s.CompanyId.Equals(myCompany));
}

public void InsertStory(Story story)
{
    story.CompanyId = myCompany;
    story.Id = -1; // database will replace with next auto-increment value
    ...
}
WCF RIA Services shines in the creation of editable, field-oriented interfaces, but it’s important to start simply and add functionality slowly. The DataGrid and DataForm controls are powerful, but whenever I work too fast or try to add too much functionality at once, I end up messing up and having to backtrack. It’s better to work incrementally and add one UI improvement at a time.
To implement a baseline UI for this example, add references to System.Windows.Controls.Data and System.Windows.Controls.DomainServices in CSSCustomerEdit. Create new views (Silverlight Page items) for Company (singular) and Stories, then mimic the XAML from the existing Home and About views. Edit MainPage.xaml to add new dividers and link buttons (alternatively, just co-opt the existing Home and About views to use for Company and Stories).
In Silverlight development, most of the magic involves editing XAML. In CSSCustomerEdit, add namespace entries, a DomainDataSource and a DataForm for the Company view. In addition, add a DataGrid for the Stories view. In both DataForms, handle the EditEnded event to call MyData.SubmitChanges. Stories.xaml should look similar to Figure 9.
Figure 9 XAML for the Stories View
Build it … run it … it works! A rich editing experience that’s ready to evolve (see Figure 10).
Figure 10 The Stories View in Action
As before, create a new cloud service project, publish it and test it on Azure. Copy CSSCustomerEditTestPage.aspx to Default.aspx for a cleaner experience, and you’re done.
No ‘One True Way’
Visual Studio and the .NET Framework provide myriad choices for creating solutions that can run on Azure. While it’s tempting to search for the “one true way” to create the next killer app for the cloud, it’s more rewarding to leverage capabilities from multiple technologies to solve complex problems. It’s easier to code, easier to evolve and—thanks to Azure—easier to operate.
Patrick Foley is an ISV architect evangelist for Microsoft, which means he helps software companies succeed in building on the Microsoft platform. Read his blog at pfoley.com.
Thanks to the following technical expert for reviewing this article: Hanu Kommalapati
Laravel 5.7 — Controllers
Here is a perfect breakdown of what a Laravel Controller is, taken from the official Laravel documentation:
Instead of defining all of your request handling logic as Closures in route files, you may wish to organize this behavior using Controller classes. Controllers can group related request handling logic into a single class. Controllers are stored in the
app/Http/Controllers directory.
In our routes file, we may want to add more logic to a view; this could get tiresome with loads of code showing. We want to simplify how we work, and Laravel allows us to do that: we can use a dedicated controller to add more logic to our routes/views.
If you take a look at the routes file in your editor, each page is static, so what we can do is create a Pages Controller — we can simplify the code more by using the controller we are about to create.
So let’s start with the homepage route: we need to create a new line in our routes file and define the Pages Controller followed by a method/action. If you're getting confused now, hang in there; this will become clearer to you shortly.
Route::get('/', 'PagesController@home');
Comment out the code for the homepage route that has the array of tasks, reload the homepage, and you should see the following error:
Oops, we’ve not created the “PagesController” — Laravel is looking for this and can’t find it, this is easy to fix. Head over to your terminal and run the following command to create the controller.
$ php artisan make:controller PagesController
What we must not do is create the controller manually, Laravel offers the boilerplate to you, it whips up the necessary code when you create the controller through the command line. Let’s get into the habit of using the terminal and running the necessary commands, the more you use the commands, the more it’ll sit in your brain for future reference.
You will find the controller inside app/Http/Controllers/ — this is where the controllers live in Laravel. This is what the Pages Controller looks like once it has been created:
<?php

namespace App\Http\Controllers;

use Illuminate\Http\Request;

class PagesController extends Controller
{
    //
}
You can see why it was much simpler to run the command to create the controller. We wouldn't want to be importing the namespace etc manually in case we need to.
Let’s create a method for home, otherwise, the PagesController will not work.
public function home() {
}
We now need to migrate the data from the old route we initially had into the method we have just created. Delete the old route for the homepage as we won’t be needing that moving forward. This is what the end result should look like once you’ve migrated the data over to the method.
We can now migrate the rest of the routes over to the Pages Controller. We need to copy the same line where we called the home method and name each method after its page. This keeps things consistent and lets us quickly identify each method. Our routes file should now look something like this:
Route::get('/', 'PagesController@home');
Route::get('/about', 'PagesController@about');
Route::get('/contact', 'PagesController@contact');
And here we have the Pages Controller with two additional methods added that return the view. If you’ve followed the steps correctly, then the homepage and the other pages should be working just like they did without the Pages Controller created.
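The screenshots referenced above didn't survive extraction. A sketch of the finished controller is below; the view names and the placeholder task data are assumptions, since the article's figures aren't available:

```php
<?php

namespace App\Http\Controllers;

class PagesController extends Controller
{
    public function home()
    {
        // Placeholder data; the article moved an array of tasks
        // here from the old homepage route.
        $tasks = ['Buy milk', 'Write article'];

        return view('welcome', compact('tasks'));
    }

    public function about()
    {
        return view('about');
    }

    public function contact()
    {
        return view('contact');
    }
}
```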
So for any large projects that you may be working on, this is the ideal way to go about things; you may come across existing projects that follow this structure. This keeps things nicely separated.
That’s it, you now should understand how to use dedicated controllers in Laravel. In the next article, we will be looking at “Databases and Migrations”.
I hope you’ve enjoyed this article, give it a clap and share. If you have any comments, feel free to use the comments section or get in touch with me on my Twitter. Don’t forget to have a look at my previous articles; | https://medium.com/@mjcoder/laravel-5-7-controllers-921a546ca6f4?source=---------3------------------ | CC-MAIN-2019-13 | refinedweb | 694 | 67.49 |
I'm very new to Python and am trying to use Google's Optimization (or-)tools for solving the Travelling Salesman Problem (TSP):
I can run it and get a nice output for the random input numbers of the example, but I cannot find any way to input data myself. All I want to do is input a file/list with x/y coordinates. On the site
they show how the input file should/can look like, but how do I implement and run that with the tsp.py?
class RandomMatrix(object):
  """Random matrix."""

  def __init__(self, size):
    """Initialize random matrix."""
    rand = random.Random()
    rand.seed(FLAGS.tsp_random_seed)
    distance_max = 100
    self.matrix = {}
    for from_node in xrange(size):
      self.matrix[from_node] = {}
      for to_node in xrange(size):
        if from_node == to_node:
          self.matrix[from_node][to_node] = 0
        else:
          self.matrix[from_node][to_node] = rand.randrange(distance_max)

  def Distance(self, from_node, to_node):
    return self.matrix[from_node][to_node]

matrix = RandomMatrix(FLAGS.tsp_size)
matrix_callback = matrix.Distance
if FLAGS.tsp_use_random_matrix:
  routing.SetArcCostEvaluatorOfAllVehicles(matrix_callback)
else:
  routing.SetArcCostEvaluatorOfAllVehicles(Distance)
It seems that Matrix is a "2-D array" that is indexed by its from_node and to_node. The nodes are numbered 0, 1, ..., up to number of nodes - 1.
What is stored in the matrix is the pair-wise distance between the nodes. For example, if you have three nodes, then you can set your matrix to be something like
[[0, 3, 5], [3, 0, 7], [5, 7, 0]],
which represents a "triangle" map with edge lengths 3, 5, and 7.
Does this make sense to you? | https://codedump.io/share/WFYRYEhQ6GZE/1/how-to-input-data-in-google-or-tools39s-tsp-travelling-salesman-prob | CC-MAIN-2017-04 | refinedweb | 255 | 52.15 |
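To answer the concrete question of reading x/y coordinates instead of random values, you can replace RandomMatrix with a class that precomputes Euclidean distances but keeps the same Distance(from_node, to_node) callback the routing model expects. A sketch follows; the file format and integer rounding are assumptions (or-tools works with integer arc costs):

```python
import math

class CoordinateMatrix(object):
    """Pairwise distance matrix built from a list of (x, y) coordinates."""

    def __init__(self, coords):
        self.matrix = {}
        for i, (x1, y1) in enumerate(coords):
            self.matrix[i] = {}
            for j, (x2, y2) in enumerate(coords):
                # Integer Euclidean distance between city i and city j.
                self.matrix[i][j] = int(round(math.hypot(x1 - x2, y1 - y2)))

    def Distance(self, from_node, to_node):
        return self.matrix[from_node][to_node]

# Coordinates could be read from a file with one "x y" pair per line, e.g.:
#   coords = [tuple(map(float, line.split())) for line in open('cities.txt')]
coords = [(0, 0), (3, 0), (0, 4)]
matrix = CoordinateMatrix(coords)
print(matrix.Distance(1, 2))  # 5: the hypotenuse of the 3-4-5 triangle
```

You would then pass matrix.Distance to routing.SetArcCostEvaluatorOfAllVehicles, exactly as the original code does with matrix_callback.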
Lenovo K3 Note has generated a considerable amount of popularity in the smartphone market. The K3 Note comes in a neat, standard packaging. Lenovo has not tried to make its device look alluring, but it somewhat works in its favour. The unassuming facade gives way for the K3 Note to be a well-functioning device that is adept at doing what a smartphone is meant to, without any frills. Does it live up to its initial impression?
BUILD and DESIGN
As mentioned before, there is nothing exceptional about the Lenovo K3 Note’s design. It doesn’t overbear its 5.5-inch screen size, it is reasonably thin at 8mm, the 3.5mm audio port and the microUSB/charging port are at the top, and the volume rocker and power/unlock button to the right side. There is no metal on its outer casing, but it does not feel cheap in terms of overall build and the quality of the plastic used.
Lenovo K3 Note's build: Not the sturdiest, but still good
The only big qualm that you will have about the physical aspect of the device is the lack of backlight on the touch function keys. Until you get very used to using the device, maneuvering it in dimly lit or dark environments may get difficult. Yes, the arrangement of the function keys is the standard grouping of multifunction - Home - return (from left to right), which should be of relative help if you happen to have used cellphones with this key arrangement before. Otherwise, this is a considerable setback.
There are no hot-swappable SIM slots, and both the SIM slots and the microSD expansion slot need to be accessed by opening the back cover. With more smartphones going with the sealed back panel build, the Lenovo K3 Note’s detachable back panel might sit well with a lot of people, who would prefer to have the ability of being able to access the battery pack.
Back panel: Dual-SIM slots, battery pack and microSD expansion slot
The build is decent, although other competing smartphones like the Meizu M2 Note feature an outer shell that is sturdier, and better to look at. The Lenovo K3 Note misses glass protection, and although you might make it through a couple of falls with it, we would not be comfortable with letting our smartphone face the world without the screen protection.
Overall, the Lenovo K3 Note’s outer facade can be called decent, at best. The simplistic approach may work for users who focus on core functionality, and while it does work for us, it misses out on the added bonus that other devices with better looks and stronger builds present.
DISPLAY and UI
The Lenovo K3 Note scores high on display quality, and the originality of colours on it. The display is amply bright, with wide viewing angles that do not inflict any colour or hue shift at obtuse angles nearing the 180-degree mark. The only flaw that we could point at with the K3 Note’s display is the reflective outer coating, which makes viewing content on the phone under direct sunlight quite difficult. Nevertheless, the bright display prevents it from becoming unusable, but you might find the reflection of your own head jarring.
Lenovo K3 Note display: Bright, crisp, but reflective
In terms of indoor viewing, the Lenovo K3 Note is as good as you can expect. We played a few 1080p films on it, and enjoyed the experience. There is never any hint of interpolation, colours remain true to the source, and the screen manages backlight intensity well enough to stay bright without becoming harsh on the eyes. The 5.5-inch IPS LCD panel is large enough for viewing movies on the go, and the 1080p resolution keeps playback crisp. The screen occupies 72% of the front panel area, and the bezels are not overbearing. Additionally, touch response is very accurate. We did not experience any missed taps and swipes, and the responsiveness makes the phone a very handy tool when typing fast or playing games.
L-R: Alarm, Home screen, Notification panel
L-R: Camera, Dialer, Audio control panel
Lenovo has loaded its custom Vibe UI v2.5 on top of Android Lollipop v5.0. There is no separate app drawer alongside the Home pages, and the custom icons are decent to look at. However, the phone comes with an extensive amount of pre-loaded applications including Evernote, Asphalt 8, Guvera Music, Skype and Flipkart, along with Lenovo’s own Theme Center, SYNCit and SHAREit. While a number of these applications can be deleted, many are locked. With the amount of apps pre-loaded and the free-flowing graphic animations, there is a heavy load on resources, because of which you will seldom see more than 1100MB of free memory. Despite the load on memory resources, most applications run without any lags and stutters.
PERFORMANCE and BATTERY
It is here that the Lenovo K3 Note makes most of its money, and almost loses it too. The K3 Note set the benchmark test scores for its category, returning results higher than other devices by a considerable margin. Looking at its performance in terms of daily applications and services, the K3 Note lives up to those scores. Calling, messaging, browsing, music/video streaming and light gaming are smooth. With extensive usage, games like Subway Surfers and Jetpack Joyride showed a slight strain, but there was no other gap between its benchmark scores and actual performance.
One of the biggest selling points of the Lenovo K3 Note is its superior specs-to-price ratio. It runs on a MediaTek MT6752 octa-core chipset clocked at 1.7GHz, along with Mali T760-MP2 GPU, 2GB of LPDDR3 RAM, 16GB of internal storage, and a 3000mAh battery. It features a 13-megapixel OmniVision PureCel OV13850 sensor primary camera with an f/2.0 lens and dual-tone LED flash, and a 5-megapixel front camera. Theoretically, Lenovo has presented the K3 Note as a formidable package, strong enough to ruffle feathers in its own category, and even a few above the 10k price market. In terms of the specs-to-price ratio, the Lenovo K3 Note is among the best, with the likes of Meizu M2 Note, Yu Yureka Plus and the Xiaomi Mi 4i for company.
Network retention quality is good - there were no drops in call, data or WiFi networks. In-call audio is pristine, and there are no aberrations in incoming audio via the earpiece, and the main microphone. The secondary microphone on the back, while not being the best in terms of noise cancellation and clarity of recorded audio, suffices when you need to make that emergency record, and is actually decent in closed, indoor environments.
The Lenovo K3 Note’s performance in terms of everyday applications is one of the best in its class. Messaging, internet-based social media applications and services, lightweight games, music streaming, YouTube and the likes work without any glitch. Surfing Facebook and Twitter timelines with simultaneous music playback through Guvera and updating apps in the background did not lead to any stutters, and running multiple applications at the same time, even with relatively low amount of RAM available, did not lead to any difficulty in terms of regular usage. So far, the K3 Note was really impressive.
The death knell lay in opening the pre-installed Asphalt 8. The K3 Note is meant to get your essentials right, and in the process misses out on optimising gaming performance. About five minutes into the first race, Asphalt 8 stuttered to the point of being unplayable. The same occurred with Marvel: Contest of Champions, which performed even worse, if that was possible. Heavy gaming on the Lenovo K3 Note is practically impossible. Asphalt 8 remained playable for at most 15 minutes, and even then with heavy stutters. The same 15-minute limit applies to continued photo editing with Adobe Photoshop Express, which becomes extremely slow, taking a long time to apply filters and save photos to the device. The K3 Note, unfortunately, is just not for heavy usage.
It does pick up with the audio, though. The Lenovo K3 Note packs in a Dolby Atmos audio chip, delivering stellar audio quality within its budget segment. Audio quality through headphones is excellent - put your favourite pair of headphones on, and you will not have a single ounce of disappointment. The speaker on the rear is loud, and surprisingly clear for a cellphone’s native speaker. We played a host of different tracks, ranging from streaming online at 64Kbps, to high fidelity 320Kbps audio tracks from Pink Floyd’s The Wall. Well done, Lenovo. Well done, Dolby.
While the K3 Note’s battery life is not really commendable, it is not bad, either. On an average day’s usage of making calls, messaging, occasional music playback and about one hour of playing White Tile, the K3 Note returned home with almost 15% of charge left. It does what every smartphone around it is doing - last an entire day. Our battery test returned a battery life of 8 hours and 20 minutes - standard performance. Continued gaming was difficult, but the battery drain was low. The K3 Note drained 11% of the battery in 30 minutes of gaming - decent, considering that most phones discharge the first 20% quite rapidly. As mentioned before, the Lenovo K3 Note is all about getting the essentials correct. It does miss out on high-octane performance, but with the main focus being on users with a utility-centric approach, the K3 Note works just fine.
CAMERA
In broad daylight, the Lenovo K3 Note takes decent photographs. Low light photography is not the best around, with images often appearing blurry and pixelated; the main culprit is the K3 Note’s weak image stabilisation. The shutter, along with the camera application, is fast, light and responsive. Video recording is reasonably smooth, and there is little interpolation unless you shoot fast-moving objects or pan the camera quickly. The front camera is very good in well-lit environments, and decent even in dimly lit conditions. Video recording with the front-facing camera is also decent, making the Lenovo K3 Note’s overall camera unit a well-performing one for its price.
The K3 Note's camera performs well in certain conditions, but fails in low light photography
Check out photographs taken by the Lenovo K3 Note's camera in the album below:
BOTTOMLINE
The Lenovo K3 Note is one that switches between BMWs and Harbour Line Second Class.
https://www.digit.in/reviews/mobile-phones/lenovo-k3-note-review-5723.html
slic3r-1.1.7-4.fc23 fails to build with perl-5.22 because a test fails:
/builddir/build/BUILD/Slic3r-1.1.7
+ cd -
+ SLIC3R_NO_AUTO=1
+ perl Build.PL installdirs=vendor
# Failed test 'W direction'
# at t/angles.t line 47.
# got: '3.14159265358979'
# expected: '0'
# Looks like you failed 1 test of 34.
t/angles.t ...........
Dubious, test returned 1 (wstat 256, 0x100)
Failed 1/34 subtests
Note: This only happened on i686 and the test passed on x86_64.
The code that gets executed here is (simplified):
C++:
#define PI 3.141592653589793238
...
double atan2 = atan2(0-0,0-10);
// note: atan2 is mathematically speaking an actual pi now
return (atan2 == PI) ? 0
: (atan2 < 0) ? (atan2 + PI)
: atan2;
The routine returns 0 on x86_64 because the (atan2 == PI) condition passes.
On i686 it returns something else (3.14159265358979 ?) as the condition somehow doesn't pass.
But when I take the failing test:
is line_direction([ [10, 0], [0, 0] ]), (0), 'W direction';
and change it to:
is line_direction([ [10, 0], [0, 0] ]), (PI), 'W direction';
the test passes on i686, so the routine returned PI, which should not be possible.
It seems that the comparison (atan2 == PI) was false in C++, but true in Perl.
This sample program however:
#include <iostream>
#include <iomanip>
#include <math.h>
#define PI 3.141592653589793238
int main(int argc, char **argv)
{
double pi = atan2(0, -10);
std::cout << std::setprecision(35) << pi;
if (pi == PI) std::cout << " == ";
else std::cout << " != ";
std::cout << PI << std::endl;
return 0;
}
Produces:
3.1415926535897931159979634685441852 == 3.1415926535897931159979634685441852
in rawhide i686 mock. My head hurts :(
Rounding error? Is the manually defined constant PI representable on i686? Could you consider using M_PI constant provided by <math.h>?
Also if FPU is involved, there are different modes for handling saturation and other corner cases. Maybe perl or slic3r sets them in different mode that the other expects.
Probably a rounding error, yes.
It's not representable on any arch (considering i686 and x86_64), also as far as I know doubles are represented the same way on both.
Using M_PI doesn't help.
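For what it's worth, the usual defensive fix for this class of bug is a tolerance-based comparison instead of exact floating-point equality against a pi constant. A sketch in Python (the function name and epsilon are illustrative, not Slic3r's actual code):

```python
import math

# Sketch of a direction-angle routine that avoids exact equality
# against a pi constant; with an epsilon, the x87 extended-precision
# vs. 64-bit double discrepancy no longer changes the result.
def wrap_direction(y, x, eps=1e-9):
    angle = math.atan2(y, x)
    if math.isclose(angle, math.pi, abs_tol=eps):
        return 0.0          # treat "effectively pi" as 0
    if angle < 0:
        return angle + math.pi
    return angle

print(wrap_direction(0 - 0, 0 - 10))  # 0.0 regardless of FPU precision mode
```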
Recent version 1.2.9 does not suffer from this problem anymore. | https://bugzilla.redhat.com/show_bug.cgi?id=1231263 | CC-MAIN-2021-49 | refinedweb | 363 | 78.04 |
RIALogger v1.3 update to AIR Beta 3
January 9th, 2008
The RIALogger has been updated to the latest AIR release Beta 3.
You can get more information about the RIALogger and the custom logging target here.
Entry Filed under: Adobe Flex
5 Comments Add your own
1. Sean Wesenberg | 2008-01-15 at 12.59 pm
Thanks!
2. Dusty | 2008-01-27 at 2.42 pm
Any chance we can get a version with the trace("lc.send") call commented out?
Thanks!
3. Dusty | 2008-01-27 at 2.50 pm
Also, I’m getting this error:
Category [ [object DataPointVO]] has illegal characters.
when I try to Logger.info(this, messageString)
Any idea what I’m doing wrong?
4. Dusty | 2008-01-27 at 3.17 pm
So, just to prove that I can read source code, on Logger.as, line 322, you test for DisplayObject or Class. The problem is that for some reason “myObject is Class” == false, and I can’t understand how this could be. I’m using a standard Cairngorm model/VO:
public class DataPointVO implements IValueObject
And yet, dataPointVO is Class == false :-(
For the record, getFullyQualifiedClassName(this) returns the correct:
com.redclay.CAISDataVisualization.vo::DataPointVO
So.. how can I get my object to identify as Class? (so I don’t have to change the toString() functions on some of my other classes that I plan on using this with)
Thanks!
-d
5. Renaun Erickson | 2008-01-28 at 12.13 pm
To help with the confusion of instances of classes and the usage of “this” I started to just use the Class name it self. So in your example instead of using myObject just use DataPointVO.
There is an updated renaun_com_Logger.zip file. It has an updated example application with a custom class in it.
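The distinction Renaun describes — an instance of a class is not itself a Class — is not unique to ActionScript. A Python analogue, purely for illustration:

```python
# Illustrative Python analogue (not ActionScript): an *instance* of a
# class is not itself a class object, which is why an "is Class" style
# test fails on it while succeeding on the class itself.
class DataPointVO:
    pass

vo = DataPointVO()
print(isinstance(vo, type))           # False -- vo is an instance
print(isinstance(DataPointVO, type))  # True  -- the class object itself
```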
Working with Search Indexes
This topic provides an overview of the index building process and discusses how to build an index.
In Elasticsearch, an index is a logical namespace that maps to one or more primary shards and can have zero or more replica shards.
Before end users can submit search requests against the Search Framework deployed objects, the search indexes must first be built on the search engine. Prior to the index being built, a deployed search definition is an empty shell, containing no searchable data. A search index needs to be built for search definitions.
An Application Engine program, PTSF_GENFEED, builds the search index data and pushes it to Integration Broker, which makes it available to Elasticsearch.
When an attachment is specified in a search definition, PeopleSoft Search Framework transfers the attachment data directly to the search engine using cURL and libcurl libraries and does not use the Integration Broker.
Creating a search index with the Search Framework involves the following technologies:
Search Framework
Application Engine
Query/Connected Query Manager
Process Scheduler
Integration Broker
cURL and libcurl Libraries
Once you invoke a search index build from the Build Search Index page, the system automatically completes these general steps:
The Schedule Search Index page initiates the PTSF_GENFEED Application Engine program.
The Pre Processing Application Engine program defined for the search definition runs.
PTSF_GENFEED Application Engine program runs the query (PeopleSoft Query or Connected Query) associated with the search definition.
Multiple PTSF_GENFEED Application Engine programs will run for a Connected Query in cases where users specify the number of partitions. See Partitioning Application Data in the Search Index Build Process for more information.
The query output becomes a data source for PeopleSoft Search Framework.
The PeopleSoft Search Framework converts the query output to the JSON format and pushes the data to the Integration Broker queue, and the Deletion query (for incremental indexing) defined for the Search Definition runs.
During this step, the following steps are also executed:
If it is full indexing, a delete request is sent to Elasticsearch to clear any indexed data that is present.
On clearing indexed data, the data to be indexed is pushed to an Ordered IB Queue (PTSF_ES_SEND_Q) partitioned with segments.
IB Queue processes the data asynchronously based on the partition.
Each partition pushes the data in JSON format to Elasticsearch synchronously (that is, waits for response from Elasticsearch) and the response is read and acknowledged in PeopleSoft through exception handling.
The Deletion Query gets triggered when an incremental indexing is attempted after a full index was built.
Note: In the case of a search definition with attachment, PeopleSoft Search Framework downloads the attachment and encodes the attachment data in the JSON format and transfers the data to the search engine directly using cURL and libcurl libraries without using the Integration Broker.
The Post Processing Application Engine program defined for the search definition runs.
A run control ID is bound to both a search definition and a search instance.
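As an illustration of the JSON push described in the steps above — purely a sketch, since the real transfer happens through Integration Broker — the NDJSON body that Elasticsearch's _bulk endpoint expects can be assembled like this (the index name and document fields are made up):

```python
import json

def build_bulk_body(index, docs):
    """NDJSON payload for Elasticsearch's _bulk API: an action line
    followed by a source line per document, newline-terminated."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({"index": {"_index": index}}))
        lines.append(json.dumps(doc))
    return "\n".join(lines) + "\n"

print(build_bulk_body("success_stories", [{"title": "Story A"}]))
```

A real client would POST this body to `/_bulk` with the `application/x-ndjson` content type and check each item in the response for errors, which is roughly what the exception handling in step 3 above does on the PeopleSoft side.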
Access the Build Search Index page. (Select PeopleTools, Search Framework, Search Admin Activity Guide, Administration, Schedule Search Index.)
Image: Build Search Index page
This example illustrates the fields and controls on the Build Search Index page. You can find definitions for the fields and controls later on this page.
To build a Search Framework search index:
Select PeopleTools, Search Framework, Search Admin Activity Guide, Schedule Search Index.
Enter a run control ID.
On the Build Search Index page, select the appropriate options.
Select Run.
Use Process Monitor to verify program completion and success.
Note: Once the Run Control is executed, the page becomes read-only. To change any parameters of the page, you need to create a new run control.
After the PTSF_GENFEED program begins to run, you can view the details which display on the Build Search Index page for that run control ID.
Note: The Details section shows the results of the most recent feed generation for this Search Definition. It may not be the same run control ID as the one you selected. If the run control ID differs from the one you selected, it will be highlighted.
Image: Previous schedule details
This example illustrates the fields and controls on the Previous schedule details. You can find definitions for the fields and controls later on this page.
Select the Error link to view the details of exceptions with respect to attachment processing.
Indexing of search definitions that are based on connected query and that have reasonably large data volume can pose performance issues. To overcome such performance issues, PeopleSoft Search Framework enables you to partition the application data using the Date Range option. After the date range and number of partitions are specified, the Partition Manager partitions the date range into equal date spans based on the number of partitions. The search index build process creates multiple PTSF_GENFEED Application Engine programs to run the connected query on the partitioned data. The search index build process also creates multiple run control IDs based on the number of partitions that enables you to re-run any of the run control IDs associated with a partition. The partitioning of application data is performed based on the date range and the number of partitions that are specified.
You can specify the number of partitions, but you cannot control the date span within a date range that is allotted for each partition. The Partition Manager allots date span for the number of partitions specified by you.
Partitioning of application data works best when the volume of data is evenly spread over the specified date range. The purpose of partitioning is to divide the data volume into chunks and to run a connected query on the partitioned data, so when you plan to use the partitioning option, you must consider whether the search documents for a specific search definition are spread evenly over a specific date range. That is, if within a date range, for example, 01/01/2017 to 03/31/2017, a huge volume of search data is concentrated in the month of March and the other two months have few attachments, and you specify 3 partitions, then the partitioning may not be very effective because only one partition may get the bulk of the search documents, so you may still face performance issues.
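The Partition Manager's exact algorithm is internal to PeopleSoft, but the equal-date-span splitting it performs can be sketched as follows (a hypothetical re-implementation, using the 01/01/2017–03/31/2017 range from the example above):

```python
from datetime import date, timedelta

def partition_date_range(start, end, n):
    """Split [start, end] into n contiguous, nearly equal date spans
    (a hypothetical stand-in for the Partition Manager's logic)."""
    total_days = (end - start).days + 1
    spans, cursor = [], start
    for i in range(n):
        # Distribute any remainder days over the first partitions
        size = total_days // n + (1 if i < total_days % n else 0)
        span_end = cursor + timedelta(days=size - 1)
        spans.append((cursor, span_end))
        cursor = span_end + timedelta(days=1)
    return spans

for s, e in partition_date_range(date(2017, 1, 1), date(2017, 3, 31), 3):
    print(s, "->", e)
# 2017-01-01 -> 2017-01-30
# 2017-01-31 -> 2017-03-01
# 2017-03-02 -> 2017-03-31
```

Note how the split is purely by calendar days: if most search documents fall in March, the third partition still receives the bulk of the work, which is exactly the skew described above.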
The naming convention followed for run control IDs that are generated for each partition is as follows: _<USERID>_RUNCNTL_<TIME>_<SEQUENCE NUMBER>
For example, _QEDMO_RUNCNTL_020022_2
Note: Partitioning is available for both full and incremental indexing. However, Oracle PeopleSoft recommends that you use partitioning only with full indexing.
Note: When you specify a date range to implement partitioning of search data, the date span entered in the Full Indexing Criteria option is overridden.
After you use the partitioning option and run the indexing process, the Build Search Index page displays this information.
Image: Date Range grid
This example illustrates the fields and controls on the Date Range grid. You can find definitions for the fields and controls later on this page.
| https://docs.oracle.com/cd/E87544_01/pt856pbr1/eng/pt/tpst/task_WorkingWithSearchIndexes.html | CC-MAIN-2019-47 | refinedweb | 1,169 | 50.46 |
Imagine that you receive a requirement to calculate the aggregations like average on a range of percentiles and quartiles, for a given dataset. There are two ways to approach this problem:
- Calculate actual percentile values and aggregate over the window.
- Fit a cumulative distribution and then aggregate over the window.
The first method has a normalizing effect on the data since it interpolates values. To elaborate, let us take an example in Excel.
Here, cells A1:A7 contain the dataset. Moreover, B1 is the 75th percentile, which is not an actual value in the dataset. As mentioned earlier, this has a normalizing effect on the averages. We will understand this with the same example. Let us take a scenario where we want the average of the top quantile values. Below are the values from the 75th to the 100th percentile for the same dataset.
You can clearly see that averaging all the interpolated values from the 75th to the 100th percentile brings the average close to 49.06. However, if we consider only the two actual values from the top quantile present in the dataset, we get (100+25)/2, i.e. 62.5.
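Since the Excel screenshots did not survive extraction, the same contrast can be reproduced in plain Python. The dataset below matches the one used later in the Spark example, and the helper mimics Excel's linear-interpolation percentile:

```python
# The dataset from the Excel example / the Spark DataFrame (id == 1).
data = sorted([1, 1, 3, 6, 9, 25, 100])

def percentile(sorted_vals, p):
    """Linear-interpolation percentile, as in Excel's PERCENTILE.INC."""
    idx = p / 100 * (len(sorted_vals) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] + frac * (sorted_vals[hi] - sorted_vals[lo])

# Approach 1: average the *interpolated* values at percentiles 75..100.
interp_avg = sum(percentile(data, p) for p in range(75, 101)) / 26
print(round(interp_avg, 2))        # 49.06

# Approach 2: average the *actual* values at or above the 75th percentile.
cut = percentile(data, 75)         # 17.0 -- an interpolated value
actual = [v for v in data if v >= cut]
print(sum(actual) / len(actual))   # 62.5
```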
Now, one might be tempted to say that the first approach is better since it nullifies the effect of outliers like the value ‘100’. However, in industry, for appropriate assessment, one needs to consider the actual values rather than the interpolated values. Hence, here comes the concept of the Cumulative Distribution Function.
Cumulative Distribution Function
A cumulative distribution function for a continuous variable is the probability that the variable takes a value less than or equal to that value. For more on cumulative distribution, read this blog. However, it is important to note that the probability mentioned above is the ‘actual’ percentile value of the random value taken by the variable. To elaborate, if the CDF value is 0.8, 80 per cent of values in the dataset lie below that number.
Now, in the case of the example mentioned above, if you want to calculate the average of all the values lying in the top quantile, all you need to do is fit a CDF and take values greater than or equal to 0.75. This is very much similar to TOP 25 per cent in SQL. However, the challenge here is that we need to do this on Spark DataFrames. Here window functions come to our rescue.
Window Functions in Spark
Window functions help us perform certain calculations over a group of records called a ‘window’. These are also called as ‘over’ functions since they are built on lines of Over clause in SQL. Furthermore, they entail ranking, analytical and aggregate functions. For more details about window functions, refer to this document.
In this section, we will use the analytical function known as cume_dist() to fit a cumulative distribution on a dataset.
cume_dist for Cumulative Distribution
The cume_dist computes/fits the cumulative distribution of the records in the window partition, similar to SQL CUME_DIST(). The cume_dist assigns the percentile value to a number in the window/group under consideration. To demonstrate this, let us create a Spark DataFrame similar to the data above in Excel.
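As a plain-Python reference for what cume_dist computes — each value is assigned the fraction of rows in its group that are less than or equal to it (ties share the same rank):

```python
def cume_dist(values):
    """Empirical CDF of each value within its group: the fraction of
    values less than or equal to it (ties share the same rank)."""
    n = len(values)
    return [sum(1 for u in values if u <= v) / n for v in values]

group1 = [1.0, 1.0, 3.0, 6.0, 9.0, 25.0, 100.0]
for v, p in zip(group1, cume_dist(group1)):
    print(v, round(p, 4))
```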
Step 1: Create a Dataset
from pyspark.sql.functions import pandas_udf, PandasUDFType

df = spark.createDataFrame(
    [(1, 1.0), (1, 1.0), (1, 3.0), (1, 6.0), (1, 9.0), (1, 25.0), (1, 100.0),
     (2, 50.0), (2, 75.0), (2, 100.0), (2, 125.0)],
    ("id", "v"))
In this dataset, we have two columns id and v. The column ‘v’ consists of the values over which we need to fit a cumulative distribution. The id = 1 values consist of values similar to the excel example above. Moreover, we have 4 additional values with id = 2.
Step 2: Create a window
Next comes the important step of creating a window. The window creation consists of two parts viz. Partition By and Order By. Partition by helps us create groups based on columns, while order by specifies whether the analytical value should be based on ascending/descending order.
from pyspark.sql.window import Window

w = Window.partitionBy(df.id).orderBy(df.v)
In this case, we are partitioning by the column id, while the column ‘v’ is in ascending order(default). Hence, the percentile values will be based on the ascending order.
Step 3: Calculating the CDF
After creating the window, use the window along with the cume_dist function to compute the cumulative distribution. Here is the code snippet that gives you the CDF of a group i.e. the percentile values of all the values ‘v’ in the window partitioned by the column ‘id’.
from pyspark.sql.functions import cume_dist

cdf = df.select(
    "id",
    "v",
    cume_dist().over(w).alias("percentile")
)
Let us display the results of the CDF.
In the above results, it is evident that we have a percentile value corresponding to every value in the ‘v’ column.
Step 4: Top quantile average
Now, let’s go back to our original stated objective i.e. calculating the average of all the values above the 75th percentile. Hence, let us first filter out all the values above the 75th percentile.
cdf = cdf[cdf['percentile']>0.75]
Finally, we calculate the average of top quantile with the below code snippet.
cdf.groupby('id').agg({'v':'avg'}).show()
The result is as follows:
It is evident here that for id = 1, the top quantile average is (100+25)/2 i.e. 62.5. As expected!
Conclusion
We hope that you find this article useful. Please note that this is only for information purposes. We don't claim its completeness. Kindly try it for yourself before adopting it.
160025 / SceneView crash
Greetings,
I have a critical issue with the SceneView on the newest beta. Here is a test script; it crashes Pythonista upon running. Is there something I'm missing?
# coding: utf-8
import ui
from scene import *
import console

class TestScene (Scene):
    def draw(self):
        # display hint --------
        tint(1, 1, 1)
        background(1, 1, 1)
        fill(1, 0, 0)
        for touch in self.touches.values():
            ellipse(touch.location.x - 25, touch.location.y - 25, 50, 50)

class TestView (ui.View):
    def __init__(self, width=1024, height=1024):
        self.bg_color = 'white'

w, h = ui.get_screen_size()
canvas_size = max(w, h)
sv = TestView(canvas_size, canvas_size)
sv.name = 'Test'
sv.present('fullscreen')
adv2 = SceneView()
adv2.scene = TestScene()
adv2.paused = False
adv2.bounds = sv.bounds
Thanks for any help on this issue.
Thanks, I'll look into this. It seems that SceneView is somewhat buggy in the current beta – it won't be fixed in the next build (already uploaded), but I have it on my todo list.
Thanks for your reply, keep up the good work !
import scene
scene.SceneView().bounds = (0, 0, 100, 100)
Will crash.
Thanks for the fix, build 160032 works perfectly ! | https://forum.omz-software.com/topic/2108/160025-sceneview-crash/5 | CC-MAIN-2020-45 | refinedweb | 193 | 71.61 |
Overview
You can create object node (OBJ) assets that are defined by a Python script instead of a subnetwork of nodes (File ▸ New operator type, click Python type, set Network type to "Object"). This example defines a Python object node that gets its transformation information from a file on disk.
This example shows how you can get raw transformation information, for example generated from another software package or from a hardware device, into Houdini using Python.
You can load the pre-made assets from
$HFS/houdini/help/files/hom_cookbook/PythonObjects.hda
Cook implementation
Open
$HFS/houdini/help/files/hom_cookbook/xforms_from_disk.hip.
Click Play.
The xforms_from_disk1 object node loads its transforms from the file motion.csv in the same directory. When the object node cooks, it looks up the transformation matrix corresponding to the current time from the file, and sets its transform to that matrix.
Right-click the xforms_from_disk1 node and choose Type properties to open the asset’s type properties window.
We created a parameter in the Parameters tab for the file name, just as we would for a normal asset.
The Code tab contains the Python code implementing the node’s logic.
# This code is called when instances of this object cook.

# Get the Node object representing this node
this = hou.pwd()

# Try to get the cached transforms from this node's cached data.
# We could be fancier here by checking if the file has changed
# since the cache was saved.
xforms = this.cachedUserData("diskxforms")
if not xforms:
    # Read and cache the transform matrices from the file,
    # using a function defined in this asset type's Python module.
    xforms = this.hdaModule().reload(this)
if xforms:
    # Get the transform for the current frame
    index = max(int(round(hou.frame())), 1) - 1
    if index < len(xforms):
        xform = xforms[index]
    else:
        # If the index is after the last transform in the file, hold the
        # final transform in the file
        xform = xforms[-1]
    # Construct a matrix object from the 16 floats, and set this object's
    # transform to the matrix
    this.setCookTransform(hou.Matrix4(xform))
Reload function
You’ll often want to move functionality into helper functions/classes in the asset type’s Python module to keep the node’s actual code cleaner. In this case, we've put the function to reload the file in the asset module (on the Scripts tab).
def reload(this):
    """
    Reloads the transformation matrices from the disk file, and caches
    them in the node's cached user data.
    """
    # Get the name of the file from my parameters
    filename = this.evalParm("file")
    xforms = []
    with open(filename) as f:
        # The file is a simple CSV table of floats separated by commas,
        # 16 floats per line
        for line in f:
            # Strip any whitespace/newline from the line
            line = line.strip()
            # Split the string on commas to pull out the numeric substrings
            ns = line.split(", ")
            # Convert the strings to floating point numbers
            fs = tuple(float(n) for n in ns)
            assert len(fs) == 16
            # Add it to the list of matrices
            xforms.append(fs)
    # Cache the transforms, and return them so the cook code can use them
    this.setCachedUserData("diskxforms", xforms)
    return xforms
The asset’s parameter interface has a Reload button. The callback script on the button simply calls the helper function in the asset’s Python module:
hou.pwd().hdaModule().reload(hou.pwd()) | http://www.sidefx.com/docs/houdini/hom/cb/pythonobj.html | CC-MAIN-2018-05 | refinedweb | 540 | 62.68 |
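For testing the asset without an existing data file, a motion.csv in the expected format — 16 comma-separated floats per line, one 4×4 matrix per frame — can be generated with a short script. This generator is hypothetical (not part of the asset), and the translation values are arbitrary:

```python
# Hypothetical generator for a motion.csv test file: one 4x4 transform per
# frame, 16 floats joined with ", " (the separator reload() splits on).
# Houdini's hou.Matrix4 takes the 16 values row-major, with translation
# in the bottom row.
def translation_matrix(tx, ty, tz):
    return (1, 0, 0, 0,
            0, 1, 0, 0,
            0, 0, 1, 0,
            tx, ty, tz, 1)

with open("motion.csv", "w") as f:
    for frame in range(240):
        # Slide 0.1 units along X per frame (arbitrary test motion)
        xform = translation_matrix(frame * 0.1, 0.0, 0.0)
        f.write(", ".join(str(float(n)) for n in xform) + "\n")
```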
Object oriented programming (OOP) gets a lot of hype. This lecture explores what OOP is, why it generates so much excitement, where it works well, and where it does not.
Practitioners associate the term Object Oriented Programming with a wide variety of concepts. Part of the reason that OOP gets so much hype is that some developers think that you need object-oriented programming to reap certain benefits. Here are some concepts associated with OOP:
this in Java or C#). Then each procedure definition can choose whether it is appropriate to use methods from the bundle to accomplish subtasks.
I will try to address all of the ideas above during this lecture.
You can find a similar list (with some differences in terminology and perspective) at the start of Jonathan Rees's "Object-Oriented" article.
Stephen Kell has his own list of what composes OOP. (I may add more commentary as I reflect further.)
Let's set the stage by reviewing our usual development methodology, "Applicative Programming"; then I will discuss how that contrasts with development in an object-oriented language.
An applicative programming style has two main stages: we first determine what the data definition is, and then we write functions that manipulate that data.
This strategy makes sense for a class on programming languages, where most functions process abstract-syntax trees for a particular language: interpreters, type-checkers, and various program transformers. Often programming language developers will be working with a fixed grammar for the language in question; the language is fixed, but the set of operations we want to perform on terms in the language is growing over time.
Note that this strategy makes it difficult to change the data definitions later; every change to the data definition may require review of all the functions that process instances of that data.
A data type consists of two parts: an interface and an implementation.
For an abstract data type, the interface consists of a set of operations that clients are allowed to use when manipulating values of the abstract data type. With abstract data types, clients do not need to know how the data is represented. That means the representation can be changed without breaking any client code. In other words, client code is representation-independent.
A queue represents a sequence of values that are delivered in "first-in first-out" (FIFO) order; the first element enqueued will be the first element removed, the second element enqueued will be the second element removed, and so on.
We will write a sequence of values mathematically using the notation [a, b, c, …, x, y, z]; thus we will denote an implementation's representation of such a sequence using the notation ⌈[a, b, c, …, x, y, z]⌉. (Note that the ellipsis notation is somewhat informal; one important detail is that we use [a, …, k] to denote a potentially empty sequence, but [a, b, …, k] and [a, …, k, l] both denote non-empty sequences.)
empty : → Queue
snoc : Queue × Val → Queue
isEmpty : Queue → Boolean
head : Queue → Val
tail : Queue → Queue
(empty) = ⌈[]⌉
(snoc ⌈[a, …, w]⌉ x) = ⌈[a, …, w, x]⌉
(isEmpty ⌈[]⌉) = #t
(isEmpty ⌈[a, b, …, w]⌉) = #f
(head ⌈[a, b, …, w]⌉) = a
(tail ⌈[a, b, …, w]⌉) = ⌈[b, …, w]⌉
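The specification above can be realized directly. As an illustration (in Python rather than the lecture's Scheme; names mirror the spec), here is a "simple"-style implementation in the spirit of queue1, where a queue is just a list with its oldest element first:

```python
def empty():
    return []

def snoc(q, x):
    return q + [x]        # copies the whole list, so this is O(n)

def is_empty(q):
    return q == []

def head(q):
    return q[0]           # defined only for non-empty queues

def tail(q):
    return q[1:]          # defined only for non-empty queues
```

Each equation of the specification can be checked against this code; for instance, head(snoc(snoc(empty(), 'a'), 'b')) is 'a'.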
Here is one queue implementation: queue1 ("simple"). What are its features? What are its drawbacks?
Features:
- the representation is simple: a queue is just the list of its elements.
Drawbacks:
- snoc takes O(n) time!
Here is another queue implementation: queue2 ("fast"). What are its features? What are its drawbacks?
Features:
- O(1) snoc and head.
Drawbacks:
- O(n) worst-case tail.
- If the client keeps using old queue values after calling tail, then this queue representation does not have amortized O(1) time for all operations.
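For comparison, here is a sketch (in Python, with pairs used as persistent cons-lists; all names invented) of the classic two-list representation behind a "fast" queue: new elements are consed onto a back list in O(1), and the front list is only rebuilt, by an O(n) reversal, when it runs out.

```python
# Persistent cons-lists built from pairs: None is the empty list.
def cons(x, xs):
    return (x, xs)

def rev(xs):
    out = None
    while xs is not None:
        out = cons(xs[0], out)
        xs = xs[1]
    return out

def empty():
    return (None, None)                   # (front, back)

def _norm(front, back):
    # Invariant: front is empty only when the whole queue is empty.
    return (rev(back), None) if front is None else (front, back)

def snoc(q, x):
    front, back = q
    return _norm(front, cons(x, back))    # O(1) unless front was empty

def is_empty(q):
    return q[0] is None

def head(q):
    return q[0][0]                        # non-empty queues only

def tail(q):
    front, back = q
    return _norm(front[1], back)          # O(n) worst case (reversal)
```

The amortized argument charges each reversal to the snoc operations that built the back list — which is exactly why it breaks down if a client replays tail on old queue values.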
There do exist (even more complex) queue implementations that can achieve worst-case O(1) time for all operations.
But, should the client care how queues are implemented?
Can we write our client in a manner so that it does not care how queues are implemented? (See for example these black-box queue tests.)
(We can edit the first import of the script to change which queue implementation we want to use in the test program.)
Here is the output from that test suite on the first and second queue implementations (after installing them as libraries).
% plt-r6rs queue-tests.sps
success
success
success
success
success
success
I have structured the code above in a manner such that I can run the tests importing either queue implementation.
However, I took special care when writing my test suite code to make sure that it never attempted to look at the internal representation of queues.
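The same discipline can be captured structurally: write the suite against the operations alone, and pass the implementation in as parameters. A hypothetical Python sketch (names invented):

```python
def run_queue_tests(empty, snoc, is_empty, head, tail):
    """Exercise any queue implementation purely through its interface."""
    q = snoc(snoc(empty(), 'a'), 'b')
    assert is_empty(empty())
    assert not is_empty(q)
    assert head(q) == 'a'
    assert head(tail(q)) == 'b'
    assert is_empty(tail(tail(q)))
    return 'success'
```

Because the suite never mentions lists, pairs, or any other representation detail, it runs unchanged against any implementation of the interface.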
Here is another ("white-box") test suite that was not written to the abstract specification. In particular, it assumes that queues have been implemented using the first representation. Compare: a run using the first implementation:
% plt-r6rs queue-tests-rep-exposed.sps
success
success
success
success
success
success
but if we edit the testing script to use the second implementation, by changing the import to start with (import (obj-lecture queue2) ...):
% plt-r6rs queue-tests-rep-exposed.sps
FAILURE (empty) should be: () but got: (() ())
FAILURE (snoc (empty) 'a) should be: (a) but got: ((a) ())
success
FAILURE (snoc (snoc (empty) 'a) 'b) should be: (a b) but got: ((a) (b))
FAILURE (tail (snoc (snoc (empty) 'a) 'b)) should be: (b) but got: ((b) ())
FAILURE (tail (tail (tail (snoc (snoc (snoc (snoc (snoc (empty) 'a) 'b) 'c) 'd) 'e)))) should be: (d e) but got: ((d e) ())
Note that these failures are not particularly illuminating; it is not obvious from the failure messages why the actual and expected values do not match.
The problem is that the client (in this case, the test suite) has violated the Queue abstraction; it has relied on the particular representation used for queues, but a proper client should only interact with the data by using the appropriate procedures defined in the abstraction.
One can enforce an abstraction by properly modularizing the code. Most modern languages provide facilities for defining modules as collections of related procedures, and rules for what procedures have access to the internal representation of an abstraction.
Here is a revision, queue1 (encapsulated), of the first (slow but simple) queue implementation that illustrates one way to control access to a representation, and thus enforces modularity.
This is only meant to illustrate one way to achieve this effect; as stated above, most modern languages provide more convenient facilities for defining access rules.
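To show the flavor of the trick in another notation, here is a hypothetical Python analogue (all names invented): the representation is wrapped in a closure, and only the exported operations hold the private key needed to unwrap it.

```python
_KEY = object()   # module-private token; never exported to clients

def _wrap(rep):
    def an_abstract_queue(key):
        if key is not _KEY:
            raise RuntimeError('unauthorized-access-attempt')
        return rep
    return an_abstract_queue

def _unwrap(q):
    return q(_KEY)

def empty():        return _wrap([])
def snoc(q, x):     return _wrap(_unwrap(q) + [x])
def is_empty(q):    return _unwrap(q) == []
def head(q):        return _unwrap(q)[0]
def tail(q):        return _wrap(_unwrap(q)[1:])
```

A client that receives a queue sees only an opaque procedure; applying it without the key fails with an unauthorized-access error, much as in the Larceny session shown later in the lecture.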
If we now change our script to import the encapsulated library:
% plt-r6rs queue-tests.sps
success
success
success
success
success
success
% plt-r6rs queue-tests-rep-exposed.sps
FAILURE (empty) should be: () but got: #<procedure:an-abstract-queue>
FAILURE (snoc (empty) 'a) should be: (a) but got: #<procedure:an-abstract-queue>
success
FAILURE (snoc (snoc (empty) 'a) 'b) should be: (a b) but got: #<procedure:an-abstract-queue>
FAILURE (tail (snoc (snoc (empty) 'a) 'b)) should be: (b) but got: #<procedure:an-abstract-queue>
FAILURE (tail (tail (tail (snoc (snoc (snoc (snoc (snoc (empty) 'a) 'b) 'c) 'd) 'e)))) should be: (d e) but got: #<procedure:an-abstract-queue>
Now the failure messages are a bit clearer: the tests are failing because the client (the test writer) wrote down that the code expected values such as the list (a), but the actual values we receive are abstract queues.
Even if the client wanted to break the abstraction by trying to apply the returned procedure, they would be foiled (illustrated below).
% larceny -err5rs -path ..
Larceny v0.963 "Fluoridation" (Jul 29 2008 20:26:38, precise:Posix Unix:unified)
larceny.heap, built on Tue Jul 29 20:28:40 EDT 2008
ERR5RS mode (no libraries have been imported)
> (import (rnrs))
Autoloading (rnrs)
Autoloading (rnrs enums)
Autoloading (rnrs lists)
Autoloading (rnrs syntax-case)
Autoloading (rnrs conditions)
Autoloading (err5rs records procedural)
Autoloading (rnrs exceptions)
Autoloading (rnrs hashtables)
Autoloading (rnrs arithmetic bitwise)
Autoloading (rnrs programs)
Autoloading (rnrs files)
Autoloading (rnrs io ports)
Autoloading (larceny deprecated)
Autoloading (rnrs records syntactic)
Autoloading (rnrs records procedural)
Autoloading (rnrs control)
Autoloading (rnrs sorting)
Autoloading (rnrs bytevectors)
Autoloading (rnrs unicode)
> (import (obj-lecture queue1-encapsulated))
Autoloading (obj-lecture queue1-encapsulated)
> (define my-queue (snoc (snoc (empty) 'a) 'b))
> my-queue
#<PROCEDURE>
> (my-queue 'attempting-to-get-inside!)
Error: queue1-encapsulated: unauthorized-access-attempt
Entering debugger; type "?" for help.
debug>
This encapsulation of the internal representation, exposing only operations that know how to properly manipulate the data and preserve invariants of the representation, is often considered a key benefit of object-oriented programming.
Furthermore, with a sufficiently expressive language, the most common attempts to violate encapsulation can be detected statically; the program can be rejected before you run it. (The technique illustrated above is detecting the encapsulation violation dynamically, so the system does not signal an error until we run the code for the white-box test suite that violates the encapsulation.)
However, encapsulation is not a benefit that is provided by only object-oriented languages. Many non-object-oriented languages do support modular program definitions where representations are hidden; after all, I just demonstrated one way to accomplish the task in Scheme!
On top of that, you can write code in Java or C# where the internal representation is exposed as a set of public fields; it is up to the programmer to decide how to use the features of the language to achieve their desired system design.
Still, many practitioners will list encapsulation as a reason that they use object-oriented languages for their systems development.
In the above demonstrations, I chose between the queue1 ("simple"), queue2 ("fast"), and queue1 ("encapsulated") implementations; it is not legal with the code above to mix-and-match queue implementations.
That is, the client code can be ignorant of which queue abstraction it is using, but it still needs to be linked against a single queue implementation, or else there are likely to be serious consequences.
We can see some of the bad effects of trying to link against several of the above queue implementations with this script, queue-mixing.sps.
% plt-r6rs queue-mixing.sps
B? b
A? mcar: expects argument of type <mutable-pair>; given a
Maybe that's a PLT bug; let's try Larceny.
% larceny -path .. -r6rs -program queue-mixing.sps
B? b
A? Error: no handler for exception #<record &compound-condition>
Compound condition has these components:
#<record &assertion>
#<record &who>
    who : "car"
#<record &message>
    message : " a is not a pair.\n"
Terminating program execution.
No, Larceny agrees with PLT Scheme on this point.
Something is definitely wrong with how queue-mixing.sps attempts to use the queues.
You can see other "fun" results if you avoid the runtime error by commenting out the three lines grouped with the (display "A? ") expression and try running the script again.
If you want to be able to use different implementations of the same interface at the same time, you need a more sophisticated way of mapping the operations you want to perform with the actual code that performs those operations.
By representing our data in a different way, we can interact with it by passing a message asking it to perform a particular operation.
Here are translations of the first, queue1 ("dispatch"), and second, queue2 ("dispatch"), implementations into a message-passing style.
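The essence of the message-passing translation, sketched in Python (hypothetical names, not the lecture's code): a queue value is itself a dispatcher that maps message names to operations, so each value carries the code that knows its own representation.

```python
def make_simple_queue(items=()):
    items = list(items)       # this implementation's private representation
    def dispatch(msg, *args):
        if msg == 'is-empty':
            return not items
        if msg == 'snoc':
            return make_simple_queue(items + list(args))
        if msg == 'head':
            return items[0]
        if msg == 'tail':
            return make_simple_queue(items[1:])
        raise ValueError('unknown message: %s' % msg)
    return dispatch
```

A second implementation (say, the two-list one) would define its own dispatcher answering the same messages; since every value bundles its own code, values from different implementations can coexist in one program.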
As a quick aside: the R6RS library system actually helped my presentation here, since I was able to layer the dispatching implementations on top of the core implementations of queue1 and queue2. When I gave this lecture previously, I based the code on R5RS Scheme; but R5RS does not provide a library system, so I had to choose between using load + define tricks to get the above effect, or copying the implementations of queue1 and queue2 into the dispatching implementations. In that presentation, I chose to copy the implementations, but that meant that the details of the individual queue implementations were distracting the reader of the code from the core ideas of dispatch. In this version, the library system lets me focus on the details relevant to dispatch alone.
Now we can load one version, use it, load another version, use that as well, and the values we constructed with the first version continue to work, as in this script:
% plt-r6rs queue-mixing-dispatch.sps
B? b
A? a
Y? y
X? x
The code for queue1-dispatch and queue2-dispatch illustrates dynamic dispatch (or sometimes single dispatch, or sometimes just dispatch).
Dispatch can provide a strong separation between interface and implementation, because one typically defines the interface when one is developing the set of messages that will be passed around. The actual code that implements the desired behavior associated with the messages can be developed long after the interface has been conceived.
But there is a piece missing, and understanding what the missing piece is requires that we take another step back.
A queue abstraction can be useful for certain algorithms that require a FIFO order on element delivery and do not require any other sort of order or fast access to enqueued elements.
But some clients may not require a strict FIFO order; some clients will be happy to consume an element from anywhere in the queue.
And other clients may require fast access to both the most-recently inserted and least-recently inserted elements.
It would be useful to categorize our implementations according to what capabilities they have; that is, what operations they can support. If we were to classify our various interfaces into a hierarchy, then our clients could clearly state their requirements by choosing the interface appropriate to their needs.
For my examples in this lecture, I will use a simple hierarchy. Here is its interface (not implementation):
Collection supports methods
    isEmpty : Collection -> Bool
    addElem : Collection * Value -> Collection
    anyElem : Collection -> (list Value Collection) ;; only non-empty works!
    addAll  : Collection * Collection -> Collection
    toList  : Collection -> Listof[Value]

Queue extends Collection and adds methods
    snoc : Queue * Value -> Queue
    head : Queue -> Value ;; only non-empty works!
    tail : Queue -> Queue ;; only non-empty works!

Tree extends Collection and adds methods
    isLeaf    : Tree -> Bool
    nodeValue : Tree -> Value ;; only non-leaf works!
    left      : Tree -> Tree ;; only non-leaf works!
    right     : Tree -> Tree ;; only non-leaf works!

The base Queue constructor is
    empty : -> Queue

The base Tree constructors are
    leaf : -> Tree
    node : Tree * Tree * Value -> Tree
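One way to express such a hierarchy of interfaces — an illustrative Python sketch, not the lecture's Java — is with abstract base classes: Collection declares the shared operations, and Queue and Tree extend it with their own.

```python
from abc import ABC, abstractmethod

class Collection(ABC):
    @abstractmethod
    def is_empty(self): ...
    @abstractmethod
    def add_elem(self, value): ...
    @abstractmethod
    def any_elem(self): ...    # (value, remaining collection); non-empty only

class Queue(Collection):
    @abstractmethod
    def snoc(self, value): ...
    @abstractmethod
    def head(self): ...        # non-empty only
    @abstractmethod
    def tail(self): ...        # non-empty only

class Tree(Collection):
    @abstractmethod
    def is_leaf(self): ...
    @abstractmethod
    def node_value(self): ...  # non-leaf only
```

Clients state their requirements by the interface they accept: an algorithm that is happy with some element asks for a Collection, while one that needs FIFO order asks for a Queue.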
There may be a lot of potential code sharing amongst the different implementations. A Collection class might implement a method, addAll, where one collection c1 consumes another collection c2 of elements and produces the union of the two by iteratively invoking the c1.add method on each element it can get out of c2. This is a common code pattern that we would like to put into one place, into the Collection class. Extensions of the Collection will be responsible for properly implementing its add method, but once that is in place, then all of the extensions immediately support the addAll method. (At least in principle; there are caveats here.)
The crucial idea to support implementation inheritance: pass yourself around as another parameter! Then, when you need to invoke a method, pass a message to yourself!
(This may sound absurd; why would this accomplish anything?)
This is the heart of delegation; pass a special self parameter around; for any method that you want to allow your subclasses to handle in their implementation, you perform the invocation by passing a message to self.
(Felix finds it fascinating that this notion of self-reference, the key to delegation, is also the key for implementing recursive functions in languages like PROC. But that is just an aside.)
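The same pattern can be sketched in Python with the self parameter spelled out explicitly as an ordinary argument (all names here are invented): add_all is written once, and reaches extension-specific behavior only by sending messages back to self.

```python
def add_all(self, other):
    # Written once, at the "Collection" level: repeatedly pull an
    # element out of `other` and send 'add_elem' back to self.
    out = self
    while not other['is_empty'](other):
        value, other = other['any_elem'](other)
        out = out['add_elem'](out, value)
    return out

def make_queue(items=()):
    # A queue "extension" only supplies its own primitives, yet it
    # immediately supports add_all because add_all delegates to self.
    return {
        'state':    list(items),
        'is_empty': lambda self: self['state'] == [],
        'any_elem': lambda self: (self['state'][0],
                                  make_queue(self['state'][1:])),
        'add_elem': lambda self, x: make_queue(self['state'] + [x]),
        'add_all':  add_all,
    }
```

Note that every method invocation passes the receiver along explicitly — the self-reference that languages like Java hide behind this.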
Here is the relevant code that illustrates delegation, using Scheme as the implementation language so that the self parameter is explicit in the code.
% plt-r6rs demo-delegation.sps
{}: ()
{a b c}: (a b c)
{b c}: (b c)
{}: ()
{p q r s t}: (p t q s r)
t2 a leaf? #f
t2.rgt leaf? #f
t2.lft leaf? #f
t2.lft.value: p
t2.rgt as list: (q s r)
q2 and t2 as list: (a b c p t q s r)
t2 and q2 as list: (c b a p t q s r)
And here is the big punch-line: the above bits of Scheme code, but encoded as Java classes! (plus a Main program that tests them).

% javac Collection.java Queue.java Tree.java Main.java
% java Main
> q1.toString(): ( )
> q2.toString(): ( a b c )
> q2.tail().toString(): ( b c )
> t1.toString(): ( )
> t2.toString(): ( p t q s r )
> t2.isLeaf(): false
> t2.right().isLeaf(): false
> t2.left().isLeaf(): false
> t2.left().nodeValue(): p
> t2.right().toString(): ( q s r )
> q2.addAll(t2).toString(): ( a b c p t q s r )

(Does the output of Main look familiar?)
class A {
    /* specification of four: must always return 4. */
    public int four() { return 4; }

    /* specification of five: must always return 5. */
    public int five() { return 5; }
}

class B extends A {
    public int four() { return five() + 2; }
}
The code above is subclassing (since we are inheriting the implementation of the five() method and using it within the definition of the B class), but it is not subtyping.
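A Python rendering of the same point (hypothetical, mirroring the Java above): B reuses A's five, but its four violates A's stated specification, so B is a subclass without being a behavioral subtype.

```python
class A:
    def four(self):
        """Specification: must always return 4."""
        return 4

    def five(self):
        """Specification: must always return 5."""
        return 5

class B(A):
    def four(self):
        # Inherits five() from A, but breaks four()'s specification:
        return self.five() + 2
```

Any client reasoning from A's specification (e.g. assuming a.four() == 4) can be broken by handing it a B, even though the language happily accepts the substitution.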
Consider the Object.equals method in Java. In particular, consider the constraint that its specification puts on its subclasses with respect to whether they need to implement the Object.hashCode method.
Is this class, C, a subtype of Object?

class C {
    public int hashCode() { return 42; }
}

How about D?

class D {
    public boolean equals(Object d) { return (d instanceof D); }
}
The code from this lecture is available as R6RS libraries (.sls files), R6RS top-level programs (.sps files, each with an import at the top), and Java source (.java files).
Last updated 22 October 2008.
Hi all,

A few points for your delectation! Please do read the "Archive/Mirror Split" section; it's important. The other parts of this mail can be safely ignored.

Archive/Mirror Split
~~~~~~~~~~~~~~~~~~~~

Since the archive has grown so much (170GB at present), we're ending the long standing expectation that mirrors will include the entire archive. This means two things: first, that we'll be more accepting of limited mirrors using the aliases, and second, that will soon stop including a number of architectures for etch (testing) and sid (unstable).

What this means to you:

(a) if you are mirroring from, and are happy to only mirror the mainstream architectures, just keep doing what you're currently doing. Your Debian mirror will drop back from 170GB to about 90-100GB in a few weeks time.

(b) if you are mirroring from, and wish to *continue* mirroring all architectures, or choose specific architectures that your users are interested in, you should make sure you are mirroring with rsync (not using wget or similar over http, nor using ftp), and immediately switch to using the "debian-all" rsync module, instead of the "debian" rsync module. That is:

    rsync ... rsync:// /mirror/debian/

becomes:

    rsync ... rsync:// /mirror/debian/

anonftpsync users should change their RSYNC_DIR setting.

(c) if you are mirroring from somewhere else, you may want to watch for any changes your upstream mirror makes, or talk to them directly. The --max-delete option to rsync might be useful if you want to ensure you don't entirely lose an architecture that your upstream decides to stop mirroring.

The switchover will happen in a little over two weeks, probably between the 8th and 13th of March. If you wish to switch to the smaller subset of the archive in advance of that changeover, you can use rsync:// now. Note that that alias will be removed after the changeover happens, and you will need to switch to rsync://.
The list of files in the "typical" set is also available on mirrors as indices/files/typical.files. It's possible to use this as input to rsync's --files-from option; however rsync won't delete files from your mirror that are no longer listed, so unfortunately this option isn't terribly usable.

The set of files being mirrored as part of this "typical" set is currently:

* all packages for all architectures for sarge (stable, Debian 3.1)
* all i386 packages, and installer images for woody (oldstable, Debian 3.0), etch (testing), sid (unstable) and experimental.
* all sources
* miscellaneous files

The anonftpsync program we encourage mirrors to use includes options to exclude architectures (ARCH_EXCLUDE); if you're already using this, please continue to do so. Unfortunately if you start using this now, it will also exclude architectures from the stable release, even though your site will remain listed as supporting the architectures it currently does, until we can update the hard coded mirrors list in the installation software. So if possible we'd ask that you not start dropping architectures via that feature immediately, and instead either mirror the typical set as per the above, or only exclude new architectures.

Mirror Mailing Lists
~~~~~~~~~~~~~~~~~~~~

We've now got two lists for mirror admins. The debian-mirrors-announce list (this one) will be used for limited, irregular announcements of things of importance to mirrors. We strongly encourage everyone running a public Debian mirror to subscribe, if you're not already. There's also the debian-mirrors list, which is for general discussion amongst mirror maintainers. Some mirror admins frequent the #debian-mirrors IRC channel on irc.oftc.net as well, for folks who are interested in that.
See and

mirror.debian.net
~~~~~~~~~~~~~~~~~

In order to help people find a local mirror for their architecture, we've set up the <cc>.<arch>.mirror.debian.net namespace, and populated it with a number of mirrors that already accept connections over http under that name. Over the next few weeks we'll be updating that, so you may wish to setup your http server to handle requests under that domain. It's not yet clear whether we'll start using that domain permanently, or simply make the Mirrors.masterlist more official and up to date, and encourage people to use that when looking for a mirror for their architecture.

Example URL:

Future Directions
~~~~~~~~~~~~~~~~~

In the next few weeks, amd64 packages will start being uploaded to the archive. Initially this will likely add about 5GB to mirrors, and will be immediately included in the "typical" set. Some (essentially empty) Packages and Sources files are already present, so mirrors can be configured to include amd64 immediately, if they so desire.

In June/July, security support for woody (oldstable, Debian 3.0) will be dropped, and that suite will be archived (moved to archive.debian.org). This will cut the size of full mirrors by a significant amount, probably on the order of 25GB; and will obviously increase the size of archive.debian.org mirrors by a similar amount.

We will likely be adding some other architectures in the not too distant future, as well. These will not be added to typical, apart from amd64 as mentioned above, however they may need to be added to your exclusion list as they are added, if you aren't mirroring either the "typical" set mentioned above, or the full archive. We will aim to announce new architectures that get added to this list as they happen.

Additionally, we will probably be looking into doing multiple updates to the archive per day in the near future, probably up to four times a day.
It's likely we'll leave the decision of whether to update each mirror once a day or more frequently up to the mirror admin.

We're looking at trying to keep our mirrors list more up to date, so we'll hopefully be providing some new ways for you to update your mirror's details, and request things like an ftp.<cc>.debian.org alias, or arrange to be a push mirror, etc.

Thankyou for your time in reading this, and for mirroring Debian :)

Cheers,
aj
Attachment:
signature.asc
Description: Digital signature
On Mon, Oct 17, 2005 at 02:44:30PM -0700, Greg KH wrote:
>

Sounds good to me. The changes to driver model internals may be substantial. For example, because buses and classes will share more code, it's reasonable to allow drivers to bind to any "device" object, even class devices. Of course this would be limited to classes that choose to implement driver matching etc. We are doing this now with the pci express port driver.

It also may make sense to move bus_types to the "class" interface. The layered classes suggestion is especially useful here because we can have a "hardware" or "bus" class that acts as a parent for "pci", "usb", etc. Also, we could make driver objects a "class" and represent them in the global device tree, giving each driver instance its own unique namespace.

> > doesn't show the real relationships between these devices).

Instead, just hang them off the root of the tree. If the device doesn't have any parents or dependencies, then that's logically where it belongs.

Thanks,
Adam

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to majordomo@vger.kernel.org
More majordomo info at
read the FAQ at
1. Create a new C# project and call it test1.0
2. Hit F8/Compile once it finishes
What fails?
Compilation fails with
/home/marek/Projects/test4.5/test4.5/Main.cs(0,0): Error CS1001: Unexpected symbol `{', expecting identifier (CS1001) (test4.5)
I guess it's not creating a valid class name from the project name.
using System;
namespace test1.
{
class MainClass
{
public static void Main(string[] args)
{
Console.WriteLine("Hello World!");
}
}
}
I have not tested other disallowed C#/VB identifiers.
I'd assume the best solution for this is to add better validation to the project name, so instead of blacklisting known bad characters we instead whitelist all the characters which are valid. We should also disallow reserved names based on the language binding as well. This would mean we could prevent people from calling their project 'int' or 'string'.
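A sketch of the whitelist idea in Python (the function name and the reserved-word list are illustrative, not MonoDevelop's actual code): accept a project name only if every dot-separated part is a valid identifier and not a reserved word.

```python
import re

# Abridged, illustrative subset of C# keywords.
RESERVED = {'int', 'string', 'class', 'namespace', 'void', 'new'}

# Whitelist: a part must look like a plain identifier.
IDENTIFIER = re.compile(r'^[A-Za-z_][A-Za-z0-9_]*$')

def is_valid_project_name(name):
    parts = name.split('.')
    return all(IDENTIFIER.match(part) and part not in RESERVED
               for part in parts)
```

Under this rule, "test1.0" is rejected up front ("0" is not an identifier), so the template system never emits an invalid namespace.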
*** Bug 1069 has been marked as a duplicate of this bug. ***
It's silly to validate project names based on permitted language identifiers. That's just an artifact of limitations in our template system.
What we really need is to validate project names based only on what's valid for MSBuild project names. Then, the template system should derive & escape valid identifiers for the language. We do this already for class names, we just don't handle it for namespaces.
My new template system will fix this.
*** Bug 1649 has been marked as a duplicate of this bug. ***
*** Bug 19371 has been marked as a duplicate of this bug. ***
Already fixed.
Red Hat Bugzilla – Full Text Bug Listing
Spec URL:
SRPM URL:
Description:
Beryl is a combined window manager and compositing
manager that runs on top of Xgl or AIGLX using OpenGL
to provide effects accelerated by a 3D graphics card
on the desktop. Beryl is a community-driven fork of
Compiz.
This package provides a utility capable of capturing
beryl-enabled desktop sessions as video.
Depends on beryl-core, submitted for FE-review under bug 209259.
You should fix this rpmlint output:
E: beryl-vidcap binary-or-shlib-defines-rpath /usr/lib/beryl/libcapture.so
'/build/BUILD/beryl-vidcap-0.1.2/seom/.libs']
W: beryl-vidcap no-documentation
E: beryl-vidcap library-without-ldconfig-postin /usr/lib/libseom.so.0.0.0
E: beryl-vidcap library-without-ldconfig-postun /usr/lib/libseom.so.0.0.0
E: beryl-vidcap library-without-ldconfig-postin /usr/lib/libseom.so.0
E: beryl-vidcap library-without-ldconfig-postun /usr/lib/libseom.so.0
About the rpath, check this:
Ugh. I hate this package. And its utterly crappy Makefiles. Latest build works
around everything in comment #1, as well as making proper symlinks instead of
copying all the libseom .so's. Well, except there's still no documentation, but
nothing I can do about that one at the moment, since there is none. :)
Just a quick look at this:
* %{_libdir}/beryl/ is not owned by any package.
* Fedora specific compilation flags are not passed, and the -debuginfo rpm is of no use.
(In reply to comment #4)
> Just a quick look at this:
>
> * %{_libdir}/beryl/ is not owned by any package.
Actually, it is. It's owned by beryl-plugins. I believe I should have Requires:
beryl-plugins.
> * Fedora specific compilation flags is not passed
Eep. Will add that next build.
> -debuginfo rpm is of no use.
Hrm, might be the Makefile installing stuff with -s...
I'll take a look at fixing all of these in the morning, thanks much!:
New build with trimmed BR: (requires new beryl-core-devel, which I haven't
pushed yet).:
Created attachment 141457 [details]
spec file to try to avoid using chrpath
Well, using chrpath is somewhat undesirable, I think. What do you think about the spec file I attached?
Note: the change to the 'make' command in the %install process is needed to pass Fedora-specific compilation flags.
NOTE: I only checked the chrpath and $RPM_OPT_FLAGS issues and did not check anything else!!
Created attachment 141458 [details]
spec file (more correct)
Perhaps more correct spec file (use exec)
Well, during review of beryl-vidcap, several issues were found.
* %{_libdir}/beryl/libcapture.so contains undefined
non-weak symbols.
----------------------------------------------------
[tasaka1@localhost ~]$ ldd -r /usr/lib/beryl/libcapture.so > /dev/null
undefined symbol: glEnd (/usr/lib/beryl/libcapture.so)
undefined symbol: glEnable (/usr/lib/beryl/libcapture.so)
undefined symbol: glColor4us (/usr/lib/beryl/libcapture.so)
undefined symbol: glEnableClientState (/usr/lib/beryl/libcapture.so)
undefined symbol: glDisable (/usr/lib/beryl/libcapture.so)
undefined symbol: glRecti (/usr/lib/beryl/libcapture.so)
undefined symbol: addScreenAction (/usr/lib/beryl/libcapture.so)
undefined symbol: compSetFloatOption (/usr/lib/beryl/libcapture.so)
undefined symbol: getIntOptionNamed (/usr/lib/beryl/libcapture.so)
(and others)
----------------------------------------------------
Please check the linkage against this package.
* Related to the above, the -devel package is missing necessary Requires.
%{_includedir}/seom/seom.h reads:
----------------------------------------------------
#include <GL/gl.h>
#include <GL/glext.h>
#include <GL/glx.h>
#include <X11/Xlib.h>
#include <X11/Xatom.h>
#include <X11/keysym.h>
----------------------------------------------------
This means -devel package requires at least the following packages
---------------------------------------------------
libGL-devel (this is provided by mesa-libGL-devel)
libX11-devel
xorg-x11-proto-devel (this is required by mesa-libGL-devel, so it is redundant to list in Requires)
---------------------------------------------------
and also this implies that the missing linkage on
%{_libdir}/beryl/libcapture.so _may_ be for
libGL.so and libX11.so (I have not checked).
* Fedora specific compilation flags are not passed.
----------------------------------------------------
+ umask 022
+ cd /builddir/build/BUILD
+ cd beryl-vidcap-0.1.2
+ LANG=C
+ export LANG
+ unset DISPLAY
+ rm -rf /var/tmp/beryl-vidcap-0.1.2-6.fc7-root-mockbuild
+ pushd seom
~/build/BUILD/beryl-vidcap-0.1.2/seom ~/build/BUILD/beryl-vidcap-0.1.2
+ make DESTDIR=/var/tmp/beryl-vidcap-0.1.2-6.fc7-root-mockbuild install
cc -W -Wall -std=c99 -Iinclude -ldl -lpthread -L.libs -lseom -o seom-filter
src/filter/main.c
cc -W -Wall -std=c99 -Iinclude -ldl -lpthread -L.libs -lseom -lX11 -lXv -o
seom-player src/player/main.c
cc -W -Wall -std=c99 -Iinclude -ldl -lpthread -L.libs -lseom -o seom-server
src/server/main.c
------------------------------------------------------
(This uses original 0.1.2-6 spec file). Please fix this.
* License:
I cannot find any GPL license document file in the source files. The only license text I can find is in capture.c; however, it is not the GPL (it looks like a "free" license, but what is it?). Files other than capture.c don't contain any license terms, and no license document is provided, so the license of this package is really questionable...
I like the rpath-killer there, very crafty, nice to not have to resort to
chrpath. I've put that in my local spec for the next build. I've also fully
fixed the cflags and -devel dependencies.
As for the undefined symbols... The gl* ones are indeed resolved by linking
against libGL, but I'm not having any luck figuring out exactly what to do for
the others. I did find that essentially the same unresolved symbols show up for
all beryl and compiz plugins though... Probably going to have to poke both
upstreams about this one...
On the license front... I see a #define _GNU_SOURCE, but yeah, the license block
is a little vague. I'll prod upstream for clarification.
(In reply to comment #11)
> On the license front... I see a #define _GNU_SOURCE, but yeah, the license block
> is a little vague. I'll prod upstream for clarification.
Do you mean the files in seom or the plugin itself? capture.c has a standard
header just like any other file in beryl-plugins, and the seom files have no
header, just a LICENSE file in the root directory. If you require that every
file needs to have a GPL header, I can add it, no problem.
I would also suggest, if somehow possible, to specify ARCH when building seom,
see - "Architecture
Optimizations"
Well, any status change on this package?
I've incorporated the asm stuff Tom suggested in comment #12 and done a 0.1.3
build, but it still suffers from the same undefined symbols problem, which I've
not been able to trace the root cause of just yet.
(In reply to comment #14)
> but it still suffers from the same undefined symbols problem, which I've
> not been able to trace the root cause of just yet.
Well, for this I change my opinion. This library (/usr/lib/beryl/libcapture.so) is a plugin module for beryl, and the undefined non-weak symbols issue can be ignored.
Then please check this rpmlint error.
-------------------------------------------
E: beryl-vidcap shlib-with-non-pic-code /usr/lib/libseom.so.0.0.0
-------------------------------------------
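For context, this rpmlint error corresponds to a `TEXTREL` entry in the shared object's dynamic section. A self-contained sketch of the check, building a throwaway PIC library (since the real libseom build is not reproduced here) and confirming it is clean:

```shell
# Build a throwaway shared object with -fPIC and inspect its dynamic section.
# rpmlint's shlib-with-non-pic-code corresponds to a TEXTREL entry here;
# a correct -fPIC build (or PIC-safe asm) has none.
cat > demo.c <<'EOF'
int answer(void) { return 42; }
EOF
gcc -shared -fPIC -o libdemo.so demo.c
if readelf -d libdemo.so | grep -q TEXTREL; then
  echo "TEXTREL present (non-PIC code)"
else
  echo "no TEXTREL (PIC-clean)"
fi
```

Running the same `readelf -d ... | grep TEXTREL` against `/usr/lib/libseom.so.0.0.0` is how one would confirm the rpmlint complaint on an affected system.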
(In reply to comment #15)
> Then please check this rpmlint error.
> -------------------------------------------
> E: beryl-vidcap shlib-with-non-pic-code /usr/lib/libseom.so.0.0.0
> -------------------------------------------
>
If beryl-vidcap is installed, each time yum performs an ldconfig it spits an
error due to /usr/lib/libseom.so.0.0.0.
Ugh. I get no such errors (no rpmlint error and no ldconfig spew) at all on my
system, but it's x86_64. I have a feeling this is an issue with the x86 asm...
It could be... the .rodata section on top of frame.asm could cause this, but I'm
not an assembler expert so I can't tell for sure. But I could convert the
[yuv]Mul variables to 'immediates' (or whatever they are called), i.e. simply
use the values directly in the code instead of referencing the variables. Just
need to figure out how to do that correctly WRT endianness :)
Created attachment 143732 [details]
Mock build log of beryl-vidcap 0.1.3-1 on FC-devel i386
Well, I am working on FC-devel i386, and
in this environment rpmlint surely complains about
beryl-vidcap 0.1.3-1.
I don't know about yasm; however, I suspect it does
something, because the mock build log says:
----------------------------------------------------------------
libtool --tag=CC --mode=link gcc -Wl,--as-needed -rpath /usr/lib -o libseom.la
src/buffer.lo src/client.lo src/codec.lo src/frame.lo src/server.lo
src/stream.lo src/arch/x86/frame.lo -ldl -lpthread
---------------------------------------------------------------------------
(In reply to comment #19)
>
Don't get fooled by the '-prefer-non-pic', because that's only there to tell
libtool not to add -DPIC to the command line when executing yasm. If there is a
better way of doing that, please tell me - executing anything other than gcc
with libtool just asks for trouble... :(
(In reply to comment #17)
> Ugh. I get no such errors (no rpmlint error and no ldconfig spew) at all on my
> system, but its x86_64. I have a feeling this is an issue with the x86 asm...
Definitely happens on i686.
[root@localhost ~]# ldconfig
ldconfig: /usr/lib/libseom.so.0 is not a symbolic link
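For context, that ldconfig complaint usually means the SONAME link was installed as a regular file rather than a symlink. The expected layout can be sketched in a sandboxed directory (paths here are illustrative, not the live system):

```shell
# Expected on-disk layout for a versioned shared library:
#   libseom.so.0.0.0  -- the real ELF object
#   libseom.so.0      -- the SONAME symlink, which ldconfig maintains
mkdir -p sandbox/usr/lib
touch sandbox/usr/lib/libseom.so.0.0.0
ln -sf libseom.so.0.0.0 sandbox/usr/lib/libseom.so.0
ls -l sandbox/usr/lib/libseom.so.0 | grep -q -- '->' && echo "symlink ok"
```

When the package `%install` step copies `libseom.so.0` as a plain file instead of creating the link, ldconfig refuses to manage it and prints exactly the message above.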
(In reply to comment #21)
>
Please try 0.1.3-1. My complaint about shlib-with-non-pic-code is
for 0.1.3-1.
As of right now, the seom svn:externals reference has been removed from
beryl-vidcap and the preferred way is to have separate seom and beryl-vidcap
packages.
But it's still unclear whether the beryl-vidcap plugin should be merged with
beryl-plugins or not; hopefully we'll have this resolved before the 0.1.4 release.
Well, a Happy new year to everyone.
Jarod, it seems that you updated beryl related packages to
0.1.4, so how about this package?
Yeah, once I get all the 0.1.4 bits pushed for FC6, I plan to start working on
this one again. Looks as if it may be a bit of work with seom broken out now (in
the long run, it would appear seom needs to be packaged by itself and BR/R by
beryl-vidcap).
Well, would you create an srpm for 0.1.4 at some point?
Maybe someone (including me) can point out how beryl-vidcap
should be fixed if there is any problem.
Sorry, sick wife and kids, plus my day job kept getting in the way... Finally
got around to it, SRPM available here:
At the moment, I'm just including a seom svn checkout in the package, will look
into packaging seom on its own in the future...
Created attachment 145532 [details]
objdump log of libseom.so.0.0.0
Well, I tried 0.1.4-1.fc7 (on FC-devel i386), however,
libseom.so.0.0.0 still contains non-pic code.
Created attachment 145533 [details]
Mock build log of beryl-vidcap-0.1.4-1.fc7
I attach a mockbuild log of 0.1.4-1 on FC-devel i386.
Any ideas, anyone?
(In reply to comment #28)
> Well, I tried 0.1.4-1.fc7 (on FC-devel i386), however,
> libseom.so.0.0.0 still contains non-pic code.
How do I find out which code (symbols?) is non-PIC?
(In reply to comment #30)
> (In reply to comment #28)
> > Well, I tried 0.1.4-1.fc7 (on FC-devel i386), however,
> > libseom.so.0.0.0 still contains non-pic code.
>
> How do I find out which code (symbols?) is non-PIC?
Actually I don't know _which_ code is non-PIC; however,
the existence of the "TEXTREL" line in the objdump result
(in comment 28) means that some of the code is non-PIC. :(
(In reply to comment #32)
> :(
Heh, good stuff. Were these updates included with the 0.1.99.2 snap of beryl, or
sometime after? I've got an updated srpm for 0.1.99.2 sitting here:
Created attachment 147234 [details]
Mock build log of beryl-vidcap-0.1.99.2-1.fc7
Mockbuild log of beryl-vidcap-0.1.99.2-1 on
FC-devel i386.
I just tried to rebuild and did not check anything else.
However, please check the log to see why the rebuild failed.
> ./seom.pc.in /usr lib
> make: *** [seom.pc] Error 1
You have to either build seom from an official tarball
() or use an svn checkout and run
'make' from that directory. The reason is that seom uses either the file
VERSION (which is automatically created when I build the tarball) or the output
of 'svn info' to get the version, and the version is needed to build seom.pc.
(In reply to comment #35)
> > ./seom.pc.in /usr lib
> > make: *** [seom.pc] Error 1
>
> you have to either build seom from an official tarball
> () or
Umm?? This is the first time I have heard that there is
an official _separate_ tarball of seom!!
Then another review request, for seom, should be submitted;
this review request should be blocked by the seom review
request, and the review request for seom should be reviewed
first.
Ah, first I'd heard of an official tarball as well... I'm just putting the finishing touches on a stand-alone
seom package, which I'll hopefully get submitted for review shortly.
This review is now blocked on the acceptance of build requirement seom, which is
being tracked in bug 227309.
Just a minor update: I'm told all the unresolved symbols in comment #10 are
normal, not something we should worry about, per bug 216232.
Just now, as beryl is slowly being replaced by compcomm/compiz/coral (or
whatever because I don't follow the incredibly stupid discussions anymore) you
decide to work on this package again? I wouldn't bother anymore trying to push
this into the main distribution, unless you intend to support beryl in it (whose
support will stop after the compcomm release..).
What does 'fedora-review?' mean anyway?
fedora-review? means the package is being reviewed.
So how should I treat this review request? Should I close this
with NOTABUG?
Yeah, with beryl going away, let's just drop this one.
Okay, thank you.
Linked by Thom Holwerda on Mon 13th Dec 2010 19:27 UTC, submitted by lemur2
Thread beginning with comment 453480
"
The Samba code that is designed and written to implement the protocols is not a copy of Microsoft code, nor is it based on Microsoft designs.
If Oracle wins... then Samba can be sued by MS, unless explicit permission is granted for use, if MS says their networking protocols are part of its software protection. (See the Apple example above... note this hasn't been proven properly in court yet.)
I didn't know that, thanks for the info. I believe MS will NEVER sue Mono, as Microsoft rarely sues anyone. Obviously implicit permission to use is implied when MS provided Samba with the documentation, so... it's extremely unlikely MS would ever sue them (however, perhaps still potentially within their right to, if namespace patents are enforced).
The issue you were mainly alluding to is that Samba is a clean-room implementation and so should be free of patent issues... you're right and I agree (so long as Google wins that part of the case, which it should, as MS and others have previously).
"
Don't bring Samba into this.
Samba legally won the right to the documentation in the EU; MS did not give it willingly. The reason is that Samba is the IBM-appointed company taking care of compatibility between implementations of SMB, so it is by law entitled to see all features in the protocol implemented by anyone implementing the protocol.
It does not matter who you are: if you want to extend SMB, you are basically required to present it at the Samba setup meetings to compare implementations and share what you have done, openly and freely.
MS provided documentation, but it was defective and Samba personnel could prove it. MS paid Samba to write a test suite for their alterations to the SMB protocols; part of that payment is a forever license to any patents MS holds or licenses to implement the protocols, for all of Samba's users, including any future MS alterations to the protocols.
So Samba is 100 percent legally covered without question. Yes, Samba is nowhere near the same boat as Mono.
MS is suing more often, a common action for a dying company.
Oracle winning changes nothing here. Samba has the deal Mono should have had.
Java, on the other hand, has a requirement that you run the test suite to get the patent grant and the right to use the trademark and documentation. There is an exception for using the GPL version provided by Sun/Oracle.
Google is in trouble with Oracle because they did not read the fine print and obey it. $3000 USD per year for the test suite, and making Android pass, would have avoided the whole problem.
Mono has the same issue: there is fine print in the MS promise that they are not reading. How many years did Apache get away with not following the fine print of Java before Oracle pounced? Remember, the same was said about Sun, that they would never push the legal side.
Samba is the one in charge of the protocols in the Samba case and MS was the brat doing the wrong thing. So MS lost in court and got hit with a growing fine.
When it comes to Java, Oracle is the standard body; Google and Apache are the brats.
When it comes to .NET, Mono is the brat and MS is the standard body. Notice the problem here. When you are the brat you have to play by the rules of the standard body, or the one in charge of the protocols can beat the living crud out of you until you obey.
The one in charge of the protocol can wait as long as they want before responding. There is no legal requirement to respond in a short time.
pep3134 0.1.2
Backport of PEP 3134 (with PEP 415 and PEP 409) to Python 2 as close as possible
This library is intended to give you the ability to use exception chaining and embedded tracebacks with both Python 2 and Python 3 (>= 3.3 only). Exception Chaining and Embedded Tracebacks are also well known as PEP 3134, which is why the library has such a geeky name.
No, it is not. A truly geeky name would be PEP3134 (feat. PEP409, PEP415 Remix), but I think that is overkill.
If you want to get more about exception chaining and tracebacks please refer to the documentation for Python 3 with modifications done in Python 3.3.
A short excerpt for those who, like me, still sit with Python 2.
- Exceptions have new attributes: __traceback__, __context__, __suppress_context__ and __cause__.
- Exceptions have new syntax for explicit chaining: raise CustomError("Cannot read settings") from IOError("Cannot open /etc/settings").
- Exceptions always have their own traceback attached in the __traceback__ attribute.
- If an exception was raised without an explicit cause, it has its own context (say, from sys.exc_info()) in the __context__ attribute. In this case __cause__ stays None.
- If an exception was raised with an implicit cause, then __suppress_context__ is False.
- If an exception was raised with an explicit cause (raise ... from ...), then __cause__ holds the cause, __suppress_context__ is True and __context__ is (suddenly) None.
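For readers already on Python 3, the bullet points above can be checked directly — this is the native behaviour that pep3134 emulates on Python 2 (the error messages are just illustrative):

```python
# Native Python 3 exception chaining, the behaviour pep3134 backports.
try:
    try:
        raise IOError("Cannot open /etc/settings")
    except IOError as exc:
        raise RuntimeError("Cannot read settings") from exc
except RuntimeError as err:
    caught = err

assert isinstance(caught.__cause__, IOError)  # explicit cause is kept
assert caught.__suppress_context__ is True    # "from" suppresses the context
assert caught.__traceback__ is not None       # traceback is always attached
```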
So chaining is pretty convenient if you want to build human-readable error messages afterwards, right?
This library helps you to keep the same __context__, __cause__ and __suppress_context__ behavior with both Python 2 and Python 3.
I did not mention __traceback__. There is a reason.
__traceback__ in Python 2
Tracebacks are a very convenient data structure to work with, but really irritating and magical if you want to deal with them using pure Python, without patching code or hacking interpreter internals. If you want to see some magic, please check out, let's say, the Jinja sources. Armin is rather good, but I try to avoid magic if possible.
I cannot attach the same tracebacks to exceptions even if I want to, because it would require some work on interpreter internals. But anyway, this property will return you something.
The rule of thumb is: if it returns an object, it is the proper object you expect. If it returns None then no luck. Moreover, __traceback__ is implemented as a property, so sometimes it returns a traceback but afterwards returns None on the same object. Unfortunately I do not know a good way to deal with this.
But I can give you some guarantees:
- __traceback__ on implicit (__context__) and explicit causes (__cause__) always correct.
- __traceback__ in the associated except clause is always correct.
- Sometimes it works in other cases but do not rely on that.
It works this way because of the _fixed_ sys.exc_info() behavior in Python 3. Let's check one example.
import sys

def example():
    try:
        raise KeyError("WOW SUCH ERROR")
    except KeyError:
        first = sys.exc_info()
    second = sys.exc_info()
    return first, second

first, second = example()
assert first == second
It works like a charm in Python 2 but raises AssertionError in Python 3. So it is not possible to keep tracebacks in the same way in both Python 2 and Python 3. Sad story.
So if we rewrite the given example with pep3134:
import sys
import pep3134

def example():
    error = -1
    try:
        pep3134.raise_(KeyError("WOW SUCH ERROR"))
    except KeyError as err:
        error = err
        first = sys.exc_info()
        assert error.__traceback__ is first[2]
    second = sys.exc_info()
    assert error.__traceback__ is not second[2]  # works in Python 2 only

example()
This is the only pitfall. Causes, as I mentioned, work well.
PEP3134 library
This library gives you three functions you can use. Only three, so there is no need for full documentation on an external source.
pep3134.raise_
Works with the same signature as raise clause in both Python 2 and Python 3. Just a reminder:
raise exc_type, [exc_value, [exc_traceback]]
It raises exceptions for the same problems as the plain raise statement.
pep3134.reraise
Works in the same way as the raise clause without any arguments does in Python 2.
pep3134.raise_from
Works in exactly the same way as the raise ... from ... clause does in Python 3.
- Author: Sergey Arkhipov
- License: MIT
- Categories
- Development Status :: 4 - Beta
- Intended Audience :: Developers
- License :: OSI Approved :: MIT License
- Operating System :: OS Independent
- Programming Language :: Python :: 2.6
- Programming Language :: Python :: 2.7
- Programming Language :: Python :: 3.3
- Programming Language :: Python :: 3.4
- Topic :: Software Development :: Libraries
- Topic :: Software Development :: Libraries :: Python Modules
- Package Index Owner: nineseconds
- DOAP record: pep3134-0.1.2.xml
List permits the selection of multiple items (Choice allows only a single selection). A programmer prefers List when there are too many checkboxes to display (just as Choice is preferred when there are too many radio buttons). List adds a scrollbar with which items can be scrolled and seen. A List can be designed for single selection or multiple selection.
In general, all GUI components generate only one event. In contrast, List can generate two events. For single clicks it generates an ItemEvent, handled by an ItemListener, and for double clicks an ActionEvent, handled by an ActionListener. These two types of clicks can be used as in Windows Explorer: a single click selects the file and a double click opens it.
Choice Vs. List
The following are the clear-cut differences with Choice:
- Choice allows only single selection; List can be set up for single or multiple selection.
- Choice displays a single item; List displays several items at once, with a scrollbar to reach the rest.
- Choice generates only one event (ItemEvent); List generates two (ItemEvent for single clicks and ActionEvent for double clicks).
Following is the class signature
public class List extends Component implements ItemSelectable, Accessible
Example on Java AWT List – Preparing Super Bazar Bill
We have seen earlier in Choice component, the creation of applets with GUI. This program is also meant to practice GUI with applets. As a practice, you can change this program into an application. In the following applet, the items, selected by the user, in a super bazar are tabulated and made in a bill format.
The class implements ActionListener and thereby double clicks on the items only work.
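The billing logic itself is separable from the AWT wiring. As a rough sketch (item names, prices, and the `formatBill` helper are all invented here for illustration; the Frame, List, and listener registration from the applet are elided), the code an `actionPerformed()` handler might call looks like:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BillSketch {
    // Hypothetical price table -- illustrative values only.
    static final Map<String, Double> PRICES = new LinkedHashMap<>();
    static {
        PRICES.put("Rice", 40.0);
        PRICES.put("Sugar", 25.5);
    }

    // Turns the array returned by list.getSelectedItems() into a bill.
    static String formatBill(String[] selectedItems) {
        StringBuilder bill = new StringBuilder();
        double total = 0.0;
        for (String item : selectedItems) {
            double price = PRICES.getOrDefault(item, 0.0);
            total += price;
            bill.append(item).append(": ").append(price).append('\n');
        }
        return bill.append("Total: ").append(total).toString();
    }

    public static void main(String[] args) {
        // In the applet this array would come from list.getSelectedItems()
        // inside actionPerformed(), i.e. after a double click.
        System.out.println(formatBill(new String[] { "Rice", "Sugar" }));
    }
}
```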
What if I want the items selected in the list to be printed in the order in which I selected them?
The items you selected come back in the order they were added. The selected items are generally used for billing etc., where order is not important. If you want the selection order, write extra code to store the items in a data structure or array as they are selected.
It is given in the posting itself. paint() is called automatically in two circumstances: a) when the frame is created for the first time and b) when the user resizes the frame by catching and dragging its border. If the programmer would like to invoke paint() from code, he should call repaint() and not paint().
In actionPerformed() we called repaint(), as the actual code that draws on the frame lives in paint().
I have been very busy these last few weeks and have not had time to think about what I might write for Overload. I know what pressure there is on editors and how much they rely on the regular contributors coming up with something for each issue. Of course, as Francis frequently comments, there would not be so much pressure if there was a steady flow of material from elsewhere. Fiction magazines normally have large slush piles on which they can draw for the fillers they need to complete an issue. That gives new writers a chance as well as guaranteeing the readers a regular mix of old and new talent.
I had just about decided, reluctantly, to give this issue a miss when Francis asked me to look at some ideas he was playing with. What follows leans heavily on his ideas and is entirely speculative. In other words I have not had time to test even a line of code let alone explore how the ideas might work in detail. I hope, that you, the reader will explore some of these ideas and feed back your experiences, both good and bad.
This is a term that I have coined to refer to using derivation to add some small extra to a class while leaving the class functionally as it was before. I think this is best illustrated by a simple example.
Have you ever wanted to have a simple array of objects only to discover the class owner has decided not to support a default constructor? Well if you never have, you have been lucky. Look at the following:
class HisObject {
  // < no default constructor available >
};

int main() {
  HisObject objects[10];
  // other code
}
Of course your compiler refuses to compile this. In this circumstance you can just about manage by providing explicit initialisation such as:
HisObject objects[10] = {
  HisObject(<suitable data>),
  HisObject(<suitable data>),
  HisObject(<suitable data>),
  // etc. for 10 instances
};
However that would be intolerable if you had a large array and in some circumstances (array member of a class) it is impossible.
Maybe you are wondering why I do not use an STL type container instead. There are several reasons why I might choose an old style array. For example I might need the efficiency that these guarantee. I might also have difficulty if the HisObject class lacks strict copy semantics. In this case it might have had copying inhibited (I will visit that shortly) or it might actually have had some semantic aspect of copying perverted (see auto_ptr). Either of these two cases means that STL containers are dangerous.
So instead I use micro-derivation:
class MyObject : public HisObject {
public:
  MyObject();
};
The first reaction from experienced programmers is that the original class designer probably had a good reason for not providing a default constructor. I entirely agree. However a good reason in general might not be relevant in the context of my program. The only feature I am adding is the ability to create default instances. Naturally I must ensure that such can be used effectively.
Some questions that will spring to mind is to ask 'Why public derivation? Why not either private derivation or layering?' My answer is that I want MyObject instances to be exactly what HisObject instances are with the solitary exception that they can be created by default. I am completely happy for MyObject instances to substitute for HisObject ones.
There is one tiny irritant, everything will work as I expect once the object has been created, but HisObject must have some constructors and these are not inherited. That means that I must add 'forwarding' constructors in addition to my default constructor. For example, suppose that HisObject has a constructor HisObject (int), now I must add:
MyObject(int i):HisObject(i){};
to my class interface. It would have been nice if I could have written:
using HisObject;
to mean that I wanted MyObject to have constructors that did no more than call equivalent HisObject constructors. As it is I have to implement this by hand. Perhaps a good idea as it should make me think about what I am doing.
The idea of deriving to add just one feature is quite powerful. We can take a pure object class (no public copy semantics) and convert it into a value class as long as we can code a copy process. [Note from Francis: indeed and this is the subject of a forthcoming column of mine in EXE Magazine].
I am going to leave this at this stage and invite readers to come up with other instances where micro-derivation is useful. Note that I apply the term precisely to the case where you wish to add one (or possible more) features to a class which otherwise must behave exactly as it did before. Adding some form of constructor is the most likely extra.
I suppose that you might call the next idea micro-layering. I first came across the idea when Francis was discussing ways of providing user defined types that behaved like built-in ones. In a sense enums do this for integer types. For example:
enum Int {
  min_value = INT_MIN,
  max_value = INT_MAX
};
creates a type that has almost all the functionality of an int. Unfortunately it lacks any implicit inward conversions. Int values will convert to int ones implicitly but to go the other way you must make the conversion explicit with a cast. So try this instead:
class Int {
  int value_m;
public:
  Int(int v=0) : value_m(v) {}
  operator int () { return value_m; }
};
I have deliberately not qualified the constructor as explicit because I want my Int type to behave as much like the built-in type as possible. Once I have a UDT with int functionality I can derive from it and add constraints (for example, declare a private operator*() and operator/(), etc.).
I wonder what weaknesses there maybe in this idea. Certainly I can think of several advantages. For example define:
class Double {
  double value_m;
public:
  Double(double v=0) : value_m(v) {}
  operator double () { return value_m; }
};
There is no implicit conversion between Double and Int because that would require two user-defined conversions (one conversion operator and one constructor).
Our UDTs can work wherever built-in values are required, but will not satisfy a pointer to a built-in type nor a non-const reference to a built-in type.
Let me finish be revisiting my problem with HisObject. If you are unhappy with using derivation you could try this instead:
class MyObject {
  HisObject obj_m;
  static const HisObject def_s;
public:
  MyObject(HisObject data = def_s) : obj_m(data) {}
  operator HisObject () { return obj_m; }
};

const HisObject MyObject::def_s( <data for construction> );
This assumes that a copy constructor is available for HisObject.
I leave it to readers to provide a comparison between micro-derivation and micro-layering as a mechanism for adding functionality. One difference is that micro-layering consumes the permitted explicit user-defined conversion whereas micro-derivation doesn't.
One of the things that I have become increasingly aware of is the potential that very small classes have for fixing coding problems. I wonder what other ideas readers have for useful class techniques that require less than a dozen lines of class interface?
In the first five notebooks of this workshop session, we've reviewed the fundamental tools of image processing: numerical arrays, convolutions, point clouds and gradient descent.
Now, for those of you who went really fast through these first steps, here's a little bonus: an introduction to the JPEG (1992) and JPEG2000 compression standards, which are ubiquitous in cameras and cinemas. These two formats rely on the Fourier and Wavelet transforms, two mathematical tools that can be thought of as precursors to Convolutional Neural Networks.
References, going further. These two bonus notebooks are based on the Numerical Tour on approximation with orthogonal bases, written by Gabriel Peyré. For additional references, you may have a look at:
First, we re-import our libraries:
%matplotlib inline

import center_images             # Center our images
import matplotlib.pyplot as plt  # Display library
import numpy as np               # Numerical computations
from imageio import imread       # Load .png and .jpg images
Re-define our custom display routine:

def display_2(im_1, title_1, im_2, title_2):
    """Displays two images side by side."""
    plt.figure(figsize=(12, 6))
    plt.subplot(1, 2, 1)
    plt.imshow(im_1, cmap="gray")
    plt.title(title_1)
    plt.axis("off")
    plt.subplot(1, 2, 2)
    plt.imshow(im_2, cmap="gray", vmin=-7, vmax=15)
    plt.title(title_2)
    plt.axis("off")
    plt.show()
And load, once again, our image:
Compression algorithms rely on transforms f, which turn an image I into a new array f(I) that is supposed to be easier to handle.
The most fundamental of these "helpers" is the Fourier Transform (click, it's great!), which decomposes a signal or an image as a superposition of harmonics (just like a piano note, really), with weights encoded in the array of Fourier coefficients:
# The numpy package provides a Fast Fourier Transform in 2D,
# and its inverse (the iFFT). FFTshift and iFFTshift
# are just there to get nicer, centered plots:
from numpy.fft import fft2, ifft2, fftshift, ifftshift

fI = fft2(I)  # Compute the Fourier transform of our slice

# Display the logarithm of the amplitude of Fourier coefficients.
# The "fftshift" routine allows us to put the zero frequency in
# the middle of the spectrum, thus centering the right plot as expected.
display_2( I, "Image",
           fftshift( np.log(1e-7 + abs(fI)) ), "Fourier Transform" )
To get an intuition of this new object, the simplest thing to do is to take our Fourier Transform fI, edit it, and see what the image ifft2( edit( fI )) looks like:
def Fourier_bandpass(fI, fmin, fmax) :
    """ Truncates a Fourier Transform fI,
        before reconstructing a bandpassed image. """
    Y, X = np.mgrid[:fI.shape[0], :fI.shape[1]]  # Pixel coordinate grids
    radius = (X - fI.shape[1]/2) ** 2 \
           + (Y - fI.shape[0]/2) ** 2       # Squared distance to the middle point
    radius = ifftshift( np.sqrt(radius) )   # Reshape to be fft-compatible
    fI_band = fI.copy()               # Create a copy of the Fourier transform
    fI_band[radius <= fmin] = 0       # Remove all the low frequencies
    fI_band[radius >  fmax] = 0       # Remove all the high frequencies
    I_band = np.real(ifft2(fI_band))  # Invert the new transform...
    display_2( I_band, "Image",       # And display
               fftshift( np.log(1e-7 + abs(fI_band)) ), "Fourier Transform" )
As evidenced below, Fourier coefficients that are close to the center encode the low frequencies of the signal:
Fourier_bandpass(fI, 0, 10)
As we add more coefficients, we see that details start to appear:
Fourier_bandpass(fI, 0, 50)
We can also keep specific frequencies and compute "detail-only" images at a cheap numerical cost. Convolutions, that we presented in the 2nd notebook, can all be implemented this way:
Fourier_bandpass(fI, 50, 100)
Our image can then be simply expressed as a sum of low, medium and high frequencies:
Fourier_bandpass(fI, 0, 100)  # = Sum of the last two images
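This decomposition relies on nothing more than the linearity of the Fourier transform. As a quick sanity check — using a small synthetic array, since the workshop image is not bundled here, and a variant of `Fourier_bandpass` that returns the band instead of displaying it:

```python
import numpy as np
from numpy.fft import fft2, ifft2, ifftshift

# Synthetic stand-in image (the workshop slice is not bundled here).
rng = np.random.default_rng(0)
I = rng.standard_normal((64, 64))
fI = fft2(I)

def bandpass(fI, fmin, fmax):
    """Same logic as Fourier_bandpass, but returns the image."""
    Y, X = np.mgrid[:fI.shape[0], :fI.shape[1]]
    radius = ifftshift(np.sqrt((X - fI.shape[1] / 2) ** 2
                               + (Y - fI.shape[0] / 2) ** 2))
    fI_band = fI.copy()
    fI_band[radius <= fmin] = 0
    fI_band[radius > fmax] = 0
    return np.real(ifft2(fI_band))

# Linearity: two disjoint frequency bands add up to the wider band...
low  = bandpass(fI, 0, 20)
high = bandpass(fI, 20, 1000)  # radius never exceeds ~45 on a 64x64 grid
assert np.allclose(low + high, bandpass(fI, 0, 1000))
# ...and keeping every frequency (including DC) reconstructs the image.
assert np.allclose(bandpass(fI, -1, 1000), I)
```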
This is the mail archive of the libc-alpha@sources.redhat.com mailing list for the glibc project.
On Tue, Oct 07, 2003 at 01:16:39PM +0200, Thorsten Kukuk wrote:
>
> Hi,
>
> the following patch is necessary to be able to compile current glibc
> on SPARC64 again. But make check still shows a lot of seg.faults and
> bus errors.

This is not the right fix. You need something like e.g.
s/u/s/l/x86_64/sysdep.h does:

/* This is a kludge to make syscalls.list find these under the names
   pread and pwrite, since some kernel headers define those names
   and some define the *64 names for the same system calls.  */
#if !defined __NR_pread && defined __NR_pread64
# define __NR_pread __NR_pread64
#endif
#if !defined __NR_pwrite && defined __NR_pwrite64
# define __NR_pwrite __NR_pwrite64
#endif

Otherwise, you won't be able to compile against older kernel headers.

> 2003-10-07  Thorsten Kukuk  <kukuk@suse.de>
>
>   * sysdeps/unix/sysv/linux/sparc/sparc64/syscalls.list: Fix pread64
>   and pwrite64 alias entry.

	Jakub
In this tip, I demonstrate how you can eliminate controller methods that simply return views. I show you how to use the HandleUnknownAction method to handle every request against a controller automatically.
I saw Phil Haack use the following tip in a demo that he presented. I thought that it was such a great idea that I had to share it.
There is no good reason to write code unless there is a good reason to write code. I've discovered that I write a lot of controller actions that do nothing more than return a view. For example, consider the CustomerController in Listing 1.
Listing 1 – CustomerController.vb
Imports System
Imports System.Collections.Generic
Imports System.Linq
Imports System.Web
Imports System.Web.Mvc

Namespace Tip22.Controllers
    Public Class CustomerController
        Inherits Controller

        Public Function Index() As ActionResult
            Return View()
        End Function

        Public Function Details() As ActionResult
            Return View()
        End Function

        Public Function Help() As ActionResult
            Return View()
        End Function
    End Class
End Namespace
Listing 1 – CustomerController.cs
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.Mvc;

namespace Tip22.Controllers
{
    public class CustomerController : Controller
    {
        public ActionResult Index()
        {
            return View();
        }

        public ActionResult Details()
        {
            return View();
        }

        public ActionResult Help()
        {
            return View();
        }
    }
}
This controller includes three actions that return three different views. Each of these actions contains a single line of code. In fact, each of these actions contains exactly the same line of code. This code reeks of needless work. How can we fix it?
The Controller class includes a method named HandleUnknownAction() that executes whenever you attempt to invoke an action that does not exist on a controller. The controller in Listing 2 takes advantage of the HandleUnknownAction() method to render views even when a corresponding controller method does not exist.
Listing 2 – HomeController.vb
<HandleError> _
Public Class HomeController
    Inherits Controller

    Public Function Details() As ActionResult
        ViewData("message") = "Hello from controller action!"
        Return View()
    End Function

    Protected Overrides Sub HandleUnknownAction(ByVal actionName As String)
        Me.View(actionName).ExecuteResult(Me.ControllerContext)
    End Sub
End Class
Listing 2 – HomeController.cs
[HandleError]
public class HomeController : Controller
{
    public ActionResult Details()
    {
        ViewData["message"] = "Hello from controller action!";
        return View();
    }

    protected override void HandleUnknownAction(string actionName)
    {
        this.View(actionName).ExecuteResult(this.ControllerContext);
    }
}
When you use the controller in Listing 2, you can call any action and the controller will attempt to return a view that corresponds to the action. You don’t need to explicitly code an action method for each view.
Notice that the controller includes a Details() action. When you need to pass ViewData, then you need to explicitly code the action method.
Thank you for your great tips!
Pingback from ASP.NET MVC Archived Blog Posts, Page 1
ASP.NETMVCTip#22–无需创建ControllerAction直接返回一个ViewASP.NETMVCTip#22–ReturnaViewwithoutC...
ASP.NET MVC Tip #22 – 无需创建 Controller Action 直接返回一个View ASP.NET MVC Tip #22 – Return a View without Creating
As Eilon pointed out to me, you can accomplish the same thing using routing and a parameterized action method.
Route: url="/show/{viewName}", defaults={controller="home", action=showview}
public ActionResult ShowView(string viewName) {
return View(viewName);
And now, you don't have to write action methods for each view in the views/home directory, you can just go to /show/viewname
You've been kicked (a good thing) - Trackback from DotNetKicks.com
Pingback from Dew Drop - July 22, 2008 | Alvin Ashcraft's Morning Dew
Hi Stephen.
Would it be possible to provide a full text feed from your site. I generally read your blog in an RSS reader, and this is one of the few that I have to click through to read the actual article.
A couple of suggestions for your blog theme that hopefully won't go astray - if you cssify your vb code with a vbwrapper class instead of csharpwrapper css class, you could provide some quick javascript to hide one or the other. An option to expand all code samples for easier reading would be pretty nice also. The scrolling textboxes are nice at first, but are an annoyance when you want to read the code with the article.
@joshka - thanks for the feedback and for reading my blog. I have to publish excerpts because feedburner places a size limit on blog entries (feedburner stops broadcasting a blog when it gets too long). I'll look into a method for hiding/displaying different code versions. Thanks!
What if the client puts in a URL for a view that does not exist? I assume they get an error of some sort? What if we would like to display a "File (view) not found" page instead?
Thanks.
I don't see why this would be good other then for DRY. If you go to a page that doesn't exist, you will encounter an exception (I would think, never tested it though) that the view doesn't exist. At this point, does MVC push the user to the error page or does an ugly exception page show up?
Lexapro withdrawal.
Alcohol and lexapro. Why does lexapro make me sleep all the time. Lexapro. When do you feel better with 10mg lexapro.
Great tip! I'll try it out. I just love a DRY tip :)
This works great if you have actually created a corresponding view and simply want to leave out the glue code of returning a View with the same name as the action.
However, if you have not created a corresponding view you will receive an InvalidOperationException when you trigger the ExecuteResult method. This is in contrast to the "404 Not Found" message you would otherwise receive on an unknown action.
If you want to be both DRY and able to handle unknown action/view combinations you can add a custom route as suggested by "haacked" and then explicitly provide a fallback action by overriding the OnException method.
routes.MapRoute(
"Home/{viewName}",
new { controller = "Home", action = "ShowUnknownView" }
);
public ActionResult ShowUnknownView(string viewName)
return View(viewName);
protected override void OnException(ExceptionContext filterContext)
filterContext.HttpContext.Response.Redirect("/Home");
<a href= nicolecocoaustinvids.watersaua.info >nicole coco austin vid
<a href= temecasmediatakeoutcom.foodsaua.info >temeca s mediatakeout com</a>
<a href= htt
<a href= mildligamentumflavum.answerauts.info >mil
references home new suggests
<a href= northwestmissouristateuniverstiy.goodauts.info >northwest missouri state universtiy</a>
<a
<a href= yahoocurrencyconversions.fromauts.info >yahoo currency co | http://weblogs.asp.net/stephenwalther/archive/2008/07/21/asp-net-mvc-tip-22-return-a-view-without-creating-a-controller-action.aspx | crawl-002 | refinedweb | 1,052 | 58.18 |
react-date-picker
v7.9.0
Published
A date picker for your React app.
Downloads
137,889
Readme
React-Date-Picker
A date picker for your React app.
- Pick days, months, years, or even decades
- Supports virtually any language
- No moment.js needed
tl;dr
- Install by executing
npm install react-date-pickeror
yarn add react-date-picker.
- Import by adding
import DatePicker from 'react-date-picker'.
- Use by adding
<DatePicker />. Use
onChangeprop for getting new values.
Demo
Minimal demo page is included in sample directory.
Online demo is also available!.
License
The MIT License.
Author
Thank you
Sponsors
Thank you to all our sponsors! Become a sponsor and get your image on our README on GitHub.
Backers
Thank you to all our backers! Become a backer and get your image on our README on GitHub.
Top Contributors
Thank you to all our contributors that helped on this project! | https://www.pkgstats.com/pkg:react-date-picker | CC-MAIN-2019-51 | refinedweb | 149 | 62.44 |
Jay Taylor's notesback to listing index
What Are Checked Exceptions in Java? | Dr Dobb's[web search]
Oliver is a senior computer scientist at Adobe Systems. He can be reached at goldman@ieee.org.
Exceptions are designed to relieve you of the need to check return codes or state variables after every function call to determine if an unexpected event has occurred. Used well, exceptions can reduce the number of lines of code devoted to error handling and reduce fragmentation of what remains. These accomplishments simplify programs, and simpler programs are more likely to be correct, to be completed on time, and to be maintainable.
Exceptions eliminate the need to check error codes by allowing the run-time environment to unravel the call stack until an exception handler is found. The same thing can be accomplished with judicious examination of return codes and a sufficient number of conditionals. Exceptions, however, automate the checking and unraveling for you, thus simplifying the problem and leaving only handling the exception. Exceptions do not simplify raising the error condition: Throwing an exception requires no more or no less work than setting an error code.
Error codes are opaque to compilers; they do not distinguish between a method call that succeeds and one that fails based on the domain of the return value. Exception- handling mechanisms provide to the compiler an additional level of syntactic information by identifying where exceptions can be thrown and where they can be caught. Although handling an exception remains up to you, exceptions raise a question as to whether a compiler can additionally improve error handling by making use of this new syntactic information.
Java provides a platform to examine this question as its designers chose to include two types of exceptions: Checked exceptions, that is, those that the compiler requires to be handled; and unchecked exceptions, those that the compiler does not. In Java, whether or not an exception is checked is determined by its type, and is therefore fixed at development time.
Checked-Exception Strategies
The notion that a compiler can help establish program correctness is appealing, as is any automated scheme to improve coding. (Building the tool into the compiler even guarantees it will be run.) Most likely because this seems promising, and certainly now as an established pattern of standard Java APIs, most Java exceptions are declared to be checked. This places an immediate burden on you: Where a checked exception may be thrown, the method must either handle that exception or declare that it is propagated. Propagating an exception moves the burden to the calling routine but leaves it with you. In practice, a variety of coding strategies are employed to ease this burden.
Suppress
The simplest mechanism for handling a checked exception is to suppress it. This strategy is often chosen when an exception can occur because of a method's implementation and not because of its function; see Listing One.
This also occurs in static initializers where checked exceptions are not permitted. The implementation suffers because it suppresses the exception entirely. The caller has no way to know that any further use of an instance of this class will almost certainly fail on account of the failed initialization. Worse yet, the failure may be subtle, and therefore hard to detect.
Bail Out
When suppressed exceptions occur, repercussions tend to pop up later during program execution and can be difficult to trace back to their source. This drawback can be overcome by bailing out when an unexpected exception occurs, as in Listing Two.
The "bail out" strategy may be appropriate for simple, standalone programs. Bailing out is unacceptable when the code is used as part of a library or when multiple programs are executing in a single JVM (for instance, in an application server). In such circumstances, the entire application can be brought down due to what may be, from the application's point of view, a recoverable error.
Propagate
Rather than bailing out, library clients would be better served if the original exception were propagated, allowing the client to determine how the exception should be handled. Listing Three simplifies the method implementation.
Whereas the earlier solutions removed the burden on the caller to handle IOException, every method that calls initialize() must now handle it. Generally, higher level methods must declare that they throw the union of all exceptions thrown by the underlying libraries. This is cumbersome as the sets grow in size beyond three exceptions. It is unmaintainable when new exceptions are introduced at lower levels due to the sheer number of method signatures that require change. Before long, throws clauses will account for more lines of code than will program logic.
Base Case
The exception hierarchy provides a means of shortening these long throws clauses, namely, declaring throws Exception, as in Listing Four. This catch-all phrase relieves you of any need to enumerate the exceptions being passed through — as well as relieving the compilers of any ability to aid in constructing a correct program. You could argue that any mechanism intended to establish program correctness should disallow this behavior by requiring that the most specific exception type be given.
Wrap
To reduce the number of possible exceptions thrown by any library without resorting to the base-case throws clause, most Java APIs employ a wrapping scheme in which a single checked exception is possibly thrown from every public method. Should any other checked exception be thrown in the implementation of the library, it can be propagated by wrapping it in the public exception as demonstrated in Listing Five.
No matter how many different checked exceptions may be thrown in the course of executing initialize(), the caller needs to deal only with MyAPIException. Examples of this technique include RMI, with RemoteException, and JDBC, with SQLException. Thus, these checked exceptions become part of the interfaces exported by these libraries:
public class Connection {<br> ...<br> public Statement createStatement() throws SQLException;<br> ...<br> }</p>
Wrapping exceptions in a single type discards the classification inherent in the exception type hierarchy, but some information about the originating exception is preserved. JDBC uses an error code that can be examined using a switch statement, as in Listing Six. Often the originating exception itself is available from the wrapping exception and can be retrieved by recursively unwrapping (Listing Seven).
Wrapping is reasonable if the caller is known to never care about the root exception, but this is not a reasonable assumption for any library. Thus, the throws clause has been simplified at the expense of all of the raveling and unraveling code scattered everywhere.
Translate
Exceptions can also be handled by translating them back into error codes or funny return values. This is directly at odds with the rationale for an exception handling mechanism as given by the Java Language Specification, but an example appears in the java.io library. If java.io consistently threw exceptions when errors occurred, the code in Listing Eight should be commonplace.
In fact, you probably haven't seen this code, and there is no reason to handle IOException here. Although System.out.println() does call an underlying write() method that throws an IOException, it catches any IOExceptions and just sets an error flag. You can call PrintStream.checkError() periodically to determine if println() has failed. As the Java Language Specification states when arguing for exceptions, error codes are often ignored. This is why Listing Nine, which should be commonplace, also is not. The translate strategy is at best only slightly better than the suppress strategy.
Unchecked Exceptions
Although the preceding checked-exception strategies vary in details, none of them promote clean and correct exception handling. Each strategy requires additional code to suppress or to propagate checked exceptions that cannot be dealt with from the method in which they may be thrown. All of this code is written to convince the compiler that checked exceptions are being handled when in fact they are not. Simpler code could be written if IOException derived from RuntimeException and was therefore unchecked, as in Listing Ten. This code no longer contains any error-handling code whatsoever. This is as it should be because this method does not know how to deal with any errors that might arise.
Since the class has nothing to do with I/O, the caller concerns itself only with whether or not initialization succeeds. If an error occurs and the caller is prepared to handle this situation, a catch( Exception e ) suffices. Otherwise, the caller takes no action at all to cleanly propagate the original exception.
Conclusion
Checked exceptions are interesting because they purport to offer an improvement in error handling over error flags or funny return codes and a mechanism by which the compiler can help ensure program correctness. The argument that checked exceptions aid program correctness claims that requiring a calling method to handle each checked exception (even if by propagation) helps ensure that each exception is handled properly. In practice, propagation is the most common action for a calling method to take because most methods do not know how to properly handle exceptions. In requiring extra effort for the common case, checked exceptions encourage exception-handling strategies that strive to reduce the work required to propagate checked exceptions and not strategies that handle errors properly. In practice, checked exceptions are less likely than unchecked exceptions to be properly handled. The checks performed by the compiler actually have a detrimental effect.
Because Java provides both exception- handling schemes, the solution to improving error handling in Java is as simple as making all exceptions run-time exceptions. For convincing the compiler that one handles existing checked exceptions, select one of these strategies and remember that handling the exception properly is, as always, incumbent upon you.
DDJ
Listing One
// Strategy: Suppress // Initialization for a class that has nothing to do with I/O public void initialize() { try { // Load pre-computed values for this class. InputStream is = new FileInputStream( CACHE_FILE ); is.read( cache ); ... } catch( IOException ex ) { // Ignore: dont know what to do about this error, and // signature doesnt allow it to propagate. } } <H4><A NAME="l2"> Listing Two</H4> // Strategy: Bail Out // Initialization for a class that has nothing to do with I/O public void initialize() { try { // Load pre-computed values for this class. InputStream is = new FileInputStream( CACHE_FILE ); is.read( cache ); ... } catch( IOException ex ) { // After all, isnt this why printStackTrace() exists? ex.printStackTrace(); System.exit( 1 ); } }
Listing Three
// Strategy: Propagate // Initialization for a class that has nothing to do with I/O public void initialize() throws IOException { // Load pre-computed values for this class. InputStream is = new FileInputStream( CACHE_FILE ); is.read( cache ); ... }
Listing Four
// Strategy: Base Case // Initialization for a class that has nothing to do with I/O public void initialize() throws Exception { // Load pre-computed values for this class. InputStream is = new FileInputStream( CACHE_FILE ); is.read( cache ); ... }
Listing Five
// Strategy: Wrap // Initialization for a class that has nothing to do with I/O public void initialize() throws MyAPIException { // Load pre-computed values for this class. try { InputStream is = new FileInputStream( CACHE_FILE ); is.read( cache ); } catch( IOException ex ) { throw new MyAPIException( "Loading cache failed", ex ); } ... }
Listing Six
// Unwrapping by error code try { ... } catch( SQLException ex ) { // Big Ugly Switch Statements != Object Oriented Programming. switch( ex.getErrorCode()) { case 1: ... default: // Throw an exception, perhaps? } } ...
Listing Seven
// Recursively unwrapping try { ... } catch( MyAPIException ex ) { try { Throwable t = ex; while( ex != null ) { t = ex.getWrappedException(); ex = ( t instanceof MyAPIException ? (MyAPIException) t : null ); } throw t; } catch( UnderlyingException e ) { // according to the underlying exception type ... } ... }
Listing Eight
public static void main( String[] args ) { try { if( args.length < 1 ) { System.out.println( "usage: do [it]" ); System.exit( 1 ); } } catch( IOException ex ) { // System.out isnt working, so printStackTrace() isnt any good. System.exit( 1 ); } ... }
Listing Nine
public static void main( String[] args ) { ... System.out.print( "Current progress: " ); if( System.out.checkError()) { // Cant communicate with the user anymore System.exit( 1 ); } ... } }
Listing Ten
// Unchecked exceptions // Initialization for a class that has nothing to do with I/O public void initialize() { // Load pre-computed values for this class. InputStream is = new FileInputStream( CACHE_FILE ); is.read( cache ); ... } | https://jaytaylor.com/notes/node/1344896704000.html | CC-MAIN-2021-39 | refinedweb | 2,014 | 53.92 |
-- $Id: NEWS,v 1.58 2021/08/08 22:44:04 tom Exp $
-------------------------------------------------------------------------------
-- Changes by Thomas E. Dickey
-- vile:txtmode
-------------------------------------------------------------------------------

2021/08/08
	+ add symbolic link for flex++ manpage in makefile "install" rule.
	> Boris Kolpackov:
	+ add/use yylex_destroy() function like "new" flex, which can be used
	  to reset the lexer in order to re-execute it on different input.
	+ fix a memory leak in gentab()

2021/08/06
	+ compiler-warning fixes

2021/08/04
	+ rewrote command-line options parsing, adding a table to map long
	  options into the normal single-letter options.
	+ updated config.guess, config.sub

2021/05/10
	+ modify skeleton to allow override of YY_BUF_SIZE
	+ fix cppcheck warnings
	+ update configure macros, e.g., for clang, BSDs, etc.
	  CF_ADD_CFLAGS, CF_AR_FLAGS, CF_CC_ENV_FLAGS, CF_CHECK_CACHE,
	  CF_CLANG_COMPILER, CF_CONST_X_STRING, CF_GCC_ATTRIBUTES,
	  CF_GCC_WARNINGS, CF_INTEL_COMPILER, CF_LOCALE, CF_MAKE_DOCS,
	  CF_MIXEDCASE_FILENAMES, CF_PATH_SYNTAX, CF_PROG_EXT, CF_WITHOUT_X,
	  CF_WITH_MAN2HTML, CF_XOPEN_SOURCE
	+ updated config.guess, config.sub

2020/07/15
	+ compiler-warning and shellcheck fixes
	+ update configure macros, e.g., for clang, BSDs, etc.
	  CF_ADD_CFLAGS, CF_AR_FLAGS, CF_CONST_X_STRING, CF_GCC_ATTRIBUTES,
	  CF_GCC_WARNINGS, CF_PROG_CC, CF_WITHOUT_X
	+ updated config.guess, config.sub

2019/11/23
	+ cleanup manual-page formatting.
	+ add lower- and patch-version information to skeleton.
	+ modify generated file to include unistd.h unless overridden, so that
	  isatty() will be prototyped by default.
	+ update configure macros, e.g., for clang, BSDs, etc.
	  CF_CC_ENV_FLAGS, CF_CONST_X_STRING, CF_GCC_VERSION, CF_GCC_WARNINGS,
	  CF_GNU_SOURCE, CF_POSIX_C_SOURCE, CF_POSIX_VISIBLE, CF_PROG_EXT,
	  CF_PROG_GROFF, CF_PROG_LINT, CF_TRY_XOPEN_SOURCE, CF_WITH_MAN2HTML,
	  CF_XOPEN_SOURCE
	+ updated config.guess, config.sub

2017/12/31
	+ add a "FALLTHRU" comment to quiet compiler warning in vile's dcl-filt.l
	+ update configure macros: CF_CC_ENV_FLAGS, CF_WITH_MAN2HTML
	+ updated config.guess, config.sub

2017/11/11
	+ build-fix for rpms with Fedora 26.

2017/05/21
	+ amend mkscan.sh script to work with "make test" when building from
	  other directories than the source-directory (report by Michael Tiernan)
	+ add configure check for "ar" flags
	+ add configure --with-man2html option
	+ add "docs" rule to manpage
	+ update configure macros, e.g., for clang and MingW
	  CF_ACVERSION_CHECK, CF_ADD_CFLAGS, CF_ARG_OPTION, CF_CC_ENV_FLAGS,
	  CF_DISABLE_ECHO, CF_GCC_ATTRIBUTES, CF_GCC_WARNINGS, CF_GNU_SOURCE,
	  CF_INTEL_COMPILER, CF_MAKE_DOCS, CF_MIXEDCASE_FILENAMES,
	  CF_POSIX_C_SOURCE, CF_PROG_CC, CF_PROG_EXT, CF_PROG_LINT,
	  CF_XOPEN_SOURCE
	+ updated config.guess, config.sub

2013/12/09
	+ minor compiler-warning fixes for the skeleton.
	+ update configure macros, e.g., for clang and MingW
	+ updated config.guess, config.sub

2010/09/06
	+ fix stricter compiler warnings, e.g., for 64-bits and gcc 4.1.2
	  with -Wconversion
	+ remove unneeded "/" after $(DESTDIR) in Makefile.in, needed to
	  install with Cygwin.

2010/06/27
	+ improve rename.sh, handling "FLEX" and "Flex" cases.
	+ add configure checks for lint and tags programs.
	+ add $DESTDIR to makefile.
	+ drop mkdirs.sh, use "mkdir -p"
	+ add build-scripts for RPM and Debian packages.
	+ updates to configure script macros:
	  + CF_ADD_CFLAGS, CF_ARG_OPTION, CF_INTEL_COMPILER,
	    CF_POSIX_C_SOURCE, CF_XOPEN_SOURCE quoted params of ifelse()
	  + CF_GCC_WARNINGS, change logic for warning options, to work with
	    c89 wrapper for gcc
	  + CF_GCC_VERSION, discard stderr, to work with c89 wrapper for gcc
	  + CF_DISABLE_ECHO, uses different indent
	+ updated config.guess, config.sub

2009/10/27
	+ add configure macro CF_XOPEN_SOURCE, to enable use of fileno() when
	  compiling byacc output with c99.

2009/10/13
	+ more gcc warning fixes, including workaround for defective
	  implementation of attribute warn_unused_result.
	+ change ccltbl[] to array of structs holding the reason why a
	  character was added to a character class in addition to the
	  character.

2009/09/02
	+ add patch-date to version message.
	+ modify generated code to eliminate gcc -Wconversion warnings.
	+ update utility scripts, using install-sh and mkdirs.sh
	+ updated config.guess, config.sub

2008/11/17
	+ modify makefile rules and runtime handling of skeleton to make the
	  C++ header file work using the same program prefix.
	+ update FlexLexer.h to C++ standard header and namespace.
	+ change default for --program-prefix option, to "re", making this
	  install as "reflex".

2008/11/16
	+ add a missing ifdef for YY_NO_INPUT, needed to make the function
	  actually removed.
	+ move C-code supporting YY_FATAL_ERROR() in skeleton so that if the
	  macro is overridden, then the support-code will not be compiled.
	  That eliminates unused-code warnings.
	+ modify makefile rule for "make check" to ensure that it uses POSIX
	  locale, since the reference file is generated that way.
	+ eliminate use of isascii(), which prevented flex from parsing
	  according to 8-bit locales.
	+ modify makefile to work with AC_EXEEXT and AC_OBJEXT macros.
	+ add configure --disable-echo and --enable-warnings options.
	+ eliminate register keyword.
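
	  The YY_BUF_SIZE override noted under 2021/05/10 can be exercised
	  from the definitions section of a scanner; this is an illustrative
	  sketch (the 65536 value is arbitrary, and it assumes the skeleton
	  guards its default with #ifndef YY_BUF_SIZE, as the entry implies):

		%{
		/* Define YY_BUF_SIZE before the skeleton code so the
		 * generated scanner uses this buffer size instead of
		 * its default.  65536 is an arbitrary example value. */
		#define YY_BUF_SIZE 65536
		%}
		%option noyywrap
		%%
		.|\n	ECHO;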
2008/11/12
	+ update configure script, adding ability to rename executable during
	  install, e.g., as "reflex".

2008/07/26
	+ eliminate PROTO(), YY_PROTO and YY_USE_PROTOS macros.
	+ ANSIfy'd all of the C files.
	+ indent'd all of the C files.

-------------------------------------------------------------------------------
-- Changes by Verne Paxson
-------------------------------------------------------------------------------

Changes between release 2.5.4 (11Sep96) and release 2.5.3:

	- Fixed a bug introduced in 2.5.3 that blew it when a call to input()
	  occurred at the end of an input file.
	- Fixed scanner skeleton so the example in the man page of scanning
	  strings using exclusive start conditions works.
	- Minor Makefile tweaks.

Changes between release 2.5.3 (29May96) and release 2.5.2:

	- Some serious bugs in yymore() have been fixed.  In particular, when
	  using AT&T-lex-compatibility or %array, you can intermix calls to
	  input(), unput(), and yymore().  (This still doesn't work for
	  %pointer, and isn't likely to in the future.)
	- A bug in handling NUL's in the input stream of scanners using
	  REJECT has been fixed.
	- The default main() in libfl.a now repeatedly calls yylex() until it
	  returns 0, rather than just calling it once.
	- Minor tweak for Windows NT Makefile, MISC/NT/Makefile.

Changes between release 2.5.2 (25Apr95) and release 2.5.1:

	- The --prefix configuration option now works.
	- A bug that completely broke the "-Cf" table compression option has
	  been fixed.
	- A major headache involving "const" declarators and Solaris systems
	  has been fixed.
	- An octal escape sequence in a flex regular expression must now
	  contain only the digits 0-7.
	- You can now use "--" on the flex command line to mark the end of
	  flex options.
	- You can now specify the filename '-' as a synonym for stdin.
	- By default, the scanners generated by flex no longer statically
	  initialize yyin and yyout to stdin and stdout.
	  This change is necessary because in some ANSI environments, stdin
	  and stdout are not compile-time constant.  You can force the
	  initialization using "%option stdinit" in the first section of
	  your flex input.
	- "%option nounput" now correctly omits the unput() routine from the
	  output.
	- "make clean" now removes config.log, config.cache, and the flex
	  binary.  The fact that it removes the flex binary means you should
	  take care if making changes to scan.l, to make sure you don't wind
	  up in a bootstrap problem.
	- In general, the Makefile has been reworked somewhat (thanks to
	  Francois Pinard) for added flexibility - more changes will follow
	  in subsequent releases.
	- The .texi and .info files in MISC/texinfo/ have been updated,
	  thanks also to Francois Pinard.
	- The FlexLexer::yylex(istream* new_in, ostream* new_out) method now
	  does not have a default for the first argument, to disambiguate it
	  from FlexLexer::yylex().
	- A bug in destructing a FlexLexer object before doing any scanning
	  with it has been fixed.
	- A problem with including FlexLexer.h multiple times has been fixed.
	- The alloca() chud necessary to accommodate bison has grown even
	  uglier, but hopefully more correct.
	- A portability tweak has been added to accommodate compilers that
	  use char* generic pointers.
	- EBCDIC contact information in the file MISC/EBCDIC has been
	  updated.
	- An OS/2 Makefile and config.h for flex 2.5 is now available in
	  MISC/OS2/, contributed by Kai Uwe Rommel.
	- The descrip.mms file for building flex under VMS has been updated,
	  thanks to Pat Rankin.
	- The notes on building flex for the Amiga have been updated for
	  flex 2.5, contributed by Andreas Scherer.

Changes between release 2.5.1 (28Mar95) and release 2.4.7:

	- A new concept of "start condition" scope has been introduced.  A
	  start condition scope is begun with "<SCs>{", where SCs is a list
	  of one or more start conditions; inside the scope, every rule
	  automatically has the prefix <SCs> applied to it, until a '}'
	  which matches the initial '{'.  Rules inside start condition
	  scopes (and any rule, actually, other than the first) can be
	  indented, to better show the extent of the scope.  Start condition
	  scopes may be nested.
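
	  A minimal sketch of the scope notation (the ESC start condition
	  and the escape-sequence rules here are illustrative, not from the
	  release notes):

		%x ESC
		%%
		<ESC>{
		    "\\n"	return '\n';
		    "\\r"	return '\r';
		    "\\t"	return '\t';
		}

	  which behaves the same as writing <ESC>"\\n", <ESC>"\\r", and
	  <ESC>"\\t" as three separately prefixed rules.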
	- The new %option directive can be used in the first section of a
	  flex scanner to control scanner-generation options.  Most options
	  are given simply as names, optionally preceded by the word "no"
	  (with no intervening whitespace) to negate their meaning.  Some
	  are equivalent to flex flags, so putting them in your scanner
	  source is equivalent to always specifying the flag (%option's
	  take precedence over flags):

		7bit		-7 option
		8bit		-8 option
		align		-Ca option
		backup		-b option
		batch		-B option
		c++		-+ option
		caseful		opposite of -i option (caseful is the default)
		case-sensitive	same as above
		caseless	-i option
		case-insensitive same as above
		warn		opposite of -w option
				(so use "%option nowarn" for -w)
		array		equivalent to "%array"
		pointer		equivalent to "%pointer" (default)

	  Some provide new features:

		always-interactive
			generate a scanner which always considers its input
			"interactive" (no call to isatty() will be made when
			the scanner runs)
		main	supply a main program for the scanner, which simply
			calls yylex().  Implies %option noyywrap.
		never-interactive
			generate a scanner which never considers its input
			"interactive" (no call to isatty() will be made when
			the scanner runs)
		stack	if set, enable start condition stacks (see below)
		stdinit	if unset ("%option nostdinit"), initialize yyin and
			yyout statically to nil FILE* pointers, instead of
			stdin and stdout
		yylineno
			if set, keep track of the current line number in
			global yylineno (this option is expensive in terms
			of performance).  The line number is available to
			C++ scanning objects via the new member function
			lineno().
		yywrap	if unset ("%option noyywrap"), scanner does not call
			yywrap() upon EOF but simply assumes there are no
			more files to scan

	  Flex scans your rule actions to determine whether you use the
	  REJECT or yymore features (this is not new).
	  Two %options can be used to override its decision, either by
	  setting them to indicate the feature is indeed used, or unsetting
	  them to indicate it actually is not used:

		reject
		yymore

	  Three %option's take string-delimited values, offset with '=':

		outfile="<name>"	equivalent to -o<name>
		prefix="<name>"		equivalent to -P<name>
		yyclass="<name>"	set the name of the C++ scanning
					class (see below)

	  A number of %option's are available to suppress routines that
	  would otherwise appear in the generated scanner (e.g., "%option
	  nounput" omits the unput() routine).

	  You can specify multiple options with a single %option directive,
	  and multiple directives in the first section of your flex input
	  file.

	- The new function:

		YY_BUFFER_STATE yy_scan_string( const char *str )

	  returns a YY_BUFFER_STATE (which also becomes the current input
	  buffer) for scanning the given string, which occurs starting with
	  the next call to yylex().  The string must be NUL-terminated.

	  A related function:

		YY_BUFFER_STATE yy_scan_bytes( const char *bytes, int len )

	  creates a buffer for scanning "len" bytes (including possibly
	  NUL's) starting at location "bytes".

	  Note that both of these functions create and scan a *copy* of the
	  string/bytes.  (This may be desirable, since yylex() modifies the
	  contents of the buffer it is scanning.)  You can avoid the copy
	  by using:

		YY_BUFFER_STATE yy_scan_buffer( char *base, yy_size_t size )

	  which scans in place the buffer starting at "base", consisting of
	  "size" bytes, the last two bytes of which *must* be
	  YY_END_OF_BUFFER_CHAR (these bytes are not scanned; thus,
	  scanning consists of base[0] through base[size-2], inclusive).
	  If you fail to set up "base" in this manner, yy_scan_buffer
	  returns a nil pointer instead of creating a new input buffer.

	  The type yy_size_t is an integral type to which you can cast an
	  integer expression reflecting the size of the buffer.

	- Three new routines are available for manipulating stacks of start
	  conditions:

		void yy_push_state( int new_state )

	  pushes the current start condition onto the top of the stack and
	  BEGIN's "new_state" (recall that start condition names are also
	  integers).
		void yy_pop_state()

	  pops the top of the stack and BEGIN's to it, and

		int yy_top_state()

	  returns the top of the stack without altering the stack's
	  contents.

	  The start condition stack grows dynamically and so has no
	  built-in size limitation.  If memory is exhausted, program
	  execution is aborted.

	  To use start condition stacks, your scanner must include a
	  "%option stack" directive.

	- flex now supports POSIX character class expressions.  These are
	  expressions enclosed inside "[:" and ":]" delimiters (which
	  themselves must appear between the '[' and ']' of a character
	  class; other elements may occur inside the character class, too).
	  The expressions flex recognizes are:

		[:alnum:] [:alpha:] [:blank:]
		[:cntrl:] [:digit:] [:graph:]
		[:lower:] [:print:] [:punct:]
		[:space:] [:upper:] [:xdigit:]

	  These expressions all designate a set of characters equivalent to
	  the corresponding standard C isXXX function.  If your scanner is
	  case-insensitive (-i flag), then [:upper:] and [:lower:] are
	  equivalent to [:alpha:].

	- The promised rewrite of the C++ FlexLexer class has not yet been
	  done.  Support for FlexLexer is limited at the moment to fixing
	  show-stopper bugs, so, for example, the new functions
	  yy_scan_string() & friends are not available to FlexLexer
	  objects.

	- The new macro

		yy_set_interactive(is_interactive)

	  can be used to control whether the current buffer is considered
	  "interactive" (see flex.1).  A non-zero value in the macro
	  invocation marks the buffer as interactive, a zero value as
	  non-interactive.  Note that use of this macro overrides "%option
	  always-interactive" or "%option never-interactive".
	  yy_set_interactive() must be invoked prior to beginning to scan
	  the buffer.

	- The new macro yy_set_bol(at_bol) can be used to control whether
	  the current buffer's scanning context for the next token match is
	  done as though at the beginning of a line (non-zero macro
	  argument; makes '^' anchored rules active) or not at the
	  beginning of a line (zero argument, '^' rules inactive).

	- Related to this change, the mechanism for determining when a scan
	  is starting at the beginning of a line has changed.
  It used to be that '^' was active iff the character prior to that at which the scan started was a newline.  The mechanism now is that '^' is active iff the last token ended in a newline (or the last call to input() returned a newline).  For most users, the difference in mechanisms is negligible.  Where it will make a difference, however, is if unput() or yyless() is used to alter the input stream.  When in doubt, use yy_set_bol().

- The new beginning-of-line mechanism involved changing some fairly twisted code, so it may have introduced bugs - beware ...

- The macro YY_AT_BOL() returns true if the next token scanned from the current buffer will have '^' rules active, false otherwise.

- The new function:

      void yy_flush_buffer( struct yy_buffer_state* b )

  flushes the contents of the current buffer (i.e., next time the scanner attempts to match a token using b as the current buffer, it will begin by invoking YY_INPUT to fill the buffer).  This routine is also available to C++ scanners (unlike some of the other new routines).  The related macro YY_FLUSH_BUFFER flushes the contents of the current buffer.

- A new "-ooutput" option writes the generated scanner to "output".  If used with -t, the scanner is still written to stdout, but its internal #line directives (see next item) use "output".

- Flex now generates #line directives relating the code it produces to the output file; this means that error messages in the flex-generated code should be correctly pinpointed.

- When generating #line directives, filenames with embedded '\'s have those characters escaped (i.e., turned into '\\').  This feature helps with reporting filenames for some MS-DOS and OS/2 systems.

- The FlexLexer class includes two new public member functions:

      virtual void switch_streams( istream* new_in = 0, ostream* new_out = 0 )

  reassigns yyin to new_in (if non-nil) and yyout to new_out (ditto), deleting the previous input buffer if yyin is reassigned.
  It is used by:

      int yylex( istream* new_in = 0, ostream* new_out = 0 )

  which first calls switch_streams() and then returns the value of calling yylex().

- C++ scanners now have yy_flex_debug as a member variable of FlexLexer rather than a global, and member functions for testing and setting it.

- When generating a C++ scanning class, you can now use %option yyclass="foo" to inform flex that you have derived "foo" as a subclass of yyFlexLexer, so flex will place your actions in the member function foo::yylex() instead of yyFlexLexer::yylex().  It also generates a yyFlexLexer::yylex() member function that generates a run-time error if called (by invoking yyFlexLexer::LexerError()).  This feature is necessary if your subclass "foo" introduces some additional member functions or variables that you need to access from yylex().

- Current texinfo files in MISC/texinfo, contributed by Francois Pinard.

- You can now change the name "flex" to something else (e.g., "lex") by redefining $(FLEX) in the Makefile.

- Two bugs (one serious) that could cause "bigcheck" to fail have been fixed.

- A number of portability/configuration changes have been made for easier portability.

- You can use "YYSTATE" in your scanner as an alias for YY_START (for AT&T lex compatibility).

- input() now maintains yylineno.

- input() no longer trashes yytext.

- interactive scanners now read characters in YY_INPUT up to a newline, a large performance gain.

- C++ scanner objects now work with the -P option.  You include <FlexLexer.h> once per scanner - see comments in <FlexLexer.h> (or flex.1) for details.

- C++ FlexLexer objects now use the "cerr" stream to report -d output instead of stdio.

- The -c flag now has its full glorious POSIX interpretation (do nothing), rather than being interpreted as an old-style -C flag.

- Scanners generated by flex now include two #define's giving the major and minor version numbers (YY_FLEX_MAJOR_VERSION, YY_FLEX_MINOR_VERSION).  These can then be tested to see whether certain flex features are available.

- Scanners generated using -l lex compatibility now have the symbol YY_FLEX_LEX_COMPAT #define'd.
- When initializing (i.e., yy_init is non-zero on entry to yylex()), generated scanners now set yy_init to zero before executing YY_USER_INIT.  This means that you can set yy_init back to a non-zero value in YY_USER_INIT if you need the scanner to be reinitialized on the next call.

- You can now use "#line" directives in the first section of your scanner specification.

- When generating full-table scanners (-Cf), flex now puts braces around each row of the 2-d array initialization, to silence warnings on over-zealous compilers.

- Improved support for MS-DOS.  The flex sources have been successfully built, unmodified, for Borland 4.02 (all that's required is a Borland Makefile and config.h file, which are supplied in MISC/Borland - contributed by Terrence O Kane).

- Improved support for Macintosh using Think C - the sources should build for this platform "out of the box".  Contributed by Scott Hofmann.

- Improved support for VMS, in MISC/VMS/, contributed by Pat Rankin.

- Support for the Amiga, in MISC/Amiga/, contributed by Andreas Scherer.  Note that the contributed files were developed for flex 2.4 and have not been tested with flex 2.5.

- Some notes on support for the NeXT, in MISC/NeXT, contributed by Raf Schietekat.

- The MISC/ directory now includes a preformatted version of flex.1 in flex.man, and pre-yacc'd versions of parse.y in parse.{c,h}.

- The flex.1 and flexdoc.1 manual pages have been merged.  There is now just one document, flex.1, which includes an overview at the beginning to help you find the section you need.

- Documentation now clarifies that start conditions persist across switches to new input files or different input buffers.  If you want to e.g., return to INITIAL, you must explicitly do so.

- The "Performance Considerations" section of the manual has been updated.
- Documented the "yy_act" variable, which when YY_USER_ACTION is invoked holds the number of the matched rule, and added an example of using yy_act to profile how often each rule is matched.

- Added YY_NUM_RULES, a definition that gives the total number of rules in the file, including the default rule (even if you use -s).

- Documentation now clarifies that you can pass a nil FILE* pointer to yy_create_buffer() or yyrestart() if you've arranged for YY_INPUT to not need yyin.

- Documentation now clarifies that YY_BUFFER_STATE is a pointer to an opaque "struct yy_buffer_state".

- Documentation now stresses that you gain the benefits of removing backing-up states only if you remove *all* of them.

- Documentation now points out that traditional lex allows you to put the action on a separate line from the rule pattern if the pattern has trailing whitespace (ugh!), but flex doesn't support this.

- A broken example in documentation of the difference between inclusive and exclusive start conditions is now fixed.

- Usage (-h) report now goes to stdout.

- Version (-V) info now goes to stdout.

- More #ifdef chud has been added to the parser in attempt to deal with bison's use of alloca().

- "make clean" no longer deletes emacs backup files (*~).

- Some memory leaks have been fixed.

- A bug was fixed in which dynamically-expanded buffers were reallocated a couple of bytes too small.

- A bug was fixed which could cause flex to read and write beyond the end of the input buffer.

- -S will not be going away.

Changes between release 2.4.7 (03Aug94) and release 2.4.6:

- Fixed serious bug in reading multiple files.

- Fixed bug in scanning NUL's.

- Fixed bug in input() returning 8-bit characters.

- Fixed bug in matching text with embedded NUL's when using %array or lex compatibility.

- Fixed multiple invocations of YY_USER_ACTION when using '|' continuation action.

- Minor prototyping fixes.
Changes between release 2.4.4 (07Dec93) and release 2.4.3:

- Fixed two serious bugs in scanning 8-bit characters.

- Fixed bug in YY_USER_ACTION that caused it to be executed inappropriately (on the scanner's own internal actions).

Changes between release 2.4.2 (01Dec93) and release 2.4.1:

- Fixed bug in libfl.a referring to non-existent "flexfatal" function.

- Modified to produce both compress'd and gzip'd tar files for distributions (you probably don't care about this change!).

Changes between release 2.4.1 (30Nov93) and release 2.3.8:

- The new '-+' flag instructs flex to generate a C++ scanner class (thanks to Kent Williams).  flex writes an implementation of the class defined in FlexLexer.h to lex.yy.cc.  You may include multiple scanner classes in your program using the -P flag.  Note that the scanner class also provides a mechanism for creating reentrant scanners.  The scanner class uses C++ streams for I/O instead of FILE*'s (thanks to Tom Epperly).  If the flex executable's name ends in '+' then the '-+' flag is automatically on, so creating a symlink or copy of "flex" to "flex++" results in a version of flex that can be used exclusively for C++ scanners.  Note that without the '-+' flag, flex-generated scanners can still be compiled using C++ compilers, though they use FILE*'s for I/O instead of streams.  See the "GENERATING C++ SCANNERS" section of flexdoc for details.

- The new '-l' flag turns on maximum AT&T lex compatibility.  In particular, -l includes support for "yylineno" and makes yytext be an array instead of a pointer.  It does not, however, do away with all incompatibilities.  See the "INCOMPATIBILITIES WITH LEX AND POSIX" section of flexdoc for details.

- The new '-P' option specifies a prefix to use other than "yy" for the scanner's globally-visible variables, and for the "lex.yy.c" filename.  Using -P you can link together multiple flex scanners in the same executable.
- The distribution includes a "texinfo" version of flexdoc.1, contributed by Roland Pesch (thanks also to Marq Kole, who contributed another version).  It has not been brought up to date, but reflects version 2.3.  See MISC/flex.texinfo.  The flex distribution will soon include G.T. Nicol's flex manual; he is presently bringing it up-to-date for version 2.4.

- yywrap() is now a function, and you now *must* link flex scanners with libfl.a.

- Site-configuration is now done via an autoconf-generated "configure" script contributed by Francois Pinard.

- Scanners now use fread() (or getc(), if interactive) and not read() for input.  A new "table compression" option, -Cr, overrides this change and causes the scanner to use read() (because read() is a bit faster than fread()).  -f and -F are now equivalent to -Cfr and -CFr; i.e., they imply the -Cr option.

- In the blessed name of POSIX compliance, flex supports "%array" and "%pointer" directives in the definitions (first) section of the scanner specification.  The former specifies that yytext should be an array (of size YYLMAX), the latter, that it should be a pointer.  The array version of yytext is universally slower than the pointer version, but has the advantage that its contents remain unmodified across calls to input() and unput() (the pointer version of yytext is, still, trashed by such calls).  "%array" cannot be used with the '-+' C++ scanner class option.

- The new '-Ca' option directs flex to trade off memory for natural alignment when generating a scanner's tables.  In particular, table entries that would otherwise be "short" become "long".

- The new '-h' option produces a summary of the flex flags.

- The new '-V' option reports the flex version number and exits.

- The new scanner macro YY_START returns an integer value corresponding to the current start condition.  You can return to that start condition by passing the value to a subsequent "BEGIN" action.
  You also can implement "start condition stacks" by storing the values in an integer stack.

- You can now redefine macros such as YY_INPUT by just #define'ing them to some other value in the first section of the flex input; no need to first #undef them.

- flex now generates warnings for rules that can't be matched.  These warnings can be turned off using the new '-w' flag.  If your scanner uses REJECT then you will not get these warnings.

- If you specify the '-s' flag but the default rule can be matched, flex now generates a warning.

- "yyleng" is now a global, and may be modified by the user (though doing so and then using yymore() will yield weird results).

- Name definitions in the first section of a scanner specification can now include a leading '^' or trailing '$' operator.  In this case, the definition is *not* pushed back inside of parentheses.

- Scanners with compressed tables are now "interactive" (-I option) by default.  You can suppress this attribute (which makes them run slightly slower) using the new '-B' flag.

- Flex now generates 8-bit scanners by default, unless you use the -Cf or -CF compression options (-Cfe and -CFe result in 8-bit scanners).  You can force it to generate a 7-bit scanner using the new '-7' flag.  You can build flex to generate 8-bit scanners for -Cf and -CF, too, by adding -DDEFAULT_CSIZE=256 to CFLAGS in the Makefile.

- You no longer need to call the scanner routine yyrestart() to inform the scanner that you have switched to a new file after having seen an EOF on the current input file.  Instead, just point yyin at the new file and continue scanning.

- You no longer need to invoke YY_NEW_FILE in an <<EOF>> action to indicate you wish to continue scanning.  Simply point yyin at a new file.

- A leading '#' no longer introduces a comment in a flex input.

- flex no longer considers formfeed ('\f') a whitespace character.

- %t, I'm happy to report, has been nuked.
- The '-p' option may be given twice ('-pp') to instruct flex to report minor performance problems as well as major ones.

- The '-v' verbose output no longer includes start/finish time information.

- Newlines in flex inputs can optionally include leading or trailing carriage-returns ('\r'), in support of several PC/Mac run-time libraries that automatically include these.

- A start condition of the form "<*>" makes the following rule active in every start condition, whether exclusive or inclusive.

- The following items have been corrected in the flex documentation:

  - '-C' table compression options *are* cumulative.

  - You may modify yytext but not lengthen it by appending characters to the end.  Modifying its final character will affect '^' anchoring for the next rule matched if the character is changed to or from a newline.

  - The term "backtracking" has been renamed "backing up", since it is a one-time repositioning and not a repeated search.  What used to be the "lex.backtrack" file is now "lex.backup".

  - Unindented "/* ... */" comments are allowed in the first flex input section, but not in the second.

  - yyless() can only be used in the flex input source, not externally.

  - You can use "yyrestart(yyin)" to throw away the current contents of the input buffer.

  - To write high-speed scanners, attempt to match as much text as possible with each rule.  See MISC/fastwc/README for more information.

  - Using the beginning-of-line operator ('^') is fairly cheap.  Using unput() is expensive.  Using yyless() is cheap.

  - An example of scanning strings with embedded escape sequences has been added.

  - The example of backing-up in flexdoc was erroneous; it has been corrected.

- A flex scanner's internal buffer now dynamically grows if needed to match large tokens.  Note that growing the buffer presently requires rescanning the (large) token, so consuming a lot of text this way is a slow process.
  Also note that presently the buffer does *not* grow if you unput() more text than can fit into the buffer.

- The MISC/ directory has been reorganized; see MISC/README for details.

- yyless() can now be used in the third (user action) section of a scanner specification, thanks to Ceriel Jacobs.  yyless() remains a macro and cannot be used outside of the scanner source.

- The skeleton file is no longer opened at run-time, but instead compiled into a large string array (thanks to John Gilmore and friends at Cygnus).  You can still use the -S flag to point flex at a different skeleton file.

- flex no longer uses a temporary file to store the scanner's actions.

- A number of changes have been made to decrease porting headaches.  In particular, flex no longer uses memset() or ctime(), and provides a single simple mechanism for dealing with C compilers that still define malloc() as returning char* instead of void*.

- Flex now detects if the scanner specification requires the -8 flag but the flag was not given or on by default.

- A number of table-expansion fencepost bugs have been fixed, making flex more robust for generating large scanners.

- flex more consistently identifies the location of errors in its input.

- YY_USER_ACTION is now invoked only for "real" actions, not for internal actions used by the scanner for things like filling the buffer or handling EOF.

- The rule "[^]]" now matches any character other than a ']'; formerly it matched any character at all followed by a ']'.  This change was made for compatibility with AT&T lex.

- A large number of miscellaneous bugs have been found and fixed thanks to Gerhard Wilhelms.

- The source code has been heavily reformatted, making patches relative to previous flex releases no longer accurate.

Changes between 2.3 Patch #8 (21Feb93) and 2.3 Patch #7:

- Fixed bugs in dynamic memory allocation leading to grievous fencepost problems when generating large scanners.
- Fixed bug causing infinite loops on character classes with 8-bit characters in them.

- Fixed bug in matching repetitions with a lower bound of 0.

- Fixed bug in scanning NUL characters using an "interactive" scanner.

- Fixed bug in using yymore() at the end of a file.

- Fixed bug in misrecognizing rules with variable trailing context.

- Fixed bug compiling flex on Suns using gcc 2.

- Fixed bug in not recognizing that input files with the character ASCII 128 in them require the -8 flag.

- Fixed bug that could cause an infinite loop writing out error messages.

- Fixed bug in not recognizing old-style lex % declarations if followed by a tab instead of a space.

- Fixed potential crash when flex terminated early (usually due to a bad flag) and the -v flag had been given.

- Added some missing declarations of void functions.

- Changed to only use '\a' for __STDC__ compilers.

- Updated mailing addresses.

Changes between 2.3 Patch #7 (28Mar91) and 2.3 Patch #6:

- Fixed out-of-bounds array access that caused bad tables to be produced on machines where the bad reference happened to yield a 1.  This caused problems installing or running flex on some Suns, in particular.

Changes between 2.3 Patch #6 (29Aug90) and 2.3 Patch #5:

- Fixed a serious bug in yymore() which basically made it completely broken.  Thanks goes to Jean Christophe of the Nethack development team for finding the problem and passing along the fix.

Changes between 2.3 Patch #5 (16Aug90) and 2.3 Patch #4:

- An up-to-date version of initscan.c so "make test" will work after applying the previous patches.

Changes between 2.3 Patch #4 (14Aug90) and 2.3 Patch #3:

- Fixed bug in hexadecimal escapes which allowed only digits, not letters, in escapes.

- Fixed bug in previous "Changes" file!

Changes between 2.3 Patch #3 (03Aug90) and 2.3 Patch #2:

- Correction to patch #2 for gcc compilation; thanks goes to Paul Eggert for catching this.
Changes between 2.3 Patch #2 (02Aug90) and original 2.3 release:

- Fixed (hopefully) headaches involving declaring malloc() and free() for gcc, which defines __STDC__ but (often) doesn't come with the standard include files such as <stdlib.h>.  Reordered #ifdef maze in the scanner skeleton in the hope of getting the declarations right for cfront and g++, too.

- Note that this patch supersedes patch #1 for release 2.3, which was never announced but was available briefly for anonymous ftp.

Changes between 2.3 (full) release of 28Jun90 and 2.2 (alpha) release:

User-visible:

- A lone <<EOF>> rule (that is, one which is not qualified with a list of start conditions) now specifies the EOF action for *all* start conditions which haven't already had <<EOF>> actions given.  To specify an end-of-file action for just the initial state, use <INITIAL><<EOF>>.

- -d debug output is now contingent on the global yy_flex_debug being set to a non-zero value, which it is by default.

- A new macro, YY_USER_INIT, is provided for the user to specify initialization action to be taken on the first call to the scanner.  This action is done before the scanner does its own initialization.

- yy_new_buffer() has been added as an alias for yy_create_buffer().

- Comments beginning with '#' and extending to the end of the line now work, but have been deprecated (in anticipation of making flex recognize #line directives).

- The funky restrictions on when semi-colons could follow the YY_NEW_FILE and yyless macros have been removed.  They now behave identically to functions.

- A bug in the sample redefinition of YY_INPUT in the documentation has been corrected.

- A bug in the sample simple tokener in the documentation has been corrected.

- The documentation on the incompatibilities between flex and lex has been reordered so that the discussion of yylineno and input() come first, as it's anticipated that these will be the most common source of headaches.
Things which didn't used to be documented but now are:

- flex interprets "^foo|bar" differently from lex.  flex interprets it as "match either a 'foo' or a 'bar', providing it comes at the beginning of a line", whereas lex interprets it as "match either a 'foo' at the beginning of a line, or a 'bar' anywhere".

- flex initializes the global "yyin" on the first call to the scanner, while lex initializes it at compile-time.

- yy_switch_to_buffer() can be used in the yywrap() macro/routine.

- flex scanners do not use stdio for their input, and hence when writing an interactive scanner one must explicitly call fflush() after writing out a prompt.

- flex scanners can be made reentrant (after a fashion) by using "yyrestart( yyin );".  This is useful for interactive scanners which have interrupt handlers that long-jump out of the scanner.

- a defense of why yylineno is not supported is included, along with a suggestion on how to convert scanners which rely on it.

Other changes:

- Prototypes and proper declarations of void routines have been added to the flex source code, courtesy of Kevin B. Kenny.

- Routines dealing with memory allocation now use void* pointers instead of char* - see Makefile for porting implications.

- Error-checking is now done when flex closes a file.

- Various lint tweaks were added to reduce the number of gripes.

- Makefile has been further parameterized to aid in porting.

- Support for SCO Unix added.

- Flex now sports the latest & greatest UC copyright notice (which is only slightly different from the previous one).

- A note has been added to flexdoc.1 mentioning work in progress on modifying flex to generate straight C code rather than a table-driven automaton, with an email address of whom to contact if you are working along similar lines.
Changes between 2.2 Patch #3 (30Mar90) and 2.2 Patch #2:

- fixed bug which caused -I scanners to bomb

Changes between 2.2 Patch #2 (27Mar90) and 2.2 Patch #1:

- fixed bug writing past end of input buffer in yyunput()

- fixed bug detecting NUL's at the end of a buffer

Changes between 2.2 Patch #1 (23Mar90) and 2.2 (alpha) release:

- Makefile fixes: definition of MAKE variable for systems which don't have it; installation of flexdoc.1 along with flex.1; fixed two bugs which could cause "bigtest" to fail.

- flex.skel fix for compiling with g++.

- README and flexdoc.1 no longer list an out-of-date BITNET address for contacting me.

- minor typos and formatting changes to flex.1 and flexdoc.1.

Changes between 2.2 (alpha) release of March '90 and previous release:

User-visible:

- Full user documentation now available.

- Support for 8-bit scanners.

- Scanners now accept NUL's.

- A facility has been added for dealing with multiple input buffers.

- Two manual entries now.  One which fully describes flex (rather than just its differences from lex), and the other for quick(er) reference.

- A number of changes to bring flex closer into compliance with the latest POSIX lex draft:

    %t support
    flex now accepts multiple input files and concatenates them together to form its input
    previous -c (compress) flag renamed -C
    do-nothing -c and -n flags added
    any indented code or code within %{}'s in section 2 is now copied to the output

- yyleng is now a bona fide global integer.

- -d debug information now gives the line number of the matched rule instead of which number rule it was from the beginning of the file.

- -v output now includes a summary of the flags used to generate the scanner.

- unput() and yyrestart() are now globally callable.

- yyrestart() no longer closes the previous value of yyin.

- C++ support; generated scanners can be compiled with C++ compiler.

- Primitive -lfl library added, containing default main() which calls yylex().
  A number of routines currently living in the scanner skeleton will probably migrate to here in the future (in particular, yywrap() will probably cease to be a macro and instead be a function in the -lfl library).

- Hexadecimal (\x) escape sequences added.

- Support for MS-DOS, VMS, and Turbo-C integrated.

- The %used/%unused operators have been deprecated.  They may go away soon.

Other changes:

- Makefile enhanced for easier testing and installation.

- The parser has been tweaked to detect some erroneous constructions which previously were missed.

- Scanner input buffer overflow is now detected.

- Bugs with missing "const" declarations fixed.

- Out-of-date Minix/Atari patches provided.

- Scanners no longer require printf() unless FLEX_DEBUG is being used.

- A subtle input() bug has been fixed.

- Line numbers for "continued action" rules (those following the special '|' action) are now correct.

- unput() bug fixed; had been causing problems porting flex to VMS.

- yymore() handling rewritten to fix bug with interaction between yymore() and trailing context.

- EOF in actions now generates an error message.

- Bug involving -CFe and generating equivalence classes fixed.

- Bug which made -CF be treated as -Cf fixed.

- Support for SysV tmpnam() added.

- Unused #define's for scanner no longer generated.

- Error messages which are associated with a particular input line are now all identified with their input line in standard format.

- % directives which are valid to lex but not to flex are now ignored instead of generating warnings.

- -DSYS_V flag can now also be specified -DUSG for System V compilation.

Changes between 2.1 beta-test release of June '89 and previous release:

User-visible:

- -p flag generates a performance report to stderr.  The report consists of comments regarding features of the scanner rules which result in slower scanners.

- -b flag generates backtracking information to lex.backtrack.
  This is a list of scanner states which require backtracking and the characters on which they do so.  By adding rules one can remove backtracking states.  If all backtracking states are eliminated, the generated scanner will run faster.  Backtracking is not yet documented in the manual entry.

- Variable trailing context now works, i.e., one can have rules like "(foo)*/[ \t]*bletch".  Some trailing context patterns still cannot be properly matched and generate error messages.  These are patterns where the ending of the first part of the rule matches the beginning of the second part, such as "zx*/xy*", where the 'x*' matches the 'x' at the beginning of the trailing context.  Lex won't get these patterns right either.

- Faster scanners.

- End-of-file rules.  The special rule "<<EOF>>" indicates actions which are to be taken when an end-of-file is encountered and yywrap() returns non-zero (i.e., indicates no further files to process).  See manual entry for example.

- The -r (reject used) flag is gone.  flex now scans the input for occurrences of the string "REJECT" to determine if the action is needed.  It tries to be intelligent about this but can be fooled.  One can force the presence or absence of REJECT by adding a line in the first section of the form "%used REJECT" or "%unused REJECT".

- yymore() has been implemented.  Similarly to REJECT, flex detects the use of yymore(), which can be overridden using "%used" or "%unused".

- Patterns like "x{0,3}" now work (i.e., with lower-limit == 0).

- Removed '\^x' for ctrl-x misfeature.

- Added '\a' and '\v' escape sequences.

- \<digits> now works for octal escape sequences; previously \0<digits> was required.

- Better error reporting; line numbers are associated with rules.

- yyleng is a macro; it cannot be accessed outside of the scanner source file.

- yytext and yyleng should not be modified within a flex action.

- Generated scanners #define the name FLEX_SCANNER.
- Rules are internally separated by YY_BREAK in lex.yy.c rather than break, to allow redefinition.

- The macro YY_USER_ACTION can be redefined to provide an action which is always executed prior to the matched rule's action.

- yyrestart() is a new action which can be used to restart the scanner after it has seen an end-of-file (a "real" one, that is, one for which yywrap() returned non-zero).  It takes a FILE* argument indicating a new file to scan and sets things up so that a subsequent call to yylex() will start scanning that file.

- Internal scanner names all preceded by "yy_".

- lex.yy.c is deleted if errors are encountered during processing.

- Comments may be put in the first section of the input by preceding them with '#'.

Other changes:

- Some portability-related bugs fixed, in particular for machines with unsigned characters or sizeof( int* ) != sizeof( int ).  Also, tweaks for VMS and Microsoft C (MS-DOS), and identifiers all trimmed to be 31 or fewer characters.  Shortened file names for dinosaur OS's.  Checks for allocating > 64K memory on 16 bit'ers.  Amiga tweaks.  Compiles using gcc on a Sun-3.

- Compressed and fast scanner skeletons merged.

- Skeleton header files done away with.

- Generated scanner uses prototypes and "const" for __STDC__.

- -DSV flag is now -DSYS_V for System V compilation.

- Removed all references to FTL language.

- Software now covered by BSD Copyright.

- flex will replace lex in subsequent BSD releases.
By using a single-instance string cache, you can significantly reduce the memory footprint of your application. We discovered the value of this while doing performance and memory tuning of Gibraltar, our commercial application monitoring product. The processor overhead is minimal, and the memory savings grow as your application manages more data, which can significantly improve your ability to perform operations in memory. One simple static class is all it takes to swap each string for a single common copy, ensuring that each distinct value is in RAM only once.
Using one of the sample applications that we ship with Gibraltar, we created a test application that lets us enable and disable the string cache, so we could validate its performance both in memory savings and in processor usage. We found that for a processor penalty of about 5% (which did not translate into any runtime performance change in our case because of the way we use multithreading), we were able to reduce the memory footprint of the Gibraltar Agent, particularly in certain extreme cases where clients were stretching the capabilities of the Agent. Here's a chart that shows the observable difference in memory usage with the StringReference class enabled and disabled:
This was done on a system with no memory pressure; when we examined the internal details, the difference was even more stark: the number of strings in memory dropped by 90%, consuming about 6MB of memory for the test instead of around 70MB. In the test above, the agent stored over 2.8 million log messages and metrics during the interval profiled.
You can duplicate these results for yourself: attached is the sample application we used to run these tests. It has a checkbox that enables and disables the single-instance string cache so you can watch the effect on RAM. Just compile it and crank the log message generation rate up to maximum to quickly see the difference in memory footprint. Here's what the sample application looks like as it runs:
Because we wanted to be able to show exactly the tests we ran, the sample uses Reflection to reach into our agent assembly and disable the cache. It's an internal object because we don't anticipate anyone wanting to disable it in production use and we want to keep our API as clean as possible. You can use Reflector if you want to see that it is exactly the same source code as the StringReference class we've attached.
Virtually every piece of data your application works with ends up as a string - to be serialized to a display, log, or file. This is so common that ToString is an intrinsic feature of every object. As your application works with more data, you'll discover that the most common objects, and the ones tying up the most memory, are strings. Because your application is working within a common problem domain, you'll tend to have substantial repetition of values. Each time a value is repeated, it uses up the same amount of memory. Additionally, scattering string objects with different lifetimes across the heap forces the Garbage Collector to relocate objects more often. While it's very difficult to prevent unique strings from being created, if they can be immediately exchanged for a single common reference copy, it allows them to be garbage collected quickly and without fragmenting memory.
Fortunately, .NET Strings are immutable. This means that once they're created, they can never be changed: any attempt to change them results in a new String with the changes applied. This is one of the reasons that you can create real performance problems in your application by doing innocuous things like composing a string through a series of appends. While this immutability can cause performance problems in environments where you want to do a lot of string manipulation, it creates a golden opportunity for memory optimization: since a String can't ever be changed, any two String objects that have the same value are interchangeable.
Indeed, .NET does have a capability called Interning strings. With this, it's easy to create a string and then intern it, swapping it for an existing copy (if there is one) or putting it in the Interned string store for future reference. There's one big problem: interning lasts for the duration of the AppDomain. That means any string that you store will not be removed from memory until the AppDomain exits. This is generally fine for compile-time constants (which are interned automatically), but for most applications, this would have the opposite effect we're looking for - no string would ever be released, and our memory consumption would continually increase. What we want is to keep strings in memory only as long as they are in use by an active object.
What we want is a way to have a dictionary of strings that are currently in memory so we can get the single reference copy of any string already there. But we need the string to be garbage collected if no one has a reference to it. That means the dictionary of strings itself can't have a reference to the string, but it needs to be able to return a reference when requested. So, we need something that isn't a full .NET reference - something closer to an old-fashioned pointer that we can walk, even though the object may not be available anymore because it has been garbage collected.
Enter the WeakReference. A WeakReference is an object with a property that will return the referenced object (if it's still available), or null if the object has been collected. Outstanding, that's half the problem solved: we can keep a list of strings we've been asked to manage without that list itself keeping them in memory.
The second half of the problem is that we can't just use a Dictionary with the string for a key: if we did, it'd keep a copy of the string itself so it could perform lookups, and that copy would be a strong reference that would prevent the String from ever being released. Therefore, to make this work, we need an efficient way of doing a lookup that doesn't in any way create a strong reference to the string. We did this by implementing a hash lookup to a linked list using the String object's built-in GetHashCode method. If there are multiple strings with the same hash code (which will happen if you have enough strings), it does a linear search to find a match. This allows complete accuracy without requiring any strong references.
All of the necessary code to implement our single instance string store is contained in the static StringReference class. As a static class, it can be accessed easily anywhere in your code with a straightforward syntax.
There are two ways that strings can be exchanged for a central, common copy:
SwapReference: Takes the original string as a reference and exchanges it for an existing copy within the String store, if found, or returns the original if it's a new string. This is most efficient when there is a key moment in your process where you want to fix strings to their common representation, as in this example:
private string m_TypeName;
private string m_Message;

public string TypeName { get { return m_TypeName; } set { m_TypeName = value; } }
public string Message { get { return m_Message; } set { m_Message = value; } }

public void FixData()
{
    // Swap all strings for a common string reference
    StringReference.SwapReference(ref m_TypeName);
    StringReference.SwapReference(ref m_Message);
}
GetReference: Takes a string and supplies the correct single instance string as its return value. This can create simple code in property accessors and other situations, as in the following example:
private string m_TypeName;
private string m_Message;

public string TypeName
{
    get { return m_TypeName; }
    set { m_TypeName = StringReference.GetReference(value); }
}

public string Message
{
    get { return m_Message; }
    set { m_Message = StringReference.GetReference(value); }
}
The StringReference class is fully thread-safe internally, so no external locking is necessary.
There are two additional features of the StringReference class that can come in handy: a Disabled property that lets the cache be switched off and on seamlessly, and a Pack method that can speed up garbage collection in very-large-string scenarios.
The main use case for the Disabled property is for testing performance and compatibility. You can incorporate the StringReference class in your code and then use this property to globally disable it without changing any other code. If you suspect that the class is causing a problem, or you just want to see what it's doing for you, then use this property to turn the class on and off. When disabled, it simply returns the original string every time, and the Pack feature is disabled.
As the StringReference class is used, it will end up using memory on its own for the bookkeeping necessary to track the weak references. This isn't much compared to the strings themselves, but in scenarios where strings are relatively short-lived and there are a very large number of unique strings, it can add up. To free up this memory, you can periodically call the Pack method, which finds and removes all weak references whose targets have been garbage collected and therefore no longer need tracking. In most applications, there are key moments where a lot of strings are freed up - such as when a large form is closed or a business process completes. Relatively soon after these actions, the GC will tend to collect those strings, and their entries can then be released from the StringReference class.
For a processor impact of less than five percent, you can significantly reduce the memory footprint of most applications. This can be a significant consideration with 32-bit processes, which are limited to about 1.5GB of usable data memory. And because the more strings there are, the higher the probability that the next one is already in the list, the amount of memory reduction increases with the amount of memory used.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
// C#.
using System.Collections;
using System.Collections.Generic;

public class HybridLinkedList<T> : IEnumerable<T>
{
    private int count;
    private T singleInstance;
    private LinkedList<T> multipleInstances;

    public HybridLinkedList()
    {
        this.count = 0;
        this.singleInstance = default(T);
        this.multipleInstances = null;
    }

    //TODO: Add another constructor to initialize with 1 element.

    public IEnumerator<T> GetEnumerator()
    {
        if (count == 0)
        {
            yield break;
        }
        else if (count == 1)
        {
            yield return this.singleInstance;
        }
        else
        {
            // An iterator can't mix "yield return" with a plain "return expr",
            // so enumerate the inner list and yield each item.
            foreach (T item in this.multipleInstances)
            {
                yield return item;
            }
        }
    }

    // IEnumerable<T> also requires the non-generic enumerator.
    IEnumerator IEnumerable.GetEnumerator()
    {
        return this.GetEnumerator();
    }

    public void Add(T val)
    {
        if (count == 0)
        {
            this.singleInstance = val;
        }
        else if (count == 1)
        {
            // Promote from the single-instance slot to a real linked list.
            this.multipleInstances = new LinkedList<T>();
            this.multipleInstances.AddLast(this.singleInstance);
            this.singleInstance = default(T);
            this.multipleInstances.AddLast(val);
        }
        else
        {
            this.multipleInstances.AddLast(val);
        }
        this.count++;
    }
}
TypeScript has been gaining a lot of popularity amongst JavaScript developers in the last few years. And it’s no wonder, as TypeScript code tends to be less error-prone, more readable and easier to maintain.
So we’ve partnered up with the eminent online instructor Dylan C. Israel and created a free TypeScript course on Scrimba. The course contains 22 lessons and is for people who already know JavaScript but who want a quick intro to TypeScript.
Take the course for free here.
Now let’s have a look at each of the lectures in the course.
Part #1: Introduction
In the introductory screencast, Dylan gives an overview of why you should learn TypeScript, and how the course is laid out. He also tells you a little bit about himself, so that you are familiar with him before jumping into the coding stuff.
Part #2: Variable types
Compile time type-checking is one of the most important features of TypeScript. It lets us catch errors related to the types of data at compile time. This lesson explains the data types available in TypeScript.
let firstName: string;
let age: number;
let isMarried: boolean;
You can see how we have types attached to all the variables. If we try to put a string value in place of a number type variable, TypeScript will catch it at compile time.
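That compile-time catch can be sketched minimally (the variable name and values here are ours, not the course's):

```typescript
// A number-typed variable rejects string values at compile time.
let age: number;
// age = 'forty'; // compile-time error: Type 'string' is not assignable to type 'number'
age = 40;
```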
Part #3: Multiple types
In TypeScript, we keep a single type for a variable, but that is not always possible. So, instead, TypeScript provides us with the any type. This means we can assign multiple types of values to one variable.
let myVariable: any = 'Hello World';
myVariable = 10;
myVariable = false;
Above, we’ve declared
myVariable with
any type. First we assigned it a string, next a number, and finally a boolean. This is possible because of the
any type.
Part #4: Sub types
Sub types are used when we are unaware of the value of the variable. TypeScript provides us with two sub types: null and undefined. This lesson explains when we should use either of those.
let myVariable: number = undefined;
The variable myVariable has been assigned the value of undefined because, at this point in time, we don't know what it is going to be. We can also use null here.
Part #5: Implicit vs explicit typing
Part 5 talks about the difference between implicit and explicit typing. In the examples above, we saw explicit types where we set the type of the variable. Implicit typing, on other hand, is performed by the compiler without us stating the variable type.
let myVariable = 'Hello World';
In this example, we have not assigned any type to the variable. We can check the type of this variable using the typeof operator. This will show that myVariable is of the string type because the compiler took care of the typing.
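A minimal sketch of that inference check (the variable is ours):

```typescript
// No annotation: the compiler infers the string type from the initializer.
let myVariable = 'Hello World';
const kind = typeof myVariable; // evaluates to 'string' at runtime
```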
Part #6: Checking types
In this lesson we’ll learn how we can check the type of a variable, and catch any error or perform any operation. It uses an example in which we test if our variable is of type
Bear (where
Bear is a
class). If we want to check the type of our variable, we can use the
instanceof method.
import { Bear } from './somefile';

let bear = new Bear(3);

if (bear instanceof Bear) {
    // perform some operation
}
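A self-contained version of the same check (the Bear class here is a hypothetical stand-in defined inline so the sketch runs on its own):

```typescript
// Stand-in class so the instanceof check is runnable without imports.
class Bear {
    constructor(public age: number) {}
}

const maybeBear: unknown = new Bear(3);
let bearAge = 0;

if (maybeBear instanceof Bear) {
    // TypeScript narrows maybeBear to Bear inside this block, so .age is safe.
    bearAge = maybeBear.age;
}
```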
Part #7: Type assertions
Type assertion means we can cast a variable to any particular type, and we tell TypeScript to handle that variable using that type. Let’s try to understand it with an example:
let variable1: any = 'Hello World';

if ((variable1 as string).length) {
    // perform some operation
}
variable1 has the type of any. But, if we want to check its length, it will produce an error until we tell TypeScript to handle it as a string. This lesson explains more details about this concept.
Part #8: Arrays
This part of the course explains TypeScript arrays. In JavaScript, when we assign values to an array, we can put in different types of items. But, with TypeScript, we can declare an array with types as well.
let array1: number[] = [1, 2, 3, 4, 5];
In the above example, we declared an array of numbers by assigning it the number type. Now TypeScript will make sure the array contains only numbers.
Part #9: Tuples
Sometimes we need to store multiple types of values in one collection. Arrays will not serve in this case. TypeScript gives us the data type of tuples. These are used to store values of multiple types.
let tuple_name: [number, string] = [10, 'Hello World'];
This example shows that we can have data items of number and string types in one collection. This lesson explains the concept of tuples in more detail.
Part #10: Enums
In this lesson, we will learn about enums in TypeScript. Enums are used to define a set of named constants which can be used to document intent or to create a set of different cases.
enum Direction {
    Up = "UP",
    Down = "DOWN",
    Left = "LEFT",
    Right = "RIGHT"
}
Here is a basic example of how enums are declared, and how different properties are created inside them. The rest of the details are explained in this section of the course.
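A minimal usage sketch (the enum is repeated so the example stands alone; the move variable is ours):

```typescript
enum Direction {
    Up = "UP",
    Down = "DOWN",
    Left = "LEFT",
    Right = "RIGHT"
}

// The named constant carries its string value at runtime.
const move: Direction = Direction.Up;
```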
Part #11: Object
In JavaScript, objects have a pretty major role in how the language has been defined and has evolved. This lesson talks about objects in TypeScript — how to declare an object, and which kinds of values can fit inside the object type.
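One way to declare an object with a typed shape (the property names and values below are ours, not the lesson's):

```typescript
// The annotation lists each property name and the type its value must have.
let person: { name: string; age: number } = { name: 'Dylan', age: 30 };
// person = { name: 'Dylan' }; // compile-time error: property 'age' is missing
```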
Part #12: Parameters
Using TypeScript, we can also assign types to the parameters of a function. In this section of the course, Dylan explains how we can add types to parameters. This is a very useful way to handle errors regarding data type in a function.
const multiply = (num1: number, num2: number) => {
    return num1 * num2;
}
We have declared a function multiply which takes two parameters and returns the value from multiplying them. We added a type of number to both the parameters so that no other value except a number can be passed to them.
Part #13: Return types
Like parameters, we can also add type-checking to the return value of a function. This way we can make sure that the return value from a function has an expected type. This part of the course explains the concept in detail.
const multiply = (num1: number, num2: number): number => {
    return num1 * num2;
}
We have added a return type of number to the function. Now, if we return anything except a number, it will show us an error.
Part #14: Custom types
In TypeScript, we can create a custom type using the type keyword. We can then type-check objects on the basis of that type.
type person = { firstName: string };

const example3: person = { firstName: 'Dollan' };
This feature is almost deprecated in TypeScript, so you should rather use interface or class for this purpose. However, it's important that you get to know it, as you might come across custom types when you start to dive into TS code.
Part #15: Interfaces
In TypeScript, the core focus is on type-checking which enforces the use of a particular type. Interfaces are a way of naming these types. It’s basically a group of related methods and properties that describe an object. This part of the course explains how to create and use interfaces.
interface Person {
    firstName: string,
    lastName: string,
    age: number
}
In the example above, we have an interface Person which has some typed properties. Note that we don't initialize data in interfaces, but rather define the types that the properties will have.
Part #16: Barrels
A barrel is a way to roll up exports from multiple modules into a single module. A barrel is, itself, a module which exports multiple modules from one file. This means that a user has to import just one module instead of all the modules separately.
// Without barrel
import { Foo } from '../demo/foo';
import { Bar } from '../demo/bar';
import { Baz } from '../demo/baz';
Instead of using these multiple lines separately to import these modules, we can create a barrel. The barrel would export all these modules and we import only that barrel.
// demo/barrel.ts
export * from './foo'; // re-export all of its exports
export * from './bar'; // re-export all of its exports
export * from './baz'; // re-export all of its exports
We can simply create a TypeScript file and re-export the modules from their respective files. We can then import this barrel wherever we need it.
// With barrel
import { Foo, Bar, Baz } from '../demo'; // resolves to demo/barrel.ts
Part #17: Models
When using interfaces, we often face a number of problems. For example, interfaces can't enforce anything about data coming from the server side, and they can't hold default values. To solve this issue, we use model classes. These act as an interface, and may also have default values and methods added to them.
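A minimal sketch of the idea, assuming the Person interface from the earlier lesson (the PersonModel name, defaults, and fullName method are ours):

```typescript
interface Person {
    firstName: string;
    lastName: string;
    age: number;
}

// A model class: satisfies the interface, supplies defaults, and adds a method.
class PersonModel implements Person {
    firstName = '';
    lastName = '';
    age = 0;

    constructor(data?: Partial<Person>) {
        // Copy any provided fields over the defaults.
        Object.assign(this, data);
    }

    fullName(): string {
        return `${this.firstName} ${this.lastName}`.trim();
    }
}
```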
Part #18: Intersection types
In this section, we’ll talk about intersection types. These are the ways we can use multiple types to a single entity or class. Sometimes we need to use multiple types to map one entity and, at that time, this feature comes in very handy.
import { FastFood, ItalianFood, HealthyFood } from './interfaces';

let food1: FastFood | HealthyFood;
let food2: ItalianFood;
let food3: FastFood;
let food4: FastFood & ItalianFood;
In the example above, we have three interfaces and we are creating different objects from them. For example, food1 is going to be either FastFood or HealthyFood. Similarly, food4 is going to be FastFood as well as ItalianFood.
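A self-contained sketch of the union/intersection distinction (the interface bodies and variable names below are our stand-ins for the ones imported in the lesson):

```typescript
// Hypothetical stand-ins for the imported interfaces.
interface FastFood { fries: boolean; }
interface ItalianFood { pasta: boolean; }

// Union: the value may satisfy either type.
let lunch: FastFood | ItalianFood = { pasta: true };

// Intersection: the value must satisfy both types at once.
let dinner: FastFood & ItalianFood = { fries: true, pasta: true };
```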
Part #19: Generics
In short, generics is a way to create reusable components which can work on a variety of data types rather than a single one.
The concept of generics is actually not available in JavaScript so far, but is widely used in popular object-oriented languages such as C# or Java. In this lesson, we’ll learn how to use generics in TypeScript, and look at its key benefits.
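A minimal generic function in the spirit of the lesson (the function name and examples are ours): one implementation that works over many element types while preserving type safety.

```typescript
// T is a type parameter: the caller's element type flows through to the result.
function firstElement<T>(items: T[]): T | undefined {
    return items.length > 0 ? items[0] : undefined;
}

const firstNumber = firstElement([10, 20, 30]); // inferred as number | undefined
const firstWord = firstElement(['a', 'b']);     // inferred as string | undefined
```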
Part #20: Access modifiers
The idea of access modifiers is relatively new in the arena of JavaScript and TypeScript, but they have been available in other object-oriented languages for a long time. Access modifiers control the accessibility of the members of a class.
In TypeScript, there are three access modifiers: public, private, and protected. Every member of a class defaults to public until you declare it otherwise.
class Customer {
    customerId: number;
    public companyName: string;
    private address: string;
}
customerId is a default public member, so it's always available to the outside world. We have specifically declared companyName as public, so it will also be available outside of the class. address is marked as private, therefore it won't be accessible outside the class.
Part #21: Local setup
In this lesson, we’ll learn the steps to install and run TypeScript on local computers. Those steps generally involve installing Node and TypeScript, and then compiling “.ts” files.
Part #22: TSLint and — great job!
Yay! You’ve completed the course. In the last part of the video, Dylan will give some tips on how to take this learning further and improve the code we write today.
In this lesson, he also covers how you can use the amazing TSLint. This tool helps you write better production-level code using best practices and conventions. It comes with some basic settings that you can modify to meet your needs.
So go ahead and take this free course today!
Thanks for reading! My name is Per Borgen, I'm the co-founder of Scrimba – the easiest way to learn to code. You should check out our responsive web design bootcamp if you want to learn to build modern websites on a professional level.
tag:blogger.com,1999:blog-7265176141841694262015-07-22T11:32:27.387-05:00The Jaded ConsumerA Critique of That We Are Offered To ConsumeJaded Consumer Killers<a href="">A "Black Lives Matter" activist</a> named Sandra Bland died recently in <a href="">the jail of Waller County, Texas</a>. Sandra Bland, 28, was <a href="">about to begin a new job at Texas A&M University</a> before she was killed in custody. Pulled over ostensibly for <a href="">failure to signal a lane change</a>, she was arrested rather than cited. To defend itself from criticism of the arrest, the department released a video showing her arrest by an officer who later asserted the unarmed <a href="">woman had "assaulted" him</a>. Although <a href="">the video contains obvious signs of editing</a>, the <a href="">department denies that the video was ever edited</a>.<br /><br />Speaking with her mother about her new job, <a href="">Bland had said</a>, "My purpose is to go back to Texas and stop all social injustice in the South."<br /><br />Well, Waller County had an answer for <i>that</i>, didn't it?<br /><br / <a href="">police killing of Caroline Small</a> emphasizes the ability of police to kill unarmed civilians with impunity even if they aren't men <a href="">trying to purchase an unloaded air rifle at Walmart</a>, or for <a href="">allegedly shoplifting</a>. Maybe – at least in New York, when the officer is a rookie – there's at least an apology for killing a civilian for <a href="">deciding to take the stairs</a>. The culture of killing American civilians is <a href="">an epidemic among U.S. law enforcement</a>. Suicide by cop just doesn't <i>work</i> in places like the United Kingdom: they <i>don't kill people</i>. 
While acknowledging that the United States has a greater population (<a href="">~319 million</a>) than the United Kingdom (<a href=";_ylt=A0LEVr2Ewq9VUv8AssInnIlQ;_ylc=X1MDMTM1MTE5NTY4NwRfcgMyBGZyA3locy1tb3ppbGxhLTAwMQRncHJpZAM2Z2JId3RseFE1dWtpYklQLmFZbUdyNQRxdWVyeQN1bml0ZWQga2luZ2RvbSBwb3B1bGF0aW9uBHRfc3RtcAMxNDM3NTgyMDIw?p=united+kingdom+population&fr2=sb-top-search&hspart=mozilla&hsimp=yhs-001">~64 million</a>), <a href="">grand total of three (3) times, with zero fatalities</a>. British shootings of civilians by police is extremely controversial even in the case of a known gangster. In the U.S., the <a href="">hypermilitarized "police"</a> now occupying our cities seem virtually expected to kill. For some real perspective: <a href="">U.S. police killed more people this March than U.K. police have killed since 1900</a>. Last year, the death toll was 1,100 killed in the U.S. compared to twenty-six (26) in the U.S. This isn't some multiple based on population difference, it's a cultural problem in U.S. "police" forces.<img src="" height="1" width="1" alt=""/>Jaded Consumer <a href="">multiple extradition requests from Switzerland</a>, so FIFA retaliated with the gratuitous step of <a href="">imposing a lifelong ban</a> against his officiating in national or international soccer events.<br /><br />Priorities.<img src="" height="1" width="1" alt=""/>Jaded Consumer General Notices Putin's Thugocracy A ThreatAt least <a href="">someone's noticing</a> Putin's innocent-faced invasion of its neighbor illustrates the harm he's willing to cause when convenient. In order to conceal the scope of the wars Putin conducts in secret, he's declared <a href="">"peacetime" troop deaths a state secret</a>. Well, of course. 
People might notice they're <i>actually at war</i>.<img src="" height="1" width="1" alt=""/>Jaded Consumer Whistleblower Retaliation in MissouriThe problem that those entrusted with responsibility will fail to exercise the diligence they're paid to exercise, but will instead act in their own interest to do things that are more convenient to themselves personally, is so well-known that the management literature has developed a term of art to describe it: the agency problem. The agency problem confronts voters whose legislators enact undesirable laws at the behest of monied special interests just as it confronts citizens whose police decide to use their badges and guns for some purpose other than to protect and serve.<br /><br />Or … choose simply to protect and serve only their buddies with badges and guns. In one case, a Missouri police officer who answered honest questions about the in-custody death of a college student (accused of a misdemeanor) was effectively punished by his superiors for stepping out of line. Allowing police to be held accountable to the public isn't in the interest of the self-interested police. <a href="">Read about the case here.</a><br /><br />The problem isn't police only. The problem is the agency problem and its solutions are applicable to shareholders who want Boards who protect shareholders instead of looting the firm to enrich themselves, to protect voters who want legislators to pass laws that comport with public concepts of justice and reason, and to protect members of the public who don't want to fear violence from police who are protected from the consequence of any misconduct they choose to commit. 
The agency problem is the problem of business just as it is the problem of democracy.<br /><br />Faithful agents mean the difference between justice and oppression, fair returns and fraud losses, free elections and a mislead public.<br /><br />The agency problem matters.<img src="" height="1" width="1" alt=""/>Jaded Consumer Trial Needed: How Mr. Browder DiedNew York magazine offers an articulate look at how the criminal justice system's servants took the freedom, and ultimately the will to live, from a teen charged with a felony the government never bothered to bring to trial. The story is called "<a href="">How All New Yorkers Killed Kalief Browder</a>" and it's worth your time. If you don't live in New York, the bail statute is likely much more restrictive, increasing the probability that an accused will languish in jail for years. <br />. <br /><br />If the law and its servants can't do that, they have failed.<img src="" height="1" width="1" alt=""/>Jaded Consumer Apple's Debt-Funded Share RepurchasesSeeking Alpha just published the Jaded Consumer article "<a href="">Apple's Debt-Funded Repurchases: Terrific or Terrible?</a>"<br /><br />Stop by for the gripping conclusion :-)<img src="" height="1" width="1" alt=""/>Jaded Consumer State: Crime For Kids To Play In Own YardThe <a href="">headline</a> says it all: "11-Year-Old Boy Played in His Yard. CPS Took Him, Felony Charge for Parents."<br /><br />UPDATE: the mother, charged with a felony, <a href="">now fears for her job</a>. Good going, cop-phoning neighbors. You stick the kids in a stranger's house where they're fed nothing but cereal for days, impose a small fortune in Court-mandated therapy that prevents the family from enjoying its summer plans, and terrify the kids of their insane government while threatening their mother's job. 
A+.<img src="" height="1" width="1" alt=""/>Jaded Consumer Prosecutions: Good for Transparency in Sport<a href="">This article</a>?<br /><br /.<img src="" height="1" width="1" alt=""/>Jaded Consumer Cop's Nonsense Explanation For Police Violence Explains A LotThis video segment does a great job of showing how one retired cop explains away violence against unarmed peaceful protestors by saying that (a) there was violence someplace else, and (b) someone else shot police someplace else. Therefore, he argues, we should understand police beatings and tear-gassings at peaceful protests by unarmed people. You kind of <a href="">have to see this to believe it</a>.<br /><br />The African American guest has it right: the retired white cop can't connect any of his grievances against those involved in violence against police with the violence actually witnessed by the guest. So he makes up explanations to get the conclusion he wants, which is that all the police violence is justified.<br /><br />If the retired cop's view reflects that of active members of police forces around the country, it's no wonder there's escalating violence against civilians: cops feel tit-for-tat violence (and killings) is justified. No wonder people chant "No Justice, No Peace." And no wonder it makes thugs like the retired cop nervous. They intend violence and know they're the problem.<img src="" height="1" width="1" alt=""/>Jaded Consumer Man Killed By Cop Who Just Got Off Suspension for Killing Unarmed ManIf you've been following the Jaded Consumer coverage of America's <a href="">national epidemic of killings by police</a>, you'll be unsurprised at what <a href="">should be shocking news</a>. This time, a Walmart called the police on a woman who approached to ask what her son supposedly stole before he was murdered by the boys in blue. 
She's been stonewalled – unsurprisingly – by the department who loosed its known killer on the civilian public despite knowing his history killing an unarmed civilian.<img src="" height="1" width="1" alt=""/>Jaded Consumer: Not What We Hoped ForApparently creating a new Federal agency to delay you on the way to your flight <a href="">doesn't create genuine security in airports</a>. Surprise, surprise.<img src="" height="1" width="1" alt=""/>Jaded Consumer's Financial Self-Interest Wasn't In Infants' InterestIt's well-known that certain procedures are riskier in hospitals that do few of them. Such is the case with St. Mary's Medical Center, a Tenet Healthcare (<a href="">Ticker:THC</a>) hospital in Miami Beach, Florida. Its Board-Certified pediatric surgeon Dr. Black (who's white, incidentally) <a href="">told Mrs. Campbell he'd never lost a patient at St. Mary's</a> before he killed her daughter – the fourth to die after he attempted a complex cardiac procedure on a newborn at St. Mary's.<br /><br />So, why does a doc BS patients like this? Running the pediatric surgery program at a hospital that aspires to make big bucks on highly-compensated procedures is a sweet gig. The annual salary of a Board-Certified pediatric surgeon is bigger than most Americans' life savings, and running a program involves an especially big bunch of boodle. And the doc's got to make the hospital enough money to justify the payments. Apparently, Dr. Black didn't want to lose the sale.<br /><br />Although state regulators didn't find problems with the hospital, the chairman of the Cardiac Technical Advisory Panel for Florida's Children's Medical Services (part of Florida's Department of Health) found problems. St. Mary's extremely low case volume left it without the skills to meet the national average mortality for the complex cardiac procedures it managed to convince cardiologists to refer to the hospital. 
Consequently, the hospital's mortality rate was calculated by CNN using Freedom of Information Act requests to Florida regulators and determined to be about three times the national average. Tenet Healthcare claims that's untrue, but won't provide the true number.<br /><br />Go figure.<br /><br />***<br />UPDATE: <a href="">Ninth infant died in connection with St. Mary's pediatric cardiac surgery program</a>. <img src="" height="1" width="1" alt=""/>Jaded Consumer City of Tikrit Freed from Tyranny, Looting FollowsWhen 20 houses were set on fire, 50 shops looted, and trucks loaded with stolen goods escaped unhindered, <a href="">an Iraqi official explained</a>: "No plans were made for what to do with the city after it was liberated[.]" Now, where have we seen <i>that</i> before?<br /><br /><br /> <img src="" height="1" width="1" alt=""/>Jaded Consumer Internet ExplorerMicrosoft is killing its Internet Explorer brand. Maybe because <a href="">anyone who knew better snickered at it</a>? <a href="">Codenamed Spartan</a>, the next browser from Microsoft should suck less. How could it not? Hopefully for <strike>victims</strike> users the revolution is more than <a href="">skin deep</a>.<img src="" height="1" width="1" alt=""/>Jaded Consumer: High Growth?Seeking Alpha recently <a href="">published the Jaded Consumer</a> article "<a href="">Berkshire Hathaway: High Growth Stock?</a>" To get another view on this, look at Berkshire vs Microsoft chart in "<a href="">Apple's Future After Joining Dow: Brighter Than Microsoft's</a>", which compares Berkshire to Microsoft in the decade and a half following Microsoft's addition to the Dow. Solid >8% annualized performance at Berkshire trounced the global desktop OS leader over the period.<img src="" height="1" width="1" alt=""/>Jaded Consumer's Future After Joining Dow: Brighter Than Microsoft'sLast week, Apple Inc. (once known as "Apple Computer Inc.") replaced AT&T in the stock index called the Dow Jones Industrial Average. 
(The author says "called" because it <a href="">is not, in fact, an average</a>.) The responses to this - ranging from "<a href="">Apple is doomed</a>" through "<a href="">So what?</a>" to "<a href="">Yay, index funds now have to buy Apple!</a>" - largely overlook the fundamentals that will drive whatever future Apple will provide its investors. This article contrasts Apple's position to the future that lay before Microsoft Corp. when it joined the Dow in 1999.<br /><br /><b>Numbers</b><br /><br />From November 1, 1999 (the date MSFT joined the Dow) through last Friday, Microsoft's stock's ups and downs landed it down over 8%. Since its market cap declined even faster, it's clear the current price benefited from its share buyback program, without which the company's declining market cap would have left each share down more than three times that much:<br /><br /><br /><div class="separator" style="clear: both; text-align: center;"><a href="" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="" /></a></div><br /><br />Microsoft has paid dividends since 2003. After an annual dividend of 8¢/sh in its fiscal 2003 and one of 16¢/sh in its fiscal 2004, Microsoft paid a $3 special dividend in December of 2004 a regularly quarterly dividend thereafter. Including 31¢/sh payable March 12. 2015, <a href="">Microsoft's dividend history</a> - since the inception of dividends in 2003 - paid shareholders who held over the period a total of $9.97. Added to Microsoft's Friday close of $42.36, this puts holders for the period at $52.33, up from its split-adjusted close on November 1, 1999, of $32.87 - a total return (before taxes) of $19.46 (59.2%), or <a href="">nearly 3.1% annualized over the period</a>. 
Although this initially looks better than the ~47% return of the S&P 500 over the period, it doesn't match the <a href="">S&P 500 period return with dividends</a> (>90%, and >4% annualized). Although Microsoft's dividend has been reliable since its inception - enough to beat its stock price decline - its competitor Hewlett Packard Co. began paying dividends earlier, putting Microsoft only modestly ahead over the period compared to investors in its biggest publicly-traded OEM vendor.<br /><br />The apparent yield of Microsoft in 2005 results from its $3 special dividend in December of 2004; it never had a double-digit dividend yield. YCharts presents a total-return chart, <a href="">adjusted for dividends as they're paid</a>, showing the relative performance of Microsoft, its largest remaining US-listed OEM vendor, the S&P 500 (because it's a common benchmark), and Berkshire Hathaway Class B (BRK.B) (because it's a widows-and-orphans investment always worth comparing to anything).<br /><br />Annualizing these numbers over the decade and a half since Microsoft joined the Dow renders them fairly modest. Berkshire's total return of 248.1% <a href="">annualizes to a bit over 8%</a>, which isn't bad considering the market dislocations of 2001 and 2008.
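The annualization arithmetic above is just compound growth run in reverse. A quick sketch, assuming a roughly 15.3-year holding period (November 1, 1999 through early March 2015):

```python
def annualized(total_return, years):
    """Convert a cumulative total return (e.g. 0.592 for 59.2%)
    into a compound annual growth rate."""
    return (1.0 + total_return) ** (1.0 / years) - 1.0

years = 15.3  # Nov 1, 1999 -> early March 2015, approximately

# Microsoft, dividends included: 59.2% total return -> ~3.1%/yr
print(f"MSFT: {annualized(0.592, years):.1%}")

# Berkshire Hathaway: 248.1% total return -> a bit over 8%/yr
print(f"BRK:  {annualized(2.481, years):.1%}")
```

The same formula reproduces both figures cited in the post, which is a useful sanity check on the holding-period assumption.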
The fact that investing in a tech giant like Microsoft would fail to beat the S&P 500 probably came as a surprise to high-conviction long-term investors certain Microsoft would continue to dominate the world in market cap as it once did.<br /><br /><b>What Microsoft Was Up To When It Joined The Dow</b><br /><br />To understand Microsoft's lackluster performance as a Dow component, it's not enough to nod knowingly and say that it'd grown too big. When <a href="">Apple passed Microsoft in market cap</a> on May 26, 2010, mathematical law didn't halt Apple's outperformance. Far from it.<br /><br />(The total return chart reflects both companies' dividends.)<br /><br />To understand why Microsoft's performance in the early 2000s held the company back, it's necessary to look at what it was doing to grow at the time. In 1999, Microsoft owned the PC desktop operating system market, held a hugely profitable share of the market for server operating systems and tools, and was <a href="">just deciding to make gaming consoles</a>, a loss-leader business in which <a href="">it only lost money until 2004</a>. In its zeal to obtain a monopoly in digital music formats and to secure royalties from every music player in every car, home stereo, and Walkman, <a href="">Microsoft drove Apple into music players</a> and inspired <a href="">Apple's Music Store</a> to protect Apple's customers from the loss of access to new music threatened by Microsoft's hoped-for era of Microsoft-only music formats. Frightened of losing control of servers to virtualization firms like VMWare, <a href="">Microsoft bought VirtualPC from Connectix Corp.</a> (now dissolved) and rewrote its already-complicated licenses to charge customers a premium if they wanted to install Microsoft's products on virtual servers - something high-uptime management makes mandatory.
In doing so, it raised OEM and customer incentive to consider alternatives. When Microsoft's <a href="">FUD campaign against Apple music players failed</a>, Microsoft gave up securing music file format hegemony through its PC model (i.e., having OEMs make hardware while licensing its software at little cost to Microsoft), and leapt into the portable music player market <a href="">only to abandon it</a>. Microsoft created, marketed, and <a href="">abandoned a Microsoft eBook reader</a>, only to <a href="">fund an eBook venture with Barnes & Noble Inc.</a> (BKS) in which Microsoft <a href="">lost hundreds of millions before bailing out in December</a>.<br /><br />Meanwhile, Microsoft made its operating systems <a href="">costlier for OEMs to deploy on desktops</a>. It also <a href="">complicated its OS licensing scheme</a> to ensure that while it would hold the bottom-end share (to prevent encroachment by newcomers), it would profit much more from the higher-end market segment and command a premium from enterprises and pro users who depended on highly parallel application operation, remote administration, and access to networks through centrally-controlled credentialing systems. While working to leverage its OS monopoly into earnings growth, it saw mobile platforms grow in relevance - non-Microsoft mobile platforms that offered competitors a foothold for luring developers and customers away from dependence on Microsoft-controlled APIs, developer tools, and licensing fees. Microsoft's mobile device licensing fees drove mobile OEMs to competitive platforms, but <a href="">mobile hardware is a cut-throat business</a> and Microsoft <a href="">optimistically cheered</a> while firms with an established history making and marketing hardware were driven from the field. (Whether Microsoft cheered oblivious to its own plight or to distract attention from it is your guess.)<br /><br />Apple's entry into smartphones in 2007 changed the face of mobile competition.
Besides Apple, <a href="">only the world's unit sales leader</a> profited in the smartphone market. But Apple added to its hardware complete control over the software it shipped, so that it produced superior performance for users even in areas in which its <a href="">hardware specs arguably lagged</a>. Competing with Apple required vendors to invest their own resources in their mobile OS development, reducing margins on a device count that could not compete with Apple's comparable products. (The bulk of the smartphone market consists of cheaper, lower-margin phones whose hardware wouldn't compete even with software customization.)<br /><br />Eleven years after joining the Dow, Microsoft used expertise it acquired with the Danger acquisition to launch its own phone hardware in May of 2010 - but it <a href="">didn't effectively market</a> the phone's greatest strength (cloud backup) and failed to include features users had come to depend on in the profitable segment of the phone market. The billion-dollar Danger acquisition led to a <a href="">major product flop</a>. (Apple had product flops, too - <a href="">as described here</a>. Marketing isn't everything.) After subsidizing Nokia Oyj to produce hardware shipping a Microsoft mobile OS, <a href="">Microsoft purchased Nokia's hardware division</a> only to share a <a href="">non-iOS/non-Android global share of 4%</a> with such struggling competitors as Blackberry Ltd. (formerly Research In Motion, which approved a sale of itself to a third party a few years ago but was apparently stymied by regulators) and <a href="">Jolla Ltd.</a> (privately-held 125-employee vendor of the Sailfish OS). On the way to this ignominious position, Microsoft had to endure headlines like "<a href="">Samsung's Bada OS growing faster than Windows Phone</a>". 
(<a href="">Samsung merged Bada's best components</a> into the <a href="">Linux-based Tizen project</a>, with which Samsung - much like Microsoft - <a href="">hopes against the odds</a> to take the "third choice" OS spot behind the market leaders.) Using Nokia to sell phones probably looked better then it <a href="">still led global cellphone unit sales</a>, but recently it's been struggling only to lose <a href="">not only share but absolute sales</a>.<br /><br /><b>What Apple is Doing As It Joins the Dow</b><br /><br />Apple earns more profit than all other smartphone vendors combined, and the smartphone market remains a growing market. In the first quarter of 2014, <a href="">only Apple and Samsung profited selling smartphones</a>: together, they shared 106% of the global smartphone profit. The also-rans - including Microsoft - <i>lost money</i> to stay in the game. The interesting fact is that Apple made most of this money - 65% - while holding less than 16% unit sales share. In the last calendar quarter of 2014, Apple's phones reportedly raked in <a href="">89% of the world's smartphone profit</a> even as the profit pie grew.<br /><br />Apple is doing this while selling a minority of global units. Not only is the market growing, but Apple hasn't come anywhere near saturation.<br /><br />Even as Microsoft commands the majority of the market for PC operating systems, Apple makes more profit than any of Microsoft's OEMs despite holding relatively modest share. In the last quarter of 2012, <a href="">Apple's computers took 45% of the PC market's profits</a> while selling but 5% of the PC units. By the third quarter of 2014, Apple grew PC profits to <a href="">50% of global share on the same 5% unit share</a>. This isn't the result of some flash-in-the-pan Mac fad but of the market segment Apple pursues - what ZDNet called "<a href="">the only segment of the PC market that still matters</a>". 
And by holding a sizeable minority of this richest segment, Apple not only establishes itself among those who need such machines but builds a base from which to grow its reach into that segment.<br /><br />Unlike Microsoft, Apple hasn't joined the Dow controlling a saturated low-growth market that forces it to look toward loss-leader markets like game consoles for growth. Apple didn't enter music players or phones with the idea of losing money to gain share, or even of profiting on a razor-and-blades model of earning back on content what was lost on hardware. Apple makes good money on its hardware, and keeps post-sales revenue as pure gravy. And it's an interesting gravy market. After <a href="">selling $10 billion through the App Store in 2013</a>, Apple saw App Store <a href="">revenue grow 50% in 2014</a> - during which it <i>paid developers</i> $10 billion. While this may be modest in the scale of Apple's overall revenue, the key is to consider what this does to Apple's ecosystem: Apple developers have <a href="">earned $25 billion from iOS development alone</a>. What developer would leave a market like that? Apple's successful after-sales program ensures developers target Apple platforms first and support them with the most engineering resources. The <a href="">points made in 2012</a> about Apple's ability to tempt developers with a user base that pays for and updates software remain valid. By building MacOS X - and therefore iOS - upon a Cocoa environment that allows a single application to support an unlimited number of languages, <a href="">Apple built international support into every product it has sold</a> since departing MacOS 9.
As a result, Apple's products enjoy <a href="">predictably hard-to-satisfy demand</a> in markets - like China - in which Apple couldn't compete at all when Microsoft joined the Dow.<br /><br />As Apple continues to thrive in its high-margin market segment, it puts devices in the hands of customers with a proven willingness to pay for quality. <a href="">As predicted in 2011</a>, Apple expanded its iOS electronic wallet system into a <a href="">payment processing business</a>. Since the ApplePay rollout, Whole Foods Market Inc. has seen <a href="">"significant growth" in mobile payment use</a>, with double-digit week-over-week growth of ApplePay volumes and 400% growth in mobile payments by the end of January. In under a month, Apple had 1% of Whole Foods' entire transaction volume. By the end of January 2015, ApplePay accounted for <a href="">80% of all mobile payments at Panera Bread Co.</a> (PNRA). Three months into the product's launch, <a href="">ApplePay handles $2 of every $3 spent through contactless payment</a> across the three largest card networks in the U.S. ApplePay's security doesn't keep <a href="">card-issuing banks from foolishly believing fraudsters actually hold the bank's cards</a>, but since each issuing bank sets the procedure it will use to determine whether an ApplePay user is its authorized card holder, this should improve. The <a href="">ApplePay global rollout is still underway</a>. But since <a href="">Apple's mobile customers outspend competitors' customers</a>, Apple stands to participate in a prime section of the payment processing market. Despite a smaller user base, <a href="">iOS devices drove five times Android users' spending</a> between Black Friday and Christmas last year. Not only were purchases more frequent, but average purchase size doubled on iOS.
Whatever might be said about Samsung's mobile payment platform, it's Apple's customers that will make Apple's payment processing more profitable.<br /><br />And that's not the only growing business. <a href="">Apple's "hobby" in AppleTV</a> poises it to assault a multibillion-dollar television market while vending content. Apple's also about to launch a smartwatch, which Piper Jaffray said would face disinterest because only <a href="">14% of those surveyed would buy</a> one for $350 without having been able to see or use it (up from 12% in an earlier poll). No disrespect, but if 14% of those surveyed would buy a product for $350 without being able to see or use it beforehand, either a very acquisitive demographic is being surveyed or there's a shocking demand for an Apple watch. Most of us tell time on our phones, no? Recall, if you will, that Apple first entered the phone business with an ambition of gaining <a href="">1% of the cellphone market</a>.<br /><br />In its last quarterly results announcement, Apple guided to March-ending quarterly revenue between $52 billion and $55 billion. At the top end, this represents <a href="">20% growth over the year-ago quarter</a>, even accounting for foreign-currency headwinds.<br /><br /><b>Conclusion</b><br /><br />Adding Apple to the Dow may result in some transient sales as funds adjust holdings, but the real returns will follow Apple's performance. Unlike Microsoft when it joined the Dow in 1999, Apple stands at a high-growth period that starts with projected growth in this March-ending quarter of some 20% over the comparable quarter last year. Apple continues to strengthen its high-margin product segments even as it grows the value of its ecosystem through post-sales opportunities.
Rather than descending into low-margin market segments in the quest for growth as Microsoft did following its addition to the Dow, Apple is still growing its highest-margin businesses while adding mechanisms to improve post-sales revenue opportunities. Apple isn't dead after being added to the Dow, but very much alive - and represents a much better deal than Microsoft did when it joined the Dow at the end of the millennium.<br /><br /><b>Disney's Alchemists Turn Lead Into Gold</b><br /><br />Seeking Alpha just published the Jaded Consumer article <a href="">Disney Alchemists Convert Lead to Gold On Command</a>. Read how Disney reorients its resources toward unencumbered intellectual property to improve its margins.<br /><br /><b>Myke Cole Hits Target with Gemini Cell</b><br /><br />Set in the universe of his first Shadow Ops trilogy, <a href="">Myke Cole</a>'s new novel <i><a href="">Gemini Cell</a></i> requires no background from prior books to enjoy. This was also true of his first trilogy's capstone volume, <a href="">Shadow Ops: Breach Zone</a> (<a href="">reviewed here</a>); Cole has a gift for writing books that you can enjoy without additional background. That having been said, you should go enjoy them, especially if you're reading this while <i>Gemini Cell</i> is still a few weeks from release.<br /><br /><b>Overview</b><br /><i>Gemini Cell</i> is set in the early days of the paranormal awakening that sweeps the globe, before the rules are clear, and before the protagonist has any idea what he's in for once he finds himself "in business" with supernatural practitioners.<br /><br />[…]<br /><br />More themes appear, but why spoil the story?
<br /><br /><b>Protagonist and Adversity</b><br /><i>Gemini Cell</i> is the story of Jim Schweitzer, a Navy SEAL who doesn't expect to star in a mashup between <a href=""><i>Revenge of the Ninja</i></a> and <a href=""><i>Frankenstein</i></a> mixed with a dose of <a href=""><i>Ghost</i></a>. From the back-cover copy, we know Schweitzer is going to bite it – and soon. So we are kind of at the edge of our seats from the outset, expecting him to die at any moment, but hoping he'll manage to come out okay despite everything.<br /><br />Jim Schweitzer's plot arc in <i>Gemini Cell</i> […].<br /><br />Of course, questions overhang – how, exactly, did Jim get into this mess in the first place? There's more to learn, and you can expect future volumes to satisfy hunger for answers.<br /><br /><b>Other Characters</b><br />The story isn't all Jim Schweitzer, or testosterone-drenched white men gunning down whole armies by one-handing belt-fed machine guns whose ammo chains never shorten. Scenes cut between plotlines about Jim, and about people wondering what has <i>become</i> of him.<br /><br />[…] a <i>good</i> thing.<br /><br /><b>Darkness</b><br />As in prior Myke Cole books, <i>Gemini Cell</i> […]. There <i>are</i> unprepared targets in <i>Gemini Cell</i>, of course … but are they all bad guys? How would the good guys ever learn? The book hits a few times on the ambiguous position of good guys who act (with finality) on information they know is incomplete. They <i>trust</i> […].<br /><br />Are the wrong people in charge already?<br /><br />Well. We'll see. I hear the next one's called <a href=""><i>Javelin Rain</i></a>.<br /><br /><b>Craftsmanship</b><br />On social media and <a href="">in interviews</a>, […]: <i>it's worth it</i>.<br /><br />[…] <a href="">Jim Butcher</a>, author of the 6-volume fantasy series <a href="">Codex Alera</a> and the still-in-progress-after-fifteen-volumes <a href="">Dresden Files</a> series.
Jim Butcher's praises have been sung here <a href="">since 2008</a>.<br /><br />Readers: you're in good hands with Myke Cole.<br /><br /><b>Conclusion</b><br />Cole's universe is populated by characters with interesting internal and external conflicts that are easy to get interested in seeing resolved. People who worry about picking up a series' first book out of cliffhanger-dissatisfaction concerns need not worry. <i>Gemini Cell</i> […]. Unless you <i>enjoy</i> losing sleep, pick it up early in the day. You've been warned.<br /><br />Myke Cole's <i>Gemini Cell</i> […].<br /><br />(For those interested in an interview with Myke Cole, try <a href="">this one</a>, of which the author <a href="">says</a>, "I love new and unusual questions." Also, his mother read it and said, "What a great interview!" So read it, already.)<br /><br /><b>Gotham: A Review</b><br /><br />I recently read <a href="">Devin Faraci's pan</a> of the <a href="">TV series Gotham</a>, and felt obliged to respond.<br /><br />Gotham is for people interested in what Batman's world looks like without the Caped Crusader to rescue anyone. Without Batman, the city retains all its grit, corruption, deceit, danger, and weirdness – just no Batman to save the day. This makes it much more like a gritty crime story, except that the weirdness of a superhero's city is added – without offering a built-in rescuer to save the city and its inhabitants. Gotham is Batman for the self-help crowd, as it were. But it's a bit more: because it's set in Batman's Gotham before Batman comes into his own, it offers a view of Gotham from an angle we've never seen.
Gotham's pre-Batman history isn't delivered in flashback to inform some years-later adventure, but in its own story: how Gotham created Batman and the villains he opposes.<br /><br />[…]<br /><blockquote class="tr_bq">[quotation lost] – Bruno Heller, <a href="">interviewed by Entertainment Weekly</a></blockquote>And the acting is outstanding. Just. Outstanding.<br /><br />Although the show is built around the police work of James Gordon, it's hard not to start with Gotham's rising villains. <a href="">Jada Pinkett Smith</a>'s Fish Mooney is such a gloriously ambitious crime boss, just <i>waiting</i> to take the crime-lord crown from the old-guard gangster running Gotham, that you can't help cheering for her bloody advance. She's that good.<br /><br /><a href="">Robin Lord Taylor</a>'s Oswald Cobblepot is so clearly <i>not</i> a supervillain – yet – that it's hard not to ask <i>how</i> he becomes one.<br /><br />When is <a href="">Cory Michael Smith</a>'s Edward Nygma going to snap? We keep seeing him as a crime scene tech. But one day – one day …. Selina Kyle, portrayed by <a href="">Camren Bicondova</a>, isn't a villain yet, just a street-smart survivalist with sticky fingers. (And a host of survival skills to teach young master Bruce!) There's plenty of mid-level thugs, including one played by <a href="">David Zayas</a> (as "Don" Sal Maroni), but they become burdensomely numerous. Behind them all stands <a href="">John Doman</a>'s crime boss "Don" Carmine Falcone – maybe the most vanilla villain on the show, but plenty scary for all that.<br /><br />But the show is built around James Gordon, long before he becomes Commissioner.
<a href="">Benjamin McKenzie</a> <a href="">David Mazouz</a>' Bruce Wayne – who's done an outstanding job – who in turn is being raised by his badass butler and guardian Alfred Pennyworth (absolutely beautifully done by <a href="">Sean Pertwee</a>), who has some firm ideas how a boy should be raised. Heh, heh. <br /><br />Gordon's older, jaded partner Harvey Bullock (masterfully portrayed by <a href="">Donal Logue</a>) is a piece of work from the start: friend or foe? Both? Gotta love Gotham.<br /><br /.<br /><br /.<br /><br />I'm dying to go to the defense of the <a href="">Balloon Man</a> <i>must not become</i>. The whimsical balloon motif is perfect for Gotham: it's a nod to the culture that <i>must exist</i> to produce the weird future full of costumed villains with which the city is doomed to be inundated. The bizarro scheme is <i>exactly</i>.<br /><br />Fans of Batman can hardly find a better show.<img src="" height="1" width="1" alt=""/>Jaded Consumer Rookie Kills Unarmed Innocent Who Took StairsIn a <a href="">continuation of earlier coverage</a>, The Jaded Consumer notes that <a href="">NYPD rookie Peter Liang killed Akai Gurley with one shot to the chest</a>Akai Gurley was African American</a>.<br /><br /.<br /><br />When asked about killing Gurley with a single gunshot to the chest, Officer Liang said, "I shot him accidentally."<br /><br />Uh-huh.<br /><br /. <a href="">Contrary to assertions</a> that reforming the NYPD's stop-and-frisk practices would "<a href="">end in buckets of blood on city streets</a>[,]" a 75% drop in police stops – from <a href="">about 700,000 in 2011</a> to <a href="">50,000 this year</a> – has not prevented New York's murder count to <a href="">drop by 20 deaths</a> compared to the same period last year. <a href="">New York City's crime rate hit a 20-year low</a>. The NYPD's <a href="">stop-and-frisk practices were ruled unconstitutional</a> last year.<br /><br />Maybe the way to reduce crime isn't to escalate oppression. 
Who knew?<br /><br /><b>American Capital's Pre-Split Value Per Share</b><br /><br />Seeking Alpha posted my article ("<a href="">American Capital Ltd.: What A Share Is Worth</a>") outlining the company's post-dilution NAV in the event all outstanding options were exercised. What's not yet clear is how <a href="">the impending split</a> affects the options. If they're not repriced, then any options not exercised before the dividends are paid will be worth quite a lot less. It'll be something to watch as the transaction unfolds.<br /><br /><b>"Islamic State" Sex Slavery, Personalized</b><br /><br />Putting a human face on the <a href="">sexual slavery practiced by the "Islamic State"</a> declared in Syria and parts of Iraq, <a href="">CNN is running a story</a> about a 19-year-old aspiring physician abducted at gunpoint. Apparently, the "Islamic State" offers a compensation package to fighters that goes beyond <a href="">$2,000 cash and drugs to stave off flight</a> from battle: they offer the opportunity to rape captive women.<br /><br /><b>Rite-Aid's Payment Processor Prejudice: Unlawful Tying?</b><br /><br />Rite-Aid, which supported both Google Wallet and Apple Pay until just recently, <a href="">halted its use of both payment processors</a> – apparently in favor of <a href="">a payment processor it will co-own</a>. Is Rite-Aid alone in this, or is it a boycott?
Even if it's not a <a href="">boycott</a>, isn't <a href="">tying the purchase of a service to the purchase of some other good or service</a> an indication that a market participant is using market power to create a monopoly?<br /><br />More news as the payment processor competition heats up.<br /><br /><b>FBI Director Worried 1st, 4th Amendments Might Mean Something</b><br /><br />The Director of the FBI <a href="">expressed concern recently</a> that technological advances might render <i>practically meaningful</i> the First Amendment's right to free assembly and the Fourth Amendment's right to freedom from unreasonable search. Instead, it might be necessary to <a href="">get a court order</a> to snoop on U.S. citizens. Poor G-man.<br /><br />Incidentally, nothing in the tech interferes with government <a href="">collection of metadata</a>, only with the encrypted message contents. Mapping networks of connected individuals is apparently still fair game for <a href="">government agencies interested in snooping warrantlessly</a> into the relationships of those presumed innocent.<br /><br />UPDATE: The Director of the Federal Bureau of Investigation is <a href="">asking Congress to create federal law</a> that would interfere with genuine privacy of the sort already required by federal law in areas like credit card transactions and health care privacy.
Apparently those technologies are too dangerous for Americans, after all.<br /><br /><b>Fuel Prices, Taxes, and Profits</b><br /><br /><a href="">ExxonMobil's fuel tax map of the United States</a> shows regional variation in tax policy. The page's author presents a defense to the charge that oil companies are scamming government out of tax money: the government earns in taxes an order of magnitude more on each gallon refined, shipped, and sold in the United States than ExxonMobil earns in profit on the same gallons.<br /><br />The defense is interesting, but I think it dodges the charge. Those who accuse multinational oil companies of running a tax scam aren't focused on sales taxes imposed on locally-sold products, but on the international business of companies that historically paid U.S. income taxes on income earned in foreign jurisdictions. From the point of view of ExxonMobil, of course, the government collects not only 40 to 60 cents per gallon refined, shipped, and sold in the U.S. – but also 35% income tax on ExxonMobil's 5.5¢ profit per gallon. From the perspective of ExxonMobil's detractors, what has that to do with ExxonMobil's 'right' to use U.S. resources to build and defend a global business empire from which it gathers income free of U.S. taxes?<br /><br />It's an interesting situation that invites inquiry into local competitive conditions globally and examination of the practical effects of tax policy. With the <a href="">elimination of the double-Irish scheme</a>, international tax planning will take another wave of innovation (and consultants in the area will make another fortune).
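The per-gallon arithmetic above is easy to sketch. Using the figures in the post – roughly 40-60¢ of fuel taxes per gallon and a 5.5¢ pre-tax profit taxed at 35% – with the midpoint of the tax range as an assumption:

```python
fuel_tax = 0.50          # assumed midpoint of the 40-60 cents/gallon range
pretax_profit = 0.055    # ExxonMobil's stated profit per gallon
income_tax_rate = 0.35   # U.S. corporate rate cited in the post

income_tax = pretax_profit * income_tax_rate  # ~1.9 cents more to government
government_take = fuel_tax + income_tax       # ~51.9 cents/gallon
company_keeps = pretax_profit - income_tax    # ~3.6 cents/gallon after tax

print(f"government take: {government_take * 100:.1f} cents/gallon")
print(f"company keeps:   {company_keeps * 100:.1f} cents/gallon")
print(f"ratio: roughly {government_take / company_keeps:.1f}x")
```

On these assumptions the government's per-gallon take works out to roughly fourteen to fifteen times what the company keeps – the "order of magnitude" the page's author claims.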
Is there a tax policy that will result in more tax collected and fewer resources wasted avoiding taxation?
You can use this module with the following in your
~/.xmonad/xmonad.hs file:
import XMonad.Actions.CopyWindow
Then add something like this to your keybindings:
-- mod-[1..9] @@ Switch to workspace N -- mod-shift-[1..9] @@ Move client to workspace N -- mod-control-shift-[1..9] @@ Copy client to workspace N [((m .|. modm, k), windows $ f i) | (i, k) <- zip (workspaces x) [xK_1 ..] , (f, m) <- [(W.view, 0), (W.shift, shiftMask), (copy, shiftMask .|. controlMask)]]
To use the above key bindings you need also to import XMonad.StackSet:
import qualified XMonad.StackSet as W
You may also wish to redefine the binding to kill a window so it only removes it from the current workspace, if it's present elsewhere:
, ((modm .|. shiftMask, xK_c ), kill1) -- @@ Close the focused window
Instead of copying a window from one workspace to another, maybe you don't want to have to remember where you placed it. For that, consider:
, ((modm, xK_b ), runOrCopy "firefox" (className =? "Firefox")) -- @@ run or copy firefox
Another possibility this extension provides is making a window 'always visible' (i.e. always on the current workspace), similar to the corresponding metacity functionality. This behaviour is emulated by copying the given window to all workspaces and then removing it when it's no longer needed on any workspace.
Here is the example of keybindings which provide these actions:
, ((modm, xK_v ), windows copyToAll) -- @@ Make focused window always visible
, ((modm .|. shiftMask, xK_v ), killAllOtherCopies) -- @@ Toggle window state back
Remove the focused window from this workspace. If it's present in no other workspace, then kill it instead. If we do kill it, we'll get a delete notify back from X.
There are two ways to delete a window. Either just kill it, or if it supports the delete protocol, send a delete event (e.g. firefox).
curl_url_get - extract a part from a URL
NAME
curl_url_get - extract a part from a URL
SYNOPSIS
#include <curl/curl.h>
CURLUcode curl_url_get(CURLU *url, CURLUPart what, char **part, unsigned int flags)
DESCRIPTION
Given a CURLU handle of a parsed URL, this function extracts the part specified by what and returns it in part as a newly allocated string. The returned string should be freed with curl_free after use.
CURLUPART_URL

When asked to return the full URL, curl_url_get will return a normalized and possibly cleaned up version of what was previously parsed.

CURLUPART_SCHEME

Scheme cannot be URL decoded on get.

CURLUPART_HOST

The host name. If it is an IPv6 numeric address, the zoneid will not be part of it but is provided separately in CURLUPART_ZONEID. IPv6 numerical addresses are returned within brackets ([]).

CURLUPART_ZONEID

If the host name is a numeric IPv6 address, this field might also be set.

CURLUPART_PORT

Port cannot be URL decoded on get.

CURLUPART_PATH

The part will be '/' even if no path is supplied in the URL.

CURLUPART_QUERY

The initial question mark that denotes the beginning of the query part is a delimiter only. It is not part of the query contents.
A not-present query will lead part to be set to NULL. A zero-length query will lead part to be set to a zero-length string.
The query part will also get pluses converted to space when asked to URL decode on get with the CURLU_URLDECODE bit.
RETURN VALUE
Returns a CURLUcode error value, which is CURLUE_OK (0) if everything went fine.
If this function returns an error, no URL part is returned.
EXAMPLE

  CURLUcode rc;
  CURLU *url = curl_url();
  rc = curl_url_set(url, CURLUPART_URL, "https://example.com", 0);
  if(!rc) {
    char *scheme;
    rc = curl_url_get(url, CURLUPART_SCHEME, &scheme, 0);
    if(!rc) {
      printf("the scheme is %s\n", scheme);
      curl_free(scheme);
    }
    curl_url_cleanup(url);
  }
AVAILABILITY
Added in curl 7.62.0. CURLUPART_ZONEID was added in 7.65.0.
SEE ALSO
curl_url_cleanup(3), curl_url(3), curl_url_set(3), curl_url_dup(3), CURLOPT_CURLU(3),
Iterators
Iterators, in themselves, aren't a revolutionary idea; many programming languages have them, but much of python's standard library is concerned with producing and consuming iterators.
iter
The iter function turns an iterable into an iterator. This is not normally required since functions that require an iterator also accept iterables.
iter can also be used to create an iterator from a function. For example, you can create an iterator that reads lines of input until the first empty line:
    import sys

    entries = iter(sys.stdin.readline, '\n')
    for line in entries:
        sys.stdout.write(line)
Because of the magic of iterators, this will print lines out as they are entered, rather than after the first empty line.
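The sentinel form works with any zero-argument callable. For instance, reading a file in fixed-size blocks (a common pattern; the block size here is arbitrary):

```python
import io

f = io.StringIO("abcdefgh")
# call f.read(3) repeatedly until it returns the sentinel ''
blocks = list(iter(lambda: f.read(3), ''))
print(blocks)  # ['abc', 'def', 'gh']
```

The same shape works for sockets, pipes, or anything else you poll with a callable.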
itertools
There are many useful functions to manipulate iterators in the itertools library.
The ones I use most often are chain.from_iterable and product.
product
product takes any number of iterables, and returns tuples of every combination, in what is called by mathematicians the cartesian product.
What this means is that the following is functionally equivalent:
    for x in [1, 2, 3]:
        for y in ['a', 'b', 'c']:
            print(x, y)

    for x, y in itertools.product([1, 2, 3], ['a', 'b', 'c']):
        print(x, y)
Except the latter example needs less indentation, which makes it easier to keep to a maximum code width of 79 columns.
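product also accepts a repeat keyword argument, which replaces a whole stack of nested loops over the same iterable. A small sketch:

```python
import itertools

# every 3-bit pattern: the cartesian product of [0, 1] with itself 3 times
bits = list(itertools.product([0, 1], repeat=3))
print(len(bits))          # 8
print(bits[0], bits[-1])  # (0, 0, 0) (1, 1, 1)
```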
chain.from_iterable
chain takes iterators and returns an iterator that returns the values of each iterator in turn.
This can be used to merge dicts, as the dict constructor can take an iterator that returns pairs of key and value; but I've not found too many uses for chain by itself.
This example implements a simple env(1) command using chain, takewhile and dropwhile though.
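The original example code isn't reproduced here, but a minimal sketch in that spirit (the function name and exact behaviour are my own assumptions, not the original code) splits leading NAME=VALUE arguments off with takewhile/dropwhile and merges them over the inherited environment with chain:

```python
import itertools


def parse_env_args(args, base_env):
    """Split env(1)-style arguments into (environment, command).

    Leading NAME=VALUE arguments override entries in base_env;
    whatever follows is the command to run."""
    is_assignment = lambda a: '=' in a
    overrides = itertools.takewhile(is_assignment, args)
    command = list(itertools.dropwhile(is_assignment, args))
    # dict() takes key/value pairs; later pairs win, so overrides
    # chained after base_env.items() shadow the inherited values
    env = dict(itertools.chain(base_env.items(),
                               (a.split('=', 1) for a in overrides)))
    return env, command


env, cmd = parse_env_args(['EDITOR=vi', 'make', 'install'], {'HOME': '/root'})
# env == {'HOME': '/root', 'EDITOR': 'vi'}, cmd == ['make', 'install']
```

A real env(1) would then exec the command with os.execvpe(cmd[0], cmd, env).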
chain has an alternative chain.from_iterable constructor, which takes an iterable of iterables. I find this useful when I have a set of objects that have an iterable field, and want to get the set of all those items:

    set(itertools.chain.from_iterable(foo.bars for foo in foos))
Generators
You may have noticed I passed something weird to the call to chain.from_iterable, this is called a generator expression.
Generators are a short-hand for creating certain kinds of iterator.
Indeed we could have used itertools.imap(lambda foo: foo.bars, foos), but as you can see, the generator expression syntax is shorter, and once you understand its general form, simpler.
You can do both filtering and mapping operations in generator expressions, so the following expressions are equivalent.
    itertools.imap(transform, itertools.ifilter(condition, it))
    (transform(x) for x in it if condition(x))
However, there's some calculations that aren't as easily expressed as a simple expression. To handle this, you can have generator functions.
generator functions
generator functions are a convenient syntax for creating generators from what looks like a normal function.
Rather than creating a container to keep the result of your calculation and returning that at the end, you can yield the individual values, and it will resume execution the next time you ask the iterator for the next value.
They are useful for calculations where the result is not simple, and may even be recursive.
    def foo(bar):
        yield bar
        for baz in bar.qux:
            for x in foo(baz):
                yield x
Sub-generators
If you have python of version 3.3 or higher available, then you can use the yield from statement to delegate to sub-generators.
    def foo(bar):
        yield bar
        for baz in bar.qux:
            yield from foo(baz)
In this example, rather than using yield from, you can do:

    for x in foo(baz):
        yield x
However this is longer and doesn't handle the potential interesting corner cases where values can be passed into a generator function or returned when iteration ends.
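For illustration, here is the return-value case that yield from handles and the manual loop does not (requires Python 3.3+):

```python
def inner():
    yield 1
    yield 2
    return 'done'   # a generator may return a value in Python 3.3+


def outer():
    # result receives inner()'s return value once it is exhausted
    result = yield from inner()
    yield result


print(list(outer()))  # [1, 2, 'done']
```

With the manual loop, inner()'s return value is silently lost inside its StopIteration.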
Context managers
Context managers are used with the with statement. A context manager can be any object that defines the __enter__ and __exit__ methods.
You don't need a dedicated object to be a context manager; using the open file object as a context manager will have it close the file at the end of the with block. It is common to use open this way:
    with open('foo.txt', 'w') as f:
        f.write('bar\n')
You can define the __enter__ and __exit__ methods yourself, but provided you don't need much logic at construction time (you rarely do) and you don't need it to be re-usable, you can define a context manager like:
    import contextlib
    import os

    @contextlib.contextmanager
    def chdir(path):
        pwd = os.getcwd()
        os.chdir(path)
        try:
            yield
        finally:
            os.chdir(pwd)
This uses a generator function that yields only one value (in this case we yield a None implicitly), and mostly exists so that you can run cleanup code after it has finished and re-enters your generator function.
The try...finally is necessary because when you yield in a context manager, it is resumed when the with block finishes, which can be from an exception. If it is from an exception then it is raised inside the context manager function, so to ensure that the chdir is always run, you need to wrap the yield in a try...finally block.
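To see the try...finally earn its keep, raise inside the with block and check that the working directory is restored afterwards (the manager is repeated here so the snippet is self-contained):

```python
import contextlib
import os
import tempfile


@contextlib.contextmanager
def chdir(path):
    pwd = os.getcwd()
    os.chdir(path)
    try:
        yield
    finally:
        os.chdir(pwd)


start = os.getcwd()
try:
    with chdir(tempfile.gettempdir()):
        raise RuntimeError('boom')
except RuntimeError:
    pass
assert os.getcwd() == start  # cleanup ran despite the exception
```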
ila Fedorov498 Points
Product
I need you to write a function named product. It should take two arguments, you can call them whatever you want, and then multiply them together and return the result.
    def product(count):
        product = (Var1*Var2)
        Var1 = 5
        Var2 = 6
        return(Var1,Var2)
i don't understand what i did wrong
2 Answers
Ramon Villarreal-Leal4,764 Points
I need help on this one
Andreas cormackPython Web Development Techdegree Graduate 33,011 Points
Hi Danila
All the task asks of you is to create a function that takes 2 arguments, multiply the two arguments and return the result. At the moment, you are passing one argument, creating a variable called product which is Var1 * Var2, by the way Var1 and Var2 are not even defined. When we say arguments we mean two values passed to a function within the parenthesis. See example below
    # in this example I will add the two arguments passed to a function called add and return the result
    def add(num1, num2):
        return num1 + num2
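For completeness, the challenge itself needs the same shape with multiplication instead of addition; a minimal version (assuming the checker simply calls it with two numbers) would be:

```python
def product(num1, num2):
    # multiply the two arguments and return the result
    return num1 * num2


print(product(5, 6))  # 30
```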
My Haskell solution (see for a version with comments):
Is there a simple proof of the assertion that this always works? I looked briefly but failed to find one on the web and I need to attend to other things. I did find some other pages that simply assert that this pair of permutations produces the minimum. I found a purported counter-example that failed to be a counter-example. And I wrote a test program, below, that seems to confirm the assertion by always returning an empty set of permutations (of the second vector) that produce an even smaller result. So I believe it but if there is a memorable proof, I’d like to know.
It’s not a proof, but a simple observation is that the maximum scalar product is produced by multiplying the largest item from each vector, then the second largest, and so on; then the minimum scalar product is the opposite. I’ll ask at a couple of web sites I know.
perl6 version
Here's an attempt at a proof:
Consider two vectors:
A = <a_1, a_2, ..., a_n> sorted highest to lowest, so a_i >= a_j if i < j; and
B = <b_1, b_2, ..., b_n> sorted lowest to highest, so b_i <= b_j if i < j.
The scalar product (A dot B) of these two vectors is:
a_1*b_1 + a_2*b_2 + ... + a_i*b_i + ...
Assume there exist a B' in which the positions of b_i and b_j have been
exchanged so that:
A dot B' < A dot B
... + a_i*b_j + ... + a_j*b_i + ... < ... + a_i*b_i + ... + a_j*b_j + ...
all the terms in the '...' are the same on both sides, so this reduces to:
a_i*b_j + a_j*b_i < a_i*b_i + a_j*b_j
which can be rearranged:
a_i*(b_j - b_i) < a_j*(b_j - b_i)
By definition of B, b_i <= b_j for i < j.
If b_i == b_j, then B' == B, and (A dot B') == (A dot B).
If b_i < b_j, then (b_j - b_i) > 0, so:
a_i < a_j
This contradicts that A is sorted so that a_i >= a_j for i < j.
Therefore, the assumption must be false.
[…] today’s Programming Praxis exercise, our goal is to calculate the minimum scalar product of two vectors. […]
My Python solution:
    def min_scalar_product(v1, v2):
        return sum([x * y for x, y in zip(sorted(v1), sorted(v2, reverse=True))])
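The greedy pairing can also be sanity-checked against an exhaustive search over permutations (feasible only for tiny vectors, since it is factorial):

```python
import itertools


def min_scalar_product(v1, v2):
    return sum(x * y for x, y in zip(sorted(v1), sorted(v2, reverse=True)))


def brute_force(v1, v2):
    # try pairing v1's entries with every permutation of v2
    return min(sum(x * y for x, y in zip(v1, p))
               for p in itertools.permutations(v2))


v1, v2 = [1, 3, -5], [-2, 4, 1]
assert min_scalar_product(v1, v2) == brute_force(v1, v2) == -25
```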
Min Scalar Product
My try in Common lisp
I asked at and got this answer from Luonos. I think that’s pretty much the same as Mike’s proof.
Requires -std=c++0x option when compiled with g++.
A Go solution:
[…] Another post from Programming Praxis, this time we’re to figure out what is the minimum scalar product of two vectors. Basically, you want to rearrange two given lists a1, a2, …, an and b1, b2, …, bn such that a1b1 + a2b2 + … + anbn is minimized. […] | http://programmingpraxis.com/2012/08/10/minimum-scalar-product/?like=1&source=post_flair&_wpnonce=019c5482d9 | CC-MAIN-2015-06 | refinedweb | 468 | 78.38 |
How do I take a reference to Spiceworks in a Windows application in C#? While trying to add a reference to spiceworks.dll, it gives the following error:

Reference could not be added. Please make sure that the file is accessible and that it is a valid assembly or COM component.
14 Replies
Apr 21, 2010 at 2:44 UTC
Keep IT Simple Technology Group is an IT service provider.
Sounds similar to
What are you trying to accomplish?
Apr 21, 2010 at 2:59 UTC
Lawrie Dalman Consulting is an IT service provider.
I'm with Yasaf Burshan, what on earth are you trying to do?
Sounds a little like trying to use some of Spiceworks' work for a school programming project.
Apr 21, 2010 at 7:15 UTC
I want to access the APIs of Spiceworks through a .NET application.
Apr 21, 2010 at 7:29 UTC
I want to capture the events occured in spiceworks Like "ticket added"..
Apr 21, 2010 at 8:00 UTC
Keep IT Simple Technology Group is an IT service provider.
I want to capture the events occured in spiceworks Like "ticket added"..
Again - What is your final goal with this - perhaps there are other ways "built-in" to do what you want.
For example the ticket added evet can be phrase by email notifications.
Apr 21, 2010 at 8:07 UTC
Actually I want to capture this event and do some jobs based on it. E.g. suppose a ticket is added to Spiceworks; then we would like to capture this event and do some other job in our application.
Apr 21, 2010 at 8:25 UTC
Keep IT Simple Technology Group is an IT service provider.
What is the othe application? Can you trigger the other job based on mail message?
You can use rules to forward the ticket to the other application then start a job based on that ticket / event.
Apr 21, 2010 at 8:27 UTC
How do I create a SQLite ODBC data connector to read data from Spiceworks?
Apr 21, 2010 at 8:33 UTC
Spiceworks.dll doesn't export any COM interfaces. You would have to handle it with Interop. Take a look with DLL export viewer. I don't see any functions that would be helpful to you - the DLL is a package of the ruby interpreter, supporting gems, and a ruby script. All of the exported functions relate to the ruby interpreter itself, not the application script.
You would handle this best and easiest by polling the database for new events. Either use a SQL query, or a page using the data API.
Apr 23, 2010 at 2:01 UTC
What reference do we need to take in our c# application to get the SPICEWORKS namespace so that I can use the API available.
Apr 23, 2010 at 8:28 UTC
The Data API is meant for plugins that run on the Spiceworks pages, but it retrieves data through a REST service. The page I linked to is a script that uses the REST service to retrieve a list of tickets and display them as an ATOM feed. In your C# program, you would follow the same flow as the script:
- Send an HTTP post to the REST service with a valid Spiceworks user name and password
- Retrieve the cookie it gives you back
- Create and HTTP GET request, set the cookie on it, and send it to the REST service to retrieve the list of recent tickets
- Parse the JSON you get back to access ticket information
Apr 23, 2010 at 9:22 UTC
Can u send me a sample code for this?
Apr 23, 2010 at 9:26 UTC
Sorry, you'll have to translate from the JScript page I linked above. I don't have anything for C#.
Apr 23, 2010 at 10:07 UTC
ok thank u. | https://community.spiceworks.com/topic/96137-how-to-take-the-reference-of-the-spiceworks-dll-to-access-its-api-s | CC-MAIN-2017-04 | refinedweb | 649 | 79.7 |
24 February 2010 10:48 [Source: ICIS news]
SINGAPORE (ICIS news)--Singapore-based Advanced Holdings said on Wednesday it has secured contracts from PetroChina to supply process equipment for the oil giant's two ethylene plants in China.
One ethylene plant is located in Daqing in
“These projects will commence in the second quarter of 2010 and are expected to be completed in mid 2011,” Advanced Holdings said in a statement.
The company also secured a third contract to provide process equipment for the coal-gasification processes of a plant owned by Jinjiang Chemical.
The project started earlier this year and is expected to be completed by August 2010, Advanced Holdings said.
The three contracts are worth about
In a separate statement, the company reported fourth quarter profits of
Revenue from its petrochemicals and chemicals division in 2009 rose 9.1% year on year to S$54.8m, the company said.
Advanced Holdings reported a full-year net profit of S$9.5m, up 28% from a year ago, it said.
(S$1= $0.71, €0 | http://www.icis.com/Articles/2010/02/24/9337302/singapore-based-advanced-wins-contracts-from-petrochina.html | CC-MAIN-2014-10 | refinedweb | 178 | 63.29 |
The Monadic Way
From HaskellWiki
Revision as of 12:37, 25 August 2006
1 An evaluation of Philip Wadler's "Monads for functional programming"
This tutorial is a "translation" of Philip Welder's "Monads for functional programming". (avail. from here)
I'm a Haskell newbie trying to grasp such a difficult concept as the ones of Monad and monadic computation. While "Yet Another Haskell Tutorial" gave me a good understanding of the type system when it comes to monads I find it almost unreadable.
But I had also Welder constants and calculates their sum
For instance, something like:
(Add (Con 5) (Con 6))
should yeld:
11
We will implement our language with the help of a data type constructor such as:
module MyMonads where

data Term = Con Int
          | Add Term Term
          deriving (Show)
After that we build our interpreter:
eval :: Term -> Int eval (Con a) = a eval (Add a b) = eval a + eval b
That's it. Just an example:
*MyMonads> eval (Add (Con 5) (Con 6))
11
*MyMonads>
The evaluator changed quite a lot! Now it has a different type: it takes a Term data type and produces a new type, we called MOut, that is actually a pair of a value and some output:

type MOut a = (a, Output)
type Output = String
*MyMonads> evalO (Add (Con 5) (Con 6))
(11,"eval (Con 5) <= 5 - eval (Con 6) <= 6 - eval (Add (Con 5) (Con 6)) <= 11 - ")
*MyMonads>

Each sub-evaluation produces its Int together with the output coming from its calculation (to be concatenated by the expression x ++ y ++ formatLine ...).
So we need to separate the pairs produced by "evalO t" and "evalO u" (remember: eval now produces a value of type MOut Int, i.e. a pair of an Int and an Output). It does so by creating a pair with that Int and some text. "a", "b", "x" and "y" will then be bound to the values coming from "evalO t" and "evalO u" respectively, and used in the evaluation of ((a + b), x ++ y ++ formatLine ...). Applying the function to "a" produces a new pair: a new Int produced by a new evaluation, and some new output.
bindM will return the new Int in pair with the concatenated outputs resulting from the evaluation of "m" and "f a".
So let's write the new version of the evaluator:
bindM (evalM_1 u) (\b -> ((a + b), formatLine (Add t u) (a + b)))
bindM takes the result of the evaluation "evalM_1 u", a type MOut Int, and a function. It will extract the Int from that type and use it to bind "b".
So in bindM (evalM_1 u)... "b" will be bound to a value.
Then the outer part (bindM (evalM_1 t) (\a...) will bind "a" to the value needed to evaluate "((a+b), formatLine...) and produce our final MOut Int.
We can write the evaluator in a more convinient way, now that we know what it.
First we need a method for creating someting of type M a, starting fromsomething of type a. This is what
Very simply:
mkM :: a -> MOut a
mkM a = (a, "")
We then need to "insert" some text (Output) in our type M:
outPut :: Output -> MOut ()
outPut x = ((), x)
Very simple: we have a string "x" (Output) and create a pair with a () instead of an Int, and the output.
This way we will be able to define also this first part in terms of bindM, that will take care of concatenating outputs.
So we have now a new something for this case, when we concatenate computations without the need of binding variables. Let's call it `combineM`:
combineM :: MOut a -> MOut b -> MOut b
combineM m f = m `bindM` \_ -> f
We will take the old version of our evaluator and substitute `bindMO` with >>= and `mkMO` with return:
evalM_4 :: Term -> Eval Int
evalM_4 (Con a) = return a
evalM_4 (Add t u) = evalM_4 t >>= \a ->
                    evalM_4 u >>= \b ->
                    return (a + b)
The first part of the pair will hold the results of our computation (i.e.: the procedures to calculate the final result). The second part, the String called Output, will get filled up with the concatenated output of the computation.
The sequencing done by bindMO (now >>=) will take care of passing to the next evaluation the needed Int and will do some more side calculation to produce the output (concatenating outputs resulting from computation of the new Int, for instance).
So we can grasp the basic concept of a monad: it is like a label which we attach to each step of the evaluation (the String attached to the Int). This label is persistent within the process of computation and at each step bindMO can do some manipulation of it. We are creating side-effects and propagating them within our monads.
Ok. Let's translate our output-producing evaluator in monadic notation:
newtype Eval_IO a = Eval_IO (a, O)
    deriving (Show)
type O = String

instance Monad Eval_IO where
    return a = Eval_IO (a, "")
    (>>=) m f = Eval_IO (b, x ++ y)
        where Eval_IO (a, x) = m
              Eval_IO (b, y) = f a

print_IO :: O -> Eval_IO ()
print_IO x = Eval_IO ((), x)

eval_IO :: Term -> Eval_IO Int
eval_IO (Con a) = do
    print_IO (formatLine (Con a) a)
    return a
eval_IO (Add t u) = do
    a <- eval_IO t
    b <- eval_IO u
    print_IO (formatLine (Add t u) (a + b))
    return (a + b)
Let's see the evaluator with output in action:
*MyMonads> eval_IO (Add (Con 5) (Con 6))
Eval_IO (11,"eval (Con 5) <= 5 - eval (Con 6) <= 6 - eval (Add (Con 5) (Con 6)) <= 11 - ")
*MyMonads>
That's it. For today...
(TO BE CONTINUED)
Andrea Rossato arossato AT istitutocolli.org | http://www.haskell.org/haskellwiki/index.php?title=The_Monadic_Way&diff=5678&oldid=5677 | CC-MAIN-2014-23 | refinedweb | 904 | 64.24 |
I recently posted an article that reads in JSON and uses Spark to flatten it into a queryable table. Link is here
using this cmd
val jsonEvents = sqlc.read.json(events)
I want to now write the newly created schema to a file, is this possible?
example:
{account:{name:123, type:retail}}. Read.json method will take the input and create account.name, account.type. How can I get spark to write this same format to a file? Desired format "123,retail".
So far I have tried these options
jsonEvents.rdd.saveAsTextFile("/events/one") jsonEvents.write.json("/events/one/example.json")
but each gave different results as expected.
[[null,Columbus,Ohio,21000],future,forward,null,null,40.00,456,sell]
{"account":{"city":"Columbus","state":"Ohio","zip":"21000"},"assetClass":"future","contractType":"forward","strikePrice":"40.00","tradeId":"456","transType":"sell"}
the schema of read.json is :
root |-- account: struct (nullable = true) | |-- accountType: string (nullable = true) | |-- city: string (nullable = true) | |-- state: string (nullable = true) | |-- zip: string (nullable = true) |-- assetClass: string (nullable = true) |-- contractType: string (nullable = true) |-- price: string (nullable = true) |-- stockAttributes: struct (nullable = true) | |-- 52weekHi: string (nullable = true) | |-- 5avg: string (nullable = true) |-- strikePrice: string (nullable = true) |-- tradeId: string (nullable = true) |-- transType: string (nullable = true)
Answer by Kirk Haslbeck ·
Answer by Paul Hargis ·
Better answer here, using built-in sql functions concat() and lit() to create a single String value holding the contents of the selected Row values. For simplicity, I only included 2 columns here, the "tradeId" and "assetClass" columns:
%spark
import org.apache.spark.sql.functions.{concat, lit}

val jsonStrings = jsonEvents.select(concat($"tradeId", lit(","), $"assetClass").alias("concat"))
jsonStrings.show()
Results in the following:
import org.apache.spark.sql.functions.{concat, lit}
jsonStrings: org.apache.spark.sql.DataFrame = [concat: string]
+----------+
|    concat|
+----------+
| 123,stock|
|456,future|
|789,option|
+----------+
@Paul Hargis The issue with this approach is that you need to type all the column names "$tradeId" which negates the advantage of using sparks json reader. The reader dynamically creates the schema, this line would break that by physically typing the schema.
If I'm understanding you correctly your desired format is a csv file? I know there are some libraries out there that do this if you need things like quote and escape chars (spark-csv). For simpler cases you can also map through your events and transform them into a string with the values you want separated by commas.
Answer by Paul Hargis ·
At first, it appears what you want is a flat file of the values (not the keys/columns) stored in the events DataFrame. Perhaps not the direct approach, but consider writing the DataFrame to a Hive table using registerTempTable(), which will store the values to Hive managed table, as well as storing metadata (i.e. column names) to Hive metastore. For instance:
events.registerTempTable("staging")
sqlContext.sql("CREATE TABLE events STORED AS ORC AS SELECT * FROM staging")
On the other hand, if you want to use the DataFrame API to do this, you might try this:
events.write.saveAsTable(tableName)
@Paul Hargis I tried this but the hive table did not end up with flattened columns like the spark SQL table. See image. | https://community.hortonworks.com/questions/43787/can-sparksql-write-a-flattened-json-table-to-a-fil.html?sort=newest | CC-MAIN-2019-26 | refinedweb | 527 | 54.83 |
# PVS-Studio for Java hits the road. Next stop is Elasticsearch

The PVS-Studio team has been keeping the blog about the checks of open-source projects by the same-name static code analyzer for many years. To date, more than 300 projects have been checked, the base of errors contains more than 12000 cases. Initially the analyzer was implemented for checking C and C++ code, support of C# was added later. Therefore, from all checked projects the majority (> 80%) accounts for C and C++. Quite recently Java was added to the list of supported languages, which means that there is now a whole new open world for PVS-Studio, so it's time to complement the base with errors from Java projects.
The Java world is vast and varied, so one doesn't even know where to look first when choosing a project to test the new analyzer. Ultimately, the choice fell on the full-text search and analytical engine Elasticsearch. It is quite a successful project, and it's even especially pleasant to find errors in significant projects. So, what defects did PVS-Studio for Java manage to detect? Further talk will be right about the results of the check.
Briefly about Elasticsearch
---------------------------
[Elasticsearch](https://www.elastic.co/products/elasticsearch) is a distributed, RESTful search and analytics engine with open source code, capable of solving a growing number of use cases. It enables you to store large amounts of data, carry out a quick search and analytics (almost in real time mode). Typically, it is used as the underlying mechanism/technology, which provides applications with complex functions and search requirements.
Among the major sites using Elasticsearch there are Wikimedia, StumbleUpon, Quora, Foursquare, SoundCloud, GitHub, Netflix, Amazon, IBM, Qbox.
Fine, enough of introduction.
The whole story of how things were
----------------------------------
There were no problems with the check itself. The sequence of actions is rather simple and didn't take much time:
* Downloaded Elasticsearch from [GitHub](https://github.com/elastic/elasticsearch);
* Followed the [instructions](https://www.viva64.com/en/m/0044/) how to run the Java analyzer and ran the analysis;
* Received the analyzer's report, delved into it and pointed out interesting cases.
Now let's move on to the main point.
Watch out! Possible NullPointerException
----------------------------------------
[V6008](https://www.viva64.com/en/w/v6008/) Null dereference of 'line'. GoogleCloudStorageFixture.java(451)
```
private static PathTrie defaultHandlers(....) {
    ....
    handlers.insert("POST /batch/storage/v1", (request) -> {
        ....
        // Reads the body
        line = reader.readLine();
        byte[] batchedBody = new byte[0];
        if ((line != null) ||
            (line.startsWith("--" + boundary) == false)) // <=
        {
            batchedBody = line.getBytes(StandardCharsets.UTF_8);
        }
        ....
    });
    ....
}
```
The error in this code fragment is that if the string from the buffer wasn't read, the call of the *startsWith* method in the condition of the *if* statement will result in throwing the *NullPointerException* exception. Most likely, this is a typo and when writing a condition developers meant the *&&* operator instead of *||*.
[V6008](https://www.viva64.com/en/w/v6008/) Potential null dereference of 'followIndexMetadata'. TransportResumeFollowAction.java(171), TransportResumeFollowAction.java(170), TransportResumeFollowAction.java(194)
```
void start(
    ResumeFollowAction.Request request,
    String clusterNameAlias,
    IndexMetaData leaderIndexMetadata,
    IndexMetaData followIndexMetadata,
    ....) throws IOException
{
    MapperService mapperService = followIndexMetadata != null // <=
        ? ....
        : null;
    validate(request,
             leaderIndexMetadata,
             followIndexMetadata, // <=
             leaderIndexHistoryUUIDs,
             mapperService);
    ....
}
```
Another warning from the [V6008](https://www.viva64.com/en/w/v6008/) diagnostic. The object *followIndexMetadata* kindled my interest. The *start* method accepts several arguments as input, our suspect is right among them. After that, based on checking our object for *null,* a new object is created, which is involved in further method logic. Check for *null* shows us that *followIndexMetadata* can still come from the outside as a null object. Well, let's look further.
Then multiple arguments are pushed to the *validate* method (again, there is our considered object among them). If we look at the implementation of the validation method, it all falls into place. Our potential null object is passed to the *validate* method as a third argument, where it unconditionally gets dereferenced. Potential *NullPointerException* as a result.
```
static void validate(
    final ResumeFollowAction.Request request,
    final IndexMetaData leaderIndex,
    final IndexMetaData followIndex, // <=
    ....)
{
    ....
    Map ccrIndexMetadata = followIndex.getCustomData(....); // <=
    if (ccrIndexMetadata == null) {
        throw new IllegalArgumentException(....);
    }
    ....
}
```
We don't know for sure with what arguments the *start* method is called. It is quite possible that all arguments are checked somewhere before calling the method and no null object dereference will happen. Anyway, we should admit that such code implementation still looks unreliable and deserves attention.
[V6060](https://www.viva64.com/en/w/v6060/) The 'node' reference was utilized before it was verified against null. RestTasksAction.java(152), RestTasksAction.java(151)
```
private void buildRow(Table table, boolean fullId,
                      boolean detailed, DiscoveryNodes discoveryNodes,
                      TaskInfo taskInfo) {
    ....
    DiscoveryNode node = discoveryNodes.get(nodeId);
    ....
    // Node information. Note that the node may be null because it has
    // left the cluster between when we got this response and now.
    table.addCell(fullId ? nodeId : Strings.substring(nodeId, 0, 4));
    table.addCell(node == null ? "-" : node.getHostAddress());
    table.addCell(node.getAddress().address().getPort());
    table.addCell(node == null ? "-" : node.getName());
    table.addCell(node == null ? "-" : node.getVersion().toString());
    ....
}
```
Another diagnostic rule with the same problem triggered here. *NullPointerException*. The rule cries out for developers: «Guys, what are you doing? How could you do that? Oh, it is awful! Why do you first use the object and check if for *null* in the next line? Here is how null object dereference happens. Alas, even a developer's comment didn't help.
[V6060](https://www.viva64.com/en/w/v6060/) The 'cause' reference was utilized before it was verified against null. StartupException.java(76), StartupException.java(73)
```
private void printStackTrace(Consumer consumer) {
    Throwable originalCause = getCause();
    Throwable cause = originalCause;
    if (cause instanceof CreationException) {
        cause = getFirstGuiceCause((CreationException)cause);
    }
    String message = cause.toString(); // <=
    consumer.accept(message);
    if (cause != null) { // <=
        // walk to the root cause
        while (cause.getCause() != null) {
            cause = cause.getCause();
        }
        ....
    }
    ....
}
```
In this case we should take into account that the *getCause* method of the *Throwable* class might return *null*. The above problem repeats further, so its explanation is needless.
Meaningless conditions
----------------------
[V6007](https://www.viva64.com/en/w/v6007/) Expression 's.charAt(i) != '\t'' is always true. Cron.java(1223)
```
private static int findNextWhiteSpace(int i, String s) {
    for (; i < s.length() && (s.charAt(i) != ' ' || s.charAt(i) != '\t'); i++)
    {
        // intentionally empty
    }
    return i;
}
```
The function is meant to return the index of the first whitespace character, starting from index *i*. What's wrong? The analyzer warns that *s.charAt(i) != '\t'* is always true, which means the whole expression *(s.charAt(i) != ' ' || s.charAt(i) != '\t')* is always true as well. Is that really so? You can easily convince yourself by substituting any character: no single character can be equal to both ' ' and '\t' at once.
As a result, the method always returns an index equal to *s.length()*, which is wrong. I would venture to suggest that the following method is to blame:
```
private static int skipWhiteSpace(int i, String s) {
for (; i < s.length() && (s.charAt(i) == ' ' || s.charAt(i) == '\t'); i++)
{
// intentionally empty
}
return i;
}
```
A developer implemented *skipWhiteSpace*, then copied it to produce our erroneous *findNextWhiteSpace* and made some edits, but never finished the job. To get this right, the *&&* operator should be used instead of *||*.
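To see that the fix is a simple De Morgan rewrite, here is a self-contained sketch (the class and the test strings are ours, not Elasticsearch's): the buggy predicate never stops the scan, while the *&&* version stops at the first space or tab.

```java
// Sketch of the bug and the fix; WhitespaceScan is a hypothetical helper.
class WhitespaceScan {
    // Buggy version: (c != ' ' || c != '\t') holds for every character,
    // so the loop always runs to the end and returns s.length().
    static int findNextWhiteSpaceBuggy(int i, String s) {
        for (; i < s.length() && (s.charAt(i) != ' ' || s.charAt(i) != '\t'); i++) {
            // intentionally empty
        }
        return i;
    }

    // Fixed version: keep scanning while the character is neither a space
    // nor a tab, i.e. stop at the first whitespace character.
    static int findNextWhiteSpaceFixed(int i, String s) {
        for (; i < s.length() && (s.charAt(i) != ' ' && s.charAt(i) != '\t'); i++) {
            // intentionally empty
        }
        return i;
    }
}
```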
[V6007](https://www.viva64.com/en/w/v6007/) Expression 'remaining == 0' is always false. PemUtils.java(439)
```
private static byte[]
generateOpenSslKey(char[] password, byte[] salt, int keyLength)
{
....
int copied = 0;
int remaining;
while (copied < keyLength) {
remaining = keyLength - copied;
....
copied += bytesToCopy;
if (remaining == 0) { // <=
break;
}
....
}
....
}
```
The loop condition *copied < keyLength* guarantees that *copied* is less than *keyLength* when *remaining = keyLength - copied* is computed, so *remaining* is always positive at the point where it is checked. Comparing *remaining* with 0 is therefore pointless: the comparison is always false, and the loop never exits through this break. Should the dead code be removed, or should the logic be reworked? I think only the developers can dot all the i's here.
[V6007](https://www.viva64.com/en/w/v6007/) Expression 'healthCheckDn.indexOf('=') > 0' is always false. ActiveDirectorySessionFactory.java(73)
```
ActiveDirectorySessionFactory(RealmConfig config,
SSLService sslService,
ThreadPool threadPool)
throws LDAPException
{
super(....,
() -> {
if (....) {
final String healthCheckDn = ....;
if (healthCheckDn.isEmpty() &&
healthCheckDn.indexOf('=') > 0)
{
return healthCheckDn;
}
}
return ....;
},
....);
....
}
```
A meaningless expression again. For the lambda to return the *healthCheckDn* variable, the string would have to be empty and, at the same time, contain the character '=' somewhere past the first position. As you have probably guessed, that is impossible. We won't dig deeper into the code; let's leave it to the developers' discretion.
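A tiny sketch makes the contradiction visible (the *DnCheck* class is hypothetical, and we only assume that the intended check was *!isEmpty()*): an empty string cannot contain '=', so the two tests can never both hold.

```java
// Hypothetical sketch: the buggy predicate is unsatisfiable.
class DnCheck {
    // As written: empty AND contains '=' past position 0 -> never true.
    static boolean buggy(String dn) {
        return dn.isEmpty() && dn.indexOf('=') > 0;
    }

    // Likely intent (an assumption on our part): non-empty AND contains '='.
    static boolean likelyIntent(String dn) {
        return !dn.isEmpty() && dn.indexOf('=') > 0;
    }
}
```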
I cited only some of the erroneous examples; besides these, there were plenty of other [V6007](https://www.viva64.com/en/w/v6007/) warnings, each of which deserves to be reviewed individually so that the relevant conclusions can be drawn.
Little method can go a long way
-------------------------------
```
private static byte char64(char x) {
if ((int)x < 0 || (int)x > index_64.length)
return -1;
return index_64[(int)x];
}
```
Here we have a teeny-tiny method of just a few lines, yet bugs are lying in wait. Analysis of this method gave the following results:
1. V6007 Expression '(int)x < 0' is always false. BCrypt.java(429)
2. V6025 Possibly index '(int) x' is out of bounds. BCrypt.java(431)
Issue N1. The expression *(int)x < 0* is always false (yes, [V6007](https://www.viva64.com/en/w/v6007/) again). The *x* variable cannot be negative, because the *char* type in Java is an unsigned integer type. This cannot be called a real error, but, nonetheless, the check is redundant and can be removed.
Issue N2. A possible array index out of bounds, resulting in an *ArrayIndexOutOfBoundsException*. The obvious question arises: "Wait, what about the index check?"
So, we have a fixed-size array of 128 elements:
```
private static final byte index_64[] = {
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
-1, -1, -1, -1, -1, -1, 0, 1, 54, 55,
56, 57, 58, 59, 60, 61, 62, 63, -1, -1,
-1, -1, -1, -1, -1, 2, 3, 4, 5, 6,
7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27,
-1, -1, -1, -1, -1, -1, 28, 29, 30,
31, 32, 33, 34, 35, 36, 37, 38, 39, 40,
41, 42, 43, 44, 45, 46, 47, 48, 49, 50,
51, 52, 53, -1, -1, -1, -1, -1
};
```
When the *char64* method receives the *x* variable, the index validity gets checked. Where is the flaw? Why is array index out of bounds still possible?
The check *(int)x > index_64.length* is not quite correct. If the *char64* method receives *x* with the value 128, the check won't protect against *ArrayIndexOutOfBoundsException*. Maybe this never happens in practice, but the check is still written incorrectly: the "greater than" operator (>) has to be replaced with "greater than or equal to" (>=).
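Here is a compilable illustration of the off-by-one (the *Char64Check* class and its zero-filled table are ours; the real *index_64* holds Base64 lookup values): with a length-128 array, the *>* guard lets the index 128 through, and the lookup throws.

```java
// Illustrative sketch of the boundary bug; not the BCrypt sources.
class Char64Check {
    private static final byte[] TABLE = new byte[128]; // stand-in for index_64

    static byte lookupBuggy(char x) {
        if ((int) x > TABLE.length)   // 128 > 128 is false: no protection
            return -1;
        return TABLE[(int) x];        // throws for x == 128
    }

    static byte lookupFixed(char x) {
        if ((int) x >= TABLE.length)  // 128 >= 128 is true: rejected
            return -1;
        return TABLE[(int) x];
    }
}
```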
Comparisons, which did their best
---------------------------------
[V6013](https://www.viva64.com/en/w/v6013/) Numbers 'displaySize' and 'that.displaySize' are compared by reference. Possibly an equality comparison was intended. ColumnInfo.java(122)
```
....
private final String table;
private final String name;
private final String esType;
private final Integer displaySize;
....
@Override
public boolean equals(Object o) {
if (this == o) {
return true;
}
if (o == null || getClass() != o.getClass()) {
return false;
}
ColumnInfo that = (ColumnInfo) o;
return displaySize == that.displaySize && // <=
Objects.equals(table, that.table) &&
Objects.equals(name, that.name) &&
Objects.equals(esType, that.esType);
}
```
What is incorrect here is that the *displaySize* objects of type *Integer* are compared using the *==* operator, that is, by reference. It is quite possible for two *ColumnInfo* objects to hold *displaySize* fields that reference different boxed objects with the same value; in that case the comparison returns false where true was expected.
I would venture to guess that such a comparison could be the result of a failed refactoring and initially the *displaySize* field was of the *int* type.
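The reference-comparison trap is easy to reproduce in a standalone sketch (the *BoxedCompare* class is hypothetical): the JDK is only required to cache boxed *Integer* values in the range -128..127, so under default JVM settings equal values outside that range live in distinct objects, and *==* lies.

```java
// Sketch of why == on Integer is fragile; Objects.equals is the safe form.
import java.util.Objects;

class BoxedCompare {
    static boolean byReference(Integer a, Integer b) {
        return a == b;               // compares object identity
    }

    static boolean byValue(Integer a, Integer b) {
        return Objects.equals(a, b); // compares the numeric values
    }
}
```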
[V6058](https://www.viva64.com/en/w/v6058/) The 'equals' function compares objects of incompatible types: Integer, TimeValue. DatafeedUpdate.java(375)
```
....
private final TimeValue queryDelay;
private final TimeValue frequency;
....
private final Integer scrollSize;
....
boolean isNoop(DatafeedConfig datafeed)
{
return (frequency == null
|| Objects.equals(frequency, datafeed.getFrequency()))
&& (queryDelay == null
|| Objects.equals(queryDelay, datafeed.getQueryDelay()))
&& (scrollSize == null
|| Objects.equals(scrollSize, datafeed.getQueryDelay())) // <=
&& ....)
}
```
An incorrect object comparison again, this time between incompatible types (*Integer* and *TimeValue*). The result of such a comparison is obvious: it is always false. Notice that the class fields are compared pairwise, and only the field names need to change from line to line. Here is the point: a developer decided to speed things up with copy-paste and got a bug into the bargain. The class implements a getter for the *scrollSize* field, so to fix the error one should call *datafeed.getScrollSize()* instead.
Let's look at a couple more erroneous examples without any explanation; the problems are quite obvious.
[V6001](https://www.viva64.com/en/w/v6001/) There are identical sub-expressions 'tookInMillis' to the left and to the right of the '==' operator. TermVectorsResponse.java(152)
```
@Override
public boolean equals(Object obj) {
....
return index.equals(other.index)
&& type.equals(other.type)
&& Objects.equals(id, other.id)
&& docVersion == other.docVersion
&& found == other.found
&& tookInMillis == tookInMillis // <=
&& Objects.equals(termVectorList, other.termVectorList);
}
```
[V6009](https://www.viva64.com/en/w/v6009/) Function 'equals' receives an odd argument. An object 'shardId.getIndexName()' is used as an argument to its own method. SnapshotShardFailure.java(208)
```
@Override
public boolean equals(Object o) {
....
return shardId.id() == that.shardId.id() &&
shardId.getIndexName().equals(shardId.getIndexName()) && // <=
Objects.equals(reason, that.reason) &&
Objects.equals(nodeId, that.nodeId) &&
status.getStatus() == that.status.getStatus();
}
```
Miscellaneous
-------------
[V6006](https://www.viva64.com/en/w/v6006/) The object was created but it is not being used. The 'throw' keyword could be missing. JdbcConnection.java(88)
```
@Override
public void setAutoCommit(boolean autoCommit) throws SQLException {
checkOpen();
if (!autoCommit) {
new SQLFeatureNotSupportedException(....);
}
}
```
The bug is obvious and requires no explanation: a developer created an exception object but never threw it. The anonymous exception is created successfully and, most importantly, will be garbage-collected just as seamlessly. The cause is the missing *throw* operator.
[V6003](https://www.viva64.com/en/w/v6003/) The use of 'if (A) {....} else if (A) {....}' pattern was detected. There is a probability of logical error presence. MockScriptEngine.java(94), MockScriptEngine.java(105)
```
@Override
public T compile(....) {
....
if (context.instanceClazz.equals(FieldScript.class)) {
....
} else if (context.instanceClazz.equals(FieldScript.class)) {
....
} else if(context.instanceClazz.equals(TermsSetQueryScript.class)) {
....
} else if (context.instanceClazz.equals(NumberSortScript.class))
....
}
```
In this chain of *if-else* statements, one of the conditions is repeated twice; the situation calls for a careful code review.
[V6039](https://www.viva64.com/en/w/v6039/) There are two 'if' statements with identical conditional expressions. The first 'if' statement contains method return. This means that the second 'if' statement is senseless. SearchAfterBuilder.java(94), SearchAfterBuilder.java(93)
```
public SearchAfterBuilder setSortValues(Object[] values) {
....
for (int i = 0; i < values.length; i++) {
if (values[i] == null) continue;
if (values[i] instanceof String) continue;
if (values[i] instanceof Text) continue;
if (values[i] instanceof Long) continue;
if (values[i] instanceof Integer) continue;
if (values[i] instanceof Short) continue;
if (values[i] instanceof Byte) continue;
if (values[i] instanceof Double) continue;
if (values[i] instanceof Float) continue;
if (values[i] instanceof Boolean) continue; // <=
if (values[i] instanceof Boolean) continue; // <=
throw new IllegalArgumentException(....);
}
....
}
```
The same condition is used twice in a row. Is the second condition superfluous or should another type be used instead of *Boolean*?
[V6009](https://www.viva64.com/en/w/v6009/) Function 'substring' receives odd arguments. The 'queryStringIndex + 1' argument should not be greater than 'queryStringLength'. LoggingAuditTrail.java(660)
```
LogEntryBuilder withRestUriAndMethod(RestRequest request) {
final int queryStringIndex = request.uri().indexOf('?');
int queryStringLength = request.uri().indexOf('#');
if (queryStringLength < 0) {
queryStringLength = request.uri().length();
}
if (queryStringIndex < 0) {
logEntry.with(....);
} else {
logEntry.with(....);
}
if (queryStringIndex > -1) {
logEntry.with(....,
request.uri().substring(queryStringIndex + 1,// <=
queryStringLength)); // <=
}
....
}
```
Let's consider the erroneous scenario that may cause a *StringIndexOutOfBoundsException*: it occurs when *request.uri()* returns a string that contains the character '#' before '?'. The method performs no check for this, so if it ever happens, trouble is brewing. Perhaps it never does, thanks to validation performed on the object outside the method, but pinning hopes on that is not the best idea.
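The failure is easy to reproduce with a stripped-down sketch (the *UriSlice* class and the sample URIs are ours; the real method builds a log entry instead of returning a string): when '#' precedes '?', the end index of the substring is smaller than its begin index.

```java
// Minimal reproduction of the '#'-before-'?' scenario; not the
// LoggingAuditTrail sources.
class UriSlice {
    static String queryPart(String uri) {
        int queryStringIndex = uri.indexOf('?');
        int queryStringLength = uri.indexOf('#');
        if (queryStringLength < 0) {
            queryStringLength = uri.length();
        }
        if (queryStringIndex > -1) {
            // begin > end here when '#' occurs before '?' -> exception
            return uri.substring(queryStringIndex + 1, queryStringLength);
        }
        return "";
    }
}
```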
Conclusion
----------
For many years PVS-Studio has been helping to find defects in code of commercial and free open-source projects. Just recently Java has joined the list of supported languages for analysis. Elasticsearch became one of the first tests for our newcomer. We hope that this check will be useful for the project and interesting for readers.
PVS-Studio for Java needs new challenges, new users, active feedback and clients in order to quickly adapt to the new world :). So I invite you to [download](https://www.viva64.com/en/pvs-studio-download/) and try our analyzer on your work project right away! | https://habr.com/ru/post/445674/ | null | null | 2,863 | 51.24 |
The next version of C# will feature a code refactoring engine built into the Visual Studio environment.
A term coined by Martin Fowler, code refactoring allows you to change the code structure without changing or affecting what the code itself actually does. For example, changing a variable name or packaging a few lines of code into a method are both instances of code refactoring. The main difference between C# 2.0 refactoring and a mere edit or find-and-replace is that you can harness the intelligence of the compiler to distinguish between code and comments, and so on. This article provides a preview of Visual C# 2.0 code refactoring, to be released with the next version of Visual Studio .NET, code-named Whidbey.
Why Refactoring?
Perhaps the single most important contributor to the long-term maintainability of an application is how well laid-out and structured the code base is. Elements such as proper variable names, naming conventions, a consistent look and feel to statements, and code format and style enable readability by any developer, not just the one who wrote the code. Member variable encapsulation decouples clients from servers. Cohesive interface definitions enable interface reuse in other contexts. Allocation of interfaces to components is key to modular development and reuse. Eliminating blocks of repeated code by factoring them into a method increases quality, because you then only need to fix a defect in a single place.
Code refactoring allows you to change the code structure without changing or affecting what the code itself actually does. Refactoring may or may not change the public interface of a type; it is at the discretion of the developer whether the changes made should be limited to the internals of a single component or should trigger a massive update of all the clients as well. In its simplest form, refactoring can rename types, variables, methods, or parameters; extract a method out of a code section (and insert a method call instead); extract an interface out of a set of methods the type already implements; encapsulate type members in properties; automate many formatting tasks; and auto-expand common statements. This is what Visual C# 2.0 refactoring supports, and it is the subject of this article. Note that in this upcoming version, refactoring changes are limited to an assembly and do not propagate to client assemblies, even in the same solution. More advanced forms of refactoring are also possible. For example, a refactoring engine could analyze your code for similar code sections that could be factored into a separate method, perhaps with different parameter values. Refactoring could enforce compliance with a coding standard and propagate changes across a set of interacting assemblies. No doubt, future versions of Visual C# .NET and other enterprise development tools from Microsoft will provide these and other advanced features. But for now, here are the refactoring features of Visual C# 2.0. You can invoke refactoring in two ways: you can select Refactor from the top-level Visual Studio .NET menu, or from the pop-up context menu. For example, to rename the type Form1 to ClientForm, right-click anywhere in your code where the type Form1 is present (in its definition or where it is being used), and select Rename... from the Refactor menu, as shown in Figure 1.
This will bring up the Rename dialog box shown in Figure 2 where you can preview the changes (always a good idea), and instruct the refactoring tools to rename inside comments and strings as well.
Supply ClientForm for the new form name and click OK. The Preview Changes dialog box shown in Figure 3 presents all the places in the assembly, across files, where the type Form1 is present. You can clear the checkbox before any occurrence where you do not want the renaming to take place. You can also double-click on each preview change to go to its code line. You can even keep the Preview Changes dialog box open, work on your code, and click the Refresh button (the middle button in the dialog toolbar) to pick up the latest occurrences of the literal.
Method Extraction
Refactoring method extraction lets you convert a section of in-line selected code into a method. The extraction removes the original code section, places it into a new private method, and injects a call to the new method instead of the extracted lines.
Tool-based refactoring relies heavily on the compiler and its ability to discern and keep track of the various symbols in the code.
For example, suppose in the following code snippet you highlight the third line and select Extract Method... from the Refactor menu:
int number1 = 1;
int number2 = 2;
int result = number1 + number2;
Trace.WriteLine(result);
This will bring up the Extract Method dialog box shown in Figure 4. The bottom portion of the dialog shows the new method signature. Type the new method name and click OK.
The extracted method will be added as a new private static method, and the highlighted line will call it as in the following code snippet:
int number1 = 1;
int number2 = 2;
int result = Add(number1, number2);
Trace.WriteLine(result);
...
private static int Add(int number1, int number2)
{
   int result = number1 + number2;
   return result;
}
Interface Extraction
My favorite refactoring feature is interface extraction, which creates an interface definition out of the public methods of a class or a struct. For example, consider the following Calculator class:
public abstract class Calculator
{
   public int Add(int number1, int number2)
   {
      return number1 + number2;
   }
   public int Subtract(int number1, int number2)
   {
      return number1 - number2;
   }
   public virtual int Divide(int number1, int number2)
   {
      return number1 / number2;
   }
   public abstract int Multiply(int number1, int number2);
}
To extract an interface out of the Calculator class, right-click anywhere inside the class definition and select Extract Interface... from the Refactor menu. This will bring up the Extract Interface dialog box, shown in Figure 5.
The dialog box will propose the interface name: the type's name prefixed with an I (the standard .NET naming convention for an interface). The interface will be extracted to a separate file (which will be automatically added to the project), and you can provide a file name in the dialog. Finally, all the public methods (or properties) of the type will be listed in the dialog, regardless of whether they are virtual or abstract. Note that when you have a class hierarchy involved, the refactoring engine will only include public methods explicitly declared by the class or overridden methods. To include the suggested methods in the new interface definition, you must explicitly check the checkbox to the left of each method. After you click the OK button, the new interface will be in a new file, and the tool will add the interface derivation to the Calculator class, as shown here:
//In the file ICalculator.cs
interface ICalculator
{
   int Add(int number1, int number2);
   int Divide(int number1, int number2);
   int Multiply(int number1, int number2);
   int Subtract(int number1, int number2);
}

//In the file Calculator.cs
public abstract class Calculator : ICalculator
{...}
You can even extract one interface from the definition of another, in which case the new interface will be placed in a new file, but the original interface definition will not change (for example, it will not be made to inherit from the new interface).
Interface extraction (in the current Alpha version) is not as smart as it should be. Specifically, if the type already implements an interface, that interface's members will be included in the Extract Interface dialog. The only workaround is to use explicit interface implementation (see my article, ".NET Interface-Based Programming," in the May 2002 issue of CoDe Magazine).
Encapsulate Field
The Encapsulate Field dialog box (shown in Figure 6) lets you specify the property's visibility (public, internal, protected internal, protected, private), and what should be done with external references: you can have the refactoring tool replace all references to the field (inside the type or outside) with references to the new property. Although the default reference update selection is set to External, I recommend always choosing All, because that promotes looser internal coupling in the type itself, which makes maintenance easier. Any business rule enforced by the property later on will apply automatically inside the type. You can choose whether to review the changes to the references before applying them. The result will be a public property wrapping the member:
public class MyClass
{
   int m_Number;
   public int Number
   {
      get
      {
         return m_Number;
      }
      set
      {
         m_Number = value;
      }
   }
}
You can use the field encapsulation capability to do just what its name implies. For example, instead of this public member variable:
public class MyClass
{
   public int m_Number;
}
After using field encapsulation refactoring, you will end up with a public property called Number, and the public m_Number member will be converted to a private member:
public class MyClass
{
   private int m_Number;
   public int Number
   {
      get {...}
      set {...}
   }
}
Note that there is no refactoring support for generating an indexer or an iterator (another C# 2.0 feature). Unfortunately, Microsoft's design for encapsulation of an event field is poor. C# supports event accessors, which are property-like accessors encapsulating access to delegates. In my opinion, exposing member delegates in public should be explicitly forbidden by your C# coding standard. For example, instead of this definition:
public class MyPublisher
{
   public event EventHandler m_MyEventHandler;
}
You should write:
public class MyPublisher
{
   EventHandler m_MyEventHandler;
   public event EventHandler MyEventHandler
   {
      add
      {
         m_MyEventHandler += value;
      }
      remove
      {
         m_MyEventHandler -= value;
      }
   }
}
Unfortunately, when you apply the field encapsulation refactoring selection to an event, it will generate the following invalid code:
//Invalid refactoring code
public class MyPublisher
{
   private event EventHandler m_MyEventHandler;
   public EventHandler MyEventHandler
   {
      get
      {
         return m_MyEventHandler;
      }
      set
      {
         m_MyEventHandler = value;
      }
   }
}
Be sure to always encapsulate your events, even without refactoring support.
Signature Change
Suppose you want to change the Add() method in this Calculator class to use double instead of int parameters:
public class Calculator
{
   public int Add(int number1, int number2)
   {
      return number1 + number2;
   }
}
Right-click anywhere inside the method and select Change Method Signature... from the Refactor popup menu to bring up the Change Method Signature dialog box shown in Figure 7.
Use the dialog to change the order of parameters by moving parameters up or down, add or remove a parameter, and edit a parameter type and name.
For example, select the number1 parameter and click the Edit... button to bring up the Parameter dialog box. Change the parameter type to double. Note that the Parameter dialog will only let you change the type to one of the pre-defined C# types, such as int or string. Next, the Parameter dialog will warn you that the change you are about to make may render existing code invalid. Once you apply the signature change, you need to manually change the Add() method's return type to double, as well as all its call sites. I find signature change to be of little practical value because it is usually faster to just change the signature manually using the code editor.
Surround With and Expansions
The last two refactoring features, surround with and expansions, are about code typing automation rather than code layout and structure.
Surround with generates a template with blank place holders for commonly used statements (such as foreach or exception handling) around a selected code section. For example, to automatically generate a foreach statement around a trace statement, highlight the statement, right-click, and select Refactor from the pop-up menu, then choose Surround With... and then select For Each, as shown in Figure 8.
This will insert a foreach statement were you need to fill in the blanks, by tabbing through them, as shown in Figure 9.
You can use the surround with statement to generate code for the following statements: If, Else, For, For Each, While, Do While, Region, and Try...Catch.
The Expand feature injects template code in-place. When you use Expand with control statements such as For Each, there is no need to surround existing code; it will simply expand a foreach statement where you need to fill in the blanks, similar to Figure 10. You can also use it to expand a multitude of code snippets, from a static Main() method (returning int or void, referred to as SIM and SMV respectively) to an enum definition. For example, to inject a reverse for statement, select Insert Expansion... from the Refactor menu. This will pop up a scrollable list box with the possible expansions. Select forr from it, as shown in Figure 10.
This will expand the code template shown in Figure 11, where you have to tab through the fields and fill in the blanks.
Table 1 shows the available code expansions in Visual C# 2.0.
Code expansions in the Visual Studio .NET Whidbey Alpha have a few glitches that I hope will be addressed before the product's release. When you expand an interface definition, the suggested interface name does not start with an I. Also, when you expand a lock statement, the tool injects a lock on the value true, which is not only wrong, it does not compile:
lock(true) { }
For now, you need to fix it manually, typically by locking on the this reference:
lock(this) { }
Code expansion allows developers to add their own code templates (called exstencils). You need to place an XML file in My Documents\Visual Studio Projects\VC#\Exstencil that provides Visual Studio .NET with the information for the custom exstencil.
This is, of course, a very effective way to enable use of frameworks or coding standards. Look at the MSDN library for more information on how to compose custom exstencils. | http://www.codemag.com/article/0401071 | CC-MAIN-2016-07 | refinedweb | 2,367 | 51.99 |
Re: [CODE4LIB] One Data Format Identifier (and Registry) to Rule Them All
RDF is fine with one 'thing' having multiple identifiers, it just hands the problem up a level to the application to deal with. For example, the owl:sameAs predicate is used to express that the subject and object are the same 'thing'. Then the application can infer that if a owl:sameAs b, and a
Re: [CODE4LIB] One Data Format Identifier (and Registry) to Rule Them All
On Mon, 2009-05-11 at 11:31 +0100, Jakob Voss wrote A format should be described with a schema (XML Schema, OWL etc.) or at least a standard. Mostly this schema already has a namespace or similar identifier that can be used for the whole format. This is unfortunately not the case. For
Re: [CODE4LIB] One Data Format Identifier (and Registry) to Rule Them All
On Mon, 2009-05-11 at 12:02 +0100, Alexander Johannesen wrote: On Mon, May 11, 2009 at 16:04, Rob Sanderson azar...@liverpool.ac.uk wrote: * One namespace is used to define two _totally_ separate sets of elements. There's no reason why this can't be done. As opposed to all the reasons
Re: [CODE4LIB] Formats and its identifiers
On Mon, 2009-05-11 at 14:53 +0100, Jakob Voss wrote: A format should be described with a schema (XML Schema, OWL etc.) or at least a standard. Mostly this schema already has a namespace or similar identifier that can be used for the whole format. This is unfortunately not the case.
Re: [CODE4LIB] RDA in RDF, was: Something completely different
See also the thread, 'RDA: A Standard Nobody Will Notice'. A standard nobody will notice ... for good reason. Rob On Tue, 2009-04-07 at 18:24 +0100, Eric Lease Morgan wrote: On Apr 7, 2009, at 1:15 PM, Karen Coyle wrote:
Re: [CODE4LIB] registering info: uris?
On Wed, 2009-04-01 at 14:17 +0100, Mike Taylor wrote: Ed Summers writes: Assuming a world where you cannot de-reference this DOI what is it good for? It wouldn't be good for much if you couldn't dereference it at all. The point is that (I argue) the identifier shouldn't tie itself to a
Re: [CODE4LIB] registering info: uris?
On Mon, 2009-03-30 at 16:08 +0100, Ross Singer wrote: There should be no issue with having both, mainly because like I mentioned earlier, nobody cares about info:uris. s/nobody cares/the web doesn't care/ 'The Web' isn't the only use case. There are plenty of reasons for having non
[CODE4LIB] RDA - a standard that nobody will notice?
My first question would be: Why? Why invent a new element for title (etc.) rather than using Dublin Core? Wouldn't it have been easier to do this building from SWAP? And my second question would be: Really? 251
Re: [CODE4LIB] Open Source Institutional Repository Software?
To throw in my 2c. Eric Lease Morgan wrote: On Aug 21, 2008, at 4:34 PM, Jonathan Rochkind wrote: If you can figure out what the difference between an 'institutional repository' and a 'digital library' is, let me know. I think an institutional repository is a type of digital library. I
[CODE4LIB] ORE software libraries from Foresite
to provide all the ingest, transformation and dissemination support required in DSpace. Please feel free to download and play with the source code, and let us have your feedback via the Google group: [EMAIL PROTECTED] All the best, Richard Jones Rob Sanderson [1] Foresite project page: http
Re: [CODE4LIB] Latest OpenLibrary.org release
On Thu, 2008-05-08 at 11:41 -0400, Godmar Back wrote: On Thu, May 8, 2008 at 11:25 AM, Dr R. Sanderson [EMAIL PROTECTED] wrote: Like what? The current API seems to be concerned with search. Search is what SRU does well. If it was concerned with harvest, I (and I'm sure many others)
[CODE4LIB] OAI-ORE European Open Meeting, April 4 2008
Apologies | https://www.mail-archive.com/search?l=code4lib%40listserv.nd.edu&q=from:%22Rob+Sanderson%22&o=newest | CC-MAIN-2021-39 | refinedweb | 666 | 72.26 |
Over and hosted by Stephen Frost. The purpose of this exercise was
three-fold:

- Find out about compiler problems in GCC 4.1 itself as well as in
  packages that may fail with the new version *before* GCC 4.1 is
  uploaded to unstable. GCC, in particular G++, is becoming stricter
  regarding adherence to standards, and packages may fail to build
  with 4.1 due to invalid code that was accepted previously.

- Find out about MIPS specific problems in GCC 4.1 and answer Matthias
  Klose's question [1] as to which platforms can move to GCC 4.1 as
  the default compiler once it is uploaded to unstable.

- Find MIPS specific assembler warnings and create a list of all users
  of xgot (a MIPS specific toolchain problem).

Executive summary
-----------------

such before G++ 4.1):

Methodology
-----------

I generated a list of packages that are "Architecture: any" or "mips",
sorted by upload (old packages first). I then started compiling these
packages, and after the mirror pulse I would update my package list
again (excluding packages which had no new version). What I explicitly
did *not* do was to exclude packages which have known build failures,
because I wanted to see if they might have GCC 4.1 issues too. Another
thing I did on MIPS which the official build machines would not do is
to compile as "mips64" rather than "mips" (using a 64-bit kernel but
32-bit userland; uname -m shows "mips64", and this can be changed by
using the linux32 program). The aim of this was to identify mips64
specific problems. I compiled every package that failed with "mips64"
using "mips" too, though.

In total, 6192 individual packages were compiled on MIPS, with 6761
compilations (because of new versions uploaded to the archive during
those two weeks). A listing is available from [2] and all build logs
from MIPS from [3]. On AMD64, the number of individual packages
compiled was 5862.
This number is lower than the one for MIPS because I started with MIPS
first and then ignored packages on AMD64 with known build errors.

Detailed summary of bugs found
------------------------------

While I've tried to keep count of the different errors, some of the
numbers are slightly off, partly because you do make some errors when
keeping track of so many bugs and partly because the classification
below is quite arbitrary and I slightly changed it over time.
(Normally, you'd go back and classify each bug again, but I didn't do
that because this was not a scientific study.)

1. New bugs I have filed in the BTS

 - gcc/g++ 4.1 strictness: 277 (see [4] for a list)
 - failures due to the new version of make: 4
 - old or missing build-dependencies: 50
 - host type cannot be recognized:
   - config.* out of date: 26
   - other method (mips64): 7
   - other method (amd64): 1
 - architecture specific bugs:
   - mips: 9
   - amd64: 7
 - GCC 4.1 compiler bugs: 6
 - packages that could (but don't) support mips: 5
 - non-i386 brokenness: 4
 - 64-bit brokenness: 2
 - a cast loses precision: 2
 - not using PIC: 1
 - .orig.tar.gz missing from archive: 1
 - other/generic: 50

2. The build is "successful" but there is a bug if you look closely

I tried to look at successful build logs, but with over 6000 packages I
could obviously not do so in great detail. Therefore, I'm sure there
are other bugs in "successful" builds I missed.

 - "Architecture" is "any" but should be "all": 45
 - package contains nothing useful (no binary, no headers, etc): 1
 - package contains no binary on !i386: 1
 - build doesn't show what commands are run (wishlist bug): 2
 - test suite not run (wrong command): 1

3.
Bugs which I saw but which have already been reported

 - gcc/g++ 4.1 strictness: 2
 - gcc/g++ 3.4/4.0 strictness: 17
 - failures due to the new version of make: 6
 - old or missing build-dependencies: 66
 - host type cannot be recognized:
   - config.* out of date: 42
 - architecture specific bugs:
   - mips: 2
   - amd64: didn't count, but there were some
 - non-i386 brokenness: 27
 - 64-bit brokenness: 3
 - a cast loses precision: 2
 - not using PIC: 2
 - the "Architecture" is "any" but should be "all": 2
 - other/generic: 40

4. Most often seen warning

 - dh_*: Compatibility levels before 4 are deprecated.

Most common programming errors
------------------------------

Basically, it all boils down to broken C++ code. There were a few bugs
in C code, but the majority was in C++. The most common errors I found
(and some *approximate* numbers) are:

 - extra qualification: about 187 bugs
 - reliance on friend injection: 26 bugs
 - wrong escape characters (e.g. "\."; most commonly seen in regular
   expressions): 6 bugs
 - iterator problems (such as assigning 0 or NULL to an iterator): 3
 - template specialisation in the wrong namespace
 - template reliance on a function declared later
 - use of a template's base class members, unqualified, where the base
   class is dependent
 - use of "assert" without #include <cassert>: 5 bugs
 - "dereferencing type-punned pointer will break strict-aliasing rules"
   (a warning) together with -Werror: 6 bugs

If you intend to touch C++ code, please take some time to read the
following pages:

 -
 -
 -
 - a detailed list of changes in.
Status of 4.1 readiness
-----------------------

(These numbers are slightly off but give a good overview)

 * 248 Outstanding
 * 12 Forwarded
 * 10 Pending Upload
 * 3 Fixed in NMU
 * 12 Resolved
 * 186 Patch Available
 * 1 Confirmed
 * 38 Unclassified

Further work and unresolved issues
----------------------------------

These are issues I haven't had time to investigate yet:

 - look for packages that use a specific (hard-coded) version of GCC
   and see if they work with GCC 4.1.
 - a number of Java programs fail to build because jni.h cannot be
   found.
 - many gnustep packages didn't get built
 - many packages that require kaffe didn't get built

MIPS specific comments
----------------------

1. Assembler warnings

20 packages had assembler warnings on MIPS. A list will be sent to
debian-mips so Thiemo Seufer (or others) can investigate.

2. Packages that couldn't be built because of missing packages (depwait)

 - ghc-cvs/ghc6: 33 [see #274942]
 - libopenafs-dev: 3 [not ported to mips yet]
 - linux source/headers 2.6.15-1: 6 [2.6 is in the archive for mips now]
 - libdb4.2-java: 1 (openoffice)
 - gjdoc: 7
 - zeroc-ice/zeroc-icee-translators: 3
 - hmake: 1
 - libopenvrml5-dev: 1
 - libavifile-0.7-dev: 2
 - fp-compiler: 2
 - eclipse: 1
 - thunderbird-dev: 1
 - libgtkada2-dev: 1
 - mono/cil: 16

3. Packages not relevant to mips (notforus)

 - depends on syslinux: 1
 - mig: 1 (hurd)
 - low-level stuff: 1
 - cmucl: 1
 - mondo: 1
 - pbbuttonsd-dev (gtkpbbuttons): 1 [powerpc specific]
 - svga: 1
 - firebird2: 2

4. Packages which I've now ported to mips

 - numactl: add mips syscall definitions
 - liblinux-inotify2-perl: add mips syscall definitions
 - zeroc-icee-translators: add some simple #ifdefs for __mips__
 - zeroc-ice: add some simple #ifdefs for __mips__

5. Unclarified issue

7 packages fail to build because MIPS doesn't have a generic
cpu-feature-overrides.h header.
We have to check whether those packages should be fixed (userspace
should not depend on kernel headers anyway) or whether the kernel can
provide a generic cpu-feature-overrides.h.

Musings
-------

Obviously, compiling the archive from source leads to many important
insights. What I found out over the last two weeks is just how much
work it actually is. We should try to create better infrastructure to
make manual compilation of the whole archive easier. It seems that
Roland Stigge has done some work on this already with his DARTS project
[5], but I have yet to take a look at it.

Acknowledgements
----------------

 - Broadcom for supporting our MIPS port through the donation of
   hardware to the project and developers. Without Broadcom, our little
   and big endian MIPS ports would have a hard time meeting the release
   criteria regarding autobuilders.
 - Google for supporting my PhD, thereby allowing me to spend two weeks
   compiling the archive with GCC 4.1 and sorting out failures.
 - Intel for supporting some PhD work which led to this experiment.
 - Ben Hutchings for explaining many of the C++ bugs I found to me.
   I've learned more about C++ in these two weeks than I ever wanted to
   know. ;-) Ben also submitted a number of patches for tricky bugs I
   couldn't fix.
 - Intel for donating an EM64T machine to Debian.
 - Thiemo Seufer for fixing all the tricky MIPS problems in Debian.
 - The GCC project for creating a great compiler, and in particular
   Andrew Pinski for doing lots of bug triaging work.
 - Roger Leigh for quickly implementing all my sbuild feature requests.
 - Everyone who compiles the archive from scratch from time to time.
   Kudos to you. I'm now aware how much work it is!
 - Everyone who has fixed the bugs I filed already. :-)

References
----------

[1]
[2]
[3]
[4]
[5]

--
Martin Michlmayr
01011100 is an example of what?
Binary Code
The Windows Azure WebJobs SDK is a framework that simplifies the task of adding background processing to Windows Azure Web Sites. This tutorial provides an overview of features in the SDK and walks you through creating and running a simple Hello World background process.
Software versions
Questions and Comments
If you have questions that are not directly related to the tutorial, you can post them to the Windows Azure forum, the ASP.NET forum, or StackOverflow.com. For questions and comments regarding the tutorial itself, see the comments section at the bottom of the page.
Contents
- Introduction
- Prerequisites
- Create a WebJobs project
- Create a Windows Azure Web Site and Storage account
- Run the WebJobs project locally
- Run the WebJobs project in Windows Azure
Introduction

You don't have to use the WebJobs SDK in order to run a program as a WebJob, but for any work that involves Windows Azure Storage, the WebJobs SDK will make your life easier. You also don't have to use the WebJobs feature of Windows Azure Web Sites in order to use the WebJobs SDK NuGet packages. However, the Dashboard and the monitoring and diagnostics information it provides are only available as a site extension when running under a Windows Azure Web Site.
Typical Scenarios

- Image processing or other CPU-intensive work. You often want to process content that users upload in the background, but you don't want to make the user wait while you do that.
- Queue processing. A common way for a web frontend to communicate with a backend service is to use queues. When the web site needs to get work done, it pushes a message onto a queue. A backend service pulls messages from the queue and does the work. For example, you could use queues with image processing.
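As a sketch of the queue scenario above, a web frontend might enqueue work items like this. This is a minimal illustration using the Windows Azure Storage client library of that era; the queue name and method names here are assumptions for the example, and error handling is omitted:

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public static class WorkQueue
{
    // Push a work item onto the queue that the background WebJob will poll.
    public static void Enqueue(string connectionString, string imageBlobName)
    {
        CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
        CloudQueueClient client = account.CreateCloudQueueClient();
        CloudQueue queue = client.GetQueueReference("webjobsqueue");
        queue.CreateIfNotExists();  // safe to call repeatedly
        queue.AddMessage(new CloudQueueMessage(imageBlobName));
    }
}
```

The backend then consumes these messages with a QueueInput-decorated method, as shown later in this article.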
Scheduling a program that uses the WebJobs SDK
To create and run a program that uses the WebJobs SDK, all you do is write a Console Application that includes the WebJobs NuGet packages, upload a .zip file that contains your .exe, .dll, and .config files to Windows Azure, and tell Windows Azure when to run the process. You have three options:
- Continuous. For programs that need to be running all the time, such as services that poll queues.
- Scheduled. For programs that need to be run at particular times, such as nightly file maintenance tasks.
- On demand. When you want to start a program manually, such as when you want to start an additional run of a file maintenance task outside its normal schedule.
You can also use Git to publish directly from your local computer to a Windows Azure Web Site. With the Local Git approach, you don’t need to upload a zip at all (and you always get a Continuous job).
Coding a program that uses the WebJobs SDK
The code for handling typical tasks that work with Windows Azure Storage is simple. In a Console Application, you write methods for the background tasks that you want to execute, and you decorate them with attributes from the WebJobs SDK. Your Main method creates a JobHost object that coordinates the calls to the methods you write to perform tasks. The WebJobs SDK framework knows when to call your methods based on the WebJobs SDK attributes you use in them. For example:
    static void Main()
    {
        JobHost host = new JobHost();
        host.RunAndBlock();
    }

    public static void ProcessQueueMessage(
        [QueueInput("webjobsqueue")] string inputText,
        [BlobOutput("containername/blobname")] TextWriter writer)
    {
        writer.WriteLine(inputText);
    }
The JobHost object is a container for a set of background functions. The JobHost object monitors the functions, watches for events that trigger them, and executes them when trigger events occur. You call a JobHost method to indicate whether you want the container process to run on the current thread or a background thread. In the example, the RunAndBlock method runs the process continuously on the current thread.

Because the ProcessQueueMessage method in this example has a QueueInput attribute, the trigger for that function is the creation of a new queue message. The JobHost object watches for new queue messages on the specified queue ("webjobsqueue" in this sample) and when one is found, it calls ProcessQueueMessage. The QueueInput attribute also notifies the framework to bind the inputText parameter to the value of the queue message:
    public static void ProcessQueueMessage(
        [QueueInput("webjobsqueue")] string inputText,
        [BlobOutput("containername/blobname")] TextWriter writer)
The framework also binds a TextWriter object to a blob named "blobname" in a container named "containername":
    public static void ProcessQueueMessage(
        [QueueInput("webjobsqueue")] string inputText,
        [BlobOutput("containername/blobname")] TextWriter writer)
The function then uses these parameters to write the value of the queue message to the blob:
writer.WriteLine(inputText);
As you can see, the trigger and binder features of the WebJobs SDK greatly simplify the code you have to write to work with Windows Azure Storage objects. All of the code required to handle queue and blob processing -- opening the queue, reading queue messages, deleting them when processing for them is completed, creating and writing to containers and blobs, etc., is done for you by the WebJobs SDK framework.
The WebJobs SDK provides many other ways to work with Windows Azure Storage. For example, the parameter you decorate with the QueueInput attribute can be a byte array or a custom type, and it is automatically deserialized from JSON. And you can use a BlobInput attribute to trigger a process whenever a new blob is created in your Windows Azure Storage account. Note that while QueueInput finds new queue messages within a few seconds, BlobInput can take up to 20 minutes to detect a new blob. (BlobInput scans for blobs whenever the JobHost starts and then periodically checks the Windows Azure Storage logs to detect new blobs.)
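As a hedged illustration of those two features, the sketch below shows a custom type bound from a JSON queue message and a BlobInput-triggered function. The Order type, its properties, and the queue and container names are invented for the example; the attribute usage follows the pattern described above.

```csharp
// A custom type used as a QueueInput parameter is deserialized
// automatically from the JSON text of the queue message.
public class Order                       // hypothetical message shape
{
    public string CustomerName { get; set; }
    public int Quantity { get; set; }
}

public static void ProcessOrder(
    [QueueInput("orders")] Order order,              // bound from JSON
    [BlobOutput("receipts/latest")] TextWriter writer)
{
    writer.WriteLine("{0} ordered {1}", order.CustomerName, order.Quantity);
}

// BlobInput triggers when a new blob appears in the "input" container
// (detection can take up to 20 minutes, as noted above); {name} carries
// the blob name through to the output binding.
public static void CopyBlob(
    [BlobInput("input/{name}")] TextReader reader,
    [BlobOutput("output/{name}")] TextWriter writer)
{
    writer.Write(reader.ReadToEnd());
}
```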
Monitoring programs that run in WebJobs
The WebJobs SDK provides a dashboard in Windows Azure Web Sites that you can use to monitor the status of the programs you run and examine any exceptions that they throw.
Limitations
As this tutorial is being written, WebJobs for Windows Azure Web Sites is a preview release, and the WebJobs SDK is an alpha release. They are not supported for production use.
In the WebJobs environment, programs run in the context of a Web Site and are not independently scalable. For example, if you have one standard Web Site instance, you can only have 1 instance of your background process running, and it is using some of the server resources that otherwise would be available to serve web content. For high volumes of backend work, consider using a worker role in a Windows Azure Cloud Service.
Prerequisites
Before you start, make sure you meet the following prerequisites:
- Visual Studio 2013 or 2012, as indicated in Software Versions at the top of the page.
- A Windows Azure subscription that you can manage. If you don't already have a Windows Azure account, but you do have an MSDN subscription, you can activate your MSDN subscription benefits. Otherwise, you can create a free trial account in just a couple of minutes. For details, see Windows Azure Free Trial.
- Azure Storage Explorer is required for this tutorial (it's not required for using the WebJobs SDK).
Create a WebJobs project
To get started, you'll create a Console Application project, install the WebJobs SDK NuGet packages, and write the code that will use the WebJobs SDK framework to perform a background task.
Open Visual Studio 2013 or Visual Studio 2013 Express for Desktop.
(If you use the Express version of Visual Studio, use Express for Desktop because there is no Console Application template in Express for Web.)
In the File menu, click New Project.
In the templates pane, under Installed, expand Visual C#, click Windows, and select the Console Application template.
Name the project WebJob, and then click OK.
From the Tools menu click Library Package Manager and then click Package Manager Console.
In the Package Manager Console window enter the following command:
Install-Package Microsoft.WindowsAzure.Jobs.Host -pre
This also installs dependent packages, including another WebJobs SDK package, Microsoft.WindowsAzure.Jobs. (You use the other WebJobs SDK package separately only when you create your user functions in a separate DLL; for this tutorial you are writing all of your code in the Console Application project.)
In Program.cs, replace the Main method with the following code:
static void Main()
{
    JobHost host = new JobHost();
    host.RunAndBlock();
}
RunAndBlock means the WebJobs SDK framework will run continuously in the current thread. The framework looks for any public static methods that have WebJobs SDK attributes and then watches for the triggers for those methods, such as new queue messages or new blobs. When a triggering event occurs, the framework calls the method.
You can optionally call the RunOnBackgroundThread method to run the functions on a background thread. You could call RunOnBackgroundThread more than once to easily create a multi-threaded batch process.
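A minimal sketch of that variation, assuming the same JobHost API shown in this tutorial:

```csharp
static void Main()
{
    JobHost host = new JobHost();

    // Instead of blocking the main thread, run the trigger-watching
    // loop on a background thread and keep the main thread free.
    host.RunOnBackgroundThread();

    Console.WriteLine("Host is running in the background; press Enter to exit.");
    Console.ReadLine();
}
```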
Add a new method:
public static void ProcessQueueMessage(
    [QueueInput("webjobsqueue")] string inputText,
    [BlobOutput("containername/blobname")] TextWriter writer)
{
    writer.WriteLine(inputText);
}
The QueueInput attribute means that this method will be called when a queue message is received. There are other trigger attributes, such as BlobInput, which means the method will be called when a new blob appears in a specified container.
The BlobOutput attribute binds a TextWriter object to blob "blobname" in container "containername", and the method body uses that object to write the queue message to the blob.
Add using statements for the references to WebJobs SDK classes and the TextWriter class:
using Microsoft.WindowsAzure.Jobs;
using System.IO;
Build the solution to save your work and make sure there are no compile errors.
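For reference, assembling the snippets from the steps above, the complete Program.cs for this tutorial looks like this (the namespace and class names follow the Console Application template defaults):

```csharp
using System.IO;
using Microsoft.WindowsAzure.Jobs;

namespace WebJob
{
    class Program
    {
        // Starts the WebJobs host; RunAndBlock keeps it running
        // continuously on the current thread.
        static void Main()
        {
            JobHost host = new JobHost();
            host.RunAndBlock();
        }

        // Triggered when a new message appears on "webjobsqueue";
        // writes the message text to containername/blobname.
        public static void ProcessQueueMessage(
            [QueueInput("webjobsqueue")] string inputText,
            [BlobOutput("containername/blobname")] TextWriter writer)
        {
            writer.WriteLine(inputText);
        }
    }
}
```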
Create a Windows Azure Web Site and Storage Account
You can run your program locally, but the WebJobs SDK framework needs a Windows Azure Storage account for logging done by the framework and for your functions that work with queues, blobs, or tables. The framework and your functions can use the same storage account or separate ones. For this tutorial you'll use one storage account.
You aren't creating any custom web site content for this tutorial, but you'll need a Windows Azure Web Site to monitor your background process when you run it in Visual Studio, so you'll also create a web site in this section.
Go to the Windows Azure Management Portal and sign in to work with the subscription you're going to use.
Click New.
Click Web Site -- Quick Create.
Enter a URL for the web site.
The URL must be unique within the .azurewebsites.net domain.
Choose the region closest to you, and then click Create Web Site.
Click New -- Data Services -- Storage -- Quick Create.
Enter a URL for the storage account.
The URL must be unique within the .core.windows.net domain.
If possible, choose the same region for the storage account that you chose earlier for the web site.
Set Replication to Locally Redundant.
When geo-replication is enabled for a storage account, the stored content is replicated to a secondary location to enable failover to that location in case of a major disaster in the primary location. Geo-replication can incur additional costs. For test and development accounts, you generally don't need geo-replication. For more information, see How To Manage Storage Accounts.
Click Create Storage Account.
After a few seconds the storage account is created.
Run the project locally
In order to connect to a storage account, the WebJobs SDK framework has to have a connection string with the storage account name and access key. In this section you'll put the connection string in the App.config file and run the WebJobs process locally. You'll test by using Azure Storage Explorer, a tool for viewing and manipulating Windows Azure Storage objects.
In the Windows Azure Management Portal, select your storage account and click Manage Access Keys at the bottom of the page.
Copy the Primary Access Key.
In the App.config file, add the following connection strings, replacing [accountname] and [accesskey] with the values for your storage account. The [accountname] value is the name you entered for the account, not the entire URL that you use to access items in the storage account.
<configuration>
  <connectionStrings>
    <add name="AzureJobsRuntime" connectionString="DefaultEndpointsProtocol=https;AccountName=[accountname];AccountKey=[accesskey]"/>
    <add name="AzureJobsData" connectionString="DefaultEndpointsProtocol=https;AccountName=[accountname];AccountKey=[accesskey]"/>
  </connectionStrings>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5" />
  </startup>
</configuration>
Press CTRL+F5 to run the application.
The console output starts to slowly display a string of periods to let you know it's waiting for trigger events.
The program is now listening for new queue messages on the queue named webjobsqueue in your storage account. Now you'll use Azure Storage Explorer to create the queue and create a message in the queue.
Open Azure Storage Explorer.
In the Storage Account menu, click Add.
In the Add Storage Account dialog box, enter your storage account name and your primary access key, and then click Add Storage Account.
These are the same values that you plugged into the connection strings earlier.
With your storage account selected in the drop-down box, click blobs in the left pane.
Notice that there is no "containername" blob container in the left pane.
Click queues in the left pane.
In the Queue pane at the bottom of the window, click New.
In the New Queue dialog box, enter webjobsqueue as the Queue name, and then click Create Queue.
In the left pane, click webjobsqueue, and then click New in the Message pane at the bottom of the window.
In the Message content box, enter "Hello World!" and then click Create Message.
Soon you'll notice that the console window shows that your ProcessQueueMessage method was executed.
In Azure Storage Explorer, in the left pane, click blobs, and then in the Container pane click Refresh.
Now you see that there's a container named containername.
Click containername to see the blobs in the container.
The new container contains a new blob named blobname.
Double-click blobname.
In the Blob Detail dialog box, click the Text tab to see your "Hello World!" message.
Close the command window that is running your WebJob project.
Run the project in Windows Azure
To deploy to Windows Azure, you zip and upload the contents of your bin/debug folder (or bin/release if you compile a Release build), and tell Windows Azure when you want the job to run. For this example we'll run the job continuously.
In Solution Explorer, click Show All Files.
Expand the bin folder, right-click the Debug folder, and click Open Folder in File Explorer.
Select all of the files in the folder, right-click the selected files, and click Send to -- Compressed Folder.
Name the new .zip file WebJob.zip.
In Windows Azure Management Portal, open your web site and click the Configure tab.
Scroll down to Connection Strings.
Under Connection Strings, enter AzureJobsRuntime in the Name field.
Copy the connection string (without the quotation marks) from the App.config file, and paste it into the Value field.
In the drop-down list click Custom.
Click Save.
Click the WebJobs tab, and then click Add.
In the Basic web job settings dialog box, enter HelloWorld as the name for your web job.
Click the folder icon under Content, navigate to your bin\Debug folder, and then click Open.
Leave How to Run set to the default option, Run continuously.
Click the check mark at the lower right hand corner of the dialog box.
Open Azure Storage Explorer, and follow the procedures you did earlier to create a queue message, but this time enter Hello World from Windows Azure! as the message.
Click the blobs tab, select the containername container, double-click the blobname blob, and select the Text tab to see the Hello World from Windows Azure! message in the blob.
In the management portal, click the link in the Logs column for the HelloWorld web job.
Before the WebJobs dashboard appears you might get a dialog box asking for credentials. If you do, perform the following steps to get the user name and password that you need to enter.
- Click the Dashboard tab.
When the WebJobs dashboard appears, under Statistics you see a count of how many times your method was executed successfully and how many times it failed.
Under Invocation Log click Program.ProcessQueueMessage.
Each time your method is executed, there is a log of its execution.
Click the containername/blobname link and you see the contents of the blob.
Next Steps
You've now created and run a simple program that processes messages received from a Windows Azure Storage queue. You can do much more with a minimum amount of code and the WebJobs SDK. For example:
- Bind to custom types serialized as JSON in the queue message.
- Dynamically change output container and blob names based on properties of incoming queue message JSON objects.
- Watch for blobs to appear in a container and process them.
- Write to Windows Azure Storage Tables.
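For instance, the first two of those ideas can be combined in one function. The BlobInfo type and its property names below are invented for illustration; the binding pattern, where a property of the incoming JSON message controls the output blob name, is the feature described above.

```csharp
// A queue message such as {"BlobName":"report1","Text":"hello"} controls
// both what is written and which blob it is written to.
public class BlobInfo                    // hypothetical message shape
{
    public string BlobName { get; set; }
    public string Text { get; set; }
}

public static void WriteNamedBlob(
    [QueueInput("webjobsqueue")] BlobInfo info,
    [BlobOutput("containername/{BlobName}")] TextWriter writer)
{
    writer.WriteLine(info.Text);
}
```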
For more information about the WebJobs feature in Windows Azure Web Sites and the Windows Azure WebJobs SDK, see the following resources:
- How to Create Web Jobs for Windows Azure Web Sites
- Code samples at
- Curah complete list of Web Jobs tutorials and videos
- Video: Azure WebJobs 101 - Basic WebJobs with Jamie Espinosa
- Video: Azure WebJobs 102 - Scheduled WebJobs and the WebJobs Dashboard with Jamie Espinosa
- Video: Azure WebJobs 103 - Programming WebJobs in .NET with Pranav Rastogi

For questions and comments regarding the tutorial itself, see the comments section at the bottom of the page.
Please leave feedback on how you liked this tutorial and what we could improve. You can also request new topics and code samples at Show Me How With Code. | http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/getting-started-with-windows-azure-webjobs | CC-MAIN-2014-15 | refinedweb | 2,905 | 55.03 |
University of Illinois at Urbana-Champaign
Department of Computer Science
First Examination
CS 225 Data Structures and Software Principles
Spring 2009
7p-9p, Tuesday, Feb 24

Name:
NetID:
Lab Section (Day/Time):

This is a closed book and closed notes exam. No electronic aids are allowed, either. You should have 5 problems total on 18 pages. The last two sheets are scratch paper; you may detach them while taking the exam, but must turn them in with the exam when you leave. Unless otherwise stated in a problem, assume the best possible design of a particular implementation is being used. Unless the problem specifically... We will be grading your code by first reading your comments to see if your plan is good, and then reading the code to make sure it does exactly what the comments promise. In general, complete and accurate comments will be worth approximately 30% of the points on any coding problem. Please put your name at the top of each page.

Problem   Points   Score   Grader
1         20
2         20
3         17
4         10
5         33
Total     100

1. [Pointers, Parameters, and Miscellany 20 points]

MC1 (2.5pts) Consider the following statements:

    int *p;
    int i;
    int k;
    i = 37;
    k = i;
    p = &i;

After these statements, which of the following will change the value of i to 75?
(a) k = 75;
(b) *k = 75;
(c) p = 75;
(d) *p = 75;
(e) Two or more of the answers will change i to 75.

MC2 (2.5pts) Consider the following statements:

    int i = 1;
    int k = 2;
    int * p1;
    int * p2;
    p1 = &i;
    p2 = &k;
    p1 = p2;
    *p1 = 3;
    *p2 = 4;
    cout << i << endl;

Which of the following is printed by the output statement (assume cout works)?
(a) 1
(b) 2
(c) 3
(d) 4
(e) None of these. The code does not compile.
MC3 (2.5pts) Consider the following C++ statements:

    #include <iostream>
    using namespace std;

    void increment1(int x) { x = x + 1; }
    void increment2(int * x) { *(x) = *x + 1; }
    void increment3(int & x) { x = x + 1; }

    int main() {
        int x = 1;
        increment1(x);
        increment2(&x);
        increment3(x);
        cout << x << endl;
        return 0;
    }

What is printed out when this code is compiled and run?
(a) 1
(b) 2
(c) 3
(d) 4
(e) This code does not compile.

MC4 (2.5pts) Which of the following is a correct way to declare and initialize a dynamic array of length max, each element of which is a List whose parameterized type is a sphere?
(a) List<sphere> * myList = new List<sphere>[max];
(b) sphere ** myList = new sphere *[max];
(c) List<sphere> myList[max];
(d) More than one of (a), (b), (c), are correct....
SCOTIABANK STUDENT GIC PROGRAM GUIDE
IMPORTANT: This Scotiabank Student GIC Program Guide outlines program and product details effective May 1st. If you are set up under the old program (Cashable GIC) and require assistance, please see the last page of this guide for information on how to contact us.

I. How does the Scotiabank Student GIC Program work?

The following chart outlines the steps required to apply for an Investment/GIC under the Scotiabank Student GIC Program:

This guide is subject to change.
1. YOU APPLY

Download
The Scotiabank Student GIC Program Application ("Application") is available on our website under the "How to apply" section. Please follow the link:

Complete
Input the information requested in the application in the appropriate fields. The Application must be TYPED to be accepted. Review, print and sign the Application. Keep a copy of the Application for your records.

Submit
A. Scan all documents in ONE SINGLE PDF file, in the following order:
i. Typed and signed copy of the Scotiabank Student GIC Program Application,
ii. Copy of passport pages (photograph page and the last page), and
iii. Copy of acceptance letter from a Canadian Educational Institution
NOTE: The email should contain only ONE PDF which should include all the above noted documents; otherwise the Application will not be processed.
B. Email the single PDF file to
The subject line of your email should state: Scotiabank Student GIC Program New Application - Your Full Name. This GIC Application must be sent to us directly from you (student) using the same email address that you have provided in the Application. All communication from Scotiabank will be sent only to this email address.

2. WE OPEN INVESTMENT ACCOUNT
Following review and acceptance of the completed Application, Scotiabank will send you a secure email confirming the Scotia Investment Account Number and provide wire transfer instructions (to transfer money) within five (5) business days to enable you to purchase the GIC. Please allow for time difference, weekends and other Public Holidays in Canada.

3. YOU SEND MONEY
You will be required by the Canadian High Commission, India to remit $10,000 CAD to Scotiabank for your Scotia Investment Account plus $100 CAD to cover administrative fees, for a total of $10,100 CAD. The funds must be wire transferred only from a bank in India where you hold your account (in your own name or jointly with your parent(s)); otherwise the money will be returned.
Funds will be accepted only if the name of the sender and beneficiary (recipient) is the same.
NOTE: Funds from other sources such as Money Exchange House, Money Transfer Services, and Third Party Services OR payments received from a third party (e.g. parents, relatives, friend, etc.) are not permitted.
Intermediary banks usually charge a fee for wire transfers. Please advise your bank in India that Scotiabank must receive $10,100 in full and any charges/fees (e.g. wire transfers/swift/tt) will be paid by you.
International wire transfer takes about 5-8 business days. Please check with your local remitting bank for a more specific time frame.

4. WE ISSUE CONFIRMATION
Investment Directions confirming the details of your Scotia Investment Account and Welcome Letter will be sent to you by secure email within three (3) business days (delayed accordingly for Canadian Public Holidays) after receipt of your wire payment.
NOTE: The Investment Directions include an electronic signature; a handwritten signature is not included, nor required. You are not required to return (email) the signed Investment Directions confirmation to Scotiabank. You are responsible for printing a copy of this confirmation and letter for your records as secure emails are only available for 30 calendar days before automatic deletion. See Section II for additional information.

5. YOU APPLY FOR STUDY PERMIT
You must submit a copy of the Investment Directions confirmation to the Canadian High Commission, India along with your Study Permit Application. Please refer to the VFS and/or Canadian High Commission, India websites for cut-off dates for submission of Study Permit Applications for each session intake. Scotiabank will continue to accept the Scotiabank Student GIC Program Application and issue confirmation when funds are received. However, you must allow for sufficient processing time for obtaining your Study Permit.

6. WELCOME TO CANADA
Upon arrival in Canada, you may visit any Scotiabank branch of your choice to open a personal deposit account and to purchase your GIC.
You will be required to provide the following:
o Welcome Letter
o Investment Directions confirmation
o Valid foreign passport
o Letter of Enrolment from a Canadian Educational Institution (or a student ID card)
o Study Permit (e.g. IMM 1442)
Please refer to the Bank's branch locator to select a branch that will be convenient for you when you arrive in Canada.
7. WE OPEN ACCOUNT FOR MONTHLY DEPOSITS
After confirming your identification/documentation (see above) we will open a personal deposit account (Student Banking Advantage Plan).
*It is the responsibility of the student to ensure that they have adequate funds to cover their living expenses. The disbursement of funds is scheduled so that you will receive $2,000 at your initial visit. Should you believe that your living expenses will be higher than $2,000 for your first month, you may wish to bring additional funds with you when you move to Canada. Canada is a very large country, so costs can vary significantly depending on where you live and may be different from those you are familiar with. Learn more by reviewing our Living Expenses resource on our website.
II. Important Information Regarding Secure Emails from Scotiabank

To protect your privacy, Scotiabank will send customer sensitive information by secure email, directing you to the Scotiabank Webmail portal link. To protect your privacy and security, please only correspond with Scotiabank using Secure Email. Clicking this link will launch your web browser.
1. You will be required to register first by clicking the Register now link for the first time.
2. You will be sent a temporary password to complete the registration.
3. Once you've logged in using the temporary password, you will be prompted to create a display name and new password. Complete all required fields marked with an asterisk (*). Once completed, click the Save button to continue. If the new password meets the requirements, green checkmarks will display next to each of the password criteria.
4. The Password Hint field is highly recommended to be completed as the Hint will assist with password recovery in the event your password is forgotten. Select a Challenge Question from the list provided and enter the answer in the Answer field. The Challenge Question will be asked in the event you forget your password and request a password reset. This will complete your registration process. You will be able to access your Scotiabank Secure Email Service mailbox and view the secure emails.

IMPORTANT: Secure emails will be available for viewing for 30 calendar days before automatic deletion. You must print and save any content and/or attachments for future reference/use. Once you register, all future communications to Scotiabank must be REPLY only to the last message received from the Scotiabank Secure Email Service mailbox.

PASSWORD RESET
If you have forgotten your password or have been locked out, and require a password reset:
1. Enter your email address in the Email Address field and select the See Password Hint link at the bottom of the log in page.
By clicking this link, the Scotiabank Secure Email Service will send you an automated message with the password hint you entered during registration.
2. If the password hint option did not assist you in accessing the Scotiabank Secure Email Service, utilize the password reset option:
a) Enter your email address in the Email Address field and select the Reset Password link at the bottom of the page.
b) Enter the answer you provided during registration to the Challenge Question and click the Answer button.
c) Enter a new password in the New Password field and retype it in the Confirm the New Password field.
d) Enter a specific hint that is easily remembered.
e) Click on the Save button at the bottom of the screen.
f) Select the Return to Login Page link at the bottom left of the login page. Login with your email address and new password.
For a step-by-step guide, please refer to our website and see the Secure Email Client User Guide provided.
Study Permit Declined or Cancelled: When & How to Request a Refund
1. YOU APPLY FOR A REFUND
Full redemption of the outstanding principal of your Investment Account or GIC cannot occur prior to the Maturity Date unless you provide us with proof that:
1. Your Study Permit has been declined/cancelled. Please provide us with a copy of the Refusal Letter issued by the Canadian Visa Authorities; or
2. Your application for admission to a Canadian Educational Institution has been declined; or
3. You have withdrawn from enrolment at the Canadian Educational Institution before or after your arrival in Canada. Please provide us with a copy of the cancelled Visa and Study Permit from the Canadian High Commission office in India.

Download
The Scotiabank Student GIC Program Refund Application ("Refund Application") is available on our website under the "If Your Study Permit is Declined/Cancelled" section. Please follow the link:

Complete
Input the information requested in the Refund Application in the appropriate fields. The Refund Application must be TYPED to be accepted. Review, print and sign the Refund Application. Keep a copy of the Refund Application for your records.

Submit
Include with the Refund Application a copy of:
1. Refusal Letter (all pages) provided by Canadian Visa Authorities in India, or passport page showing the cancelled Visa/Study Permit as provided by the Canadian High Commission, India, or a self-attested letter confirming that you have not applied for a Study Permit;
2. The wire instructions form provided by your bank in India for the purpose of wiring/transferring funds to your GIC account in Canada; and
3. The Application used to apply for the Scotiabank Student GIC Program.
NOTE: The email should contain only ONE PDF which should include all the above noted documents; otherwise the Refund Application will not be processed. Reminder, only use Secure Email.
Email the single PDF file from your Scotiabank Secure Email Service mailbox (REPLY to the last message in the secure mailbox) to
The subject line of your email should state: Scotiabank Student GIC Program Refund Application - Your Full Name.
2. WE CONFIRM DECLINE/CANCELLATION
Upon receipt of your completed Refund Application and supporting documents we will seek confirmation of your Study Permit decline or cancellation from the Canadian High Commission, India. In case the Refund Application is incomplete or supporting documentation is not attached, we will email you to provide us with further information. Expect processing delays in such cases.

3. WE WIRE FUNDS TO INDIA
Upon receipt of proof of any of the above mentioned events and confirmation of that event from the Canadian High Commission, India, we will redeem the outstanding principal plus any accumulated interest. After we wire the funds to your bank in India, we provide you a confirmation by secure email.
NOTE: Funds are returned to the bank and account from where you originally sent the funds to us. International wire transfer may take up to 5-8 business days to reach your account. Only if you have not received the funds after this time, please contact us at the following toll free telephone number:
Our representatives are available to speak with you Monday to Friday, 9 a.m. to 8 p.m. Eastern Standard Time in Canada (excluding Canadian Public Holidays).

4. YOU RECEIVE YOUR FUNDS
Please Note: The refund may take up to 8 weeks from the date the correctly completed Refund Application is received at Scotiabank, Canada. If there are any corrections required, the refund will be delayed. All processing and administration fees will not be refunded. Additional fees may be charged by the intermediary banks during the refund. It is the applicant's responsibility to cover all additional refund fees.
III. Frequently Asked Questions

APPLICATION
Q. My school is not listed as one of the participating schools in partnership with the Association of Canadian Community Colleges (ACCC). Can I still apply for a GIC?
A. For institutions not participating in the Student Partners Program, the purchase of a GIC is not mandatory; however students may also choose to apply for the Scotiabank Student GIC Program. Please follow the same application process and timelines as outlined in this guide.
Q. Can I open a Joint Scotia Investment Account?
A. No. Under the Scotiabank Student GIC Program the GIC can be opened only in the name of the applicant who is applying for the Study Permit under the Student Partners Program.
Q. More than five (5) business days have passed and I have not received a response to my original Application submission, how do I follow up with regards to my Application?
A. Please check your junk/spam mail folder to ensure that the email did not get flagged as junk/spam mail. If the email was in your junk/spam folder, label the email address as not junk/spam mail. This will prevent future emails from being sent to your Junk folder. If you have not received an email with your Scotia Investment Account Number, please call our Scotiabank Student GIC Program Customer Service department for assistance at the following toll free telephone number: Our representatives are available to speak with you Monday to Friday, 9 a.m. to 8 p.m. Eastern Standard Time in Canada (excluding Canadian Public Holidays). To help us investigate please have a copy of your original Application and the date you emailed your Application available when you call.

YOUR INVESTMENT ACCOUNT
Q. Will I receive interest on the $10,000 CAD that I wire Scotiabank?
A. Yes, you will receive interest on your investment. The current rate for Scotia Investment Account (Investment Cash) on any day can be found at
The Interest Rate is Scotiabank's posted rate for Scotia Investment Account on the Issue Date.

WIRE INSTRUCTIONS
Q. Can I transfer funds from other sources such as Money Exchange House (Money Transfer Services)/Third party services?
A. No. The funds must be wire transferred only from a bank in India where you hold your account (in your own name or jointly with your parent(s)); otherwise the money will be returned. You will cover all administrative/intermediary bank fees incurred as a result of the decline.
Q. Can the Scotia Investment Account be deposited by anyone other than the student?
A. No. Funds must be deposited by the student from their bank account in India. In the case of a minor, a joint account with the parents in India will be acceptable. In the event of a refund, the money will be returned to the student's bank account at the bank where the original remittance was made.

CONTACTING SCOTIABANK
Q. How do I communicate with Scotiabank?
A. All communications with Scotiabank must be REPLY only to the last message received from the Scotiabank Secure Email Service mailbox.
Q. Can a third party (someone other than the account holder) follow up with Scotiabank in regards to my Application?
A. For reasons of privacy, we are unable to disclose any information to anyone other than the account holder.
Q. I have forgotten my secure email password, what do I do?
A. To reset your password, please refer to the Important Information Regarding Secure Emails from Scotiabank section in this guide for step-by-step instructions. Should you still require assistance, contact us at the following toll free telephone number: Our representatives are available to speak with you Monday to Friday, 9 a.m. to 8 p.m. Eastern Standard Time in Canada (excluding Canadian Public Holidays). We will respond to your request with a temporary password reset within four (4) business days. A password reset will be emailed with instructions and a web link to access the Scotiabank Secure Email System. Click on the link provided and you will be prompted to select a new password and password hint. Please check your junk mail folder in case the notification does not appear in your inbox.

POSTPONING ARRIVAL IN CANADA & CHANGES TO YOUR SCHOOL
Scotiabank will accept Applications and funds all through the year. Established/funded Scotia Investment Accounts can be used to apply for a Study Permit for a later intake session.
Q. I have funded my Scotia Investment Account, but never completed my Study Permit Application. Can I use my existing Investment Directions confirmation to re-apply once again for a Study Permit?
A. Yes, you can use your existing Investment Directions confirmation for a new Study Permit Application. The Canadian High Commission, India will validate the Investment Directions with Scotiabank directly.
Q. I have deferred my enrollment or changed my educational institution, and have either already funded my Scotia Investment Account or not yet funded it. Can I still use the same Scotia Investment Account to apply for my Study Permit?
A. Yes, you may use the same Scotia Investment Account Number. In order to do so, email your new offer letter/acceptance letter including your new date of arrival. Your email to Scotiabank must be REPLY only to the last message received from the Scotiabank Secure Email Service mailbox. The Subject Line must state: Enrollment Update - School Change, Student Name (Given/First and Surname/Last).
Can I use my existing Investment Directions confirmation to re-apply once again for a Study Permit? A. Yes, you can use your existing Investment Directions confirmation for a new Study Permit Application. The Canadian High Commission, India will validate the Investment Directions with Scotiabank directly. Q. I have deferred my enrollment or changed my educational institution, but have already funded my Scotia Investment Account or have not funded my Scotia Investment Account. Can I still use the same Scotia Investment Account to apply for my Study Permit? A. Yes, you may use the same Scotia Investment Account Number. In order to do so, send your new offer letter/ acceptance letter including your new date of arrival to Your to Scotiabank must be REPLY only to the last message received from the Scotiabank Secure Service mail box. The Subject Line must state: Enrollment Update- School Change, Student Name (Given/First and Surname/Last). This guide is subject to change 10
11 UPDATING YOUR PERSONAL INFORMATION Q. How do I update my contact information (Telephone Number or Home Address)? A. To request an update to your personal information, with supporting documents (if applicable). Your to Scotiabank must be REPLY only to the last message received from the Scotiabank Secure Service mailbox. The Subject Line must state: Update- <Change required e.g. home address>, Student Name (Given/First and Surname/Last). Q. How do I update my personal information (Name, Date of Birth or Passport Number)? A. To request an update to your personal information with a copy of your passport pages (photograph page and the last page. Your to Scotiabank must be REPLY only to the last message received from the Scotiabank Secure Service mailbox. The Subject Line must state: Update- <Change required e.g. passport number>, Student Name (Given/First and Surname/Last). Q. I have misplaced/damaged my passport and now have a new passport, how do I update my new passport details with Scotiabank. A. To request an update please with a copy of your new passport pages (photograph page and the last page). Your to Scotiabank must be REPLY only to the last message received from the Scotiabank Secure Service mailbox. The Subject Line must state: Update- New Passport- Student Name (Given/First and Surname/Last) Your request will be processed within 5 business days. The auto response is confirmation that we have received your request for processing no further confirmations will be sent to you. Should we require further information we will contact you. SCOTIABANK STUDENT GIC PROGRAM APPLICANTS WITH CASHABLE GICs Q. I applied for the Scotiabank Student GIC Program before May 1 st and purchased a Cashable GIC, who can I contact if I have questions regarding my application or accessing my funds? A. 
If you were set-up under the old program with a Cashable GIC prior to our May 1 st program changes and require assistance, please contact us at the following toll free number: Our representatives are available to speak with you Monday to Friday, 9 a.m. to 8 p.m. Eastern Standard Time in Canada (excluding Canadian Public Holidays). Note: This guide is subject to change. Changes, modifications, additions, or deletions to the terms to this guide shall be effective immediately upon notice thereof, which may be given by any means including, but not limited to, posting a new guide on the Scotiabank StartRight website. You should revisit this guide online prior to completing your application. This guide is subject to change 11
SCOTIABANK STUDENT GIC PROGRAM GUIDE. I. Applying for the Scotiabank Student GIC Program
I. Applying for the Scotiabank Student GIC Program The following chart outlines the steps required to apply for the Scotiabank Student GIC Program: May 1, 2015. This guide is subject to change 1 1. You
Scotiabank Student GIC Program Frequently Asked Questions (FAQ s)
1) Q. What is a GIC? Scotiabank Student GIC Program Frequently Asked Questions (FAQ s) A. A Guaranteed Investment Certificate or GIC is a Canadian investment that offers a guaranteed rate of return over
Following is the Sample Copy of the Application Form:
Begin your GIC Application. Complete all required fields. Application Form must be TYPED to be accepted. It CANNOT be HANDWRITTEN. Ensure your Personal email address is correctly input and review your
SnoPAY FREQUENTLY ASKED QUESTIONS
SnoPAY FREQUENTLY ASKED QUESTIONS GENERAL QUESTIONS What is SnoPAY? SnoPAY allows you to view and pay your bills anywhere you have Internet access anytime you want, within the United States and Canada.
Answers to Cardmember questions about Online Services and statement delivery.
Answers to Cardmember questions about Online Services and statement delivery. For more information, please contact your Program Administrator or Customer Service. Online Statements What is an Online Statement?
Receiving Secure Email Customer Support frequently asked questions
Registering a Secure Password Q. Where do I find the current Secure email guidance documentation? A. Published on Glasgow City Council website. Q. My organisation has its own email encryption tool, do
Secure Message Center User Guide
Secure Message Center User Guide Using the Department of Banking Secure Email Message Center 2 Receiving and Replying to Messages 3 Initiating New Messages 7 Using the Address Book 9 Managing Your Account
Directory and Messaging Services Enterprise Secure Mail Services
Title: Directory and Messaging Services Enterprise Secure Mail Services Enterprise Secure Mail Services for End Users Attention: Receivers of Secure Mail Retrieval of Secure Mail by the Recipient Once
Outlook Web Access User Guide
Table of Contents Title Page How to login...3 Create a new message/send attachment...5 Remove the reading pane...10 Calendar functions...11 Distribution lists...11 Contacts list...13 Tasks...18 Options...19,
U.S. Bank Secure Mail
U.S. Bank Secure Mail @ Table of Contents Getting Started 3 Logging into Secure Mail 5 Opening Your Messages 7 Replying to a Message 8 Composing a New Message 8 1750-All Introduction: The use of email
FDIC Secure Email Procedures for External Users April 23, 2010
FDIC Secure Email Procedures for External Users April 23, 2010 This document contains information proprietary to the Federal Deposit Insurance Corporation. Table of Contents 1. Introduction...2 2. Receiving
Online Payment Frequently Asked Questions
Online Payment Frequently Asked Questions Contents Getting Started... 1 What is the Mutual of Omaha Online Payments website?... 1 When will my payment be processed?... 1 What kind of payments can I make
Secure Email - Customer User Guide How to receive an encrypted email
How to receive an encrypted email This guide has been developed for customers/suppliers of Glasgow City Council who are due to receive sensitive information from us. It will explain how to use our secure
Using the PeaceHealth Secure E-mail System
1 PeaceHealth is using a Secure E-mail System that allows for secure E-mail communications between individuals with a PeaceHealth E-mail address and individuals with E-mail addresses outside the PeaceHealth
RSCCD REMOTE PORTAL TABLE OF CONTENTS: Technology Requirements NOTE
RSCCD REMOTE PORTAL The RSCCD Remote Portal allows employees to access their RSCCD Email (via Outlook Web Access), Department (Public) Folders, Personal (H Drive) Folder, and the District Intranet from
Welcome to HomeTown Bank s Secure E-mail! User Guide
Welcome to HomeTown Bank s Secure E-mail! User Guide To access the secure email message center, click the Secure Email link on the main web page. Select whether you are a new user of the
North Georgia Credit Union Bill Pay Agreement & Disclosure
North Georgia Credit Union Bill Pay Agreement & Disclosure Welcome to Paytraxx Bill Pay Service. Bill Pay is an optional service that may be added to the financial institution s Internet Banking Service.
Electronic Banking. Government Tax Payment & Filing Service
Electronic Banking Government Tax Payment & Filing Service June 2009 Table of Contents 1 Scotiabank s Government Tax Payment & Filing Service..............................2 2 Getting Started...............................................................3
HealthyCT Online Bill Pay
HealthyCT Online Bill Pay User Guide for Enrollment and Online Payments Table of Contents I. Enrollment Process: On-line Bill Pay Page 1 II. Payment Process- Pay Your HealthyCT Bill Online A. One-Time
Only checking accounts can be used for Bill Payment purposes.
INTERNET BILL PAYMENT SERVICE AGREEMENT The Internet Bill Payment service allows you to pay your bills electronically through a personal computer, rather than manually writing and mailing checks. You can
MUTUAL OF OMAHA SECURE EMAIL SYSTEM CLIENT/PARTNER USER GUIDE
MUTUAL OF OMAHA SECURE EMAIL SYSTEM CLIENT/PARTNER USER GUIDE Mutual of Omaha Secure Email Client/Partner User Guide April 2015 TABLE OF CONTENTS INTRODUCTION 3 About this Guide 3 CREATING A MUTUAL OF
DimeOnLine BillPay Frequently Asked Questions
DimeOnLine BillPay Frequently Asked Questions The Dime Bank has made banking easier by providing access to your accounts 24 hours a day, 7 days a week. Now you can view up-to-the-minute deposit account
MQA Online Services Portal
MQA Online Services Portal Registration and Adding a License User Guide 1. Hello and welcome to the Division of Medical Quality Assurance s online help tutorials. The MQA Online Services Portal is the
Electronic approvals for forms
Click on any of the boxes below to explore more detail, including answers to frequently asked questions, video quick links, and more. Electronic approvals for wires Electronic approvals for forms Security
Online Banking Quick Reference Guide
1 Page 2 Set Up and Access to Online Banking 2 How do I set up Online Banking? 2 How do I sign in to Online Banking? Online Banking Quick Reference Guide 3 Security 3 How do I change User ID? 3 What should
Regions Secure Webmail. Instructions
Regions Secure Webmail Instructions Regions Bank Member FDIC Revised 092015 REGIONS SECURE WEBMAIL Regions has established privacy guidelines to protect customers, vendors, and associates of Regions Bank.
CONTENTS. SETUP SECURITY ENHANCEMENTS... 17 Existing User... 17 New User (Enrolled by Employer or Self)... 21
HEALTH SAVINGS ACCOUNT SUBSCRIBER WEBSITE GUIDE CONTENTS BROWSER COMPATIBILITY... 2 ONLINE ENROLLMENT... 3 Online Enrollment Process... 3 REGISTERING YOUR ACCOUNT FOR ONLINE ACCESS... 12 INDIVIDUAL ENROLLMENT...
What s Inside. Welcome to Busey ebank
What s Inside Security............................ Getting Started...................... 5 Account Access...................... 6 Account Detail...................... 7 Transfer Funds......................
Onboarding User Manual Version 9-20159
Contents Hire (Companies using Hiring Manager + Onboarding)... 4 Hire (Companies using Onboarding only)... 5 Starting the Onboarding Process... 6 Complete at Home... 6 What If the Employee Can t Locate
SnoPAY FREQUENTLY ASKED QUESTIONS
SnoPAY FREQUENTLY ASKED QUESTIONS GENERAL QUESTIONS What is SnoPAY? SnoPAY allows you to view and pay your bills anywhere you have Internet access anytime you want. You can pay by transferring money directly
Frequently Asked Questions
Frequently Asked Questions We ve compiled a short list of frequently asked questions that will help the transition to new Homebanking easier for members. This list highlights some of the most common questions,
Receiving Secure Email from Citi For External Customers and Business Partners
Citi Secure Email Program Receiving Secure Email from Citi For External Customers and Business Partners Protecting the privacy and security of client information is a top priority at Citi. Citi s Secure
Online Account Opening Customer FAQs
Online Account Opening Customer FAQs Q. Why are you offering this new service to customers? A. At United Bank, we always look to identify and implement ways to enhance your banking experience with us whether
Frequently Asked Questions
Frequently Asked Questions Contents Frequently Asked Questions...1 Getting Started...2 Creating a Profile...3 Navigating Within Profile...3 Home...4 Managing Properties...5 Managing Payment Accounts...6
SCHS Frequently Asked Questions
SCHS Frequently Asked Questions 1. How do I apply? Applicants have following options to submit applications: ONLINE APPLICATION: Please visit the link and register as per
Account Link Funds Transfer Service. Account-to-Account Transfers between Texans Credit Union and other Financial Institutions
Account Link Funds Transfer Service Account-to-Account Transfers between Texans Credit Union and other Financial Institutions Frequently Asked Questions Getting Started How do I sign up for this service?
BCSD WebMail Documentation
BCSD WebMail Documentation Outlook Web Access is available to all BCSD account holders! Outlook Web Access provides Webbased access to your e-mail, your calendar, your contacts, and the global address?
Add Title. Single Sign-On Registration
Add Title Single Sign-On Registration Registration Instructions for Single Sign-On (SSO) Create SSO User ID Create SSO Password Subscribing to CHAMPS Accessing CHAMPS Step 1: Open your web browser (e.g.
Online Payment Center T-Mobile User s Guide
Online Payment Center T-Mobile User s Guide Table of Contents Introduction... 3 Features... 3 Getting Started... 4 Starting Qpay Online Payment Center... 4 Login... 4 Changing Your Password... 5 Navigating...
Guidance for completing an online application for admission to school
Guidance for completing an online application for admission to school This document has been compiled to assist parent(s) / carer(s) in completing an online application for their child s admission to a
Online Banking - Terms and Conditions
Online Banking - Terms and Conditions (updated 12/2015) Welcome to electronic banking at Capstone Bank. This Agreement and Disclosure Statement for Online Banking Services (the "Agreement") describes,
AT&T Business Class Email SM
`` July 2012 AT&T Business Class Email SM Getting Started Guide Welcome to AT&T Website Solutions SM We are focused on providing you the very best service including all the tools necessary to establish
Secure Email Client Guide
PRESIDIO BANK 33 Secure Email Client Guide THE BUSINESS BANK THAT WORKS 8/2013 Table of Contents Introduction.....3 Our Responsibility to Protect Confidential Information....4 Registering and Accessing
Bill Payment and Electronic Funds Transfer Service Agreement
ab Bill Payment and Electronic Funds Transfer Service Agreement For more information Call ResourceLine, our interactive voice response telephone unit, 24 hours a day, 7 days a week at 800-762-1000, Option
Provider OnLine. Log-In Guide
Provider OnLine Log-In Guide Table of Contents 1 LOG-IN ACCESS... 3 1.1 ENTERING THE USER ID AND PASSWORD... 4 1.2 OVERVIEW AND PURPOSE OF TRICIPHER... 5 1.2.1 Log-in for Users Who Are Active, But Not
Patient Portal: Policies and Procedures & User Reference Guide
Patient Portal: Policies and Procedures & User Reference Guide NextMD/Patient Portal Version 5.6 Page 1 of 23 6028-17MR 10/01/11 Welcome to the NextMD Patient Portal We would like to welcome you to the
Using Outlook Web App
Using Outlook Web App About Outlook Web App Using a web browser and the Internet, Outlook Web App (OWA) provides access to your Exchange mailbox from anywhere in the world at any time. Outlook Web App
Vendor Registration. Rev. 3/26/2013 Vendor Registration Page 1
Thank you for your interest in becoming a vendor to the State of Louisiana. It is crucial that we avoid duplicate registrations to facilitate correct award and payment processing. 1. Please go to
Online Bill Pay User Manual
\ Online Bill Pay User Manual Updated: November 14, 2014 Page 1 Table of Contents I. Welcome to Online Bill Pay... 3 II. New User Registration... 4 III. Login for Registered Users... 7 IV. Home Page Functionality...
Outlook Web Access End User Guide
Outlook Web Access End User Guide Page 0 Outlook Web Access is an online, limited version of an Outlook client which can be used to access an exchange account from a web browser, without having an Outlook
Verizon Business National Unified Messaging Service Enhanced Service Guide
USER GUIDE Voice Verizon Business National Unified Messaging Service Enhanced Service Guide What Is Unified Messaging? Verizon Business National Unified Messaging Service is an interactive voicemail system
Filling out an online application
Filling out an online application After choosing a program or college to which to apply and learning the admission requirements and deadlines, the applicant should fill out and submit an application at
Voice Mail Online User Guide
Voice Mail Online User Guide Overview Welcome to the online version of SaskTel Voice Mail that is now accessible from any computer with Internet access You can listen to, sort, forward and/or delete your
Online Banking Frequently Asked Questions
HOME BANKING Q. What is Freedom's Home Banking? A. Freedom s Home Banking allows you to bank anywhere, at any time, electronically. Our system allows you to securely access your accounts by way of any
FAQ S FOR BAIL BOND RENEWAL PROCESS
Q. How do I renew my Bail Bond License? FAQ S FOR BAIL BOND RENEWAL PROCESS A. April 1, 2016, you will receive an email from North Carolina Licensing Office of PearsonVUE. In order to complete the renewal
MSI Secure Mail Tutorial. Table of Contents
Posted 1/12/12 Table of Contents 1 - INTRODUCTION... 1-1 INTRODUCTION... 1-1 Summary... 1-1 Why Secure Mail?... 1-1 Which Emails Must Be Encrypted?... 1-2 Receiving Email from MSI... 1-2 Sending Email
Common Questions about NetTeller Internet Banking
Common Questions about NetTeller Internet Banking 1. What is NetTeller Online Banking? NetTeller Online Banking allows our customers a secure and convenient access to their accounts using the Internet
Health Services provider user guide
Health Services provider user guide online claims submission... convenient service, delivered through an easy-to-use secure web site... convenient service, delivered
Yahoo E-Mail Terminology
ka 412.835.2207 Yahoo E-Mail Terminology Yahoo Yahoo is the name of the website that your account will be set up in. To get to your e-mail, you will always need to start
Secure Mail Registration and Viewing Procedures
Secure Mail Registration and Viewing Procedures May 2011 For External Secure Mail Recipients Contents This document provides a brief, end user oriented overview of the Associated Banc Corp s Secure Email
FIRST TIME USER GUIDE COMMODITY TRACKING SYSTEM
FIRST TIME USER GUIDE COMMODITY TRACKING SYSTEM Table of Contents Introduction... 2 Contacts... 3 Steps required Prior To Re-Registering Online With The National Energy Board (NEB):... 4 Become an Authorized
MSGCU SECURE MESSAGE CENTER
MSGCU SECURE MESSAGE CENTER Welcome to the MSGCU Secure Message Center. Email is convenient, but is it secure? Before reaching the intended recipient, email travels across a variety of public servers and
Secure Messaging Service
Human Resources Secure Messaging Service Receiving Secure Emails from West Berkshire Council Table of Contents What are Secure Messaging notifications?... 3 How do I set up my Secure Messaging account?...
Delaware Insurance Plan
Delaware Insurance Plan Web Application User s Guide Issued November 2012 Page 2 TABLE OF CONTENTS I. INTRODUCTION II. GETTING STARTED A. First Time User B. Forgot Password or User ID C. Welcome Screen
River Valley Credit Union Online Banking
River Valley Credit Union Online Banking New user setup guide Members can self enroll for the service by answering a few simple questions. Before beginning the process, please make sure you have this information
THE GUIDE OF ONLINE MONEY TRANSFER
THE GUIDE OF ONLINE MONEY TRANSFER I. INTRODUCTION: 1. Definition: Online money transfer is to make transfer money from sub account to one of registed account in advance via customer s Balance online interface.
Security First Bank Consumer Online Banking Information Sheet, Access Agreement and Disclosures
Security First Bank Consumer Online Banking Information Sheet, Access Agreement and Disclosures Welcome to Online Banking with Security First. This Online Banking Agreement and Disclosure (Agreement) discusses
Secure Email Actions for Email Recipients
Secure Email Actions for Email Recipients Actions for Email Recipients The recipient cannot forward encrypted email outside the secure system. Each email will only be available to the recipient for 30
Scotia Bill Payment Remittance Reporting Service
Payment Services Getting Started Scotia Bill Payment Remittance Reporting Service July 2010 Table of Contents 1 Registration & Login...........................................................3 a. Your
Microsoft Office 365 Outlook Web App (OWA)
CALIFORNIA STATE UNIVERSITY, LOS ANGELES INFORMATION TECHNOLOGY SERVICES Microsoft Office 365 Outlook Web App (OWA) Winter 2015, Version 2.0 Table of Contents Introduction...3 Logging In...3 Navigation,
How to Create a New User Account for MyGovernmentOnline
How to Create a New User Account for MyGovernmentOnline *Prior to getting started, we encourage you to download and install the web browser Mozilla Firefox. While the MyGovernmentOnline software is designed
Virtual Branch Services Terms and Conditions
Virtual Branch Services Terms and Conditions The following terms and conditions govern the manner in which TMH Federal Credit Union (Us, We, Our) will provide Virtual Branch Internet Banking and Bill Payment
Mane-Link Online Banking. First-Time User Logon
Mane-Link Online Banking First-Time User Logon 1 ank.com Table of Contents Overview... 3 Mane-Link Online Banking... 4 First-Time User Logon... 4 Secure Access Code... 4 Online Banking Agreement... 5 Creating | http://docplayer.net/15717092-Scotiabank-student-gic-program-guide-i-how-does-the-scotiabank-student-gic-program-work.html | CC-MAIN-2018-43 | refinedweb | 6,243 | 53.81 |
Created on 2011-02-01 14:04 by vlachoudis, last changed 2011-02-21 19:43 by rhettinger. This issue is now closed.
The ConfigParser class in 2.7 is almost >50 times slower than in the 2.6 which for large files it renders it almost unusable. Actually the speed decrease depends on the amount of the stored data
Results from test program:
Python 2.7 (r27:82500, Sep 16 2010, 18:02:00)
on 3.5GHz Fedora14 64bit machine
ConfigParser 166.307140827
RawConfigParser 0.1887819767
Python 2.6.4 (r264:75706, Jun 4 2010, 18:20:31)
on 3.0GHz Fedora13 64bit machine
ConfigParser 4.24494099617
RawConfigParser 0.172905921936
If OrderedDict is used, the test case quickly uses 8GB of memory. With
this change (I'm not suggesting this as a fix!), the timings are normal:
Index: Lib/ConfigParser.py
===================================================================
--- Lib/ConfigParser.py (revision 88298)
+++ Lib/ConfigParser.py (working copy)
@@ -92,6 +92,7 @@
except ImportError:
# fallback for setup.py which hasn't yet built _collections
_default_dict = dict
+_default_dict = dict
import re
Commenting.
Attaching a patch that fixes the algorithmic atrocities by using the Chainmap recipe:
Fixed for 2.7 in r88318. Will make a similar fix for 3.1.4 and for 3.2.1.
Attaching patch for Python 3.2.
Georg, I was think of waiting for 3.2.1 for this one, but it can go into 3.2.0 RC2 if you prefer.
3.2.1 should be fine.
Fixed 3.1 in r88323.
See r88469 and r88470. | http://bugs.python.org/issue11089 | CC-MAIN-2014-41 | refinedweb | 254 | 72.93 |
Here.
#include <assert.h> #include "Node.h" using namespace std; class Bst { public: //constructor for when a head Node is provided and when it is not Bst() { root = nullptr; } Bst(Node *np) { root = np; } //destroy the tree, we need to go through and destroy each node ~Bst() { destroyTree(root); } //get the number of nodes in the tree int size() { return size(root); } //erase a value in the tree void erase(int item) { erase(item, root); } //insert a Node in the tree void insert(int item) { insert(item, root); } private: Node* root; //Go through each branch and recursively destroy all Nodes void destroyTree(Node*& n) { if (n != nullptr) { destroyTree(n->left); destroyTree(n->right); delete n; } } //For each Node return the number of left and right nodes //Add it up recursively to get the total size int size(Node* n) { if (n != nullptr) { int left = size(n->left); int right = size(n->right); int self = 1; return left + self + right; } return 0; } //Find the minimum Node value Node* findMin(Node* n){ assert(n != nullptr); if (n->left != nullptr) { return findMin(n->left); } return n; } //this one is a beast //look through all the nodes recursively //once you find the node value there are numerous cases we need to look for //If the current node does not have left and right nodes, just delete it //If it does have a left or right node, set the child to the parent //If it has both left and right, we need to work some magic. 
First we find //the smallest value and set the node we want to delete to that value (removing it) void erase(int item, Node*& n) { if (n != nullptr) { if (item == n->data) { if (n->right == nullptr && n->left == nullptr) { delete n; n = nullptr; } else if (n->right == nullptr) { Node* temp = n; n = n->left; delete n; } else if (n->left == nullptr){ Node* temp = n; n = n->right; delete n; } else { Node *temp = findMin(n->right); n->data = temp->data; erase(item, n->right); } } else if (item < n->data) { erase(item, n->left); } else { erase(item, n->right); } } } //look through all the nodes //insert the node on the correct node, it will be added to the left if the value is less //added to the right if the value is greater void insert(int item, Node*& n) { if (n != nullptr) { if (item < n->data) { insert(item, n->left); } else { insert(item, n->right); } } else { n = new Node(item); } } };
Let me know if you have any improvements or comments! | http://somethingk.com/main/?p=1075 | CC-MAIN-2017-43 | refinedweb | 418 | 55.34 |
This tutorial will show you the basics of Django as we build a Google search engine front end in the framework. While googling around I found an interesting Python module called web_search, which fetches search results from a few search engines and from DMOZ.
- Download web_search.py from that page
- Create a new project called “google”
django-admin.py startproject google
- Create a new application called "searchengine":
python manage.py startapp searchengine
- Put the web_search.py file in the searchengine folder
- Create templates folder in the main project folder
- Edit settings.py and set TEMPLATE_DIRS to:

TEMPLATE_DIRS = ('templates/',)
- Start the development server:
python manage.py runserver 8080
We now have a preconfigured Django project that is ready for the search engine code. We will make a template with a form where we can enter the search term, and then display the results, if any.
- Create in templates file called search.html with the code:
<form action="/" method="post">
<input type="text" name="term" size="30">
<input type="submit" value="Search">
</form>
A simple form that sends the data to / (the main URL).
- Edit searchengine/views.py to get:
from django.shortcuts import render_to_response
from django.http import Http404, HttpResponse, HttpResponseRedirect

def search(request):
    if request.POST:
        print request.POST['term']
        return HttpResponseRedirect("/")
    else:
        return render_to_response('search.html')
We have a simple view which returns the "search.html" template when no POST data is available, and redirects to the main page when POST data is present.
- Hook the view to the main URL by editing urls.py and entering the code:
from django.conf.urls.defaults import *

urlpatterns = patterns('',
    (r'^/?$', 'google.searchengine.views.search'),
)
When we open the main page we will see the form. When we enter a term and submit the form, we will be redirected back to the main page. The form works. Note the:
print request.POST['term']
in the view. You should see the term from the form printed in the terminal running the Django development server. request.POST is a dictionary-like object which we can easily access (the key is the form field name).
We have the term in the view, but now we have to do something with it: perform the Google search. The web_search.py module is easy to use; an example:
from web_search import google

for (name, url, desc) in google('search term', 20):
    print name, url
We need to use this code: pass in the term from the form, then hand the search results to a template and send them to the browser. Edit our views.py to:
from django.shortcuts import render_to_response
from django.http import Http404, HttpResponse, HttpResponseRedirect
# from project.application.web_search....
from google.searchengine.web_search import google

def search(request):
    if request.POST:
        return render_to_response('search.html',
            {'result': google(request.POST['term'], 10)})
        #return HttpResponseRedirect("/")
    else:
        return render_to_response('search.html')
Now we don't redirect; instead we render the search.html template with a variable called result that holds the search results. To see them we need to modify the search.html template:
<form action="/" method="post">
<input type="text" name="term" size="30">
<input type="submit" value="Search">
</form><hr>
{% if result %}
{% for res in result %}
<li>{{ res }}</li>
{% endfor %}
{% endif %}
The if tag will pass if the result variable exists (that is, if we POSTed the data), and the for loop will show all the data from the search. Test it. You will see that each result is a tuple:
('Wine Development HQ', '', 'Wine is a free implementation of Windows on Unix. WineHQ is a collection of resources for Wine developers and users.')
element 0 is the page title, element 1 is the page URL, and element 2 is the page description. To make it look as it should, we edit our for loop to:
{% for res in result %} <a href="{{ res.1 }}"><b>{{ res.0 }}</b></a><br> {{ res.2 }}<br><br> {% endfor %}
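For comparison, here is the same tuple access in plain Python; the result tuple below is hypothetical, standing in for one yielded by web_search's google():

```python
# Hypothetical result tuple shaped like the ones google() yields:
res = ('Wine Development HQ',
       'http://www.winehq.org/',
       'Wine is a free implementation of Windows on Unix.')

title, url, desc = res   # tuple unpacking
print(title)             # {{ res.0 }} in the Django template
print(url)               # {{ res.1 }}
print(desc)              # {{ res.2 }}
```

Django's template dot syntax (res.0) simply maps to Python indexing (res[0]).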
And it's done. Check out a screenshot here. A Polish version of this tutorial is also available.
XPath 1.0 defines 27 built-in functions for use in XPath expressions. Various technologies that use XPath, such as XSLT and XPointer, also extend this list with functions they need. XSLT even allows user-defined extension functions.
Every function is evaluated in the context of a particular node, called the context node. The higher-level specification in which XPath is used, such as XSLT or XPointer, decides exactly how this context node is determined. In some cases the function operates on the context node. In other cases it operates on the argument, if present, and on the context node if no argument exists. In still other cases the context node is ignored.
In the following sections, each function is described with at least one signature in this form:
return-type function-name(type argument, type argument, ...)
Compared to languages like Java, XPath argument lists are quite loose. Some XPath functions take a variable number of arguments and fill in the arguments that are omitted with default values or the context node.
Furthermore, XPath is weakly typed. If you pass an argument of the wrong type to an XPath function, it generally converts that argument to the appropriate type using the boolean(), string(), or number() functions, described later. The exceptions to the weak-typing rule are the functions that take a node-set as an argument. Standard XPath 1.0 provides no means of converting anything that isn't a node-set into a node-set. In some cases a function can operate equally well on multiple argument types. In this case, its type is given simply as object.
boolean boolean(object o)

Converts its argument to a boolean according to these rules:
Zero and NaN are false. All other numbers are true.
Empty node-sets are false. Nonempty node-sets are true.
Empty strings are false. Nonempty strings are true.
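A stdlib Python sketch of these conversion rules, treating a node-set as a plain Python list; this models only the rules above, it is not an XPath engine:

```python
import math

def xpath_boolean(o):
    """Convert a value to a boolean using the XPath 1.0 rules sketched above."""
    if isinstance(o, bool):
        return o
    if isinstance(o, (int, float)):
        # zero and NaN are false; every other number is true
        return not (o == 0 or math.isnan(o))
    if isinstance(o, list):          # stand-in for a node-set
        return len(o) > 0            # nonempty node-sets are true
    if isinstance(o, str):
        return len(o) > 0            # nonempty strings are true
    raise TypeError("unsupported XPath type")

print(xpath_boolean(float('nan')))   # False
print(xpath_boolean("false"))        # True: any nonempty string is true
```

Note the last line: the string "false" is true, because only string length matters, not content.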
number ceiling(number x)

Returns the smallest integer that is not less than x.
string concat(string s1, string s2)
string concat(string s1, string s2, string s3)
string concat(string s1, string s2, string s3, string s4)
...

Concatenates its arguments in order and returns the combined string. At least two arguments are required.
boolean contains(string s1, string s2)
number count(node-set set)
boolean false( )
number floor(number x)
node-set id(string IDs)
node-set id(node-set IDs)
boolean lang(string languageCode)
The lang( ) function takes into account country
and other subcodes before making its determination. For example,
lang('fr') returns true for elements whose
language code is fr-FR, fr-CA,
or fr. However, lang('fr-FR')
is not true for elements whose language code is
fr-CA or fr.
number last( )
string local-name( )
string local-name(node-set nodes)
string name( )
string name(node-set nodes)
string namespace-uri( )
string namespace-uri(node-set nodes)
string normalize-space( )
string normalize-space(string s)
boolean not(boolean b)
number number( )
number number(object o)
A string is converted by first stripping leading and trailing
whitespace and then picking the IEEE 754 value that is closest
(according to the IEEE 754 round-to-nearest rule) to the mathematical
value represented by the string. If the string does not seem to
represent a number, it is converted to NaN. Exponential notation
(e.g., 75.2E-12) is not recognized.
True Booleans are converted to 1; false Booleans are converted to 0.
Node-sets are first converted to a string as if by the
string( ) function. The resulting string is then
converted to a number.
If the argument is omitted, then it converts the context node.
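The string-conversion rules above can be sketched in a few lines. This is a hedged illustration only, not a conforming XPath implementation (node-set and Boolean arguments are out of scope here; the regular expression and function name are this sketch's own):

```python
# Rough sketch of XPath 1.0 number() string conversion, as described
# above: strip surrounding whitespace, accept an optional minus sign
# followed by a plain decimal number, and yield NaN for anything else
# (including exponential notation, which XPath 1.0 does not recognize).
import re

_XPATH_NUMBER = re.compile(r'-?(\d+(\.\d*)?|\.\d+)$')

def xpath_number(s):
    t = s.strip(' \t\n\r')  # XPath whitespace: space, tab, CR, LF
    if _XPATH_NUMBER.match(t):
        return float(t)
    return float('nan')

print(xpath_number("  12.5 "))   # 12.5
print(xpath_number("75.2E-12"))  # nan, exponents are rejected
```

Note that Python's own float() would happily accept "75.2E-12", which is why the sketch validates the string first.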
number position( )
number round(number x)
boolean starts-with(string s1, string s2)
string string( )
string string(object o)
A node-set is converted to the string value of the first node in the
node-set. If the node-set is empty, it's converted
to the empty string.
A number is converted to a string as follows:
NaN is converted to the string NaN.
Positive Inf is converted to the string Infinity.
Negative Inf is converted to the string -Infinity.
Integers are converted to their customary English form with no decimal point and no leading zeros. A minus sign is used if the number is negative, but no plus sign is used for positive numbers.
Nonintegers (numbers with nonzero fractional parts) are converted to their customary English form with a decimal point, with at least one digit before the decimal point and at least one digit after the decimal point. A minus sign is used if the number is negative, but no plus sign is used for positive numbers.
A Boolean with the value true is converted to the
English word "true." A Boolean with
the value false is converted to the English word
"false." Lowercase is always used.
The object to be converted is normally passed as an argument, but if
omitted, the context node is converted instead.
WARNING:
The XPath specification specifically notes that the
"string function is not intended for converting
numbers into strings for presentation to users." The
primary problem is that it's not localizable and not
attractive for large numbers. If you intend to show a string to an
end user, use the format-number( ) function and/or
xsl:number element in XSLT instead.
number string-length(string s)
number string-length( )
string substring(string s, number index, number length)
string substring(string s, number index)
string substring-after(string s1, string s2)
string substring-before(string s1, string s2)
number sum(node-set nodes)
string translate(string s1, string s2, string s3)
boolean true( ) | https://docstore.mik.ua/orelly/xml/xmlnut/ch22_05.htm | CC-MAIN-2020-16 | refinedweb | 894 | 55.95 |
Introduction to C# StringWriter
The StringWriter class in C# is derived from the TextWriter class, and strings can be manipulated using it. StringWriter writes to a StringBuilder, which belongs to the System.Text namespace; strings can be built efficiently with StringBuilder because strings are immutable in C#. StringWriter provides Write and WriteLine methods for writing into the StringBuilder object, either synchronously or asynchronously, and the StringBuilder stores the information written by the StringWriter.
Syntax:
[SerializableAttribute] [ComVisibleAttribute(true)] public class StringWriter : TextWriter
Working & Constructors of C# StringWriter
In order to understand the working of StringWriter class in C#, we need to understand the constructors of the StringWriter class, properties of StringWriter class, and methods of StringWriter class.
- StringWriter(): A new instance of the StringWriter class is initialized using StringWriter() method.
- StringWriter(IFormatProvider): A new instance of the StringWriter class is initialized using the StringWriter(IFormatProvider) constructor, with the format control specified as a parameter.
- StringWriter(StringBuilder): A new instance of the StringWriter class is initialized using the StringWriter(StringBuilder) constructor, to write to the StringBuilder specified as a parameter.
- StringWriter(StringBuilder, IFormatProvider): A new instance of the StringWriter class is initialized to write to the StringBuilder specified as the first parameter, with the format provider specified as the second parameter.
Properties of C# StringWriter Class
There are several properties of StringWriter class. They are explained as follows:
- Encoding: The Encoding property of the StringWriter class in C# is used to get the encoding in which the output is written.
- FormatProvider: FormatProvider property of StringWriter class in C# is used to get the object which performs controlling of format.
- NewLine: NewLine property of StringWriter class in C# is used to get or set the string of line terminator and this string of line terminator is used by the current TextWriter.
Methods of C# StringWriter Class
There are several methods of the StringWriter class. They are explained as follows:
1. Close(): The StringWriter and the stream can be closed using Close() method.
2. Dispose(): All the resources used by the object of TextWriter can be released using dispose() method.
3. Equals(Object): Equals(Object) method is used to determine is the specified object is equal to the current object or not.
4. Finalize(): An object can free the resources occupied by itself and perform other operations of cleanup using Finalize() method.
5. GetHashCode(): GetHashCode() method can be used as the hash function by default.
6. GetStringBuilder(): The underlying StringBuilder is returned using GetStringBuilder() method.
7. ToString(): A string consisting of characters is returned to the StringWriter using the ToString() method.
8. WriteAsync(String): A string is written to the string specified as parameter asynchronously using WriteAsync(String) method.
9. Write(Boolean): The Boolean value specified as a parameter is represented in the form of text and is written to the string using the Write(Boolean) method.
10. Write(String): A string is written to the current string specified as a parameter using the Write(String) method.
11. WriteLine(String): A string that is followed by a line terminator is written to the current string specified as a parameter using the WriteLine(String) method.
12. WriteLineAsync(String): A string followed by a line terminator is written to the current string asynchronously using the WriteLineAsync(String) method.
Examples to Implement of C# StringWriter
Below are examples of the C# StringWriter class:
Example #1
Code :
using System;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace Program
{
    class Check
    {
        // calling the main method
        static void Main(string[] args)
        {
            // define a string to hold the path of the file containing data
            String str = @"D:\Ex.txt";
            // create an instance of the StreamWriter class by passing the file path to AppendText()
            using (StreamWriter sw = File.AppendText(str))
            {
                // using the instance of the StreamWriter class, write the data to the file
                sw.WriteLine("Welcome to StringWriter class");
                sw.Close();
                // using the string containing the path of the file, the contents of the file are read
                Console.WriteLine(File.ReadAllText(str));
            }
            Console.ReadKey();
        }
    }
}
Output:
In the above program, a namespace called Program is declared, and then the Main method is called. A string is declared which holds the path of the file in which the data will be written. Then an instance of the StreamWriter class is created by passing the string containing the path of the file to the File.AppendText() method. Using that StreamWriter instance, data is written to the file Ex.txt; here the data written is “Welcome to StringWriter class.” The StreamWriter instance is then closed using the Close() method. Finally, using the string containing the path of the file, the contents of the file are read and displayed in the output.
Example #2
C# program to demonstrate usage of WriteLine() method of StringWriter class.
Code :
using System;
using System.IO;
using System.Text;
namespace Program
{
    class Check
    {
        // Main method is called
        static void Main(string[] args)
        {
            // define a string to hold the data to be displayed
            string str = "Hello, Welcome to the StringWriter class \n" +
                         "This tutorial is for learning \n" +
                         "Learning is fun";
            // An instance of the string builder class is created
            StringBuilder build = new StringBuilder();
            // an instance of the stringwriter class is created and the instance of the stringbuilder class is passed as a parameter to it
            StringWriter write = new StringWriter(build);
            // data is written using the string writer WriteLine() method
            write.WriteLine(str);
            write.Flush();
            // the instance of the stringwriter is closed
            write.Close();
            // an instance of stringreader class is created to which the contents of the stringbuilder are passed as a parameter
            StringReader read = new StringReader(build.ToString());
            while (read.Peek() > -1)
            {
                Console.WriteLine(read.ReadLine());
            }
        }
    }
}
Output:
Conclusion
In this tutorial, we covered the StringWriter class in C#: its definition, constructors, properties, and methods, and its working through programming examples whose outputs demonstrate those methods.
Recommended Articles
This is a guide to C# StringWriter. Here we discuss a brief overview on C# StringWriter Class and its working along with different Examples and Code Implementation. You can also go through our other suggested articles to learn more – | https://www.educba.com/c-sharp-stringwriter/ | CC-MAIN-2022-40 | refinedweb | 1,082 | 52.49 |
byte_size 1.4.1
byte_size: ^1.4.1 copied to clipboard
ByteSize is a library that handles how byte sizes are represented and an easy to use interface to convert to other forms of representation also taking locale into consideration.
Use this package as a library
Depend on it
Run this command:
With Dart:
$ dart pub add byte_size
With Flutter:
$ flutter pub add byte_size
This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get):
dependencies: byte_size: ^1.4.1
Alternatively, your editor might support dart pub get or flutter pub get. Check the docs for your editor to learn more.
Import it
Now in your Dart code, you can use:
import 'package:byte_size/byte_size.dart'; | https://pub.dev/packages/byte_size/install | CC-MAIN-2021-17 | refinedweb | 126 | 63.29 |
import "gopkg.in/src-d/go-vitess.v1/vt/logz"
Package logz provides an infrastructure to expose a list of entries as a sortable table on a webpage.
It is used by many internal vttablet pages e.g. /queryz, /querylogz, /schemaz /streamqueryz or /txlogz.
See tabletserver/querylogz.go for an example how to use it.
func EndHTMLTable(w http.ResponseWriter)
EndHTMLTable writes the end of a logz-style table to an HTTP response.
func StartHTMLTable(w http.ResponseWriter)
StartHTMLTable writes the start of a logz-style table to an HTTP response.
Wrappable inserts zero-width whitespaces to make the string wrappable.
Package logz imports 2 packages and is imported by 6 packages. Updated 2019-06-13.
I have just learned how to use fstream in my programming class and we are supposed to gather the information from a file we have created which I have named Section51.dat
The program is supposed to find the largest integer in the file. There is something wrong with my while loop and I can't work it out in my head. It keeps telling me the number of integers in the file instead of telling me the LARGEST integer in the file. Can someone show me how to design this loop in the correct way? I am having trouble seeing how it will compare one integer to the next in a loop .
Code:
#include <iostream>
#include <fstream>
using namespace std;

int main()
{
    ifstream RED;
    RED.open("Section51.dat");

    int aNumber;
    int highestNumber = 0;

    while (RED >> aNumber)
    {
        if (aNumber >> highestNumber)
        {
            highestNumber = aNumber;
        }
    }
    cout << highestNumber;
    return 0;
}
Search - "packages"
-
- Python: let me manage those packages for you.
Node: here's the whole post office. You're welcome.
c: Write the packages yourself.
Luarocks: What the fuck is a package
- Declare variables not wars,
Build packages not walls,
Execute programs not people,
Throw exceptions not stones.
-
- sudo apt update
"All packages up to date."
For the past 14 days? Huh.
Everyone on vacation or something?
It's quiet.
It's a little.... tooo ... quiet.
- My mom finally allowed me to get a raspberry pi! I'm really excited.
Now I have two packages to look forward to... A bunch of books and a raspberry pi!
Except she called it an apple pie. She said send me a link to the apple pie you wanted lol.
-
-
- Seriously amazing how some people just contribute that much to open source:
"I actively maintain 1100+ npm packages (1 billion downloads a month) and many popular projects. You're probably depending on some of my packages in your dependency tree. For example, Webpack relies on 77 of my packages."
- devRant, I love you.
You suggested I learn Java & Nodejs. I decided to jump straight into node & it's just great. I haven't done much, just a mini blockchain and played with packages, but I'm having a lot of fun with node.15
- Laziest thing!!!??? You better ask me when I was not lazy. Framework here framework there, library here library there.
npm install 29282818 packages
Bye
-
- So I tried to apt-get purge 14 packages and 496 got removed. Now I'm sitting here with a lot of broken shit trying to get my web up and running again .
:C
The perfect time to return to devRant
- $
- FUCK FUCK FUCK
Started working on devRantron after a month holiday.
Major version upgrade to 4 different packages. All the upgrades are breaking the current configurations.
I just fucking wish the JS community would understand the importance of backward compatibility.
-
- Me: Bro look, I have learnt so many things from the past couple of days.
-Introduction
-Data Types
-Variables
-Arrays
-Operators
-Control Statements
-Classes
-Methods
-Inheritance
-Packages
-Interfaces
-Exception Handing
-Multi-threaded Programming
-Enumerations
-Autoboxing
-Annotations
-Generics
My senior: Congrats on finishing up the basics
Me: Those were just basics???...///!!! 😜
- ?
- FUCK NODEJS
FUCK NPM
FUCK ANGULAR
FUCK ALL THOSE FUCKIN PACKAGES
FUCK THIS PILE OF CRAP MAKING ME WASTE MY TIME
- Sorry. I don’t give out free tips.
These are the available packages:
TipsPack Basic (9.99$): 5 tips to increase your productivity 2 fold.
1 ad between every tip.
TipsPack Premium (17.99$): 20 tips + 1 bonus tip for 10x productivity. 2 ads.
One bug fix free when you purchase either pack.
- Trying to install ubuntu on usb...
Make it persistent, so everything stays installed on the usb.
Installing some packages, personalizing it.
Reboot... And..... It's gone.
- $ crontab -r monthly.irl
The following extra packages will be installed:
pregnancy lib-life lib-fuckyou
Do you wanna continue? [y/n]
- I just finally released my first dart library!
(... )
It's a wrapper for @linuxxx's lynkz.me service.
My love for dart continues to grow.
- Be me, new dev on a team. Taking a look through source code to get up to speed.
Dev: **thinking to self** why is there no package lock.. let me bring this up to boss man
Dev: hey boss man, you’ve got no package lock, did we forget to commit it?
Manager: no I don’t like package locks.
Dev: ...why?
Manager: they fuck up computer. The project never ran with a package lock.
Dev: ..how will you make sure that every dev has the same packages while developing?
Manager: don’t worry, I’ve done this before, we haven’t had any issues.
**couple weeks goes by**
Dev: pushes code
Manager: hey your feature is not working on my machine
Dev: it’s working on mine, and the dev servers. Let’s take a look and see
**finds out he deletes his package lock every time he does npm install, so therefore he literally has the latest of like a 50 packages with no testing**
Dev: well you see you have some packages here that updates, and have broken some of the features.
Manager: >=|, fix it.
Dev: commit a working package lock so we’re all on the same.
Manager: just set the package version to whatever works.
Dev: okay
**more weeks go by**
Manager: why are we having so many issues between devs, why are things working on some computers and not others??? We can’t be having this it’s wasting time.
Dev: **takes a look at everyone’s packages** we all have different packages.
Manager: that’s it, no one can use Mac computers. You must use these windows computers, and you must install npm v6.0 and node v15.11. Everyone must have the same system and software install to guarantee we’re all on the same page
Dev: so can we also commit package lock so we’re all having the same packages as well?
Manager: No, package locks don’t work.
**few days go by**
Manager: GUYS WHY IS THE CODE DEPLOYING TO PRODUCTION NOT WORKING. IT WAS WORKING IN DEV
DEV: **looks at packages**, when the project was built on dev on 9/1 package x was on version 1.1, when it was approved and moved to prod on 9/3 package x was now on version 1.2 which was a change that broke our code.
Manager: CHANGE THE DEPLOYMENT SCRIPTS THEN. MAKE PROD RSYNC NODE_MODULES WITH DEV
Dev: okay
Manager: just trust me, I’ve been doing this for years
Who the fuck put this man in charge.
- So i started using atom text editor like a month ago. After finding out i can install packages and going on a spree.... I may have broke it. You know you are in for a pickle if the editor starts with more errors than windows vista.
-
- $ npm install ...
$ added 10 packages from 7 contributors and audited 21813 packages.
I realized that after some point you don't even think about your project dependencies growing. Because even adding 10 packages, it looks like it doesn't even change the total number of packages. 21813, 21920, 21980... Does it even matter? Fuck.
- I just removed a couple of packages from package.json and my compiled war went from 53 MB to 37 MB.
What the heck are you downloading node? I don't like such bloated packages.
- "We live in an age where people install npm packages like they’re popping pain killers"
~ David Gilbertson
- Fuck, spent like 2 days just to figure out dev build packages since they don't allow nuget restore on the dev server
- So I finally decided to get a theme for sublime (And other packages). I'm loving it. Post your IDE/Text editors or whatever you use to code.
- I love the feeling of running `sudo apt autoremove` and getting rid of like 500 MB of useless packages.
- Love it when you open up a frontend dev's project and among some 50 included npm packages for a really simple project you see this
- Oh god, I'm rewriting an old Python script we use at work and I had a look at the original tests for inspiration... There are 600 lines of "passes", #TODOs, assertions that can never fail, and tests of imported packages. Basically none of it is testing the actual script 🙃
-
- have spent 6 hours waiting for Chrome OS to build and my new smartphone to arrive.
It's 14:56 +0900 JST and there's no sign of the courier yet, Chrome OS still getting those packages built...
I hate the waiting game
-
- This is adorable
The packages name is oneko, you can even set colours or speed, or make it an... Anime girl?
Whatever suits you best, I guess
Almost forgot, it's an actual animated cat that's running after the mouse
- Retarded senior web dev:
shouting 'STOP' to the ones who pointed out his design flaws
cannot accept a js file with more than 100 lines.
nitpicking others not limited to his owm group
eager to try bleeding edge alpha builds packages for large application
left the company before finishing the project he started2
-
- When you try to learn a new programming language and think it will take 40 mins and you cannot install any packages and spend 4 hours googling the errors with no solution... So you admit it's not meant to be, and try another one
-
- A friend of mine wanted to name her flowers, and was supprised when I had so many nice suggestions. I just suggested names of linux distributions, programming languages and python packages 😅
- I did a fucking huge mistake.. didn’t update arch for too long..
What a fucking pain in the ass to solve those package conflicts..
From now on, I'll update EVERY FUCKING FRIDAY...
- Visual Studio (Code)
-Cross-Platform App Development
-Cloud Integration
-Extensions/Packages
-Lightweight (Installer VS2017)
-Many of Langs (C#, js, Python, F#,...)
-Data Science Tools built in
-...
-
- Yesterday, no packages update for 14 days. Things get eerily quiet. Today....
Followup To
-
- Today I fucking learnt that RHEL is no longer an open source operating system in the full meaning of the terms starting from 8 onward as it shifts toward being a binary only distribution.
What does this mean? Historically in RHEL you could install packages that would allow you to compile software that would use the system libraries.
Now you can't. These packages are being taken away and no longer provided.
If you wanted an operating system you could develop on or build software on well you need something other than RHEL.
The OS is now crippled. There's a bunch of things you used to be able to do, whereas now you have to pay for a support contract.
- Wow our network is so safe, our network is so secure, our network is so non-exploitable that our devs can't download packages in VS; our company only has two IT dudes who can fix that issue and they're non-existent. Wow..
-
- Atom packages be like, "You can easily access me with a shortcut by holding shift + alt + ctrl + a + b + c while rapidly hitting up, up, down, down, left, right, left, right, a and b."
- I'm a big fan of 'as' keyword in Python. It makes importing packages in the beginning of the code so slick.
Instead of doing:
> import what
> what.does.the.fox.say()
You can neatly do:
> from what.does.the import fox
> fox.say()
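For anyone wanting to try the trick, here's a runnable version using only the standard library (the rant's `what.does.the.fox` module is made up, so `os.path` stands in for the deeply-dotted name):

```python
# Import aliasing with `as`, using only the standard library.
# The rant's `what.does.the.fox` module is fictional; `os.path`
# plays its role here.
import os.path as p         # alias a whole module
from os import path as fox  # alias an attribute at import time

print(p is fox)             # True: both names point at the same module
print(fox.join("usr", "bin"))
```

Note the difference in form: `import x.y as z` requires `x.y` to be an importable module, while `from x import y as z` also works for plain attributes of `x`.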
-
- -> Contribute to Zulip's mobile app on github.
-> Contribute to babel.
-> Build 5 npm packages.
-> Dive into Haskell.
-> Have 100,000 ++s on devRant😁
-> Make a private project I built on github public.(still thinking about it).4
- Do I want to continue?
Y -> vacation lost (Production server is down).
N -> Ok, I will gather more packages for you to update next time.
😭😭
- NPM has this cool feature called "link" which allows you to easily link local npm packages as dependencies of other local packages for development. It's so cool in fact that every time you run npm install it deletes all your links for fuck all reason
-
-
-!7
-
-
- So I just had to tell three people to read the fucking docs in the comments of an AUR package.
They complained about linker errors, figured "oh happens with GCC 10, doesn't with GCC 9, let's use GCC 9".
If they had read the docs, they'd know that maybe, all that was needed to be able to compile the code was a single command-line flag. `-fcommon`.
People, just RTFM. If you see "oh upgrading from version X to version Y causes some issue", look up "porting from S X to Y", and find something like this:...
Was it so hard? Yes? Then why are you compiling any packages for yourself with a PKGBUILD when you should rather just stick to the non-customized packages built by people that know what they're doing, from the repositories?
- Piece of shit cake. I'll stab you in the goddamn virtual neck with a screwdriver. Not get my nuget packages. Go fuck yourself in your fat fucking ass. Goddamn, who automated this build process. I did. Fuck me.
- Hey!
I was looking to purchase a VPS to host my stuff on. I've looked at DigitalOcean, but their packages seem rather expensive.
Is there a cheaper VPS provider? I'd like a machine with at least 2GB of RAM.
-
- How deep does the rabbit hole go?
Problem: Convert numpy array containing an audio time series to a .wav file and save on disk
Error 1:
Me: pip install "stupid package"
Console: Can't pip, behind a proxy
Me: Finds workaround after several minutes
Error 2:
Conversion works, but audio file on disk doesn't work
Encoding Error only works with array of ints not floats
BUT I NEED IT TO BE FLOATS
Looks for another library
scikits.audiolab <- should work
Me: pip --proxy=myproxy:port install "this shit"
Command Line *spits back huge error*
Googles error <- You need to install this package with a .whl file
Me: Downloads .whl file <- pip install "filename".whl
Command Line: ERROR: scikits.audiolab-0.11.0-cp27-cp27m-win32.whl is not a supported wheel on this platform.
Googles Error <- Need to see supported file formats
Me: python -c "import pip; print(pip.pep425tags.get_supported())"
Console: AttributeError: module 'pip' has no attribute 'pep425tags'
Googles Error <- Use another command for pip v10
Me: python -c "import pip._internal; print(pip._internal.pep425tags.get_supported())"
Console: complies
Me: pip install "filename".whl
Console: complies
Me: *spends 30 minutes to find directory where I should paste .dll file*
Finds Directory (was hidden btw), pastes file
Me: Runs .py file
Console: from version import version as _version ModuleNotFoundError: No module named 'version'
Googles Error <- Fix is: "just comment out the import statement"
Me: HAHAHAHAHAHA
Console: HAHAHAHAHA
Unfortunately this shit still didn't work after two hours of debugging, lmao fuck this
-
- I don't remember/saw if somebody posted it in this much detail, but here's how one developer essentially showed how broken npm once again is, by just removing all his published packages, basically breaking thousands of other packages that depended on it, very interesting read, especially to understand how npm can't be relied on..........
- Sublime Text - absolute favourite! Tried many editors but nothing is faster than Sublime on a 4 gigs machine .. and also the packages.
- Today while cleaning windows, it came to my mind how I miss when developers not only knew what AJAX is but also how to implement AJAX requests properly....
Nowadays a framework with 10 composer packages will do the trick, and it looks like black magic to juniors
- Is programming a girls dream come true?
I want a package for something so I search NPM, 50 results.
I could spend a life time browsing, shopping for packages, trying them out to see if they fit.
It's all I bloody do these days
-
- Pressing ctrl+s in sql dev when checking packages.. you accidentally press space or sth and later ctrl s it compiles.. then shit gets real when you accidentally lock up everything on prod db..
-
- Since im crazy and love bleeding edge im creating GNOME-dev packages for arch linux. Somebody willing to help me? Come on guys i really need help with this.
- I just found that when you have an ultra bleeding edge system, AUR packages actually break due to this. HAHA. Im fine with this, just edit the makepkg to have the bleeding edge then normal versions of it.
- Made an order for 3 bubble teas + 1 vacuum from Amazon. Somehow they came all in 1 tall box...
I thought they'd be in 2 packages since they were in different departments but I'm wondering what the layout of Amazon warehouses are and who/what decides that all these things can fit in one box...
- HOW COCKSUCKING DIFFICULT CAN IT BE TO CHANGE THE FUCKING MEMORY LIMIT FOR NPM PACKAGES?!
HOLY MOTHERFUCKING SHIT.
-
-
- My first packages, uploaded to nuget!
A simple neural network library, written in C# (.Net Standard)
Here's a link:
- !rant
I have multiple (similar) linux environments on which I work. How do you guys manage installed packages and configurations across them?
-
- Fucking love how one-liner packages are breaking basically the entire JS ecosystem every once in a while. Why the fuck do you add one-liner packages as dependencies in your code?
-
-
- What good is a ssh-server when the machine doesn't even connect to the lan? Seriously, this is a fresh version of ubuntu server and i just updated the packages.
-
- I can't eat lunch in our office anymore.
Because there are ant and fly, and the room is isolated so they can't get out.
How did they end up here?
How am I end up here?🤔
Note: I wrote them in singular, so they look like js packages
- I'M CHANGING 200 + SSIS PACKAGES WHILE VISUAL STUDIO KEEPS ON CRASHING EVERY 15 MINUTES WITH SOME WEIRD UI ERROR THAT HAS NOTHING TO DO WITH THE PACKAGE ITSELF. ITS MAKING ME WANT TO GAUGE MY EYES OUT AND PULL OUT MY HAIR. FUCK MICROSOFT AND FUCK PEOPLE THAT MAKE ME USE THIS WORTHLESS TOOL!!!!!1
- One thing JS does great is that everything from the server to the gui to the (extremely flexible) build system is 100% platform independent with very few platform specific bugs. And that's a big deal when a basic setup is 1200 packages from 650+ semi-coordinated people.
-
- I work in a place where I don't have ssh access to the web server. No proper use of composer. I have to pull packages to my local machine and upload through
- Even Fedora has 'very frequent' updates. But they don't irritate you like hell! Who doesn't like updates when they come easy? But luckily for Microsoft, they've always been able to find a unique way to piss off their users someway or the other
- Why isn't the Lua scripting language widely used in the industry? It's flexible, modular and its packages aren't bad; seems like a great fit.
- Python drives me nuts. Can we just have 1 environment to run python? Virtual environment or conda environment. Hard to switch from notebooks to IDEs because you need to reinstall the packages for that environment
-
-
- Found file called 'bullshit' in my work folder with list of packages and no comment whatsoever. I wonder what my past self wanted to do with those packages... What was this list for? Sounds important. And back then (month ago) I thought it was obvious. Sometimes I wonder what games my past self is playing with me...
- When everybody keeps downloading a separate IDE for each language and you just install packages in VSCode 😎😎
- Today I learned:
In Java, you're supposed to compile a source file from one directory up, outside of its package directory. You can't compile the source file inside its own package directory, for it will state "cannot find symbol" on files in the same package, even though they're in the correct package directory. That can be quite confusing at first.
Given the following directory structure:
|_ pkg/
   |_ Src1.java
   |_ Src2.java (interface with static method)
and the following source:
package pkg;
public interface Src2 { static void doStuff() { ... } } // assuming JDK8+, where static methods in interfaces are allowed
package pkg;
public class Src1 { public static void main(String[] args) { Src2.doStuff(); } }
..being inside the pkg/ directory in the console,
this won't work:
javac -cp . Src1.java
"cannot find symbol: Src2"
However, go one directory up and..
javac -cp . pkg/Src1.java
..it works!
- Yeah, you truly start learning how the compiler works when you don't use the luxury of an IDE but rather a raw text editor and a console.1
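For the curious, the situation above can be reproduced in a few shell lines. The file names follow the story; the scratch path /tmp/pkgdemo is an arbitrary choice. The point is in the comments: from inside pkg/, `.` on the classpath is pkg/ itself, so the compiler looks for ./pkg/Src2.java and fails; from the parent directory the same flag works and Src2.java is compiled implicitly.

```shell
set -e
rm -rf /tmp/pkgdemo && mkdir -p /tmp/pkgdemo/pkg && cd /tmp/pkgdemo
cat > pkg/Src2.java <<'EOF'
package pkg;
public interface Src2 { static void doStuff() { System.out.println("stuff"); } }
EOF
cat > pkg/Src1.java <<'EOF'
package pkg;
public class Src1 { public static void main(String[] args) { Src2.doStuff(); } }
EOF
# From inside pkg/: `javac -cp . Src1.java` fails ("cannot find symbol"),
# because `.` would be pkg/ and the compiler looks for ./pkg/Src2.java.
# From the parent directory it works, and Src2.java compiles implicitly:
if command -v javac >/dev/null 2>&1; then
  javac -cp . pkg/Src1.java
  ls pkg/Src1.class pkg/Src2.class
fi
```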
- Planning to make a GitHub list of useful links on topics related to JavaScript, React, React Native, Redux etc. Drop useful links if you have any. They can be tutorials, articles, talks, repos, packages
- When you spend half the work day and skipping lunch to resolve dependencies and process packages for an API but realize you're using the wrong OS…
- "Instead of writing classes in Python, create packages and use functions. Each function should do exactly one thing without being specific to the program you are developing." - Steve Wawer!3
- /*
*
- Linux hates me. Been trying to install Mint for hours, just keeps getting stuck on installing grub2 packages...5
- It looks like packages on npm have "disappeared"....
Gotta love javascript.2
- *Lists packages with name "weechat"
1 result
*Tries to start package*
*Says package is not installed*
Wat6
- Thank to this UPS driver ... i'm not going to get my package (laptop screen) until Monday evening.
Express Priority shipping $35.xx or whatever that is down the drain.
Never had this issue with Fedex and 100's packages delivered.
This ends the rant.3
- R.
The statistical "scene" (if there is such a thing) grew so much in recent years because now there is a single language that everyone can use and easily share code via packages.
Before, everyone used different proprietary, paid statistical software and could not share code.1
- apt-get install life
The following packages will be REMOVED:
destiny, future, goals, mother , father, siblings
FUCK THIS SHIT2
- Learning image processing, deep learning, machine learning, data modeling, mining etc., and actually working on them, is so much easier than installing the requirements, packages and tools related to them!2
- Confession: I've been installing npm packages globally using sudo for years just because I'm too lazy to set it up properly.5
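For anyone in the same boat as the confession above, the usual fix is a per-user npm prefix so that `npm install -g` writes to a directory you own and never needs sudo. The directory name below is an arbitrary choice, not an npm default, and the `npm config` step is guarded in case npm isn't installed.

```shell
# User-owned location for global npm packages (name is arbitrary).
mkdir -p "$HOME/.npm-global"
if command -v npm >/dev/null 2>&1; then
  npm config set prefix "$HOME/.npm-global"   # future `npm i -g` needs no sudo
fi
# Add this line to your shell rc so globally installed binaries are found:
echo 'export PATH="$HOME/.npm-global/bin:$PATH"'
```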
- I updated my hosting packages, purchasing a new VPS. Half way through my download of all the hosted sites, I wondered why it had stopped. Yeaaaaah... I'd updated the DNS to point to the new server mid transaction. Hodor.2
- If you are using arch and are making packages from the aur all I can say is use makepkg -s because then it will install all the dependencies for you.
Yay6
- LFS Update: Well I fixed my perl issue last night and today my goal is to do a bit of level grinding so to speak on LFS by trying to finish ch 6. I'm a little over halfway through so I only got about 40 more packages to compile from source 😄
- when i'm new to this mac stuff and spend 20mins trying to install packages only to find they are actually pre-installed...
- Purchasing the premium version of packages and making it open source so you can rip off the author. 😂😂3
- Installing a deb file on Manjaro. 😐
Tried Docker...over complicated
Tried dpkg... dependencies nonexistent in manjaro.
Snap packages?
Appimages?14
- Found a nice place to start if you are planning to switch, tinkering or even just want find some new packages (like me) for GNU/Linux. It's a blog about operating systems, software and software development.
Check it out:
- That's why we love NPM:
>npm install
*installing packages*
npm warn ........................
npm warn deprecated .....................
npm warn .......................
********** A million times more ***********
Oh it works! eh, just ignore every warning :)4
- ?!15
- How is C and C++ development done in real life, ffs?
Like, you got no ecosystem whatsoever, just thousands of build chains. How does this work?26
- Today I had to edit the wpa.conf file in OpenWrt in vi because it didn't have enough space to install nano and the packages it depends on. FML
- Spend the same amount of time looking for and testing existing npm packages as it would take to build something from scratch.
Nothing yet, but Boss is still certain that building our own is unnecessary.
😐
- Kinda annoyed that I got Arch just so I could have a lightweight Linux experience; after a day or two of setup I have over 800 packages installed with not much difference from my old Mint setup.1
- Just realizing that using packages without reading the source is exactly the same as copy-pasting someone else's work straight into your final dissertation, without rereading it.1
- When updating nuget packages takes longer than cloning, configuring and compiling the Linux kernel...
while(true)
bang(head, table);3
- What does composer.phar even do? I swear it downloads six versions of itself before installing the required packages...1
- I should've tried Mint sooner; it's the first distro that has just worked out of the box for me.
Easy-to-install deb packages, the UI looks good, built-in dark theme. If it runs as well on my desktop as it does on my laptop, I think I'll have a distro for life
- One of the best things about linux is the ease with which you can update packages, just sudo apt-get update && sudo apt-get upgrade. None of that stupid installer which downloads an updater which updates the program shit.
- One of the DB guys at work writes DB packages like this:
- open package in PL/SQL developer
- copy code to notepad
- edit PL/SQL code in notepad (yes, fucking windows notepad)
- copy and paste back from notepad to PL/SQL developer
- commit
Every time I see him edit DB packages I can feel my brain mass shrinking.5
- Emacs: once you've gotten familiar with it, it's just the best, and there are lots of packages to make everything easy to do from Emacs. Also it's very configurable1
- Why do average people insist on buying ridiculous packages from companies like GoDaddy - when they don't know a thing about hosting or domains? They may as well just break a few of my fingers before I start their project.4
- Newtonsoft JSON...
CSV Helper...
With ETL these two cover 90% of file ingest. I’m still looking for a good XML “auto class” package.2
- It's so annoying when you have Anaconda installed with all the shit packages, and when you run Python from IDLE and import packages it gives errors, and when you try to pip install them it says "requirement already satisfied" in Anaconda!!!!😭
- Fuck all those special snowflake npm packages who each implement their own incomprehensible documents format and even make an ugly ass website full of lies for it.
Next to JavaScript, this is the biggest reason why I hate frontend development with a passion.2
- What would you think about a dotfile-manager, with, some kind of, packages management capability?
So, dotfiles are organized in packages (git-repos) and these git-repos have a standardised inner structure, so one can easily share configurations and install configurations of other users.3
- Just cos everyone loves React so much, I gave it a new try.
npx create-react-app test
And I have a folder test that weighs in at 266 MB. And an environment that will completely disguise any JS error I might create. For what bloodymir gain?5
- Is it just me, or have you all been noticing a significant increase in the number of posts on devRant asking for simple technical assistance with code and packages?
I've always had the opinion that asking for personal advice here is fine but technical stuff should be kept for StackOverflow and other such forums. What is your opinion about that?2
- Someone send help. IBM has taken over my village. They're brainwashing the children; they wont use any packages that don't end in 'd'!2
- How the fuck does a community write an open source library/package that works wonders, but fail to write a proper documentation.
So many times I have to look into the core of frameworks, libraries and packages to form my own documentation in my head.
I guess I should be contributing too.
We are all to blame
- WTF is wrong with you, VS?!111 I only updated these efing NuGet packages and my whole project goes down the toilet? Don't tell me these files are not there!! THEY ARE!!!! I SEE THEM!!!
...ohh, I forgot, my fault! These files in my packages folder are the new ones and YOU STILL WANT THE OLD FILES BECAUSE YOU FORGOT TO UPDATE YOUR FUCKING PROJECT FILE!
- "So... you know SQL? Great!! Here, I have this project for you to fix a few things."
"What is it?", you ask... SSIS packages and stuff!
Where do I start??!!
- Is it just me or is Node a pain to set up? I'm trying to make an Ionic app and every time I've tried installing Node, all necessary packages, etc I always get errors.4
- I finally got the time for learning Go.
Anyways I'm reading this about packages, and made me wonder, what exactly a package is in terms of Go lang?
That example extracts two functions into a new package, and that made me confused11
- I just installed Linux mint. I'm loving it. I always used Windows. Which packages should I install or what should I do with it that Windows or other OS can't do?21
- C# and VS have the worst packaging system of any language. I have errors about packages not being found from WITHIN THE SAME SOLUTION1
- Linux users of dR, what are the packages/softwares you would highly recommend to a Linux kiddie (Beside latte-dock, Timeshift)? For a Kubuntu ofc?16
- When you're writing your most beautiful module yet and realize that you need the new Jackson library, go to upgrade the dependency and all your old dependencies break because nobody ever took the time to keep packages up to date! #DependencyHell
- I really wanna use Sublime because it's super snappy, but I can't use it without having a shit tonne of inconvenience. It has no proper package implementing something like IntelliSense. VSCode has that, but it makes me feel really slow3
- Ubuntu MATE looks good. But man, it's so hard to remove. A simple apt-get remove didn't do the trick. Spent a long time removing all packages. Still it is there in the login screen. Let it stay there. I am tired
- Well.. I just installed Antergos on my laptop, and I'm already getting frustrated just installing Spotify.
Might go to something else, I just want to use my computer, why must it be so difficult to listen to music12
- I just started maintaining a few AUR packages, and I've got to say it's rather fun and rewarding, just to know that you're responsible for making sure that something is up to date, and that no one using the package is getting anything bad.2
- Why the FUCK do I feel an urge to always update all the mf packages and then spend hours debugging some shitty mismatch? WHAT IS WRONG WITH ME?2
- If you're running Manjaro and use mesa...
...yeah, you should definitely run your pacman (or wrapper thereof) with -Syyuu. Otherwise their fake-upgrade mess that they've done yesterday will likely break something on your system, too.7
- any unity dev has advice, documents, tutorial suggestions on creating resuable libraries, packages?
I'm working on a library for the company, and want people to use it as easy and comfortable as possible.4
- Any of you uses Fedora with an Nvidia-GPU?
The distro seems pretty cool, but i doubt I'll get everything to work, since fedora doesn't offer premade packages for proprietary drivers.4
- Have spent the better part of two days trying to fix the build because I foolishly tried to update some NuGet packages :-(
- Feels pretty good when you chroot into your first LFS enviornment and nothing breaks (yet..). Also my toolchain seems to be compiling packages correctly so that's a plus :)
- Web development has become nothing but a big shit show. Worse than Android development with all the packages and frameworks people keep stuffing into projects.1
- Hey folks, new here.
I'm started to get into some RoR, so I was curious; do you have know any packages/extensions (context is irrelevant) that you think I should check out?6
- I just read about the npm dependency incident and was confused at how someone could create a package that brings so much dependency and simply have the right to delete it? How many other vital packages can be deleted?1
- #1 clean up the internet of domains, use those beautiful and fancy TLDs - blog, photography, gallery, cloud, house, gov, xxx
#2 more fanatical - clean the internet of cat / dog / [supposedly cute animal] pictures, and later - npm packages1
- Question to the Gentoo-Users,
What profile do you use?
I want to switch to gentoo, and the desktop-profile looks like it emerges quite a lot if packages that I won't use in the near future2
- Software packages can be installed only through proprietary software manager on a corporate server to ensure auditability and compliance.
The package manager fails, because it attempts to execute `yum` on an Ubuntu server.3
- As somebody who is used to Ubuntu (I just love the simplicity of just running an apt-get to download packages), should I switch to Arch?
Please only give factual reasons, not just "yes, cus it's arch"6
- What a holy shit the fucking Windows File History and all the rubbish from the Microsoft installer files!! Is it so difficult to remove automatically the unnecessary packages!!! My SSD partition for Windows is completely full of rubbish!!4
- When you have to pull out that Windows laptop for the first time in a couple months, for a .NET project, and have to watch Dropbox sync. Then you say "fuck it" 30 minutes later when you recall how many MEAN "test" projects you installed packages for.....
- Some people like to spring clean, rearrange their house, wash their car, build shelves, or some other chore.
Me? I just spent a couple or so days manually "syncing" the packages on my 2 laptops. That is, making sure that `apt-mark showmanual` and `dpkg-query -l` show the exact same output on both of them, by manually apt-marking / apt-get installing / removing the exact same set of packages as manual/auto and ensuring the exact same set of "recommended" / "suggested" packages are pulled as dependencies on both.
The end result is a sysad's ideal - I have wasted countless hours making sure absolutely nothing has changed. But hey, my package list is clean, and if aliens from a software dimension abducted one laptop... I have an exact clone ready.
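The core of that manual sync is scriptable: dump each machine's manually-installed set to a file and diff the two with comm. The two sample lists below are made up for illustration; on real systems they would come from `apt-mark showmanual`, and the /tmp path is an arbitrary scratch location.

```shell
set -e
rm -rf /tmp/pkgsync && mkdir -p /tmp/pkgsync && cd /tmp/pkgsync
# Stand-ins for `apt-mark showmanual` output captured on each laptop:
printf 'git\nhtop\nvim\n' > laptop_a.txt
printf 'git\nvim\nzsh\n'  > laptop_b.txt
# comm requires sorted input.
sort laptop_a.txt > a.sorted
sort laptop_b.txt > b.sorted
comm -23 a.sorted b.sorted > only_a.txt   # present on A only -> install on B
comm -13 a.sorted b.sorted > only_b.txt   # present on B only -> install on A
cat only_a.txt only_b.txt
```

With these sample lists, only_a.txt ends up containing htop and only_b.txt contains zsh, i.e. exactly the packages to reconcile.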
- !rant
Fucking hate all online programming exams because they limit the way the examinee can answer the problem: you're not allowed to import any packages.8
- OpenCV, OpenFace and Caffe are supported in Arch Linux, but CUDA is not supported!! :| WTF!! How can these packages be supported in Arch but not CUDA?!7
- Boss uses symlinks for packages in go. He checked in changes for several services and then left for vacations.
- My employer announced that it has to let people go.
The severance packages can't be announced quickly enough!
- About to write (and publish) my first npm package with TypeScript. It's basically just for json stream writing because the existing packages suck and/or don't do what I need
Guess my actual project I need this for will have to take a bit longer now
- Is there any way to find the users of your npm packages? I can see there are no dependents for some of my npm packages, but the number of downloads keeps increasing. I've also checked the GitHub Insights dependency graph but couldn't find anything.2
- I write a hello world and then start importing libs and packages on top. Then I adapt the hello world to test those libs and packages. The entire thing is one big sanity check. The logic is done in increments as well.
Is this bad?
- Any C++ devs here? I'm looking for an explanation of how to distribute C++ packages with dependent libraries. What methods are there to make sure that every person who receives your application can use it without installing any libraries or dependencies? I'm currently developing on Linux.2
- Just went to update my nextcloud instance, is there an archive of packages for archlinux ARM, nextcloud stable isnt compatible with php 7.2.
I regularly clear /var/cache/pacman/pkg yes i already checked...
- I'm really frustrated by the size of the node_modules folder that gets created every time for every project. So, I've been looking for some time- and space-saving solutions. And I found PNPM, Yarn and Pkglink. But I'm not sure which is better to serve my purpose.
Things I'm looking forward to solving:
1. I don't want to re-download the same packages over and over again
2. I don't want the same packages to be in multiple projects and eat up space
3. I want a stable, fast and disk space saving solution
Looking for experts advice.7
- Installed Kali linux,
Did sudo update,
Upgrade,
Installed packages,
Programmed in VIM,
Found....atom and sublime are a bit better!
Downloaded atom and sublime....
Saw parrot os features...
*Su delete Kali
Apt-get install parrot.....bye bye Kali!
- People are whining about frontend bloat, overengineering, too many packages on npm and whatnot.
And I'm just like: "Hey! You still can write your own leftpads y'know..."
I just don't get why having lots of options has to be so bad...
- I wish windows would release a personal version of Windows micro server so that I could only install exactly what I want for my dev workstation through budget packages...*sigh*1
- when you haven’t updated the packages in ages.
i would rather have the vulnerabilities than have an app that’s not working.2
- IDE: Visual Studio. Overkill of an IDE yet very very useful for everything.
Text Editor: Code and Atom. Although both of these text editors eat more resources than Sublime (especially Atom), what I love about both editors are the available packages and the monthly updates.
- Data structure and Analysis
For experienced ones, add "System Design".
...path for big fat packages.
- When you can't use anything else other than php with no libraries and no frameworks/external packages and you have to reinvent the wheel every. single. time.1
- I have Ubuntu 16.10 installed on my laptop... guess how painful it is to install new packages :/ I should really add "reinstall to a newer version" to my to-do list.4
- Well. Here we go... new version of buildroot, new version of the kernel patches, new version of several packages...1
- How does anyone wrangle all these fucking JS packages!
Trying to fix issue where a table overflow problem...only in firefox. Found a quick fix then discovered it does not work when there is an event with a handle.
So come to find out perfect scrollbar does not like flex nor firefox (the only browser for the end user)
Jesus christ I miss laravel.
- Does anyone know if there's a way to make my NuGet PM for my API make some packages only available for some environments?
I mean, is it possible by modifying some instances to make some packages just to be installed/seen from the development side but not in Production?1
- Recommended packages for Atom for PHP, HTML, CSS, MySQL development to make my life easier? Autocomplete, preview, etc.?5
- A lot of my apps have been hosted on Debian servers for a long time.
I'm upgrading some machines to Debian 10 and it's a nightmare: the phpmyadmin and monit packages are no longer available.
Any suggestion?
Is there someone in the same situation?6
- About to scrap the multiplayer functions in massmello altogether and work solely on the singleplayer package instead - i had far more done in regard to singleplayer functionality before i had the packages split anyway. >:(
- I love Laravel. But I hate Blade as its templating engine. Why didn't they choose Pug? It's so much more readable. I know I can have it via packages. But I would rather see it as the default.10
- Very specific for r, but Hadley Wickham wrote tons of stuff about how to build packages and has inspired me a lot. He uses a lot clean code practices and writes very clearly.
- So I ordered the wrong size coffee filters on Amazon 😶 I know they were probably already in the area but UPS guy got here in 10 minutes to pickup my return I need Amazon to deliver my packages that fast 😂5
- Today's lesson: never
sudo rm -rf /usr/lib/python3.5/dist-packages on a Fedora machine. It breaks everything.
So, any ideas how to fix it?4
- I am working on my website getmeroof.com, and I am using Angular 5. I have used many different packages, due to which the vendor bundle size is too big now. Is there any good way to figure out which library or package I am not using? Going through each and every file is taking too much time. Any help?14
- Why is it so fucking impossible to install SINGLE packages for texlive on ubuntu when my windows (miktex) can do that on its own?!?!
tried different solutions (e.g. tlmgr), nothing works reliably, I'm just texlive-fulling now, urgh.4
- It would be really nice if bower packages had a consistent naming convention as far as getting to the relevant file path. I'm always surprised how whacky it is. bower_component/special_plugin/code/dist/SpecialPlugin/Script.js ... nonsense!
- I need an open source library /package or any addons to implement the given functionality. Its consist of tinder like swiping for clicks. Sadly i coudlnt get any. I thought about angular animations. But didnt implement any. Each card has each questions for clients. Let me know your findings.
- Just a little question about the Flutter web version
I'm trying to run one of my old projects on web (Flutter 1.9)
But it gives me a bunch of errors about not importing some packages (flare_flutter is one of them)
So...can I do anything about it or should I just wait to them to support it?
- :(
- Spent an hour today trying to start arch update because gstreamer0.10 made me first delete a buttload of packages. Why did I even install it in the first place 😩
- !rant
Hey, God bless the pamac devs -
I'm not constantly trying to queue packages for install when others are already installing now, since it's all locked until the first install is finished
felt compelled to write about this idk
- apt, why do you try to install packages when you know there's not enough space. It really messes with my head in a morning when I have to clean up after you trashing my boot partition.
- You learn with more zeal when you pay to learn. Unless you have an abundance of internal motivation, pay for online training or learning packages, and you will see what I mean.
- Learned react recently. It's such an amazing library, way better than Angular for smaller web-apps. Any suggestions to get more in-depth with React, without randomly installing multiple packages doing the same job?9
- I need to write something using... because it’s bugging me. But I don’t know what.2
- If only USPS was so timely... They can't even fwd to the correct address...
Just got an email saying my packages were delivered.
And they were literally just delivered.4
- Fuck Arch Linux
Love the AUR, but the fact that every -Syu I've done led to system wide failure is unforgivable
I'm coming home Fedora, also screw you Debian and your broken packages9
- Has anyone else lost all compatibility with Netbeans 10 and javafx on linux? I can't for the life of me get any packages to work together to fix the problem. Even installing "Netbeans8" and jdk8 and openjfx8 doesn't fix the problem.2
- !rant
I took a lot of effort to find some not so famous nice NPM packages... Here's a list, that too an alphabetical one xD...
- I'm having a hell of a time figuring out how to use atom editor with my website somebody please direct me to the right package!!2
- So, anyone know of a way to use an npm package as a compilation of other packages? Could I write a personal package that when installed lets the user run code from a bunch of my favorite packages? For easy system installs?4
- I am developing a webapp with a couple of friends and we want to implement the Stripe API with Django REST. Does anybody know about good integration-test packages/practices that could be useful in this case?
Source: https://devrant.com/search?term=packages
Download Button - JSP-Servlet
Download Button HI friends,
This is my maiden question at this site. I am doing online banking project, in that i have "mini statement" link. And i want the Bank customers to DOWNLOAD the mini statement. Can any one help me
project guidance - JSP-Servlet
form, can anyone guide me through the project ? Hi maverick
Here is the some free project available on our site u can visit and find the solution...project guidance i have to make a project on resume management
jsp project - JSP-Servlet
jsp project sure vac call management project
Struts Books
, including JSPs, servlets, Web applications, the Jakarta-Tomcat JSP/servlet container...;
Free
Struts Books
The Apache... Servlet and
Java Server Pages (JSP) technologies.
Need E-Books - JSP-Servlet
Download file - JSP-Servlet
Servlet download file from server I am looking for a Servlet download file example
Free JSP Books
Free JSP Books
... Servlet 2.3 filtering, the Jakarta Struts project and the role of JSP and servlets.... Servlet and JSP technology is the foundation of this platform: it provides the link
download - JSP-Servlet
download here is the code in servlet for download a file.
while...();
System.out.println("inside download servlet");
BufferedInputStream...());
out.println("");
out.println("");
out.println("Servlet download
Free Java Books
Free Java Books
Sams Teach Yourself Java 2 in 24 Hours
As the author of computer books, I spend a lot...;
Servlet
and JSP Programming
This IBM Redbook provides - JSP-Servlet
project i have to do a project in jsp...
plz suggest me some good topics.
its an mini project..
also mention some good projects in core java...
reply urgently Hi friend,
Please visit for more
help on project - JSP-Servlet
help on project Need help on Java Project
upload and download files - JSP-Servlet
upload and download files HI!!
how can I upload (more than 1 file) and download the files using jsp.
Is any lib folders to be pasted? kindly... and download files in JSP visit to :
java project - JSP-Servlet
java project i am going to do a mini-project on computerization... mini-project...
where all the students can enter into the application... over to the ID card manufacturers..
so this is the idea of my mini-project
Upload and download file - JSP-Servlet
Upload and download file What is JSP code to upload and download............
Now to download the word document file, try the following code...();
response.getOutputStream().flush();
%>
Thanks I want to download a file
Servlets Books
;
Books : Java Servlet & JSP Cookbook... leading free servlet/JSP engines- Apache Tomcat, the JSWDK, and the Java Web Server...;
JSP, Servlet, Struts, JSF, and Java Training
Download the following JSP books...
A JSP directive affects the overall structure of the servlet that results
servlet and jsp - JSP-Servlet
servlet and jsp can any one give me jsp-servlet related project-its urgent-
1-chat application
2-bug tracking system
3-online shopping
4-online...://
Thanks-Servlet - JSP-Servlet
JSP-Servlet how to pass the value or parameter from jsp page to servlet and view the passed value
jsp-logout - JSP-Servlet
jsp-logout hai friends
please provide me a powerful login & logout programs for my web application regarding my project.
thanks...://
Send Email From JSP & Servlet
J2EE Tutorial - Send Email From JSP &
Servlet... , the correct practice is to invoke the RMI
through a servlet. In the JSP...' are
collected by the servlet and then processed for sending the mailor
jsp/servlet - JSP-Servlet
jsp/servlet How to create and save the excel file on given location using jsp/servlet? hi Geetanjali,
Read for more information,
Thanks
JSP - JSP-Servlet
JSP Hi!
In my project i have to send an email to the registered at the time of their registration.
I dont know how to send an email from JSP... email in jsp visit
GRID IN JSP - JSP-Servlet
GRID IN JSP I m creating one ERP project in which i need to fill the data from the data base in a grid in jsp page. So pls any one provide the code and servlet
jsp and servlet what is the difference between jsp and servlet ? what is the advantages and disadvantages of jsp and servlet
JSP - JSP-Servlet
JSP Designation
Project Manager
Project... to write code in jsp for getting database values..
"
Means in the first select box we have Team Leader,Team Member,Project Manager, etc.....
When
Downloading in JSP - JSP-Servlet
friend,
For download the file in Jsp visit to :
Thanks... R.Ragavendran.. How are you roseindia team? I am having an irritative problem in my JSPter
Jsp - JSP-Servlet
Jsp Hi! Friend thanks for your reply. Now it is working. My project has been completed with your help..
I got a new project. In that project I am...://
Thanks
import project from eclipse - JSP-Servlet
import project from eclipse i have two jsp project in eclipse workspace one project run in eclipse.
how import second project in eclipse editor
please tell me which file are required to run jsp project in lib directory
JSP-Servlet - JSP-Servlet
JSP-Servlet how to pass the value or parameter from jsp page to servlet and view the passed value.
Hi Friend,
Please visit the following links:
jsp code - JSP-Servlet
jsp code hi
i am doing project work i am generating time table for this i have taken form
courseyear textbox
semistername textbox
no of periods per day textbox
no of classes per subject in a week textbox
no of lab subjects
JSP - JSP-Servlet
JSP Hi Sir I am developing the project for online exam.Here i am facing one problem that i need to show the questions one by one in the same page...://
Thanks
JSF Books
to Java developers working in J2SE with a JSP/Servlet engine like Tomcat, as well... The course is usually
taught on-site at customer locations, but servlet, JSP...JSF Books
programming - JSP-Servlet
programming hello, I am doing online exam project using jsp-servlet... to retrieve next question when click on next button in jsp page and that question will come in same jsp page.
please help me how can I do
online bookstore - JSP-Servlet
- Online bookstore - JSP-Servlet: I want to display some books, like an online shopping site. Please send me code for that using JSP and servlets.
- JSP code - JSP-Servlet: Hi! Can somebody provide a line-by-line explanation of the following code? The code is used to upload and download an image.
- Java - JSP-Servlet: Using Servlet, JSP, JDBC, and XML, how do you create a JSP/Servlet web application? You can also download the shopping cart application.
- JSP - JSP-Servlet: Need example JSP and servlet code.
- JSP, Servlet - JSP-Servlet: How can I pass a list of objects from a JSP to an Action? Please help me to do this.
- Disable of Combo - JSP-Servlet: Hi! Thanks for your fast reply. Please share the details with source code and specify the technology you have used, like JSP/Servlet.
- JSP vs. Servlet: If anyone has an idea about how to use all these technologies in one project, please share it with me. Thanks.
- Java - JSP-Servlet: Dear Deepak, in my project we need to integrate the Outlook Express mailing system with Java/JSP. Thanks and regards, vijayababu.m
- Tomcat Books: A JSP and servlet container that can be attached to other servers; covers the JSP 2.0 and Servlet 2.4 specifications. This tutorial walks you through more than just servlet/JSP programming, including Servlet 2.3 filtering, the Jakarta Struts project, and the role of JSP.
- JSP Programming Books: ...Pages, Marty Hall shows you how to apply recent advances in servlet and JSP.
- HTML - JSP-Servlet: To process a value in a JSP project, I have to set a text box as non-editable. What are the attribute and value to be submitted in the code?
- java and oracle - JSP-Servlet: I am developing a small project where I need to upload an employee's resume, store it in a database, and retrieve it using Java and JSP.
- session concept - JSP-Servlet: Hello friends, how can we track a window closing unexpectedly while a JSP project is running with sessions? This tracking should update the login status in the database.
Extra fit
extra_fit aligns multiple objects to a reference object.
New in PyMOL 1.7.2
It can use any of PyMOL's pairwise alignment methods (align, super, cealign, fit...). More precisely it can use any function which takes arguments mobile and target, so it will for example also work with tmalign. Additional keyword arguments are passed to the used method, so you can for example adjust outlier cutoff or create an alignment object.
There are several similar commands/scripts for this job, like "A > align > all to this" from the PyMOL panel, the "alignto" command, align_all.py and super_all.py scripts from Robert Campbell.
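The dispatch behavior described above (any callable taking mobile and target, with extra keyword arguments passed through) can be sketched in plain Python. This is an illustration of the pattern only, not PyMOL's actual implementation; the stub functions and names below are invented:

```python
# Illustrative sketch of the extra_fit dispatch idea: a method can be
# named by string or passed as any callable taking (mobile, target),
# and extra keyword arguments are forwarded unchanged.
def align_stub(mobile, target, **kwargs):
    return ("align", mobile, target, kwargs)

def super_stub(mobile, target, **kwargs):
    return ("super", mobile, target, kwargs)

METHODS = {"align": align_stub, "super": super_stub}

def extra_fit_sketch(selection, reference, method="align", **kwargs):
    """Fit every object except the reference onto the reference."""
    fit = METHODS[method] if isinstance(method, str) else method
    results = []
    for obj in selection:
        if obj == reference:
            continue  # the reference object stays fixed
        results.append(fit(obj, reference, **kwargs))
    return results

out = extra_fit_sketch(["1ake", "4ake", "3hpq"], "1ake",
                       method="super", object="aln_super")
```

Because the dispatch only requires the `(mobile, target, **kwargs)` signature, any external function with that shape, such as tmalign, slots in without special handling.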
Usage
extra_fit [ selection [, reference [, method ]]]
Example
This will align 4 structures on CA atoms using the super method. It will also create an alignment object.
fetch 1e4y 1ake 4ake 3hpq, async=0
remove not chain A
extra_fit name CA, 1ake, super, object=aln_super
Same, but with tmalign method (see TMalign)
import tmalign
extra_fit name CA, 1ake, tmalign, object=aln_tmalign
Twitter is simply a Web-based way to let certain people know what you are currently doing in 140 characters or less.
That's the short definition.
The long definition is a bit more involved, but it does merit consideration. Twitter is one of the most successful entries in what the industry now refers to as social media, online social networking, or Web 2.0. Using Twitter, you gather a number of followers. Then, from time to time, to tell them what you are doing, you type a little story (known as a tweet in the industry) in the Twitter GUI and click a button. That tweet is then transmitted to all of your followers, and they can read, understand, reply, or not care accordingly.
Shakespeare tells us that "brevity is the soul of wit." This philosophy is enforced by the Twitter authorities, as tweets are limited to a maximum of 140 characters. Actually, that limitation has nothing to do with Shakespeare: It has to do with limitations on mobile devices at the time Twitter was developed. But it is a welcome enforcement, as it prevents unnecessary spam and verbal clutter within a single tweet.
Although the length of tweets is strictly enforced, the actual content of those tweets is not so strictly enforced. The original intent of Twitter was to tell your followers what you are doing right now. Needless to say, that is not always the subject of the millions of tweets issued daily. People will post opinions, headlines, links to their blogs, links to someone else's blog, and so on. So, new users of Twitter should be prepared to receive tweets that have nothing to do with the tweeter's current task.
Twitter also comes with an additional benefit associated with most (if not all) of Web 2.0: It is free. That's right, it doesn't cost you anything to join. It doesn't cost you anything to follow someone else. It doesn't cost you anything to have any number of followers. It doesn't cost you anything to tweet. It's just there for your consumption.
By now, you have a broad overview of Twitter and what it does. If you have not yet visited the Twitter site, now is a good time before you go on with the rest of the article. It will be much easier to understand the REST API that way.
The Twitter REST API
Having covered the basics, you're ready to move on to the stuff that Web application developers enjoy. Twitter is not only a useful tool within the social media space, it also offers developers a comprehensive array of services to enable automation of Twitter functionality. One of those services (and perhaps the most popular) is the REST API.
REST is an acronym for Representational State Transfer. The full explanation of everything entailed in a proper REST definition is outside of the scope of this article; however, it is available elsewhere on IBM® developerWorks® (see Resources). For the subject covered here, it is sufficient to state that REST enables developers to access information and resources using a simple HTTP invocation.
As an example, imagine that FishinHole.com operates a Web site that markets fishing tackle to its customers. Users who access the site can see a variety of lures, reels, rods, and so forth. They do this the old-fashioned way: by clicking links. In this way, FishinHole.com makes its services available to human beings.
But FishinHole.com also makes its services available to other Web applications by exposing its catalog of fishing tackle with REST. So, instead of clicking around, the Web application obtains information about lures, reels, rods, and so forth with a simple HTTP invocation. For example, returns a list of all lures offered by the company in XML format. As another example, returns information about item #343221 in the default format.
Think of REST this way: you can obtain domain-specific data simply by pointing a URL to a specific location. For our purposes here, that's really all it is. You can also think of it as a simplified Web service, but if you say that too loudly around the wrong people, you might find yourself in the middle of a debate.
Note: I should point out that FishinHole.com doesn't actually exist. So, if you paste any of those URLs into your browser, you might wonder why you got an error. I provide these examples simply to show the format of a typical REST invocation.
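Although FishinHole.com is fictional, the shape of its REST URLs is real, and the standard library can take such a URL apart. The URL below is the hypothetical one from the text:

```python
# Taking the hypothetical FishinHole.com URL apart shows the pieces a
# REST client works with: host, resource path, response format, and
# query parameters.
from urllib.parse import urlparse, parse_qs

url = "http://www.fishinhole.com/restservices/lures.xml?type=saltwater"
parts = urlparse(url)

host = parts.netloc                    # "www.fishinhole.com"
path, fmt = parts.path.rsplit(".", 1)  # resource path and format
params = parse_qs(parts.query)         # {"type": ["saltwater"]}
```

The same decomposition applies to the real Twitter URLs examined next.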
Would you like to see an example of a fully functioning REST API? One where you can actually paste URLs into a browser and get something significant returned? Then please read on.
Getting started: a simple example
You just finished reading Joel Comm's great book, Twitter Power, and you decide that today you will begin to achieve financial independence with an aggressive online marketing program using Twitter.
But you're also a great software developer. And that means that you prefer to let software do much of the work for you rather than actually do a lot of the Twittering yourself. Not only do you register a new Twitter account, but you also start to learn about the API so you can automate certain aspects of Twitter functionality.
The first thing you want to do is use the API to retrieve Joel Comm's timeline (see Listing 1). This makes sense, because he wrote the book that was such an inspiration to you.
Listing 1. Retrieving Joel Comm's timeline

http://twitter.com/statuses/user_timeline.xml?id=joelcomm
That's it. It's just that simple. Open another browser, paste that URL into the address bar, and see what you get.
Obviously, a closer examination of that REST invocation is warranted. First, the prefix should be self-explanatory.
The twitter.com portion is the domain name, which indicates that you will access a resource at an IP address to which that name is mapped. The http that precedes it indicates that you will use the Hypertext Transfer Protocol. This is frequently the case with REST.
Next comes /statuses. This is how Twitter specifies the REST function within a particular category. Think of it like a directory within a file system. In this case, the REST function that is invoked is categorized under statuses. In Twitter terminology, a user status is basically a tweet, as it tells you what the user is doing right now.
Next comes user_timeline. This is the actual function name to be invoked. It is intuitively named user_timeline because, in fact, you are retrieving a user timeline, or a series of tweets that the user has recently entered.
Don't miss the .xml extension that follows the function name: This is important. It is the format in which the timeline will be retrieved. Here, you use XML. Other valid formats are JavaScript Object Notation (JSON), Atom, and RSS.
Using standard GET notation, the parameters follow the function and are delimited by a question mark (?). In this case, there is only one parameter, id, and it specifies the Twitter name of the user whose timeline you want to view. Here, joelcomm is specified, because he is the one whose timeline you want to see.
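Putting those pieces back together, a small helper (our own, not part of any Twitter library) can assemble the request from Listing 1:

```python
# Reassembling the request from the parts just described. The base URL
# and parameter names follow the article; the helper itself is ours.
from urllib.parse import urlencode

def user_timeline_url(user, fmt="xml", **params):
    params = {"id": user, **params}
    return ("http://twitter.com/statuses/user_timeline.%s?%s"
            % (fmt, urlencode(params)))

url = user_timeline_url("joelcomm")
```

Swapping fmt for "atom", "json", or "rss" selects the other response formats the article lists.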
Evaluating the output
You realize after viewing the output from above that you prefer to receive your results in Atom format. Fortunately, this is not a problem at all and only requires a minor change (Listing 2) to the code in Listing 1.
Listing 2. Retrieving Joel Comm's timeline in Atom

http://twitter.com/statuses/user_timeline.atom?id=joelcomm
The REST invocation above yields something similar to the results in Listing 3. If you paste that URL into your browser, the browser might ask you to download the resulting output, because it is not configured to display files ending with the .atom extension.
Obviously, Joel's timeline will be different when this article goes to press (and at the time you read this) versus his timeline while I wrote this article. So the exact results will certainly vary.
Listing 3. Joel Comm's timeline in Atom format (abbreviated)
<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Twitter / joelcomm</title>
  <id>tag:twitter.com,2007:Status</id>
  <link type="text/html" rel="alternate" href=""/>
  <updated>2009-03-22T10:21:31+00:00</updated>
  <subtitle>Twitter updates from Joel Comm / joelcomm.</subtitle>
  <entry>
    <title>joelcomm: thinking...</title>
    <content type="html">joelcomm: thinking...</content>
    <id>tag:twitter.com,2007:</id>
    <published>2009-03-22T05:15:01+00:00</published>
    <updated>2009-03-22T05:15:01+00:00</updated>
    <link type="text/html" rel="alternate" href=""/>
    <link type="image/jpeg" rel="image" href=""/>
    <author>
      <name>Joel Comm</name>
      <uri></uri>
    </author>
  </entry>
</feed>
If you are familiar with XML, you'll find that most of Listing 3 is intuitive. If you're familiar with Atom, you'll find it even more familiar. If you're familiar with Atom and Twitter, you can probably skip this section.
Here's the breakdown on the code in Listing 3:
- Note that the root element is feed. This is standard according to the Atom specification. The namespace that Twitter uses is http://www.w3.org/2005/Atom, as specified as an attribute in the root element.
- The title element identifies the user whose timeline you are viewing. It also provides a bit of advertising for the Twitter Web site.
- The link element is also important: It specifies the URL you use if you want to view Joel Comm's timeline the old-fashioned way (by manually viewing it in your browser).
- The entry stanza represents a tweet. Although for the sake of brevity I only list one, in reality, you will see 20 of these in your output.
- Notice that title and content are identical.
- In Atom format, the content is preceded by the Twitter name, then a colon (:). Here, joelcomm: precedes the actual tweet.
- The actual tweet here is the oh-so-significant statement thinking.... That's Joel's latest tweet as I write this article. A cynical individual might infer that this indicates that there are certain times when Joel is not thinking or that Joel was lacking material for his latest tweet and felt the urge to simply type something. However, I leave such suppositions to others.
- The id element is required by Atom and is a globally unique identifier (GUID) for this particular tweet. All tweets across the universe of Twitter will have unique IDs so that they can be referenced individually.
- The published and updated dates and times are also identical. This makes sense, because Joel simply entered his tweet and never updated it.
- The first link element provides a link to this single tweet. Go ahead and paste it into a browser window, and you'll see that Joel was "thinking..." at that time.
- The second link element simply provides a link to Joel's picture.
- The author stanza provides information about the Twitter user. Here, you see Joel's full name and Web site URL.
After contemplating this much of the API, you realize that this is fabulous information and that you can easily write code to parse the Atom output. You, of course, can also parse timelines from other power users, not just Joel Comm. The parsed information can be mined for data relevant to your online marketing campaign. The only limitation is your imagination: the possibilities are endless.
Other parameters
The user_timeline function has several parameters in addition to id. You can also specify screen_name instead of id in the case above. If you happen to know the user's numerical Twitter ID, you can specify that in the user_id parameter.
You can use the since_id parameter to specify tweets with an ID higher than the number specified in this parameter (see Listing 4). Recall from above that Joel's famous "thinking..." tweet had an ID of 1369295498. So, the following URL returns tweets later than that one.
Listing 4. Retrieving Joel Comm's timeline since "thinking..."

http://twitter.com/statuses/user_timeline.xml?id=joelcomm&since_id=1369295498
The parameter max_id is basically the reverse of since_id. It returns tweets with IDs less than the one specified by the parameter value.
The parameter since allows you to apply an actual date to your timeline filter, as opposed to an ID. The page parameter allows you to paginate your results. A default user_timeline invocation returns the last 20 tweets. If you were to number these tweets 1-20, then the code in Listing 5 returns tweets 41-60.
Listing 5. Retrieving Joel Comm's third set of 20 tweets

http://twitter.com/statuses/user_timeline.xml?id=joelcomm&page=3
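The pagination arithmetic behind Listing 5 can be checked with a few lines (a sketch; the 20-per-page figure comes from the text):

```python
# With 20 tweets per page, page n covers tweets
# (n - 1) * 20 + 1 through n * 20.
PER_PAGE = 20

def tweet_range(page, per_page=PER_PAGE):
    first = (page - 1) * per_page + 1
    return (first, first + per_page - 1)

assert tweet_range(1) == (1, 20)   # the default invocation
r = tweet_range(3)                 # the request in Listing 5
```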
Other functions
Thus far, you've extensively examined the user_timeline function. However, the Twitter API provides other functions that are accessible through REST.
The public_timeline function (Listing 6) allows you to see the latest tweets across the entire Twitterverse, at least for those users who make their tweets publicly available.
Listing 6. The latest tweets

http://twitter.com/statuses/public_timeline.xml
The friends_timeline function (Listing 7) allows you to see the tweets of people you follow. This is the same as if you log in to Twitter and go straight to your Twitter home page.
Listing 7. The latest tweets of people you are following

http://twitter.com/statuses/friends_timeline.xml
If you copy and paste the URL in Listing 7 into your browser, you might notice that you are prompted to supply your Twitter user name and a password. Your home page in Twitter is a secure environment, as it contains links to your direct messages. So, this is a security measure on the part of Twitter. (I discuss security in more detail later in this article.)
The update function allows you to actually tweet using the REST API. In this case, the function invocation must be accomplished using a POST request (as opposed to a GET request). The parameter status submitted with the POST request contains the text of the actual tweet.
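A sketch of building, though not sending, such a POST request with Python's standard library. The status parameter comes from the text; the exact endpoint path is an assumption modeled on the user_timeline URL:

```python
# Building (not sending) the POST request for the update function.
# Supplying data= to Request makes urllib issue a POST; the body is
# the form-encoded status parameter described in the article.
from urllib.parse import urlencode
from urllib.request import Request

body = urlencode({"status": "Reading about the Twitter REST API"})
req = Request("http://twitter.com/statuses/update.xml",
              data=body.encode("ascii"))  # data= makes this a POST
```

Sending it would also require the authentication header covered later in the article.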
The replies function returns the 20 most recent @replies to the authenticated user. Basically, @replies are tweets directed specifically at a particular user. For example, if you tweet @joelcomm are you done thinking yet?, that message appears as one of a series of messages directed in particular to Joel Comm. He can see these messages by clicking a link on his Twitter home page. However, @replies are also visible to all users following the user who issued the reply.
It is beyond the scope of this article to explain all the REST API functions in complete detail. However, they are clearly documented in the API documentation. For a link to that documentation, see Resources.
Limitations on API use
Using the Twitter REST API is not a carte blanche to do whatever you want. Twitter has placed certain restrictions on the use of its API to prohibit bandwidth hogs from jeopardizing the usefulness of the feature set.
For starters, you are only allowed a maximum of 100 requests per hour. Although this limit applies only to GET (as opposed to POST) requests, it is still a good rule of thumb to follow. If you exceed the limit, your resultant document from the REST invocation tells you so. If, for whatever reason, you must invoke the Twitter REST API more than 100 times per hour, you can request whitelisting from Twitter.
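A client could guard that ceiling on its own side with a sliding-window counter; this is a sketch, not anything Twitter provides:

```python
# Client-side guard for the 100-requests-per-hour ceiling: refuse to
# send once the trailing hour already contains the allowed number of
# requests.
from collections import deque

class RateLimiter:
    def __init__(self, limit=100, window=3600.0):
        self.limit, self.window = limit, window
        self.sent = deque()          # timestamps of recent requests

    def allow(self, now):
        while self.sent and now - self.sent[0] >= self.window:
            self.sent.popleft()      # drop requests older than 1 hour
        if len(self.sent) < self.limit:
            self.sent.append(now)
            return True
        return False

rl = RateLimiter(limit=2, window=3600.0)  # tiny limit for the demo
demo = [rl.allow(t) for t in (0.0, 1.0, 2.0, 3601.0)]
```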
Another limitation is that a maximum of 3,200 statuses can be returned when using either the page or count parameters.
In addition, Twitter requests, but does not demand, that you observe other limitations. For example, Twitter advises that you use the page attribute over the count attribute. The company also asks that you cache results locally as opposed to requesting the same status repeatedly.
Authentication
As I mentioned earlier, certain functions require authentication. If you want to use the Twitter REST API and take advantage of those functions, you must include your credentials in the request. Otherwise, you will get a status code 401 for your reply.
As of this writing, Twitter only supports HTTP basic authentication, which means that the request header must contain your user name and password in a Base64-encoded format (encoded, not encrypted). You will then have full access to the Twitter API functions as though you had logged in to Twitter from your browser. For more information about basic authentication, see Resources.
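For illustration, this is how a Basic authentication header is formed; note that the credentials are Base64-encoded, which is reversible encoding rather than encryption (the user name and password here are made up):

```python
# Forming an HTTP Basic authentication header: "user:password",
# Base64-encoded, after the "Basic " prefix.
import base64

def basic_auth_header(user, password):
    token = base64.b64encode(f"{user}:{password}".encode("ascii"))
    return "Basic " + token.decode("ascii")

hdr = basic_auth_header("joelcomm", "secret")
```

The resulting string goes into the Authorization header of each request; because it is trivially decodable, it should only travel over trusted connections.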
Currently, Twitter is working on a way to enable OAuth authentication for secured requests. As of this writing, that is still in development.
Conclusion
Twitter is a fabulous entry in the Web 2.0 genre. Using Twitter, you can microblog your way to building an entire online network of individuals who share common interests with you.
Using the Twitter REST API, you can automate just about everything you can do with Twitter manually. You can programmatically access a specific user's timeline. You can reply to that user, either directly or indirectly. You can search a user's tweets for information specific to your own interests. You can filter tweets based on certain criteria and display those tweets on your own blog.
The possibilities are endless.
Resources
Learn
- Build a RESTful Web service (Andrew Glover, developerWorks, July 2008): Read an excellent explanation of Representational State Transfer in an introduction to REST and the Restlet framework.
- RESTful Web services: The basics (Alex Rodriguez, developerWorks, November 2008): Discover another excellent overview of REST and its basic principles.
- HTTP Basic authentication: Start with the Wikipedia Basic authentication entry, a great place to learn about Basic authentication.
- Twitter REST API documentation: Get a comprehensive overview of the entire API.
- Twitter site: Explore the Twitter service. Try it and be connected with friends, family, and co-workers as you exchange short messages about what you are doing.
Steve Ballmer has a hard job. Being the CEO of a company the size of Microsoft is brutal and exhausting. Against his nature, Ballmer has been trying to change his own persona and the company culture. So far, he appears to have made progress. In my mind, Microsoft is a more mature corporate entity than it used to be. The startup mentality is important to hold on to, but isn't functional as the core value of a $30 billion company. What Ballmer needs to do is hold on to the best of the existing culture while transforming it into something new. Tough job. I couldn't do it.

Editorial Notice: All opinions are those of the author and not necessarily those of osnews.com.
Ballmer has also re-engineered himself. His combative, hardball salesman nature is inappropriate for a Fortune 500 CEO. You can insert any bean counter joke you want, but CEOs have to balance a lot of different interests. Almost nobody gets it right. Ballmer has done pretty well. But it's a strain, and sometimes it shows. It did yesterday in Orlando.
Ballmer was onstage at a Gartner sponsored Tech love-in. I don’t know what he was led to expect, but what he got were some pretty sharp questions. Some of the sharpest were about security, and the constant drumbeat is clearly getting to him. In what I am guessing was a “shoot-from-the-hip” throwback to the old Ballmer, he blurted a few doozies. When asked if open source software is not by definition more secure than closed, he said, “The data doesn’t jibe with that. In the first 150 days after the release of Windows 2000, there were 17 critical vulnerabilities. For Windows Server 2003 there were four. For Red Hat (Linux) 6, they were five to ten times higher…There’s no roadmap for Linux. There’s nobody to hold accountable for security issues with Linux. There’s nobody sort of, so to speak, rear end on the line for issues.”
Obviously, the data is suspect. Any comparison between Windows and Linux security bulletins has to take two things into account. First, Red Hat comes with more than 1000 applications. They issue security bulletins for all of them. A Sendmail or MySQL security problem gets publicized the same as a kernel issue. Windows security bulletins concern Windows and its associated applications only. Thus a raw number comparison of bulletin frequency is misleading. In addition, MS and its minions have cited total bulletins for Linux compared to Windows. This is equally misleading. When MS discovers a problem, they issue a bulletin. When a Linux application discovers a problem, every distribution that carries that application issues a bulletin. That can mean more than a dozen bulletins for an important problem. But not everyone needs to pay attention to all the bulletins. I use Mandrake Linux, and ignore bulletins concerning RedHat, SUSE and the rest.
The second reason the data is suspect is that Microsoft has occasionally changed the definition of “critical vulnerabilities”. This raises or lowers the number of criticals without changing the overall number of actual security issues. Are the numbers Ballmer cited critical vulnerabilities by today’s definition or those in effect at the time? And does MS use the same kind of criteria in categorizing Linux vulnerabilities as it does its own?
Apart from these two data problems, there is a more substantive objection. By choosing the appropriate time periods, one could “prove” that winter is hotter than summer. Comparing Windows Server 2003 with RedHat 6.0 (released in 1999) sounds like that kind of exercise. In terms of the actual number of security issues that one needs to act on, there isn’t much doubt that Windows is way ahead (behind?). Groklaw did a wider comparison of the numbers with predictable results.
Ballmer also poured scorn on the patching process in the open source world, contrasting Microsoft's "process that will lead to sustainable level of quality" with open source fixes contributed by "someone in China in the middle of the night."
Like I said, it's getting to him. They may have "a process that will lead to sustainable level of quality", but it hasn't so far. And the "someone in China in the middle of the night" isn't an accurate characterization of the Linux process either.
Mr. Ballmer’s statements don’t actually bother me much. Although his inaccurate “data” can’t go unchallenged. As I said above, this seems to me a throwback to an older, more combative persona. I expect he’ll snap out of it.
What is worth commenting on is the potential he and Mr. Gates show for “executive insulation syndrome”. This is a little known business malady that I’ve just made up. The most blatant example of the syndrome was Alex Trotman. When Mr. Trotman became President and CEO of Ford in 1993, he admitted that he had never in his life actually bought a car. He joined Ford in 1955 as a young man and drove nothing but company cars. Buying a car is a wretched process, and someone who has never done it can’t understand how unpleasant it really is. Or the essentially adversarial relationship it creates between company and customer.
I am guessing that Mr. Ballmer has a small army of IT people that make sure everything he, Gates and the other mucky-mucks touch is all smooth and seamless. I wonder how well he understands what a pain it is to run a system. Any system. Those that actually run multiple operating systems know very well that Linux is not as great as people say, and that Windows is not as bad. But they also know that Linux is unquestionably more secure than Windows. It comes like that out of the box. Windows is insecure out of the box. You can make it secure, but with the patch-a-minute regime in Redmond, it's a lot of work to keep it that way. The change to monthly patches doesn't actually improve the situation. It reduces the workload, but leaves more vulnerabilities unpatched for longer periods.
The "we're better than Linux when it comes to security" line seems au courant at Microsoft lately. Bill Gates said last week in Germany about security patches, "We've gone from little over 40 hours on average to 24 hours. With Linux, that would be a couple of weeks on average." What a wacky guy. Really though, this kind of statement is just self-defeating. It creates a no-win for MS. First, there's nothing Gates or Ballmer can say to convince me Windows is more secure than Linux. Because in my daily experience, it's not. Second, even if it were, that would help me the customer how? I have multiple Windows machines to take care of, and switching to Linux across the board is not an option. If it were, I would have. Mr. Ballmer has famously stated a new dedication to customers. I believe he means it. But he should start by dealing with reality rather than spin. Whose rear end, so to speak, is on the line at Microsoft?
>Of course he picks redhat to compare the amount of security
>advisories have been issued for windows. Redhat is probably
>the most hacked up distro out there.
And yet, it’s the company that _is_ making a profit. It’s always that “use-that-other-distro” argument that makes no sense to me. Well, I agree: “give me a break.”
Whenever his mouth opens, all that comes out is FUD. I just get so irritated at the sight of him. He has no idea what OSS is at all. It really just seems like he has an OSS advisor or something who tells him how to answer questions.
You would prefer… his company endorsed open source? Somehow that doesn’t seem like a winning business strategy.
There is an excellent overview of Ballmer's recent talk over at Groklaw.net. PJ does an excellent job of correcting the gross errors in Ballmer's speech.
I could go on and on about the errors of MS’s ways but that has been rehashed here and various other places many, many times. The bottom line is this, to use an old, tired expression: what comes around, goes around. MS, you want to be an arrogant, monopolistic bully…fine. But it will eventually come back around to you. It is your turn to pay the price.
Well, I honestly don't think that Slackware is out there trying to break the bank. Although it's basically a four-man team, I'm sure they're not doing too badly.
And Debian is a non-profit distro. I'm completely missing your point. Are you agreeing with me or trying to start an argument?
The point of linux isn’t necessarily to make a profit. The two companies (while one is commercial) aren’t out there to run m$ over. What exactly are you saying? Should I use redhat because they’re making a (very small) profit? No, that doesn’t make sense. I use what works for me.
One thing is certain: whatever his status, he is still just "CEO." Gates is still the Chairman, and he is the guy with the brains in the family. I don't care about Microsoft propaganda; what I care about is products. And from my knowledge, the new era of Microsoft products will definitely change people's perceptions about what software is all about. This is evident in the consumer-level Windows XP. It's an improvement. And all this whining when it came out: "buy it now, it doesn't crash as much," etc. Of course that's how software evolves. At first it sucks, then it gets better. And if people believe in a company and a vision, they will go with the flow. There are two types of services; one is what some call "mission critical," like nuclear facilities and NASA. They don't use $MS products for their critical data. They use it for typing memos and making Excel spreadsheets. Maybe they use WebEx and interactive applications for presentations. Business is about money, and software that enhances productivity will get bought. I haven't seen that from Linux yet; sure, OpenOffice works, but not well enough. It will take many moons to have the features of Office. So get outside and learn how the world works. Anyone who thinks Linux will get to the desktop (this is my main argument) has to realize there had better be billions in R&D before anything can happen to put a dent into Microsoft's Ferrari. So my advice to the college kids compiling libdvdcss is to grow up; when you get a real job, you will learn why Microsoft runs the software world for businesses.
You would prefer… his company endorsed open source? Somehow that doesn’t seem like a winning business strategy.
It’s working for Apple, why not?
But seriously, I understand that OSS is a major threat to MS. However, they need to at least do their homework so they can go out and not sound like idiots. They also need to actually do something innovative that addresses security, and quit releasing crap like Office 2003. Office 2003's only new "feature" is not being able to open your documents where you want when you want. Absolutely ridiculous.
Mmh. I don't want to offend the contributor, but does he have proof that he really said the following sentence:
[[ The fact that someone in China in the middle of the night patched it ]]
I don’t see any link with the interview. If it’s a real interview, can you prove that the journalist doesn’t misrender what he really says as it is often the case
I regard this sentence as just plain, ordinary racism, and I have a hard time believing that the CEO of one of the biggest companies said it in public.
Wait for replies.
“Now there’s a mature Balmer.”
That he was confident about himself enough to do that without caring what people might think… Yes, I would say that it shows him as mature.
On the other hand posting about it on web forums and calling someone they don’t know “monkey boy” doesn’t say anything positive about the poster…
[i]I don’t see any link with the interview. If it’s a real interview, can you prove that the journalist doesn’t misrender what he really says as it is often the case
"…Most people who are putting their software under open source are doing so because it wasn't very successful when it was sold. And if something is not very successful sold, why not make it free. That's not where we come from; we're trying to build software that actually builds value."
–Steve Balmer
He's such an idiot. OK, so Samba isn't very good; um, it's faster than Windows 2003. KDE, GNOME, Gaim, Grip, Mozilla: such horrible pieces of software, I know. It's too bad they never worked out commercially. Oh wait, they were never being sold.
BTW this quote came off of the video clip of his interview on
It’s working for Apple, why not?
Apple is fighting the status quo. They need a host of interoperability tools in order to be competative in a Windows dominated market, and many of these they have pulled from open source projects.
Microsoft *IS* the status quo. Interoperability with 3rd party programs is their enemy, as this opens the door for consideration of the alternatives.
However, they need to at least do their homework so they can go out and not sound like idiots.
Whether or not they sound like idiots to someone with a technical background has little effect on the public perception.
They also need to actually do something innovative that addresses security.
Thanks for the link.
It comes from a major site, and there is a clip. While I'm not able to see it, I must admit the sentence was really said.
Sigh.
This OS war pertubs many people ; I can only hope they will calm down and use better arguments in the future.
Actually, I'm quite sure that I don't want to administer systems and firewalls without knowing why rules are set up.
I'm too paranoid to believe that this feature wouldn't be abused. Let's say the RIAA wants MS to close some ports globally...
“Business is about money and software that enhances productivity will get bought. I haven’t seen that from Linux yet, sure Openoffice works but not well enough. IT will take many moons to have the features of office. So get outside and learn how the world works.”
The argument would be stronger if you could produce a list of features not available in OpenOffice.org and widely used in 'real business'.
I'm using Linux/OOo for two years now and have never missed a single feature. 'Business' uses MS products because of their market dominance.
Just as Linux users often refer to Windows 95, MS refers to Red Hat 6.0. Everyone is full of unfair comparisons.
I do find it strange that an article based solely on bashing Ballmer for not using 'real', provable facts makes such a blatant statement as
“But they also know that Linux is unquestionably more secure than Windows”
At least Ballmer's 'facts' are simply derived from using favourable metrics.
Bash Ballmer all you want, but don't use his same style of 'rhetoric' to promote Linux in the same article.
Yamin
Actually, Office 2003 is what Office XP should have been. It has far more updates, features, and new programs than XP did. Outlook 2003 is worth it alone for the corporate environment. The company I work at is thinking of rolling out Outlook 2003 now because of the changes and features it brings to the table.
Microsoft does a lot of stupid things and does quite a few things badly. Yet it does learn and it does get better. Yes, it improves slowly and oftentimes with questionable business practices, but… it's making money and it's at the top. *shrugs* Unless it gets shut down by the government… it's going to be the global leader when it comes to operating systems, office applications, etc. for the foreseeable future. Realize it, get with it… and try to make Linux better. Instead of constantly glorifying Linux… make it better. Fix the problems, of which there are leagues.
@skaeight
It’s working for Apple, why not?
Is it really correct to say Apple has endorsed OSS? Hitching one’s cart to open source is not the same as endorsing it. After all, the source for Aqua, Finder, iTunes, &c are not publicly available.
“Is it really correct to say Apple has endorsed OSS?”
Absolutely.
“Hitching one’s cart to open source is not the same as endorsing it.”
Considering the fact that the Free Software Foundation endorses Apple’s open source license for darwin and because Apple has actively contributed code to many open source projects, not only do they endorse it, but they support it.
“After all, the source for Aqua, Finder, iTunes, &c are not publicly available.”
That does not make them any less supportive of open source software.
I wish you people would recognise these facts and stop trying to pick nits all the time.
Is it really correct to say Apple has endorsed OSS? Hitching one’s cart to open source is not the same as endorsing it. After all, the source for Aqua, Finder, iTunes, &c are not publicly available.
======
It is correct to say that Apple benefits a lot from free software. It must be said that they give some things in return (Darwin Streaming Server, Compiler Tools, Rendezvous, WebCore, X11). It must be said that the APSL 2.0 is this time really free.
Obviously, though, they take more from free software than they give in return. I can see two domains where Apple could make a good PR move and not hurt (who said benefit?) their business:
* Help the gnustep.org project achieve its goals (implementation of the NeXTSTEP API). More software for Mac OS X won't hurt. Some devs are reluctant to develop apps for Mac OS X. If they can develop Cocoa apps not only for Mac OS X but also for the following platforms (), they will feel better.
* Microsoft Office is a big vendor lock-in. The NetBIOS/SMB networks were a big vendor lock-in, but Samba has broken it. In the domain of office suites, AppleWorks just cannot compete. What is needed is to change the rules of the game with an app that has the SAME features, BUT is cheap, BUT is cross-platform, BUT has an open XML file format. This app exists and it is OpenOffice.org. A basic port for Mac OS X + X11 already exists. In my opinion, Apple should move its ass and turn it into a just-working Aqua application. It's up to them.
They have no obligation to do so, but I think it would be clever. Hope that helps.
Apple Public Source License Now FSF Approved
“Is it really correct to say Apple has endorsed OSS? Hitching one’s cart to open source is not the same as endorsing it. After all, the source for Aqua, Finder, iTunes, &c are not publicly available.”
Apple have done something which very few other companies have so far managed: to utilise Open Source (OS) software in a way which enables them to make a profit and still keeps the community happy! They take an OS product, polish it, add essential features to it and release it as their own alongside proprietary products, while returning their improvements to the community for everyone else's benefit.
Just because a product uses OS doesn't mean it all has to be OS. OS software such as Konqueror/KHTML, Mozilla and OpenOffice provide good non-proprietary base platforms for others to extend and sell commercially. The more you improve and integrate them into your targeted environment, as Apple has done with its OS-based products, the more perceived value they have. Willingly adhering to the requirement to return your changes to the community means the project as a whole benefits and the original developers accept you as a member of their community. A symbiotic relationship which so many closed-minded "business men" just don't get.
Ballmer and Gates are the epitome of this type. Men with bad cases of tunnel vision. If MS were to have a complete rethink, admit they were wrong in some of their opinions, and adopt some OS projects as the new bases for their products, everyone would benefit. Microsoft would immediately improve their standing in the security community and could direct their resources more efficiently at improving their core products; their customers would have more reliable and open products and avoid being locked in; and the OS community would gain valuable returns from MS's investments. Will this happen? Never! Ballmer and Gates will reap what they have sown by stifling competition in the past, and it will humble them, if not completely destroy them. They will never accept OS even though it is the only thing which can ultimately save them.
Do you even know what patches do? They usually fix buffer overruns, actually. One thing they never do is block ports. Open ports and remote exploits are two different beasts, friend. Sounds like you’ve bought into Microsoft’s techno-babble propaganda.
Linux & OpenOffice, because I don't want to put my customers in a bind with Microsoft's file format, so I use .rtf and OpenOffice's PDF export tool. Why would I want to make ANYONE pay several hundred dollars for an office suite just to read & edit a few pages of documents…
Whoever says you have to use Microsoft's OS & Office to make professional documents is just a Microsoft shill.
That's true; however, you need to trigger the exploit in some way, and to date the most common method of doing that is via an open port that exposes the service with the exploit.
I was unpatched when Blaster hit and yet I wasn't affected. Why? Because my hardware firewall was configured properly and the ports that Blaster was looking for were not open on my system.
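A default-deny setup like the one described above can be sketched as a small packet-filter ruleset. This is a hypothetical Linux iptables-restore fragment for illustration only (the poster's hardware firewall would use its own configuration language, and the specific rules here are assumptions, not their actual config):

```text
*filter
# Default policy: drop anything inbound that we did not ask for.
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
# Allow loopback traffic and replies to connections we initiated.
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Everything else -- including TCP 135 (DCOM RPC) and TCP 4444, the ports
# Blaster relied on -- falls through to the DROP policy.
COMMIT
```

A ruleset in this format would be loaded with `iptables-restore < rules.v4`; the point is that the worm's probe ports are never reachable, patched or not.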
Yeah, Ballmer has a TOUGH job trying to preserve 80 percent profits on products for which there's a free OSS equivalent.
Let's be honest, the ONLY reason those profits are still around is Office file format lock-in and Windows lock-in through the large universe of Windows apps. That's it. That accounts for the billions in profit.
If we could have an open file format for Office-type files and Linux versions of all those Windows apps out there, then Ballmer and Gates would be forgotten and never heard from again.
“I regard this sentence as just plain ordinary racism, and I have a hard time believing that the CEO of one of the biggest companies said it in public.”
Racism? Where is the racism, exactly?
Ports are already blocked fine by intelligent firewalls, yet systems still get compromised. The person I was replying to implied that changing which ports get blocked is a replacement for patching. Companies' networks get compromised through ports they cannot block.
I'm puzzled whenever I hear corporate guys talk about responsibility, accountability or indemnification. Can anyone tell me which company has ever received a check from Microsoft to compensate their losses due to Redmond's malware? How many IT executives went to jail because their product caused problems like identity theft, the shutdown of a US Navy destroyer, or the power shortage throughout the East Coast? Where in Microsoft's EULA does it say they're willing to indemnify customers? On the contrary, they're among those who created their own private law enforcement agency (the BSA) to wreak havoc in businesses (ask Ernie Ball about this), schools and municipalities.
If wealth causes Gates and Ballmer to forget the meaning of words, maybe they should stick to exclamations (great, wrong, super, cool, …). This way, we won’t have to guess what they’re actually saying.
I didn't say racism, I said plain ordinary racism, the kind of thing you see every day, a little like making comments about the physique of a woman in politics when you strongly disagree with her. This last thing happens in the OS community
in Europe these days against Arlene McCarthy (England, proposal for software patents), and I try to fight it every time I see it.
Check the sentence and tell me what is wrong.
[[ The fact that someone in China in the middle of the night patched it … ==> no sustainable level of quality ]]
Absolutely right. One of the greatest ironies is to hear Gates go on and on about protecting Intellectual Property. That’s what’s so evil about Open Source; can’t protect IP.
But Microsoft has been convicted several times and settled out of court many more for stealing code.
If I had to use Linux for a year I would quit using computers after a week and join a Mennonite Community. You are right I am addicted to what works.
Just before you posted, you may have viewed a little link
"Submission of a comment on OSNews implies that you have acknowledged and fully agreed with THESE TERMS"
So, if you didn't agree with point #4, and what bothers me more, point #3, it wasn't worth posting.
More, I don't see what is wrong with Steve Ballmer in this video. I think I would prefer a boss who acts from time to time just like a human.
Truly yours.
Apple isn't 100% open source. Only the kernel and some tools came from BSD-licensed software. The GUI is not open source, it's proprietary. Apple also aims at different target users than MS does, though I honestly can't define the exact difference.
By Bascule (IP: —.atmos.colostate.edu) > “Microsoft’s “securing the perimeter” strategy (i.e. automatically updated firewall/netfilter rules) is the most innovative approach to security I’ve seen in recent history. If effectively implemented, this would allow network security vulnerabilities,”
Can you tell me more about this feature? It sounds like a NIDS. Or like pfsync… or like PortSentry…
If the Windows box downloads firewall rules from a Microsoft.com machine "automagically", I see it as a hostile/trojan feature.
Apple isn't 100% open source. Only the kernel and some tools came from BSD-licensed software. The GUI is not open source, it's proprietary
Which company makes 100% free software? Red Hat, Mandrake, not many more. All others make proprietary software: SuSE has YaST; IBM, Oracle, HP, … have participated in free software projects and have their proprietary software as well.
It doesn't usually make sense for a company to open-source what is at the core of its strategy. But there are very often some domains where it makes sense.
I tried to list two for Apple (gnustep.org and OpenOffice.org)
“Microsoft’s “securing the perimeter” strategy (i.e. automatically updated firewall/netfilter rules) is the most innovative approach to security I’ve seen in recent history. If effectively implemented, this would allow network security vulnerabilities,”
Can you tell me more about this feature?
Sure, read the TrustedComputing/TCG/TCPA/NGSCB/Longhorn/Palladium/ FAQ from this security expert guy
Prove it to yourself: quit for a year and use something else, anything else. Prove you are strong enough to quit your "junk" habit.
Too much hassle.
On the server side, Linux has been an easy move. On the desktop it's just not ready for me. I don't have time to screw around and find replacements for the software I run. There is no point anyway. The stuff I use is already paid for and it works.
“If I had to use Linux for a year I would quit using computers after a week and join a Mennonite Community. You are right I am addicted to what works.”
Hmm … let me see. Exactly how many times has my Linux system (since RH6 – currently RH9/XD2) actually not worked. Well there was the time I overwrote /dev/null by mistake (that’s how much of a Linux newbie I was – reminded me of when I ran Cleansweep on an early version of Win98 and it deleted a load of IE/Explorer system files – ahh memories :o) ). I tried to re-compile the kernel once and that didn’t go too well either (I *can* now but never bother – the RH stock is fine but at least I have the option). Driver support has got progressively better and now supports all of my hardware with at least generic drivers if not specific.
X occasionally locked up on me but that got diagnosed to a hardware fault in the end. A funny story actually as it was the very same fault causing problems in WinXP which finally got me using Linux full time (Mandrake 7 IIRC). Once I figured out what it was I found I could live without Windows but I couldn’t live without Linux so, although I have tried numerous distros, I am still using Linux to this day!
I'm not going to say Linux is faultless, but even after learning it from scratch and trying several flavours, I would still consider it to have been a very worthy investment which made me more productive than Windows ever did. That said, I am trialling BeOS R5 ATM and very much looking forward to giving Zeta a good thrashing. I think it might just be the comfortable middle ground I am looking for, with the power of the Unix command line I have grown to love and a GUI which is faster and more intuitive than Windows. We'll see how that goes though. Whatever happens, Linux will still live on in the various servers because it's so secure, reliable and easy to manage.
Addicted to what works? Yep, but it ain’t the drug you’re expecting!
“Which company makes 100% free software ? Redhat, Mandrake, not much more. All others make proprietary software : SuSE has YAST, IBM, Oracle, HP, … have participate in free software projects and have their proprietary software as well.”
Didn't know YaST was proprietary. Does Red Hat include such stuff too? If so, which?
I also think a setup tool like YaST is something totally different from a GUI like Apple made.
“I tried to list two for Apple (gnustep.org and OpenOffice.org)”
Sorry, I don't understand what you mean by this. What do you mean by "list two for Apple"? What did you try?
“Can you tell me more about this feature?
Sure, read the TrustedComputing/TCG/TCPA/NGSCB/Longhorn/Palladium FAQ from this security expert guy"
Didn’t Orwell innovate this?
Anyway, thanks for clarifying; I didn't know it was a part of TC(G)/NGSCB. I'm not so sure whether the "block everything, allow some" idea is really innovative… on firewalls it isn't. On servers with virtual OSes, chroots, et al., it isn't either. And finally, I at least know an implementation of this in NetBSD called verified exec, which also exists as a kernel patch for OpenBSD, called Stephanie. Seems very similar…
Excuse me, forgot to add URL for Stephanie
NetBSD docs somewhere at
“By Anonymous (IP: 12.105.181.—) – Posted on 2003-10-23 22:06:47
Apple Public Source License Now FSF Approved“
(6 august 2003)
No, NOT FSF approved. OSI approved. Which doesn't mean much to me, given they also approve this license, which is IMO far from free:
Didn't know YaST was proprietary.
Yep. Specifically, it does not allow commercial redistribution
of derivative works, so it ain't free software.
Does RedHat include such stuff too? If so, which?
Yes. Each distro develops its own tool; for this reason each setup tool only works 95%. I think it's a consequence of YaST being proprietary. Mandrake has urpmi and the drak* tools. I don't know the exact names for Red Hat.
“I tried to list two for Apple (gnustep.org and OpenOffice.org)”
Sorry, i don’t understand what you mean with this.
My English sucks 😉 I talked about that
Didn’t Orwell innovate this?
A guy said before that Microsoft really innovates in the security domain. I have to agree. Classical security tries to defend PCs from foreign attacks. DRM and trusted computing try to protect the PC against its user.
No, NOT FSF approved. OSI approved. Which doesn't mean much to me, given they also approve this license, which is IMO far from free:
=====
Nope. Version 2.0 of the APSL is both OSI AND FSF approved.
There is absolutely no doubt:
Microsoft’s “securing the perimeter” strategy (i.e. automatically updated firewall/netfilter rules) is the most innovative approach to security I’ve seen in recent history.
(1) Cisco NetRanger could dynamically reprogram perimeter routers to repel attacks back in 1999 when I worked with them.
(2) Dynamically reprogramming firewalls doesn't help you when your laptop users bring a worm in from outside (or when they, maliciously or through ignorance, download a trojan etc. off the net).
Remember this: when baseball legend Roger Clemens stood up because people were applauding him, the other team lauded him as well. When MSFT put Be out of business, it could be said that from that point forward, it would be very difficult to regain *that* sort of mutual respect.
And BeOS is like BSD and Linux, except we use a well fleshed-out GUI without a lot of cruft. Security experiments notwithstanding.
You can protect IP in OSS. You can actually manage it, and make money on it.
Also, Be people do it for way cheaper than free…
“Do you even know what patches do? They usually fix buffer overruns, actually.” — ThanatosNL
This brings me to an (alas, somewhat off-topic) question:
Why don't operating systems simply mark their executables' stack pages non-executable? It seems as if this would eliminate 90+% of the buffer overrun exploits possible.
No, actually I would say Microsoft suffers from attention deficit disorder. When people aren't paying absolute attention to Microsoft, Ballmer and Gates go nuts doing anything that can get the spotlight to shine back onto them. Say a lie or two about Linux, give something away for free; they'll try absolutely anything to get that industry spotlight back on them.
Linux now has the industry spotlight, and for all of Microsoft's anti-Linux and anti-UNIX remarks, it has done them diddly squat in stopping Linux's market share from increasing. It is time for Microsoft to suck in their bottom lip, find out what people like in Linux as a server operating system, and adopt the same approach.
Linux is popular because it is familiar (UNIX-like); dynamic (constantly evolving, with a transparent development process where implementations compete to get into the kernel and are chosen because they're superior); compatible (this allows many companies who have UNIX applications to get them ported to Linux with minimum fuss); and low-cost (ISVs love the fact that the operating system is no longer hogging up the huge amount of money it did before). Look at Red Hat Enterprise, for example: you can buy the biggest, meanest support package and, unlike with Microsoft, you don't need to pay for every computer accessing that server. It is a flat rate, meaning that you don't have to buy more licenses as more people access the server.
With money freed up there, more can be spent on third-party software, which is why IBM is happy. They're the middleware king, and if a customer saves money on the operating system, the customer can then be sold DB2 and a whole heap of other software without having a huge cost over their head.
One of the reasons you see Ballmer spouting out false and stupid phrases about Linux as if he had a bad case of Tourette's syndrome is because of stories like the one below…
Highway patrol gives Linux a green light
WASHINGTON, D.C. — As Microsoft’s support for Windows NT Server 4.0 grinds to a halt, many enterprises will be.”
Howdy,
Reading what Gates and Ballmer both have to say about Linux, I am struck by the realization that they seem to completely miss the real reason for Linux's success. They both focus on the fact that some people say Linux is cheaper and try to argue whether that is true or not. They seem to just dismiss any other Linux advantage out of hand. I know I did not leave the Windows world because of the cost. I would have kept paying them pretty much whatever they asked. I left because of the increasingly unreasonable license terms and DRM. It was hard at first. I thought Linux (and FreeBSD) were just weird, doing things the hard way. But as I have grown more experienced, I find that Linux is easier to use than Windows. Almost everything I do on a computer works better on Linux than on Windows, and the momentum is not in Windows' favor.
But I can’t decide if Microsoft’s blinders are a good thing or a bad thing. They clearly do not seem to understand the long term implications of what they say and do.
So, I don’t think Ballmer slipped at all. He said what he thinks. He’s just wrong, is all.
There was a good point in the article about how the "top of the food chain" has no idea what it takes for the little guys to survive…
For example, to understand a user and his motivation to move to Linux, they should ask themselves: "How many applications come with any flavor of Windows that help me as a user to get the job done, without paying extra for them?" The answer is: NONE!!!
And what you get with… let's say Red Hat Linux (8.0, 9.0):
office suite – oh yes! – OpenOffice.org
Webserver (Apache), fileserver (Samba), databases (MySQL, PostgreSQL), programming (C, Perl, …)…
and on top of that you find the system is secure out of the box; while your friends battle with Windows patches that break the system, you are being productive!!!
That’s why I gave up on Windows…
It seems as though people are confused and think that I said "securing the perimeter" was a good idea. That was someone's response to me saying MS needs to do something about security.
If MS's only tactic is to "secure the perimeter", that is the dumbest thing I've ever heard of. It's even dumber to say that it is "innovative". OK, here's some proof:
netstat -a of my linux machine at home:
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 *:x11 *:* LISTEN
tcp 0 0 *:ssh *:* LISTEN
tcp 0 0 *:631 *:* LISTEN
tcp 0 0 localhost:x11-ssh *:* LISTEN
netstat -a of my windows 2000 box at work:
Proto Local Address Foreign Address State
TCP charlie:epmap charlie:0 LISTENING
TCP charlie:microsoft-ds charlie:0 LISTENING
TCP charlie:1029 charlie:0 LISTENING
TCP charlie:1034 charlie:0 LISTENING
TCP charlie:2954 charlie:0 LISTENING
TCP charlie:2977 charlie:0 LISTENING
TCP charlie:3090 charlie:0 LISTENING
TCP charlie:3092 charlie:0 LISTENING
TCP charlie:3097 charlie:0 LISTENING
TCP charlie:3115 charlie:0 LISTENING
TCP charlie:3120 charlie:0 LISTENING
TCP charlie:3123 charlie:0 LISTENING
TCP charlie:3128 charlie:0 LISTENING
TCP charlie:3130 charlie:0 LISTENING
TCP charlie:44334 charlie:0 LISTENING
TCP charlie:netbios-ssn charlie:0 LISTENING
TCP charlie:1032 charlie:0 LISTENING
TCP charlie:1334 charlie:0 LISTENING
I run Slackware, and by default the only ports I have open are 22 (ssh) and 631 (cups). It also listens for X11 connections when I'm running X. Windows 2000, who knows why all those ports are open. It's stupid; my system is wide open (well, not really, I'm running TPF).
But my point is, it's not innovative to "secure the perimeter". Most other operating systems are already relatively secure by default. Yes, firewalls are a great thing, but you also have to worry about internal security policies (i.e. why does every version of Windows make Administrator accounts by default?)
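You can repeat the comparison above on any box by filtering a netstat listing down to just the listening ports. A minimal sketch, with the sample output hardcoded for illustration (a real system's output will differ; the poster's hostnames and ports are not reproduced here):

```shell
# Hypothetical `netstat -tln`-style output, hardcoded so the filter is easy to follow.
sample='tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN
tcp 0 0 0.0.0.0:631 0.0.0.0:* LISTEN'

# Column 4 is the local address; the text after the last ":" is the port number.
printf '%s\n' "$sample" | awk '/LISTEN/ {n = split($4, a, ":"); print a[n]}'
# Prints: 22 then 631 -- ssh and cups, the same two services left open above.
```

On a live machine you would run `netstat -tln | awk '/LISTEN/ {n = split($4, a, ":"); print a[n]}'` instead of using the hardcoded sample.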
I wouldn't put much store in what Missouri government officials do. They probably couldn't figure a way to siphon money off from their present contractors and MS.
I live in Missouri. It has one of the most corrupt state governments in the US. Around a decade ago, $60,000,000.00 earmarked for improvement of the interstate system got squandered into thin air. Some of the local construction companies were really upset at the time. No one paid attention. They thought the companies were just complaining because they didn't get the money. Now our highways are in dire need of repairs and it is not getting done. Look at the ratings for interstates: Missouri is one of the worst.
Is your computer at work on a local network? Is your computer at home?
“Yes firewalls are a great thing, but you also have to worry about internal security policies (i.e. why does every version of windows make Administrator accounts by default?)”
The local admin thing has to stop. Certain programs will not run unless the user is a local admin, and this is just plain stupid. I don't know about newer versions of MS Office, but in Office 97 the spell checker in Word was non-functional unless you were a local admin. The registry can be set to allow locked-down users to access the correct keys for many of these programs, but with a multitude of machines and apps, this is far from feasible.
In my opinion the above is the greatest security flaw with Windows and one that I don’t feel they truly are addressing.
Yes, both computers are on a network. My home computer also has a "true" SPI firewall between itself and the internet, so it's not like I'm saying that ports being closed is the ultimate in security (I'm pretty paranoid about security).
And that's exactly what I was saying: why doesn't Microsoft see it as a problem that just about every Windows user out there is using a "root" account? I remember when I first started using Linux and was still very much in the MS mindset; I was like, wow, this is a pain in the a$$, whenever I want to do something I have to do it as root (and before I learned about su and CLI text editors I would sometimes open X up as root, DOH!). Anyway, yeah, I agree, it doesn't even seem like MS sees this as a problem whatsoever.
I think Apple approached this issue perfectly. Basically, whenever a major system change is about to happen, they prompt you for your password. It's not a true "root" account approach, but it's better than nothing (although OS X does have a root account, you just have to enable it). I'm sure they didn't want to lock things down so much that the average user wouldn't be able to use their machine. MS really needs to learn that if they limited user accounts by default, viruses might become a non-issue like they are in other OSes, simply because unless I go out of my way to authorize an executable, it CAN'T run. It's impossible.
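The least-privilege habit being argued for here (work unprivileged, escalate only per action) can be sketched in a few lines of shell. This is only an illustration; the escalated command in the comment is a made-up example and assumes a sudo-style setup:

```shell
# Check whether this whole session is running with root power; day-to-day
# work should not be. uid 0 means every program you launch has full power.
uid=$(id -u)
if [ "$uid" -eq 0 ]; then
    echo "privileged session (uid 0): every program you run has full power"
else
    echo "unprivileged session (uid $uid): escalate per command instead"
    # e.g. sudo /usr/sbin/service cups restart   # one action, then back to normal
fi
```

This is the same model as the OS X password prompt described above: the default identity is harmless, and privilege is granted one operation at a time.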
Thanks, all you OSS naysayers out there! My consulting company works exclusively with OSS, and so far it's been a gold mine. People (especially business owners) are tired of Microsoft's licensing schemes and expensive forced upgrades (not to mention the crashes). I can replace NT servers with FreeBSD or Red Hat, switch Windows desktops to Mandrake or Red Hat running OpenOffice.org, and offer full support, all for less than the cost of running a Microsoft shop. I'm glad few people have awakened to realize that the latest OSS solutions (Samba, OOo, the various 9.* versions of several distros, etc.) are in fact ready for business RIGHT NOW. I'm glad because I'm getting in early while there is lots of money to be made. By the time you guys try to get on this ship (2-3 years), there won't be any more room.
Red Hat Linux is 100% free software; feel free to ask them.
“People (especially business owners) are tired of Microsoft’s licensing schemes and expensive forced upgrades (not to mention the crashes.)”
MS is going to have to modify their licensing, not just because of Linux, but because people aren't going to be told when they have to upgrade. It also imposes an artificial deadline on MS themselves. What are their customers going to say when they have to renew their Software Assurance license for XP and there was no upgrade provided during the last period?
About a year ago I bought Visio 2002 Pro. I wanted the Technical edition to go with it, but my rep at Insight told me it only came with Software Assurance, which would double the price of the product to $800.00. I told her no way.
Two months later they discontinued the product. I called her and asked what would have happened if I had bought the Software Assurance. She didn't have an answer for me. Not good.
As far as crashing goes, W2K and XP rarely crash. When they do, it is almost always hardware-related. The only time I have had to reboot any of my servers or PCs running W2K, XP or W2K3 is when installing patches (too often) or other software that requires a reboot.
As for my clients, they wouldn't like the idea of me putting OSS software on their computers. Maybe in the future. But I think they feel that would make them too dependent on me as their provider. They just aren't familiar enough with it. Best of luck in your biz though.
“Basically, whenever a major system change is about to happen, they prompt you for your password.”
Windows 2000 has been doing that for years: executing a program as a named account ("Run as").
But I agree: Windows has a problem with multiple users; more exactly, most non-Microsoft programs have problems: they still write configuration into Program Files, which is stupid!
It is actually very difficult to use Windows with several accounts, particularly on XP Home Edition. For example, I need to be root to have access to the infrared port!
Jim Parker
“This brings me to an (alas, somewhat off topic) question: Why don’t operating systems simply mark their executable’s stack pages non-executable? It seems as if this would eliminate 90+% of the buffer overrun exploits possible.”
Virtually every operating system does this… on CPU architectures that support it (e.g. Solaris has a non-executable user stack by default on SPARCv9 architectures). Unfortunately, the list of architectures that support this doesn't include x86. It's also not a silver bullet against buffer overflows.
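On ELF systems you can see whether a binary even asks for an executable stack by inspecting its program headers. A small sketch, assuming a Linux/ELF machine with binutils installed; `/bin/sh` is just an arbitrary example binary:

```shell
# Inspect the PT_GNU_STACK program header: flags "RW" mean the loader maps
# the stack non-executable; "RWE" means the stack stays executable.
if command -v readelf >/dev/null 2>&1; then
    readelf -lW /bin/sh | grep GNU_STACK || echo "no GNU_STACK header"
else
    echo "readelf not found (install binutils to try this)"
fi
```

Whether the kernel can actually enforce the "RW" request is the hardware question raised above: without page-level no-execute support, the flag is only advisory.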
skaeight
But my point is, it's not innovative to "secure the perimeter". Most other operating systems are already relatively secure by default.
Yes, but the instant a buffer overflow is discovered in any service (especially ones running as root), your system can most likely be completely compromised. The only ways to protect yourself on a typical operating system are to 1) disable the service or 2) patch/upgrade the service to a fixed version. Both of these require user intervention, and the first may take away a valuable resource.
"Securing the perimeter" is innovative because it allows services to continue to operate, but works without any user intervention. The need for user intervention in maintaining system security leads to millions of vulnerable systems which can be used for the purpose of propagating worms. Microsoft will be the first company to offer a means of automated, transparent system security that has little/no chance of impacting system operation.
Did anybody else notice how often Steve Ballmer referred to Linux Programmers as being “in China” in the video of the gartner interview? Why China? Open Source is coded by programmers resident in hundreds of countries on earth – why be so careful to specify one particular country?
The reason is rather cunning, and very cynical. China is the great Communist ‘Bogey man’ of the West. Mr Ballmer is intentionally inferring that Open Source is in some way associated with the ideals of communism, rather than wholesome All-american big business. This is classic FUD, and typically Microsoft.
It is also part of a strategy. The company caused much anger in the past by claiming that Open Source was in some way “un-American” and “like a cancer”. Open Sourcers were deeply offended that their principles of academic openness, intellectual fraternity and intensive peer-review were misrepresented as being akin to Communism – when of course the Open Source movement is most clearly akin to the finest principles of freedom and liberty. The insult is made more ironic by the fact that Microsoft itself has been found guilty of abuse of power and monopoly control. Rather ‘communist’-sounding charges, don't you think?
Samuel Johnson once said that “Patriotism is the last refuge of the scoundrel”. Perhaps he might have added that McCarthyism is the last refuge of the morally bankrupt.
Marque,

I downloaded the Lounge LED Controller app and tried re-installing it.
- Android asked if I wanted to replace the app - yes
- it looked like it took a while to install, and it said it was ok
- I opened the app and it indicated the following: "Unfortunately, Lounge LED Controller has stopped."

Please let me know if there is anything else I should be trying, as I am still very interested in testing this app.

BobbyD
Hello redcell,

It looks like you are going to have to remove the #include <EEPROM.h> and make sure you have downloaded the EEPROMex library in this post. That is where the conflict is happening... you just need to use the extended (EEPROMex) library.

Hope this helps!

BobbyD
#ifndef EEPROMEX_h
#define EEPROMEX_h

#include <EEPROM.h>

#if ARDUINO >= 100
ThermoduinoPro_beta.ino:24:10: error: 'A10' was not declared in this scope
ThermoduinoPro_beta.ino:31:19: error: 'A13' was not declared in this scope
ThermoduinoPro_beta.ino: In function 'void setup()':
ThermoduinoPro_beta.ino:982:12: error: 'A8' was not declared in this scope
ThermoduinoPro_beta.ino:985:12: error: 'A12' was not declared in this scope
ThermoduinoPro_beta.ino:988:12: error: 'A14' was not declared in this scope
#include <EEPROM.h>
//#include <EEPROM.h>
The "lan connection" issue I was having has now been sorted! (At least no network connection loss over the last week... XD)
bigger filehandles for NFSv4 - die NFSv2 die
By erickustarz on Nov 16, 2005
So what does a filehandle created by the Solaris NFS server look like? If we take a gander at the fhandle_t struct, we see its layout:
struct svcfh {
	fsid_t	fh_fsid;			/* filesystem id */
	ushort_t fh_len;			/* file number length */
	char	fh_data[NFS_FHMAXDATA];		/* and data */
	ushort_t fh_xlen;			/* export file number length */
	char	fh_xdata[NFS_FHMAXDATA];	/* and data */
};
typedef struct svcfh fhandle_t;
Where fh_len represents the length of valid bytes in fh_data, and likewise, fh_xlen is the length of fh_xdata. Note, NFS_FHMAXDATA used to be:
#define NFS_FHMAXDATA ((NFS_FHSIZE - sizeof (struct fhsize) + 8) / 2)
To be less confusing, I removed fhsize and shortened that to:
#define NFS_FHMAXDATA 10
Ok, but where does fh_data come from? It's the FID (via VOP_FID) of the
local file system. fh_data represents the actual file of the filehandle,
and fh_xdata represents the exported file/directory. So for NFSv2 and
NFSv3, the filehandle is basically:
fsid + file FID + exported FID
NFSv4 is pretty much the same thing, except at the end we add two fields, and you can see the layout in nfs_fh4_fmt_t:
struct nfs_fh4_fmt {
	fhandle_ext_t	fh4_i;
	uint32_t	fh4_flag;
	uint32_t	fh4_volatile_id;
};
The fh4_flag is used to distinguish named attributes from "normal" files, and fh4_volatile_id is currently only used for testing purposes - for testing volatile filehandles, of course - and since Solaris doesn't have a local file system that doesn't have persistent filehandles, we don't need to use fh4_volatile_id quite yet.
So back to the magical "10" for NFS_FHMAXDATA... what's going on there? Well, adding those fields up, you get: 8(fsid) + 2(len) + 10(data) + 2(xlen) + 10(xdata) = 32 bytes. Which is the protocol limitation of NFSv2 - just look for "FHSIZE". So the Solaris server is currently limiting its filehandles to 10 byte FIDs just to make NFSv2 happy. Note, this limitation has purposely crept into the local file systems to make this all work, check out UFS's ufid:
/*
 * This overlays the fid structure (see vfs.h)
 *
 * LP64 note: we use int32_t instead of ino_t since UFS does not use
 * inode numbers larger than 32-bits and ufid's are passed to NFS
 * which expects them to not grow in size beyond 10 bytes (12 including
 * the length).
 */
struct ufid {
	ushort_t ufid_len;
	ushort_t ufid_flags;
	int32_t	ufid_ino;
	int32_t	ufid_gen;
};
Note that NFSv3's protocol limitation is 64 bytes and NFSv4's limitation is 128 bytes. So these two protocol versions could theoretically give out bigger filehandles, but there are two reasons why they don't for currently existing data: 1) there's really no need, and more importantly 2) the filehandles MUST be the same on the wire before any change is done. If 2) isn't satisfied, then all clients with active mounts will get STALE errors when the longer filehandles are introduced. Imagine a server giving out 32 byte filehandles over NFSv3 for a file, then the server is upgraded and now gives out 64 byte filehandles - even if all the extra 32 bytes are zeroed out, that's a different filehandle and the client will think it has a STALE reference. Now a force umount or client reboot will fix the problem, but it seems pretty harsh to force all active clients to perform some manual admin action for a simple (and should-be-harmless) server upgrade.
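The byte budget above can be sanity-checked mechanically. Here is a quick sketch (in Java, purely for illustration — the kernel code itself is C) that derives the maximum FID size from a protocol limit, assuming the 12 bytes of fixed per-filehandle overhead described above (an 8-byte fsid plus two 2-byte length fields):

```java
public class FhSize {
    // fixed overhead in the Solaris filehandle layout:
    // fsid (8 bytes) + fh_len (2) + fh_xlen (2)
    static final int OVERHEAD = 8 + 2 + 2;

    // largest FID that fits when both fh_data and fh_xdata
    // are sized to the same maximum
    static int maxFid(int protocolLimit) {
        return (protocolLimit - OVERHEAD) / 2;
    }

    public static void main(String[] args) {
        System.out.println(maxFid(32));  // NFSv2 limit -> 10 (NFS_FHMAXDATA)
        System.out.println(maxFid(64));  // NFSv3 limit -> 26
        System.out.println(maxFid(128)); // NFSv4 limit -> 58
    }
}
```

Plugging in the 32-byte NFSv2 limit gives 10, matching NFS_FHMAXDATA; the 64-byte NFSv3 limit gives 26, which is exactly the extended FID size discussed later in this post.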
So yeah, my blog title is how I changed filehandles to be bigger - which almost contradicts the above paragraph. The key point to note is that files that have never been served up via NFS have never had a filehandle generated for them (duh), so they can be whatever length the protocol allows and we don't have to worry about STALE filehandles.
If you're not familiar with ZFS's .zfs/snapshot, there will be a future blog on it soon. But basically it places a dot file (.zfs) under the "main" file system at its root, and all snapshots created are then placed namespace-wise under .zfs/snapshot. Here's an example:
fsh-mullet# zfs snapshot bw_hog@monday
fsh-mullet# zfs snapshot bw_hog@tuesday
fsh-mullet# ls -a /bw_hog
.        ..       .zfs     aces.txt is.txt   zfs.txt
fsh-mullet# ls -a /bw_hog/.zfs/snapshot
.        ..       monday   tuesday
fsh-mullet# ls -a /bw_hog/.zfs/snapshot/monday
.        ..       aces.txt is.txt   zfs.txt
fsh-mullet#
With the introduction of .zfs/snapshot, we were faced with an interesting dilemma for NFS - either only have NFS clients that could do "mirror mounts" have access to the .zfs directory OR increase ZFS's fid for files under .zfs. "Mirror mounts" would allow us to do the technically correct solution of having a unique FSID for the "main" file system and each of its snapshots. This requires NFS clients to cross server mount points. The latter option has one FSID for the "main" file system and all of its snapshots. This means the same file under the "main" file system and any of its snapshots will appear to be the same - so things like "cp" over NFS won't like it.
"Mirror mounts" is our lingo for letting clients cross server file system boundaries - as dictated by the FSID (file system identifier). This is totally legit in NFSv4 (see section "7.7. Mount Point Crossing" and section "5.11.7. mounted_on_fileid" in rfc 3530). NFSv3 doesn't really allow this functionality (see "3.3.3 Procedure 3: LOOKUP - Lookup filename" here). Though, with some little trickery, I'm sure it could be achieved - perhaps via the automounter?
The problem with mirror mounts is that no one has actually implemented them. So if we went with the more technically correct solution of having a unique FSID for the "main" local file system and a unique FSID for all its snapshots, only Solaris Update 2(?) NFSv4 clients would be able to access .zfs upon initial delivery of ZFS. That seems silly.
If we instead bend a little on the unique FSID, then all NFS clients in existence today can access .zfs. That seems much more attractive. Oh wait... small problem. We would rather like at least the filehandles to be different for files in the "main" file system versus the snapshots - this ensures NFS doesn't get completely confused. The slight problem is that the filehandles we give out today are maxed out at the 32 byte NFSv2 protocol limitation (as mentioned above). If we add any other bit of uniqueness to the filehandles (such as a snapshot identifier) then v2 just can't handle it.... hmmm...
Well you know what? Tough s*&t, v2. Seriously, you are antiquated and really need to go away. Since the snapshot identifier doesn't need to be added to the "main" file system, FIDs for non-.zfs snapshot files will remain the same size and fit within NFSv2's limitations. So we can access ZFS over NFSv2 - we'll just be denied .zfs's goodness:
fsh-weakfish# mount -o vers=2 fsh-mullet:/bw_hog /mnt
fsh-weakfish# ls /mnt/.zfs/snapshot/
monday   tuesday
fsh-weakfish# ls /mnt/.zfs/snapshot/monday
/mnt/.zfs/snapshot/monday: Object is remote
fsh-weakfish#
So what about v3 and v4? Well, since v4 is the default for Solaris and its code is simpler, I just changed v4 to handle bigger filehandles for now. NFSv3 is coming soooon. So we basically have the same structure as fhandle_t, except we extend it a bit for NFSv4 via fhandle4_t:
/*
 * This is the in-memory structure for an NFSv4 extended filehandle.
 */
typedef struct {
	fsid_t	fhx_fsid;			/* filesystem id */
	ushort_t fhx_len;			/* file number length */
	char	fhx_data[NFS_FH4MAXDATA];	/* and data */
	ushort_t fhx_xlen;			/* export file number length */
	char	fhx_xdata[NFS_FH4MAXDATA];	/* and data */
} fhandle4_t;
So the only difference is that FIDs can be up to 26 bytes instead of 10 bytes. Why 26? That's NFSv3's protocol limitation - 64 bytes. And if we ever need larger than 64 byte filehandles for NFSv4, it's easy to change - just create a new struct with the capacity for larger FIDs and use that for NFSv4. Why will it be easier in the future than it was for this change? Well, part of what I needed to do to make NFSv4 filehandles backwards compatible is that when filehandles are actually XDR'd, we need to parse them so that filehandles that used to be given out with 10 byte FIDs (based on the fhandle_t struct) continue to be given out based on 10 byte FIDs, but at the same time VOP_FID()s that return larger than 10 byte FIDs (such as .zfs) are allowed to do so. So NFSv4 will return different length filehandles based on the need of the local file system.
So checking out xdr_nfs_resop4, the old code (knowing that the filehandle was safe to be a contiguous set of bytes) simply did this:
case OP_GETFH:
	if (!xdr_int(xdrs, (int32_t *)&objp->nfs_resop4_u.opgetfh.status))
		return (FALSE);
	if (objp->nfs_resop4_u.opgetfh.status != NFS4_OK)
		return (TRUE);
	return (xdr_bytes(xdrs,
	    (char **)&objp->nfs_resop4_u.opgetfh.object.nfs_fh4_val,
	    (uint_t *)&objp->nfs_resop4_u.opgetfh.object.nfs_fh4_len,
	    NFS4_FHSIZE));
Now, instead of simply doing an xdr_bytes, we use the template of fhandle_ext_t and internally always have the space for 26-byte FIDs, but over the wire we skip bytes depending on fhx_len and fhx_xlen - see xdr_encode_nfs_fh4.
whew, that's enough about filehandles for 2005.
Posted by Julius Rahmandar on January 27, 2006 at 02:49 AM PST #
Ok, so "A" is your server running s10... "B" (s9) and "C" (s10) are your clients.
Is the /ramdisk temporary? What are you trying to "ls -l" when you get "no such file or directory"?
Is the 90 seconds after "A" comes back up or is that including the reboot time?
Note this blog isn't about fixing filehandles, it's about extending them when the local file system needs a larger FID - like ZFS's .snapshot. So what you're seeing has nothing to do with filehandles.
Please follow up with a message to nfs-discuss@opensolaris.org - it's easier to respond there.
Posted by eric kustarz on January 27, 2006 at 02:58 AM PST # | https://blogs.oracle.com/erickustarz/en_US/entry/bigger_filehandles_for_nfsv4_die | CC-MAIN-2015-35 | refinedweb | 1,671 | 61.97 |
So basically I have an array saved in class, I then have a separate class which brings up a form. I wish to use this form to search within the array, i later want to work with the array to edit it.
class to hold array
class to hold form
Any suggestions? I have tried placing the array in a method and calling it using EventData.Ent, however this didn't work either.

public class Picker {
    public static void main(String[] args) {
        final JTextField t = new JTextField(0);
        final JTextField t1 = new JTextField(40);
        JButton b = new JButton("Open Calendar");
        JLabel l1 = new JLabel("Event");
        JPanel p = new JPanel();
        p.add(b);
        p.add(l1);
        p.add(t1);
        final JFrame f = new JFrame();
        f.getContentPane().add(p);
        f.pack();
        f.setVisible(true);
        b.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent ae) {
                // *If the array is set here the code works fine,
                // outside of the actionListener it doesn't work*
                /*
                String[] Event;
                Event = new String[2];
                Event[0] = "";
                Event[1] = "01-01-2013 -- Closed";
                */
                t.setText(new DatePicker(f).setPickedDate());
                String input = t.getText();
                for (String ent : Event) {
                    if (ent.startsWith(input)) {
                        t1.setText("" + ent);
                    } else {
                        t1.setText("No Event");
                    }
                }
            }
        });
    }
}
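One way to make the array reachable from the listener (and from any other class) is to hold it in a field with an accessor rather than a local variable. A minimal non-GUI sketch of that pattern (class and sample data are mine, not the original poster's); note that the original loop sets "No Event" on every non-matching element, so a match found early gets overwritten on the next iteration — returning on the first match avoids that:

```java
public class EventStore {
    // hold the events in a field so listeners and other classes can reach them
    private final String[] events = {
        "01-01-2013 -- Closed",
        "04-07-2013 -- Fireworks"   // sample data, not from the original post
    };

    // return the first event whose text starts with the picked date
    public String find(String date) {
        for (String ent : events) {
            if (ent.startsWith(date)) {
                return ent;   // stop at the first match
            }
        }
        return "No Event";    // nothing matched
    }

    public static void main(String[] args) {
        EventStore store = new EventStore();
        System.out.println(store.find("01-01-2013")); // 01-01-2013 -- Closed
        System.out.println(store.find("12-25-2013")); // No Event
    }
}
```

Inside an ActionListener you would then call `store.find(input)` and put the result into the text field.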
When dealing with zip files you have a few choices: use native APIs from third party Dlls, java APIs or .Net APIs.
If you rush to use the APIs from the System.IO.Compression .NET namespace you will be very disappointed.
For reasons only Microsoft knows, the support is limited to streams only and completely lacks
support for multi-file archives. This was probably a reason why third party .NET libraries like SharpZipLib cropped up.
If you don't trust free software, you might be surprised to find out that .NET support
for multi-file archives is buried in J# assemblies that offer parity with the Java APIs.
To make a useful application that uses it I started with an existing code project application that is very handy
when backing up source code. I replaced the SharpZipLib references and used Microsoft's J# APIs instead.
When porting the application I noticed that the SharpZipLib APIs looked very similar to the J# APIs, and that
made my work so much easier.
To make this utility more enticing to use I've added quite a few features that I will detail below.
In order to use Microsoft's API for multi-file zips and Java streams, you have to add vjslib.dll and vjslibcw.dll
.NET assemblies as project references. They are part of the J# distribution pack.
The Java-like types will show up in the java.util.zip namespace. Since Microsoft's documentation on this
topic is quite Spartan, I often had to rely on IntelliSense to figure it out.
For simplicity's sake, some nonessential UI code is omitted below and can be found only in the source code provided.
Below you could see a snippet of code edited for simplicity that enumerates the files in the archive:
public static List<string> GetZipFileNames(string zipFile)
{
    ZipFile zf = null;
    List<string> list = new List<string>();
    try
    {
        zf = new ZipFile(zipFile);
        java.util.Enumeration enu = zf.entries();
        while (enu.hasMoreElements())
        {
            ZipEntry zen = enu.nextElement() as ZipEntry;
            if (zen.isDirectory())
                continue; // ignore directories
            list.Add(zen.getName());
        }
    }
    catch (Exception ex)
    {
        throw new ApplicationException("Please drag/drop only valid zip files\nthat are not password protected.", ex);
    }
    finally
    {
        if (zf != null)
            zf.close();
    }
    return list;
}
As you probably noticed ZipEntry and ZipFile are easy to use for this goal.
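Since the J# types mirror java.util.zip, the same enumeration can be written in plain Java almost line for line. A self-contained sketch (the file names are placeholders of mine):

```java
import java.io.FileOutputStream;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipOutputStream;

public class ListZip {
    public static List<String> getZipFileNames(String zipPath) throws Exception {
        List<String> names = new ArrayList<>();
        try (ZipFile zf = new ZipFile(zipPath)) {
            Enumeration<? extends ZipEntry> entries = zf.entries();
            while (entries.hasMoreElements()) {
                ZipEntry entry = entries.nextElement();
                if (entry.isDirectory()) continue; // ignore directories
                names.add(entry.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) throws Exception {
        // build a tiny archive first so the example is self-contained
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream("demo.zip"))) {
            zos.putNextEntry(new ZipEntry("a.txt"));
            zos.write("hello".getBytes());
            zos.closeEntry();
        }
        System.out.println(getZipFileNames("demo.zip")); // [a.txt]
    }
}
```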
Below you could see a helper method used to zip the files from a folder:
private static void _CreateZipFromFolder(string Folder, IsFileStrippableDelegate IsStrip)
{
    System.IO.DirectoryInfo dirInfo = new System.IO.DirectoryInfo(Folder);
    System.IO.FileInfo[] files = dirInfo.GetFiles("*"); // all files
    foreach (FileInfo file in files)
    {
        if (IsStrip != null && IsStrip(file.FullName))
            continue; // skip, don't zip it
        java.io.FileInputStream instream = new java.io.FileInputStream(file.FullName);
        int bytes = 0;
        string strEntry = file.FullName.Substring(m_trimIndex);
        _zos.putNextEntry(new ZipEntry(strEntry));
        while ((bytes = instream.read(_buffer, 0, _buffer.Length)) > 0)
        {
            _zos.write(_buffer, 0, bytes);
        }
        _zos.closeEntry();
        instream.close();
    }
    System.IO.DirectoryInfo[] folders = null;
    folders = dirInfo.GetDirectories("*");
    if (folders != null)
    {
        foreach (System.IO.DirectoryInfo folder in folders)
        {
            _CreateZipFromFolder(folder.FullName, IsStrip);
        }
    }
}
The IsStrip delegate acts as a filter that trashes the unwanted files.
Below you could see an edited for brevity piece of code used to unzip the files from a zip:
ZipInputStream zis = null;
zis = new ZipInputStream(new java.io.FileInputStream(file));
ZipEntry ze = null;
while ((ze = zis.getNextEntry()) != null)
{
    if (ze.isDirectory())
        continue; // ignore directories
    string fname = ze.getName();
    bool bstrip = IsStrip != null && IsStrip(fname);
    if (!bstrip)
    {
        // unzip entry
        int bytes = 0;
        FileStream filestream = null;
        BinaryWriter w = null;
        string filePath = Folder + @"\" + fname;
        if (!Directory.Exists(Path.GetDirectoryName(filePath)))
            Directory.CreateDirectory(Path.GetDirectoryName(filePath));
        filestream = new FileStream(filePath, FileMode.Create);
        w = new BinaryWriter(filestream);
        while ((bytes = zis.read(_buffer, 0, _buffer.Length)) > 0)
        {
            for (int i = 0; i < bytes; i++)
            {
                unchecked { w.Write((byte)_buffer[i]); }
            }
        }
        w.Close();
        filestream.Close();
    }
    zis.closeEntry();
}
if (zis != null)
    zis.close();
Again the IsStrip delegate acts as a filter that trashes the unwanted files.
Also I had to mix the java.io with System.IO namespaces because of the sbyte[] array.
You cannot directly modify a zip file. However, you can create another zip and copy only select files into it.
When the transfer is complete we can rename the new file as the original, and it will look as if we
changed the zip. The edited-for-brevity method below receives a list of strings with the unwanted files:
public static void StripZip(string zipFile, List<string> trashFiles)
{
    ZipOutputStream zos = null;
    ZipInputStream zis = null;
    // remove 'zip' extension
    bool bsuccess = true;
    string strNewFile = zipFile.Remove(zipFile.Length - 3, 3) + "tmp";
    zos = new ZipOutputStream(new java.io.FileOutputStream(strNewFile));
    zis = new ZipInputStream(new java.io.FileInputStream(zipFile));
    ZipEntry ze = null;
    while ((ze = zis.getNextEntry()) != null)
    {
        if (ze.isDirectory())
            continue; // ignore directories
        string fname = ze.getName();
        bool bstrip = trashFiles.Contains(fname);
        if (!bstrip)
        {
            // copy the entry from zis to zos
            int bytes = 0;
            // deal with password protected files
            zos.putNextEntry(new ZipEntry(fname));
            while ((bytes = zis.read(_buffer, 0, _buffer.Length)) > 0)
            {
                zos.write(_buffer, 0, bytes);
            }
            zis.closeEntry();
            zos.closeEntry();
        }
    }
    if (zis != null)
        zis.close();
    if (zos != null)
        zos.close();
    if (bsuccess)
    {
        System.IO.File.Delete(zipFile + ".old");
        System.IO.File.Move(zipFile, zipFile + ".old");
        System.IO.File.Move(strNewFile, zipFile);
    }
    else
        System.IO.File.Delete(strNewFile);
}
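The same copy-everything-except-the-trash pattern can be sketched in plain Java against java.util.zip, which the J# API used here mirrors (class and file names below are mine, not from the article):

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.Set;
import java.util.zip.ZipEntry;
import java.util.zip.ZipFile;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class ZipStrip {
    // copy every entry from src into dest, skipping the unwanted names
    public static void strip(String src, String dest, Set<String> trash) throws IOException {
        try (ZipInputStream zis = new ZipInputStream(new FileInputStream(src));
             ZipOutputStream zos = new ZipOutputStream(new FileOutputStream(dest))) {
            byte[] buffer = new byte[4096];
            ZipEntry ze;
            while ((ze = zis.getNextEntry()) != null) {
                if (ze.isDirectory() || trash.contains(ze.getName())) continue;
                zos.putNextEntry(new ZipEntry(ze.getName()));
                int n;
                while ((n = zis.read(buffer)) > 0) {
                    zos.write(buffer, 0, n);
                }
                zos.closeEntry();
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // build a source archive with two entries, then strip one of them
        try (ZipOutputStream zos = new ZipOutputStream(new FileOutputStream("src.zip"))) {
            for (String name : new String[] { "a.txt", "b.txt" }) {
                zos.putNextEntry(new ZipEntry(name));
                zos.write("data".getBytes());
                zos.closeEntry();
            }
        }
        strip("src.zip", "dest.zip", Set.of("b.txt"));
        try (ZipFile zf = new ZipFile("dest.zip")) {
            zf.stream().forEach(e -> System.out.println(e.getName())); // a.txt
        }
    }
}
```

As in the article, the final rename-over-the-original step would be done with ordinary file moves once the copy succeeds.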
To make this tool more attractive I've added some improvements of my own. The first one to notice
is the usage of a checked list box that allows making manual changes on the fly.
My favorite is the ability to edit the list of filter extensions that are bound to the CPZipStripper.exe.xml
file through a DataTable. Here is an edited snapshot of this file.
<configuration>
  <maskRow maskField="*.plg" />
  <maskRow maskField=".opt" />
  <maskRow maskField=".ncb" />
  <maskRow maskField=".suo" />
  <maskRow maskField="*.pdb" />
  ......
</configuration>
Notice that in the application configuration file we keep not only the appSettings node,
but also the files, paths, and most importantly the DataTable content.
Loading the data from this xml file in the respective lists and dataSet is easy:
XmlDocument xd = new XmlDocument();
xd.Load(cfgxmlpath);
// use plain xml xpath for the rest
m_paths.Clear();
XmlNode xnpath = xd["configuration"]["paths"];
if (xnpath != null)
{
    foreach (XmlNode xn in xnpath.ChildNodes)
    {
        m_paths.Add(xn.InnerXml);
    }
}
XmlNode xnfile = xd["configuration"]["files"];
if (xnfile != null)
{
    foreach (XmlNode xn in xnfile.ChildNodes)
    {
        m_files.Add(xn.InnerXml);
    }
}
// use the data set
m_extensions.Clear();
_dataSet = new DataSet("configuration");
DataTable mytable = new DataTable("maskRow");
DataColumn exColumn = new DataColumn("maskField", Type.GetType("System.String"), null, MappingType.Attribute);
mytable.Columns.Add(exColumn);
_dataSet.Tables.Add(mytable);
_dataSet.Tables[0].ReadXml(MainForm.cfgxmlpath);
for (int i = 0; i < _dataSet.Tables[0].Rows.Count; i++)
{
    DataRow row = _dataSet.Tables[0].Rows[i];
    string val = row[0].ToString().ToLower();
    if (val.Length > 0) // no empty mask
    {
        // ..... code eliminated for brevity
    }
    else
    {
        // don't show empty rows
        row.Delete();
    }
}
_dataSet.Tables[0].AcceptChanges();
Using WriteXml from the dataSet will eliminate the data that does not belong to the table.
For this reason we have to save it before calling WriteXml and restore it afterwards:
XmlDocument xd = new XmlDocument();
// get the original
xd.Load(MainForm.cfgxmlpath);
// save nodes not part of the dataset
XmlNode xnpath = xd["configuration"]["paths"];
XmlNode xnfile = xd["configuration"]["files"];
XmlNode lastFolderpath = xd["configuration"]["LastUsedFolder"];
// write the masks
_dataSet.WriteXml(MainForm.cfgxmlpath);
// restore the old saved nodes
xd.Load(MainForm.cfgxmlpath);
if (xnpath != null)
    xd.DocumentElement.AppendChild(xnpath);
if (xnfile != null)
    xd.DocumentElement.AppendChild(xnfile);
if (lastFolderpath != null)
    xd.DocumentElement.AppendChild(lastFolderpath);
xd.Save(MainForm.cfgxmlpath);
I'm not going to get into details on this one, but as you've already noticed in the xml snippet,
you can use * and ? chars.
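The article doesn't show how the wildcard masks are actually matched, but one plausible approach (sketched in Java for illustration — the real utility is C#; the mask-to-regex translation is my assumption) is to translate each DOS-style mask into a regular expression:

```java
import java.util.List;
import java.util.regex.Pattern;

public class MaskMatcher {
    // translate a DOS-style mask ("*.pdb", "?.txt") into a regex
    static Pattern toPattern(String mask) {
        StringBuilder sb = new StringBuilder();
        for (char c : mask.toCharArray()) {
            if (c == '*')      sb.append(".*");
            else if (c == '?') sb.append('.');
            else               sb.append(Pattern.quote(String.valueOf(c)));
        }
        return Pattern.compile(sb.toString(), Pattern.CASE_INSENSITIVE);
    }

    // a file is strippable if any mask matches its name
    static boolean isStrippable(String fileName, List<String> masks) {
        for (String mask : masks) {
            if (toPattern(mask).matcher(fileName).matches()) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> masks = List.of("*.pdb", "*.ncb", "*.suo");
        System.out.println(isStrippable("MyApp.pdb", masks));  // true
        System.out.println(isStrippable("Program.cs", masks)); // false
    }
}
```

A predicate like `isStrippable` is exactly the shape of the IsStrip delegate used in the zipping code above.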
It's a good idea to make setting the configuration the first thing you do when you open this application.
I've added some new functionality in regards to using the context menu from Explorer.
You need to start the exe once before you can right-click on a folder and zip it.
As a utility I consider version 2.x to be an improvement over the old one.
You can use it to some extent as a Winzip replacement, but it lacks features like encryption.
.NET 2.0 and the J# package have to be installed on your machine to run it. If you have problems running
the exe alone, it might be because you are missing the J# distribution package or the .NET 2.0 runtime.
If that's the case, I recommend you try to install this msi install file I've created, or download
vjredist_32bit.zip and install it locally.
Nov 25, 2010 09:11 PM|rcm01|LINK
This is probably something simple. The following jQuery call is executing and calling the controller.

            $("#DataTemplate").tmpl(myData).appendTo("#itemContainer");
        }
    );
    alert("there");
}
);
</script>
// Controller
public ActionResult Filter()
{
    var myData = this.repository.GetAllUsers();
    //return Json(myData);
    return Json(myData, JsonRequestBehavior.AllowGet);
}
Nov 26, 2010 06:02 AM|asteranup|LINK
Hi,
This can happen if the data return by the controller method is violating JSON format. You can try the following-
Its working fine with the following code-
[AcceptVerbs(HttpVerbs.Get)]
public JsonResult Filter()
{
    var myData = new[]
    {
        new { first = "Jan'e", last = "Doe" },
        new { first = "John", last = "Doe" }
    };
    return Json(myData, JsonRequestBehavior.AllowGet);
}
<script type="text/javascript">
    $(document).ready(function () {
        $.getJSON("/Account/Filter", {}, function (myData) {
            alert(myData);
        });
    });
</script>
You can also look at the post below-
Nov 26, 2010 04:04 PM|rcm01|LINK
AnupDG, thanks for the reply.
Changing my controller to return the data object as you described worked for me also.
I changed the controller so that it would set a variable with the JsonResult object. The object's Data collection contains the two records I'm expecting from the database, but as you suspected, they are not in Json format, at least I think they are not. What's listed is System.Data.Entity.DynamicProxies.User_....
I'm using EntityFramework CTP4 to fetch the data from SQLCE, are you familiar with what I would need to change to get the data into the correct format?
Nov 26, 2010 06:41 PM|rcm01|LINK
It turns out that I had an issue in my model with a referenced object that at this point I'm not sure how to correct. Once i removed that reference so that the model was simple properties, the data was returned from the repository to the controller to the calling page and rendered by the template correctly.
This also means that the original ActionResult worked. I'm not sure what the best practice is. It would seem that using JsonResult would be preferred, however most examples I've seen use ActionResult. If anyone has insight into this I'd appreciate the clarification.
Nov 27, 2010 04:49 AM|asteranup|LINK
Hi,
Can you post the result of the Json(myData, JsonRequestBehavior.AllowGet) call - that is, the resultant JSON string?
Nov 27, 2010 10:37 PM|rcm01|LINK
Anup-
I began adding references back to the model and am encountering the error again. The first error appeared to be caused by a many-to-many reference that I defined in both classes. The error returned was that the table User_Category did not exist.
Example:
public class User
{
    ...
    public virtual ICollection<Category> Categories { get; set; }
}

public class Category
{
    ...
    public virtual ICollection<User> Users { get; set; }
}
When I removed the collection from Category this error went away.
Now when I execute the code there is no longer an error, but the Category collection has zero records. I'm guessing that this is affecting the JsonResult object. I'm not sure how to get around this.
Thanks for your help.
branwall
Posted May 14, 2012 (edited)

If I want to call a function that takes multiple parameters, like PixelSearch, for instance, AutoIt expects me to have something in the format of PixelSearch(x, y, x, y, color). Could I do this with fewer parameters by using another function? My code looks something like this:

$x = 10
$y = 10

Func box($iX, $iY)
    Return ($iX - 10 & "," & $iY - 10 & "," & $iX + 10 & "," & $iY + 10)
EndFunc

Clearly, this will only return one result, but it really contains enough data to fill in 4 of the 5 required parameters for PixelSearch. What I want to do is this:

PixelSearch(box($x, $y), $Color)

But it thinks I have too few parameters. Is there a way to 'tell' AutoIt to treat one parameter as multiple parameters? If not, how can I get around this simply? Thanks!
Rory Blyth has the most consistently funny posts about PDC
Rory's the guy who wrote about who he saw in the men's room and about Ted Neward's Ninja pony tail at the XML DevCon in July. He is always witty, funny and sarcastic - did I say funny? I finally got to meet him at the PDC. Now I've had some time to start reading his PDC blogs. Highly recommended reading.
Misbehaving.NET : women in tech blogsite
Wow - what fun. A great site title “Misbehaving.Net” and a bunch of fantastic bloggers there already.
Don't be surprised if Sam Gentile has the first compiled Longhorn app outside of Microsoft
Earlier today Sam said “This should keep you all busy while I go write HelloWorld in XAML on Longhorn!!”
being accused of being an ageist.
What would I add to VB? - why the control array, of course
I visited the VB area at the expo and they had one last VB shirt. They were dark blue with huge white VB letters on the back, like those FBI jackets. The person I talked to said "what would you add to VB to make it better?" Off the top of my head, I didn't have an answer. I have one now, but I have to double check in Whidbey to see if it's there. It is the control array that we had in VB__ - VB6. This was SO missed in VB.NET that Erik Porter re-created this functionality in a control as an add-in and received a Coding Hero award for it. Is there a reason that it's gone?
So, tell us a little about yourself...
criticism of Longhorn/Aero/Avalon - "why did it take so long?" (here)
yes, I drank the Koolaid!
Post PDC - Home at last
I spent most of the day flying across the country and read Code Magazine's excellent PDC issue cover to cover, and am halfway through the same with the ASP.NET PRO mag PDC issue. READ EVERYTHING. There is a wealth of knowledge already out there. I keep wondering - if it's supposed to be "all about the smart client" now, why the hell is ASP.NET 2.0 so freaking awesome! (Oops, sorry, did I just dribble some of that koolaid?) It will be interesting to see if this initial "WOW" wears off or not...
Scott Hanselman at LAX
It was fun to have one last meet-up before leaving L.A. when I bumped into Scott Hanselman waiting in the same boarding gate area as me. Whoops time to board. (Link later...)
Spam coming in through my blog
In the last two days I have gotten spam in my blog. Someone posted a spam comment on a post I made yesterday. Today I got spam that someone sent through the contact form on my website. Pasted below. Unbelievable!
Microsoft's REAL Architect, Lili Cheng ,does the Keynote and MS's BLOGGING SOFTWARE
The research keynote was just amazing. Here are my notes. Again, just the mht file - I'll put the html up later.
Dinner with Chris Anderson and Robert Scoble
I figured that title would get your attention. Actually, it was Chris and a number of other incredible folks, and the first chance I have really had to have a real conversation with Robert Scoble and get to know him a bit.
sys-con interview
I thought my sys-con interview was tomorrow and then had this sinking feeling that it was today and I had missed it - but luckily it's TOMORROW at 12:15. Phew!!
ASP.NET IDE in WHIDBEY
This is not in the alpha bits - it is from a VERY recent build, so it will probably be part of the beta. So maybe 75% (??) of this is in the alpha.
whidbey and yukon notes
HERE ARE MY NOTES FROM THE KEYNOTE!!!! (journal)
pre-pdc MVP Regional Summit
I attended the regional MVP Summit. Fun to see friends like Kathleen Dollard and Don Kiely and met a lot of interesting people. I think connecting with folks outside of your own technology is pretty valuable. I grabbed Chris de Herrera of TabletPCTalk.com and talked with him for a while about some usability issues I have been worrying about for the end users of my tablet application.
Jeff Julian discovers he's not in Kansas anymore
Sorry I couldn't resist but he IS from Kansas and he writes this about coming to L.A.
the hotel wi-fi promise
uh-huh low..very good...very low...excellent...good...gone...good...low
Jason Beres blogs
well lookie here, Jason Beres has a blog. Jason is one of those guys who sure gets around. He is a .NET author, an INETA speaker, chair of the INETA Academic Committee and on his way to his first W-2 stint - with Infragistics. He has a lot to share... well, not in his first "hello world" blog post, but I know great things will be coming.
the dealmakers
Hanging around in the large area that accomodates many airline gates, I can't help overhearing bits of peoples' conversations on their cell phones. They talk so loud - hard to shut it out. Anyway, most of what I'm hearing reminds me of a quote (which I can but paraphrase) that I heard many years ago -- that America's premier industry is deal-making.
wireless in airports
Last time I came through Phila airport was on the way to Dallas for teched. No wireless then but now it's here - for a small fee. AT&T has set up a few areas throughout the airport . One is where my gate is for the flight to l.a. So I decided to fork over the $10 for “24 access from one location”. That's an odd feature. I sure hope I'm not here for four hours. BUt I did take Doug Reilly's advice and look again into using desktop remoting to my home machine while I am away and it is truly awesome!
PDC Schedule you can actually READ
I have 4 hours in the philly airport tomorrow so since nobody else (that I am aware of) has done anything with Ryan LaNeve's xml version of the schedule. I has not been updated to show the Thursday panels, but it could give you a good start. The one by track will help see what sessions are being repeated. This is not a work of art. Hey, only 4 more hours till I have to leave to get my plane.
Missing Robert Hurlbut at PDC
I'm really bummed that my pal Robert had to change his plans last minute to deal with housing issues and won't be able to make the PDC. Robert has been doing a lot of interesting work with Sam Gentile for the last 4 or 5 months. And of course he has been sharing in his blog his explorations of Rotor, BSD, Enterprise Services, his .NET Security presentations, unit testing in SQL and a host of other high end topics that he is up to his elbows in. If you are developing .NET production apps, this is all recommended reading.
Thinking out of the box to solve a problem
There is a woman at one of my client's office who I adore. I think she hates computers, but patiently puts up with them in order to do her job. Sadly, this also means that when something goes wrong, she puts up with that too. I just found out that she was having a problem printing a few reports in the old FoxPro app that she uses every day. This was the last application targetted for a re-write and is, in fact, the one that I am working on now with the tablet pc's. So when Barbara has a problem it makes me sad and I just want her to be happy because she is just the nicest woman! This problem was that when she printed two specific reports, the app would hang for about 30 seconds before she could print. This is a report she has to print many many times a day so you can imagine how frustrating this was to her.
Developmentor's Craig Andera on the Application Blocks
Craig explains why he thinks that the Application Blocks are not mature (yet).
Scoble on "How to Hate Microsoft"
I really want the title of this post to complete itself with “... and learn to love the bomb” but you have to be a Peter Sellers fan to understand that one.
Scott Guthrie leaks aspnet 2.0 bits
yeah - everyone's already mentioned this but here's one that really caught my eye:
Count your web views on .Text
This is a little scary. The .Text web admin just got new bits: webviews and aggregator views. The 2nd is not implemented yet. The first is. Interesting, a little scary. I know some people are very interested in counting the number of people reading there blogs. I have to admit, I'd rather not know. Because for every high #, there will be lows that might make me question what I am writing about. Hmmm, wonder if I can turn that feature off. Scott's a busy man.
Kathleen Dollard x 3
This month's VS.NET features not one, not two but three articles by Kathleen Dollard. The first creates a winform that organizes the details of an exception so that you can more readily find the info you need to solve the problem. In the second she builds an expandable/collapsable control to enable users have an always available search tool without giving up screen real estate. She's also got a lot of great .net lessons built into that control. The 3rd is what I believe is a pet topic for Kathleen, XSLT. This time she writes about Creating Readable XSLT. I'm happy to see these articles as Kathleen is a friend of mine and is co-hosting the Women who Code BOF at PDC. I know she wants to start blogging but has wants to clear her plate a little first. Kathleen is very detailed as well as outspoken. She will have a fantastic blog.
PDC ...preparing for the big what-if
It always happens. I'm away at a conference (or god forbid a vacation) and suddenly there are frantic calls from a client that I haven't talked to in 3 months. Maybe someone figured out how, after 4 years, to enter data that broke the application or the guys in holland sent a badly formatted xml file for a critical daily import process or the web database reached its capacity and rather than giving ANY type of error message, just ignores any inserts or att.net put a block on every email coming from Alentus, killing an asp.net app process that emails the results of a data transformation to a client. Something! It even happened when I was visiting friends in Koln, Germany a few years ago. Oh, my phone bill! So now that I have a brand new computer that I'm bringing with me, I have to load on it VB6, every single 3rd party tool that I use for all of my apps, VS.NET2003 (I'm skipping VS2002) and all code for all clients. And all of my utilities like good ol ws-ftp. And the databases. Can't debug without sample data! Just in case. Then there's the “my plane got blown up because of a mouse nest in the engine” what-if. That's the one that's solved by making separate backup cd's of all code for each client and mailing it to them. Hey, I'm a programmer. I get paid to think “what-if”.
sharing the love - of pdc geekness
Now I understand why everyone is creating extra noise to thank Jeff Sandquist et alia for the “I'm blogging this t's“. I showed it to my husband last night who just rolled his eyes. So it's just our own little party here of geeks who care about this stuff. I know that Rich knows that this is more significant than say, going to a trekkie convention. It's about my career. And what we do effects the lives of people all over the world. But still there's a big part of him that just does NOT get why so much of my time is spent doing stuff in the .net community - blogging/reading blogs, user group and INETA and then 3 conferences in the past year yet not ONE vacation in the past 2 years. He says that I work for Microsoft for free. Our life has changed a lot in the last couple of years because I have gotten so involved with .NET and all of this. I know it's really important to remember the bigger picture of the world, my life, etc etc but unfortunately, with the non-stop rollercoaster ride of a learning curve that is now necessary to be current and valid as a developer - this is unfortunately the way it has to be. This is why I am always so astonished by people like Kate Gregory and Deborah Kurata and others who are practically full time mothers in addition to keeping ahead of the curve and so significant in their work and contributions to our industry. I have joked many times that if I had kids, they'd probably have to call family services on me because I lose myself in my work so completely and so frequently. Whoa - do I really want to post all of this? Aaah, what the hell... it's cheaper than a shrink...here goes!
Extra Battery for PDC
After all the trouble I had at TechEd because I had left my power cord at home and had to depend on the kindness of strangers to re-charge my laptop battery (hey, no link for when Stephen Walther lent me his power cord). Anyway, since my wonderful Acer C110 tablet unfortunately is a bit of a loser when it comes to battery life, I'm definitely concerned. I just can't be left standing with a dead Tablet at PDC. I might fall off of Jon Box's “.net somebodies” (see last sentence) if that happened. So, at the last minute, I was able to find a battery that will be shipped today at MobilePlanet.com so I can have it in my hands tomorrow. All other sites I found the battery on were saying 3 days or 1-2 weeks.
Joy points out that she and I both are in the PopDex top 100
I had not heard of PopDex but Joy checks it out and found that she and I are both listed. Oooh aaah. Quick before it's gone! :-) Of course it was for a post that inspired a debate on blogging styles and had nothing to do with me or my thoughts!
Clemens wants beer!
If you see a guy at PDC wearing this t-shirt:
Another day another something for Vermont .NET
Now that Dave Burke has emblazened in my brain “another month, another guru at Vermont .NET”, it now seems to apply to everything. So today's use is this:
PDC BOF Women who Code Session
This session that Kathleen Dollard and I (and really many others!) are hosting has been rescheduled to Sunday 9-10pm.
Brainy Bloggers on my Blog Roll
When I first set up my weblog, I created a place for blogs I read and entitled it “Brainy Bloggers”. And for the few of these bloggers who I knew, I created a separate one called “Brainy Bloggin' Buds“. When I switched to using an aggregator, I didn't really need that any more and didn't add to it everytime I added a feed to my OPML. Unfortunately, this has caused a little problem with my friends who ARE brainy and who I am absolutely subscribed to in SharpReader. But I occasionally get an email asking “hey, how come I'm not on that list? Aren't I a brainy blogger or even your bud?”
Cross Posting your Blogs
Just a few days ago I was dreaming out loud about being able to post blogs in one central place and have some of them spit out to other weblogs I might have. So I could have a weblog on my own domain and then some of my posts would spit out to this weblog (that you are reading) and to any other weblogs I wanted to participate in.
Identity.
Hey Outlook team - User forums over here...
I inadvertantly have started a little Outlook 2003 user forum in my little space I created to post my discoveries if anyone is interested. Maybe we need OutlooksBlogs.com now! I'm KIDDING.
Little discoveries in Outlook 2003 that make my day
Rather than post these to here constantly (I have a feeling there will be many) I have started an article that I will add to of all the little things I am discovering in Outlook that I love. It's here...
My new friend Outlook 2003
No need to post every little discovery in my weblog so I'll put them here. Certainly this will be forgotten after a day, but I'll have the gratification of having an outlet.
PDC - Brad Abrams and Jeffrey Richter on .NET Framework
Brad Abrams thinks we may not have noticed this talk that is on Monday at 1:30 and repeated on Tuesday at 3:45. As some would say: heh!
promoting the great speakers we get at my user group
Generally a day or so before our meetings, if I'm not happy with the # of rsvps, I pull out all the stops out and send one last email blast to my whole mailing list for Vermont.NET. With the unbelievable array of speakers we have managed to lure to Vermont, I often find myself referring to them as gurus, legends, etc. They tend to hate that. I remember Ken Getz making me take those words off of the website and the flyer I had created and replace them with “book author“ or something like that. I like to refer to him as “swami” now. There are people who give me shit for what looks like big time grovelling and hero worship, which, c'mon, it just is not that. But it's not just marketing either. I'm sorry but it's my way of being supportive of my peers. And when you are dealing with a lot of people who don't have the kind of exposure to information that we do here in this community and who don't get to go to conferences, they sometimes really don't know who some of these people are. Yes, it's true. Luckily the folks in my user group trust that when I say someone is “freakin' awesome” that they truly are. In fact, our September speaker might not be as well known to the average VB user or web developer, but when I explained to my group who he was and what his background was, it really got a great crowd of people to show up and every single one of them was thrilled that they had come to the meeting.
Hey - Mr. PDC Blogger - it's a girl
Drew Robbins and his wife, Aya, FINALLY :-) had their baby. If you don't know by now, Drew started TechEdBloggers and PDCBloggers and has a great blog of his own, not to mention probably one of the most attractive blog sites I have every seen. Drew is a fellow .NET user group leader and INETA volunteer, so we have met on more than one occasion. It just occurred to me that I didn't notice anyone else on dotnetweblogs mention the baby. Baby Kotomi Robbins was born on Tuesday Oct. 14th. There's more and pics here on Drew's site. Even with a newborn at home, Drew will not be missing the PDC conference! Congratulations Drew!
Billy Hollis at Vermont .NET Monday
Well, finally, the October meeting of VTdotNET is around the corner. We normally have our meetings on the 2nd Monday of the month, which would have been 10/13. But our speaker, Billy Hollis, coming to us as an INETA speaker, was just at DevConnections, so we moved the meeting to 3rd monday - tomorrow. Our last meeting was 9/8 (Sam Gentile) and it will have been 6 weeks since then. User Group withdrawal?
Multiple weblogs??
Recently I have been invited to start two additional weblogs. One to cover my desire to write about non .NET topics and one for another specific technology that I don't have anything to say about yet beyond the fact that I'm very curious about it and WinFS, too!!
outlook 2003 UnRead folder
The unread mail folder has answered a big problem for me. Somehow, I am willing to delete emails from here after I read them, rather than letting them linger in a subfolder or the inbox for all eternity. I cannot tell you what the psychology is behind this. Perhaps my inbox is so bad that it is daunting to clean it up. But the “unread mail” folder is easy.
2 new blogs: Juval Loway and Bill Evjen
Juval Lowy and Bill Evjen, yahoo! Read more from Jon Box
BOF Sessions Schedule in Flux
Be aware that there is a flurry of emails going between the BOF Session leaders as a number of people need to re-schedule their session due to conflicts. We have until, I believe, the 24th or 25th, to have everything firm. I know that Kathleen Dollard and I are trying to get our session (Women who Code) moved from Tuesday 9-10 to another slot. We're looking at Sunday 9-10 right now but who knows...
WSE 2.0 talk in Chicago
choices in leveraging ink for data entry
I had a interesting session with the person I trust most regarding ui design at my client's office. He is the guy who interacts the most with the users who are not really computer savvy - my target audience.
designing for tablet portrait orientation
All of the design recommendations for tablet assume that you want your user to be able to use either orientation. But what if this isn't the case?
lazy lazy lazy - code I shouldn't have to write
Why do I still have to create these shortcut functions to replace something like this:
Kate Gregory does 6 hours of MSDN Canada's "The .NET Show"
BOF Sessions Scheduled
The BOF sessions have been scheduled here as part of the big PDC schedule. Jeffrey McManus, who is leading a BOF and is the author of one of my favorite VB6 Database books and co-author with Chris Kinsman of one of my favorite asp.net books, created a readable view of the schedule here:
our own little bloggerCon session
I inadvertantly started an interesting discussion on the style of posting various pieces of information at once in your weblog. It is happening in the comments of my post about Robert Scoble's prolific work last night.
gettin' no respect here...
Kasia (a Unix programmer) apologizes for ever making fun of us windows programmers. And because I like her and her blog and have a lot of respect for her ... especially considering the trials and tribulations she must suffer working in the Unix environment ;-) ... I got a good chuckle out of her post.
Infragistics posts free e-book and reference app
jeezum pete's, Scoble
I dare anyone to count how many posts he made last night
PowerPoint story on NPR this morning
Did anyone catch this story on Morning Edition this a.m.? It won't be on the NPR website until tomorrow. It was mostly about grade schoolers using powerpoint and the debate between it's makers' saying it helps kids organize their ideas and educators saying they only learn to think in bullets and not to have real conversations, etc. It is not good for spherical or holistic thinking. They quoted Edward Tufte's essay (book?): The Cognitive Style of PowerPoint. He says PP is for the 20% of people who are really discorganized and really poor presenters and that for everyone else, it completely cramps the style and their thinking. Here's from his essay:
web apps for virtual catalogues
recent shopping for furnishing my home office and sundry things in our new house has led to some interesting discoveries with some website implementations of catalogues:
running my .Network through another power outage
Due to ridiculously high winds (50mph+??), the power went out all around me at about 5am. It's still out at 2:00 pm. Below is what has kept my computers going all day: a Honda eu2000i. It outputs inverted power which is cleaner for the computers. They run about $1,000 - though we happily bought ours [barely] used for $400. I'm still on my first gallon of gas that I fired up at about 9am. (running server box, dev box, one 19“ monitor, dsl modem and wireless router).
Luckily I have a little Jotul (gas fireplace) in my office and our kitchen stove is gas too. (I always make my coffee in a Chemex pot, never the electric coffee maker, so this is key!) While I was still getting my [always needed] beauty sleep this morning, my hubby got this little guy (honda) all set up outside my office , brought me a kerosene lamp in case I actually got out of bed before sunrise (yeah, right) and even left a little igniter/lighter by the stove for me. It's child proof, so I had to call him at work to find out how to use it. It's really nice to get taken care of sometimes. Thanks Rich. smooch smooch.
overdesigning? and a lesson in cooking ideas
No! You misunderstand. I am much more of an XP type person. I can barely spell UML. I am talking about one small piece of my application. But it's a foundation piece. I wanted it to be constructed properly. And if you subscribe to the late Ilya Prigogine's theory of Chaos and Disorder in the Universe (chaos precedes order), then you will understand that my frustration this morning was after spending a lot of time thinking about how I want to build this application. I've been stewing.. This morning everything suddenly just went to high boil - and I was steaming. But the next moment - everything crystalized, fell into place, and I have been banging out code ever since and feeling REEEEEEALY good about it.
tangled up in classes and The Ivory Tower
Here's the biggest problem with trying to learn everything and constantly reading what so many unbelievably smart people have to say. I'm working out the classes for a new project. I want them to be PERFECT - understand? PERFECT! (Don't worry, I know... there is no such thing, really) I have been sitting here for days and days all tangled up in interfaces, inherits, collections ... frustrated as hell. I mean, I have DONE this stuff before. I know how they work. I know how to build them. I have Juval Lowy's fantastic .NET Components book and access to all of the patterns & practices info on msdn. I think what I want is Juval to just come sit next to me in front of my white board for about 3 hours. The bar has been raised right over my head I think. Whatever I do, suddenly I think - “if so & so (Juval, Sam G, Scott & Sean, insert name here) looked at my code, would they say this about me? Would they just give me a gun to put to my head because it's so unbelievably hopeless?” Everything looks like spaghetti code to me now. Whenever I start a new project, I really try to do it WAY better than the last one so instead of just knocking this out quickly using a previous model (which would make my client perfectly happy), I have to do this to myself instead. The more you know the stupider you feel. Where have I heard that before?! Is it time for me to look into this?
programming not just for profit
I used to do a lot of programming work for non-profits. Now most of the programming I do is for for-profit businesses. One of those companies makes sure that when roads and bridges and buildings are built that they don't collapse! (i.e. meeting regulations). So that's for some public good in the long run. Another company is in the flower importing business. Ummm, makes people happy at the end of the day! But when I read about companies using the same technologies to accomplish things that REALLY help people, such as the Glucose meter with integrated wireless alerts that Scott Hanselman has written about, that really turns me on. Now Scott didn't write the software for that, but he too is always looking at how the latest and greatest technology can help his fellow diabetics. My pal Scott Lock is in charge of the donations website for the American Red Cross . By the way, ACR's Disaster Relief Fund is really hurting. Here's an idea. Go make a donation on the current site and then when he rolls out the .NET site, you can make another donation while admiring the difference from the old site to the new! Well, I guess I can feel happy that I'm at least part of the chain that makes your bridges safe and another that maybe helps to keep marriages together (the flowers I mean)! :-)
Tablet Dev Thoughts
ed: I had to come in here with a keyboard and make this a little more readable!
Susan Warren at Vertigo
I was happy to have a nice chat with ASP/ASP.Net maven Susan Warren who is now at Vertigo. Susan took a well-earned break and is now back to what she loves doing most as well as enjoying the California sunshine. Welcome back Susan! You've been missed by many.
PDC BOF Session proposal - last minute
Women Who Code... “A lot of programmers are women - perhaps 10-20% of the industry, but it’s often hard to connect with other women at gatherings like PDC. This session is an opportunity for women to meet and talk about careers in IT. Have you wondered what it takes to be more visible in the industry? Are you curious about the backgrounds of women you see at conferences or in journals? Is there a disparity between how many women work in this industry and the number of women that are visible at the top of our field? This is a chance to gather with talented women coders like yourself to discuss how you can better define and reach your goals as a technology leader.”
First post From ACER
Boy even this recognizes my crappy handwriting! cocoon!! well, that was cool, not cocoon
I want my PDC Schedule!!
It's really great being able to find PDC Sessions by time slot now, but it's still really cumbersome. Why can't there be something like a big grid where you can see a whole day's schedule at a time? PDF. Excel. I don't care. Hey - Interknowlogy - what's the deal???
the social life of an independent developer
Who's your best friend? Of course, it's the FedEx guy/gal (or UPS driver or mail carrier). Today when I received my new Tablet (wheeeee!), I learned that the UVM graduate students who live next door to me convert vehicles from running on gas to running on vegetable oil. I have always wondered why they were fiddling around with their cars so much and the occasional odd vehicle. How cool is that? :-) Apparently their most recent one was for a state senator. Hey Arnold, maybe you should bring that Hummer to Vermont!
possibility of hiring those .NET "gurus"
Clemens Vasters writes this morning about the frustration of having folks presume that Newtelligence is just too far out of their league to hire for consulting projects.
upcoming webcasts and chats schedule rss feeds?
Still dreaming of this...
ink enabled controls, err "tools"
Jon Box talks about having seen Infragistic's ink-aware user
controlstools at a user group presentation by Brad McCabe (of Infragistics). I am definitely excited about this. I don't see anything on Infragistic's site yet. Of course, I still want to learn how to implement ink directly with Microsoft.Ink namespace in my WinForms app, but this is going to mean that if this stuff is here soon I can just pop them right into my new tablet app and probably get it rolled out MUCH faster! And then it will be easy enough to dynamically grab either a regular control on a regular Windows box or the ink control when the application is run on a tablet. Oh I just can't wait to get my hands on these! I know somebody who if he were reading this is going “oh no, Julie ... not drag and drop!!! You are a mort!”.
guess who else is an MVP -- can you say "dottext"? "dotnet weblogs"?
SCOTT WATERMASYSK ASP.NET MVP!!!!
the contemplative man
It's nice to see Robert Scoble writing about what he thinks today rather than only linking to other posts. Happy to see that he has some time to sit back and relax and therefore able to mull things over and share them with us -- the contemplative Scoble in addition to the informative Scoble.
basic outline for next .net app and Acer Tablet PCs
So here's what I have ahead of me.
organize that code - why I love regions
Returning to some old code in a data layer file...this makes me happy cause I know I can find exactly what I'm looking for so easily...
finally can move to asp.net 1.1
I've been waiting for Alentus to put 1.1 on some of their boxes. I stopped pestering them a while ago and thought to ask again yesterday. Yes, they finally upgraded some servers. I have moved ALL of my winforms projects to .NET 1.1 (with vs.net 2003) months ago. But I couldn't touch my web services or my asp.net apps. Now I can go ahead and put 1.1 onto my webserver here and start porting all of that stuff. Phew.
Sociology of Blogging from Werner Vogels at BloggerCon
Werner Vogels is at BloggerCon at Harvard right now and was answering why he, a serious technologist, is there. He responds that blogging is going to have a huge impact on the academic world in terms of how information is disseminated. He also is very happy to be hanging out with people who he has things in common with outside of the academic research circle. In the meantime, someone else is frustrated that the technology to enable him to attend this conference remotely is failing him.
a blog idea
Occasionally I find something in the blogs or make a post myself that is something I would like to share with my user group. So I just copy and paste and send an email to my user group member list. Boy (lazy lazy girl I am) wouldn't it be cool to flag the post and as I post it, the flag would get read and that post would then automatically get emailed to the list? Yeah, they could all get a rss aggregator and subscribe and see that stuff, but that'll never happen and then they'd miss all of the goodies! I also know that I can just use my rss display control that I wrote back in April for the user group website (used already for MSDN feeds and others - see here) and then just subscribe that control to a particular category on my blog, but again, how often do you think those people REALLY go to the website to see what's new?
Win2K3 Features Packs and Tools
Sam Gentile found more goodies. Here's one
Whidbey Preview at ASP.NET Connections and on DotNetRocks
Scott Guthrie spills a few beans here and here.
PDC Session Times Posted
That Scott Watermasysk doesn't miss a trick! Go here to get to there. Now we can finally start scheduling our extra-curricular activities. Now if they could just show you one big pdf grid or even an excel spreadsheet so you don't have to go trolling around for hours to figure it all out. Hey wait, that's all done by InterKnowlogy. I know JUST who to bug about THIS one!
some more new mvps
I wanted to point out some of the people I know that became MVP's recently
UK to PDC via "Air Microsoft"
Did anybody happen to catch this little tidbit in Yosi Tagur's blog?
watching the .NET Seminar videos from MSDN
There have been a number of posts about the video on Designing and Developing a Line of Business Web Application. The direct link that people were providing gave me lots of trouble. I was, however, able to view the video if I started at the Seminar website (). And there are a lot of other ones there, too. I hope this helps someone else who is just sitting there and waiting and waiting for the video to start up like I was!!
VB6 build problem - lesson that cost me 2 days
I was suddenly unable to build a particular VB6 exe. I mucked around with cleaning up files, defragging hard my drive, moving to a new dev machine etc for 2 days to no avail. It would compile but then would hang before getting to the Build Exe stage. I finally had the bright idea of rebuilding every single dependent dll (all vb6) that the exe uses. Then I was able to compile. Since I found nothing in google or msdn that led me to this solution, I thought I would stick it here for some poor soul in the future to find when they have the same problem. I do not know what caused the problem. But at least I can get back to work again! Getting this little mod to this vb6 app out of the way means I can now dig into my newest .NET project. Yee haw!
hey whadya know! There's girls going to PDC, too!
win a free pass to pdc!!!!!!
Yup - Wintellect is giving away a free pass. You'll have to pay all other associated costs of the trip. But this is huge! You guys rock. (hooray Sara!!!!!) | http://weblogs.asp.net/jlerman/archive/2003/10 | CC-MAIN-2015-22 | refinedweb | 6,523 | 81.02 |
In C or C++, there is no special method for comparing NULL values. We can use an if statement to check whether a pointer is null or not.
Here we will see one program. We will try to open a file in read mode that is not present in the system, so fopen() will return a null pointer. We can check for it using an if statement. See the code for better understanding.
#include <stdio.h>

int main() {
    // Try to open a file in read mode which is not present
    FILE *fp = fopen("hello.txt", "r");
    if (fp == NULL) {
        printf("File does not exist\n");
    } else {
        fclose(fp); // only close the file if it was actually opened
    }
    return 0;
}
File does not exist
Hi,
On 3/27/07, Nandana Mihindukulasooriya <nandana.cse@gmail.com> wrote:
> Nope, the same image will not be attached to multiple blog entries. But if that
> is the case, as I understand it, I should use a reference-type property in the
> blog entry which references an image attachment. Is that the way to handle
> it?
Yes. Such solutions are typically called image or document (or even
digital asset) libraries in content management systems. In a JCR
content repository such a library could conveniently be managed as a
separate content tree, like /my:library.
A good solution would be to make the image library consist of standard
nt:folder and nt:file nodes to make it easily manageable. You can then
for example mount the library subtree as a standard WebDAV folder
using jackrabbit-jcr-server as the server. And since the nt:resource
nodes within nt:files are referenceable, you can readily link to the
images and other resources within the library.
> 1.) In naming node types and properties, do we follow the naming conventions
> used in Java? E.g. myFirstType
That's the precedent set by the JCR spec (for example
nt:hierarchyNode), so it's a good convention to follow.
> 2.) What is purpose mixin node type ? Purpose of primary node is to define
> the structure of the node as I understood. What is the advantage of adding
> some properties or child nodes via mixin types to a node ?
The Java class/interface model mentioned by alartin is a good way to
look at it, but there are two important differences. The first one is
that Jackrabbit supports multiple inheritance, i.e. a primary type can
inherit any number of both primary and mixin types. The second one is
that you can dynamically add or remove mixin types of a node.
I usually think of the primary type of a node as the primary
definition of that node. The node should be perfectly usable with no
dynamically added mixin types (note that the primary type can include
mixin types like mix:referenceable if needed).
Mixin types on the other hand can be used for adding extra "features"
to a node. Such features would typically be orthogonal to the main
purpose of the node. For example, if I have a standard nt:folder
structure, I might decide that I want to make some folder
referenceable, lockable, or versionable in which case I'd simply add
the respective mixin type to that node. The folder would still work
just as before, with just the extra functionality added.
> 3.) What is the purpose of primary item of a node type?
As alartin mentioned, it is mostly used as a guide to applications.
For example jackrabbit-jcr-server knows how to follow primary item
definitions to decide which property to serve as the primary content
of a node even though the implementation has little or no built-in
knowledge of the node type semantics.
> 4.) In a property definition, in Required Type, what does NAME, PATH types
> mean ?
They are for storing JCR item names and paths in a way that is
independent of particular namespace mappings. For example a path
/foo:name would become /bar:name if the "foo" namespace mapping is
modified. If the path was just stored as a "/foo:name" string, then it
would become invalid when the prefix changes, but if it was stored as
a PATH property, then Property.getString() on it would actually return
"/bar:name" with the new prefix.
> In CND notation,
>
> <
> <
>
> [blog:user] > mix:referenceable
> - blog:userID (long) mandatory
You don't really need these ID numbers, they are likely a result of
the earlier normalization process. In JCR all content items are always
uniquely identified by their paths, and making a node referenceable
adds a persistent UUID identifier that won't change even if the node
is moved around.
> + blog:year [blog:year] multiple
Having custom node types for the intermediate nodes isn't really
necessary, and could even be seen as detrimental. I would instead use
a standard nt:folder hierarchy, with the added benefit of making the
content tree more "familiar" to generic JCR tools. Something like
this:
+ blog:content [nt:folder] mandatory autocreated
Your application can then create the required yyyy/mm subfolders as
needed. This approach would allow you to later change the substructure
if needed.
> [blog:blogEntry]
> - blog:blogEntryID (long) mandatory
Again, no need for the explicit ID number. If the entries need to be
referenceable (would be a good idea for example to support unique
identifiers in syndication feeds), then mix:referenceable is a better
solution. To go with the above nt:folder solution I would also make
the blog entries extend nt:hierarchyNode.
[blog:blogEntry] > nt:hierarchyNode, mix:referenceable
> - blog:content (string) mandatory
I would mark this as the primary item:
- blog:content (string) mandatory primary
> - blog:image (binary) multiple
Instead of a single binary property it would probably be better to use
either a single nt:resource child node for the image or an nt:folder
subtree that could support multiple attachments and even links
(nt:linkedFile) to a document library as mentioned above. Something
like this:
+ blog:attachments (nt:folder) mandatory autocreated
> - blog:dateCreated (date) mandatory
If you made the node entries extend nt:hierarchyNode as mentioned
above, you would get the standard autocreated jcr:created property for
free.
> [blog:comment]
> - blog:commentID (long) mandatory
Same as with the blog entries, drop the artificial identifier for
mix:referenceable and use nt:hierarchyNode:
[blog:comment] > nt:hierarchyNode, mix:referenceable
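Putting these suggestions together, the resulting CND definitions could look
roughly like this (namespace declarations elided, just as in the quoted mail;
only the properties discussed above are included):

```
[blog:user] > mix:referenceable
+ blog:content (nt:folder) mandatory autocreated

[blog:blogEntry] > nt:hierarchyNode, mix:referenceable
- blog:content (string) mandatory primary
+ blog:attachments (nt:folder) mandatory autocreated

[blog:comment] > nt:hierarchyNode, mix:referenceable
```

The jcr:created date comes for free from nt:hierarchyNode, and the
persistent identifiers from mix:referenceable, so no explicit ID or date
properties are needed.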
BR,
Jukka Zitting
Walkthrough: Creating Web Pages for Mobile Devices.
Mobile development follows the standard .NET Framework event-driven model in which your application responds to user requests, button clicks, and so on.
In this walkthrough, you will create two Web pages that inherit from the MobilePage class and that are designed for a mobile device. The first page will have a mortgage calculator that you can use to determine loan information. The second page displays data in a format that is easy to page through on a small device. Tasks illustrated in this walkthrough include:
Creating an ASP.NET Web page that displays output on a device such as a mobile phone.
Adding paging so that users with small devices can move effectively through long lists.
Testing pages with a device emulator.
To complete this walkthrough, you will need:
Ability to run the page on a device such as a mobile phone. Alternatively, you can use one of a number of emulators. In this walkthrough, it is assumed that you have an emulator and that it is available on the same computer as your Web server.
Access to Microsoft Internet Information Services (IIS) and permission to create a new application in IIS. Alternatively, you can use the Visual Web Developer Web server. For details, see the "Next Steps" section at the end of this walkthrough.
If you have already created a Web site in Visual Web Developer (for example, by following the steps in Walkthrough: Creating a Basic Web Page in Visual Web Developer), you can use that Web site and skip to the next section, "Creating the Mortgage Calculator." Otherwise, create a new Web site and page by following these steps.
To create a new local IIS Web site under the IIS root
Open Visual Web Developer.
On the File menu, choose New, and then choose Web Site.
The New Web Site dialog box appears.
Under Visual Studio installed templates, select ASP.NET Web Site.
Click Browse.
The Choose Location dialog box appears.
Click the Local IIS tab.
Select Default Web Site.
Click the Create New Web Application button.
A new application is added under Default Web Site.
In the box for the new Web site, type DeviceWalkthrough and then click Open.
You are returned to the New Web Site dialog box with the Location box filled in.
In the Language list, select the programming language you prefer to work in.
The programming language you choose will be the default for your Web site. However, you can use multiple languages in the same Web application by creating pages and components in different programming languages.
Click OK.
Visual Web Developer creates the new Web site and opens a new page named Default.aspx.
For the walkthrough, you will create a page that inherits from the MobilePage class and that contains a simple mortgage calculator. The calculator prompts the user to enter a loan amount, a loan term in years, and the interest rate. The calculator can determine the monthly payment for that loan.
In this walkthrough, you will use controls from the System.Web.Mobile namespace that are specifically designed for devices that cannot display as much information as a desktop browser. Instead, the controls present information in separate views that users can switch between.
To begin, you will delete the Default.aspx page and create a mobile page in its place.
To add a mobile page
Right-click the Default.aspx page in Solution Explorer and choose Delete.
Click OK in the dialog box.
Right-click the application in Solution Explorer and choose Add New Item.
Choose Mobile Web Form under Visual Studio installed templates.
Name the mobile Web page MobileCalculator.aspx and then click Add.
A Web page that inherits from the MobilePage class is created and added to your Web site.
Now that you have a mobile page, you will add controls that allow users to enter mortgage information.
To add controls for entering mortgage information
After you have created the Form where users enter their loan information, you will create another Form that will show the results.
To create a form to display mortgage calculation results
From the Mobile Web Forms folder of the Toolbox, drag a Form control to the design surface.
The Form control is assigned the default ID of Form2.
From the Mobile Web Forms folder of the Toolbox, drag controls onto Form2 and set their properties as noted in the following table.
You can now create the code that will calculate the loan information and display it.
To calculate the mortgage information and display results
If you are using C#, add a reference to the Microsoft.VisualBasic namespace so you can use the Pmt method to calculate the payment information. Follow these steps:
In Solution Explorer, right-click the Web site name and choose Property Pages.
Click Add Reference.
In the .NET tab, select Microsoft.VisualBasic.dll and then click OK.
In the Property Pages dialog box, click OK.
In the form1 control, double-click the Calculate button to create a Click event handler, and then add the following code.
```
protected void Calculate_Click(object sender, EventArgs e)
{
    // Get values from the form
    Double principal = Convert.ToDouble(PrincipalText.Text);
    Double apr = Convert.ToDouble(RateText.Text);
    Double monthlyInterest = (Double)(apr / (12 * 100));
    Double termInMonths = Convert.ToDouble(TermText.Text) * 12;
    Double monthlyPayment;

    // Calculate the monthly payment
    monthlyPayment = Microsoft.VisualBasic.Financial.Pmt(
        monthlyInterest, termInMonths, -principal, 0,
        Microsoft.VisualBasic.DueDate.BegOfPeriod);

    // Change to the other form
    this.ActiveForm = this.form2;

    // Display the resulting details
    string detailsSpec = "{0} @ {1}% for {2} years";
    LoanDetailsLabel.Text = String.Format(detailsSpec,
        principal.ToString("C0"), apr.ToString(), TermText.Text);
    PaymentLabel.Text = "Payment: " + monthlyPayment.ToString("C");
}
```
The code gathers the values from the text boxes, converts them to appropriate data types, and then uses them as parameters for the Visual Basic Pmt function to calculate the monthly cost of the mortgage. (You can use the Visual Basic function in any language as long as you fully qualify the function call with the namespace.) After calculating the monthly amount, the code switches to the second Form control and displays the results in the respective Label controls.
In the Form2 control, double-click the Command control to create a Click event handler, and then add the following highlighted code.
Testing the Calculator
You are now ready to test the calculator. You can test the calculator in a desktop browser. However, a more interesting test is to use your device emulator.
To test the calculator
Press CTRL+F5 to see your page in the default browser, and to get the exact URL.
The first form appears on the page.
Start your emulator and connect to the URL for your page.
When the page appears in the emulator, enter a loan amount of 100000, the number of years as 30, and a percentage rate of 5, and then click Calculate.
The calculator is replaced by the results view, with the result 534.59.
Many devices have small display areas, making it impractical to display long lists. ASP.NET provides an ObjectList control designed for mobile devices that can automatically display an entire screen of information at one time and provide links so that users can move forward and backward in the list.
In this section of the walkthrough, you will create a data listing that displays more information than can be shown on one screen of even a desktop browser. By adding an ObjectList control, you will automatically add paging capability to the output, appropriately sized to the browser the user has.
The first thing you need to do is create a mobile Web Forms page and add an ObjectList control to it.
To add a mobile Web Forms page and create an ObjectList control on it
Right-click the application in Solution Explorer and choose Add New Item.
Choose Mobile Web Form under Visual Studio installed templates.
Name the page MobilePaging.aspx and then click Add.
A Web page that inherits from the MobilePage class is created and added to your project. The page includes a Form control named form1 on it. You can only use controls in the System.Web.Mobile namespace on a page that inherits from the MobilePage class.
From the Mobile Web Forms folder of the Toolbox, drag an ObjectList control to the design surface and place it on form1.
An ObjectList control is added to your page. It shows a generic set of data that gives you an idea of what the control will look like when it is rendered on the client.
After the ObjectList control is created, you need to create data that will populate the control.
To create the data
In the MobilePaging.aspx page, switch to Design view and double-click the empty design surface to create an empty event handler for the page Load event.
In the empty handler, add the following code.
```
protected void Page_Load(object sender, EventArgs e)
{
    // Create and fill an array of strings
    string[] listItems = new string[25];
    for (int i = 0; i < 25; i++)
        listItems[i] = String.Format("This is item {0}.", i);

    // Bind the ObjectList to the items
    this.ObjectList1.DataSource = listItems;
    this.ObjectList1.DataBind();
}
```
The code creates an array of string objects and populates it with strings. It then binds that array to the ObjectList control.
You can now test the page.
To test the page
Press CTRL+F5 to run the page.
The page is displayed with a long list of numbered items.
Start your device emulator and type in the URL of the page.
Notice that the data is displayed in a long list.
Adding Paging
Now that you have a page that displays data, you can add paging so that the display is automatically sized to the size of the screen in the device.
To add paging
In the MobilePaging.aspx page, switch to Design view and then select form1.
In the Properties window, set the Paginate property to true.
Select the ObjectList control and, in the Properties window, set the ItemsPerPage property to 5.
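In the page markup, those two property settings correspond to attributes roughly like the following (the control IDs are assumed to match the defaults created earlier in this walkthrough):

```
<mobile:Form id="form1" runat="server" Paginate="true">
    <mobile:ObjectList id="ObjectList1" runat="server" ItemsPerPage="5">
    </mobile:ObjectList>
</mobile:Form>
```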
You can now test paging.
To test paging
Press CTRL+F5 to run the page in Internet Explorer.
The page is displayed with a page of data and a navigation control.
Use the Next and Previous links to move through the data.
In your device emulator, enter the URL of the page.
The emulator displays a page of data (five items). If necessary, you can scroll the page up or down.
Use the links to move to other pages showing more items.
In this walkthrough, you have created a page that is tailored to devices by taking advantage of controls that are designed for devices with limited display areas. ASP.NET and Visual Web Developer include facilities for creating applications for a wide range of devices and browsers.
You also might want to explore the following aspects of devices:
Depending on what emulator you use, you might be able to integrate the emulator into Visual Web Developer. In Solution Explorer, right-click the mortgage calculator page and choose Browse With. Click Add and type the information for your emulator to add it to the list of browsers. You can then use the Browse With command to view a page in the emulator. Note that not all emulators are supported. | http://msdn.microsoft.com/en-us/library/z8h56a3f(v=vs.100).aspx | CC-MAIN-2014-52 | refinedweb | 1,870 | 66.33 |
# .NET Reference Types vs Value Types. Part 2
[](https://github.com/sidristij/dotnetbook)
The Object base type and implementation of interfaces. Boxing
-------------------------------------------------------------
It seems we have come through hell and high water and could nail any interview, even one for the .NET CLR team. However, let's not rush to microsoft.com to search for vacancies. We still need to understand how value types inherit from object if they contain neither a reference to a SyncBlockIndex nor a pointer to a virtual methods table. This will completely explain our system of types and put all the pieces of the puzzle in their places. However, it will take more than one sentence.
Now, let's recall again how value types are allocated in memory. They get their place in memory right where they are declared. Reference types are allocated on the heap of small and large objects, and a variable of a reference type always holds a reference to the place on the heap where the object lives. Each value type has the ToString, Equals and GetHashCode methods. They are virtual and overridable, but you cannot inherit from a value type and override its methods further. If value types used overridable methods, they would need a virtual methods table to route calls, and that would create problems when passing structures to the unmanaged world: extra fields would go there. As a result, descriptions of value type methods do exist somewhere, but you cannot access them directly via a virtual methods table.
> This chapter was translated from Russian jointly by author and by [professional translators](https://github.com/bartov-e). You can help us with translation from Russian or English into any other language, primarily into Chinese or German.
>
>
>
> Also, if you want thank us, the best way you can do that is to give us a star on github or to fork repository [ github/sidristij/dotnetbook](https://github.com/sidristij/dotnetbook).
>
>
This may bring the idea that the lack of inheritance is artificial:
* there is inheritance from an object, but not direct;
* there are ToString, Equals and GetHashCode inside a base type. In value types these methods have their own behavior. This means, that methods are overridden in relation to an `object`;
* moreover, if you cast a type to an `object`, you have the full right to call ToString, Equals and GetHashCode;
* when calling an instance method on a value type, the method receives the structure by reference. That means calling an instance method is like calling a static method: `Method(ref structInstance, newInternalFieldValue)`. Indeed, this call passes `this`, with one exception: the JIT must compile the method body so that field accesses are not offset by a pointer to a virtual methods table, because no such pointer exists in the structure. *For value types it exists in another place*.
Types are different in behavior, but this difference is not so big on the level of implementation in the CLR. We will talk about it a little later.
Let's write the following line in our program:
```
var obj = (object)10;
```
It will allow us to deal with number 10 using a base class. This is called boxing. That means we have a VMT to call such virtual methods as ToString(), Equals and GetHashCode. In reality boxing creates a copy of a value type, but not a pointer to an original. This is because we can store the original value everywhere: on the stack or as a field of a class. If we cast it to an object type, we can store a reference to this value as long as we want. When boxing happens:
* the CLR allocates space on the heap for a structure + SyncBlockIndex + VMT of a value type (to call ToString, GetHashCode, Equals);
* it copies an instance of a value type there.
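Since boxing copies the value, later changes to the original variable do not affect the boxed copy, and every boxing operation produces a new heap object. A small illustration (plain C#, nothing assumed beyond the standard library):

```
int number = 10;
object boxed = number;          // a copy of 10 is placed on the heap

number = 20;                    // changes only the original value
Console.WriteLine(boxed);       // still prints 10

// each boxing operation produces a distinct heap object
object a = number;
object b = number;
Console.WriteLine(ReferenceEquals(a, b));   // False
```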
Now, we’ve got a reference variant of a value type. A structure has got **absolutely the same set of system fields as a reference type**,
becoming a fully-fledged reference type after boxing. The structure became a class. Let’s call it a .NET somersault. This is a fair name.
Just look at what happens if you use a structure which implements an interface using the same interface.
```
struct Foo : IBoo
{
int x;
void Boo()
{
x = 666;
}
}
IBoo boo = new Foo();
boo.Boo();
```
When we create the Foo instance, its value goes to the stack in fact. Then we put this variable into an interface type variable and the structure into a reference type variable. Next, there is boxing and we have the object type as an output. But it is an interface type variable. That means we need type conversion. So, the call happens in a way like this:
```
IBoo boo = (IBoo)(box_to_object)new Foo();
boo.Boo();
```
Writing such code is not effective. You will have to change a copy instead of an original:
```
void Main()
{
var foo = new Foo();
foo.a = 1;
Console.WriteLine(foo.a); // -> 1
IBoo boo = foo;
boo.Boo(); // looks like changing foo.a to 10
Console.WriteLine(foo.a); // -> 1
}
struct Foo: IBoo
{
public int a;
public void Boo()
{
a = 10;
}
}
interface IBoo
{
void Boo();
}
```
Looking at this code for the first time, we can't tell what we are dealing with in code *other than our own*: we see a cast to the IBoo interface, which makes us think Foo is a class and not a structure. Since there is no visual distinction between structures and classes, we expect the modification made through the interface to end up in foo, but it doesn't, because boo is a copy of foo. That is misleading. In my opinion, such code deserves comments so that other developers can make sense of it.
The second thing relates to the previous thoughts that we can cast a type from an object to IBoo. This is another proof that a boxed value type is a reference variant of a value type. Or, all types in a system of types are reference types. We can just work with structures as with value types, passing their value entirely. Dereferencing a pointer to an object as you would say in the world of C++.
You can object that if it was true, it would look like this:
```
var referenceToInteger = (IInt32)10;
```
We would get not just an object, but a typed reference for a boxed value type. It would destroy the whole idea of value types (i.e. integrity of their value) allowing for great optimization, based on their properties. Let’s take down this idea!
```
public sealed class Boxed<T>
{
public T Value;
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public override bool Equals(object obj)
{
return Value.Equals(obj);
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public override string ToString()
{
return Value.ToString();
}
[MethodImpl(MethodImplOptions.AggressiveInlining)]
public override int GetHashCode()
{
return Value.GetHashCode();
}
}
```
We’ve got a complete analog of boxing. However, we can change its contents by calling instance methods. These changes will affect all parts with a reference to this data structure.
```
var typedBoxing = new Boxed<int> { Value = 10 };
var pureBoxing = (object)10;
```
The first variant isn't very attractive: instead of a simple type cast we construct an object by hand. The second line is much better, and the two lines do almost the same thing. The only difference is that ordinary boxing doesn't zero the allocated heap memory, because the structure is copied there right away, whereas allocating our class does the usual zero-initialization first. This makes the first variant about 10% slower than ordinary boxing.
Instead, we can call some methods for our boxed value.
```
struct Foo
{
public int x;
public void ChangeTo(int newx)
{
x = newx;
}
}
var boxed = new Boxed<Foo> { Value = new Foo { x = 5 } };
boxed.Value.ChangeTo(10);
var unboxed = boxed.Value;
```
We’ve got a new instrument. Let's think what we can do with it.
* Our `Boxed` type does the same as the usual type: allocates memory on the heap, passes a value there and allows to get it, by doing a kind of unbox;
* If you lose a reference to a boxed structure, the GC will collect it;
* However, we can now work with a boxed type, i.e. calling its methods;
* Also, we can replace an instance of a value type in the SOH/LOH for another one. We couldn’t do it before, as we would have to do unboxing, change structure to another one and do boxing back, giving a new reference to customers.
The main problem of boxing is creating traffic in memory. The traffic of unknown number of objects, the part of which can survive up to generation one, where we get problems with garbage collection. There will be a lot of garbage and we could have avoided it. But when we have the traffic of short-lived objects, the first solution is pooling. This is an ideal end of .NET somersault.
```
var pool = new Pool<Boxed<int>>(maxCount: 1000);
var boxed = pool.Box(10);
boxed.Value=70;
// use boxed value here
pool.Free(boxed);
```
Now boxing can work using a pool, which eliminates memory traffic while boxing. We can even make objects go back to life in finalization method and put themselves back into the pool. This might be useful when a boxed structure goes to asynchronous code other than yours and you cannot understand when it became unnecessary. In this case, it will return itself back to pool during GC.
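The `Pool` type used above is not part of the BCL; it is assumed here. A minimal sketch of what such a pool could look like (the member names mirror the snippet above; everything else is an illustrative assumption):

```
using System.Collections.Concurrent;

public sealed class Pool<T> where T : class, new()
{
    private readonly ConcurrentStack<T> _free = new ConcurrentStack<T>();
    private readonly int _maxCount;

    public Pool(int maxCount)
    {
        _maxCount = maxCount;
    }

    // Reuse a heap instance when one is available instead of allocating a new one
    public T Rent()
    {
        return _free.TryPop(out var item) ? item : new T();
    }

    public void Free(T item)
    {
        // Above the limit the instance is simply dropped and collected by the GC
        if (_free.Count < _maxCount)
            _free.Push(item);
    }
}
```

A `Box(value)` helper like the one in the snippet would then just be `Rent()` followed by assigning `Value` before handing the instance out.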
Let’s conclude:
* If boxing is accidental and shouldn’t happen, don’t make it happen. It can lead to problems with performance.
* If boxing is necessary for the architecture of a system, there may be variants. If the traffic of boxed structures is small and almost invisible, you can use boxing. If the traffic is visible, you might want to do the pooling of boxing, using one of the solutions stated above. It spends some resources, but makes GC work without overload;
Ultimately let’s look at a totally impractical code:
```
static unsafe void Main()
{
// here we create boxed int
object boxed = 10;
// here we get the address of a pointer to a VMT
var address = (void**)EntityPtr.ToPointerWithOffset(boxed);
unsafe
{
// here we get a Virtual Methods Table address
var structVmt = typeof(SimpleIntHolder).TypeHandle.Value.ToPointer();
// change the VMT address of the integer passed to Heap into a VMT SimpleIntHolder, turning Int into a structure
*address = structVmt;
}
var structure = (IGetterByInterface)boxed;
Console.WriteLine(structure.GetByInterface());
}
interface IGetterByInterface
{
int GetByInterface();
}
struct SimpleIntHolder : IGetterByInterface
{
public int value;
int IGetterByInterface.GetByInterface()
{
return value;
}
}
```
The code uses a small function, which can get a pointer from a reference to an object. The library is available at [github address](https://github.com/mumusan/dotnetex/blob/master/libs/). This example shows that usual boxing turns int into a typed reference type. Let’s
look at the steps in the process:
1. Do boxing for an integer.
2. Get the address of an obtained object (the address of Int32 VMT)
3. Get the VMT of a SimpleIntHolder
4. Replace the VMT of a boxed integer to the VMT of a structure.
5. Make unboxing into a structure type
6. Display the field value on screen, getting the Int32, that was
boxed.
I do it via the interface on purpose as I want to show that it will work
that way.
### Nullable<T>
It is worth mentioning about the behavior of boxing with Nullable value types. This feature of Nullable value types is very attractive as the boxing of a value type which is a sort of null returns null.
```
int? x = 5;
int? y = null;
var boxedX = (object)x; // -> 5
var boxedY = (object)y; // -> null
```
This leads us to a peculiar conclusion: since null has no type, the only way to unbox to a type different from the boxed one is the following:
```
int? x = null;
var pseudoBoxed = (object)x;
double? y = (double?)pseudoBoxed;
```
The code works just because you can cast a type to anything you like
with null.
Going deeper in boxing
----------------------
As a final bit, I would like to tell you about [System.Enum type](http://referencesource.microsoft.com/#mscorlib/system/enum.cs,36729210e317a805). Logically this should be a value type as it’s a usual enumeration: aliasing numbers to names in a programming language. However, System.Enum is a reference type. All the enum data types, defined in your field as well as in .NET Framework are inherited from System.Enum. It’s a class data type. Moreover, it’s an abstract class, inherited from `System.ValueType`.
```
[Serializable]
[System.Runtime.InteropServices.ComVisible(true)]
public abstract class Enum : ValueType, IComparable, IFormattable, IConvertible
{
// ...
}
```
Does it mean that all enumerations are allocated on the SOH and when we use them, we overload the heap and GC? Actually no, as we just use them. Then, we suppose that there is a pool of enumerations somewhere and we just get their instances. No, again. You can use enumerations in structures while marshaling. Enumerations are usual numbers.
The truth is that CLR hacks data type structure when forming it if there is enum [turning a class into a value type](https://github.com/dotnet/coreclr/blob/4b49e4330441db903e6a5b6efab3e1dbb5b64ff3/src/vm/methodtablebuilder.cpp#L1425-L1445):
```
// Check to see if the class is a valuetype; but we don't want to mark System.Enum
// as a ValueType. To accomplish this, the check takes advantage of the fact
// that System.ValueType and System.Enum are loaded one immediately after the
// other in that order, and so if the parent MethodTable is System.ValueType and
// the System.Enum MethodTable is unset, then we must be building System.Enum and
// so we don't mark it as a ValueType.
if(HasParent() &&
((g_pEnumClass != NULL && GetParentMethodTable() == g_pValueTypeClass) ||
GetParentMethodTable() == g_pEnumClass))
{
bmtProp->fIsValueClass = true;
HRESULT hr = GetMDImport()->GetCustomAttributeByName(bmtInternal->pType->GetTypeDefToken(),
g_CompilerServicesUnsafeValueTypeAttribute,
NULL, NULL);
IfFailThrow(hr);
if (hr == S_OK)
{
SetUnsafeValueClass();
}
}
```
Why do this? In particular, because of inheritance: to define a customized enum you need, for example, to specify the names of its possible values. However, it is impossible to inherit from value types. So the developers designed System.Enum as a reference type that is turned into a value type when compiled.
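The result of that trick is easy to observe with reflection: `System.Enum` itself reports as a class, while every enum compiled in your code reports as a value type. A quick check (plain C#):

```
enum Color { Red, Green, Blue }

// ...

Console.WriteLine(typeof(Enum).IsValueType);              // False: the abstract base is a class
Console.WriteLine(typeof(Color).IsValueType);             // True: the compiled enum is a value type
Console.WriteLine(Enum.GetUnderlyingType(typeof(Color))); // System.Int32: just a number underneath
```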
What if you want to see boxing personally?
------------------------------------------
Fortunately, you don’t have to use a disassembler and get into the code jungle. We have the texts of the whole .NET platform core and many of them are identical in terms of .NET Framework CLR and CoreCLR. You can click the links below and see the implementation of boxing right away:
* There is a separate group of optimizations each of which uses a
specific type of a processor:
+ *[JIT\_BoxFastMP\_InlineGetThread](https://github.com/dotnet/coreclr/blob/master/src/vm/amd64/JitHelpers_InlineGetThread.asm#L86-L148)*
(AMD64 — multiprocessor or Server GC, implicit Thread Local Storage)
+ *[JIT\_BoxFastMP](https://github.com/dotnet/coreclr/blob/8cc7e35dd0a625a3b883703387291739a148e8c8/src/vm/amd64/JitHelpers_Slow.asm#L201-L271)*
(AMD64 — multiprocessor or Server GC)
+ *[JIT\_BoxFastUP](https://github.com/dotnet/coreclr/blob/8cc7e35dd0a625a3b883703387291739a148e8c8/src/vm/amd64/JitHelpers_Slow.asm#L485-L554)*
(AMD64 — single processor or Workstation GC)
+ *[JIT\_TrialAlloc::GenBox(..)](https://github.com/dotnet/coreclr/blob/38a2a69c786e4273eb1339d7a75f939c410afd69/src/vm/i386/jitinterfacex86.cpp#L756-L886)*
(x86) connected through JitHelpers
* In general cases a JIT inlines a call of a helper function
[Compiler::impImportAndPushBox(..)](https://github.com/dotnet/coreclr/blob/a14608efbad1bcb4e9d36a418e1e5ac267c083fb/src/jit/importer.cpp#L5212-L5221)
* Generic-version uses less optimized
[MethodTable::Box(..)](https://github.com/dotnet/coreclr/blob/master/src/vm/methodtable.cpp#L3734-L3783)
  + Finally, [CopyValueClassUnchecked(..)](https://github.com/dotnet/coreclr/blob/master/src/vm/object.cpp#L1514-L1581) is called. Its code shows why it's better to choose structures with a size of up to 8 bytes inclusive.
Here, the only method is used for unboxing:
*[JIT\_Unbox(..)](https://github.com/dotnet/coreclr/blob/03bec77fb4efaa397248a2b9a35c547522221447/src/vm/jithelpers.cpp#L3603-L3626)*, which is a wrapper around *[JIT\_Unbox\_Helper(..)](https://github.com/dotnet/coreclr/blob/03bec77fb4efaa397248a2b9a35c547522221447/src/vm/jithelpers.cpp#L3574-L3600)*.
Also, as discussed at <https://stackoverflow.com/questions/3743762/unboxing-does-not-create-a-copy-of-the-value-is-this-right>, unboxing doesn't mean copying data from the heap. Unboxing merely obtains a pointer into the heap object while checking type compatibility. The IL opcode that follows the unboxing defines what happens with this address: the data might be copied to a local variable or onto the stack for a method call. Otherwise we would have double copying: first from the heap to somewhere, then to the destination place.
Questions
---------
### Why can't the .NET CLR do pooling for boxed values itself?
If we talk to any Java developer, we will learn two things:

* All value types in Java are boxed, meaning they are not really value types. Integers are boxed too.
* As an optimization, all integers from -128 to 127 are taken from a pool of objects.

So, why doesn't this happen in the .NET CLR during boxing? The answer is simple: because we can change the contents of a boxed value type. In other words, we can do the following:
```
using System;
using System.Reflection;

object x = 1;
x.GetType()
 .GetField("m_value", BindingFlags.Instance | BindingFlags.NonPublic)
 .SetValue(x, 138);
Console.WriteLine(x); // -> 138
```
Or like this (C++/CLI):
```
void ChangeValue(Object^ obj)
{
    Int32^ i = (Int32^)obj;
    *i = 138;
}
```
If boxed integers were pooled, a change like this would turn every 1 in the application into 138, which is not good.
The second reason is the very essence of value types in .NET: they work with values directly, which makes them fast. Boxing is rare, and adding up boxed numbers belongs to the realm of fantasy and bad architecture, so pooling them would bring no real benefit.
### Why is it not possible to box on the stack instead of the heap when you call a method that takes an object which is actually a value type?
If boxing were done on the stack and a reference to that stack slot were passed around, the reference could escape: for example, the called method could store it in a field of a class. The called method would then return, the method that did the boxing would return as well, and as a result the reference would point to a dead area of the stack.
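Here is a hedged C# sketch of that escape scenario (the names are invented for illustration). Because boxing allocates on the heap, the reference captured by the callee stays valid even after the boxing method's stack frame is gone:

```csharp
using System;

// The reference escapes into this longer-lived variable.
object stored = null;

void Capture(object obj) => stored = obj;

void Box()
{
    int value = 42;
    Capture(value);   // boxing happens at this call site
}                      // Box()'s locals die here

Box();
// Had the box lived on Box()'s stack frame, this would now read dead memory.
// Because it lives on the heap, the value is still intact:
Console.WriteLine(stored); // 42
```

This is exactly why the runtime cannot, in general, place the box on the caller's stack.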
### Why is it not possible to use a value type as its own field?
Sometimes we want to use a structure as a field of another structure that in turn uses the first one - or, more simply, to use a structure as a field of itself. Don't ask why this could be useful: it can't. If a structure contains itself as a field, directly or through a dependency on another structure, you create a recursion, which would mean a structure of infinite size. However, there are places in the .NET Framework where this is done. An example is `System.Char`, [which contains itself](http://referencesource.microsoft.com/#mscorlib/system/char.cs,02f2b1a33b09362d):
```
public struct Char : IComparable, IConvertible
{
    // Member Variables
    internal char m_value;
    //...
}
```
All CLR primitive types are designed this way. We, mere mortals, cannot implement such behavior - and we don't need to: it exists only to give primitive types an OOP flavor in the CLR.
> This chapter was translated from the author's original Russian by [professional translators](https://github.com/bartov-e). You can help us create translations of this text into any other language, including Chinese or German, using the Russian and English versions as a source.
>
> Also, if you want to say "thank you", the best way to do so is to give us a star on GitHub or to fork the repository [https://github.com/sidristij/dotnetbook](https://github.com/sidristij/dotnetbook)

Source: https://habr.com/ru/post/439490/
The following is a list of typographical conventions used in this book:

*Italic*

Used to indicate new terms, URLs, filenames, file extensions, directories, commands and options, and program names, and to highlight comments in examples. For example, a path in the filesystem may appear as C:\Hacks\examples or /usr/mike/hacks/examples.

Constant width

Used to show code examples, XML markup, Java package or C# namespace names, or output from commands.

Constant width bold

Used in examples to show emphasis.

Constant width italic

Used in examples.
You should pay special attention to notes set apart from the text with the following icons:
The thermometer icons, found next to each hack, indicate the relative complexity of the hack.

Source: https://etutorials.org/XML/xml+hacks/Preface/Conventions+Used+in+This+Book/
Getting Started With HTTP Middleware in Kitura
Version
- Swift 5, macOS 10.14, Xcode 10
As Swift on the server continues to mature, many popular frameworks are adopting industry standards to handle incoming requests and outgoing responses. One of the most popular ways to handle traffic is by implementing middleware, which intercepts requests and performs logic that you define on specific routes.
In this tutorial, you’ll take a hands-on approach to using middleware in a REST API. You will:
- Enable cross-origin traffic by adding CORS to your router.
- Use middleware to authenticate an HTTP(S) request.
- Ensure that your API properly understands the meaning of life!
You’ll follow test-driven development principles throughout this tutorial by starting with routes that behave incorrectly, then adding the HTTP route and middleware logic needed to resolve the issues.
This tutorial uses Kitura. However, the concepts of middleware and HTTP routing work in the same manner in Vapor, as well as many other Swift server frameworks. From a technical standpoint, here’s what you’ll use throughout this tutorial:
- Kitura 2.7 or higher
- macOS 10.14 or higher
- Swift 5.0 or higher
- Xcode 10.2 or higher
- Terminal
Getting Started
To start, click on the Download Materials button at the top or bottom of this page to download the projects that you’ll use throughout this tutorial. Open Terminal and navigate to the starter project folder.

You’ll notice the lack of an `.xcodeproj` file in this folder, and that’s a-OK! Enter the following commands in Terminal:
```
swift package generate-xcodeproj
xed .
```
This will pull all the dependencies needed to run the project and open Xcode.
In Xcode, build and run your project by pressing Command-R. Check the Xcode console and you should see a server listening on port 8080. Open a web browser and navigate to `localhost:8080`. You should see Kitura’s “Hello, World!” page:
You’re ready to dive right into the middle of this tutorial!
First, revisit some core concepts of HTTP.
The Request/Response Model
The server you’ll build throughout this tutorial is responsible for handling a request made from a client, then responding appropriately based on the content of that request. In this tutorial, you will only use `cURL` commands from Terminal to interact with your server, but these commands could come just as easily from an iOS or Android app. :]
What makes up a request?

- Any HTTP request must have an address, which contains:
  a. A domain or host (e.g., `localhost:8080` or even `127.0.0.1:8080`)
  b. A path (e.g., `/users`)
  c. Query parameters (e.g., `?username=ray&id=42`)
- A method must be specified (e.g., `GET`, `POST` or `OPTIONS`).
- All requests can have headers, which you can think of as metadata for your request (e.g., `{"Origin": ""}`).
- For certain methods, a body must be specified, which you usually serialize into JSON.
When your server creates a response, it needs to specify the following:
- A status code between 200 and 599 that indicates the result of the request.
- A set of headers as response metadata.
- A body, which is usually in text, JSON, or another data type.
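Put together, a raw HTTP response carrying those three parts might look like this (the values shown are purely illustrative, not output from the tutorial's server):

```
HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 25

{"message":"Good origin"}
```

The status line carries the code, the header lines carry the metadata, and everything after the blank line is the body.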
Later in this tutorial, you are going to write a server that responds to a request validating your interpretation of the meaning of life. Using `cURL` from Terminal, your request will look like this:
```
curl -X POST \
  \
  -H 'content-type: application/json' \
  -H 'origin:' \
  -d '{"meaningOfLife": 42}'
```
When you first write your handler for this request, it won’t quite work out of the box. However, you’ll use middleware to inspect the headers and body of the request to make it work, until you get this response:
Yes, the meaning of life is indeed 42!
Before you start writing code, review what happens when you send off your request from your client.
HTTP Routing
Consider two components in your server: the HTTP router and the HTTP route. The router is the object responsible for handling incoming requests and routing them to the appropriate route. When you’ve set up a route on your server, the router will hand it off to the handler function that is responsible for sending the response back to the client.
Assume that you are working on a team of developers who have already put a lot of work into their existing routes. You join the team, and your first task is to validate data coming into the `/test` route - all requests that are routed to `/test` must have the `Access-Control-Allow-Origin` header specified as `true`. This is what the route looks like:
```
func corsHandler(request: RouterRequest, response: RouterResponse, next: () -> Void) {
  if response.headers["Access-Control-Allow-Origin"] == "false" {
    response.status(.badRequest).send("Bad origin")
  } else {
    response.status(.OK).send("Good origin")
  }
}
```
One idea would be to ask everyone who connects to this API very nicely to specify this header in their request, but what if someone doesn’t get the message? Furthermore, what if you could get your hands on the request without having to mess with any of the existing code in the route handler? This is the crux of this tutorial — middleware to the rescue.
Middleware
Simply put, middleware is code that runs in the middle of your router and your route handler. Whenever you write middleware, you can intercept a request on its way to its specified route, and you can do whatever you need to do with the request at that point. You can even choose to send back a response early if the request doesn’t meet your needs, ignoring the route handler altogether.
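As a rough, framework-free sketch of that idea (this is not Kitura's actual API - the types and names below are invented for illustration), you can model middleware as a function that receives the request plus a `next` continuation and decides whether to short-circuit:

```swift
typealias Request = [String: String]   // just headers, for brevity
typealias Middleware = (Request, (String) -> Void, () -> Void) -> Void

// Middleware that rejects requests lacking an Origin header.
let originCheck: Middleware = { request, respond, next in
    if request["Origin"] == nil {
        respond("Bad origin")      // short-circuit: the route handler never runs
    } else {
        next()                     // hand the request on to the route handler
    }
}

// A tiny "router" that runs the middleware before the route handler.
func handle(_ request: Request) -> String {
    var result = ""
    originCheck(request, { result = $0 }, { result = "Good origin" })
    return result
}

print(handle([:]))                      // Bad origin
print(handle(["Origin": "localhost"]))  // Good origin
```

Real frameworks generalize this pattern into a chain of middleware, each deciding whether to call `next()`.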
In Kitura, middleware is achieved through writing a class or struct that adheres to a protocol, and then by registering the middleware on specific routes.
The example described above is, in middleware terms, a common requirement called CORS, an acronym for Cross-Origin Resource Sharing. If you wanted to roughly write your own code to fulfill this need, you could write middleware that looks like this:
```
class RazeMiddleware: RouterMiddleware {
  public func handle(request: RouterRequest, response: RouterResponse,
                     next: @escaping () -> Void) throws {
    request.headers.append("Access-Control-Allow-Origin", value: "true")
    next()
  }
}
```
And then you could add it to the `/test` route, as required previously, like so:

```
let middleware = RazeMiddleware()
router.all("/test", middleware: middleware)
```
Why not register the middleware only for `POST` requests to `/test`? Instead of specifying only this method, you are choosing to handle all methods used on this route. The next section is going to explain why.
Prepping Your Kitura Server for Middleware
In Xcode, navigate to your Package.swift and add the following code to the end of the `dependencies` array:

```
.package(url: "", .upToNextMinor(from: "2.1.0")),
.package(url: "", .upToNextMinor(from: "2.1.3"))
```
Next, scroll down to `targets` and add the following two dependencies to your `Application` target:

```
"KituraCORS",
"CredentialsHTTP"
```
Close your Xcode project. Open Terminal and navigate to the root directory of your project. Run two commands:
```
swift package generate-xcodeproj
xed .
```
You have now updated your project to add CORS middleware and some authentication capabilities to your project.
Next, you’ll write three HTTP routes that fail or give you an undesirable result at first, and then you will write middleware to make each of them work correctly!
In Xcode, open the Sources/Application/Routes directory in the Project Navigator on the left, and right-click on the directory. Click on New File…, and add a new Swift file called RazeRoutes.swift. Make sure you select the `Application` target.
Replace the contents of the file with the following import statements and initialization function:
```
import LoggerAPI
import KituraContracts
import Kitura

func initializeRazeRoutes(app: App) {

}
```
Before you start adding more code to this file, go back to Application.swift and add the following line to the end of `postInit()`:

```
initializeRazeRoutes(app: self)
```
Every HTTP route you now add in RazeRoutes.swift will register with your router every time you start your server. Now, you’ll add a place to put all of your middleware.
Right-click Application.swift and click New File… again. This file should be named Middleware.swift, and should also be targeted to `Application`.
Replace the file’s contents with the following:
```
import Foundation
import Kitura
import KituraCORS

class Middleware {

}
```
Alright, the stage is set — time for you to enable cross-origin requests on your server and get your first taste of middleware!
Enabling CORS
Open RazeRoutes.swift and add this route registration to `initializeRazeRoutes`:

```
app.router.get("/cors", handler: corsHandler)
```
Xcode should show you an error at this point because you have not yet declared a function called `corsHandler`. Fix that by adding the following code at the very bottom of the file, outside of your function:

```
// 1
func corsHandler(request: RouterRequest, response: RouterResponse, next: () -> Void) {
  // 2
  guard response.headers["Access-Control-Allow-Origin"] == "" else {
    response.status(.badRequest).send("Bad origin")
    return
  }
  // 3
  response.status(.OK).send("Good origin")
}
```
Here’s what you just wrote:

1. You define the `GET` route you registered on `/cors` by specifying a `request`, a `response` and a `next` handler, which tells your router to continue searching for things to do according to the request that was made.
2. Next, you validate the value of the `Access-Control-Allow-Origin` header in your response. If you’re wondering why you’d be checking a response without having previously set anything on it, you’re spot on! This is what you will have to fix with middleware.
3. This is the “happy path.” If everything looks good, simply return a successful response.
Build and run your server, and then confirm that your server is running on port 8080. Open Terminal and execute the following command:
```
curl -H "Origin:" localhost:8080/cors
```
In Terminal, you should see the string `"Bad origin"` sent as a response. This might not be the desired response, but you can trust that it’s expected for now!
You’re going to implement middleware to fix this. In Xcode, open Middleware.swift, and add the following method to the `Middleware` class:

```
// 1
static func initializeCORS(app: App) {
  // 2
  let options = Options(allowedOrigin: .origin(""), methods: ["GET"], maxAge: 5)
  // 3
  let cors = CORS(options: options)
  // 4
  app.router.all("/cors", middleware: cors)
}
```
Here’s what you just added, step by step:

1. The method signature you write to add this middleware as a convenience to your HTTP route.
2. You create and set an object of options to enable CORS - most notably that you will only allow `GET` to be an acceptable method on the `/cors` route, and that you will only allow requests that specify the allowed origin to pass through. The `maxAge` parameter specifies how long you want this value to be cached for future requests.
3. Here, you are creating a `CORS` middleware with your options for use on your HTTP route.
4. Finally, you register your CORS middleware for all HTTP methods that hit `/cors` on your router. Even though you listed `GET` as the only method in your `options` map, any method should still be able to access this middleware.
Hold down the Command button and click on the `CORS` text in your constructor. This will open the `CORS` class in Xcode. Scroll to the definition of the `handle` method, and add a breakpoint on its first line. Finally, go back to RazeRoutes.swift and, at the top of the `initializeRazeRoutes` function, add the following to register your middleware:

```
Middleware.initializeCORS(app: app)
```
Build and run your server again. Once your server is running on port 8080, execute the same `cURL` command in Terminal:

```
curl -H "Origin:" localhost:8080/cors
```
Go back to Xcode, where you’ve hit the breakpoint you’ve just added. Inspect the `request` and `response` objects as you step through the code. When you finally let the program continue, you should see `Good origin` in your response!
Good work! The `CORS` middleware took care of ensuring that your response was marked appropriately and allowed a cross-origin resource to access the `GET` method on `/cors`, all thanks to your middleware! Now let’s do something from scratch.
Middleware From Scratch
Open RazeRoutes.swift again and, at the end of your `initializeRazeRoutes` function, register a new `GET` route like so:

```
app.router.get("/raze", handler: razeHandler)
```
Below your `corsHandler` function, add the following code to handle any `GET` requests that come in for `/raze`:

```
func razeHandler(request: RouterRequest, response: RouterResponse, next: () -> Void) {
  guard let meaning = request.queryParameters["meaningOfLife"] else {
    return
  }
  response.status(.OK).send("Yes, the meaning of life is indeed \(meaning)!")
}
```
Here’s the drill: You’ve been asked to make a route that simply echoes back the “meaning of life” to any client that makes a `GET` request to `/raze`, and you want to have control over what that value is. Build and run your server, and execute the following command in Terminal:

```
curl "localhost:8080/raze?meaningOfLife=42"
```
You should get a response that says, “Yes, the meaning of life is indeed 42!”.
While truer words may never have been spoken, this route is built on the assumption that all clients know to include this value as a query parameter in the `GET` request, and to ensure that it is an integer and not a string. You might have given good direction, but not every client that consumes this API will remember to include it!
To see what happens if you forget this parameter, execute the following command in Terminal:
```
curl -v localhost:8080/raze
```
You should get a response returning a 503 code, meaning that the server is unable to handle the request. What gives? You registered the route and everything, right?
Since you didn’t include the query parameter `meaningOfLife` in your request, and you didn’t write code to send back a user-friendly response, it makes sense that you’re going to get a less-than-ideal response in this case. Guess what? You can write middleware to make sure that this parameter is handled correctly in all of your requests to this route!

Further, you can make sure that malformed requests are responded to correctly, so that you can ensure a good developer experience for consumers of this API and not have to worry about touching the original HTTP route code!
In Xcode, open Middleware.swift. Scroll to the very bottom of this file, and add the following code:

```
// 1
public class RazeMiddleware: RouterMiddleware {
  private var meaning: Int

  // 2
  public init(meaning: Int) {
    self.meaning = meaning
  }

  // 3
  public func handle(request: RouterRequest, response: RouterResponse,
                     next: @escaping () -> Void) throws {

  }
}
```
What you’ve added, here:

1. Kitura requires that your middleware class or struct conforms to the `RouterMiddleware` protocol.
2. It’s generally a good idea to set up a constructor for your middleware instance. This allows you to handle stored properties that are relevant to your middleware - similar to the options you handled with `CORS` in the previous example.
3. The single requirement of the `RouterMiddleware` protocol is the `handle` method. It should look familiar, as it takes a `RouterRequest`, a `RouterResponse` and a closure to tell the router to continue on.
Note: Kitura also offers `TypeSafeMiddleware`, which allows you to use a strongly typed object instead of the “raw middleware” you’ve written here.
When you implemented `CORS`, you elected to add some headers to your response if and only if your request included certain headers. Inside the `handle()` method in your new middleware, add the following code:

```
guard let parsedMeaning = request.queryParameters["meaningOfLife"] else {
  response.status(.badRequest).send("You must include the meaning of life in your request!")
  return
}
guard let castMeaning = Int(parsedMeaning) else {
  response.status(.badRequest).send("You sent an invalid meaning of life.")
  return
}
guard castMeaning == meaning else {
  response.status(.badRequest).send("Your meaning of life is incorrect")
  return
}
next()
```
After you register this middleware with the appropriate route, every incoming request will be checked before it reaches the handler. All you do in the above code is make sure a `meaningOfLife` parameter exists in your request, make sure it’s a valid number, and finally make sure it’s equal to the correct meaning. If any of these checks fails, you simply respond with an error. Otherwise, you call `next()` to signal this middleware is done with its work.
This might be a fairly contrived example, but consider for a moment that you were able to intervene on all requests made to the `/raze` route this way without touching a single line of code on the existing route! This perfectly illustrates the power of middleware. Scroll up to the `Middleware` class and add the following method to it:

```
static func initializeRazeMiddleware(app: App) {
  let meaning = RazeMiddleware(meaning: 42)
  app.router.get("/raze", middleware: meaning)
}
```
By parameterizing the `meaning` value, you can let developers who want to use this middleware set whatever value they want! However, you’re a well-read developer and you understand the true meaning of life, so you set it to `42` here.
Lastly, open up RazeRoutes.swift in Xcode and, inside the `initializeRazeRoutes()` function but above your route registrations, add this line of code:

```
Middleware.initializeRazeMiddleware(app: app)
```
Build and run your server, and ensure that it is live on port 8080. Open Terminal, and run the following commands:
```
curl "localhost:8080/raze"
curl "localhost:8080/raze?meaningOfLife=43"
curl "localhost:8080/raze?meaningOfLife=42"
```
You should see the following output:
```
$ curl "localhost:8080/raze"
You must include the meaning of life in your request!
$ curl "localhost:8080/raze?meaningOfLife=43"
Your meaning of life is incorrect
$ curl "localhost:8080/raze?meaningOfLife=42"
Yes, the meaning of life is indeed 42!
```
- Open Terminal.
- Run the command `lsof -i tcp:8080`.
- Note the value under `PID` in the returned text.
- Run the command `kill -9 <PID>` with the value of the above PID instead.
Feel free to put breakpoints on your middleware class to observe how it handles each request, but you’ll notice that only the properly formed request gets through to your route handler now. Here’s a reminder: You ensured that this route yields a safe experience no matter what the request is, and you did it all without touching the existing route handler code! Nice work!
Your last example is going to deal with authentication — this might seem scary at first, but the principles are the exact same!
Authentication Middleware
Whenever you’re browsing your favorite social media website, it would make sense that you could only see your personal content if you’re logged in, right? Why would you even want to waste time performing an operation in a route handler if the request is unauthenticated? You’re going to implement a route handler that uses Codable Routing in Kitura with type-safe middleware to ensure that the request is authenticated.
First, open RazeRoutes.swift and register your route in your `initializeRazeRoutes()` function:

```
app.router.get("/auth", handler: authHandler)
```
Next, scroll to the bottom of this file and add the following handler:

```
func authHandler(profile: RazeAuth, completion: (RazeAuth?, RequestError?) -> Void) {
  completion(profile, nil)
}
```
Your server should not compile properly at this point, because you have not yet defined `RazeAuth`. Open Middleware.swift and import the following module at the top of your file, underneath your import of `KituraCORS`:

```
import CredentialsHTTP
```
Next, scroll to the bottom of this file and add the following code to define your middleware instance:

```
// 1
public struct RazeAuth: TypeSafeHTTPBasic {
  // 2
  public var id: String

  // 3
  static let database = ["David": "12345", "Tim": "54321"]

  // 4
  public static func verifyPassword(username: String, password: String,
                                    callback: @escaping (RazeAuth?) -> Void) {

  }
}
```
Take a moment to examine what you’ve added:

1. The main requirement of your middleware is that it must conform to the `TypeSafeHTTPBasic` protocol.
2. The first required implementation in the `TypeSafeHTTPBasic` protocol is the `id` property, to be able to identify an authenticated user.
3. In this example, you are setting up a very small and simple database of usernames and passwords - this is here to demonstrate that you could use any existing database module to query by username!
4. The other required implementation for the `TypeSafeHTTPBasic` protocol is the `verifyPassword` method. After you have confirmed that the username and password match expected values, you can create a `RazeAuth` object with the proper username and pass it on in the callback. Since you registered the route with a non-optional `RazeAuth` object, calling `callback()` with `nil` will instead send a 401 Unauthorized response to the client.
Next, add this code inside `verifyPassword()` to verify whether the given username and password are valid according to your super secure database of usernames and passwords:

```
guard let storedPassword = database[username],
      password == storedPassword else {
  return callback(nil)
}
return callback(RazeAuth(id: username))
```
Lastly, go to RazeRoutes.swift and put a breakpoint inside your `/auth` route handler. Build and run your server, and ensure your server is listening on port 8080. Open Terminal, and run the three following commands:

```
curl -u "Ray":"12345" localhost:8080/auth
curl -u "David":"12345" localhost:8080/auth
curl -u "Tim":"54321" localhost:8080/auth
```
For the commands that are properly authenticated (David’s and Tim’s), you should trigger your breakpoint, and your server should respond with the username that you sent over! Now your server only has to do the work it’s authenticated to do!
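For context on what `curl -u` actually does: it adds an `Authorization: Basic` header whose value is the base64 encoding of `username:password`. A quick Foundation sketch (illustrative only, not part of the tutorial's project) shows the header your server receives:

```swift
import Foundation

let credentials = "David:12345"
let encoded = Data(credentials.utf8).base64EncodedString()

// This is the header cURL attaches on your behalf.
print("Authorization: Basic \(encoded)")  // Authorization: Basic RGF2aWQ6MTIzNDU=
```

Kitura's Basic-auth middleware decodes this header and feeds the username and password to your `verifyPassword` implementation.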
Where to Go From Here?
Middleware opens up a large realm of possibilities for developers to enhance routes that might already exist on a server. This tutorial showed you how easy it is to both implement existing middleware libraries, and how you can roll your own library to add custom behavior to your server, like the koba library written by Caleb Kinney.
Both our Server Side Swift with Kitura and Server Side Swift with Vapor books have plenty of information about implementing middleware and authentication, and you can work on them in a real-life scenario!
If you want to learn more about how Kitura handles HTTP routing and works in general, read this beginner’s tutorial about it.
Please leave a comment below if you know about any other middleware libraries or if you have any questions!

Source: https://www.raywenderlich.com/3158581-getting-started-with-http-middleware-in-kitura
As our projects get bigger, we'll need to break up our code into multiple files. Doing this - and managing our tests as well as our import and export statements - can be tough for beginners. For that reason, we'll walk through the process by adding functionality for calculating the area of a rectangle to our `shape-tracker` application. In the process, we'll use ES6 classes and update our application so the UI can check if three lengths make a triangle and calculate the area of a rectangle. It's a hodgepodge of functionality, but that's not the point. Instead, the goal here is to keep our code modular and well-organized. The principles we apply here can be used for any number of business logic and test files.
We are adding simple functionality to our application - and it would be easy to just shove the new code into the files we already have. However, that would be a bad move. In the real world, we need to think about scalability. Specifically, how can we make our applications scale up and grow bigger with a minimum amount of pain points? While we should have a general road map for how an application might expand, we can't predict everything the application might need. If it's a successful application, it will likely look very different in five years than in does now. For that reason, we always need to build with an eye on the future. Think of the analogy of making a building. If it has a strong foundation, we can add more stories on it in the future. If it has a weak foundation, it will need major overhauls - or worse, we might need to start from scratch - in order for us to keep building. When an application with a weak foundation starts running into scalability problems, it can lead to major headaches for businesses - pain points, wasted developer time, less time spent on new features that users want right now - not a year from now. And if competitors are already building those features and other problems arise for users, they will quickly desert the application.
Modular code scales better and is easier to read. There are fewer issues with global scope and fewer bugs. Developers can work more efficiently on different parts of the codebase - and they'll be able to communicate better, too.
We already have most of the files we need. Because we're only adding a small amount of functionality, we'll just need two new files. We'll also add a new directory for our `js` code as well - because it's always better to organize our code in directories.

- `src/js/rectangle.js`: This will contain the business logic for a `Rectangle` class.
- `__tests__/rectangle.test.js`: This will contain the test suite for tests related to the `Rectangle` class.
Add those files to the project now. Also, don't forget to move `triangle.js` into the `js` directory. VSCode has a handy little feature where it can automatically update any import statements in the code for you. Here's an example of the prompt (though this one is for `rectangle.js`).
If you want to do it manually (or VSCode doesn't automatically update the import statements), the relative path for the `triangle.js` import statement in `triangle.test.js` looks like this:

```
import Triangle from './../src/js/triangle.js';
```
We'll update import statements in the UI (`main.js`) later in this lesson.

By the way, `main.js` shouldn't be in your `js` directory. It should be in `src` because it's our entry point file.
Before we move on, let's update the code in `triangle.js` to use ES6 classes:

```
export default class Triangle {
  constructor(side1, side2, side3) {
    this.side1 = side1;
    this.side2 = side2;
    this.side3 = side3;
  }

  checkType() {
    // ...the existing triangle type-checking logic goes here, unchanged...
  }
}
```
Because we've made a code update, we should verify that our tests still pass. And they do. Because of our tests, we can be assured that everything is still working correctly after refactoring our code.
Because we are using a test-driven approach, our next step is to write a test. We'll start with a test for a `Rectangle` constructor:

```
import Rectangle from '../src/js/rectangle.js';

describe('Rectangle', () => {

  test('should correctly create a rectangle object using two sides', () => {
    const rectangle = new Rectangle(3,5);
    expect(rectangle.side1).toEqual(3);
    expect(rectangle.side2).toEqual(5);
  });
});
```
Because a rectangle has two pairs of sides, each with equal length, we'll only need to pass in two sides as parameters.
As expected, this test will fail, but it should be clear by this point that it's a bad fail:

```
TypeError: _rectangle.default is not a constructor
```
It's clear why that's the case. There's no constructor yet! Let's add just enough code to have a good fail.

```
export default class Rectangle {
  constructor() {
  }
}
```
We just add and export a `Rectangle` class with an empty constructor.

```
expect(received).toEqual(expected) // deep equality

Expected: 3
Received: undefined
```
This is a better fail. We've reached our expectation and we know our code is properly wired up.
Next, let's get the code passing by adding parameters and statements to our constructor:

```
export default class Rectangle {
  constructor(side1, side2) {
    this.side1 = side1;
    this.side2 = side2;
  }
}
```
Once we save, VSCode will automatically run the tests again - and everything is passing.
By the way, note that we use the same parameter names as we do for triangles (`side1` and `side2`). Imagine, for a moment, the havoc that would occur if these variables were globally scoped. It's very common to reuse variable and property names. Thankfully, we can scope them locally.
Next, we'll need to write a test for our only function:

```
import Rectangle from '../src/js/rectangle.js';

describe('Rectangle', () => {
  ...

  test('should correctly calculate the area of a rectangle', () => {
    const rectangle = new Rectangle(3,5);
    expect(rectangle.getArea()).toEqual(15);
  });
});
```
If we run our tests now, we'll get a bad fail:
TypeError: rectangle.getArea is not a function
Our new method doesn't exist yet - of course testing something that doesn't exist will result in a fail - and a bad one. Here's the code we need for a good fail:
getArea() { } }
And here's the fail:
expect(received).toEqual(expected) // deep equality Expected: 15 Received: undefined
That's much better! Finally, let's add the code to get the test passing:
... getArea() { return this.side1 * this.side2; } ...
Now all our tests are passing.
We should always look for an opportunity to refactor our code. Our source code looks fine but we can DRY up our tests a bit because we are using some repeated code:
const rectangle = new Rectangle(3,5);. If we were to build out our code further and add more tests, it would be nice to have a reusable rectangle. This also gives us an opportunity to practice adding a
beforeEach() block in our code. Here's the updated tests refactored to use a
beforeEach() block:
import Rectangle from '../src/js/rectangle.js'; describe('Rectangle', () => { let rectangle; beforeEach(() => { rectangle = new Rectangle(3,5); }); test('should correctly create a rectangle object using two sides', () => { expect(rectangle.side1).toEqual(3); expect(rectangle.side2).toEqual(5); }); test('should correctly create a rectangle object using two sides', () => { expect(rectangle.getArea()).toEqual(15); }); });
Now that we have all tests passing, we're ready to update our UI. As we mentioned earlier in the lesson,
main.js is not in our
js directory - it's in
src because it's our entry point file.
import $ from 'jquery'; import 'bootstrap'; import 'bootstrap/dist/css/bootstrap.min.css'; import './css/styles.css'; import Triangle from './js/triangle.js'; import Rectangle from './js/rectangle.js'; $(document).ready(function() { $('#triangle-checker-form').submit(function(event) { event.preventDefault(); const length1 = parseInt($('#length1').val()); const length2 = parseInt($('#length2').val()); const length3 = parseInt($('#length3').val()); const triangle = new Triangle(length1, length2, length3); const response = triangle.checkType(); $('#response').append(`<p>${response}</p>`); }); $('#rectangle-area-form').submit(function(event) { event.preventDefault(); const length1 = parseInt($('#rect-length1').val()); const length2 = parseInt($('#rect-length2').val()); const rectangle = new Rectangle(length1, length2); const response = rectangle.getArea(); $('#response2').append(`<p> The area of the rectangle is ${response}.</p>`); }); });
There are a few key things to note:
Triangleand
Rectangle. As our projects grow in size and our UI needs access to more business logic files, we'd add more import statements here.
parseInt()when we get the value of lengths from both forms. We don't want to have an issue with working with strings instead of numbers.
And that's really it! It's not a fancy UI but everything is wired together correctly. Most importantly, this lesson should provide a clearer picture of how we can have multiple business logic files working with our UI and tests.
Remember, whenever a file needs access to a function, class or some other code from another file, we just need to use import/export statements. We can use these in any JavaScript file. For instance, we might have a business logic file that imports a function from another business logic file. In that case, import and export statements are applicable in the exact same way.
As you build out a bigger project, take the time to break up your business logic into smaller, more modular files and then use import and export statements as needed. webpack will take care of the rest!
Below is a repository for the complete project.
Example GitHub Repo for Shape Tracker
Lesson 41 of 48
Last updated April 8, 2021 | https://www.learnhowtoprogram.com/intermediate-javascript/test-driven-development-and-environments-with-javascript/working-with-multiple-files | CC-MAIN-2021-17 | refinedweb | 1,601 | 58.38 |
What is the Bollinger Bands?
A Bollinger Band® is a technical analysis tool defined by a set of trendlines plotted two standard deviations (positively and negatively) away from a simple moving average (SMA) of a security’s price, but which can be adjusted to user preferences.
The Bollinger Bands are used to discover if a stock is oversold or overbought. It is called a mean reversion indicator, which measures how far a price swing will stretch before a counter impulse triggers a retracement.
It is a lagging indicator, which is looking at historical background of the current price. Opposed to a leading indicator, which tries to where the price is heading.
Step 1: Get some time series data on a stock())[['Close', 'High', 'Low']] print(ticker)
We will use the Close, High and Low columns to do the further calculations.
Close High Low Date 2020-01-02 300.350006 300.600006 295.190002 2020-01-03 297.429993 300.579987 296.500000 2020-01-06 299.799988 299.959991 292.750000 2020-01-07 298.390015 300.899994 297.480011 2020-01-08 303.190002 304.440002 297.160004 ... ... ... ... 2020-08-06 455.609985 457.649994 439.190002 2020-08-07 444.450012 454.700012 441.170013 2020-08-10 450.910004 455.100006 440.000000 2020-08-11 437.500000 449.929993 436.429993 2020-08-12 452.040009 453.100006 441.190002
Step 2: How are the Bollinger Bands calculated
Luckily, we can refer to Investopedia.org to get the answer, which states that the Bollinger Bands are calculated as follows.
BOLU=MA(TP,n)+m∗σ[TP,n]
BOLD=MA(TP,n)−m∗σ[TP,n]
Where BOLU is the Upper Bollinger Band and BOLD is Lower Bollinger Band. The MA is the Moving Average. The TP and σ are calculated as follows.
TP (typical price)=(High+Low+Close)÷3
σ[TP,n] = Standard Deviation over last n periods of TP
Where n is the number of days in smoothing period (typically 20), and m is the number of standard deviations (typically 2).
Step 3: Calculate the Bollinger Bands
This is straight forward. We start by calculating the typical price TP and then the standard deviation over the last 20 days (the typical value). Then we calculate the simple moving average of rolling over the last 20 days (the typical value). Then we have the values to calculate the upper and lower values of the Bolling Bands (BOLU and BOLD).'] print(ticker)
Resulting in the following output.
Date Close High ... BOLU BOLD Date ... 2020-01-02 300.350006 300.600006 ... NaN NaN 2020-01-03 297.429993 300.579987 ... NaN NaN 2020-01-06 299.799988 299.959991 ... NaN NaN 2020-01-07 298.390015 300.899994 ... NaN NaN 2020-01-08 303.190002 304.440002 ... NaN NaN ... ... ... ... ... ... 2020-08-06 455.609985 457.649994 ... 445.784036 346.919631 2020-08-07 444.450012 454.700012 ... 453.154374 346.012626 2020-08-10 450.910004 455.100006 ... 459.958160 345.317173 2020-08-11 437.500000 449.929993 ... 464.516981 346.461685 2020-08-12 452.040009 453.100006 ... 469.891271 346.836730
Note, that if you compare you results with Yahoo! Finance for Apple, there will be some small difference. The reason is, that they by default use TP to be closing price and not the average of the Close, Low and High. If you change TP to equal Close only, you will get the same figures as they do.
Step 4: Plotting it on a graph
Plotting the three lines is straight forward by using plot() on the DataFrame. Making an filled area with color between BOLU and BOLD can be achieved by using fill_between().
This results in the full program to be.
import pandas_datareader as pdr import datetime as dt import matplotlib.pyplot as plt ticker = pdr.get_data_yahoo("AAPL", dt.datetime(2020, 1, 1), dt.datetime.now())[['Close', 'High', 'Low']] # Boillinger band calculations'] ticker = ticker.dropna() print(ticker) # Plotting it all together ax = ticker[['Close', 'BOLU', 'BOLD']].plot(color=['blue', 'orange', 'yellow']) ax.fill_between(ticker.index, ticker['BOLD'], ticker['BOLU'], facecolor='orange', alpha=0.1) plt.show()
Giving the following graph.
Step 5: How to use the Bollinger Band Indicator?
If the stock price are continuously touching the upper Bollinger Band (BOLU) the market is thought to be overbought. While if the price continuously touches the lower Bollinger Band (BOLD) the market is thought to be oversold.
The more volatile the market is, the wider the upper and lower band will be. Hence, it also indicates how volatile the market is at a given period.
The volatility measured by the Bollinger Band is referred to as a squeeze when the upper and lower band are close. This is considered to be a sign that there will be more volatility in the coming future, which opens up for possible trading opportunities.
A common misconception of the bands are that when the price outbreaks the the bounds of the upper and lower band, it is a trading signal. This is not the case.
As with all trading indicators, it should not be used alone to make trading decisions. | https://www.learnpythonwithrune.org/pandas-calculate-and-plot-the-bollinger-bands-for-a-stock/ | CC-MAIN-2021-25 | refinedweb | 862 | 76.72 |
filtered - Apply source filter on external module
# Apply source filter YourFilter.pm on Target.pm, then result can be used as FilteredTarget use filtered by => 'YourFilter', as => 'FilteredTarget', on => 'Target',);
Source filter has unlimited power to enhance Perl. However, source filter is usually applied on your own sources. This module enables you to apply source filter on external module.
Rest of the options are passed to
import of filtered module.
by
Specify a source filter module you want to apply on an external module.
as
Specify the package name for the resultant filtered module. This option can be omitted. If omitted, original names are used.
on
Specify a target module.
on keyword can be ommited.
For @INC hook, please consult
perldoc -f require. Hook itself is enabled in short period but it may affect other modules.
asis applied in limited context.
If you specified
as => FilteredTarget, on => Target, the following codes:
package Target::work; package Target; Target::work::call();
are transformed into as follows:
package FilteredTarget::work; package FilteredTarget; FilteredTarget::work::call();
Actually, only
'\bpackage\s+Target\b' and
'\bTarget::\b' are replaced.
Yasutaka ATARASHI <yakex@cpan.org>
This is free software; you can redistribute it and/or modify it under the same terms as the Perl 5 programming language system itself. | http://search.cpan.org/~yakex/filtered-v0.0.2/lib/filtered.pm | CC-MAIN-2017-13 | refinedweb | 212 | 51.95 |
This should be made in W3C's
This document was produced by a group operating under the
This document specifies usage scenarios for XQuery.
Robie, 8 May 2005: New status section. Added note on static typing.
Robie, 31 July 2002: Minor changes to ensure that examples parse with the current syntax. All occurrences of date() changed to current-date(), and editorial notes on dates deleted since they now work properly in FandO. Deleted use case REF, since we no longer support the => operator, and people did not feel this use case was a good illustration of the problem domain. Replaced use case FNPARM with use case STRONG. Replaced filter() in the table of contents query with a recursive function call, since filter() no longer exists. Deleted queries Q3 and Q6, which jury rigged some primitive full text search capabilities to get the right answer, but didn't solve the real underlying problem. Added the new W3C patent policy language to the status section. Replaced 'precedes' with '<<', replaced follows with '>>'.
Robie, 23 April 2002: Updated to reflect current language. Many use cases have been corrected based on testing and analysis done by Dana and me.
Robie, 17 Dec 2001: Changed all examples to the current syntax. Changed functions to support the current Functions and Operators equivalents.
Robie, 8 Jun 2001: Corrected many examples, converted all queries to the new XQuery syntax, and added use case FNPARM.
Robie, 15 Feb 2001: First stand-alone Working Draft. This material previously appeared as a part of the W3C XML Query Requirements Working Draft, but was placed into a separate document to make it easier to incorporate solutions.:
Some implementations of XQuery bind input to external variables. If the environment has bound the external variable $b to the same document used in the above query, this expression would return the same set of authors:.
This use case contains several example queries that illustrate requirements gathered from the database and document communities.
Most of the example queries in this use case are based on a bibliography document named "" with the following DTD:
Here is the data found at "bstore1.example.com/bib.xml":
Q5 also uses information on book reviews and prices from a separate data source named "" with the following DTD:
Here are the contents of "":
Q9 uses an input document named "books.xml", with the following DTD:
Here are the contents of books.xml:
Q10 uses an input document named "prices.xml", with the following DTD:
Here are the contents of prices.xml:
List books published by Addison-Wesley after 1991, including their year and title.
Create a flat list of all the title-author pairs, with each pair enclosed in a "result" element.
For each book in the bibliography, list the title and authors, grouped inside a "result" element.
For each author in the bibliography, list the author's name and the titles of all books by that author, grouped inside a "result" element.
The order in which values are returned by distinct-values() is undefined. The distinct-values() function returns atomic values, extracting the names from the elements.
For each book found at both bstore1.example.com and bstore2.example.com, list the title of the book and its price from each source.
For each book that has at least one author, list the title and first two authors, and an empty "et-al" element if the book has additional authors.
List the titles and years of all books published by Addison-Wesley after 1991, in alphabetic order.
Find books in which the name of some element ends with the string "or" and the same element contains the string "Suciu" somewhere in its content. For each such book, return the title and the qualifying element.
In the above solution, string(), local-name() and ends-with() are functions defined in the Functions and Operators document.
In the document "books.xml", find all section or chapter titles that contain the word "XML", regardless of the level of nesting.
In the document "prices.xml", find the minimum price for each book, in the form of a "minprice" element with the book title as its title attribute.
For each book with an author, return the book with its title and authors. For each book with an editor, return a reference with the book title and the editor's affiliation.
Find pairs of books that have different titles but the same set of authors (possibly in a different order).":
The queries in this use case are based on the following sample data.
Prepare a (nested) table of contents for Book1, listing all the sections and their titles. Preserve the original attributes of each <section> element, if any.
Prepare a (flat) figure list for Book1, listing all the figures and their titles. Preserve the original attributes of each <figure> element, if any.
How many sections are in Book1, and how many figures?
How many top-level sections are in Book1?
Make a flat list of the section elements in Book1. In place of its original attributes, each section element should have two attributes, containing the title of the section and the number of figures immediately contained in the section.
Make a nested list of the section elements in Book1, preserving their original attributes and hierarchy. Inside each section element, include the title of the section and an element that includes the number of figures immediately contained in the section..
The queries in this use case are based on the following sample data.
In the Procedure section of Report1, what Instruments were used in the second Incision?
In the Procedure section of Report1, what are the first two Instruments to be used?
In Report1, what Instruments were used in the first two Actions after the second Incision?
In Report1, find "Procedure" sections where no Anesthesia element occurs before the first Incision
(No sections satisfy Q4, thankfully.)
In Report1, what happened between the first Incision and the second Incision?
Here is another solution that is perhaps more efficient and less readable::
In the following solution, the between() function takes a sequence of nodes, a starting node, and an ending node, and returns the nodes between them:
Here is the output from the above query:.
This use case is based on three separate input documents named users.xml, items.xml, and bids.xml. Each of the documents represents one of the tables in the relational database described above, using the following DTDs:
Here is an abbreviated set of data showing the XML format of the instances:
The entire data set is represented by the following table:
List the item number and description of all bicycles that currently have an auction in progress, ordered by item number.
This solution assumes that the current date is 1999-01-31.
The above query returns an element named
item_tuple, but its definition does
not match the definition of item_tuple in the DTD.
For all bicycles, list the item number, description, and highest bid (if any), ordered by item number.
Find cases where a user with a rating worse (alphabetically, greater) than "C" is offering an item with a reserve price of more than 1000.
List item numbers and descriptions of items that have no bids.
For bicycle(s) offered by Tom Jones that have received a bid, list the item number, description, highest bid, and name of the highest bidder, ordered by item.
For each item whose highest bid is more than twice its reserve price, list the item number, description, reserve price, and highest bid.
Find the highest bid ever made for a bicycle or tricycle.
How many items were actioned (auction ended) in March 1999?
List the number of items auctioned each month in 1999 for which data is available, ordered by month.
For each item that has received a bid, list the item number, the highest bid, and the name of the highest bidder, ordered by item number.
List the item number and description of the item(s) that received the highest bid ever recorded, and the amount of that bid.
List the item number and description of the item(s) that received the largest number of bids, and the number of bids it (or they) received.
For each user who has placed a bid, give the userid, name, number of bids, and average bid, in order by userid.
List item numbers and average bids for items that have received three or more bids, in descending order by average bid.
List names of users who have placed multiple bids of at least $100 each.
List all registered users in order by userid; for each user, include the userid, name, and an indication of whether the user is active (has at least one bid on record) or inactive (has no bid on record).
List the names of users, if any, who have bid on every item.
(No users satisfy Q17.)
List all users in alphabetic order by name. For each user, include descriptions of all the items (if any) that were bid on by that user, in alphabetic order..
The queries in this use case are based on the following sample data, which is found in the file "sgml.xml". Line numbers have been added to the data to allow the results of queries to be conveniently specified.
Locate all paragraphs in the report (all "para" elements occurring anywhere within the "report" element).
Elements whose start-tags are on lines 6, 11, 20, 27, 34, 39, 46, 53, 56, 62, 67, 71, 76, 83, 90, 94
Locate all paragraph elements in an introduction (all "para" elements directly contained within an "intro" element).).
Elements whose start-tags are on lines 90, 94
Locate the second paragraph in the third section in the second chapter (the second "para" element occurring in the third "section" element occurring in the second "chapter" element occurring in the "report").
Element whose start-tag is on line 67
Locate all classified paragraphs (all "para" elements whose "security" attribute has the value "c").
Element whose start-tag is on line 94
List the short titles of all sections (the values of the "shorttitle" attributes of all "section" elements, expressing each short title as the value of a new element.)
Attribute values in start-tags on lines 23, 50, 59
Locate the initial letter of the initial paragraph of all introductions (the first character in the content [character content as well as element content] of the first "para" element contained in an "intro" element).
Character after start-tag on lines 6, 20, 27, 53, 62, 90
Locate all sections with a title that has "is SGML" in it. The string may occur anywhere in the descendants of the title element, and markup boundaries are ignored.
Elements whose start-tags are on lines 50, 59
Same as (Q8a), but the string "is SGML" cannot be interrupted by sub-elements, and must appear in a single text node.
Element whose start-tag is on line 59
Locate all the topics referenced by a cross-reference anywhere in the report (all the "topic" elements whose "topicid" attribute value is the same as an "xrefid" attribute value of any "xref" element).
Element whose start-tag is on line 65
Locate the closest title preceding the cross-reference ("xref") element whose "xrefid" attribute is "top4" (the "title" element that would be touched last before this "xref" element when touching each element in document order).:
The queries in this use case are based on the following input data, which is found in the file "string.xml".
In addition, the following data, listing the partners and competitors of companies, is found in the file "company-data.xml".
Find the titles of all news items where the string "Foobar Corporation" appears in the title.
Find news items where the Foobar Corporation and one or more of its partners are mentioned in the same paragraph and/or title. List each news item by its title and date.
Query Q3 has been withdrawn from the use cases document.
Find news items where a company and one of its partners is mentioned in the same news item and the news item is not authored by the company itself.".
List all unique namespaces used in the sample data.
Select the title of each record that is for sale.
Select all elements that have an attribute whose name is in the XML Schema namespace.
List the target URI's of all XLinks in the document.
Select all records that have a remark in German.
Select the closing time elements of all AnyZone auctions currently monitored.
Select the homepage of all auctions where both seller and high bidder are registered at the same auctioneer.
Select all traders (either seller or high bidder) without negative comments
The schema for this example is the International Purchase Order schema taken from the XML Schema Primer, which imports a schema for addresses. The main schema is found in a schema document named "ipo.xsd":
The address constructs are found in a schema document named "address.xsd":
The sample data used for the query is found in a file named "ipo.xml":
Count the invoices shipped to the United Kingdom..
The corresponding schema document is named "zips.xsd"::
Here is the schema for the above file.
This is not a complete query, it is a function that is meant to be called in a query. We will use this function in Q4.
Determine whether the postal code or zip code for a purchase order is right.:
The following sample data contains instances of these substitution groups::
Find all comments found in an item shipped to Helen Zoe on the date 1999-12-01, including all elements in the substitution group for ipo:comment.
Note that
schema-element(ipo:comment) matches
any valid element in the substitution group of
ipo:comment.
Write a function that returns all comments found on an element, whether an item element or some other element that may have a comment..
In American slang, a "deadbeat" is a person who fails to meet a financial obligation.
This query assumes that "deadbeats.xml" lists the names deadbeats in the following format:.
Here is a query that calls the function we just defined to get the total for an invoice (before calculating taxes and shipping charges):
This query illustrates the need to be able to pass a sequence as a parameter to a function.
If the input document contains more than one purchase order for the given date and person, a total will be computed for all purchase orders.
In
This is the schema given for the above report:
This report, which lists products sold by zip code, is based on the same international purchase report used in previous queries.
Here is a query that generates the desired report from a collection that contains US purchase orders:
The editors thank the members of the XML Query Working Group, which produced the material in this document.
The use cases in this paper were contributed by the following individuals:
Use case "XMP" has been previously published in.
Updated status section.
Added note on static typing.
Removed whitespace from the price element in the source document for Use Case "XMP" - it was <price> 65.95</price>,
and is now <price> 65.95</price>. Fixes
Removed trailing whitespace from the <remark/> elements used in "auction.xml". Fixes bug
Need to discuss
Fixed many errors in Use Case "Strong" as proposed
Alignment with 04 April 2005 Working Draft of XQuery. This actually did not change the results of any of the queries, so it was just a matter of tweaking the front matter.. | http://www.w3.org/TR/2006/WD-xquery-use-cases-20060608/xquery-use-cases.xml | CC-MAIN-2016-50 | refinedweb | 2,598 | 62.17 |
The first thing that you need to do is to include the header file at the top of your code:
#include<header.h>
Then you have to find out the names of the functions that are contained in the header file.
Then you should note what parameters are necessary to call the function.
For example if the function in the header file is called:
int function1(int num1, int num2)
This function is called function1; it takes two integer parameters (num1 and num2) and returns an integer.
So in your code you can call it like this:
#include <header.h>

int main()
{
    int integer1;
    int integer2;
    int integer3;
    /* ... */
    integer3 = function1(integer1, integer2);
    /* ... */
    return 0;
}
This would call function1 using integer1 and integer2 as parameters and return the answer into integer3.
homer99 said the basic things you see in every C/C++ tutorial. But about scanning barcodes:
I'll risk an answer, because I have built the same kind of system (working name: Shop). And the reply is very simple: you don't need to do anything special! Details:
You take the keyboard's plug-in cable and plug it into the scanner.
You take the scanner's plug-in cable and plug it into the computer.
Now: if the user presses a key on the keyboard, the keystroke simply
goes to the computer, but if the user scans a barcode, you get as
input some digits (1234567890123 for example,
usually 13 digits). Your program at that moment
is in the barcode control and processes the digits (mine, for example,
fetches the product data from the database and displays the
product name, price, and so on).
Question: what happens if you are in another control (for example,
the product-name field) and the user scans a barcode? To protect against this, I validate the product name: it can't be all digits!
I hope it helps. Alex
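Alex's "can't be all digits" rule can be sketched in C. This is a minimal illustration, not code from the scanner library: looks_like_barcode is an invented name, and the fixed length of 13 matches the 13-digit (EAN-13-style) codes Alex mentions.

```c
#include <string.h>
#include <ctype.h>

/* Heuristic from Alex's post: treat the input as a scanned barcode
 * only if it is exactly 13 characters long and every character is a
 * digit. Typed product names will normally fail this test. */
int looks_like_barcode(const char *input)
{
    size_t len = strlen(input);
    if (len != 13)
        return 0;
    for (size_t i = 0; i < len; i++) {
        if (!isdigit((unsigned char)input[i]))
            return 0;
    }
    return 1;
}
```

A real system might also verify the EAN-13 check digit, but the length-and-digits test is enough to route input to the right control.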
//////////////////////////
// SCAN is the scanned data from the read routine
// Reading is this converted to ASCII
// Both routines remove start and stop sentinels from the data
#include "SCNTST.h"
// 6 bit data TEXT and Numeric
int decode_track1 (char *reading, char *scan);
// 6 bit data Numeric only (and separators)
int decode_track2 (char *reading, char *scan);
// This routine calculates the length of the barcode data
int data_length (char *buffer);
//////////////////////////
again, all I need are basic guidelines as to how these calls are made in code
DaveMon
int decode_track1 (char *reading, char *scan);
int decode_track2 (char *reading, char *scan);
int data_length (char *buffer);
for example, in your C code:
int nReturnVal;
int nDataLen;
char szReading[1024]; // define this however big you need it to be...
char szScan[1024]; // same here...
you would want to check the user documentation for how these functions are actually used; but assuming the obvious, the calls look like this:
nReturnVal = decode_track1(szReading, szScan);
nDataLen = data_length(szScan);
(Array names already decay to char * when passed, so no & operator is needed.)
#include <thenameoftheheader.h>
near the top of your program, just like homer said.
You call the procedures just like you would call a procedure that you wrote.
You also will have to tell the linker to include the object file that contains the procedures. It will look something like procedures.o
You didn't say what compiler you are using so no one can tell you the exact command.
The above-mentioned steps are all right.
It helps to keep all your header files in one directory and then give that directory's path to the compiler's search path. Otherwise you sometimes get an error saying that the header or library was not found, or something similar.
Anup
#include "SCNTST.h"

int main()
{
    // variables
    char szIn[100], szBar[14];  // 13 barcode digits + terminating NUL
    int len, bEnd = 0;

    // init scanner (InitScaner, ReadBarCode and CloseScaner are my own
    // helpers, not part of SCNTST.h)
    InitScaner();

    // loop to read barcodes
    while (!bEnd)
    {
        // read
        ReadBarCode(szIn);
        len = data_length(szIn);
        if (len == 0)
            bEnd = 1;  // exit the loop
        else
        {
            // form barcode from input
            decode_track1(szIn, szBar);
            decode_track2(szIn, szBar);
            // do something with the barcode
            // ...
        }
    }
    // close scanner
    CloseScaner();
    return 0;
}
The header file takes care of the compile step so your program can compile your function calls.
For the link step: you need to supply information to the linker so the library that corresponds to the header can be drawn into your executable program. The IDE that you are using will have an option screen that allows you to specify what library files to include and where they can be found. If you don't do this you will get linker errors along the lines of "Unresolved symbol" or "undefined function" or something like that.
Bill Nicholson
DaveMon | https://www.experts-exchange.com/questions/10312760/how-to-call-functions.html | CC-MAIN-2018-13 | refinedweb | 753 | 70.63 |
High Integrity Software
What is SPARK? It's a language, a subset of Ada that will run on any Ada compiler, with extensions that automated tools can analyze to prove the correctness of programs. And I'd like to see real-life examples of SPARK's successes, though there's more info on that at. The underlying debate is testing ueber alles versus creating provably correct code from the outset.
You can purchase High Integrity Software from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, carefully read the book review guidelines, then visit the submission page.
hmmm (Score:5, Funny)
Not to be confused with C.A.M. Hoare's famous and profound statement: "Want to see my boobies?"
question (Score:5, Insightful)
Or do they sit around thinking of methodologies to write books about?
Those who can, do, those who can't, teach?
Re:question (Score:4, Insightful)
Those who can't teach, teach theory.
Re:question - Answer (Score:2)
Those who can, do. Those who can't, teach.
Those who can't teach, teach gym.
(At least according to Woody Allen).
myke
Re:question (Score:5, Informative)
SPARK Ada came from Praxis Critical Systems. Go take a look. You can read about how SPARK Ada is used in things like aircraft, and (increasingly) in the automotive industry.
Re:question (Score:2, Funny)
Yes. And I regularly see job adverts specifying SPARK.
Re:question (Score:2, Interesting)
These people have successfully used SPARK on many projects. They also provide the tools making the SPARK approach feasible.
The SPARK approach causes discomfort for many software developers because its approach diverges from that of the various agile development processes.
If you read the book you will discover that the author's claims are supported by real data from real projects.
The SPARK approach is extremely formal. It has frequently been used on the safety critical portions of larger sys
A _review_? (Score:5, Insightful)
Re:A _review_? (Score:4, Interesting)
Re:A _review_? (Score:2, Interesting)
heheh (Score:4, Funny)
"High Integrity Software"
SCO should adopt that as their motto.
But what.. (Score:4, Insightful)
Re:But what.. (Score:3, Insightful)
Re:But what.. (Score:5, Insightful)
In the early days of compilers, one of the claims for compilers was that they would make mistakes impossible. Of course, all they did was make one class of stupid assembler mistakes impossible.
The reason for the verbosity of COBOL is the idea that it would be so like business English that management could read it, if not write it.
Each time we get a tool that removes one class of mistakes, all we do is increase the system complexity until the level of mistakes returns to the previous level of near-unacceptability. "Snafu" is the normal state of the programmer's universe - it is only a case of how large a system you build before it all fouls up.
Having said that, Design By Contract is a good idea. While accepting that it is always going to turn to ratshit, you might as well do so at a higher level rather than a lower one. However, it isn't new: look at Eiffel and AspectJ, for instance.
Re:But what.. (Score:5, Insightful)
Yeah, the article's statement, "The book describes a language that insures programs are inherently correct," is really misleading. Very few real-world programs have ever been proved correct, and the reasons for that are language-independent. Very few real-world problems even lend themselves to a precise mathematical statement of what "correct" would mean. What would it mean for mozilla to be "correct?" Even if your code is running a nuclear reactor, the code can only be as "correct" as the accuracy of the assumptions and engineering data that went into the spec.
In many real-life projects, the nastiest bugs come from requirements that change after the first version has already been written. Proving that a program correctly implements a spec doesn't help if the spec changes.
Re:But what.. (Score:3, Insightful)
That's the beauty of this system. You can close out the issue in your bug tracking database anyway:
"CLOSED: Not a bug; behavior as designed."
Re:But what.. (Score:3, Informative)
A search on Google for formal methods [google.com] will give you a lot of stuff. The first site [ox.ac.uk] that comes up is a good starting point.
Note that at some point, one has to hope that what the client wants is what he has described. A tax calculation programme will not be of use if he really wanted a custom
Re:But what.. (Score:4, Informative)
In real life that's what usually kills people.
_Safeware_ by Nancy Leveson looked at several software-related disasters. Only one disaster, the Therac radiation machine that fried several patients, was the result of actual bugs (and those bugs were race conditions). The rest consisted of software obediently and disastrously doing exactly what it was supposed to do, like the black lab at #7 in.
If you build safety-critical software be sure to have some organized way to flush out what-if questions and hidden assumptions.
Car Whores? (Score:2, Funny)
public class interfaces (Score:2, Interesting)
Re:public class interfaces (Score:5, Insightful)
me neither, me too...
my understanding is that the contract has hard requirements on specific input and specific output for results, all of which are defined prior to executing that code. something like "we require an incoming integer with a value that is between zero and fifteen. we guarantee that an integer value will be returned that is either zero or one"
with a public class interface you can write a piece of code that does this, but it won't guarantee anything. it's up to the developer to exhaustively test all situations and make sure that it happens. in a contract based language, i would guess that the program either won't compile, won't run, or will fail in obvious ways in the development stage if the requirements are not met. i'm not sure how they handle requirements that aren't met.
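In a language without built-in contract support, the behaviour described above (an incoming integer between zero and fifteen, a result of zero or one) can be approximated with runtime checks. This is only a sketch; the decorator and function names are invented for illustration:

```python
def contract(pre, post):
    """Wrap a function so its pre/postconditions are checked on every call."""
    def decorate(fn):
        def wrapper(*args):
            assert pre(*args), "precondition violated"
            result = fn(*args)
            assert post(result), "postcondition violated"
            return result
        return wrapper
    return decorate

@contract(pre=lambda x: isinstance(x, int) and 0 <= x <= 15,
          post=lambda r: r in (0, 1))
def is_odd(x):
    # The body can be anything; the contract constrains what goes in and out.
    return x % 2
```

Calling is_odd(16) raises an AssertionError at the call boundary, which is roughly what Eiffel-style runtime contracts do; the SPARK difference is that such a call is rejected statically, before the program ever runs.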
Re:public class interfaces (Score:2, Informative)
or a lot of other languages with data validation written at the beginning and end of each method
Re:public class interfaces (Score:5, Informative)
Anyhow, DBC is totally distinct from object orientation. In DBC, each component in your software comes with a "contract" that states "if I am called when the _preconditions_ are true, I promise that after I run the _postconditions_ will be true."
The preconditions and postconditions are a group of logical statements, hopefully ones which are useful to your program
Let me give a little example.
function: sqrt( x )
preconditions:
- integer (x)
- positive (x)
postconditions:
- result > 0
- result * result <= x
Do you see what's happening there? Without knowing
Adding in object orientation support to DBC is a little more complex, but I won't go into that unless asked.
Traditional DBC systems, including Eiffel, couldn't verify your contracts, so most of them would translate the contracts into code, and include that code in the executable; if a contract failed, the code would throw an exception or otherwise fail. SPARK is interesting because it can detect contract failures without running the code; it can also detect when your contracts fail to promise enough.
-Billy
Re:public class interfaces (Score:2)
Proving sqrt() correctness? (Score:2)
Re:Proving sqrt() correctness? (Score:4, Informative)
i.e. Y, your sqrt, is no more than X when squared, but increase it by 1 and it is more than X. You require X to be non-negative.
Assuming that your implementation implements an initial guess at Y and then repeatedly increments it, you would specify a loop invariant that shows that your guess at Y (say 'Z') is such that Z*Z <= X, with the loop exiting once (Z+1)*(Z+1) > X. For more information on what's practicable in a customer-specified system, read the peer-reviewed publications...
Disclaimer: SPARK hacker for 6 years
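A runnable sketch of that specification, in Python rather than SPARK; the asserts here stand in for the proof obligations a SPARK tool would discharge statically:

```python
def int_sqrt(x):
    # Precondition from the spec: X must be non-negative.
    assert x >= 0
    z = 0
    # Loop invariant: z * z <= x holds on every iteration.
    while (z + 1) * (z + 1) <= x:
        z += 1
    # Postcondition: Y squared is no more than X, but (Y+1) squared exceeds X.
    assert z * z <= x < (z + 1) * (z + 1)
    return z
```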
Re:public class interfaces (Score:3, Insightful)
Anyway, I see nothing new in DBC in SPARK not already in Eiffel.
Re:public class interfaces (Score:2)
I also wasn't talking about Eiffel's support for DBC; I was describing DBC in general, or more accurately DBC without OO support. I didn't claim that OO isn't possible with DBC (on the contrary, I stated that it was, and I would explain it if asked). I didn't claim that Eiffel didn't have OO.
And finally, the huge thing that's new in SPARK that isn't in Eiffel i
Re:public class interfaces (Score:2, Insightful)
The code gets written and it turns out to have bugs anyway. We go back to examine everything and notice missing details in the contracts. We fix those problems, the code gets re-written and the cycle continues. In the end we've not achieved much in the way of program correctness, efficie
Re:public class interfaces (Score:2)
The rest of your post absolutely requires the absence of a tool like SPARK. Throw SPARK into the mix, and all your grounding assumptions are absolutely wrong -- the specifications _do_ describe the interfaces, and the interfaces _do_ work together, and the code _does_ implement the specifications, and there are no possible exceptions.
Trivial examples a
Re:public class interfaces (Score:3, Interesting)
int mod(int num, int den)
This lets you add a precondition such as
den != 0
(or specify what should happen when den = 0). Then the compiler could automatically write the check for division by 0, and could optimize it out in cases when it can stat
What horseshit (Score:4, Insightful)
Now, is there a language to ensure that your boss asks you to program the right thing?
Re:What horseshit (Score:2)
Never underestimate a BOFH.
References to the story (Score:5, Informative)
Here [praxis-cs.co.uk] is a PDF that contains sample chapters of the book reviewed.
Also from the same site is the following text and links for those of you wanting "real world examples":
"Industrial Experience with SPARK [praxis-cs.co.uk] (PDF 234kb) Dr. Roderick Chapman, Praxis Critical Systems Limted. Presented at ACM SigAda 2000 conference. This paper discusses three large, real-world projects (C130J, SHOLIS and the MULTOS CA) where SPARK has made a contribution to meeting stringent software engineering standards. "
"no obvious deficiencies" (Score:5, Funny)
Wrong order - first test, then code (Score:5, Insightful)
Eurofighter (Score:5, Informative)
SPARK is used heavily in the safety critical software in the Eurofighter amongst other projects. It is a complete pain to type all of the annotation, takes forever to run the tool and it very rarely comes up with any real problems in the code. I would pay good money never to have to go near it again. It was used to meet contractual requirements, not engineering requirements.
One neat trick is to generate a large proportion of the annotation from the output error messages. Sort of defeats using the tool though but since it doesn't find much anyway the time freed up can be used to do some real testing.
Re:Eurofighter (Score:4, Interesting)
> requirements, not engineering requirements.
There be dragons.
> One neat trick is to generate a large
> proportion of the annotation from the
> output error messages
That's classic. It makes sense, though - kind of like running a code reformatter rather than running a "code format checker". Every night, the code gets reformatted to meet the style guide... no nagging emails, just silent enforcement.
Re:Eurofighter (Score:2, Interesting)
Re:Eurofighter (Score:2)
In order to really ensure that the code does the right thing, the contract has to be about as detailed as the code itself. This means that you end up writing the same thing twice, with possibilities for errors in both copies.
If you relax the contract to only partially describe what needs to happen, you create opportunities for bugs to go undetected, and lose the certainty the contracts were meant to provide.
Re:Eurofighter (Score:2, Interesting)
Re:Eurofighter (Score:5, Insightful)
This type of unprofessional crap is the reason people have such low expectations of software. You didn't want to use the tool because it was a "pain to type"?! If the length of time it takes you to type your code is a bottleneck then you're not doing enough thinking before you type. The extra effort required to type more verbose code is close to zero. You're coming across like an aeronautical engineer would if they tightened a critical bolt to only 90% of the required torque because it was less effort.
Saying it "very rarely comes up with any real problems" means it found some, and those problems may easily have been overlooked by other types of testing. And what problems wouldn't be "real" in safety critical code?! Yes, there are other tools besides SPARK that could have been used but the principles should have been the same.
Don't ever forget you're talking about a serious piece of hardware and there's a human being sitting in the pointy end. If I was the pilot of something that had a bug in its safety critical software because of your lack of pride I would kick your ass.
Re:Eurofighter (Score:2, Interesting)
I think people have such low expectations of software because for the most part, software doesn't meet their expectations, and the expectations people have of software are often unrealistic. Software is like everything else - built with the trade off of cost versus utility.
Re:Eurofighter (Score:2)
You didn't want to use the tool because it was a "pain to type"?!
Boilerplate code, near cut-and-paste code, and highly verbose code produce more errors, and induce programmer fatigue.
In addition, they make the code harder to understand at first glance.
That's quite important. Much is made of how unreadable Perl can be, but sitting down and trying to figure out the logic and nuances behind a bit of C code is often much, much worse. This is because the C is usually more verbose to do the same thing.
Re:Eurofighter (Score:5, Interesting)
Re:Eurofighter (Score:2, Interesting)
Lockheed has done some really cool things over the years, but I just don't buy this. If they could positively identify the defect rates of these programs, they could just get rid of the bugs in the first place, in the SPARK projects *and* in the C projects. It's more likely they've got some sort of automated checker that catches exactly the same sort of thing that SPARK itself does.
Really, it looks like the SPARK program basically
Re:Eurofighter (Score:2)
And
VLISP sounds much more interesting (Score:2)
Why not use a language that's smart enough to prove code written in a useful language, not just a toy?
Ok... (Score:3)
Re:Ok... (Score:4, Informative)
I would have been interested if all this instrumentation had been grafted onto a language like Java, or C++. But to have to switch to Ada just to be able to add in instrumentation that helps in code analysis?
Switching languages is a tiny effort compared to the change required to design your code for static validation. The SPARK people strongly recommend against trying to "switch" to SPARK; if you want the benefits, you have to code with it from the start. It's kind of like taking a 100,000 line C program written by 30 programmers over 10 years and trying to "switch" it to C++ -- it's theoretically possible, but in practice it's easier to start over.
It's also funny that he WOW's at the idea of no dynamic memory allocation...
I felt that way too
The reason they did it is simple, though -- they wanted to be able to set absolute bounds on when a SPARK program will or will not fail (throw an exception). There's no way to do that with dynamic memory allocation as it's defined in Ada and most other languages.
Yes, that's limiting; no argument. But for some problems, particularly ones solvable by programs managing their own memory, the limitation doesn't matter compared to the benefits -- a SPARK program can execute without any runtime support code.
Why not just use a type safe language?
No such thing -- type safety is an uncomputable problem.
If you meant strongly typed, that's easy; Ada was already strongly typed. SPARK just guarantees that the programs will always run the same, and SPARK's verifier guarantees that the types are chosen and described correctly.
-Billy
Re:Ok... (Score:3, Insightful).
I'm not sure what you meant by formal specification here. As I recal
Re:Ok... (Score:2)
You're right that the phrase "formal specification" was a bad choice on my part. Ada doesn't have an official formal specification (although SPARK does); what Ada has is a highly formal official specification, automatically verified by an extensive
Re:Ok... (Score:2, Funny)
Yes, I admit it! It's true that I have not kept up with developments in Ada. I'm still scarred by the horror inflicted by the original version. And the trauma produced by the Ada design philosophy, which produced languages like VHDL.
OK, so they added objects, interfaces and other wonders of modern languages. But it still does not change the fact that Ada is not exactly in the main stream. To continue my walk out on the limbs of issues with which I am only shallowly familiar, I'll speculate that very few
Re:Ok... (Score:2, Interesting)
Theorem proving languages (Score:2, Informative)
Alloy's cool because you can use it to model code at a very abstract, high level (much like SPARK, it seems), although with Alloy you aren't tied to any specific language. The downside is that since the model isn't embedded in the code
Re:Theorem proving languages (Score:3, Insightful)
This industry's ability to continually re-invent the wheel never ceases to amaze me.
Let's go back to lisp and smalltalk every frikken language since then is just a rewrite of one or the other anyway.
Re:Theorem proving languages (Score:2)
Alloy lets you model a system (Alloy code isn't executable), and statically prove properties about the model. Once you verify the model, you prove that it's correct for all possible runs (but you still have to implement it correctly).
Programming by Contract? (Score:4, Insightful)
This seems rather a waste of time. You either first describe exactly what the code does, then write the code, or you write a simplification of what the code does, then the code.
In the first case, you write the exact same thing twice, in different languages. That sounds like an immense waste of time to me.
In the second case, your specification does not cover every aspect, which introduces loopholes, defeating the purpose of the contract.
In either case, you get in trouble if there are errors in the contract.
Re:Programming by Contract? (Score:2, Interesting)
For example, will this work with your favorite sorting algorithm? Presumably all sorting algorithms for sets drawn from a given domain will have the same pre and post conditions, but very different algorithms.
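A sketch of that observation: every sorting algorithm over such a domain shares the same postconditions (ordered output, same multiset of elements), so a single checker covers all of them while saying nothing about the algorithm used. The helper name here is invented:

```python
from collections import Counter

def check_sort(sort_fn, xs):
    """Run sort_fn and verify the postconditions shared by all sorting algorithms."""
    ys = sort_fn(xs)
    # Postcondition 1: the output is ordered.
    assert all(a <= b for a, b in zip(ys, ys[1:])), "output not ordered"
    # Postcondition 2: the output is a permutation of the input.
    assert Counter(ys) == Counter(xs), "output not a permutation of input"
    return ys
```

check_sort(sorted, [3, 1, 2]) passes; swapping in any other correct sort routine should pass the same checks, even though the implementations differ completely.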
Re:Programming by Contract? (Score:2)
It uses acl2 [utexas.edu], a lisp language based prover.
Re:Programming by Contract? (Score:4, Interesting)
Re:Programming by Contract? (Score:2)
And that's exactly where the loopholes are. In functional programming, all there is to a function is what goes in and what comes out. However, SPARK being a subset of Ada, I think we can safely assume there will be lots of side effects, and reliance on side effects. This makes specifying only pre- and postcondi
Re:Programming by Contract? (Score:4, Interesting)
It's not a waste of time to describe what a function does. It's essential to keep "what" a function does distinct from "how" it does it. That's the whole point of interface versus implementation.
Consider a function with the following contract: Now, can you see how that's useful? And do you see that this tells you something _completely_ different than what you'd know if you read the actual source code for that function (perhaps an implementation of Newton's method)?
In the second case, your specification does not cover every aspect, which introduces loopholes, defeating the purpose of the contract.
That's what SPARK's automatic verifier is for -- to prove that there are no loopholes.
-Billy
Re:Programming by Contract? (Score:3, Informative)
You could, in theory, use assertions to do design by contract, but you'd have to be very careful to put your assertions in the right places, make sure that none of them have side effects, and manually do a few other things that a DbC language automatically takes care of for you. It'd be risky, but it's possible. It becomes MUCH harder when you add object orientation, though; managin
Re:Programming by Contract? (Score:2)
That's like saying disk mirroring and ECC is a waste of time.
The value of writing things twice or thrice is when it turns out it's not the same thing after all. Then you know you have a problem (whether you can fix it there and then is a different problem).
If it's significantly easier to write one of the "checks" that'll be rather useful.
Free download of a similar system for Java (Score:5, Insightful)
The best available modern system for formal verification is the Extended Static Checking system for Java [hp.com] developed at DEC SRC. This was developed at DEC before HP shut down that research operation. It's still available as a free download [compaq.com].
What all this machinery does is put teeth into "design by contract". With systems like this, you can tell if a function implements its contract, and you can tell if a caller complies with the contract of each thing they call. Before running the program.
Developing in this mode means spending forever getting rid of the static analysis errors. Then, the program usually just runs. That's exactly what you want for embedded systems. But it's painful for low-grade programming like web site development, where "cosmetic errors" are tolerable and time-to-market matters more than correctness.
Re:GNU Nana for C++ (Score:2)
No, he's just the publicity guru of them. He came late to that technology. C.A.R. Hoare is probably the "father", back in the 1970s.
Re:Free download of a similar system for Java (Score:2)
If you're involved in that, you might want to look into how we avoided the "false axiom" problem that makes ESC/Java unsound. ESC/Java has an Oppen/Nelson type prover. So did we. But we also used the Boyer-Moore theorem prover, which understands recursion, to prove more difficult theorems. This removed the need for users to add "axioms".
There's so much that could be done in this area. We really do know in theory how to eliminate bugs. Somebody needs to make
Sounds like Resolve (Score:3, Interesting)
The basic idea was that they added a whole ton of syntactic sugar to C++ (not by structured comments, but by adding a bunch of key words that were #defined into nothing). I'm curious if this is related to that work at all. (At the time I was convinced that it was total crap, but several years of experience have shown me what they were trying to accomplish, if poorly.)
Why not make the tester the compiler? (Score:3, Interesting)
Re:Why not make the tester the compiler? (Score:3, Informative)
> to get both the test and the code that
> passes the test!
This came up on the Extreme Programming list a while back. I think the Java IDE IDEA [jetbrains.com] does something like this, in that you can write a test and it'll generate the source code for the method signatures that you're trying to test. Then you fill in the implementation. *Disclaimer - I haven't used that feature so I don't know how well it works*
One problem with this, though, is that code can pass a test but s
Software deserves more respect (Score:4, Interesting)
Since so much of what we depend on these days is powered by software, I can't help but feel that industrial software development should be taken under the wing of Engineering. Why, you say? Well, professional fields like medicine, law, and engineering associate a duty to public safety with the job, and the regulatory bodies for the professions ensure that individuals who practice irresponsibly will lose their professional status.
There is no such accountability for software development. Look at Microsoft Windows, that our banks and governments rely upon! I think such a product would be much higher quality if the coders working on it were professionals and had to adhere to Codes; violating their professional duties would mean severe personal consequences. And the firm itself (Microsoft) would be legally liable if it produced a shoddy, dangerous product!
Re:Software deserves more respect (Score:2, Insightful)
Secondly, we have the entire legal syst
Re:Software deserves more respect (Score:2)
UNIX:
runs databases (commercial)
runs manufacturing equipment (industrial)
runs handheld portable units (specialized) and I've seen it used on a SPARC box as a controller for BAS (big ass switches)
and it is currently easy to get a copy of Linux, BSD or Solaris running in your bedroom.
Your arguments do not hold water.
Re:Software deserves more respect (Score:2)
Re:Software deserves more respect (Score:2)
Re:Software deserves more respect (Score:2)
Delphi Assert (Score:2)
1. Global vars: BAD.
2. Borland Delphi has had some of this for while with the assert [about.com] function.
It's basically a way of making sure that all the things that can't go wrong actually don't.
Re:Delphi Assert (Score:2)
I believe the instrumentation shown in the article is intended to be read in by an analysis tool, which should, in theory, find errors/inefficiencies that you as a programmer may not have noticed, and hence wouldn't have represented within asserts. But, as has been mentioned before in another comment, such errors are rare enough to not justify having to migrate to Ada and write so much extra instrumentation.
using ada is enough (Score:5, Interesting)
For example, Ada already had constrained types (x
The Ada compiler checks a lot of things during compile time that I've never seen before.
constraint errors are exceptions in ada (Score:2)
In our Ada code constraint errors are handled by the exception handling. Also note that a lot of our messages/values are coming from external hardware, so we have little control over what we're getting, so we read the message into unconstrained types and then convert. We test each potential incoming message against max/min theoretically possible values. So we think we have most of it handled.
I'm not saying I'm against the idea of SPARK, it just seems to duplicate all
Ummmmm... (Score:4, Interesting)
Basically the only excuse you could possibly have for writing something in SPARK is extremely critical code (ie, if it fails, many people die). Even then I'd be skeptical it would provide much benefit, but at least it would provide some ass-covering ability.
For an alternative view of the practicality of correctness proofs, see chapter 4 [cypherpunks.to] of Peter Gutmann's thesis. IIRC there was a book review of it on
"No programming language can save you from yourself."
- Me
Re:Ummmmm... (Score:3, Insightful)
My Dad hand-patches microcode on 60s-era safety systems in a chemical plant for a living. It's pretty intense.
Perl is pretty far from a B&D language, but I'd sure hate to see an autopilot written in perl, no matter how productive or satisfied it made the coder.
The happiness of the coder is not really the issue. If we could get safe, secure, reliable software by coding them in restrictiv
Re:Ummmmm... (Score:2)
I'm sorry, but that statement is factually incorrect.
Pratt & Whitney collected metrics on jet engine controller software devel
Re:Ummmmm... (Score:2)
The military controllers were all in Ada, the civilian engine controllers were in various things, with C/C++ heavily represented. The team capabilities were about equal across the board. After they crunched the data down, they discovered that Ada was giving them twice the programmer productivity and 1/4 the defect density.
On an offtopic note, I remember a study comparing Fortran 77 and C in a scientific/techical setting, and the conclusion of that was that Fortran codes had 1/2 the defects of comparable
Newspeak (Score:2, Interesting)
SPARK seems to be an extreme example. Though I've never used it, I venture to guess that in a quixotic effort to avoid all bugs SPARK only buries real bugs underneath a mountain of its own pedantry.
Eiffel and Sather (Score:2, Informative)
I needed this six months ago (Score:4, Funny)
IMHO the only way to go (Score:2, Insightful)
"The true measure of a good coder is not how complex his code is, but how simple."
Today's software systems become bigger, bigger, and bigger. Maybe single components are simplified, debugged or optimised, but not the system as a whole. The result we see today: in many systems, a single slip in one place can screw up the entire system.
IMHO the logical way to combat this, would be to design software using methods that can
Re:IMHO the only way to go (Score:3, Informative)
Proving non-trivial programs correct is nearly always intractable, if not strictly speaking impossible. We simply don't have the computing power to do it for larger programs, and (IMO) we probably never will.
Re: the limits of proof (Score:2)
To expand on the point made above...
Let us assume that we have a magic proof system that will prove that our software matches our specification.
So what do we use to define the specification? Certainly not a natural language like English. Natural languages are full of ambiguity. So we'll use a formal specification language. However, such a language is basically like a programming language, perhaps supporting more mathematical formalism (maybe single assignment). While our formal proof system can prove
Homeland Security (Score:2, Insightful)
Knuth said it best... (Score:3, Interesting)
Agile methods (Score:3, Insightful)
I have to disagree here, the agile methodologies I'm aware of stress automated unit testing to ensure the code that follows meets the specifications. They are agile because the "contract" enforced by the unit tests allow you to see what you have broken easily after a change. If your unit tests pass then you've either not broken anything or your test coverage is insufficient. It seems that these SPARK "tags" have some of the benefits and all the problems that a good suite of automated unit tests provides.
I do however like the idea that your assumptions and dependencies are explicitly mentioned nearby where they occur. These are things that definitely sting you, especially in code you are new to (written by someone else.) All the little interdependencies and unexpected side-effects that make their way into code can really make life difficult sometimes.
I have a feeling though that this would take discipline, and if all team members were skilled and disciplined then you would likely have many of these things stated anyway.
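As a toy illustration of the tests-as-contract idea described above (the clamp function and its bounds are invented for the example):

```python
import unittest

def clamp(x, lo, hi):
    """Constrain x to the closed interval [lo, hi]."""
    return max(lo, min(hi, x))

class ClampContract(unittest.TestCase):
    # These tests act as the contract: change clamp()'s behaviour and they fail.
    def test_inside_range_is_unchanged(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_result_always_within_bounds(self):
        for x in (-99, 0, 3, 10, 99):
            self.assertTrue(0 <= clamp(x, 0, 10) <= 10)
```

Run with python -m unittest; a failing case shows immediately what a change broke, which is the "agile" safety net being described.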
Oracle PL/SQL packages do something similar (Score:2)
This enables procedures to be public in scope (present in both PACKAGE and PACKAGE BODY) or private in scope (present in PACKAGE BODY only). Other elements, such as user-defined data types, constants and cursor definitions, can be part
25 years after the first type inferencing systems. (Score:2)
Re:Just a toy, or what? (Score:2, Informative)
Re:Another ADA proselytizer (Score:2)
Re:Check out D (Score:2, Interesting)
Yes D is interesting. Only, like Eiffel, it concentrates only on procedural contracts and lacks type contracts.
SPARK, being based on Ada, does have type contracts:
type Day_Of_Month is range 1 .. 31;
BTW: The example won't work. It does not take into account the fact that math.sqrt(x) only calculates an approximation - which is truncated to long. Correct examples have been posted before - by SPARK hackers.
It is not a good sign that the D developers made such an obvious mistake.
With Regards
Martin | https://developers.slashdot.org/story/04/05/19/190235/high-integrity-software | CC-MAIN-2016-44 | refinedweb | 5,509 | 62.68 |
bootalchemy 0.4.1
A package to create database entries from yaml using sqlalchemy.BootAlchemy
=============
BootAlchemy is a tool which allows you to load data into an SQL
database via yaml-formatted text. You provide bootalchemy with set
of mapped objects, and some text, and it will push objects
with that text into the database. In addition to all of the functionality
YAML provides, BootAlchemy can also de-obfuscate relationships and
add those to the database as well.
Current Version
------------------
|version|
Requirements
---------------
* SqlAlchemy>=0.5
* PyYaml
Getting Started With BootAlchemy
---------------------------------
Let us first consider this model, assume it is defined in a module called "model"::

    from sqlalchemy import Column, Integer, String, Date, ForeignKey
    from sqlalchemy.orm import relation
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class Genre(Base):
        __tablename__ = 'genre'
        genre_id = Column(Integer, primary_key=True)
        name = Column(String(100))
        description = Column(String(300))

    class Movie(Base):
        __tablename__ = 'movie'
        movie_id = Column(Integer, primary_key=True)
        title = Column(String(100))
        description = Column(String(300))
        release_date = Column(Date)
        genre_id = Column(Integer, ForeignKey('genre.genre_id'))
        genre = relation(Genre, backref='movies')
Simple Example
----------------
First let's explore the structure used to push data into the database. We
will use plain python to load in the data::
from bootalchemy.loader import Loader
data = [{'Genre':[{'name': "action",
'description':'Car chases, guns and violence.'
}
]
}
]
loader = Loader(model)
loader.from_list(session, data)
genres = session.query(Genre).all()
print [(genre.name, genre.description) for genre in genres]
produces::
[('action', 'Car chases, guns and violence.')]
Note that while the data is in the session, it has not yet been committed
to the database. BootAlchemy does not commit by default but can be
made to do so.
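A sketch of doing that explicitly, reusing the loader, session, and data shapes from the example above (the helper name is invented):

```python
def load_and_commit(loader, session, data):
    # Push the objects into the session...
    loader.from_list(session, data)
    # ...then persist them; nothing reaches the database until this call.
    session.commit()

data = [{'Genre': [{'name': 'action',
                    'description': 'Car chases, guns and violence.'}]}]
```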
The BootAlchemy Data Structure
-----------------------------------
The basic structure of a BootAlchemy data structure is like this::
[
  { #start of the first grouping
    ObjectName:[ #start of objects of type ObjectName
      {'attribute':'value', 'attribute':'value' ... more attributes},
      {'attribute':'value', 'attribute':'value' ... more attributes},
      ...
    ], #end of objects of type ObjectName
    ObjectName: [ ... more attr dicts here ... ],
    [commit:None] #optionally commit at the end of this grouping
    [flush:None] #optionally flush at the end of this grouping
  }, #end of first grouping
  { #start of the next grouping
    ...
  } #end of the next grouping
]
The basic structure is a list of dictionaries. Each dictionary represents a "group" of objects.
Each object can have one or more records associated with it.
Flushing and Committing
------------------------
If you provide keys with the name commit and flush to the grouping, the session will
be committed or flushed accordingly. One thing to note is that if you define any
relationships within a record, the grouping will be flushed at that point.
There is no way to avoid this flush.
About Your Model
------------------
BootAlchemy expects that your models have the ability to pass a set of keyword pairs into
your objects. DeclarativeBase does this automatically, but if you have the standard SqlAlchemy object
definitions, you may want to augment them with a superclass that looks something like this::)
Storing References (think Autoincrement)
-----------------------------------------
You can store references within your records and then use them later. For instance, let's
store the genre_id, and use it in a movie define.::
data = [{'Genre':[{'genre_id':'&scifi_id',
'name': "sci-fi",
'description':'Science Fiction, see: 42'
}
],
'flush':None},
{'Movie':[{"title": "Back to the Future",
"description": "In 1985, Doc Brown invents time travel; in 1955,\
Marty McFly accidentally prevents his parents from\
meeting, putting his own existence at stake",
"release_date": "1985-04-03",
"genre_id": '*scifi_id'}],
'flush':None
}]
loader.from_list(session, data)
movies = session.query(Movie).all()
print [(movie.title, movie.genre.name) for movie in movies]
produces::
[('Back to the Future', 'sci-fi')]
If you provide a string with a '&' as one of the attribute values,
boot alchemy will store the value of this item in a reference dictionary. This is then
retrieved when you provide a string starting with '*'. The reference is set after the
object is flushed to the database, which means that if you have an auto-incrementing
field, it will be set to the incremented value.
Notice that the genre was populated within the movie object. This is more of an affect of the
ORM than of bootalchemy, but we will see next how boot alchemy itself takes advantage of the
inner workings of the orm.
Relationships
----------------
Since we have an object mapping to tables, and not just tables in our database, we cann
assign actual objects to the reference dictionary, not just id's. Here is another
way to assign the genre to our movie::
data = [{'Genre':[{'&comedy':{'name': "comedy",
'description':"Don't you _like_ to laugh?"
}}
],
'flush':None},
{'Movie':[{"description": '"Dude" Lebowski, mistaken for a millionaire Lebowski,\
seeks restitution for his ruined rug and enlists his \
bowling buddies to help get it.',
"title": "The Big Lebowski",
"release_date": "1998-03-06",
"genre": "*comedy"}],
'flush':None
}]
loader.from_list(session, data)
movies = session.query(Movie).all()
print [(movie.title, movie.genre.name) for movie in movies]
produces::
#[('Back to the Future', 'sci-fi'), ('The Big Lebowski', 'comedy')]
This also works for one-to-many and many-to-many relationships. If you profide a list of
objects, bootalchemy will retrieve them from the reference dictionary and attach them
to the proper attribute of your object. Lets assign some directors to a movie.::
data = [{'Director':[{'&andy':{'name': "Andy Wachowski"}},
{'&larry':{'name': "Larry Wachowski"}}
],
'flush':None},
{'Movie':[{"description": "A computer hacker learns from mysterious rebels\
about the true nature of his reality and his\
role in the war against the controllers of it.",
"title": "The Matrix",
"release_date": "1999-03-31",
"directors": ['*andy', '*larry'], "genre_id": "*scifi_id"}],
'flush':None
}]
loader.from_list(session, data)
movies = session.query(Movie).all()
print [(movie.title, [d.name for d in movie.directors]) for movie in movies]
produces::
[('Back to the Future', []), ('The Big Lebowski', []), ('The Matrix', ['Andy Wachowski', 'Larry Wachowski'])]
Yaml
---------
BootAlchemy has a very simple data structure because we wanted it to work with YAML. You can easily
provide a yaml string to BootAlchemy for parsing. Yaml has the benefit that it is a standard that
non-python developers can follow, and has a large set of functionality outside of what you can
do with simple strings in dictionaries. Take a look at the spec: . Here is
an example Yaml string loaded into the database with bootalchemy::
from bootalchemy.loader import YamlLoader
data = """
- Movie:
- description: An office employee and a soap salesman build a global organization to help vent male aggression.,
title: Fight Club,
release_date: 1999-10-14
genre: "*action"
flush:
"""
action = session.query(Genre).filterGenre.name=='action').first()
loader = YamlLoader(model, references={'action':action})
loader.loads(session, data)
movies = session.query(Movie).all()
print [(movie.title, movie.genre.name) for movie in movies]
produces::
[('Back to the Future', 'sci-fi'), ('The Big Lebowski', 'comedy'), ('The Matrix', 'sci-fi'), ('Fight Club,', 'action')]
Notice too that we supplied existing references into this loader since it did not have them from the previous runs.
As a python programmer, you might find yaml pretty refreshing. It has simple syntax, rewards brevity, and is sensitive
to indentation. In many ways it is nicer to set data up within than Python, as many of the quotes have been eliminated.
PyYaml supplies readable debug output in case your yaml syntax is incorect. Here is an example where a stray "}" has been
left at the end of the genre line::
yaml.parser.ParserError: while parsing a block mapping
in "<string>", line 3, column 7:
- description: An office employee ...
^
expected <block end="">, but found '}'
in "<string>", line 6, column 23:
genre: "*action"}
^
:class:`YamlLoader` also provides a loadf function which takes a file name and loads it into the database.
Json!
------
One of the great things about YAML is that JSon is a subset of the specification for Yaml. Often times I find
myself with a bunch of data that I have hand-entered into a database, and I want to replicate that data for some
kind of development process. I can output the database data into Json using my browser and then inject it as a
stream into my bootloader program.
Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
- Downloads (All Versions):
- 25 downloads in the last day
- 379 downloads in the last week
- 1470 downloads in the last month
- Author: Christopher Perkins
- License: MIT
- Package Index Owner: percious
- DOAP record: bootalchemy-0.4.1.xml | https://pypi.python.org/pypi/bootalchemy/0.4.1 | CC-MAIN-2015-14 | refinedweb | 1,305 | 54.52 |
Get it straight from the horse’s mouth: Step one: use Data Binding in Android. Step two: profit 💰. Yigit Boyar and George Mount are Google developers who helped build Android’s Data Binding Library to allow developers to build rich, responsive user experiences with minimal effort. In this talk at the Bay Area Android Dev Group, they demonstrate how using Data Binding can improve your application by removing boilerplate for data-driven UI, allowing you to write cleaner, better code.
Introduction (0:00)
We are George Mount and Yigit Boyar, and we work on the Android UI Toolkit team. We have a lot of information about Data Binding to share with you, and lots of code to go with it. We’ll discuss the important aspects of how Data Binding works, how to integrate it into your app, how it works with other components, and we’ll mention some best practices.
Why Data Binding? (0:44)
You may wonder why we decided to implement this library. Here’s an example of a common use case.
<LinearLayout …>
    <TextView android:id="@+id/name" … />
    <TextView android:id="@+id/lastName" … />
</LinearLayout>
This is an Android UI you see all the time: a bunch of views with IDs. Your designer comes and says, “Okay, let’s try adding new information to this layout,” so whenever you add a new view, you need to tack on another ID. Then you go back to your Java code in order to modify the UI.
private TextView mName;

protected void onCreate(Bundle savedInstanceState) {
    setContentView(R.layout.activity_main);
    mName = (TextView) findViewById(R.id.name);
}

public void updateUI(User user) {
    if (user == null) {
        mName.setText(null);
    } else {
        mName.setText(user.getName());
    }
}
You write a new TextView, you find it from the UI, and you set your logic so that whenever you need to update your user, you have to set the information on the TextView.
private TextView mName;
private TextView mLastName;

protected void onCreate(Bundle savedInstanceState) {
    setContentView(R.layout.activity_main);
    mName = (TextView) findViewById(R.id.name);
    mLastName = (TextView) findViewById(R.id.lastName);
}

public void updateUI(User user) {
    if (user == null) {
        mName.setText(null);
        mLastName.setText(null);
    } else {
        mName.setText(user.getName());
        mLastName.setText(user.getLastName());
    }
}
All in all, that is a lot of things you have to do just to add one view to your UI. It seems like too much stupid boilerplate code that doesn’t require any brainpower.
There are already some really nice libraries that make this easier and more solid. For example, if you use ButterKnife, you can get rid of those ugly findViewById calls, making the code much easier to read: you delete the boilerplate and tell ButterKnife to bind the views for you.
@Bind(R.id.name) TextView mName;
@Bind(R.id.lastName) TextView mLastName;

protected void onCreate(Bundle savedInstanceState) {
    setContentView(R.layout.activity_main);
    ButterKnife.bind(this);
}

public void updateUI(User user) {
    if (user == null) {
        mName.setText(null);
        mLastName.setText(null);
    } else {
        mName.setText(user.getName());
        mLastName.setText(user.getLastName());
    }
}
It’s a good step forward, but we can go one step further. We can say “Okay, why do I need to create items for these? Something can just generate it. I have a layout file, I have ID’s.” So you can use Holdr, which does that for you. It processes your files and then creates views for them. You initiate from Holdr, which converts the IDs you entered into field names.
private Holdr_ActivityMain holder;

protected void onCreate(Bundle savedInstanceState) {
    setContentView(R.layout.activity_main);
    holder = new Holdr_ActivityMain(findViewById(R.id.content));
}

public void updateUI(User user) {
    if (user == null) {
        holder.name.setText(null);
        holder.lastName.setText(null);
    } else {
        holder.name.setText(user.getName());
        holder.lastName.setText(user.getLastName());
    }
}
This is better again, but there’s still something unnecessary in this code. There’s a huge part that I never touched, where I was unable to reduce the amount of code. It’s all very simple code, too: I have a user object, I just want to move the data inside of this object to the view class. How many times have you made a mistake when you see code like this? You remember to change one thing, but forget to change another, and end up with a crash on production. This is the part we want to focus on: we want to get through all the boilerplate code.
When you use Data Binding, it’s very similar to using Holdr, but you have to do a lot less work. Data Binding figures the rest out.
private ActivityMainBinding mBinding;

protected void onCreate(Bundle savedInstanceState) {
    mBinding = DataBindingUtil.setContentView(this, R.layout.activity_main);
}

public void updateUI(User user) {
    mBinding.setUser(user);
}
Behind the Scenes (3:53)
How does Data Binding work behind the scenes? Take a look at the layout file from before:
<LinearLayout …>
    <TextView android:id="@+id/name" … />
    <TextView android:id="@+id/lastName" … />
</LinearLayout>
I have these IDs, but why do I need them if I could find them back in my Java code? I actually don’t need them anymore, so I can get rid of them. In their place, I put the most obvious thing I want to display.
<LinearLayout …>
    <TextView android:text="@{user.name}" … />
    <TextView android:text="@{user.lastName}" … />
</LinearLayout>
Now, when I look at this layout file, I know what the TextView shows. It has become very obvious, so I don’t need to go back to read my Java code. We designed the Data Binding library in a way that didn’t include any magic that wasn’t easy to explain. If you are using something in your layout file, you need to tell Data Binding what it is. You simply say, “We are labeling this layout file with this type of user, and now we are going to find it.” If your designer asks you to add another view, you simply add one more line and show your new view, with no other code changes.
<layout>
    <data>
        <variable name="user" type="com.android.example.User"/>
    </data>
    <LinearLayout …>
        <TextView android:text="@{user.name}" … />
        <TextView android:text="@{user.lastName}" … />
        <TextView android:text='@{"" + user.age}' … />
    </LinearLayout>
</layout>
It’s also really easy to find bugs. You can look at something like the above code and and say, “Oh, look! Empty string plus user.age!” You just set text on the integer, and then bang! We did that many times, it just happens.
But How Does It Work? (5:57)
The first thing the Data Binding library does is process your layout files. By “process,” I mean that it goes into your layout files when your application is being compiled, finds everything related to Data Binding, grabs that information, and deletes it. We delete it because the view system doesn’t know about it, so it has to disappear.
The second step is to parse these expressions by running it through a grammar. For example, in this case:
<TextView android:visibility="@{user.isAdmin ? View.VISIBLE : View.GONE}" … />
The user is an ID, the View is an ID, and the other View is an ID. They’re identifiers, like real objects, but at this point we don’t really know what they are yet. The other things are VISIBLE or GONE. There is field access, and the whole thing’s a ternary. That’s what we have understood so far. We parse things from a file, and understand what’s inside.
The third step is resolving dependencies, which happens when your code is being compiled. In this step, for example, we look at user.isAdmin and figure out what it means. We think, “Okay, this method returns a boolean inside that user class, so I know this expression evaluates to a boolean at run time.”
The final step is writing data binders. We write the classes that YOU don’t need to write anymore. In short, final step: profit 💰
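To make “writing data binders” concrete, here is a toy, plain-Java sketch of what one of those written-for-you classes roughly does. This is NOT the real generated code: TextView and all class names here are stand-ins so the idea can run outside Android.

```java
// Stand-in TextView so the sketch runs outside Android.
class TextView {
    private String text;
    void setText(String text) { this.text = text; }
    String getText() { return text; }
}

class User {
    final String name;
    final String lastName;
    User(String name, String lastName) { this.name = name; this.lastName = lastName; }
}

// Roughly what a generated binder does: hold the bound views, hold the
// variables, and push data into the views in executeBindings().
class ActivityMainBindingSketch {
    final TextView name = new TextView();
    final TextView lastName = new TextView();
    private User user;

    void setUser(User user) {
        this.user = user;
        executeBindings(); // the real class defers this to the next animation frame
    }

    void executeBindings() {
        name.setText(user == null ? null : user.name);
        lastName.setText(user == null ? null : user.lastName);
    }
}
```

The real generated class additionally tracks which variables are dirty, so that only the views affected by a change are touched.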
An Example Case (7:40)
Here is an actual case of a layout file.
<layout xmlns:android="http://schemas.android.com/apk/res/android">
    <data>
        <variable name="user" type="com.android.example.User"/>
    </data>
    <RelativeLayout android:layout_width="match_parent"
                    android:layout_height="match_parent">
        <TextView android:text="@{user.name}" … />
        <TextView android:text="@{user.lastName}" … />
    </RelativeLayout>
</layout>
As we process, we get rid of everything the view system doesn’t know anymore, link them, and put back our binding tags:
<RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
                android:tag="layout/activity_main_0" … >
    <TextView android:tag="binding_1" … />
    <TextView android:tag="binding_2" … />
</RelativeLayout>
This is actually how we make Data Binding backwards compatible. When you put it on a Gingerbread device, the poor guy has no idea what’s going on.
Expression Tree (8:01)
<TextView android:text="@{user.isAdmin ? user.name : @string/anonymous}" … />
Here’s another example expression. When we parse this, it turns into an expression tree which is resolved at compile time. It’s important to note that it happens in the compile time, so that when the application starts running, you already know everything. We check the left side of this expression, and it’s a boolean. We check the right side, and it’s a string. The resource is also a string. So I have a boolean, string, string, ternary, which is also a string. There’s a text attribute and I have a string. How do I set this?
There’s a perfect
setText(CharSequence). Now, Data Binding knows how to turn that expression into Java code. If you go into detail, there’s TextView and ImageView.
<TextView android:text="@{myVariable}" …/>
textView.setText(myVariable);

<ImageView android:src="@{user.image}" …/>
imageView.setSrc(user.image);
ImageView has a src attribute, so would it be correct, as in the above example, to use setSrc? No, because there is no setSrc method on ImageView; there is a setImageResource method inside ImageView instead. But how does Data Binding know about this? The attribute is called android:src, and since you’re used to using that attribute, Data Binding has to support it.
<TextView …/>
textView.setText(myVariable);

<ImageView android:src="@{user.image}" …/>
imageView.setImageResource(user.image);

@BindingMethod(
    type = android.widget.ImageView.class,
    attribute = "android:src",
    method = "setImageResource")
We have these annotations, where you can simply say, “Okay, for the ImageView class, the src attribute maps to this method.” We just write it once; we already did it for the framework classes and we provide it. But you may have custom views that you want to add, and once you add that annotation, Data Binding knows how to resolve this. Again, this all happens at compile time.
Data Binding Goodies (9:54)
Data Binding makes your life a lot easier. Let’s take a look at the expression language that we support, which is mostly Java. It allows things like field access, method calls, parameters, addition, comparisons, index access on arrays, constant access, and even ternary expressions. That’s basically what you want from your Java expressions. There are also a few things it doesn’t do, like new. We really don’t want you to do new in your expressions.
Our basic goal is to make your expressions as short and readable as possible, right in your XML. We don’t want you to have to write super long expressions just to access your contact’s name; we want you to be able to use contact.name. We look at it and figure out: is this a field, or is it a getter? You could even have “name” as a method.
We also do automatic null checks, which is actually really, really cool. If you want to access the friend’s name but contact might be null, how much of a pain in the neck would it be to write contact == null ? null : (contact.friend == null ? null : contact.friend.name)? You don’t want to do that. With Data Binding, if contact is null, the whole expression is just null.
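A plain-Java sketch of what that generated null-safe access roughly amounts to for an expression like contact.friend.name (Contact and Friend here are stand-in classes, not anything from the library):

```java
// Stand-in model classes for the sketch.
class Friend {
    final String name;
    Friend(String name) { this.name = name; }
    String getName() { return name; }
}

class Contact {
    final Friend friend;
    Contact(Friend friend) { this.friend = friend; }
    Friend getFriend() { return friend; }
}

class NullSafe {
    // If any link in the chain is null, the whole expression is null:
    // no NullPointerException, and no hand-written ternary chain.
    static String friendName(Contact contact) {
        Friend friend = contact == null ? null : contact.getFriend();
        return friend == null ? null : friend.getName();
    }
}
```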
We also have the null coalescing operator, which you may have seen from other languages. It’s just a convenient way to do this ternary operator:
contact.lastName ?? contact.name
contact.lastName != null ? contact.lastName : contact.name
It says if the first one is not null, choose the first one. If it is null, then choose the second one.
We also have list access and map access using the bracket operator. If you have contacts[0], contacts could be a list or an array, and it’d be fine. If contactInfo is a map, you can use the bracket notation for that as well. It’s a little easier.
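For example, a couple of illustrative layout fragments (assuming contacts is declared as a list or array variable and contactInfo as a map):

```xml
<TextView android:text="@{contacts[0].name}" … />
<TextView android:text='@{contactInfo["email"]}' … />
```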
Resources (12:20)
We want you to be able use resources in your expressions. What would an expression language be in Android without resources? Now you can use resources and string formatting right in your expressions.
In Expressions:
android:padding="@{isBig ? @dimen/bigPadding : @dimen/smallPadding}"
Inline string formatting:
android:text="@{@string/nameFormat(firstName, lastName)}"
Inline plurals:
android:text="@{@plurals/banana(bananaCount)}"
Automagic Attributes (13:00)
Here we have a DrawerLayout…
<android.support.v4.widget.DrawerLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:scrimColor="@{@color/scrim}">
drawerLayout.setScrimColor(resources.getColor(R.color.scrim));
We have this attribute app:scrimColor. There’s no scrimColor attribute on the DrawerLayout, but there happens to be a setScrimColor method. When we have an attribute with the name scrimColor, we look for a setScrimColor method and check whether the types match. First we look at the color, which is an int. If setScrimColor takes an int, it’s a match. It’s convenient!
Event Handlers (13:41)
I don’t know how many of you have done
clicked using a button or a view, but we also support it here in Data Binding. You can use a
clicked, but now any of the events are supported as well. Of course, this works back to Gingerbread. You can even do things where you have assigned an arbitrary event handler as part of an expression (I’m not saying I recommend you do this, but you can!). You can also do some of the weird listeners, like
onTextChanged. TextWatcher has three methods on it, but everybody only cares about
onTextChanged, right? You can actually access just one of them if you want, or all of them.
<Button android:onClick="clicked" …/>
<Button android:onClick="@{handlers.clicked}" …/>
<Button android:onClick="@{isAdult ? handlers.adultClick : handlers.childClick}" …/>
<Button android:onTextChanged="@{handlers.textChanged}" …/>
Observability in Detail (14:56)
What happens when you update your view? Imagine we have a store, and we have an item whose price has recently changed. This has to automatically update our UI. How does that happen? With Data Binding, it happens really cheaply and easily.
The first thing we have to do is create an item, some kind of observable object. Here, I’ve extended the base observable object, and then we have our fields in there.
public class Item extends BaseObservable {
    private String price;

    @Bindable
    public String getPrice() {
        return this.price;
    }

    public void setPrice(String price) {
        this.price = price;
        notifyPropertyChanged(BR.price);
    }
}
We notify by calling notifyPropertyChanged. But what do we notify as changed? We have to put the @Bindable annotation on getPrice; that generates BR.price, the price field in the BR class. The BR class is like the R class: we generate it for you, and it just sucks in these binding resources. However, you may not want us to invade your whole class hierarchy, so we also allow you to implement the Observable interface yourself. Yes, I hear the has-a vs. is-a people complaining… here is how you implement it yourself.
public class Item implements Observable {
    private PropertyChangeRegistry callbacks = new …

    @Override
    public void addOnPropertyChangedCallback(
            OnPropertyChangedCallback callback) {
        callbacks.add(callback);
    }

    @Override
    public void removeOnPropertyChangedCallback(
            OnPropertyChangedCallback callback) {
        callbacks.remove(callback);
    }
}
We have this convenient class called PropertyChangeRegistry that essentially takes those callbacks and notifies them for you. Some of you might think all of this is just a pain in the neck and want observable fields instead. Essentially, each of these is an observable object with just one entry in it. It’s conveniently set up so that when you access image, it accesses the Drawable content within that field; if you access price, it accesses its String content. The special thing about these objects is that in Java code you have to call the set and get methods, but in your binding expressions you can just say item.price, and we know that we need to call the getter. So when the price changes, you just set the value.
public class Item {
    public final ObservableField<Drawable> image = new ObservableField<>();
    public final ObservableField<String> price = new ObservableField<>();
    public final ObservableInt inventory = new ObservableInt();
}

item.price.set("$33.41");
In other cases you may have more “blobby” data. This often happens at the beginning of your development cycle, specifically during prototyping, where you might have some blob that comes down from the net, and you don’t really want to define objects quite yet, so you might want a map. In this case, all you have to do is have an observable map into which you can put your items, and then you can access them. Unfortunately, you don’t access it the same way exactly: you have to use the bracket notation.
ObservableMap<String, Object> item = new ObservableArrayMap<>();
item.put("price", "$33.41");
<TextView
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text='@{item["price"]}'/>
Notify on Any Thread (18:29)
One of the conveniences here is that you don’t have to notify on the UI thread anymore: you can update on any thread you want. However, note that we are going to read on the UI thread, so you have to be careful about that. Also, please don’t do this with lists! For lists, you should still notify on the UI thread, because we are going to read on the UI thread, and we are going to need the length on the UI thread, and we do not do any kind of synchronization on that. You probably already know this from Recycler and ListView, which get very upset. It’s because of lists, not because of those classes.
Performance (19:21)
Perhaps the most important consideration for this project was to not make it slow. Data Binding is infamous for being slow, so on Android we were doubly careful, and we believe we did a good job.
The foremost aspect of performance is that there is basically zero reflection. Everything happens at compile time. Occasionally, things are inconvenient precisely because they happen at compile time, but in the long run we don’t care. We don’t want to have to resolve anything while the application is running.
The second part is something nice that you get for free. Let’s say you are using Data Binding in a layout that shows the price of an object, and then the price changes. The new price comes, the notify comes, and Data Binding is only going to update that TextView, nothing else; only that TextView will be re-measured. If you were writing that code by hand, it’s very unlikely that you would be this selective; you would just set all the views again. So this comes as a free benefit.
Another performance benefit in Data Binding comes in cases where you have two expressions such as these:
<TextView android:text="@{user.address.street}" …/>
<TextView android:text="@{user.address.city}" …/>
You have user.address and another user.address. The code Data Binding will generate looks like this:
Address address = user.getAddress();
String street = address.getStreet();
String city = address.getCity();
It’s going to move the address into a local variable, then operate on it. Now imagine that there’s some calculation, which is actually expensive. Data Binding is only going to do it once. It’s yet another thing that you wouldn’t do by hand.
Another positive side effect concerns findViewById. When you call findViewById on a view in Android, it goes to all of its children and says something like, “Child zero, can you find this view by ID?” That child asks its own children, which ask the next children, and so on until the view is found. Then you call findViewById a second time for the other view, and the same thing happens all over again.
However, when you initialize Data Binding, we actually know which views we are interested in at compile time, so we have a method of finding all the views we want. We traverse the layout hierarchy once to collect all the views. It’s the same story, we traverse it, but it only happens once. The second time we need another view, there’s no second pass, because we already found all the views.
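A toy sketch of the difference (plain Java with a stand-in view-tree class, not the real framework code): per-call recursive search versus one traversal that collects every view with an ID into a map.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in view-tree node; id < 0 means "no id assigned".
class Node {
    final int id;
    final Node[] children;
    Node(int id, Node... children) { this.id = id; this.children = children; }

    // What findViewById does: a full recursive search on every call.
    Node findById(int wanted) {
        if (id == wanted) return this;
        for (Node child : children) {
            Node found = child.findById(wanted);
            if (found != null) return found;
        }
        return null;
    }

    // What Data Binding does instead: one pass that collects all ids,
    // so every later lookup is a map access, not another traversal.
    void collect(Map<Integer, Node> out) {
        if (id >= 0) out.put(id, this);
        for (Node child : children) child.collect(out);
    }
}
```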
Performance is about the little details. You’re including a library in your code, so yes, some behaviors will change, and yes, there will be some cost. But with all these things, I think we made it equal to, even sometimes better than the code you would write, which is very important.
RecyclerView and Data Binding (22:14)
Using ViewHolders was very common for ListView, and RecyclerView enforces this pattern. If you look at what Data Binding generates, you’ll see that it is effectively the ViewHolder: it has the fields, it knows the views. You can also easily use it inside a RecyclerView. We create a ViewHolder with a basic static create method, which inflates the UserItemBinding (the class generated from the user item layout file). So you call UserItemBinding.inflate, and now you have a very simple ViewHolder class that just keeps a reference to the generated binding, with a bind method like this.
public class UserViewHolder extends RecyclerView.ViewHolder {
    static UserViewHolder create(LayoutInflater inflater, ViewGroup parent) {
        UserItemBinding binding = UserItemBinding
                .inflate(inflater, parent, false);
        return new UserViewHolder(binding);
    }

    private UserItemBinding mBinding;

    private UserViewHolder(UserItemBinding binding) {
        super(binding.getRoot());
        mBinding = binding;
    }

    public void bindTo(User user) {
        mBinding.setUser(user);
        mBinding.executePendingBindings();
    }
}
One little detail to be careful about is to call executePendingBindings. This is necessary because when your data invalidates, Data Binding normally waits until the next animation frame before it updates the views, so that it can batch all the changes in your data and apply them at once. But RecyclerView doesn’t like that: it calls bindView and wants you to prepare the view right there, so that it can measure and lay it out. This is why we call executePendingBindings, so that Data Binding flushes all pending changes. Otherwise, it would trigger another layout pass; you may not notice it visually, but it’s going to be on the list of operations.
onCreateViewHolder simply calls the static create method, and onBindViewHolder passes the object to the ViewHolder. That’s it. We didn’t write any findViewById, no setters, nada. Everything is encapsulated in your layout file.
public UserViewHolder onCreateViewHolder(ViewGroup viewGroup, int i) {
    return UserViewHolder.create(mLayoutInflater, viewGroup);
}

public void onBindViewHolder(UserViewHolder userViewHolder, int position) {
    userViewHolder.bindTo(mUserList.get(position));
}
In the previous code, we showed a very simple, straightforward implementation. Say, for instance, that your user object’s name changes. The binding system notices and schedules itself for the next animation frame. The next animation frame starts, calculates what has changed, and updates the TextView. Then the TextView says, “Okay, my text has changed, I have to request layout now because I don’t know my new size. Let’s go tell RecyclerView that one of its children is unhappy and needs to re-layout itself too.” When this happens, you’re not going to get any animations, because you told RecyclerView after everything had already happened. RecyclerView will try to fix itself, and it will be done. Result: NO ANIMATIONS. But that’s not what we wanted.
What we wanted to happen was that when the user’s object is invalidated, we tell the adapter the item has changed. In turn, it is going to tell the RecyclerView, “Hey, one of your children is going to change, prepare yourself.” RecyclerView will layout, and for those whose children have changed, it’s going to instruct them to rebind themselves. When they rebind, TextView will say, “Okay, my text is set, I need the layout.” RecyclerView will say, “Okay, don’t worry, I’m already laying you out, let me measure you.” Result: MUCHO ANIMATIONS. You will get all the animations because everything happened under the control of RecyclerView.
Rebind Callback and Payload (25:50)
This is actually the part we still need to release as a library, but in the meantime, I want to show you how you can do it yourself. In Data Binding, we have an API where you can add a rebind callback: a callback you attach to get notified when Data Binding is about to update the views. For instance, you may want to freeze changes to the UI. You can hook into onPreBind, where you return a boolean that can say, “No, don’t rebind yet.” If one of the listeners says that, Data Binding calls onCanceled: “Hey, I canceled the rebind. Now it’s your responsibility to call me, because I’m not going to do anything.”

All we have to do here is return false from onPreBind when RecyclerView is not computing your layout: the view should not update itself outside of RecyclerView’s layout computation. That check is isComputingLayout, the new RecyclerView API that was released this summer. And when onCanceled comes, we just tell the adapter, “Hey, this item has changed, go figure it out,” because we already know the position from the holder. Then we let RecyclerView decide what it wants to do.
public UserViewHolder onCreateViewHolder(ViewGroup viewGroup, int i) {
    final UserViewHolder holder =
            UserViewHolder.create(mLayoutInflater, viewGroup);
    holder.getBinding().addOnRebindCallback(new OnRebindCallback() {
        public boolean onPreBind(ViewDataBinding binding) {
            return mRecyclerView != null && mRecyclerView.isComputingLayout();
        }

        public void onCanceled(ViewDataBinding binding) {
            if (mRecyclerView == null || mRecyclerView.isComputingLayout()) {
                return;
            }
            int position = holder.getAdapterPosition();
            if (position != RecyclerView.NO_POSITION) {
                notifyItemChanged(position, DATA_INVALIDATION);
            }
        }
    });
    return holder;
}
Previously, we only had the one onBind method, so we started writing this new RecyclerView API, where you get a list of payloads. It's the list of things that change on that ViewHolder. The cool thing about this API is that you receive payloads if, and only if, RecyclerView is rebinding to the same view. You know that that view already represents the same item, but there are just some changes (maybe grammatical changes, hopefully) that you want to execute. The data invalidation payload we sent comes back here. If it's coming because of Data Binding, we just call executePendingBindings. Do you remember we didn't let it update itself? Now, it is time to update itself because RecyclerView has told it to.
If you're wondering what this looks like, Data Binding simply traverses the payloads and checks to see if this data invalidation is the only payload it received. For example, maybe someone else is sending payloads that you don't know about, in which case you should bail out, since you don't know what those changes are.
public void onBindViewHolder(UserViewHolder userViewHolder, int position) {
    userViewHolder.bindTo(mUserList.get(position));
}

public void onBindViewHolder(UserViewHolder holder, int position, List<Object> payloads) {
    if (isForDataBinding(payloads)) {
        // The rebind we deferred earlier; let Data Binding execute it now.
        holder.getBinding().executePendingBindings();
    } else {
        // Unknown payloads: fall back to a full bind.
        onBindViewHolder(holder, position);
    }
}
We will ship this as a library, because it gives you performance, it gives you animations, makes everything nicer, and makes RecyclerView happy. Data Binding is mostly a happy child!
Data Invalidation is just a simple object, but I want to show it in case you’re curious:
static Object DATA_INVALIDATION = new Object();

private boolean isForDataBinding(List<Object> payloads) {
    if (payloads == null || payloads.size() == 0) {
        return false;
    }
    for (Object obj : payloads) {
        if (obj != DATA_INVALIDATION) {
            return false;
        }
    }
    return true;
}
Multiple View Types (28:50)
Another use case of Data Binding is multiple view types. This always happens: you have a header view, or maybe you have an application which shows search results from Google, where you can have a photo result or a place result. How can you structure this in RecyclerView? Let’s say you have a layout file that uses a variable, you name it “data.” This name “data” is important because you are going to reuse the same name. You use a regular layout file:
<layout> <data> <variable name="data" type="com.example.Photo"/> </data> <ImageView android:src="@{data.url}" …/> </layout>
If you need another type of result, for example something called “place,” then you need to have a totally different layout, another XML file:
<layout> <data> <variable name="data" type="com.example.Place"/> </data> <ImageView android:src="@{data.url}" …/> </layout>
The only thing shared between these two layout files is the variable name, which is called "data." When we do this, we create something called a DataBoundViewHolder.
public class DataBoundViewHolder extends RecyclerView.ViewHolder { private ViewDataBinding mBinding; public DataBoundViewHolder(ViewDataBinding binding) { super(binding.getRoot()); mBinding = binding; } public ViewDataBinding getBinding() { return mBinding; } public void bindTo( Place place) { mBinding.setPlace(place); mBinding.executePendingBindings(); } }
This is the same implementation as the previous example. It is a ViewDataBinding object that keeps the binding; ViewDataBinding is a base class for all generated classes, which is why you can usually keep the reference. We create this bind method: previously, it was binding to a user, now it's to a place.
Unfortunately, there's a problem here. There's no setPlace method in the ViewDataBinding class, because it's the base class. Instead, there is another API that the base class provides, which is basically a setVariable:
public void bindTo( Object obj) { mBinding.setVariable(BR.data, obj); mBinding.executePendingBindings(); }
You can provide the identifier of the variable, and then whatever object you want, like a regular Java object. The generated class is going to check the type and assign it.
The setVariable implementation looks something like this, which basically says, "If the passed ID is one of the IDs I know, cast it and assign it."
boolean setVariable(int id, Object obj) { if (id == BR.data) { setPhoto((Photo) obj); return true; } return false; }
Once you do this, the onBind and onCreate methods are exactly the same. What we do in getItemViewType is return the layout ID as the view type. This works very well because when we return the layout ID from getItemViewType, RecyclerView passes it back into onCreateViewHolder, which passes it through DataBindingUtil to create the correct binding class for it. Every item type has its own layout, so you don't have to maintain a mapping from type constants to layouts.
DataBoundViewHolder onCreateViewHolder(ViewGroup viewGroup, int type) {
    return DataBoundViewHolder.create(mLayoutInflater, viewGroup, type);
}

void onBindViewHolder(DataBoundViewHolder viewHolder, int position) {
    viewHolder.bindTo(mDataList.get(position));
}

public int getItemViewType(int position) {
    Object item = mItems.get(position);
    if (item instanceof Place) {
        return R.layout.place_layout;
    } else if (item instanceof Photo) {
        return R.layout.photo_layout;
    }
    throw new RuntimeException("invalid obj");
}
Of course, if you were writing this in a production application, you would probably avoid doing instanceof checks; you should probably have a base class that knows how to return the layout. But you get the general idea.
Binding Adapters and Callbacks (31:27)
Prepare yourselves for the coolest feature in Data Binding, according to popular polls… (that I made). It may even be the coolest feature in all of Android. Okay, maybe I’m hyping it up a little.
Let's imagine you have something a little bit more complicated than setText, for example an image URL. You have an ImageView, and you want to set an image URL on it. Of course, you don't want to load the image on the UI thread (remember that these expressions are evaluated on the UI thread). You want to use Picasso or one of the other libraries out there. Maybe you'll make an expression like this?
<ImageView … android:
That’s not going to quite work. Where did that context come from, and what do you put it into? There’s no view. You’ll lose your job if you write this. Instead what we’re going to do is create a BindingAdapter.
@BindingAdapter("android:src") public static void setImageUrl(ImageView view, String url) { Picasso.with(view.getContext()).load(url).into(view); }
Now the BindingAdapter here is an annotation. This one’s for Android source, because we’re setting the attribute android:src. This is a public static method and it takes two parameters. It takes a view and a string, but note that it can also take other types too. If you wanted to have a different one that takes an int or a drawable, for example, you could do that as well. Then you fill it in, and you can put whatever you want in here. In this case, we’ve put in the Picasso stuff. All our code goes right in there. Now that we have the view, we can get the context. We can do whatever we want right in that code. We can now load off the UI thread, just like we want to.
Attributes Working Together (33:12)
You also might want to do something even more complex, for example in this case where we have the PlaceHolder, the source, and the image URL.
<ImageView … android:
We have two different attributes, and they have two different static methods, so that’s not going to work. Actually, now we can have two attributes in the same BindingAdapter. You just pass both values and fill in your Picasso code right there, right in the middle.
<ImageView … android:
@BindingAdapter(value = {"android:src", "placeHolder"}, requireAll = false) public static void setImageUrl(ImageView view, String url, int placeHolder) { RequestCreator requestCreator = Picasso.with(view.getContext()).load(url); if (placeHolder != 0) { requestCreator.placeholder(placeHolder); } requestCreator.into(view); }
What if you have three now? You would have to have one for the source, one for the placeholder, and one for the image URL: all these different BindingAdapters. Really, I am a little too lazy for that, so let's do something else. We can have a BindingAdapter that takes all of those, or even just one or two, or any combination of them. All we have to do is set requireAll to false, and take all those parameters. Data Binding is going to pass in all of those values; if an attribute is not provided, it passes in the default value. placeHolder will be zero if you don't have a placeHolder attribute in your layout, and we check for that before we call the setter on Picasso right there.
Previous Values (34:55)
You also sometimes need previous values. In this example, we have an OnLayoutChanged.
@BindingAdapter("android:onLayoutChange")
public static void setOnLayoutChangeListener(View view,
        View.OnLayoutChangeListener oldValue,
        View.OnLayoutChangeListener newValue) {
    if (oldValue != null) {
        view.removeOnLayoutChangeListener(oldValue);
    }
    if (newValue != null) {
        view.addOnLayoutChangeListener(newValue);
    }
}
We want to remove the old one before we add a new one, but in this case, we don't know what the old one was. We can just add the new one, and that's easy enough, but how do we remove the old one? Well, you can take that value too; we'll give it to you. If you have this kind of adapter, we will hold on to that old value for you. So, if you have something ginormous that is actually only transient, we're going to still hold on to it; however, for cases like this, where it's going to be in your memory anyway, it's great. Each time it changes, you want to know what it was before, so you can start the correct animation.
Just using this API, Data Binding does the bookkeeping for you; you just need to think about how you animate the change. Of course, you can do this with multiple properties as well: we're going to pass them all in. All you have to do is put all your old values first and then all your new values.
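To sketch that ordering rule (a hypothetical example of mine, not from the talk; the attributes chosen are only illustrative), a BindingAdapter that tracks old and new values for two properties receives all old values first, then all new values, in attribute order:

```java
// Hypothetical sketch: old values arrive first, then new values, in order.
@BindingAdapter({"android:paddingStart", "android:paddingEnd"})
public static void setPaddingAnimated(View view,
        int oldStart, int oldEnd,    // previous values, in attribute order
        int newStart, int newEnd) {  // new values, in attribute order
    if (oldStart != newStart || oldEnd != newEnd) {
        // Something actually changed; react here, e.g. start an animation.
        view.setPaddingRelative(newStart, view.getPaddingTop(),
                newEnd, view.getPaddingBottom());
    }
}
```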
Dependency Injection (36:20)
Let’s imagine that we have this adapter:
public interface TestableAdapter {
    @BindingAdapter("android:src")
    void setImageUrl(ImageView imageView, String url);
}

public interface DataBindingComponent {
    TestableAdapter getTestableAdapter();
}

DataBindingUtil.setDefaultComponent(myComponent);
// or
binding = MyLayoutBinding.inflate(layoutInflater, myComponent);
Obviously, what's going to happen is that when our binding code runs, it's going to call this adapter's setImageUrl. What if I have some state that I want to have in my BindingAdapter? Or, let's say I have different kinds of BindingAdapters depending on what I'm doing in my application. In that case, it gets to be a pain. What we really want is to have just one instance of the BindingAdapter. Where does that come from?
What we can do is create a binding component, DataBindingComponent, which is an interface. When you have an instance adapter method, we're going to generate a getter for that adapter into this interface. Then, it's up to you to implement it. We don't know how you implement it, but you implement it, and then you just set the default component.
You can also do this on a per-layout basis. In this case, one sets the default and it can be used in all of your layouts. Then we know exactly what component to use to get your adapter.
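To make the "it's up to you to implement it" part concrete, a minimal hand-rolled implementation might look like the following (class names here are hypothetical; a dependency injection framework could provide the instance instead):

```java
// Hypothetical sketch: the component hands out a single adapter instance.
public class MyDataBindingComponent implements DataBindingComponent {
    private final TestableAdapter mAdapter;

    public MyDataBindingComponent(TestableAdapter adapter) {
        mAdapter = adapter; // a real implementation, or a mock in tests
    }

    @Override
    public TestableAdapter getTestableAdapter() {
        return mAdapter;
    }
}
```

You would then register it once with DataBindingUtil.setDefaultComponent(new MyDataBindingComponent(adapter)), or pass it to a single inflate call.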
You may also want to use your component as a parameter. For example, we just saw this setImageUrl before.
@BindingAdapter("android:src") public static void setImageUrl(MyAppComponent component, ImageView view, String imageUrl) { component.getImageCache().loadInto(view, imageUrl); }
We want to use some kind of state. Let's imagine that's the image cache, and we want to load the image with that image cache. Where does that state come from? What we're going to do is use the component. You can put whatever method you want in there; in this case, it's getImageCache(). The component is passed in as the first parameter to your BindingAdapter, and then you can do whatever you want with it. We don't know anything about what you're doing with your component, right? It's whatever you want to do, so it can be very convenient.
Event Handlers (38:56)
We have this onClick attribute, and we have a clicked method on the handler. clicked could be getClicked, or isClicked, or it could be a field, "clicked", so how do we know what to do in this case?
<Button … android:
// No "setOnClick" method for View. Need a way to find it. @BindingMethods({ @BindingMethod(type = View.class, attribute = "android:onClick", method = "setOnClickListener"}) // Look for setOnClickListener in View void setOnClickListener(View.OnClickListener l) // Look for single abstract method in OnClickListener void onClick(View v);
First of all, we need to find out what onClick means. We know onClick is not setOnClick, because we looked and we saw that there was no setOnClick, but there's this binding method. It says onClick means setOnClickListener. So we look at setOnClickListener, which takes a parameter: an OnClickListener argument, so let's look at that.
We see that there's only one abstract method in the OnClickListener, so we know that this could possibly be a listener that you want to use for your event handler. Now we look at the handler, and we find a method in there, called clicked.
static class OnClickListenerImpl1 implements OnClickListener { public Handler mHandler; @Override public void onClick(android.view.View arg0) { mHandler.adminClick(arg0); } } static class OnClickListenerImpl2 implements OnClickListener { public Handler mHandler; @Override public void onClick(android.view.View arg0) { mHandler.userClick(arg0); } }
We found the clicked method, and it takes the same parameters. We have a match, yay! We know this is an event handler, so we know exactly what to do: we'll treat it like an event.
So what do you do in the case of TextWatcher? There is no single abstract method there; there are three of them. In that case, what you do is make up your own single-method interfaces and then merge them all together. This is essentially what I did: I merged all of the before-, on-, and after-change callbacks. If one of them is null, then you don't do anything for it, and if it's not null, then you call it. Setting requireAll to false is, of course, required.
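As a sketch of that merging (modeled loosely on the library's own TextView adapters; the interface names here are illustrative, not the exact shipped API):

```java
// Split TextWatcher's three callbacks into single-method interfaces...
public interface BeforeTextChanged {
    void beforeTextChanged(CharSequence s, int start, int count, int after);
}

public interface OnTextChanged {
    void onTextChanged(CharSequence s, int start, int before, int count);
}

public interface AfterTextChanged {
    void afterTextChanged(Editable s);
}

// ...and merge whichever subset the layout supplied into one TextWatcher.
@BindingAdapter(value = {"android:beforeTextChanged", "android:onTextChanged",
        "android:afterTextChanged"}, requireAll = false)
public static void setTextWatcher(TextView view, final BeforeTextChanged before,
        final OnTextChanged on, final AfterTextChanged after) {
    view.addTextChangedListener(new TextWatcher() {
        @Override
        public void beforeTextChanged(CharSequence s, int start, int count, int a) {
            if (before != null) before.beforeTextChanged(s, start, count, a);
        }

        @Override
        public void onTextChanged(CharSequence s, int start, int b, int count) {
            if (on != null) on.onTextChanged(s, start, b, count);
        }

        @Override
        public void afterTextChanged(Editable s) {
            if (after != null) after.afterTextChanged(s);
        }
    });
}
```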
About the content
This content has been published here with the express permission of the author.
tf.zeros: How To Use tf zeros Operation
tf.zeros - How to use tf zeros operation to create a TensorFlow zeros Tensor
Transcript:
We import TensorFlow as tf.
import tensorflow as tf
Then we print the TensorFlow version we are using.
print(tf.__version__)
We are using TensorFlow 1.0.1.
In this video, we’re going to create a TensorFlow constant tensor full of zeros so that each element is a zero using the tf.zeros operation.
All right, let’s get started.
For the first example, we’ll create a TensorFlow tensor full of zeros that are full of integers.
tf_int_zeros_ex = tf.zeros(shape=[1,2,3], dtype="int32")
We use tf.zeros.
We pass in the shape and we pass in a Python list that says 1, 2, 3, and the data type that we want is int32.
We assign it to the Python variable tf_int_zeros_ex.
Let’s print out the tf_int_zeros_ex variable to see what we have.
print(tf_int_zeros_ex)
We see that it’s a TensorFlow tensor, it has the name, the shape is 1x2x3 which is what we defined it as, and the data type is int32.
Because we haven’t run it in a TensorFlow session, right now, it doesn’t have any values yet.
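As an aside (my own illustration, not part of the original lesson), the shape argument can be read as nested lists, outermost dimension first. A few lines of plain Python, with no TensorFlow needed, show what a 1x2x3 block of zeros holds:

```python
# Plain-Python picture of a zeros tensor: nested lists, outermost dim first.
def nested_zeros(shape):
    if not shape:           # scalar case: no dimensions left
        return 0.0
    return [nested_zeros(shape[1:]) for _ in range(shape[0])]

print(nested_zeros([1, 2, 3]))
# → [[[0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]]
```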
For the second example, we’ll create a TensorFlow tensor full of zeros that are now float numbers.
tf_float_zeros_ex = tf.zeros(shape=[2,3,4], dtype="float32")
So we use again tf.zeros, the shape is going to be 2x3x4, and the data type is float32.
We assign it to the Python variable tf_float_zeros_ex.
We print out the tf_float_zeros_ex Python variable to see what we have:
print(tf_float_zeros_ex)
And we see again that it is a TensorFlow tensor, the shape is 2x3x4, and the data type is float32.
Now that we’ve created the TensorFlow tensors, let’s run the computational graph.
First, we launch the graph in a TensorFlow session.
sess = tf.Session()
Then we initialize all the global variables in the graph.
sess.run(tf.global_variables_initializer())
Let’s now print our first TensorFlow tensor full of zeros that are int32s.
print(sess.run(tf_int_zeros_ex))
So we do a print and then we evaluate the variable in a TensorFlow session run, and we see that it is a 1x2x3 tensor.
All the values are zeros in every element position, and there is no decimal point, which designates to us visually that these are integers.
Next, let’s print our other TensorFlow tensor which is full of zeros that are float32 numbers.
print(sess.run(tf_float_zeros_ex))
We print it inside a TensorFlow session run and we see a 2x3x4 tensor.
We see that every single element is a zero, and because it is a float32 data type, we see that each element has a decimal point.
So zeros with decimal points make up all the elements.
Perfect! We were able to create a TensorFlow tensor full of zeros: in the first example, full of integer zeros; in the second example, full of float32 zeros.
Finally, we close the TensorFlow session to release the TensorFlow resources used in that session.
sess.close()
That is how you create a TensorFlow constant tensor full of zeros so that each element is a zero using the tf.zeros operation.
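As a closing aside, and assuming the same TensorFlow 1.x API used throughout this lesson, a with block can manage the session so the close call happens automatically:

```python
# Sketch only (requires TensorFlow 1.x): the context manager closes the session.
import tensorflow as tf

zeros_ex = tf.zeros(shape=[2, 2])  # dtype defaults to float32 when omitted

with tf.Session() as sess:
    print(sess.run(zeros_ex))
```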
NAME
uidinfo, uihashinit, uifind, uihold, uifree - functions for managing UID information
SYNOPSIS
#include <sys/param.h> #include <sys/proc.h> #include <sys/resourcevar.h> void uihashinit(void); struct uidinfo * uifind(uid_t uid); void uihold(struct uidinfo *uip); void uifree(struct uidinfo *uip);
DESCRIPTION
The uidinfo family of functions is used to manage uidinfo structures. Each uidinfo structure maintains per-UID resource consumption counts, including the process count and socket buffer space usage. The uihashinit() function initializes the uidinfo hash table and its mutex. This function should only be called during system initialization. The uifind() function looks up and returns the uidinfo structure for uid. If no uidinfo structure exists for uid, a new structure will be allocated and initialized. The uidinfo hash mutex is acquired and released. The uihold() function increases the reference count on uip. uip's lock is acquired and released. The uifree() function decreases the reference count on uip, and if the count reaches 0, uip is freed. uip's lock is acquired and released, and the uidinfo hash mutex may be acquired and released.
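The contract above can be illustrated with a short hypothetical sketch (illustrative only, in the style of kernel C; not taken from actual kernel sources):

```c
/* Hypothetical sketch: charge work to a user, then release the references. */
struct uidinfo *uip;

uip = uifind(uid);   /* look up (or allocate) the entry; returns with a reference */
/* ... update per-UID counters ... */
uihold(uip);         /* take an extra reference to hand to another subsystem */
/* ... later ... */
uifree(uip);         /* drop the extra reference */
uifree(uip);         /* drop the uifind() reference; uip may be freed here */
```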
RETURN VALUES
uifind() returns a pointer to an initialized uidinfo structure, and should not fail.
AUTHORS
This manual page was written by Chad David 〈davidc@acns.ab.ca〉.
#include <vxl_config.h>
#include <vcl_functional.h>
#include <vcl_map.h>
#include <vcl_iosfwd.h>
Go to the source code of this file.
Sparse 3D array.
vbl_big_sparse_array_3d is a sparse 3D array allowing space efficient access of the form s(300,700,900) = 2; It uses the 64-bit integer type "long long" (whenever supported by the compiler) to store the 3D index: 21 bits per dimension. Hence the largest possible coordinate in each dimension is 2^21-1 = 2097151. On platforms that do not have 64-bit integers, the maximum is 2^10-1 = 1023. (Actually, for some dimensions, it could be a factor 2 higher.)
Example usage:
vbl_big_sparse_array_3d<double> x;
x(1,2,3) = 1.23;
x(100,200,3) = 100.2003;
x.put(200,300,4, 200.3004);
vcl_cout << "123 = " << x(1,2,3) << vcl_endl
         << "222 = " << x(2,2,2) << vcl_endl
         << "333 is full? " << x.fullp(3,3,3) << vcl_endl
         << x;
Modifications 180497 AWF - Moved to Basics 261001 Peter Vanroose - documentation added about implementation 261001 Peter Vanroose - bug fixed in bigencode - had 11,22 instead of 21,42. 271001 Peter Vanroose - ported to vxl from BigSparseArray3; removed n1,n2,n3
Definition in file vbl_big_sparse_array_3d.h.
Definition at line 99 of file vbl_big_sparse_array_3d.h.
Definition at line 94 of file vbl_big_sparse_array_3d.h.
15 March 2012 12:11 [Source: ICIS news]
LONDON (ICIS)--Shares in Yule Catto rose on Thursday after Swiss investment bank UBS raised its target price for the UK-based specialty chemicals producer.
At 11:12 GMT, Yule Catto’s shares on the London Stock Exchange were trading at 229.85p, up by 0.42% from the previous close.
On Wednesday, Yule Catto reported a net loss of £5.30m ($8.41m, €6.39m) for the full year of 2011 from a net profit of £55.3m in 2010, as the company incurred losses from discontinued operations.
However, excluding the loss from discontinued operations, Yule Catto’s underlying net profit in 2011 almost doubled to £63.7m from £32.2m in 2010.
“Full year 2011 profitability was in line with consensus as easing key raw material prices such as butadiene helped offset severe fourth quarter destocking [-13% volume declines],” UBS said.
“This has not continued into 2012, with Yule Catto volumes improving sequentially quarter on quarter [but still below year on year] and management expects to make further progress throughout the year,” it added.
UBS maintained a "buy" rating for Yule Catto.
($1 = £0.64, €1 = £
User:Cajek/HTBFANJS
From Uncyclopedia, the content-free encyclopedia
Common mistakes I see
Humor
- Spelling stuff out for us: Give the reader some credit, guys!
- Total randomness: We don't get it.
- No joke set up: You have to have some serious parts to get the set up to the joke.
Concept
- Too realistic: If you're writing about how hard it is to raise a baby, for instance, normal hyperbole won't work. You're going to have to go way off the deep end.
- HYPERBOLE: A lot of authors, like me, go overboard to make something funny (Traditional Values)
- WITTY OBSERVATIONS: You could, like Jerry Seinfeld, just make witty observations for a whole article. Don't do this with hyperbole, it's too confusing: choose one or the other.
Prose and Formatting
- Repetitive words: Use a thesaurus, people.
- Redundancy: AH! Not the same! "Celebratory party" is an example.
- Unnecessary words: When you tell me something happened, then you tell me something else happened because of that, don't repeat yourself by saying "X happened. Then Y happened because of X."
Images
- Don't use overused images. Please.
- You don't need to have a hilarious caption, but it helps.
- Right aligned, please. It's easier to read the stuff around it when you set it up logically like that.
Alternate Namespaces
- Give a reason: Why is this a "why?" or a "howto" as opposed to just a plain old article? Unless the humor is amazing, a straight encyclopedic style trumps a first-person style or didactic tone.
UnNews
Concept
- How did it happen?
- Why should we care?
- What is going to be done?
- What's happening now with the issue?
Style
- Remember to take out ALL opinion, that includes almost all adjectives.
- UnNews is the perfect namespace for hyperbole. No, not the user. It's the place for going overboard.
- No first person voice. Seriously.
One easy way to improve the speed of a website is to only download images only when they’re needed, which would be when they enter the viewport. This “lazy loading” technique has been around a while and there are lots of great tutorials on how to implement it.
But even with all the resources out there, implementing lazy loading can look different depending on the project you're working in or the framework you're using. In this article, I'll use the Intersection Observer API alongside the onLoad event to lazy load images with the Svelte JavaScript framework.
Check out Tristram Tolliday’s introduction to Svelte if you’re new to the framework.
Let's work with a real-life example
I put this approach together while testing the speed on a Svelte and Sapper application I work on, Shop Ireland. One of our goals is to make the thing as fast as we possibly can.
Svelte is already pretty darn fast because all of the code is compiled in advance. But once we tossed in lazy loading for images, things really started speeding up.
This is what we’re going to work on together. Feel free to grab the final code for this demo from GitHub and read along for an explanation of how it works.
This is where we’ll end up by the end:
Let's quickly start up Svelte
You might already have a Svelte app you’d like to use, but if not, let’s start a new Svelte project and work on it locally. From the command line:
npx degit sveltejs/template my-svelte-project
cd my-svelte-project
npm install
npm run dev
You should now have the beginner app running locally.
Adding the components folder
The initial Svelte demo has an App.svelte file but no components just yet. Let's set up the components we need for this demo. There is no components folder, so let's create one in the src folder. Inside that folder, create an Image folder; this will hold our components for this demo.
We’re going to have our components do two things. First, they will check when an image enters the viewport. Then, when an image does enter, the components will wait until the image file has loaded before showing it.
The first component will be an <IntersectionObserver> that wraps around the second component, an <ImageLoader>. What I like about this setup is that it allows each component to be focused on doing one thing instead of trying to pack a bunch of operations in a single component.
Let's start with the <IntersectionObserver> component.
Observing the intersection
Our first component is going to be a working implementation of the Intersection Observer API. The Intersection Observer is a pretty complex thing but the gist of it is that it watches a child element and informs us when it enters the bounding box of its parent. Hence images: they can be children of some parent element and we can get a heads up when they scroll into view.
While it’s definitely a great idea to get acquainted with the ins and outs of the Intersection Observer API — and Travis Almand has an excellent write-up of it — we’re going to make use of a handy Svelte component that Rich Harris put together for svelte.dev.
We'll set this up first before digging into what exactly it does. Create a new IntersectionObserver.svelte file and drop it into the src/components/Image folder. This is where we'll define the component with the following code:
<script>
  import { onMount } from 'svelte';

  export let once = false;
  export let top = 0;
  export let bottom = 0;
  export let left = 0;
  export let right = 0;

  let intersecting = false;
  let container;

  onMount(() => {
    if (typeof IntersectionObserver !== 'undefined') {
      const rootMargin = `${bottom}px ${left}px ${top}px ${right}px`;

      const observer = new IntersectionObserver(entries => {
        intersecting = entries[0].isIntersecting;
        if (intersecting && once) {
          observer.unobserve(container);
        }
      }, { rootMargin });

      observer.observe(container);
      return () => observer.unobserve(container);
    }

    // The following is a fallback for older browsers
    function handler() {
      const bcr = container.getBoundingClientRect();

      intersecting = (
        (bcr.bottom + bottom) > 0 &&
        (bcr.right + right) > 0 &&
        (bcr.top - top) < window.innerHeight &&
        (bcr.left - left) < window.innerWidth
      );

      if (intersecting && once) {
        window.removeEventListener('scroll', handler);
      }
    }

    window.addEventListener('scroll', handler);
    return () => window.removeEventListener('scroll', handler);
  });
</script>

<style>
  div {
    width: 100%;
    height: 100%;
  }
</style>

<div bind:this={container}>
  <slot {intersecting}></slot>
</div>
We can use this component as a wrapper around other components, and it will determine for us whether the wrapped component is intersecting with the viewport.
If you're familiar with the structure of Svelte components, you'll see it follows a pattern that starts with scripts, goes into styles, then ends with markup. It sets some options that we can pass in, including a once property, along with numeric values for the top, right, bottom and left distances from the edge of the screen that define the point where the intersection begins.
We'll ignore the distances but instead make use of the once property. This will ensure the images only load once, as they enter the viewport.
The main logic of the component is within the onMount section. This sets up our observer, which is used to check our element to determine if it's "intersecting" with the visible area of the screen.
For older browsers, it also attaches a scroll event to check whether the element is visible as we scroll, and then it'll remove this listener once we've determined that the element is visible and that once is true.
Let's use our <IntersectionObserver> component to conditionally load images by wrapping it around an <ImageLoader> component. Again, this is the component that receives a notification from the <IntersectionObserver> so it knows it's time to load an image.
That means we'll need a new component file in components/Image. Let's call it ImageLoader.svelte. Here's the code we want in it:
<script>
  export let src
  export let alt

  import IntersectionObserver from './IntersectionObserver.svelte'
  import Image from './Image.svelte'
</script>

<IntersectionObserver once={true} let:intersecting={intersecting}>
  {#if intersecting}
    <Image {alt} {src} />
  {/if}
</IntersectionObserver>
This component takes some image-related props, src and alt, that we will use to create the actual markup for an image. Notice that we're importing two components in the scripts section, including the <IntersectionObserver> we just created and another one called <Image> that we haven't created yet, but will get to in a moment.
The <IntersectionObserver> is put to work by acting as a wrapper around the soon-to-be-created <Image> component. Check out those properties on it. We are setting once to true, so the image only loads the first time we see it.
Then we make use of Svelte’s slot props. What are those? Let’s cover that next.
Slotting property values
Wrapping components, like our <IntersectionObserver>, are handy for passing props to the children they contain. Svelte gives us something called slot props to make that happen. In our <IntersectionObserver> component you may have noticed this line:
<slot {intersecting}></slot>
This is passing the intersecting prop into whatever component we give it. In this case, our <ImageLoader> component receives the prop when it uses the wrapper. We access the prop using let:intersecting={intersecting} like so:
<IntersectionObserver once={true} let:intersecting={intersecting}>
We can then use the intersecting value to determine when it's time to load an <Image> component. In this case, we're using an if condition to check for when it's go time:
<IntersectionObserver once={true} let:intersecting={intersecting}> {#if intersecting} <Image {alt} {src} /> {/if} </IntersectionObserver>
If the intersection is happening, the <Image> is loaded and receives the alt and src props. You can learn a bit more about slot props in this Svelte tutorial.
We now have the code in place to show an <Image> component when it is scrolled onto the screen. Let's finally get to building the component.
Showing images on load
Yep, you guessed it: let's add an Image.svelte file to the components/Image folder for our <Image> component. This is the component that receives our alt and src props and sets them on an <img> element.
Here’s the component code:
<script>
  export let src
  export let alt

  import { onMount } from 'svelte'

  let loaded = false
  let thisImage

  onMount(() => {
    thisImage.onload = () => {
      loaded = true
    }
  })
</script>

<style>
  img {
    height: 200px;
    opacity: 0;
    transition: opacity 1200ms ease-out;
  }
  img.loaded {
    opacity: 1;
  }
</style>

<img {src} {alt} class:loaded bind:this={thisImage} />
Right off the bat, we're receiving the alt and src props before defining two new variables: loaded to store whether the image has loaded or not, and thisImage to store a reference to the img DOM element itself. We're also using a helpful Svelte method called onMount. This gives us a way to call functions once a component has been rendered in the DOM. In this case, we set a callback for thisImage.onload. In plain English, that means the callback is executed when the image has finished loading, and will set the loaded variable to a true value.
We'll use CSS to reveal the image and fade it into view. Let's set opacity: 0 on images so they are initially invisible, though technically on the page. Then, as they intersect the viewport and the <ImageLoader> grants permission to load the image, we'll set the image to full opacity. We can make it a smooth transition by setting the transition property on the image. The demo sets the transition time to 1200ms, but you can speed it up or slow it down as needed.
That leads us to the very last line of the file, which is the markup for an
<img> element.
<img {src} {alt} class:loaded bind:this={thisImage} />
This uses
class:loaded to conditionally apply a
.loaded class if the loaded variable is
true. It also uses the
bind:this method to associate this DOM element with the
thisImage variable.
Native lazy loadingNative lazy loading
While support for native lazy loading in browsers is almost here, it’s not yet supported across all the current stable versions. We can still add support for it using a simple capability check.
In our
ImageLoader.svelte file we can bring in the
onMount function, and within it, check to see if our browser supports lazy loading.
import { onMount } from 'svelte' let nativeLoading = false // Determine whether to bypass our intersecting check onMount(() => { if ('loading' in HTMLImageElement.prototype) { nativeLoading = true } })
We then adjust our
if condition to include this
nativeLoading boolean.
{#if intersecting || nativeLoading} <Image {alt} {src} /> {/if}
Lastly, in
Image.svelte, we tell our browser to use lazy loading by adding
<img> element.
<img {src} {alt} class:loaded bind:this={thisImage}
This lets modern and future browsers bypass our code and take care of the lazy loading natively.
Let’s hook it all up!Let’s hook it all up!
Alright, it’s time to actually use our component. Crack open the
App.svelte file and drop in the following code to import our component and use it:
<script> import ImageLoader from './components/Image/ImageLoader.svelte'; </script> <ImageLoader src="OUR_IMAGE_URL" alt="Our image"></ImageLoader>
Here’s the demo once again:
And remember that you’re welcome to download the complete code for this demo on GitHub. If you’d like to see this working on a production site, check out my Shop Ireland project. Lazy loading is used on the homepage, category pages and search pages to help speed things up. I hope you find it useful for your own Svelte projects!
No mention of native lazy load for images?
Why the scroll listener? I thought the purpose of IntersectionObserver is not having to use scroll listeners?
The scroll listener seems to be a fallback for browsers that do not support the IntersectionObserver.
Awesome post! Super clean code, explained very well for beginners and advanced developers. Thanks a lot!
THIS DO THE SAME EFFECT….
const lazyLoading = (node) => {
node.onload = () => {
node.style.opacity = 1;
}
}
img {
width: 100%;
opacity: 0;
transition: 1s opacity;
}
The solution is for lazy loading images, not the way are displayed.
Imagine you have 1mil images on the page. Without lazy loading the browser will go like crazy and download them all, making the page unusable. Your solution will load them all and just change opacity.
Thank you a lot! What a great article. It works like a charm <3 | https://css-tricks.com/lazy-loading-images-in-svelte/ | CC-MAIN-2022-27 | refinedweb | 2,014 | 63.19 |
class AtomFeed(XmlElement):Now for the whys and hows.
_qname = '{}feed'
title = Title
entries = [Entry]
class Entry(XmlElement):
_qname = '{}entry'
links = [Link]
title = Title
content = Content
class Link(XmlElement):
_qname = '{}link'
rel = 'rel'
address = 'href'
class Url(XmlElement):
_qname = '{}url'
class Title(XmlElement):
_qname = '{}title'
title_type = 'type'
For the past few years I've been working with Web Services and most of them use XML to represent the data (though I hope JSON catches on more widely). There are some great XML libraries out there, and my library is based on one of them (ElementTree). XML parsing is certainly nothing new, so why create a new one?
The WhyThere are a few limitations with the XML parsing approaches I've used in Python:
- XML structure isn't documented or available using
- No autocompete for finding elements in the XML
- If the XML changes in a new version of the web service, my code needs to be rewritten
- My code interacting with the XML is verbose
However, one of the biggest drawbacks to representing each type of XML element with it's own class is that you end up needing to write lots of class definitions. For this reason I've tried to make the XML class definitions as compact as possible. Specifying a simple XML class only takes two lines of code. For each type of sub-element and each XML attribute, you can add one line of code. You don't need to declare all of the elements or attributes either. The XmlElement will preserve all of the XML which it parses. If there are class members which correspond to a specified sub-element, the element will be placed in that member. Any unspecified elements will be converted to XmlElement instances. You can search over all XML elements (both anticipated members and unanticipated generic objects) using the
get_elementsmethod. XML attributes are handled in a similar fashion and can be searched using
get_attributes.
I've saved the most unique feature of this library for last: Sometimes web services change the XML definition thereby breaking your code. If it is something small like a change in XML namespace or changing a tag, it seems like such a waste to have to edit lines upon lines of code. To address this kind of problem, this XML library supports versioning. When you parse or generate XML, you can specify the version of the available rules that you'd like to use. You can use the same objects with any version of the web service.
To use versioning, write a class definition with tuples containing the version specific information:
class Control(XmlElement):If you create an instance of the Control element like this:
_qname = ('{}control', #v1
'{}control') #v2
draft = Draft
uri = 'atomURI'
lang = 'atomLanguageTag'
tag = ('control_tag', 'tag') # v1, v2
class Draft(XmlElement):
_qname = ('{}draft',
'{}draft')
c = Control(draft=Draft('yes'), tag='test')Then you can generate XML for each version like this:
c.to_string(1)returns
<control xmlns=""while
<draft>yes</draft>
</control>
c.to_string(2)returns
<control xmlns=""Note the difference in XML namespaces in the above. I also added an example of an attribute name which changed between versions, though "tag" doesn't actually belong in
<draft>yes</draft>
</control>
AtomPub control(so don't go trying to use it m'kay).
Since this library is open source, you're free to examine how it works and use it however you like. Allow me to highlight a few key points.
The HowEarlier I showed how to define XML element classes which look for specific sub elements and attributes and convert them into member objects. I also mentioned that this XML library handles versioning, meaning that the same object can parse and produce different XML depending on a version parameter. Both of these are accomplished by creating class level rule sets which are built up using introspection the first time an XML conversion is attempted.
In pseudo-code it works like this.
XML --> objectWhen generating XML the process is similar but slightly different.
- find out the desired version
- is there an entry for this version in _rule_set?
- if not, look at all XML members of this class
in _members
- create XML matching rules based on each member's type
(and store in _rule_set so we don't need to generate
the rules again)
- iterate over all sub-elements in the XML tree
- sub-elements and attributes which are in the rule set
are converted into the declared type
- sub-elements and attributes which don't fit a rule are
stored in _other_elements or _other_attributes
object --> XMLArmed with the above explanation, understanding the source code should be a bit easier.
- create an XML tree with the tag and namespace for this
object given the desired version
- look at all members of this class in _members
- tell each member to attach itself to the tree using
it's rules for the desired version
- iterate through _other_elements and _other_attributes
and tell each to attach to the XML tree | https://blog.jeffscudder.com/2008/11/xml-library-with-versioning.html | CC-MAIN-2022-40 | refinedweb | 832 | 55.88 |
On Thu, Sep 29, 2011 at 1:10 PM, Devin Jeanpierre <jeanpierreda at gmail.com> wrote: > It's absolutely pannoying that you have to prefix the globals like in > C, but deftime-assigned nonlocals are less convenient in other > respects too. e.g. try sharing a lock among multiple functions, or try > testing the usage of a lock, etc. Sharing state amongst multiple functions is a solved problem - you use either a closure or a class instance to create a shared namespace, depending on which approach makes more sense for the problem at hand. However, both of those solutions feel very heavy when all you want to do is share state across multiple invocations of the *same* function. Hence the current discussion (and the numerous ones that have preceded it over the years). It's a tricky problem precisely because it doesn't take much to tip any given use case over the threshold of complexity to where it's a better idea to use a full-fledged closure or class. And, as I have said several times, I agree closures currently impose testability problems, but I think the answer there lies in providing better introspection tools to lessen those problems rather than advising people not to use closures specifically for those reasons. Regards, Nick. -- Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia | https://mail.python.org/pipermail/python-ideas/2011-September/011946.html | CC-MAIN-2016-50 | refinedweb | 223 | 58.32 |
Using The Test Framework
The most common scenario for using the Test Framework is to construct test cases for ViewController implementation classes. Because the runtime environment of a ViewController is quite constrained, it is easy to construct isolated unit tests that exercise the methods exposed by a ViewController class.
- Create a new Java class SelectTestCase, in a package directory (typically under src/test in your project) that is the same as the package directory for the class you will be testing. This allows your test case to access package private and protected variables and methods in the class being tested.
- Make sure that the package declaration matches that of the class to be tested (in this case, org.apache.myfaces.usecases.locale. Declare your class to extend AbstractViewControllerTestCase (or, if you are not testing a ViewController implementation, extend AbstractJsfTestCase):
public class SelectTestCase extends AbstractViewControllerTestCase { ... }
- Create a constructor that takes a String parameter, and passes it to the superclass constructor:
public SelectTestCase(String name) { super(name); }
- Create a setUp() method and be sure to call super.setUp() at the beginning. This method will be called by JUnit immediately before it executes each test method.
public void setUp() { super.setUp(); // Customization will go here }
- After the call to the superclass setUp() method, perform any other initialization required to execute the tests in this test case. In our example case, a configuration method on the MockApplication instance will be used to define the default and supported Locales for this set of tests. This corresponds to what would happen at runtime, when the JavaServer Faces initialization process used the contents of the /WEB-INF/faces-config.xml resource to initialize these values. In addition, we will create a new instance of the Select class(); }
- Create a tearDown() method that cleans up any custom variables you allocated in your setUp() method, and then calls the super.tearDown() method. This will be called by JUnit after each test is executed.
public void tearDown() { vc = null; super.tearDown(); }
- Declare the custom instance variable(s) that you are setting up in your setUp() method. In this case, we create an instance of the ViewController class to be tested. A new instance will be created (via a call from JUnit to the setUp() method) before each test method is executed.
// The instance to be tested Select vc = null;
- Create one or more individual test methods (which must beController event()); }
The test case sets the locale property locale for the current view was NOT actually changed.
- Finally, integrate the execution of this test case into your build script. Many IDEs will take care of this for you; however, if you are creating an Ant build script by hand, you might find the test target from the Myfaces/myfaces/usecases/*/*TestCase.class"/> </batchtest> </junit> </target> | http://myfaces.apache.org/test/usecase.html | CC-MAIN-2014-41 | refinedweb | 464 | 52.7 |
Back to index
nsIStreamConverter provides an interface to implement when you have code that converts data from one type to another. More...
import "nsIStreamConverter.
STREAM CONVERTER USERS
There are currently two ways to use a stream converter:
SYNCHRONOUS Stream to Stream You can supply the service with a stream of type X and it will convert it to your desired output type and return a converted (blocking) stream to you.
STREAM CONVERTER SUPPLIERS):
.org/streamconv;1?from=FROM_MIME_TYPE&to=TO_MIME_TYPE
Definition at line 86 of file nsIStreamConverter.idl.
ASYNCRONOUS VERSION.
SYNCRONOUS VERSION Converts a stream of one type, to a stream of another type.
Use this method when you have a stream you want to convert.
Called when the next chunk of data (corresponding to the request) may be read without blocking the calling thread.
The onDataAvailable impl must read exactly |aCount| bytes of data before returning.
NOTE: The aInputStream parameter must implement readSegments.
An exception thrown from onDataAvailable has the side-effect of causing the request to be canceled.
Called to signify the beginning of an asynchronous request.
An exception thrown from onStartRequest has the side-effect of causing the request to be canceled.
Called to signify the end of an asynchronous request.
This call is always preceded by a call to onStartRequest.
An exception thrown from onStopRequest is generally ignored. | https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/interfacens_i_stream_converter.html | CC-MAIN-2016-44 | refinedweb | 222 | 57.06 |
User:PeeWee32/Tags on roads with cycleway=*
Contents
Reason for starting this wiki
The cycleway= * wiki is pretty clear on how to tag attributes that belong to the cycleway when a cycleway=track or cycleway=lane is used. This is done by using the namespace: cycleway.
Example: cycleway:surface=aspalt.
The reason why this new wiki was made lies in a discussion we had on the bicycle=use_sidepath wiki. The issue was this:
There is a road tagged with cycleway=track. When this road is also tagged with a bicycle=no (or bicycle=use_sidepath) is this OK or is this a error? Two visions on the matter:
1. It is OK because the bicycle=no applies to the main road only. There is no cycleway namespace.
2. It is an error because a cycleway=track implies that cycling is not forbidden so bicycle=no is wrong.
So what is the best way of tagging then?
The question is: What does a tag mean when there is no cycleway namespace? Does this tag apply just to the main road? What does width=7 mean? Does this include the lanes (in case of cycleway=lane)? And should we distinguish between track and lane? If so how? Will this way of tagging be simple enough for all to understand? How about mappers that are nog bicycle minded? Will they still understand?
Is tagging for cycleway=track & cycleway=lane the same principle?
Many cyclist see a difference between cycleway=track and cycleway=lane.
bla bla bla.
Cycleway=lane and using the lanes sufix
There is a way to express width. Example : width:lanes=2.75|2.75|3.25 . This bla bla bla. | http://wiki.openstreetmap.org/wiki/User:PeeWee32/Tags_on_roads_with_cycleway%3D* | CC-MAIN-2017-17 | refinedweb | 279 | 69.28 |
how to parse xml attribute value with dot to PsiElements Follow Lin Yang Created September 03, 2013 11:05 For example <bean com.seventh7.demo.Man" splitted by dot to PsiElement?
Hi,
This functionality should already be offered to you out of the box.
You check out the DOM API for adding validation etc
Alan
hi,
i can understand the api to manage xml, i just want to separate the attribute value part to PsiElement,
such as:
<bean com" part as a PsiElement, and take the "seventh7" as another PsiElement
Could you explain your usecase please?
This could help us give more concise advice related to your problem :)
Task an example,
A xml tag like: <bean demo", i can navigate to a package, and i click on the "Entity", i can navigate to the Human class. It's done by intellij as default.
But now, i want to something else when i click on the world "demo"
If you own your own XmlFileDescription class etc You can add add to your XmlAttribute field the `@Converter(YourConverter.class)` annotation. Then you should create the converter to be of type `List<YourReferenceType>`.
For instance `public class YourConverter extends Converter<List<YourReferenceType>>` and within the DomElement interface class change the field type to be of `GenericAttributeValue<List<YourReferenceType>>`
If this is functionality you are wishing to add to existing Spring files, a different approach will be required.
Thank you buddy
It's very near to my purpose. And does this way work with code completion as well?
I'm also interested in that how i have to deal with existing Spring files? :)
Thank you very very much for your help
(Intellij is so well designed :p)
Yes, this is all configurable under the Converter implementation.
Unfortunately if you wish to add this to existing Spring files you will have to ask for the help of a Jetbrains team member, as the plugin is currently closed source.
Hopefully they will be able to help you with your issue :)
It's generous of you to give me so much of your time. Thank you
What functionality do you want to provide exactly? I don't understand what navigation target could be useful additionally to the package/class already provided by Spring plugin.
I'm writting a mybatis plugin.
I want to support the code completion and reference of property of resultMap in mapper xml files :)
Such as:
<resultMap id="example" type="com.seventh7.demo.Man">
<result property="blog.user.name" column="username" />
</resultMap>
Ok, so this is not about integrating with spring.xml files then.
Btw, there's already three iBatis plugins, maybe you can join efforts?
yeah. i'm the author of,
i just want to find the right way to deal with my requirement:)
If you're using DOM, you don't need to do anything special, just use
GenericAttributeValue<PsiClass> getProperty();
in DOM-class for <result>. All navigation/completion will be provided automatically.
ok, i will try it. thank you so much
And I am the author of iBATIS/MyBatis mini-plugin that has pretty basic MyBatis support and emphasizes on iBATIS support. Now I feel like removing MyBatis support out of my plugin to avoid clashing with Lin Yang's MyBatis plugin. Lin Yang, is your plugin open-source? returns 404
i plan to make it open source when i rewrite it to make code clear and support more features, as i'm just a newer to intellij plugin development.
:) | https://intellij-support.jetbrains.com/hc/en-us/community/posts/206770335-how-to-parse-xml-attribute-value-with-dot-to-PsiElements?page=1 | CC-MAIN-2020-10 | refinedweb | 579 | 61.67 |
Use the Personalization API to retrieve information about an email, hashed email, or postal address. Sign up to receive your unique API Key.
The API can be queried with HTTP GET requests.
Query email for user "John Doe" with email personalize@example.com:
Query email for user "John Doe" with email personalize@example.com and display in browser for testing purposes:
Query MD5 hashed email for user personalize@example.com:
Query SHA-1 hashed email for user personalize@example.com:
Query for user John Doe with email john@doe.com at 667 Mission St:
These parameters are required for all uses.
These parameters are used to query with an email address.
Tip: Querying by email, name, and postal will give you the highest match rate. Querying by email and name will also give you a better match rate than by email alone.
These parameters are used to query with a postal address. Providing email parameters will increase the match rate. first and last name must always be provided to query via postal. Either zip4 must be provided or street, city, and state.
Tip: All postal parameters should be URL encoded.
Tip: It's recommended to use the standardized format for postal addresses.
These parameters are optional to aid viewing query responses within a browser.
In order to query for a certain field, you can simply use the fields parameter on the end of your query string. For example, a regular query of personalize@rapleaf.com with your API key would look like this:
Now if you simply add the fields parameter followed by specific comma separated fields (as they appear in the response), you can view just the specific fields you queried for in the response.
Please note the %20 is simply the URL encoded space which is needed (you must exactly match
the field name as it appears in the response for this to work).
Here are a few email addresses and name and postal address combinations you can try out with our API:
Successful responses are returned in JSON format. For more details about all the fields available and their possible values, download our data dictionary..
{ "age":"21-24", "gender":"Male", "interests":{ "blogging":true, "high_end_brand_buyer":true, "sports":true, }, "eam":{ "date_first_seen":"2009-06-20", "month_last_open":"2014-11", "popularity":10, "velocity":2, }, "education":"Completed Graduate School", "occupation":"Professional", "children":"No", "household_income":"75k-100k", "marital_status":"Single", "home_owner_status":"Rent" }
The Personalization API is easy to implement in a variety of languages. The code snippets below use the libraries on our GitHub Page to query our API and output the results. For more details, please consult each library's accompanying README docs.
require 'towerdata_api' begin api = TowerDataApi::Api.new("API_KEY") # Set API key here hash = api.query_by_email("personalize@rapleaf.com") puts hash.inspect rescue Exception => e puts e.message end
from towerDataApi import TowerDataApi api = TowerDataApi.TowerDataApi('API_KEY') try: response = api.query_by_email('personalize@rapleaf.com') for k, v in response.iteritems(): print '%s = %s' % (k, v) except Exception as e: print e
import org.json.JSONObject; import com.towerdata.api.personalization.TowerDataApi; public class TowerDataApiExample { public static void main(String[] args) { TowerDataApi api = (args[0] != null) ? new TowerDataApi(args[0]):new TowerDataApi("YOUR_KEY"); // Set API key here
final String email = (args[1] != null) ? args[1]:"personalize@rapleaf.com";
// Query by email try { JSONObject response = api.queryByEmail(email, true); System.out.println("Query by email: \n" + response); } catch (Exception e) { e.printStackTrace(); } } }
use 'TowerDataAPI.pm'; eval { my $response = query_by_email('pete@rapleafdemo.com'); while(my ($k, $v) = each %$response) { print "$k = $v.\n"; } }; if ($@) { print $@ }
using System; using System.Collections.Generic; using System.Linq; using System.Text; using personalization; namespace MyApplication { class TowerDataExample { public static void Main(string[] args) { RapleafApi api = new RapleafApi("SET_ME"); // Set API key here try { Dictionary<string, response = api.queryByEmail("personalize@rapleaf.com", true); foreach (KeyValuePair<string, kvp in response) { Console.WriteLine("{0}: {1}", kvp.Key, kvp.Value); } } catch (System.Net.WebException e) { Console.WriteLine(e.Message); } } } }
Need to query multiple people in a single request? Check out the Personalization API, Bulk Version.
If you add
'&format=html' to the url of a request in your browser, it will automatically 'pretty print' JSON for testing purposes.
Please email questions to TowerData Developer Support. | http://intelligence.towerdata.com/developers/personalization-api/personalization-api-documentation | CC-MAIN-2015-11 | refinedweb | 702 | 52.36 |
Learn more about these different git repos.
Other Git URLs
85ac3ff
b667f73
@@ -9249,7 +9249,7 @@
context.session.assertPerm('admin')
add_external_rpm(rpminfo, external_repo, strict=strict)
- def tagBuildBypass(self, tag, build, force=False):
+ def tagBuildBypass(self, tag, build, force=False, notify=True):
"""Tag a build without running post checks or notifications
This is a short circuit function for imports.
@@ -9261,6 +9261,8 @@
"""
_tag_build(tag, build, force=force)
+ if notify:
+ tag_notification(True, None, tag, build, context.session.user_id)
def tagBuild(self, tag, build, force=False, fromtag=None):
"""Request that a build be tagged
@@ -9347,8 +9349,8 @@
tag_notification(False, None, tag, build, user_id, False, "%s: %s" % (exctype, value))
raise
- def untagBuildBypass(self, tag, build, strict=True, force=False):
- """Untag a build without any checks or notifications
+ def untagBuildBypass(self, tag, build, strict=True, force=False, notify=True):
+ """Untag a build without any checks
Admins only. Intended for syncs/imports.
@@ -9356,6 +9358,8 @@
No return value"""
_untag_build(tag, build, strict=strict, force=force)
def moveBuild(self, tag1, tag2, build, force=False):
"""Move a build from tag1 to tag2
Fixes:
This also changes the default behavior. Previous bypass tag operations did not notify. I believe there are a number of rel-eng scripts that perform mass tagging and I think that quieting notifications is part of the reasons.
But maybe it is better to make the change an notify by default. I'm not sure, but I think we should check with some users before we do this.
Also, seems like we ought to make the same change to tagBuildBypass if we're going this way.
rebased onto f4b532eb4df3c52d0b8aac9a8e2c994b314e96eb
I've added fix also for tagBuild and sent a question to koji-devel.
I don't think the bypass calls should notify in the failure case. This could lead to some some repetitive messages in some cases.
rebased onto b667f73
Ok, I think it should work, rebased and updated.
:thumbsup:
merging with a docstring fix
Commit 8290719 fixes this pull-request
Pull-Request has been merged by mikem
Fixes: | https://pagure.io/koji/pull-request/691 | CC-MAIN-2022-21 | refinedweb | 337 | 54.22 |
CTLOG_new, CTLOG_new_from_base64, CTLOG_free, CTLOG_get0_name, CTLOG_get0_log_id, CTLOG_get0_public_key - encapsulates information about a Certificate Transparency log
#include <openssl/ct.h> CTLOG *CTLOG_new(EVP_PKEY *public_key, const char *name); int CTLOG_new_from_base64(CTLOG ** ct_log, const char *pkey_base64, const char *name); void CTLOG_free(CTLOG *log); const char *CTLOG_get0_name(const CTLOG *log); void CTLOG_get0_log_id(const CTLOG *log, const uint8_t **log_id, size_t *log_id_len); EVP_PKEY *CTLOG_get0_public_key(const CTLOG *log);
CTLOG_new() returns a new CTLOG that represents the Certificate Transparency ( CT ) log with the given public key. A name must also be provided that can be used to help users identify this log. Ownership of the public key is transferred.
CTLOG_new_from_base64() also creates a new CTLOG, but takes the public key in base64-encoded DER form and sets the ct_log pointer to point to the new CTLOG. The base64 will be decoded and the public key parsed.
Regardless of whether CTLOG_new() or CTLOG_new_from_base64() is used, it is the caller’s responsibility to pass the CTLOG to CTLOG_free() once it is no longer needed. This will delete it and, if created by CTLOG_new(), the EVP_PKEY that was passed to it.
CTLOG_get0_name() returns the name of the log, as provided when the CTLOG was created. Ownership of the string remains with the CTLOG.
CTLOG_get0_log_id() sets *log_id to point to a string containing that log’s LogID (see RFC 6962 ). It sets *log_id_len to the length of that LogID. For a v1 CT log, the LogID will be a SHA-256 hash (i.e. 32 bytes long). Ownership of the string remains with the CTLOG.
CTLOG_get0_public_key() returns the public key of the CT log. Ownership of the EVP_PKEY remains with the CTLOG.
CTLOG_new() will return NULL if an error occurs.
CTLOG_new_from_base64() will return 1 on success, 0 otherwise.
ct(7)
These functions were added in OpenSSL 1.1.0.
Licensed under the OpenSSL license (the "License"). You may not use this file except in compliance with the License. You can obtain a copy in the file LICENSE in the source distribution or at <>. | https://www.zanteres.com/manpages/CTLOG_get0_log_id.3ssl.html | CC-MAIN-2022-33 | refinedweb | 329 | 66.13 |
> openh323-v1_15_1-src.zip > cu30codec.h
/* * cu30codec.h * * H.323 protocol handler * * Open H323 Library * * Copyright (c) 1999-2000. * * Contributor(s): ______________________________________. * Derek J Smithies (derek@indranet.co.nz) * * $Log: cu30codec.h,v $ * Revision 1.6 2002/10/09 18:18:35 rogerh * Apply a patch from Damien Sandras * * Revision 1.5 2002/09/16 01:14:15 robertj * Added #define so can select if #pragma interface/implementation is used on * platform basis (eg MacOS) rather than compiler, thanks Robert Monaghan. * * Revision 1.4 2002/09/03 06:19:36 robertj * Normalised the multi-include header prevention ifdef/define symbol. * * Revision 1.3 2002/08/05 10:03:47 robertj * Cosmetic changes to normalise the usage of pragma interface/implementation. * * Revision 1.2 2002/01/16 02:53:52 dereks * Add methods to cope with H.245 RequestModeChange in h.261 video codec. * * Revision 1.1 2001/10/23 02:18:06 dereks * Initial release of CU30 video codec. * * */ #ifndef __OPAL_CU30CODEC_H #define __OPAL_CU30CODEC_H #ifdef P_USE_PRAGMA #pragma interface #endif #include "h323caps.h" /////////////////////////////////////////////////////////////////////////////// /**This class describes the CU30 video codec capability. */ class H323_Cu30Capability : public H323NonStandardVideoCapability { PCLASSINFO(H323_Cu30Capability, H323NonStandardVideoCapability); public: /**@name Construction */ //@{ /**Create a new CU30 capability. */ H323_Cu30Capability( H323EndPoint & endpoint, // Endpoint to get NonStandardInfo from. PString statsDir, // Directory to read statistics for codec from/to. INT _width, // width and height for the transmitter. INT _height, // INT _statsFrames // Number of frames to collect stats for. ); //@} /**@name Overrides from class PObject */ //@{ /**Create a copy of the object. */ virtual PObject * Clone() const; //@} /**@name Operations */ //@{ /**Create the codec instance, allocating resources as required. 
*/ virtual H323Codec * CreateCodec( H323Codec::Direction direction /// Direction in which this instance runs ) const; //@} /**@name Identification functions */ //@{ /**Get the name of the media data format this class represents. */ virtual PString GetFormatName() const; //@} PString statisticsDir; //Required by cu30 codec at initialization. //directory containing stats. Good stats==good compression. INT newWidth; // width and height for the transmitter. INT newHeight; // INT statsFrames; // Number of frames to collect stats over. }; /////////////////////////////////////////////////////////////////////////////// /**This class is a CU30 codec. */ class H323_Cu30Codec : public H323VideoCodec, public PDynaLink { PCLASSINFO(H323_Cu30Codec, H323VideoCodec) public: /**@name Construction */ //@{ /**Create a new CU30 video codec. */ H323_Cu30Codec( Direction direction, /// Direction in which this instance runs PString statsDir, INT _width, /// width and height for the transmitter. INT _height, INT _statsFrames /// Number of frames to collect stats over. ); ~H323_Cu30Codec(); //@} /**@name openh323 interface routines. */ //@{ /**Encode the data from the appropriate device. This will encode a frame of data for transmission. The exact size and description of the data placed in the buffer is codec dependent but should be less than H323Capability::GetTxFramesInPacket() * OpalMediaFormat::GetFrameSize() in length. The length parameter is filled with the actual length of the encoded data, often this will be the same as the size parameter. This function is called every GetFrameRate() timestamp units, so MUST take less than (or equal to) that amount of time to complete! Note that a returned length of zero indicates that time has passed but there is no data encoded. This is typically used for silence detection in an audio codec. This function grabs, displays, and compresses a video frame into into CU30 packets. Get another frame if all packets of previous frame have been sent. 
Get next packet on list and send that one. Render the current frame if all of its packets have been sent. */ virtual BOOL Read( BYTE * buffer, /// Buffer of encoded data unsigned & length, /// Actual length of encoded data buffer RTP_DataFrame & rtpFrame /// RTP data frame ); /**Decode the data and output it to appropriate device. This will decode a single frame of received data. The exact size and description of the data required in the buffer is codec dependent but should be less than H323Capability::GetRxFramesInPacket() * OpalMediaFormat::GetFrameSize() in length. It is expected this function anunciates the data. That is, for example with audio data, the sound is output on a speaker. This function is called every GetFrameRate() timestamp units, so MUST take less than that amount of time to complete! */ virtual BOOL Write( const BYTE * buffer, /// Buffer of encoded data unsigned length, /// Length of encoded data buffer const RTP_DataFrame & rtp, /// RTP data frame unsigned & written /// Number of bytes used from data buffer ); /** Used to acquire statistics on this frame. Used in later h323 connections for minimising the bits required to transmit cu30 video. */ BOOL RecordStatistics(unsigned char *src); protected: /** Resize the internal variables to cope with a new frame size. */ BOOL Resize(int width, int height); /** call RenderFrame() routine. */ BOOL Redraw(); /** Display the current frame that the encoder/decoder has in memory. Takes the address of the current frame (set in last call to encode/decode) and then call rawDataChannel->Write(). The current frame is in YUV420P format, and consists of width*height*1.5 bytes. If there is no raw data channel, return true (success). */ BOOL RenderFrame(); /**Process a request for a new frame, as part of the picture has been lost. This request is handled by causing the transmitting video codec to send out an intra frame. Subsequent frames will be inter, inter, inter,,,,, and then an intra frame. 
*/ virtual void OnLostPartialPicture(); /**In the context of the Cu30 codec, this message means "Not all the statistics fields got through." "Please resend the statistics". */ virtual void OnLostPicture(); private: /*There is a problem with the CU30codec. It needs to be able to carry out two tasks. 1)Grab data from the camera. 2)Render data from an array. Thus, we either: two PVideoChannels, or one PVideoChannel to both grab and render. We use one PVideoChannel, which is not consistant with elsewhere, but enables us to (later) have a grab and display process irrespective of there being a H323 connection. */ /** Close the encoder & decoder objects in the run time library. Delete the allocated memory for the frame buffer. */ void Close(); //@} /**@name cu30 interface routines. */ //@{ /** Function pointer initialised when the plug in codec is read */ int (*OpenEncoderWith)(void *, int,int,char *); /** Function pointer initialised when the plug in codec is read */ int (*OpenEncoder)(void *, int,int); /** Function pointer initialised when the plug in codec is read */ int (*CloseEncoder)(void *); /** Function pointer initialised when the plug in codec is read */ int (*OpenDecoder)(void *, int,int); /** Function pointer initialised when the plug in codec is read */ int (*CloseDecoder)(void *); /** Function pointer initialised when the plug in codec is read */ int (*OpenStats)(void *, int,int); /** Function pointer initialised when the plug in codec is read */ int (*CloseStats)(void *); /** Function pointer initialised when the plug in codec is read */ int (*DoEncode)(void *, unsigned char *,unsigned char **); /** Function pointer initialised when the plug in codec is read */ int (*DoDecode)(void *, const unsigned char*, int, unsigned char **); /** Function pointer initialised when the plug in codec is read */ int (*DoStats)(void *, const unsigned char*); /** Function pointer initialised when the plug in codec is read */ int (*SetQuality)(void *, int); /** Function 
pointer initialised when the plug in codec is read */ int (*SetCodecSize)(void *, int,int); /** copy statistics for a particular field from the library. */ int (*CopyStatsFromLib)(void *, unsigned char *dest, unsigned &length, char *field); /** copy statistics for a particular field to the library. */ int (*CopyStatsToLib)(void *, unsigned char *src, unsigned length, char *field); /**When packets have been lost in the network, we need to wait for an intraframe. Intraframes do not depend on the previous frames. Use the test "IsIntraFrame" to determine if it is an intra frame. */ int (*IsIntraFrame)(void *, const unsigned char *); /** If statistics have been kept on this session, save them to a directory. Statistics are saved in four text files, called "y", "u", "v", and "mc" */ int (*SendStatsToFiles)(void *, char *dir); /** Given a message from the remote computer, generate an intra frame. This occurs because the remote computer has not received all video packets. */ int (*ForceIntraFrame)(void *); /** Tell the codec to create some internal data. This data is specific to this thread, and must not be viewed by other threads. */ int (*MakeInternalData)(void **); /** Tell the codec to free the internal data. This data was created in the call to "MakeInternalData". */ int (*FreeInternalData)(void *); /** Query the Cu30 library, and ask if the all the statistics files have been loaded successfully. Returns 1 if everything is ready for the decoder to run. */ int (*StatsLoadedOK)(void *); /** Allocate the necessary space for yuv420pImage/encodedImage, depending on frame size and direction. Checks for non existance of images first. The encoder needs just the source image. The decoder needs just the soure encoded image. For each Cu30 decoder created, the Cu30 decoder creates one output image. */ BOOL AllocateInternalImages(void); //@} /** Encoder creates a memory block to hold the raw image from the grabber. The decoder just knows where this data is in the runtime codec. 
*/ unsigned char *yuv420pImage; /// the rawimage, in yuv420p format. /** The encoder just knows where this data is in the runtime codec. The decoder uses this block of memory to assemble incoming packets to form the the entire encoded image. */ unsigned char *encodedImage; /// Current image we are woring on. /** Size of the encoded image. */ int encodedImageSize; ///Size (in bytes) of current image. /** position in encoded image that in(out)going packets are writtten(read) to(from) */ int encodedImageIndex; ///position of next packet in encodedImage. /** packetCount is used to determine if (a)need to send the statistics fields and (b)which field to send. */ PINDEX packetCount; /**Codec active determines if the codec has send (or received) one packet. There are two instances of this codec. one for rx, one for tx. Each codec does not need to have an encoder and decoder. Using this variable, we prevent duplication of encoder, and the decoder. */ BOOL codecActive; /** the Statistics dir describes where the stats files are. These files provide a means for improving the compression achieved. The encoder remembers the old stats dir, so once set, can just use the OpenEncoder function, and not OpenEncoderWith(). */ PString statisticsDir; /** For the decoder, sometimes miss incoming video packets. In this case, cannot keep going and hope. Consequently, we wait, until we get a frame that does not depend on the previous frame. Thus, we wait for an IntraFrame. */ BOOL waitForIntraFrame; /** During the current video connection, record the statistics for N frames. These statistics are saved, and used in subsequent video connections. By taking statistics, we can optimise the compression ratio next time a connection occurs. */ INT statsFrames; /** Advises transmitting video codec that the statistics frames need to be resent. This boolean is set true in response to a On_lostPicture H245 Message. */ BOOL resendStats; /** Pointer to the internal data used by the codec library. 
*/ void *internData; }; #endif // __OPAL_CU30CODEC_H ///////////////////////////////////////////////////////////////////////////// | http://read.pudn.com/downloads14/doc/comm/56136/openh323/include/cu30codec.h__.htm | crawl-002 | refinedweb | 1,729 | 57.57 |
There was a discussion about htmlfill and FormEncode on the Subway list a while ago. One of the things that occurred to me during the discussion is that htmlfill would be a lot simpler and more reliable if it didn't use HTMLParser, and just worked with a nice DOM-ish tree.
And then I thought, well, if you are going to generate a form and then pass it to htmlfill (one of a couple options), wouldn't it be nice if you passed in the already-parsed tree, instead of reparsing? Saves a few cycles at least.
In FormEncode I made a little module simplehtmlgen to generate the HTML -- it's kind of like HTMLGen, but a little more isomorphic to HTML/XML. More like stan, really. Well, I could use stan (which also produces a DOM-like object), but I decided to try ElementTree instead, since I feel vaguely like it's growing standard for Pythonic XML. It's not perfect for my purposes -- it might be too XML, where I would prefer a more lax perspective that would better accommodate HTML.
Anyway, I wrote a module for ElementTree, htmlgen. You use it like:
html.textarea(name='entry', class_='big_field')(text_content)
And you get back subclasses of ElementTree's Elements (which you can continue to call to add more attributes or content to). The subclass also adds a __str__ method which serializes the XML (using a default encoding -- I'm not 100% comfortable with a default encoding, but it seems like a good idea to my naive unicode mind). Anyway, about ElementTree...
One of the odd parts of ElementTree is how it deals with text. Tags have a text attribute, which is the text immediately contained in the tag, and a tail attribute which contains the text immediately after the tag ends. There's no text node or text structure that is a child of another tag. There's also no object to represent a set of nodes (except a normal list) so I had to be careful to flatten lists (since I do want to handle sets of tags that aren't a valid XML tree). Anyway, I think this library simplifies some of that, things you'd mostly notice if you are building trees with ElementTree instead of parsing XML documents.
Another odd thing is that there's no way to serialize nodes to unicode -- to do that I had to serialize them to bytes and then decode to unicode. Seemed like a weird omission. And you can't put in any kind of unparsed literal into the tree, you can only put real nodes in, so there's no way to make a literal class/function/builder. This makes sense from a parsing point of view (since you couldn't reparse the serialized output if it wasn't valid), but is a common feature of HTML builders. Instead I guess you just have to parse XML strings before inserting them, which is easy enough.
One positive point (which from another perspective might be a negative) ElementTree doesn't seem very namespace-aware, so I can create tags and attributes with : in them (which means I can generate ZPT).
I feel a little badly about subclassing Element (technically _ElementInterface), because it means there's a more-featureful class of nodes that can easily be mixed in with a less-featureful class (or vice versa). The builder syntax isn't a big deal -- there's no real reason to use that in lieu of the normal methods when manipulating a tree that is already created. But things like __str__ are likely to be useful, but at the same time limiting if you depend on them.
"Another odd thing is that there's no way to serialize nodes to unicode --"..
this could help..
from cElementTree import Element, tostring
html = Element('html')
tostring(html, encoding='utf-8') # -> returns html node serialized into a string using specified encoding
Right, which is why I implemented a __unicode__ method like:def __unicode__(self): return tostring(self, 'utf-8').decode('utf-8')
Doesn't that seem really weird, though?
Unfortunately, htmlgen is already taken in the Python HTML generator namespaces. It's a really old (but still fairly widely used) module available on the starship.
Yes, but it's HTMLgen not htmlgen... ;) Probably mine should be called etgen or something. Anyway, for now it's part of a package, so there's no real name conflict.
"htm.
I thought the 'text' attribute thing with ElementTree was a little odd at first as well. However, beyound this, I think ElementTree is Pythonic and quite handy. I have written dozens of parsers and generators with ElementTree and can no complain. Also, I do use which reminds me a bit of what your doing here. | http://www.ianbicking.org/my-first-bit-of-elementtree.html | CC-MAIN-2018-05 | refinedweb | 799 | 60.95 |
use of "return"
Discussion in 'Ruby' started by SB, Jan 9, 2006.59
- PvdK
- Jul 24, 2003
How do I return a return-code from main?wl, Mar 5, 2004, in forum: Java
- Replies:
- 2
- Views:
- 566
- Dimitri Maziuk
- Mar 5, 2004
difference between return &*i and return i;Ganesh Gella, Nov 11, 2004, in forum: C++
- Replies:
- 4
- Views:
- 344
- Stuart Gerchick
- Nov 12, 2004
Why use "return (null);" instead of "return null;" ?Carl, Aug 21, 2006, in forum: Java
- Replies:
- 21
- Views:
- 965
- Patricia Shanahan
- Aug 24, 2006
what value does lack of return or empty "return;" returnGreenhorn, Mar 3, 2005, in forum: C Programming
- Replies:
- 15
- Views:
- 780
- Keith Thompson
- Mar 6, 2005 | http://www.thecodingforums.com/threads/use-of-return.827629/ | CC-MAIN-2014-23 | refinedweb | 116 | 59.98 |
Several new scripts and module updates over at pythonutils. Visit *NEW* includer.py Version 1.0.0 Adds an INCLUDE command to python scripts for including other modules into them. Lots of nifty features including recursive include, incdir command, will remove relevant import statements and anything under the 'if __name__ ==...' line from included scripts etc.... Useful for distributing modules with several dependencies as a single file. The features mean that it's possible to test a module using normal import statements (of the type from module import....). When you run includer.py, the INCLUDE command causes your included script (minus the test code under a 'if __name__ ==...' line) to be added in *and* the import statements are removed. Using ##INCLUDE is the equivalent of 'from module import *' - avoiding namespace pollution is up to you. python includer.py infilename outfilename Easy hey ! *NEW* proxycleaner.py Version 1.0.0 *NEW* approxClientproxy.py Version 0.1.0 *UPDATED* approx.py Version 0.4.0 approx.py is a cgiproxy for use by those in restricted internet environments. (Work, internet cafes, colleges, libraries, restrictive regimes etc). It is still at beta stage but functional. Cookie handling with ClientCookie now works although support for multiple users and cookie management will be added. Authentication and POST methods are the next issues to work on. DEBUG mode added. proxycleaner.py will 'clean' webpages that have been fetched through approx or the James Marshall perl cgiproxy. Both these scripts modify URLs a pages are fetched through them - this script undoes the modification. approxClientproxy.py is a client script that works in conjunction with approx.py - it runs on your machine. Instead of having approx modify pages you can point your browser at approxClientproxy which handles the communication between itself and approx. 
Things like javascript, which approx can't modify, work through aPc which transparently mangles *all* access to go through approx... At the moment it's a crude hack of TCPwatch by the Zope corp.. but it works fine and will only improve. Allows transparent unrestricted http access in a restricted environment. *UPDATE* ConfigObj Version 3.2.0 Removed the charmap 'function' that eliminated unicode errors by stamping on unicode. Unicode problems will now raise an exception. than the previous distribution. Changed license text. *UPDATE* listquote Version 1.1.0 Added handling for non-string elements in elem_quote (optional). Replaced some old += with lists and ''.join() for speed improvements... Using basestring and hasattr('__getitem__') tests instead of isinstance(list) and str in a couple of places. Changed license text. Made the tests useful. *NEW* http_test.py Version 1.2.0 Now at (was previously just online in the Python Cookbook) A CGI script that will fetch a URL and display all server headers, saved cookies, the first part of the content (as raw HTML) *and* all the CGI environment variables. Some authentication code as well. Very useful for debugging in various situation or as an illustration of several aspects of basic CGI. # Copyright Michael Foord # Free to use, modify and relicense. # No warranty express or implied for the accuracy, fitness to purpose or otherwise for this code.... # Use at your own risk !!! # E-mail michael AT foord DOT me DOT uk # Maintained at -- | https://mail.python.org/pipermail/python-announce-list/2004-August/003313.html | CC-MAIN-2016-50 | refinedweb | 536 | 61.83 |
CodePlexProject Hosting for Open Source Software
First of all, I have read this post and did not find the answer for my problem.
I am not sure if this is an aggregated Model class or an aggregated ViewModel class, but this is what I have:
In my WPF (with Prism) application, I have a view 'Filter Customers View' that connects to a service and requests a list of 'Customer' objects, based on a filter.
The list that is returned from the service is this :
List<CustomerDTO> FilteredCustomers;
And the CustomerDTO looks like this:
public class CustomerDTO
{
public Guid CustomerId;
public String Name;
public String Address;
public String PhoneNumber;
public OrderInfoDTO LastOrderInformation;
public List<OtherClass> ListOfSomething;
}
And the OrderInfoClass looks like this:
public class OrderInfoDTO
{
public Guid OrderId;
public DateTime OrderDate;
public int NumberOfProducts;
public double TotalAmountSpent;
}
And the OtherClass looks like this:
public class OtherClass
{
public Guid Id;
public String SomeText;
}
As you can see - the customer might or might not have a 'Last Order',
I would like to wrap the 'CustomerDTO' object in a ViewModel, so that I can bind it to the view.
This is what I thought of doing :
public class CustomerViewModel : NotificationObject
{
private CustomerDTO _customerDTO;
public CustomerViewModel(CustomerDTO customerDTO)
{
_customerDTO = customerDTO;
}
public Guid CustomerId
{
get { return _customerDTO.CustomerId; }
set { _customerDTO.CustomerId = value; RaisePropertyChanged("CustomerId "); }
}
public String Name
{
get { return _customerDTO.Name; }
set { _customerDTO.Name = value; RaisePropertyChanged("Name"); }
}
public String Address
{
get { return _customerDTO.Address; }
set { _customerDTO.Address = value; RaisePropertyChanged("Address"); }
}
public String PhoneNumber
{
get { return _customerDTO.PhoneNumber; }
set { _customerDTO.PhoneNumber= value; RaisePropertyChanged("PhoneNumber"); }
}
}
Questions:
Hi,
I believe that most of those questions do not have a "correct" or "wrong" answer, as they seem to be more related to designing decisions in your application. Hence, which approach you should follow will depend mostly on your personal preferences and the
requirements of your scenario.
However, I will try to answer them providing my personal opinion:
Based on my understanding, a model could be as simple as a representation of data used in your application or as complex as a part of the data access layer in charge of the accessing the corresponding data. Therefore, I believe that all of the three classes
can be considered Models (or at least, primitive types of models.) Also, in my opinion, you should wrap the attributes of your models inside properties, as attributes should not be accessed directly from outside of an object.
As far as I know, there is no limitation about using only one model per view model. Therefore, I believe your
CustomerViewModel could also expose the OrderInfoDTO
model or encapsulate it by exposing its data through other properties. Also, creating a separate
OrderInfoViewModel for it is a valid approach. What is more, you could even divide the
CustomerView in different and more modular views, creating a
OrderInfoView for the OrderInfoViewModel. The same goes for the
OtherClass model: for each OtherClass model in the collection, you could have an associated view and view model. As you can see, this is a designing decision. You can implement either a single view / view model or three separated
ones.
In my opinion, if you only want to show the OtherClass collection of the
CustomerDTO, the simpler approach is to simply expose them through a collection in the
CustomerViewModel and consume the collection in the view. As you are not performing any presentation of business logic on the data contained in the
OtherClass models, just showing it, it might no be required to have another view model for it, besides the
CustomerViewModel. However, if you wish to edit OtherClass
models, then it would be required to have a view model to handle these operations. The same goes for the
OrderInfoDTO models.
In case you decide to use separated views / view models for each model, the challenge you will need to address is how the view models interact and shared data. This again, depends of your personal preferences and the requirements of your scenario. You could
have the CustomerViewModel in charge of creating the other view models or you could communicate them in a more loosely coupled fashion using one of the
communication approaches provided in Prism.
Again, in my opinion, this is more related to designing decisions rather that "correct" or "incorrect" approaches.
I hope you find this useful,
Damian Cherubini
Are you sure you want to delete this post? You will not be able to recover it later.
Are you sure you want to delete this thread? You will not be able to recover it later. | https://compositewpf.codeplex.com/discussions/389946 | CC-MAIN-2016-50 | refinedweb | 753 | 50.06 |
Where Things Stand
Our goal is a markup-based means of delivering alternate image sources based on device capabilities, to prevent wasted bandwidth and optimize display for both screen and print. From an accessibility standpoint, it should at the very least have something equivalent to img's alt attribute. For non-supporting browsers, fallback content should be displayed.
This Community Group is my effort to bring a growing number of developers together with some like-minded browser and standards folk, so we can iron out any issues we may not have considered and keep things moving forward.
The idea is to use the video tag's markup pattern as the inspiration, as it's specced to allow the use of media queries within attributes on its source elements and reliably displays the markup inside the tag in any browser that doesn't recognize it.
You can see part of our brainstorming process out in the open here.
I’ve also published the most viable-seeming markup pattern at A List Apart.
That markup pattern is something to the tune of:
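A hedged sketch of that pattern, borrowing video's source-selection markup; the tag name, filenames, and breakpoints here reflect the proposal rather than anything browsers currently implement:

```html
<picture alt="Description of the image content.">
  <!-- Smallest source acts as the default -->
  <source src="small.jpg" />
  <!-- Larger sources take over as their media queries match -->
  <source src="medium.jpg" media="(min-width: 400px)" />
  <source src="large.jpg" media="(min-width: 800px)" />
  <!-- Fallback for browsers that don't recognize the element -->
  <img src="small.jpg" alt="Description of the image content." />
</picture>
```

Browsers without support would simply render the inner img, mirroring video's fallback behavior.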
It turns out we reached much the same conclusion as many others have previously—in some cases, right down to the semantics.
We posted all this information to the WHATWG mailing list recently, but ended up covering a lot of the same “what if we”/“how about” ground. While a little disjointed, you can mine through said thread by doing a search for “responsive images”.
I’m looking forward to some focused, productive discussion here. Let’s get this thing done.
Rereading the thread has me more encouraged than I felt before. There were two conversations going on about images on the WhatWG list at the same time.
One was about having the client send back headers. There was a lot of pushback on that discussion.
The discussion about a new element seemed to primarily revolve around different ideas on implementation. I didn’t take the time tonight to reread every single comment in the thread, but I looked at quite a few of them and didn’t bump into anyone pushing back on the idea in general.
Mat, is that consistent with your sense of how the conversation about a new element went?
Great intro, thanks Mat.
Regarding the markup pattern: In order to support full media query syntax, I think the media attributes above need to be more like
media="(min-width: 500px)", basically, just adding some parentheses. They could just as well be more specific, like
media="screen and (min-width: 500px) and (orientation: landscape)".
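Applied to a source element, the fuller syntax might read something like this (values purely illustrative):

```html
<source src="wide.jpg" media="screen and (min-width: 500px) and (orientation: landscape)" />
```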
Seem accurate?
Scott
Can I ask why we are ruling out servers, and why we are assuming CSS media queries are the right way to go from the offset?
If it’s simply because this group is only interested in client-side solutions thats fine (I’m well up for that!), but if the group is about finding strategies to adapt content then I think it is entirely valid to talk about server adaption too. There is a place for both – where is the focus for the group?
For me Responsive Design is about the following:
* Respond to the needs of the user.
* Present information as clearly as possible.
* Be as efficient as possible.
I believe that any image solution needs to respect these principles, as responsive images are merely an extension of this mindset.
There are problems with our technologies; what are we responding *to*? People often say screen size, but this is very often simply a proxy for other things we would rather be responding to but can’t: bandwidth, usage metering, memory size, device speed, etc.
I propose that any solution should define what we’re responding to as well as a mechanism for responding.
I also propose that any solution must:
* Be easy to achieve, or it won’t get used.
* Obey progressive enhancement methodologies.
* Work with the eco system of the web.
I’d also like to point out that alternate image resources need not be simple re-sized versions of the same resource. It is appropriate to substitute different images with the same semantic meaning at certain times. E.g., a profile page may have a photo of a person in it. At large display sizes it’s OK to have a full body shot with the person in a setting. At small sizes the important details may be lost, and it makes sense to switch to a head-shot.
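Using the markup pattern under discussion, that sort of substitution might look something like this; filenames are hypothetical, and the two sources are semantically different shots rather than resizes of one image:

```html
<picture alt="Jane Doe, lead developer">
  <!-- Small displays: a tight head-shot keeps the face legible -->
  <source src="jane-headshot.jpg" />
  <!-- Larger displays: the full-body shot in a setting -->
  <source src="jane-full.jpg" media="(min-width: 600px)" />
  <img src="jane-headshot.jpg" alt="Jane Doe, lead developer" />
</picture>
```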
(PS, sorry for the slightly stilted comment – this is all nicked from a presentation I’m writing for StandardsNext in Manchester next month)
Jason,
That was my feeling as well with the WhatWG list. The discussion about HTTP headers really took off! And the arguments made in the push-back confirmed my belief that a client-side solution is the way forward.
Just after the holidays, I was trying to ping Mat over twitter about the necessity of explaining the “why we got here” for <picture> but Twitter is crap for starting lengthy discussions. Here is a much better forum!
The Drupal community has recently created “Initiative Leaders” that organize discussions around large architectural changes to the codebase. The Leads have usually had public discussions and then come to a consensus-based solution. The problem we’ve repeatedly faced is when presenting “the solution”. If we present the solution without also presenting all the steps for how that decision was arrived, we always end up repeating a lot of the same arguments (but with a much larger audience and in a less productive way).
The responsive image problem faces the same issue.
The giant tangential threads we saw in the WhatWG email list were because, while we did link to that etherpad, no one read the background. We need to present the background right along with the solution or we’ll end up repeating the same arguments ad nauseum.
🙂
edit: Looks like Matt posted before I had a chance to submit and proved my point. People just don’t follow the background material links and read the lengthy material. It’s human nature.
We’ll need a wiki-like page for a document describing various proposed solutions and their pros and cons. Again. 🙂
– John
But to respond to Matt, I agree that we should consider any solution. I’m personally convinced client-side is the way to go, but we need to convince everyone here and then everyone everywhere that a specific client-side way is _the_ way. And the only way we’ll get there is by documenting all the wrong turns.
I’d like to partially frame the problem this way…
Designers want the editorial ability to have different “displays” of a single image resource, depending on the context that resource is presented in. As such, they’ll need to create rules for when each display is used for a resource. Those rules can either be put into a server-side or client-side mechanism. (There’s a further background discussion to prove this is true, but I’ll save it for later: explaining why “waiting for everyone to get faster connections” isn’t a solution.)
How the rules are evaluated is based solely on the context. And all of that context is firmly on the client side of things. It’s the browser that knows what the hell is going on.
Can we have the browser send context to the server? Sure. But we can’t send all of the possible context. If we limit the context to a specific set of pre-defined criteria, we limit our future ability to have different kinds of rulesets. There was also a lengthy discussion on the WhatWG list about how adding weight to the HTTP header is counter-productive, since it adds a performance penalty for every single request. It’s only a small set of HTTP resources that would need the context. GET /favicon.ico, GET /scripts/jquery.js, GET /README.txt, etc., etc. ALL of those would be penalized. It’s simply not a good trade-off.
Could we negotiate the context sent? The negotiation would run straight into our “first request” problem (we’d waste time with back-and-forth HTTP requests).
So do we send all the context from the browser to the server? Or, instead, do we send all the rules from the server to the browser?
I think the latter wins.
@ John – I hope I didn’t prove that point, considering I’ve read a lot of that stuff. My post above was to clarify what the intention of this group is:
a) to provide a mark-up solution
b) to look into *all* solutions, of which mark-up may be a part.
That’s why I asked that explicitly.
I think client side is generally the preferable approach and would strongly back a solution like the one suggested. But I also feel there is a need for server side responsiveness too. I’d rather see *both* than just one, but if I was to pick just one then it would be the client side solution.
I’ve already discussed mechanisms to make client/server header communication efficient in other places and argued that the “every request” argument is poor, especially given SPDY rather than HTTP. For clarity my thoughts were these:
1) Client requests as normal
2) Server responds with normal content but accompanied by a feature request header [request-bandwidth|request-screen-dimensions|request-on-jpg]
3) Non-conformant browser does nothing with that. No impact has been had.
4) Conformant browser sends future requests on the domain with matching file-types along with requested headers.
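Sketched as an annotated exchange, under the assumption that something like this gets specced; the header names are invented for illustration, not registered HTTP headers:

```http
# 1) First request goes out as normal
GET /index.html HTTP/1.1
Host: example.com

# 2) Response carries a feature-request header; a non-conformant
#    browser ignores it (step 3), at no cost to anyone
HTTP/1.1 200 OK
X-Feature-Request: bandwidth, screen-dimensions

# 4) A conformant browser decorates later matching requests
GET /images/hero.jpg HTTP/1.1
Host: example.com
X-Bandwidth: 1500
X-Screen-Dimensions: 320x480
```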
SPDY can also push content that hasn’t been requested, so in the event the first request got a sub-optimal response the server could re-send with a better one. There would be no more than one potential additional outgoing request per *website*. SPDY also Gzips headers, so there’s not much overhead.
My apologies! I felt like the WhatWG discussion kind of put an end to the idea of header-based negotiations. But your proposal sounds interesting. I think I may have missed it when you posted that in the WhatWG list. But I didn’t read all the posts in that discussion, so maybe I proved my own point! 😀
I’m gonna write this up in a more detailed post, but I really want to keep this group laser-focused on the markup-based solution. Don’t get me wrong—I’d love to have detailed gadget-describing headers available. I think we’d all rather have more device information readily available than not.
I’m just really uneasy about treating headers as a solution to this problem, since there’s a big gap between “having the user’s screen size” and an implementation—and that’s assuming the information contained in the headers is always correct. My concern is that it would basically put the onus on developers to implement their own versions of the element we’re proposing—which not only makes implementation more difficult for the developer, but varying quality implementations of “responsive images” are gonna impact the user in terms of wasted bandwidth or—at worst—completely broken images.
I do think the header conversation is worth having, and I’d love to be involved in it. For now, though, I want to keep our focus on the front-end solution.
Totally agree, I would not want to push server-side stuff as “the” solution, and I’m happy with this group being focussed on the mark-up side of things. I just wanted to make sure that’s what the idea is, rather than anything more holistic, before I went off on any tangents! 🙂
I’m convinced that for simple situations (I want to send a low-res image for very small device width, and a hi-res, wide picture for widescreen TVs) that we need a declarative, client side solution.
Given that we have that, combining video element with CSS MQs, we need something equally simple for images.
I’m delighted if we have another mechanism based on content negotiation, more headers or robust server-side detects for those developers who are comfortable with server-side work, and who even have access to their servers. But that’s a discussion for another day.
The urgent need – and low-hanging fruit – is a declarative, mark-up based solution.
I’d agree with all of that, including the lifting of CSS-MQ syntax.
That said, a concern I have at the moment is to avoid repetitive declaration of identical trigger points in multiple instances of an element. That’s potentially a lot of identical CSS-MQ statements cropping up which is a waste of time, bandwidth, and bad for maintenance (re-designs may call for different sized *content* images) 🙂
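For reference, a sketch of the markup being discussed, which also shows the duplication concern: the same (hypothetical) 600px breakpoint has to be restated in every instance.

```html
<picture alt="A mountain range">
  <source src="mountain-small.jpg">
  <source src="mountain-large.jpg" media="(min-width: 600px)">
  <!-- Fallback for non-conformant browsers -->
  <img src="mountain-small.jpg" alt="A mountain range">
</picture>

<picture alt="A river valley">
  <source src="valley-small.jpg">
  <source src="valley-large.jpg" media="(min-width: 600px)">
  <img src="valley-small.jpg" alt="A river valley">
</picture>
```

A redesign that moves the breakpoint means editing every one of these by hand.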
element. First of all I really like it being similar to
tag. But all of the solutions have some different issues. Wouldn’t it be the right thing to do — to review those potential solutions with
tag and make a decision whether we can solve those OR live with those being present OR they are a real blockers for
solution. If we come to an agreement that
can not be used for the responsive images, I am fine with having two different tags of course. But then, even if it still bugs me, we will have good arguments to explain our decision.
Hmm… all my tag references in the comment got stripped out. Should I re-post I wonder? Or the idea is clear?
D’oh, <img>… this form needs a preview
I’ll better re-post my comment. Sorry for the inconvinience…
——————– Re-post: regarding the <picture> element — first of all, I really like it being similar to <video>. But all of the proposed solutions have some different issues, including those built around the <img> tag. I would suggest that the first thing we do is review those potential solutions with the <img> tag and make a decision: can we solve those issues, live with them being present, or are they real blockers for an <img> solution? If we come to an agreement that <img> can not be used for responsive images, I am fine with having two different tags of course. But then, even if it still bugs me, we will have good arguments to explain our decision.
What about re-purposing <picture> and calling it <media> ? I’d quite like to have seen a blanket element like this that could handle images, video, audio, and anything else media related. All of which may end up wanting to be responsive and all of which may want different source.
Is it too late to do something like that, with a media-type flag to indicate whether it’s audio, video, etc.?
IMG is a necessary fallback no matter what we end up with, be that a new tag or attributes on an img.
Hah, scrapping video and audio in favor of a combined element is a big ask—I think we’re better off sticking with solving the single problem, for now.
I figure: the tighter our focus, the better our chances of this thing seeing the light of day, yeah?
Yeah, that idea was never likely, but I figure we may as well air everything. I think if we could re-start on HTML there would be quite a bit different 🙂
FWIW I think the current syntax is pretty much the best we’re going to be able to get in a realistic world. That doesn’t make it ideal conceptually though.
Hi Matt,
I don’t really think that an “umbrella” tag like <media> solves my concern of having two different tags for the same thing — an image. But I do agree that it’s better than multiple tags for different media — this way we can have unified and predictable behavior for all of them, which is an advantage.
I realize that IMG is a necessary fallback of course. But ideally, I would like to have one IMG tag that can be either responsive or not. It’s ONE tag because no matter whether an image is responsive or not, it is still an image and in most cases, it’s the same image but in different resolutions.
We spent a great deal of time looking for ways to work with/around the img tag (and its alias in many browsers, “image”)—unfortunately, it was established early on that it was nigh impossible to modify the behavior of img to “look ahead” for multiple sources, and that any modifications we made stood to break things in terms of backwards-compatibility. A lot of that history is documented at
Thanks Mathew. This is enough for me to ditch the hope of re-using IMG for the responsive images. Now, if possible, I would like us to formulate the grounded answer why a new PICTURE (or any other element we can come up with) element is better than the existing solutions. Please read my response to Nicolas.
Hi Denys, yeah, someone proposing a modification of img is what triggered this etherpad.
But changes to img parsing are not going to happen and wouldn’t be backwards compatible. I asked Hixie about it directly and he said as much.
The picture idea would only require one new element to be forged, because source already exists for video and audio.
Hi Nicolas,
Yeah, that’s the etherpad I was telling about. That’s good to know that changes to the IMG parsing have been discussed in the WG and there is a response. So to summarize this — there have been discussions in a wider circle about this. And re-purposing IMG is not an option due to compatibility issues and reasons mentioned in the etherpad. Fair enough. Now to the new potential solution being PICTURE.
I wonder what makes PICTURE better comparing to, for example NOSCRIPT solutions like Head’s one or the one I proposed as an extension of it? Please don’t get me wrong, I am not opposed to PICTURE at all. But if we are about to introduce a new element I would like to know what are the reasons for that and what makes it better than the existing solutions. Is it just the similarity with VIDEO/AUDIO and contemporary markup (due to use of SOURCE)? I would appreciate if anybody could explain the benefits or just point me to any reference with such explanation.
Ok, gave it another thought (need to take breaks from the computer from time to time :)). The reason why we want a new element is obvious, I suppose — it would let us have adaptive images without JavaScript (in capable browsers, that is). Scratch that silly question. Had to figure that out myself 🙂
What are the implications of the Network Information API with regard to our <picture> element?
Can we leverage the Network API through HTML, or CSS? We’re already stealing media-queries from CSS to get <picture> functional.
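To make the question concrete, here is a sketch of what a Network-API-driven choice could look like. The connection object is shaped like the experimental `navigator.connection` draft, but the `bandwidth` field, the units, and the 0.5 threshold are all assumptions on my part, not spec:

```javascript
// Sketch: choosing an image source from a connection object shaped
// like the (experimental) Network Information API draft. Passing the
// connection in as a parameter keeps the decision logic testable.
function chooseSource(connection, sources) {
  // sources: { low, high }
  if (!connection || typeof connection.bandwidth !== "number") {
    return sources.low; // unknown network: be conservative
  }
  if (connection.metered) return sources.low; // user pays per byte
  // Early drafts reported bandwidth in MB/s, with Infinity meaning unknown.
  if (connection.bandwidth === Infinity || connection.bandwidth >= 0.5) {
    return sources.high;
  }
  return sources.low;
}
```

Even if the API landed, this shows the gap mentioned earlier in the thread: every developer would have to pick their own thresholds.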
It was based on considerably different use-cases, but as a result I suggested a feature quite similar to responsive images.
I feel the need to examine relations of CSS media query and SVG.
Pingback: The slow elephant in the responsive images room | Bram.us
Hey everyone,
I’m going to attempt to share some holistic, self-contradicting ideas I’ve been kicking about this problem that I haven’t seen suggested or mentioned yet (though there may be a reason for that).
Warning: entering the wavy-hand zone.
1. img is content, so having it in markup makes sense. However, the size of an img is presentational. By design, this is most likely a problem that CSS needs to solve, though I’m going to use this post to propose markup-based solutions. CSS solutions I’ll save for another post. Also, as much as I love JS, I have a very hard time thinking it needs to solve this problem. We’re better off implementing a solution natively in the long run.
2. I also don’t think adding picture to the spec is going to add much other than confusion. Having img and picture? The differences could only be minimal at best, and I sincerely doubt the tradeoff would be worth the mud it adds to the water. Unless there’s a very compelling and clear use case, I think we can do better.
3. If the img tag is locked down according to the powers that be, perhaps we can invent more semantic tags to handle this problem, while still keeping the img namespace.
Examples (100% off of the top of my head):
mobileimg
tabletimg
desktopimg
tvimg
hdimage
In these cases, the tags themselves have media-query-like range defaults in CSS. For example, mobileimg defaults to a 240px minimum up to 480px, and tabletimg defaults to 481px up to 768px, etc. This still needs to take landscape orientations into account and other such niceties, but I’m just spit-balling here.
These can be overridden in CSS, just like any other set of properties, obviously.
4. I also think the foundation of whatever solution we end up proposing should establish the mantra of approaching images mobile-first.
5. Another possible approach could be to standardize a file naming convention like in iOS. i.e. ‘icon@2X.png’, but instead of focusing on pixel density, focusing on dimension. So, perhaps something like banner@mobile.png, banner@tablet.png, etc.
6. The topic of bandwidth keeps coming up, but addressing it is tricky at best. When considering data like “the majority of mobile usage happens at home”, the Rubik’s cube starts to grow sides.
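Point 5 above — a standardized naming convention — can be sketched as a tiny helper. The context names (`mobile`, `tablet`) are illustrative, not a proposed standard:

```javascript
// Sketch of point 5: deriving variant filenames from a naming
// convention like banner@mobile.png. Context names are illustrative.
function variantName(src, context) {
  const dot = src.lastIndexOf(".");
  if (dot === -1) return src + "@" + context; // no extension: append
  return src.slice(0, dot) + "@" + context + src.slice(dot);
}
```

The appeal of this approach is that authors would only ever write the base name, and the convention does the rest.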
I realize that the lines of what exactly counts as mobile, tablet, and HD can be quite blurry, but this post is really meant to propose some ideas outside of the conversations I’ve seen. It’s entirely possible that we don’t use any specific device tag like mobile, and decide on something a little more familiar, like ‘fleximg’, with multiple source attribute/value pairs. We already have flexbox, and this is similar enough to keep the mental model in check, and express semantics at the same time.
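A hypothetical sketch of what a ‘fleximg’ with multiple source attribute/value pairs might look like — the element and every attribute below are invented for illustration, none of this is proposed syntax:

```html
<!-- Hypothetical only: neither <fleximg> nor these attributes exist. -->
<fleximg src="store-small.jpg"
         src-medium="store-medium.jpg; (min-width: 481px)"
         src-large="store-large.jpg; (min-width: 1025px)"
         alt="Storefront">
</fleximg>
```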
Yeah, I like this better, forget all my other ideas. 🙂
I also wonder if augmenting the scheme property of the meta tag could be helpful somehow.
We already use the meta tag for view options on mobile.
<meta name="viewport" content="width=device-width, target-densitydpi=160dpi, initial-scale=1.0">
Indeed, you hit a valid point with no. 1 for 90 percent of images. But sometimes I would like to change an image at a media-query breakpoint, and this will happen more and more as the number of devices grows. In that case we have different content, which should be changed in HTML itself, not in CSS.
This is the major point for the new syntax in HTML. For all other presentational things you are totally right.
3.) Nope, that isn’t a good idea, because every time a new device comes up we’d need another new tag. It should be done with a media attribute on a single universal tag.
5.) Same problem as in 3.)
fleximg.) This might be an interesting approach, but you’re mixing up HTML and CSS now. It’s a good idea nonetheless.
6.) All I can say about this is “we don’t know and will never know how our user is viewing the site and under what circumstances”. If we always have this in mind we’ll end up with a pretty good solution I think.
Thanks Anselm,
Definitely valid arguments there. So, scratch the device-specific tags. About ‘fleximg’, my purpose here is two-fold. 1) ‘picture’ is too generic and too close to ‘img’, and trying to define it will reinforce that. It also adds no real semantics regarding the fact that the image will somehow be responsive or have multiple sources. 2) ‘fleximg’ solves this and also creates a single, new, unified tag to work with, and it adheres to a precedent that was set with ‘flexbox’ in CSS. This same namespace could also add ‘flexvideo’ and ‘flexaudio’ in the future. How does everyone feel about this? I can’t be the only one thinking ‘picture’ is not the solution.
No, you aren’t the only one. I agree with your last arguments. I certainly like the fleximg idea.
So, Apple is updating their site to be compatible with the new iPad’s Retina Display, employing a progressive download technique in JS.
Also see for resolution tests.
Thanks to Jason Grigsby, Scott Jehl, Brad Frost, and Jim Newbury for the links.
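For illustration, here is one way such a progressive technique can be structured — a sketch in the spirit of what’s described, NOT Apple’s actual code: ship the standard asset first, then compute a higher-density name and swap it in when the display warrants it.

```javascript
// Derive a high-density filename using the iOS-style "@2x" convention.
function retinaName(src) {
  return src.replace(/(\.[a-z]+)$/i, "@2x$1");
}

// Decide which asset an <img>-like object should end up showing.
// In a real browser you'd preload via `new Image()` and swap img.src
// in its onload handler; here we just return the name the swap would use.
function upgradeIfRetina(img, devicePixelRatio) {
  if (devicePixelRatio >= 2) {
    return retinaName(img.src);
  }
  return img.src;
}
```

The downside, of course, is the one this thread keeps circling: the standard-resolution bytes are downloaded first and then thrown away.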
Pingback: WTFWG | TimKadlec.com
Pingback: Definición de imágenes responsive como atributo de img o nuevo elemento picture? | Desarrollo Web Avanzado en Guatemala
Pingback: HTML5 responsive images spat explodes » b.c.s.
Pingback: » The real conflict behind <picture> and @srcset Cloud Four Blog
Pingback: Picture 元素的故事 (The Story of the Picture Element) – 陈三