text stringlengths 20 1.01M | url stringlengths 14 1.25k | dump stringlengths 9 15 ⌀ | lang stringclasses 4
values | source stringclasses 4
values |
|---|---|---|---|---|
Hi, sosomething like this:diff --git a/drivers/mmc/host/dw_mmc.c b/drivers/mmc/host/dw_mmc.cindex 7de6b42..526b5cb 100644--- a/drivers/mmc/host/dw_mmc.c+++ b/drivers/mmc/host/dw_mmc.c@@ -988,10 +988,11 @@ static void dw_mci_pull_data32(struct dw_mci *host, void *buf, int cnt) *pData++ = mci_readl(host, DATA); cnt--; } } +#if BITS_PER_LONG >= 64 static void dw_mci_push_data64(struct dw_mci *host, void *buf, int cnt) { u64 *pData = (u64 *)buf; WARN_ON(cnt % 8 != 0);@@ -1013,10 +1014,11 @@ static void dw_mci_pull_data64(struct dw_mci *host, void *buf, int cnt) while (cnt > 0) { *pData++ = mci_readq(host, DATA); cnt--; } }+#endif static void dw_mci_read_data_pio(struct dw_mci *host) { struct scatterlist *sg = host->sg; void *buf = sg_virt(sg);@@ -1591,15 +1593,17 @@ static int dw_mci_probe(struct platform_device *pdev) if (!i) { host->push_data = dw_mci_push_data16; host->pull_data = dw_mci_pull_data16; width = 16; host->data_shift = 1;+#if BITS_PER_LONG >= 64 } else if (i == 2) { host->push_data = dw_mci_push_data64; host->pull_data = dw_mci_pull_data64; width = 64; host->data_shift = 3;+#endif } else { /* Check for a reserved value, and warn if it is */ WARN((i != 1), "HCON reports a reserved host data width!\n" "Defaulting to 32-bit access.\n");This is only useful if you just want the driver to compile (it compileson ARM after the above) and don't expect a working device if you findthe HCON programmed with 64-bit width on an ARM board, though.Thanks,-- | http://lkml.org/lkml/2010/12/12/73 | CC-MAIN-2014-52 | en | refinedweb |
14 December 2010 19:53 [Source: ICIS news]
WASHINGTON (ICIS)--The US Federal Reserve Board on Tuesday said it would continue efforts to stimulate the nation’s economy and decided to hold its key federal funds interest rate at 0-0.25% - a record low level that has now been in place for 24 months.
The Fed said that while the ?xml:namespace>
In its statement about the
While business spending on equipment and software has been rising, that key sector has been expanding less rapidly than earlier in the year, the Fed said.
The Fed governors and top economists also noted that investment in non-residential structures - factories, office buildings, shopping malls - continues to be weak.
“Employers remain reluctant to add to payrolls,” the committee said, adding that “The housing sector continues to be depressed”.
The Fed said that it expects a gradual return to higher levels of employment, resource utilisation and growth but that “progress toward [these] objectives has been disappointingly slow”.
In hopes of speeding and strengthening the recovery, the Fed said it would continue its policy, announced last month, of buying US Treasury securities. The central bank renewed its commitment to purchase up to $600bn (€450bn) of Treasury securities by the end of the second quarter next year, acquiring the investments at a pace of about $75bn per month.
That securities-purchasing plan is designed to put more money into circulation and help lower long-term interest rates to spur more business borrowing and capital spending.
In announcing that it would keep the federal funds interest rate at 0% to 0.25%, the Fed said that it “continues to anticipate that economic conditions … are likely to warrant exceptionally low levels for the federal funds rate for an extended period”.
That suggests that the Fed expects to maintain the record-low interest rate well into 2011.( | http://www.icis.com/Articles/2010/12/14/9419551/us-fed-holds-interest-rate-at-record-low-continues-money-flow.html | CC-MAIN-2014-52 | en | refinedweb |
09 December 2010 11:44 [Source: ICIS news]
LONDON (ICIS)--Egyptian Petrochemicals Company has halted production at its caustic soda plant in ?xml:namespace>
The unplanned outage began on Monday 29 November and production was expected to resume on Friday, with normal operating rates restored by the end of the weekend.
Nameplate capacity of the plant is 400 tonnes/day of caustic soda.
Northern African caustic soda demand has been high since the beginning of the fourth quarter because of supply shortages in
Domestic demand was also high because hot temperatures in the region were driving demand in the major downstream soaps and detergents market, according to sources.
50% liquid spot prices were assessed by ICIS at $420-500/tonne (€315-375/tonne) FOB (free on board)
As of 1 October, northern African liquid spot prices were $300-370/tonne FOB
The top end of the range was representative of material exported from
Prices at the bottom end of the range were representative of exports from
($1 = €0.75) | http://www.icis.com/Articles/2010/12/09/9417948/egyptian-petrochemicals-company-shuts-alexandria-caustic-soda.html | CC-MAIN-2014-52 | en | refinedweb |
Search
Search Hi,
I have a project in which I am trying to enter "Marathi" (Indian local language) data in JSP using JSTL and trying to search data... and tries to search then It shows no data from database
STRUTS-display search results - Struts
STRUTS-display search results Hii, I am a beginner in struts..I want to retrieve few records from database and display in jsp page.First i tried... them in jsp.. Search Results Projects
ASAP.
These Struts Project will help you jump the hurdle of learning complex
Struts Technology.
Struts Project highlights:
Struts Project to make learning easy
Using Spring framework in your application
Project in STRUTS
Struts Hibernate Integration
string and presses search button.
Struts framework process the request... and you can download and
start working on it for your project or to learn Struts... and
integrate it with the Struts.
Writing Web Client to Search the database
saritha project - Struts
struts
struts shopping cart project in struts with oracle database connection shopping cart project in struts with oracle database connection
Have a look at the following link:
Struts Shopping Cart using MySQL
TYBsc IT final project (MOBILE BASED SMS SEARCH ENGIN)
TYBsc IT final project (MOBILE BASED SMS SEARCH ENGIN) How to send sms pc to mobile using JSP & Servelet
TYBsc IT final project (MOBILE BASED SMS SEARCH ENGIN)
TYBsc IT final project (MOBILE BASED SMS SEARCH ENGIN) How to send... completed. so i requested to you pls answer my question. My project name is MOBILE BASED SMS SEARCH EN
AJAX Search
AJAX Search I have to create a project where the user enters a letter in a text box and all the names starting with that letter are retrieved. I'm using PHP and MYSQL as the database. Can somebody please suggest me the AJAX
Need Project
Need Project How to develop School management project by using Struts Framework? Please give me suggestion and sample examples for my project
search functionality with in application
search functionality with in application I have created on web based web site in struts. i want to give user functionality for search any link... and can perform search with in application. i have heard about search engine .can
simple java search engine
simple java search engine i have already downloaded the project simple java search engine.but i am not able to run it.can anyone help me.i have.... This is needed for every one.
The project is simple search engine, which searches
search filter and JTable
search filter and JTable I first im not speak englis very well, so my question is:
how can i make search data in JTable of java?
i wan to search... java.sql.*;
import java.awt.event.*;
import javax.swing.*;
class Search{
public
Binary Search Tree
Binary Search Tree Question-1 )
Modify the BinarySearchTree class... are serializable.Test your class with a project that serializable a BinarySearchTree object, and another project that deserializes and prints that object
Binary Search Tree
Binary Search Tree Question-1 ) Modify the BinarySearchTree class so... are serializable.Test your class with a project that serializable a BinarySearchTree object, and another project that deserializes and prints that object
MINI PROJECT
MINI PROJECT CAN ANYONE POST A MINI PROJECT IN JAVA?
Hi...
You can have the following projects as per ur requirement. Free and easy... management in struts
search
search how to develop search box and how to retrive data from database..
Please visit the following link:
Search box
B+ trees search
B+ trees search Can anyone send the code for implementing the B+ trees searching on a oracle database?
please your answer will be useful for my project
SEARCH
SEARCH how can we do search in jsp...?
option for search criteria like name and DOB...
Please visit the following links:
struts - Struts
struts Hi,
i want to develop a struts application,iam using eclipse... you. hi,
to add jar files -
1. right click on your project.
2... functionality u want to use in your project. There is no standard list of jar files
Major Search Engines List
, Netscape Search and Lycos to name just a few. Open
Directory Project...
Major Search Engine Lists
Listing of 10 Top Search Engines
Form with a search filter in spring mvc?
Form with a search filter in spring mvc? Hi
I am new to Spring MVC, I have created a spring project which is displaying the list from database into my jsp page, Now in the same jsp page at the right hand corner I have a search
search - Java Server Faces Questions
search hello!
my project is document search system. Im am suppose to search keywords for my document by the scanning the document itself and the criteria is if any word appears more than ten times in the document i application - Struts
struts login application form code Hi, As i'm new to struts can anyone send me the coding for developing a login form application which involves a database search like checking user name in database
Submitting Web site to search engine
Registering Your Web Site
To Search Engines.... 85% of
Internet users find sites through search engines, so each search... to the
search engines and directories. Once you register your site
Android Application Project
Android Application Project Hi,
How to create Android Application Project with following features:
Select list of music station
Play the the selected station
Save station in the list
Search stations of interest based
Java Project - Development process
Java Project Hello Sir I want Java Project for Time Table of Buses.that helps to search specific Bus Time and also add ,Update,Delete the Bus Routes. Back End MS Access
Plz Give Me
hibernate web project - Hibernate
hibernate web project hi friends,very good morning,how to develop and execute web project using myeclipse ide.plz give me step-by-step procedure.../hibernate/runninge-xample.shtml
Search bar application
Search bar application
In this tutorial, will be creating a Search screen, which have a table view with a search bar. Table should display all the data if search field is empty other wise it should show all the data which matches
project on avl tree and hashing
project on avl tree and hashing I want to make graphical interface in java ,in this graphical interface there are seven buttons ,first button Read...,four button delete customer ,five button search customer,six button print data
Project on avl tree and hashing
Project on avl tree and hashing I want to make graphical interface in java ,in this graphical interface there are seven buttons ,first button Read... delete customer ,five button search customer,six button print data to file(in order
Struts 1 Tutorial and example programs
completing this tutorial you will be able to use Hibernate in your
Struts project... Struts project. We will be using Hibernate Struts
plug-in to write... to Search the database using Struts Hibernate Plugin
In this section module and source code on my project hospital management system
project
project does hostel management system project contain any dynamic web pages
project on avl tree and hashing
customer ,five button search customer,six button print data to file(in order
how to show search results in the same panel of jframe to where search field and button is present..
how to show search results in the same panel of jframe to where search field and button is present.. Hello Sir,
I am working on project where i have to show the search result in the same panel of where search field | http://www.roseindia.net/tutorialhelp/comment/19583 | CC-MAIN-2014-52 | en | refinedweb |
This topic includes the following sections:
For Microsoft .NET programmers, Tuxedo .NET Workstation Client is a facilitating tool that will help to efficiently develop Tuxedo .NET Workstation Client applications. Besides providing a set of Object Oriented (OO) interfaces to .NET programmers, this tool allows you to design and write code in OO styles.
For Tuxedo programmers, the Tuxedo .NET Workstation Client inherits most ATMI function invocation behavior which makes it easier to understand and use .NET Client classes to write applications. Because the Tuxedo .NET Workstation Client is published as a .NET assembly, it also leverages the benefit of .NET Framework. It can be used with many .NET programming languages (for example, C#, J#, VB .NET, and ASP.NET).
The Tuxedo .NET Workstation Client enables you to write Tuxedo client applications using .NET programming languages to access Tuxedo services. It also provides connectivity between .NET workstation applications and Tuxedo services.
The Tuxedo .NET Workstation Client contains the following components:
libwsclient.dll
This Microsoft .NET Framework
.dll assembly wraps Tuxedo ATMI and FML functions for developing Tuxedo .NET workstation clients.
viewcs,
viewcs32;
mkfldcs,
mkfldcs32; and
buildnetclient
These executable utilities help to develop C# code using Tuxedo VIEW/VIEW32 and FML/FML32 typed buffer and compile C# code to Tuxedo .NET Workstation Client executable assemblies. For more information, see viewcs, viewcs32(1), mkfldcs, mkfldcs32(1), buildnetclient(1)
.
These three samples explain how to create Tuxedo .NET Workstation Client application using C#. See Tuxedo .NET Workstation Client Samples.
The Tuxedo .NET Workstation Client has the following limitations:
tpxmltofml32(3)and
tpfmltoxml32(3)).
tx_open(),
tx_close(), and
tx_begin()), are not supported by Tuxedo .NET Workstation Client. Only Tuxedo TP transaction functions can be used.
For more Tuxedo .NET Client installation and Tuxedo install set information, see Installing the Oracle Tuxedo System.
To download Microsoft .NET Framework 2.0 and for more Microsoft .NET Framework information, see Microsoft’s .NET Developer Center.
The Tuxedo .NET Workstation Client works as an intermediate layer or wrapper between your .NET applications and underlying Tuxedo workstation shared libraries (
libwsc.dll,
libengine.dll, and
libfml.dll).
Figure 1 illustrates how the Tuxedo .NET Workstation client works.
The . NET assembly
libwscdnet.dll contains the wrapper API classes for Tuxedo .NET Workstation Client and implements a set of object-oriented-styled interfaces that wrap around Tuxedo ATMI functions and FML functions.
buildnetclient references
libwscdnet.dll in order to build Tuxedo workstation clients written in . NET programming languages. It targets the Common Language Runtime (CLR) environment, and is invoked by the assemblies (for example, client executables, libraries) depending on it at runtime.
The win32 shared library,
libwdi.dll, implements platform specific functions that
libwscdnet.dll uses.
The Tuxedo .NET Workstation Client requires Microsoft .NET Framework 2.0 SDK installation on your system. The Oracle Tuxedo installer program automatically checks to see if .NET Framework is installed or not. If installed,
libwscdnet.dll is automatically registered in the .NET Framework global assembly cache.
If .NET Framework is not installed, you must install it. You can download the .NET Framework from
Microsoft’s .NET Developer Center. After you have installed .NET Framework, manually registering
libwscdnet.dll in the global assembly cache is highly recommended.
To manually register/unregister
libwscdnet.dll in the global assembly cache, you must do the following steps:
libwscdnet.dllfrom the
%TUXDIR%\bindirectory. Click Open.
libwscdnet.dllis added to the Assembly Cache list.
libwscdnet.dlland click Delete.
You can also register/un-register
libwscdnet.dll from the command line.
To register enter:
%WINDIR%\Microsoft.NET\Framework\v2.0.4322\gacutil.exe /i %TUXDIR%\bin\libwscdnet.dll.
To unregister enter:
%WINDIR%\Microsoft.NET\Framework\v2.0.4322\gacutil.exe /u libwscdnet.dll.
Programmers developing Tuxedo .NET Workstation Client applications must:
The Tuxedo .NET Workstation Client provides development utilities that can aid programmers using Tuxedo FML/VIEW typed buffer and building .NET executable files. See Using FML/FML32 Typed Buffers and Using VIEW/VIEW32 Typed Buffers.
Main changes in Tuxedo .NET Workstation Client interface (compared to Tuxedo ATMI and FML C functions), are as follows:
Class AppContextis used to organize almost all ATMI C functions.
Class Transaction, (for example,
tpbegin(),
tpcommit(), and so on).
TypedBufferand its derived classes. See Using Typed Buffers.
Tuxedo .NET Workstation Client namespaces are divided into two categories. The first category includes two namespaces,
Bea.Tuxedo.ATMI and
Bea.Tuxedo.FML that bundles ATMI and FML wrapper classes.
The second category uses the
Bea.Tuxedo.Autogen namespace to bundle all auto-generated .NET classes using .NET Client utilities.
These namespaces include all the classes and structures related to the functions listed in the Tuxedo .NET Workstation Client API Reference.
The
AppContext class is a key class used to perform Tuxedo service access functions.
AppContext leverages the OO programming style in a multi-contexted client application.
Most Tuxedo ATMI C functions (for example,
tpcall(), and
tpnotify()), are defined as AppContext class methods. Creating an
AppContext class instance is a key component in connecting to a Tuxedo domain and call services provided by that Tuxedo domain.
In a multi-contexted application written in C or COBOL, programmers typically have to switch between different Tuxedo context using two ATMI functions,
tpgetctxt() and
tpsetctxt(). This is not required using the Tuxedo .NET Workstation Client. Creating a class
AppContext instance also creates specific Tuxedo context instance.
Operations on a particular
AppContext will not impact other
AppContext instances. You can develop multi-context applications and easily switch between them.
To create a Tuxedo context instance you need to invoke the static class method,
AppContext.tpinit(TPINIT), instead of the class constructor.
AppContextclass instances without terminating the Tuxedo context session.
The following sample shows how to connect to a single context Tuxedo domain.
……
TypedTPINIT tpinfo = new TypedTPINIT();
AppContext ctx1 = AppContext.tpinit(tpinfo); // connect to Tuxedo domain
……
ctx1.tpterm(); // disconnect from Tuxedo domain
The following sample shows how to connect to a multi-context Tuxedo domain .
……
TypedTPINIT tpinfo = new TypedTPINIT();
tpinfo.flags = TypedTPINIT.TPMULTICONTEXTS; // set multi context flag
// connect to the first Tuxedo domain
AppContext ctx1 = AppContext.tpinit(tpinfo);
Utils.tuxputenv("WSNADDR=//10.2.0.5:1001");
// connect to the second Tuxedo domain
AppContext ctx2 = AppContext.tpinit(tpinfo);
……
ctx1.tpterm(); // disconnect from the first Tuxedo domain
ctx2.tpterm(); // disconnect from the second Tuxedo domain
The Tuxedo .NET Workstation Client supports the following built-in Oracle Tuxedo buffer types:
FML,
FML32,
VIEW,
VIEW32,
CARRAY, and
STRING.
Figure 2 provides an illustration of the Tuxedo .NET Workstation Client typed buffer class hierarchy.
The Tuxedo .NET Workstation Client class
TypedBuffer is the base class of all concrete Tuxedo buffer types and provides some low level functions to all derived classes. Class
TypedBuffer is an abstract class and cannot be used to create instances.
The Tuxedo .NET Workstation Client uses class
TypedString to define
STRING typed buffer characters.
TypedString instances can be used directly to communicate with
AppContext methods such as
tpcall(). See the following example.
……
TypedString snd_str = new TypedString ("Hello World");
TypedString rcv_str = new TypedString(1000);
AppContext ctx = AppContext.tpinit(null);
……
ctx.tpcall("TOUPPER", snd_str, rcv_str, 0);
……
ctx.tpterm();
……
The Tuxedo .NET Workstation Client uses class
TypedFML/
TypedFML32 to define most FML C functions. You should do the following steps to develop Tuxedo .NET applications using FML typed buffers:
Compile field table files into C# source files using the
mkfldcs Tuxedo .NET Workstation Client utility. The generated C# files contain public classes including definitions of every FML field ID defined in the field table files. See also
mkfldcs(1)
Use
TypeFML class methods to create and access FML data.
For more FML typed buffer programming information, see Programming a Tuxedo ATMI Application Using FML.
using Bea.Tuxedo.FML;
namespace Bea.Tuxedo.Autogen {
public class fnext_flds {
public static readonly FLDID F_short = 110; // number: 110 type: short
public static readonly FLDID F_view32 = 369098863; // number: 111 type: view32
public static readonly FLDID F_double = 134217840; // number: 112 type: double
public static readonly FLDID F_ptr = 301990001; // number: 113 type: ptr
}
} // namespace Bea.Tuxedo.Autogen
……
TypedFML fmlbuf = new TypedFML(2048);
short s = 123;
fmlbuf.Fadd(fnext_flds.F_short, s);
……
fmlbuf.Resize(3000);
……
fmlbuf.Dispose();
……
The Tuxedo .NET Workstation Client uses class
TypedVIEW to create and access VIEW/VIEW32 data. You should do the following steps to develop Tuxedo .NET Workstation Client applications using VIEW/VIEW32 typed buffers:
viewcsutility to compile the VIEW definition file into a VIEW binary file (.VV). For more information, see viewc(1), viewcs(1).
viewcsutility to generate class
TypedVIEWderived definition C# code and corresponding .dll library (if necessary) from the View binary file.
TypedVIEWto write your .NET application.
Using class
TypedVIEW provides you with two options:
This is the most common usage of
TypedVIEW.
Use the
viewcs utility to generate derived class
TypedVIEW definition C# code from the xxx.VV file, then compile the C# code into an
.exe file. No additional environment variables are required.
See the following example:
viewcs(32) view1.VV view2.VV
buildnetclient -o simpapp.exe simpapp.cs view1.cs view2.cs
You can use the
viewcs utility along with .NET assembly environment variables to generate .dll libraries. The .NET assembly environment variables
ASSFILES,
ASSDIR (
ASSFILES32,
ASSDIR32 for view32) must be set accordingly in order to view
viewcs-generated .dll libraries.
Using these environment variables,
.dll libraries can be generated automatically or manually:
Automatic viewcs-generated .dll libraries
This method may be used when many xxx.VV files exist. To simplify management of
TypedVIEW C# code, these xxx.VV files can be compiled into a .dll library.
Use the
viewcs utility to generate derived class
TypedVIEW definition C# code and corresponding .dll library from the xxx.VV files. Manually register the
libwscdnet.dll assembly, and then compile your client application using the .dll library.
See the following example:
viewcs(32) view.dll view1.VV view2.VV
gacutil.exe /i view.dll
buildnetclient -o simpapp.exe simpapp.cs view.dll
set ASSFILES(32)=view.dll
set ASSDIR(32)=%APDIR%
Manual-generated .dll Libraries
In certain integrated programming environments (for example, VB .NET, and ASP.NET). the framework provides the executing environment. Client applications are integrated as .dll files. In this case it is best to manually generate .dll libraries.
Use the
viewcs utility to generate derived class
TypedVIEW definition C# code from the xxx.VV file, then compile the C# code into an application .dll.
The .NET assembly environment variables
ASSFILES,
ASSDIR (
ASSFILES32,
ASSDIR32 for view32) must be set to application .dll libraries and directories that have
TypedVIEW defined.
See the following example:
viewcs(32) view1.VV view2.VV
csc /t:library /out:simpapp.dll /r:%TUXDIR%\bin\libwscdnet.dll simpapp.cs
view1.cs view2.cs
set ASSFILES(32)=simpapp.dll
set ASSDIR(32)=%APDIR%
The Typed Buffer Samples file (included in the Tuxedo . NET Workstation Client package) demonstrates how to use FML and VIEW typed buffers.
One benefit of the .NET Framework environment is language integration. Once a .NET assembly is generated, you can use any .NET supported language to develop applications using that assembly. Accordingly, you can also use J#, VB, C++ or other .NET supported languages to develop Tuxedo .NET Workstation Client applications.
The following is a VB language code example:
Imports System
Imports Bea.Tuxedo.ATMI
Module Main
Sub Main()
Dim sndstr, rcvstr As TypedString
Dim ac As AppContext
Dim info As TypedTPINIT
info = New TypedTPINIT()
info.cltname = "vb client"
Try
ac = AppContext.tpinit(info)
sndstr = New TypedString("hello world")
rcvstr = new TypedString(1000)
ac.tpcall("TOUPPER", sndstr, rcvstr, 0)
Console.WriteLine("rcvstr = {0}"
...rcvstr.GetString(0,1000))
ac.tpterm()
Catch e as ApplicationException
Console.WriteLine("Got Exception = {0}", e)
End Try
End Sub
End Module
The
buildnetclient utility is provided to help compile C# source files into a .NET executable files. See also
buildnetclient(1).
The following is a
buildnetclient syntax example:
buildnetclient -v -o simpapp.exe simpapp.cs
The error code return mechanism used with Tuxedo ATMI C and FML C functions is replaced with an exception mechanism in the Tuxedo .NET Workstation Client. You can use the
try statement to handle errors using the Tuxedo .NET Workstation Client. Errors are defined into two categories:
TPException and
FException.
……
try {
……
TypedTPINIT tpinfo = new TypedTPINIT();
AppContext ctx1 = AppContext.tpinit(tpinfo); // connect to Tuxedo domain
……
ctx1.tpterm(); // disconnect from Tuxedo domain
……
} catch (ApplicationException e) {
Console.WriteLine("******Error******, e = {0}", e);
}
……
Three sample applications are bundled with Tuxedo .NET Workstation Client package:
Describes how to develop Tuxedo .NET Workstation Client applications
Demonstrates
FML/VIEW typed buffer usage in Tuxedo .NET Workstation Client applications
Demonstrates how to register unsolicited message handler in Tuxedo .NET Workstation Client applications
You must do the following steps to access the sample applications:
readme.ntfile in each sample application directory.
setenv.cmdto set Tuxedo environment variable.
nmake -f xxx.ntto build the Tuxedo .NET Workstation Client application, Tuxedo server program and Tuxedo
TUXCONFIGfile.
tmboot -yto start Tuxedo application. | http://docs.oracle.com/cd/E26665_01/tuxedo/docs11gr1/dotnet/dotnet_chapter.html | CC-MAIN-2014-52 | en | refinedweb |
I'm currently without internet connection to my own computer, and I've been working out ways to make send and receiving e-mail as seamless as possible. I thought I'd document my procedure, with scripts, for those who find themselves in the same predicament.
Thankfully, although the machines I'm using to connect to the net are Windows machines, I do have access to a Linux box that is connected to the net with an ssh account I can log in to. Here is my setup for e-mail:
Sending
I'm still using KMail for composing and sending e-mail. I've set up a 'sendmail' item for sending, but pointed it at my own python script, which, instead of attempting to send the e-mail, saves it as a file on a folder in my MP3 player (or complains if I've forgotten to plug it in). Here is the script:
#!/usr/bin/python # Dump stdin in a file with unique name from datetime import datetime import operator import sys import os import random from copy import deepcopy import itertools DUMPDIR = '/media/sda1/mail/outgoing' def mk_filename(): "Creates a unique filename" return (reduce(operator.add, [chr(random.randint(ord('A'), ord('Z'))) for i in xrange(0,10)]) \ + '.' + datetime.isoformat(datetime.now())).replace(':','_') def dump_file(recips): if len(recips) == 0: print "No recipients found." sys.exit(1) fname = mk_filename() fn = os.path.join(DUMPDIR, fname + ".msg") fp = open(fn, 'w') for line in sys.stdin: fp.write(line) fp.close() fn2 = os.path.join(DUMPDIR, fname + ".recips") fp2 = open(fn2, 'w') fp2.write(' '.join(recips)) fp2.close() def prompt_for_continue(): prompt = "Cannot find directory %s for saving mail. " \ "Create directory or mount device and press 'Continue', " \ "or cancel." % (DUMPDIR,) exit_status = os.system('kdialog --warningcontinuecancel "%s"' % prompt) return exit_status == 0 def check_dir_and_dump(recips): if not os.path.isdir(DUMPDIR): if prompt_for_continue(): check_dir_and_dump() else: sys.stderr.write("Cancelled.") sys.exit(1) else: dump_file(recips) # For BCC, have to read recipients from command line, # and then for simplicity create separate files for each, with a # 'To' header added def get_recipients(): recipients = deepcopy(sys.argv) recipients = recipients[1:] # name of this shell script cont = True i = 0 while (cont): if i >= len(recipients): cont = False else: if recipients[i] == '-i': recipients.pop(i) else: if recipients[i] == '-f': recipients.pop(i) recipients.pop(i) else: i += 1 return recipients check_dir_and_dump(get_recipients()) os.system('sync')
Then, on the machine that is connected it to the internet, which I'll use perhaps once a day, I transfer the files from my MP3 player to my Linux box on the net via ftp. I also log in via ssh, using PuTTy, and run a script with sends all the emails in the 'outgoing' folder, deleting them if successful. To send these e-mails, I use 'msmtp', which can easily be downloaded, compiled and installed locally:
./configure --prefix=~/local && make && make install
(Needs ~/local to exist, probably, and needs ~/local/bin on your path to use the installed binary.)
Then, the script to send all the e-mails is just something like:
#!/bin/bash cd $HOME for TMP in ~/lp/mail/outgoing/*.msg; do echo $TMP RECIPFILE=${TMP%%.msg}.recips RECIPS=$(cat $RECIPFILE); msmtp $RECIPS < $TMP || exit 1; rm $TMP $RECIPFILE done
(once you've created a config file for msmtp).
Receiving
Currently, I check my e-mail using Fastmail's web interface, and at that point deal with all the spam, and delete other e-mail that there is no point transfering, and answer some e-mails. What needs to be transfered goes into my 'received' folder, and you can then use Fastmail's 'Archive' feature to take all the e-mails in a folder and download them as a zip file. This zip file is saved back onto my MP3 player in a specific folder, and taken back to my computer.
Once on my computer, I have another script which imports all the e-mail in that folder into KMail, removing them from the MP3 player.
#!/bin/bash # Attempt to import all files on removable device into KMail INCOMING_DIR=/media/sda1/mail/incoming EXTRACT_DIR=/home/luke/download/mail die_loudly() { # kdialog --error "$1" echo "$1" echo exit 1 } ps ax | egrep 'kontact|kmail' > /dev/null || { die_loudly "KMail isn't running, can't import messages" ; } if [ \! -d "$INCOMING_DIR" ] then die_loudly "$INCOMING_DIR cannot be found" fi if [ \! -d "$EXTRACT_DIR" ] then die_loudly "$EXTRACT_DIR cannot be found" fi mv $INCOMING_DIR/*.zip $EXTRACT_DIR || die_loudly "Can't move files from device" cd $EXTRACT_DIR unzip -o *.zip for FILE in *.eml do echo dcop kmail KMailIface dcopAddMessage "inbox" \"\" "" retval=`dcop kmail KMailIface dcopAddMessage "inbox" "" ""` if [ $? -ne 0 ] then die_loudly "Failed to import $FILE" fi if [ $retval -ne 1 ] then die_loudly "Failed to import $FILE" fi rm "$FILE" done rm $EXTRACT_DIR/*.zip
I normally run this from a console (Yakuake, to be precise, which is only ever 'F12' away), so I can see any error messages. Otherwise I'd change the 'die_loudly' function to use kdialog.
This is of course quite a bit of a faff, but it's doable. The methods and scripts are robust against forgetting to do it some days. If I wasn't using Windows boxes for connecting to the net, or if it was always the same box and I was allowed to install any software on, things would be better. As it is, 'PuTTy', which is a single, small executable, is the only thing I have to carry around with me. Also, if for whatever reason I'm reduced to only the web interface of Fastmail, I'm OK -- the Linux box isn't periodically retrieving my mails by POP or anything like that, and I only need it for sending e-mails I've prepared on my own machine.
UPDATED: Fixed scripts to handle BCC and other recipients that are passed only on the sendmail commandline. | http://lukeplant.me.uk/blog/posts/working-with-email-offline/ | CC-MAIN-2014-52 | en | refinedweb |
A Docker.
A software container platform designed for developing, shipping, and running apps leveraging container technology. Docker comes in two versions: enterprise edition and community edition
Unlike a VM which provides hardware virtualization, a container provides lightweight, operating-system-level virtualization by abstracting the “user space.” Containers share the host system’s kernel with other containers. A container, which runs on the host operating system, is a standard software unit that packages code and all its dependencies, so applications can run quickly and reliably from one environment to another. Containers are nonpersistent and are spun up from images.
The open source host software building and running the containers. Docker Engines act as the client-server application supporting containers on various Windows servers and Linux operating systems, including Oracle Linux, CentOS, Debian, Fedora, RHEL, SUSE, and Ubuntu.
Collection of software to be run as a container that contains a set of instructions for creating a container that can run on the Docker platform. Images are immutable, and changes to an image require to build a new image.
Place to store and download images. The registry is a stateless and scalable server-side application that stores and distributes Docker images.5.services:
Both dimensions are explained in more detail in the following paragraphs.,.3.
The underlying Linux kernel features that Docker uses are cgroups and namespaces. In 2008 cgroups were introduced to the Linux kernel based on work previously done by Google developers1.. | https://www.oracle.com/cloud-native/container-registry/what-is-docker/?ytid=U4vJFUpBqNM | CC-MAIN-2021-49 | en | refinedweb |
Mirror of :pserver:anonymous@cvs.schmorp.de/schmorpforge libev
You can not select more than 25 topics Topics must start with a letter or number, can include dashes ('-') and can be up to 35 characters long.
5.4 KiB
5.4 KiB
To include only the libev core (all the ev_* functions):
#define EV_STANDALONE 1
#include "ev.c"
This will automatically include ev.h, too, and should be done in a
single C source file only to provide the function implementations. To
use it, do the same for ev.h in all users:
#define EV_STANDALONE 1
#include "ev.h"
You need the following files in your source tree, or in a directory
in your include path (e.g. in libev/ when using -Ilibev):
ev.h
ev.c
ev_vars.h
ev_wrap.h
ev_win32.c
ev_select.c only when select backend is enabled (which is by default)
ev_poll.c only when poll backend is enabled (disabled by default)
ev_epoll.c only when the epoll backend is enabled (disabled by default)
ev_kqueue.c only when the kqueue backend is enabled (disabled by default)
"ev.c" includes the backend files directly when enabled.
PREPROCESSOR SYMBOLS
Libev can be configured via a variety of preprocessor symbols you have to define
before including any of its files. The default is not to build for mulciplicity
and only include the select backend.
EV_STANDALONE
Must always be "1", which keeps libev from including config.h or
other files, and it also defines dummy implementations for some
libevent functions (such as logging, which is not supported). It
will also not define any of the structs usually found in "event.h"
that are not directly supported by libev code alone.
EV_USE_MONOTONIC
If undefined or defined to be "1", libev will try to detect the
availability of the monotonic clock option at both compiletime and
runtime. Otherwise no use of the monotonic clock option will be
attempted.
EV_USE_REALTIME.
EV_USE_SELECT
If undefined or defined to be "1", libev will compile in support
for the select(2) backend. No attempt at autodetection will be
done: if no other method takes over, select will be it. Otherwise
the select backend will not be compiled in.
EV_USE_POLL
If defined to be "1", libev will compile in support for the poll(2)
backend. No attempt at autodetection will be done. poll usually
performs worse than select, so its not enabled by default (it is
also slightly less portable).
EV_USE_EPOLL
If defined to be "1", libev will compile in support for the Linux
epoll backend. Its availability will be detected at runtime,
otherwise another method will be used as fallback. This is the
preferred backend for GNU/Linux systems.
EV_USE_KQUEUE
If defined to be "1", libev will compile in support for the BSD
style kqueue backend. Its availability will be detected at runtime,
otherwise another method will be used as fallback. This is the
preferred backend for BSD and BSd-like systems. Darwin brokenness
will be detected at runtime and routed around by disabling this
backend.
EV_COMMON this:
#define EV_COMMON \
SV *self; /* contains this struct */ \
SV *cb_sv, *fh;
EV_PROTOTYPES
If defined to be "0", then "ev.h" will not define any function
prototypes, but still define all the structs and other
symbols. This is occasionally useful..
EXAMPLES
For a real-world example of a program the includes libev
verbatim, you can have a look at the EV perl module
(). It has the libev files in
the liev/ subdirectory and includes them in the EV/EVAPI.h (public
interface) and EV.xs (implementation) files. Only EV.xs file will be
compiled. | https://git.lighttpd.net/mirrors/libev/src/commit/5dd46d018abe6d5f6ed87dc8eaa4810107c8960d/README.embed | CC-MAIN-2021-49 | en | refinedweb |
Summary
Trains a deep learning model for point cloud classification using the PointCNN architecture.
Usage
This tool uses the PointCNN implementation using deep learning frameworks.
To set up your machine to use deep learning frameworks in ArcGIS Pro, see Install deep learning frameworks for ArcGIS.
The point classification model can be trained using either a CUDA-capable NVIDIA graphics card or the CPU. The tool will attempt to select the fastest CUDA-capable graphics card on the computer. If multiple GPUs are present and the tool does not use the fastest card, the desired GPU can be specified through the GPU ID environment setting. If the selected GPU is also being used for the display, its available memory will be decreased by the operating system and any other applications using the display at the time. To maximize the GPU memory for training, consider using the graphics card that will not be used for training to drive the computer's display.
Using the GPU is typically much faster than using the CPU. The CPU should only be used if no available GPU is present. If the CPU is used for training, provide the smallest possible training sample to estimate the time it will take to process the data prior to performing the training operation.
Learn more about training a point cloud classification model
When a pretrained model is specified, the model that will be trained will adopt the weights from the pretrained model and fine-tune the weights to classify the same objects in the pretrained model. For this reason, the training data must have the same attributes and class codes as the pretrained model. If the training data uses different class codes to represent the same objects in the pretrained model, remap the training data's class codes accordingly.
A folder is created to store the checkpoint models, which are models that are created at each epoch. The name of this folder is the same as the Output Model Location parameter value with a suffix of .checkpoints and is created in the same location. Once the training has completed, a CSV table with a name that is the same as the Output Model Name parameter value with a suffix of _stats.csv is created and added to the checkpoint folder. This table provides the following fields regarding the results obtained for each class code and epoch:
- Epoch—The epoch number associated with the results in the row. This value corresponds to the model created in the checkpoint models directory. The results are obtained by applying the model trained in the epoch on the validation data.
- Class_Code—The class code for which the results are being reported.
- Precision—The ratio of points that were correctly classified (true positives) over all the points that were classified (true positives and false positives).
- Recall—The ratio of correctly classified points (true positives) over all the points that should have been classified with this value (true positives and false negatives).
- F1_Score—The harmonic mean of the precision and recall value.
During training, PointCNN learns patterns from the training data and minimizes the entropy loss function. When the tool is running, the progress message returns the following statistical summary of the training results that were achieved for each epoch:
- Epoch—The epoch number for which the result is associated.
- Training Loss—The result of the entropy loss function that was averaged for the training data.
- Validation Loss—The result of the entropy loss function that was determined when applying the model trained in the epoch on the validation data.
- Accuracy—The ratio of points in the validation data that were correctly classified by the model trained in the epoch (true positives) over all the points in the validation data.
- Precision—The macro average of the precision for all class codes.
- Recall—The macro average of the recall for all class codes.
- F1 Score—The harmonic mean of the macro average of the precision and recall values for all class codes.
A model that achieves low training loss but high validation loss is considered to be overfitting the training data, whereby it detects patterns from artifacts in the training data that result in the model not working well for the validation data. A model that achieves a high training loss and a high validation loss is considered to be underfitting the training data, whereby no patterns are being learned effectively to produce a usable model.
Learn more about assessing point cloud training results
The dedicated memory used during training is the sum of memory allocated for the deep learning framework and the size of the data processed in each batch of an iteration in a given epoch. The size of the data in each batch depends on the number of additional point attributes specified in the Attribute Selection parameter, the total number of points in any given block, and the number of blocks that will be processed in each batch as specified by the Batch Size parameter. The maximum number of points per block is determined when the training data is exported, and this value should be assumed when estimating the potential memory footprint of the training operation.
Parameters
arcpy.ddd.TrainPointCloudClassificationModel(in_training_data, out_model_location, out_model_name, {pretrained_model}, {attributes}, {min_points}, {class_remap}, {target_classes}, {background_class}, {class_descriptions}, {model_selection_criteria}, {max_epochs}, {epoch_iterations}, {learning_rate}, {batch_size}, {early_stop}, {learning_rate_strategy})
Derived Output
Code sample
The following sample demonstrates the use of this tool in the Python window.
import arcpy arcpy.env.workspace = "D:/Deep_Learning_Workspace" arcpy.ddd.TrainPointCloudClassificationModel("Powerline_Training.pctd", "D:/DL_Models", "Powerline", attributes=['INTENSITY', 'RETURN_NUMBER', 'NUMBER_OF_RETURNS'], target_classes=[14, 15], background_class = 1, class_descriptions=[[1,"Background"],[14, "Wire Conductor"], [15, "Transmission Tower"]], model_selection_criteria="F1_SCORE", max_epochs=10)
Environments
Licensing information
- Basic: Requires 3D Analyst
- Standard: Requires 3D Analyst
- Advanced: Requires 3D Analyst | https://pro.arcgis.com/en/pro-app/latest/tool-reference/3d-analyst/train-point-cloud-classification-model.htm | CC-MAIN-2021-49 | en | refinedweb |
introduce
Copy-on-write (COW) is an optimization strategy in the field of computer program design.
Core Ideas
Multiple callers read the same resource that the pointer points to. Only when the caller writes, copy a copy of the resource, and replace the old resource with the copy.
application
- Linux uses COW technology to reduce Fork overhead;
- The file system guarantees data integrity to some extent through COW technology.
- The database provides users with a snapshot using the write-time replication strategy.
- COW technology is also utilized by the CopyOnWriteArrayList and CopyOnWriteArraySet of JDK.
Vector and Collections.SynchronizedXxx
ArrayList threads are not secure, Vector and Collections.SynchronizedXxx threads are secure.
Vector ensures thread security by embellishing each method declaration with the synchronized keyword:
Collections.SynchronizedXxx ensures thread security by encapsulating specific operations within each method with the synchronized keyword:
However, container threads are safe, which does not mean that concurrent use can be safely and boldly performed, but also that care should be taken about how to use them, such as:
public class CopyOnWriteTest { public static void main(String[] args) { Vector<Integer> vector = new Vector<>(); vector.add(1); vector.add(2); vector.add(3); vector.add(4); vector.add(5); for (Integer item : vector) { new Thread(vector::clear).start(); System.out.println(item); } } }
Execution results:
1 Exception in thread "main" java.util.ConcurrentModificationException at java.util.Vector$Itr.checkForComodification(Vector.java:1210) at java.util.Vector$Itr.next(Vector.java:1163) at com.wkw.study.copyonwrite.CopyOnWriteTest.main(CopyOnWriteTest.java:20)
The root cause is that Vector inherits AbstractList, which maintains the container modification number modCount, which adds 1 for each Vector modification, but Vector's iterator:
/** * Returns an iterator over the elements in this list in proper sequence. * * <p>The returned iterator is <a href=" "><i>fail-fast</i></a >. * * @return an iterator over the elements in this list in proper sequence */ public synchronized Iterator<E> iterator() { return new Itr(); } /** * An optimized version of AbstractList.Itr */ private class Itr implements Iterator<E> { int cursor; // index of next element to return int lastRet = -1; // index of last element returned; -1 if no such int expectedModCount = modCount; public E next() { synchronized (Vector.this) { checkForComodification(); int i = cursor; if (i >= elementCount) throw new NoSuchElementException(); cursor = i + 1; return elementData(lastRet = i); } } final void checkForComodification() { if (modCount != expectedModCount) throw new ConcurrentModificationException(); } ...... }
You can see that Vector's iterator gets the number of modifications to Vector at iterator initialization, the iterator checks the number of modifications each time it gets the next element, and throws an exception whenever it finds that the number of modifications is inconsistent with Vector (that is, Vector is modified), in order to fail quickly.
Collections.SynchronizedXxx also has problems traversing using iterators.
The protocol that a collection's remove/add/clear method cannot be called in a foreach loop applies not only to non-thread-safe ArrayList/LinkedList s, but also to thread-safe Vector s and Collections.SynchronizedXxx.
To solve these problems, you need to lock the entire Vector before iteration. However, distributing the container CopyOnWriteArrayList can avoid these problems.
New Generation Concurrent Container under JUC VS java.util Old Generation Concurrent Container
CopyOnWriteArrayList is a substitute for synchronous List and CopyOnWriteArraySet is a substitute for synchronous Set.
Hashtable -> ConcurrentHashMap, Vector -> CopyOnWriteArrayList; Concurrency-enabled containers under JUC are, in summary, lock granularity issues compared to older generation thread security classes:
Locks such as Hashtable, Vector, Collections.SynchronizedXxx are granular and use the synchronized keyword directly at the method declaration.
ConcurrentHashXxx, CopyOnWriteArrayXxx, and so on have small lock granularity.
Thread security is implemented in a variety of ways, such as ConcurrentHashMap using CAS + volatile
Thread-safe containers under JUC do not throw ConcurrentModificationException exceptions when traversing
So in general, it is recommended that you use the thread-safe containers provided under the JUC package (ConcurrentHashMap, ConcurrentHashSet, CopyOnWriteArrayList, CopyOnWriteArraySet, and so on) instead of older generation thread-safe containers (Hashtable, Vector, Collections.SynchronizedXxx, and so on).
Principle of CopyOnWriteArrayList
COW is a way of solving concurrency. The basic principle is read-write separation:
When writing, copy a new collection, add or delete elements within the new collection; After all modifications have been made, the reference to the original set is pointed to the new set.
The advantage is that COW s can be read and traversed in high concurrency without locking, since the current collection does not add any elements.
Basic Definitions
public class CopyOnWriteArrayList<E> implements List<E>, RandomAccess, Cloneable, java.io.Serializable { private static final long serialVersionUID = 8673264195747942595L; /** The lock protecting all mutators */ //Write Lock final transient ReentrantLock lock = new ReentrantLock(); /** The array, accessed only via getArray/setArray. */ private transient volatile Object[] array; ...... }
Write operation
/** * Appends the specified element to the end of this list. * * @param e element to be appended to this list * @return {@code true} (as specified by {@link Collection#add}) */ public boolean add(E e) { final ReentrantLock lock = this.lock; lock.lock(); try { Object[] elements = getArray(); int len = elements.length; Object[] newElements = Arrays.copyOf(elements, len + 1); newElements[len] = e; setArray(newElements); return true; } finally { lock.unlock(); } } /** * Gets the array. Non-private so as to also be accessible * from CopyOnWriteArraySet class. */ final Object[] getArray() { return array; } /** * Sets the array. */ final void setArray(Object[] a) { array = a; }
As you can see, the principle is simple:
- Write operations are locked to prevent data loss during concurrent writes.
- Copy a new array and add operations on the new array.
- Point array to a new array
- Final unlock
Read operation
/** * {@inheritDoc} * * @throws IndexOutOfBoundsException {@inheritDoc} */ public E get(int index) { return get(getArray(), index); } private E get(Object[] a, int index) { return (E) a[index]; }
Read operations are direct reads of the original array;
iterator
/** * Returns an iterator over the elements in this list in proper sequence. * * <p>The returned iterator provides a snapshot of the state of the list * when the iterator was constructed. No synchronization is needed while * traversing the iterator. The iterator does <em>NOT</em> support the * {@code remove} method. * * @return an iterator over the elements in this list in proper sequence */ public Iterator<E> iterator() { return new COWIterator<E>(getArray(), 0); } static final class COWIterator<E> implements ListIterator<E> { /** Snapshot of the array */ private final Object[] snapshot; /** Index of element to be returned by subsequent call to next. */ private int cursor; private COWIterator(Object[] elements, int initialCursor) { cursor = initialCursor; snapshot = elements; } public boolean hasNext() { return cursor < snapshot.length; } public boolean hasPrevious() { return cursor > 0; } @SuppressWarnings("unchecked") public E next() { if (! hasNext()) throw new NoSuchElementException(); return (E) snapshot[cursor++]; } ...... }
You can see that iteration is also an iteration of the original array. If the set is modified, the array inside the set points to a new array object, while the snapshot inside the COWIterator points to the old array passed in at initialization, so no exception is thrown, because the old array never changes and the old array reading operation is always reliable and safe.
Because the read-write separation does not affect the read, the CopyOnWriteArrayList does not maintain the modCount number of modifications.
Performance: CopyOnWriteArrayList VS Collections.synchronizedList
Analysis
- In terms of space utilization, CopyOnWriteArrayList needs to copy an array when it is written, so Collections.synchronizedList must have a higher space utilization rate;
- For read operations, Collections.SynchronizedList needs to be locked, CopyOnWriteArrayList reads the original array directly, so CopyOnWriteArrayList reads more efficiently;
- On the write side, CopyOnWriteArrayList needs to copy arrays when writing, ReentrantLock is locked, Collections.SynchronizedList uses the synchronized keyword, synchronized monitor lock takes more time in high concurrency cases; However, Collections.synchronizedList does not need to copy arrays; Taken together, Collections.SynchronizedList writing may perform better;
Verification
public static void main(String[] args) { List<Integer> copyOnWriteArrayList = new CopyOnWriteArrayList<>(); List<Integer> synchronizedList = Collections.synchronizedList(new ArrayList<>()); StopWatch stopWatch = new StopWatch(); int loopCount = 1000; stopWatch.start("CopyOnWriteList write"); /** *.add(ThreadLocalRandom.current().nextInt(loopCount))); stopWatch.stop(); stopWatch.start("Collections.synchronizedList write"); /** * parallelStream Features: Based on server kernel limitations, if you are an octet * Only eight threads per thread, no custom thread pool */ IntStream.rangeClosed(1, loopCount).parallel().forEach( item -> synchronizedList.add(ThreadLocalRandom.current().nextInt(loopCount))); stopWatch.stop(); System.out.println(stopWatch.prettyPrint()); }
Result:
StopWatch '': running time (millis) = 55 ----------------------------------------- ms % Task name ----------------------------------------- 00054 098% CopyOnWriteList write 00001 002% Collections.synchronizedList write
CopyOnWriteList takes more time to write than Collections.synchronizedList;
public static void main(String[] args) { List<Integer> copyOnWriteArrayList = new CopyOnWriteArrayList<>(); List<Integer> synchronizedList = Collections.synchronizedList(new ArrayList<>()); copyOnWriteArrayList.addAll(IntStream.rangeClosed(1, 1000000).boxed().collect(Collectors.toList())); synchronizedList.addAll(IntStream.rangeClosed(1, 1000000).boxed().collect(Collectors.toList())); int copyOnWriteArrayListSize = copyOnWriteArrayList.size(); StopWatch stopWatch = new StopWatch(); int loopCount = 1000000; stopWatch.start("CopyOnWriteArrayList read"); /** *.get(ThreadLocalRandom.current().nextInt(copyOnWriteArrayListSize))); stopWatch.stop(); stopWatch.start("Collections.synchronizedList read"); int synchronizedListSize = synchronizedList.size(); /** * parallelStream Features: Based on server kernel limitations, if you are an octet * Only eight threads per thread, no custom thread pool */ IntStream.rangeClosed(1, loopCount).parallel().forEach( item -> synchronizedList.get(ThreadLocalRandom.current().nextInt(synchronizedListSize))); stopWatch.stop(); System.out.println(stopWatch.prettyPrint()); }
Result:
StopWatch '': running time (millis) = 158 ----------------------------------------- ms % Task name ----------------------------------------- 00030 019% CopyOnWriteArrayList read 00128 081% Collections.synchronizedList read
Collections.synchronizedList takes longer to write than CopyOnWriteList;
Advantages and disadvantages of CopyOnWriteArrayList
Advantage
For some scenarios with more reading and less writing, COW is more appropriate.
For example, configuration, blacklist, logistics address and so on change very little data, which is a lock-free implementation, which can achieve higher concurrency of programs.
CopyOnWriteArrayList is concurrently secure and performs better than Vector.
Vectors are add synchronized methods to ensure synchronization, but when each method is executed to obtain a lock, performance will be greatly reduced. CopyOnWriteArrayList only locks additions and deletions, but read without locks, performance is better than Vector.
shortcoming
Data consistency issues: The CopyOnWrite container can only guarantee the final consistency of the data, not the real-time consistency of the data.
Thread A, for example, iterates over the data in the CopyOnWriteArrayList container. Thread B modified the data in the CopyOnWriteArrayList section between thread A iterations, but thread A iterated over the old data.
Memory usage problem. If CopyOnWriteArrayList frequently adds or deletes data inside it, and objects are large and frequent writes consume memory, causing Java GC problems, consider other containers such as ConcurrentHashMap.
Reference resources:
Interviewer: Do you know how Copy-On-Write is used in Java? | https://programmer.help/blogs/java-copyonwrite-on-write.html | CC-MAIN-2021-49 | en | refinedweb |
Hi, so I’m on exercise 9 of Inheritance / Polymorphism ( )
The code below is provided as the solution but I’m a little confused on the ‘why’. Specifically why does using append require the use of super() and sort does not since the sort method is still coming from the list parent class? Is it because in this case it’s sorting itself so the super() is assumed?
Similarly the fact that we don’t specify what list is being appended to in super().append(value) seems strange to me. I’m assuming there’s just something basic here that I’m not getting so any clarity would be appreciated.
Thanks!
class SortedList(list):
def append(self, value):
super().append(value)
self.sort() | https://discuss.codecademy.com/t/question-about-inheritance-super/418032 | CC-MAIN-2020-40 | en | refinedweb |
10 Visual Studio features to turbocharge your codingPosted Jul 12, 2017 | 7 min. (1470 words)
Here is a list of the top 11 Visual Studio features that I use just about every day (yes, including the weekend). If you’re new to using Visual Studio, I recommend giving these a go and then get into the rhythm of using them often. If you’ve been coding in Visual Studio for years, this list may serve as a refresher, or you may even learn something new.
1. Open IntelliSense popup
Let’s start off with a simple one – opening the IntelliSense popup. IntelliSense is Visual Studio’s way of auto-completing names based on what you type and what’s available in the current context. If you’re calling a method, for example, you’ll see a popup containing only method names accessible on the class you’re calling on. (And any available extension methods), filtered by what you’ve typed so far. The popup allows you to hit the Return key generally after typing a few characters to complete the method name and swiftly start typing the parameters. The IntelliSense popup automatically opens as you type, but sometimes you may lose it – say if you are navigating away and back. Or sometimes you may want to open it without having typed anything yet. Use the Ctrl+Space keyboard shortcut to manage this.
2. Renaming
Naming can be hard to get right the first time. Sometimes naming something is always hard. Renaming, on the other hand, is super easy in Visual Studio. By replacing the name of a class, interface, method, namespace, property, variable, constant, delegate, event and so on, the name will be surrounded by a dotted line. While the caret is still within the name, you can press Ctrl+. and Visual Studio will ask if you want to rename that member. Hit Return to apply the rename which will update all the references to that member too – saving a lot of otherwise manual work. You will lose the option to rename if you make any other changes to the file while the dotted line surrounds the name.
Renaming a file in the solution explorer will often cause Visual Studio to ask you if you want to perform a rename in the class within that file. Make sure to rename files first if you intend to rename the class to save a bit of time.
3. Conditional breakpoints
A breakpoint is a flag that you can set on a line of code that will cause the Visual Studio debugger to pause code execution when the running process reaches that line of code. You can also view the application state to check that everything is fine or to debug why something is not working as expected. A breakpoint can be added by either clicking the gray margin to the left of the line of code or by pressing F9 while the caret is on the line of code. If a line of code is hit many times, such as within a loop, while you’re trying to debug, getting to the breakpoint while the application is in the state that you want to investigate can be a time sink.
Conditional breakpoints help save time here. Right-click a breakpoint and select “Condition” or use the Alt+F9 keyboard shortcut. With the popup, you can check for all sorts of conditions such as if a particular value equals an expected value, or if it has hit a breakpoint. The text box for entering a condition even has IntelliSense to help avoid coding typos. Instead of manually stepping through a breakpoint many times this is a great way to avoid coding errors.
4. Find all
Ctrl+F is great but only gives you the option to step through each search result one at a time. Once you lose focus from the search dialog, you need to loop through all the search results again, rather than continuing from where you were. Find all is where it’s at, which you can initiate using the Ctrl+Shift+F keyboard shortcut. The Ctrl+Shift+F shortcut will open a different dialog with more search options, the main one I use of which is to specify what types of files to search through. By clicking Find All, you’ll get a list of all search results ordered by file, allowing you to scan through them however you like. Once you’ve found the right result, click on it to navigate to the code. The search list persists in the find results window allowing you to examine other results whenever you need to.
5. Go To Definition
As a project grows, navigating around code within many many files can start to become time-consuming. Fortunately, Visual Studio has many simple features to speed up and simplify code navigation. “Go To Definition” is one of the navigation features that I use the most. Within your code, click on any usage of a member, such as a class or a method, then hit F12 to navigate the definition of that member. The F12 shortcut is used to revise the implementation of a method or to jump to a class where you plan to add a new member.
6. Go To Implementation
A lot of my code implements interfaces, and those implementations use interfaces to call other code. Hitting F12 on a method that’s only known by its interface will land you in the interface definition, which sometimes isn’t what you want. To get to an implementation of that interface, use Ctrl+F12 instead. If Visual Studio only knows of one implementation to that interface, it will take you there straight away. If there are multiple implementations, Visual Studio will present you with a list.
7. View all references
In more recent versions of Visual Studio, displaying the definition of a class or method will show the number of references to that class or method just above the name. Clicking this number will open a popup displaying a list grouped by file name. Alternatively, or in older Visual Studio versions, click on the class or method name, and then either use the Ctrl+K, R shortcut or right-click and select “Find All References.” Visual Studio 2017 will be bringing further improvements to the find all references feature which you can read about here.
8. NavigateTo
NavigateTo allows you search for something without knowing or remembering where it is (or any of its references). Scott Hanselman also names this as one of his favorite features of Visual Studio, and it’s one I use all the time, too.
“Absolutely high on the list of useful things is Ctrl+, for NavigateTo. Why click around with your mouse to open a file or find a specific member or function?”
To get a filtered list of options quickly, use the shortcut Ctrl+, :
9. Navigate backward and forward
As you navigate around your code, be it via the solution explorer, or using any of the navigation features described above, Visual Studio remembers the path you’ve taken, much like navigating the Internet in a browser. Navigate backward and forward along this path quickly using the keyboard shortcuts Ctrl+- to go backward and Ctrl+Shift+- to go forward. Visual Studio displays the “Standard” toolbar by default. The backward button has a drop down allowing you to jump to a particular point, rather than one at a time. If you’ve got a mouse that has backward and forward buttons, then you can use those in place of the keyboard shortcuts which is much faster.
10. Sync with active document
Once you’ve found a class, you may want to find or add a class to the same folder that it lives within. If you have no idea where the class lives in the solution explorer, just get Visual Studio to guide you. Use the keyboard shortcut Ctrl+[, S or hit the double arrows icon in the solution explorer toolbar. The toolbar will expand, scroll to and highlight the file you’re currently editing:
11. (Bonus feature) Customize keyboard shortcuts
One of the most useful Visual Studio features is the ability to customize your keyboard shortcuts. In the menu, click “Tools” and then “Options…”. Head to the sidebar, expand “Environment” and then select “Keyboard.” Use the search box to type part of the name of the feature you’re editing with no spaces, such as activedocument for the above feature. Focus on the “Press shortcut keys” box and input the keyboard shortcut you want. The box below will display the feature(s) that are also using that keyboard shortcut so that you can avoid any unwanted conflicts. Hit OK and try it out.
It’s all about saving time
Most of these features, for me, are all about saving time. Which Visual Studio features do you use all the time that help save time and supercharge your coding skills? Comment below!
Further reading
Raygun integrates with Visual Studio Online
Raygun and Visual Studio Team Services
How to design better unit tests | https://raygun.com/blog/visual-studio-features/ | CC-MAIN-2020-40 | en | refinedweb |
import "github.com/documize/community/domain/template"
Handler contains the runtime information such as logging and database.
SaveAs saves existing document as a template.
SavedList returns all templates saved by the user
Use creates new document using a saved document as a template. If template ID is ZERO then we provide an Empty Document as the new document.
Package template imports 25 packages (graph) and is imported by 2 packages. Updated 2019-12-04. Refresh now. Tools for package owners. | https://godoc.org/github.com/documize/community/domain/template | CC-MAIN-2020-40 | en | refinedweb |
import "github.com/dgraph-io/badger"
Package badger implements an embeddable, simple and fast key-value database, written in pure Go. It is designed to be highly performant for both reads and writes simultaneously. Badger uses Multi-Version Concurrency Control (MVCC), and supports transactions. It runs transactions concurrently, with serializable snapshot isolation guarantees.
Badger uses an LSM tree along with a value log to separate keys from values, hence reducing both write amplification and the size of the LSM tree. This allows LSM tree to be served entirely from RAM, while the values are served from SSD.
Badger has the following main types: DB, Txn, Item and Iterator. DB contains keys that are associated with values. It must be opened with the appropriate options before it can be accessed.
All operations happen inside a Txn. Txn represents a transaction, which can be read-only or read-write. Read-only transactions can read values for a given key (which are returned inside an Item), or iterate over a set of key-value pairs using an Iterator (which are returned as Item type values as well). Read-write transactions can also update and delete keys from the DB.
See the examples for more usage details.
backup.go batch.go compaction.go db.go dir_unix.go doc.go errors.go histogram.go iterator.go key_registry.go level_handler.go levels.go logger.go managed_db.go manifest.go merge.go options.go publisher.go stream.go stream_writer.go structs.go txn.go util.go value.go
const ( // KeyRegistryFileName is the file name for the key registry file. KeyRegistryFileName = "KEYREGISTRY" // KeyRegistryRewriteFileName is the file name for the rewrite key registry file. KeyRegistryRewriteFileName = "REWRITE-KEYREGISTRY" )
const ( // ValueThresholdLimit is the maximum permissible value of opt.ValueThreshold. ValueThresholdLimit = math.MaxUint16 - 16 + 1 )
var ( // ErrValueLogSize is returned when opt.ValueLogFileSize option is not within the valid // range. ErrValueLogSize = errors.New("Invalid ValueLogFileSize, must be between 1MB and 2GB") // ErrKeyNotFound is returned when key isn't found on a txn.Get. ErrKeyNotFound = errors.New("Key not found") // ErrTxnTooBig is returned if too many writes are fit into a single transaction. ErrTxnTooBig = errors.New("Txn is too big to fit into one request") // ErrConflict is returned when a transaction conflicts with another transaction. This can // happen if the read rows had been updated concurrently by another transaction. ErrConflict = errors.New("Transaction Conflict. Please retry") // ErrReadOnlyTxn is returned if an update function is called on a read-only transaction. ErrReadOnlyTxn = errors.New("No sets or deletes are allowed in a read-only transaction") // ErrDiscardedTxn is returned if a previously discarded transaction is re-used. ErrDiscardedTxn = errors.New("This transaction has been discarded. Create a new one") // ErrEmptyKey is returned if an empty key is passed on an update function. ErrEmptyKey = errors.New("Key cannot be empty") // ErrInvalidKey is returned if the key has a special !badger! prefix, // reserved for internal usage. ErrInvalidKey = errors.New("Key is using a reserved !badger! prefix") // ErrRetry is returned when a log file containing the value is not found. // This usually indicates that it may have been garbage collected, and the // operation needs to be retried. ErrRetry = errors.New("Unable to find log file. Please retry") // ErrThresholdZero is returned if threshold is set to zero, and value log GC is called. // In such a case, GC can't be run. ErrThresholdZero = errors.New( "Value log GC can't run because threshold is set to zero") // ErrNoRewrite is returned if a call for value log GC doesn't result in a log file rewrite. ErrNoRewrite = errors.New( "Value log GC attempt didn't result in any cleanup") // ErrRejected is returned if a value log GC is called either while another GC is running, or // after DB::Close has been called. ErrRejected = errors.New("Value log GC request rejected") // ErrInvalidRequest is returned if the user request is invalid. ErrInvalidRequest = errors.New("Invalid request") // ErrManagedTxn is returned if the user tries to use an API which isn't // allowed due to external management of transactions, when using ManagedDB. ErrManagedTxn = errors.New( "Invalid API request. Not allowed to perform this action using ManagedDB") // ErrInvalidDump if a data dump made previously cannot be loaded into the database. ErrInvalidDump = errors.New("Data dump cannot be read") // ErrZeroBandwidth is returned if the user passes in zero bandwidth for sequence. ErrZeroBandwidth = errors.New("Bandwidth must be greater than zero") // ErrInvalidLoadingMode is returned when opt.ValueLogLoadingMode option is not // within the valid range ErrInvalidLoadingMode = errors.New("Invalid ValueLogLoadingMode, must be FileIO or MemoryMap") // ErrReplayNeeded is returned when opt.ReadOnly is set but the // database requires a value log replay. ErrReplayNeeded = errors.New("Database was not properly closed, cannot open read-only") // ErrWindowsNotSupported is returned when opt.ReadOnly is used on Windows ErrWindowsNotSupported = errors.New("Read-only mode is not supported on Windows") // ErrPlan9NotSupported is returned when opt.ReadOnly is used on Plan 9 ErrPlan9NotSupported = errors.New("Read-only mode is not supported on Plan 9") // ErrTruncateNeeded is returned when the value log gets corrupt, and requires truncation of // corrupt data to allow Badger to run properly. ErrTruncateNeeded = errors.New( "Value log truncate required to run DB. This might result in data loss") // ErrBlockedWrites is returned if the user called DropAll. During the process of dropping all // data from Badger, we stop accepting new writes, by returning this error. ErrBlockedWrites = errors.New("Writes are blocked, possibly due to DropAll or Close") // ErrNilCallback is returned when subscriber's callback is nil. ErrNilCallback = errors.New("Callback cannot be nil") // ErrEncryptionKeyMismatch is returned when the storage key is not // matched with the key previously given. ErrEncryptionKeyMismatch = errors.New("Encryption key mismatch") // ErrInvalidDataKeyID is returned if the datakey id is invalid. ErrInvalidDataKeyID = errors.New("Invalid datakey id") // ErrInvalidEncryptionKey is returned if length of encryption keys is invalid. ErrInvalidEncryptionKey = errors.New("Encryption key's length should be" + "either 16, 24, or 32 bytes") // ErrGCInMemoryMode is returned when db.RunValueLogGC is called in in-memory mode. ErrGCInMemoryMode = errors.New("Cannot run value log GC when DB is opened in InMemory mode") // ErrDBClosed is returned when a get operation is performed after closing the DB. ErrDBClosed = errors.New("DB Closed") )
var DefaultIteratorOptions = IteratorOptions{ PrefetchValues: true, PrefetchSize: 100, Reverse: false, AllVersions: false, }
DefaultIteratorOptions contains default options when iterating over Badger key-value stores.
func WriteKeyRegistry(reg *KeyRegistry, opt KeyRegistryOptions) error
WriteKeyRegistry will rewrite the existing key registry file with new one. It is okay to give closed key registry. Since, it's using only the datakey.
type DB struct { sync.RWMutex // Guards list of inmemory tables, not individual reads and writes. // contains filtered or unexported fields }
DB provides the various functions required to interact with Badger. DB is thread-safe.
Open returns a new DB object.
Code:
dir, err := ioutil.TempDir("", "badger-test") if err != nil { panic(err) } defer removeDir(dir) db, err := Open(DefaultOptions(dir)) if err != nil { panic(err) } defer db.Close() err = db.View(func(txn *Txn) error { _, err := txn.Get([]byte("key")) // We expect ErrKeyNotFound fmt.Println(err) return nil }) if err != nil { panic(err) } txn := db.NewTransaction(true) // Read-write txn err = txn.SetEntry(NewEntry([]byte("key"), []byte("value"))) if err != nil { panic(err) } err = txn.Commit() if err != nil { panic(err) } err = db.View(func(txn *Txn) error { item, err := txn.Get([]byte("key")) if err != nil { return err } val, err := item.ValueCopy(nil) if err != nil { return err } fmt.Printf("%s\n", string(val)) return nil }) if err != nil { panic(err) }
Output:
Key not found value
OpenManaged returns a new DB, which allows more control over setting transaction timestamps, aka managed mode.
This is only useful for databases built on top of Badger (like Dgraph), and can be ignored by most users.
Backup dumps a protobuf-encoded list of all entries in the database into the given writer, that are newer than or equal to the specified version. It returns a timestamp (version) indicating the version of last entry that is dumped, which after incrementing by 1 can be passed into later invocation to generate incremental backup of entries that have been added/modified since the last invocation of DB.Backup(). DB.Backup is a wrapper function over Stream.Backup to generate full and incremental backups of the DB. For more control over how many goroutines are used to generate the backup, or if you wish to backup only a certain range of keys, use Stream.Backup directly.
BlockCacheMetrics returns the metrics for the underlying block cache.
Close closes a DB. It's crucial to call it to ensure all the pending updates make their way to disk. Calling DB.Close() multiple times would still only close the DB once.
DropAll would drop all the data stored in Badger. It does this in the following way. - Stop accepting new writes. - Pause memtable flushes and compactions. - Pick all tables from all levels, create a changeset to delete all these tables and apply it to manifest. - Pick all log files from value log, and delete all of them. Restart value log files from zero. - Resume memtable flushes and compactions.
NOTE: DropAll is resilient to concurrent writes, but not to reads. It is up to the user to not do any reads while DropAll is going on, otherwise they may result in panics. Ideally, both reads and writes are paused before running DropAll, and resumed after it is finished.
DropPrefix would drop all the keys with the provided prefix. It does this in the following way: - Stop accepting new writes. - Stop memtable flushes before acquiring lock. Because we're acquring lock here
and memtable flush stalls for lock, which leads to deadlock
- Flush out all memtables, skipping over keys with the given prefix, Kp. - Write out the value log header to memtables when flushing, so we don't accidentally bring Kp
back after a restart.
- Stop compaction. - Compact L0->L1, skipping over Kp. - Compact rest of the levels, Li->Li, picking tables which have Kp. - Resume memtable flushes, compactions and writes.
Flatten can be used to force compactions on the LSM tree so all the tables fall on the same level. This ensures that all the versions of keys are colocated and not split across multiple levels, which is necessary after a restore from backup. During Flatten, live compactions are stopped. Ideally, no writes are going on during Flatten. Otherwise, it would create competition between flattening the tree and new tables being created at level zero.
GetMergeOperator creates a new MergeOperator for a given key and returns a pointer to it. It also fires off a goroutine that performs a compaction using the merge function that runs periodically, as specified by dur.
GetSequence would initiate a new sequence object, generating it from the stored lease, if available, in the database. Sequence can be used to get a list of monotonically increasing integers. Multiple sequences can be created by providing different keys. Bandwidth sets the size of the lease, determining how many Next() requests can be served from memory.
GetSequence is not supported on ManagedDB. Calling this would result in a panic.
IndexCacheMetrics returns the metrics for the underlying index cache.
IsClosed denotes if the badger DB is closed or not. A DB instance should not be used after closing it.
KeySplits can be used to get rough key ranges to divide up iteration over the DB.
Load reads a protobuf-encoded list of all entries from a reader and writes them to the database. This can be used to restore the database from a backup made by calling DB.Backup(). If more complex logic is needed to restore a badger backup, the KVLoader interface should be used instead.
DB.Load() should be called on a database that is not running any other concurrent transactions while it is running.
MaxBatchCount returns max possible entries in batch
MaxBatchSize returns max possible batch size
NewKVLoader returns a new instance of KVLoader.
func (db *DB) NewManagedWriteBatch() *WriteBatch
NewStream creates a new Stream.
NewStreamAt creates a new Stream at a particular timestamp. Should only be used with managed DB.
func (db *DB) NewStreamWriter() *StreamWriter
NewStreamWriter creates a StreamWriter. Right after creating StreamWriter, Prepare must be called. The memory usage of a StreamWriter is directly proportional to the number of streams possible. So, efforts must be made to keep the number of streams low. Stream framework would typically use 16 goroutines and hence create 16 streams.
NewTransaction creates a new transaction. Badger supports concurrent execution of transactions, providing serializable snapshot isolation, avoiding write skews. Badger achieves this by tracking the keys read and at Commit time, ensuring that these read keys weren't concurrently modified by another transaction.
For read-only transactions, set update to false. In this mode, we don't track the rows read for any changes. Thus, any long running iterations done in this mode wouldn't pay this overhead.
Running transactions concurrently is OK. However, a transaction itself isn't thread safe, and should only be run serially. It doesn't matter if a transaction is created by one goroutine and passed down to other, as long as the Txn APIs are called serially.
When you create a new transaction, it is absolutely essential to call Discard(). This should be done irrespective of what the update param is set to. Commit API internally runs Discard, but running it twice wouldn't cause any issues.
txn := db.NewTransaction(false) defer txn.Discard() // Call various APIs.
NewTransactionAt follows the same logic as DB.NewTransaction(), but uses the provided read timestamp.
This is only useful for databases built on top of Badger (like Dgraph), and can be ignored by most users.
func (db *DB) NewWriteBatch() *WriteBatch
NewWriteBatch creates a new WriteBatch. This provides a way to conveniently do a lot of writes, batching them up as tightly as possible in a single transaction and using callbacks to avoid waiting for them to commit, thus achieving good performance. This API hides away the logic of creating and committing transactions. Due to the nature of SSI guaratees provided by Badger, blind writes can never encounter transaction conflicts (ErrConflict).
func (db *DB) NewWriteBatchAt(commitTs uint64) *WriteBatch
NewWriteBatchAt is similar to NewWriteBatch but it allows user to set the commit timestamp. NewWriteBatchAt is supposed to be used only in the managed mode.
Opts returns a copy of the DB options.
PrintHistogram builds and displays the key-value size histogram. When keyPrefix is set, only the keys that have prefix "keyPrefix" are considered for creating the histogram
RunValueLogGC triggers a value log garbage collection.
It picks value log files to perform GC based on statistics that are collected during compactions. If no such statistics are available, then log files are picked in random order. The process stops as soon as the first log file is encountered which does not result in garbage collection.
When a log file is picked, it is first sampled. If the sample shows that we can discard at least discardRatio space of that file, it would be rewritten.
If a call to RunValueLogGC results in no rewrites, then an ErrNoRewrite is thrown indicating that the call resulted in no file rewrites.
We recommend setting discardRatio to 0.5, thus indicating that a file be rewritten if half the space can be discarded. This results in a lifetime value log write amplification of 2 (1 from original write + 0.5 rewrite + 0.25 + 0.125 + ... = 2). Setting it to higher value would result in fewer space reclaims, while setting it to a lower value would result in more space reclaims at the cost of increased activity on the LSM tree. discardRatio must be in the range (0.0, 1.0), both endpoints excluded, otherwise an ErrInvalidRequest is returned.
Only one GC is allowed at a time. If another value log GC is running, or DB has been closed, this would return an ErrRejected.
Note: Every time GC is run, it would produce a spike of activity on the LSM tree.
SetDiscardTs sets a timestamp at or below which, any invalid or deleted versions can be discarded from the LSM tree, and thence from the value log to reclaim disk space. Can only be used with managed transactions.
Size returns the size of lsm and value log files in bytes. It can be used to decide how often to call RunValueLogGC.
Stream the contents of this DB to a new DB with options outOptions that will be created in outDir.
Subscribe can be used to watch key changes for the given key prefixes. At least one prefix should be passed, or an error will be returned. You can use an empty prefix to monitor all changes to the DB. This function blocks until the given context is done or an error occurs. The given function will be called with a new KVList containing the modified keys and the corresponding values.
Sync syncs database content to disk. This function provides more control to user to sync data whenever required.
Tables gets the TableInfo objects from the level controller. If withKeysCount is true, TableInfo objects also contain counts of keys for the tables.
Update executes a function, creating and managing a read-write transaction for the user. Error returned by the function is relayed by the Update method. Update cannot be used with managed transactions.
VerifyChecksum verifies checksum for all tables on all levels. This method can be used to verify checksum, if opt.ChecksumVerificationMode is NoVerification.
View executes a function creating and managing a read-only transaction for the user. Error returned by the function is relayed by the View method. If View is used with managed transactions, it would assume a read timestamp of MaxUint64.
type Entry struct { Key []byte Value []byte UserMeta byte ExpiresAt uint64 // time.Unix // contains filtered or unexported fields }
Entry provides Key, Value, UserMeta and ExpiresAt. This struct can be used by the user to set data.
NewEntry creates a new entry with key and value passed in args. This newly created entry can be set in a transaction by calling txn.SetEntry(). All other properties of Entry can be set by calling WithMeta, WithDiscard, WithTTL methods on it. This function uses key and value reference, hence users must not modify key and value until the end of transaction.
WithDiscard adds a marker to Entry e. This means all the previous versions of the key (of the Entry) will be eligible for garbage collection. This method is only useful if you have set a higher limit for options.NumVersionsToKeep. The default setting is 1, in which case, this function doesn't add any more benefit. If however, you have a higher setting for NumVersionsToKeep (in Dgraph, we set it to infinity), you can use this method to indicate that all the older versions can be discarded and removed during compactions.
WithMeta adds meta data to Entry e. This byte is stored alongside the key and can be used as an aid to interpret the value or store other contextual bits corresponding to the key-value pair of entry.
WithTTL adds time to live duration to Entry e. Entry stored with a TTL would automatically expire after the time has elapsed, and will be eligible for garbage collection.
Item is returned during iteration. Both the Key() and Value() output is only valid until iterator.Next() is called.
DiscardEarlierVersions returns whether the item was created with the option to discard earlier versions of a key when multiple are available.
EstimatedSize returns the approximate size of the key-value pair.
This can be called while iterating through a store to quickly estimate the size of a range of key-value pairs (without fetching the corresponding values).
ExpiresAt returns a Unix time value indicating when the item will be considered expired. 0 indicates that the item will never expire.
IsDeletedOrExpired returns true if item contains deleted or expired value.
Key returns the key.
Key is only valid as long as item is valid, or transaction is valid. If you need to use it outside its validity, please use KeyCopy.
KeyCopy returns a copy of the key of the item, writing it to dst slice. If nil is passed, or capacity of dst isn't sufficient, a new slice would be allocated and returned.
KeySize returns the size of the key. Exact size of the key is key + 8 bytes of timestamp
String returns a string representation of Item
UserMeta returns the userMeta set by the user. Typically, this byte, optionally set by the user is used to interpret the value.
Value retrieves the value of the item from the value log.
This method must be called within a transaction. Calling it outside a transaction is considered undefined behavior. If an iterator is being used, then Item.Value() is defined in the current iteration only, because items are reused.
If you need to use a value outside a transaction, please use Item.ValueCopy instead, or copy it yourself. Value might change once discard or commit is called. Use ValueCopy if you want to do a Set after Get.
ValueCopy returns a copy of the value of the item from the value log, writing it to dst slice. If nil is passed, or capacity of dst isn't sufficient, a new slice would be allocated and returned. Tip: It might make sense to reuse the returned slice as dst argument for the next call.
This function is useful in long running iterate/update transactions to avoid a write deadlock. See Github issue:
ValueSize returns the approximate size of the value.
This can be called to quickly estimate the size of a value without fetching it.
Version returns the commit timestamp of the item.
type Iterator struct { // ThreadId is an optional value that can be set to identify which goroutine created // the iterator. It can be used, for example, to uniquely identify each of the // iterators created by the stream interface ThreadId int // contains filtered or unexported fields }
Iterator helps iterating over the KV pairs in a lexicographically sorted order.
Close would close the iterator. It is important to call this when you're done with iteration.
Item returns pointer to the current key-value pair. This item is only valid until it.Next() gets called.
Next would advance the iterator by one. Always check it.Valid() after a Next() to ensure you have access to a valid it.Item().
Rewind would rewind the iterator cursor all the way to zero-th position, which would be the smallest key if iterating forward, and largest if iterating backward. It does not keep track of whether the cursor started with a Seek().
Seek would seek to the provided key if present. If absent, it would seek to the next smallest key greater than the provided key if iterating in the forward direction. Behavior would be reversed if iterating backwards.
Valid returns false when iteration is done.
ValidForPrefix returns false when iteration is done or when the current key is not prefixed by the specified prefix.
type IteratorOptions struct { // Indicates whether we should prefetch values during iteration and store them. PrefetchValues bool // How many KV pairs to prefetch while iterating. Valid only if PrefetchValues is true. PrefetchSize int Reverse bool // Direction of iteration. False is forward, true is backward. AllVersions bool // Fetch all valid versions of the same key. // The following option is used to narrow down the SSTables that iterator picks up. If // Prefix is specified, only tables which could have this prefix are picked based on their range // of keys. Prefix []byte // Only iterate over this given prefix. InternalAccess bool // Used to allow internal access to badger keys. // contains filtered or unexported fields }
IteratorOptions is used to set options when iterating over Badger key-value stores.
This package provides DefaultIteratorOptions which contains options that should work for most applications. Consider using that as a starting point before customizing it for your own needs.
KVList contains a list of key-value pairs.
KVLoader is used to write KVList objects in to badger. It can be used to restore a backup.
Finish is meant to be called after all the key-value pairs have been loaded.
Set writes the key-value pair to the database.
KeyRegistry used to maintain all the data keys.
func OpenKeyRegistry(opt KeyRegistryOptions) (*KeyRegistry, error)
OpenKeyRegistry opens key registry if it exists, otherwise it'll create key registry and returns key registry.
func (kr *KeyRegistry) Close() error
Close closes the key registry.
type KeyRegistryOptions struct { Dir string ReadOnly bool EncryptionKey []byte EncryptionKeyRotationDuration time.Duration InMemory bool }
type Logger interface { Errorf(string, ...interface{}) Warningf(string, ...interface{}) Infof(string, ...interface{}) Debugf(string, ...interface{}) }
Logger is implemented by any logging system that is used for standard logs.
type Manifest struct { Levels []levelManifest Tables map[uint64]TableManifest // Contains total number of creation and deletion changes in the manifest -- used to compute // whether it'd be useful to rewrite the manifest. Creations int Deletions int }
Manifest represents the contents of the MANIFEST file in a Badger store.
The MANIFEST file describes the startup state of the db -- all LSM files and what level they're at.
It consists of a sequence of ManifestChangeSet objects. Each of these is treated atomically, and contains a sequence of ManifestChange's (file creations/deletions) which we use to reconstruct the manifest at startup.
ReplayManifestFile reads the manifest file and constructs two manifest objects. (We need one immutable copy and one mutable copy of the manifest. Easiest way is to construct two of them.) Also, returns the last offset after a completely read manifest entry -- the file must be truncated at that point before further appends are made (if there is a partial entry after that). In normal conditions, truncOffset is the file size.
MergeFunc accepts two byte slices, one representing an existing value, and another representing a new value that needs to be ‘merged’ into it. MergeFunc contains the logic to perform the ‘merge’ and return an updated value. MergeFunc could perform operations like integer addition, list appends etc. Note that the ordering of the operands is maintained.
MergeOperator represents a Badger merge operator.
func (op *MergeOperator) Add(val []byte) error
Add records a value in Badger which will eventually be merged by a background routine into the values that were recorded by previous invocations to Add().
func (op *MergeOperator) Get() ([]byte, error)
Get returns the latest value for the merge operator, which is derived by applying the merge function to all the values added so far.
If Add has not been called even once, Get will return ErrKeyNotFound.
func (op *MergeOperator) Stop()
Stop waits for any pending merge to complete and then stops the background goroutine.
type Options struct { Dir string ValueDir string SyncWrites bool TableLoadingMode options.FileLoadingMode ValueLogLoadingMode options.FileLoadingMode NumVersionsToKeep int ReadOnly bool Truncate bool Logger Logger Compression options.CompressionType InMemory bool MaxTableSize int64 LevelSizeMultiplier int MaxLevels int ValueThreshold int NumMemtables int // Changing BlockSize across DB runs will not break badger. The block size is // read from the block index stored at the end of the table. BlockSize int BloomFalsePositive float64 KeepL0InMemory bool BlockCacheSize int64 IndexCacheSize int64 LoadBloomsOnOpen bool NumLevelZeroTables int NumLevelZeroTablesStall int LevelOneSize int64 ValueLogFileSize int64 ValueLogMaxEntries uint32 NumCompactors int CompactL0OnClose bool LogRotatesToFlush int32 ZSTDCompressionLevel int // When set, checksum will be validated for each entry read from the value log file. VerifyValueChecksum bool // Encryption related options. EncryptionKey []byte // encryption key EncryptionKeyRotationDuration time.Duration // key rotation duration // BypassLockGaurd will bypass the lock guard on badger. Bypassing lock // guard can cause data corruption if multiple badger instances are using // the same directory. Use this options with caution. BypassLockGuard bool // ChecksumVerificationMode decides when db should verify checksums for SSTable blocks. ChecksumVerificationMode options.ChecksumVerificationMode // DetectConflicts determines whether the transactions would be checked for // conflicts. The transactions can be processed at a higher rate when // conflict detection is disabled. DetectConflicts bool // contains filtered or unexported fields }
Options are params for creating DB object.
This package provides DefaultOptions which contains options that should work for most applications. Consider using that as a starting point before customizing it for your own needs.
Each option X is documented on the WithX method.
DefaultOptions sets a list of recommended options for good performance. Feel free to modify these to suit your needs with the WithX methods.
LSMOnlyOptions follows from DefaultOptions, but sets a higher ValueThreshold so values would be collocated with the LSM tree, with value log largely acting as a write-ahead log only. These options would reduce the disk usage of value log, and make Badger act more like a typical LSM tree.
Debugf logs a DEBUG message to the logger specified in opts.
Errorf logs an ERROR log message to the logger specified in opts or to the global logger if no logger is specified in opts.
Infof logs an INFO message to the logger specified in opts.
Warningf logs a WARNING message to the logger specified in opts.
WithBlockCacheSize returns a new Options value with BlockCacheSize set to the given value.
This value specifies how much data cache should hold in memory. A small size of cache means lower memory consumption and lookups/iterations would take longer. It is recommended to use a cache if you're using compression or encryption. If compression and encryption both are disabled, adding a cache will lead to unnecessary overhead which will affect the read performance. Setting size to zero disables the cache altogether.
Default value of BlockCacheSize is zero.
WithBlockSize returns a new Options value with BlockSize set to the given value.
BlockSize sets the size of any block in SSTable. SSTable is divided into multiple blocks internally. Each block is compressed using prefix diff encoding.
The default value of BlockSize is 4KB.
WithBloomFalsePositive returns a new Options value with BloomFalsePositive set to the given value.
BloomFalsePositive sets the false positive probability of the bloom filter in any SSTable. Before reading a key from table, the bloom filter is checked for key existence. BloomFalsePositive might impact read performance of DB. Lower BloomFalsePositive value might consume more memory.
The default value of BloomFalsePositive is 0.01.
Setting this to 0 disables the bloom filter completely.
WithBypassLockGuard returns a new Options value with BypassLockGuard set to the given value.
When BypassLockGuard option is set, badger will not acquire a lock on the directory. This could lead to data corruption if multiple badger instances write to the same data directory. Use this option with caution.
The default value of BypassLockGuard is false.
func (opt Options) WithChecksumVerificationMode(cvMode options.ChecksumVerificationMode) Options
WithChecksumVerificationMode returns a new Options value with ChecksumVerificationMode set to the given value.
ChecksumVerificationMode indicates when the db should verify checksums for SSTable blocks.
The default value of VerifyValueChecksum is options.NoVerification.
WithCompactL0OnClose returns a new Options value with CompactL0OnClose set to the given value.
CompactL0OnClose determines whether Level 0 should be compacted before closing the DB. This ensures that both reads and writes are efficient when the DB is opened later. CompactL0OnClose is set to true if KeepL0InMemory is set to true.
The default value of CompactL0OnClose is true.
func (opt Options) WithCompression(cType options.CompressionType) Options
WithCompression returns a new Options value with Compression set to the given value.
When compression is enabled, every block will be compressed using the specified algorithm. This option doesn't affect existing tables. Only the newly created tables will be compressed.
The default compression algorithm used is zstd when built with Cgo. Without Cgo, the default is snappy. Compression is enabled by default.
WithDetectConflicts returns a new Options value with DetectConflicts set to the given value.
Detect conflicts options determines if the transactions would be checked for conflicts before committing them. When this option is set to false (detectConflicts=false) badger can process transactions at a higher rate. Setting this options to false might be useful when the user application deals with conflict detection and resolution.
The default value of Detect conflicts is True.
WithDir returns a new Options value with Dir set to the given value.
Dir is the path of the directory where key data will be stored in. If it doesn't exist, Badger will try to create it for you. This is set automatically to be the path given to `DefaultOptions`.
WithEncryptionKey return a new Options value with EncryptionKey set to the given value.
EncryptionKey is used to encrypt the data with AES. Type of AES is used based on the key size. For example 16 bytes will use AES-128. 24 bytes will use AES-192. 32 bytes will use AES-256.
WithEncryptionKeyRotationDuration returns new Options value with the duration set to the given value.
Key Registry will use this duration to create new keys. If the previous generated key exceed the given duration. Then the key registry will create new key.
WithInMemory returns a new Options value with Inmemory mode set to the given value.
When badger is running in InMemory mode, everything is stored in memory. No value/sst files are created. In case of a crash all data will be lost.
WithIndexCacheSize returns a new Options value with IndexCacheSize set to the given value.
This value specifies how much memory should be used by table indices. These indices include the block offsets and the bloomfilters. Badger uses bloom filters to speed up lookups. Each table has its own bloom filter and each bloom filter is approximately of 5 MB.
Zero value for IndexCacheSize means all the indices will be kept in memory and the cache is disabled.
The default value of IndexCacheSize is 0 which means all indices are kept in memory.
WithKeepL0InMemory returns a new Options value with KeepL0InMemory set to the given value.
When KeepL0InMemory is set to true we will keep all Level 0 tables in memory. This leads to better performance in writes as well as compactions. In case of DB crash, the value log replay will take longer to complete since memtables and all level 0 tables will have to be recreated. This option also sets CompactL0OnClose option to true.
The default value of KeepL0InMemory is false.
WithLevelOneSize returns a new Options value with LevelOneSize set to the given value.
LevelOneSize sets the maximum total size for Level 1.
The default value of LevelOneSize is 20MB.
WithLevelSizeMultiplier returns a new Options value with LevelSizeMultiplier set to the given value.
LevelSizeMultiplier sets the ratio between the maximum sizes of contiguous levels in the LSM. Once a level grows to be larger than this ratio allowed, the compaction process will be
triggered.
The default value of LevelSizeMultiplier is 15.
WithLoadBloomsOnOpen returns a new Options value with LoadBloomsOnOpen set to the given value.
Badger uses bloom filters to speed up key lookups. When LoadBloomsOnOpen is set to false, bloom filters will be loaded lazily and not on DB open. Set this option to false to reduce the time taken to open the DB.
The default value of LoadBloomsOnOpen is true.
WithLogRotatesToFlush returns a new Options value with LogRotatesToFlush set to the given value.
LogRotatesToFlush sets the number of value log file rotates after which the Memtables are flushed to disk. This is useful in write loads with fewer keys and larger values. This work load would fill up the value logs quickly, while not filling up the Memtables. Thus, on a crash and restart, the value log head could cause the replay of a good number of value log files which can slow things on start.
The default value of LogRotatesToFlush is 2.
WithLogger returns a new Options value with Logger set to the given value.
Logger provides a way to configure what logger each value of badger.DB uses.
The default value of Logger writes to stderr using the log package from the Go standard library.
WithLoggingLevel returns a new Options value with logging level of the default logger set to the given value. LoggingLevel sets the level of logging. It should be one of DEBUG, INFO, WARNING or ERROR levels.
The default value of LoggingLevel is INFO.
WithMaxLevels returns a new Options value with MaxLevels set to the given value.
Maximum number of levels of compaction allowed in the LSM.
The default value of MaxLevels is 7.
WithMaxTableSize returns a new Options value with MaxTableSize set to the given value.
MaxTableSize sets the maximum size in bytes for each LSM table or file.
The default value of MaxTableSize is 64MB.
WithNumCompactors returns a new Options value with NumCompactors set to the given value.
NumCompactors sets the number of compaction workers to run concurrently. Setting this to zero stops compactions, which could eventually cause writes to block forever.
The default value of NumCompactors is 2. One is dedicated just for L0 and L1.
WithNumLevelZeroTables returns a new Options value with NumLevelZeroTables set to the given value.
NumLevelZeroTables sets the maximum number of Level 0 tables before compaction starts.
The default value of NumLevelZeroTables is 5.
WithNumLevelZeroTablesStall returns a new Options value with NumLevelZeroTablesStall set to the given value.
NumLevelZeroTablesStall sets the number of Level 0 tables that once reached causes the DB to stall until compaction succeeds.
The default value of NumLevelZeroTablesStall is 10.
WithNumMemtables returns a new Options value with NumMemtables set to the given value.
NumMemtables sets the maximum number of tables to keep in memory before stalling.
The default value of NumMemtables is 5.
WithNumVersionsToKeep returns a new Options value with NumVersionsToKeep set to the given value.
NumVersionsToKeep sets how many versions to keep per key at most.
The default value of NumVersionsToKeep is 1.
WithReadOnly returns a new Options value with ReadOnly set to the given value.
When ReadOnly is true the DB will be opened on read-only mode. Multiple processes can open the same Badger DB. Note: if the DB being opened had crashed before and has vlog data to be replayed, ReadOnly will cause Open to fail with an appropriate message.
The default value of ReadOnly is false.
WithSyncWrites returns a new Options value with SyncWrites set to the given value.
When SyncWrites is true all writes are synced to disk. Setting this to false would achieve better performance, but may cause data loss in case of crash.
The default value of SyncWrites is true.
func (opt Options) WithTableLoadingMode(val options.FileLoadingMode) Options
WithTableLoadingMode returns a new Options value with TableLoadingMode set to the given value.
TableLoadingMode indicates which file loading mode should be used for the LSM tree data files.
The default value of TableLoadingMode is options.MemoryMap.
WithTruncate returns a new Options value with Truncate set to the given value.
Truncate indicates whether value log files should be truncated to delete corrupt data, if any. This option is ignored when ReadOnly is true.
The default value of Truncate is false.
WithValueDir returns a new Options value with ValueDir set to the given value.
ValueDir is the path of the directory where value data will be stored in. If it doesn't exist, Badger will try to create it for you. This is set automatically to be the path given to `DefaultOptions`.
WithValueLogFileSize returns a new Options value with ValueLogFileSize set to the given value.
ValueLogFileSize sets the maximum size of a single value log file.
The default value of ValueLogFileSize is 1GB.
func (opt Options) WithValueLogLoadingMode(val options.FileLoadingMode) Options
WithValueLogLoadingMode returns a new Options value with ValueLogLoadingMode set to the given value.
ValueLogLoadingMode indicates which file loading mode should be used for the value log data files.
The default value of ValueLogLoadingMode is options.MemoryMap.
WithValueLogMaxEntries returns a new Options value with ValueLogMaxEntries set to the given value.
ValueLogMaxEntries sets the maximum number of entries a value log file can hold approximately. A actual size limit of a value log file is the minimum of ValueLogFileSize and ValueLogMaxEntries.
The default value of ValueLogMaxEntries is one million (1000000).
WithValueThreshold returns a new Options value with ValueThreshold set to the given value.
ValueThreshold sets the threshold used to decide whether a value is stored directly in the LSM tree or separately in the log value files.
The default value of ValueThreshold is 1 KB, but LSMOnlyOptions sets it to maxValueThreshold.
WithVerifyValueChecksum returns a new Options value with VerifyValueChecksum set to the given value.
When VerifyValueChecksum is set to true, checksum will be verified for every entry read from the value log. If the value is stored in SST (value size less than value threshold) then the checksum validation will not be done.
The default value of VerifyValueChecksum is False.
WithZSTDCompressionLevel returns a new Options value with ZSTDCompressionLevel set to the given value.
The ZSTD compression algorithm supports 20 compression levels. The higher the compression level, the better is the compression ratio but lower is the performance. Lower levels have better performance and higher levels have better compression ratios. We recommend using level 1 ZSTD Compression Level. Any level higher than 1 seems to deteriorate badger's performance. The following benchmarks were done on a 4 KB block size (default block size). The compression is ratio supposed to increase with increasing compression level but since the input for compression algorithm is small (4 KB), we don't get significant benefit at level 3. It is advised to write your own benchmarks before choosing a compression algorithm or level.
no_compression-16 10 502848865 ns/op 165.46 MB/s - zstd_compression/level_1-16 7 739037966 ns/op 112.58 MB/s 2.93 zstd_compression/level_3-16 7 756950250 ns/op 109.91 MB/s 2.72 zstd_compression/level_15-16 1 11135686219 ns/op 7.47 MB/s 4.38 Benchmark code can be found in table/builder_test.go file
Sequence represents a Badger sequence.
Next would return the next integer in the sequence, updating the lease by running a transaction if needed.
Release the leased sequence to avoid wasted integers. This should be done right before closing the associated DB. However it is valid to use the sequence after it was released, causing a new lease with full bandwidth.
type Stream struct { // Prefix to only iterate over certain range of keys. If set to nil (default), Stream would // iterate over the entire DB. Prefix []byte // Number of goroutines to use for iterating over key ranges. Defaults to 16. NumGo int // Badger would produce log entries in Infof to indicate the progress of Stream. LogPrefix can // be used to help differentiate them from other activities. Default is "Badger.Stream". LogPrefix string // ChooseKey is invoked each time a new key is encountered. Note that this is not called // on every version of the value, only the first encountered version (i.e. the highest version // of the value a key has). ChooseKey can be left nil to select all keys. // // Note: Calls to ChooseKey are concurrent. ChooseKey func(item *Item) bool // KeyToList, similar to ChooseKey, is only invoked on the highest version of the value. It // is upto the caller to iterate over the versions and generate zero, one or more KVs. It // is expected that the user would advance the iterator to go through the versions of the // values. However, the user MUST immediately return from this function on the first encounter // with a mismatching key. See example usage in ToList function. Can be left nil to use ToList // function by default. // // Note: Calls to KeyToList are concurrent. KeyToList func(key []byte, itr *Iterator) (*pb.KVList, error) // This is the method where Stream sends the final output. All calls to Send are done by a // single goroutine, i.e. logic within Send method can expect single threaded execution. Send func(*pb.KVList) error // contains filtered or unexported fields }
Stream provides a framework to concurrently iterate over a snapshot of Badger, pick up key-values, batch them up and call Send. Stream does concurrent iteration over many smaller key ranges. It does NOT send keys in lexicographical sorted order. To get keys in sorted order, use Iterator.
Backup dumps a protobuf-encoded list of all entries in the database into the given writer, that are newer than or equal to the specified version. It returns a timestamp(version) indicating the version of last entry that was dumped, which after incrementing by 1 can be passed into a later invocation to generate an incremental dump of entries that have been added/modified since the last invocation of Stream.Backup().
This can be used to backup the data in a database at a given point in time.
Orchestrate runs Stream. It picks up ranges from the SSTables, then runs NumGo number of goroutines to iterate over these ranges and batch up KVs in lists. It concurrently runs a single goroutine to pick these lists, batch them up further and send to Output.Send. Orchestrate also spits logs out to Infof, using provided LogPrefix. Note that all calls to Output.Send are serial. In case any of these steps encounter an error, Orchestrate would stop execution and return that error. Orchestrate can be called multiple times, but in serial order.
ToList is a default implementation of KeyToList. It picks up all valid versions of the key, skipping over deleted or expired keys.
StreamWriter is used to write data coming from multiple streams. The streams must not have any overlapping key ranges. Within each stream, the keys must be sorted. Badger Stream framework is capable of generating such an output. So, this StreamWriter can be used at the other end to build BadgerDB at a much faster pace by writing SSTables (and value logs) directly to LSM tree levels without causing any compactions at all. This is way faster than using batched writer or using transactions, but only applicable in situations where the keys are pre-sorted and the DB is being bootstrapped. Existing data would get deleted when using this writer. So, this is only useful when restoring from backup or replicating DB across servers.
StreamWriter should not be called on in-use DB instances. It is designed only to bootstrap new DBs.
func (sw *StreamWriter) Flush() error
Flush is called once we are done writing all the entries. It syncs DB directories. It also updates Oracle with maxVersion found in all entries (if DB is not managed).
func (sw *StreamWriter) Prepare() error
Prepare should be called before writing any entry to StreamWriter. It deletes all data present in existing DB, stops compactions and any writes being done by other means. Be very careful when calling Prepare, because it could result in permanent data loss. Not calling Prepare would result in a corrupt Badger instance.
func (sw *StreamWriter) Write(kvs *pb.KVList) error
Write writes KVList to DB. Each KV within the list contains the stream id which StreamWriter would use to demux the writes. Write is thread safe and can be called concurrently by multiple goroutines.
type TableInfo struct { ID uint64 Level int Left []byte Right []byte KeyCount uint64 // Number of keys in the table EstimatedSz uint64 IndexSz int }
TableInfo represents the information about a table.
type TableManifest struct { Level uint8 KeyID uint64 Compression options.CompressionType }
TableManifest contains information about a specific table in the LSM tree.
Txn represents a Badger transaction.
Commit commits the transaction, following these steps:
1. If there are no writes, return immediately.
2. Check if read rows were updated since txn started. If so, return ErrConflict.
3. If no conflict, generate a commit timestamp and update written rows' commit ts.
4. Batch up all writes, write them to value log and LSM tree.
5. If callback is provided, Badger will return immediately after checking for conflicts. Writes to the database will happen in the background. If there is a conflict, an error will be returned and the callback will not run. If there are no conflicts, the callback will be called in the background upon successful completion of writes or any error during write.
If error is nil, the transaction is successfully committed. In case of a non-nil error, the LSM tree won't be updated, so there's no need for any rollback.
CommitAt commits the transaction, following the same logic as Commit(), but at the given commit timestamp. This will panic if not used with managed transactions.
This is only useful for databases built on top of Badger (like Dgraph), and can be ignored by most users.
CommitWith acts like Commit, but takes a callback, which gets run via a goroutine to avoid blocking this function. The callback is guaranteed to run, so it is safe to increment sync.WaitGroup before calling CommitWith, and decrementing it in the callback; to block until all callbacks are run.
Delete deletes a key.
This is done by adding a delete marker for the key at commit timestamp. Any reads happening before this timestamp would be unaffected. Any reads after this commit would see the deletion.
The current transaction keeps a reference to the key byte slice argument. Users must not modify the key until the end of the transaction.
Discard discards a created transaction. This method is very important and must be called. Commit method calls this internally, however, calling this multiple times doesn't cause any issues. So, this can safely be called via a defer right when transaction is created.
NOTE: If any operations are run on a discarded transaction, ErrDiscardedTxn is returned.
Get looks for key and returns corresponding Item. If key is not found, ErrKeyNotFound is returned.
func (txn *Txn) NewIterator(opt IteratorOptions) *Iterator
NewIterator returns a new iterator. Depending upon the options, either only keys, or both key-value pairs would be fetched. The keys are returned in lexicographically sorted order. Using prefetch is recommended if you're doing a long running iteration, for performance.
Multiple Iterators: For a read-only txn, multiple iterators can be running simultaneously. However, for a read-write txn, iterators have the nuance of being a snapshot of the writes for the transaction at the time iterator was created. If writes are performed after an iterator is created, then that iterator will not be able to see those writes. Only writes performed before an iterator was created can be viewed.
Code:
dir, err := ioutil.TempDir("", "badger-test") if err != nil { panic(err) } defer removeDir(dir) db, err := Open(DefaultOptions(dir)) if err != nil { panic(err) } defer db.Close() bkey := func(i int) []byte { return []byte(fmt.Sprintf("%09d", i)) } bval := func(i int) []byte { return []byte(fmt.Sprintf("%025d", i)) } txn := db.NewTransaction(true) // Fill in 1000 items n := 1000 for i := 0; i < n; i++ { err := txn.SetEntry(NewEntry(bkey(i), bval(i))) if err != nil { panic(err) } } err = txn.Commit() if err != nil { panic(err) } opt := DefaultIteratorOptions opt.PrefetchSize = 10 // Iterate over 1000 items var count int err = db.View(func(txn *Txn) error { it := txn.NewIterator(opt) defer it.Close() for it.Rewind(); it.Valid(); it.Next() { count++ } return nil }) if err != nil { panic(err) } fmt.Printf("Counted %d elements", count)
Output:
Counted 1000 elements
func (txn *Txn) NewKeyIterator(key []byte, opt IteratorOptions) *Iterator
NewKeyIterator is just like NewIterator, but allows the user to iterate over all versions of a single key. Internally, it sets the Prefix option in provided opt, and uses that prefix to additionally run bloom filter lookups before picking tables from the LSM tree.
ReadTs returns the read timestamp of the transaction.
Set adds a key-value pair to the database. It will return ErrReadOnlyTxn if update flag was set to false when creating the transaction.
The current transaction keeps a reference to the key and val byte slice arguments. Users must not modify key and val until the end of the transaction.
SetEntry takes an Entry struct and adds the key-value pair in the struct, along with other metadata to the database.
The current transaction keeps a reference to the entry passed in argument. Users must not modify the entry until the end of the transaction.
WriteBatch holds the necessary info to perform batched writes.
func (wb *WriteBatch) Cancel()
Cancel function must be called if there's a chance that Flush might not get called. If neither Flush or Cancel is called, the transaction oracle would never get a chance to clear out the row commit timestamp map, thus causing an unbounded memory consumption. Typically, you can call Cancel as a defer statement right after NewWriteBatch is called.
Note that any committed writes would still go through despite calling Cancel.
func (wb *WriteBatch) Delete(k []byte) error
Delete is equivalent of Txn.Delete.
func (wb *WriteBatch) DeleteAt(k []byte, ts uint64) error
DeleteAt is equivalent of Txn.Delete but accepts a delete timestamp.
func (wb *WriteBatch) Error() error
Error returns any errors encountered so far. No commits would be run once an error is detected.
func (wb *WriteBatch) Flush() error
Flush must be called at the end to ensure that any pending writes get committed to Badger. Flush returns any error stored by WriteBatch.
func (wb *WriteBatch) Set(k, v []byte) error
Set is equivalent of Txn.Set().
func (wb *WriteBatch) SetEntry(e *Entry) error
SetEntry is the equivalent of Txn.SetEntry.
func (wb *WriteBatch) SetEntryAt(e *Entry, ts uint64) error
SetEntryAt is the equivalent of Txn.SetEntry but it also allows setting version for the entry. SetEntryAt can be used only in managed mode.
func (wb *WriteBatch) SetMaxPendingTxns(max int)
SetMaxPendingTxns sets a limit on maximum number of pending transactions while writing batches. This function should be called before using WriteBatch. Default value of MaxPendingTxns is 16 to minimise memory usage.
func (wb *WriteBatch) Write(kvList *pb.KVList) error
Package badger imports 40 packages (graph) and is imported by 507 packages. Updated 2020-09-18. Refresh now. Tools for package owners. | https://godoc.org/github.com/dgraph-io/badger | CC-MAIN-2020-40 | en | refinedweb |
In this blog we would discuss about Messages and internationalization in Play Framework which would drive us through the implementation and testing of the Play MessagesAPI.
Messages and internationalization
Play supports Internationalization (i18n) out of the box by leveraging the underlying internationalization support. With Play you are able to customize the text that appears in a view based on the user’s Locale.
Specifying languages supported by your application
A valid language code is specified by a valid ISO 639-2 language code, optionally followed by a valid ISO 3166-1 alpha-2 country code, such as
fr or
en-US.
To start you need to specify the languages supported by your application in the
conf/application.conf file:
play.i18n.langs = [ "en", "en-US", "fr" ]
File Externalization
You can externalize messages in the
conf/messages.xxx files.
The default
conf/messages file matches all languages. Additionally you can specify language-specific message files such as
conf/messages.fr or
conf/messages.hi. play.api.i18n.I18nSupport class AppController.
Implementation of Messages Format
Messages are formatted using the
java.text.MessageFormat library. For example, assuming you have message defined like:
app.name=The application {0} is running on port {1}.
You can then specify parameters as:
Messages("
app.name
", "Message Example", 9000)
The output of above code is:
The application
Message Example
is running on port 9000.
Use."
Retrieving supported language from an HTTP request
You can retrieve the languages supported by a specific HTTP request:
def index = Action { request => Ok("Languages: " + request.acceptLanguages.map(_.code).mkString(", ")) }
Testing Messages API
At the time of testing we can create a new object of DefaultMessagesApi and then pass it to the appropriate Controller/Services/Utils
new DefaultMessagesApi(Environment.simple(), app.configuration, new DefaultLangs(app.configuration))
Now we know about Play MessagesAPI implementations. So let’s start enjoy with i18n messages and Play easily. If you have any question then feel free to comment on the same 🙂 Stay tuned.
1 thought on “Messages and internationalization in Play 2.4.x3 min read”
Reblogged this on Play!ng with Scala. | https://blog.knoldus.com/messages-and-internationalization-in-play/ | CC-MAIN-2021-04 | en | refinedweb |
Consider the following short program. demo() is a trivial async function that creates a QObject instance, connects a Python signal, and then exits. When we call `send(None)` on this object, we expect to get a StopIteration exception.
-----
from PySide2 import QtCore
class MyQObject(QtCore.QObject):
sig = QtCore.Signal()
async def demo():
myqobject = MyQObject()
myqobject.sig.connect(lambda: None)
return 1
coro = demo()
try:
coro.send(None)
except StopIteration as exc:
print(f"OK: got {exc!r}")
except SystemError as exc:
print(f"WTF: got {exc!r}")
-----
Actual output (tested on 3.8.2, but I think the code is present on all versions):
-----
StopIteration: 1
WTF: got SystemError("<method 'send' of 'coroutine' objects> returned NULL without setting an error")
-----
So there are two weird things here: the StopIteration exception is being printed on the console for some reason, and then the actual `send` method is raising SystemError instead of StopIteration.
Here's what I think is happening:
In genobject.c:gen_send_ex, when the coroutine finishes, we call _PyGen_SetStopIterationValue to raise the StopIteration exception:
Then, after that, gen_send_ex clears the frame object and drops references to it:
At this point, the reference count for `myqobject` drops to zero, so its destructor is invoked. And this destructor ends up clearing the current exception again. Here's a stack trace:
-----
#0 0x0000000000677eb7 in _PyErr_Fetch (p_traceback=0x7ffd9fda77d0,
p_value=0x7ffd9fda77d8, p_type=0x7ffd9fda77e0, tstate=0x2511280)
at ../Python/errors.c:399
#1 _PyErr_PrintEx (tstate=0x2511280, set_sys_last_vars=1) at ../Python/pythonrun.c:670
#2 0x00007f1afb455967 in PySide::GlobalReceiverV2::qt_metacall(QMetaObject::Call, int, void**) ()
from /home/njs/.user-python3.8/lib/python3.8/site-packages/PySide2/libpyside2.abi3.so.5.14
#3 0x00007f1afaf2f657 in void doActivate<false>(QObject*, int, void**) ()
from /home/njs/.user-python3.8/lib/python3.8/site-packages/PySide2/Qt/lib/libQt5Core.so.5
#4 0x00007f1afaf2a37f in QObject::destroyed(QObject*) ()
from /home/njs/.user-python3.8/lib/python3.8/site-packages/PySide2/Qt/lib/libQt5Core.so.5
#5 0x00007f1afaf2d742 in QObject::~QObject() ()
from /home/njs/.user-python3.8/lib/python3.8/site-packages/PySide2/Qt/lib/libQt5Core.so.5
#6 0x00007f1afb852681 in QObjectWrapper::~QObjectWrapper() ()
from /home/njs/.user-python3.8/lib/python3.8/site-packages/PySide2/QtCore.abi3.so
#7 0x00007f1afbf785bb in SbkDeallocWrapperCommon ()
from /home/njs/.user-python3.8/lib/python3.8/site-packages/shiboken2/libshiboken2.abi3.so.5.14
#8 0x00000000005a4fbc in subtype_dealloc (self=<optimized out>)
at ../Objects/typeobject.c:1289
#9 0x00000000005e8c08 in _Py_Dealloc (op=<optimized out>) at ../Objects/object.c:2215
#10 _Py_DECREF (filename=0x881795 "../Objects/frameobject.c", lineno=430,
op=<optimized out>) at ../Include/object.h:478
#11 frame_dealloc (f=Frame 0x7f1afc572dd0, for file qget-min.py, line 12, in demo ())
at ../Objects/frameobject.c:430
#12 0x00000000004fdf30 in _Py_Dealloc (
op=Frame 0x7f1afc572dd0, for file qget-min.py, line 12, in demo ())
at ../Objects/object.c:2215
#13 _Py_DECREF (filename=<synthetic pointer>, lineno=279,
op=Frame 0x7f1afc572dd0, for file qget-min.py, line 12, in demo ())
at ../Include/object.h:478
#14 gen_send_ex (gen=0x7f1afbd08440, arg=<optimized out>, exc=<optimized out>,
closing=<optimized out>) at ../Objects/genobject.c:279
------
We can read the source for PySide::GlobalReceiverV2::qt_metacall here:
And we see that it (potentially) runs some arbitrary Python code, and then handles any exceptions by doing:
if (PyErr_Occurred()) {
PyErr_Print();
}
This is intended to catch exceptions caused by the code it just executed, but in this case, gen_send_ex ends up invoking it with an exception already active, so PySide2 gets confused and clears the StopIteration.
-----------------------------------
OK so... what to do. I'm actually not 100% certain whether this is a CPython bug or a PySide2 bug.
In PySide2, it could be worked around by saving the exception state before executing that code, and then restoring it afterwards.
In gen_send_ex, it could be worked around by dropping the reference to the frame before setting the StopIteration exception.
In CPython in general, it could be worked around by not invoking deallocators with a live exception... I'm actually pretty surprised that this is even possible! It seems like having a live exception when you start executing arbitrary Python code would be bad. So maybe that's the real bug? Adding both "asyncio" and "memory management" interest groups to the nosy. | https://bugs.python.org/msg370046 | CC-MAIN-2021-04 | en | refinedweb |
- ChatterFeed
- 23Best Answers
- 2Likes Received
- 1Likes Given
- 117Questions
- 241Replies
How to I check who updated records
First trigger & need help correcting error
Greatly appreciate any help!
My Trigger:
trigger oliUpdate on OpportunityLineItem (after insert, after update) { Set <String> oliID = New Set <String> (); For (OpportunityLineItem oli: Trigger.new) { if (oli.OpportunityId != Null ) { oliID.add (oli.Id); } } If (oliID.size ()> 0) { List <OpportunityLineItem> upOpiList = new List <OpportunityLineItem> (); For (OpportunityLineItem ol: [SELECT Id, Net_Amount__c, UnitPrice FROM OpportunityLineItem WHERE id in: oliID AND Net_Amount__c > 0.0]) { ol.UnitPrice = ol.Net_Amount__c; UpOpiList.add (ol); } If (upOpiList.size ()> 0) update upOpiList; } }
Batch Apex - how to populate convertedopportunityID and sending opportunity records below leads
global class LoanOfficerBatch implements Database.Batchable<sObject> { public String query = 'SELECT Loan_Officer_1a__c,Loan_Officer_1a__r.Email, ConvertedOpportunityId, Name, Phone, Starting_Credit_Score__c, ' + 'Status, Enrolled_On__c, Est_Re_Pull_Date__c, Realtor_Name__c ' + ' FROM Lead'; /*JMA:: add ConvertedOpportunityId field*/ public EmailTemplate templateId = [Select Id,HtmlValue,Subject from EmailTemplate where name = 'LoanOfficerRecord' LIMIT 1]; global Database.QueryLocator start(Database.BatchableContext bc) { query += ' WHERE CreatedDate >= LAST_MONTH AND CreatedDate <= THIS_MONTH AND Loan_Officer_1a__c != null'; return Database.getQueryLocator(query); } global void execute(Database.BatchableContext BC, list<Lead> allLeads) { //JMA:: Create a map of <Id, Opportunity> Map<Id,List<Opportunity>> OpptyMap = new Map<Id,List<Opportunity>>(); //JMA:: Query Opportunities by using Lead.ConvertedOpportunityId. You need to first create a set of Id from Lead.ConvertedOpportunityId //JMA:: populate the map <Id, Opportunity> Map<Id,List<Lead>> leadMap = new Map<Id,List<Lead>>(); List<Messaging.SingleEmailMessage> mails = new List<Messaging.SingleEmailMEssage>(); if(allLeads != null && allLeads.size() > 0){ for(Lead l: allLeads){ if(!leadMap.containsKey(l.Loan_Officer_1a__c)){ leadMap.put(l.Loan_Officer_1a__c, new List<lead>()); } leadMap.get(l.Loan_Officer_1a__c).add(l); } } if(leadMap.keySet().size() > 0){ Map<Id,Contact> officers = new Map<Id,Contact>([SELECT Id,Email,Name FROM Contact WHERE Id IN: leadMap.keySet()]); for(Id i: leadMap.keySet()){ Contact con = officers.get(i); System.debug(con); if(String.isnOtBlank(con.Email)){ Messaging.SingleEmailMessage mail = new Messaging.SingleEmailMessage(); mail.setToAddresses(new String[]{con.EMail}); mail.setSubject(templateId.Subject); String html = templateId.HtmlValue; html = html.replace('||OfficerName||',con.Name); String leadsTable = '<table cellpadding="3" cellspacing="3" width="100%" align="center" border="1" style="border-collapse:collapse;">'+ '<tr style="font-weight:bold;"><td>Name</td><td>Phone</td><td>Starting Credit Score</td><td>Status</td><td>Enrolled On</td><td>Est. Re Pull Date</td><td>Realtor Name</td></tr>'; for(Lead l: leadMap.get(i)){ //JMA:: for each lead you can get the associated Opportunity from map <Id, Opportunity> leadsTable += '<tr><td>'+l.Name+'</td>'+ '<td>'+l.Phone+'</td><td>'+l.Starting_Credit_Score__c+'</td><td>'+l.Status+'</td><td>'+l.Enrolled_On__c+'</td>'+ '<td>'+l.Est_Re_Pull_Date__c+'</td><td>'+l.Realtor_Name__c+'</td></tr>'; } leadsTable += '</table>'; html = html.replace('||Leads||',leadsTable); html = html.replace('null',' '); mail.setHTMLBody(html); mails.add(mail); } } } if(mails.size() > 0){ //Messaging.sendEmail(mails); } } global void finish(Database.BatchableContext BC) { } }
Fileupload lightning controller not working
I am new to lightning and trying use file upload lightning components.
I used exact same code which is mentioned in salesforce website
just replaced recordid to my account id
But it is not working.
Please help me on this
Thanks in Advance
Apex rest callout - Sample
New to Integration, learning how the parse json and xml from response. I have one url for json() but not for xml.
Thanks in Advance.....
HttpRequest req=new HttpRequest();
req.setendpoint(' ? ');
req.setmethod('GET');
http req1=new http();
httpResponse xmlresponse=req1.send(req);
Modify Reports for Other Audiences Trailhead Module Error
I am having trouble completing a module, specifically, the second hands-on challenge. In the challenge you are asked to modify the Opportunity Pipeline report and create a Key Accounts report with a probability greater than 30%. When I go to check the challenge I am told that the report does not show records with a probability greater than 30%. I have checked the report multiple times along with recreating the report in another trailhead playground.
Has anyone experienced this problem?
Is It Possible to Access another User Record Without knowledge Them??
Can Anyone give me a good solution !!
Example:-
Person A and Person B Working in Same company , Person A Will not continue his job hereafter. Suddenly he leaves his job without give any information. So Person B need to handle him record as well. How it's Possible ???
1. Person A Using Different Profile .
2. Person A Records OWD is Private.
3. How person B can access that records ???
Thanks In Advance !
Regards,
Soundar P
How to find The API in salesforce?
Am very new to API.In my salesforce org, somebody implemented the API. I don't know whether its REST/SOAP.
Now I want to do some changes in that. Kindly let me know, where I to find the respective API details in my Org.
apex:outputField will not render a div tag correctly
From VF Page:
<div style="float: left; width: 20%;">
<apex:outputField
</div>
In the controller, if the Company__c is null, I want to set its value to '<div class="spacer" />' so that the div tag is rendered correctly on the page. My output at the moment is literally <div class="spacer" />.
The goal is to have 5 static columns on the page but if one of the custom fields for the custom object is null, I just want a placeholder there in the HTML so that everything continues to line up correctly.
APex code with process builder
How to access apex class method form process builder, I want to access apex class function by process builder but ,i am not able to access my class function.
Please suggest.
Thanks
Deepak
Omni-Channel
I am not aware about omni channel and where it's used. Can you please benefit of this.
Thanks
Romash
VF Page renderAs PDF (JavaScript not rendered)
I'm trying to render a VF Page which contains JavaScript to Generate a QR Code like so:
VF Page:
But PDF after the renderAs is all messed up:
Anyone knows how to render Javascript to PDF?
Display an image within a Visualforce Section dependent on Opportunity Record Type.
We have a visualforce section on all of our Opportortunity page layouts.
This section shows a particular image depending on what has been selected within the stage field.
Below is a sample of the code that we have in place.
Is it possible to split it up so that depending on which Opportunity Record Type is selected different images can be displayed against each stage?
For example if record type = X and the stage is Closed Won then show image A
If record type = Y and the Stage is closed Won then show image B
Thank you
<apex:page <div align="center"> ></apex:image> <apex:image</apex:image> <apex:image id="lost" value="{!$Resource.stageLost}" height="60" rendered="{!Opportunity.stageName = 'Closed Lost'}" </div> </apex:page>
calling webservice in future handler from a trigger | Error | System.CalloutException: You have uncommitted work pending. Please commit or rollback before calling out
I have following error:
System.CalloutException: You have uncommitted work pending. Please commit or rollback before calling out
My usecase:
I am calling a webservice in a future handler.
Other workaround i tried:
I tried to call the webservice without a future handler, but as i am calling in trigger it is giving me error.
Can someone please tell me any workaroud ?
thank you
Check Clone by process builder
if user cloned any opportunity then one of my process builder should not execute, after creation of this opportunity if user edit this opportunity then process builder should execute.
Please suggest
Task field update by API
I have a custom field on Task object, and now i want to update task by API. i have tried to update by Postman, but facing error.
{ "message": "insufficient access rights on object id", "errorCode": "INSUFFICIENT_ACCESS_ON_CROSS_REFERENCE_ENTITY", "fields": [] }
Please suggest
Regards
Mukesh
Script stop on addError in apex
I am using add error method, when criteria match then addError method call, but here script doenot stop and execute next loop.
i want to stop script when addError call. i have used break; return; but nothing happen
for (Opportunity currOpp: newOpportunities_1){ System.debug('currOpp name '+currOpp.name); if (currOpp.RecordTypeId!=PMRecordTypeId.Id ){ currOpp.addError(Label.pmr); } } for (Opportunity currOpp: newOpportunities_2){ System.debug('currOpp name '+currOpp.name); }
Thanks
Mukesh
inputCheckbox destroy
I want to destroy selected check box when user try to select all and click on delete buttion.
without destroy this is exist already and make extra check boxes that's was previously deleted so now i want to destroy selected items
Below code is creating check boxes:-
this is select All check box
<ui:inputCheckbox
this is list of check boxes:-
<aura:iteration <ui:inputCheckbox aura: </aura:iteration>
Please suggest
Thanks
Approval process display addError msg
Please suggest
Regards
Mukesh
ui:inputCurrency
I am using below code but not able to get maximum length on keyUp event.
<ui:inputCurrency
Please suggest
Object Id e.force:createRecord
I am creating contact by 'e.force:createRecord', but now i want to created contact record Id. after this event.
Can you please suggest.
Regards
Mukesh
Lightning web component error
I am trying to create a lwc project in VS code, but when i try to create
then facing error:
command 'sfdx.force.project.create' not found
I am using jdk1.8.0_231 for 64 bit
Lightning component picklist
I have multiple country picklist in lightning compont that's is iterating in Table TR
we i choose any picklist value 'UAS' and click on save button than this picklist should be disable.
but when i change of picklist then picklist disable on selection time. but i want to disable on save button.
Please suggest
<aura:iteration <tr> <td class="slds-cell-wrap"> <lightning:select <option value="--None--">--None--</option> <aura:iteration <option value="{!val.value}" selected="{!val.value==item.Country__c}">{!val.key}</option> </aura:iteration> </lightning:select> </td> </tr> </aura:iteration>
Sales console tab not close
on opportunity object -->> button & Link (Name:-AddProduct)
add URL parameter :-
/apex/OppProduct_VFP?oppId={!Opportunity.Id}
when user click on 'AddProduct' button then below VF page open in new tab with lightnig component:-
OppProduct_VFP:-
<apex:page <apex:includeLightning /> <div class="slds" style="margin-top:10px;margin-left:10px;"> <div id="lightning" /> </div> <script> $Lightning.use("c:AddProductApp", function() { $Lightning.createComponent( "c:AddProduct", {recordId : "{!OpportunityId}"}, "lightning", function(cmp) { }); }); </script> </apex:page>
i have a close button in AddProductController to close this tab but not able to closed this tab getting UNDEFINED in console log. i am not able to get TabId by this code.
Close : function(component, event, helper){ var workspaceAPI = component.find("workspace"); // mentioned in component workspaceAPI.isConsoleNavigation().then(function(consoleResponse) { console.log('consoleResponse-->>> '+consoleResponse) // return undefined workspaceAPI.getFocusedTabInfo().then(function(tabResponse) { console.log('tabResponse-->> '+tabResponse)// return undefined var isSubtab = tabResponse.isSubtab; console.log('isSubtab-->> '+isSubtab);// return undefined }); });Please suggest and let me know where i am wrong
Sales console tab
i am using below code for tab close
closeFocusedTab : function(component, event, helper) { var workspaceAPI = component.find("workspace"); //alert(workspaceAPI); workspaceAPI.getFocusedTabInfo().then(function(response) { //alert('aaaa -- '+response); var focusedTabId = response.tabId; workspaceAPI.closeTab({tabId: focusedTabId}); }) .catch(function(error) { console.log(error); }); }
Can you please suggest.
Attachment on record
I want to add attachment on record creation time in apex code, but in attachment custom code we need record Id, but record is not created. so how achive this functionality,
Example: when we fill any exam from then we can attachemtn or document, and then save the record.
Thanks
lightning component popup
i want to open a popup box that should be open when user logged In. when user closed this then this will not reopen for this session. if user logged in again then popup should be open .
what i have done:
create a lightning component and added this on home page layout. but this popup open again and again when home page refresh.
So can you please suggest what should i use sothat model popup should open once for current session.
Regards
Mukesh
Remote Objects in lightning component instead of VF page
I want to use Remote Objects in lightning component , below code is working in visualforce pages, but now i need to implement this code in lightning component.
for example:-
<!-- Remote Objects definition to set accessible sObjects and fields -->
<apex:remoteObjects >
<apex:remoteObjectModel
<apex:remoteObjectField
</apex:remoteObjectModel>
</apex:remoteObjects>
Thanks
Mukesh
SOQL Limit
In one class i have two soql:-
Example:-
public class test(){
List<Account> acc = [select id, name from Account Limit 40000];
List<Contact> con = [seldect id, email from Contact Limit 15000];
}
//above queries will return limit excede error or not
Case Reopen by Email
I want to reopen a case when any email comes from user. what i implemented:-
I creaed a workflow that execute on when agent close a case with email template taht's have a case thread id. but when customer reply this email then related case shouuld be reopen and this mail should be add in Email related list on that case.
Please suggest.
Regards
Mukesh
Profile Roles and Sharing Setting
How would sharing rule work in the below scenario
Suppose i create a new object called "XXXX". Now profile called "AAAA" doesn't have read, create, edit permission on it.
Q1: What would happen if I create a record of object "XXX" and share it with user which has profile "AAAA" and give him "Edit" permission on the record? Would user be able to see the record or edit the record? please qualify your answer.
Q2: Can anybody explain in what order access on record or object is granted in terms of OWD, Sharing rule, Role and profile?
Q3: If i set OWD setting as Public Read/Edit on Object "XXXX" but profile "AAAA" don't have read, create and edit permission on Object "XXXX" then the user who owns profile "AAAA" would be able to see and edit the records of object "XXXX"?
Q4: What would happen if profile "AAAA" has only Read permission on Object "XXXX" then user who owns profile "AAAA" would be able to see and edit ALL the records of object "XXXX"?
Q5: In order to work out OWD setting, at least profile must have Read permission on that particular object?
Q6: The user who is higher role in hierarchy would get owner permission on the records created by user who are lower in the roles means he can edit and delete the record as well. Is that correct?
Please share your best
Regards
Mukesh
File Uploading By Inline visualforce page
I have added a inline visualforce page on case page layout. Now i am able to upload files perfectlry, but now i want to add loader or spiner on file upload time. I add a spiner in visualforce page but when uploading compleatd. then this move on case Tab with files list not on same page.
<apex:page <apex:form > <style type="text/css"> .bPageBlock .detailList .dataCol { width: 100%; } .bPageBlock .detailList .dataCol { width: 107%; } dataCol.first { text-align: center; } .detailList td.dataCol.first.last:nth-child(1) { text-align: right; width: 50%; } .inner-tab { padding: 20px 0px; } </style> <apex:pageMessages /> <apex:actionStatus <apex:facet <apex:outputPanel > <img src="/img/loading32.gif" width="50" height="50" /> <apex:outputLabel </apex:outputPanel> </apex:facet> <apex:facet </apex:facet> </apex:actionStatus> <apex:pageBlock > <apex:pageBlockSection <apex:commandButton <br/> <br/> <apex:pageBlock <apex:pageBlockTable <apex:column<apex:outputLink{!fl.Url}</apex:outputLink></apex:column> <apex:column</apex:column> </apex:pageBlockTable> </apex:pageBlock> </apex:pageBlockSection> <div class="inner-tab"> <apex:pageBlockSection <apex:inputFile <apex:commandButton </apex:pageBlockSection> </div> </apex:pageBlock> </apex:form> </apex:page>
public with sharing class UploadFileController { Public Id RecordId {get;set;} public Blob AttchBody {get;set;} public String AttchDesc {get;set;} public String AttchName {get;set;} Public Integer AttachSize {get;set;} Public Attachment attch {get;set;} public boolean attachmentBtn {get; set;} public boolean sec1 {get;set;} public boolean sec2 {get;set;} public External_File_Relationship__c fileDB; public External_File__c ef; public List<String> fileList {get;set;} public String parentId; Public List<External_File__c>listOfFiles {get;set;} public UploadFileController(ApexPages.StandardController controller) { parentId = ((case)controller.getRecord()).id; sec1 = true; sec2 = false; RecordId = ApexPages.CurrentPage().getParameters().get('Id'); init(); } public PageReference SubmitAttachment(){ if(AttchName != '' && AttchBody != NUll ){ //file that is 25Mb or less if(AttachSize <= 25000000 ){ External__c extFile = new External__c(); extFile.Name = AttchName; extFile.CaseId__c = RecordId; Insert extFile ; attch = new Attachment(ParentId=extFile.id,Description=AttchDesc,Name=AttchName,Body=AttchBody); try{ insert attch; system.debug('attch '+attch); AttchBody = null; AttchDesc = null; AttchName = null; } catch(DMLException ex){ ApexPages.addMessage(new ApexPages.message(ApexPages.severity.ERROR,'Error uploading attachment')); return null; } ApexPages.addMessage(new ApexPages.message(ApexPages.severity.INFO,'Attachment uploaded successfully')); string Attid= attch.id; string attachmentid=Attid.substring(0,15); String sfdcBaseURL = URL.getSalesforceBaseUrl().toExternalForm(); extFile.File_Public_URL__c = sfdcBaseURL+'/servlet/servlet.FileDownload?file='+attachmentid; update extFile; sec2 = false; sec1 = true; init(); } }else{ ApexPages.addMessage(new ApexPages.message(ApexPages.severity.ERROR, AttchName +' is too large - please choose a file that is 25Mb or less')); } return null; } public void init(){ List<External_File__c> lst = [Select URL__c, CreatedDate from External__c where CaseId__c =: RecordId]; listOfFiles = lst; } public PageReference Upload() { sec1 = false; sec2 = true; return null; } }
when i add spinner or Loding then, this move to case tab with inline visualforce page.
Please suggest, what i am doing wrong
Regards
Mukesh
Static resource is not working in Lightning
I am using below code in my lightning componant but facing error "Jquery_File/jquery-3.3.1.min.js 404 (Not Found)" in browser console.
here "Jquery_File" is static resources name and
<ltng:require 404 (Not Found)
Thanks
Mukesh
Marketing cloud journey builder
I want to create some demo on marketing cloud journey builder. Can any one share about how to acces journey builder in markeing cloud.
Thanks
Mukesh
SMS directly from Salesforce
I need to create an application for me to send SMS directly from Salesforce. Please suggest.
Check Clone by process builder
if user cloned any opportunity then one of my process builder should not execute, after creation of this opportunity if user edit this opportunity then process builder should execute.
Please suggest
Get number of Files attached to a record
Should I be looking at ContentDocumentLinks
if ( Trigger.isInsert || Trigger.isDelete ){ Set<Id> FileCount = new Set<Id>(); for(SObject cdl : Trigger.new){ ContentDocumentLink cdls = ( ContentDocumentLink ) cdl; FileCount.add(cdls.LinkedEntityId); } List<File_c> fileList = [SELECT Id, (SELECT Id FROM ContentDocumentLinks) FROM File_c WHERE Id IN :FileCount]; for(File_c fl : fileList){ fl.Number_of_Files__c = fileList.size(); } }
Endpoint URL
I am clueless to locating or telling my salesforce endpoint url. Would anyone know how to tell please? Any documentation?
Thanks!
upload metadata package: trigger test coverage
I didnt write these triggers - can i solve the test coverage myself?
thanks
IF formula receiving syntax error missing ')' in summary report formula
I am trying to build a summary report formula for our funding report but receive the Syntax Error. Missing ")". appreciate your advise on this.
Trigger To uncheck a checkbox
I am trying to create a trigger to uncheck a checkbox field on Account called "Active Order".
Basically when an Order has benn deleted from an account and there is no Order remaining for the Account, the trigger should uncheck Active Order.
No Order:
Uncheck Active Order on Account
Can't get my trigger working properly
trigger AccountActiveOrderTrigger on Account (after delete) { List<Order> lstOrder = new List<Order>(); //Get Accounts with orders Map<Id, Account> accWithOrders = new Map<Id, Account>([SELECT Id, Name, Active_Order__c, (SELECT Id FROM Orders) FROM Account WHERE Id IN: Trigger.old]); //Iterate through each Accounts for(account a : Trigger.old) { System.debug('Accounts with orders' +accWithOrders.get(a.Id).Orders.size()); //Check if an Account has an order if(accWithOrders.get(a.Id).Orders.size() == 0) { a.Active_Order__c = False; } update a; } }
Thank you
One method in my Apex Test is Failing....
Below is my test class where the SearchMembers method fails: System.AssertException: Assertion Failed: Expected: 4, Actual: 0
The testLoadData method passes....
Any help appreciated....
==
@isTest
private class MembershipDirectoryControllerTest {
@testSetup static void createData() {
List<Account> accounts = new List<Account>();
Account a1 = new Account();
a1.Name = 'Test 1';
a1.Membership_Level__c = 'Catalyst';
a1.Category__c = 'Farms';
a1.Keywords__c = 'farm';
a1.BillingPostalCode = '11111';
a1.Directory_Display_Override__c = true;
accounts.add(a1);
Account a2 = new Account();
a2.Name = 'Test 2';
a2.Membership_Level__c = 'Champion';
a2.Category__c = 'Education';
a2.Keywords__c = 'test';
a2.BillingPostalCode = '22222';
a2.Directory_Display_Override__c = true;
accounts.add(a2);
Account a3 = new Account();
a3.Name = 'Test 3';
a3.Membership_Level__c = 'Catalyst';
a3.Category__c = 'Farms';
a3.Keywords__c = 'farm';
a3.BillingPostalCode = '11111';
a3.Directory_Display_Override__c = true;
accounts.add(a3);
Account a4 = new Account();
a4.Name = 'Test 4';
a4.Membership_Level__c = 'Champion';
a4.Category__c = 'Business Supplies';
a4.Keywords__c = 'farm';
a4.BillingPostalCode = '22222';
a4.Directory_Display_Override__c = true;
accounts.add(a4);
Account a5 = new Account();
a5.Name = 'Test 5';
a5.Membership_Level__c = 'Catalyst';
a5.Category__c = 'Farms';
a5.Keywords__c = 'test';
a5.BillingPostalCode = '11111';
a5.Directory_Display_Override__c = true;
accounts.add(a5);
Account a6 = new Account();
a6.Name = 'Test 6';
a6.Membership_Level__c = 'Champion';
a6.Category__c = 'Personal Services';
a6.Keywords__c = 'farm';
a6.BillingPostalCode = '22222';
a6.Directory_Display_Override__c = true;
accounts.add(a6);
Account a7 = new Account();
a7.Name = 'Test 7';
a7.Membership_Level__c = 'Business';
a7.Category__c = 'Farms';
a7.Keywords__c = 'test';
a7.BillingPostalCode = '11111';
a7.Directory_Display_Override__c = true;
accounts.add(a7);
insert accounts;
}
@isTest static void testLoadData() {
Test.startTest();
MembershipDirectoryModels.FormData fd = MembershipDirectoryController.loadData();
Test.stopTest();
System.assertNotEquals(null, fd.categories);
System.assertEquals(6, fd.spotlightMembers.size());
}
@isTest static void testSearchMembers() {
Test.startTest();
List<Account> categorySearch = MembershipDirectoryController.searchMembers('category', 'Farms');
List<Account> nameLetterSearch = MembershipDirectoryController.searchMembers('name', 'T');
List<Account> nameSearch = MembershipDirectoryController.searchMembers('nameSearch', 'Test 3');
List<Account> keywordSearch = MembershipDirectoryController.searchMembers('keyword', 'test');
List<Account> zipCodeSearch = MembershipDirectoryController.searchMembers('zipCode', '11111');
Test.stopTest();
System.assertEquals(4, categorySearch.size());
System.assertEquals(7, nameLetterSearch.size());
System.assertEquals(1, nameSearch.size());
System.assertEquals(3, keywordSearch.size());
System.assertEquals(4, zipCodeSearch.size());
}
}
Hi I am getting the error on Account Record Page while editing Account record from UI Error: First exception on row 0; first error: MISSING_ARGUMENT, Id not specified in an update call: []: Trigger.AccConTestTrigger
trigger AccConTestTrigger on Account (After update) {
if(trigger.isAfter && trigger.isUpdate){
System.debug('this is in isAfter and isUpdate context');
List<Contact> conList=new List<Contact>();
for(Account acc : trigger.new ){
Contact con=new Contact();
con.AccountId = acc.Id;
System.debug('Account Id value is --->'+con.AccountId);
System.debug('Account Id value from the screen --->'+acc.Id);
con.Department = 'The Last Dep';
conList.add(con);
}
update conList;
}
}
Could some one please help me in this,how to solve this
Testing scheduled Apex jobs
When I test manually I can see the data gets populated as expected but the assertion in the scheduler test class fails.
Here is the code @isTest public with sharing class ScheduleEmployeeBatchTest { static testmethod void schedulerTest() { String CRON_EXP = '0 0 0 15 3 ? 2022'; // Create test data Employee__c emp =TestData.getNewEmployee(); emp.EmployeeNumber__c = ‘Emp001’; insert emp; Test.startTest(); String jobId = System.schedule('ScheduleEmployeeBatchTest', CRON_EXP, new ScheduleEmployeeBatch()); CronTrigger ct = [SELECT Id, CronExpression, TimesTriggered, NextFireTime FROM CronTrigger WHERE id = :jobId]; System.assertEquals(CRON_EXP, ct.CronExpression); System.assertEquals(0, ct.TimesTriggered); Test.stopTest(); // verify that the employee was created successfully System.assertEquals( 1, [ select Id from EmpTeam__C where Employee__r.Name = ‘Emp001’ ].size() ); } }
Code Coverage error on Trigger despite showing 100%
I have a code coverage of 100% for my trigger but uploading my packages to production I have an error
Code Coverage Failure
Your code coverage is 74%. You need at least 75% coverage to complete this deployment.
I really don't understand why when my code is covered 100%
my Trigger:
trigger CaseConcernAircallDateTrigger on Task (after insert) { List<Case> cList = new List<Case>(); for(Task t: Trigger.New) { if(t.WhatId!=Null && t.whatId.getsObjectType() == Case.sObjectType){ Case c = new Case(); c.Id = t.whatId; c.Last_Aircall_Logged__c = t.CreatedDate; cList.add(c); } } if(cList.size() > 0) update cList; }
my Test class:
@isTest public class CaseConcernAircallDateTriggerTest { @isTest static void testAircallDateUpdate() { Contact con = new Contact (FirstName = 'First Name',LastName = 'Test'); insert con; Case c = new Case(Status = 'New',ContactId = con.Id,Phone_Number__c = '123456789'); insert c; Task t = new Task(Subject = 'Test', WhatId = c.Id); insert t; c.Id = t.WhatId; c.Last_Aircall_Logged__c = t.CreatedDate; update c; } }
Thank you
help on scheduling a class
Need a help,
1) want to run the class from 8 Am to 2 PM everyday
2) and want to run the class from 5PM to 3 AM Everyday.
any ideas on cron expression.
Thanks,
Billing
URGENT!!!!! Invalid conversion from runtime type List<ANY> to Map<String,ANY>
list<object> attchlist;
list<ContentVersion> CV = new list<ContentVersion>();
list<ContentDocumentLink> CDLink = new list<ContentDocumentLink>();
Set<Id> contentDocumentIds = new Set<Id>();
Set<FileData> fileFromSAP = new Set<FileData>();
for(Object mapa:results.values() ){
Map<String,Object> tempMap = (Map<String,Object>)mapa;
map<string,object> attc = (Map<String,Object>)tempMap.get('Attachment');
}
This exception is coming while running the code...someone please help
SOQL for fetching contact roles and fields from Contacts through opportunity
•All the opportunities which has these contacts as a Ship To contact roles , update its stage as “Qualified”.
but I am unable to fetch the contact roles from the opportunity
UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record: []
my batch getting failed to update some records and I gets following error message.
Apex script unhandled exception by user/organization: 0098d000007tvRV/00Db002ef00HO6N Failed to process batch for class 'CalcFirstInvoice' for job id '7070X5640CQtuKf' caused by: System.DmlException: Update failed. First exception on row 0 with id 001b000000OlwatAAB; first error: UNABLE_TO_LOCK_ROW, unable to obtain exclusive access to this record: [] Class.CalcFirstInvoice.execute: line 26, column 1
Please help me
Error: Syntax error. Missing ')' in formula
IMAGE("//c.na139.content.force.com/servlet/servlet.FileDownload?file=0154W00000BiFeo", "green", 30, 30)
IF( ROI_No_of_Months__c > 24,
IMAGE("//c.na139.content.force.com/servlet/servlet.FileDownload?file=0154W00000BiFee", "yellow", 30, 30),
IMAGE("//c.na139.content.force.com/servlet/servlet.FileDownload?file=0154W00000BiFej", "red", 30, 30),
))
Any help would be appreciated. Thanks! | https://developer.salesforce.com/forums/ForumsProfile?userId=005F0000005GA4cIAG&communityId=09aF00000004HMGIA2 | CC-MAIN-2021-04 | en | refinedweb |
- - - Hello, World! - - -
- The module comes nicely packed in a box and in a ESD bag.
- The first thing to consider is that the IO pins are made for 3V3 and do not have clamping diodes. This is one of the reason I opted for Raspberry Pi version since the only 3V3 "Arduino" I own is my own design Arduino pinout compatible AVR board witah ATmega328PB and is barely tested since it was finished only few weeks ago and designed with Atmel Studio in mind not Arduino IDE.
- Most of the documentation and files is on the RF explorer site . Among them is the pre-made Raspbian image with everything necessary already installed and setup. User name and password remain default(pi / raspberry)
- in the root directory there is a readme file with basic instructions where to find the 2 pre-made examples
- 1st example constantly searches for the peaks and the 2nd one scans the the RF spectrum
- - - For now module works and is operational - - -
- - - What does this pin do? - - -
- After sifting through arduino code I noticed that it only uses 2 extra pins besides serial port
- Reset and GPIO2 pin, reset is self explanatory, and GPIO2 is commented as ( LOW -> Device mode to 2400bps )
- Potential problem could arise since even though GPIO2 is used as output it is nowhere defined as output. Which on AVR MCUs means that is in the default( INPUT ) mode. But it is possible the module still works since pull-down on the module is 2k2 and setting it to HIGH sets the internal Pull-UP resistor in AVR. (will notify the team to check that)
- The following is the list of the GPIO pins:
RFE_RESET A4 // pinMode(_RFE_RESET,OUTPUT); _RFE_GPIO2 A0 // output - although it is nowhere specified as such //Possibly ther is a pull down on the circuit so it works despite that mistake #define _RFE_BUZZER A5 // not connected to module - assumed output(buzzer) //Folowing are not used in this example - therefore assumed input #define _RFE_GPIO0 13 // could be TXA of software serial #define _RFE_GPIO1 11 // could be RXA of software serial #define _RFE_GPIO4 A1 #define _RFE_GPIO5 A2 #define _RFE_GPIO6 12 #define _RFE_GPIO7 10 #define _RFE_RFGPIO0 6 #define _RFE_RFGPIO1 4 #define _RFE_RFGPIO2 5 //TXA and RXA are meant TX / RX for arduino
- And since I am using board with ATmega328PB - one small modification is also needed
// in file RFExplorer_3GP_IoT.h #if defined(_SAM3XA_) && defined(__AVR_ATmega328PB__) #error REVIEW COMPILATION DEFINED BOARD #endif #if !defined(_SAM3XA_) && !defined(__AVR_ATmega328PB__) #error REVIEW COMPILATION DEFINED BOARD #endif
- Code compiles in Arduino IDE so I installed the shield
- Quickly it started to pull 100mA of current - this is OK initially when device is initalasing...
- ... but. Serial is acting strange...
- Basically the code sets the module with HWserial, receives the configuration from module with2400 baud, and then also data from module with HWserial. Plot is transmitted on software serial pin13( PB5 )
- I am using a second USB uart adapter for receiving this plot data
- I have modified the code to add headers to plot data
- It requires constant switching of uart0(so arduino can talk to module) and uart3(so computer can program arduino)
- There is a option to use the second UART on ATmega328PB and modify connections
- - - By this point I am able to receive "some" plot and have general idea how to proceed - - -
- - - Making it work with arduino and arduino plotter - - -
- My goal for now is streaming full spectrum(15MHZ to 2.7GHz) scans constantly to arduino plotter with minimal 1Mhz or better RBW resolution.
- First to start with a nice read also known as RTFM on the internet: - BTW no PDF so printing the manual gives mixed results.
- I have managed to write to general serial plotter code where you insert start frequency, stop frequency and required precision.
#define _START_FREQ_KHZ 15000 //kHz #define _STOP_FREQ_KHZ 2700000 //kHz #define _RBW_KHZ 5250 //kHz
- The problem is, that I am unable to put more than 500 points on the arduino plotter. So the above settings give us full range(15-2700MHz) in the plot
- And the following is the WiFi plot 2400-2500MHz
- Now I am able to get a full scale plot.
- Next thing on the list will be how to display frequencies on the graph, adding option for logarithmic plot, better control filter bandwidth, figure out how to switch between ranges (attenuator, 0dB, LNA) and speeding up the refresh rate of measurements
- Although code is very messy and I have some major issues with variable definitions. Let say it is at least barely useful.
- - - So now I am at the point I can at least partially use the module as I want to - - -
Maybe this project can help to inspire the fully DIY version:
In addition RF Explorer released a RFETouch application for Raspberry Pi which converts the RF Explorer 3G+ IoT in a DIY spectrum analyzer with no need for further code. It can be used with touchscreen or normal screen.
More details at | https://hackaday.io/project/20444-basic-spectrum-analyser-with-rf-explorer-3g-iot | CC-MAIN-2021-04 | en | refinedweb |
C++ Using STL Unordered Multiset Program
Hello Everyone!
In this tutorial, we will learn about the working of Multiset in STL and its implementation in the C++ programming language.
What is a Multiset?
Multisets are similar to set, with the exception that multiple elements can have the same values (duplicates are retained).
What is an Unordered Multiset?
It is the same as a Multiset but here the elements are not sorted but are stored in random order.
For a better understanding of its implementation, refer to the well-commented C++ code given below.
Code:
#include <iostream> #include <bits/stdc++.h> using namespace std; //Function to print the elements of the vector using an iterator void showVector(vector<int> v) { //declaring an iterator to iterate through the vector elements vector<int>::iterator i; for (i = v.begin(); i != v.end(); i++) { cout << *i << " "; //accessing the elements of the vector using * as i stores the address to each element } cout << endl; } //Function to print the elements of the unordered multiset using an iterator void showMultiset(unordered_multiset<int> s) { //declaring an iterator to iterate through the multiset unordered_multiset<int>::iterator i; for (i = s.begin(); i != s.end(); i++) { cout << *i << " "; //accessing the elements of the unordered multiset using * as i stores the address to each element } cout << endl; } int main() { cout << "\n\nWelcome to Studytonight :-)\n\n\n"; cout << " ===== Program to demonstrate the working of a Unordered Multiset, in CPP ===== \n\n\n\n"; cout << "*** Multisets are similar to set, with an exception that multiple elements can have same values. *** \n\n"; cout << "*** Unordered Multisets stores its elements in a random order depending on the hash method used internally. *** \n\n"; //Unordered Multiset declaration (Set of integers where duplicates are allowed) unordered_multiset<int> s; //Filling the elements by using the insert() method. cout << "\n\nFilling the Multiset with integers in random order."; //Unordered Multiset stores them in a random order s.insert(50); s.insert(30); s.insert(50); s.insert(80); s.insert(30); s.insert(60); cout << "\n\nThe number of elements in the Unordered Multiset are: " << s.size(); cout << "\n\nThe elements of the Unordered Multiset are: "; showMultiset(s); //Sorting the unordered multiset by copying its elements to a vector vector<int> v(s.begin(), s.end()); vector<int>::iterator it; cout << "\n\nThe elements of the Unordered Multiset after sorting using a vector are: "; //sorting the vector elements in ascending order sort(v.begin(), v.end()); showVector(v); cout << "\n\n\n"; return 0; }
Output:
We hope that this post helped you develop a better understanding of the concept of an Unordered Multiset in STL and its implementation in C++. For any query, feel free to reach out to us via the comments section down below.
Keep Learning : ) | https://studytonight.com/cpp-programs/cpp-using-stl-unordered-multiset-program | CC-MAIN-2021-04 | en | refinedweb |
By Anish Nath.
Alibaba cloud's built-in Kubernetes engine does not offer an out-of-the-box HTTPS solution or TLS/SSL certificates for your website. For this, you can set up Let's Encrypt. Let's Encrypt is a handy, non-profit Certificate Authority that provides free TLS/SSL certificates that can be used to secure websites with HTTPS encryption. In tandem with Let's Encrypt, you will also need to set up cert-manager and nginx-ingress. cert-manager is a third-party Kubernetes controller that automates getting TLS/SSL certificates from Let's Encrypt and refreshing them. Next, nginx-ingress, which is an Ingress Controller, is a daemon, deployed as a Kubernetes Pod, which watches the apiserver's /ingresses endpoint for updates to the Ingress resource. Its job is to satisfy requests for Ingresses.
The following diagram shows how all of these can work together.
So in this tutorial, you will learn how you can set up Let's Encrypt along with cert-manager and nginx-ingress on Alibaba Cloud to achieve the above architecture and secure your Kubernetes application.
For this tutorial, you will need the following items. Note that, in the below steps, I will go over how you can set up your Kubernetes cluster.
zariga.com, that is pointing to a Kubernetes Service Type loadbalancer.
If you don't have a Kubernetes cluster and want to set up a Kubernetes cluster, you can follow these steps:
Initialize the master using the following command.
master# sudo kubeadm init --pod-network-cidr=192.168.0.0/16 an etcd instance.
master# kubectl apply -f \
Install Calico.
master# kubectl apply -f \
Confirm that all of the pods are running with the following command.
NAMESPACE NAME READY STATUS RESTARTS AGE kube-system calico-etcd-x2482 1/1 Running 0 2m45s kube-system calico-kube-controllers-6ff88bf6d4-tgtzb 1/1 Running 0 2m45s kube-system calico-node-24h85 2/2 Running 0 2m43s kube-system coredns-846jhw23g9-9af73 1/1 Running 0 4m5s kube-system coredns-846jhw23g9-hmswk 1/1 Running 0 4m5s kube-system etcd-jbaker-1 1/1 Running 0 6m22s kube-system kube-apiserver-jbaker-1 1/1 Running 0 6m12s kube-system kube-controller-manager-jbaker-1 1/1 Running 0 6m16s kube-system kube-proxy-8fzp2 1/1 Running 0 5m16s kube-system kube-scheduler-jbaker-1 1/1 Running 0 5m41s
On the minion node join the cluster (also returned by
kubeadm init).
minion# kubeadm join 172.20.240.112:6443 --token 2xg5nx.zv65d9mnz4g1802b --discovery-token-ca-cert-hash sha256:1cae1effbad759b7c70572dd509936340db5cc7d38ff1951422d45b91b3de03c
Now, let's set up and install Hel. To do this, follow these steps:
Download the helm binary from the official helm repo.
root@kube-master:~# wget
Extract the helm tar file.
root@kube-master:~# tar -zxvf helm-v2.13.0-linux-amd64.tar.gz linux-amd64/ linux-amd64/LICENSE linux-amd64/README.md linux-amd64/helm linux-amd64/tiller
Move the helm to your
$PATH location.
root@kube-master:~# mv linux-amd64/helm /usr/local/bin/helm
Install the Helm server-side components (Tiller) on your Alibaba Cloud cluster.
kubectl create serviceaccount -n kube-system tiller kubectl create clusterrolebinding tiller-binding \ --clusterrole=cluster-admin \ --serviceaccount kube-system:tiller helm init --service-account tiller
Once tiller pod becomes ready, update chart repositories:
helm repo update
Now, it's time to set up cert-manager. To do so follow these steps:
Install cert-manager using the helm chart.
root@kube-master:~# helm install --name cert-manager --version v0.5.2 --namespace kube-system stable/cert-manager
The output is as follows:
NAME: cert-manager LAST DEPLOYED: Tue Mar 12 15:29:21 2019 NAMESPACE: kube-system STATUS: DEPLOYED RESOURCES: ==> v1/Pod(related) NAME READY STATUS RESTARTS AGE cert-manager-6464494858-4lhjg 0/1 ContainerCreating 0 1s ==> v1/ServiceAccount NAME SECRETS AGE cert-manager 1 1s ==> v1beta1/ClusterRole NAME AGE cert-manager 1s ==> v1beta1/ClusterRoleBinding NAME AGE cert-manager 1s ==> v1beta1/Deployment NAME READY UP-TO-DATE AVAILABLE AGE cert-manager 0/1 1 0 1:
And, now, let's configure the Let's Encrypt Cluster Issuer for the staging and production environment. To start, adjust the email according to your specific needs.
root@kube-master:~# cat letsencrypt-issuer.yaml apiVersion: certmanager.k8s.io/v1alpha1 kind: ClusterIssuer metadata: name: letsencrypt-staging spec: acme: server: email: 'zarigatongy@gmail.com' privateKeySecretRef: name: letsencrypt-staging http01: {} --- apiVersion: certmanager.k8s.io/v1alpha1 kind: ClusterIssuer metadata: name: letsencrypt-prod spec: acme: server: email: 'zarigatongy@gmail.com' privateKeySecretRef: name: letsencrypt-prod http01: {}
Apply the cluster Issuer in your Kubernetes cluster.
root@kube-master:~# kubectl apply -f letsencrypt-issuer.yaml clusterissuer.certmanager.k8s.io/letsencrypt-staging created clusterissuer.certmanager.k8s.io/letsencrypt-prod created
To deploy a web app on a doemain name, for testing purposes, create a NGINX deployment and exposing over clusterIP.
kubectl create deployment --image nginx my-nginx kubectl expose deployment my-nginx --port=80 --type=ClusterIP root@kube-master:~# kubectl get svc NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 28h my-nginx ClusterIP 10.101.150.247 <none> 80/TCP 25h
Now, let's move on to installing the NGINX ingress control. To do this, first use helm chart
stable/nginx-ingress for the installation.
helm install stable/nginx-ingress --namespace kube-system
On successful install, the helm chart will show that
nginx-ingress is deployed.
root@kube-master:~# helm ls NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE cert-manager 1 Tue Mar 12 19:44:24 2019 DEPLOYED cert-manager-v0.5.2 v0.5.2 kube-system nginx-ingress 1 Tue Mar 12 19:38:19 2019 DEPLOYED nginx-ingress-1.3.1 0.22.0 kube-system
Last, confirm that your
LoadBalancer IP for nginx-ingress-controller is no longer pending by using the following command.
root@kube-master:~# kubectl get svc -n kube-system NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE calico-etcd ClusterIP 10.96.232.136 <none> 6666/TCP 28h kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 28h nginx-ingress-controller LoadBalancer 10.105.8.20 172.20.240.112 80:31151/TCP,443:32087/TCP 23h nginx-ingress-controller-stats ClusterIP 10.96.180.163 <none> 18080/TCP 23h nginx-ingress-default-backend ClusterIP 10.99.138.158 <none> 80/TCP 23h tiller-deploy ClusterIP 10.104.47.231 <none> 44134/TCP 27h
At this stage, you have already forwarded your domain name to the IP address where your Kubernetes cluster is running. Now, you can test your domain in your preferred browser. Note that the NGINX ingress controller has issued a fake certificate. This confirms that nginx ingress is up and running and ready to use to configure let's Encrypt issuers.
Now, let's get a staging Let's Encrypt certificate for your domain name. For this, first create the Ingress resource with the annotations and the required service.
apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-ingress-staging annotations: kubernetes.io/ingress.class: nginx certmanager.k8s.io/cluster-issuer: letsencrypt-staging kubernetes.io/tls-acme: 'true' labels: app: 'my-nginx' spec: rules: - host: zariga.com http: paths: - path: / backend: serviceName: my-nginx servicePort: 80 tls: - secretName: tls-staging-cert hosts: - zariga.com
Next, create the ingress in the Kubernetes cluster.
root@kube-master:~# kubectl create -f nginx-ingress-staging.yaml ingress.extensions/nginx-ingress-staging created root@kube-master:~# vim nginx-ingress-staging.yaml
You can view the
Certificate resource which was create automatically.
root@kube-master:~# kubectl get certificate NAME AGE tls-staging-cert 1m
After somewhere between two and ten minutes,
kubectl describe certificate should show that the Certificate has been issued successfully.
root@kube-master:~# kubectl describe certificate Name: tls-staging-cert Namespace: default Labels: <none> Annotations: <none> API Version: certmanager.k8s.io/v1alpha1 Kind: Certificate Metadata: Creation Timestamp: 2019-03-12T10:17:13Z Generation: 4 Owner References: API Version: extensions/v1beta1 Block Owner Deletion: true Controller: true Kind: Ingress Name: nginx-ingress-staging UID: fe528f28-44af-11e9-b431-00163e005d19 Resource Version: 16843 Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/tls-staging-cert UID: 014f9299-44b0-11e9-b431-00163e005d19 Spec: Acme: Config: Domains: zariga.com Http 01: Ingress: Ingress Class: nginx Dns Names: zariga.com Issuer Ref: Kind: ClusterIssuer Name: letsencrypt-staging Secret Name: tls-staging-cert Status: Acme: Order: URL: Conditions: Last Transition Time: 2019-03-12T10:17:51Z Message: Certificate issued successfully Reason: CertIssued Status: True Type: Ready Last Transition Time: <nil> Message: Order validated Reason: OrderValidated Status: False Type: ValidateFailed Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CreateOrder 103s cert-manager Created new ACME order, attempting validation... Normal DomainVerified 69s cert-manager Domain "zariga.com" verified with "http-01" validation Normal IssueCert 69s cert-manager Issuing certificate... Normal CertObtained 68s cert-manager Obtained certificate from ACME server Normal CertIssued 68s cert-manager Certificate issued successfully
The important things to note here are:
spec.secretName: The secret in which the certificate will be stored. Usually, this will be prefixed with
-tlsso it doesn't get mixed up with other secrets.
spec.issuerRef.name: The named we defined earlier for our ClusterIssuer.
spec.issuerRef.kind: This specifies that the issuer is a ClusterIssuer.
spec.acme.config.http01.ingress: The name of the ingress deployed with NGINX.
Now visit the domain name again, you should notice now that a stage certificate is being issued and configured to your domain name.
Next, let's get a production Let's Encrypt certificate for your domain name. To do this, first define ingress with required annotations and service to lookup.
apiVersion: extensions/v1beta1 kind: Ingress metadata: name: nginx-ingress-prod annotations: kubernetes.io/ingress.class: nginx certmanager.k8s.io/cluster-issuer: letsencrypt-prod kubernetes.io/tls-acme: 'true' labels: app: 'my-nginx' spec: rules: - host: zariga.com http: paths: - path: / backend: serviceName: my-nginx servicePort: 80 tls: - secretName: tls-prod-cert hosts: - zariga.com
Next, apply the ingress in the kubernetes cluster.
kubectl apply -f letsencrypt-nginx-prod.yaml
View the ingress.
root@kube-master:~# kubectl get ingress NAME HOSTS ADDRESS PORTS AGE nginx-ingress-prod zariga.com 80, 443 23h
Get the certificate.
root@kube-master:~# kubectl get certificate NAME AGE tls-prod-cert 23h
After somewhere between two and ten minutes,
kubectl describe certificate should show Certificate issued successfully.
root@kube-master:~# kubectl describe certs Name: tls-prod-cert Namespace: default Labels: <none> Annotations: <none> API Version: certmanager.k8s.io/v1alpha1 Kind: Certificate Metadata: Creation Timestamp: 2019-03-12T10:26:58Z Generation: 2 Owner References: API Version: extensions/v1beta1 Block Owner Deletion: true Controller: true Kind: Ingress Name: nginx-ingress-prod UID: 5ab11929-44b1-11e9-b431-00163e005d19 Resource Version: 17687 Self Link: /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/tls-prod-cert UID: 5dad4740-44b1-11e9-b431-00163e005d19 Spec: Acme: Config: Domains: zariga.com Http 01: Ingress: Ingress Class: nginx Dns Names: zariga.com Issuer Ref: Kind: ClusterIssuer Name: letsencrypt-prod Secret Name: tls-prod-cert Status: Acme: Order: URL: Conditions: Last Transition Time: 2019-03-12T10:27:00Z Message: Order validated Reason: OrderValidated Status: False Type: ValidateFailed Last Transition Time: <nil> Message: Certificate issued successfully Reason: CertIssued Status: True Type: Ready Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal CreateOrder 27s cert-manager Created new ACME order, attempting validation... Normal IssueCert 27s cert-manager Issuing certificate... Normal CertObtained 25s cert-manager Obtained certificate from ACME server Normal CertIssued 25s cert-manager Certificate issued successfully
Visit the URL the domain name is issued with production certificate.
Finally, let's do some cleanup. First, delete the Ingress, Service and Deployment:
kubectl delete ingress,deployment my-nginx kubectl delete service my-nginx
Use this command to uninstall cert-manager deployment:
helm del --purge cert-manager
Use this command to uninstall Helm:
helm reset
Use this command to delete the TLS certificates:
kubectl delete secret <NAME>
The views expressed herein are for reference only and don't necessarily represent the official views of Alibaba Cloud.
Flexibly Utilize Cloud Message Service in Your Business
Setting up ClickHouse on Alibaba Cloud
2,310 posts | 523 followersFollow
Alibaba Clouder - December 6, 2017
Alibaba Clouder - July 31, 2018
Alibaba Clouder - August 9, 2018
Alibaba Clouder - January 25, 2018
Alibaba Clouder - March 28, 2018
Alibaba Clouder - May 21, 2018
2,310 posts | 523
An online computing service that offers elastic and secure virtual cloud servers to cater all your cloud hosting needs.Learn More
RICKY11 September 26, 2020 at 5:45 pm
that is a lot of work to simply get ssl certificates isnt it?Also to maintain the instance may cost you more then purchasing a ssl certificate from alibaba ssl itself vs lets encrpt? | https://www.alibabacloud.com/blog/setting-up-lets-encrypt-on-alibaba-cloud_596365 | CC-MAIN-2021-04 | en | refinedweb |
Here's a few ideas that I have found work. This isn't a comprehensive list that should always be included, again think about who your audience is and would they be interested (relevance) in this section. Consider the analogy of building a car: The person who will use it doesn't care about the tensile strength of the second and fourth piston. They care about how to drive it, maybe maintaining it, and what to do when something abnormal happens. But remember it depends on the audience, the mechanic will want different information, but he still won't necessarily care about the technology used to build the piston.
Technical Design should consider the following topics/sections: (In no particular order).
Tech Design is not:
- A complete War And Peace of how every line works. (Tech Design should be as short as possible using as many diagrams as possible to avoid text).
- Sequence Diagrams based exactly on code or Class diagrams based exactly on code. (This will go stale too quickly and creates a maintenance nightmare)
- An install guide.
- Xml Code Comments/SandCastle.
- Any Documentation Generated from code.
- An API Guide.
- A Test Plan.
- How-to-use / user guide (this should be a separate document).
A See Also Section.
List links to other relevant or related information.
Overview Section.
No more than two paragraphs. This is for business or non-techies who just want to know something about how the system works and how it fits into
the bigger picture with other systems. What is it used for? What problem does it solve?
System Context.
A single diagram showing this system as a single box and the other systems it integrates into.
Main Sequences and Flows.
This is the main or significant work flows thru the system. Use Abstract high level sequence diagrams. Each line should not represent a class but
a major component / library / namespace within the system. Do not get into class and object detail, it will get out of date too quickly.
Hosting and Deployment.
List all different options on how to host in a production environment. Also state the recommended or preferred hosting model. State why there are
multiple models. Use deployment diagrams not text. Try to show one box per deployed application, not one box per library. Draw lines only where
external calls are made between applications. Add text to each line describing http/tcp, authentication, authorisation, json/soap etc.
Could include details of configuration options and environment settings required, if necessary. This is not a installation guide.
How to integrate.
Link to a document (best to keep it separate as it will not be relevant to all readers and could be long) describing how a developer takes the
system as a framework and uses it. How do they install it, use it, and what do they need to do, how long is this getting up and running process
going to take. Obviously only applies if the system is a framework for developers rather than for consumers.
Multi-Tenancy
Is the system multi-tenanted. If so how have you acheived this, trade-offs made, limitations etc. How many tenants can it handle, how few?
Concurrency
Multi-threading? STA? UI Threading? ASP.Net async? Task Factory - how?
Code Organisation.
How is the code organised and why. Possibly code metrics too so another party can get a sense for the size and complexity of the application.
Service Contracts, Endpoints and Data Contracts
Describe all inbound services and their contracts. Describe all outbound service calls, their purpose, when they happen. Also define or link to a
data dictionary that describes all SOAP / Json payloads and fields.
Diagnostic Logging, Monitoring and Auditing
Database tables involved. What logging framework? Links to configure it. Default config (debug/release builds). What is auditing vs logging.
Why is auditing required. Performance counters and definition. Other standard recommended performance counters. How do the Ops team support
the system? Monitor it? Can they tell when it is failing before it fails? Windows Event Log. Health / Support service endpoints that might be available.
(Ping, heartbeat, health check services).
Resilience
Estimated usage stats (year 1,2,3). Estimate db size, requests per second, tenants, 24 hour (or other time period) load patterns.
Where are the fail points from a business perspective? Is 1000ms per request ok? 3000? 10,000ms? 2 hours to process the last item in a queue behind a
large batch?
Exceptions: Different scenarios - (db is down, external 3rd party service is down, one server in the farm is down, entry point service endpoint is down,
validation issues with inbound service data, inflight data cannot be updated back to db after reading it, system crashes during processing - how to
recover? etc) General exception strategy within the app. Where are they logged or not.
Security
What security is applied to all endpoints. User security. Pen testing considerations. OWASP. STRIDE. Known security concerns to address and how they
are addressed.
Database.
Trusted subsystem?
Required infrastructure security.
Deployment time required security config not covered by installer.
Configuration Options for Runtime System.
Are there runtime settings applied immediately without restart? App.config only? Other config? How are server farms updated?
All config options listed and described. Or links to other doc.
Database Schema
Data Dictionary or link to
Describe data access strategy - stored procs, ORM, both, etc
Backwards Compatibility
Are the existing devices out there with a version of the application. Ie are you upgrading backend services and there are existing UI's out there?
Forwards Compatibility: How will future versions be compatible with this?
Are your service contracts / Json transmissions versioned adequately?
Are you recommending to clients that the URL contain a version number?
Other important / architecturally significant items
Queueing / ServiceBus / Asynchronous design
Installation issues - not an install guide
Availability (do you need a draining shutdown, how much down time can you get / need)
Usability
Accessibility | http://blog.rees.biz/2013/09/documenting-technical-design-ideas.html | CC-MAIN-2021-04 | en | refinedweb |
From: Jared McIntyre (jmcintyre_at_[hidden])
Date: 2005-02-02 12:01:15
> Well, it doesn't look like you're inclined to give up so I guess we can't
> either.
Thas greatly appreciated.
> Do I take this to mean that almost all the tests (200+) compiled, linked,
> and executed with no error?
Correct. To be exact, there were 5 skipped and 5 that failed.
> General
> 1) Configuration Type: Application(.exe)
> 2) Use of MFC: Use Standard Windows Libraries
> 3) Use of ATL: Not using ATL
This project is MFC based, but I can't imagine that is causing an issue. Otherwise its the same.
> C/C++
> 1) General
> Additional Include Directories - the directory that contains boost
> headers
> 3) Preprocessor:
> Preprocessor definitions - WIN32;_DEBUG;_CONSOLE
> 4) Code Generation
> Enable C++ Exceptions - Yes(/EHsc)
> Runtime Library - Multi-threaded Debug DLL - note the DLL ! ( I
> suspect this is the problem)
> Enable Function-level Linking - Yes(/Gy) I prefer this myself but
> not strictly necessary
> 5) Language
> Enable Run-Time type info - Yes (/GR)
> Linker
> 1) General
> Additional Library Directories - <directory that contains the
> serialization library here>
> 2) Input
> Additional Dependencies - libboost_serialization.lib
This is all the same with the exception of the console part (since it is an MFC GUI app), but again, I doubt that is
causing it.
> I would hope that those settings should do it. To make sure its finding the
> correct libboost_serialization.lib - change the name of this file. It
> should come up with an error at link time - can't find file
> libbost_serialization.lib. Changing this name back should make this error go
> away.
Check
> The program should now compile !?
Saddly, the problem persists. Perhaps it is something other than the library. Here are some code snippets in case
there are issues there.
The includes in the file that performes the save:
#include <fstream>
#include <boost/serialization/serialization.hpp>
#include <boost/serialization/vector.hpp>
#include <boost/archive/text_oarchive.hpp>
#include <boost/archive/text_iarchive.hpp>
Here is the save code:
std::ofstream ofs(PCSZ_USER_TOOLS_FILE, std::ios::binary);
boost::archive::text_oarchive oa(ofs);
// write class state from archive
oa << boost::serialization::make_nvp("UserToolList", m_UserTools);
// close archive
ofs.close();
m_UserTools is an instance of a typedefed vector defined as:
typedef std::vector<BusUserTool*> USER_TOOL_LIST;
Thanks for hanging in with me. I will probably need to change the code so I compile the serialization code into the
project soon in order to meet my deadlines, but I'd like to get the library working if possible.
Jared
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2005/02/79621.php | CC-MAIN-2021-04 | en | refinedweb |
Created on 2008-03-03 21:41 by gregory.p.smith, last changed 2009-04-02 05:42 by brett.cannon. This issue is now closed.
Some common python utilities had problems on Feb 29 this year when
parsing dates using format strings that did not include a year in them.
>>> time.strptime('Feb 29', '%b %d')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/usr/lib/python2.4/_strptime.py", line 425, in strptime
julian = datetime_date(year, month, day).toordinal() - \
ValueError: day is out of range for month
This is apparently because python assumes the year is 1900 unless it
explicitly parses another year out of the string.
Applications can work around this by always adding a year and a %Y to
the string they are parsing.
But not all date manipulating applications care about years. In this
case the application was fail2ban, bug report and patches to it here:
Should the year default to 1900 (the equivalent of what the much more
forgiving C API does by leaving struct tm tm_year = 0) or should this
error be raised? If the answer is yes, works as is this is easy and
just turns into us adding a note in the documentation to mention the
behavior.
I do believe this was a valid bug in fail2ban as assuming the current
year for date parsing is a bad idea and will do the wrong thing when
parsing across a year change.
Python's strptime is much more strict than C strptime (glibc's C
strptime is happy to return tm_mon 2 tm_mday 31. Its range checking is
minimal.
here's a C test case to play with its behavior:
#include <assert.h>
#include <stdio.h>
#include <time.h>
int main(int argc, char *argv[]) {
unsigned long ret, parsed;
assert(argc == 2);
struct tm tm = { 0 };
ret = strptime(argv[1], "%b %d", &tm);
parsed = ret - (unsigned long)(argv[1]);
printf("ret 0x%x parsed %d tm_mon %d tm_mday %d tm_year %d\n",
ret, parsed,
tm.tm_mon, tm.tm_mday, tm.tm_year);
}
% ./foo 'Feb 28'
ret 0xffffda8a parsed 6 tm_mon 1 tm_mday 28 tm_year 0
% ./foo 'Feb 29'
ret 0xffffda8a parsed 6 tm_mon 1 tm_mday 29 tm_year 0
% ./foo 'Feb 31'
ret 0xffffda8a parsed 6 tm_mon 1 tm_mday 31 tm_year 0
% ./foo 'Feb 32'
ret 0x0 parsed 9596 tm_mon 1 tm_mday 0 tm_year 0
The documentation already mentions that the default values when
information left out is (1900, 1, 1, 0, 0, 0, 0, 1, -1) so the docs are
already clear. If you want to generate a patch to make the default year
be this year I would be willing to review it and consider applying it. I
doubt very much code would break because of this.
Here is a patch, hope it'll make it to 2.6
Applying the _strptime.diff patch broke the _strptime
test("test_defaults"). Once you change the year, you also have to adapt
the day of week, as this becomes dynamic, too. The rest remains the
same, though. I attached a patch to this test which tests for the
new-years day of the current year instead of 1900, but I feel like
changing the semantic of the default value is no minor change. Also, I
am not sure what the documentation should say then.
After having thought about this I have decided I am going to stick with
the current semantics. Having the year change underneath code based
solely on when it executes will cause more problems than it will solve. | https://bugs.python.org/issue2227 | CC-MAIN-2021-49 | en | refinedweb |
W.
This example implements a small Hello World application that greets the user with the name entered.
from html import escape from werkzeug.wrappers import Request, Response @Request.application def hello_world(request): result = ['
Greeter'] if request.method == 'POST': result.append(f"
Hello {escape(request.form['name'])}!") result.append(''' ''') return Response(''.join(result), mimetype='text/html')
Alternatively the same application could be used without request and response objects but by taking advantage of the parsing functions werkzeug provides:
from html import escape from werkzeug.formparser import parse_form_data def hello_world(environ, start_response): result = ['
Greeter'] if environ['REQUEST_METHOD'] == 'POST': form = parse_form_data(environ)[1] result.append(f"
Hello {escape(form['name'])}!") result.append(''' ''') start_response('200 OK', [('Content-Type', 'text/html; charset=utf-8')]) return [''.join(result).encode('utf-8')]. | https://getdocs.org/Werkzeug/docs/2.0.x/levels | CC-MAIN-2021-49 | en | refinedweb |
Building, running, and managing containers
Building, running, and managing Linux containers. Starting with containers
Linux containers have emerged as a key open source application packaging and delivery technology, combining lightweight application isolation with the flexibility of image-based deployment methods. RHEL implements Linux containers using core technologies such as:
- Control groups (cgroups) for resource management
- Namespaces for process isolation
- SELinux for security
- Secure multi-tenancy
These technologies reduce the potential for security exploits and provide you with an environment for producing and running enterprise-quality containers.
Red Hat OpenShift provides powerful command-line and Web UI tools for building, managing, and running containers in units referred to as pods. Red Hat allows you to build and manage individual containers and container images outside of OpenShift. This guide describes the tools provided to perform those tasks that run directly on RHEL systems.
Unlike other container tools implementations, the tools described here do not center around the monolithic Docker container engine and
docker command. Instead, Red Hat provides
- crun - an optional runtime that can be configured and gives greater flexibility, control, and security for rootless containers and Using the CRI-O Container Engine for details.
1.1. Characteristics of Podman, Buildah, and Skopeo
The Podman, Skopeo, and Buildah tools were developed to replace Docker command features. Each tool in this scenario is more lightweight and focused on a subset of features.
The main advantages of Podman, Skopeo and Buildah tools include:
- Running in rootless mode - rootless containers are much more secure, as they run without any added privileges
- No daemon required - these tools have much lower resource requirements at idle, because if you are not running containers, Podman is not running. Docker, on the other hand, have a daemon always running
- Native systemd integration - Podman allows you to create systemd unit files and run containers as system services
The characteristics of Podman, Skopeo, and Buildah include:
-.
- To interact programmatically with Podman, you can use the Podman v2.0 RESTful API, it works in both a rootful and a rootless environment. For more information, see chapter Using the container tools API.
1.2. Overview of Podman commands
Table 1.1 shows a list of commands you can use with the
podman command. Use
podman -h to see a list of all Podman commands.
Table 1.1. Commands supported by podman
Additional resources
1.3. Running containers without Docker
Red Hat removed the Docker container engine and the docker command from RHEL 8.
If you still want to use Docker in RHEL, you can get Docker from different upstream projects, but it is unsupported in RHEL 8.
- You can install the
podman-dockerpackage, every time you run a
dockercommand, it actually runs a
podmancommand.
- Podman also supports the Docker Socket API, so the
podman-dockerpackage also sets up a link between
/var/run/docker.sockand
/var/run/podman/podman.sock. As a result, you can continue to run your Docker API commands with
docker-pyand
docker-composetools without requiring the Docker daemon. Podman will service the requests.
- The
podmancommand, like the
dockercommand, can build container images from a
Containerfileor
Dockerfile. The available commands that are usable inside a
Containerfileand a
Dockerfileare equivalent.
-.
Additional resources
1.4. Choosing a RHEL architecture for containers
Red Hat provides container images and container-related software for the following computer architectures:
- AMD64 and Intel 64 (base and layered images; no support for 32-bit architectures)
- PowerPC 8 and 9 64-bit (base image and most layered images)
- 64-bit IBM Z (base image and most layered images)
- ARM 64-bit (base image only)
Although not all Red Hat images were supported across all architectures at first, nearly all are now available on all listed architectures.
Additional resources
1.5. Getting container tools
This procedure shows how you can install the
container-tools module which contains the Podman, Buildah, Skopeo, and runc tools.
Procedure
- Install RHEL.
Register RHEL: Enter your user name and password. The user name and password are the same as your login credentials for Red Hat Customer Portal:
# subscription-manager register Registering to: subscription.rhsm.redhat.com:443/subscription Username: ******** Password: **********
To auto-subscribe to RHEL:
# subscription-manager attach --auto
To subscribe to RHEL by Pool ID:
# subscription-manager attach --pool PoolID
Install the
container-toolsmodule:
# yum module install -y container-tools
Optional. Install the
podman-dockerpackage:
# yum install -y podman-docker
The
podman-dockerpackage replaces the Docker command-line interface and
docker-apiwith the matching Podman commands instead.
1.6. Setting up rootless containers
Running the container tools such as Podman, Skopeo, or Buildah as a user with superuser privileges . As a result, regular users can make requests through their containers that can harm the system. By setting up rootless container users, system administrators prevent potentially damaging container activities from regular users, while still allowing those users to safely run most container features under their own accounts.
This procedure describes how to set up your system to use Podman, Skopeo, and Buildah tools to work with containers as a non-root user (rootless). It also describes some of the limitations you will encounter, because regular user accounts do not have full access to all operating system features that their containers might need to run.
Prerequisites
- You need to become a root user to set up your RHEL system to allow non-root user accounts to use container tools.
Procedure
- Install RHEL.
Install the
podmanpackage:
# yum install podman -y
Create a new user account:
# useradd -c "Joe Jones" joe # passwd joe
- The user is automatically configured to be able to use rootless Podman.
- The
useraddcommand automatically sets the range of accessible user and group IDs automatically in the
/etc/subuidand
/etc/subgidfiles.
Connect to the user:
$ ssh joe@server.example.com
Do not use
su or
su - commands because these commands do not set the correct environment variables.
Pull the
registry.access.redhat.com/ubi8/ubicontainer image:
$ podman pull registry.access.redhat.com/ubi8/ubi
Run the container named
myubiand display the OS version:
$ podman run --rm --name=myubi registry.access.redhat.com/ubi8/ubi cat \ /etc/os-release NAME="Red Hat Enterprise Linux" VERSION="8.4 (Ootpa)"
Additional resources
1.7. Upgrading to rootless containers
This section shows how to upgrade to rootless containers from RHEL 7. You must configure user and group IDs manually.
Here are some things to consider when upgrading to rootless containers from RHEL 7:
- If you set up multiple rootless container users, use unique ranges for each user.
- Use 65536 UIDs and GIDs for maximum compatibility with existing container images, but the number can be reduced.
- Never use UIDs or GIDs under 1000 or reuse UIDs or GIDs from existing user accounts (which, by default, start at 1000).
Prerequisites
- The user account has been created.
Procedure
Run the
usermodcommand to assign UIDs and GIDs to a user:
# usermod --add-subuids 200000-201000 --add-subgids 200000-201000 username
- The
usermod --add-subuidcommand manually adds a range of accessible user IDs to the user’s account.
- The
usermod --add-subgidscommand manually adds a range of accessible user GIDs and group IDs to the user’s account.
Verification steps
Check that the UIDs and GIDs are set properly:
# grep username /etc/subuid /etc/subgid #/etc/subuid:username:200000:1001 #/etc/subgid:username:200000:1001
1.8. Special considerations for rootless containers
There are several considerations when running containers as a non-root user:
- The path to the host container storage is different for root users (
/var/lib/containers/storage) and non-root users (
$HOME/.local/share/containers/storage).
- Users running rootless containers are given special permission to run as a range of user and group IDs on the host system. However, they have no root privileges to the operating system on the host.
- If you need to configure your rootless container environment, create configuration files in your home directory (
$HOME/.config/containers). Configuration files include
storage.conf(for configuring storage) and
containers.conf(for a variety of container settings). You could also create a
registries.conffile to identify container registries that are available when you use Podman to pull, search, or run images.
There are some system features you cannot change without root privileges. For example, you cannot change the system clock by setting a
SYS_TIMEcapability inside a container and running the network time service (
ntpd). You have to run that container as root, bypassing your rootless container environment and using the root user’s environment. For example:
$ sudo podman run -d --cap-add SYS_TIME ntpd
Note that this example allows
ntpdto adjust time for the entire system, and not just within the container.
A rootless container cannot access a port numbered less than 1024. Inside the rootless container namespace it can, for example, start a service that exposes port 80 from an httpd service from the container, but it is not accessible outside of the namespace:
$ podman run -d httpd
However, a container would need root privileges, using the root user’s container environment, to expose that port to the host system:
$ sudo podman run -d -p 80:80 httpd
The administrator of a workstation can allow users to expose services on ports numbered lower than 1024, but they should understand the security implications. A regular user could, for example, run a web server on the official port 80 and make external users believe that it was configured by the administrator. This is acceptable on a workstation for testing, but might not be a good idea on a network-accessible development server, and definitely should not be done on production servers. To allow users to bind to ports down to port 80 run the following command:
# echo 80 > /proc/sys/net/ipv4/ip_unprivileged_port_start
Additional resources
1.9. Additional resources
Chapter 2. Types of container images
The container image is a binary that includes all of the requirements for running a single container, and metadata describing its needs and capabilities.
There are two types of container images:
- Red Hat Enterprise Linux Base Images (RHEL base images)
- Red Hat Universal Base Images (UBI images)
Both types of container images are built from portions of Red Hat Enterprise Linux. By using these containers, users can benefit from great reliability, security, performance and life cycles.
The main difference between the two types of container images is that the UBI images allow you to share container images with others. You can build a containerized application using UBI, push it to your choice of registry server, easily share it with others, and even deploy it on non-Red Hat platforms. The UBI images are designed to be a foundation for cloud-native and web applications use cases developed in containers.
2.1. General characteristics of RHEL container images
Following characteristics apply to both RHEL base images and UBI images.
In general, RHEL container images are:
- Supported: Supported by Red Hat for use with containerized applications. They contain the same secured, tested, and certified software packages found in Red Hat Enterprise Linux.
- Cataloged: Listed in the Red Hat Container Catalog, with descriptions, technical details, and a health index for each image.
- Updated: Offered with a well-defined update schedule, to get the latest software, see Red Hat Container Image Updates article.
- Tracked: Tracked by Red Hat Product Errata to help understand the changes that are added into each update.
- Reusable: The container images need to be downloaded and cached in your production environment once. Each container image can be reused by all containers that include it as their foundation.
2.2. Characteristics of UBI images
The UBI images allow you to share container images with others. Four UBI images are offered: micro, minimal, standard, and init. Pre-build language runtime images and YUM repositories are available to build your applications.
Following characteristics apply to UBI images:
- Built from a subset of RHEL content: Red Hat Universal Base images are built from a subset of normal Red Hat Enterprise Linux content.
- Redistributable: UBI images allow standardization for Red Hat customers, partners, ISVs, and others. With UBI images, you can build your container images on a foundation of official Red Hat software that can be freely shared and deployed.
- Provides a set of four base images: micro, minimal, standard, and init.
- Provides a set of pre-built language runtime container images: The runtime images based on Application Streams provide a foundation for applications that can benefit from standard, supported runtimes such as python, perl, php, dotnet, nodejs, and ruby.
Provides a set of associated YUM repositories: YUM repositories include RPM packages and updates that allow you to add application dependencies and rebuild UBI container images.
- The
ubi-8-baseosrepository holds the redistributable subset of RHEL packages you can include in your container.
- The
ubi-8-appstreamrepository holds Application streams packages that you can add to a UBI image to help you standardize the environments you use with applications that require particular runtimes.
- Adding UBI RPMs: You can add RPM packages to UBI images from preconfigured UBI repositories. If you happen to be in a disconnected environment, you must allowlist the UBI Content Delivery Network () to use that feature. See the Connect to solution for details.
- Licensing: You are free to use and redistribute UBI images, provided you adhere to the Red Hat Universal Base Image End User Licensing Agreement.
2.3. Understanding the UBI standard images
The standard images (named
ubi) are designed for any application that runs on RHEL. The key features of UBI standard images include:
-: You have access to free yum repositories for adding and updating software. You can use the standard set of
yumcommands (
yum,
yum-config-manager,
yumdownloader, and so on).
- utilities: Utilities include
tar,
dmidecode,
gzip,
getfacland further acl commands,
dmsetupand further device mapper commands, between other utilities not mentioned here.
2.4. Understanding the UBI init images
The UBI init images, named
ubi-init, contain. However, there are a few critical differences:
ubi8-init:
- CMD is set to
/sbin/initto start the systemd Init service by default
- includes
psand process related commands (
procps-ngpackage)
- sets
SIGRTMIN+3as the
StopSignal, as systemd in
ubi8-initignores normal signals to exit (
SIGTERMand
SIGKILL), but will terminate if it receives
SIGRTMIN+3
ubi8:
- CMD is set to
/bin/bash
- does not include
psand process related commands (
procps-ngpackage)
- does not ignore normal signals to exit (
SIGTERMand
SIGKILL)
2.5. Understanding the UBI minimal images
The UBI minimal images, named
ubi-minimal offer a minimized pre-installed content set and a package manager (microdnf). As a result, you can use a
Containerfile while minimizing the dependencies included in the image.
The key features of UBI minimal images include:
- Small size: Minimal images are about 92M on disk and 32M, when compressed. This makes it less than half the size of the standard images.
- Software installation (
microdnf): Instead of including the fully-developed
yumfacility for working with software repositories and RPM software packages, the minimal images includes the
microdnfutility. The
microdnfis a scaled-down version of
dnfallowing you to enable and disable repositories, remove and update packages, and clean out cache after packages have been installed.
- Based on RHEL packaging: Minimal images incorporate regular RHEL software RPM packages, with a few features removed. Minimal images do not include initialization and service management system, such as systemd or System V init, Python run-time environment, and some shell utilities. You can rely on RHEL repositories for building your images, while carrying the smallest possible amount of overhead.
Modules for
microdnfare supported: Modules used with
microdnfcommand let you install multiple versions of the same software, when available. You can use
microdnf module enable,
microdnf module disable, and
microdnf module resetto enable, disable, and reset a module stream, respectively.
For example, to enable the
nodejs:14module stream inside the UBI minimal container, enter:
# microdnf module enable nodejs:14 Downloading metadata... ... Enabling module streams: nodejs:14 Running transaction test...
Red Hat only supports the latest version of UBI and does not support parking on a dot release. If you need to park on a specific dot release, please take a look at Extended Update Support.
2.6. Understanding the UBI micro images
The
ubi-micro is the smallest possible UBI image, obtained by excluding a package manager and all of its dependencies which are normally included in a container image. This minimizes the attack surface of container images based on the
ubi-micro image and is suitable for minimal applications, even if you use UBI Standard, Minimal, or Init for other applications. The container image without the Linux distribution packaging is called a Distroless container image.
2.7. Using the UBI init images
This procedure shows how to build a container using a
Containerfile that installs and configures a Web server (
httpd) to start automatically by the systemd service (
/sbin/init) when the container is run on a host system. The
podman build command uses a
Containerfile if found in the context directory, if it is not found the
podman build command will use a
Containerfile; otherwise any file can be specified with the
--file option.
Procedure
Create a
Containerfilewith the following contents to a new directory:
FROM registry.access.redhat.com/ubi8/ubi
Containerfileinstalls the
httpdpackage, enables the
httpdservice to start at boot time, creates a test file (
index.html), exposes the Web server to the host (port 80), and starts the systemd init service (
/sbin/init) when the container starts.
Build the container:
# podman build --format=docker -t mysysd .
Optional. If you want to run containers with systemd and SELinux is enabled on your system, you must set the
container_manage_cgroupboolean variable:
# setsebool -P container_manage_cgroup 1
Run the container named
mysysd_run:
# podman run -d --name=mysysd_run -p 80:80 mysysd
The
mysysdimage runs as the
mysysd_runcontainer as a daemon process, with port 80 from the container exposed to port 80 on the host system.
- NOTE
In rootless mode, you have to choose host port number >= 1024. For example:
$ podman run -d --name=mysysd -p 8081:80 mysysd
To use port numbers < 1024, you have to modify the
net.ipv4.ip_unprivileged_port_startvariable:
$ sudo sysctl net.ipv4.ip_unprivileged_port_start=80
Check that the container is running:
# podman ps a282b0c2ad3d localhost/mysysd:latest /sbin/init 15 seconds ago Up 14 seconds ago 0.0.0.0:80->80/tcp mysysd_run
Test the web server:
# curl localhost/index.html Successful Web Server Test
Additional resources
2.8. Using the UBI micro images
This procedure shows how to build a
ubi-micro container image using the Buildah tool.
Prerequisites
The
container-toolsmodule is installed.
# yum module install -y container-tools
Procedure
Pull and build the
registry.access.redhat.com/ubi8/ubi-microimage:
# microcontainer=$(buildah from registry.access.redhat.com/ubi8/ubi-micro)
Mount a working container root filesystem:
# micromount=$(buildah mount $microcontainer)
Install the
httpdservice to the
micromountdirectory:
# yum install \ --installroot $micromount \ --releasever 8 \ --setopt install_weak_deps=false \ --nodocs -y \ httpd # yum clean all \ --installroot $micromount
Unmount the root file system on the working container:
# buildah umount $microcontainer
Create the
ubi-micro-httpdimage from a working container:
# buildah commit $microcontainer ubi-micro-httpd
Verification steps
Display details about the
ubi-micro-httpdimage:
# podman images ubi-micro-httpd localhost/ubi-micro-httpd latest 7c557e7fbe9f 22 minutes ago 151 MB
Chapter 3. Working with container images
The Podman tool is designed to work with container images. You can use this tool to pull the image, inspect, tag, save, load, redistribute, and define the image signature.
3.1. Container registries
A container registry is a repository or collection of repositories for storing container images and container-based application artifacts. The registries that Red Hat provides are:
- registry.redhat.io (requires authentication)
- registry.access.redhat.com (requires no authentication)
- registry.connect.redhat.com (holds Red Hat Partner Connect program images)
To get container images from a remote registry, such as Red Hat’s own container registry, and add them to your local system, use the
podman pull command:
# podman pull <registry>[:<port>]/[<namespace>/]<name>:<tag>
where
<registry>[:<port>]/[<namespace>/]<name>:<tag> is the name of the container image.
For example, the
registry.redhat.io/ubi8/ubi container image is identified by:
- Registry server (
registry.redhat.io)
- Namespace (
ubi8)
- Image name (
ubi)
If there are multiple versions of the same image, add a tag to explicitly specify the image name. By default, Podman uses the
:latest tag, for example
ubi8/ubi:latest.
Some registries also use <namespace> to distinguish between images with the same <name> owned by different users or organizations. For example:
For details on the transition to registry.redhat.io, see Red Hat Container Registry Authentication . Before you can pull containers from registry.redhat.io, you need to authenticate using your RHEL Subscription credentials.
3.2. Configuring container registries
You can find the list of container registries in the
registries.conf configuration file. As a root user, edit the
/etc/containers/registries.conf file to change the default system-wide search settings.
As a user, create the
$HOME/.config/containers/registries.conf file to override the system-wide settings.
unqualified-search-registries = ["registry.fedoraproject.org", "registry.access.redhat.com", "docker.io"]
By default, the
podman pull and
podman search commands search for container images from registries listed in the
unqualified-search-registries list in the given order.
- Configuring a local container registry
You can configure a local container registry without the TLS verification. You have two options on how to disable TLS verification. First, you can use the
--tls-verify=falseoption in Podman. Second, you can set
insecure=truein the
registries.conffile:
[[registry]] location="localhost:5000" insecure=true
- Blocking a registry, namespace, or image
You can define registries the local system is not allowed to access. You can block a specific registry by setting
blocked=true.
[[registry]] location = "registry.example.org" blocked = true
You can also block a namespace by setting the prefix to
prefix="registry.example.org/namespace". For example, pulling the image using the
podman pull registry. example.org/example/image:latestcommand will be blocked, because the specified prefix is matched.
[[registry]] location = "registry.example.org" prefix="registry.example.org/namespace" blocked = trueNote
prefixis optional, default value is the same as the
locationvalue.
You can block a specific image by setting
prefix="registry.example.org/namespace/image".
[[registry]] location = "registry.example.org" prefix="registry.example.org/namespace/image" blocked = true
- Mirroring registries
You can set a registry mirror in cases you cannot access the original registry. For example, you cannot connect to the internet, because you work in a highly-sensitive environment. You can specify multiple mirrors that are contacted in the specified order. For example, when you run
podman pull registry.example.com/myimage:latestcommand, the
mirror-1.comis tried first, then
mirror-2.com.
[[registry]] location="registry.example.com" [[registry.mirror]] location="mirror-1.com" [[registry.mirror]] location="mirror-2.com"
Additional resources
3.3. Searching for container images
Using the
podman search command you can search selected container registries for images. You can also search for images in the Red Hat Container Registry. The Red Hat Container Registry includes the image description, contents, health index, and other information.
The
podman search command is not a reliable way to determine the presence or existence of an image. The
podman search behavior of the v1 and v2 Docker distribution API is specific to the implementation of each registry. Some registries may not support searching at all. Searching without a search term only works for registries that implement the v2 API. The same holds for the
docker search command.
This section explains how to search for the
postresql-10 images in the quay.io registry.
Prerequisites
- The registry is configured.
Procedure
Authenticate to the registry:
# podman login quay.io
To search for a particular image on a specific registry, enter:
podman search quay.io/postgresql-10 INDEX NAME DESCRIPTION STARS OFFICIAL AUTOMATED redhat.io registry.redhat.io/rhel8/postgresql-10 This container image ... 0 redhat.io registry.redhat.io/rhscl/postgresql-10-rhel7 PostgreSQL is an ... 0
Alternatively, to display all images provided by a particular registry, enter:
# podman search quay.io/
To search for the image name in all registries, enter:
# podman search postgresql-10
To display the full descriptions, pass the
--no-truncoption to the command.
Additional resources
podman-searchman page
3.4. Pulling images from registries
Use the
podman pull command to get the image to your local system.
Procedure
Log in to the registry.redhat.io registry:
$ podman login registry.redhat.io Username: username Password: ********** Login Succeeded!
Pull the registry.redhat.io/ubi8/ubi container image:
$ podman pull registry.redhat.io/ubi8/ubi
Verification steps
List all images pulled to your local system:
$ podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/ubi8/ubi latest 3269c37eae33 7 weeks ago 208 MB
Additional resources
podman-pullman page
3.5. Configuring short-name aliases
Red Hat recommends always to pull an image by its fully-qualified name. However, it is customary to pull images by short names. For instance, you can use
ubi8 instead of
registry.access.redhat.com/ubi8:latest.
The
registries.conf file allows to specify aliases for short names, giving administrators full control over where images are pulled from. Aliases are specified in the
[aliases] table in the form
"name" = "value". You can see the lists of aliases in the
/etc/containers/registries.conf.d directory. Red hat ships a set of aliases in this directory. For example,
podman pull ubi8 directly resolves to the right image, that is
registry.access.redhat.com/ubi8:latest.
For example:
unqualified-search-registries=["registry.fedoraproject.org", “quay.io"] [aliases] "fedora"="registry.fedoraproject.org/fedora") or in the
/var/cache/containers/short-name-aliases.conf(root user). If the user cannot be prompted (for example, stdin or stdout are not a TTY), Podman fails. Note that the
short-name-aliases.conffile has precedence over the.
- disabled: All unqualified-search registries are tried in a given order, no alias is recorded.
Red Hat recommends using fully qualified image names including registry, namespace, image name, and tag. When using short names, there is always an inherent risk of spoofing. Add registries that are trusted, that is, registries that do not allow unknown or anonymous users to create accounts with arbitrary names. For example, a user wants to pull the example container image from
example.registry.com registry. If
example.registry.com is not first in the search list, an attacker could place a different example image at a registry earlier in the search list. The user would accidentally pull and run the attacker image rather than the intended content.
Additional resources
3.6. Pulling container images using short-name aliases
You can use secure short names to get the image to your local system. The following procedure describes how to pull a
fedora or
nginx container image.
Procedure
Pull the container image:
Pull the
fedoraimage:
$ podman pull fedora Resolved "fedora" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf) Trying to pull registry.fedoraproject.org/fedora:latest… ... Storing signatures ...
Alias is found and the
registry.fedoraproject.org/fedoraimage is securely pulled. The
unqualified-search-registrieslist is not used to resolve
fedoraimage name.
Pull the
nginximage:
$ podman pull nginx ? Please select an image: registry.access.redhat.com/nginx:latest registry.redhat.io/nginx:latest ▸ docker.io/library/nginx:latest ✔ docker.io/library/nginx:latest Trying to pull docker.io/library/nginx:latest… ... Storing signatures ...
If no matching alias is found, you are prompted to choose one of the
unqualified-search-registrieslist. If the selected image is pulled successfully, a new short-name alias is recorded locally, otherwise an error occurs.
Verification
List all images pulled to your local system:
$ podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.fedoraproject.org/fedora latest 28317703decd 12 days ago 184 MB docker.io/library/nginx latest 08b152afcfae 13 days ago 137 MB
Additional resources
3.7. Listing images
Use the
podman images command to list images in your local storage.
Prerequisites
- A pulled image is available on the local system.
Procedure
List all images in the local storage:
$ podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8/ubi latest 3269c37eae33 6 weeks ago 208 MB
Additional resources
podman-imagesman page
3.8. Inspecting local images
After you pull an image to your local system and run it, you can use the
podman inspect command to investigate the image. For example, use it to understand what the image does and check what software is inside the image. The
podman inspect command displays information on containers and images identified by name or ID.
Prerequisites
- A pulled image is available on the local system.
Procedure
Inspect the
registry.redhat.io/ubi8/ubiimage:
$ podman inspect registry.redhat.io/ubi8/ubi … "Cmd": [ "/bin/bash" ], "Labels": { "architecture": "x86_64", "build-date": "2020-12-10T01:59:40.343735", "com.redhat.build-host": "cpt-1002.osbs.prod.upshift.rdu2.redhat.com", "com.redhat.component": "ubi8-container", "com.redhat.license_terms": "..., "description": "The Universal Base Image is ... } ...
The
"Cmd"key specifies a default command to run within a container. You can override this command by specifying a command as an argument to the
podman runcommand. This ubi8/ubi container will execute the bash shell if no other argument is given when you start it with
podman run. If an
"Entrypoint"key was set, its value would be used instead of the
"Cmd"value, and the value of
"Cmd"is used as an argument to the Entrypoint command.
Additional resources
podman-inspectman page
3.9. Inspecting remote images
Use the
skopeo inspect command to display information about an image from a remote container registry before you pull the image to your system.
Procedure
Inspect the
registry.redhat.io/ubi8/ubi-initimage:
# skopeo inspect docker://registry.redhat.io/ubi8/ubi-init { "Name": "registry.redhat.io/ubi8/ubi8-init", "Digest": "sha256:c6d1e50ab...", "RepoTags": [ "8.2-13-source", "8.0-15", "8.1-28", ... "latest" ], "Created": "2020-12-10T07:16:37.250312Z", "DockerVersion": "1.13.1", "Labels": { "architecture": "x86_64", "build-date": "2020-12-10T07:16:11.378348", "com.redhat.build-host": "cpt-1007.osbs.prod.upshift.rdu2.redhat.com", "com.redhat.component": "ubi8-init-container", "com.redhat.license_terms": "", "description": "The Universal Base Image Init is designed to run an init system as PID 1 for running multi-services inside a container ...
Additional resources
skopeo-inspectman page
3.10. Copying container images
You can use the
skopeo copy command to copy a container image from one registry to another. For example, you can populate an internal repository with images from external registries, or sync image registries in two different locations.
Procedure
Copy the
skopeocontainer image from
docker://quay.ioto
docker://registry.example.com:
$ skopeo copy docker://quay.io/skopeo/stable:latest docker://registry.example.com/skopeo:latest
Additional resources
skopeo-copyman page
3.11. Copying image layers to a local directory
You can use the
skopeo copy command to copy the layers of a container image to a local directory.
Procedure
Create the
/var/lib/images/nginxdirectory:
$ mkdir -p /var/lib/images/nginx
Copy the layers of the
docker://docker.io/nginx:latest imageto the newly created directory:
$ skopeo copy docker://docker.io/nginx:latest dir:/var/lib/images/nginx
Verification
Display the content of the` /var/lib/images/nginx` directory:
$ ls /var/lib/images/nginx 08b11a3d692c1a2e15ae840f2c15c18308dcb079aa5320e15d46b62015c0f6f3 ... 4fcb23e29ba19bf305d0d4b35412625fea51e82292ec7312f9be724cb6e31ffd manifest.json version
Additional resources
skopeo-copyman page
3.12. Tagging images
Use the
podman tag command to add an additional name to a local image. This additional name can consist of several parts: registryhost/username/NAME:tag.
Prerequisites
- A pulled image is available on the local system.
Procedure
List all images:
$ podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/ubi8/ubi latest 3269c37eae33 7 weeks ago 208 MB
Assign the
myubiname to the
registry.redhat.io/ubi8/ubiimage using either:
The image name:
$ podman tag registry.redhat.io/ubi8/ubi myubi
The image ID:
$ podman tag 3269c37eae33 myubi
Notice that the default tag is
latestfor both images. You can see all the image names are assigned to the single image ID 3269c37eae33.
Add the
8.4tag to the
registry.redhat.io/ubi8/ubiimage using either:
The image name:
$ podman tag registry.redhat.io/ubi8/ubi myubi:8.4
The image ID:
$ podman tag 3269c37eae33 myubi:8.4 localhost/myubi 8.4 3269c37eae33 2 months ago 208 MB
Notice that the default tag is
latestfor both images. You can see all the image names are assigned to the single image ID 3269c37eae33.
After tagging the
registry.redhat.io/ubi8/ubi image, you have three options to run the container:
- by ID (
3269c37eae33)
- by name (
localhost/myubi:latest)
- by name (
localhost/myubi:8.4)
3.13. Saving and loading images
Use the
podman save command to save an image to a container archive. You can restore it later to another container environment or send it to someone else. You can use
--format option to specify the archive format. The supported formats are:
docker-archive
oci-archive
oci-dir(directory with oci manifest type)
docker-dir(directory with v2s2 manifest type)
The default format is the
docker-dir format.
Use the
podman load command to load an image from the container image archive into the container storage.
Prerequisites
- A pulled image is available on the local system.
Procedure
Save the
registry.redhat.io/rhel8/rsyslogimage as a tarball:
In the default
docker-dirformat:
$ podman save -o myrsyslog.tar registry.redhat.io/rhel8/rsyslog:latest
In the
oci-archiveformat, using the
--formatoption:
$ podman save -o myrsyslog-oci.tar --format=oci-archive registry.redhat.io/rhel8/rsyslog
The
myrsyslog.tarand
myrsyslog-oci.tararchives are stored in your current directory. The next steps are performed with the
myrsyslog.tartarball.
Check the file type of
myrsyslog.tar:
$ file myrsyslog.tar myrsyslog.tar: POSIX tar archive
To load the
registry.redhat.io/rhel8/rsyslog:latestimage from the
myrsyslog.tar:
$ podman load -i myrsyslog.tar ... Loaded image(s): registry.redhat.io/rhel8/rsyslog:latest
3.14. Redistributing UBI images
Use
podman push command to push a UBI image to your own, or a third party, registry and share it with others. You can upgrade or add to that image from UBI yum repositories as you like.
Prerequisites
- A pulled image is available on the local system.
Procedure
Optional: Add an additional name to the
ubiimage:
# podman tag registry.redhat.io/ubi8/ubi registry.example.com:5000/ubi8/ubi
Push the
registry.example.com:5000/ubi8/ubiimage from your local storage to a registry:
# podman push registry.example.com:5000/ubi8/ubi
- IMPORTANT
- While there are few restrictions on how you use these images, there are some restrictions about how you can refer to them. For example, you cannot call those images Red Hat certified or Red Hat supported unless you certify it through the Red Hat Partner Connect Program, either with Red Hat Container Certification or Red Hat OpenShift Operator Certification.
3.15. Default verification of the image signatures
The policy YAML files for the Red Hat Container Registries
/etc/containers/registries.d/registry.access.redhat.com.yaml and
/etc/containers/registries.d/registry.redhat.io.yaml files are included in the
containers-common package which is included in the
container-tools:latest module. Use the
podman image trust command to verify the container image signatures on RHEL.
Procedure
Update an existing trust scope for the registry.access.redhat.com:
# podman image trust set -f /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release registry.access.redhat.com
Optional. To verify the trust policy configuration, display the
/etc/containers/policy.jsonfile:
... "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, ...
Update an existing trust scope for the registry.redhat.io:
# podman image trust set -f /etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release registry.redhat.io
Optional. To verify the trust policy configuration, display the
/etc/containers/policy.jsonfile:
... "transports": { "docker": { "registry.access.redhat.com": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ], "registry.redhat.io": [ { "type": "signedBy", "keyType": "GPGKeys", "keyPath": "/etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release" } ] }, ...
Additional resources
podman-image-trustman page
3.16. Removing images
Use the
podman rmi command to remove locally stored container images. You can remove an image by its ID or name.
Procedure
List all images on your local system:
$ podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.redhat.io/rhel8/rsyslog latest 4b32d14201de 7 weeks ago 228 MB registry.redhat.io/ubi8/ubi latest 3269c37eae33 7 weeks ago 208 MB localhost/myubi X.Y 3269c37eae33 7 weeks ago 208 MB
List all containers:
$ podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 7ccd6001166e registry.redhat.io/rhel8/rsyslog:latest /bin/rsyslog.sh 6 seconds ago Up 5 seconds ago mysyslog
To remove the
registry.redhat.io/rhel8/rsyslogimage, you have to stop all containers running from this image using the
podman stopcommand. You can stop a container by its ID or name.
Stop the
mysyslogcontainer:
$ podman stop mysyslog 7ccd6001166e9720c47fbeb077e0afd0bb635e74a1b0ede3fd34d09eaf5a52e9
Remove the
registry.redhat.io/rhel8/rsyslogimage:
$ podman rmi registry.redhat.io/rhel8/rsyslog
To remove multiple images:
$ podman rmi registry.redhat.io/rhel8/rsyslog registry.redhat.io/ubi8/ubi
To remove all images from your system:
$ podman rmi -a
To remove images that have multiple names (tags) associated with them, add the
-foption to remove them:
$ podman rmi -f 1de7d7b3f531 1de7d7b3f531...
Chapter 4. Working with containers
Containers represent a running or stopped process created from the files located in a decompressed container image. You can use the Podman tool to work with containers.
4.1. Podman run command
The
podman run command runs a process in a new container based on the container image. If the container image is not already loaded then
podman run pulls the image, and all image dependencies, from the repository in the same way running
podman pull image, before it starts the container from that image. The container process has its own file system, its own networking, and its own isolated process tree.
The
podman run command has the form:
podman run [options] image [command [arg ...]]
Basic options are:
--detach (-d): Runs the container in the background and prints the new container ID.
--attach (-a): Runs the container in the foreground mode.
--name (-n): Assigns a name to the container. If a name is not assigned to the container with
--namethen it generates a random string name. This works for both background and foreground containers.
--rm: Automatically remove the container when it exits. Note that the container will not be removed when it could not be created or started successfully.
--tty (-t): Allocates and attaches the pseudo-terminal to the standard input of the container.
--interactive (-i): For interactive processes, use
-iand
-ttogether to allocate a terminal for the container process. The
-i -tis often written as
-it.
4.2. Running commands in a container from the host
This procedure shows how to use the
podman run command to display the type of operating system of the container.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
Procedure
Display the type of operating system of the container based on the
registry.access.redhat.com/ubi8/ubicontainer image using the
cat /etc/os-releasecommand:
$ podman run --rm registry.access.redhat.com/ubi8/ubi cat /etc/os-release"
Optional: List all containers.
$ podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Because of the
--rmoption you should not see any container. The container was removed.
Additional resources
man podman-run
4.3. Running commands inside the container
This procedure shows how you can use the
podman run command to run a container interactively.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
Procedure
Run the container named
myubibased on the
registry.redhat.io/ubi8/ubiimage:
$ podman run --name=myubi -it registry.access.redhat.com/ubi8/ubi /bin/bash [root@6ccffd0f6421 /]#
- The
-ioption creates an interactive session. Without the
-toption, the shell stays open, but you cannot type anything to the shell.
- The
-toption opens a terminal session. Without the
-ioption, the shell opens and then exits.
Install the
procps-ngpackage containing a set of system utilities (for example
ps,
top,
uptime, and so on):
[root@6ccffd0f6421 /]# yum install procps-ng
Use the
ps -efcommand to list current processes:
# ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 12:55 pts/0 00:00:00 /bin/bash root 31 1 0 13:07 pts/0 00:00:00 ps -ef
Enter
exitto exit the container and return to the host:
# exit
Optional: List all containers:
$ podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 1984555a2c27 registry.redhat.io/ubi8/ubi:latest /bin/bash 21 minutes ago Exited (0) 21 minutes ago myubi
You can see that the container is in Exited status.
Additional resources
man podman-run
4.4. Listing containers
Use the
podman ps command to list the running containers on the system.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
Procedure
Run the container based on
registry.redhat.io/rhel8/rsyslogimage:
$ podman run -d registry.redhat.io/rhel8/rsyslog
List all containers:
To list all running containers:
$ podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 74b1da000a11 rhel8/rsyslog /bin/rsyslog.sh 2 minutes ago Up About a minute musing_brown
To
If there are containers that are not running, but were not removed (
--rm option), the containers are present and can be restarted.
Additional resources
man podman-ps
4.5. Starting containers
If you run the container and then stop it, and not remove it, the container is stored on your local system ready to run again. You can use the
podman start command to re-run the containers. You can specify the containers by their container ID or name.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
- At least one container has been stopped.
Procedure
Start the
myubicontainer:
In the non interactive mode:
$ podman start myubi
Alternatively, you can use
podman start 1984555a2c27.
In the interactive mode, use
-a(
--attach) and
-t(
--interactive) options to work with container bash shell:
$ podman start -a -i myubi
Alternatively, you can use
podman start -a -i 1984555a2c27.
Enter
exitto exit the container and return to the host:
[root@6ccffd0f6421 /]# exit
Additional resources
man podman-start
4.6. Inspecting containers from the host
Use the
podman inspect command to inspect the metadata of an existing container in a JSON format. You can specify the containers by their container ID or name.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
Procedure
Inspect the container defined by ID 64ad95327c74:
To get all metadata:
$ podman inspect 64ad95327c74 [ { "Id": "64ad95327c740ad9de468d551c50b6d906344027a0e645927256cd061049f681", "Created": "2021-03-02T11:23:54.591685515+01:00", "Path": "/bin/rsyslog.sh", "Args": [ "/bin/rsyslog.sh" ], "State": { "OciVersion": "1.0.2-dev", "Status": "running", ...
To get particular items from the JSON file, for example, the
StartedAttimestamp:
$ podman inspect --format='{{.State.StartedAt}}' 64ad95327c74 2021-03-02 11:23:54.945071961 +0100 CET
The information is stored in a hierarchy. To see the container
StartedAttimestamp (
StartedAtis under
State), use the
--formatoption and the container ID or name.
Examples of other items you might want to inspect include:
.Pathto see the command run with the container
.Argsarguments to the command
.Config.ExposedPortsTCP or UDP ports exposed from the container
.State.Pidto see the process id of the container
.HostConfig.PortBindingsport mapping from container to host
Additional resources
man podman-inspect
4.7. Mounting directory on localhost to the container
This procedure shows how you can make log messages from inside a container available to the host system by mounting the host
/dev/log device inside the container.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
Procedure
Run the container named
log_testand mount the host
/dev/logdevice inside the container:
# podman run --name="log_test" -v /dev/log:/dev/log --rm \ registry.redhat.io/ubi8/ubi logger "Testing logging to the host"
Use the
journalctlutility to display logs:
# journalctl -b | grep Testing Dec 09 16:55:00 localhost.localdomain root[14634]: Testing logging to the host
The
--rmoption removes the container when it exits.
Additional resources
man podman-run
4.8. Mounting a container filesystem
Use the
podman mount command to mount a working container root filesystem in a location accessible from the host.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
Procedure
Run the container named
mysyslog:
# podman run -d --name=mysyslog registry.redhat.io/rhel8/rsyslog
Optional: List all containers:
# podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES c56ef6a256f8 registry.redhat.io/rhel8/rsyslog:latest /bin/rsyslog.sh 20 minutes ago Up 20 minutes ago mysyslog
Mount the
mysyslogcontainer:
# podman mount mysyslog /var/lib/containers/storage/overlay/990b5c6ddcdeed4bde7b245885ce4544c553d108310e2b797d7be46750894719/merged
Display the content of the mount point using
lscommand:
# ls /var/lib/containers/storage/overlay/990b5c6ddcdeed4bde7b245885ce4544c553d108310e2b797d7be46750894719/merged bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv sys tmp usr var
Display the OS version:
# cat /var/lib/containers/storage/overlay/990b5c6ddcdeed4bde7b245885ce4544c553d108310e2b797d7be46750894719/merged/etc/os-release NAME="Red Hat Enterprise Linux" VERSION="8.3 (Ootpa)" ID="rhel" ID_LIKE="fedora" VERSION_ID="8.3"
Additional resources
man podman-mount
4.9. Running a service as a daemon with a static IP
The following example runs the
rsyslog service as a daemon process in the background. The
--ip option sets the container network interface to a particular IP address (for example, 10.88.0.44). After that, you can run the
podman inspect command to check that you set the IP address properly.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
Procedure
Set the container network interface to the IP address 10.88.0.44:
# podman run -d --ip=10.88.0.44 registry.access.redhat.com/rhel7/rsyslog efde5f0a8c723f70dd5cb5dc3d5039df3b962fae65575b08662e0d5b5f9fbe85
Check that the IP address is set properly:
# podman inspect efde5f0a8c723 | grep 10.88.0.44 "IPAddress": "10.88.0.44",
Additional resources
man podman-run
4.10. Executing commands inside a running container
Use the
podman exec command to execute a command in a running container and investigate that container. The reason for using the
podman exec command instead of
podman run command is that you can investigate the running container without interrupting the container activity.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
- The container is running.
Procedure
Execute the
rpm -qacommand inside the
myrsyslogcontainer to list all installed packages:
$ podman exec -it myrsyslog rpm -qa tzdata-2020d-1.el8.noarch python3-pip-wheel-9.0.3-18.el8.noarch redhat-release-8.3-1.0.el8.x86_64 filesystem-3.8-3.el8.x86_64 ...
Execute a
/bin/bashcommand in the
myrsyslogcontainer:
$ podman exec -it myrsyslog /bin/bash
Install the
procps-ngpackage containing a set of system utilities (for example
ps,
top,
uptime, and so on):
# yum install procps-ng
Inspect the container:
To list every process on the system:
# ps -ef UID PID PPID C STIME TTY TIME CMD root 1 0 0 10:23 ? 00:00:01 /usr/sbin/rsyslogd -n root 8 0 0 11:07 pts/0 00:00:00 /bin/bash root 47 8 0 11:13 pts/0 00:00:00 ps -ef
To display file system disk space usage:
# df -h Filesystem Size Used Avail Use% Mounted on fuse-overlayfs 27G 7.1G 20G 27% / tmpfs 64M 0 64M 0% /dev tmpfs 269M 936K 268M 1% /etc/hosts shm 63M 0 63M 0% /dev/shm ...
To display system information:
# uname -r 4.18.0-240.10.1.el8_3.x86_64
To display amount of free and used memory in megabytes:
# free --mega total used free shared buff/cache available Mem: 2818 615 1183 12 1020 1957 Swap: 3124 0 3124
Additional resources
man podman-exec
4.11. Sharing files between two containers
You can use volumes to persist data in containers even when a container is deleted. Volumes can be used for sharing data among multiple containers. The volume is a folder which is stored on the host machine. The volume can be shared between the container and the host.
Main advantages are:
- Volumes can be shared among the containers.
- Volumes are easier to back up or migrate.
- Volumes do not increase the size of the containers.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
Procedure
Create a volume:
$ podman volume create hostvolume
Display information about the volume:
$ podman volume inspect hostvolume [ { "name": "hostvolume", "labels": {}, "mountpoint": "/home/username/.local/share/containers/storage/volumes/hostvolume/_data", "driver": "local", "options": {}, "scope": "local" } ]
Notice that it creates a volume in the volumes directory. You can save the mount point path to the variable for easier manipulation:
$ mntPoint=$(podman volume inspect hostvolume --format {{.Mountpoint}}).
Notice that if you run
sudo podman volume create hostvolume, then the mount point changes to
/var/lib/containers/storage/volumes/hostvolume/_data.
Create a text file inside the directory using the path is stored in the
mntPointvariable:
$ echo "Hello from host" >> $mntPoint/host.txt
List all files in the directory defined by the
mntPointvariable:
$ ls $mntPoint/ host.txt
Run the container named
myubi1and map the directory defined by the
hostvolumevolume name on the host to the
/containervolume1directory on the container:
$ podman run -it --name myubi1 -v hostvolume:/containervolume1 registry.access.redhat.com/ubi8/ubi /bin/bash
Note that if you use the volume path defined by the
mntPointvariable (
-v $mntPoint:/containervolume1), data can be lost when running
podman volume prunecommand, which removes unused volumes. Always use
-v hostvolume_name:/containervolume_name.
List the files in the shared volume on the container:
# ls /containervolume1 host.txt
You can see the
host.txtfile which you created on the host.
Create a text file inside the
/containervolume1directory:
# echo "Hello from container 1" >> /containervolume1/container1.txt
- Detach from the container with
CTRL+pand
CTRL+q.
List the files in the shared volume on the host, you should see two files:
$ ls $mntPoint container1.rxt host.txt
At this point, you are sharing files between the container and host. To share files between two containers, run another container named
myubi2. Steps 10 - 13 are analogous to steps 5 - 8.
Run the container named
myubi2and map the directory defined by the
hostvolumevolume name on the host to the
/containervolume2directory on the container:
$ podman run -it --name myubi2 -v hostvolume:/containervolume2 registry.access.redhat.com/ubi8/ubi /bin/bash
List the files in the shared volume on the container:
# ls /containervolume2 container1.txt host.txt
You can see the
host.txtfile which you created on the host and
container1.txtwhich you created inside the
myubi1container.
Create a text file inside the
/containervolume2directory:
# echo "Hello from container 2" >> /containervolume2/container2.txt
- Detach from the container with
CTRL+pand
CTRL+q.
List the files in the shared volume on the host, you should see three files:
$ ls $mntPoint container1.rxt container2.txt host.txt
Additional resources
man podman-volume
4.12. Exporting and importing containers
You can use the
podman export command to export the file system of a running container to a tarball on your local machine. For example, if you have a large container that you use infrequently or one that you want to save a snapshot of in order to revert back to it later, you can use the
podman export command to export a current snapshot of your running container into a tarball.
You can use the
podman import command to import a tarball and save it as a filesystem image. Then you can run this filesystem image or you can use it as a layer for other images.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
Procedure
Run the
myubicontainer based on the
registry.access.redhat.com/ubi8/ubiimage:
$ podman run -dt --name=myubi registry.access.redhat.com/ubi8/ubi
Optional: List all containers:
$ podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a6a6d4896142 registry.access.redhat.com/ubi8:latest /bin/bash 7 seconds ago Up 7 seconds ago myubi
Attach to the
myubicontainer:
$ podman attach myubi
Create a file named
testfile:
[root@a6a6d4896142 /]# echo "hello" > testfile
- Detach from the container with
CTRL+pand
CTRL+q.
Export the file system of the
myubias a
myubi-container.taron the local machine:
$ podman export -o myubi.tar a6a6d4896142
Optional: List the current directory content:
$ ls -l -rw-r--r--. 1 user user 210885120 Apr 6 10:50 myubi-container.tar ...
Optional: Create a
myubi-containerdirectory, extract all files from the
myubi-container.tararchive. List a content of the
myubi-directoryin a tree-like format:
$ mkdir myubi-container $ tar -xf myubi-container.tar -C myubi-container $ tree -L 1 myubi-container ├── bin -> usr/bin ├── boot ├── dev ├── etc ├── home ├── lib -> usr/lib ├── lib64 -> usr/lib64 ├── lost+found ├── media ├── mnt ├── opt ├── proc ├── root ├── run ├── sbin -> usr/sbin ├── srv ├── sys ├── testfile ├── tmp ├── usr └── var 20 directories, 1 file
You can see that the
myubi-container.tarcontains the container file system.
Import the
myubi.tarand saves it as a filesystem image:
$ podman import myubi.tar myubi-imported Getting image source signatures Copying blob 277cab30fe96 done Copying config c296689a17 done Writing manifest to image destination Storing signatures c296689a17da2f33bf9d16071911636d7ce4d63f329741db679c3f41537e7cbf
List all images:
$ podman images REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/library/myubi-imported latest c296689a17da 51 seconds ago 211 MB
Display the content of the
testfilefile:
$ podman run -it --name=myubi-imported docker.io/library/myubi-imported cat testfile hello
Additional resources
podman-exportman page
podman-importman page
4.13. Stopping containers
Use the
podman stop command to stop a running container. You can specify the containers by their container ID or name.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
- At least one container is running.
Procedure
Stop the
myubicontainer:
Using the container name:
$ podman stop myubi
Using the container ID:
$ podman stop 1984555a2c27
To stop a running container that is attached to a terminal session, you can enter the
exit command inside the container.
The
podman stop command sends a SIGTERM signal to terminate a running container. If the container does not stop after a defined period (10 seconds by default), Podman sends a SIGKILL signal.
You can also use the
podman kill command to kill a container (SIGKILL) or send a different signal to a container. Here
Additional resources
man podman-stop
man podman-kill
4.14. Removing containers
Use the
podman rm command to remove containers. You can specify containers with the container ID or name.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
- At least one container has been stopped.
Procedure
Remove the containers:
To remove the
peaceful_hoppercontainer:
$ podman rm peaceful_hopper
Notice that the
peaceful_hoppercontainer was in Exited status, which means it was stopped and it can be removed immediately.
To remove the
musing_browncontainer, first stop the container and then remove it:
$ podman stop musing_brown $ podman rm musing_brown
- NOTE
To remove multiple containers:
$ podman rm clever_yonath furious_shockley
To remove all containers from your local system:
$ podman rm -a
Additional resources
man podman-rm
4.15. The runc container runtime
The runc container runtime is a lightweight, portable implementation of the Open Container Initiative (OCI) container runtime specification. The runc runtime shares a lot of low-level code with Docker but it is not dependent on any of the components of the Docker platform. The.
4.16. The crun container runtime
The crun is a fast and low-memory footprint OCI container runtime written in C. The crun binary is up to 50 times smaller and up to twice as fast as the runc binary. Using crun, you can also set a minimal number of processes when running your container. The crun runtime also supports OCI hooks.
Additional features of crun include:
- Sharing files by group for rootless containers
- Controlling the stdout and stderr of OCI hooks
- Running older versions of systemd on cgroup v2
- A C library that is used by other programs
- Extensibility
- Portability
Additional resources
4.17. Running containers with runc and crun
With runc or crun, containers are configured using bundles. A bundle for a container is a directory that includes a specification file named
config.json and a root filesystem. The root filesystem contains the contents of the container.
The
<runtime> can be crun or runc.
Procedure
Pull the
registry.access.redhat.com/ubi8/ubicontainer image:
# podman pull registry.access.redhat.com/ubi8/ubi
Export the
registry.access.redhat.com/ubi8/ubiimage to the
rhel.tararchive:
# podman export $(podman create registry.access.redhat.com/ubi8/ubi) > rhel.tar
Create the
bundle/rootfsdirectory:
# mkdir -p bundle/rootfs
Extract the
rhel.tararchive into the
bundle/rootfsdirectory:
# tar -C bundle/rootfs -xf rhel.tar
Create a new specification file named
config.jsonfor the bundle:
# <runtime> spec -b bundle
- The
-boption specifies the bundle directory. The default value is the current directory.
Optional. Change the settings:
# vi bundle/config.json
Create an instance of a container named
myubifor a bundle:
# <runtime> create -b bundle/ myubi
Start a
myubicontainer:
# <runtime> start myubi
The name of a container instance must be unique to the host. To start a new instance of a container:
# <runtime> start <container_name>
Verification
List containers started by
<runtime>:
# <runtime> list ID PID STATUS BUNDLE CREATED OWNER myubi 0 stopped /root/bundle 2021-09-14T09:52:26.659714605Z root
Additional resources
crunman page
runcman page
- An introduction to crun, a fast and low-memory footprint container runtime
4.18. Temporarily changing the container runtime
You can use the
podman run command with the
--runtime option to change the container runtime.
The
<runtime> can be crun or runc.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
Procedure
Pull the
registry.access.redhat.com/ubi8/ubicontainer image:
$ podman pull registry.access.redhat.com/ubi8/ubi
Change the container runtime using the
--runtimeoption:
$ podman run --name=myubi -dt --runtime=<runtime> ubi8 bashe4654eb4df12ac031f1d0f2657dc4ae6ff8eb0085bf114623b66cc664072e69b
Optional. List all images:
$ podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e4654eb4df12 registry.access.redhat.com/ubi8:latest bash 4 seconds ago Up 4 seconds ago myubi
Verification
Ensure that the OCI runtime is set to
<runtime>in the myubi container:
$ podman inspect myubi --format "{{.OCIRuntime}}" <runtime>
Additional resources
4.19. Permanently changing the container runtime
You can set the container runtime and its options in the
/etc/containers/containers.conf configuration file as a root user or in the
$HOME/.config/containers/containers.conf configuration file as a non-root user.
The
<runtime> can be crun or runc runtime.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
Procedure
Change the runtime in the
/etc/containers/containers.conffile:
# vim /etc/containers/containers.conf [engine] runtime = "<runtime>"
Run the container named myubi:
# podman run --name=myubi -dt ubi8 bash Resolved "ubi8" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf) Trying to pull registry.access.redhat.com/ubi8:latest… ... Storing signatures
Verification
Ensure that the OCI runtime is set to
<runtime>in the
myubicontainer:
# podman inspect myubi --format "{{.OCIRuntime}}" <runtime>
Additional resources
- An introduction to crun, a fast and low-memory footprint container runtime
containers.confman page
4.20. Creating SELinux policies for containers
To generate SELinux policies for containers, use the UDICA tool. For more information, see Introduction to the udica SELinux policy generator.
Chapter 5. Using Podman in HPC environment
You can use Podman with Open MPI (Message Passing Interface) to run containers in a High Performance Computing (HPC) environment.
5.1. Using Podman with MPI
The example is based on the ring.c program taken from Open MPI. In this example, a value is passed around by all processes in a ring-like fashion. Each time the message passes rank 0, the value is decremented. When each process receives the 0 message, it passes it on to the next process and then quits. By passing the 0 first, every process gets the 0 message and can quit normally.
Procedure
Install Open MPI:
$ sudo yum install openmpi
To activate the environment modules, type:
$ . /etc/profile.d/modules.sh
Load the
mpi/openmpi-x86_64module:
$ module load mpi/openmpi-x86_64
Optionally, to automatically load
mpi/openmpi-x86_64module, add this line to the
.bashrcfile:
$ echo "module load mpi/openmpi-x86_64" >> .bashrc
To combine
mpirunand
podman, create a container with the following definition:
$ cat Containerfile FROM registry.access.redhat.com/ubi8/ubi RUN yum -y install openmpi-devel wget && \ yum clean all RUN wget && \ /usr/lib64/openmpi/bin/mpicc ring.c -o /home/ring && \ rm -f ring.c
Build the container:
$ podman build --tag=mpi-ring .
Start the container. On a system with 4 CPUs this command starts 4 containers:
$ mpirun \ --mca orte_tmpdir_base /tmp/podman-mpirun \ podman run --env-host \ -v /tmp/podman-mpirun:/tmp/podman-mpirun \ --userns=keep-id \ --net=host --pid=host --ipc=host \ mpi-ring
As a result,
mpirunstarts up 4 Podman containers and each container is running one instance of the
ringbinary. All 4 processes are communicating over MPI with each other.
Additional resources
5.2. The mpirun options
The following
mpirun options are used to start the container:
--mca orte_tmpdir_base /tmp/podman-mpirunline tells Open MPI to create all its temporary files in
/tmp/podman-mpirunand not in
/tmp. If using more than one node this directory will be named differently on other nodes. This requires mounting the complete
/tmpdirectory into the container which is more complicated.
The
mpirun command specifies the command to start, the
podman command. The following
podman options are used to start the container:
runcommand runs a container.
--env-hostoption copies all environment variables from the host into the container.
-v /tmp/podman-mpirun:/tmp/podman-mpirunline tells Podman to mount the directory where Open MPI creates its temporary directories and files to be available in the container.
--userns=keep-idline ensures the user ID mapping inside and outside the container.
--net=host --pid=host --ipc=hostline sets the same network, PID and IPC namespaces.
mpi-ringis the name of the container.
/home/ringis the MPI program in the container.
Additional resources
Chapter 6. Creating and restoring container checkpoints
Checkpoint/Restore In Userspace (CRIU) is a software that enables you to set a checkpoint on a running container or an individual application and store its state to disk. You can use data saved to restore the container after a reboot at the same point in time it was checkpointed.
6.1. Creating and restoring a container checkpoint locally
This example is based on a Python based web server which returns a single integer which is incremented after each request.
Procedure
Create a Python based server:
# cat counter.py #!/usr/bin/python3 import http.server counter = 0 class handler(http.server.BaseHTTPRequestHandler): def do_GET(s): global counter s.send_response(200) s.send_header('Content-type', 'text/html') s.end_headers() s.wfile.write(b'%d\n' % counter) counter += 1 server = http.server.HTTPServer(('', 8088), handler) server.serve_forever()
Create a container with the following definition:
# cat Containerfile FROM registry.access.redhat.com/ubi8/ubi COPY counter.py /home/counter.py RUN useradd -ms /bin/bash counter RUN yum -y install python3 && chmod 755 /home/counter.py USER counter ENTRYPOINT /home/counter.py
The container is based on the Universal Base Image (UBI 8) and uses a Python based server.
Build the container:
# podman build . --tag counter
Files
counter.pyand
Containerfileare the input for the container build process (
podman build). The built image is stored locally and tagged with the tag
counter.
Start the container as root:
# podman run --name criu-test --detach counter
To list all running containers, enter:
# podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES e4f82fd84d48 localhost/counter:latest 5 seconds ago Up 4 seconds ago criu-test
Display IP address of the container:
# podman inspect criu-test --format "{{.NetworkSettings.IPAddress}}" 10.88.0.247
Send requests to the container:
# curl 10.88.0.247:8080 0 # curl 10.88.0.247:8080 1
Create a checkpoint for the container:
# podman container checkpoint criu-test
- Reboot the system.
Restore the container:
# podman container restore --keep criu-test
Send requests to the container:
# curl 10.88.0.247:8080 2 # curl 10.88.0.247:8080 3 # curl 10.88.0.247:8080 4
The result now does not start at
0again, but continues at the previous value.
This way you can easily save the complete container state through a reboot.
Additional resources
6.2. Reducing startup time using container restore
You can use container migration to reduce startup time of containers which require a certain time to initialize. Using a checkpoint, you can restore the container multiple times on the same host or on different hosts. This example is based on the container from the Creating and restoring a container checkpoint locally.
Procedure
Create a checkpoint of the container, and export the checkpoint image to a
tar.gzfile:
# podman container checkpoint criu-test --export /tmp/chkpt.tar.gz
Restore the container from the
tar.gzfile:
# podman container restore --import /tmp/chkpt.tar.gz --name counter1 # podman container restore --import /tmp/chkpt.tar.gz --name counter2 # podman container restore --import /tmp/chkpt.tar.gz --name counter3
The
--name(
-n) option specifies a new name for containers restored from the exported checkpoint.
Display ID and name of each container:
# podman ps -a --format "{{.ID}} {{.Names}}" a8b2e50d463c counter3 faabc5c27362 counter2 2ce648af11e5 counter1
Display IP address of each container:
#️ podman inspect counter1 --format "{{.NetworkSettings.IPAddress}}" 10.88.0.248 #️ podman inspect counter2 --format "{{.NetworkSettings.IPAddress}}" 10.88.0.249 #️ podman inspect counter3 --format "{{.NetworkSettings.IPAddress}}" 10.88.0.250
Send requests to each container:
#️ curl 10.88.0.248:8080 4 #️ curl 10.88.0.249:8080 4 #️ curl 10.88.0.250:8080 4
Note, that the result is
4in all cases, because you are working with different containers restored from the same checkpoint.
Using this approach, you can quickly start up stateful replicas of the initially checkpointed container.
Additional resources
6.3. Migrating containers among systems
This procedure shows the migration of running containers from one system to another, without losing the state of the applications running in the container. This example is based on the container from the Creating and restoring a container checkpoint locally. section tagged with
counter.
Prerequisites
The following steps are not necessary if the container is pushed to a registry as Podman will automatically download the container from a registry if it is not available locally. This example does not use a registry, you have to export previously built and tagged container (see Creating and restoring a container checkpoint locally. section).
Export previously built container:
# podman save --output counter.tar counter
Copy exported container image to the destination system (
other_host):
# scp counter.tar other_host:
Import exported container on the destination system:
# ssh other_host podman load --input counter.tar
Now the destination system of this container migration has the same container image stored in its local container storage.
Procedure
Start the container as root:
# podman run --name criu-test --detach counter
Display IP address of the container:
# podman inspect criu-test --format "{{.NetworkSettings.IPAddress}}" 10.88.0.247
Send requests to the container:
# curl 10.88.0.247:8080 0 # curl 10.88.0.247:8080 1
Create a checkpoint of the container, and export the checkpoint image to a
tar.gzfile:
# podman container checkpoint criu-test --export /tmp/chkpt.tar.gz
Copy the checkpoint archive to the destination host:
# scp /tmp/chkpt.tar.gz other_host:/tmp/
Restore the checkpoint on the destination host (
other_host):
# podman container restore --import /tmp/chkpt.tar.gz
Send a request to the container on the destination host (
other_host):
# curl 10.88.0.247:8080 2
As a result, the stateful container has been migrated from one system to another without losing its state.
Additional resources
Chapter 7. Working with pods
Containers are the smallest unit that you can manage with Podman, Skopeo and Buildah container tools. A Podman pod is a group of one or more containers. The Pod concept was introduced by Kubernetes. Podman pods are similar to the Kubernetes definition. Pods are the smallest compute units that you can create, deploy, and manage in OpenShift or Kubernetes environments. Every Podman pod includes an infra container. This container holds the namespaces associated with the pod and allows Podman to connect other containers to the pod. It allows you to start and stop containers within the pod and the pod will stay running. The default infra container on the
registry.access.redhat.com/ubi8/pause image.
7.1. Creating pods
This procedure shows how to create a pod with one container.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
Procedure
Create an empty pod:
$ podman pod create --name mypod 223df6b390b4ea87a090a4b5207f7b9b003187a6960bd37631ae9bc12c433aff The pod is in the initial state Created.
The pod is in the initial state Created.
Optional: List all pods:
$ podman pod ps POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID 223df6b390b4 mypod Created Less than a second ago 1 3afdcd93de3e
Notice that the pod has one container in it.
Optional: List all pods and containers associated with them:
$ podman ps -a --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD 3afdcd93de3e registry.access.redhat.com/ubi8/pause Less than a second ago Created 223df6b390b4-infra 223df6b390b4
You can see that the pod ID from
podman pscommand matches the pod ID in the
podman pod pscommand. The default infra container is based on the
registry.access.redhat.com/ubi8/pauseimage.
Run a container named
myubiin the existing pod named
mypod:
$ podman run -dt --name myubi --pod mypod registry.access.redhat.com/ubi8/ubi /bin/bash 5df5c48fea87860cf75822ceab8370548b04c78be9fc156570949013863ccf71
Optional: List all pods:
$ podman pod ps POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID 223df6b390b4 mypod Running Less than a second ago 2 3afdcd93de3e
You can see that the pod has two containers in it.
Optional: registry.access.redhat.com/ubi8/pause Less than a second ago Up Less than a second ago 223df6b390b4-infra 223df6b390b4
Additional resources
podman-pod-createman page
- Podman: Managing pods and containers in a local container runtime article
7.2. Displaying pod information
This procedure provides information on how to display pod information.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
- The pod has been created. For details, see section Creating pods.
Procedure
Display active processes running in a pod:
To display the running processes of containers in a pod, enter:
$ podman pod top mypod USER PID PPID %CPU ELAPSED TTY TIME COMMAND 0 1 0 0.000 24.077433518s ? 0s /pause root 1 0 0.000 24.078146025s pts/0 0s /bin/bash
To display a live stream of resource usage stats for containers in one or more pods, enter:
$ podman pod stats -a --no-stream ID NAME CPU % MEM USAGE / LIMIT MEM % NET IO BLOCK IO PIDS a9f807ffaacd frosty_hodgkin -- 3.092MB / 16.7GB 0.02% -- / -- -- / -- 2 3b33001239ee sleepy_stallman -- -- / -- -- -- / -- -- / -- --
To display information describing the pod, enter:
$ podman pod inspect mypod { "Id": "db99446fa9c6d10b973d1ce55a42a6850357e0cd447d9bac5627bb2516b5b19a", "Name": "mypod", "Created": "2020-09-08T10:35:07.536541534+02:00", "CreateCommand": [ "podman", "pod", "create", "--name", "mypod" ], "State": "Running", "Hostname": "mypod", "CreateCgroup": false, "CgroupParent": "/libpod_parent", "CgroupPath": "/libpod_parent/db99446fa9c6d10b973d1ce55a42a6850357e0cd447d9bac5627bb2516b5b19a", "CreateInfra": false, "InfraContainerID": "891c54f70783dcad596d888040700d93f3ead01921894bc19c10b0a03c738ff7", "SharedNamespaces": [ "uts", "ipc", "net" ], "NumContainers": 2, "Containers": [ { "Id": "891c54f70783dcad596d888040700d93f3ead01921894bc19c10b0a03c738ff7", "Name": "db99446fa9c6-infra", "State": "running" }, { "Id": "effc5bbcfe505b522e3bf8fbb5705a39f94a455a66fd81e542bcc27d39727d2d", "Name": "myubi", "State": "running" } ] }
You can see information about containers in the pod.
Additional resources
podman pod topman page
podman-pod-statsman page
podman-pod-inspectman page
7.3. Stopping pods
You can stop one or more pods using the
podman pod stop command.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
- The pod has been created. For details, see section Creating pods.
Procedure
Stop the pod
mypod:
$ podman pod stop mypod
Optional: List all pods and containers associated with them:
$ podman ps -a --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME 5df5c48fea87 registry.redhat.io/ubi8/ubi:latest /bin/bash About a minute ago Exited (0) 7 seconds ago myubi 223df6b390b4 mypod 3afdcd93de3e registry.access.redhat.com/ubi8/pause About a minute ago Exited (0) 7 seconds ago 8a4e6527ac9d-infra 223df6b390b4 mypod
You can see that the pod
mypodand container
myubiare in "Exited" status.
Additional resources
podman-pod-stopman page
7.4. Removing pods
You can remove one or more stopped pods and containers using the
podman pod rm command.
Prerequisites
The Podman tool is installed.
# yum module install -y container-tools
- The pod has been created. For details, see section Creating pods.
- The pod has been stopped. For details, see section Stopping pods.
Procedure
Remove the pod
mypod, type:
$ podman pod rm mypod 223df6b390b4ea87a090a4b5207f7b9b003187a6960bd37631ae9bc12c433aff
Note that removing the pod automatically removes all containers inside it.
Optional: Check that all containers and pods were removed:
$ podman ps $ podman pod ps
Additional resources
podman-pod-rmman page
Chapter 8. Adding software to a running UBI container
Red Hat Universal Base Images (UBIs) are built from a subset of the RHEL content. UBIs also provide a subset of RHEL packages that are freely available to install for use with UBI. To add or update software to a running container, you can use the yum repositories that include RPM packages and updates. UBIs provide a set of pre-built language runtime container images such as Python, Perl, Node.js, Ruby, and so on.
To add packages from UBI repositories to running UBI containers:
- On UBI init and UBI standard images, use the
yumcommand
- On UBI minimal images, use the
microdnfcommand
Installing and working with software packages directly in running containers adds packages temporarily. The changes are not saved in the container image. To make package changes persistent, see section Building an image from a Containerfile with Buildah.
When you add software to a UBI container, procedures differ for updating UBIs on a subscribed RHEL host or on an unsubscribed (or non-RHEL) system.
8.1. Adding software to a UBI container on a subscribed host
If you are running a UBI container on a registered and subscribed RHEL host, the RHEL Base and AppStream repositories are enabled inside the standard UBI container, along with all the UBI repositories.
Additional resources
8.2. Adding software in a standard UBI container
To add software inside the standard UBI container, disable non-UBI yum repositories to ensure the containers you build can be redistributed.
Procedure
Pull and run the
registry.access.redhat.com/ubi8/ubiimage:
$ podman run -it --name myubi registry.access.redhat.com/ubi8/ubi
Add a package to the
myubicontainer.
To add a package that is in the UBI repository, disable all yum repositories except for UBI repositories. For example, to add the
bzip2package:
# yum install --disablerepo=* --enablerepo=ubi-8-appstream --enablerepo=ubi-8-baseos bzip2
To add a package that is not in the UBI repository, do not disable any repositories. For example, to add the
zshpackage:
# yum install zsh
To add a package that is in a different host repository, explicitly enable the repository you need. For example, to install the
python38-develpackage from the
codeready-builder-for-rhel-8-x86_64-rpmsrepository:
# yum install --enablerepo=codeready-builder-for-rhel-8-x86_64-rpms python38-devel
Verification steps
List all enabled repositories inside the container:
# yum.3. Adding software in a minimal UBI container
UBI yum repositories are enabled inside UBI Minimal images by default.
Procedure
Pull and run the
registry.access.redhat.com/ubi8/ubi-minimalimage:
$ podman run -it --name myubimin registry.access.redhat.com/ubi8/ubi-minimal
Add a package to the
myubimincontainer:
To add a package that is in the UBI repository, do not disable any repositories. For example, to add the
bzip2package:
# microdnf install bzip2
To add a package that is in a different host repository, explicitly enable the repository you need. For example, to install the
python38-develpackage from the
codeready-builder-for-rhel-8-x86_64-rpmsrepository:
# microdnf install --enablerepo=codeready-builder-for-rhel-8-x86_64-rpms python38-devel
Verification steps
List all enabled repositories inside the container:
# microdnf.4. Adding software to a UBI container on a unsubscribed host
You do not have to disable any repositories when adding software packages on unsubscribed RHEL systems.
Procedure
Add a package to a running container based on the UBI standard or UBI init images. Do not disable any repositories. Use the
podman runcommand to run the container. then use the
yum installcommand inside a container.
For example, to add the
bzip2package to the UBI standard based container:
$ podman run -it --name myubi registry.access.redhat.com/ubi8/ubi # yum install bzip2
For example, to add the
bzip2package to the UBI init based container:
$ podman run -it --name myubimin registry.access.redhat.com/ubi8/ubi-minimal # microdnf install bzip2
Verification steps
List all enabled repositories:
To list all enabled repositories inside the containers based on UBI standard or UBI init images:
# yum repolist
To list all enabled repositories inside the containers based on UBI minimal containers:
# microdnf repolist
- Ensure that the required repositories are listed.
List all installed packages:
# rpm -qa
- Ensure that the required packages are listed.
8.5. Building UBI-based images
You can create a UBI-based web server container from a
Containerfile using the Buildah utility. You have to disable all non-UBI yum repositories to ensure that your image contains only Red Hat software that you can redistribute.
- NOTE
For UBI minimal images, use
microdnfinstead of
yum:
RUN microdnf update -y && rm -rf /var/cache/yum RUN microdnf install httpd -y && microdnf clean all
Procedure
Create a
Containerfile: container ... Writing manifest to image destination Storing signatures --> f9874f27050 f9874f270500c255b950e751e53d37c6f8f6dba13425d42f30c2a8ef26b769f2
Verification steps
Run the web server:
# podman run -d --name=myweb -p 80:80 johndoe/webserver bbe98c71d18720d966e4567949888dc4fb86eec7d304e785d5177168a5965f64
Test the web server:
# curl The Web Server is Running
8.6. Using Application Stream runtime images
Runtime images based on Application Streams offer a set of container images that you can use as the basis for your container builds.
Supported runtime images are Python, Ruby, s2-core, s2i-base, .NET Core, PHP. The runtime images are available in the Red Hat Container Catalog.
- NOTE
- Because these UBI images contain the same basic software as their legacy image counterparts, you can learn about those images from the Using Red Hat Software Collections Container Images guide.
Additional resources
8.7. Getting UBI container image source code
Source code is available for all Red Hat UBI-based images in the form of downloadable container images. Source container images cannot be run, despite being packaged as containers. To install Red Hat source container images on your system, use the
skopeo command, not the
podman pull command.
Source container images are named based on the binary containers they represent. For example, for a particular standard RHEL UBI 8 container
registry.access.redhat.com/ubi8:8.1-397 append
-source to get the source container image (
registry.access.redhat.com/ubi8:8.1-397-source).
Procedure
Use the
skopeo copycommand to copy the source container image to a local directory:
$ skopeo copy \ docker://registry.access.redhat.com/ubi8:8.1-397-source \ dir:$HOME/TEST ... Copying blob 477bc8106765 done Copying blob c438818481d3 done ... Writing manifest to image destination Storing signatures
Use the
skopeo inspectcommand to inspect the source container image:
$ skopeo inspect dir:$HOME/TEST { "Digest": "sha256:7ab721ef3305271bbb629a6db065c59bbeb87bc53e7cbf88e2953a1217ba7322", "RepoTags": [], "Created": "2020-02-11T12:14:18.612461174Z", "DockerVersion": "", "Labels": null, "Architecture": "amd64", "Os": "linux", "Layers": [ "sha256:1ae73d938ab9f11718d0f6a4148eb07d38ac1c0a70b1d03e751de8bf3c2c87fa", "sha256:9fe966885cb8712c47efe5ecc2eaa0797a0d5ffb8b119c4bd4b400cc9e255421", "sha256:61b2527a4b836a4efbb82dfd449c0556c0f769570a6c02e112f88f8bbcd90166", ... "sha256:cc56c782b513e2bdd2cc2af77b69e13df4ab624ddb856c4d086206b46b9b9e5f", "sha256:dcf9396fdada4e6c1ce667b306b7f08a83c9e6b39d0955c481b8ea5b2a465b32", "sha256:feb6d2ae252402ea6a6fca8a158a7d32c7e4572db0e6e5a5eab15d4e0777951e" ], "Env": null }
Unpack all the content:
$ cd $HOME/TEST $ for f in $(ls); do tar xvf $f; done
Check the results:
$ find blobs/ rpm_dir/ blobs/ blobs/sha256 blobs/sha256/10914f1fff060ce31388f5ab963871870535aaaa551629f5ad182384d60fdf82 rpm_dir/ rpm_dir/gzip-1.9-4.el8.src.rpm
If the results are correct, the image is ready to be used.
- NOTE
- It could take several hours after a container image is released for its associated source container to become available.
Additional resources
skopeo-copyman page
skopeo-inspectman page
Chapter 9. Running Skopeo, Buildah, and Podman in a container
This chapter describes how you can run Skopeo, Buildah, and Podman in a container.
With Skopeo, you can inspect images on a remote registry without having to download the entire image with all its layers. You can also use Skopeo for copying images, signing images, syncing images, and converting images across different formats and layer compressions.
Buildah facilitates building OCI container images..
With Podman, you can manage containers and images, volumes mounted into those containers, and pods made from groups of containers. Podman is based on a
libpod library for container lifecycle management. The
libpod library provides APIs for managing containers, pods, container images, and volumes.
Reasons to run Buildah, Skopeo, and Podman in a container:
CI/CD system:
- Podman and Skopeo: You can run a CI/CD system inside of Kubernetes or use OpenShift to build your container images, and possibly distribute those images across different container registries. To integrate Skopeo into a Kubernetes workflow, you need to run it in a container.
- Buildah: You want to build OCI/container images within a Kubernetes or OpenShift CI/CD systems that are constantly building images. Previously, people used a Docker socket to connect to the container engine and perform a
docker buildcommand. This was the equivalent of giving root access to the system without requiring a password which is not secure. For this reason, Red Hat recommends using Buildah in a container.
Different versions:
- All: You are running an older OS on the host but you want to run the latest version of Skopeo, Buildah, or Podman. The solution is to run the container tools in a container. For example, this is useful for running the latest version of the container tools provided in RHEL 8 on a RHEL 7 container host which does not have access to the newest versions natively.
HPC environment:
- All: A common restriction in HPC environments is that non-root users are not allowed to install packages on the host. When you run Skopeo, Buildah, or Podman in a container, you can perform these specific tasks as a non-root user.
9.1. Running Skopeo in a container
This procedure demonstrates how to inspect a remote container image using Skopeo. Running Skopeo in a container means that the container root filesystem is isolated from the host root filesystem. To share or copy files between the host and container, you have to mount files and directories.
Prerequisites
The
container-toolsmodule is installed.
# yum module install -y container-tools
Procedure
Log in to the registry.redhat.io registry:
$ podman login registry.redhat.io Username: myuser@mycompany.com Password: *********** Login Succeeded!
Get the
registry.redhat.io/rhel8/skopeocontainer image:
$ podman pull registry.redhat.io/rhel8/skopeo
Inspect a remote container image
registry.access.redhat.com/ubi8/ubiusing Skopeo:
$ podman run --rm registry.redhat.io/rhel8/skopeo skopeo inspect docker://registry.access.redhat.com/ubi8/ubi { "Name": "registry.access.redhat.com/ubi8/ubi", ... "Labels": { "architecture": "x86_64", ... "name": "ubi8", ... "summary": "Provides the latest release of Red Hat Universal Base Image 8.", "url": "", ... }, "Architecture": "amd64", "Os": "linux", "Layers": [ ... ], "Env": [ "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "container=oci" ] }
The
--rmoption removes the
registry.redhat.io/rhel8/skopeoimage after the container exits.
9.2. Running Skopeo in a container using credentials
Working with container registries requires an authentication to access and alter data. Skopeo supports various ways to specify credentials.
With this approach you can specify credentials on the command line using the
--cred USERNAME[:PASSWORD] option.
Prerequisites
The
container-toolsmodule is installed.
# yum module install -y container-tools
Procedure
Inspect a remote container image using Skopeo against a locked registry:
$ podman run --rm registry.redhat.io/rhel8/skopeo inspect --creds $USER:$PASSWORD docker://$IMAGE
9.3. Running Skopeo in a container using authfiles
You can use an authentication file (authfile) to specify credentials. The
skopeo login command logs into the specific registry and stores the authentication token in the authfile. The advantage of using authfiles is preventing the need to repeatedly enter credentials.
When running on the same host, all container tools such as Skopeo, Buildah, and Podman share the same authfile. When running Skopeo in a container, you have to either share the authfile on the host by volume-mounting the authfile in the container, or you have to reauthenticate within the container.
Prerequisites
The
container-toolsmodule is installed.
# yum module install -y container-tools
Procedure
Inspect a remote container image using Skopeo against a locked registry:
$ podman run --rm -v $AUTHFILE:/auth.json registry.redhat.io/rhel8/skopeo inspect docker://$IMAGE
The
-v $AUTHFILE:/auth.jsonoption volume-mounts an authfile at /auth.json within the container. Skopeo can now access the authentication tokens in the authfile on the host and get secure access to the registry.
Other Skopeo commands work similarly, for example:
- Use the
skopeo-copycommand to specify credentials on the command line for the source and destination image using the
--source-credsand
--dest-credsoptions. It also reads the
/auth.jsonauthfile.
- If you want to specify separate authfiles for the source and destination image, use the
--source-authfileand
--dest-authfileoptions and volume-mount those authfiles from the host into the container.
9.4. Copying container images to or from the host
Skopeo, Buildah, and Podman share the same local container-image storage. If you want to copy containers to or from the host container storage, you need to mount it into the Skopeo container.
The path to the host container storage differs between root (
/var/lib/containers/storage) and non-root users (
$HOME/.local/share/containers/storage).
Prerequisites
The
container-toolsmodule is installed.
# yum module install -y container-tools
Procedure
Copy the
registry.access.redhat.com/ubi8/ubiimage into your local container storage:
$ podman run --privileged --rm -v $HOME/.local/share/containers/storage:/var/lib/containers/storage registry.redhat.io/rhel8/skopeo skopeo copy docker://registry.access.redhat.com/ubi8/ubi containers-storage:registry.access.redhat.com/ubi8/ubi
- The
--privilegedoption disables all security mechanisms. Red Hat recommends only using this option in trusted environments.
To avoid disabling security mechanisms, export the images to a tarball or any other path-based image transport and mount them in the Skopeo container:
$ podman save --format oci-archive -o oci.tar $IMAGE
$ podman run --rm -v oci.tar:/oci.tar registry.redhat.io/rhel8/skopeo copy oci-archive:/oci.tar $DESTINATION
Optional: List images in local storage:
$ podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8/ubi latest ecbc6f53bba0 8 weeks ago 211 MB
9.5. Running Buildah in a container
The procedure demonstrates how to run Buildah in a container and create a working container based on an image.
Prerequisites
The
container-toolsmodule is installed.
# yum module install -y container-tools
Procedure
Log in to the registry.redhat.io registry:
$ podman login registry.redhat.io Username: myuser@mycompany.com Password: *********** Login Succeeded!
Pull and run the
registry.redhat.io/rhel8/buildahimage:
# podman run --rm --device /dev/fuse -it registry.redhat.io/rhel8/buildah /bin/bash
- The
--rmoption removes the
registry.redhat.io/rhel8/buildahimage after the container exits.
- The
--deviceoption adds a host device to the container.
Create a new container using a
registry.access.redhat.com/ubi8image:
# buildah from registry.access.redhat.com/ubi8 ... ubi8-working-container
Run the
ls /command inside the
ubi8-working-containercontainer:
# buildah run --isolation=chroot ubi8-working-container ls / bin boot dev etc home lib lib64 lost+found media mnt opt proc root run sbin srv
Optional: List all images in a local storage:
# buildah images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8 latest ecbc6f53bba0 5 weeks ago 211 MB
Optional: List the working containers and their base images:
# buildah containers CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME 0aaba7192762 * ecbc6f53bba0 registry.access.redhat.com/ub... ubi8-working-container
Optional: Push the
registry.access.redhat.com/ubi8image to the a local registry located on
registry.example.com:
# buildah push ecbc6f53bba0 registry.example.com:5000/ubi8/ubi
Additional resources
9.6. Privileged and unprivileged Podman containers
By default, Podman containers are unprivileged and cannot, for example, modify parts of the operating system on the host. This is because by default a container is only allowed limited access to devices.
The following list emphasizes important properties of privileged containers. You can run the privileged container using the
podman run --privileged <image_name> command.
- A privileged container is given the same access to devices as the user launching the container.
- A privileged container disables the security features that isolate the container from the host. Dropped Capabilities, limited devices, read-only mount points, Apparmor/SELinux separation, and Seccomp filters are all disabled.
- A privileged container cannot have more privileges than the account that launched them.
Additional resources
- How to use the --privileged flag with container engines
podman-runman page
9.7. Running Podman with extended privileges
If you cannot run your workloads in a rootless environment, you need to run these workloads as a root user. Running a container with extended privileges should be done judiciously, because it disables all security features.
Prerequisites
The
container-toolsmodule is installed.
# yum module install -y container-tools
Procedure
Run the Podman container in the Podman container:
$ podman run --privileged --name=privileged_podman registry.access.redhat.com/rhel8/podman podman run ubi8 echo hello Resolved "ubi8" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf) Trying to pull registry.access.redhat.com/ubi8:latest... ... Storing signatures hello
- Run the outer container named
privileged_podmanbased on the
registry.access.redhat.com/rhel8/podmanimage.
- The
--privilegedoption disables the security features that isolate the container from the host.
- Run
podman run ubi8 echo hellocommand to create the inner container based on the
ubi8image.
- Notice that the
ubi8short image name was resolved as an alias. As a result, the
registry.access.redhat.com/ubi8:latestimage is pulled.
Verification
List all containers:
$ podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 52537876caf4 registry.access.redhat.com/rhel8/podman podman run ubi8 e... 30 seconds ago Exited (0) 13 seconds ago privileged_podman
Additional resources
- How to use Podman inside of a container
podman-runman page
9.8. Running Podman with less privileges
You can run two nested Podman containers without the
--privileged option. Running the container without the
--privileged option is a more secure option.
This can be useful when you want to try out different versions of Podman in the most secure way possible.
Prerequisites
The
container-toolsmodule is installed.
# yum module install -y container-tools
Procedure
Run two nested containers:
$ podman run --name=unprivileged_podman --security-opt label=disable --user podman --device /dev/fuse registry.access.redhat.com/rhel8/podman podman run ubi8 echo hello
- Run the outer container named
unprivileged_podmanbased on the
registry.access.redhat.com/rhel8/podmanimage.
- The
--security-opt label=disableoption disables SELinux separation on the host Podman. SELinux does not allow containerized processes to mount all of the file systems required to run inside a container.
- The
--user podmanoption automatically causes the Podman inside the outer container to run within the user namespace.
- The
--device /dev/fuseoption uses the
fuse-overlayfspackage inside the container. This option adds
/dev/fuseto the outer container, so that Podman inside the container can use it.
- Run
podman run ubi8 echo hellocommand to create the inner container based on the
ubi8image.
- Notice that the ubi8 short image name was resolved as an alias. As a result, the
registry.access.redhat.com/ubi8:latestimage is pulled.
Verification
List all containers:
$ podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a47b26290f43 podman run ubi8 e... 30 seconds ago Exited (0) 13 seconds ago unprivileged_podman
9.9. Building a container inside a Podman container
This procedure shows how to run a container in a container using Podman. This example shows how to use Podman to build and run another container from within this container. The container will run "Moon-buggy", a simple text-based game.
Prerequisites
The
container-toolsmodule is installed.
# yum module install -y container-tools
You are logged in to the registry.redhat.io registry:
# podman login registry.redhat.io
Procedure
Run the container based on
registry.redhat.io/rhel8/podmanimage:
# podman run --privileged --name podman_container -it registry.redhat.io/rhel8/podman /bin/bash
- Run the outer container named
podman_containerbased on the
registry.redhat.io/rhel8/podmanimage.
- The
--itoption specifies that you want to run an interactive bash shell within a container.
- The
--privilegedoption disables the security features that isolate the container from the host.
Create a
Containerfileinside the
podman_containercontainer:
# vi Containerfile FROM registry.access.redhat.com/ubi8/ubi RUN yum install -y RUN yum -y install moon-buggy && yum clean all CMD ["/usr/bin/moon-buggy"]
The commands in the
Containerfilecause the following build command to:
- Build a container from the
registry.access.redhat.com/ubi8/ubiimage.
- Install the
epel-release-latest-8.noarch.rpmpackage.
- Install the
moon-buggypackage.
- Set the container command.
Build a new container image named
moon-buggyusing the
Containerfile:
# podman build -t moon-buggy .
Optional: List all images:
# podman images REPOSITORY TAG IMAGE ID CREATED SIZE localhost/moon-buggy latest c97c58abb564 13 seconds ago 1.67 GB registry.access.redhat.com/ubi8/ubi latest 4199acc83c6a 132seconds ago 213 MB
Run a new container based on a
moon-buggycontainer:
# podman run -it --name moon moon-buggy
Optional: Tag the
moon-buggyimage:
# podman tag moon-buggy registry.example.com/moon-buggy
Optional: Push the
moon-buggyimage to the registry:
# podman push registry.example.com/moon-buggy
Additional resources
Chapter 10. Running special container images
This chapter provides information about some special types of container images. Some container images have built in labels called runlabels that allow you to run those containers with preset options and arguments. The
podman container runlabel <label> command, allows you to execute the command defined in the
<label> for the container image. Supported labels are
install,
run and
uninstall.
10.1. Opening privileges to the host
There are several differences between privileged and non-privileged containers. For example, the toolbox container is a privileged container. Here are examples of privileges that may or may not be open to the host from a container:
- Privileges: A privileged container disables the security features that isolate the container from the host. You can run a privileged container using the
podman run --privileged <image_name>command. You can, for example, delete files and directories mounted from the host that are owned by the root user.
- Process tables: You can use the
podman run --privileged --pid=host <image_name>command to use the host PID namespace for the container. Then you can use the
ps -ecommand within a privileged container to list all processes running on the host. You can pass a process ID from the host to commands that run in the privileged container (for example,
kill <PID>).
- Network interfaces: By default, a container has only one external network interface and one loopback network interface. You can use the
podman run --net=host <image_name>command to access host network interfaces directly from within the container.
- Inter-process communications: The IPC facility on the host is accessible from within the privileged container. You can run commands such as
ipcsto see information about active message queues, shared memory segments, and semaphore sets on the host.
10.2. Container images with runlabels
Some Red Hat images include labels that provide pre-set command lines for working with those images. Using the
podman container runlabel <label> command, you can use the
podman command to execute the command defined in finish running the container.
10.3. Running rsyslog with runlabels
The
rhel8/rsyslog container image is made to run a containerized version of the
rsyslogd daemon. The
rsyslog image contains the following runlabels:
install,
run and
uninstall. The following procedure steps you through installing, running, and uninstalling the
rsyslog image:
Procedure
Pull the
rsyslogimage:
# podman pull registry.redhat.io/rhel8/rsyslog
Displayimage will use later.
Display the
runrunlabel for
rsyslog:
# podman container runlabel run --display
This shows that the command opens privileges to the host and mount specific files and directories from the host inside the container, when it launches the
rsyslogcontainer to run the
rsyslogddaemon.container opens privileges, mounts what it needs from the host, and runs the
rsyslogddaemon in the background (
-d). The
rsyslogddaemon script just removes the
/etc/logrotate.d/syslog file. It does not clean up the configuration files.
Chapter 11. Porting containers to OpenShift using Podman
This chapter describes how to generate portable descriptions of containers and pods using the YAML ("YAML Ain’t Markup Language") format. The YAML is a text format used to describe the configuration data.
The YAML files are:
- Readable.
- Easy to generate.
- Portable between environments (for example between RHEL and OpenShift).
- Portable between programming languages.
- Convenient to use (no need to add all the parameters to the command line).
Reasons to use YAML files:
- You can re-run a local orchestrated set of containers and pods with minimal input required which can be useful for iterative development.
- You can run the same containers and pods on another machine. For example, to run an application in an OpenShift environment and to ensure that the application is working correctly. You can use
podman generate kubecommand to generate a Kubernetes YAML file. Then, you can use
podman playcommand to test the creation of pods and containers on your local system before you transfer the generated YAML files to the Kubernetes or OpenShift environment. Using the
podman playcommand, you can also recreate pods and containers originally created in OpenShift or Kubernetes environments.
11.1. Generating a Kubernetes YAML file using Podman
This procedure describes how to create a pod with one container and generate the Kubernetes YAML file using the
podman generate kube command.
Prerequisites
- The pod has been created. For details, see Creating pods.
Procedure k8s.gcr.io/pause:3.1 Less than a second ago Up Less than a second ago 223df6b390b4-infra 223df6b390b4
Use the pod name or ID to generate the Kubernetes YAML file:
$ podman generate kube mypod > mypod.yaml
Note that the
podman generatecommand does not reflect any Logical Volume Manager (LVM) logical volumes or physical volumes that might be attached to the container.
Display the
mypod.yamlfile:
$ cat mypod.yaml # Generation of Kubernetes YAML is still under development! # # Save the output of this file and use kubectl create -f to import # it into Kubernetes. # # Created with podman-1.6.4 apiVersion: v1 kind: Pod metadata: creationTimestamp: "2020-06-09T10:31:56Z" labels: app: mypod name: mypod spec: containers: - command: - /bin/bash env: - name: PATH value: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin - name: TERM value: xterm - name: HOSTNAME - name: container value: oci image: registry.access.redhat.com/ubi8/ubi:latest name: myubi resources: {} securityContext: allowPrivilegeEscalation: true capabilities: {} privileged: false readOnlyRootFilesystem: false tty: true workingDir: / status: {}
Additional resources
man podman-generate-kube
- Podman: Managing pods and containers in a local container runtime article
11.2. Generating a Kubernetes YAML file in OpenShift environment
In the OpenShift environment, use the
oc create command to generate the YAML files describing your application.
Procedure
Generate the YAML file for your
myappapplication:
$ oc create myapp --image=me/myapp:v1 -o yaml --dry-run > myapp.yaml
The
oc createcommand creates and run the
myappimage. The object is printed using the
--dry-runoption and redirected into the
myapp.yamloutput file.
In the Kubernetes environment, you can use the
kubectl create command with the same flags.
11.3. Starting containers and pods with Podman
With the generated YAML files, you can automatically start containers and pods in any environment. Note that the YAML files must not be generated by the Podman. The
podman play kube command allows you to recreate pods and containers based on the YAML input file.
Procedure
Create the pod and the container from the
mypod.yamlfile:
$ podman play kube mypod.yaml Pod: b8c5b99ba846ccff76c3ef257e5761c2d8a5ca4d7ffa3880531aec79c0dacb22 Container: 848179395ebd33dd91d14ffbde7ae273158d9695a081468f487af4e356888ece
List all pods:
$ podman pod ps POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID b8c5b99ba846 mypod Running 19 seconds ago 2 aa4220eaf4bb
List all pods and containers associated with them:
$ podman ps -a --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD 848179395ebd registry.access.redhat.com/ubi8/ubi:latest /bin/bash About a minute ago Up About a minute ago myubi b8c5b99ba846 aa4220eaf4bb k8s.gcr.io/pause:3.1 About a minute ago Up About a minute ago b8c5b99ba846-infra b8c5b99ba846
The pod IDs from
podman pscommand matches the pod ID from the
podman pod pscommand.
Additional resources
man podman-play-kube
- Podman can now ease the transition to Kubernetes and CRI-O article
11.4. Starting containers and pods in OpenShift environment
You can use the
oc create command to create pods and containers in the OpenShift environment.
Procedure
Create a pod from the YAML file in the OpenShift environment:
$ oc create -f mypod.yaml
In the Kubernetes environment, you can use the
kubectl create command with the same flags.
Chapter 12. Porting containers to systemd using Podman
Podman (Pod Manager) is a fully featured container engine that is a simple daemonless tool. Podman provides a Docker-CLI comparable command line that eases the transition from other container engines and allows the management of pods, containers and images.
Podman. You can use the systemd initialization service to work with pods and containers. You can use the
podman generate systemd command to generate a systemd unit file for containers and pods.
With systemd unit files, you can:
- Set up a container or pod to start as a systemd service.
- Define the order in which the containerized service runs and check for dependencies (for example making sure another service is running, a file is available or a resource is mounted).
- Control the state of the systemd system using the
systemctlcommand.
This chapter provides you with information on how to generate portable descriptions of containers and pods using systemd unit files.
12.1. Enabling systemd services
When enabling the service, you have different options.
Procedure
Enable the service:
To enable a service at system start, no matter if user is logged in or not, enter:
# systemctl enable <service>
You have to copy the systemd unit files to the
/etc/systemd/systemdirectory.
To start a service at user login and stop it at user logout, enter:
$ systemctl --user enable <service>
You have to copy the systemd unit files to the
$HOME/.config/systemd/userdirectory.
To enable users to start a service at system start and persist over logouts, enter:
# loginctl enable-linger <username>
Additional resources
man systemctl
man loginctl
- Managing services with systemd chapter
12.2. Generating a systemd unit file using Podman
Podman allows systemd to control and manage container processes. You can generate a systemd unit file for the existing containers and pods using
podman generate systemd command. It is recommended to use
podman generate systemd because the generated units files change frequently (via updates to Podman) and the
podman generate systemd ensures that you get the latest version of unit files.
Procedure
Create a container (for example
myubi):
$ podman create --name myubi registry.access.redhat.com/ubi8:latest sleep infinity 0280afe98bb75a5c5e713b28de4b7c5cb49f156f1cce4a208f13fee2f75cb453
Use the container name or ID to generate the systemd unit file and direct it into the
~/.config/systemd/user/container-myubi.servicefile:
$ podman generate systemd --name myubi > ~/.config/systemd/user/container-myubi.service
Verification steps
Display the content of generated systemd unit file:
$ cat ~/.config/systemd/user/container-myubi.service # container-myubi.service # autogenerated by Podman 3.3.1 # Wed Sep 8 20:34:46 CEST 2021 [Unit] Description=Podman container-myubi.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor=/run/user/1000/containers [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStart=/usr/bin/podman start myubi ExecStop=/usr/bin/podman stop -t 10 myubi ExecStopPost=/usr/bin/podman stop -t 10 myubi PIDFile=/run/user/1000/containers/overlay-containers/9683103f58a32192c84801f0be93446cb33c1ee7d9cdda225b78049d7c5deea4/userdata/conmon.pid Type=forking [Install] WantedBy=multi-user.target default.target
- The
Restart=on-failureline sets the restart policy and instructs systemd to restart when the service cannot be started or stopped cleanly, or when the process exits non-zero.
- The
ExecStartline describes how we start the container.
- The
ExecStopline describes how we stop and remove the container.
Additional resources
12.3. Auto-generating a systemd unit file using Podman
By default, Podman generates a unit file for existing containers or pods. You can generate more portable systemd unit files using the
podman generate systemd --new. The
--new flag instructs Podman to generate unit files that create, start and remove containers.
Procedure
Pull the image you want to use on your system. For example, to pull the
httpd-24image:
# podman pull registry.access.redhat.com/ubi8/httpd-24
Optional. List all images available on your system:
# podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8/httpd-24 latest 8594be0a0b57 2 weeks ago 462 MB
Create the
httpdcontainer:
# podman create --name httpd -p 8080:8080 registry.access.redhat.com/ubi8/httpd-24 cdb9f981cf143021b1679599d860026b13a77187f75e46cc0eac85293710a4b1
Optional. Verify the container has been created:
# podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES cdb9f981cf14 registry.access.redhat.com/ubi8/httpd-24:latest /usr/bin/run-http... 5 minutes ago Created 0.0.0.0:8080->8080/tcp httpd
Generate a systemd unit file for the
httpdcontainer:
# podman generate systemd --new --files --name httpd /root/container-httpd.service
Display the content of the generated
container-httpd.servicesystemd unit file:
# cat /root/container-httpd.service # container-httpd.service # autogenerated by Podman 3.3.1 # Wed Sep 8 20:41:44 CEST 2021 [Unit] Description=Podman container-httpd -d --replace --name httpd -p 8080:8080 registry.access.redhat.com/ubi8/httpd-24
- NOTE
Unit files generated using the
--newoption do not expect containers and pods to exist. Therefore, they perform the
podman runcommand when starting the service (see the
ExecStartline) instead of the
podman startcommand. For example, see Section Generating a systemd unit file using Podman.
The
podman runcommand uses the following command-line options:
- The
--conmon-pidfileoption points to a path to store the process ID for the
conmonprocess running on the host. The
conmonprocess terminates with the same exit status as the container, which allows systemd to report the correct service status and restart the container if needed.
- The
--cidfileoption points to the path that stores the container ID.
- The
%tis the path to the run time directory root, for example
/run/user/$UserID.
- The
%nis the full name of the service.
Copy unit files to
/usr/lib/systemd/systemfor installing them as a root user:
# cp -Z container-httpd.service /etc/systemd/system
Enable and start the
container-httpd.service:
# systemctl daemon-reload # systemctl enable --now container-httpd.service Created symlink /etc/systemd/system/multi-user.target.wants/container-httpd.service → /etc/systemd/system/container-httpd.service. Created symlink /etc/systemd/system/default.target.wants/container-httpd.service → /etc/systemd/system/container-httpd.service.
Verification steps
Check the status of the
container-httpd.service:
# systemctl status container-httpd.service ● container-httpd.service - Podman container-httpd.service Loaded: loaded (/etc/systemd/system/container-httpd.service; enabled; vendor preset: disabled) Active: active (running) since Tue 2021-08-24 09:53:40 EDT; 1min 5s ago Docs: man:podman-generate-systemd(1) Process: 493317 ExecStart=/usr/bin/podman run --conmon-pidfile /run/container-httpd.pid --cidfile /run/container-httpd.ctr-id --cgroups=no-conmon -d --repla> Process: 493315 ExecStartPre=/bin/rm -f /run/container-httpd.pid /run/container-httpd.ctr-id (code=exited, status=0/SUCCESS) Main PID: 493435 (conmon) ...
Additional resources
12.4. Auto-starting containers using systemd
You can control the state of the systemd system and service manager using the
systemctl command. This section shows the general procedure on how to enable, start, stop the service as a non-root user. To install the service as a root user, omit the
--user option.
Procedure
Reload systemd manager configuration:
# systemctl --user daemon-reload
Enable the service
container.serviceand start it at boot time:
# systemctl --user enable container.service
Start the service immediately:
# systemctl --user start container.service
Check the status of the service:
$ systemctl --user status container.service ● container.service - Podman container.service Loaded: loaded (/home/user/.config/systemd/user/container.service; enabled; vendor preset: enabled) Active: active (running) since Wed 2020-09-16 11:56:57 CEST; 8s ago Docs: man:podman-generate-systemd(1) Process: 80602 ExecStart=/usr/bin/podman run --conmon-pidfile //run/user/1000/container.service-pid --cidfile //run/user/1000/container.service-cid -d ubi8-minimal:> Process: 80601 ExecStartPre=/usr/bin/rm -f //run/user/1000/container.service-pid //run/user/1000/container.service-cid (code=exited, status=0/SUCCESS) Main PID: 80617 (conmon) CGroup: /user.slice/user-1000.slice/user@1000.service/container.service ├─ 2870 /usr/bin/podman ├─80612 /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox --enable-seccomp -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-> ├─80614 /usr/bin/fuse-overlayfs -o lowerdir=/home/user/.local/share/containers/storage/overlay/l/YJSPGXM2OCDZPLMLXJOW3NRF6Q:/home/user/.local/share/contain> ├─80617 /usr/bin/conmon --api-version 1 -c cbc75d6031508dfd3d78a74a03e4ace1732b51223e72a2ce4aa3bfe10a78e4fa -u cbc75d6031508dfd3d78a74a03e4ace1732b51223e72> └─cbc75d6031508dfd3d78a74a03e4ace1732b51223e72a2ce4aa3bfe10a78e4fa └─80626 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1d
You can check if the service is enabled using the
systemctl is-enabled container.servicecommand.
Verification steps
List containers that are running or have exited:
# podman ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES f20988d59920 registry.access.redhat.com/ubi8-minimal:latest top 12 seconds ago Up 11 seconds ago funny_zhukovsky
To stop
container.service, enter:
# systemctl --user stop container.service
Additional resources
man systemctl
- Running containers with Podman and shareable systemd services article
- Managing services with systemd chapter
12.5. Auto-starting pods using systemd
You can start multiple containers as systemd services. Note that the
systemctl command should only be used on the pod and you should not start or stop containers individually via
systemctl, as they are managed by the pod service along with the internal infra-container.
Procedure
Create an empty pod, for example named
systemd-pod:
$ podman pod create --name systemd-pod 11d4646ba41b1fffa51c108cbdf97cfab3213f7bd9b3e1ca52fe81b90fed5577
Optional. List all pods:
$ podman pod ps POD ID NAME STATUS CREATED # OF CONTAINERS INFRA ID 11d4646ba41b systemd-pod Created 40 seconds ago 1 8a428b257111 11d4646ba41b1fffa51c108cbdf97cfab3213f7bd9b3e1ca52fe81b90fed5577
Create two containers in the empty pod. For example, to create
container0and
container1in
systemd-pod:
$ podman create --pod systemd-pod --name container0 registry.access.redhat.com/ubi8 top $ podman create --pod systemd-pod --name container1 registry.access.redhat.com/ubi8 top
Optional. List all pods and containers associated with them:
$ podman ps -a --pod CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES POD ID PODNAME 24666f47d9b2 registry.access.redhat.com/ubi8:latest top 3 minutes ago Created container0 3130f724e229 systemd-pod 56eb1bf0cdfe k8s.gcr.io/pause:3.2 4 minutes ago Created 3130f724e229-infra 3130f724e229 systemd-pod 62118d170e43 registry.access.redhat.com/ubi8:latest top 3 seconds ago Created container1 3130f724e229 systemd-pod
Generate the systemd unit file for the new pod:
$ podman generate systemd --files --name systemd-pod /home/user1/pod-systemd-pod.service /home/user1/container-container0.service /home/user1/container-container1.service
Note that three systemd unit files are generated, one for the
systemd-podpod and two for the containers
container0and
container1.
Display
pod-systemd-pod.serviceunit file:
$ cat pod-systemd-pod.service # pod-systemd-pod.service # autogenerated by Podman 3.3.1 # Wed Sep 8 20:49:17 CEST 2021 [Unit] Description=Podman pod-systemd-pod.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor= Requires=container-container0.service container-container1.service Before=container-container0.service container-container1.service [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStart=/usr/bin/podman start bcb128965b8e-infra ExecStop=/usr/bin/podman stop -t 10 bcb128965b8e-infra ExecStopPost=/usr/bin/podman stop -t 10 bcb128965b8e-infra PIDFile=/run/user/1000/containers/overlay-containers/1dfdcf20e35043939ea3f80f002c65c00d560e47223685dbc3230e26fe001b29/userdata/conmon.pid Type=forking [Install] WantedBy=multi-user.target default.target
- The
Requiresline in the
[Unit]section defines dependencies on
container-container0.serviceand
container-container1.serviceunit files. Both unit files will be activated.
- The
ExecStartand
ExecStoplines in the
[Service]section start and stop the infra-container, respectively.
Display
container-container0.serviceunit file:
$ cat container-container0.service # container-container0.service # autogenerated by Podman 3.3.1 # Wed Sep 8 20:49:17 CEST 2021 [Unit] Description=Podman container-container0.service Documentation=man:podman-generate-systemd(1) Wants=network-online.target After=network-online.target RequiresMountsFor=/run/user/1000/containers BindsTo=pod-systemd-pod.service After=pod-systemd-pod.service [Service] Environment=PODMAN_SYSTEMD_UNIT=%n Restart=on-failure TimeoutStopSec=70 ExecStart=/usr/bin/podman start container0 ExecStop=/usr/bin/podman stop -t 10 container0 ExecStopPost=/usr/bin/podman stop -t 10 container0 PIDFile=/run/user/1000/containers/overlay-containers/4bccd7c8616ae5909b05317df4066fa90a64a067375af5996fdef9152f6d51f5/userdata/conmon.pid Type=forking [Install] WantedBy=multi-user.target default.target
- The
BindsToline line in the
[Unit]section defines the dependency on the
pod-systemd-pod.serviceunit file
- The
ExecStartand
ExecStoplines in the
[Service]section start and stop the
container0respectively.
Display
container-container1.serviceunit file:
$ cat container-container1.service
Copy all the generated files to
$HOME/.config/systemd/userfor installing as a non-root user:
$ cp pod-systemd-pod.service container-container0.service container-container1.service $HOME/.config/systemd/user
Enable the service and start at user login:
$ systemctl enable --user pod-systemd-pod.service Created symlink /home/user1/.config/systemd/user/multi-user.target.wants/pod-systemd-pod.service → /home/user1/.config/systemd/user/pod-systemd-pod.service. Created symlink /home/user1/.config/systemd/user/default.target.wants/pod-systemd-pod.service → /home/user1/.config/systemd/user/pod-systemd-pod.service.
Note that the service stops at user logout.
Verification steps
Check if the service is enabled:
$ systemctl is-enabled pod-systemd-pod.service enabled
Additional resources
man podman-create
man podman-generate-systemd
man systemctl
- Running containers with Podman and shareable systemd services article
- Managing services with systemd chapter
12.6. Auto-updating containers using Podman
The
podman auto-update command allows you to automatically update containers according to their auto-update policy. The
podman auto-update command updates services when the container image is updated on the registry. To use auto-updates, containers must be created with the
--label "io.containers.autoupdate=image" label and run in a systemd unit generated by
podman generate systemd --new command.
Podman searches for running containers with the
"io.containers.autoupdate" label set to
"image" and communicates to the container registry. If the image has changed, Podman restarts the corresponding systemd unit to stop the old container and create a new one with the new image. As a result, the container, its environment, and all dependencies, are restarted.
Prerequisites
The
container-toolsmodule is installed.
# yum module install -y container-tools
Procedure
Start a
myubicontainer based on the
registry.access.redhat.com/ubi8/ubi-initimage:
# podman run --label "io.containers.autoupdate=image" \ --name myubi -dt registry.access.redhat.com/ubi8/ubi-init top bc219740a210455fa27deacc96d50a9e20516492f1417507c13ce1533dbdcd9d
Optional: List containers that are running or have exited:
# podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 76465a5e2933 registry.access.redhat.com/ubi8/ubi-init:latest top 24 seconds ago Up 23 seconds ago myubi
Generate a systemd unit file for the
myubicontainer:
# podman generate systemd --new --files --name myubi /root/container-myubi.service
Copy unit files to
/usr/lib/systemd/systemfor installing it as a root user:
# cp -Z ~/container-myubi.service /usr/lib/systemd/system
Reload systemd manager configuration:
# systemctl daemon-reload
Start and check the status of a container:
# systemctl start container-myubi.service # systemctl status container-myubi.service
Auto-update the container:
# podman auto-update
Additional resources
12.7. Auto-updating containers using systemd
As mentioned in section Auto-updating containers using Podman, you can update the container using the
podman auto-update command. It integrates into custom scripts and can be invoked when needed. Another way to auto update the containers is to use the pre-installed
podman-auto-update.timer and
podman-auto-update.service systemd service. The
podman-auto-update.timer can be configured to trigger auto updates at a specific date or time. The
podman-auto-update.service can further be started by the
systemctl command or be used as a dependency by other systemd services. As a result, auto updates based on time and events can be triggered in various ways to meet individual needs and use cases.
Prerequisites
The
container-toolsmodule is installed.
# yum module install -y container-tools
Procedure
Display the
podman-auto-update.serviceunit file:
# cat /usr/lib/systemd/system/podman-auto-update.service [Unit] Description=Podman auto-update service Documentation=man:podman-auto-update(1) Wants=network.target After=network-online.target [Service] Type=oneshot ExecStart=/usr/bin/podman auto-update [Install] WantedBy=multi-user.target default.target
Display the
podman-auto-update.timerunit file:
# cat /usr/lib/systemd/system/podman-auto-update.timer [Unit] Description=Podman auto-update timer [Timer] OnCalendar=daily Persistent=true [Install] WantedBy=timers.target
In this example, the
podman auto-updatecommand is launched daily at midnight.
Enable the
podman-auto-update.timerservice at system start:
# systemctl enable podman-auto-update.timer
Start the systemd service:
# systemctl start podman-auto-update.timer
Optional: List all timers:
# systemctl list-timers --all NEXT LEFT LAST PASSED UNIT ACTIVATES Wed 2020-12-09 00:00:00 CET 9h left n/a n/a podman-auto-update.timer podman-auto-update.service
You can see that
podman-auto-update.timeractivates the
podman-auto-update.service.
Additional resources
Chapter 13. Building container images with Buildah
Buildah facilitates building OCI container images that meet the OCI Runtime Specification..
13.1. The Buildah tool
Using Buildah is different from building images with the docker command in the following ways:
- No Daemon
- Buildah requires no container runtime.
- Base image or scratch
- You can build an image based on another container or start with an empty image (scratch).
- Build tools are external
Buildah does not include build tools within the image itself. As a result, Buildah:
- Reduces the size of built images.
- Increases security of images by excluding software (e.g. gcc, make, and yum) from the resulting image.
- Allows to transport the images using fewer resources because of the reduced image size.
- Compatibility
- Buildah supports building container images with Dockerfiles allowing for an easy transition from Docker to Buildah.
The default location Buildah uses for container storage is the same as the location the CRI-O container engine uses for storing local copies of images. As a result, the images pulled from a registry by either CRI-O or Buildah, or committed by the buildah command, are stored in the same directory structure. However, even though CRI-O and Buildah are currently able to share images, they cannot share containers.
Additional resources
- Buildah - a tool that facilitates building Open Container Initiative (OCI) container images
- Buildah Tutorial 1: Building OCI container images
- Buildah Tutorial 2: Using Buildah with container registries
- Building with Buildah: Dockerfiles, command line, or scripts
- How rootless Buildah works: Building containers in unprivileged environments
13.2. Installing Buildah
Install the Buildah tool using the
yum command.
Procedure
Install the Buildah tool:
# yum -y install buildah
Verification
Display the help message:
# buildah -h
13.3. Getting images with Buildah
Use the
buildah from command to create a new working container from scratch or based on a specified image as a starting point.
Procedure
Create a new working container based on the
registry.redhat.io/ubi8/ubiimage:
# buildah from registry.access.redhat.com/ubi8/ubi Getting image source signatures Copying blob… Writing manifest to image destination Storing signatures ubi-working-container
Verification
List all images in local storage:
# buildah images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi8/ubi latest 272209ff0ae5 2 weeks ago 234 MB
List the working containers and their base images:
# buildah containers CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME 01eab9588ae1 * 272209ff0ae5 registry.access.redhat.com/ub... ubi-working-container
Additional resources
buildah-fromman page
buildah-imagesman page
buildah.containersman page
13.4. Running commands inside of the container
Use the
buildah run command to execute a command from the container.
Prerequisites
- A pulled image is available on the local system.
Procedure
Display the operating system version:
# buildah run ubi-working-container cat /etc/redhat-release Red Hat Enterprise Linux release 8.4 (Ootpa)
Additional resources
buildah-runman page
13.5. Building an image from a Containerfile with Buildah
Use the
buildah bud command to build an image using instructions from a
Containerfile.
The
buildah bud command uses a
Containerfile if found in the context directory, if it is not found the
buildah bud command uses a
Dockerfile; otherwise any file can be specified with the
--file option. The available commands that are usable inside a
Containerfile and a
Dockerfile are equivalent.
Procedure
Create a
Containerfile:
# cat Containerfile FROM registry.access.redhat.com/ubi8/ubi ADD myecho /usr/local/bin ENTRYPOINT "/usr/local/bin/myecho"
Create a
myechoscript:
# cat myecho echo "This container works!"
Change the access permissions of
myechoscript:
# chmod 755 myecho
Build the
myechoimage using
Containerfilein the current directory:
# buildah bud -t myecho . STEP 1: FROM registry.access.redhat.com/ubi8/ubi STEP 2: ADD myecho /usr/local/bin STEP 3: ENTRYPOINT "/usr/local/bin/myecho" STEP 4: COMMIT myecho ... Storing signatures
Verification
List all images:
# buildah images REPOSITORY TAG IMAGE ID CREATED SIZE localhost/myecho latest b28cd00741b3 About a minute ago 234 MB
Run the
myechocontainer based on the
localhost/myechoimage:
# podman run --name=myecho localhost/myecho This container works!
List all containers:
# podman ps -a 0d97517428d localhost/myecho 12 seconds ago Exited (0) 13 seconds ago myecho
You can use the
podman history command to display the information about each layer used in the image.
Additional resources
buildah-budman page
13.6. Inspecting containers and images with Buildah
Use the
buildah inspect command to display information about a container or image.
Prerequisites
- An image was built using instructions from Containerfile. For details, see section Building an image from a Containerfile with Buildah.
Procedure
Inspect the image:
To inspect the myecho image, enter:
# buildah inspect localhost/myecho { "Type": "buildah 0.0.1", "FromImage": "localhost/myecho:latest", "FromImageID": "b28cd00741b38c92382ee806e1653eae0a56402bcd2c8d31bdcd36521bc267a4", "FromImageDigest": "sha256:0f5b06cbd51b464fabe93ce4fe852a9038cdd7c7b7661cd7efef8f9ae8a59585", "Config": ... "Entrypoint": [ "/bin/sh", "-c", "\"/usr/local/bin/myecho\"" ], ... }
To inspect the working container from the
myechoimage:
Create a working container based on the
localhost/myechoimage:
# buildah from localhost/myecho
Inspect the
myecho-working-containercontainer:
# buildah inspect ubi-working-container { "Type": "buildah 0.0.1", "FromImage": "registry.access.redhat.com/ubi8/ubi:latest", "FromImageID": "272209ff0ae5fe54c119b9c32a25887e13625c9035a1599feba654aa7638262d", "FromImageDigest": "sha256:77623387101abefbf83161c7d5a0378379d0424b2244009282acb39d42f1fe13", "Config": ... "Container": "ubi-working-container", "ContainerID": "01eab9588ae1523746bb706479063ba103f6281ebaeeccb5dc42b70e450d5ad0", "ProcessLabel": "system_u:system_r:container_t:s0:c162,c1000", "MountLabel": "system_u:object_r:container_file_t:s0:c162,c1000", ... }
Additional resources
buildah-inspectman page
13.7. Modifying a container using buildah mount
Use the
buildah inspect command to display information about a container or image.
Prerequisites
- An image built using instructions from Containerfile. For details, see section Building an image from a Containerfile with Buildah.
Procedure
Create a working container based on the
registry.access.redhat.com/ubi8/ubiimage and save the name of the container to the
mycontainervariable:
# mycontainer=$(buildah from localhost/myecho) # echo $mycontainer myecho-working-container
Mount the
myecho-working-containercontainer and save the mount point path to the
mymountvariable:
# mymount=$(buildah mount $mycontainer) # echo $mymount /var/lib/containers/storage/overlay/c1709df40031dda7c49e93575d9c8eebcaa5d8129033a58e5b6a95019684cc25/merged
Modify the
myechoscript and make it executable:
# echo 'echo "We modified this container."' >> $mymount/usr/local/bin/myecho # chmod +x $mymount/usr/local/bin/myecho
Create the
myecho2image from the
myecho-working-containercontainer:
# buildah commit $mycontainer containers-storage:myecho2
Verification
List all images in local storage:
# buildah images REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/library/myecho2 latest 4547d2c3e436 4 minutes ago 234 MB localhost/myecho latest b28cd00741b3 56 minutes ago 234 MB
Run the
myecho2container based on the
docker.io/library/myecho2image:
# podman run --name=myecho2 docker.io/library/myecho2 This container works! We even modified it.
Additional resources
buildah-mountman page
buildah-commitman page
13.8. Modifying a container using buildah copy and buildah config
Use
buildah copy command to copy files to a container without mounting it. You can then configure the container using the
buildah config command to run the script you created by default.
Prerequisites
- An image built using instructions from Containerfile. For details, see section Building an image from a Containerfile with Buildah.
Procedure
Create a script named
newechoand make it executable:
# cat newecho echo "I changed this container" # chmod 755 newecho
Create a new working container:
# buildah from myecho:latest myecho-working-container-2
Copy the newecho script to
/usr/local/bindirectory inside the container:
# buildah copy myecho-working-container-2 newecho /usr/local/bin
Change the configuration to use the
newechoscript as the new entrypoint:
# buildah config --entrypoint "/bin/sh -c /usr/local/bin/newecho" myecho-working-container-2
Optional. Run the
myecho-working-container-2container whixh triggers the
newechoscript to be executed:
# buildah run myecho-working-container-2 -- sh -c '/usr/local/bin/newecho' I changed this container
Commit the
myecho-working-container-2container to a new image called
mynewecho:
# buildah commit myecho-working-container-2 containers-storage:mynewecho
Verification
List all images in local storage:
# buildah images REPOSITORY TAG IMAGE ID CREATED SIZE docker.io/library/mynewecho latest fa2091a7d8b6 8 seconds ago 234 MB
Additional resources
buildah-copyman page
buildah-configman page
buildah-commitman page
buildah-runman page
13.9. Creating images from scratch with Buildah
Instead of starting with a base image, you can create a new container that holds only a minimal amount of container metadata.
When creating an image from scratch container, consider: A * You can copy the executable with no dependencies into the scratch image and make a few configuration settings to get a minimal container to work. * You must initialize an RPM database and add a release package in the container to use tools like
yum or
rpm. * If you add a lot of packages, consider using the standard UBI or minimal UBI images instead of scratch images.
Procedure
This procedure adds a web service httpd to a container and configures it to run.
Create an empty container:
# buildah from scratch working-container
Mount the
working-containercontainer and save the mount point path to the
scratchmntvariable:
# scratchmnt=$(buildah mount working-container) # echo $scratchmnt /var/lib/containers/storage/overlay/be2eaecf9f74b6acfe4d0017dd5534fde06b2fa8de9ed875691f6ccc791c1836/merged
Initialize an RPM database within the scratch image and add the
redhat-releasepackage:
# yum install -y --releasever=8 --installroot=$scratchmnt redhat-release
Install the
httpdservice to the
scratchdirectory:
# yum install -y --setopt=reposdir=/etc/yum.repos.d \ --installroot=$scratchmnt \ --setopt=cachedir=/var/cache/dnf httpd
Create the
$scratchmnt/var/www/html/index.htmlfile:
# mkdir -p $scratchmnt/var/www/html # echo "Your httpd container from scratch works!" > $scratchmnt/var/www/html/index.html
Configure
working-containerto run the
httpddaemon directly from the container:
# buildah config --cmd "/usr/sbin/httpd -DFOREGROUND" working-container # buildah config --port 80/tcp working-container # buildah commit working-container localhost/myhttpd:latest
Verification
List all images in local storage:
# podman images REPOSITORY TAG IMAGE ID CREATED SIZE localhost/myhttpd latest 08da72792f60 2 minutes ago 121 MB
Run the
localhost/myhttpdimage and and configure port mappings between the container and the host system:
# podman run -p 8080:80 -d --name myhttpd 08da72792f60
Test the web server:
# curl localhost:8080 Your httpd container from scratch works!
Additional resources
buildah-configman page
buildah-commitman page
13.10. Pushing containers to a private registry
Use
buildah push command to push an image from local storage to a public or private repository.
Prerequisites
- An image was built using instructions from Containerfile. For details, see section Building an image from a Containerfile with Buildah.
Procedure
Create the local registry on your machine:
# podman run -d -p 5000:5000 registry:2
Push the
myecho:latestimage to the
localhostregistry:
# buildah push --tls-verify=false myecho:latest localhost:5000/myecho:latest Getting image source signatures Copying blob sha256:e4efd0... ... Writing manifest to image destination Storing signatures
Verification
List all images in the
localhostrepository:
# curl {"repositories":["myecho2]} # curl {"name":"myecho","tags":["latest"]}
Inspect the
docker://localhost:5000/myecho:latestimage:
# skopeo inspect --tls-verify=false docker://localhost:5000/myecho:latest | less { "Name": "localhost:5000/myecho", "Digest": "sha256:8999ff6050...", "RepoTags": [ "latest" ], "Created": "2021-06-28T14:44:05.919583964Z", "DockerVersion": "", "Labels": { "architecture": "x86_64", "authoritative-source-url": "registry.redhat.io", ... }
Pull the
localhost:5000/myechoimage:
# podman pull --tls-verify=false localhost:5000/myecho2 # podman run localhost:5000/myecho2 This container works!
Additional resources
buildah-pushman page
13.11. Pushing containers to the Docker Hub
Use your Docker Hub credentials to push and pull images from the Docker Hub with the
buildah command.
Prerequisites
- An image built using instructions from Containerfile. For details, see section Building an image from a Containerfile with Buildah.
Procedure
Push the
docker.io/library/myecho:latestto your Docker Hub. Replace
usernameand
passwordwith your Docker Hub credentials:
# buildah push --creds username:password \ docker.io/library/myecho:latest docker://testaccountXX/myecho:latest
Verification
Get and run the
docker.io/testaccountXX/myecho:latestimage:
Using Podman tool:
# podman run docker.io/testaccountXX/myecho:latest This container works!
Using Buildah and Podman tools:
# buildah from docker.io/testaccountXX/myecho:latest myecho2-working-container-2 # podman run myecho-working-container-2
Additional resources
buildah-pushman page
13.12. Removing images with Buildah
Use the
buildah rmi command to remove locally stored container images. You can remove an image by its ID or name.
Procedure
List all images on your local system:
# buildah images REPOSITORY TAG IMAGE ID CREATED SIZE localhost/johndoe/webserver latest dc5fcc610313 46 minutes ago 263 MB docker.io/library/mynewecho latest fa2091a7d8b6 17 hours ago 234 MB docker.io/library/myecho2 latest 4547d2c3e436 6 days ago 234 MB localhost/myecho latest b28cd00741b3 6 days ago 234 MB localhost/ubi-micro-httpd latest c6a7678c4139 12 days ago 152 MB registry.access.redhat.com/ubi8/ubi latest 272209ff0ae5 3 weeks ago 234 MB
Remove the
localhost/myechoimage:
# buildah rmi localhost/myecho
To remove multiple images:
# buildah rmi docker.io/library/mynewecho docker.io/library/myecho2
To remove all images from your system:
# buildah rmi -a
To remove images that have multiple names (tags) associated with them, add the
-foption to remove them:
# buildah rmi -f localhost/ubi-micro-httpd
Verification
Ensure that images were removed:
# buildah images
Additional resources
buildah-rmiman page
13.13. Removing containers with Buildah
Use the
buildah rm command to remove containers. You can specify containers for removal with the container ID or name.
Prerequisites
- At least one container has been stopped.
Procedure
List all containers:
# buildah containers CONTAINER ID BUILDER IMAGE ID IMAGE NAME CONTAINER NAME 05387e29ab93 * c37e14066ac7 docker.io/library/myecho:latest myecho-working-container
Remove the myecho-working-container container:
# buildah rm myecho-working-container 05387e29ab93151cf52e9c85c573f3e8ab64af1592b1ff9315db8a10a77d7c22
Verification
Ensure that containers were removed:
# buildah containers
Additional resources
buildah-rmman page
Chapter 14. Monitoring containers
This chapter focuses on useful Podman commands that allow you to manage a Podman environment, including determining the health of the container, displaying system and pod information, and monitoring Podman events.
14.1. Performing a healthcheck on a container
The healthcheck allows you to determine the health or readiness of the process running inside the container. A healthcheck consists of five basic components:
- Command
- Retries
- Interval
- Start-period
- Timeout
The description of healthcheck components follows.
- Command
- Podman executes the command inside the target container and waits for the exit code.
The other four components are related to the scheduling of the healthcheck and they are optional.
- Retries
- Defines the number of consecutive failed healthchecks that need to occur before the container is marked as "unhealthy". A successful healthcheck resets the retry counter.
- Interval
- Describes the time between running the healthcheck command. Note that small intervals cause your system to spend a lot of time running healthchecks. The large intervals cause struggles with catching time outs.
- Start-period
- Describes the time between when the container starts and when you want to ignore healthcheck failures.
- Timeout
- Describes the period of time the healthcheck must complete before being considered unsuccessful.
Healthchecks run inside the container. Healthcheck only make sense if you know what is a health state of the service and can differentiate between a successful and unsuccessful health check.
Procedure
Define a healthcheck:
$ podman run -dt --name hc1 -p 8080:8080 --health-cmd='curl || exit 1' --health-interval=0 registry.access.redhat.com/ubi8/httpd-24
- The
--health-cmdoption sets a healthcheck command for the container.
- The
-health-interval=0option with 0 value indicates that you want to run healthcheck manually.
Run the healthcheck manually:
$ podman healthcheck run hc1 Healthy
Optionally, you can check the exit status of last command:
$ echo $? 0
The "0" value means success.
Additional resources
man podman-run
- Monitoring container vitality and availability with Podman article
14.2. Displaying Podman system information
The
podman system command allows you to manage the Podman systems. This section provides information on how to display Podman system information.
Procedure
Display Podman system information:
To show Podman disk usage, enter:
$ podman system df TYPE TOTAL ACTIVE SIZE RECLAIMABLE Images 3 2 1.085GB 233.4MB (0%) Containers 2 0 28.17kB 28.17kB (100%) Local Volumes 3 0 0B 0B (0%)
To show detailed information on space usage, enter:
$ podman system df -v Images space usage: REPOSITORY TAG IMAGE ID CREATED SIZE SHARED SIZE UNIQUE SIZE CONTAINERS registry.access.redhat.com/ubi8 latest b1e63aaae5cf 13 days 233.4MB 233.4MB 0B 0 registry.access.redhat.com/ubi8/httpd-24 latest 0d04740850e8 13 days 461.5MB 0B 461.5MB 1 registry.redhat.io/rhel8/podman latest dce10f591a2d 13 days 390.6MB 233.4MB 157.2MB 1 Containers space usage: CONTAINER ID IMAGE COMMAND LOCAL VOLUMES SIZE CREATED STATUS NAMES 311180ab99fb 0d04740850e8 /usr/bin/run-httpd 0 28.17kB 16 hours exited hc1 bedb6c287ed6 dce10f591a2d podman run ubi8 echo hello 0 0B 11 hours configured dazzling_tu Local Volumes space usage: VOLUME NAME LINKS SIZE 76de0efa83a3dae1a388b9e9e67161d28187e093955df185ea228ad0b3e435d0 0 0B 8a1b4658aecc9ff38711a2c7f2da6de192c5b1e753bb7e3b25e9bf3bb7da8b13 0 0B d9cab4f6ccbcf2ac3cd750d2efff9d2b0f29411d430a119210dd242e8be20e26 0 0B
To display information about the host, current storage stats, and build of Podman, enter:
$ podman system info host: arch: amd64 buildahVersion: 1.22.3 cgroupControllers: [] cgroupManager: cgroupfs cgroupVersion: v1 conmon: package: conmon-2.0.29-1.module+el8.5.0+12381+e822eb26.x86_64 path: /usr/bin/conmon version: 'conmon version 2.0.29, commit: 7d0fa63455025991c2fc641da85922fde889c91b' cpus: 2 distribution: distribution: '"rhel"' version: "8.5" eventLogger: file hostname: localhost.localdomain idMappings: gidmap: - container_id: 0 host_id: 1000 size: 1 - container_id: 1 host_id: 100000 size: 65536 uidmap: - container_id: 0 host_id: 1000 size: 1 - container_id: 1 host_id: 100000 size: 65536 kernel: 4.18.0-323.el8.x86_64 linkmode: dynamic memFree: 352288768 memTotal: 2819129344 ociRuntime: name: runc package: runc-1.0.2-1.module+el8.5.0+12381+e822eb26.x86_64 path: /usr/bin/runc version: |- runc version 1.0.2 spec: 1.0.2-dev go: go1.16.7 libseccomp: 2.5.1 os: linux remoteSocket: path: /run/user/1000/podman/podman.sock security: apparmorEnabled: false capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT rootless: true seccompEnabled: true seccompProfilePath: /usr/share/containers/seccomp.json selinuxEnabled: true serviceIsRemote: false slirp4netns: executable: /usr/bin/slirp4netns package: slirp4netns-1.1.8-1.module+el8.5.0+12381+e822eb26.x86_64 version: |- slirp4netns version 1.1.8 commit: d361001f495417b880f20329121e3aa431a8f90f libslirp: 4.4.0 SLIRP_CONFIG_VERSION_MAX: 3 libseccomp: 2.5.1 swapFree: 3113668608 swapTotal: 3124752384 uptime: 11h 24m 12.52s (Approximately 0.46 days) registries: search: - registry.fedoraproject.org - registry.access.redhat.com - registry.centos.org - docker.io store: configFile: /home/user/.config/containers/storage.conf containerStore: number: 2 paused: 0 running: 0 stopped: 2 graphDriverName: overlay graphOptions: overlay.mount_program: Executable: /usr/bin/fuse-overlayfs Package: fuse-overlayfs-1.7.1-1.module+el8.5.0+12381+e822eb26.x86_64 Version: |- fusermount3 version: 3.2.1 fuse-overlayfs: version 1.7.1 FUSE library version 3.2.1 using FUSE kernel interface version 7.26 graphRoot: /home/user/.local/share/containers/storage graphStatus: Backing Filesystem: xfs Native Overlay Diff: "false" Supports d_type: "true" Using metacopy: "false" imageStore: number: 3 runRoot: /run/user/1000/containers volumePath: /home/user/.local/share/containers/storage/volumes version: APIVersion: 3.3.1 Built: 1630360721 BuiltTime: Mon Aug 30 23:58:41 2021 GitCommit: "" GoVersion: go1.16.7 OsArch: linux/amd64 Version: 3.3.1
To remove all unused containers, images and volume data, enter:
$ podman system prune WARNING! This will remove: - all stopped containers - all stopped pods - all dangling images - all build cache Are you sure you want to continue? [y/N] y
- The
podman system prunecommand removes all unused containers (both dangling and unreferenced), pods and optionally, volumes from local storage.
- Use the
--alloption to delete all unused images. Unused images are dangling images and any image that does not have any containers based on it.
- Use the
--volumeoption to prune volumes. By default, volumes are not removed to prevent important data from being deleted if there is currently no container using the volume.
Additional resources
man podman-system-df
man podman-system-info
man podman-system-prune
14.3. Podman event types
You can monitor events that occur in Podman. Several event types exist and each event type reports different statuses.
The container event type reports the following statuses:
- attach
- checkpoint
- cleanup
- commit
- create
- exec
- export
- import
- init
- kill
- mount
- pause
- prune
- remove
- restart
- restore
- start
- stop
- sync
- unmount
- unpause
The pod event type reports the following statuses:
- create
- kill
- pause
- remove
- start
- stop
- unpause
The image event type reports the following statuses:
- prune
- push
- pull
- save
- remove
- tag
- untag
The system type reports the following statuses:
- refresh
- renumber
The volume type reports the following statuses:
- create
- prune
- remove
Additional resources
man podman-events
14.4. Monitoring Podman events
You can monitor and print events that occur in Podman. Each event will include a timestamp, a type, a status, name (if applicable), and image (if applicable).
Procedure
Show Podman events:
To show all Podman events, enter:
$ podman events 2020-05-14 10:33:42.312377447 -0600 CST container create 34503c192940 (image=registry.access.redhat.com/ubi8/ubi:latest, name=keen_colden) 2020-05-14 10:33:46.958768077 -0600 CST container init 34503c192940 (image=registry.access.redhat.com/ubi8/ubi:latest, name=keen_colden) 2020-05-14 10:33:46.973661968 -0600 CST container start 34503c192940 (image=registry.access.redhat.com/ubi8/ubi:latest, name=keen_colden) 2020-05-14 10:33:50.833761479 -0600 CST container stop 34503c192940 (image=registry.access.redhat.com/ubi8/ubi:latest, name=keen_colden) 2020-05-14 10:33:51.047104966 -0600 CST container cleanup 34503c192940 (image=registry.access.redhat.com/ubi8/ubi:latest, name=keen_colden)
To exit logging, press CTRL+c.
To show only Podman create events, enter:
$ podman events --filter event=create 2020-05-14 10:36:01.375685062 -0600 CST container create 20dc581f6fbf (image=registry.access.redhat.com/ubi8/ubi:latest) 2019-03-02 10:36:08.561188337 -0600 CST container create 58e7e002344c (image=registry.access.redhat.com/ubi8/ubi-minimal:latest) 2019-03-02 10:36:29.978806894 -0600 CST container create d81e30f1310f (image=registry.access.redhat.com/ubi8/ubi-init:latest)
Additional resources
man podman-events
Chapter 15. Using the container-tools API
The new REST based Podman 2.0 API replaces the old remote API for Podman that used the varlink library. The new API works in both a rootful and a rootless environment.
The Podman v2.0 RESTful API consists of the Libpod API providing support for Podman, and Docker-compatible API. With this new REST API, you can call Podman from platforms such as cURL, Postman, Google’s Advanced REST client, and many others.
15.1. Enabling the Podman API using systemd in root mode
This procedure shows how to do the following:
- Use systemd to activate the Podman API socket.
- Use a Podman client to perform basic commands.
Prerequisities
The
podman-remotepackage is installed.
# yum install podman-remote
Procedure
Start the service immediately:
# systemctl enable --now podman.socket
To enable the link to
var/lib/docker.sockusing the
docker-podmanpackage:
# yum install podman-docker
Verification steps
Display system information of Podman:
# podman-remote info
Verify the link:
# ls -al /var/run/docker.sock lrwxrwxrwx. 1 root root 23 Nov 4 10:19 /var/run/docker.sock -> /run/podman/podman.sock
Additional resources
- Podman v2.0 RESTful API - upstream documentation
- A First Look At Podman 2.0 API - article
- Sneak peek: Podman’s new REST API - article
15.2. Enabling the Podman API using systemd in rootless mode
This procedure shows how to use systemd to activate the Podman API socket and podman API service.
Prerequisites
The
podman-remotepackage is installed.
# yum install podman-remote
Procedure
Enable and start the service immediately:
$ systemctl --user enable --now podman.socket
Optional. To enable programs using Docker to interact with the rootless Podman socket:
$ export DOCKER_HOST=unix:///run/user/<uid>/podman//podman.sock
Verification steps
Check the status of the socket:
$ systemctl --user status podman.socket ● podman.socket - Podman API Socket Loaded: loaded (/usr/lib/systemd/user/podman.socket; enabled; vendor preset: enabled) Active: active (listening) since Mon 2021-08-23 10:37:25 CEST; 9min ago Docs: man:podman-system-service(1) Listen: /run/user/1000/podman/podman.sock (Stream) CGroup: /user.slice/user-1000.slice/user@1000.service/podman.socket
The
podman.socketis active and is listening at
/run/user/<uid>/podman.podman.sock, where
<uid>is the user’s ID.
Display system information of Podman:
$ podman-remote info
Additional resources
- Podman v2.0 RESTful API - upstream documentation
- A First Look At Podman 2.0 API - article
- Sneak peek: Podman’s new REST API - article
- Exploring Podman RESTful API using Python and Bash - article
15.3. Running the Podman API manually
This procedure describes how to run the Podman API. This is useful for debugging API calls, especially when using the Docker compatibility layer.
Prerequisities
The
podman-remotepackage is installed.
# yum install podman-remote
Procedure
Run the service for the REST API:
# podman system service -t 0 --log-level=debug
- The value of 0 means no timeout. The default endpoint for a rootful service is
unix:/run/podman/podman.sock.
- The
--log-level <level>option sets the logging level. The standard logging levels are
debug,
info,
warn,
error,
fatal, and
panic.
In another terminal, display system information of Podman. The
podman-remotecommand, unlike the regular
podmancommand, communicates through the Podman socket:
# podman-remote info
To troubleshoot the Podman API and display request and responses, use the
curlcomman. To get the information about the Podman installation on the Linux server in JSON format:
# curl -s --unix-socket /run/podman/podman.sock | jq { "host": { "arch": "amd64", "buildahVersion": "1.15.0", "cgroupVersion": "v1", "conmon": { "package": "conmon-2.0.18-1.module+el8.3.0+7084+c16098dd.x86_64", "path": "/usr/bin/conmon", "version": "conmon version 2.0.18, commit: 7fd3f71a218f8d3a7202e464252aeb1e942d17eb" }, … "version": { "APIVersion": 1, "Version": "2.0.0", "GoVersion": "go1.14.2", "GitCommit": "", "BuiltTime": "Thu Jan 1 01:00:00 1970", "Built": 0, "OsArch": "linux/amd64" } }
A
jqutility is a command-line JSON processor.
Pull the
registry.access.redhat.com/ubi8/ubicontainer image:
# curl -XPOST --unix-socket /run/podman/podman.sock -v '' * Trying /run/podman/podman.sock... * Connected to d (/run/podman/podman.sock) port 80 (#0) > POST /v1.0.0/images/create?fromImage=registry.access.redhat.com%2Fubi8%2Fubi HTTP/1.1 > Host: d > User-Agent: curl/7.61.1 > Accept: / > < HTTP/1.1 200 OK < Content-Type: application/json < Date: Tue, 20 Oct 2020 13:58:37 GMT < Content-Length: 231 < {"status":"pulling image () from registry.access.redhat.com/ubi8/ubi:latest, registry.redhat.io/ubi8/ubi:latest","error":"","progress":"","progressDetail":{},"id":"ecbc6f53bba0d1923ca9e92b3f747da8353a070fccbae93625bd8b47dbee772e"} * Connection #0 to host d left intact
Display the pulled image:
# curl --unix-socket /run/podman/podman.sock -v '' | jq * Trying /run/podman/podman.sock... %.61.1 > Accept: / > < HTTP/1.1 200 OK < Content-Type: application/json < Date: Tue, 20 Oct 2020 13:59:55 GMT < Transfer-Encoding: chunked < { [12498 bytes data] 100 12485 0 12485 0 0 2032k 0 --:--:-- --:--:-- --:--:-- 2438k * Connection #0 to host d left intact [ { "Id": "ecbc6f53bba0d1923ca9e92b3f747da8353a070fccbae93625bd8b47dbee772e", "RepoTags": [ "registry.access.redhat.com/ubi8/ubi:latest", "registry.redhat.io/ubi8/ubi:latest" ], "Created": "2020-09-01T19:44:12.470032Z", "Size": 210838671, "Labels": { "architecture": "x86_64", "build-date": "2020-09-01T19:43:46.041620", "com.redhat.build-host": "cpt-1008.osbs.prod.upshift.rdu2.redhat.com", ... "maintainer": "Red Hat, Inc.", "name": "ubi8", ... "summary": "Provides the latest release of Red Hat Universal Base Image 8.", "url": "", ... }, "Names": [ "registry.access.redhat.com/ubi8/ubi:latest", "registry.redhat.io/ubi8/ubi:latest" ], ... ] } ]
Additional resources
- Podman v2.0 RESTful API - upstream documentation
- Sneak peek: Podman’s new REST API - article
- Exploring Podman RESTful API using Python and Bash - article
podman-system-serviceman page | https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9-beta/html-single/building_running_and_managing_containers/index | CC-MAIN-2021-49 | en | refinedweb |
Welcome to part four of this blog series! So far, we have a Kafka single-node cluster with TLS encryption on top of which we configured different authentication modes (
TLS and
SASL SCRAM-SHA-512), defined users with the User Operator, connected to the cluster using CLI and Go clients and saw how easy it is to manage Kafka topics with the Topic Operator. So far, our cluster used
ephemeral persistence, which in the case of a single-node cluster, means that we will lose data if the Kafka or Zookeeper nodes (
Pods) are restarted due to any reason.
Let's march on! In this part we will cover:
- How to configure Strimzi to add persistence for our cluster:
- Explore the components such as
PersistentVolumeand
PersistentVolumeClaim
- How to modify the storage quality
- Try and expand the storage size for our Kafka cluster
The code is available on GitHub -
What do I need to go through this tutorial?
kubectl -
I will be using Azure Kubernetes Service (AKS) to demonstrate the concepts, but by and large it is independent of the Kubernetes provider. If you want to use
AKS, all you need is a Microsoft Azure account which you can get for FREE if you don't have one already.
I will not be repeating some of the common sections (such as Installation/Setup (Helm, Strimzi, Azure Kubernetes Service), Strimzi overview) in this or subsequent part of this series and would request you to refer to part one
Add persistence
We will start off by creating a persistent cluster. Here is a snippet of the specification (you can access the complete YAML on GitHub)
apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-kafka-cluster spec: kafka: version: 2.4.0 replicas: 1 storage: type: persistent-claim size: 2Gi deleteClaim: true .... zookeeper: replicas: 1 storage: type: persistent-claim size: 1Gi deleteClaim: true
The key things to notice:
storage.typeis
persistent-claim(as opposed to
ephemeral) in previous examples
storage.sizefor Kafka and Zookeeper nodes is
2Giand
1Girespectively
deleteClaim: truemeans that the corresponding
PersistentVolumeClaims will be deleted when the cluster is deleted/un-deployed
You can take a look at the reference for
storage
To create the cluster:
kubectl apply -f
Let's see the what happens in response to the cluster creation
Strimzi Kubernetes magic...
Strimzi does all the heavy lifting of creating required Kubernetes resources in order to operate the cluster. We covered most of these in part 1 -
StatefulSet (and
Pods),
LoadBalancer Service,
ConfigMap,
Secret etc. In this blog, we will just focus on the persistence related components -
PersistentVolume and
PersistentVolumeClaim
If you're using Azure Kubernetes Service (AKS), this will create an Azure Managed Disk - more on this soon
To check the
PersistentVolumeClaims
kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-my-kafka-cluster-kafka-0 Bound pvc-b4ece32b-a46c-4fbc-9b58-9413eee9c779 2Gi RWO default 94s data-my-kafka-cluster-zookeeper-0 Bound pvc-d705fea9-c443-461c-8d18-acf8e219eab0 1Gi RWO default 3m20s
... and the
PersistentVolumes they are
Bound to
kubectl get pv NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-b4ece32b-a46c-4fbc-9b58-9413eee9c779 2Gi RWO Delete Bound default/data-my-kafka-cluster-kafka-0 default 107s pvc-d705fea9-c443-461c-8d18-acf8e219eab0 1Gi RWO Delete Bound default/data-my-kafka-cluster-zookeeper-0 default 3m35s
Notice that the disk size is as specified in the manifest ie.
2and
1Gib for Kafka and Zookeeper respectively
Where is the data?
If we want to see the data itself, let's first check the
ConfigMap which stores the Kafka server config:
export CLUSTER_NAME=my-kafka-cluster kubectl get configmap/${CLUSTER_NAME}-kafka-config -o yaml
In
server.config section, you will find an entry as such:
########## # Kafka message logs configuration ########## log.dirs=/var/lib/kafka/data/kafka-log${STRIMZI_BROKER_ID}
This tells us that the Kafka data is stored in
/var/lib/kafka/data/kafka-log${STRIMZI_BROKER_ID}. In this case
STRIMZI_BROKER_ID is
0 since we all we have is a single node
With this info, let's look the the Kafka
Pod:
export CLUSTER_NAME=my-kafka-cluster kubectl get pod/${CLUSTER_NAME}-kafka-0 -o yaml
If you look into the
kafka
container section, you will notice the following:
One of the
volumes configuration:
volumes: - name: data persistentVolumeClaim: claimName: data-my-kafka-cluster-kafka-0
The
volume named
data is associated with the
data-my-kafka-cluster-kafka-0 PVC, and the corresponding
volumeMounts uses this volume to ensure that Kafka data is stored in
/var/lib/kafka/data
volumeMounts: - mountPath: /var/lib/kafka/data name: data
To see the contents,
export STRIMZI_BROKER_ID=0 kubectl exec -it my-kafka-cluster-kafka-0 -- ls -lrt /var/lib/kafka/data/kafka-log${STRIMZI_BROKER_ID}
You can repeat the same for Zookeeper node as well
...what about the Cloud?
As mentioned before, in case of AKS, the data will end up being stored in an Azure Managed Disk. The type of disk is as per the
default storage class in your AKS cluster. In my case, it is:
kubectl get sc azurefile kubernetes.io/azure-file 58d azurefile-premium kubernetes.io/azure-file 58d default (default) kubernetes.io/azure-disk 2d18h managed-premium kubernetes.io/azure-disk 2d18h //to get details of the storage class kubectl get sc/default -o yaml
More on the semantics for
defaultstorage class in
AKSin the documentation
To query the disk in Azure, extract the
PersistentVolume info using
kubectl get pv/<name of kafka pv> -o yaml and get the ID of the Azure Disk i.e.
spec.azureDisk.diskURI
You can use the Azure CLI command
az disk show command
az disk show --ids <diskURI value>
You will see that the storage type as defined in
sku section is
StandardSSD_LRS which corresponds to a Standard SSD
This table provides a comparison of different Azure Disk types
"sku": { "name": "StandardSSD_LRS", "tier": "Standard" }
... and the
tags attribute highlight the
PV and
PVC association
"tags": { "created-by": "kubernetes-azure-dd", "kubernetes.io-created-for-pv-name": "pvc-b4ece32b-a46c-4fbc-9b58-9413eee9c779", "kubernetes.io-created-for-pvc-name": "data-my-kafka-cluster-kafka-0", "kubernetes.io-created-for-pvc-namespace": "default" }
You can repeat the same for Zookeeper disks as well
Quick test ...
Follow these steps to confirm that the cluster is working as expected..
Create a producer
Pod:
export KAFKA_CLUSTER_NAME=my-kafka-cluster kubectl run kafka-producer -ti --image=strimzi/kafka:latest-kafka-2.4.0 --rm=true --restart=Never -- bin/kafka-console-producer.sh --broker-list $KAFKA_CLUSTER_NAME-kafka-bootstrap:9092 --topic my-topic
In another terminal, create a consumer
Pod:
export KAFKA_CLUSTER_NAME=my-kafka-cluster kubectl run kafka-consumer -ti --image=strimzi/kafka:latest-kafka-2.4.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server $KAFKA_CLUSTER_NAME-kafka-bootstrap:9092 --topic my-topic --from-beginning
What if(s) ...
Let's explore how to tackle a couple of requirements which you'll come across:
- Using a different storage type - In case of Azure for example, you might want to use Azure Premium SSD for production workloads
- Re-sizing the storage - at some point you'll want to add storage to your Kafka cluster
Change the storage type
Recall that the default behavior is for Strimzi to create a
PersistentVolumeClaim that references the
default Storage Class. To customize this, you can simply include the
class attribute in the
storage specification in
spec.kafka (and/or
spec.zookeeper).
In Azure, the
managed-premium storage class corresponds to a Premium SSD:
kubectl get sc/managed-premium -o yaml
Here is a snippet from the storage config, where
class: managed-premium has been added.
storage: type: persistent-claim size: 2Gi deleteClaim: true class: managed-premium
Please note that you cannot update the storage type for an existing cluster. To try this out:
- Delete the existing cluster -
kubectl delete kafka/my-kafka-cluster(wait for a while)
- Create a new cluster -
kubectl apply -f
//Delete the existing cluster kubectl delete kafka/my-kafka-cluster //Create a new cluster kubectl apply -f
To confirm, check the
PersistentVolumeClain for Kafka node - notice the
STORAGECLASS colum
kubectl get pvc/data-my-kafka-cluster-kafka-0 NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-my-kafka-cluster-kafka-0 Bound pvc-3f46d6ed-9da5-4c49-87ef-86684ab21cf8 2Gi RWO managed-premium 21s
We only configured the Kafka broker to use the Premium storage, so the Zookeeper
Podwill use the
StandardSSDstorage type.
Re-size storage (TL;DR - does not work yet)
Azure Disks allow you to add more storage to it. In the case of Kubernetes, it is the storage class which defines whether this is supported or not - for AKS, if you check the default (or the
managed-premium) storage class, you will notice the property
allowVolumeExpansion: true, which confirms that you can do so in the context of Kubernetes PVC as well.
Strimzi makes it really easy to increase the storage for our Kafka cluster - all you need to do is update the
storage.size field to the desired value
Check the PVC now:
kubectl describe pvc data-my-kafka-cluster-kafka-0
Conditions: Type Status LastProbeTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- Resizing True Mon, 01 Jan 0001 00:00:00 +0000 Mon, 22 Jun 2020 23:15:26 +0530 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning VolumeResizeFailed 3s (x11 over 13s) volume_expand error expanding volume "default/data-my-kafka-cluster-kafka-0" of plugin "kubernetes.io/azure-disk": compute.DisksClient#CreateOrUpdate: Failure sending request: StatusCode=0 -- Original Error: autorest/azure: Service returned an error. Status=<nil> Code="OperationNotAllowed" Message="Cannot resize disk kubernetes-dynamic-pvc-3f46d6ed-9da5-4c49-87ef-86684ab21cf8 while it is attached to running VM /subscriptions/9a42a42f-ae42-4242-b6a7-dda0ea91d342/resourceGroups/mc_my-k8s-vk_southeastasia/providers/Microsoft.Compute/virtualMachines/aks-agentpool-42424242-1. Resizing a disk of an Azure Virtual Machine requires the virtual machine to be deallocated. Please stop your VM and retry the operation."
Notice the
"Cannot resize disk... error message. This is happening because the Azure Disk is currently attached with AKS cluster node and that is because of the
Pod is associated with the
PersistentVolumeClaim - this is a documented limitation
I am not the first one to run into this problem of course. Please refer to issues such as this one for details.
There are workarounds but they have not been discussed in this blog. I included the section since I wanted you to be aware of this caveat
Final countdown ...
We want to leave on a high note, don't we? Alright, so to wrap it up, let's scale our cluster out from one to three nodes. It'd dead simple!
All you need to do is to increase the replicas to the desired number - in this case, I configured it to 3 (for Kafka and Zookeeper)
... spec: kafka: version: 2.4.0 replicas: 3 zookeeper: replicas: 3 ...
In addition to this, I also added an external load balancer listener (this will create an Azure Load Balancer, as discussed in part 2)
... listeners: plain: {} external: type: loadbalancer ...
To create the new, simply use the new manifest
kubectl apply -f
Please note that the overall cluster readiness will take time since there will be additional components (Azure Disks, Load Balancer public IPs etc.) that'll be created prior to the Pods being activated
In your k8s cluster, you will see...
Three Pods each for Kafka and Zookeeper
kubectl get pod -l=app.kubernetes.io/instance=my-kafka-cluster NAME READY STATUS RESTARTS AGE my-kafka-cluster-kafka-0 2/2 Running 0 54s my-kafka-cluster-kafka-1 2/2 Running 0 54s my-kafka-cluster-kafka-2 2/2 Running 0 54s my-kafka-cluster-zookeeper-0 1/1 Running 0 4m44s my-kafka-cluster-zookeeper-1 1/1 Running 0 4m44s my-kafka-cluster-zookeeper-2 1/1 Running 0 4m44s
Three pairs (each for Kafka and Zookeeper) of PersistentVolumeClaims ...
kubectl get pvc NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE data-my-kafka-cluster-kafka-0 Bound pvc-0f52dee1-970a-4c55-92bd-a97dcc41aee6 3Gi RWO managed-premium 10m data-my-kafka-cluster-kafka-1 Bound pvc-f8b613cb-3da0-4932-acea-7e5e96df1433 3Gi RWO managed-premium 4m24s data-my-kafka-cluster-kafka-2 Bound pvc-fedf431c-d87a-4bf7-80d0-d43b1337c079 3Gi RWO managed-premium 4m24s data-my-kafka-cluster-zookeeper-0 Bound pvc-1fda3714-3c37-428f-9e4b-bdb5da71cda6 1Gi RWO default 12m data-my-kafka-cluster-zookeeper-1 Bound pvc-702556e0-890a-4c07-ae5c-e2354d74d006 1Gi RWO default 6m42s data-my-kafka-cluster-zookeeper-2 Bound pvc-176ffd68-7e3a-4e04-abb1-52c54dcb84f0 1Gi RWO default 6m42s
... and the respective PersistentVolumes they are bound to
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE pvc-0f52dee1-970a-4c55-92bd-a97dcc41aee6 3Gi RWO Delete Bound default/data-my-kafka-cluster-kafka-0 managed-premium 12m pvc-176ffd68-7e3a-4e04-abb1-52c54dcb84f0 1Gi RWO Delete Bound default/data-my-kafka-cluster-zookeeper-2 default 8m45s pvc-1fda3714-3c37-428f-9e4b-bdb5da71cda6 1Gi RWO Delete Bound default/data-my-kafka-cluster-zookeeper-0 default 14m pvc-702556e0-890a-4c07-ae5c-e2354d74d006 1Gi RWO Delete Bound default/data-my-kafka-cluster-zookeeper-1 default 8m45s pvc-f8b613cb-3da0-4932-acea-7e5e96df1433 3Gi RWO Delete Bound default/data-my-kafka-cluster-kafka-1 managed-premium 6m27s pvc-fedf431c-d87a-4bf7-80d0-d43b1337c079 3Gi RWO Delete Bound default/data-my-kafka-cluster-kafka-2 managed-premium 6m22s
... and Load Balancer IPs. Notice that these are created for each Kafka broker as well as a bootstrap IP which is recommended when connecting from client applications.
kubectl get svc -l=app.kubernetes.io/instance=my-kafka-cluster NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE my-kafka-cluster-kafka-0 LoadBalancer 10.0.11.154 40.119.248.164 9094:30977/TCP 10m my-kafka-cluster-kafka-1 LoadBalancer 10.0.146.181 20.43.191.219 9094:30308/TCP 10m my-kafka-cluster-kafka-2 LoadBalancer 10.0.223.202 40.119.249.20 9094:30313/TCP 10m my-kafka-cluster-kafka-bootstrap ClusterIP 10.0.208.187 <none> 9091/TCP,9092/TCP 16m my-kafka-cluster-kafka-brokers ClusterIP None <none> 9091/TCP,9092/TCP 16m my-kafka-cluster-kafka-external-bootstrap LoadBalancer 10.0.77.213 20.43.191.238 9094:31051/TCP 10m my-kafka-cluster-zookeeper-client ClusterIP 10.0.3.155 <none> 2181/TCP 18m my-kafka-cluster-zookeeper-nodes ClusterIP None <none> 2181/TCP,2888/TCP,3888/TCP 18m
To access the cluster, you can use the steps outlined in part 2
It's a wrap!
That's it for this blog series on which covered some of the aspects of running Kafka on Kubernetes using the open source Strimzi operator.
If this topic is of interest to you, I encourage you to check out other solutions such as Confluent operator and Banzai Cloud Kafka operator
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/azure/kafka-on-kubernetes-the-strimzi-way-part-4-453h | CC-MAIN-2021-49 | en | refinedweb |
What would Leo’s data model look like if I were to build it now? Leo has a long history and its current data model is the result of the evolution process. Many different ideas were combined and applied, and some of them were later abandoned. All these ideas left their trails on the model and even though we can say that it is a solid piece of code with a good performance, it isn’t very developer friendly. Using it makes your code more complex than is necessary. At least that is what I believe to be true. That is why I decided to try to make my own version of data model suitable for representing Leo’s trees.
What I like about Leo’s current tree model is how easy it is to make changes in tree. However, it makes catching the information about changes almost impossible, or at least it requires lots of processing. That makes updating view very hard. That is why I intend to make my model more centralized. Data model will provide all necessary operations for tree modification, and those operations should be the only way to modify tree.
The other thing I would try to achieve is some stability of positions in tree. If a tree is modified some positions may become invalid, but those positions that point to the nodes that are still part of the tree should be valid even after the tree has been changed. For example if we have a position that points to one specific node in the tree, and if this node, or any of its ancestors is deleted, then model should tell us that this position doesn’t exist any more. But if the node or any of its ancestors, is moved to some other place in tree, this position should still point to the same node and remain valid.
Theory of operation
Suppose that model keeps a list of all nodes in outline order. Positions would be represented by indexes in this list. But, if order of nodes is changed, those indexes would point to the wrong node. But if we add one more layer of indirection we can have immutable and persistent positions. Data model need to keep list of position names. Now, the outside code would know only about position names. Those names will be internaly in data model translated in integer indexes that would point to the specific node. If position name is removed from this translation list, then it becomes invalid which would mean position does not exist any more. If node is moved, position name should be moved accordingly and it will remain valid even after node has been moved.
... little bit later
Update
I wished to write about this mini-project simultaneously as I develop it. But once, I have started to code, I just couldn’t stop until almost all intended features were completed. Now, I can only describe what I’ve got in the end.
The code in its current version can be found here: leo-tree-model.leo.
First some helpers
To have some tree to work on it was natural to use existing Leo tree that I could see and examine easily. So, my coding started with a few helper functions for converting Leo trees to new model.
Here is the start of
LeoTreeModel class definition:
class LeoTreeModel(object): def __init__(self): self.positions = [] self.nodes = [] self.attrs = {} self.levels = [] self.gnx2pos = defaultdict(list) self.parPos = []
positionsis a list of position names. Each element in this list corresponds to one unique position in outline. As an implementation detail values are simple floats.
nodesis a list of node identificators. Implementation detail elements are just p.gnx values.
attrsis a dict containing values attached to each unique node, (its headline, body, list of parent and children identificators and size of subtree).
levelsis a list containing level for each node in outline. This is a value attached to a position rather than a node. For example one node may be several places in outline each on different level.
gnx2pos- is a dictionary of lists. Keys are gnx node identificators and values are lists of position identificators, places that one particular node can be found in outline. Nodes that are not clones have a list with the single position, while clones have more than one element in this lists.
parPosis a list of parent position identificators. For each node in outline, corresponding element in this list is a position identifier of node’s parent.
def vnode2treemodel(vnode): '''Utility convertor: converts VNode instance into LeoTreeModel instance''' def viter(v, lev0): s = [1] mnode = (v.gnx, v.h, v.b, lev0, s, [x.gnx for x in v.parents], [x.gnx for x in v.children]) yield mnode for ch in v.children: for x in viter(ch, lev0 + 1): s[0] += 1 yield x return nodes2treemodel(tuple(viter(vnode, 0))) def nodes2treemodel(nodes): '''Creates LeoTreeModel from the sequence of tuples (gnx, h, b, level, size, parentGnxes, childrenGnxes)''' ltm = LeoTreeModel() ltm.positions = [random.random() for x in nodes] ltm.levels = [x[3] for x in nodes] ltm.parPos = list(parPosIter(ltm.positions, ltm.levels)) ltm.nodes = [x[0] for x in nodes] gnx2pos = defaultdict(list) for pos, x in zip(ltm.positions, nodes): gnx2pos[x[0]].append(pos) ltm.gnx2pos = gnx2pos for gnx, h, b, lev, sz, ps, chn in nodes: ltm.attrs[gnx] = [h, b, ps, chn, sz[0]] # root node must not have parents rgnx = nodes[0][0] ltm.attrs[rgnx][2] = [] return ltm
Here is a utility iterator that yields elements of parPos list:
def parPosIter(ps, levs): '''Helper iterator for parent positions. Given sequence of positions and corresponding levels it generates sequence of parent positions''' rootpos = ps[0] levPars = [rootpos for i in range(256)] # max depth 255 levels it = zip(ps, levs) next(it) # skip first root node which has no parent yield rootpos for p, l in it: levPars[l] = p yield levPars[l - 1]
Accessing nodes data
Now that all necessary data is acquired and organized let’s see how we can access information.
To get a list of children identificators and parents identificators for any given node:
def parents(self, gnx): '''Returns list of gnxes of parents of node with given gnx''' a = self.attrs.get(gnx) return a[2] if a else [] def children(self, gnx): '''Returns list of gnxes of children of the node with given gnx''' a = self.attrs.get(gnx) return a[3] if a else []
The elements of these lists are not v-nodes like in Leo, but rather just identificators of nodes (gnx).
To access
h,
b of any given node, similar access functions can be written.
Traversing outline
To visit every position in outline one can simply use
nodes iterator.
If we have a position and want to visit parent and all ancestors:
def parents_iterator(self, p): i = self.positions.index(p) while i: p = ltm.parPos[i] yield p i = self.positions.index(p)
Subtree iterator:
def subtree_iterator(self, p): i = self.positions.index(p) gnx = self.nodes[i] sz = self.attrs[gnx][4] for j in range(i+1, i+sz): yield self.positions[j] # or yield self.nodes[j] # or yield j # depending on what we need
Following siblings iterator:
def following_siblings_iterator(self, p): i = self.positions.index(p) lev0 = self.levels[i] N = len(self.nodes) while self.levels[i] == lev0: gnx = self.nodes[i] sz = self.attrs[gnx][4] i += sz if sz >= N: break yield self.nodes[i] # or yield self.positions[i] # or yield i # depending on what we need
As you can see traversals involve only plain element access and integer arithmetic. This allows much faster traversals than Leo can achieve using its position class. Position class for traversals relies on list manipulation methods: building new lists, or appending elements to it. Accessing ivars of vnode directly may seem to be faster but in essence it is implemented in Python in the same way as accessing values from dict.
When order of traversal is not important, but we want to visit every node just once we can traverse
attrs keys. As an implementation detail in Python3.6 and newer dict keeps keys in order in which they were added to dict. This gives traversal of unique nodes a nice outline order.
I haven’t implemented any of these traversal methods in LeoTreeModel class, because they are easy to implement, and custom iterators may be better suited for the job we need to do. Sometimes it is more suitable to iterate gnx values, sometimes it is positions that we are interested in or just indexes in lists. Index is fastest to use but it is valid only as long as tree remains unchanged. Positions are immune to tree changes to some degree. And using gnxes allows fast access to children, parents, h, b…
In the following part I will be discussing loading outlines both from xml and from external files. | https://computingart.net/leo-tree-model-1.html | CC-MAIN-2021-49 | en | refinedweb |
fabricfabric
Abstract Syntax Tree (AST) based on JSON concepts, but more abstract for parsing and application.
JustificationJustification
Having worked with Circe and uPickle for years there are many things I love about each, but unfortunately a few things I was frustrated by. At a high level, I think Circe can be a bit overly complicated and compilation quite slow in large projects. With uPickle, I found the mutable underlying references within the structure very concerning and problematic when doing things like merges. Both of them suffer from slow releases periodically, so I ultimately decided to try my hand at accomplishing the same and incorporate some of my own crazy ideas in the process.
I won't say that fabric is a better library than either of those great projects, but it was inspired by both of them and customized to suit my particular needs. If you find it useful as well, please use it and offer some feedback.
PerformancePerformance
I wrote a performance benchmark with every expectation to be slower than the alternatives as I've done very little tuning, and I'm just one person versus the many developers that have worked on the others for years. However, I was shocked to see how well my little library performed compared to the alternatives:
FeaturesFeatures
The focus of this project is minimalism and flexibility. To that end, the features are somewhat sparse:
- Support for JVM, Scala.js, and Scala Native
- Support for Scala 2.11, 2.12, 2.13, and 3.0
- AST for representation of
Map,
Array,
Numeric,
String,
Boolean, and
nullin a type-safe and immutable way
- Clean DSL to create tree structures
- Deep merging support
- Compile-time generation of conversions to/from case classes with support for default arguments
- Easy and convenient extensibility support
- Parsing support for JSON on JVM and Scala.js
Getting StartedGetting Started
SetupSetup
For SBT simply include:
libraryDependencies += "com.outr" %%% "fabric-core" % "x.y.z"
For parsing support include:
libraryDependencies += "com.outr" %%% "fabric-parse" % "x.y.z"
CreateCreate
Creating fabric structures with the DSL is very easy:
import fabric._ val v1 = obj( "name" -> "John Doe", "age" -> 21, "numbers" -> List(1, 2, 3), "address" -> obj( "street" -> "123 Somewhere Rd.", "city" -> "San Jose" ) )
MergingMerging
Deep-merging is trivial:
import fabric._ val v2 = obj( "age" -> 23, "numbers" -> List(4, 5, 6), "address" -> obj( "state" -> "California" ) ) val v3 = v1.merge(v2)
It is worth mentioning that because values are immutable,
v1 and
v2 remain unchanged.
ConvertConvert
Conversion to other types is very easy with the built-in compile-time conversions:
import fabric._ import fabric.rw._ val person = obj( "name" -> "John Doe", "age" -> 21 ).as[Person] case class Person(name: String, age: Int) object Person { implicit val rw: ReadableWritable[Person] = ccRW[Person] }
ParseParse
Parsing from existing JSON requires the use of the
fabric-parse module:
import fabric._ import fabric.json._ val value = Json.parse("""{"name": "John Doe", "age": 21}""")
FormattingFormatting
Taking an existing value and formatting it for output as JSON:
val formattedString = Json.format(value) | https://index.scala-lang.org/outr/fabric/fabric-core/1.0.4?target=_sjs1.x_3.0.0-RC2 | CC-MAIN-2021-49 | en | refinedweb |
Subject: Re: [boost] [conversion] Motivation for two NEW generic conver_to and assign_to functions
From: vicente.botet (vicente.botet_at_[hidden])
Date: 2009-10-23 19:15:15
----- Original Message -----
From: "Jeffrey Hellrung" <jhellrung_at_[hidden]>
To: <boost_at_[hidden]>
Sent: Friday, October 23, 2009 10:07 PM
Subject: Re: [boost] [conversion] Motivation for two NEW generic conver_to and assign_to functions
>
> vicente.botet wrote:
>> Hi,
>>
>> I would like to share with you what motivated me to add two new free template functions convert_to<> and assign_to<> on the Boost.Conversion.
>>
> <snip>
>
> This is a problem I've run across, too, of which I just have a basic
> framework in place that relies on finding a "convert" overload via ADL.
> I might have to read your docs and consider migrating to your specific
> implementation ;)
>
>.
BTW, can we add functions on the 'std' namespace?
Thanks for your comments and questions,
Vicente
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk | https://lists.boost.org/Archives/boost/2009/10/157416.php | CC-MAIN-2021-49 | en | refinedweb |
NAME¶
librpmem - remote persistent memory support library (EXPERIMENTAL)
SYNOPSIS¶
#include <librpmem.h> cc ... -lrpmem
Library API versioning:¶
const char *rpmem_check_version(
unsigned major_required,
unsigned minor_required);
Error handling:¶
const char *rpmem_errormsg(void);
Other library functions:¶
A description of other librpmem functions can be found on the following manual pages:
DESCRIPTION¶, for details.
The maximum replicated memory region size can not be bigger than the maximum locked-in-memory address space limit. See memlock in limits.conf(5) for more details.
This library is for applications that use remote persistent memory directly, without the help of any library-supplied transactions or memory allocation. Higher-level libraries that build on libpmem(7) are available and are recommended for most applications, see:
- •
- libpmemobj(7), a general use persistent memory API, providing memory allocation and transactional operations on variable-sized objects.
TARGET NODE ADDRESS FORMAT¶
[<user>@]<hostname>[:<port>]
The target node address is described by the hostname which the client connects to, with an optional user name. The user must be authorized to authenticate to the remote machine without querying for password/passphrase. The optional port number is used to establish the SSH connection. The default port number is 22.
REMOTE POOL ATTRIBUTES¶
The rpmem_pool_attr structure describes a remote pool and is stored in remote pool’s metadata. This structure must be passed to the rpmem_create(3) function by caller when creating a pool on remote node. When opening the pool using rpmem_open(3) function the appropriate fields are read from pool’s metadata and returned back to the caller.
#define RPMEM_POOL_HDR_SIG_LEN 8 #define RPMEM_POOL_HDR_UUID_LEN 16 #define RPMEM_POOL_USER_FLAGS_LEN 16 struct rpmem_pool_attr {
char signature[RPMEM_POOL_HDR_SIG_LEN];
uint32_t major;
uint32_t compat_features;
uint32_t incompat_features;
uint32_t ro_compat_features;
unsigned char poolset_uuid[RPMEM_POOL_HDR_UUID_LEN];
unsigned char uuid[RPMEM_POOL_HDR_UUID_LEN];
unsigned char next_uuid[RPMEM_POOL_HDR_UUID_LEN];
unsigned char prev_uuid[RPMEM_POOL_HDR_UUID_LEN];
unsigned char user_flags[RPMEM_POOL_USER_FLAGS_LEN]; };
The signature field is an 8-byte field which describes the pool’s on-media format.
The major field is a major version number of the pool’s on-media format.
The compat_features field is a mask describing compatibility of pool’s on-media format optional features.
The incompat_features field is a mask describing compatibility of pool’s on-media format required features.
The ro_compat_features field is a mask describing compatibility of pool’s on-media format features. If these features are not available, the pool shall be opened in read-only mode.
The poolset_uuid field is an UUID of the pool which the remote pool is associated with.
The uuid field is an UUID of a first part of the remote pool. This field can be used to connect the remote pool with other pools in a list.
The next_uuid and prev_uuid fields are UUIDs of next and previous replicas respectively. These fields can be used to connect the remote pool with other pools in a list.
The user_flags field is a 16-byte user-defined flags.
SSH¶¶¶
libr.
librpmem registers a pool as a single memory region. A Chelsio T4 and T5 hardware can not handle a memory region greater than or equal to 8GB due to a hardware bug. So pool_size value for rpmem_create(3) and rpmem_open(3) using this hardware can not be greater than or equal to 8GB.
LIBRARY API VERSIONING¶
This section describes how the library API is versioned, allowing applications to work with an evolving API.
The rpmem_check_version() function is used to see if the installed librpmem supports the version of the library API required by an application. The easiest way to do this is for the application to supply the compile-time version information, supplied by defines in <librpmem.h>, like this:
reason = rpmem_check_version(RPMEM_MAJOR_VERSION,
RPM rpmem_check_version() is successful, the return value is NULL. Otherwise the return value is a static string describing the reason for failing the version check. The string returned by rpmem_check_version() must not be modified or freed.
ENVIRONMENT¶
librpmem can change its default behavior based on the following environment variables. These are largely intended for testing and are not normally required.
- •
- RPMEM_SSH=ssh_client
Setting this environment variable overrides the default ssh(1) client command name.
- •
-).
- •
-.
- •
- RPMEM_ENABLE_VERBS=0|1
Setting this variable to 0 disables using fi_verbs(7) provider for in-band RDMA connection. The verbs provider is enabled by default.
- •
- RPMEM_MAX_NLANES=num
Limit the maximum number of lanes to num. See LANES, in rpmem_create(3), for details.
- •
- RPMEM_WORK_QUEUE_SIZE=size
Suggest the work queue size. The effective work queue size can be greater than suggested if librpmem requires it or it can be smaller if underlying hardware does not support the suggested size. The work queue size affects the performance of communication to the remote node. rpmem_flush(3) operations can be added to the work queue up to the size of this queue. When work queue is full any subsequent call has to wait till the work queue will be drained. rpmem_drain(3) and rpmem_persist(3) among other things also drain the work queue.
DEBUGGING AND ERROR HANDLING¶
If an error is detected during the call to a librpmem function, the application may retrieve an error message describing the reason for the failure from rpm librpmem function indicated an error, or if errno was set. The application must not modify or free the error message string, but it may be modified by subsequent calls to other library functions.
Two versions of librpmem are typically available on a development system. The normal version, accessed when a program is linked using the -lrpmem option, is optimized for performance. That version skips checks that impact performance and never logs any trace information or performs any run-time assertions.
A second version of libr
- RPMEM_LOG_LEVEL
The value of RPMEM_LOG_LEVEL enables trace points in the debug version of the library, as follows:
- •
- 0 - This is the default level when RPMEM_LOG_LEVEL is not set. No log messages are emitted at this level.
- •
- 1 - Additional details on any errors detected are logged (in addition to returning the errno-based errors as usual). The same information may be retrieved using rpmem_errormsg().
- •
- 2 - A trace of basic operations is logged.
- •
- 3 - Enables a very verbose amount of function call tracing in the library.
- •
- 4 - Enables voluminous and fairly obscure tracing information that is likely only useful to the librpmem developers.
Unless RPMEM_LOG_FILE is set, debugging output is written to stderr.
- •
-¶
The following example uses librpmem to create a remote pool on given target node identified by given pool set name. The associated local memory pool is zeroed and the data is made persistent on remote node. Upon success the remote pool is closed.
#include <assert.h> #include <unistd.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <librpmem.h> #define POOL_SIGNATURE "MANPAGE" #define POOL_SIZE (32 * 1024 * 1024) #define NLANES 4 #define DATA_OFF 4096 #define DATA_SIZE (POOL_SIZE - DATA_OFF) static void parse_args(int argc, char *argv[], const char **target, const char **poolset) {
if (argc < 3) {
fprintf(stderr, "usage:\t%s <target> <poolset>\n", argv[0]);
exit(1);
}
*target = argv[1];
*poolset = argv[2]; } static void * alloc_memory() {
long pagesize = sysconf(_SC_PAGESIZE);
if (pagesize < 0) {
perror("sysconf");
exit(1);
}
/* allocate a page size aligned local memory pool */
void *mem;
int ret = posix_memalign(&mem, pagesize, POOL_SIZE);
if (ret) {
fprintf(stderr, "posix_memalign: %s\n", strerror(ret));
exit(1);
}
assert(mem != NULL);
return mem; } int main(int argc, char *argv[]) {
const char *target, *poolset;
parse_args(argc, argv, &target, &poolset);
unsigned nlanes = NLANES;
void *pool = alloc_memory();
int ret;
/* fill pool_attributes */
struct rpmem_pool_attr pool_attr;
memset(&pool_attr, 0, sizeof(pool_attr));
strncpy(pool_attr.signature, POOL_SIGNATURE, RPMEM_POOL_HDR_SIG_LEN);
/* create a remote pool */
RPMEMpool *rpp = rpmem_create(target, poolset, pool, POOL_SIZE,
&nlanes, &pool_attr);
if (!rpp) {
fprintf(stderr, "rpmem_create: %s\n", rpmem_errormsg());
return 1;
}
/* store data on local pool */
memset(pool, 0, POOL_SIZE);
/* make local data persistent on remote node */
ret = rpmem_persist(rpp, DATA_OFF, DATA_SIZE, 0, 0);
if (ret) {
fprintf(stderr, "rpmem_persist: %s\n", rpmem_errormsg());
return 1;
}
/* close the remote pool */
ret = rpmem_close(rpp);
if (ret) {
fprintf(stderr, "rpmem_close: %s\n", rpmem_errormsg());
return 1;
}
free(pool);
return 0; }
NOTE¶
The librpmem API is experimental and may be subject to change in the future. However, using the remote replication in libpmemobj(7) is safe and backward compatibility will be preserved.
ACKNOWLEDGEMENTS¶
librpmem builds on the persistent memory programming model recommended by the SNIA NVM Programming Technical Work Group: <>
SEE ALSO¶
rpmemd(1), ssh(1), fork(2), dlclose(3), dlopen(3), ibv_fork_init(3), rpmem_create(3), rpmem_drain(3), rpmem_flush(3), rpmem_open(3), rpmem_persist(3), strerror(3), limits.conf(5), fabric(7), fi_sockets(7), fi_verbs(7), libpmem(7), libpmemblk(7), libpmemlog(7), libpmemobj(7) and <> | https://manpages.debian.org/testing/librpmem-dev/librpmem.7.en.html | CC-MAIN-2021-49 | en | refinedweb |
SkyOS Beta 8.5 has just been released. New features include the Indexing Service, an SQL based file attribute and content index service which makes it possible to find your files in a fraction of a second, better developer support, and a lot of bug fixes. NVU has also been ported and is available in this release. You can read the changelog here. Update: Screenshots gallery by OSDir.
SkyOS Beta 8.5 Released
About The Author
Thom Holwerda
Follow me on Twitter @thomholwerda
110 Comments
-
-
2005-08-08 12:10 amAnonymous Penguin
Trouble is, what is a troll? everybody with a different opinion? Then we are all trolls..
That aside, SkyOS isn’t something I’m particularly fond of–to me, it looks like a hodge-podge of features with no particular goal and a dozen or so applications whose UI’s, to put it bluntly, aren’t that great. To put it bluntly, SkyOS is basically a dis-organized, less elegant BeOS.
I know the author tends to visit this website, so I suppose the nice thing to do would be to give him some suggestions instead of bashing his product:
1) Read up on human-computer interaction and interface design. I recommend Tog on Interface, along with his website asktog.com. I also recommend you read Tog on Software Design, which is invaluable for anyone designing a next-generation OS.
2) Get your hands on as many OSes as possible and study their UI and how they act in general.
3) Reuse as much code as possible, but for the love of god, make sure you rewrite the UI’s to fit in with the overall feel of your OS. I cannot stress this enough!
4) There is no number 4.
5) I recommend you ditch C++ for a more modern language. Python is great, but it wouldn’t be too hard to create your own python-like language that compiles into C. If you don’t want to create your own, I recommend using C++ for the back-ends and python or some other advanced language for the graphical front-ends.
– bytecoder
2005-08-06 9:41 pmThom Holwerda
I agree that SkyOS needs some proper, sensible (G)UI design. Badly. I mentioned this multiple times in the SkyOS forums. The conclusion? They’ll look at it when they’ye in RC stage. Let’s hope they don’t forget
.
2005-08-06 11:32 pm
“I agree that SkyOS needs some proper, sensible (G)UI design. Badly. I mentioned this multiple times in the SkyOS forums. The conclusion? They’ll look at it when they’ye in RC stage. Let’s hope they don’t forget
.”
Yeah, but Thom all you ever do is complain that window control buttons aren’t in the OSX position, or that the title bar doesn’t act like BeOS, then back up you opinions with either anecdotes or the claim that “apple spent millions on HCI”.
you never actually bring up real issues like space wasting window frames, or the moving taskbar, or actual app interfaces.
2005-08-06 9:45 pmsonic1001
“Python is great, but it wouldn’t be too hard to create your own python-like language that compiles into C.”
I think your pretty much descrbing what Pypy does
RPython -> Translator -> Highly optimized C
—–
Compared to the CPython implementation, Python takes the role of the C Code. We rewrite the CPython interpreter in Python itself.
[…]
translate our high-level description of Python to a lower level one.
[…]
In order to make a C code generator feasible we restrict ourselves to a subset of the Python language [RPython], and we adhere to some rules which make translation to lower level languages more obvious
-
2005-08-06 10:44 pm
[quote.[/quote]
You expect companies who invest their time, money, and effort to give back to the community. That is something companies are bound to do if they use GPL software. They wont give back otherwise. They simply dont care and are in it for the money.
2005-08-07 6:29 am
first of all GPL DOES NOT prevent companies from releasing commercial software. It only insures that open source software stays that way. There is commercial software published under GPL and Xchat is only one that comes to mind. While xchat is free for linux and *nix in general it is not so for win32. However you can still get the source and compile is yourself thought there are some major problems you might encounter. Oh yeah and if you think that companies will spend money developing software and then give it away … I suggest you take a look at say Novell, IBM, Apple, and several others. They all do that and they all use some for of an OSS licencing. (some more that others) So get with the program. Most companies do not really make their money from selling software but rather supporting it and about 99% of the users love the convenience of commercial software ( OSS or not) because it’s easier for them to pay than learn all that *useless crap*.
Oh and SkyOS is too little too late as far as OSes go. No one really cares about another OS even if it were to change the face of computing. Plus SkyOS is really a noname OS published by a noname company. And as far as the OSS thing goes I am sorry guys but you suck. Turning an OSS system into a closed source one for the sake of making money in just gay and stupid. Provided that the only market share that you may have (for a long while) is geeks, this is a cutting-the-hand-that-feeds-you moove.
-
-
2005-08-06 9:50 pmroguelazer
The developer knew the Win32 API and wanted to stick to what he knew rather than foraging blindly into unknown territory? There’s still lots of people that don’t like this newfangled object oriented stuff. I may not count myself in their number, but I do understand where they are coming from…
-
2005-08-07 4:54 amLumbergh
OO development is not natural to many people. Once you move out of the GUI realm of development it gets hard to map abstract concepts into OO. Luckily, the Unix world hasn’t bought into OO as much as others. I suggest looking at data-driven programming.
One of my pet peeves has always been languages that force you into stuffing everything into a class and declaring it static. Just put into a namespace and be done with it.
When in this world do we have to pay for beta? LoL people in this world don’t even like paying for finished products and this guy is trying to sell beta. Well sure some itchy people might be paying but hey screw it, if he asked for donation i would have given but nah i am not paying for a beta of something which is far behind other OSes and even free OSes.
SkyGI in terms of design is very similar to Win32, this was proved when the ReactOS team, created a SkyGI Wrapper for ReactOS’s win32 implementation and MS Win32, allowing SkyOS apps to be run on ReactOS and Windows.
I recommend you ditch C++ for a more modern language. Python is great, but it wouldn’t be too hard to create your own python-like language that compiles into C.
What on Earth have you been smoking? Python is MUCH slower than C/C++; why would you want to create an OS GUI with it? And creating “your own Python-like language” is not exactly a trivial undertaking. Far better to deploy the time and personnel developing the project than reinventing the wheel.
2005-08-10 5:49 pm
as of 8.5 I think it has a great api. It introduces interfaces, so that you can assign a whole different function to each widget. For example you can define kind of an OnMyButtonClick function. Ive programed strait c in windows before, and trust me.. skyos is a cakewalk to program in compared to windows. It may have had a few similarities, but they are certainly nothing alike.
Of course I know it’s much slower than C/C++, but it doesn’t usually matter when you’re writing non-intensive graphical programs. Also, I’m not sure how you managed to mistake “graphical front-ends” with OS GUI.
Anyway, Creating a python-like language that compiles into C isn’t really that hard if your a decent programmer; I should know, I’ve already looked into it heavily.
– bytecoder
2005-08-06 10:36 pmRodrigo
Anyway, Creating a python-like language that compiles into C isn’t really that hard if your a decent programmer; I should know, I’ve already looked into it heavily.
I wouldn’t recommend the guy developing SkyOS to venture that himself, better to keep the focus…
If he really wanted to go on the Python direction (which I agree would be good), he’d better pick something ready and use it.
Well, the problem with that is that there really isn’t anything good enough to (generally) replace C++ out there. Python is great and all, but it’s incredibly slow. O’Caml comes close speed-wise, but it falls short in general applicability. The language of choice for an OS is very important, and if it’s well-liked, it can help foster a nice dev community, which is why I don’t think he should skimp on something like this.
2005-08-06 11:23 pm
If you don’t write shity Ocaml code it’s faster then C++, the only compiler thats then faster is the Intel C compiler, the Intel C++ being slower again.
I’m nos sure what you mean with “applicability”, since the only thing I can think of that keeps ocaml from being used is that so few people know it (or even aoubt it).
2005-08-08 1:14 pm
“Well, the problem with that is that there really isn’t anything good enough to (generally) replace C++ out there”
-
2005-08-07 1:36 amrm6990
So? At least then they might produce a decent product, even if they don’t give back. Helping a company make a better product is considered giving back to the community in my book. Even if they do charge for it, you still have the option of using the open source version and improving it.
So write your own code and release it under a BSD license and then companies can use the code all they want, whether they give back or not.
Really, it is up to the developers what license their software uses, not the users. So, like I said, write your own code and respect other programmer’s wishes.
There is a reason they chose the GPL….just in-case you weren’t aware of this 😛
2005-08-07 1:47 amCuriosityKills
Everyone has the right to do what they want. But isn’t it ok to have our own opinion and share that?
To me people who release code under GPL has this mentality:
Since i can’t make money off software, i will make sure no one can Or we can look at it like:
I like doing coding but the work that i share, i want to make sure that the contribution to it are shared back because there are many mean people out there who will steal it and misuse it otherwise.
And the one who release BSD think:
Coding is my passion and thats what i do. I give it to people for free so that they can use it as they find suitable.
To me frankly it is immaterial what developers think, i simply don’t like GPL since i feel GPL will cause long term irreversible damage to software industry.
For once again to people who are not clear why i think GPL is not good for Our industry health:
Universities do research->Release code under BSD-> Companies benefit from it and make a product out of it->Companies make money->Donate some back to universities->Universities get more resources for research.
GPL on the other hand kills this eco-system. Many average GPL tools have killed better commercial tools since it is difficult to compete with FREE unless you have features which no one else has which is very difficult to do in software.
2005-08-07 2:39 amShannara
Unfortunately dumb trolls are allowed to vote … this is very bad for the community …. Especially when good people post great posts only to have lame trolls vote you down … any os mods around? Anyways, you make some excellent points, and perfect facts concerning the mentality of GPL and BSD based licenses …
Voted up ..
2005-08-07 3:29 amsepht
I think trying to generalize the mentality of GPL users in such a negative way, and putting BSD in such a postitive light seems a bit trollish to me.
BSD and GPL both have their merits, and I find that there are situations when it seems more reasonable to use BSD, but there are also times GPL makes more sense.
GPL Mentality falls under: if you want to use this code, I want the ability to use your code too.
BSD is simply: Here’s the code, use it how you wish, just make sure I get credit for what I wrote.
The difference between my examples and your’s is that you threw motive into it. “Coding is my passion” “i want to make sure no one can,” your spinning BSD as honorable and GPL as restrictive and dictoreal in motive. I think thats total bullocks. Everyone has their own reason for doing it, I don’t think passion or money are any part of it.
To me BSD seems careless (about what the code is used for) and GPL seems very groupie (i.e. if you want to use our code, join our group and only THEN can you use it)
I hope this doesn’t get voted down by people who are stern supporters of either side, I tried to be fair and honost, and remember, this is all my opinion about how it is, if you think I’m wrong, post a reply.
2005-08-07 5:06 amwakeupneo
Universities do research->Release code under BSD-> Companies benefit from it and make a product out of it->Companies make money->Donate some back to universities->Universities get more resources for research.
Welcome to fantasy land. Population – You. If the company involved was penny pinching by using OSS in the first place, do you really think they’ll go out of their way to pay anyone for anything? It may happen, but it would be the exception, not the rule.
GPL on the other hand kills this eco-system. Many average GPL tools have killed better commercial tools since it is difficult to compete with FREE unless you have features which no one else has which is very difficult to do in software.
So? If you’re trying to build a tool that competes in a crowded marketplace, then design and build a better mousetrap, make it proprietary and hope someone finds it useful enough to pay for it. BTW, why does the above paragraph single out GPL tools when the same would apply to tools licensed under the BSD license? Your bias is coming through loud and clear.
Personally, I like both licenses and believe both have their place. Whining about one not being enough like the other is just pointless. If you don’t like the license the software is licensed under, there’s no point whining about it – just ignore it and move along.
Now, back on topic, I can’t help but be impressed with the progress SkyOS is making. Every update brings jumps in functionality and additional apps. Great work guys! Can’t wait for 1.0
2005-08-09 10:00 amprokoudine
> To me people who release code under GPL has this mentality: Since i can’t make money off software, i will make sure no one can
You couldn’t be more wrong. I mean, yes, there are SOME people who have that mentality. The majority of opensource developers I contact to, have this mentality:
“I know that I’m not almighty, so if I release the code under GPL, people can help me to develop a better tool me me and for them”
2005-08-09 1:10 pm
GLP is just a licence, your philosophic whine on the eco-system of software coding is flawed and not based on reality; but pure rhetoric.
To code must have a few brain cells to rub together, so I assume coders are smart enough to decide what licence to code under.
So I guess your little whine about the GLP licence would have little effect on swaying anybodys opinion.
-5 from me Einstein.
“Well, the problem with that is that there really isn’t anything good enough to (generally) replace C++ out there.”
Actually, not true. The D Programming Language serves exactly that purpose, the only thing it lacks is widespread adoption. See
I’ll say this…. I have always been a supporter of gpl and to a certain extent I still am…. however.. if I start to see the balance go to much towards gpl I will support BSD more or in certain instances even closed source liscensing.
A large liscensing imbalance is not good for anybody, however, the balance of liscensing causes intense competition and gives us, the consumers a much better product.
anyway… there is no reason for SkyOS to go GPL, it is his right to go closed source just as much as it is our right to go gpl or bsd or whatever.
time will tell if he made the right decision, and if he didn’t…. he may open the source.
It seems that people are to religous about this gpl/bsd/closed liscensing… I think it is a good thing to have a balance between the various liscensing and I think it would be detrememtal to the industry and to hundreds of thousands of jobs to make everying opensource. If everything is closed source.. that opens up the opposite extreme….. price gouging.
I will do what I can to maintain balance
First things first, C++ is still widely used in the industry and has a proven track record.
Secondly, Why the heck would anyone want to create a new language for an OS they are developing, I mean, your trying to get developers interested, not drive them away by telling them “Our new language is the best way to do things, please take several months out to learn it”.
Thirdly, Win32API thoughts… imho, a wise decision, ALOT of developers KNOW the Win32API calls (And as someone who has used them for years, they really arn’t that bad at all), and it’s very easy to throw your own C++ class wrappers around it, not to mention easy to get other compilers to support your own OS.. Take for example FreePascal and Lazarus (Delphi clone front end) – It sounds to me like this project could easily be ported to SkyOS without much effort, giving SkyOS developers a speedy Delphi style interface.
Just my 2cents worth in to the kettle.
Just wanted to let you people know.. after you get tired of debating the damn GPL and skyos (like always). btw, you don’t have to think about paying $30 for beta testing, instead if you prefer, it’s $30 for skyos 5.0 final, and the beta is just a free perk for paying up front for the final version. The way I think about it is that by the time I am done beta testing and 5.0 comes out, all my reported bugs will have been worked out.
I am a beta tester for skyos. The last revision is much better than ever. Of course it still is buggy. I am currently having trouble with the automatic dhcp that is built in. However when it does work firefox is blazing fast on skyos. For some reason firefox works so good on skyos it’s unbelievable. Pages render in a split second and browsing is super fast. I guess maybe skyos’s tcp/ip stack is just really fast and efficient or something. Or maybe it’s the new kernel.
Anyways, Skyos installed in about 7 minutes for me with all the packages available. In beta 8.4 it took around 11 minutes, so some definite speed improvements can be seen this time.
unfortunately my 6800gt is still unsupported. but what can you expect when Nvidia refuses to support alternative OS’s. The Vesa 2.0 driver works ok in the mean time.
what’s my opinion of skyos? I like it a lot because it doesn’t require me to know what is going on behind of scenes in order to config the whole system. Instead it just boots up and automatically makes all my hardware work. Something windows nor linux can do at the moment. Provided though that skyos still has limited support for drivers.
I use Gentoo Linux and Windows XP on a everyday basis, and Skyos seems like a mixture of both. It has a windows like interface, however it also has a lot of the features that MacOS touts, such as realtime indexing and “it just works” type of install. I can see that a lot of linux stuff is implemented in there too, but skyos covers up all the config with a simple MacOS like control panel that let’s you config the whole system with a few clicks.
ok, just wanted to give you my impression. Maybe i’ll write a review.
2005-08-07 5:03 amLumbergh
I guess it comes down to a matter of timing. If this was 1995, then SkyOS might be able to play off the whole pay for the Beta scheme, but with fairly mature hobbyist desktop environments like KDE and Gnome, then there is less of a reason…even it is a measly $30
My question is what does SkyOS offer over someone taking the linux kernel and doing something novel in userspace? You get all the drivers for free and don’t have to worry about the viralness of the GPL in userspace.
In any case, best of luck to the SkyOS team.
2005-08-07 8:01 amAnonymous Penguin
Pardon me, why are KDE and Gnome “hobbyist desktop environments” ?
What would be “a professional desktop environment” in your view?
“Apple OSX”
And the problem is that companies can’t take GPL work, improve it and market. No wonder there is no cool UI like Apple for Linux.
2005-08-07 6:52 amwakeupneo
“Apple OSX” – Well, since the education market has been Apple’s biggest cashcow for a long long time, they kinda have a vested interest there don’t they. But besides that, as I said before, it’s the exception and not the rule.
And the problem is that companies can’t take GPL work, improve it and market. – It’s not a problem for the author of the code and ultimately (and rightly so I might add) they’re the ones that decide how their code should be used. If the company wants to use OSS code in this way then they’re more than welcome to use BSD licensed code instead.
No wonder there is no cool UI like Apple for Linux. – It’s just a matter of time really. Here’s just one of several that are being worked on:
Can’t wait ’till it’s ready!
First of all get a name. Second you are a dumbass. Third read my post again. Commercialization is inherent necceasity for industrial growth. GPL is crippling industry.
Name one good innovative software designed by your so called GPL brothers? Most of the innovation has happened in universities and commercial companies but with advent of GPL, it is cutting the legs of industry to stand on the shoulder of industry research.
People like you want all the things free in your life and you can’t see other making money because you are jealous but my friend, prosperity brings prosperity. Wonder if anyone would like to work if they get beer and food for free.
Now shut up your fuckin stallman dog type crap.
And about IBM, IBM is the biggest fokked up company. They know they make money by selling hardware and they are using Linux as a puppet. Its not even worth talking to morons like you…
Why do comments on SkyOS always are about the GPL, but not about SkyOS itself? Only a small group of people do that, so you all lose focus guys. Try for once talking about something relevant when SkyOS is mentioned, talking with GPL zealots is plain boring and they never learn so there’s no use for a discussion. GPL zealots – fcuk you!
Yes, mod me down, I will have 100% proof that GPL zealots exist here. Else no-one would be bothered by this comment.
2005-08-07 8:07 ambornagainenguin
Why do the comments alwaystend to focus on GPL and not on the tremendous (or so we’re told) improvements in SkyOS itself?
Hmm…perhaps its this wa for the simpl reason that if SkyOS /were/ under GPL then it would be possible to test out the fcuking thing–as it is all these once intresting announcements on OSNiews.com ammount to is recollections of the ‘Look at my new toy!’ handwaving so many of us recall from childhood visits to the neighborhood spoilted brat who always had th cool toys that only his ‘extra special but only for today bestest friend’…..
–bornaganpenguin
2005-08-07 8:20 amAnonymous Penguin
Apropos new toys: darn, your nick is better than mine, LOL!
2005-08-08 7:19 ambornagainenguin
> ‘lets make windows look like a Mac craze that hit a few years back and originally based on the old Windowblinds skin ‘iWin’ does anyone even remember that one? but recently I’ve had people in the Linux community not take me seriously about tryingwanting to move to Linux full time and a name change was appropriate, and this one most appropriate of all…)
unfortunately my ‘new’ (used) laptop has a few sticky keys and I ended up mistyping my user name….d’ya think there might be a way some how to get the mods to allow me to fix it?
[looks at mods] Hint…hint…Please?
2005-08-08 7:37 amAnonymous Penguin.
2005-08-08 8:13 ambornagainenguin
That’s a cool story too..
As for contacting them, I intend to–I just don’t know how much work it will be on their end and hesitate to bug ’em about something that was more or less my own fault…
–bornagainpenguin
2005-08-07 9:17 amThom Holwerda.
As much as I hate to admit it, I know you have a very good point.
2005-08-07 9:25 amAnonymous Penguin
+ vote for you too!
2005-08-07 11:21 am
You mean having spent $30 makes you a cool kid? Oh brother. This is a commercial product and you have to pay for it. It is really cheap. If you cannot live with that fact then fine, but don’t bring up the same old discussion about open-sourcing SkyOS as it will simply not happen. You can have any of the many other operating systems that are both free and open-source. Their way is fine, I’m a happy Ubuntu user myself. But it doesn’t mean it’s the *only* “true” way of doing things. Nobody moans about Microsoft not open-sourcing their products anymore, I hope the same will be with SkyOS. Have a good day
2005-08-07 1:32 pm
now this is just conjecture but, i’mma hazard the guess that the main market for altos lies in the college age (and a bit beyond) (the old fogeys having been using *nix so long they feel no desire to change) demographic and to most kids in college or fresh out of it any money is a lot of money, especially when the return value is uncertain.
2005-08-08 7:46 ambornagainenguin
>>You mean having spent $30 makes you a cool kid?
No, I mean getting to play with a cutting edge rapidly evolving OS that looks like it might actually be a ‘good’ Windows replacememnt for home users…fooling around with a new toy…since SkyOS is not yet available in stores or as a viable general release and betas are all that are truly available all it CAN be is a toy to play with.
>>If you cannot live with that fact then fine, but don’t bring
>>up the same old discussion about open-sourcing SkyOS
>>as it will simply not happen.
Fine with me, just don’t expect the rest of us to pay any attention to your new shiny OS that no one can play with.
>>You can have any of the many other operating systems
>>that are both free and open-source. Their way is fine, I’m
>>a happy Ubuntu user myself. But it doesn’t mean it’s the
>>*only* “true” way of doing things.
Congratulations on your Ubuntu experience. (Have you tried XYZ? LOL, no seriously!) I’m glad that you’ve found a Linux to your liking, there’s one for everybody…join the party! I’ve begun to like Ubuntu quite well now that I’ve found the ‘unofficial’ addon cd made by some of the Ubuntu forumers. That said this completely emphaises and underscores my point…if you go back and reread my post (go ahead I’ll wait..)
…
(You ARE rereading, aren’t you? ;P)
…
I only mentioned the GPL (or open sourcing) SkyOS in connection with an answer to the question of why SkyOS discussions always degenerate into GPL (or accusations of improper GPL usage which I personally think Robert is too intelligent to have commited..) rants and so little SkyOS focused commentary takes place. My answer: Not enough people have access to the betas to make commentary–a state of affiars that hurts EVERYONE including SkyOS and its users.
You may note on your careful re-reading of my post that except mentioning the GPL in relation to it being /one/ way to open things up to allow everyone to participate (or to ‘play’ if you will…:P) I don’t make any mention of the GPL in my own suggestions for solving the problem of there being a derth of SkyOS related commentary. Let’s try not to hurt ourselves noding our heads in agreement with each other, okay? I haven’t brought the GPL into play here–you have.
Me–? I just wanna see how this cool toy behaves on my hardware…[laughs] but I’m not about to pay for the privledge of seeing if it will, even if it DOES net me a cool looking official cd-rom some day in the future.
I’ll stick to the things I can try before I buy, thank you very much. And there’s nothing wrong with that. Plenty of other fish in the sea, and soon enough others will realize that and you’ll see the SkyOS discussions with only a token few comments at every story much like you see with RiscOS and the various incarnations of Amiga. ‘tho I’m not quite sure that will be an improvement….
2005-08-07 11:24 amrodviking
[quote].
[/quote]
You hit the nail straight on. Perfect.
2005-08-07 11:54 amThom Holwerda
You hit the nail straight on. Perfect.
Agreed. Really.
2005-08-07 1:09 pmLakedaemon
I agree too.
What SkyOS really needs is a growing community of developers who code 3rd party applications.
To increase the number of people who use/try (and like) SkyOS is the way to get them.
This solution wouldn’t disadvantage actual memebers of the beta team too, cause…the actual “beta membership” is actually about spopnsoring alternate os development AND getting SkyOS 5 final.
As a beta tester, I would actually welcome some move that would increase the SkyOS user/tester/coder base.
On a completely different subject now, I think Osnews should take (more drastic) measures to keep skyos threads on the focus.
Why not automatically get a -2 rating to a post in the skyos thread that would use the words “GPL” or “OPEN SOURCE” ?
I am really SICK (as might be many others) of having to parse 50 opensource zealots posts to get to the actual 10 interesting SkyOS posts…
Lakedaemon
2005-08-07 8:53 pm
>Why not automatically get a -2 rating to a post in the >skyos thread that would use the words “GPL” or “OPEN >SOURCE” ?
Because things like SkyIRC now under GPL would immediately be below the threshold and nobody would see it. Now this might be a bit rude so you pansies can stop reading now. If some of you anti GPL, pro SkyOS zealots could pull you head out of your ass long enough to look at the some of the software that runs on you chosen platform you would finally understand that the GPL and Open Source is a good thing, especially for you. If neither existed your platform would have no software, no compiler and without those two no users.
2005-08-07 9:02 pm
Wait, who has said that Open Source is bad? No one in this topic. The only things have been said that I can figure that you got confused enough into thinking that someone here said OSS was bad were the “quit asking for SkyOS to be Open Sourced”. Which certainly isn’t a bad thing to ask. Seemingly everytime something about SkyOS gets posted on OSnews, someone, somewhere, manages to bring up the point that “omg SkyOS isn’t GPL, I won’t use it”. The only thing I can say to that is: Grow up.
2005-08-07 9:12 pm
>Seemingly everytime something about SkyOS gets posted on >OSnews, someone, somewhere, manages to bring up the >point that “omg SkyOS isn’t GPL, I won’t use it”. The >only thing I can say to that is: Grow up.
What’s your fucking point? Every time somebody brings up MacOSX somebody bitches about how Aqua is not Open Source and how it is not fair since Darwin is based on FreeBSD, but you know what, THOSE THREDS DON’T END IN FLAME WARS!!!!! Some people just aren’t very grown up, maybe because they are minors, its not like we check for IDs. Ignore them, masturbate, do whatever you want just don’t post another “I am so sick and tired of this shit” post. OK?
Come on, you are acting like SkyOS is being singled out by some sort of nut jobs. What about all the BSD is dying or BSD is dead. UNIX will die this year and so on posts that liter the internet? No mater how it feels like to you know, you are not being singled out, but by 6 billion people on the planet, lots with PCs and Internet, couldn’t it just be that some idiots got internet access to? Hey AOLs everywhere you know.
-
-
2005-08-07 2:38 pmgreg
Great post.
I really couldn’t care less whether or not SkyOS is open source, but I do think they’re missing out on a great community of OS geeks out there that will happily test it out and report bugs and altogether make it a far better system, this shoudn’t cost either side any money, and both stand to gain. Also, by allowing betas, SkyOS could gain far more people who are willing to pay for the OS.
Release the betas, you can’t lose!
2005-08-07 11:00 amThom Holwerda
What I don’t like about SkyOS is that iit’s so similar to Windows. Why the hell someone should use a windows a like system instead of Windows.
Care to elaborate on this? What parts do you find too similar to Windows? Have you actually used SkyOS?
You are right about SkyOS being too similar to Windows, but the SkyOS team said they are doing that because else they will scare off Windows users.
It’s a discussion, problem is SkyOS and OSS go hand in hand, SkyOS uses Open Source software for its user applications.
Many people view it as, you should only reap what you sow, and many of these “Zealots” believe the SkyOS team are not sowing enough, and reaping plenty.
I personally do not care if my OS is Open, Closed, Commercial or Free, as long as it does what I want. But many have strong views about Open Source, I know there are a few in the SkyOS community who would be considered SkyOS Zealots, at one time I myself was probably considered one.
You maybe fed up with the “Open it then I’ll use it” comments, and that’s fine, I understand where you are coming from here. Should we also stop the un-educated comments aswell? such as some OS users dismissing Linux as s fad? dismissing BSD simply as a dying OS?
Thing is we can educate un-educated people, the same is with the OSS “Zealot” comments educate them, and show that SkyOS is giving back / is going to give back; show them the OS is worth while using.
You can spout facts, figures, and technical jargon to do this, but theres nothing like real hands on experience, so the idea proposed by “Bornagainpenguin” would help “educate” these people.
2005-08-07 3:11 pm
Well…. I think you are a bit too idealistic there but I concede that you are right.
Mind you, I only suggested to automatically mod these “open source it” posts down a bit, to spare time for the modders and the readers that come in the SkyOS threads for usefull information.
Isn’t it what the post-rating is all about ?
To come back to mister Bornagainpenguin’s post.
It looks like a few people here finds it sensible.
Maybee we should point his post to Robert and have his comments on the subject (it would be interesting).
On the other side, maybee Kelly and him already thought about that and dismissed the idea
Lakedaemon.
2005-08-07 3:35 pmRodrigo
(quote)
To come back to mister Bornagainpenguin’s post.
It looks like a few people here finds it sensible.
Maybee we should point his post to Robert and have his comments on the subject (it would be interesting).
(/quote)
Since Tom H. seems to be know SkyOS well, maybe he could use that post as starting point for an interview with Robert, or write an article expanding these ideas and present it to him..
2005-08-07 3:43 pmThom Holwerda.
2005-08-07 6:18 pmroguelazer.)
2005-08-07 6:47 pmThom Holwerda.
-
It’s a very common phenomenom among some software developers to keep improving it, adding functionality, fixing bugs, and never release it, always having the feeling that there’s a bit more to be done, a bit more to be improved, one more cool functionality to be added.
The problem might get even worst if the software in question is a one-man-show, as it seems to be the case on SkyOS.
Robert should let his baby go, it’s grown enough
The post by bornagainenguin is one of the most lucid I’ve read about SkyOS in ages.
Both me and Thom are from the SkyOS community and date back from SkyOS’s 3.X.X days, but of late I have moved away from the community, I can’t speak for Thom, but I do enjoy watching SkyOS’s progress, I just don’t follow it as rigorously as I used to (when I setup eXpert Zone as a SkyOS news / help site).
But yes someone should present the ideas to Robert, and Kelly and see what happens from there.
True, I am being idealistic… we can all dream can’t we?
The biggest problem with the voting system is that sure we can vote comments down to -5, but as quick as we can do that somebody else could vote it to +5. The voting system is more of a community based democracy, if the comment is totally insane vote it down, if you are unsure leave it be, and if its the best comment ever vote it up.
All i can suggest is that people in the SkyOS community register and help balance the democracy..
2005-08-08 7:56 ambornagainenguin
[quote:].
[:quote]
: /
To me, IMHO that IS pirating SkyOS…you’re promising to develop for the system, using their resources (bandwith, time, ect) and then not giving them anything back. The very definition of theft. Not trying to be personal about it, just the way I feel.
–bornagainpenguin
Problem is, that they don’t highlight this fact on the “Site” it’s all well and good mentioning it in the forum, but it’s highly unlikely any developer is going to sift through reems of posts looking for “serious developers can get the Beta free”
@Youll
Well… I just registered…And I’ll help with the balancing if I can
@Thom
[quote]
In this letter we made a few proposals that could improve SkyOS’ stability, development process, and more…
[quote]
Those are sensible thoughts too. From my distant point of view, it often looked as if there was no roadmap, no planning, no organisation in the SkyOS development process…And It made me wonder….
Well..one year after signing for beta membership,
I must acknowledge that :
1) SkyOS improves fast (essentially due to Robert “I have 5 arms & 3 brains” coding skills and involvment).
2) We have no clue what the next new features will be,
but usually, they are worth the wait.
3) bugs are squashed in a somewhat random manner…
Actually, Robert looks like he isn’t the kind of guy that works best when tied to a development process based on routin and disciplin…
And I would bet my horse that, given a choice between squashing bugs or doing documentation work, he would choose to implement a new feature…
humans…sigh…..
So, maybee that’s the reason why this letter of yours didn’t have the result you expected…
(even though your ideas are sensible, maybee Robert would just hate working under these conditions)
@Youll and Thom
If you don’t want to , I might
post on the SkyOS forums and ask Kelly if he could get in touch with Robert and get us answers to a few questions :
1) what about Mister pengui’s idea ?
2) now that Beta 8.5 is released, what next Beta 8.6 or beta 9, and what to expect ?
3) As Beta 8 was about networking, what about USB ?
Lakedaemon
I’m sorry, but what is the point of this operating system? It’s closed source, but offers nothing of competitive advantage compared to open source (and free) OSes like Linux or FreeBSD. When looking at the screenshots, you see nothing but ported GPL software like Firefox, Thunderbird, Nvu, etc. It looks like a Linux clone with a less thought-out UI.
My question is: who would want to pay for something resembling a Linux clone without any compelling advantages over Windows except the price tag? What is the target audience? Clueless PC users who _are_ aware of alternative OSes but _haven’t_ heard of Linux? Sorry again, but that target audience doesn’t exist.
2005-08-07 7:14 pmThom Holwerda
You see nothing but ported GPL software like Firefox, Thunderbird, Nvu
Erm, get your facts straight: FireFox and Thunderbird are MPL, and NVU is MPL/LGPL/GPL tri-license.
2005-08-07 9:03 pm
>>You see nothing but ported GPL software like Firefox, >>Thunderbird, Nvu
>Erm, get your facts straight: FireFox and Thunderbird >are MPL, and NVU is MPL/LGPL/GPL tri-license.
Get your facts straight? Come on, you are being to picky, if you name me any larger GPL license body of code I could find parts under the LGPL or GPL + amendment or derived (and hence still under) from MIT/X and BSD code. Besides what about GIMP, Blender (yes, yes with BSD, don’t get picky again), Gaim and so on. The point is a point even if you pick on its minute details.
2005-08-07 9:58 pmdjst).
Anyway. the point was that it’s free software, available in e.g. Linux and FreeBSD. Why pay for an OS that has no unique value and offers no benefits compared to the free alternatives?
2005-08-07 10:08 pm
).
Who gives a shit? The point originally made by the poster was not about some brain-dead licensing minutia yet that is what people attack in his comment. As soon as I point this out some dickhead goes back to the licensing thing rather then let it be good enough. Me thinks that the problem with the SkyOS threads that people just cant stop bitching about have got more to do with people nitpicking other peoples shit for irrelevant data to attack. What is this? Politics, where you attack the persons integrity if you cant fend of his arguments?
-
2005-08-08 8:02 ambornagainenguin
Speed.
Of the OS and of its development.
Those are the two things most people are intrested in most in an alternative OS and SkyOS seems to have them both in spades…
I say seems because I simply have nothing to base anything on…but it would be nice to have something that ‘just works’ and ‘works well’ but is NOT Windows.
Price is only a small part of things. Not even the main part.
–bornagainpenguin
So let me just list a few things:
1. If you are interested in developing for SkyOS you can get a free membership in order to use the betas (until 5.0 finals comes out).
2. 5.0 Final will be a packaged cd product, shipped to your home. You are paying $30 right now for that. The beta access is just an extra perk for paying right now. If you like, you could just wait for 5.0 final and pay then. Plus, when 5.0 final comes out it has been said the price might go up.
3. A free “timebomb” SkyOS release would never work because history has taught us that if you make copy protection, it will be broken. In fact, there are old skyos betas and cracks for those betas floating around on p2p software. The skyos team is aware of it, as we even get people who unknowningly go onto our forums and start posting about their problems with SkyOS, even though they don’t have beta access.
4. Eventually a FREE SkyOS live cd will come out. It’s expected to come out around 5.0 final. You have to realize that the live cd will just be to satisfy those who want to see the OS before they buy it. SkyOS has a lot of advanced features that require the native SkyFS in order to function correctly. Already in our beta cd’s we have a live-cd like option, but much of the software in Live-cd mode cannot work without a hard drive. I guess that will be fixed eventually.
5. What’s the appeal of SkyOS when you have Linux and Windows already? Windows is just too expensive for many to legally obtain. Linux is just too complicated generally for people to even begin to think about installing it. And those linux installions that are easy also cost money. SkyOS is on the cutting edge of OS’s. We already have realtime indexing. A 3D system implemented. Photorealistic shadow support. A really good system viewer/file manager. Latest apps.
6. Things we lack are driver support for many devices. but you have to realize that Robert is not yet ready to release all the API and documention in order to create drivers. He is more interested in implementing new features and getting the bugs worked out of them. When the time is right, he will release the driver development kit and many other kits for developers.
7. You should be happy he is using GPL software because any changes and fixes he makes he submits back to the source. SkyFS for example was a derivative of Open BeFS and he managed to squash many of the bugs to make it a viable FS for skyos. He also plans to release many documents explaining how to port firefox for example. He ported NVU in only a few days after somebody requested it.
2005-08-07 10:35 pmdjst
“Linux is just too complicated generally for people to even begin to think about installing it.”
I find installing Ubuntu to be as easy as installing Windows XP on a new computer. Generally, though, I agree about Linux being too complicated.
“SkyOS is on the cutting edge of OS’s. We already have realtime indexing. A 3D system implemented. Photorealistic shadow support. A really good system viewer/file manager. Latest apps.”
Do you think your target audience will appeal to those arguments? You have yet to come up with one compelling reason to choose this operating system over another. There is nothing that you just mentioned that would make a Real User choose SkyOS over other commercial operating systems. As I said in my original post, the only compelling reason is the price tag, but I also said you could get all this for free if you choose e.g. Linux.
I’m certainly not against non-GPL software. I’d be happy to pay for software that helps me get my job done. However, in order to switch from something familiar, compatible, and working to some new solution called SkyOS, you just have to have better arguments than the price tag. If it’s just about the cost, Linux fits my needs like a glove.
I’ve been reading the skyos.org website and found no argument to choose SkyOS over other OS’es. It offers nothing that other OS’es don’t have. Most of the software bundled is freely available.
Maybe I shouldn’t judge an unfinished product, since the final version 5.0 will not be released until next year. On the other hand, skyos.org is already promoting the unfinished product as “an alternative for people looking for a fast, stable, inexpensive, and most of all, user-friendly desktop experience”, along with screenshots showing off some popular open-source apps wrapped in a strange OS UI environment. Where’s the selling arguments? I just read technical terms like 64-bit journaled filesystem and UTF-8 support. Your target audience don’t care about that.
Speaking of technical details, the FAQ claims that “SkyOS uses no GPL’d code in the kernel/system.” However, the tour lists Ext2/Ext3 filesystem support as one of its features. I find that a bit amusing.
2005-08-07 10:51 pm
1. If you can prove you are serious about developing for SkyOS you can get a free copy, but I haven’t seen a good explanation about how to qualify as serious.
2. Given.
3. Given. But once SkyOS reaches a usable state and a larger audience p2p copies will be unavoidable. I’m not saying that this in any way affects your statement about a “timebomb” SkyOS copy.
4. Given.
5. I’m not too fond of Windows and there are things that annoy me about Linux. But since I like my Unix underpipings and want better quality hardware my next PC will be an Intel based Mac.
6. Yes, Robert has his interests, thing is those alone don’t build a viable OS. He would be well advised to release those parts of the OS (cal it a driver kit if you must) that people need to know about to develop or port (from lets say FreeBSD to not have the problem of GPL drivers with a non GPL OS) their own drivers. Some sort of accommodations need to be made and the best time is the sooner the better.
7. I should be happy as what? As SkyOS user or as user of GPL and Open Source software that is also available on SkyOS? I must agree that SkyOS users can be happy since they get good software but how dose it help the GPL and Open Source users? Code is partially not contributed back yet since the SkyOS 5 final version is not out yet and lots of those fixes will be SkyOS specific. Nobody else will care.
Thing is, a hole OS is a large project, especially for one guy and especially if he is doing it alone. He will have to make some accommodations to his users since he can’t do everything that a larger organization like RedHat, Novel, Apple or Microsoft can do. Problem is we have SkyOS users talking and not Robert. Problem is that Robert is a bit of a recluse.
I would like to see all this get sorted out but I know it will take a lot of time.
It says on the site that the beta is available for download. Is it only available to developers or is it available to the public?
2005-08-08 2:48 amAnonymous Penguin
Sorry friend, but where have you been? A two years vacation on Mars?
It is available to beta testers only, that is people who have paid 30Eur (or dollars, not quite sure)
SkyOS is coming along nicely. Perhaps it will replace linux as the default OS on the Walmart computers or provide a cheap powerful OS to developing countries. Either way great work guys!
Sad to see a project with such potential tied download to EULA. | https://www.osnews.com/story/11487/skyos-beta-85-released/ | CC-MAIN-2021-49 | en | refinedweb |
12
Networking in Flutter.
Loading data from the network to show it in a UI is a very common task for apps. In the previous chapter, you learned how to serialize JSON data. Now, you’ll continue the project to learn about retrieving JSON data from the network.
Note: You can also start fresh by opening this chapter’s starter project. If you choose to do this, remember to click the Get dependencies button or execute
flutter pub getfrom Terminal.
By the end of the chapter, you’ll know how to:
- Trigger a search for recipes by name.
- Convert data returned by the API to model classes.
With no further ado, it’s time to get started!
For your remote content, you’ll use the Edamam Recipe API. Open this link in your browser:.
Click the SIGN UP button at the top-right and choose the Recipe Search API option.
The page will display multiple subscription choices. Choose the free option by clicking the START NOW button in the Developer column:
On the Sign Up Info pop-up window, enter your information and click SIGN UP. You’ll receive an email confirmation shortly.
Once you’ve received the email and verified your account, return to the site and sign in. On the menu bar, click the Get an API key now! button:
Next, click the Create a new application button.
On the Select service page, click the Recipe Search API link.
A New Application page will come up. Enter raywenderlich.com Recipes for the app’s name and An app to display raywenderlich.com recipes as the description — or use any values you prefer. When you’re done, press the Create Application button.
Once the site generates the API key, you’ll see a screen with your Application ID and Application Key.
You‘ll need your API Key and ID later, so save them somewhere handy or keep the browser tab open. Now, check the API documentation, which provides important information about the API including paths, parameters and returned data.
Accessing the API documentation
At the top of the window, right-click the API Developer Portal link and select Open Link in New Tab.
Using your API key
For your next step, you’ll need to use your newly created API key.
Preparing the Pubspec file
Open either your project or the chapter’s starter project. To use the HTTP package for this app, you need to add it to pubspec.yaml, so open that file and add the following after the json_annotation package:
http: ^0.12.2
Using the HTTP package
The HTTP package contains only a few files and methods that you’ll use in this chapter. The REST protocol has methods like:
Connecting to the recipe service
To fetch data from the recipe API, you’ll create a file to manage the connection. This file will contain your API Key, ID and URL.
import 'package:http/http.dart';
const String apiKey = '<Your Key>'; const String apiId = '<your ID>'; const String apiUrl = '';
// 1 Future getData(String url) async { // 2 print('Calling url: $url'); // 3 final response = await get(url); // 4 if (response.statusCode == 200) { // 5 return response.body; } else { // 6 print(response.statusCode); } }
class RecipeService { // 1 Future<dynamic> getRecipes(String query, int from, int to) async { // 2 final recipeData = await getData( '$apiUrl?app_id=$apiId&app_key=$apiKey&q=$query&from=$from&to=$to'); // 3 return recipeData; } }
Building the user interface
Every good collection of recipes starts with a recipe card, so you’ll build that first.
Creating the recipe card
The file recipe_card.dart contains a few methods for creating a card for your recipes. Open it now and add the following import:
import '../network/recipe_model.dart';
Widget recipeCard(APIRecipe recipe) {
imageUrl: recipe.image,
recipe.label,
child: recipeCard(recipe),
Adding a recipe list
Your next step is to create a way for your users to find which card they want to try: a recipe list.
import '../../network/recipe_service.dart';
List currentSearchList = List();
List<APIHits> currentSearchList = List();
Retrieving recipe data
In recipe_list.dart, you need to create a method to get the data from
RecipeService. You’ll pass in a query along with the starting and ending positions and the API will return the decoded JSON results.
// 1 Future<APIRecipeQuery> getRecipeData(String query, int from, int to) async { // 2 final recipeJson = await RecipeService().getRecipes(query, from, to); // 3 final recipeMap = json.decode(recipeJson); // 4 return APIRecipeQuery.fromJson(recipeMap); }
// 1 Widget _buildRecipeList(BuildContext recipeListContext, List<APIHits> hits) { // 2 final size = MediaQuery.of(context).size; const itemHeight = 310; final itemWidth = size.width / 2; // 3 return Flexible( // 4 child: GridView.builder( // 5 controller: _scrollController, // 6 gridDelegate: SliverGridDelegateWithFixedCrossAxisCount( crossAxisCount: 2, childAspectRatio: (itemWidth / itemHeight), ), // 7 itemCount: hits.length, // 8 itemBuilder: (BuildContext context, int index) { return _buildRecipeCard(recipeListContext, hits, index); }, ), ); }
Removing the sample code
In the previous chapter, you added code to recipe_list.dart to show a single card. Now that you’re showing a list of cards, you need to clean up some of the existing code to use the new API.
APIRecipeQuery _currentRecipes1;
Widget _buildRecipeLoader(BuildContext context) { // 1 if (searchTextController.text.length < 3) { return Container(); } // 2 return FutureBuilder<APIRecipeQuery>( // 3 future: getRecipeData(searchTextController.text.trim(), currentStartPosition, currentEndPosition), // 4 builder: (context, snapshot) { // 5 if (snapshot.connectionState == ConnectionState.done) { // 6 if (snapshot.hasError) { return Center( child: Text(snapshot.error.toString(), textAlign: TextAlign.center, textScaleFactor: 1.3), ); } // 7 loading = false; final query = snapshot.data; inErrorState = false; currentCount = query.count; hasMore = query.more; currentSearchList.addAll(query.hits); // 8 if (query.to < currentEndPosition) { currentEndPosition = query.to; } // 9 return _buildRecipeList(context, currentSearchList); } // TODO: Handle not done connection }, ); }
// 10 else { // 11 if (currentCount == 0) { // Show a loading indicator while waiting for the movies return const Center(child: CircularProgressIndicator()); } else { // 12 return _buildRecipeList(context, currentSearchList); } }
Key points
- The HTTP package is a simple-to-use set of methods for retrieving data from the internet.
- The built-in
json.decodetransforms JSON strings into a map of objects that you can use in your code.
FutureBuilderis a widget that retrieves information from a
Future.
GridViewis useful for displaying columns of data.
Where to go from here?
You’ve learned how to retrieve data from the internet and parse it into data models. If you want to learn more about the HTTP package and get the latest version, go to. | https://www.raywenderlich.com/books/flutter-apprentice/v1.0.ea3/chapters/12-networking-in-flutter | CC-MAIN-2021-49 | en | refinedweb |
#include <kinematic_options.h>
Definition at line 50 of file kinematic_options.h.
Bits corresponding to each member. NOTE: when adding fields to this structure also add the field to this enum and to the setOptions() method.
Definition at line 58 of file kinematic_options.h.
Constructor - set all options to reasonable default values.
Definition at line 41 of file kinematic_options.cpp.
Copy a subset of source to this. For each bit set in fields the corresponding member is copied from source to this.
Definition at line 72 of file kinematic_options.cpp.
Set state using inverse kinematics
Definition at line 48 of file kinematic_options.cpp.
how many attempts before we give up.
Definition at line 93 of file kinematic_options.h.
other options
Definition at line 99 of file kinematic_options.h.
This is called to determine if the state is valid.
Definition at line 96 of file kinematic_options.h.
max time an IK attempt can take before we give up.
Definition at line 90 of file kinematic_options.h. | http://docs.ros.org/en/hydro/api/moveit_ros_robot_interaction/html/structrobot__interaction_1_1KinematicOptions.html | CC-MAIN-2021-49 | en | refinedweb |
Providing Data for Plots and Tables¶
No data visualization is possible without the underlying data to be represented.
In this section, the various ways of providing data for plots is explained, from
passing data values directly to creating a
ColumnDataSource and filtering using
a
CDSView.
Providing data directly¶
In Bokeh, it is possible to pass lists of values directly into plotting functions.
In the example below, the data,
x_values and
y_values, are passed directly
to the
circle plotting method (see Plotting with Basic Glyphs for more examples). capabilites, such as streaming data,
sharing data between plots, and filtering data.
ColumnDataSource¶)
The
data parameter can also be a Pandas
DataFrame or
GroupBy object.
source = ColumnDataSource(df)
If a
DataFrame is used, the CDS will have columns corresponding to the columns of
the
DataFrame. If the
DataFrame has a named index column, then CDS will also have
a column with this name. However, if the index name (or any subname of a
MultiIndex)
is
None, then the CDS will have a column generically named
index for the index. orginal columns. The CDS columns are
formed by joining original column names with the computed measure. For example, if a
DataFrame has columns
'year' and
'mpg'. Then passing
df.groupby('year')
to a CDS will result in columns such as
'mpg_mean'
Note this capability to adapt
GroupBy objects may only work with Pandas
>=0.20.0.
Note
There is an implicit assumption that all the columns in a given
ColumnDataSource
all have the same length at all times. For this reason, it is usually preferable to
update the
.data property of a data source “all at once”.
Streaming¶.
source = ColumnDataSource(data=dict(foo=[], bar=[])) # has new, identical-length updates for all columns in source new_data = { 'foo' : [10, 20], 'bar' : [100, 200], } source.stream(new_data)
For an example that uses streaming, see examples/app/ohlc.
Patching¶.
The tuples that describe patch changes are of the form:
(index, new_value) # replace a single column value # or (slice, new_values) # replace several column values
For a full example, see examples/howto/patch_app.py.
Filtering data with CDSView¶)
IndexFilter¶
The
IndexFilter is the simplest filter type. It has an
indices property which is a
list of integers that are the indices of the data you want to be included in the plot.
from bokeh.plotting import figure, output_file, show from bokeh.models import ColumnDataSource, CDSView, IndexFilter from bokeh.layouts import gridplot output_file("index_filter.html")]]))
BooleanFilter¶
A
BooleanFilter selects rows from a data source through a list of True or False values
in its
booleans property.
from bokeh.plotting import figure, output_file, show from bokeh.models import ColumnDataSource, CDSView, BooleanFilter from bokeh.layouts import gridplot output_file("boolean_filter.html")]]))
GroupFilter¶.
In the example below,
flowers contains a categorical variable
species which is
either
setosa,
versicolor, or
virginica.
from bokeh.plotting import figure, output_file, show from bokeh.layouts import gridplot from bokeh.models import ColumnDataSource, CDSView, GroupFilter from bokeh.sampledata.iris import flowers output_file("group_filter.html")]]))
CustomJSFilter¶
You can also create a
CustomJSFilter with your own functionality. To do this, use JavaScript
or CoffeeScript.
Javascript¶
To create a
CustomJSFilter with custom functionality written in JavaScript,
pass in the JavaScript code as a string to the parameter
code:
custom_filter = CustomJSFilter(code=''' var indices = []; // iterate through rows of data source and see if each satisfies some constraint for (var i = 0; i <= source.get_length(); i++){ if (source.data['some_column'][i] == 'some_value'){ indices.push(true); } else { indices.push(false); } } return indices; ''')
Coffeescript¶
You can also write code for the
CustomJSFilter in CoffeeScript, and
use the
from_coffeescript class method, which accepts the
code parameter:
custom_filter_coffee = CustomJSFilter.from_coffeescript(code=''' z = source.data['z'] indices = (i for i in [0...source.get_length()] when z[i] == 'b') return indices ''')
Linked selection¶)
Linked selection with filtered data¶.plotting import figure, output_file, show from bokeh.layouts import gridplot from bokeh.models import ColumnDataSource, CDSView, BooleanFilter)
Other Data Types¶
Bokeh also has the capability to render network graph data and geographical data. For more information about how to set up the data for these types of plots, see Visualizing Network Graphs and Mapping Geo Data. | https://docs.bokeh.org/en/0.12.11/docs/user_guide/data.html | CC-MAIN-2020-34 | en | refinedweb |
Padatious is a machine-learning, neural-network based intent parser. It is an alternative to the Adapt intent parser. Unlike Adapt, which uses small groups of unique words, Padatious is trained on the sentence as a whole.
Padatious has a number of key benefits over other intent parsing technologies.
With Padatious, Intents are easy to create
The machine learning model in Padatious requires a relatively small amount of data
Machine learning models need to be trained. The model used by Padatious is quick and easy to train.
Intents run independently of each other. This allows quickly installing new skills without retraining all other skill intents.
With Padatious, you can easily extract entities and then use these in Skills. For example, "Find the nearest gas station" ->
{ "place":"gas station"}
System generated documentation for the Padatious codebase, generated by Sphinx and hosted at ReadTheDocs.
In speech recognition and voice assistance, an intent is the task the user intends to accomplish. A user can accomplish the same task
Each of these examples has very similar intent. The role of Padatious is to determine intent programmatically.
In the example above, we would.
Padatious uses a series of example sentences to train a machine learning model to identify an intent.
The examples are stored in a Skill's
vocab[lang] directory, in files ending in the file extension
.intent. For example, if you were to create a tomato Skill to respond to questions about a tomato, you would create the file
vocab/en-us/what.is.intent
This file would contain examples of questions asking what a tomato is.
What would you say a tomato is?
What's a tomato?
Describe a tomato
What defines a tomato
and
vocab/en-us/do.you.like.intent
with examples of questions about mycroft's opinion about tomatoes:
Are you fond of tomatoes?
Do you like tomatoes?
What are your thoughts on tomatoes?
Are you fond of
{type} tomatoes?
Do you like
{type} tomatoes?
What are your thoughts on
{type} tomatoes?
Note the {type} in above examples these are wild-cards where matching content is forwarded to the skill's intent handler.
Each file should contain at least 4 examples for good modeling.
In the above example,
{type} will match anything. While this makes the intent flexible, it will also match if we say something like Do you like eating tomatoes?. It would think the type of tomato is eating which doesn't make much sense. Instead, we can specify what type of things the {type} of tomato should be. We do this by defining the type entity file here:
vocab/en-us/type.entity
which would contain something like:
redreddishgreengreenishyellowyellowishripeunripepale
Now, we can say things like Do you like greenish red tomatoes? and it will tag type: as greenish red.
A skill using Padatious is no different than previous skills except that
self.register_intent_file() is used instead of
self.register_intent(). To register a
.entity file, use
self.register_entity_file().
For example, the Tomato Skill would be written as:
from mycroft import MycroftSkillclass TomatoSkill(MycroftSkill):def __init__(self):MycroftSkill.__init__(self)def initialize(self):self.register_intent_file('what.is.intent', self.handle_what_is)self.register_intent_file('do.you.like.intent', self.handle_do_you_like)def handle_what_is(self, message):self.speak('A tomato is a big red thing')def handle_do_you_like(self, message):tomato_type = message.data.get('type')if tomato_type is not None:self.speak("Well, I'm not sure if I like " + tomato_type + " tomatoes.")else:self.speak('Of course I like tomatoes!')
The
register_intent_file(intent_file, handler) methods arguments are:
intent_file: the filename of above mentioned intent files with the .intent as argument.
handler: the method/function that the examples in the intent_file should map to
The corresponding decorator is also available:
@intent_handler()
In the handler method the wild card words can be fetched from the message using
def handler(self, message):word = message.data.get('your_keyword') # if not present will return None
Sometimes you might find yourself writing a lot of variations of the same thing. For example, to write a skill that orders food, you might write the following intent:
Order some {food}.Order some {food} from {place}.Grab some {food}.Grab some {food} from {place}.
Rather than writing out all combinations of possibilities, you can embed them into a single line by writing each possible option inside parentheses with | in between each part. For example, that same intent above could be written as:
(Order | Grab) some {food} (from {place} | )
Nested parentheses are supported to create even more complex combinations, such as the following:
(Look (at | for) | Find) {object}.
Which would expand to:
Look at {object}Look for {object}Find {object}
Let's say you are writing an Intent to call a phone number. You can make it only match specific formats of numbers by writing out possible arrangements using
# where a number would go. For example, with the following intent:
Call {number}.Call the phone number {number}.
the number.entity could be written as:
+### (###) ###-####+## (###) ###-####+# (###) ###-####(###) ###-#######-#######-###-#######.###.####### ### ##############
Let's say you wanted to create an intent to match places:
Directions to {place}.Navigate me to {place}.Open maps to {place}.Show me how to get to {place}.How do I get to {place}?
This alone will work, but it will still get a high confidence with a phrase like "How do I get to the boss in my game?". We can try creating a
.entity file with things like:
New York City#### Georgia StreetSan Francisco
The problem is, now anything that is not specifically a mix of New York City, San Francisco, or something on Georgia Street won't match. Instead, we can specify an unknown word with :0. This would would be written as:
:0 :0 City#### :0 Street:0 :0
Now, while this will still match quite a lot, it will match things like "Directions to Baldwin City" more than "How do I get to the boss in my game?"
NOTE: Currently, the number of :0 words is not fully taken into consideration so the above might match quite liberally, but this will change in the future.
NOTE: This section is of use if you are using Padatious on a project other than Mycroft. If you're developing Skills for Mycroft, you don't need to worry about this
program.py:
from padatious.intent_container import IntentContainercontainer = IntentContainer('intent_cache')container.load_file('hello', 'hello.intent')container.load_file('goodbye', 'goodbye.intent')container.train()data = container.calc_intent('Hello there!')print(data.name)
hello.intent:
Hi there!Hello.
goodbye.intent:
See you!Goodbye!
You can then run this in Python using:
python3 program.py
NOTE: This section is of use if you are using Padatious on a project other than Mycroft. If you're developing Skills for Mycroft, you don't need to worry about this
Padatious is designed to be run in Linux. Padatious requires the following native packages to be installed:
Python development headers
pip3
swig
To install these packages on a Ubuntu system, run this command:
sudo apt-get install libfann-dev python3-dev python3-pip swig
Next, install Padatious via
pip3:
pip3 install padatious
Padatious also works in Python 2 if you are unable to upgrade. | https://mycroft-ai.gitbook.io/docs/mycroft-technologies/padatious | CC-MAIN-2020-34 | en | refinedweb |
Alarm tutorial
This tutorial is for an alarm application that uses a simple countdown mechanism. It relies on two input buttons to set, activate and silence the alarm. During the countdown, the device is in sleep mode. When the countdown ends and the alarm triggers, an LED and a digital out pin go high. They go back to low when the alarm is reset.
The LEDs provides some feedback to the user: when setting the alarm, the LEDs blink to show the input was recognised. When the alarm is fully set, the LEDs blink the configured delay once, before letting the device go into sleep mode.
Tip: You can complete this tutorial with the Mbed Online Compiler or Mbed CLI.
Import the example application
If using Mbed CLI, use the
import command:
mbed import mbed-os-example-alarm cd mbed-os-example-alarm
If using the Online Compiler, click) { ThisThread::sleep_for(10); } // Once the delay has been input, blink back the configured hours and // minutes selected for (uint8_t i = 0; i < hour_count * 2; i++) { hour_led = !hour_led; ThisThread::sleep_for(250); } for (uint8_t i = 0; i < min_count * 2; i++) { min_led = !min_led; ThisThread::sleep_for(250); } // Attach the low power ticker with the configured alarm delay alarm_event.attach(&trigger_alarm_out, delay); // Sleep in the main thread while (1) { sleep(); } }
Compile and flash to your board
To compile the application:
- If using Mbed CLI, invoke
mbed compile, and specify the name of your platform and toolchain (
GCC_ARM,
ARM,
IAR). For example, for the ARM Compiler 5 and FRDM-K64F:
mbed compile -m K64F -t ARM
- If using the Online Compiler, click the Compile button.
Your PC may take a few minutes to compile your code.
Find the compiled binary:
- If using the Online Compiler, the compiled binary will be downloaded to your default location.
- If using Mbed CLI, the compiled binary will be next to the source code, in your local copy of the example.
Connect your Mbed device to the computer over USB.
Copy the binary file to the Mbed device.
Press the reset button to start the program.
Use the alarm
The alarm isn't set to a timestamp; it counts down from the moment it's activated. So to set the alarm, specify the countdown duration:
- Press Button1 for the number of desired hours to delay.
- Press Button2 to cycle to minutes, and repeat the previous step for the number of desired minutes.
- Press Button2 again to start the alarm.
- Press Button2 again once the alarm triggers to silence it.
Extending the application
You can set the alarm to a specific time by relying on either the platform's RTC or the time API. You will need to set the time on each reset, or rely on an internet connection and fetch the time.
Troubleshooting
If you have problems, you can review the documentation for suggestions on what could be wrong and how to fix it. | https://os.mbed.com/docs/mbed-os/v6.2/apis/drivers-tutorials.html | CC-MAIN-2020-34 | en | refinedweb |
In this tutorial we will check how to serialize a Python dictionary to a JSON string.
Introduction
In this tutorial we will check how to serialize a Python dictionary to a JSON string.
This tutorial was tested with Python version 3.7.2.
The code
We will start the code by importing the json module. This module will expose to us the function that allows to serialize a dictionary into a JSON string.
import json
After this we will define a variable that will contain a Python dictionary. We will add some arbitrary key-value pairs that could represent a person data structure, just for illustration purposes.
person = { "name": "John", "age": 10, "skills": ["Cooking", "Singing"] }
To serialize the dictionary to a string, we simply need to call the dumps function and pass as input our dictionary. Note that this function has a lot of optional parameters but the only mandatory one is the object that we want to serialize.
We are going to print the result directly to the prompt.
print(json.dumps(person))
Note that if we don’t specify any additional input for the dumps function, the returned JSON string will be in a compact format, without newlines.
So, we can make use of the indent parameter by passing a positive integer. By doing this, the JSON string returned will be pretty printed with a number of indents per level equal to the number we have passed.
We will pass an indent value of 2.
print(json.dumps(person, indent = 2)) print("------------\n\n")
To finalize, we will check another additional parameter called sort_keys. This parameter defaults to False but if we set it to True the keys will be ordered in the JSON string returned.
print(json.dumps(person, sort_keys = True))
The final code can be seen below.
import json person = { "name": "John", "age": 10, "skills": ["Cooking", "Singing"] } print(json.dumps(person)) print("------------\n\n") print(json.dumps(person, indent = 2)) print("------------\n\n") print(json.dumps(person, sort_keys = True))
Testing the code
To test the code, simply run it in a tool of your choice. I’ll be using IDLE, a Python IDE.
You should get an output similar to figure 1. In the first print we can see that the dictionary was correctly converted to a compact JSON string, as expected.
In the second print we can see a prettified version of the JSON with the 2 indents that we specified in the code.
In the third print we can confirm that the keys were ordered.
References
[1] | https://techtutorialsx.com/2020/02/18/python-converting-dictionary-to-json-string/?shared=email&msg=fail | CC-MAIN-2020-34 | en | refinedweb |
zeromq 4.2.1
Interface to the C ZeroMQ library
To use this package, run the following command in your project's root directory:
ZeroMQ zeromq.org
"ZeroMQ in a hundred words" ."
-quote straight from:
Usage can be derived by translating the guide above using the interface(s) here. Alternatively some wrapper implementations are listed below.
Usage:
import deimos.zmq.zmq; link your program with zmq, and Zap, Pow! (to quote the site) it works.
Implementations:
- Registered by Matt Soucy
- 4.2.1 released 3 years ago
- D-Programming-Deimos/ZeroMQ
- github.com/D-Programming-Deimos/ZeroMQ
- LGPL v3
- Authors:
-
- Dependencies:
- none
- Versions:
- Show all 21 versions
- Download Stats:
24 downloads today
75 downloads this week
490 downloads this month
42147 downloads total
- Score:
- 4.4
- Short URL:
- zeromq.dub.pm | https://code.dlang.org/packages/zeromq/4.2.1 | CC-MAIN-2020-34 | en | refinedweb |
System.
Diagnostics Namespace
The System.Diagnostics namespace provides classes that allow you to interact with system processes, event logs, and performance counters.
Classes
Structs
Interfaces
Enums
Delegates
Remarks
The EventLog component provides functionality to write to event logs, read event log entries, and create and delete event logs and event sources on the network. The EntryWrittenEventHandler provides a way to interact with event logs asynchronously. Supporting classes provide access to more detailed control, including: permission restrictions, the ability to specify event log types (which controls the type of default data that is written with an event log entry), and iterate through collections of event log entries. For more information about these tasks, see the EventLogPermission, EventLogEntryType, and EventLogEntryCollection classes.
The Process class provides functionality to monitor system processes across the network, and to start and stop local system processes. In counter component provide access to collections of counters, counter permission, and counter types.
The System.Diagnostics namespace also provides classes that allow you to debug your application and to trace the execution of your code. For more information, see the Trace and Debug classes. | https://docs.microsoft.com/en-us/dotnet/api/system.diagnostics?view=netframework-4.8 | CC-MAIN-2020-34 | en | refinedweb |
table of contents
- buster 4.16-2
- buster-backports 5.04-1~bpo10+1
- testing 5.07-1
- unstable 5.07-1
NAME¶gets - get a string from standard input (DEPRECATED)
SYNOPSIS¶
#include <stdio.h>
char *gets(char *s);
DESCRIPTION¶Never use this function.
gets() reads a line from stdin into the buffer pointed to by s until either a terminating newline or EOF, which it replaces with a null byte ('\0'). No check for buffer overrun is performed (see BUGS below).
RETURN VALUE¶gets() returns s on success, and NULL on error or when end of file occurs while no characters have been read. However, given the lack of buffer overrun checking, there can be no guarantees that the function will even return.
ATTRIBUTES¶For an explanation of the terms used in this section, see attributes(7).
CONFORMING TO¶C¶Never | https://manpages.debian.org/buster/manpages-dev/gets.3.en.html | CC-MAIN-2020-34 | en | refinedweb |
I've spent two whole days trying to create an initial migration for the database of my project. This is so fustrating. Each preview version of the docs points towards different directions, and there're a lot of unclosed issues flying arround for a while.
My project is an AspNetCore application running on the full framework (net462) although I think I've tryed every combination of preview versions, even the workarounds proposed on this issue: EF Tools 1.1.0-preview4 Unrecognized option '--config' or in this one: but neither work.
This is an abstract of my project.json with the relevant parts:
{ "version": "1.0.0-*", "buildOptions": { "platform": "x86", "debugType": "full", "preserveCompilationContext": true, "emitEntryPoint": true }, "dependencies": { .... "Microsoft.EntityFrameworkCore": "1.1.0", "Microsoft.EntityFrameworkCore.Design": "1.1.0", "Microsoft.EntityFrameworkCore.SqlServer": "1.1.0", "Microsoft.EntityFrameworkCore.SqlServer.Design": "1.1.0", .... }, "tools": { "Microsoft.AspNetCore.Server.IISIntegration.Tools": "1.1.0-preview4-final", "Microsoft.EntityFrameworkCore.Tools.DotNet": "1.1.0-preview4-final" }, "frameworks": { "net462": { } }, ... }
In my case the proposed workarounds don't work, neither using the nightly builds nor downgrading the tools to 1.0.0-preview3.
If I use the 1.1.0-preview4-final version of the tools I hit this error:
Unrecognized option --config
If I use the nightly builds I get this one, wich is somehow absurd, as my app has only one project and is not a dll (it has also emitEntryPoint:true set)
Could not load assembly 'Sales'. Ensure it is referenced by the startup project 'Sales'
But this is my favourite one, when I downgrade to the 1.0.0-preview3-final of the tools I get this surrealistic one:
error: Package Microsoft.EntityFrameworkCore.Tools.DotNet 1.0.0-preview3-final is not compatible with netcoreapp1.0 (.NETCoreApp,Version=v1.0). Package Microsoft.EntityFrameworkCore.Tools.DotNet 1.0.0-preview3-final supports: netcoreapp1.0 (.NETCoreApp,Version=v1.0)
I had to read it five times to get sure that in the second sentence was telling just the opposite of the first one... It seems a joke!
Furthermore, commands are not working on the PMC anymore, no matter wich version of the tools I install, no matter if I restore the packages and if I restart the computer...
I'm getting crazy with so many versions of everything and I only want to create a migration, it doesn't matter wich version of the tools I have to use... Is there a valid configuration nowadays or am I trying something imposible? Has anybody been able to create migrations within an asp.net core application targeting the full .net framework (net462) with ANY version of the ef tooling?
If so, HOW?
EDIT:
After targeting the project to .netcoreapp1.0 and removing the incompatible references now I hit this error:
A fatal error was encountered. The library 'hostpolicy.dll' required to execute the application was not found in 'C:\Program Files (x86)\dotnet\shared\Microsoft.NETCore.App\1.0.1'
What's happening here??? I'm really tired of .net Core, and it's still in it's first version. I've suffered a lot of issues like this while it was in beta, but now things are supposed to be stable... They have changed twenty times everything that can be changed, APIs, assembly names, namespaces, package names, conventions... Now let's wait for the preview5, 6 or 25 of the tooling and maybe by the year 2035 EF Core will have appropiate tools and procedures, meanwhile I damn a million time my decission of betting for this technology!
EDIT 2:
As per comments global.json may be relevant:
{ "projects": [ "src", "test" ], "sdk": { "version": "1.0.0-preview2-1-003177" } }
and to add that the 1.0.0-preview2-1-003177 folder exists and is the only one in C:\Program Files (x86)\dotnet\sdk\ and C:\Program Files\dotnet\sdk\
I hate to answer my own question, but I suppose that not too much people will go into this alley... So for those who are struggling with a similar problem I'll tell that mine came from this configuration on project.json:
... "buildOptions": { "platform": "x86", <- THIS!!! "debugType": "portable", "preserveCompilationContext": true, "emitEntryPoint": true },
after removing the "platform" key, migrations started to work again...
I'm not really sure when did I introduced that setting, since I didn't try to create migrations before upgrading to the version 1.1 of the .NET Core SDK. Maybe it was copied from one of the examples on internet, maybe it was from a previous version, I don't know, but that has turned me crazy for days, I hope it helps somebody outthere. | https://entityframeworkcore.com/knowledge-base/41068199/can-t-create-a-migration-with-ef-core-1-1 | CC-MAIN-2020-34 | en | refinedweb |
Folks,
I wrapped the new ServiceContainer.dumpServices() functionality in an MBean.
You can now point jconsole to 'jboss:ServiceContainer' to get access to this management inferface
public interface ManagedServiceContainer { List<String> listServices(); List<String> listServicesByMode(String mode); List<String> listServicesByState(String state); void setMode(String serviceName, String mode); }
Besides providing general state information on registered services, this also allows to set the mode on a given service controller. Hence, it can be used to bring up ON_DEMAND services for example.
With the OSGi integration we were looking for a solution that allows us to bring up the OSGi subsystem triggered by an external provisioning system ( e.g. our hudson testsuite). One obvious approach would have been to copy a dummy bundle into the 'deployments' folder, which would trigger an OSGi deployment. Another approach would have been to register a dummy FrameworkMBean that would allow bundle installation. That MBean would than have to be replaced by its propper implementation that we currently get from Apache Aries.
I figured however that this would be a general problem of bringing up/shutting down arbitrary services.
cheers
-thomas
If we're going to expose stuff like this, we need to carefully design our JMX domain namespaces. Well, we need to do that anyway.
We need users to be able to clearly distinguish mbeans that represent a view on the proper domain-model based management API vs. those that are bypassing it and tweaking internals. This mbean is the latter. The former is what we can commit to as a stable long term API. | https://developer.jboss.org/thread/157977 | CC-MAIN-2020-34 | en | refinedweb |
Symfony 4 - KnpPaginator Bundle "service not found, even though it exists in app's container"
I have been following tutorials, and all instructions show it's done the exact same way, but it doesn't seem to work in Symfony 4. Is there something I'm overlooking or is the bundle simply incompatible?
I ran:
composer require knplabs/knp-paginator-bundle
It was loaded automatically into
bundles.php, thanks to Flex.
Inserted the following into
config/services.yaml:
Tried to use the following in the controller:
$paginator = $this->get('knp_paginator');
and got the following error:
Service "knp_paginator" not found: even though it exists in the app's container, the container inside "App\Controller\PhotoController" is a smaller service locator that only knows about the "doctrine", "form.factory", "http_kernel", "request_stack", "router", "security.authorization_checker", "security.token_storage", "serializer", "session" and "twig" services. Unless you need extra laziness, try using dependency injection instead. Otherwise, you need to declare it using "PhotoController::getSubscribedServices()".
You have to extend
Controller instead of
AbstractController class:
use Symfony\Bundle\FrameworkBundle\Controller\Controller; class MyController extends Controller { public function myAction() { $paginator = $this->get('knp_paginator');
or better leave
AbstractController and inject
knp_paginator service into your action:
use Symfony\Bundle\FrameworkBundle\Controller\AbstractController; use Knp\Component\Pager\PaginatorInterface; class MyController extends AbstractController { public function myAction(PaginatorInterface $paginator) { $paginator->paginate()... }
Compatibility Symfony 4.0.x ? · Issue #468 · KnpLabs , In Symfony 3, the KnpPaginatorBundle is the perfect solution when you work with Doctrine Duration: 4:50 Posted: Mar 29, 2019 I’m looking to implement KnpPaginatorBundle into symfony4. My issue here is that , in their documentation (meant for S3) they recommend using the following parameters i config.yml. knp_paginator:
In my case I use
AbstractController and as
malcolm says, it is beter to inject the service directrly in your action, even so, I call a method several times and I think that overwrite
getSubscribedServices is clener for my porpuse.
public static function getSubscribedServices(): array { $services = parent::getSubscribedServices(); $services['fos_elastica.manager'] = RepositoryManagerInterface::class; $services['knp_paginator'] = PaginatorInterface::class; return $services; } private function listHandler(Search $search, Request $request, int $page): Response { //... $repository = $this->container->get('fos_elastica.manager')->getRepository(Foo::class); //... }
How to install the KnpPaginatorBundle to paginate Doctrine queries , Just add class namespace of KnpPaginatorBundle : in config/bundles.php : <?php return Fix phpunit 4 requiring symfony/yaml <4 By ostrolucky , 2 years ago add symfony 4 to travix build matrix
As it says in the documentation. You must extend the base
Controller class, or use dependency injection instead
pagination in symfony 4 using KnpPaginatorBundle, KnpPaginatorBundle. by KnpLabs. SEO friendly Symfony paginator to sort and paginate. Infos. License: MIT; Score: 808; Last update: 2018-03- Browse other questions tagged symfony symfony-forms symfony-2.1 symfony4 or ask your own question. The Overflow Blog Podcast 228: chatting with Stack Overflow’s community developers
In my case i'm using symfony 4.3, I just injected the class to the methods as an argument and i'm done.
public function list(ProductManager $productManager) { $products = $productManager->prepareProducts(); return $products; }
KnpPaginatorBundle by KnpLabs, composer require knplabs/knp-paginator-bundle. While that's installing Depending on your setup, that may or may not even be possible in Symfony 4. And, in In Symfony 3, the KnpPaginatorBundle is the perfect solution when you work with Doctrine, however with the introduction of Symfony 4, the installation and configuration of this utility isn't well documented and most of the developers will end up implementing another package for sake of simplicity.
Pagination > Mastering Doctrine Relations!, It is, but it's also compatible with Symfony 3. Installing the bundle is largely copy / paste from Posted: Aug 8, 2016 Hi, I'm having a problem using this bundle with Symfony 4.0. I install it via : composer require knplabs/knp-paginator-bundle I then add the basic configuration in my config file Finally I followed the example of the method "listAction (
Pagination with Twig and KnpPaginatorBundle, Symfony 3 - Pagination with Twig and KnpPaginatorBundle (Part 2/14) - Duration: 8:00. Code Duration: 15:46 Posted: Aug 7, 2018 Friendly Symfony paginator to paginate everything. Generally this bundle is based on Knp Pager component. This component introduces a different way for pagination handling. You can read more about the internal logic on the given documentation link. Note: Keep knp-components in sync with this bundle.
KNP Pagination in Symfony 4, The KnpPaginatorBundle is a great bundle for pagination inside Symfony. The bundle requires Symfony >=3.4 and Twig >=2.0 if you use the. | http://thetopsites.net/article/50581373.shtml | CC-MAIN-2020-34 | en | refinedweb |
I'm migrating a project from EF6 to EF-Core. The Metadata API has changed significantly and I am unable to find a solution to this:
Under EF6 I could find the POCO Type from the Proxy Type using:
ObjectContext.GetObjectType(theEntity.GetType)
This, however, does not work under EF-Core (no
ObjectContext class). I've searched and searched to no avail. Does anyone know how to get the POCO type from either the
entity or the
entity proxy type?
EF Core does not support the
ObjectContext API. Further more, EF Core doesn't have proxy types.
You can get metadata about entity types from
IModel.
using (var db = new MyDbContext()) { // gets the metadata about all entity types IEnumerable<IEntityType> entityTypes = db.Model.GetEntityTypes(); foreach (var entityType in entityTypes) { Type pocoType = entityType.ClrType; } }
There is no perfect way. You could for example check the namespace. If it's a proxy it'll
private Type Unproxy(Type type) { if(type.Namespace == "Castle.Proxies") { return type.BaseType; } return type; } | https://entityframeworkcore.com/knowledge-base/38296805/how-to-get-the-entity-poco-type-from-the-entity-proxy-type- | CC-MAIN-2020-34 | en | refinedweb |
NAMEs():
DESCRIPTIONsetOn success, zero is returned. On error, -1 is returned, and errno is set appropriately. (on Linux, does not have the necessary capability in its user namespace: CAP_SETUID in the case of setreuid(), or CAP_SETGID, POSIX.1-2008, 4.3BSD (setreuid() and setregid() first appeared in 4.2BSD).
NOTESSetting. | https://jlk.fjfi.cvut.cz/arch/manpages/man/setregid.2.en | CC-MAIN-2020-34 | en | refinedweb |
libghc-sdl2-doc binary package in Ubuntu Disco ppc64el
This package contains bindings to the SDL 2 library, in both high- and
low-level forms:
.
The SDL namespace contains high-level bindings, where enumerations are split
into sum types, and automatic error-checking is performed.
..
.
This package provides the documentation for a library for the Haskell
programming language.
See http:// | https://launchpad.net/ubuntu/disco/ppc64el/libghc-sdl2-doc | CC-MAIN-2020-10 | en | refinedweb |
Commander
Commander is a small Swift framework allowing you to craft beautiful command line interfaces in a composable way.
Usage
Simple Hello World
import Commander let main = command { (filename:String) in print("Reading file \(filename)...") } main.run()
Type
You can group a collection of commands together.
Group { $0.command("login") { (name:String) in print("Hello \(name)") } $0.command("logout") { print("Goodbye.") } }
Usage:
$ auth Usage: $ auth COMMAND Commands: + login + logout $ auth login Kyle Hello Kyle $ auth logout Goodbye.
Describing Commander
Installation
Commander is available under the BSD license. See the LICENSE file for more info.
Github
You may find interesting
Dependencies
Used By
- jum/tagsync
- steve228uk/StripeConvert
- caojianhua/J
- endocrimes/AssetGen
- anzfactory/UniGenSW
- jum/pdf2png
- LarsJK/swift-luhn-cli
- nsscreencast/249-poker-hands-part-2
- tomokitakahashi/ColorPalletGen
- QueryKit/querykit-cli
- tokorom/swift-build-report
- randolphledesma/SwiftProjects
- kylef-archive/Ploughman
- josefdolezal/fit-mi-paa
- choefele/shpdump
- allewun/binary-search
- subdigital/cocoaconf-austin-2016
- touyou/FPExperiment
- kateinoigakukun/JSONStructGen
- takadev/Mapp
Total: 263
Releases
0.9.1 - 2019-09-23 20:43:23
Enhancements
- Usage/help output for commands which contain flags will now contain the short flag, for example,
-v, --verbose. #71
Bug Fixes
0.9.0 - 2019-06-12 02:57:42
Breaking
- Support for Swift < 4.2 has been removed.
Enhancements
Added syntax for using array as a type with
Argumentinstead of using
VariadicArgument:
command(Argument<[String]>("names")) { names in }
Added support for optional arguments and options, for example:
command(Argument<String?>("name")) { name in } command(Option<String?>("name", default: nil)) { name in }
Added support for using
--to signal that subsequent values should be treated as arguments instead of options.
Tamas Lustyik
Output of
--helpfor group commands will now sort the commands in alphabetical order.
Cameron Mc Gorian
Bug Fixes
0.8.0 - 2017-10-14 11:09:44
0.7.1 - 2017-09-28 12:21:19
Bug Fixes
- The Swift Package now contains the Commander library product.
0.7.0 - 2017-09-28 12:21:08
0.6.0 - 2016-11-27 18:15:02
Enhancements
VariadicArgumentnow supports an optional validator.
- Adds support for variadic options, allowing the user to repeat options to provide additional values. #37
- Argument descriptions are now printed in command help. #33
- Default option and flag default values will now be shown in help output. Only default option types of String and Int are currently supported in help output. #34
Bug Fixes
VaradicArgumenthas been renamed to
VariadicArgument.
0.4.1 - 2016-02-16 18:29:42
Bug Fixes
- Fix a potential crash when
UsageErroris thrown on Linux.
--helpoutput now wraps arguments in diamonds
<>.
- 2015-12-04 20:46:58
Enhancements
- Commander can now run on Linux.
- 2015-12-04 18:11:59
- 2015-11-22 21:54:48
No public facing changes.
- 2015-11-22 21:52:51
Enhancements
- Removes dependency on Objective-C Foundation.
- 2015-11-22 21:52:10
Enhancements
Convenience commands can now throw
You can now supply a
falseNameand
falseFlagwhen creating flags.
Flag("verbose", flag: "v", disabledName": "no-verbose", disabledFlag: "x")
You can supply your own
unknownCommandhelper within a group.
Arguments can now have a description.
Support for variadic arguments.
Bug Fixes
- When invoking a command using a described argument, the argument will know throw an error when the argument is missing.
- Errors are now thrown when a described command receives unknown flags or arguments.
Commander - 2015-09-24 18:17:34
Introduction of Commander. | https://swiftpack.co/package/kylef/Commander | CC-MAIN-2020-10 | en | refinedweb |
Introduction to Bubble Sort in Data Structure
Bubble Sort in Data Structure is one of the easiest sorting algorithm being used. The idea behind this algorithm is to repeatedly compare the elements one by one and swap the adjacent elements to bring them in the correct sorted order. Thus if there are n number of elements in the array then each element will undergo n-1 comparisons. This way after comparing one element with other elements in the array, an element is placed at its place in the sorted list just like a bubble rises up and move. Thus this algorithm is known as Bubble Sort. A number of comparisons are more thus its complexity is more.
Algorithm for Bubble Sort in Data Structure
Bubble Sort is most often used to provide an insight into the sorting algorithms due to its simplicity. It is a stable as well as an in-place algorithm as it does not require extra storage area. Below is the pseudocode for this algorithm to sort the elements of an array arr in ascending order.
Code:
BubbleSort (arr):
BubbleSort (arr):
n =len(arr)
For i=0 to n-1 Repeat step 3
For j=1 to n-1:
If arr[i] > arr[j]: // Compare which one is greater
swap(arr[i],arr[j])
Explanation: In the above pseudocode, n refers to the number of elements in the array. Then the loop is run from the first element of the array to the last element for comparing every element to the next element of the array, if it is greater then they are swapped otherwise loop is incremented. This way the first element is placed at its right place.
Example: Lets consider an array arr = [33,7,2,0,1,98,87,56]
First Pass:
I. 33,7,2,0,1,98,87,56
II. 7,33,2,0,1,98,87,56
III. 7,2,33,0,1,98,87,56
IV. 7,2,0,33,1,98,87,56
V. 7,2,0,1,33,98,87,56
VI. 7,2,0,1,33,98,87,56
VII. 7,2,0,1,33,87,98,56
VIII. 7,2,0,1,33, 87,56,98
Second Pass:
I. 7,2,0,1,33, 87,56,98
II. 2,7,0,1,33, 87,56,98
III. 2,0,7,1,33, 87,56,98
IV. 2,0,1,7,33, 87,56,98
V. 2,0,1,7,33, 87,56,98
VI. 2,0,1,7,33, 87,56,98
VII. 2,0,1,7,33,56,87,98
VIII. 2,0,1,7,33, 56,87,98
Third Pass:
I. 2,0,1,7,33, 56,87,98
II.0,2,1,7,33,56,87,98
III. 0,1,2,7,33,56,87,98
IV. 0,1,2,7,33,56,87,98
V. 0,1,2,7,33,56,87,98
VI. 0,1,2,7,33,56,87,98
VII. 0,1,2,7,33,56,87,98
VIII. 0,1,2,7,33,56,87,98
4.7 (3,220 ratings)
View Course
A similar process is repeated until the loop ends. But we can see we already have got the sorted array. This limitation can be corrected using a flag that sees if any element is swapped in one pass, in case no element has been swapped indicates every element has been placed in the right position.
Code:
BubbleSort (arr):
n =len(arr)
swapped = true
For i=0 to n-1 Repeat 3 and 4 step:
For j=1 to n-1:
If arr[i] > arr[j]:
Swap(arr[i],arr[j])
else swapped =false
if(swapped == false)
break;
Program to Implement Bubble Sort in Data Structure
Bubble Sort algorithm is mostly used in case of computer graphics as it has an ability to detect very small errors such as swap of 2 elements in almost sorted arrays. It is also capable of fixing the error in linear time complexity. Its one of the famous implementation can be seen in polygon filling algorithm where sorting of bounding lines of polygon occurs using their x coordinate and order changes occur at every intersection of the lines with incrementing y coordinate.
Program #1 – Bubble Sort Program
Below is the example of an implementation of the bubble Sorting algorithm in python:
Code:
def bubbleSort(myarr):
n = len(myarr)
for i in range(0,n-1):
for j in range(0, n-1):
if myarr[j] > myarr[j+1]:
myarr[j], myarr[j+1] = myarr[j+1], myarr[j] myarr = [33,7,2,0,1,98,87,56] bubbleSort(myarr)
print ("Array after Sorting is:")
for i in range(len(myarr)):
print ("%d" %myarr[i])
Output:
In the above program, one can also iterate the inner loop from 0 to n-i+1 as, after each pass, one element from the last comes to its sorted order location. Thus with each pass, the number of elements to be sorted reduces by 1.
Still in the above program, one can also optimize to reduce the number of loops by checking if elements have become sorted with every pass.
Example #2 – Optimized Bubble Sort Program
Code:
def bubbleSort(myarr):
n = len(myarr)
for i in range(n):
swapped = False
for j in range(0, n-1):
if myarr[j] > myarr[j+1]:
myarr[j], myarr[j+1] = myarr[j+1], myarr[j] swapped = True
if swapped == False:
break
myarr = [33,7,2,0,1,98,87,56] bubbleSort(myarr)
print ("myarray after Sorting is :")
for i in range(len(myarr)):
print ("%d" %myarr[i],end=" ")
Output:
Complexity of Bubble Sort
- Worst Case Complexity: O(n*n). This type of case occurs when elements of the array are sorted in reverse order. Thus each element of the array is visited twice.
- Average Case Complexity: This case is similar to Worst case complexity as in this case array is half sorted. Thus loops are run through half the array. Thus complexity is O(n2).
- Best Case Time Complexity: O(n). The case when elements of the array are already sorted then complexity includes the time to loop to through all elements once. Thus it takes linear time in the Best case.
- Auxiliary Space: O(1) – As Bubble Sort requires no extra space for storing intermediate results thus almost no auxiliary space is required.
Disadvantage of Bubble Sort
The main disadvantage of Bubble sort can be seen while dealing with an array containing a huge number of elements. As worst-case complexity of this algorithm is O(n2), thus a lot more time is taken to sort them. Thus it is more suitable for teaching sorting algorithms instead of real-life applications.
Conclusion
Now it can be concluded bubble sort is a very easy way of sorting the elements of an array, thus have more time complexity. It is a stable and in-place algorithm which is most used for introducing the concept of sorting algorithms. It is also used in computer graphics because of its feature to detect the minute errors and fix them in linear time.
Recommended Articles
This is a guide to Bubble Sort in Data Structure. Here we discuss the algorithm, complexity, and program to implement bubble sort in data structures with its disadvantages. You may also look at the following articles to learn more – | https://www.educba.com/bubble-sort-in-data-structure/ | CC-MAIN-2020-10 | en | refinedweb |
HOME HELP PREFERENCES
SearchSubjectsFromDates
Please bear with us in reading what is a rather long email communication.
We wanted to summarize as clearly and completely as possible the history
of our experiences trying to install and use Greenstone. First let it
be said that we, being part of a R&D group, fully understand the time and
effort it takes to create and distribute such a package. We also understand
that you can not put yourself in the position of providing serious ongoing
support of the form one might expect from a commercial firm. We deeply
appreciate the kind help of Stefan Boddie in response to our email queries.
We support the open source movement, and think it marvelous that you have
released this substantial effort to the computing community. We are
enormous admirers of the publications and code that have come out of this
group in the past ("mg" prominent among them). The Greenstone system
looks very promising, and we have been highly motivated to get it up and
explore its use. However, we have been working on it for four releases now
(2.31, 2.35, 2.36 and 2.37), starting in May 2001, and have sadly come to
conclude that we must draw a line under our efforts. This communication is
sent in the hope that we might, with once last effort, succeed in making
the system work. Otherwise, we will, with much sadness, have to put it
aside and consider our efforts with it to have been a failure.
We use Sun SPARC platforms running Solaris 2.8; we have both gcc 2.95.2
and 3.0 available (as well as Sun's commercial C compiler, which we rarely
use). Virtually of the important GNU tools are also available. We employ
the depot convention for the management of network-shareable locally installed
software (depot has been around for > 10 years in the UNIX community; we
attach an article we presented at the last USENIX/LISA conference about some
work we have done with it). Local software appears not in /usr/local (which
does not even exist on our machines) but in /depot/bin, /depot/lib, /depot/man,
....
Here is a highly condensed summary of our installation history with
Greenstone:
Version 2.31:
We downloaded the UNIX binary distribution. The script "Install.sh" failed,
and as a result we had to modify the supplied configuration file.
The installation process made assumptions in multiple places about
local software being in /usr/local. The installation instructions did
not specify the use of GNU (as opposed to the Sun-supplied) version of
make. We got stuck at the point that it tried to install wv; we removed this
package from the PACKAGEDIRS list and reconfigured/recompiled, only to
get stuck at another package, whereupon we were advised by S. Boddie
to move to version 2.35.
Version 2.35:
We observed the following:
[...]
--> Install.sh: [cd /depot/package/greenstone_2.35/gsdl]
configuring ...
--> Install.sh: [./configure]
./configure: USE_CORBA=0: is not an identifier
compiling ...
--> Install.sh: [make]
make: Fatal error: No arguments to build
installing ...
--> Install.sh: [make install]
make: Fatal error: Don't know how to make target `install'
--> Install.sh: [cd /site5/SOURCE/INSTALL/greenstone_2.35.dir/gsdl-2.35-linux/Unix]
ERROR: Compilation failed
Greenstone was not installed successfully
Several months after an email report on this set of problems, we were
advised by S. Boddie to move to version 2.36.
Version 2.36:
Given our history with the automated installation system, we downloaded the
source (rather than binary) distribution, and manually compiled all the
packages and installed Greenstone. This was highly non-trivial given the
meager documentation on building from source, but we proceeded with kind
help from the Greenstone mailing list and S. Boddie. We also had to learn
via email from S. B. as to how to set the administrative password and
properly set permissions for gsdl/etc/*, gsdl/tmp, and gsdl/collect
(which were not otherwise documented). We still could not log on,
and after three more email exchanges (with helpful additional information
from S.B.) we finally were able to log on. But we still could not create
a collection. Here is a pithy summary of the final problem list:
1) Can not create a collection unless logged in as admin (quite likely
normal, but the document does not make this clear and we never
received confirmation in response to our email query).
2) During the fourth "Configure collection" step, the text area for
the configuration file display is empty (abnormal according to document).
3) During the "Building" stage, we observed the following error message:
An error has occurred while attempting to build your collection.
The build log contains the following:
copying /depot/package/sam_4.3/pdf/sam_4.3/sam.pdf -->
/depot/package/greenstone_2.36/vendor/tmp/tbuild3/test/import/depot/package/sam_4.3/pdf/sam_4.3/sam.pdf
importing the test collection
No plugins were loaded.
build: ERROR: import.pl failed
Obviously, the system could not find the plugins it required, even
though we had properly set gsdlhome in the file gsdlsite.cfg as
instructed in the documentation.
4) The demo collection icon does not appear on the Greenstone home page.
To create the demo collection, we ran the appropriate Greenstone
executable, as instructed in the installation document:
[...]/cgi-bin/library
Resulting in:
Collections:
------------
Note that collections will only appear as "running" if
their build.cfg files exist, are readable, contain a valid
builddate field (i.e. > 0), and are in the collection's
index directory (i.e. NOT the building directory)
demo public not running
WARNING: No "running" collections were found. You need to
build one of the above collections
We inferred that this occured because the file
$GSDLHOME/collect/demo/etc/collect.cfg was empty, and tried to
manually create such a file by executing "mkcol.pl". Then we
tried to import data by using command "import.pl demo", which
resulted in voluminous output of the form:
RecPlug: getting directory /depot/package/greenstone_2.36/vendor/collect/demo/import
[...]
doc::_calc_OID /depot/package/greenstone_2.36/vendor/bin/SunOS/hashfile could not be found
BASPlug: WARNING: language/encoding could not be extracted from /depot/package/greenstone_2.36/vendor/collect/demo/import/index.txt - defaulting to en/iso_8859_1
[...]
HTMLPlug: processing faobetf/fb34fe/fb34fe.htm
doc::_calc_OID /depot/package/greenstone_2.36/vendor/bin/SunOS/hashfile could not be found
Then, out of curiosity, we tried to build a collection using the
data from the preceding (failed) step using "./buildcol.pl demo":
*** creating the compressed text
collecting text statistics
Uncaught exception from user code:
mgbuilder::compress_text - couldn't run
/depot/package/greenstone_2.36/vendor/bin/SunOS/mg_passes
mgbuilder::compress_text('mgbuilder=HASH(0x13d8bc)', 'section:text') called at ./buildcol.pl line 279
buildcol::main() called at ./buildcol.pl line 43
whereupon we observed that the directory:
/depot/package/greenstone_2.36/vendor/bin/SunOS
does not exist. We never received replies to our queries about this
problem, nor about how (or if) Greenstone cleans up the files it
places in its tmp directory.
After sending numerous queries to the mailing list, we decided to try to
install 2.36 using the binary packasge, but repeated download attempts revealed
that the file on the originating site was corrupted. Therefore we moved
to the latest (2.37) release.
Version 2.37:
We tried to install the binary distribution. The script "Install.sh" failed
again. There were several difficulties:
1) GNU make is used to compile all the package, but the documents still do
not make this clear (fixed in our case by prepending PATH with "depot/bin").
2) In the file src/Unix/configure, GDBM_LIBPATH and GDBM_INCLUDE are hard-coded
so as to look for software in /usr/local. We had to manually edit this
file to change these variables.
3) We use gcc 3.0 as our compiler. The environment variable "CC" did not get
passed down to the rtftohtml package build, and it kept trying to use the
(nonexistent) "cc" command, causing failure. We had to modify
src/package/Makefile, adding an extra variable "MDEFINES" to pass
"CC=gcc".
4) In directory src/mgpp and lib, incorrect header files were included because
constants were not defined during configuration. For example, in the file
lib/text_t.h, the correct header file set is that which is associated with
"GSDL_USE_STL_H" being defined, but it was not so defined:
[...]
#if defined(GSDL_USE_OBJECTSPACE)
# include <ospacestdvector>
[...]
#elif defined(GSDL_USE_STL_H)
# include <vector.h>
[...]
#else
# include <vector>
[...]
#endif
[...]
Conclusion
This is a heavily distilled summary of the large number of interactions we
have had over the past months with respect to Greenstone. We hope that
this note and the history behind it make it clear that we have more than
a casual interest in Greenstone, and have made a serious effort to build it,
expending many hours of our time.
Given our experience, though, we find it difficult to believe that anyone
has successfully built and operated this package under Solaris using gcc.
We have seen only one posting to the mail list from another Solaris site,
and that one was reporting problems with installation. We are still
very interested in installing and using the system, but are at wits end.
The package is complex, and we can't really hope to succeed with it without
closer colaboration with the development team, or at least with another
site that has successfully installed it in a similar environment. The
supplied documentation is insufficient to install the system, the
problems we have pointed out in the past have not been addressed, and
although we appreciate S.B.'s periodic replies to our email queries, the
final advice has consistently been to "move to the latest release,"
which has never advanced the effort. (We would be happy to supply copies
of all past email and mail list postings if it would be helpful).
We send this summary as one last effort to see if we can interact with you
in such as way as to get Greenstone working under Solaris/gcc, which we
think would be of great interest, as this platform is widely available in
the academic, research, and library R&D communities. In any event, we
thank you for your past help and wish the Greenstone project and its
follow-ons the greatest possible success...
Thanks and Best Regards,
Rick Rodgers (rodgers@nlm.nih.gov)
Ziying Sherwin (sherwin@nlm.nih.gov)
-------------------------------------------------------------------------------- | http://www.nzdl.org/gsdlmod?e=d-00000-00---off-0gsarch--00-0----0-10-0---0---0direct-10---4-----dfr--0-1l--11-en-50---20-about-Eduardo+del+Valle--00-0-21-00-0--4----0-0-11-10-0utfZz-8-00&a=d&cl=CL3.1.1&d=Pine-GSO-4-21-0201171036110-23050-100000-hume-nlm-nih-gov | CC-MAIN-2020-16 | en | refinedweb |
I have this program that works by asking the user to determine what size box they would like displayed. It asks 7 times.
What I need to do is modify it so it just prints the output without asking the user. (that's what happens when you don't read the assignment thoroughly!)
The output should say:
A box of 3 is shown below:
* * *
* * * (without the * in the middle)
* * *
It;s supposed to display boxes sized 5, 4, 3, 2, 1, then error messages for 0 and -1.
I got the code working but now I don't know how to change it to just display everything without asking the user to specify the size.
sorry if this is hard to read...everything moved when I pasted it.sorry if this is hard to read...everything moved when I pasted it.Code:#include <iostream> #include <cmath> using namespace std; int centspace(); int counter(); //Functions used ... void instructions (); //User instructions int box (); //------------------------------------------------------- int main () { instructions (); box (); box (); box (); box (); box (); box (); box (); cout << endl; return 0; } //-------------------------------------------------- int box () { int size; cout << "enter a number: "; cin >> size; if ((size <= 0) || (size >= 40)) { cout << "It is not possible to print a box of size " << size << endl; } else { for (int counter = 1; counter <= size; counter++) cout << "* "; cout << endl; if (size > 2) { for (int counter = 1; counter <= (size-2); counter++) { cout << "*"; for (int space = 1; space <= (size-2); space++) cout << " "; cout << " *" << endl; } } if (size > 1) { for (int counter = 1; counter <= size; counter++) { cout << "* "; } cout << endl; } } return 0; }
Any point in the right direction would help me tremendously. Thanks! | https://cboard.cprogramming.com/cplusplus-programming/45929-help-modifying-program.html?s=68899c12e2656b972f4c1362bfa4e647 | CC-MAIN-2020-16 | en | refinedweb |
System Center 2012 Configuration Manager: Determine Reboot Pending State Using WMI / PowerShell / Orchestrator Runbook, however nothing concrete. That is, until I happened upon a new WMI Class of the CCM_ClientSDK namespace named CCM_ClientUtilities. This little gem was waiting at idle capable of providing reboot pending information. In this post I will be quickly taking a look at the CCM_ClientUtilities class and discussing how to use this class for pending reboot detection in your PowerShell / Orchestrator automation solutions.
Before Getting Started:
Before jumping into the guts and glory of CCM_ClientUtilities, maybe I should lay out the situation in which we may need this pending reboot information. For sure, if a Configuration Manager delivered update requires a reboot, shouldn't we just let Configuration Manager handle the reboot, thus no detection needed? While in many circumstances I would agree with this notion, I have found that when automating process around the installation of updates, when a reboot is not required, there is not a good trigger mechanism indicating that it is ok to advance to the next automated task. Thus, if we pause after update installation, and then check for reboot applicability, we can then either reboot the system before carrying on with the remaining process, or skip a reboot (when one is not needed) and move onto the remaining process items.
CCM_ClientUtilities:
So check it out, below is a graphical representation of the namespace ROOT\ccm\ClientSDK, the class CCM_ClientUtilities, and the Class Method DetermineifRebootPending.
Click Image for better view:
We can invoke this method using any number of WMI interfaces; however for the sake of establishing building blocks towards orchestrator Runbook automation I will be using PowerShell.
PowerShell:
The following PowerShell script first returns the CCM_ClientUtilitiis class object and then executes the DetermineIfRebootPending method, placing the results into a variable $results. I’ve then displayed the results using Write-Host.
$reboot = [wmiclass]"\\<Computer Name>\root\ccm\ClientSDK:CCM_ClientUtilities"
$result = $reboot.DetermineIfRebootPending() | Select RebootPending
Write-Host $result
Pretty simple - lets place this into Orchestrator.
Orchestrator Runbook Use:
For sample sake I am going to provide here a very simple example of this script in use. We will enter a Computer Name into the Initialize Data activity, run the script basically as detailed above, and then perform some link logic against the returned (Published) data from the Run .NET Script activity. If the returned data indicates a reboot is necessary we will do so and then proceeded on with the Runbook (indicated by the Continue Workflow activity). If a reboot is not needed we will jump straight to the Continue Workflow activity.
Sample Runbook (Click image for better view):
Run .NET Script Activity PowerShell (Click image for better view):
Published Data from Run .NET Script Activity (Click Image for better view):
Link Logic when a reboot is required (Click image for better view):
Link Logic when no reboot is required (Click image for better view):
Finally for some practical context, here is an example of a real world Runbook in which this reboot detection logic is used. This Runbook is part of a series of Runbooks that applies software updates to a Windows Failover Cluster (more on this later this week).
Sample Runbook (Click image for better view):
Conclusion:
Simple, quick post here. Just wanted to demonstrate the use of this cool new System Center 2012 Configuration Manager WMI trick. In this post we have looked at some PowerShell that allows us to use the CCM_ClientUtilities class to determine is a Reboot is pending on a system due to Configuration Manager activities. We have then seen how to further use this PowerShell in our Orchestrator Runbook solutions. | https://docs.microsoft.com/en-us/archive/blogs/neilp/system-center-2012-configuration-manager-determine-reboot-pending-state-using-wmi-powershell-orchestrator-runbook | CC-MAIN-2020-16 | en | refinedweb |
There's no edit button for original question :(
I know I can use Exclude flag on parent page types, and even extract some interface in order to simplify the list of included / excluded content types... but I have too many page types, and I'd like to keep this DRY.
Is this possible? :)
I looked at the bug but I can't really figure out what you mean, why are you using IncludeOn=B?
If I change your code above using the ExcludeOn=ContainerPage it is correctly exluded from container page. If I use IncludeOn then it is included.
[ContentType(GUID = "99853a59-1496-4b20-9a15-d993ea63a7f8")] [AvailableContentTypes( Availability.Specific, Include = new[] { typeof(B), typeof(C) }, ExcludeOn = new[] { typeof(ContainerPage) })] public class B : PageData { }
Sorry, it might not be a bug :)
If I want to include page type B on page type A, and exclude it from all other page types, do I have to use
[ContentType(GUID = "...")] [AvailableContentTypes( Availability.Specific, ExcludeOn = new[] { typeof(...), typeof(...), typeof(...), typeof(...) ... })] public class B : PageData { }
or can I just use:
[ContentType(GUID = "...")] [AvailableContentTypes( Availability.Specific, IncludeOn = new[] { typeof(A))] public class B : PageData { }
I thought if I only specify IncludeOn = 'something', that's equivalent to IncludeOn = 'something' and ExludeOn = All
Basically I don't want to update ExcludeOn list each time I create a new page type.
If I exclude PageDate like this:
[ContentType(GUID = "...")] [AvailableContentTypes(Availability.None, ExcludeOn = new[] { typeof(PageData) }, IncludeOn = new[] { typeof(A) })] public class B : PageData { }
Then I cannot create page type B under page type A.
edit: Ok, now I see what I want to achieve is not possible. Thanks Per!
I ran into this when creating a module which had two page types: X and Y
Now I want X to be created only directly under the root page, and Y only created under X, like so:
[ContentType] [AvailableContentTypes(Include = new[] { typeof(Y) }, ExcludeOn = new[] { typeof(PageData) }, Availability = Availability.Specific)] public class X : PageData { } [ContentType] [AvailableContentTypes(IncludeOn = new[] { typeof(X), typeof(Y) }, Availability = Availability.Specific)] public class Y : PageData { }
This work alright, except ofcourse Y will be able to be created on all current and future content types that does not have the [AvailableContentTypes(Include = ...)] attribute. As the defult is to have all content types available.
I did some reflecting and came up with this, but I am worried it can have unforeseen consequences, though in the interest of getting an official option to exclude a content type everywhere except where I want, here goes:
/// <summary> /// Modifies the <see cref="ContentTypeModel.AvailableContentTypes"/> setting for all content types to exclude the <see cref="Y"/> type. /// </summary> public class SetContentTypeAvailability : ContentDataAttributeScanningAssigner { public override void AssignValues(ContentTypeModel contentTypeModel) { base.AssignValues(contentTypeModel); if (contentTypeModel.ModelType == typeof (X) || contentTypeModel.ModelType == typeof (Y)) { return; } if (contentTypeModel.AvailableContentTypes == null) { contentTypeModel.AvailableContentTypes = new AvailableContentTypesAttribute(); } if (contentTypeModel.AvailableContentTypes.Exclude.All(t => t != typeof (Y))) { contentTypeModel.AvailableContentTypes.Exclude = contentTypeModel.AvailableContentTypes.Exclude.Union(new[] {typeof (Y)}).ToArray(); } } } [InitializableModule] [ModuleDependency(typeof (InitializationModule), typeof (ServiceContainerInitialization))] public class Initializer : IConfigurableModule { public void Initialize(InitializationEngine context) { } public void UnInitialize(InitializationEngine context) { } public void ConfigureContainer(ServiceConfigurationContext context) { context.Container.Configure(ctx => ctx.For<IContentTypeModelAssigner>().Use<SetContentTypeAvailability>()); } }
A neater option would be something maybe like so, where the Explicit enum would mean "ignore everything else!":
[ContentType] [AvailableContentTypes(IncludeOn = new[] { typeof(Y), typeof(X) }, Availability = Availability.Explicit)] public class Y : PageData { }
I know it's a bit late, but I have just been fixing an issue with this attribute as reported in and though I would have a look into your suggestion Erik.
My suggestion in your case would be to take a step back and instead of trying too hard with the attributes and instead work directly with the IAvailableModelSettingsRepository and Register a setting there. This is the equivalent of changing the setting in the Admin UI, so the result is that new content types won't be included unless explicitly stated so. The downside of this is that you would have to take special care if you still want administrators to be able to manipulate these values.
Hah! This old thread.
I actually did take a satep back and scrapped the whole idea at first. But I now revisited the IAvailableModelSettingsRepository idea from Henriks comment.
The problem with this approach is that you can not register settings specifically for ex. the root page due to it having no ModelType, atleast not from what I can see. Though that will be a minor issue for my specific scenario as editors aren't usually allowed to create pages under the root page directly anyways.
This is what I ended up with in the end:
private void InitializeContentTypeRestrictions() { // load all content types registered var contentTypeRepository = ServiceLocator.Current.GetInstance<IContentTypeRepository>(); var allContentTypes = contentTypeRepository.List().Where(c => c.ModelType != null).Select(c => c.ModelType).ToList(); var availableModelSettingsRepository = ServiceLocator.Current.GetInstance<IAvailableModelSettingsRepository>(); // create and register setting for TranslationStringPage var setting = new AvailableContentTypesAttribute(Availability.Specific) { ExcludeOn = allContentTypes.Except(new[] {typeof(TranslationStringPage), typeof(TranslationRootPage)}).ToArray(), Include = new[] {typeof(TranslationStringPage)} }; availableModelSettingsRepository.RegisterSetting(typeof(TranslationStringPage), setting); // create and register setting for TranslationRootPage setting = new AvailableContentTypesAttribute(Availability.Specific) { ExcludeOn = allContentTypes.ToArray(), Include = new[] {typeof(TranslationStringPage)} }; availableModelSettingsRepository.RegisterSetting(typeof(TranslationRootPage), setting); }
Hi,
I have the following page types:
Container page can contain any page type.
A page can contain B and C page types
B can contain B and C page types
C cannot contain any page type.
Here's the code:
The problem is, editors are allowed to create B pages types under ContainerPage.
Is this a bug with IncludeOn attribute, or is this how it's supposed to work?
Do I have to specify all AvailableContentTypes on ContainerPage or can I solve this in another way?
EPiServer 8.2.0
Thanks! | https://world.episerver.com/forum/developer-forum/-Episerver-75-CMS/Thread-Container/2015/3/episerver-8-and-includeon/ | CC-MAIN-2020-16 | en | refinedweb |
Getting Started with Windows Forms Html Viewer (HTMLUI)
This section describes how to configure a
HTMLUIControl in a Windows Forms application and overview of its basic functionalities.
Assembly deployment
Refer control dependencies section to get the list of assemblies or NuGet package needs to be added as reference to use the control in any application.
Get more details regarding how to install the nuget packages in windows form application in the How to install nuget packages link.
Creating simple application with HTMLUIControl
You can create Windows Forms application with HTMLUIControl as follows:
- Creating the project
- Adding control via designer
- Adding control manually using code
Creating the project
Create a new Windows Forms project in Visual Studio to display the HTMLUIControl.
Adding control via designer
The HTMLUIControl can be added to the application by dragging it from the toolbox and dropping it in the designer view. The following required assembly references will be added automatically:
- Syncfusion.HTMLUI.Base.dll
- Syncfusion.HTMLUI.Windows.dll
- Syncfusion.Scripting.Base.dll
- Syncfusion.Shared.Base
Configure Title
Title text can be set using
Title property. The visibility of the title can be customized using
ShowTitle property.
Adding control manually using code
To add the control manually in C#, follow the steps:
Step 1 : Add the following required assembly references to the project:
- Syncfusion.HTMLUI.Base.dll
- Syncfusion.HTMLUI.Windows.dll
- Syncfusion.Scripting.Base.dll
- Syncfusion.Shared.Base
Step 2 - Include the namespaces Syncfusion.Windows.Forms.HTMLUI.
using Syncfusion.Windows.Forms.HTMLUI;
Imports Syncfusion.Windows.Forms.HTMLUI
Step 3 : Create the HTMLUIControl instance and add it to the form.
HTMLUIControl htmluiControl1 = new HTMLUIControl(); this.htmluiControl1.Dock = System.Windows.Forms.DockStyle.Fill; this.htmluiControl1.Text = "htmluiControl1"; this.Controls.Add(this.htmluiControl1);
Dim htmluiControl1 As New HTMLUIControl() Me.htmluiControl1.Dock = System.Windows.Forms.DockStyle.Fill Me.htmluiControl1.Text = "htmluiControl1" Me.Controls.Add(Me.htmluiControl1)
Configure Title
Title text can be set using Title property. The visibility of the title can be customized using ShowTitle property.
this.htmluiControl1.ShowTitle = true; this.htmluiControl1.Title = "StartUp Document";
Me.htmluiControl1.ShowTitle = True Me.htmluiControl1.>IMAGE_4<<
File can be added to HTMLUIControl using LoadHTML method where the file path given as parameter.
this.htmluiControl1.LoadHTML(Path.GetDirectoryName(Application.ExecutablePath) + @"\..\..\FileName.htm");
Me.htmluiControl1.LoadHTML(Path.GetDirectoryName(Application.ExecutablePath) + @"\..\..\FileName.htm")
| https://help.syncfusion.com/windowsforms/html-viewer/getting-started | CC-MAIN-2020-16 | en | refinedweb |
span8
span4
span8
span4
I'm removing duplicates from a File Geodatabase (FGDB) feature class using Sorter then DuplicateFilter, then writing the Unique features back to the same FGDB feature class.
I want to write the Duplicate features to a CSV log file with the date/time of the translation in the file name so I have a CSV listing the duplicates removed each time the Workspace is run.
First I tried including @Timestamp(^Y^m^d^H^M^S) in the CSV File Name in the Writer but I ended up with multiple CSVs, presumably because features were being written as they were received by the Writer. I haven't noticed this problem with FGDBs but is that because of difference in the way the FGDB Writer works?
I then tried a Creator followed by a TimeStamper running parallel to the actual data processing with the output from the TimeStamper going to the CSV Writer along with the data. But what I get is two CSV files - one with no date/time stamp in the name and all the data, and another with the date/time stamp in the name but no data.
Would "Append to file" in the CSV Writer properties solve this? If so, how can I ensure that the CSV file with the date/time stamp in the name is created first?
Yes, there is more than one way. Another approach: once create the destination CSV file with temporary name (e.g. "temp.csv") by a FeatureWriter, and then rename it to the timestamp. i.e. move the file finally.
You could also use a variable setter and a variable retriever
I agree with @egomm, but I would highly recommend setting "Suppliers first" to yes to avoid it blocking the data flow and consuming huge amounts of memory if you have a lot of data.
Another way to do it is to use a private Python scripted parameter to give you the timestamp at the start of the translation, e.g.
from datetime import datetime return datetime.now().strftime('%Y%m%d%H%M%S')
Useful if you need the same timestamp many places in your workspace and you want to avoid all those FeatureMergers.
You need to merge the timestamper with the rest of your data, e.g. with a FeatureMerger
I am having the same issue and tried this solution but the I'm stumped on the join. I can choose _timestamp in supplier but what do I choose to join on Requester? If I choose a unique identifier nothing outputs from the Merged.
Answers Answers and Comments
8 People are following this question.
Write to csv with timestamp 1 Answer | https://knowledge.safe.com/questions/35824/write-csv-file-where-filename-includes-datetimesta.html | CC-MAIN-2020-16 | en | refinedweb |
Overview
When you deal with external binary data in Python, there are a couple of ways to get that data into a data structure. You can use the
ctypes module to define the data structure or you can use the
struct python module.
You will see both methods used when you explore tool repositories on the web. This article shows you how to use each one to read an IPv4 header off the network. It’s up to you to decide which method you prefer; either way will work fine.
ctypesis a foreign function library for Python. It deals with C-based languages to provide C-compatible data types, and enables you to call functions in shared libraries.
structconverts between Python values and C structs that are represented as Python bytes objects.
So
ctypes handles binary data types in addition to a lot of other functionality, while handling binary data is the main purpose of the
struct module.
Let’s see how these two libraries are used when we need to decode an IPv4 header off the network.
First, here’s the structure of the IPv4 header. This is from the IETF RFC 791:
Initial Data from the Network
We need some data to work with, so let’s get a single packet from the network. This little snippet show do fine. I ran this on Linux.
import socket import sys def sniff(host): sniffer = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP) sniffer.bind((host, 0)) sniffer.setsockopt(socket.IPPROTO_IP, socket.IP_HDRINCL, 1) # read and return a single packet return sniffer.recvfrom(65535) if __name__ == '__main__': if len(sys.argv) == 2: host = sys.argv[1] else: host = '192.168.1.69' buff = sniff(host)
We just grab a single raw packet from the network and put it into a variable,
buff. So now that we have binary data, let’s look at how to use it.
ctypes module
The following code snippet defines a new class,
IP that can read a packet and parse the header into its separate fields.
from ctypes import * import socket import struct class IP(Structure): _fields_ = [ ("ihl", c_ubyte, 4), ("version", c_ubyte, 4), ("tos", c_ubyte, 8), ("len", c_ushort, 16), ("id", c_ushort, 16), ("offset", c_ushort, 16), ("ttl", c_ubyte, 8), ("protocol_num", c_ubyte, 8), ("sum", c_ushort, 16), ("src", c_uint32, 32), ("dst", c_uint32, 32) ] def __new__(cls, socket_buffer=None): return cls.from_buffer_copy(socket_buffer) def __init__(self, socket_buffer=None): # human readable IP addresses self.src_address = socket.inet_ntoa(struct.pack("<L",self.src)) self.dst_address = socket.inet_ntoa(struct.pack("<L",self.dst))
You can see that the
_fields_ structure defines each part of the header, giving the width in bits as the last argument. Being able to specify the bit width is handy. Our
IP class inherits from the
ctypes
Structure class, which specifies that we must have a defined
_fields_ structure before any instance is created.
Class Instantiation
The wrinkle with
ctypes
Structure abstract base class is the
__new__ method. See the documentation for full details: ctypes module.
The
__new__ method takes the class reference as the first argument. It creates and returns an instance of the class, which passes to the
__init__ method.
We create the instance normally, but underneath, Python invokes the class method
__new__, which fills out the
_fields_ data structure immediately before instantiation (when the
__init__ method is called). As long as you’ve defined the structure beforehand, just pass the
__new__ method the external (network packet) data, and the fields magically appear as attributes on your instance.
struct module
The
struct module provides format characters that you used to specify the structure of the binary data. The first character (in our case,
<) specifies the “endianness” of the data. See the documentation for full details: struct module.
import ipaddress import struct class IP: def __init__(self, buff=None): header = struct.unpack('<BBHHHBBH4s4s', buff) self.ver = header[0] >> 4 self.ihl = header[0] & 0xF self.tos = header[1] self.len = header[2] self.id = header[3] self.offset = header[4] self.ttl = header[5] self.protocol_num = header[6] self.sum = header[7] self.src = header[8] self.dst = header[9] # human readable IP addresses self.src_address = ipaddress.ip_address(self.src) self.dst_address = ipaddress.ip_address(self.dst) # map protocol constants to their names self.protocol_map = {1: "ICMP", 6: "TCP", 17: "UDP"}
Here are the individual parts of the header.
- B 1 byte (
ver,
hdrlen)
- B 1 byte
tos
- H 2 bytes
total len
- H 2 bytes
identification
- H 2 bytes
flags + frag offset
- B 1 byte
ttl
- B 1 byte
protocol
- H 2 bytes
checksum
- 4s 4 bytes
src ip
- 4s 4 bytes
dst ip
Everything is pretty straightforward, but with
ctypes, we could specify the bit-width of the individual pieces. With
struct, there’s no format character for a
nybble (4 bits), so we have to do some manipulation to get the
ver and
hdrlen from the first part of the header.
Binary Manipulations
The wrinkle with
struct in this example is that we need to do some manipulation of
header[0], which contains a single byte but we need to create two variables from that byte, each containing a
nybble.
High
nybble
We have one byte and for the
ver variable, we want the high-order
nybble. The typical way you get the high
nybble of a byte is to right-shift.
We right shift the byte by 4 places, which is like prepending 4 zeros at the front so the last 4 bytes fall off, leaving us with the first
nybble:
0 1 0 1 0 1 1 0 >> 4 ----------------------------- 0 0 0 0 0 1 0 1
Low
nybble
We have one byte and for the
hdrlen variable, we want the low-order
nybble. The typical way you get the low
nybble of a byte is to
AND it with
F (00001111):
0 1 0 1 0 1 1 0 &F 0 0 0 0 1 1 1 1 ----------------------------- 0 0 0 0 0 1 1 0
Let’s look an example in the Python REPL:
>>> m = 66 >>> m 66 >>> bin(m) '0b1000010' # or 0100 0010 >>> bin(m>>4) '0b100' # or 0100 >>> bin(m&0xF) '0b10' # or 0010
Now, more specifically to our IPv4 case, the first byte in the header is always
0x45 = 69 decimal = 01000101 binary.
See what that looks like when we right-shift it by 4 and then
AND it with
F:
>>> '{0:08b}'.format(0x45) '01000101' >>> '{0:04b}'.format(0x45>>4) '0100' >>> '{0:04b}'.format(0x45&0xF) '0101'
You don’t have to know binary manipulation backward and forward for decoding an IP header, but there are some patterns like these (shift and
AND) you will see over and over again as you code and as you explore other hackers’ code.
That seems like a lot of work doesn’t it? In the case where we have to do some bit shifting, it does take effort. But for many cases (e.g. ICMP), everything works on an 8-byte boundary and so is very simple to set up. Here is an “Echo Reply” ICMP message; you can see that each parameter of the ICMP header can be defined in a
struct with one of the existing format letters (BBHHH) (RFC777):
Echo or Echo Reply Message 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Type | Code | Checksum | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Identifier | Sequence Number | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ | Data ... +-+-+-+-+-
A quick way to parse that would simply be:
class ICMP: def __init__(self, buff): header = struct.unpack('<BBHHH', buff) self.type = header[0] self.code = header[1] self.sum = header[2] self.id = header[3] self.seq = header[4]
Conclusion
You can use either the
ctypes module or the
struct module to read and parse binary data. Here is an example of instantiating the class no matter which method you use. You instantiate the
IP class with your packet data in the variable
buff:
mypacket = IP(buff) print(f'{mypacket.src_address} -> {mypacket.dst_address}')
With
ctypes, make sure you define your
_fields_ structure and hand the data to it in the
_new_ method. When you instantiate the class, you’ll have the access to the data attributes automatically.
With
struct, you define how to read the data with a format string. For data attributes that don’t lie on the 8-byte boundary, you may need to do some binary manipulation.
In short, use whichever method fits your brain. But always be aware that you may see code from others that use a different method. Hopefully, now you’ll see it and understand it. | https://pythondigest.ru/view/48599/ | CC-MAIN-2020-16 | en | refinedweb |
In probability theory, the inverse Gaussian distribution (also known as the Wald distribution) is a two-parameter family of continuous probability distributions with support on (0,∞).
Its probability density function is given by
for x > 0, where is the mean and is the shape parameter.[1]
As λ tends to infinity, the inverse Gaussian distribution becomes more like a normal (Gaussian) distribution. The inverse Gaussian distribution has several properties analogous to a Gaussian distribution..
Its cumulant generating function (logarithm of the characteristic function) is the inverse of the cumulant generating function of a Gaussian random variable.
To indicate that a random variable X is inverse Gaussian-distributed with mean μ and shape parameter λ we write .
Properties
Single parameter form
The probability density function (pdf) of inverse Gaussian distribution has a single parameter form given by
In this form, the mean and variance of the distribution are equal,
Also, the cumulative distribution function (cdf) of the single parameter inverse Gaussian distribution is related to the standard normal distribution by
where and where the is the cdf of standard normal distribution. The variables and are related to each other by the identity
In the single parameter form, the MGF simplifies to
An inverse Gaussian distribution in double parameter form can be transformed into a single parameter form by appropriate scaling where
The standard form of inverse Gaussian distribution is
Summation
If Xi has an distribution for i = 1, 2, ..., n and all Xi are independent, then
Note that
is constant for all i. This is a necessary condition for the summation. Otherwise S would not be Inverse Gaussian distributed.
Scaling
For any t > 0 it holds that
Exponential family
The inverse Gaussian distribution is a two-parameter exponential family with natural parameters −λ/(2μ2) and −λ/2, and natural statistics X and 1/X.
Relationship with Brownian motion
The stochastic process Xt given by
(where Wt is a standard Brownian motion with drift ).
Then the first passage time for a fixed level by Xt is distributed according to an inverse-Gaussian:
(cf. Schrödinger[2] equation 19, Smoluchowski[3], equation 8, and Folks[4], equation 1).
When drift is zero
A common special case of the above arises when the Brownian motion has no drift. In that case, parameter μ tends to infinity, and the first passage time for fixed level α has probability density function
(see also Bachelier[5]:74[6]:39). This is a Lévy distribution with parameters and .
Maximum likelihood
The model where
with all wi known, (μ, λ) unknown and all Xi independent has the following likelihood function
Solving the likelihood equation yields the following maximum likelihood estimates
and are independent and
Generating random variates from an inverse-Gaussian distribution
The following algorithm may be used.[7]
Generate a random variate from a normal distribution with a mean of 0 and 1 standard deviation
Square the value
and use this relation
Generate another random variate, this time sampled from a uniform distribution between 0 and 1
If then return else return
public double inverseGaussian(double mu, double lambda) { Random rand = new Random(); double v = rand.nextGaussian(); // Sample from a normal distribution with a mean of 0 and 1 standard deviation double y = v * v; double x = mu + (mu * mu * y) / (2 * lambda) - (mu / (2 * lambda)) * Math.sqrt(4 * mu * lambda * y + mu * mu * y * y); double test = rand.nextDouble(); // Sample from a uniform distribution between 0 and 1 if (test <= (mu) / (mu + x)) return x; else return (mu * mu) / x; }
And to plot Wald distribution in Python using matplotlib and NumPy:
import matplotlib.pyplot as plt import numpy as np h = plt.hist(np.random.wald(3, 2, 100000), bins=200, normed=True) plt.show()
Related distributions
The convolution of an inverse Gaussian distribution (a Wald distribution) and an exponential (an ex-Wald distribution) is used as a model for response times in psychology,[8] with visual search as one example.[9]
History
This distribution appears to have been first derived in 1900 by Louis Bachelier[5][6] as the time a stock reaches a certain price for the first time. In 1915 it was used independently by Erwin Schrödinger[2] and Marian v. Smoluchowski[3] as the time to first passage of a Brownian motion. In the field of reproduction modeling it is known as the Hadwiger function, after Hugo Hadwiger who described it in 1940.[10] Abraham Wald re-derived this distribution in 1944[11] as the limiting form of a sample in a sequential probability ratio test. The name inverse Gaussian was proposed by Maurice Tweedie in 1945.[12] Tweedie investigated this distribution in 1956[13] and 1957[14] [15] and established some of its statistical properties. The distribution was extensively reviewed by Folks and Chhikara in 1978.[4]
Numeric computation and software
Despite the simple formula for the probability density function, numerical probability calculations for the inverse Gaussian distribution nevertheless require special care to achieve full machine accuracy in floating point arithmetic for all parameter values.[16] Functions for the inverse Gaussian distribution are provided for the R programming language by several packages including rmutil,[17][18] SuppDists,[19] STAR,[20] invGauss,[21] LaplacesDemon,[22] and statmod.[23]
See also
- Generalized inverse Gaussian distribution
- Tweedie distributions—The inverse Gaussian distribution is a member of the family of Tweedie exponential dispersion models
- Stopping time
References
- ^ a b Chhikara, Raj S.; Folks, J. Leroy (1989), The Inverse Gaussian Distribution: Theory, Methodology and Applications, New York, NY, USA: Marcel Dekker, Inc, ISBN 0-8247-7997-5
- ^ a b Schrödinger, Erwin (1915), "Zur Theorie der Fall- und Steigversuche an Teilchen mit Brownscher Bewegung" [On the Theory of Fall- and Rise Experiments on Particles with Brownian Motion], Physikalische Zeitschrift (in German), 16 (16): 289–295
- ^ a b Smoluchowski, Marian (1915), "Notiz über die Berechnung der Brownschen Molekularbewegung bei der Ehrenhaft-Millikanschen Versuchsanordnung" [Note on the Calculation of Brownian Molecular Motion in the Ehrenhaft-Millikan Experimental Set-up], Physikalische Zeitschrift (in German), 16 (17/18): 318–321
- ^ a b Folks, J. Leroy; Chhikara, Raj S. (1978), "The Inverse Gaussian Distribution and Its Statistical Application—A Review", Journal of the Royal Statistical Society, Series B (Methodological), 40 (3): 263–275, doi:10.1111/j.2517-6161.1978.tb01039.x, JSTOR 2984691
- ^ a b Bachelier, Louis (1900), "Théorie de la spéculation" [The Theory of Speculation] (PDF), Ann. Sci. Éc. Norm. Supér. (in French), Serie 3;17: 21–89
- ^ a b Bachelier, Louis (1900), "The Theory of Speculation", Ann. Sci. Éc. Norm. Supér., Serie 3;17: 21–89 (Engl. translation by David R. May, 2011)
- ^ Michael, John R.; Schucany, William R.; Haas, Roy W. (1976), "Generating Random Variates Using Transformations with Multiple Roots", The American Statistician, 30 (2): 88–90, doi:10.1080/00031305.1976.10479147, JSTOR 2683801
- ^ Schwarz, Wolfgang (2001), "The ex-Wald distribution as a descriptive model of response times", Behavior Research Methods, Instruments, and Computers, 33 (4): 457–469, doi:10.3758/bf03195403, PMID 11816448
- ^ Palmer, E. M.; Horowitz, T. S.; Torralba, A.; Wolfe, J. M. (2011). "What are the shapes of response time distributions in visual search?". Journal of Experimental Psychology: Human Perception and Performance. 37 (1): 58–71. doi:10.1037/a0020747. PMC 3062635. PMID 21090905.
- ^ Hadwiger, H. (1940). "Eine analytische Reproduktionsfunktion für biologische Gesamtheiten". Skandinavisk Aktuarietidskrijt. 7 (3–4): 101–113. doi:10.1080/03461238.1940.10404802.
- ^ Wald, Abraham (1944), "On Cumulative Sums of Random Variables", Annals of Mathematical Statistics, 15 (3): 283–296, doi:10.1214/aoms/1177731235, JSTOR 2236250
- ^ Tweedie, M. C. K. (1945). "Inverse Statistical Variates". Nature. 155 (3937): 453. doi:10.1038/155453a0.
- ^ Tweedie, M. C. K. (1956). "Some Statistical Properties of Inverse Gaussian Distributions". Virginia Journal of Science (New Series). 7 (3): 160–165.
- ^ Tweedie, M. C. K. (1957). "Statistical Properties of Inverse Gaussian Distributions I". Annals of Mathematical Statistics. 28 (2): 362–377. JSTOR 2237158.
- ^ Tweedie, M. C. K. (1957). "Statistical Properties of Inverse Gaussian Distributions II". Annals of Mathematical Statistics. 28 (3): 696–705. JSTOR 2237229.
- ^ Giner, Göknur; Smyth, Gordon (August 2016). "statmod: Probability Calculations for the Inverse Gaussian Distribution". The R Journal. 8 (1): 339–351. doi:10.32614/RJ-2016-024.
- ^ Lindsey, James (2013-09-09). "rmutil: Utilities for Nonlinear Regression and Repeated Measurements Models".
- ^ Swihart, Bruce; Lindsey, James (2019-03-04). "rmutil: Utilities for Nonlinear Regression and Repeated Measurements Models".
- ^ Wheeler, Robert (2016-09-23). "SuppDists: Supplementary Distributions".
- ^ Pouzat, Christophe (2015-02-19). "STAR: Spike Train Analysis with R".
- ^ Gjessing, Hakon K. (2014-03-29). "Threshold regression that fits the (randomized drift) inverse Gaussian distribution to survival data".
- ^ Hall, Byron; Hall, Martina; Statisticat, LLC; Brown, Eric; Hermanson, Richard; Charpentier, Emmanuel; Heck, Daniel; Laurent, Stephane; Gronau, Quentin F.; Singmann, Henrik (2014-03-29). "LaplacesDemon: Complete Environment for Bayesian Inference".
- ^ Giner, Göknur; Smyth, Gordon (2017-06-18). "statmod: Statistical Modeling".
Further reading
- Høyland, Arnljot; Rausand, Marvin (1994). System Reliability Theory. New York: Wiley. ISBN 978-0-471-59397-3.
- Seshadri, V. (1993). The Inverse Gaussian Distribution. Oxford University Press. ISBN 978-0-19-852243-0.
External links
- Inverse Gaussian Distribution in Wolfram website.
| https://wiki2.org/en/Inverse_Gaussian_distribution | CC-MAIN-2020-16 | en | refinedweb |
Content Count466
Joined
Last visited
Days Won1
Reputation Activity
- espace got a reaction from Hamza Wasim in Box Tower - An Addictive Game
Cool concept but the graphics could be better, more attractive...
The speed could be also faster.
Have a good day.
- espace reacted to b10b in how to make a perpetual movement with random
@espace use onLoop instead of onComplete
You won't need to reset the x, because the tween state will do that.
Math.random(0,800) is incorrect, use Math.random() * 800 instead.
Use a regular function (not an arrow) for the signal listener, and use sprite as the context (instead of this). Sorry, couldn't figure out why this scenario didn't fit as expected with arrows?
function create() { var sprite=[] var tw=[]; for (var i = 0; i < 5; i++) { sprite[i]=game.add.sprite(400+i*10, i*100, 'phaser'); } var move=()=>{ for (var i = 0; i < 5; i++) { tw[i]=game.add.tween(sprite[i]).to({x:-400},1000,Phaser.Easing.Linear.none,true,i*200,-1); tw[i].onLoop.add(function(){ this.y = Math.random() * 800; },sprite[i]) } } move(); }
- espace reacted to Milton in cordova android ubuntu fail to install need really help
defaultBuildToolsVersion="27.0.1" //String
Installed packages:=====================] 100% Computing updates...
Path | Version | Description | Location ------- | ------- | ------- | -------
build-tools;19.1.0 | 19.1.0 | Android SDK Build-Tools 19.1 | build-tools/19.1.0/
build-tools;20.0.0 | 20.0.0 | Android SDK Build-Tools 20 | build-tools/20.0.0/
build-tools;26.0.2 | 26.0.2 | Android SDK Build-Tools 26.0.2 | build-tools/26.0.2/
build-tools;28.0.3 | 28.0.3 | Android SDK Build-Tools 28.0.3 | build-tools/28.0.3/
See the problem ?
- espace reacted to Yehuda Katz in is group better ?
Espace, you cannot see the difference if you use a group from CPU point of view. However the difference exist simply because lets say your both sprites are in game.world, if you use wrapper group it means they are inside game.world.warp_group so it means when Phaser will need to calculate where to draw sprites, it should first get local bounds of group and apply them sprite's x/y. Those calculation are so simple that I would prefer use group which makes code simpler and easier to understand. Therefor there is a less chance to make more important, logical mistake in future when games gets bigger 😃
- espace reacted to mattstyles in listener in javascript
You're doing this very naively, so you'd need the flag.
The alternative is to implement an actual event pub/sub system (ala addEventListener, or, any pub/sub implementation out there) on your object i.e.
import EventEmitter from 'eventemitter3' export const onChange = 'sprite:onChange' export class Sprite extends EventEmitter { scale = { x: 1, y: 1 } setScale (x, y) { this.scale = { x, y } this.emit(onChange, { type: 'scale', payload: this.scale }) } } import { Sprite, onChange } from 'sprite' function doSomething () { console.log('doing something arguments:', ...arguments) } const sprite = new Sprite() sprite.once(onChange, doSomething) Note I'm using ES6 syntax (inc. module syntax) and I haven't tested any of this (I rarely use classes so might have mixed up context or some other stupid error) but hopefully it's easy enough to follow.
eventEmitter3 is a really simple pub/sub module and is a really standard implementation. `on` and `off` functions are analogous to DOM `addEventListener` and `removeEventListener`, the `once` method used here is just sugar around removing the handler/callback after the first time it is triggered.
This actually functionally gets you to the same place you're already at, albeit without some implicit state hanging around. Depending on your exact use-case this might be more or less useful to you.
What this is doing:
* Attach a function to list of functions, key it against a string identifier (this is the event)
* On another function call (emit) check the list of functions for the string which was emitted
* Fire the function when the string is matched
* Remove the function from the list of functions to check i.e. the next time that string is used it won't match that function again
What you are doing:
* Inspecting an object for changes by polling
* Reacting to any detected changes by invoking a function
* Deciding a course of action from within the function
In different scenarios both of these approaches have merit so it's up to you which you prefer. Probably the 'most popular' way of solving this sort of problem is with pub/sub.
- espace reacted to b10b in prompt how to loop to be sure to have the correct value ?
window.prompt() is convenient but may create some weird traps. My tip would be to separate the UI from the logic. Take a few steps back and define what the data validation function looks like, implement that first. Then call the prompt (whether window.prompt or an alternative UI) "while" the validation function returns a falsy. Then it'll neatly repeat until a valid name is supplied. No recursion should be needed - ideally each function should do a single thing, and return as soon as that thing is done.
- espace reacted to gafami in best way to convert ES6 syntax to ES5 ?
I believe that you should convert whole your ES6 scripts to ES5 right now. It's not just for export to cocoon, to make sure your game works well on Safari iOS (8, 9), iOS HomeScreen Web App you need to build based on ES5 syntax (almost browser supported). No worries if you have spent hard-work for ES6. Just go to this page:
Tick ES2015 and paste your ES6 script into the left side box, they will help you convert it to ES5. My game also facing the problem like you. Hope it can help you.
I attach an example file from my game you can try it again on babeljs with these options: linewrap + es2015 + stage-2
Don't rewrite the script because it's wasting time. Your logic code and animation way you choose will decide performance on mobile games not ES6 or ES5.
example.js
- espace reacted to onlycape in Is it possible to use tween with an object parameter to be more readable ?
Hi @espace,
I think that directly is not possible. But you can make your own wrapper function for "game.add.tween". Here a simple wrapper ( ) :
// Function to wrap game.add.tween function tweenWrapper(obj){ game.add.tween(obj.target).to(obj.properties,obj.duration,obj.ease,obj.autoStart,obj.delay,obj.repeat,obj.yoyo); } function create() { var sprite = game.add.sprite(0, 0, 'phaser'); var config={ target: sprite, properties : { y:150, x:100, }, duration : 1000, ease : Phaser.Linear, autoStart : true, delay:10, repeat : 0, yoyo:false }; tweenWrapper(config); } Regards.
- espace reacted to khleug35 in how to do this effect ?
Hello
I just add this scale code, but I not sure that is it the best way to do it.........sorry
arrow.scale.setTo(-1);
Example:
- espace got a reaction from khleug35 in how to do this effect ?
waouw thanks
i don't really understand for time how you do but i will read the docs.
- espace reacted to khleug35 in how to do this effect ?
like this???
function create() { game.stage.backgroundColor = '#000'; arrow = game.add.image(90, 36, 'arrow'); widtharrow = new Phaser.Rectangle(0, 0,arrow.width, arrow.height); arrow.cropEnabled = true; arrow.crop(widtharrow); arrow.fixedToCamera = true; widtharrow.width = 0; game.time.events.loop(800, function() { game.add.tween(widtharrow).to( { width: (widtharrow.width + 60) }, 200, Phaser.Easing.Linear.None, true); }, this); } function update() { arrow.updateCrop(); //loop animations if(widtharrow.width >= 300){ game.add.tween(widtharrow).to( { width: (widtharrow.width - arrow.width) }, 200, Phaser.Easing.Linear.None, true); } } I use crop method to do it .
- espace reacted to losthope in how to increment this function ?
a loop in a loop maybe? i have no idea what your code tries to achieve, nor am i a professional programmer. here is pseudo code that roughly shows how i would try to automate it
for(let j = 0; j < o.opponent_actions.length; ++j) { let summed_actions = 0; for(let i = 0; i < j; ++i) summed_actions += o.opponent_actions[i]; wait(() => { o.paper[0].body.moves = false; }, summed_actions); wait(() => { o.paper[0].body.moves = true; }, 100 + j*100 + summed_actions); } or maybe for(let j = 0; j < o.opponent_actions.length*2; ++j) { let summed_actions = 0; for(let i = 0; i < j/2; ++i) summed_actions += o.opponent_actions[i]; if(j % 2 == 1) summed_actions += 50 + j*50; wait(() => { o.paper[0].body.moves = j % 2 == 1; }, summed_actions); }
- espace reacted to onlycape in best way to measure time with collision
Hi @espace,
When the object collide "in the air" : var timeStamp1 = performance.now()
When the object collides with ground : var timeStamp2 = performance.now()
The time the object take to fall to the ground in miliseconds is ----> parseInt ( timeStamp2 - timeStamp1 )
Regards.
- espace reacted to losthope in why this snippet don't work ?problem with parameter
flag is passed by value, flag = true is useless
pass obj and check/assign to obj.flag
- espace got a reaction from TheBoneJarmer in Tap Tap Plane
It's just an flappy bird copy !
- espace got a reaction from bluecake in Effect of a torch on fire ?
you have this example but it's phaser 3...:\game%20objects\lights\graveyard.js
and also this for the halo effect :\game%20objects\lights\spotlight.js
- espace reacted to onlycape in how to call a function as argument ?
Hi @espace,
There is a scope problem. You don't need to put the argument inside an anonymous function. The argument is already a function.
This code works:
var wait = function(callback,duration){ setTimeout(callback, duration); }; var f = function(){ console.log('executed'); }; wait(f,500); Regards.
- espace reacted to mattstyles in how to call a function as argument ?
Nah, its not even a scope problem, whatever `myFunction` is in the original post, it wasn't a function.
However, I do agree that you don't need to wrap the callback function again, but, you don't even need the `wait` function as it has just become a marginally slower alias. I'm assuming this is all simplified for the purposes of illustration though and that you're actually planning to do more with your wait function (like make it a little easier to cancel it, or extend it, or whatever).
Functions in JS can be passed around as first-class citizens, which is an extremely useful language feature.
function add (a, b) { return a + b } You can think of the `add` variable as a pointer to a block of code (rather than a pointer to a memory allocation), using newer syntax makes this even more obvious:
const add = (a, b) => a + b Where you actually assign the anonymous (lambda) function to the constant variable (ha ha, that proper sounds odd!) `add`.
Note: there are subtle differences between the two code snippets above, but you're extremely unlikely to need to worry about it, particularly for pure functions like the ones above.
Either way you can pass the `add` function around, including using it as a parameter for other functions.
Consider the following:
const add = (a, b) => a + b const subtract = (a, b) => a - b const d2 = () => Math.random() < 0.5 const tweak2 = (a, b) => d2() ? add(a, b) : subtract(a, b) Calling the `tweak2` function has a 50/50 chance of either adding or subtracting those numbers, but, imagine we wanted to add other operations:
const add = (a, b) => a + b const subtract = (a, b) => a - b const multiply = (a, b) => a * b const divide = (a, b) => a / b const d2 = () => Math.random() < 0.5 const tweak2addsubtract = (a, b) => d2() ? add(a, b) : subtract(a, b) const tweak2addmultiply = (a, b) => d2() ? add(a, b) : multiply(a, b) This is a pretty terse example but you can imagine how this is going to grow. Now imagine you end up changing some of the implementation, would be pretty error-prone. Imagine trying to test all combos, again, a little annoying.
However, you could refactor this slightly and use another helper function:
const add = (a, b) => a + b const subtract = (a, b) => a - b const multiply = (a, b) => a * b const divide = (a, b) => a / b const d2 = () => Math.random() < 0.5 const tweak2 = (fnA, fnB) => (a, b) => d2() ? fnA(a, b) : fnB(a, b) const addSubtract = tweak2(add, subtract) console.log(addSubtract(10, 2)) // 8 or 12 Now the tweak function can be 'pre-loaded' with functions it uses to operate, which we've passed through.
If you're not comfortable with arrow functions then its a little confusing to follow as it uses closure and explicit return to work, but, have a look as this pattern can be generalised to be useful in a wide variety of situations.
When you get really good you can do stuff like the following:
const isNumber = _ => typeof _ === 'number' const add = (a, b) => a + b const total = compose( reduce(add, 0), filter(isNumber) ) total([1, 2, 'a', 3]) // 6 All functions used here a low-level pieces and trivial to test and re-compose.
Unless you're comfortable with passing functions around it's probably a little tricky to see exactly what is going on (there are lots of good places to read up on it though) but you should be able to see that the `total` function can take an array, filter out only the numbers, and then add them all up.
- espace got a reaction from mattstyles in access a link works on web and not on mobile
solved :
//install plugin :inappbrowser in cocoon.io // this.testlink=function(){ var link="" game.time.events.add(4000,function(){cordova.InAppBrowser.open(link, '_blank', 'location=yes')}) }; is_mobile && this.testlink();
- espace reacted to mkardas91 in while on array unexpected behavior
var e=[0,2,1,-1] console.log(e.indexOf(-1)); // 3
- espace reacted to samme in stop music when leave application on phone with the home button?
Probably you would use Phaser.Game#onBlur or Phaser.Game#onPause, if the browser isn't handling this already.
- espace got a reaction from speedo in is it someone who success to implement tern-phaser with vim ?
hi, speedo sorry for this time but i try different solutions.
your soluce don't works all the time. In fact put tern globally plus vim_for_tern don't work together.
by reading your comments (i would know how do you do for learn all you have posted ? :)) my better solutions is this bash script :
#!/bin/bash # echo "COPY PASTE THESE 2 INSTRUCTIONS ON ANOTHER TERMINAL" echo "curl -fLo ~/.local/share/nvim/site/autoload/plug.vim --create-dirs \ " echo "" read -p "push on a key to go on... " -n 1 -s cd mkdir ~/.config/nvim curl -sL | sudo -E bash - sudo apt-get install -y nodejs sudo add-apt-repository -y ppa:neovim-ppa/stable && sudo apt-get update && sudo apt-get install neovim sudo apt-get install -y python-dev python-pip python3-dev python3-pip sudo -H pip3 install --upgrade neovim sudo -H pip2 install --upgrade neovim sudo npm install -g neovim cp ~/dotfiles/.tern-project ~/ cp ~/dotfiles/nvim/init.vim ~/.config/nvim/ mkdir ~/.config/nvim/colors cp ~/dotfiles/nvim/colors/onedark.vim ~/.config/nvim/colors cp ~/dotfiles/nvim/init.vim ~/.config/nvim/ nvim +PlugInstall cd && cd ~/.vim/plugged/tern_for_vim/ && sudo npm install tern cd && cd ~/.vim/plugged/tern_for_vim/node_modules/tern/plugin && wget "" # sans tern server global ctrl x ctrl o fonctionne #cd && sudo npm install -g tern my init.vim:
set encoding=utf8 set number set mouse=a set autoindent set clipboard=unnamedplus " recherche incrementielle set incsearch set cursorline " insensible à la casse dans les recherches set ignorecase " insensible à la casse dans les chemins set wildignorecase "" true color " relative number set relativenumber " vertical line to show position set cursorcolumn " COLORS set background=dark colorscheme onedark set termguicolors " PERSONAL COMMAND " open snippets for phaser in a vertical split command Pref vsplit ~/.vim/plugged/vim-snippets/snippets/javascript/javascript-phaser.snippets " CUSTOM KEYBINDING "NORMAL MODE nnoremap <C-Up> VDkPk <CR> nnoremap <C-Down> VDjPj <CR> nnoremap <C-a> ggVG <CR> nnoremap <C-=> G=gg <CR> map <F2> :TernDoc<CR> "INSERT MODE inoremap <C-Space> <C-x><C-o> " Plugins will be downloaded under the specified directory. call plug#begin('$HOME/.vim/plugged') if has('nvim') Plug 'Shougo/deoplete.nvim', { 'do': ':UpdateRemotePlugins' } else Plug 'Shougo/deoplete.nvim' Plug 'roxma/nvim-yarp' Plug 'roxma/vim-hug-neovim-rpc' endif " DECLARE THE LIST OF PLUGINS. Plug 'ternjs/tern_for_vim' Plug 'scrooloose/nerdtree' Plug 'Raimondi/delimitMate' Plug 'rhysd/github-complete.vim' Plug 'easymotion/vim-easymotion' Plug 'terryma/vim-multiple-cursors' Plug 'vim-syntastic/syntastic' Plug 'kien/ctrlp.vim' Plug 'pangloss/vim-javascript' Plug 'vim-scripts/indenthtml.vim' Plug 'walm/jshint.vim' Plug 'heavenshell/vim-jsdoc' Plug 'ervandew/supertab' Plug 'SirVer/ultisnips' Plug 'honza/vim-snippets' "Plug 'majutsushi/tagbar' " COLORS THEMES "Plug 'joshdick/onedark.vim' " List ends here. Plugins become visible to Vim after this call. call plug#end() " SUPERTAB "scroll from top to bottom let g:SuperTabDefaultCompletionType = "<c-n>" let g:python_host_prog = '/usr/bin/python2.7' let g:python3_host_prog = '/usr/bin/python3.5' "peut etre à supprimer noautocmd "" start deoplete at startup let g:deoplete#enable_at_startup = 1 set statusline+=%#warningmsg# set statusline+=%* "COMMANDES POUR SYNTASTIC "let g:syntastic_always_populate_loc_list = 1 "let g:syntastic_auto_loc_list = 1 "let g:syntastic_check_on_open = 1 "let g:syntastic_check_on_wq = 0 in fact i do ctrl x ctrl o (with ctrl +space => see my init.vim) to have tern definition. Your previous solution don't work all the time it seems that tern server is broken....because it use tern_for_vim and at the same time tern...
- espace reacted to speedo in is it someone who success to implement tern-phaser with vim ?
for NEOVIM auto-completion for phaser::
neocomplete doesn't work in neovim so, switch to deoplete.nvim (its awesome).
to use auto-completion with-out the "ctrl x + ctrl o" see following steps::
requirements::
1] deoplete.nvim (note it requires python3 :checkhealth)
For vim-plug ::
--> init.vim::
if has('nvim') Plug 'Shougo/deoplete.nvim', { 'do': ':UpdateRemotePlugins' } else Plug 'Shougo/deoplete.nvim' Plug 'roxma/nvim-yarp' Plug 'roxma/vim-hug-neovim-rpc' endif " Use deoplete. let g:deoplete#enable_at_startup = 1 2] deoplete-ternjs (bridge for linking tern plugs)
--> init.vim::
Plug 'carlitux/deoplete-ternjs', { 'do': 'npm install -g tern' } 3] tern_for_vim (it is bread and butter)
--> init.vim::
Plug 'marijnh/tern_for_vim' then, put phaser auto completion api to ...tern_for_vim>node_modules>tern>plugin
***for detail follow point no [3] from
3] Now the most important step ::
install tern globally
sudo npm install -g tern
and put another phaser auto completion api to ...node_modules>tern>plugin this time globally also
eg:: /usr/lib/node_modules/tern/plugin/....
And BOOM !! there you have it all set
Now all the phaser auto-completion is shown as you type .
to use tern hints ---> :TernType and :TernDoc (in vim the :TernType and :TernDoc is automatically displayed when matched , no need to type for neocomplete)
and if the auto-completion list is not shown then only use "ctrl x + ctrl o"
and if you use VIM :: do not toggle off neocomplete by default make a swith key to toggle off/on and use "CTRL X + CTRL O" for unknown listing of phaser auto-completion and switch back if satisfied . for NEOVIM no need to worry about this. (since neocomplete is not needed)
--vim/neovim + phaser Rocks!!!
- espace got a reaction from speedo in is it someone who success to implement tern-phaser with vim ?
yes !
finally i success with you big thanks
i have followed these steps:
install nodejs 8 and not 9
:set ft=javascript
:set omnifunc
disable neocomplete and press ctrl+x and ctrl+o after and tadam the same screen than you.
big big thanks !!!!!!!
- espace reacted to speedo in is it someone who success to implement tern-phaser with vim ?
Just to be sure,
I have tested in all platform as described in
and it works very well (100/100 %).
recheck::
-> python(2,3) ?
-> in home dir::
place..
->.vimrc/_vimrc(win)
->.vim
->bundle
->tern_for_vim
->node_modules .../plugin/
--> give it a try with vundle plugin manager
--> re-check your vimrc file(path/calling..) and folder location once.
--> install all the requirement plugins for associative plugins see doc.
--> if all is set it will work (100/100 %)
keep trying (VIM + PHASER ROCKS!!)
Test result with tern plug both in gvim/vim::
phaser with vim in ubuntu [linux]::
phaser with vim in fedora [linux]::
phaser with vim in windows 10:: | https://www.html5gamedevs.com/profile/22661-espace/reputation/?type=forums_topic_post&change_section=1 | CC-MAIN-2020-16 | en | refinedweb |
Some assumptions are just plain weird.
The simplest regression might look something like this;
\[ \hat{y}_i \approx \text{intercept} + \text{slope } x_i\]
You might assume at this point in time that the slope and intercept are unrelated to eachother. In econometrics you’re even taught that this assumption is necessary. If you’re one I have to warn you for what you’re about to read. I’m about to show you why, by and large, this independance assumption is just plain weird.
Let’s generate some fake data and let’s toss it in PyMC3.
import numpy as np import matplotlib.pylab as plt n = 1000 xs = np.random.uniform(0, 2, (n, )) ys = 1.5 + 4.5 * xs + np.random.normal(0, 1, (n,))
Let’s now throw this data into pymc3.
import pymc3 as pm with pm.Model() as model: intercept = pm.Normal("intercept", 0, 1) slope = pm.Normal("slope", 0, 1) values = pm.Normal("y", intercept + slope * xs, 1, observed=ys) trace = pm.sample(2000, chains=1) plt.scatter(trace.get_values('intercept'), trace.get_values('slope'), alpha=0.2) plt.title("pymc3 results");
That’s interesting; the
intercept and
slope variables aren’t independant at all. They are negatively related!
Don’t trust this result? Let’s sample subsets and throw it into scikit learn, let’s see what comes out of that.
size_subset = 500 n_samples = 2000 samples = np.zeros((n_samples, 2)) for i in range(n_samples): idx = np.random.choice(np.arange(n), size=size_subset, replace=False) X = xs[idx].reshape(n_samples, 1) Y = ys[idx] sk_model = LinearRegression().fit(X, Y) samples[i, 0] = sk_model.intercept_ samples[i, 1] = sk_model.coef_[0] plt.scatter(samples[:, 0], samples[:, 1], alpha=0.2) plt.title("sklearn subsets result");
Again, negative correlation.
So what is going on here? We generated the data with two independent parameters. Why are these posteriors suggesting that there is a relationship between the intercept and slope?
There are two arguments to make this intuitive.
Consider these two regression lines that go into a single point.As far as the little point is concerned both lines are equal. They both have the same fit. We’re able to exchange a little bit of the intercept with a little bit of the slope. Granted, this is for a single point, but also for a collection of points you can make an argument that you can exchange the intercept for the slope. This is why there must be a negative correlation.
Consider this causal graph. The \(x_0\) node and \(x_1\) node are independant, that is, unless \(y\) is given. That is because, once \(y_i\) is known, we’re back to a single point and then the geometry argument kicks in. But also because logically we could explain the point \(y_i\) in many ways; a lack of \(x_0\) can be explained by an increase of \(x_1\), vise versa or something in between. This encoded exactly in the graphical structure.
It actually took me a long time to come to grips with this. Upfront the linear regression does look like the addition of independant features. But since they all need to sum up to a number, it is only logical that they are related.
Assuming properties of your model upfront is best done via a prior, not by an independence assumption.
The interesting thing about this phenomenon is that it is so pronounced in the simplest example. It is far less pronounced in large regressions with many features, like;
\[ y_i \approx \text{intercept} + \beta_1 x_{1i} + ... + \beta_f x_{fi} \] Here’s some plots of the intercept value vs. the first estimated feature, \(\beta_1\) given \(f\) features;
You can clearly see the covariance decrease as \(f\) increases. The code for this can be found here. | https://koaning.io/posts/theoretical-dependence/ | CC-MAIN-2020-16 | en | refinedweb |
table of contents
- buster 241-7~deb10u1
- buster-backports 242-7~bpo10+1
- testing 242-7
- unstable 242-7
- experimental 243-3
NAME¶udev_device_new_from_syspath, udev_device_new_from_devnum, udev_device_new_from_subsystem_sysname, udev_device_new_from_device_id, udev_device_new_from_environment, udev_device_ref, udev_device_unref - Create, acquire and release a udev device object
SYNOPSIS¶
#include <libudev.h>
struct udev_device *udev_device_new_from_syspath(struct udev *udev, const char *syspath);
struct udev_device *udev_device_new_from_devnum(struct udev *udev, char type, dev_t devnum);
struct udev_device *udev_device_new_from_subsystem_sysname(struct udev *udev, const char *subsystem, const char *sysname);
struct udev_device *udev_device_new_from_device_id(struct udev *udev, const char *id);
struct udev_device *udev_device_new_from_environment(struct udev *udev);
struct udev_device *udev_device_ref(struct udev_device *udev_device);
struct udev_device *udev_device_unref(struct udev_device *udev_device);
DESCRIPTION¶udev_device_new_from_syspath, udev_device_new_from_devnum, udev_device_new_from_subsystem_sysname, udev_device_new_from_device_id, and udev_device_new_from_environment allocate a new udev device object and returns a pointer to it. This object is opaque and must not be accessed by the caller via different means than functions provided by libudev. Initially, the reference count of the device is 1. You can acquire further references, and drop gained references via udev_device_ref() and udev_device_unref(). Once the reference count hits 0, the device object is destroyed and freed.
udev_device_new_from_syspath, udev_device_new_from_devnum, udev_device_new_from_subsystem_sysname, and udev_device_new_from_device_id create the device object based on information found in /sys, annotated with properties from the udev-internal device database. A syspath is any subdirectory of /sys, with the restriction that a subdirectory of /sys/devices (or a symlink to one) represents a real device and as such must contain a uevent file. udev_device_new_from_devnum takes a device type, which can be b for block devices or c for character devices, as well as a devnum (see makedev(3)). udev_device_new_from_subsystem_sysname looks up devices based on the provided subsystem and sysname (see udev_device_get_subsystem(3) and udev_device_get_sysname(3)) and udev_device_new_from_device_id looks up devices based on the provided device ID, which is a special string in one of the following four forms:
Table 1. Device ID strings
udev_device_new_from_environment
creates a device from the current environment (see environ(7)). Each key-value pair is interpreted in the same way as if it was received in an uevent (see udev_monitor_receive_device(3)). The keys DEVPATH, SUBSYSTEM, ACTION, and SEQNUM are mandatory. | https://manpages.debian.org/unstable/libudev-dev/udev_device_new_from_environment.3.en.html | CC-MAIN-2020-16 | en | refinedweb |
pymatgen.io.qchem.utils module¶
Utilities for Qchem io.
lower_and_check_unique(dict_to_check)[source]¶
Takes a dictionary and makes all the keys lower case. Also replaces “jobtype” with “job_type” just so that key specifically can be called elsewhere without ambiguity. Finally, ensures that multiple identical keys, that differed only due to different capitalizations, are not present. If there are multiple equivalent keys, an Exception is raised.
- Parameters
dict_to_check (dict) – The dictionary to check and standardize
- Returns
- An identical dictionary but with all keys made
lower case and no identical keys present.
- Return type
to_return (dict)
process_parsed_coords(coords)[source]¶
Takes a set of parsed coordinates, which come as an array of strings, and returns a numpy array of floats.
read_pattern(text_str, patterns, terminate_on_match=False, postprocess=<class 'str'>)[source]¶
General pattern reading on an input string
- Parameters
text_str (str) – the input string to search for patterns
patterns (dict) – A dict of patterns, e.g., {“energy”: r”energy\(sigma->0\)\s+=\s+([\d-.]+)”}.
terminate_on_match (bool) – Whether to terminate when there is at least one match in each key in pattern.
postprocess (callable) – A post processing function to convert all matches. Defaults to str, i.e., no change.
- Renders accessible:
Any attribute in patterns. For example, {“energy”: r”energy\(sigma->0\)\s+=\s+([\d-.]+)”} will set the value of matches[“energy”] = [[-1234], [-3453], …], to the results from regex and postprocess. Note that the returned values are lists of lists, because you can grep multiple items on one line.
read_table_pattern(text_str, header_pattern, row_pattern, footer_pattern, postprocess=<class 'str'>, attribute_name=None, last_one_only=False)[source]¶
Parse table-like data. A table composes of three parts: header, main body, footer. All the data matches “row pattern” in the main body will be returned.
- Parameters
text_str (str) – the input string to search for patterns
header_pattern (str) – The regular expression pattern matches the table header. This pattern should match all the text immediately before the main body of the table. For multiple sections table match the text until the section of interest. MULTILINE and DOTALL options are enforced, as a result, the “.” meta-character will also match “n” in this section.
row_pattern (str) – The regular expression matches a single line in the table. Capture interested field using regular expression groups.
footer_pattern (str) – The regular expression matches the end of the table. E.g. a long dash line.
postprocess (callable) – A post processing function to convert all matches. Defaults to str, i.e., no change.
attribute_name (str) – Name of this table. If present the parsed data will be attached to “data. e.g. self.data[“efg”] = […]
last_one_only (bool) – All the tables will be parsed, if this option is set to True, only the last table will be returned. The enclosing list will be removed. i.e. Only a single table will be returned. Default to be True.
- Returns
List of tables. 1) A table is a list of rows. 2) A row if either a list of attribute values in case the the capturing group is defined without name in row_pattern, or a dict in case that named capturing groups are defined by row_pattern. | https://pymatgen.org/pymatgen.io.qchem.utils.html | CC-MAIN-2020-16 | en | refinedweb |
FlatList
A performant interface for rendering,
extraData={this.state}to
FlatListwe make sure
FlatListitself will re-render when the
state.selectedchanges. Without setting this prop,
FlatListwould not know it needs to re-render any items because it is also a
PureComponentand the prop comparison will not show any changes.
keyExtractortells the list to use the
ids for the react keysList extends React.PureComponent { state = {selected: (new Map(): Map<string, boolean>)}; _keyExtractor = (item, index) => item.id; _onPressItem = (id: string) => { // updater functions are preferred for transactional updates this.setState((state) => { // copy the map rather than modifying state. const selected = new Map(state.selected); selected.set(id, !selected.get(id)); // toggle return {selected}; }); }; _renderItem = ({item}) => ( <MyListItem id={item.id} onPressItem={this._onPressItem} selected={!!this.state.selected.get(item.id)} title={item.title} /> ); render() { return ( <FlatList data={this.props.data} extraData={this.state} keyExtractor={this._keyExtractor} renderItem={this._renderItem} /> ); } ));
Scrolls to the item at the specified index such that it is positioned in the viewable area such that
viewPosition 0 places it at the top, 1 at the bottom, and 0.5 centered in the middle.
viewOffset is a fixed number of pixels to offset the final target position.
Note: cannot scroll to locations outside the render window without specifying the
getItemLayout prop.. | http://reactnative.dev/docs/0.50/flatlist | CC-MAIN-2020-16 | en | refinedweb |
Locating single Tcl script to create a Tk GUI and therefore this should be as pain free as possible.
Any easy way around this is to turn your Tcl file into an array that can be included from your source file. This article explores how to do this with Tcl and C.
Converting the Tcl Script
To include the Tcl script it needs to be converted into a C array. This can be done from Unix with the
xxd -i command. So to convert
my.bin to
my.bin.h you would run:
$ xxd -i my.tcl my.tcl.h
If don’t have access to
xxd, you can use bin2c downloadable as an archive from here. To do as above with
bin2c:
$ tclsh bin2c.tcl my.tcl my_tcl my.tcl.h
This will create a file similar to the following:
unsigned char my_tcl[] = { 0x70, 0x75, 0x74, 0x73, 0x20, 0x22, 0x48, 0x65, 0x6c, 0x6c, 0x6f, 0x2c, 0x20, 0x77, 0x6f, 0x72, 0x6c, 0x64, 0x21, 0x22, 0x0a, 0x70, 0x75, 0x74, 0x73, 0x20, 0x22, 0x49, 0x20, 0x68, 0x6f, 0x70, 0x65, 0x20, 0x79, 0x6f, 0x75, 0x20, 0x6c, 0x69, 0x6b, 0x65, 0x64, 0x20, 0x74, 0x68, 0x69, 0x73, 0x20, 0x61, 0x72, 0x74, 0x69, 0x63, 0x6c, 0x65, 0x20, 0x66, 0x72, 0x6f, 0x6d, 0x3a, 0x20, 0x68, 0x74, 0x74, 0x70, 0x3a, 0x2f, 0x2f, 0x74, 0x65, 0x63, 0x68, 0x74, 0x69, 0x6e, 0x6b, 0x65, 0x72, 0x69, 0x6e, 0x67, 0x2e, 0x63, 0x6f, 0x6d, 0x22, 0x0a }; unsigned int my_tcl_len = 89;
In the example below you can see that
my.tcl.h has been included into the function which will load the script. The array created above is then evaluated by
Tcl_EvalEx() using the created array:
my_tcl, and its associated length variable:
my_tcl_len.
#include <tcl.h> static Tcl_Interp *interp; int Script_init() { // Include my.tcl which has been converted to a char array using xxd -i #include "my.tcl.h" interp = Tcl_CreateInterp(); if (Tcl_Init(interp) == TCL_ERROR) { return 0; } if (Tcl_EvalEx(interp, my_tcl, my_tcl_len, 0) == TCL_ERROR ) { fprintf(stderr, "Error in embedded my.tcl\n"); fprintf(stderr, "%s\n", Tcl_GetStringResult(interp)); return 0; } return 1; }
Conclusion
This process makes it much easier to distribute an executable. Once an initial Tcl script has been loaded, you can use something like the xdgbasedir module to easily locate other scripts for the program. To automate this process, take a look at Using Dynamically Generated Header Files with CMake. | http://techtinkering.com/2013/02/20/compiling-a-tcl-script-into-an-executable/ | CC-MAIN-2018-05 | en | refinedweb |
A. Another way of doing this is that you write these lines inside a function and call that function every time you want to display values. This would make you code simple, readable and reusable.
Syntax of Function
return_type function_name (parameter_list) { //C++ Statements }
Let’s take a simple example to understand this concept.
A simple function example
#include <iostream> using namespace std; /* This function adds two integer values * and returns the result */int sum(int num1, int num2){ int num3 = num1+num2; return num3; } int main(){ //Calling the function cout<<sum(1,99); return 0; }
Output:
100
The same program can be written like this: Well, I am writing this program to let you understand an important term regarding functions, which is function declaration. Lets see the program first and then at the end of it we will discuss function declaration, definition and calling of function.
#include <iostream> using namespace std; //Function declaration int sum(int,int); //Main function int main(){ //Calling the function cout<<sum(1,99); return 0; } /* Function is defined after the main method */ int sum(int num1, int num2){ int num3 = num1+num2; return num3; }
Function Declaration: You have seen that I have written the same program in two ways, in the first program I didn’t have any function declaration and in the second program I have function declaration at the beginning of the program. The thing is that when you define the function before the main() function in your program then you don’t need to do function declaration but if you are writing your function after the main() function like we did in the second program then you need to declare the function first, else you will get compilation error.
syntax of function declaration:
return_type function_name(parameter_list);
Note: While providing parameter_list you can avoid the parameter names, just like I did in the above example. I have given
int sum(int,int); instead of
int sum(int num1,int num2);.
Function definition: Writing the full body of function is known as defining a function.
syntax of function definition:
return_type function_name(parameter_list) { //Statements inside function }
Calling function: We can call the function like this:
function_name(parameters);
Now that we understood the working of function, lets see the types of function in C++
Types of function
We have two types of function in C++:
1) Built-in functions
2) User-defined functions
1) Build-it functions
Built-in functions are also known as library functions. We need not to declare and define these functions as they are already written in the C++ libraries such as iostream, cmath etc. We can directly call them when we need.
Example: C++ built-in function example
Here we are using built-in function pow(x,y) which is x to the power y. This function is declared in
cmath header file so we have included the file in our program using
#include directive.
#include <iostream> #include <cmath> using namespace std; int main(){ /* Calling the built-in function * pow(x, y) which is x to the power y * We are directly calling this function */ cout<<pow(2,5); return 0; }
Output:
32
2) User-defined functions
We have already seen user-defined functions, the example we have given at the beginning of this tutorial is an example of user-defined function. The functions that we declare and write in our programs are user-defined functions. Lets see another example of user-defined functions.
User-defined functions
#include <iostream> #include <cmath> using namespace std; //Declaring the function sum int sum(int,int); int main(){ int x, y; cout<<"enter first number: "; cin>> x; cout<<"enter second number: "; cin>>y; cout<<"Sum of these two :"<<sum(x,y); return 0; } //Defining the function sum int sum(int a, int b) { int c = a+b; return c; }
Output:
enter first number: 22 enter second number: 19 Sum of these two :41 | https://beginnersbook.com/2017/08/cpp-functions/ | CC-MAIN-2018-05 | en | refinedweb |
Business Success Specials
Promotions
Training Events
Back To School......Building Your Business!
September 2017
Our Seminar Schedule..
For The Next 3 Months More Than Parts!
Training for You & Your Staff..So You Make More!
(all seminars are held at La Reggia at the Meadowlands Plaza Hotel, Secaucus and
Costas Italian Restaurant, Roselle Pk, NJ)
All Seminars Presented by ATG, MEA, AC Delco and Motorcraft...
Wednesday, September 13 and Thursday September 14
Mechanical & Variable Timing &
Valvetrain Diagnostics
Getting More Profits From Your Bays
Wednesday, October 18th—1 Night Only-Held At Galloping Hill Caterers
Building Your Business
Getting The Most from
The Buy Wise Service Center Program
Learn How To Gain Profit for You, Your Customer and Your Staff!
Wednesday, November 15th and Thursday, November 16th
Chrysler Drivability & Code Diagnostics
Buy Wise In House Reflash Sessions Seating is
In House Scanner Update Courses Limited to 100
Service Center
In House TPMS Training Events Technicians
Members Get
Seating
Priority
Look for Announcements Soon On Smaller Size Classes Held
PROFESSIONAL TECHNICIAN
TECHNICAL TRAINING SERIES
Mechanical & Variable Timing & Valvetrain Diagnostics
Mechanical timing problems have been around as long as the internal combustion engine, but more
complex timing and valvetrain systems have flooded the market in the last decade. Now they’re
showing up in your shop, and the stakes are high. Position sensor codes are often mechanical or
variable timing faults. Variable valve timing (VVT) codes are often mechanical timing faults or
lubrication issues. Even Fuel Trim, MAF, MAP, and other codes are symptoms can be caused by a
variety of timing and lift faults. When it really is a VVT system fault, it’s hard to tell is the root cause is
an oil, solenoid, actuator, or other mechanical fault. What’s needed is a repeatable strategy to avoid
testing ‘everything’, and instead quickly eliminate possible causes. In this first class of our new ATG
import Series, we’ve fine-tuned this approach to include manufacturer-specific ATG Tips for quick
solutions you’d spend hours discovering on your own.
Coverage includes:
Hydraulic, Electronic & Magnetic Clutch VVT systems
Variable lift & valve clearance faults
Finding stretched/jumped timing without disassembly
Differentiating mechanical & VVT failures
Stuck actuators vs. stuck solenoid diagnostics
Finding internal oiling faults from the outside
Leverage Scan Tool PIDs & functions to eliminate the most possible causes with the least
effort
This list really sums up the problem it’s all related! Only by treating mechanical timing, variable valve
timing, valve train and lubrication all as a single system can you quickly find out what’s not wrong and
zero in on a root cause,
JOIN US...2-Different Evenings
Wednesday Evening...September 13th 6:15pm
at Costas Italian Restaurant
120 Chestnut St, Roselle Park, NJ 07204
(908) 241-1131
Thursday Evening...September 14th 6:15pm
at LaReggia in The Meadowlands Plaza Hotel
Meadowlands Plaza Hotel,
40 Wood Ave, Secaucus, NJ 07094
201-422-0200
Class is limited to 100 students per evening. Priority is given to our Buy Wise Service Center Members.
Sign Me Up for Wednesday, Sept 13th Sign Me Up for Thursday, Sept 14th
Account Name: ______________________________________________________
No. Attending: ___________ Account No: _________________
Return This Portion For Registration! Seating is Limited! Sign Up Now!
THE BUY WISE AUTO PARTS
UNDERCAR PARTS TEAM!
And Now, More Brands, More Choices..
...Our Inventory Is Unmatched By Any Competitor!
Ask Your Territory Manager To Custom Tailor
A Solution To Fit
Your Business’s Needs!
JUST BUY OR.. BUY
ONE OF ONE OF
THESE THESE
The Largest
Control Arm
Inventory in
Our Area!
Get 1 Of These! Get 1 Of These!
Aren’t You Looking for That Original Brand?
Make Your Move To AC Delco Batteries!
We’ll Put 9 Batteries In Your
Location On Consignment!
We’ll Give You a Discounted Price!
We’ll Come Weekly to Check / Restock!
We’ll Do Free Stock Rotations!
Our Battery Trucks Come To You!
Kick-Off Incentive
We’ll Rebate You—$100 Visa Gift Card
After your First 15 Batteries Sold!
(must be within 3 months of Install)
Ongoing—One Free Battery After Each
25 Batteries Purchased for 1st year!
(Free Rebate based on Value of No.1 Selling Battery)
Earn A Free Battery Tester
We’ll Rebate You Back $4 for
every Battery that you
purchased in a 12 month
period, up to your cost for the
Battery Tester!
Rebate Paid at end of 12 Months
THE BUY WISE AUTO PARTS
BRAKE TEAM!
And Now, By Supporting The Buy Wise Brake Team...
...We’ll Reward You With Great Incentives!
Ask Your Territory Manager To Custom Tailor
A Solution To Fit
Your Business’s Needs!
Makita MKXT270—18 Volt 1/2” Impact and Hex Gun
1/4” Impact Driver Kit w/ FREE Drill!
$599.99
MKXT270
Compact and ergonomic design at only 5-3/8" long weighing only 3.9 lbs.
Variable 1/4" Hex Impact (0-2,900 RPM & 0-3,500 IPM) delivers 1,460
in.lbs. of torque for a wide range of fastening applications
Features Extreme Protection Technology (XPT™) which is engineered to
provide increased dust and water resistance in harsh job site conditions
Includes: (2) 18v 4.0 Ah LXT batteries, (1) 18v LXT Rapid Charger.
Induction Innovations MV-777 Mini Ductor Venom
$469.99
IIMDV777
$79.99 $199.99
$119.99 | http://anyflip.com/wxgt/qeyk/basic/ | CC-MAIN-2018-05 | en | refinedweb |
Consider the following Python function.
def blast(n): if n > 0: print n blast(n-1) else: print "Blast off!"
What is the the output from the call?
blast(5)
The following mechanism helps us understand what is happening:
What is the output of the following?
def rp1( L, i ): if i < len(L): print L[i], rp1( L, i+1 ) else: print def rp2( L, i ): if i < len(L): rp2( L, i+1 ) print L[i], else: print L = [ 2, 3, 5, 7, 11 ] rp1(L,0) rp2(L,0)
Note that the entirety of list L is not copied to the top of the stack. Instead, a reference (an alias) to L is placed on the stack.
The factorial function is
and
This is an imprecise definition because the :math:` cdots ` is not formally defined.
Writing this recursively helps to clear it up:
and
The factorial is now defined in terms of itself, but on a smaller number!
Note how this definition now has a recursive part and a non-recursive part:
We’ll add output code to the implementation to help visualize the recursive calls in a different way.
The Fibonacci sequence starts with the values 0 and 1.
Each new value in the sequence is obtained by adding the two previous values, producing
Recursively, the
value,
, of the
sequence is defined as
This leads naturally to a recursive function, which we will complete in lecture.
Fractals are often defined using recursion. How do we draw a Sierpinski triangle like the one shown below?
Define the basic principle
Define the recursive step
def draw_sierpinski( ):
Remember the lego homework? We wanted to find a solution based on mix and match. While a non-recursive solution exists, the recursive solution is easier to formulate.
Given a list of legos and a lego we would like match:
Define the basis step(s): when should we stop?
Define the recursive step
def match_lego(legolist, lego):
Consider the following recursive version of binary search:
def binary_search_rec( L, value, low, high ): if low == high: return L[low] == value mid = (low + high) / 2 if value < L[mid]: return binary_search_rec( L, value, low, mid-1 ) else: return binary_search_rec( L, value, mid, high ) def binary_search(L, value): return binary_search_rec(L, value, 0, len(L)-1)
Here is an example of how this is called:
print binary_search( [ 5, 13, 15, 24, 30, 38, 40, 45], 13 )
Note that we have two functions, with binary_search acting as a “driver function” of binary_search_rec which does the real work.
Is the code right?
The fundamental idea of merge sort is recursive:
We repeat our use of the merge function function in class.
def merge_sort(L):
Comparing what we write to our earlier non-recursive version of merge sort shows that the primary job of the recursion is to organize the merging process! function we’ve just defined, together with +1, -1 and equality tests.
Now, define the integer power function,
, in terms of the mult function you
just wrote, together with +1, -1, and equality.
Euclid’s algorithm for finding the greatest common divisor is one of
the oldest known algorithms. If
and
are positive
integers, with
, then let
be the remainder
of dividing
by
. If
, then
is the GCD of the two integers. Otherwise, the GCD of
and
equals the GCD ofcd is proceeding toward the base case (as required by our “rules” of writing recursive functions)?
Specify the recursive calls and return values from our merge_sort implementation for the list
L = [ 15, 81, 32, 16, 8, 91, 12 ] | http://www.cs.rpi.edu/~sibel/csci1100/fall2014/course_notes/lec21_recursion.html | CC-MAIN-2018-05 | en | refinedweb |
Secure Naming in Information-centric Networks
- Stephany Boone
- 2 years ago
- Views:
Transcription
1 Secure Naming in Information-centric Networks Walter Wong University of Campinas Campinas, Brazil Pekka Nikander Ericsson Research NomadicLab Finland ABSTRACT In this paper, we present a secure naming system to locate resources in information-centric networks. The main goal is to allow secure content retrieval from multiple unknown or untrusted sources. The proposal uses a new, flexible naming scheme that is backwards compatible with the current URL naming scheme and allows for independent content identification regardless of the routing, forwarding, and storage mechanisms by separating the source and location identification rules in the URI/URL authority fields. Some benefits of the new naming system include the opportunity to securely retrieve content from any source in the network, content mobility, content validation with the original source, and full backwards compatibility with the current naming system. Categories and Subject Descriptors C2.1 [Computer-Communication Networks]: Network Architecture Design General Terms Architecture, Security Keywords Information networking, naming system 1. INTRODUCTION The Domain Name System (DNS) was introduced in the Internet to overcome the administrative burden caused by the growing number of hosts. As more and more computers were deployed around the world, the size of the hosts.txt file grew to unmanageable sizes, demanding a new resolution system that was at the same time scalable, distributed, and administratively easy to manage. The original functionality required was the resolution of a hostname into an IP address, easing the access to remote computers through human-readable ReArch 2010, November 30, 2010, Philadelphia, USA. Copyright 2010 ACM /10/11...$ Despite the success of the naming system over the years, the current DNS faces technical limitations to attend new security requirements, for instance, resilience against security threats such as denial-of-service attacks and DNS cache poisoning [3]. In addition, the lack of awareness of the DNS about the location of the clients prevents it to efficiently return the location of the closest available sources [6]. We argue that the root cause of these problems lies in the underpinnings of the architecture: the lack of secure content identifiers that are simultaneously independent of routing, forwarding, and storage mechanisms. The motivation of our work is to provide a more flexible naming system to enable content retrieval from multiple sources with a security mechanism embedded in the name, in the same line as proposed in the IETF DECADE working group [4]. We aim at a mechanism that allows for clients to receive and verify pieces of content from multiple sources that may not be trusted or that are not even known beforehand. In this scenario, we want to leverage security mechanisms based on the information that each data chunk carries instead of using transient security parameters from the current security protocols, such as SSL and IPSec. In those protocols, we are able to authenticate and verify the data, but we are not able to validate it 1. In this paper, we propose a new secure naming system that separates the roles of authority, content identification and location, leveraging security regardless of the network location. The main idea is to separate the security functionality in the names (or URLs) from the routing, forwarding and storage primitives, allowing the content to be verified regardless of its storage location from where it was retrieved. Hence, instead of resolving a name into an IP address, the proposed resolution mechanism resolves it into a metadata structure containing a set of permanent data chunk identifiers that can be used to retrieve the data from the network. The benefits of the proposed approach include the possibility to retrieve content pieces from multiple unknown/untrusted sources, opportunistic content retrieval from local caches, content authentication with the original provider, and backwards compatibility with the current naming system based on URLs. The organization of this paper is as follows. Section 2 presents the secure naming proposal, starting with the motivation of our work, and discussing the naming system. Section 3 describes the implementation architecture and presents the ongoing implementation. In Section 4 we provide a brief 1 By validity we mean that the received data is what the user really looked for and not just unmodified data.
2 security analysis of the designed system. Section 5 describes the related work and compares with our proposal, summarizing the main differences. Section 6 discusses deployment issues. Finally, Section 7 concludes this paper and presents the future work. 2. NAMING SYSTEM DESIGN The naming system was introduced in the Internet to ease the access to resources by using names in the Internet. At the same time, as the Internet grew in scale and the bandwidth usage migrated from remote login and to bandwidth-hungry user generated content (e.g. YouTube), the naming system started to show its limitations. One critical limitation is the lack of location awareness of the DNS [6], which hinders the possibility to return the closest server based on the client s location. As a consequence, client requests need to traverse the entire path towards the IP address indicated by the DNS despite any other closer source in the network. Content Delivery Networks (CDN) [10] came as an alternative to increase content availability by redirecting the DNS requests to their closest servers where the requested content could be retrieved. However, CDNs are restricted to a group of clients that pay this service that the DNS can not provide natively. From the security point of view, DNS is prone to a number of attacks, such as cache poisoning and denial-of-service attacks. DNSSec [5] was proposed to tackle the security issues in the DNS, mainly leveraging the provenance of the resolution mechanism. Clients resolving a name would receive both a signed response together with the signer s public key, allowing the verification of the response. However, DNSSec requires the establishment of a trust relationship between clients and the resolution infrastructure. We will show in the security analysis section that this trust is not necessary. 2.1 Design goals In order to address the limitations described above, we established a set of design goals with simplicity, scalability, robustness and security in mind: Backwards compatibility: The new naming system should be backwards compatible and also be incrementally deployable in the current Internet architecture; Content authentication: the separation of the data authentication, identification and location allows for data retrieval from multiple locations, content replication and mobility; High-level content identification: persistent content identifiers with embedded security properties allows for location-independent labels, resulting in identifiers that can be used to identify data regardless of the storage type, e.g., network caches; Provenance: the naming system should provide mechanisms to verify and validate the content pieces with the original content provider. The main idea is to transfer the trust placed in the storage location (mirror) to the content itself and to the authority over the content. 2.2 Initial design In order to design a secure naming scheme, we introduce three definitions that represent the main components of the naming scheme: authority, content and location. Authority. Authority is any entity who, in the first place, has the direct control over the data stored and handled by the system. It can be either a content provider who generated the content itself, i.e., the content owner, or an entity appointed to act as a representative of the content owner, e.g., a proxy. Content. Content is any resource or piece of information that is generated by an authority, and can be stored in a location. Location. Location is a place in the memory, hard disk or network where a piece of content can be stored and located. Large pieces of content can also be partitioned into smaller data chunks of fixed size to satisfy external requirements, for instance, the maximum transmission unit in the network. In addition, pieces of content can also be fragmented due to system policies, e.g., peer-to-peer networks require content to be divided and exchanged in smaller chunks to increase the overall availability of the system. Therefore, data chunks are smaller components of a piece of content and the sum of all parts forms the entire content. 2.3 Identifiers The mapping of these concepts on the network level is represented by the authority, content and location identifiers. The benefits of using these security tokens in the network level are twofold: first, we decouple the authority from specific locations in the network, leveraging a broader concept of authority instead of a single administrative domain; second, we add security semantics within an identifier, easing the authority authentication in the network. As a consequence, entities are able to recognize a security token that has been confirmed (authenticated) before, regardless of the location, and are able to recreate an indicative relationship between a message and an already known authority. Content identifiers must also be free of location semantics in order to allow content mobility in the network. In order to provide a strong binding between contents and their identifiers, we use cryptographic identifiers (cryptoid) [12] to identify each piece of content. These identifiers are unique and permanent, and result from a strong cryptographic hash function over a piece of content, making them to be only dependent on the content itself and providing mechanisms to self-verify the content integrity. Large pieces of content that require fragmentation into smaller data chunks to be transferred in the network use algorithmic IDs (algids) [11] to leverage provenance with the original provider. Algorithmic IDs are a class of cryptoids that are algorithmically generated and provide strong binding between a content identifier and a set of chunk identifiers. Some examples of algids include hash chains [16] and Merkle Trees [9]. Hash chains allow for a chain of messages to be verified in sequence, as the chain anchor is usually digitally signed and the trust can be sequentially transferred from one chain to another due to the strong properties of cryptographic hash functions. In this scenario, a client first retrieves a signed hash chain anchor, which contains the hash of the first data chunk, and verifies the digital signature on it. Upon retrieving the first data chunk, it contains the hash value of the second data chunk and so on. As the first hash value (hash chain anchor) was digitally signed by the content provider, the trust on the data chunks is transferred to
3 the following data chunk. Any unauthorized modification in any data chunk will result in a different hash value. Merkle Trees are another class of algids where a Root Hash is generated from the cryptographic hash over the data chunks, resulting in a strong binding between data chunks and the root algid. Merkle Trees allow for independent block verification because each data chunk carries its own authentication information, allowing verification of data chunks by intermediate devices, for instance, network routers. Merkle Trees can be used to detect and prevent corruption of large pieces of content, where a single corrupted data chunk will result in an unusable piece of content [14]. Another interesting property is that Merkle tree chunk identifiers are location-independent, allowing for parallel and out-of-sequence chunk retrieval [15]. 3. IMPLEMENTATION 3.1 Implementation architecture The implementation of our naming system requires two main components: a name resolver and a content manager. The name resolver can be implemented as a plug-in for a Web browser, in order to support content retrieval from multiple sources. Given a URL, the plug-in will check whether it has the certificates in its internal trusted key store to authenticate the authority. Otherwise, the plug-in will initiate anameschemeresolutiontoretrievetheauthority s certificate together with the content identifier, authenticate and store them in the local keychain. Later on, the resolver requests a name resolution to the DNS to map the authority s name to the metadata and the Web-server s network location. The DNS returns a type TXT record containing the metadata with the chunk IDs and a type A record containing the IP address of the server. Finally, the plug-in opens multiple HTTP connections containing the chunk request header towards the server s network identifier. Fig. 1 shows the header used in the content pieces retrieval. Figure 1: Content piece header. Each piece contains the context/authority identifier, its own content identifier and the piece length. The type field defines the type of the message, for instance, if it is a data chunk request or a signaling message. The context/authority ID defines either the context of the requested data or the content provider identifier. In the first case, the content ID is used to identify a group that may have special access rights, e.g., a multicast group among a set of users. In the latter case, the authority ID is used to identify the content owner or the proxy identifier that is responsible for the requested piece of content, providing privacy for the provider. The piece crypto ID is the cryptographic ID of the content, protecting against unauthorized modifications in the data that the packet carries. Finally, the piece length contains the length of the carried data. The proposed naming mechanism supports clean-slate architectures based on flat routing mechanism, such as DONA [8], since these architectures support routing on flat identifiers. However, for an IP network, the component will use the server IP address returned together with the metadata to retrieve the content. In this case, the application will open multiple HTTP connections towards the server, and routers in the path are able to parse the content header and divert the requests to them, working similarly as transparent Webproxies. In case a cache does not have the requested piece, it forwards the request towards the server. The proposed caching mechanism is possible due to the independent piece identification mechanism proposed in the naming scheme. 3.2 Naming Scheme We use Universal Resource Identifiers (URIs) [13] as the foundation for our naming scheme. The URIs introduce a set of names and addresses used to identify resources in the Internet, leveraging global search and retrieval of documents across different operating systems and protocols. An URI is mainly composed of three components: a scheme, anauthority and a resource path, asillustratedinfig.2(additional components are described in [13]). Figure 2: (1) Original URI proposal. (2) URI-to- URL mapping. (3) Example of URL resolution in the DNS. The domain name is resolved into an IP address of the server storing the document. (4) Upon having the IP address, the client goes to the server location to retrieve the resource. The scheme defines the protocol handler for the authority; the authority is the owner or the entity responsible for the resource and the resource path points to the resource in the authority namespace. In the current naming system, a URI is actually mapped to an URL and the authority becomes the Fully Qualified Domain Name (FQDN) of the network where the resource is located 2.Thedomainnameisresolved into an IP address through the DNS and the client is able to request the resource to the returned IP address. However, the main drawback is the binding between content and a specific, relatively stable set of locations. In this paper, we take the stance that the authority field defines the content owner/provider, ortheentityresponsible for the content, instead of defining just a namespace authority. Hence, instead of mapping an FQDN to a locationdependent identifier to an IP address with DNS, the resolution system maps the authority field to the public key (or 2 Of course, there may be additional fields within the authority field. However, as they are not commonly used and not essential for the current discussion, we ignore them in the rest of this paper.
4 other secure identifier) of the authority over the data, providing a cryptographic mapping between a user-level name into cryptographic identifiers. We map the resource path to a content identifier that is also location-independent, resembling the DONA [8] style of naming based on Principals and Labels (P:L). However, in our proposal we define a more general and flexible structure to describe a resource called metadata. Themetadata holds all information describing the resource, such as type, total length, number of pieces, piece ID list (cryptoids), piece length, content anchor and digital signature. Fig. 3 illustrates an example of the proposed naming scheme (step 1). Figure 3: Naming scheme and resolution to authority and metadata. 3.3 Authority Mapping The mapping of an authority to a security token can be performed in different ways, for example, local configuration based on administrative rules, integrated with legacy resolution system (DNS) or can be a totally external entity, such as a Distributed Hash Table (DHT). In the first scenario, the bootstrapping procedure is done by an external administrator, so the local host has the key embedded in the system, for instance, a list of root certificates in a Web-browser. The benefit of this approach is the pre-establishment of a trust relationship between the software vendor (the entity that embedded the certificates) and the entities that may provide security services (Web-sites that are trusted). In the second scenario, clients rely on a trusted resolution infrastructure, for instance, DNSSec. In this approach, a user queries a distributed system to reach the authoritative server hosting an authority-to-identifier mapping. The benefit of this approach is that the key management is easier from the infrastructure point of view. However, the drawback is the unnecessary trust placed in the resolution infrastructure to resolve queries correctly. For example, if a client trusts her bank (and its digital certificate), then she does not need to trust in the infrastructure (DNSSec) to resolve the mapping to the bank. The client can spot any forgery since she has the bank s identifier and can check the identity against the bank s certificate. Finally, the third approach uses a distributed database to store the mappings, such as a DHT. The DHT works as a distributed directory where entities post and request certificates in the Internet. The benefit of this approach is that clients do not need to trust the infrastructure since it is a mere placeholder for the certificates. The trust in the infrastructure is minimum because clients just need to trust that the DHT will store and return the certificates whenever queries, and clients are able to verify whether a returned certificate is valid or not by checking its digital signature within the returned certificate. Hence, clients do not need to trust the infrastructure to return the correcting mapping, but rather to just return the content provider s digital certificate. In our proposal, we choose to use the last approach since we need to place the minimum level of trust on the name resolution mechanism, reducing the number of possible vulnerabilities against an eventual attacker. 3.4 Naming Resolution The third step in the secure naming system is to provide aresolutionservicefortheauthority and content identifiers into network locations. We adopt the DNS hierarchical infrastructure to provide the mapping service for the proposed naming scheme. The idea is that a distributed directory provides the mapping of the authority and its identifier to be used in the content authentication and the DNS system to provide the mapping between the authority name to its location. The benefits of this approach are threefold: first, we remove the implicit trust placed in the resolution system to adirecttrustwiththeauthority; second;theauthority name to its identifier resolution is agnostic about the forwarding fabric, it can be used with many forwarding technologies; third, it provides backward compatibility with the current naming scheme and resolution system. Fig. 3 illustrates the resolution procedure (steps 2 and 3). First, an application resolves an URI into the authority and content identifiers in a DHT (step 1). Then, the application resolves the URI into the resource metadata and server IP using the hierarchical resolution in the authority name to reach the authoritative DNS server (step 2). Finally, the application opens multiple connections requesting the data chunks identified with cryptoids retrieved from the metadata (step 3). The request is addressed to the server s network location and routers in the path storing the content identified by the cryptoid can return the data chunk on behalf of the server. 3.5 Ongoing work In order to evaluate our proposal, we have an early implementation of the naming scheme mechanism, implemented as a plug-in for the Firefox Web-browser to provide backward compatibility. The plug-in is composed of two main components: a protocol handler written in JavaScript, responsible for intercepting the new name scheme calls from the Web-browser, and a XPCOM 3 component responsible for parsing the header and retrieving the data chunks. The JavaScript component registers itself in the Mozilla Framework, thus, whenever the secure naming scheme is used, the Web-browser loads out customized protocol handler. The handler parses the URI and splits it into authority and content IDs, whicharepassedasaparametertothe XPCOM component. The component receives the IDs and 3 XPCOM is a cross-platform component that has multiple language bindings, easing the deployment of a new feature as a component.
5 subscribes to them. Currently, our implementation works integrated with the PSIRP Blackhawk prototype [11]. As future work, we will integrate it with the DNSSec system and also integrate the parsing for the metadata structure. 4. SECURITY ANALYSIS In this section, we analyze the proposed naming system from the external and internal security perspective. From the external point of view, we analyze the roles of the distributed directory based on the DHT and the DNS infrastructure. The distributed directory system plays a role as digital certificate storage of authority identities, allowing clients to query for a digital certificate of a content provider. The trust placed in the directory itself is limited, since a client needs to trust that it will behave accordingly with its pre-established function, i.e., store and answer digital certificates. One possible threat is the misbehaving of the directory system, e.g., not replying to a query, resulting in lack of availability due to a malfunctioning or corrupted node. However, clients can notice such a problem and try with another directory, preventing any forgery (man-in-the-middle) attack since the trust is established with the content provider and not with the directory service. Upon retrieving the content provider s certificate, clients are able to verify the digital signature in the certificate and check against the set of trusted keys in their key chain. We assume here that there is at least a small set of trusted keys in order to bootstrap the authentication procedure. Such set of trusted keys are installed by default in some Webbrowsers and others can be added with the users approval. The second resolution step involves the resolution of content IDs to network identifiers in the DNS. The trust level is minimal since clients establish a direct chain of trust with the content provider. Therefore, the DNS infrastructure is used as a simple mapping service and the possible attacks that can be performed is related to the denial of service to the clients. Attacks such as DNS cache poisoning does not affect clients since they have already the trust relationship established with the original provider. Therefore, if a DNS returns a modified entry pointing to a network label belonging to a malicious node, clients will be able to spot that by verifying the certificate retrieved in the previous step and check against the identity of the malicious attacker. As the attacker does not own a valid certificate for that identity, he will not be able to spoof a victim s identity. From the internal point of view, we analyze the security aspects of the binding between content identifier, algorithmic identifier, and the carried data. Content identifiers are based on either cryptoids or algids. In the former case (cryptoid), identifiers are generated from a strong cryptographic hash function, e.g. SHA-256, binding the content identifier with the data that it carries. Therefore, it is statistically impossible for an attacker to tamper with a piece of data without modifying its identifier due to the preimage attack resistance of these cryptographic hash functions. In the latter case (algids), identifiers are generated through the composition of cryptographic hash functions. Similarly to the cryptoids, it is unfeasible for an attacker to tamper with a data chunk without modifying the algid. 5. RELATED WORK Content Centric Network (CCNx) [7] is a receiver-driven content-oriented communication model driven by consumers interests. Requests for pieces of content are identified by hierarchical names, where each name is composed of a highlevel application name, concatenated together with the version, segmentation index and the cryptographic hash of the content to provide data integrity. These requests are routed directly on the names towards the server. Network caches on the path are able to identify and respond with the cached data on behalf of the server. As CCN relies on applicationlevel identifiers to identify pieces of content in the network, content resolution and forwarding is bound to the hierarchical structure described in the URL. In our approach, we use identifiers that are simultaneously independent from the forwarding, routing and storage, allowing content to be placed in multiple locations, e.g. edge networks, to server clients. DONA [8] proposes a clean-slate approach for the Internet naming system with a name-based anycast resolution primitive. Each piece of content is identified by its principals and labels (P:L), and data objects are handled by find/register primitives the architecture uses Resolution Handlers to resolve a name into the closest sources over the legacy IP network. However, the proposal does not address backwards compatibility with the current naming system and the content fragmentation. Our naming system leverages backward compatibility with the current naming system and uses algorithmic IDs to provide binding between content and data chunks. NetInf [2] proposes a secure naming scheme composed of two main components, a naming scheme based on the DONA scheme (P:L) and an information object structure to hold the information of a piece of content. An information object contains the object identifier, the data itself and a metadata used to describe the data that an information object carries. In addition, the metadata carries all the security parameters required to verify an object, for instance, owner s public key and the content piece hashes. The metadata may not contain the owner s public key, but the public key of a proxy to preserver anonymity of the content publisher. Compared to our approach, Net-Inf lacks backward compatibility with the current naming system based on DNS. As they use DONAlike naming scheme, there is no mapping function to map the user-level names to the cryptographic identifiers. Moreover, the naming resolution is left as future work. In our proposal, we provide a secure mapping between human-readable names (URIs) into cryptographic identifiers and we use the legacy DNS resolution mechanism to provide the mapping of the cryptoids into network labels. 6. DEPLOYMENT CONSIDERATIONS The proposed naming scheme can be gradually deployed in the current naming system. From the resolution side, it required an external infrastructure acting as a placeholder for digital certificates, which can be either a DHT, e.g. Pastry [1] or an underlay forwarding/storage mechanism, e.g. PSIRP [11]. Currently, some already deployed DHTs in the PlanetLab testbed can be used for this purpose since resolvers need to place a minimal trust in the infrastructure. The benefit of using such distributed infrastructure is the resistance to some security attacks, such as denialof-service attacks against the storage and better resistance against node failures. For the second resolution step, it is required to introduce
6 anewdnstypetxtrecordintheserverscontainingthe content metadata. This record type is already supported by DNS, thus, servers that support the metadata scheme just need to insert a new type in the DNS to provide enough information about the data chunks. The content servers should support both complete (or legacy request) and partial content retrieval, when a client issues a request with the content header. In the latter case, the server should have an internal mapping of the cryptoids to the data chunks in the complete file. For example, given a cryptoid, the server needs to be able to calculate which position of the complete file it should return, similar to the HTTP Range header. In this type of request, a client defines the byte range that she wants to receive from the file. Another possible approach is to save all data chunks in the server to prevent the mapping to the correct offset in the complete file. In the client side, users need to install a plug-in in the Web-browsers to handle the new naming scheme and the metadata structure. The plug-in can be easily integrated with Web-browsers, providing authority and resource metadata authentication, metadata parsing and multiple connection management for data retrieval. Finally, in-network caching feature can also be gradually deployed in the network. Some routers have built-in Webservers that can be used as an alternate data source. On the other hand, more specialized in-network caches can also be deployed as bumps-in-the-wire to optimize the traffic efficiency by caching a certain amount of the in-transit traffic based on the content popularity. 7. CONCLUSION In this paper, we have presented a secure naming system that aims at decoupling the content authentication from its location in a network. The proposed mechanism allows for content authentication using cryptographic identifiers that are independent from the routing, forwarding, and storage location. As a consequence, pieces of content are authenticated with the original provider or authority and the data can be retrieved from any location, for instance, a proxy or anetworkcacheonthepath. Thenamingsystemusesa distributed directory service to store the authorities digital certificates, allowing for the direct establishment of the trust relations between clients and content providers. Some benefits of the proposed naming system include the security mapping function between high-level names and cryptographic identifiers in the network level, content authentication with the provider regardless of the network location where it was retrieved, and migration of the trust from the resolution infrastructure to the provider, reducing the number of possible threats during the resolution procedure. 8. ACKNOWLEDGMENTS Walter Wong is funded by the Brazilian CAPES agency and Ericsson Research. Wes also would like to thanks Teemu Koponen for the comments on the paper. 9. REFERENCES [1] A. Rowstron and P. Druschel. Pastry:Scalable, distributed object location and routing for large-scale peer-to-peer systems. ACM International Conference on Distributed Systems Platforms (Middleware) (November 2001), [2] C. Dannewitz, J. Golic, B. Ohlman, B. Ahlgren. Secure Naming for a Network of Information. INFOCOM IEEE Conference on Computer Communications Workshops (March 2010), 1 6. [3] Dagon, D., Antonakakis, M., Vixie, P., Jinmei, T., and Lee, W. Increased dns forgery resistance through 0x20-bit encoding: security via leet queries. In 15th ACM conference on Computer and communications security (2008), ACM, pp [4] Decoupled Application Data Enroute (DECADE). URL [5] Friedlander, A., Mankin, A., Maughan, W. D., and Crocker, S. D. Dnssec: a protocol toward securing the internet infrastructure. Commun. ACM 50, 6(2007), [6] J. Scott, P. Hui, J. Crowcroft and C. Diot. Haggle: A Networking Architecture Degisned around Mobile Users. 3rd Wireless On-demand Network Systems and Services (WONS) (2006). [7] Jacobson, V., Smetters, D. K., Thornton, J. D., Plass, M. F., Briggs, N. H., and Braynard, R. L. Networking named content. In CoNEXT 09: Proceedings of the 5th international conference on Emerging networking experiments and technologies (New York, NY, USA, 2009), ACM, pp [8] Koponen, T., Chawla, M., Chun, B.-G., Ermolinskiy, A., Kim, K. H., Shenker, S., and Stoica, I. Adata-oriented(andbeyond)network architecture. SIGCOMM Comput. Commun. Rev. 37, 4(2007), [9] Merkle, R. C. Acertifieddigitalsignature.In CRYPTO 89: Proceedings on Advances in cryptology (New York, NY, USA, 1989), Springer-Verlag New York, Inc., pp [10] Pallis, G., and Vakali, A. Insight and perspectives for content delivery networks. Commun. ACM 49, 1 (2006), [11] Publish/Subscribe Internet Routing Paradigm. Conceptual architecture of psirp including subcomponent descriptions. Deliverable d2.2, PSIRP project. (August 2008). [12] R. Moskowitz, P. Nikander, P. Jokela, T. Henderson. RFC5201:HostIdentityProtocol,April [13] T. Berners-Lee. RFC1630:UniversalResource Identifiers in WWW, June [14] W. Wong, M. Magalhaes and J. Kangasharju. Piece Fingerprinting: Binding Content and Data Blocks Together in Peer-to-peer Networks. IEEE Global Communications Conference (Globecom 10), Miami, Florida, USA (December ). [15] W. Wong, M. Magalhaes and J. Kangasharju. Towards Verifiable Parallel Content Retrieval. 6th Workshop on Secure Network Protocols (NPSec 10), Kyoto, Japan (October 2010). [16] Yih-Chun Hu, M. Jakobsson and A. Perrig. Efficient Constructions for One-Way Hash Chains. Applied Cryptography and Network Security (May 2005),
DOMAIN NAME SECURITY EXTENSIONS
DOMAIN NAME SECURITY EXTENSIONS The aim of this paper is to provide information with regards to the current status of Domain Name System (DNS) and its evolution into Domain Name System Security Extensions
Information-Centric Networking: Introduction and Key Issues
UCL DEPARTMENT OF ELECTRONIC AND ELECTRICAL ENGINEERING COMMUNICATIONS AND INFORMATION SYSTEMS GROUP ICN Session FIA Budapest Information-Centric Networking: Introduction and Key Issues Prof. George Pavlou
Information-Centric Networking: Overview, Current State and Key Challenges
UCL DEPARTMENT OF ELECTRONIC AND ELECTRICAL ENGINEERING COMMUNICATIONS AND INFORMATION SYSTEMS GROUP COMET-ENVISION Workshop Keynote Information-Centric Networking: Overview, Current State and Key Challenges
Sync Security and Privacy Brief
Introduction Security and privacy are two of the leading issues for users when transferring important files. Keeping data on-premises makes business and IT leaders feel more secure, but comes with
Multicast vs. P2P for content distribution
Multicast vs. P2P for content distribution Abstract Many different service architectures, ranging from centralized client-server to fully distributed are available in today s world for Content Distribution
draft-forwarding-label-ccn- 01.txt
draft-forwarding-label-ccn- 01.txt Ravi Ravindran and Asit Chakraborti Huawei (IETF/ICNRG, Yokohama, 94) [ravi.ravindran@huawei.com] [asit.chakraborti@huawei.com] Agenda Draft Objectives Terminology Why
NextServe Framework: Supporting Services Over Content-Centric Networking
NextServe Framework: Supporting Services Over Content-Centric Networking Dima Mansour, Torsten Braun, and Carlos Anastasiades Communication and Distributed Systems, University of Bern Neubruckstrasse 10,
Information-Centric Networking: Overview, Current State and Key Challenges
UCL DEPARTMENT OF ELECTRONIC AND ELECTRICAL ENGINEERING COMMUNICATIONS AND INFORMATION SYSTEMS GROUP IFIP/IEEE IM 2011 Keynote Information-Centric Networking: Overview, Current State and Key Challenges
Maginatics Security Architecture
Maginatics Security Architecture What is the Maginatics Cloud Storage Platform? Enterprise IT organizations are constantly looking for ways to reduce costs and increase operational efficiency. Although
Securing End-to-End Internet communications using DANE protocol
Securing End-to-End Internet communications using DANE protocol Today, the Internet is used by nearly.5 billion people to communicate, provide/get information. When the communication involves sensitive
Expires: September 15, 2011 X. Chen HUAWEI Technologies March 14, 2011
DECADE Internet-Draft Intended status: Informational Expires: September 15, 2011 L. Chen H. Liu Yale University Z. Huang X. Chen HUAWEI Technologies March 14, 2011 Integration Examples of DECADE System
Certificate Authority Transparency and Auditability
Certificate Authority Transparency and Auditability Ben Laurie Adam Langley 22 Nov 2011 Goal The goal is to make it impossible (or at least very difficult) for a Certificate
A seman(c firewall for Content Centric Networking
A seman(c firewall for Content Centric Networking IFIP/IEEE Integrated Network Management Symposium (IM 2013) - MC2: Security Management and Recovery May 27-31, 2013 David Goergen Thibault Cholez Jérôme
Peer-to-Peer Networks. Chapter 6: P2P Content Distribution
Peer-to-Peer Networks Chapter 6: P2P Content Distribution Chapter Outline Content distribution overview Why P2P content distribution? Network coding Peer-to-peer multicast Kangasharju: Peer-to-Peer
Designing a Cloud Storage System
Designing a Cloud Storage System End to End Cloud Storage When designing a cloud storage system, there is value in decoupling the system s archival capacity (its ability to persistently store large volumes
Best Practices for SIP Security
Best Practices for SIP Security IMTC SIP Parity Group Version 21 November 9, 2011 Table of Contents 1. Overview... 33 2. Security Profile... 33 3. Authentication & Identity Protection... 33 4. Protecting
Live Streaming with Content Centric Networking
Live Streaming with Content Centric Networking Hongfeng Xu 2,3, Zhen Chen 1,3, Rui Chen 2,3, Junwei Cao 1,3 1 Research Institute of Information Technology 2 Department of Computer Science and Technology
SANE: A Protection Architecture For Enterprise Networks
Fakultät IV Elektrotechnik und Informatik Intelligent Networks and Management of Distributed Systems Research Group Prof. Anja Feldmann, Ph.D. SANE: A Protection Architecture For Enterprise Networks W Firewall for CCN. IETF / NMRG March 14, David Goergen Thibault Cholez Jérôme François Thomas Engel
A Firewall for CCN IETF / NMRG March 14, 2013 David Goergen Thibault Cholez Jérôme François Thomas Engel SnT Interdisciplinary Centre for Security, Reliability and Trust OUTLINE Introduction Content Centric
Implementation of P2P Reputation Management Using Distributed Identities and Decentralized Recommendation Chains
Implementation of P2P Reputation Management Using Distributed Identities and Decentralized Recommendation Chains P.Satheesh Associate professor Dept of Computer Science and Engineering MVGR college of
Internal Server Names and IP Address Requirements for SSL:
Internal Server Names and IP Address Requirements for SSL: Guidance on the Deprecation of Internal Server Names and Reserved IP Addresses provided by the CA/Browser Forum June 2012, Version 1.0.
Administering the Web Server (IIS) Role of Windows Server
Course 10972A: Administering the Web Server (IIS) Role of Windows Server Course Details Course Outline Module 1: Overview and Installing Internet Information Services In this module students will learn
Peer-to-peer Cooperative Backup System
Peer-to-peer Cooperative Backup System Sameh Elnikety Mark Lillibridge Mike Burrows Rice University Compaq SRC Microsoft Research Abstract This paper presents the design and implementation of a novel backup
Content Distribution Networks (CDNs)
229 Content Distribution Networks (CDNs) A content distribution network can be viewed as a global web replication. main idea: each replica is located in a different geographic area, rather then in
ARP and DNS. ARP entries are cached by network devices to save time, these cached entries make up a table
ARP and DNS Both protocols do conversions of a sort, but the distinct difference is ARP is needed for packet transfers and DNS is not needed but makes things much easier. ARP Address Resolution Protocol
An Insight into Cookie Security
An Insight into Cookie Security Today most websites and web based applications use cookies. Cookies are primarily used by the web server to track an authenticated user or other user specific details.
Dynamic Adaptive Streaming over CCN: A Caching and Overhead Analysis
Dynamic Adaptive Streaming over CCN: A Caching and Overhead Analysis Yaning Liu, Joost Geurts and Jean-Charles Point JCP-Consult, France firstname.lastname@jcp-consult.com Stefan Lederer, Benjamin Rainer,
T NAF & XML Security and Naming. Pekka Nikander Ericsson Research Nomadiclab & Helsinki Institute for Information Technology
T-110.455 NAF & XML Security and Naming Pekka Nikander Ericsson Research Nomadiclab & Helsinki Institute for Information Technology Contents of this lecture Introduction Layered model, Basic security,
Defenses against Distributed Denial of Service Attacks. Internet Threat: DDoS Attacks
Defenses against Distributed Denial of Service Attacks Adrian Perrig, Dawn Song, Avi Yaar CMU Internet Threat: DDoS Attacks Denial of Service (DoS) attack: consumption (exhaustion) of resources to deny:
Security in Structured P2P Systems
P2P Systems, Security and Overlays Presented by Vishal thanks to Dan Rubenstein Columbia University 1 Security in Structured P2P Systems Structured Systems assume all nodes behave Position themselves in
Distributed Systems: Concepts and Design
Distributed Systems: Concepts and Design Edition 3 By George Coulouris, Jean Dollimore and Tim Kindberg Addison-Wesley, Pearson Education 2001. Chapter 2 Exercise Solutions 2.1 Describe and illustrate
Live Streaming with CCN & Content Transmission with CCNx
Live Streaming with CCN & Content Transmission with CCNx 21 Jun. 2012 Suphakit Awiphan Katto Laboratory, Waseda University Outline Introduces the paper entitled Live Streaming with Content Centric Networking
Security Digital Certificate Manager
System i Security Digital Certificate Manager Version 5 Release 4 System i Security Digital Certificate Manager Version 5 Release 4 Note Before using this information and the product it supports, be
Reliable Strong Cache and Security for the Domain Name System
Reliable Strong Cache and Security for the Domain Name System S. Pari Elavarasan #1, K. Sampath Kumar *2 # Department of Computer Science and Engineering, PGP College of Engineering and Technology, Namakkal,
What is Web Security? Motivation
brucker@inf.ethz.ch Information Security ETH Zürich Zürich, Switzerland Information Security Fundamentals March 23, 2004 The End Users View The Server Providers View What is Web
A Framework for Mobility and Flat Addressing in Heterogeneous Domains,
In-Network Caching vs. Redundancy Elimination
In-Network Caching vs. Redundancy Elimination Liang Wang, Walter Wong, Jussi Kangasharju Department of Computer Science, University of Helsinki, Finland School of Electrical and Computer Engineering, University
Global Server Load Balancing
White Paper Overview Many enterprises attempt to scale Web and network capacity by deploying additional servers and increased infrastructure at a single location, but centralized architectures are subject
Computer Networks: Domain Name System
Computer Networks: Domain Name System Domain Name System The domain name system (DNS) is an application-layer protocol for mapping domain names to IP addresses DNS 208.77.188.166
F-Secure Internet Security 2014 Data Transfer Declaration
F-Secure Internet Security 2014 Data Transfer Declaration The product s impact on privacy and bandwidth usage F-Secure Corporation April 15 th 2014 Table of Contents Version history... 3 Abstract... 3:
Application-layer protocols
Application layer Goals: Conceptual aspects of network application protocols Client server paradigm Service models Learn about protocols by examining popular application-level protocols HTTP DNS Application-layer
Computer Networks - CS132/EECS148 - Spring 2013 ------------------------------------------------------------------------------
Computer Networks - CS132/EECS148 - Spring 2013 Instructor: Karim El Defrawy Assignment 2 Deadline : April 25 th 9:30pm (hard and soft copies required) ------------------------------------------------------------------------------.
Installation and configuration guide
Installation and Configuration Guide Installation and configuration guide Adding X-Forwarded-For support to Forward and Reverse Proxy TMG Servers Published: May 2010 Applies to: Winfrasoft X-Forwarded-For
Web Services Security with SOAP Security Proxies
Web Services Security with Security Proxies Gerald Brose, PhD Technical Product Manager Xtradyne Technologies AG OMG Web Services Workshop USA 22 April 2003, Philadelphia Web Services Security Risks! Exposure
Designing a Windows Server 2008 Network Infrastructure
Designing a Windows Server 2008 Network Infrastructure MOC6435 About this Course This five-day course will provide students with an understanding of how to design a Windows Server 2008 Network Infrastructure
Guidelines for Web applications protection with dedicated Web Application Firewall
Guidelines for Web applications protection with dedicated Web Application Firewall Prepared by: dr inŝ. Mariusz Stawowski, CISSP Bartosz Kryński, Imperva Certified Security Engineer INTRODUCTION Security | http://docplayer.net/1274642-Secure-naming-in-information-centric-networks.html | CC-MAIN-2018-05 | en | refinedweb |
SDL_MixAudioSection: SDL API Reference (3)
Updated: Tue 11 Sep 2001, 22:58
Index Return to Main Contents
NAMESDL_MixAudio- Mix audio data
SYNOPSIS
#include "SDL.h"
void SDL_MixAudio(Uint8 *dst, Uint8 *src, Uint32 len, int volume);
DESCRIPTION.
- Note:.
SEE ALSO
SDL_OpenAudio | http://www.thelinuxblog.com/linux-man-pages/3/SDL_MixAudio | CC-MAIN-2018-05 | en | refinedweb |
How to find the top n elements without sorting the array?
Is this possible???
Well I don't want to sort the array because the order is mandatory?
I don't want a direct solution to this, and this is no homework :D, hints are appreciated
Ty
Printable View
How to find the top n elements without sorting the array?
Is this possible???
Well I don't want to sort the array because the order is mandatory?
I don't want a direct solution to this, and this is no homework :D, hints are appreciated
Ty
Within the last week, someone asked how to find the five smallest elements. Changing "smallest" to "largest" and "5" to "n" would give the same answer, I should think.
partial_sort_copy sounds like what you need.
What do u mean?
I want the top n elements in the array without sorting the original array, and then want to make modifications to the top n elements without changing the order of the array
How ?
An array of n pointers to the top-n elements of the array.
This is of course always sorted, so you only compare a[i] with pa[n-1] to determine if you've got a new top-n entry.
Can u simplify what you mean?
Do you mean that I create n pointers to the top elements?
Ok but if I go this way, I'm still having an obstacle on my way. How should I determine the top n elements? Should I sort a copy of the array? or If I find the top element using max_element, then how do I find the next top element?
I think Salem means that you can sort pointers to the element instead, based on the value of the elements.I think Salem means that you can sort pointers to the element instead, based on the value of the elements.Quote:
Originally Posted by manzoor
One way would be to create a vector of pointers to the elements of the container. If you need the top n elements in sorted order, then use std::partial_sort(), otherwise use std::nth_element() on the vector of pointers with some suitable predicate so that you can sort based on the elements rather than their addresses. When you are done, just take the first n pointers from the vector.
Can we sort the elements with their pointers and modify them without changing their order in the original array?
Yes, that's what we've been talking about.Yes, that's what we've been talking about.Quote:
Originally Posted by manzoor
Yes, just build an array of pointers where each pointer points to an item in the originaly array. Then sorting is done by comparing what the pointers in that array point to, but swapping is done directly on those pointers.
For efficiency, you should also either keep the pointer array around all the time and keep it up to date,
OR you can create it each time, but not perform a full sort, and by that I mean performing Quicksort partitioning steps, but mostly only on the upper partition, and only on the lower partition when the upper one plus the pivot doesn't give enough items. This avoids a full sort, but still gives you the items you want. partial_sort_copy probably does something like this so you don't have to code that yourself. Just use greater-than as your comparison function.
It depends: as I mentioned in post #7, partial_sort is appropriate if the elements are to be in sorted order, otherwise nth_element is likely to be more appropriate.It depends: as I mentioned in post #7, partial_sort is appropriate if the elements are to be in sorted order, otherwise nth_element is likely to be more appropriate.Quote:
Originally Posted by iMalc
is this ok?is this ok?Code:
#include <iostream>
#include <vector>
#include <ctime>
#include <algorithm>
#include <numeric>
bool myFunction(int* i, int* j)
{
return *i > *j;
}
using namespace std;
int main()
{
vector<int> numbers;
vector<int*> pNumbers;
srand(time(0));
const int vecSize = 10;
for (size_t i = 0; i < vecSize; i++)
{
numbers.push_back(rand() % 10 + 1);
cout << numbers.at(i) << endl;
pNumbers.push_back(&numbers.at(i));
}
partial_sort(pNumbers.begin(), pNumbers.begin() + 5, pNumbers.end(), myFunction);
cout << endl << endl << endl;
for (size_t i = 0; i < pNumbers.size(); i++)
cout << *pNumbers[i] << endl;
return EXIT_SUCCESS;
}
But this isn't working, there's something wrong in the assigning the pointers. What am I doing wrong, am I initializing the pointers correctly?
Take the addresses only after you have finished pushing back the main data. A push_back can move everything to another place in memory if the vector runs out of space, so all the pointers in the second vector become invalid (or use indices instead of pointers / or reserve enough memory in the first vector so the data won't be relocated).
Thanks everyone, I think I got it now.
Again thanks everyone for helping. This site has been a great resource for programming. Keep it up. I'm waiting for the day when I'll be helping people :). | https://cboard.cprogramming.com/cplusplus-programming/109299-find-top-n-elements-printable-thread.html | CC-MAIN-2018-05 | en | refinedweb |
- David E Smyth: "Tcl and concurrent object oriented flight software: Tcl on Mars ".
- James R Slagle and Zbigniew Weickowski: "Ideas for intelligent user interface design"
- Richard Golding, Carl Staelin, Tim Sullivan and John Wilkes: "Tcl cures 98.3% of all known simulation configuration problems, claims astonished researcher!"
- David Richardson: "Interactively configuring Tk-based applications"
- Tom Phelps: "Ariadne"
- Mike Hoegeman: "The sensor shells: an automated weather observation system"
- James Bassich and Gerald Lester: "Tcl as a strategic tool for the system integrator"
- Michael McLennan: "[incr Tk]: building extensible widgets with [incr Tcl]"
- Lindsay Marshall: "Nautilus: 20,000 leagues under the Tcl"
- Wayne A Christopher: "A 3D viewer widget for Tk"
- Eric M Jordan: "An environment for the development of interactive music and audio applications"
- Benjamin B Bederson and James D Hollan: "Pad++: a zooming graphical interface widget for Tk"
- Frank Stajano: "Writing Tcl programs in the Medusa applications environment"
- Patrick Duval and Tie Liao: "Tcl-Me, a Tcl multimedia extension"
- George Howlett: "Packages: adding namespaces to Tcl"
- Adam Sah, Jon Blow, and Brian Dennis: "An introduction to the Rush language"
- Max Ott: "Jodler: a scripting language for the Infobahn"
- Kevin Kenny: "Dynamic loading for Tcl: (What became of it?)" [1]
- John Menges and Brian Ladd: "Tcl/C++ binding made easy"
- Douglas Pan and Mark Linton: "Dish: a dynamic invocation shell for Fresco" | http://wiki.tcl.tk/836 | CC-MAIN-2018-05 | en | refinedweb |
)
Suthahar J(8)
Jinal Shah(4)
Gourav Jain(4)
Syed Shanu(3)
Viral Jain(3)
Manpreet Singh(3)
Vijai Anand Ramalingam(3)
Ibrahim Ersoy(2)
Sumit Singh Sisodia(2)
Dennis Thomas(2)
Mangesh Kulkarni(2)
Ankit Sharma(2)
Nirav Daraniya(2)
Mani Gautam(2)
Mahender Pal(2)
Swatismita Biswal(2)
John Kocer(2)
Abubackkar Shithik(2)
Tahir Naushad(2)
Akshay Deshmukh(2)
Shiv Sharma(1)
Neel Bhatt(1)
Prashant Kumar(1)
Lakpriya Ganidu(1)
P K Yadav(1)
Jayanthi P(1)
Kantesh Sinha(1)
Shweta Lodha(1)
Mushtaq M A(1)
Gowtham K(1)
Munish A(1)
Puja Kose(1)
Ahsan Siddique(1)
Najuma Mahamuth(1)
Bhuvanesh Mohankumar(1)
Iqra Ali(1)
Nithya Mathan(1)
Sarath Jayachandran(1)
Rahul Dagar(1)
Areeba Moin(1)
Ali Ahmed(1)
Allen O'neill(1)
Srashti Jain(1)
Vincent Maverick Durano(1)
Mohamed Elqassas Mvp(1)
Lou Troilo(1)
Santhakumar Munuswamy(1)
Shantha Kumar T(1)
Nilesh Shah(1)
Talha Bin Afzal(1)
Deepak Kaushik(1)
Vinoth Rajendran(1)
David Mccarter(1)
Shakti Singh Dulawat(1)
Yusuf Karatoprak(1)
Hussain Patel(1)
Ankit Saxena(1)
Hamid Khan(1)
Resources
No resource found..
Navigation Drawer Activity In Android
Dec 27, 2017.
In this article, we will learn how to use a single navigation drawer for different activities.
Getting Started With Azure Service Bus
Dec 26, 2017.
From this article you will learn an overview of Azure service bus and ow to create an Azure service bus namespace using the Azure portal.
Dec 14, 2017..
Getting Started With Angular 5 Using Visual Studio Code
Dec 07, 2017.
In this article, we are going to set up Angular 5 app using Visual Studio Code.
Audit Made Easy Without Audit Log - Part One
Dec 07, 2017.
In Microsoft SQL Server, the activity of each of the database table is tracked in the other table and that is called the Audit trail or Audit log of the database table.
Callback Concept And Events In Node.js
Dec 06, 2017.
Hello friends, today I explain you about callbacks and events in Node JS. People who are new to Node JS please learn previous articles NodeJS - Getting Started With Some Basic!!.
Getting Started With Angular 5 And ASP.NET Core
Nov 13, 2017.
I hope you all know that Angular 5 has been released. In this article, we will see how to start working with Angular 5 and ASP.NET Core using Angular5TemplateCore..
Leadership Challenge 002 - What Is Your Coaching Fitbit
Oct 07, 2017.
Many people have embraced the "Fitbit" craze. The company has great marketing: “Fitbit tracks every part of your day—including activity, exercise, food, weight and sleep—to help you find your fit, stay motivated, and see how small steps make a big impact.”. People wear it daily, slapping it on their wrist the second they get out of bed…or even sleeping with it Sensor Android App Using Android Studio
Oct 03, 2017.
In this article, I will show you how to create a Sensor Android App using Android studio. we are going to create a sensor application that changes the background color of an activity when a device is shaken. Migrate Your On-Premises / Enterprise Data Warehouse Into Azure SQL Data Warehouse
Sep 21, 2017.
I will share how you can start migrating your data into the Azure SQL Data Warehouse!
How To Create A Camera Application In Android Using Android Studio
Sep 12, 2017.
Android is one of the most popular operating systems for mobile. In this article, I will show you how to start the camera application in Android using Android Studio
ASP.NET Core 2.0.
I Am A Programmer And I Love To Exercise
Sep 04, 2017.
here I am going to talk about fitness and health. I will start with very basic things with which you can improve your health and business.. Tables
Sep 01, 2017.
In this article, we will walk through some important concepts of Azure Tables through Emulator. Create the Azure Tables by using Microsoft Azure Storage Explorer... start-activity
NA
File APIs for .NET
Aspose are the market leader of .NET APIs for file business formats – natively work with DOCX, XLSX, PPT, PDF, MSG, MPP, images formats and many more! | http://www.c-sharpcorner.com/tags/start-activity | CC-MAIN-2018-05 | en | refinedweb |
Learning Python, 2nd Edition 322
Python is a dynamic, interpreted, object oriented language used for both scripting and systems programming. Python is known for being easy to learn and use, while also being powerful enough to be used for such projects as Zope and the Chandler project. Its growing popularity is also based on its reputation for fostering programmer productivity and program maintainability. One drawback sometime cited is its relatively slow execution speed compared to compiled languages such as C.
For myself, I have probably read too many books about Python, but that is because I am an amateur hacker who learns programming slowly, and I find that reading several books about the same topic, covering the subject matter from different angles, allows me to better absorb the material. For me, this was a good review of the core language and a welcome refresher course on the newer aspects introduced in versions 2.2 and 2.3. For anyone who is new to Python and wants to learn from the ground up, this book would be a great place to start.
Mark Lutz is an authority on Python and one if its leading teachers, with both Learning and O'Reilly's Programming Python to his credit, as well as the courses and seminars he teaches professionally. In updating the original version, which was already very good, Mark has polished the chapters on the core language to a nearly perfect level, while his co-author David Ascher has done the same on the more advanced aspects of the book. In addition, Mr Lutz has benefited from extensive feedback from students and readers, and his explanations therefore anticipate common misunderstandings. Each chapter is accompanied by a problem and exercise section and answers are included at the back of the book.
A major addition to the new edition is a chapter on "Advanced Function Topics," including list comprehensions, generators and iterators. Python is sometimes used with a functional programing style almost similar to Lisp, although to List purists that may sound like heresy. The recent versions of the language have significantly upgraded Python's support for the functional style. Functions cover three chapters in the 2nd edition instead of just one.
Another major change since the first edition is extended coverage of Modules, which now occupies four chapter instead of just one. Python modules are a high level package structure for code and data, and they help facilitate code reuse. Yet another addition is coverage of Python's "new style classes." Coverage of classes and object oriented programming has been greatly expanded and now includes five whole chapters and almost 100 pages. Coverage of exceptions now is expanded to three chapters.
If you have been considering learning Python, now would be a great time since this new book is the perfect introductory text. If you already know Python and have read the first edition of Learning Python or another introductory text, then this book may not be essential since the new language features are covered pretty well on the web in various places, and you might be better advised to read one of the other fine books on non-introductory aspects of Python. But this book is about as good an introduction to the language as you are likely to find. The book does not cover all of the Python libraries nor many other topics, but it does briefly touch on the major libraries, frameworks, gui toolkits, and community resources.
If you want to learn the core Python language quickly, this may be your best bet. Learning Python only covers the basics, but it is deep in information on what it does cover. Well written, understandable, and in a very logical arrangement, this book is densely packed with info.
I have often found myself returning to the original book, and the new book will now fill this role. It is deep in information, well written, and a joy to read. For an experienced programmer who is just learning Python, it may be possible to thoroughly learn everything about the core language in one reading of this book. For relative newbies, it will be an often-used resource.
To read more reviews of books about Python, visit the Python Learning Foundation. You can purchase the Learning Python, 2nd Ed. from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page.
A nice comparison of Python with other languages.. (Score:5, Informative)
I prefer Ruby [rubyforge.org], but there seem to be a lot of healthy discussions of various language features and ideas across the scripting language community. The "Python comparison page", for example, has a link to John Ousterhout's paper on why scripting languages are useful - even thought he wrote the paper about Tcl, it's just as applicable to Python or Ruby.
Re:A nice comparison of Python with other language (Score:3, Informative)
I've used python for several years, but only 2 weeks after stumbling across "Programming Ruby" (Available free at [rubycentral.com]), I've switched all new development to Ruby. As a language, it just clicks for me. It's like the best of all possible worlds. It brings in the cleanliness of Python (without the whitespace issues, for those who dislike that), the hack value of Perl (tightly integated regex, etc), the OO of smalltalk/java (for those that like that kind of thing)
Re:A nice comparison of Python with other language (Score:2)
adict = {'key': 'val', 'key2': 'val2', 'key3': 3.14}
for key in adict:
print 'k -> ', adict[key]
Re:A nice comparison of Python with other language (Score:2)
make that last line:
print '%s -> %s' % (key, adict[key])
Re:A nice comparison of Python with other language (Score:5, Insightful)
1) I prefer the Python example. It's easier to read. And easier to write the first time. That's the reason Python is better than Ruby: when you write code, you get it right the first time more often. And that's such a huge advantage.
Of course, you screwed up. (I assume you wanted to print the value of k rather than the letter k)
2) You used a comprehension in Ruby but not in Python. And adict.items() would have been easier than adict.keys()
To be more fair:
adict = {'key':'val','key2':'val2','key3':3.14}
[sys.stdout.write(k+'-> '+str(v)+'\n') for k,v in adict.items()]
But if I was writing the code, I'd use a for loop rather than a comprehension.
But for penis length competitions, we'll use the latter.
Bryan
Re:A nice comparison of Python with other language (Score:2)
My rule of thumb: if I don't need the results of a list comprehension, it's probably best written as a regular for loop. The fact that the original poster used 'sys.stdout.write' when he wanted to print something is a dead-giveaway (print is a statement, which is why it's not valid in the list comp):
This:
[sys.stdout.write(k+'-> '+str(v)+'\n') for k,v in adict.items()]
Could have just as easily been written (more clearly) a
Re:A nice comparison of Python with other language (Score:2)
[print k+' -> '+v for k,v in adict.items()]
Re:A nice comparison of Python with other language (Score:2)
The comprehension doesn't add anything to this example. Let's stick with the time-honored:
for k, v in adict.items():
print k+'->'+v
(you can put it on one line if you really want)
Re:A nice comparison of Python with other language (Score:2)
for k, v in adict.iteritems():
print k, '->', v
I think this is actually more readable than Ruby's "each" function which is a weaker alternative to iteration.
Re:A nice comparison of Python with other language (Score:3, Interesting)
It generates a graphical waveform from the component harmonics, as well as graphing the first 10 harmonics.
Source Code: [tylere.com]
Some Examples:
Square Wave [tylere.com]
A Phased Sine Wave [tylere.com]
Triangular Wave [tylere.com]
Sawtooth Wave [tylere.com]
Re:A nice comparison of Python with other language (Score:2, Informative)
Most of my development deals with internationalisation and support for many languages and scripts. For that reason, I prefer Python, because one can make all internal strings Unicode no longer have to worry about a million character sets. Ruby, on the other hand, lacks support for Unicode. I know the iniciator of Ruby comes from Japan, where because of CJK chauvinism Unicode sadly hasn't caught on yet, but this is the 21st century, and a scripting language without support for Unicode is just unacceptable.
Re:A nice comparison of Python with other language (Score:3, Informative)
Virtual +1 insightful to you (Score:3, Insightful)
Your "Python stole the thunder" analysis is not quite right, though, and it relates to why I prefer Python.
Ruby is as old as Python, but Matz wrote Ruby to do his own Japanese *nix work. He focused on his own needs, but made it available to all, so essentially he was focusing on the Japanese *nix community and their needs. The Japanese *nix community, for example, cares far more about handling legacy Japanese data than about handl
Lisp vs Python comparison (Score:4, Informative)
Python? (Score:5, Informative)
Open source, expressive (very short code can achieve a lot), readable (very short expressive code is easily groked -- fewer bugs), no direct pointer manipulation (safe -- fewer bugs), integrates nicely with other languages, runs on a variety of platforms, very easy to learn.
I, too would recommend learning python. It is a very good, language. Zeolotry is another thing though. Keep your mind open. Learn all the languages you can. This book, I can't comment, although I received it a week ago I haven't gotten around to reading it yet.
Re:Python? (Score:5, Interesting)
The importance and utility of this can't be overstated. Python absolutely rocks as a rapid development environment. I have not personally experienced a language that lets me go from concept to implementation nearly so quickly. Once an application is up and running, Python provides a great toolset for profiling your project and making it easy to replace performance-critical sections with the low-level language of your choice.
Does your crypto application need a faster random generator? Replace parts of that module with C. The rest of your project still gets the benefits of a strongly typed, object-oriented language with a robust library of string manipulation, pattern matching, and GUI interfacing functions.
It really is a project manager's dream come true. Python has replaced Perl as my language of choice for all new development.
Re:So, ... (Score:2)
Re:Python? (Score:2)
No. Learn all the languages that will make you a better programmer. Learn languages that will expand your understanding of software engineering. Learn languages that will make you more productive. But for goodness' sake, don't waste your time learning every language you run across: the vast majority of languages are simply unproductive compared to a small handful of better, higher-quality languages.
Languages can be compared based on objective criteria. You can honestl
Re:Python? (Score:2, Interesting)
What I'm kinda curious about is whether they use mod_python. From the looks of the URL's on Adsense and AdWords it seems likely that they are using the Publisher handler, but I've never heard any official (or even rumor) about it.
Re:Python? (Score:3, Informative)
One of the best examples of this is Jython - python implemented in 100% java. This allows the flexibility of a scripting language with the security and portability of Java.
1st edition (Score:4, Informative)
Maybe its just me though.
Re:1st edition = PAPERWEIGHT (Score:4, Interesting)
Anyway. I immediately ran off to the nearest bookstore and grabbed the first edition of the book. I read it once through and it--along with a lot of googling--helped me understand what I was doing, but once I had gone through it once I couldn't use it to recall the details of what I had been taught. If I wanted to look up something that I knew I had learned *from the book* I would have to look it up *on the web* (e.g. syntax or the required parameters of a function) because the index was useless. I never found anything I needed from that book once I did the initial once-through reading.
Though let's not gloss over the fact that I obviously learned python fairly well from this book because I did get the job! So sure, if you need to learn the language, the first edition did the job, but you'd better buy a *real* python book while you're there at the bookstore because as soon as you were done with Learning... it was nothing more than a paperweight.
Re:1st edition = PAPERWEIGHT (Score:2)
Re:1st edition = PAPERWEIGHT (Score:2)
on par with TCL except for C (Score:3, Interesting)
While TCL remains my personal favorite, Python is really good, except for the creating-your-own-extensions part. The Python's C API needs a lot of catching up to do to match the excellence of TCL's.
Re:on par with TCL except for C (Score:4, Informative)
It uses C++ and lots of template mojo, but you don't really need to understand all that to use it.
Python is amazing (Score:5, Informative)
This is not a religious argument; I'm not advocating that python is the one language you should use or anything like that. In fact, not having an "ideology" is one of python's major strengths.
If you're asking "why python [linuxjournal.com]", ESR has said it better than I ever could.
I'm yet another of those who experienced extremely small turnaround times for python programs. It took me a week, working part time (I estimate about 30 hours) totally, to release 1.0 of gretools [ernet.in], starting from scratch. I had not written a single line of python code before that, mind you.
Why python is great:
Its not a religion. It doesn't force its style of thinking on you. Functional programming, excellent string manipulation tools, classes, inheritance, exceptions, polymorphism, operating system integration, they're all available. This is python's biggest advantage. Whichever background you're coming from, you can very quickly become effective at python.
Incredibly compact code. This is largely a consequence of the previous point. Apart from that it is dynamically typed, and has lots of other cool features. Like doing away with braces for delimiting blocks. People who know nothing about the language flame it for using indentation, but I have never found it confusing, and it makes the code smaller far more readable.
A user-friendly programming language! You aren't going to believe this until you've actually programmed in python. Its got this amazing property that if you can express a thought in constant space mentally, then you can code it in a constant number of lines, most of the time in a single line. In other words, the abstractions of the programming language match the "natural" abstractions of programmers very closely. After just a couple of days I got so used to this that I began to "predict" language features intuitively. At one point I just knew there had to be a language construct for something I was trying to do, and found that it was the reduce function.
Simple syntax. Python manages to have all these features while retaining a very simple syntax, perhaps even simpler than C. This is a big plus, because it gets out of the way and Does What You Mean.
Convinced? Get started now!
Re:Python is amazing (Score:2)
I agree with you overall, but Python does bow down to the temple of OOP. To me, a heavy OOP style of thinking feels very dated, even though Python has a very dynamic nature to its OOPness.
Re:Python is amazing (Score:2)
Of course, I don't know how "object oriented" is the same as "dated" in your head, so maybe it won't work for you.
Re:Python is amazing (Score:2)
Maybe I slept through too many CS classes and can't wrap my brain around OOP, maybe it's my failings as a coder, maybe I have a lot less free time nowadays than I used to, but I've been able to do a ton of useful work in Perl and haven't even been able to get started in Python.
Maybe I'll give it an 8th try one of these days. I feel like I'
Re:Python is amazing (Score:2)
Re:Python is amazing (Score:2)
Interesting point - most of the learning material is OOP-related, though, so I feel pretty lost trying to learn Python without knowing OOP. But that's good to know.
Re:Python is amazing (Score:5, Interesting)
I guess Perl is just traditionally what you do these things with. It's not necessarily better. Perl also doesn't support Windows directly like Python does - if you want Perl in Win32 you pretty much have to go with ActiveState whereas Python.org has a Win32 specific distribution. Then again, it's difficult to compete against CPAN's sheer size.
But anyway, it doesn't matter. We use what we want/like and it's cool that we have choices.
However, over the past year or so I've also been looking at Ruby. Not to get into a religious argument (as you say) over which language is better, but if you like Python you should take a look at Ruby. If you're a Windows user there's an installer [sourceforge.net] available, which comes with a full book (in CHM format) that can get you running in no time if you already know Python. As Perl and Python, Ruby has extensions and so on. I do like the OO features in Ruby a bit better than Python.
And least but not least, there's Lua. I wouldn't use Lua the same way I use Python, but Lua is a joy to embed, much more so than Python.
Ahhh, language wars. Cheers =)
Re:Python is amazing (Score:2)
What is wrong with Activestate? Activestate is Perl's "Win32 specific distributition". Don't really see the difference.
Re:Python is amazing (Score:2, Interesting)
There's nothing wrong with ActiveState, except that they lag behind the main *nix releases and are generally slow to incorporate fixes. It also ships with a bunch of stuff you might not necessarily want. For example, the COM extensions. The fact that I'm running in Windows doesn't necessarily mean I want to use COM. It also takes way too long to install, considering what it is.
The Win32
Python vs Java (Score:2)
Generally speaking, I liked Python quite a bit, especially the identation delimeting code blocks thing quite a bit, especially the visual cleanness of the code (although this was slightly offset by the need to dereference member variables with "this->"). I found the module system a bit squirrely, especially th
Re:Python is amazing (Score:2)
You mean they've fixed it so you don't have to rigidly adhere to Python's indentation conventions? 'Cause that and tab damage are the things that keep me from learning it.
I know what you mean. I've avoided C because of those damned curly braces. Give me good ol' "begin" and "end" blocks from Pascal - at least with those you know where you stand!
Upgrade? (Score:2)
If there's anything I hate, it's these big, thick, 1000-page (or 500-odd page) books which tell me how to use the Help system in Appendix 42.
So, I'm always wary.
O'Reilly & Python (Score:3, Informative)
For experienced programmers (Score:5, Insightful)
Free Python Books (Score:5, Informative)
Even if you do mind reading books on your computer screen, most of these books (actually I think all of them) are also available as physical printed books as well.
Thinking In Python [mindview.net] by Bruce Eckel
An Introduction to Python [network-theory.co.uk] by Guido van Rossum, and Fred L. Drake, Jr. (Editor)
How To Think Like a Computer Scientist: Learning with Python [greenteapress.com] by Allen Downey, Jeff Elkner and Chris Meyers
Dive Into Python: Python for Experienced Programmers [diveintopython.org] by Mark Pilgrim
Text Processing In Python [gnosis.cx] by David Mertz
Python Language Reference Manual [network-theory.co.uk] by Guido van Rossum
python runtime (Score:4, Insightful)
Re:python runtime (Score:2)
Oh well, don't drag Java into that. Especially concerning speed.
BTW, I'm not sure about python, someone please straighten this out - I bet it's similar, but Perl isn't "interpreted language" as such. It is some kind of hybrid: From user point of view it's interpreted, the program is its own source code etc. But from the system's point of view, it's compiled, only compilation takes place right before launching t
Re:python runtime (Score:3, Informative)
I love this setup. Your program will st
Re:python runtime (Score:2)
Yes, similar to a
.java file.
A JIT-enabled JVM emits real machine code, IIRC.
Nope. The JVM executes bytecode. The JIT compiles the bytecode into native machine code and caches the results. Before a given block of code is executed, the JVM checks its cache to see if it's still stored, and if so, it executes the pre-compiled version.
Note that there's no strong reason (that I know of) why the Python VM couldn't also compile Python bytecode int
Re:python runtime (Score:2)
Re:python runtime (Score:2)
Second, by definition, a JIT compiler runs immediately before the bytecode in question is to be executed. A system that turned the bytecode into native machine code would just be a plain ol' compiler.
Third, Psyco [sourceforge.net] might come pretty close to what you want, except that it doesn't write out a compiled binary.
Re:python runtime (Score:2, Interesting)
Re:python runtime (Score:3, Interesting)
Re:python runtime (Score:4, Interesting)
Please don't confuse performance and size. Larger systems don't require bigger performance, performance is needed in tight inmost loops. And those you can implement in C while retaining the rest of the Python code.
actually... (Score:2, Interesting)
i think EVE proves that python is ready for big projects, even when performance is critical.
Re:python runtime (Score:2)
No, it is the tradeoff between efficient and less efficient implementations of a language. There are languages that usually come with rather fast implementations while being very dynamic and flexible, like Common Lisp, which is usually compiled to native code. (Of course, there are also completely crappy and inflexible languages that are slow a
Half Life (Score:2)
Re:No, It's Just Slow (Score:2)
Finkployd
Test of a language (Score:5, Insightful)
def quicksort(list):
if len(list) > 1:
pivot = list[0]
left = [x for x in list if x < pivot]
right = [x for x in list if x > pivot]
pivot = [x for x in list if x == pivot]
return quicksort(left) + pivot + quicksort(right)
return list
I'd say this speaks for itself. Enjoy.
Re:Test of a language (Score:2)
Try:
def quicksort(list):
if len(list) > 1:
pivot = list[0]
left, middle, right = []. []. []
for item in list:
if item < pivot: left.append(item)
if item == pivot: middle.append(item)
if item > pivot: right.append(item)
return quicksort(left) + middle + quicksort(right)
else:
return list
The
left = middle = rig
Re:Test of a language (Score:2)
Yep. Another "failing" of this implementation (and yours, too) is that it doesn't do the sort in-place, so it uses n log n memory. The in-place sort implementation is what makes most really fast implementations so hairy and difficult to understand.
Re:Test of a language (Score:2, Informative)
Re:Test of a language (Score:2)
Are you sure? In the typical case, the list is split evenly in half, with n/2 items in the left side, and n/2 items in the right. Therefore at recursion frame k there are n/2+n/2=n items in memory. Since the list will split log n times, there are log n recursion frames, which means n log n memory in use at the deepest level of recursion.
Have I missed something subtle here?
Re:Test of a language (Score:2)
I guess that shows I do much more time analysis than space analysis...
Silly trolling article writer. (Score:5, Informative)
A mention of the Psyco [sourceforge.net] Python runtime compiler is in order. It's simple to use as well - all you do is put this at the top of your entry script: All routines called are then compiled from bytecode on-the-fly into native x86 code. It's not quite as fast as C - but with Psyco you can easily get close, especially if you design your algorithms properly.
While I'm here, these are the Python packages that I find essential once I have the base installation [python.org] (which includes the IDLE IDE). I've used these packages under Windows, but most work on Linux as well:
Re:Silly trolling article writer. (Score:3, Interesting)
Pyrex [canterbury.ac.nz]
You need a book? (Score:5, Funny)
You need a book to learn Python?!??!!? My god, I'm an old C++ programmer, Python is like a gift from a god!
You just have to bang your head against the keyboard a couple of times and I bet you it compiles!
Re:You need a book? (Score:2, Funny)
What would be nice ... (Score:2)
Is a book review that wasn't for lavae [catb.org].
It's like going to the local bookstore and hoping for something more to buy than Learn VB in 24 Hours.
I would love to use Python (Score:5, Insightful)
That said I can't justify the switch. There are just too many good modules available in Perl (esp for the engineering work I do). When python has the bredth of packages that Perl does, and when they have a nicely organized way to access said modules, I'll be happy to switch.
My review (Score:2, Insightful)
Re:My review (Score:2)
Perhaps they might want a more Agile language, instead of clunky C++/Java they are using at the moment? Going with Python, they can retain the scalability while developing code/unit tests/prototypes much faster. Being primarily a C++ programmer, getting to program in Python is extremely liberating. It feels like being able to talk fluently again, instead of measuring every word carefully and th
Re:My review (Score:2)
Apparently you've never worked on a project with a "core" header file that gets #include'd by about 5000 source modules. Make one little diddly change to that header and you have to recompile 5000 files.
It can sure as hell be a massive waste of time. Now, whether or not it's good practice to structure your program in such a way that everything depends on a single head
Re:My review (Score:4, Informative)
Important correction - Perl can look like assembler... but it doesn't have to. A Perl script can be as clean and readable as you want it to be. Ugly code is a result of lazy programmers, not the language itself.
Re:There's one major reason I choose Python over P (Score:2, Interesting)
Re:There's one major reason I choose Python over P (Score:3, Informative)?
Re:There's one major reason I choose Python over P (Score:4, Interesting)
OTOH, Perl as a language is unbelievably flexible and convenient to work with, but it's most definitely a more "hackish" language, in that it's grown more than it's been designed. As such, it's definitely more of a developer's language (ie, has many of the features which, while not necessarily incredibly elegant, are *really* convenient) than a theorist's language.
So, then, why pick one over the other? Frankly, in the end, I suspect it's just personal preference (or predjudice).
Re:There's one major reason I choose Python over P (Score:2)
I think this depends a lot on 1) the application and 2) the chosen architecture. I've written some rather largish Perl projects (well, not that large... ~8000 lines) with little difficulty. It really just depends on the choice of design and how well Perl's language features match that design.
Re:There's one major reason I choose Python over P (Score:3, Informative)
Re:There's one major reason I choose Python over P (Score:3, Interesting)
I sort of hope that Parrot will help Perl overcome its introversion, and let it integrate more readily with other languages.
I think that C++ and Python form a dynamic duo. You can put the effort into compiling items that benefit from such, and glue them together in Python most agreeably.
The C++ standard library focused on Platonic abstractions, but Boost is pulling C++ in more mainstream directions. And that's a beautiful thing.
While issuing random plugs, check out Leo [sourceforge.net]. It's not too ofte
Re:There's one major reason I choose Python over P (Score:2)
Re:There's one major reason I choose Python over P (Score:2)
This turned me off of python for a while, and through the 1.x series of python, the answer was pretty much that python indeed did not have much over perl. I'm astonished perl still doesn't have formal parameters, but that's more a glaring lack in perl than a novel feature in python. Lack of funky "decorations" on variables
Now python
Re:There's one major reason I choose Python over P (Score:4, Informative)
Re:There's one major reason I choose Python over P (Score:5, Informative)
Python doesn't have static typing; it has dynamic typing like Lisp, Ruby, Perl, etc. The difference between Python and Perl is that Perl has rather weak dynamic typing. For example, Perl tries to treat strings and numbers as the same type (resulting in the use of strange constructs such as the value "0 but true"). Python and most other dynamically typed languages have stronger typing, with distinctions between strings, integers, floats, etc.
Static typing means that each variable is only allowed to hold values of one type. Usually the variable types are manually declared (as in C or Java), but some languages (like Haskell, IIRC) can infer the types.
Re:There's one major reason I choose Python over P (Score:3, Funny)
Re:There's one major reason I choose Python over P (Score:5, Informative)
Strong typing is when your language will only allow appropriate operations to be performed on values of the appropriate type.
Weak typing is the opposite, where a language will implicitly convert between (possibly incompatible) types or will simply allow any operation to occur.
Static typing is when a language enforces its typesystem (whether it be strong or weak) at compile time.
Dynamic typing is the opposite, when a language enforces its typesystem at runtime.
Python is strongly, dynamically typed. If you try to perform an integer operation on a string, it will check this at runtime and raise an exception. It will not perform the operation.
Perl is weakly, dynamically typed. If you try to perform an integer operation on a string, it will implicitly convert that string to an integer (using 0 in the case of strings that aren't a valid representation of an integer). It does this at runtime.
Haskell is strongly, statically typed. If the compiler cannot prove that all your operations are performed on values of the appropriate type, it will not compile your program.
C is weakly, statically typed. It will implicitly convert beteween incompatible values (pointers and ints, for instance) but it will determine which implicit conversions will occur at compile time (as well as reject some other conversions or type errors).
Python is not in any way statically typed. Perhaps only moderators who actually know Python should get mod points on articles such as these (yes, I know that'd be impossible, but it'd ridiculous that the parent post got modded up to 5, interesting when it's blatantly and obviously wrong).
Jeremy
Re:There's one major reason I choose Python over P (Score:2)
Jeez, it's good to issue a correction, but the error was in name only, the concept he was trying to express is still perfectly valid.
Re:There's one major reason I choose Python over P (Score:2)
Not that there are a great many dynamically scoped languages around these days...
Re:There's one major reason I choose Python over P (Score:2)
That's not weak typing. That's simply automatic conversion of types in some cases. Weak typing is like that of Forth, where if you add a float to an integer, the BIT PATTERNS are simply added together, as if both of them were integers.
Python is strongly, dynamically typed. If you try to perform an integer operation on a string, it will check this at ru
Re:There's one major reason I choose Python over P (Score:3, Insightful)
Re:There's one major reason I choose Python over P (Score:2)
uh? Care to differentiate between lists and arrays? Or are you trying to spread a little FUD to make it sound more complicated then it is.
Explain how using and referencing extra dimensional data types (list and hashes) are different in Python.
Re:There's one major reason I choose Python over P (Score:2)
There's one major reason I use scheme instead of silly languages like Python or Perl, lack of static typing. Scheme's type predicates are much more flexible.
Re:Python and Perl... (Score:3, Funny)
> a real Unix professional can do with Python
> or Perl that he or she can't do with awk, sed,
> and grep.
Awk? Sed? Bah! There's nothing you can do in awk and sed that you can't do with plain, simple assembly language opcodes!
Re:Python and Perl... (Score:2)
Amateur. Real programmers use nothing but S, K, and apply. [eleves.ens.fr]
Re:Python and Perl... (Score:2)
errr, actually, you only need one: Subtract and Branch if Negative [coventry.ac.uk]
Re:Python and Perl... (Score:3, Insightful)
Re:Bad foundations. (Score:5, Informative)
Python's intellectual ancestor was the language ABC, not Perl or TCL. Python's object system is very clean and well thought out, not accreted into the language. New style classes are an elaboration of that, merging the concept of a type and a class.
I'm not sure which "aspects from Camel" fuck up the whole situation. You're one example about continuations and GC "occultism" doesn't really help. 99% of the wonderful Python applications out there have no need of such stuff, and if you did, maybe Stackless Python (a variant) might interest you.
Python has all the necessary features to build very robust and maintainable systems. It's library is excellent, and it's C API is extremely clean for both embedding and extending.
A valid criticism for *some* applications is that it's slower than C or C++. This should come as no surprise since Python is interpreted and highly dynamic. Fortunately, Python can easily be extended such that critical sections can be coded in C, although most applications won't need to bother. It's also an excellent prototyping language so that if you *did* want to rewrite it in a static language like C++, you'd have an excellent basis for it.
swallowing the flamebait (Score:5, Insightful)
No way in hell. Python tries its best to avoid perlisms, and TCL/Tk doesn't even come close. Python has a strongly typed object system with one namespace.
I don't think that we really have to discuss the problems of Perl's "object system"
Perls object system is a hack. Python object system fits like a glove. ISTR that Larry kinda "copied" the objsystem from Python (and not vice versa), but it didn't really fit perl.
or the shortcomings of TCL/TK.
Shortcomings of TCL/Tk have really nothing to do with the topic. Don't try to sneak TCL/Tk into this. This has got to be the clumsiest strawman argument I've seen in a while. Chewbacca lives on Endor?
The result can be seen when you try to program a caller frame instance-preserving continuation in Python.
What do you mean? Closures (or "nested scopes" as they are referred to in the language docs - look them up before whining) work as expected. Can you give an example of the thing you are talking about in a language you know (assuming you know one). Are talking about what Stackless Python is trying to do?
But when the project advances they suddenly notice that python doesn't provide all necessary features and a whole rewrite is in order.
You don't really need "features", you can use libraries to add "features" and the core language is flexible enough for pretty much any tasks.
Re:Bad foundations. (Score:5, Insightful)
Yeah, it's really sad how many large projects fail because they're implemented in a language that doesn't properly support continuations....
Wait a minute, I've been in the computer industry for decades, and other than myself, I could probably count on one hand the number of people I've met who even know what a continuation is. Other than as an amusing tool to utterly confuse any but the most advanced developers, continuations are probably only useful for coroutines, and coroutines are mostly useful for iterator generators, which recent versions of Python have generators nicely packaged in an easy-to-understand syntax (the yield statement).
Since few if any other popular languages give you even this much, it must be truly amazing that any software works at all.
Re:Bad foundations. (Score:3, Informative)
Continuations are also incredibly useful for massively scalable network applications. They are arguably the best way to write them, in terms of code readability and performance.
Re:While we're on the subject of Python (Score:4, Interesting)
It's too easy to accidentally use do-while when you should have just used while. It actually makes the language less error-prone, because in those few cases where you do have an unconditional first pass, you are forced to structure the code differently and actually think about what you're doing.
You can always transform: do stmt while foo into stmt while foo stmt which isn't even longer (if stmt is actually many statements, it should be a function anyway). It's not worth introducing an abusable language construct just so you get to be lazy and not code a function when you should.
Re:Python getting to big (Score:2)
Criticizing Python because of the new features, modules, etc. really isn't warranted. Python hasn't lost its cohesion. You simply haven't kept up.
And yes, I have done Python full time. And yes, it was about 3 years ago. So yes, I will need to catch up at some point too. But it's not anyone else's fault
Re:Python getting to big (Score:2)
BUT....
You'd be missing out. Everyone gets bugged by how much things keep changing, and it is a problem. But what's the alternative? I've worked on VB, VB.NET, Java, and Python projects and they all keep ch
Re:Still doesn't have a maintained, up-to-date (Score:3, Informative) | https://slashdot.org/story/04/01/20/176200/learning-python-2nd-edition | CC-MAIN-2018-05 | en | refinedweb |
SoAppearanceKit.3coin3 man page
SoAppearanceKit — The SoAppearanceKit class is a node kit catalog that collects miscellaneous appearance node types.
Node kit structure (new entries versus parent class marked with arrow prefix):
Synopsis
#include <Inventor/nodekits/SoAppearanceKit.h>
Inherits SoBaseKit.
Public Member Functions
virtual SoType getTypeId (void) const
virtual const SoNodekitCatalog * getNodekitCatalog (void) const
SoAppearanceKit (void)
Static Public Member Functions
static SoType getClassTypeId (void)
static const SoNodekitCatalog * getClassNodekitCatalog (void)
static void initClass (void)
Protected Member Functions
virtual const SoFieldData * getFieldData (void) const
virtual ~SoAppearanceKit ()
Static Protected Member Functions
static const SoFieldData ** getFieldDataPtr (void)
static const SoNodekitCatalog ** getClassNodekitCatalogPtr (void)
Protected Attributes
SoSFNode complexity
SoSFNode drawStyle
SoSFNode environment
SoSFNode font
SoSFNode lightModel
SoSFNode material
SoSFNode texture2
Additional Inherited Members
Detailed Description
The SoAppearanceKit class is a node kit catalog that collects miscellaneous appearance node types.
Node kit structure (new entries versus parent class marked with arrow prefix):
CLASS SoAppearanceKit -->"this" "callbackList" --> "lightModel" --> "environment" --> "drawStyle" --> "material" --> "complexity" --> "texture2" --> "font"
(See SoBaseKit::printDiagram() for information about the output formatting.)
Detailed information on catalog parts:
CLASS SoAppearanceKit PVT "this", SoAppearanceKit --- "callbackList", SoNodeKitListPart [ SoCallback, SoEventCallback ] "lightModel", SoLightModel --- "environment", SoEnvironment --- "drawStyle", SoDrawStyle --- "material", SoMaterial --- "complexity", SoComplexity --- "texture2", SoTexture2 --- "font", SoFont ---
(See SoBaseKit::printTable() for information about the output formatting.)
Constructor & Destructor Documentation
SoAppearanceKit::SoAppearanceKit (void)
Constructor.
SoAppearanceKit::~SoAppearanceKit () [protected], [virtual]
Destructor.
Member Function Documentation
SoType SoAppearanceKit:BaseKit.
const SoFieldData * SoAppearanceKit::getFieldData (void) const [protected], [virtual]
Returns a pointer to the class-wide field data storage object for this instance. If no fields are present, returns
NULL.
Reimplemented from SoBaseKit.
const SoNodekitCatalog * SoAppearanceKit::getNodekitCatalog (void) const [virtual]
Returns the nodekit catalog which defines the layout of this class' kit.
Reimplemented from SoBaseKit.
Author
Generated automatically by Doxygen for Coin from the source code.
Referenced By
The man page SoAppearanceKit.3coin2(3) is an alias of SoAppearanceKit.3coin3(3). | https://www.mankier.com/3/SoAppearanceKit.3coin3 | CC-MAIN-2018-05 | en | refinedweb |
The material presented up to this point has been primarily concerned with consuming the management data for the purposes of either retrieving or modifying the system configuration information, or for proactively monitoring and troubleshooting various aspects of systems behavior. While on several occasions I mentioned the WMI data providers that are responsible for maintaining the management data exposed through WMI, I have not yet focused on the gory details of provider implementation and operations. Although it is not in my nature to withhold information, in the first four chapters, I consciously avoided delving into the provider machinery for a few reasons.
First, conventional provider programming is complex. The complexity stems mainly from the choice of available programming languages and tools; until the introduction of .NET and FCL, this choice was pretty much limited to C++. Though a hard core developer may feel very much at home implementing COM interfaces with C++, cautious system administrators who wish to remain sane often walk away as soon as somebody as much as mentions IUnknown. Even with the help of various utilities and wizards distributed with WMI SDK, C++ provider programming still remains outside the realm of most system managers.
Yet another reason for taking providers for granted is the versatility of WMI and the Windows Operating Environment, both of which come equipped with enough WMI providers to monitor just about any aspect of systems operations. Thus, if you are only concerned with monitoring the health of the operating system and its services, you may never need to bother learning the provider framework. After all, understanding the WMI client API is often all that is required to accomplish the majority of monitoring and configuration tasks, and the rest of the WMI infrastructure may as well be viewed as a black box.
Your perspective may change, however, as soon as you face the necessity of administering the numerous custom applications and third-party software packages that are spread across dozens of computing nodes. On a rare occasion, you may get lucky and find out that your favorite third-party software is already outfitted with a provider and can be managed with WMI. More often, you will have to deal with in-house developed systems, which are, at best, equipped with some rudimentary logging facilities but have no provisions for remote monitoring and administration. This is where you may roll up your sleeves and turn your undivided attention to the subject of WMI provider development. Unfortunately, this is also where you discover that the WMI Client API is just a small tip of a very large iceberg.
Although the complexity of provider programming may be the main reason for the slow acceptance of WMI as a primary instrumentation framework for vendor software systems, the current state of affairs is somewhat reassuring. The gloomy prospect of digging into the nuts and bolts of the WMI provider architecture became much less gloomy once Microsoft introduced .NET and FCL. Besides the elegant interface to WMI Client API that is housed in the System.Management namespace, FCL offers extensive functionality for exposing application events and data for management in an easy and trouble-free manner. This functionality, designed specifically to instrument .NET applications for WMI, is packaged into the System.Management.Instrumentation namespace and distributed within the System.Management.dll module. System.Management.Instrumentation types are envisioned as a collection of helpers and attributes intended to simplify the process of exposing management events and data to WMI, and as such, they nearly completely shield the developer from the intricacies of provider programming. In fact, the preferred instrumentation model is declarative so that the bulk of management data can be made available through WMI with very little coding.
Although the types of System.Management.Instrumentation namespace are the primary focus of this chapter, I will also touch upon some aspects of WMI provider programming and deployment. Not only will this discussion help you appreciate the simplicity and elegance of .NET instrumentation types, but it will also shed some light onto the underpinnings of System.Management.Instrumentation types.
A provider is nothing but a COM server that implements a slew of WMI interfaces. Depending on the expected functionality and type of the provider, you may be required to supply an implementation for different provider interfaces and methods. However, one interface, which must be implemented by absolutely all providers, is IWbemProviderInit. This interface has a single method, Initialize, which is invoked by WMI following the successful load of a provider COM server. As its name implies, Initialize is designed to let the providers initialize themselves and report the initialization status back to WMI so that CIMOM may start forwarding client requests to the provider. IWbemProviderInit::Initialize has the following signature:
HRESULT IWbemProviderInit: );
where the parameters are defined as follows:
Depending on its type, the provider may carry out different operations during its initialization. For instance, a push provider will store its data into the CIM Repository and shut down, while a pull provider may just set up its execution environment. Typically, an implementation of IWbemProviderInit::Initialize will look somewhat similar to the following code:
HRESULT SampleProvider: ) { if (pNamespace) pNamespace->AddRef(); m_pNamespace = pNamespace; // perform other initialization activities pInitSink->SetStatus(WBEM_S_INITIALIZED,0); return WBEM_S_NO_ERROR; };
Note that if a provider intends to use the pointer to IWbemServices to make calls into WMI, it must call AddRef on it. After it has finished its initialization, the provider must report the status back to WMI by calling the IWbemProviderInitSink::SetStatus method. This method takes two parameters: the provider's initialization status and an unused LONG, which is commonly set to zero. The status may take one of the following values: WBEM_S_INITIALIZED and WBEM_E_FALIED. The former indicates that the provider has completed its initialization sequence and is ready to service the client's requests. The latter is a sign of initialization failure and marks the provider as not functional. Interestingly, if the provider initialization fails, IWbemProviderInit::Initialize does not have to invoke IWbemProviderInitSink::SetStatus; instead it may simply return the WBEM_E_FAILED return code.
As I mentioned before, a push provider does not have to implement any interfaces other than IWbemProviderInit. When it comes to building instance, class, event, method, and property providers, however, the situation is much more complicated. Table 5-1 lists the interfaces that must be implemented depending on the provider type.
The easiest to implement and, perhaps, the predominant type of WMI provider is an instance provider. After all, most application monitoring and configuration issues can often be reduced to retrieving and modifying the instance-level management data. When instrumenting an application, you are most likely to build a custom instance provider, which would act as an intermediary between WMI and your application environment. Thus, for the sake of providing a reasonably complete example while keeping the size of this chapter reasonable, I will concentrate on developing a simple instance provider. Those of you who are interested in implementing other provider types will have to dig into the WMI SDK documentation, although, the following text should supply enough background information to ease the pain a bit.
Since an instance provider is responsible for retrieving the management data, which represents an individual instance, one of the primary methods, to be implemented while developing such a provider is IWbemServices::GetObjectAsync. The method has the following signature:
HRESULT IWbemServices::GetObjectAsync( const BSTR bstrObjPath, LONG lFlags, IWbemContext *pContext, IWbemObjectSink *pObjSink );
where the parameters are defined as follows:
A typical implementation of IWbemServices::GetObjectAsync may resemble the following code:
HRESULT SampleProvider::GetObjectAsync( const BSTR bstrObjPath, LONG lFlags, IWbemContext *pContext, IWbemObjectSink *pObjSink ) { IWbemClassObject *pObj = NULL; if (bstrObjPath == NULL || pObjSink == NULL || m_pNamespace == NULL) return WBEM_E_INVALID_PARAMETER; // retrieve instance based on path if (RetrieveInstanceByPath(bstrObjPath, &pObj) == S_OK) { pObjSink->Indicate(1, &pObj); pObj->Release(); pObjSink->SetStatus(WBEM_STATUS_COMPLETE, WBEM_S_NO_ERROR, NULL, NULL); return WBEM_S_NO_ERROR; } else { pObjSink->SetStatus(WBEM_STATUS_COMPLETE, WBEM_E_NOT_FOUND, NULL, NULL); return WBEM_E_NOT_FOUND; } };
Here the most interesting thing is the call to IWbemObjectSink::Indicate method, which is used to pass the retrieved instance back to WMI. This method takes two parameters: a count that indicates how many objects are being returned, and an array of pointers to IWbemClassObject interfaces. Each interface pointer is a handle to the instance that is discovered by the retrieval operation and passed back to WMI.
Following the completion of IWbemObjectSink::Indicate the status of the operation is reported back to WMI via the IWbemObjectSink::SetStatus method. This method takes four parameters: a bitmask status of the operation, an HRESULT of the operation, a string, and an object parameter. The bitmask status indicates whether an operation is still in progress or completed and may be one of the following: WBEM_STATUS_COMPLETE or WBEM_STATUS_PROGRESS. The HRESULT parameter is simply an error code, if there is any, generated by the retrieval operation. The string parameter is optional and is only used when an operation is expected to return a string. For instance, when updating or creating an instance, IWbemObjectSink::SetStatus may be called with this parameter set to the object path of an updated or newly created instance. Finally, the last parameter, the pointer to the IWbemClassObject interface, is used whenever it is necessary to report any extended status information. In such cases, the pointer may be associated with an instance of the __ExtendedStatus WMI class.
Besides returning individual instances directly based on the object path, instance provides are expected to be able to enumerate all management objects. This is achieved via IWbemServices::CreateInstanceEnumAsync method:
HRESULT IWbemServices::CreateInstanceEnumAsync( const BSTR bstrClass, LONG lFlags, IWbemContext *pContext, IWbemObjectSink *pObjSink );
where the parameters are defined as follows:
IWbemServices::CreateInstanceEnumAsync can be implemented as follows:
HRESULT SampleProvider::CreateInstanceEnumAsync( const BSTR bstrClass, LONG lFlags, IWbemContext *pContext, IWbemObjectSink *pObjSink ) { IWbemClassObject *pClass = NULL; IWbemClassObject *pInst = NULL; HRESULT hr = S_OK; // retrieve class definition using IWbemServices pointer cached // during initialization hr = m_pNamespace->GetObject(strClass, 0, NULL, &pClass, 0); if (hr != S_OK) return hr; while(GetNextInstance(pClass, &pInst, pContext)) { pObjSink->Indicate(1, &pInst); pInst->Release(); } pObjSink->SetStatus(WBEM_STATUS_COMPLETE, WBEM_S_NO_ERROR, NULL, NULL); return WBEM_S_NO_ERROR; };
As you can see, the implementation is very similar to that of IWbemServices::GetObjectAsync. The only difference here is that instead of retrieving an object based on its path, the code continuously calls a hypothetical function GetNextInstance, which assembles new WMI objects based on some kind of management data. These objects are then returned to WMI one-by-one using the IWbemObjectSink::Indicate method. When the enumeration completes (GetNextInstance returns a FALSE value), WMI is notified on the operation's status via a call to IWbemObjectSink::SetStatus.
Once IWbemProviderInit::Initialize, IWbemServices::GetObjectAsync, and IWbemServices::CreateInstanceEnumAsync are implemented, the provider is functional and ready to be deployed. However, it will only be able to provide the management data to WMI in a read-only fashion. In order for a provider to support updates to the instances that it manages, it must implement both the IWbemServices::PutInstanceAsync and IWbemServices::DeleteInstanceAsync methods.
IWbemServices::PutInstanceAsync is used to create or update an instance of a given WMI class. The method has the following signature:
HRESULT IWbemServices::PutInstanceAsync( IWbemClassObject *pInstance, LONG lFlags, IWbemContext *pContext, IWbemObjectSink *pObjSink );
where the parameters are defined as follows:
Implementing IWbemServices::PutInstanceAsync is trivial since the structure of this method is very similar to one of the methods described earlier, such as IWbemServices::GetObjectAsync. One thing you should keep in mind, though, is that for instances of subclasses, an update operation is compound. In other words, if pInstance points to an object of a class that has nonabstract superclasses, WMI automatically invokes IWbemServices::PutInstance for each of these superclasses starting from the top of the hierarchy. The update operation succeeds only if all the providers responsible for each of the classes within the inheritance tree handle the update successfully. You may assume that the same principle works for subclasses as well, meaning that whenever an instance of a class is updated, the update is propagated to instances of all its subclasses. Unfortunately, this is not the case—instead, if an application updates properties of an object, which are inherited by subclass instances, it must explicitly call IWbemServices::PutInstance on each of the affected subclass instances.
IWbemServices::DeleteInstanceAsync deletes an instance of a designated class, residing in a current namespace. The method has the following signature:
HRESULT IWbemServices::DeleteInstanceAsync( IWbemClassObject *pInstance, LONG lFlags, IWbemContext *pContext, IWbemObjectSink *pObjSink );
where the parameters are defined as follows:
Again, implementing IWbemServices::DeleteInstanceAsync is very similar to coding the other provider methods that were described earlier. Similarly to IWbemServices::PutInstanceAsync, WMI automatically invokes IWbemServices::DeleteInstance for each of the superclass instances. It starts from the top of the hierarchy in case pInstance points to an object of a class that has nonabstract superclasses. However, the success of the delete operation depends only on the success of the IWbemServices::DeleteInstance call for the top-level nonabstract class.
Optionally, instance providers may support query processing. When a provider elects to handle queries, it must implement the IWbemServices::ExecQueryAsync method:
HRESULT IWbemServices::ExecQueryAsync( const BSTR bstrQueryLangauge, const BSTR bstrQuery, LONG lFlags, IWbemContext *pContext, IWbemObjectSink *pObjSink );
where the parameters are defined as follows:
A typical implementation of IWbemServices::ExecQueryAsync must be capable of parsing the query test, retrieving the qualifying objects, and returning the results back to WMI. If for some reason, a provider cannot handle the query, it may choose to refuse the query by returning the WBEM_E_PROVIDER_NOT_CAPABLE result code. In such cases, WMI may attempt to either simplify the query and resend it to the provider, or just enumerate all instances of a class, for which the query is invoked.
Other types of providers, such as event or method providers, may need to implement additional interfaces and interface methods. For instance, in order to allow a client to execute object methods, a provider must implement the IWbemServices::ExecMethodAsync method. However, supporting the interfaces and methods described above, is usually sufficient for a provider that satisfies the majority of typical system management needs.
Once a provider COM server is coded and compiled, it must be registered just like any other COM object. To register a provider, use regsvr32.exe as follows:
regsvr32.exe SampleProvider.DLL
COM registration, although required, is not the only piece of information that WMI needs in order to use the provider. As I mentioned earlier, WMI maintains its own provider registration information in the CIM Repository. A provider is described by an instance of the __Win32Provider system class and an instance of a subclass of __ProviderRegistration. For example, in order to register an instance provider "SampleProvider", the following two instances must be added to the repository:
instance of __Win32Provider as $Prov { Name = "SampleProvider"; ClsId = "{fe9af5c0-d3b6-11ce-a5b6-00aa00680c3f}"; }; instance of __InstanceProviderRegistration { Provider = $Prov; InteractionType = 0; SupportsPut = TRUE; SupportsGet = TRUE; SupportsDelete = TRUE; SupportsEnumeration = TRUE; QuerySupportLevels = {"WQL:UnarySelect"}; };
This first instance of the __Win32Provider class simply describes the provider to WMI and establishes a link to an external COM server by setting the ClsId property to the Class ID of the COM object. To allow for finer control over the provider initialization, __Win32Provider offers a few other properties, such as PerLocaleInitialization and PerUserInitialization, which indicate whether a provider is initialized only one time or once per each locale and user. However, under normal circumstances the defaults are usually sufficient so that Name and ClsId are the only properties that need to be set. Because Name is a key, it cannot be left blank. WMI also needs ClsId to load the appropriate provider COM server.
The __InstanceProviderRegistration object serves as a description of the provider's capabilities. Most of its properties are self-explanatory, with the exception of InteractionType and QuerySupportLevels. The former identifies the type of the provider—the value of zero (default) stands for pull providers, while the value of one is associated with push providers. QuerySupportLevels is a bit more complex. As its name implies, this property indicates what kind of query support the provider guarantees. Setting this property to NULL would mark the provider as not capable of processing any queries. For those providers that do support query processing, this property may be set to one or more of the following values: WQL:UnarySelect, WQL:References, WQL:Associators, and WQL:V1ProviderDefined. Under the current release, WMI only delivers simple unary SELECT WQL queries to providers, hence the WQL:UnarySelect designation. Interestingly, marking the provider as capable of handling only the unary SELECT queries does not seem to preclude it from processing the REFERENCES OF or ASSOCIATORS OF queries. WMI takes care of translating such queries into simple SELECT statements before sending them to providers, which enables the providers to handle all types of queries in a uniform fashion.
As you can see, building a provider, while not terribly complex, involves a fair amount of low-level coding and assumes working knowledge of COM. To make provider programming more appealing for a less sophisticated audience, Microsoft developed the Provider Framework, which ships as a part of the WMI SDK. The Provider Framework is nothing but a set of C++ classes that encapsulate most of the boilerplate code necessary to create an instance or method provider. The good thing about the Provider Framework is that it completely shields the developer from the intricacies of COM programming because it handles all interactions with COM. The bad thing, of course, is that it still requires fairly sophisticated C++ coding skills.
The Provider Framework includes a set of classes that implement IWbemProviderInit, IWbemServices, and IWbemClassObject interfaces (such as CWbemProviderGlue, Provider, and CInstance respectively), as well as some utility classes, which facilitate time conversions, time span calculations, string operations, and more. Typically, a developer will create a new provider class by subclassing the Provider class and overriding some of its methods. The base class supplies a default implementation for all of its methods that simply returns WBEM_E_PROVIDER_NOT_CAPABLE when invoked.
Still, coding a provider by hand, even with the help of the Framework classes, is quite an effort. That is why WMI SDK includes a handy utility called the Provider Code Generator wizard, which spits out the stab implementation for all required C++ classes and methods and creates all necessary MOF definitions. The Provider Code Generator wizard is shown in Figure 5-1.
Figure 5-1: Provider Code Generation wizard
You can access the Provider Code Generator through the CIM Studio interface. In order to generate the code for a provider, you must select one of the existing classes and then invoke the wizard by clicking a button in the upper-right corner of the CIM Studio GUI. The wizard will output several CPP files, a header file, a makefile, a MOF definition, and a DEF file. Most files will be named after the initially selected WMI class (although this base name can be overridden) and will have the appropriate extensions. For instance, if you select a hypothetical class My_ManagedElement, the wizard will produce the following files:
Once the source files are generated, all that you have left to do is fill out the blanks in the default method implementations and then build and register the provider DLL. As you can see, with the help of the Provider Code Generation wizard, developing a working WMI provider is trivial, although there are still a few things left to be desired. First, the resulting provider does not allow you to expose management events, which can be a severe limitation for some managed environments. Second, you still need to engage in some, although much less complex, C++ coding exercises. As a result, despite all the nifty WMI SDK utilities, WMI provider development is often perceived as one of the more advanced subjects, and as a result, it remains largely unexplored.
The goal of the System.Management.Instrumentation namespace is to make instrumenting .NET applications easier by providing extensive support for exposing the application data and events for management. However, rather than following the familiar path of supplying the helper and utility types for provider development, the designers of the System.Management.Instrumentation namespace took a completely different and quite innovative approach. The idea behind .NET application instrumentation is very simple, yet elegant, and it revolves around some common traits that are shared by .NET and WMI. Both platforms are based on the same object-oriented design principle and operate in terms of the same entities—classes, objects, properties, and so on. Therefore, you can establish a mapping between the managed elements modeled as .NET types, and WMI schema elements so that an arbitrary .NET type would correspond to a WMI class and its properties, and events would map to corresponding facets of a WMI class.
Thus, once a translation scheme between the .NET application types and the WMI classes is defined, any software system can be instrumented for WMI simply by creating the metadata, which describes the .NET-to-WMI mapping. Naturally, such a concept of mapping metadata fits very well with the overall philosophy of .NET and can easily be supported through the .NET Framework attribution capabilities. Rather than coding to WMI interfaces, you may decorate the existing .NET application types, properties, and events with the appropriate attributes so that the .NET Framework itself takes care of all the necessary translation and marshalling of the application data.
The instrumentation model employed by the designers of System.Management.Instrumentation namespace is largely declarative. This means that the namespace contains mostly the .NET attribute types that are used to mark up the appropriate .NET assemblies, types, properties, and events. Besides the attributes, there are a few helper types. These can be used to customize the process of exposing the application data to WMI and to account for complex situations that cannot be easily handled through attribution. The remainder of this chapter is dedicated to addressing various scenarios for exposing the data and events for management using the attributes as well as the helper types.
When you are instrumenting a particular .NET application, you must first mark the application's assembly as capable of providing the management data to WMI. You can achieve this with the custom attribute type InstrumentedAttribute, which is a part of System.Management.Instrumentation namespace:
using System.Management; using System.Management.Instrumentation; [assembly:Instrumented(@" rootCIMV2")] namespace InstrumentedApplication { // instrumented application types go here ... };
There are a few things happening here. First, when you look at its disassembly listing, you can easily deduce that InstrumentedAttribute is an assembly-level attribute:
.class public auto ansi beforefieldinit InstrumentedAttribute extends [mscorlib]System.Attribute { .custom instance void [mscorlib]System.AttributeUsageAttribute::.ctor( valuetype [mscorlib]System.AttributeTargets) = (01 00 01 00 00 00 00 00) }
Here, the definition for the InstrumentedAttribute type is decorated with another attribute, AttributeUsageAttribute, which defines a set of elements to which a given attribute can be applied. The hexadecimal string, used to initialize the AttributeUsageAttribute represents the bytes in the InstrumentedAttribute value blob. In this case, the sequence of hexadecimal digits corresponds to the Assembly member (with a value of 0x00000001) of the AttributeTargets enumeration, which is used as a parameter to the constructor of the AttributeUsageAttribute type. Thus, InstrumentedAttribute can only be applied to an assembly—if you try to use it any place else, it will trigger a compiler error. This attribute should be marked with the assembly keyword and placed at the assembly level, prior to all type definitions.
Yet another thing you should notice is the parameter that is passed to the constructor of InstrumentedAttribute. This is a string that represents the target namespace for the instrumented types contained within the assembly. In this particular case, all management classes, instances, and events will be imported into the rootCIMV2 WMI namespace. In fact, the constructor, which takes the namespace parameter, is not the only constructor featured by InstrumentedAttribute. There is also a parameterless constructor that causes all instrumented entities to be loaded into the rootdefault namespace. Finally, there is a constructor that takes not only the namespace parameter, but also a security descriptor (SD) that specifies the security restrictions for the instrumented assembly. The security descriptor parameter is a string formatted using the Security Description Definition Language (SDDL). The SDDL is a special notation that allows components of a SD to be represented in a textual form. Essentially, this parameter is a sequence of tokens that correspond to the four components of a SD: owner, primary group, discretionary access control list (DACL), and system access control list (SACL). Thus, a security descriptor string may take the following form:
O:G:D:(ace1)(ace2)...S:(ace1)(ace2)...
where the individual elements are defined as follows:
Assembling such string security descriptors by hand is rather convoluted and error-prone. A better idea is to use the ConvertSecurityDescriptorToStringSecurityDescriptor function, which takes a regular Windows SD (SECURITY_DESCRIPTOR structure) and outputs its string representation. The downside, of course, is that typing the name of this function is nearly as cumbersome as assembling a string SD manually.
Once the application's assembly is decorated with the InstrumentedAttribute attribute, its manifest will include the information necessary for the Framework to detect an instrumented application:
.assembly InstrumentedApplication { ... .custom instance void [System.Management] System.Management.Instrumentation.InstrumentedAttribute::.ctor(string) = (01 00 0A 72 6F 6F 74 5C 43 49 4D 56 32 00 00) //...rootCIMV2.. ... }
The next step is to ensure that the application is registered and its schema is published in the CIM Repository. This is achieved using the standard installer mechanism, which is a part of .NET Framework. Typically, whenever the .NET Framework encounters a type that is a subclass of the System.Configuration.Install.Installer type such that this type is decorated with System.ComponentModel.RunInstallerAttribute, it checks the attribute's properties to determine whether the installation is requested. To request the installation services, the RunInstallerAttribute attribute's constructor has to be invoked with the Boolean TRUE value, which sets its RunInstaller property to TRUE. Thus, in order to ensure that all necessary installation steps are taken, an instrumented application would include some code similar to the following:
using System.ComponentModel; using System.Configuration.Install; [RunInstaller(true)] public class MyInstaller : Installer { ... };
Given such code, it is your responsibility as a developer to override the Install, Commit, and Uninstall methods of the Installer type and manually code all the installation procedures, such as publishing the application schema into the CIM Repository and registering all required components. This may get quite tricky because some of the installation steps may involve nontrivial coding. Fortunately, the System.Management.Instrumentation namespace offers a helper type, DefaultManagementProjectInstaller, which takes upon itself the task of analyzing the application's assembly and fulfilling all installation requirements. Therefore, the code just shown can be simplified as follows:
using System.ComponentModel; using System.Configuration.Install; using System.Management.Instrumentation; [RunInstaller(true)] public class MyInstaller : DefaultManagementProjectInstaller {};
Note that you are no longer required to override the methods of the Installer type and supply your own implementation of the installation procedures. DefaultManagementProjectInstaller already contains all the necessary implementation code, which takes care of the registration and schema publishing. Curiously, if you look at the disassembly listing of DefaultManagementProjectInstaller, you will see that this type does not override any of the Installer methods. Instead, its constructor does the following:
.method public hidebysig specialname rtspecialname instance void .ctor() cil managed { .maxstack 2 .locals init (class System.Management.Instrumentation.ManagementInstaller V_0) IL_0000: ldarg.0 IL_0001: call instance void [System.Configuration.Install]System.Configuration.Install.Installer::.ctor() IL_0006: newobj instance void System.Management.Instrumentation.ManagementInstaller::.ctor() IL_000b: stloc.0 IL_000c: ldarg.0 IL_000d: call instance class [System.Configuration.Install] System.Configuration.Install.InstallerCollection [System.Configuration.Install] System.Configuration.Install.Installer::get_Installers() IL_0012: ldloc.0 IL_0013: callvirt instance int32 [System.Configuration.Install] System.Configuration.Install.InstallerCollection::Add( class [System.Configuration.Install]System.Configuration.Install.Installer) IL_0018: pop IL_0019: ret }
Without going into too much detail, suffice it to say that this code simply creates an instance of another instrumentation helper type, ManagementInstaller, and adds this newly created instance to the Installers collection of the Installer object. Whenever the installation services are invoked, the Install, Rollback, or Commit methods iterate through the Installers collection and invoke the respective methods of each installer object found in the collection. The ManagementInstaller type houses the implementations for the Install, Rollback, and Commit methods that are suitable for publishing the application's schema to the CIM Repository and registering all the necessary application's components. Thus, whenever the installation services are requested, the respective methods of the ManagementInstaller object are called to carry out the necessary installation steps.
The technique demonstrated by the preceding code brings up an interesting thought. Say that your instrumented application already has an installer that takes care of all installation chores, such as copying the binaries, setting up configuration files, and so on. Then, in order to ensure that WMI-related installation steps are taken at the appropriate time, you may want to add the following line of code to the existing project installer's constructor:
Installers.Add(new ManagementInstaller());
This has exactly the same effect as the earlier code—it adds an instance of the ManagementInstaller type to the Installers collection of the project installer.
However, just having an installer embedded into your instrumented application is not enough. Somehow, the appropriate methods of the installer must be invoked at the right time to ensure that the instrumented application is correctly registered and its schema is published to the CIM Repository. The simplest and most versatile way of performing the installation is with the help of the .NET SDK utility installutil.exe. This is a command-line program that can be invoked simply with the name of the assembly to be installed. Thus, in order to install the assembly contained in the InstrumentedApplication.exe file, you can issue the following command:
installutil.exe InstrumentedApplication.exe
When this command finishes, the CIM Repository will be updated with the application's schema and provider registration information.
The complete list of command-line options for installutil.exe is shown in Table 5-2.
Note that you can install multiple assemblies at once by specifying several assembly files on the command line. The command-line options that occur within the command line prior to the name of the assembly will apply to that assembly's installation. When multiple assemblies are installed, the installation process is transactional—if one assembly fails to install, the utility performs a rollback for all the assemblies installed up to that point. On the other hand, uninstallation is not transactional.
If installutil.exe is invoked without any command-line options, when it is finished it outputs the following files into the current directory:
An assembly can also be installed programmatically. For example, the following code, if placed at the beginning of the Main function of your instrumented application, will invoke the installation services:
Type t = typeof(InstrumentedClass); string[] args = new string[] { t.Assembly.Location }; System.Configuration.Install.ManagedInstallerClass.InstallHelper(args);
Here, I assume that the instrumented application contains a type called InstrumentedClass. The first line of code obtains the System.Type object, associated with InstrumentedClass, which is subsequently utilized to get the location of the instrumented assembly. The location string is then packaged into the arguments array and passed to the InstallHelper method of the System.Configuration.Install.ManagedInstallerClass type, which actually invokes the installers. The location of the instrumented assembly is not the only argument taken by the InstallHelper method—in fact, you can use the same parameters as you would use for installutil.exe:
Type t = typeof(InstrumentedClass); string[] args = new string[] { "/logFile=MyLogFile.Log", "/showCallStack", t.Assembly.Location }; System.Configuration.Install.ManagedInstallerClass.InstallHelper(args);
You will find that invoking the installers programmatically is very straightforward and does not involve extensive coding. There is, however, a downside. For obvious reasons, the application's schema as well as its registration information will not be available in the CIM Repository until the first time the application is run. Thus, it is probably not a good idea to use this approach for production application deployment. Moreover, supplying such installation code is not really necessary. In fact, whenever an instrumented application first publishes an instance or raises an event, the .NET Framework performs auto-installation, which takes care of application and schema registration. Note that autoinstallation only succeeds if the user running the application belongs to the Local Administrators group. Again, this is intended as a convenience to facilitate rapid prototyping and testing, and relying on this feature when deploying an application is generally not recommended.
There are also other ways to perform an installation of an instrumented application. For instance, when an application is distributed as an MSI package, the application's installers will be invoked automatically as long as the option of running .NET installers is turned on.
Lastly, there is a question of how the application's management information is actually fed into WMI. Under normal circumstances, a provider (for instance, a COM server managed by WMI itself) would interact with its associated application and gather the appropriate management data. This is not the case when it comes to instrumenting .NET applications. Here, the provider is embedded into the application itself, which allows WMI to interact with the managed subsystem directly rather than having the provider communicate with the application via the application's API. An added benefit of this approach is the degree of control the application has over the provider's life span. Since the provider is no longer controlled by WMI, it is up to the application to determine when to expose its data and events to WMI.
A provider embedded into an application is referred to as a decoupled provider. Each decoupled provider must implement two special interfaces: IWbemDecoupledRegistrar and IWbemDecoupledEventProvider. The former allows the provider to register itself with WMI and define its life span. The latter facilitates the forwarding of management events to WMI.
Even a decoupled provider has to be registered in the CIM Repository. You may find registering the decoupled provider a bit tricky because the process differs from that of registering a regular provider. For instance, rather than using a conventional class __Win32Provider, for this type of registration you must use a brand new class, MSFT_DecoupledProvider, which is the derivative of __Win32Provider. Fortunately, the .NET Framework generates all the necessary registration entries automatically. Although a description of the decoupled provider registration details is beyond the scope of this book, those of you who are curious may want to take a look at the generated MOF files. These files are typically placed into the %SystemRoot%WINNTSystem32WBEMFramework oot directory, where the namespace is the target namespace for the application's schema; and the name is based on the name of the instrumented application's assembly.
Exposing an application's type for management is beyond simple—all you have to do is mark the type with InstrumentationClassAttribute, and pass the InstrumentationType.Instance enumeration member to its constructor. The following code demonstrates how to expose an arbitrary MyManagedClass to WMI:Class { public int Prop1; public string Prop2; public static void Main(string[] args) {} } }
If you save this code to a file, compile it, and run installutil.exe, your CIM Repository will contain the following WMI class definition in the rootCIMV2 namespace:
class MyManagedClass { [key] string ProcessId; [key] string InstanceId; sint32 Prop1; string Prop2; };
As you can see, both the int Prop1 and string Prop2 properties of MyManagedClass type are translated into the respective properties of the WMI class. However, the WMI class has two more properties, marked with the Key qualifier: ProcessID and InstanceID. These properties are added automatically to ensure the uniqueness of any instance of the class, which may be subsequently created.
Mapping .NET Types to WMI Classes
So how do .NET types and type members map to their respective entities within WMI? Fortunately, there is striking similarity between .NET and WMI types, which makes this mapping trivial. Actually, all .NET primitive value types map one-to-one to the corresponding CIM types. Certain reference types, such as String, DateTime, and TimeSpan also map to CIM string, CIM datetime in DMTF date and time format, or datetime in DMTF interval format, respectively. Mapping .NET arrays is also straightforward—they are translated into WMI arrays of appropriate types. For example, the following .NET type
public class MyManagedClass { public string[] StrProp; public static void Main(string[] args) {} }
has the following MOF representation:
class MyManagedClass { [key] string ProcessId; [key] string InstanceId; string StrProp[]; };
The situation is a bit more complex when it comes to embedded objects and references. The latest release of System.Management.Instrumentation only supports WMI embedded objects; it is impossible to generate a WMI class definition with properties of a reference type. Thus, any .NET type members of reference types other than String, DateTime, and TimeSpan, are mapped to embedded objects in WMI. Consider the following .NET types:
[InstrumentationClass(InstrumentationType.Instance)] public class EmbeddedClass { public int EmbProp; } [InstrumentationClass(InstrumentationType.Instance)] public class MyManagedClass { public EmbeddedClass EmbClassProp; public static void Main(string[] args) {} }
The .NET Framework will translate these .NET types into the following WMI definitions:
class EmbeddedClass { [key] string ProcessId; [key] string InstanceId; sint32 EmbProp; }; class MyManagedClass { [key] string ProcessId; [key] string InstanceId; EmbeddedClass EmbClassProp; };
Interestingly, both EmbeddedClass and MyManagedClass types have to be decorated with the InstrumentationClass attribute. If, for some reason, you forget to mark the embedded type with this attribute, any property of the embedded type will simply be ignored. Thus, in the example above, if EmbeddedClass does not have the InstrumentationClass attribute, the EmbClassProp property will not be included in the CIM definition for MyManagedClass. This seemingly odd behavior actually makes sense—in order to support the definition for MyManagedClass, the CIM Repository has to contain the definition for its dependency, EmbeddedClass.
There is another caveat, which has to do with property access modifiers. Only public members of an instrumented type are mapped to WMI class properties. Take a look at the following example:
[InstrumentationClass(InstrumentationType.Instance)] public class MyManagedClass { public string PublicProp; string PrivateProp; public static void Main(string[] args) {} }
The corresponding MOF definition will be the following:
class MyManagedClass { [key] string ProcessId; [key] string InstanceId; string PublicProp; };
As you may see, only the PublicProp, which has a public access modifier, is mapped to WMI class definition. The private field PrivateProp is simply ignored. This brings up another interesting thought. Normally, the .NET Framework does not distinguish between fields and properties—i.e., both are mapped to WMI class properties. Thus, if a given property is based upon a member field, both the property and the field will be translated into WMI class properties, essentially creating duplicate fields. This point is illustrated the following example:
[InstrumentationClass(InstrumentationType.Instance)] public class MyManagedClass { public string StrPropertyField; public string StrProperty { get {return StrPropertyField; } set {StrPropertyField = value; } } public static void Main(string[] args) {} }
The resulting MOF definition is the following:
class MyManagedClass { [key] string ProcessId; [key] string InstanceId; string StrPropertyField; string StrProperty; };
When translating .NET fields and properties, the .NET Framework has no knowledge of any relationship between StrPropertyField and StrProperty, therefore both these elements end up as properties of the corresponding WMI class. By declaring StrPropertyField as private, which is a normal practice for property definitions, you ensure that only StrProperty is exposed to WMI, thus eliminating duplication.
Using access modifiers is not the only way to exclude certain .NET type members from the corresponding WMI class definition. In fact, there is a much cleaner approach based on the System.Management.Instrumentation.IgnoreMemberAttribute custom attribute. All type members that you decorate with this attribute will not be considered when their respective .NET type is mapped to WMI. Thus, the example above can be changed as follows:
[InstrumentationClass(InstrumentationType.Instance)] public class MyManagedClass { [IgnoreMember] public string StrPropertyField; public string StrProperty { get {return StrPropertyField; } set {StrPropertyField = value; } } public static void Main(string[] args) {} }
In fact it is a good idea to always use IgnoreMemberAttribute not only for public type members, which are to be excluded from the WMI class, but also for the private type elements. Although decorating private type members with IgnoreMemberAttribute is redundant, it does not hurt and may certainly add clarity to your code.
Normally, an instrumented .NET type is translated into a WMI class that has the same name. Thus, in the example above, MyManagedClass maps to a WMI class also called MyManagedClass. Every once in a while, you may want to create a WMI class with a different name from that of the corresponding .NET type. For instance, it is a good idea to prefix all WMI classes that belong to a particular application with some kind of schema name to provide for logical grouping. Of course, you could use the same prefix for the respective .NET application types, but it would be a bit inconvenient and just plain ugly. A better approach is to use the ManagedNameAttribute custom attribute to rename .NET types and type members while you are translating them to WMI. Consider the following example:
[InstrumentationClass(InstrumentationType.Instance)] [ManagedName("MYAPP_MyManagedClass")] public class MyManagedClass { [ManagedName("MYAPP_StrProp")] public string StrProp; public static void Main(string[] args) {} }
Here, the MyManagedClass .NET type is exposed to WMI as MYAPP_MyManagedClass so that the schema name of MYAPP is added to the class name. At the same time, its property, StrProp, is translated into the corresponding WMI class property, MYAPP_StrProp:
class MYAPP_MyManagedClass { [key] string ProcessId; [key] string InstanceId; string MYAPP_StrProp; };
Mapping .NET Type Hierarchies to WMI
Normally, all .NET types, even those that are subclasses of some other .NET types, are translated into root-level WMI classes. Consider the following example:
public class MyBaseClass { public int IntField; } [InstrumentationClass(InstrumentationType.Instance)] public class MyManagedClass : MyBaseClass { public string StrProp; public static void Main(string[] args) {} }
Contrary to what you may expect, MyManagedClass will be translated to WMI as follows:
class MyManagedClass { [key] string ProcessId; [key] string InstanceId; string StrProp; };
As you may see, the base type MyBaseClass is ignored and the resulting WMI class definition does not include IntField, which is inherited from the superclass. An obvious remedy for this seems to be marking the base type with InstrumentationClassAttribute:
[InstrumentationClass(InstrumentationType.Instance)] public class MyBaseClass { public int IntField; } [InstrumentationClass(InstrumentationType.Instance)] public class MyManagedClass : MyBaseClass { public string StrProp; public static void Main(string[] args) {} }
Unfortunately, when you attempt to install the compiled assembly, installutil.exe will produce the following error message:
An exception occurred during the Install phase. System.Exception: Instance instrumentation classes must derive from abstract WMI classes.
It turns out that only leaf-level .NET types—types that do not have any subclasses—can publish instances to WMI. This means that applying InstrumentationClassAttribute with InstrumentationType.Instance to a base class is illegal. Somehow, the base class should be marked as abstract to indicate that it cannot expose any of its instances to WMI. You can achieve this by using another member of InstrumentationType enumeration, Abstract:
[InstrumentationClass(InstrumentationType.Abstract)] public class MyBaseClass { public int IntField; } [InstrumentationClass(InstrumentationType.Instance)] public class MyManagedClass : MyBaseClass { public string StrProp; public static void Main(string[] args) {} }
Now the installation process will complete just fine and the CIM Repository will be updated with the following class definitions:
[abstract] class MyBaseClass { sint32 IntField; }; class MyManagedClass : MyBaseClass { [key] string ProcessId; [key] string InstanceId; string StrProp; };
Note that the InstrumentationClassAttribute attribute is inherited by subclasses, so if a type hierarchy includes more than two types, only the root-level type and the leaf-level type should be decorated with InstrumentationClassAttribute. To clarify this point, look at the following example:
[InstrumentationClass(InstrumentationType.Abstract)] class RootLevelClass { } class IntermediateLevelClass : RootLevelClass { } [InstrumentationClass(InstrumentationType.Instance)] class LeafLevelClass : IntermediateLevelClass { }
Here, IntermediateLevelClass inherits InstrumentationClassAttribute from its parent, and therefore, it is considered abstract. The attribute, however, has to be overridden for the leaf-level type LeafLevelClass, otherwise, this type will be translated into an abstract WMI class.
Deriving from Existing WMI Classes
So far, we have discussed mapping .NET type hierarchies to WMI class hierarchies. However, what if you want to produce a WMI class that is derived from one of the existing classes that are not related to any of the instrumented application's types? For instance, how can you map an application type to a WMI class that is derived from CIM_ManagedSystemElement? This is, actually, surprisingly simple. It turns out that InstrumentationClassAttribute has an alternative constructor that not only takes the InstrumentationType parameter, but also takes a name of the existing base class. Thus, to create a WMI class MyManagedClass, which is a derivative of CIM_ManagedSystemElement, you may write the following code:
[InstrumentationClass(InstrumentationType.Instance, "CIM_ManagedSystemElement")] class MyManagedElement { }
Curiously, if you attempt to supply the name of a nonexistent WMI class to the constructor of InstrumentationClassAttribute, the installutil.exe will fail and produce an error message similar to the one shown earlier. Furthermore, if the name of a nonabstract WMI class such as Win32_Process is used as a parameter to the InstrumentationClassAttribute constructor, the installation will still fail with the same error message. The bottom line is that every class that supports instance instrumentation must either be a root-level class, or derive from an abstract WMI class.
There is, however, one exception to this rule. It turns out that you can define a .NET type marked with the InstrumentationClass(InstrumentationType.Instance) attribute as a derivative of another type that is also attributed for instance instrumentation. As the following disassembly listing demonstrates, the System.Management.Instrumentation.Instance type is decorated with the InstrumentationClass(InstrumentationType.Instance) attribute; the hexadecimal initialization string corresponds to the InstrumentationType.Instance enumeration member (int32 value 0x00000000):
.class public abstract auto ansi beforefieldinit Instance extends [mscorlib]System.Object implements System.Management.Instrumentation.IInstance { .custom instance void System.Management.Instrumentation.InstrumentationClassAttribute::.ctor( valuetype System.Management.Instrumentation.InstrumentationType) = ( 01 00 00 00 00 00 00 00 ) }
Nevertheless, the following compiles and passes the installation process just fine:
[InstrumentationClass(InstrumentationType.Instance)] public class MyManagedElement : Instance { public static void Main(string[] args) {} }
However, the outcome may be a bit surprising. Rather than creating two WMI classes: one that corresponds to the .NET Instance type, and the other as the subclass of this class, the .NET Framework updates the CIM Repository with the following class definition:
class MyManagedElement { [key] string ProcessId; [key] string InstanceId; };
So why is this possible and how does the Instance type differ from all the other application types? Well, as I said before, this is a special case—the Framework simply ignores the Instance type when it is used as a base class. This brings up another couple of questions: what is the purpose of Instance and why is it a part of the System.Management.Instrumentation namespace? As you may remember, the InstrumentationClass attribute propagates from the base class to its subclasses, which means that a subclass of Instance does not have to be decorated with this attribute to indicate its ability to support instance instrumentation. Therefore, it is, possible to expose a given class for management by simply deriving it from Instance:
public class MyManagedElement : Instance { public static void Main(string[] args) {} }
Thus, the only purpose of Instance type is to provide a nice alternative to the declarative instrumentation model, used throughout the System.Management.Instrumentation namespace. If, for some reason, you are not fond of custom attributes, you can always achieve the same effect by using Instance as a base for your application types.
When modeling type hierarchies with Instance, there is again a caveat. Since Instance is decorated with InstrumentationClass(InstrumentationType.Instance), which propagates to its subclasses, this attribute must be explicitly overridden for all intermediate and leaf-level types. Consider the following example:
[InstrumentationClass(InstrumentationType.Abstract)] public class MyIntermediateClass1 : Instance { } public class MyIntermediateClass2 : MyIntermediateClass1 { } [InstrumentationClass(InstrumentationType.Instance)] public class MyLeafClass : MyIntermediateClass2 { }
Here, MyIntermediateClass1, which has subclasses and therefore cannot support instance instrumentation, has to be marked with InstrumentationClass(InstrumentationType.Abstract) to override the InstrumentationClass(InstrumentationType.Instance) attribute inherited from Instance. MyIntermediateClass2 is fine because it inherits InstrumentationClass(InstrumentationType.Abstract) from its parent. Lastly, MyLeafClass is a leaf-level type that can support instance instrumentation and, therefore, has to be explicitly marked as such in order to override the inherited attribute.
As you can see, using Instance to instrument single, top-level types is straightforward. However, when it comes to type hierarchies, you still have to resort to using attributes, which, sort of defeats the purpose of using Instance in the first place.
Providing Instance Data to WMI
So far, I have talked about mapping .NET application types to WMI classes. Although important, having a WMI schema that reflects the instrumented types within an application by itself is not sufficient. The schema is just a skeleton, and the meat is the instance-level management data, which somehow has to be exposed to WMI. In other words, once the schema mapping is complete, there has to be a way to create instances of the application types and make them accessible to management clients just like instances of regular WMI classes.
The process of providing the instance-level data to WMI is surprisingly simple. All you have to do is create an instance of an appropriate .NET type and invoke certain helper methods to make such instances visible to management clients. The following is a complete (although fairly useless) example of creating and publishing an instance of an instrumented application type MyManagedElement:Element { public string Description; public int Count; public static void Main(string[] args) { MyManagedElement el = new MyManagedElement(); el.Description = "SAMPLE INSTANCE"; el.Count = 256; Instrumentation.Publish(el); Console.ReadLine(); Instrumentation.Revoke(el); } } }
When this code is complied and run, a console window will pop up and wait for the user input. The program will terminate whenever any key is pressed. For the duration of this code's run, the CIM Repository will contain a single instance of MyManagedElement class, which, when expressed in MOF, will look similar to the following:
instance of MyManagedElement { Count = 256; Description = "SAMPLE INSTANCE"; InstanceId = "3839"; ProcessId = "3c702745-c84b-11d6-9159-000255f41c79"; };
As you can see, the Count and Description properties of this instance reflect the initialization values for their respective .NET fields. Two other mysterious properties—InstanceID and ProcessID—represent the unique identifier for the newly created instance and the process identity of the running .NET application respectively. The InstanceID is just a sequence number that is incremented automatically for each published instance. The ProcessID, contrary to what you may think, has nothing to do with OS process ID (PID) of the running program. It is just a GUID generated once per process so that it guarantees the uniqueness of a particular application's session in time and space. The choice of GUIDs vs. conventional PIDs is obvious: PIDs are recycled by the operating system and may, therefore, cause collisions. The .NET Framework automatically adds these two properties to any WMI class that represents a .NET type and automatically assigns the appropriate values when an instance is published. This is done to ensure that any instance created by a .NET application always has a unique identifier; as you may remember, these two properties are marked with the Key qualifier within the WMI class definition.
The code used to publish the instance of a .NET type is remarkably simple. In fact, there are only two lines that may look somewhat new: calls to Publish and Revoke methods of the helper type Instrumentation. As its name implies, Publish takes an instance of a .NET type as a parameter and makes it visible through WMI. Once published, the instance remains accessible to WMI clients until the application exits or the Revoke method is called. Revoke is the opposite of Publish; it essentially erases all traces of a given instance from the CIM Repository. If an instance is to remain visible for the entire lifetime of the application, calling Revoke is optional—the .NET Framework cleans up after itself automatically when an application shuts down.
When you are managing multiple instances, sometimes you may want to keep track of which of these instances are published. Generally, the .NET Framework is very good about ensuring that a given instance is not duplicated in the CIM Repository. Even if you invoke Publish on a particular instance more than once, all but the first invocation will have no effect and the CIM Repository will only contain a single version of the instance. Nevertheless, to avoid confusion and application errors, it might be a good idea to somehow record that a particular instance has been exposed to WMI. To do so, you can add a Boolean flag to your instances and set it to TRUE every time an instance is published, but unfortunately, it is easy to make a mistake that may eventually wreak havoc on your application. This is where subclassing System.Management.Instrumentation.Instance rather than using custom attributes may prove to be advantageous. All subclasses of Instance will automatically inherit its Published property, which is set to TRUE by the .NET Framework as soon as the instance is published, and updated back to FALSE whenever the Revoke is called on this instance. Consider the following code:
using System; using System.Management; using System.ComponentModel; using System.Configuration.Install; using System.Management.Instrumentation; [assembly:Instrumented(@" rootCIMV2")] namespace InstrumentedApplication { [RunInstaller(true)] public class MyInstaller : DefaultManagementProjectInstaller {} public class MyManagedElement : Instance { public string Description; public int Count; public static void Main(string[] args) { MyManagedElement el = new MyManagedElement(); el.Description = "SAMPLE INSTANCE"; el.Count = 256; Instrumentation.Publish(el); Console.WriteLine("Instance published (true/false): {0}", el.Published); Console.ReadLine(); Instrumentation.Revoke(el); Console.WriteLine("Instance published (true/false): {0}", el.Published); } } }
Upon its invocation, this code will print the following message on the console:
Instance published (true/false): True
Once a key is pressed, the program will terminate and print another message:
Instance published (true/false): False
Although exposing the application's data for management through WMI is invaluable, being able to send out notifications when some application-specific events occur is even more important. Thus, you may rightly expect the System.Management.Instrumentation namespace to provide extensive functionality in support of management events. This is indeed the case, and armed with .NET instrumentation types, you can easily outfit your applications with full-fledged event notification capabilities.
Generating management events is as easy (if not easier) as supporting instance instrumentation. Working in concert with the principles of the declarative instrumentation model, you can simply mark the application types with the appropriate attributes and then use helper methods to route the notifications to WMI. For instance the following snippet of code, creates the application-defined event and sends it to the consumers:.Event)] public class MyManagementEvent { public string Description; public int EventNo; public static void Main(string[] args) { MyManagementEvent ev = new MyManagementEvent(); ev.Description = "SAMPLE MANAGEMENT EVENT"; ev.EventNo = 256; Instrumentation.Fire(ev); Console.ReadLine(); } } }
This code has a lot in common with the previous code fragment that was used to supply instance-level data to WMI. The first noticeable difference is the InstrumentationType enumeration member that is passed to the constructor of the InstrumentationClass attribute, which decorates MyManagementEvent type. To indicate that a certain application type represents a management event, such a type must be marked with InstrumentationClass(InstrumentationType.Event) attribute.
If you save this code to a file and then compile and install it using installutil.exe, the CIM Repository will contain the following definition for the application event class:
class MyManagementEvent : __ExtrinsicEvent { string Description; sint32 EventNo; };
The first thing to notice here is that the event class is derived from the system class __ExtrinsicEvent. This is logical because all application event classes are, indeed extrinsic events. Although it is certainly possible to create application-specific event hierarchies by building the respective hierarchies of .NET types, the top-level class of the resulting WMI event tree will always be __ExtrinsicEvent.
Another aspect of this generated event class definition that makes it different from the previously shown WMI classes that were generated in support of instance instrumentation is the absence of the ProcessID and InstanceID properties. Since WMI events are transient, they do not have to be stored in the CIM Repository, and therefore, they do not need unique identities.
The process of routing an application event to a consumer is also very similar to publishing an instance. The difference here is that rather than using the Publish method of the Instrumentation helper type, you must use the Fire method that belongs to the same type. Just like Publish, Fire takes a single parameter—an object that represents an application event to be sent to the consumers. Note that a call to Fire does not have to be followed by a call to Revoke; the events are transient and do not have to be unpublished.
Similar to the Instance type, which can be used as a superclass for management types, this process uses a BaseEvent type that is designed as a top-level type for modeling events. Subclassing BaseEvent is a nice alternative to decorating the event types with the InstrumentationClass(InstrumentationType.Event) attribute—the base type is already marked with this attribute, which propagates down to its children. Thus, the code example shown earlier can be rewritten as follows:
using System; using System.Management; using System.ComponentModel; using System.Configuration.Install; using System.Management.Instrumentation; [assembly:Instrumented(@" rootCIMV2")] namespace InstrumentedApplication { [RunInstaller(true)] public class MyInstaller : DefaultManagementProjectInstaller {} public class MyManagementEvent : BaseEvent { public string Description; public int EventNo; public static void Main(string[] args) { MyManagementEvent ev = new MyManagementEvent(); ev.Description = "SAMPLE MANAGEMENT EVENT"; ev.EventNo = 256; ev.Fire(); Console.ReadLine(); } } }
The effect of this code is exactly the same as that of the earlier code and the generated WMI definition for the event class remains unchanged. The only substantial difference here is that to fire an event that is a subclass of BaseEvent, you no longer need to use Instrumentation.Fire method. Instead, you can use the Fire method inherited from BaseEvent.
Using BaseEvent as a base type is especially convenient when it comes to modeling complex event hierarchies. Consider the following:
public class MyTopLevelEvent : BaseEvent { }; public class MyIntermediateLevelEvent : MyTopLevelEvent { }; public class MyLeafLevelEvent : MyIntermediateLevelEvent { };
Since BaseEvent is marked with the InstrumentationClass(InstrumentationType.Event) attribute, which is applicable to all of its children, the attribute does not have to be overridden anywhere within the type hierarchy. By the same token, when you are using the declarative approach, you only have to apply the attribute to the top-level type:
[InstrumentationClass(InstrumentationType.Event)] public class MyTopLevelEvent { }; public class MyIntermediateLevelEvent : MyTopLevelEvent { }; public class MyLeafLevelEvent : MyIntermediateLevelEvent { };
This makes modeling event hierarchies a bit less complex than building instance inheritance trees, which is a big help, considering that event hierarchies often come in handy. The apparent value of having all application events be subclasses of a common base event type comes from the ability to issue catch-all event queries against the root-level event class. Thus, given the event tree, shown earlier, you may write the following query to subscribe to all three of the events, MyTopLevelEvent, MyIndermediateLevelEvent, and MyLeafLevelEvent:
SELECT * FROM MyTopLevelEvent
By issuing such a generalized event query, you ask an application to instruct WMI to route to it not only the events that belong to the class that is specified in the query, but also all other events that are the subclasses of that class.
Instrumenting applications is a notoriously complex task that many developers and system administrators have been dreading for years. Fortunately, even the first release of FCL and the System.Management.Instrumentation namespace has made a significant number of the challenges commonly associated with exposing custom programs for management simply go away. Nowadays, it is possible to publish the management data and distribute management events through WMI with minimal coding effort.
This chapter has been a comprehensive introduction to the subject of instrumenting .NET applications with System.Management.Instrumentation types. Although, the scope of this book does not allow me to delve deeper into the guts of the .NET instrumentation framework, after having read the material presented here, you should be at least aware of
Although extremely helpful, the System.Management.Instrumentation namespace is not perfect. There are no major issues and it is fair to say that Microsoft developers did a great job, especially considering that this is the first release of the .NET Framework. Nevertheless, there are a few things that need improvement:
Despite all the deficiencies, System.Management.Instrumentation is still a great framework for instrumenting custom applications. Its most attractive characteristic is simplicity and, even if it does not satisfy all your instrumentation needs, it is still a big step forward. Again, its goal is to bring the joy of instrumenting .NET applications to a wider audience and it is fair to say that it achieves this.
Of course, there will always be people reaching out for more control or flexibility, or there will be those who try to cater to a unique management situation, not covered by the System.Management.Instrumentation framework. Hopefully, such people will represent just a small percentage of the developers and system administrators, but even for them, .NET has something to offer.
In fact, Visual Studio .NET offers two new ATL wizards to make the conventional COM-based provider development more accessible: the WMI Event Provider Wizard and the WMI Instance Provider Wizard. These wizards generate most of the code required to get a full-fledged event, instance, or method provider up and operational in a reasonably short time with minimal coding efforts. Unfortunately, digging into the practical aspects of these wizards is well beyond the scope of this book, so the adventurous reader will have to resort to Visual Studio .NET documentation. | http://flylib.com/books/en/2.568.1/instrumenting_net_applications_with_wmi.html | CC-MAIN-2018-05 | en | refinedweb |
Classes that define geometric shapes are contained in the java.awt.geom package, so to use them in a class we will need an import statement for this package at the beginning of the class file. You can add one to SketchView.java right now if you like. While the classes that define shapes are in java.awt.geom, the Shape interface is defined in java.awt, so you will usually need to import class names from both packages into your source file.
Any class that implements the Shape interface defines a shape – visually it will be some composite of straight lines and curves. Straight lines, rectangles, ellipses and curves are all shapes.
A graphic context knows how to draw Shape objects. To draw a shape on a component, you just need to pass the object defining the shape to the draw()method for the Graphics2D object for the component. To look at this in detail, we'll split the shapes into three groups, straight lines and rectangles, arcs and ellipses, and freeform curves. First though, we must take a look at how points are defined.
There are two classes in the java.awt.geom package that define points, Point2D.Float and Point2D.Double. From the class names you can see that these are both inner classes to the class Point2D, which also happens to be an abstract base class for both too. The Point2D.Float class defines a point from a pair of (x,y) coordinates of type float, whereas the Point2D.Double class defines a point as a coordinate pair of type double. The Point class in the java.awt package also defines a point, but in terms of a coordinate pair of type int. This class also has Point2D as a base.
The Point class actually predates the Point2D class, but the class was redefined to make it a subclass of Point2D when Point2D was introduced, hence the somewhat unusual class hierarchy with only two of the subclasses as inner classes. The merit of this arrangement is that all of the subclasses inherit the methods defined in the Point2D class, so operations on each of the three kinds of point are the same.
The three subclasses of Point2D define a default constructor that defines the point 0,0, and a constructor that accept a pair of coordinates of the type appropriate to the class type.
The operations that each of the three concrete point classes inherit are:
Accessing coordinate values:
The getX() and getY() methods return the x and y coordinates of a point as type double, regardless of how the coordinates are stored. These are abstract methods in the Point2D class so they are defined in each of the subclasses. Although you get coordinates as double values from all three concrete classes via these methods you can always access the coordinates with their original type directly since the coordinates are stored in public fields with the same names, x and y, in each case.
Calculating the distance between two points:
You have no less that three overloaded versions of the distance() method for calculating the distance between two points, and returning it as type double:
Here's how you might calculate the distance between two points:
Point2D.Double p1 = new Point2D.Double(2.5, 3.5); Point p2 = new Point(20, 30); double lineLength = p1.distance(p2);
You could also have calculated this distance without creating the points by using the static method:
double lineLength = Point2D.distance(2.5, 3.5, 20, 30);
Corresponding to each of the three distance() methods there is a convenience method, distanceSq(), with the same parameter list that returns the square of the distance as type double.
Comparing points:
The equals() method compares the current point with the point object referenced by the argument and returns true if they are equal and false otherwise.
Setting a new location for a point:
The inherited setLocation()method comes in two versions. One accepts an argument that is a reference of type Point2D, and sets the coordinate values of the current point to those of the point passed as an argument. The other accepts two arguments of type double that are the x and y coordinates of the new location. The Point class also defines a version of setLocation() that accepts two arguments of type int to define the new coordinates.
The java.awt.geom package contains the following classes for shapes that are straight lines and rectangles:
As with the classes defining points, the Rectangle class that is defined in the java.awt package predates the Rectangle2D class, but the definition of the Rectangle class was changed to make Rectangle2D a base for compatibility reasons. Note that there is no equivalent to the Rectangle class for lines defined by integer coordinates. If you are browsing the documentation, you may notice there is a Line interface, but this is nothing to do with geometry.
You can define a line by supplying two Point2D objects to a constructor, or two pairs of (x, y) coordinates. For example, here's how you define a line by two coordinate pairs:
Line2D.float line = new Line2D.Float(5.0f, 100.0f, 50.0f, 150.0f);
This draws a line from the point (5.0, 100.0) to the point (50.0, 150.0). You could also create the same line using Point2D.Float objects:
Point2D.Float p1 = new Point2D.Float(5.0f, 100.0f); Point2D.Float p2 = new Point2D.Float(50.0f, 150.0f); Line2D.float line = new Line2D.Float(p1, p2);
You draw a line using the draw() method for a Graphics2D object, for example:
g2D.draw(line); // Draw the line
To create a rectangle, you specify the coordinates of its top-left corner, and the width and height:
float width = 120.0f; float height = 90.0f; Rectangle2D.Float rectangle = new Rectangle2D.Float(50.0f, 150.0f, width, height);
The default constructor creates a rectangle at the origin with a zero width and height. You can set the position, width, and height of a rectangle by calling its setRect() method. There are three versions of this method. One of them accepts arguments for the coordinates of the top-left corner and the width and height as float values, exactly as in the constructor. Another accepts the same arguments but of type double. The third accepts an argument of type Rectangle2D so you can pass either type of Rectangle2D to it.
A Rectangle2D object has getX() and getY() methods for retrieving the coordinates of the top-left corner, and getWidth() and getHeight() methods that return the width and height.
A round rectangle is a rectangle with rounded corners. The corners are defined by a width and a height and are essentially a quarter segment of an ellipse (we will get to ellipses later). Of course, if the corner width and height are equal then the corner will be a quarter of a circle.
You can define a round rectangle using coordinates of type double with the statements:
Point2D.Double position = new Point2D.Double(10, 10); double width = 200.0; double height = 100; double cornerWidth = 15.0; double cornerHeight = 10.0; RoundRectangle2D.Double roundRect = new RoundRectangle2D.Double( position.x, position.y, // Position of top-left width, height, // Rectangle width & height cornerWidth, cornerHeight); // Corner width & height
The only difference between this and defining an ordinary rectangle is the addition of the width and height to be applied for the corner rounding.
You can combine two rectangles to produce a new rectangle that is either the union of the two original rectangles or the intersection. Let's take a couple of specifics to see how this works. We can create two rectangles with the statements:
float width = 120.0f; float height = 90.0f; Rectangle2D.Float rect1 = new Rectangle2D.Float(50.0f, 150.0f, width, height); Rectangle2D.Float rect2 = new Rectangle2D.Float(80.0f, 180.0f, width, height);
We can obtain the intersection of the two rectangles with the statement:
Rectangle2D.Float rect3 = rect1.createIntersection(rect2);
The effect is illustrated in the diagram below by the shaded rectangle. Of course, the result is the same if we call the method for rect2 with rect1 as the argument. If the rectangles don't overlap the rectangle that is returned will be the rectangle from the bottom right of one rectangle to the top right of the other that does not overlap either.
The following statement produces the union of the two rectangles:
Rectangle2D.Float rect3 = rect1.createUnion(rect2);
The result is shown in the diagram by the rectangle with the heavy boundary that encloses the other two.
Perhaps the simplest test you can apply is for an empty rectangle. The isEmpty() method that is implemented in all the rectangle classes returns true if the Rectangle2D object is empty – which is when either the width or the height (or both) are zero.
You can also test whether a point lies inside any type of rectangle object by calling its contains()method. There are contains() methods for all the rectangle classes that accept a Point2D argument or a pair of (x, y) coordinates of a type matching that of the rectangle class: they return true if the point lies within the rectangle. Each shape class defines a getBounds2D() method that returns a Rectangle2D object that encloses the shape.
This method is frequently used in association with the contains() method to test efficiently whether the cursor lies within a particular shape. Testing whether the cursor is within the enclosing rectangle will be a lot faster in general than testing whether it is within the precise boundary of the shape, and is good enough for many purposes – when selecting a particular shape on the screen to manipulate it in some way for instance.
There are also versions of the contains() method to test whether a given rectangle lies within the area occupied by a rectangle object – this obviously enables you to test whether a shape lies within another shape. The given rectangle can be passed to the contains() method as the coordinates of its top-left corner and its height and width as type double, or as a Rectangle2D reference. The method returns true if the rectangle object completely contains the given rectangle.
Let's try drawing a few simple lines and rectangles by inserting some code in the paint() method for the view in Sketcher.
Begin by adding an import statement to SketchView.java for the java.awt.geom package:
import java.awt.geom.*;
Now replace the previous code in the paint() method in the SketchView class with the following:
public void paint(Graphics g) { // Temporary code Graphics2D g2D = (Graphics2D)g; // Get a Java 2D device context g2D.setPaint(Color.red); // Draw in red // Position width and height of first rectangle Point2D.Float p1 = new Point2D.Float(50.0f, 10.0f); float width1 = 60; float height1 = 80; // Create and draw the first rectangle Rectangle2D.Float rect = new Rectangle2D.Float(p1.x, p1.y, width1, height1); g2D.draw(rect); // Position width and height of second rectangle Point2D.Float p2 = new Point2D.Float(150.0f, 100.0f); float width2 = width1 + 30; float height2 = height1 + 40; // Create and draw the second rectangle g2D.draw(new Rectangle2D.Float( (float)(p2.getX()), (float)(p2.getY()), width2, height2)); g2D.setPaint(Color.blue); // Draw in blue // Draw lines to join corresponding corners of the rectangles Line2D.Float line = new Line2D.Float(p1,p2); g2D.draw(line);)); g2D.drawString("Lines and rectangles", 60, 250); // Draw some text }
If you type this in correctly and recompile SketchView class, the Sketcher window will look like:
How It Works
After casting the graphics context object that is passed to the paint() method to type Graphics2D we set the drawing color to red. All subsequent drawing that we do will be in red until we change the color with another call to setPaint(). We define a Point2D.Float object to represent the position of the first rectangle, and we define variables to hold the width and height of the rectangle. We use these to create the rectangle by passing them as arguments to the constructor that we have seen before, and display the rectangle by passing the rect object to the draw()method for the graphics context, g2D. The second rectangle is defined by essentially the same process, except that this time we create the Rectangle2D.Float object in the argument expression for the draw() method.
Note that we have to cast the values returned by the getX() and getY() members of the Point2D object as they are returned as type double. It is generally more convenient to reference the x and y fields directly as we do in the rest of the code.
We change the drawing color to blue so that you can see quite clearly the lines we are drawing. We use the setLocation() method for the point objects to move the point on each rectangle to successive corners, and draw a line at each position. The caption also appears in blue since that is the color in effect when we call the drawString() method to output the text string.
There are shape classes defining both arcs and ellipses. The abstract class representing a generic ellipse is:
The class representing an elliptic arc is:
Arcs and ellipses are closely related since an arc is just a segment of an ellipse. To define an ellipse you supply the data necessary to define the enclosing rectangle – the coordinates of the top-left corner, the width, and the height. To define an arc you supply the data to define the ellipse, plus additional data that defines the segment that you want. The seventh argument to the arc constructor determines the type, whether OPEN, CHORD, or PIE.
You could define an ellipse with the statements:
Point2D.Double position = new Point2D.Double(10,10); double width = 200.0; double height = 100; Ellipse2D.Double ellipse = new Ellipse2D.Double( position.x, position.y, // Top-left corner width, height); // width & height of rectangle
You could define an arc that is a segment of the previous ellipse with the statement:
Arc2D.Double arc = new Arc2D.Double( position.x, position.y, // Top-left corner width, height, // width & height of rectangle 0.0, 90.0, // Start and extent angles Arc2D.OPEN); // Arc is open
This defines the upper-right quarter segment of the whole ellipse as an open arc. The angles are measured anticlockwise from the horizontal in degrees. As we saw earlier the first angular argument is where the arc starts, and the second is the angular extent of the arc.
Of course, a circle is just an ellipse where the width and height are the same, so the following statement defines a circle with a diameter of 150:
double diameter = 150.0; Ellipse2D.Double circle = new Ellipse2D.Double( position.x, position.y, // Top-left corner diameter, diameter); // width & height of rectangle
This presumes the point position is defined somewhere. You will often want to define a circle by its center and radius – adjusting the arguments to the constructor a little does this easily:
Point2D.Double center = new Point2D.Double(200, 200); double radius = 150; Ellipse2D.Double newCircle = new Ellipse2D.Double( center.x-radius, center.y-radius, // Top-left corner 2*radius, 2*radius); // width & height of rectangle
The fields storing the coordinates of the top-left corner of the enclosing rectangle and the width and height are public members of Ellipse2D and Arc2D objects. They are x, y, width and height respectively. An Arc2D object also has public members, start and extent, that store the angles.
Let's modify the paint()method in SketchView.java once again to draw some arcs and ellipses.
public void paint(Graphics g) { // Temporary code Graphics2D g2D = (Graphics2D)g; // Get a Java 2D device context Point2D.Double position = new Point2D.Double(50,10); // Initial position double width = 150; // Width of ellipse double height = 100; // Height of ellipse double start = 30; // Start angle for arc double extent = 120; // Extent of arc double diameter = 40; // Diameter of circle // Define open arc as an upper segment of an ellipse Arc2D.Double top = new Arc2D.Double(position.x, position.y, width, height, start, extent, Arc2D.OPEN); // Define open arc as lower segment of ellipse shifted up relative to 1st Arc2D.Double bottom = new Arc2D.Double( position.x, position.y – height + diameter, width, height, start + 180, extent, Arc2D.OPEN); // Create a circle centered between the two arcs Ellipse2D.Double circle1 = new Ellipse2D.Double( position.x + width/2 – diameter/2,position.y, diameter, diameter); // Create a second circle concentric with the first and half the diameter Ellipse2D.Double circle2 = new Ellipse2D.Double( position.x + width/2 – diameter/4, position.y + diameter/4, diameter/2, diameter/2); // Draw all the shapes g2D.setPaint(Color.black); // Draw in black g2D.draw(top); g2D.draw(bottom); g2D.setPaint(Color.blue); // Draw in blue g2D.draw(circle1); g2D.draw(circle2); g2D.drawString("Arcs and ellipses", 80, 100); // Draw some text }
Running Sketcher with this version of the paint() method in SketchView will produce the window shown here.
How It Works
This time we create all the shapes first and then draw them. The two arcs are segments of ellipses of the same height and width. The lower segment is shifted up with respect to the first so that they intersect, and the distance between the top of the rectangle for the first and the bottom of the rectangle for the second is diameter, which is the diameter of the first circle we create.
Both circles are created centered between the two arcs and are concentric. Finally we draw all the shapes – the arcs in black and the circles in blue.
Next time we change the code in Sketcher, we will be building the application as it should be, so remove the temporary code from the paint() method and the code that sets the background color in the ColorAction inner class to the SketchFrame class.
There are two classes that define arbitrary curves, one defining a quadratic or second order curve and the other defining a cubic curve. The cubic curve just happens to be a Bézier curve (so called because it was developed by a Frenchman, Monsieur P. Bézier, and first applied in the context of defining contours for programming numerically-controlled machine tools). The classes defining these curves are:
In general, there are many other methods for modeling arbitrary curves, but the two defined in Java have the merit that they are both easy to understand, and the effect on the curve segment when the control point is moved is quite intuitive.
An object of each curve type defines a curve segment between two points. The control points – one for a QuadCurve2D curve and two for a CubicCurve2D curve – control the direction and magnitude of the tangents at the end points. A QuadCurve2D curve constructor has six parameters corresponding to the coordinates of the starting point for the segment, the coordinates of the control point and the coordinates of the end point. We can define a QuadCurve2D curve from a point start to a point end, plus a control point, control, with the statements:
Point2D.Double startQ = new Point2D.Double(50, 150); Point2D.Double endQ = new Point2D.Double(150, 150); Point2D.Double control = new Point2D.Double(80,100); QuadCurve2D.Double quadCurve = new QuadCurve2D.Double(startQ.x, startQ.y, // Segment start point control.x, control.y, // Control point endQ.x, endQ.y); // Segment end point
The QuadCurve2D subclasses have public members storing the end points and the control point so you can access them directly. The coordinates of the start and end points are stored in the fields, x1, y1, x2, and y2. The coordinates of the control point are stored in ctrlx and ctrly.
Defining a cubic curve segment is very similar – you just have two control points, one for each end of the segment. The arguments are the (x, y) coordinates of the start point, the control point for the start of the segment, the control point for the end of the segment and finally the end point. We could define a cubic curve with the statements:
Point2D.Double startC = new Point2D.Double(50, 300); Point2D.Double endC = new Point2D.Double(150, 300); Point2D.Double controlStart = new Point2D.Double(80, 250); Point2D.Double controlEnd = new Point2D.Double(160, 250); CubicCurve2D.Double cubicCurve = new CubicCurve2D.Double( startC.x, startC.y, // Segment start point controlStart.x, controlStart.y, // Control point for start controlEnd.x, controlEnd.y, // Control point for end endC.x, endC.y); // Segment end point
The cubic curve classes also have public members for all the points: x1, y1, x2 and y2 for the end points, and ctrlx1, ctrly1, ctrlx2 and ctrly2 for the corresponding control points.
We can understand these better if we try them out. This time let's do it with an applet.
We can define an applet to display the curves we used as examples above:
import javax.swing.JApplet; import javax.swing.JComponent; import java.awt.Color; import java.awt.Graphics2D; import java.awt.Container; import java.awt.Graphics; import java.awt.geom.Point2D; import java.awt.geom.CubicCurve2D; import java.awt.geom.QuadCurve2D; public class CurveApplet extends JApplet { // Initialize the applet public void init() { pane = new CurvePane(); // Create pane containing curves Container content = getContentPane(); // Get the content pane // Add the pane displaying the curves to the content pane for the applet content.add(pane); // BorderLayout.CENTER is default position } // Class defining a pane on which to draw class CurvePane extends JComponent { // Constructor public CurvePane() { quadCurve = new QuadCurve2D.Double( // Create quadratic curve startQ.x, startQ.y, // Segment start point control.x, control.y, // Control point endQ.x, endQ.y); // Segment end point cubicCurve = new CubicCurve2D.Double( // Create cubic curve startC.x, startC.y, // Segment start point controlStart.x, controlStart.y, // Control point for start controlEnd.x, controlEnd.y, // Control point for end endC.x, endC.y); // Segment end point } public void paint(Graphics g) { Graphics2D g2D = (Graphics2D)g; // Get a 2D device context // Draw the curves g2D.setPaint(Color.BLUE); g2D.draw(quadCurve); g2D.draw(cubicCurve); } } // Points for quadratic curve Point2D.Double startQ = new Point2D.Double(50, 75); // Start point Point2D.Double endQ = new Point2D.Double(150, 75); // End point Point2D.Double control = new Point2D.Double(80, 25); // Control point // Points for cubic curve Point2D.Double startC = new Point2D.Double(50, 150); // Start point Point2D.Double endC = new Point2D.Double(150, 150); // End point Point2D.Double controlStart = new Point2D.Double(80, 100); // 1st control point Point2D.Double controlEnd = new Point2D.Double(160, 100); // 2nd control point QuadCurve2D.Double quadCurve; // Quadratic curve CubicCurve2D.Double cubicCurve; // Cubic curve CurvePane pane = new CurvePane(); // Pane to contain curves }
You will need an HTML file to run the applet. The contents can be something like:
<applet code="CurveApplet.class" width=300 height=300></applet>
This will display the applet in appletviewer. If you want to display it in your browser, you need to convert the HTML using the HTMLConverter program. If you don't already have it you can download it from the web site.
If you run the applet using appletviewer, you will get a window looking like that here.
How It Works
We need an object of our own class type so that we can implement the paint() method for it. We define the inner class CurvePane for this purpose with JComponent as the base class so it is a Swing component. We create an object of this class (which is a member of the CurveApplet class) and add it to the content pane for the applet using its inherited add() method. The layout manager for the content pane is BorderLayout, and the default positioning is BorderLayout.CENTER so the CurvePane object fills the content pane.
The points defining the quadratic and cubic curves are defined as fields in the CurveApplet class and these are referenced in the paint() method for the CurvePane class to create the objects representing curves. These points are used in the CurvePane class constructor to create the curves. We draw the curves by calling the draw() method for the Graphics2D object and passing a reference to a curve object as the argument.
It's hard to see how the control points affect the shape of the curve, so let's add some code to draw the control points.
We will mark the position of each control point by drawing a small circle around it. We can define a marker using an inner class of CurveApplet that we can define as:
// Inner class defining a control point marker class Marker { public Marker(Point2D.Double control) { center = control; // Save control point as circle center // Create circle around control point circle = new Ellipse2D.Double(control.x-radius, control.y-radius, 2.0*radius, 2.0*radius); } // Draw the marker public void draw(Graphics2D g2D) { g2D.draw(circle); } // Get center of marker – the control point position Point2D.Double getCenter() { return center; } Ellipse2D.Double circle; // Circle around control point Point2D.Double center; // Circle center – the control point static final double radius = 3; // Radius of circle }
The argument to the constructor is the control point that is to be marked. The constructor stores this control point in the member, center, and creates an Ellipse2D.Double object that is the circle to mark the control point. The class also has a method, draw(), to draw the marker using the Graphics2D object reference that is passed to it. The getCenter() method returns the center of the marker as a Point2D.Double reference. We will use this method when we draw tangent lines from the end points of a curve to the corresponding control points.
We will add fields to the CurveApplet class to define the markers for the control points. These definitions should follow the members that defines the points:
// Markers for control points Marker ctrlQuad = new Marker(control); Marker ctrlCubic1 = new Marker(controlStart); Marker ctrlCubic2 = new Marker(controlEnd);
We can now add code to the paint()method for the CurvePane class to draw the markers and the tangents from the endpoints of the curve segments:
public void paint(Graphics g) { // Code to draw curves as before... // Create and draw the markers showing the control points g2D.setPaint(Color.red); // Set the color ctrlQuad.draw(g2D); ctrlCubic1.draw(g2D); ctrlCubic2.draw(g2D); // Draw tangents from the curve end points to the control marker centers Line2D.Double tangent = new Line2D.Double(startQ, ctrlQuad.getCenter()); g2D.draw(tangent); tangent = new Line2D.Double(endQ, ctrlQuad.getCenter()); g2D.draw(tangent); tangent = new Line2D.Double(startC, ctrlCubic1.getCenter()); g2D.draw(tangent); tangent = new Line2D.Double(endC, ctrlCubic2.getCenter()); g2D.draw(tangent); }
If you recompile the applet with these changes, when you execute it again you should see the window shown here.
How It Works
In the Marker class constructor, the top-left corner of the rectangle enclosing the circle for a control point is obtained by subtracting the radius from the x and y coordinates of the control point. We then create an Ellipse2D.Double object with the width and height as twice the value of radius – which is the diameter of the circle.
In the paint()method we call the draw() method for each of the Marker objects to draw a red circle around each control point. The tangents are just lines from the endpoints of each curve segment to the centers of the corresponding Marker objects.
It would be good to see what happens to a curve segment when you move the control points around. Then we could really see how the control points affect the shape of the curve. That's not as difficult to implement as it might sound, so let's give it a try.
We will arrange to allow a control point to be moved by positioning the cursor on it, pressing a mouse button and dragging it around. Releasing the mouse button will stop the process for that control point, so you will then be free to manipulate another one. To do this we will add another inner class to CurveApplet that will handle mouse events:
class MouseHandler extends MouseInputAdapter { public void mousePressed(MouseEvent e) { // Check if the cursor is inside any marker if(ctrlQuad.contains(e.getX(), e.getY())) selected = ctrlQuad; else if(ctrlCubic1.contains(e.getX(), e.getY())) selected = ctrlCubic1; else if(ctrlCubic2.contains(e.getX(), e.getY())) selected = ctrlCubic2; } public void mouseReleased(MouseEvent e) { selected = null; // Deselect any selected marker } public void mouseDragged(MouseEvent e) { if(selected != null) { // If a marker is selected // Set the marker to current cursor position selected.setLocation(e.getX(), e.getY()); pane.repaint(); // Redraw pane contents } } Marker selected = null; // Stores reference to selected marker }
We need to add two import statements to the beginning of the source file, one because we reference the MouseInputAdapter class, and the other because we refer to the MouseEvent class:
import javax.swing.event.MouseInputAdapter; import java.awt.event.MouseEvent;
The mousePressed()method calls a method contains() that should test whether the point defined by the arguments is inside the marker. We can implement this in the Marker class like this:
// Test if a point x,y is inside the marker public boolean contains(double x, double y) { return circle.contains(x,y); }
This just calls the contains()method for the circle object that is the marker. This will return true if the point (x, y) is inside.
The mouseDragged()method calls a method setLocation() for the selected Marker object, so we need to implement this in the Marker class, too:
// Sets a new control point location public void setLocation(double x, double y) { center.x = x; // Update control point center.y = y; // coordinates circle.x = x-radius; // Change circle position circle.y = y-radius; // correspondingly }
After updating the coordinates of the point, center, we also update the position of circle by setting its data member directly. We can do this because x and y are public members of the Ellipse2D.Double class.
We can create a MouseHandler object in the init() method for the applet and set it as the listener for mouse events for the pane object:
public void init() { pane = new CurvePane(); // Create pane containing curves Container content = getContentPane(); // Get the content pane // Add the pane displaying the curves to the content pane for the applet content.add(pane); // BorderLayout.CENTER is default position MouseHandler handler = new MouseHandler(); // Create the listener pane.addMouseListener(handler); // Monitor mouse button presses pane.addMouseMotionListener(handler); // as well as movement }
Of course, to make the effect of moving the control points apparent, we must update the curve objects before we draw them. We can add the following code to the paint() method to do this:
public void paint(Graphics g) { Graphics2D g2D = (Graphics2D)g; // Get a 2D device context // Update the curves with the current control point positions quadCurve.ctrlx = ctrlQuad.getCenter().x; quadCurve.ctrly = ctrlQuad.getCenter().y; cubicCurve.ctrlx1 = ctrlCubic1.getCenter().x; cubicCurve.ctrly1 = ctrlCubic1.getCenter().y; cubicCurve.ctrlx2 = ctrlCubic2.getCenter().x; cubicCurve.ctrly2 = ctrlCubic2.getCenter().y; // Rest of the method as before...
We can update the data members that store the control point coordinates for the curves directly because they are public members of each curve class. We get the coordinates of the new positions for the control points from their markers by calling the getCenter() method for each, and then accessing the appropriate data member of the Point2D.Double object that is returned.
If you recompile the applet with these changes and run it again you should get something like the window here.
You should be able to drag the control points around with the mouse. If it is a bit difficult to select the control points, just make the value of radius a bit larger. Note how the angle of the tangent as well as its length affects the shape of the curve.
How It Works
In the MouseHandler class, the mousePressed() method will be called when you press a mouse button. In this method we check whether the current cursor position is within any of the markers enclosing the control points. We do this by calling the contains() method for each marker object and passing the coordinates of the cursor position to it. The getX() and getY() methods for the MouseEvent object supply the coordinates of the current cursor position. If one of the markers does enclose the cursor, we store a reference to the Marker object in the selected member of the MouseHandler class for use by the mouseDragged() method.
In the mouseDragged()method, we set the location for the Marker object referenced by selected to the current cursor position and call repaint() for the pane object. The repaint()method causes the paint()method to be called for the component, so everything will be redrawn, taking account of the modified control point position.
Releasing the mouse button will cause the mouseReleased() method to be called. In here we just set the selected field back to null so no Marker object is selected. Remarkably easy, wasn't it?
You can define a more complex shape as an object of type GeneralPath. A GeneralPath object can be a composite of lines, Quad2D curves, and Cubic2D curves, or even other GeneralPath objects.
The process for determining whether a point is inside or outside a GeneralPath object is specified by the winding rule for the object. There are two winding rules that you can specify by constants defined in the class:
These winding rules are illustrated below:
The safe option is WIND_EVEN_ODD.
There are four constructors for GeneralPath objects:
We can create a GeneralPath object with the statement:
GeneralPath p = new GeneralPath(GeneralPath.WIND_EVEN_ODD);
A GeneralPath object embodies the notion of a current point of type Point2D from which the next path segment will be drawn. You set the initial current point by passing a pair of (x, y) coordinates as values of type float to the moveTo()method for the object. For example:
p.moveTo(10.0f,10.0f); // Set the current point to 10,10
A segment is added to the general path, starting at the current point, and the end of each segment that you add becomes the new current point that is used to start the next segment. Of course, if you want disconnected segments in a path, you can call moveTo() to move the current point to wherever you want before you add a new segment. If you need to get the current position at any time, you can call the getCurrentPoint() method for a GeneralPath object and get the current point as type Point2D.
You can use the following methods to add segments to a GeneralPath object:
Each of these methods updates the current point to be the end of the segment that is added. A path can consist of several subpaths since a new subpath is started by a moveTo() call. The closePath() method closes the current subpath by connecting the current point after the last segment to the point defined by the previous moveTo()call.
Let's illustrate how this works with a simple example. We could create a triangle with the following statements:
GeneralPath p = new GeneralPath(GeneralPath.WIND_EVEN_ODD); p.moveTo(50.0f, 50.0f); // Start point for path p.lineTo(150.0f, 50.0f); // Line from 50,50 to 150,50 p.lineTo(150.0f, 250.0f); // Line from 150,50 to 150,250 p.closePath(); // Line from 150,250 back to start
The first line segment starts at the current position set by the moveTo() call. Each subsequent segment begins at the endpoint of the previous segment. The closePath() call joins the latest endpoint to the point set by the previous moveTo() – which in this case is the beginning of the path. The process is much the same using quadTo() or curveTo() calls and of course you can intermix them in any sequence you like.
Once you have created a path for a GeneralPath object by calling its methods to add segments to the path, you can remove them all by calling its reset() method. This empties the path.
The GeneralPath class implements the Shape interface, so a Graphics2D object knows how to draw a path. You just pass a reference to the draw() method for the graphics context. To draw the path, p, that we defined above in the graphics context g2D, you would write:
g2D.draw(p); // Draw path p
Let's try an example.
You won't usually want to construct a GeneralPath object as we did above. You will probably want to create a particular shape, a triangle or a star say, and then draw it at various points on a component. You might think you can do this by subclassing GeneralPath, but unfortunately GeneralPath is declared as final so subclassing is not allowed. However, you can always add a GeneralPath object as a member of your class. Let's draw some stars using our own Star class. We will use a GeneralPath object to create the star shown in the diagram.
Here's the code for a class defining the star:
import java.awt.geom.Point2D; import java.awt.geom.GeneralPath; import java.awt.Shape; class Star { public Star(float x, float y) { start = new Point2D.Float(x, y); // store start point createStar(); } // Create the path from start void createStar() { Point2D.Float point = start; p = new GeneralPath(GeneralPath.WIND_NON_ZERO); p.moveTo(point.x, point.y); p.lineTo(point.x + 20.0f, point.y – 5.0f); // Line from start to A point = (Point2D.Float)p.getCurrentPoint(); p.lineTo(point.x + 5.0f, point.y – 20.0f); // Line from A to B point = (Point2D.Float)p.getCurrentPoint(); p.lineTo(point.x + 5.0f, point.y + 20.0f); // Line from B to C point = (Point2D.Float)p.getCurrentPoint(); p.lineTo(point.x + 20.0f, point.y + 5.0f); // Line from C to D point = (Point2D.Float)p.getCurrentPoint(); p.lineTo(point.x – 20.0f, point.y + 5.0f); // Line from D to E point = (Point2D.Float)p.getCurrentPoint(); p.lineTo(point.x – 5.0f, point.y + 20.0f); // Line from E to F point = (Point2D.Float)p.getCurrentPoint(); p.lineTo(point.x – 5.0f, point.y – 20.0f); // Line from F to g p.closePath(); // Line from G to start } Shape atLocation(float x, float y) { start.setLocation(x, y); // Store new start p.reset(); // Erase current path createStar(); // create new path return p; // Return the path } // Make the path available Shape getShape() { return p; } private Point2D.Float start; // Start point for star private GeneralPath p; // Star path }
We can draw stars on an applet:
import javax.swing.JApplet; import javax.swing.JComponent; import java.awt.Graphics; import java.awt.Graphics2D; public class StarApplet extends JApplet { // Initialize the applet public void init() { getContentPane().add(pane); // BorderLayout.CENTER is default position } // Class defining a pane on which to draw class StarPane extends JComponent { public void paint(Graphics g) { Graphics2D g2D = (Graphics2D)g; Star star = new Star(0,0); // Create a star float delta = 60f; // Increment between stars float starty = 0f; // Starting y position // Draw 3 rows of 4 stars for(int yCount = 0; yCount < 3; yCount++) { starty += delta; // Increment row position float startx = 0f; // Start x position in a row // Draw a row of 4 stars for(int xCount = 0; xCount<4; xCount++) { g2D.draw(star.atLocation(startx += delta, starty)); } } } } StarPane pane = new StarPane(); // Pane containing stars }
The HTML file for this applet could contain:
This is large enough to accommodate our stars. If you compile and run the applet, you should see the AppletViewer window shown here.
How It Works
The Star class has a GeneralPath object, p, as a member. The constructor sets the coordinates of the start point from the arguments, and calls the createStar() method that creates the path for the star. The first line is drawn relative to the start point that is set by the call to moveTo() for p. For each subsequent line, we retrieve the current position by calling getCurrentPoint() for p and drawing the line relative to that. The last line to complete the star is drawn by calling closePath().
We always need a Shape reference to draw a Star object, so we have included a getShape() method in the class that simply returns a reference to the current GeneralPath object as type Shape. The atLocation() method recreates the path at the new position specified by the arguments and returns a reference to it.
The StarApplet class draws stars on a component defined by the inner class StarPane. We draw the stars using the paint() method for the StarPane object, which is a member of the StarApplet class. Each star is drawn in the nested loop with the position specified by (x, y). The y coordinate defines the vertical position of a row, so this is incremented by delta on each iteration of the outer loop. The coordinate x is the position of a star within a row so this is incremented by delta on each iteration of the inner loop. | http://www.yaldex.com/java_tutorial/0871986571.htm | CC-MAIN-2018-05 | en | refinedweb |
Bad MonkeyMember
Content count400
Joined
Last visited
Community Reputation145 Neutral
About Bad Monkey
- RankMember
Monster Classes
Bad Monkey replied to Aviscowboy1's topic in General and Gameplay ProgrammingTwo words: data-driven. Not to pick on the personal beliefs of anyone here, but class heirarchies hundreds of levels deep that differentiate SmallGoblinWithTwoWarts from LargeGoblinWithThreeWarts make me want to cry. While the idea of "Animal->Mammal-Dog" style hierarchies works well in the land of academia when trying to demonstrate principles such as inheritance and polymorphism, they tend to be quite painful in real life if you do not take great care in making them flexible, and even then there are just some things that you will find you cannot do (and often not until its very late). As an off-the-cuff example (and prodding poor Joe's nice informative post... no offence meant Joe [smile]), I think a "monster" class is NOT a good candidate for a deep inheritance hierarchy... I just don't see the need to classify them more than one or two levels deep I would tend to stop sub-classifying at a higher level than Joe's example, typically making all "monsters" on an equal level in the tree, and trying to keep the number of abstract sub-classifying levels to a minimum... and focus on making everything more driven by parameters (provided by serialized instances in a file, or passed by calling code, such as an AI agent governing the instance of the monster). Please bear in mind this is totally of the top of my head, and I may recant at any time [smile]... and apologies, but I will be using C#-style syntax and including method bodies in the class definitions... so sue me public abstract class GameObject { // some root class which is often useful for providing functionality such as logging, diagnostics, persistence entry points, etc across many different kinds of game objects... } // class that represents any active unit in the game, whether enemy or friendly npc public abstract class Unit : GameObject { private Avatar avatar; // reference to an object which contains and manages the visual representation of the unit... i.e. model and animation info, textures, etc // any other common attributes that apply to most units private Vector3D position; private Vector3D orientation; private Vector3D velocity; private long hitPoints; private long attackStrength; // etc... // set of virtual methods that are intended to be overridden by your concrete implementations public virtual void Move(...){} public virtual void Attack(...){} // etc... } public class Goblin : Unit { public override void Move(...) { // set the properties governing movement of this goblin according to any special rules you need... } public override void Attack(...) { // select and execute an attack according to current state of this goblin and/or any parameters passed in } // etc... } Again, I apologise that this is such a rudimentary and incomplete example, and sorry if this makes my point no clearer... Anyway, in short: inheritance is way overrated, and should be used judiciously, and data-driven designs (using composition) may very well make extending and tweaking your "monster" catalogue much easier down the track (especially if you can make it possible without having to recompile code Some other ways to think about: - parameterising every possible thing you can think of to do with monsters (i.e. giving them a metric buttload of fields to describe their state and control their behaviour), and have the Unit (or Monster, if you prefer) class methods try and perform actions according to the current values of all these properties. This method requires a LOT of fore-thought as to what your monsters will and will not be able to do, but really opens the game up for tweaking by non-programmers, as all that is required is fiddling of data values rather than deriving from a base class. - write your Unit class to load and run scripts, meaning that again you wouldn't have to derive new classes and compile code in order to come up with new monster types and behaviours. This method would allow for some unseen requirements down the track, but some serious thought has to go into defining the code interface to allow this level of dynamicism. Hope that was some food for thought anyways, and not horribly confusing [smile]... Adam
- Quote:Have you ever had a cat, or know someone who had one? I mean, a baby cat that only know to "defecate", eat, and wonder around. When this cat grows up, isn't able to play cat-chasing-games, or hunt birds if you let it in your backyard? Where did that cat learn to "behave" like a cat? From TV? ;) Behavior comes imprinted into its DNA, commonly known as instincts. That's kind of like saying that humans are born with the innate knowledge of how to climb trees and catch frogs [smile]... play is all about trying things out to see the consequences and glean information from the experience. The tendency towards different kinds of play for different species (which may evolve into different survival techniques) can be explained by th differences in how the organism is composed (e.g. lack of hands means climbing things or throwing-catching games are less appealing to a cat). Take the concept that cats and dogs don't get along... exmaples of that can be seen everywhere... so explain why my cat and my dog play together, and my cat cuddles up with dog to sleep... where's the "imprinted behaviour" reflecting millenia of animosity between the species? I propose that this animosity is not really imprinted behaviour, but information that was learned by each cat and dog as they came across each other and came into conflict over resources such as food and territory/shelter. Remove that conflict and fear, and they get along fine.
- Quote:So, HTF do they evolve? Surely it is obvious that the worker ants fill out TPS reports detailing their daily encounters and experiences, which are then collected and evaulated by the higher-ups in the colony... didn't you get the memo? [smile] I tend to think that most "knowledge", other than really basic biological functions, are learned through experience... the only shortcut around this process is to have the knowledge passed on by communication with those who have the experience, be that spoken, written, shown by example, chemical exchange, morse code via antennae, or whatever. A side note: You could probably even say that stuff like breathing (yes, breathing) is a learned behaviour... when you are born, your body stops receiving oxygen from the placenta, you begin to suffocate and cells begin to die at an accelerated rate (a Bad Thing TM), so your body tries things to rectify the situation... how many babies "evacuate" when they are born... maybe because the system consisting of their body and "mind" is going nuts trying to adapt to the new environment and twitching muscles left, right, and centre. Its something you have to learn over a *very* short space of time, but it can still be regarded as learned. Anyway, how feasible is it that ants pass on "knowledge" when they bump into other ants? ("hey, there's a giant lump of sugar back there, and as we know from Uncle Fred, sugar is good", or "don't go this way, coz I just saw Jim and Bob get drowned... I hink large amounts of running water are bad... pass it on!") It doesn't sound right to me that an ant pops out of an egg, and goes "damn, I think I'm off to harvest some grain for the good of our illustrious colony!"... surely there is some form of "basic training" where other ants clue it in on what the hell is going on as it wanders around taking its first steps. Then, as it meanders around the world looking for this "food" thing that someone mentioned, it experiences stuff and finds solutions by experimenting (based on what it knows already), and gets "told" other stuff by other ants who have had their own experiences. Relating to the lion example, and the concept of "play"... isn't this just experimentation based on stuff the cub has learned so far? (i.e. gravity makes me fall, biting hurts things, clawing hurts things, pouncing can get me over that last distance quicker and catch things by suprise, sneaking helps me get closer without things detecting me, etc) ... I am pretty sure they are not born with much more than the ability to eat, sh!t, and move around, so therefore complex behaviours are not magically passed on by genetics... they must be learned, and the learning begins the moment they are alive and conscious... a rampant sampling of data from the world around them to try and construct a set of rules by which they can evaluate their environment and themselves and predict their future. Stuff like this just messes my head up when you start reducing it to really basic behaviours [smile]... but it just feels wrong to draw some arbitrary line and say "everything simpler than x is just passed on somehow"... although I guess that must be the case at some cellular level, I feel genetic information has probably got more to do with setting up an organism so that it is biologically efficient and has the capacity to learn well...
Language for 9-year old?
Bad Monkey replied to TechnoCore's topic in General and Gameplay ProgrammingSome of you are spruiking about C++, saying "its not that hard", and "don't start with some piss-weak language"... but are possibly failing to take into account that you are speaking with the benefit of years of programming familiarity... we are talking about a totally green little kid here. The peeps pimping something along the lines of LOGO, BASIC, or other languages with less hostile syntax rules are much closer to understanding the nature of what fits this situation best... you don't want to scare the poor little bugger off with what looks to be an alien mish-mash of symbols... at this stage, you want to teach the basic principals or data types, variables, and flow control. If the kid laps it up and wants to wade in deeper, fine... but small steps to build confidence ;) Quote: C macros - C++ templated functions C structs - C++ classes C inheritance - C++ inheritance (this I'm not so sure, but I managed to inherit a struct from struc, so most likely it works). C++ is just safer, with typesafety, rather than what macros use. And namespaces dont really make a difference, if you hate them you can just do using namespace foospace; Ferchrissakes!! Most 9-year-olds couldn't spell half of those features, let alone understand and use them. Quote: Actually, code is read a whole lot more than it is written. Readability beats "writability" any day. Amen, brother Arild :) All you whipper-snappers will appreciate this a bit more when you head out into the workforce and find yourself maintaining other peoples' code. Yes, readability is not solely dictated by syntax, but when you are only just learning to write software, tricky or terse syntax can be a hindrance when trying to grasp the true lessons and concepts behind the code. And I repeat... this is a 9 year-old... to my knowledge, "Spot Learns To Hack In C++" has not been released yet, and probably for a good reason ;)
[.net] Can I make a sound without a window?
Bad Monkey replied to NexusOne's topic in General and Gameplay ProgrammingUm... I believe the Application.Run method blocks until Exit or ExitThread is called... so you are actually not getting past that line I would expect. Just totally shooting from the hip, but hows about something like this: Control c = new Control(); c.CreateControl(); device.SetCooperativeLevel(c, CooperativeLevel.Normal); sound.Play(0, BufferPlayFlags.Default); while(true) { Application.DoEvents(); // other game-loop stuff here... }
Syncronizing the keyboard over the network
Bad Monkey replied to LonelyStar's topic in Networking and MultiplayerQuote:I do not understand why it would help to abstract the information send to the server more. I mean, if a key (a relevant key) has changed, the server has all the information he needs for in Example accelerating the player. What difference would it make if I tell the server "Hey, I am accerlarating" instead of "Hey, the key, causing me to accelerate, has been pressed"? Well one thing is that you lose the ability to have the player remap their inputs (keys, mouse, gamepad, steering-wheel, etc) to different in-game actions. All I meant by abstraction was the decoupling of keypresses (and other input from control devices) from the actions/commands that they represent... that way multiple inputs may actually be aggregated into one "command" (or a change in one client state property) which may be less data to send to the server, and less processing/deciphering once it gets there as well. I guess you can look at it as you are sending key-state information, but keys themselves don't have any meaning... the actions that they trigger is what the server needs to know, so why not make the interpretation at the client end and give the server less to think about [smile] hplus0603: not sure if the first bit of your reply was directed to me, but I certainly don't advocate sending all the client's state in an update message... just the stuff that has changed since the last acknowledged message to the server.
[.net] Run code when command-line app is closed?
Bad Monkey replied to adammil2000's topic in General and Gameplay ProgrammingSorry, should have been a bit more specific, but as the AP has indicated, these events are not going to catch circumstances where the process is being aggressively killed (such as hitting "Stop" while debugging, or hitting "End Process" in Task Manager). They should, however, work in cases where the app/service is being closed/stopped in a more civil manner. I don't think the original poster is trying to keep the app alive, so a guardian process is not the solution needed. Rather he wants to ensure cleanup code is always run and resources always released even if the user fails to invoke the correct shutdown commands before quitting (which would be quite a PITA from outside the actual process, if it is even possible). BTW, another event that may be important for you to handle is the UnhandledException event, so that you can at least cover yourself for failures internal to the app.
[.net] Run code when command-line app is closed?
Bad Monkey replied to adammil2000's topic in General and Gameplay ProgrammingTry the DomainUnload or ProcessExit events of the AppDomain that your program is executing within. Link
Syncronizing the keyboard over the network
Bad Monkey replied to LonelyStar's topic in Networking and MultiplayerUm... yeah... based on the information you have provided (i.e. no really good reason) I don't think that is what you really want to be doing. I think you will find it creates a much nicer experience for the players, and reduces bandwidth usage quite a bit, if you send more abstract state information concerning the player (e.g. multiple key presses may lead to a change in the players velocity... it is this property rather than the keypresses that the server is interested in), and only send messages when these properties change. See this recent thread for a bit more info. Hope that helps [smile]
[.net] C# vs Vb.NET
Bad Monkey replied to Kal_Torak's topic in General and Gameplay ProgrammingPardon me if my tone seems offensive (that is not my intent), but how the hell did you arrive at the idea VB.NET is any less suitable than C#? You state it as if you are resigned to the fact that it can't be done (or that someone with a fair degree of authority told you as much, but didn't deem necessary to explain)... Of course you can make perfectly fine and dandy games with VB.NET... it has all the language constructs and class libraries you could ever need to make a game. It is a different syntax on top of the same framework and common language as C#, so ignore the BS, get stuck in, and code that sucka in VB.NET if thats what you know! :) I'm sure someone else will be along shortly to point you at some good examples. As for myself, I prefer the minimalistic syntax of languages like C/C++/C#... hey, its less typing than VB for the most part ;)
[.net] How to disable Automatic Garbage Collection in .NET 1.1 SP1
Bad Monkey replied to infra001's topic in General and Gameplay ProgrammingI have to agree with where frob is coming from here... there has to be a better way to address this. I would suggest looking over the way your app is designed and look for opportunities where frequently created transient objects can be pooled (such as "action", "transaction", or "message" type objects), so as to reduce overhead for allocation and finalization/collection. By drawing these frequently used objects from a pool (and just adding another if all objects in the pool are currently in use), and returning/releasing them for re-use, I'll bet that you can maintain a very steady level of memory usage (once the pool has ramped up to size on startup), and minimize the amount of memory that the garbage collector has to reclaim. (note: This approach should also cut down on fragmentation if you pre-allocate your pools at startup, and possibly save on free-space compacting time as well) I am happy to be corrected if I am wrong, but I recall reading an interview or a blog or a .plan or something waaaay back where Tim Sweeney of Epic suggesting this approach when using Managed DirectX (and I wouldn't be suprised if it happens behind the scenes in many instances) to stave off hiccups caused by the garbage collector, so the idea must have some merit :) The other suggestion I have is a little less elegant (and makes me cringe a bit), but is there the possibility that throwing hardware at the problem (more processors or faster processors) would reduce it to an acceptable level? Better hardware is often cheaper than reworking an entire software system, even though its a solution that turns the stomach of a developer who takes any pride in what they do :)
Decoupling Client Input from the Render Loop
Bad Monkey replied to PhilW's topic in Networking and MultiplayerFor a start, I think you may need to abstract your client input away from the messages you are sending to the server (if indeed you are sending individual key presses and mouse clicks/movements). And I don't think time-based updates are quite what you want either... I think what would be advisable is an event-driven approach. The main point is that you really only want to send a message to the server when something changes. What I would suggest is polling client input each time around around the game loop, and using this to calculate changes in key properties such as velocity (i.e. speed and direction of the client's vessel), weapon fire state, etc. These changes are the events that the server really cares about (not actual keystate and the like). Stuff information about these events into a queue. Then, on the networking side of things, just dequeue these events and send them to the server until the queue is empty. Process incoming messages from the server as usual. If you don't receive a ack/response before another event occurs, stick a number of them together (in order) in order to send a more efficient (read: closer to MTU) packet. I suspect a number of action games employ an approach something along these lines (correct me if I am way off anyone). Obviously, you will need to do some sanity checking on the server (so that hacking bastards don't go from 0 to a bazillion metres per second in the blink of an eye). You have to be mindful of what you send... position is generally not a good idea (let the server run the simulation and notify the client where it has moved to). I apologise if I misunderstood how you are currently doing things, but hopefully you get the gist of what I am saying anyway :)
[.net] Seperation Of Game Elements
Bad Monkey replied to MonkeyChuff's topic in General and Gameplay ProgrammingI agree with Anonymous here... custom attributes rock the cazbah when it comes to this type of plugin architecture and dynamic discovery of features. And just to add a little more, although probing sub-directories is nice, it may be nicer to allow a manifest/config to specify directories to probe (in case they are not below the directory of the laucher app) as well as allowing for a specific assembly to be nominated (to save probing and loading a crapload of assemblies that you may not even need for the current game). [edit] - seems the grammar fairy went on holidays
XML Problem
Bad Monkey replied to Kryptus's topic in General and Gameplay ProgrammingYour XPath query can be changed to simplify this to a single loop, and casting each "game" element as an XmlElement in the foreach loop allows you to call the GetAttribute method to retrieve "name"... i.e. XmlDocument xmldoc = new XmlDocument(); xmldoc.Load("C:\\Games.xml"); XmlNodeList games = xmldoc.SelectNodes("//supported/games/game"); foreach (XmlElement game in games) { MessageBox.Show( game.GetAttribute("name") ); } Not necessarily the most defensive way to code that, but does it achieve what you are trying to do? The reason your code does not work is because the indexer (ie [] operator) for XmlNode only applies to child nodes that are elements, of which your "game" element has none.
[.net] Why can't I say if(struct1 == struct2) in C#?
Bad Monkey replied to NexusOne's topic in General and Gameplay ProgrammingQuote:Original post by Holy Fuzz This is true. What I like to do is provide a custom, overloaded MyType.Equals(MyType obj) method, which is called by both the overriden Equals(object obj) method and the == operator. That way, there aren't any unboxing issues, and my personal preference is to not put any "meat" inside of operator methods, but instead to merely provide them as syntactic shortcuts to the methods that do the real work. Good call... this is a wise way to do it in order to make your structs/classes CLS-compliant (as operator overloads are only available in C# for .NET 1.x, while both C# and VB.NET get the feature in 2.0), just in case some twisted individual wants to use your code from COBOL.NET or something equally freaky and wrong ;) | https://www.gamedev.net/profile/3061-bad-monkey/?tab=topics | CC-MAIN-2018-05 | en | refinedweb |
Here is the documentation of the BSMModel class. More...
#include <BSMModel.h>
Here is the documentation of the BSMModel class.
Definition at line 21 of file BSMModel.h.
Create a DecayMode object in the repository.
Create a DecayMode object in the repository.
Initialize this object after the setup phase before saving an EventGenerator to disk.
Reimplemented from Herwig::StandardModel.
Reimplemented in Herwig::SusyBase, Herwig::ZprimeModel, Herwig::UEDBase, Herwig::LeptoquarkModel, Herwig::SextetModel, Herwig::LHModel, Herwig::TTbAModel, Herwig::ADDModel, Herwig::RSModel, Herwig::RPV, and Herwig::HiggsPair.
Referenced by Herwig::LHTPModel::kappaLe.
Referenced by Herwig::LHTPModel::kappaLepton().
Function used to read in object persistently.
Function used to write out object persistently.
Read decaymodes from LHA file. | http://herwig.hepforge.org/doxygen/classHerwig_1_1BSMModel.html | CC-MAIN-2018-05 | en | refinedweb |
Our company is going to start exchanging XML documents and I'm trying to understand how to correctly use XML data types in SQL Server 2005.
There is a published xsd which I think I'm supposed to store in a Schema Collection so that Sql Server can use it to validate typed XML variables and columns.
There also are some examples XML documents available for testing.
Because the xsd and the samples are relatively huge (1-2 megabytes each), I have distilled both down to the minimum necessary fields both for my own sanity while testing and for use in examples to forums such as this.
I believe I am down to my last problem which centers on understanding namespaces.
The actual XML documents do not and will not have any namespace parameters within them.
But I am only able to succesfully validate my testing samples when I include an xmlns parameter.
What am I doing wrong?
How can I get a sample without an xmlns parameter to successfully validate?
Here is what I have:
IF EXISTS (SELECT * FROM sys.xml_schema_collections WHERE [name] = 'MyPrivateSchemaCollection')
DROP XML SCHEMA COLLECTION dbo.MyPrivateSchemaCollection
GO
DECLARE @testSchema XML
SET @testSchema =
'<?xml version="1.0" encoding="UTF-8"?>
&
View Complete Post
Is there any way to capture all attributes/values (from the SOAP message) into a colleciton of my type? I tried to look at the DataContractSerializer to understand how I could initiate a new instance of my DataContract and then populate all attributes into
a collection, but I can seem to find there to look.
I have a DataContract Account with properties Name (string) and Number (string), which I would like to capture into a non-datamember (collection) upon receipt from the client side. Should I look to the IXMLSerializer and move to XmlSerializerFormat
- or is it possible anyhow ? I tried implement the
'<OnDeserializing()>
'Friend Sub OnDeseriali
Is it safe to change the replicate schema property of a publication mid-stream? We have a server that rarely gets schema changes to the published table, but once a year or so the vendor might change the underlying table(s) in the publication. This is usually
a field size change and/or adding a new field.
I don't want to take any chances on disrupting the current merge publication since hundreds of users are already syncing their subscriptions daily. And the sqlce database file is very large (1gb) so it's not practical to have them all reinitialize their
subscriptions.
The application using the replica databases will not need the majority of these changes, but at least one of the fields that is being altered in size is already part of the publication. Am I right in assuming that this will 'break' the subscriptions anyway?,
I | http://www.dotnetspark.com/links/41820-problems-with-schema-collection-and-namespaces.aspx | CC-MAIN-2018-05 | en | refinedweb |
Solution Folders are a small but very useful new feature in Visual Studio 2005. If you haven't stumbled across them, they let you group the projects in your solution into a logical hierarchy, rather than have them all at the same level. They are particularly useful for solutions with a lot of projects - and Enterprise Library does have quite a few projects.
We've been trying out a couple of alternatives on how to use solution folders. Here are the options we're considering:
The two options are pretty similar, with the difference being on how the unit tests are organized (note that unlike EntLib 1.x, all unit tests are now in their own projects rather than in namespaces under the existing projects).
Option A has all of the unit tests grouped together under their own root solution folder. This means you can easily close this folder up if you don't want to see the unit tests. It also means that the unit tests are not really grouped anywhere near the code that they demonstrate - so it may make it harder to switch between the two.
Option B groups the unit tests together with the main projects. So they are harder to ignore (which may be a good thing or a bad thing, depending on your perspective), but they are always close to the code that they are testing.
Which would you prefer? Or if you have any other suggestions, that's fine too!
I’m in favor of option B (unit tests grouped with main projects). Option A is like saying "Look this is my family all right here — except for my mother-in-law whom we locked in the closet."
Please keep the unit tests with each individual block. Tests should be close to the code that is tested. Why? Because there is no excuse for a developer to OPEN the code if he is not going to CHANGE the code. And if a developer changes the code, he or she should add unit tests to test those changes. They should not have to dig unit tests out of some distant location.
Option B 🙂
Option B, of course.
But another question : in VS 2003 you have a special Project – that was similar with the project group in VB6 – that can contain one or more project.
I do not discovered this in VS2005 beta. is that feature replaced with "folders" ?
Im also in favor of option B and I totally agree with NickMalik. Tests should be close to the code being tested. To achieve that, there is also another possibility though. We did this also in VS2003: No special folder, but just use the name of the project that is being tested postfixed with ‘.UnitTest’. That way, the test is as close as possible to the code, without being in the same project. Then we use a further one-to-one mapping to have a test class for each class that is being tested
I like Option B. Even if I am not using Unit tests I can minimize the folder. But logically I still want it a subfolder of the unit tests.
Does arranging the unit tests this way make any difference to running tests inside visual studio? Can you run all these tests inside seperate projects? Or does option A easier to run all unit tests?
Option B. It’s way cleaner than option A.
At first blush I thought Option A but the more I thought about it I realized that Option B is the best. Option B.
I’m not a risk taker. 😉
Going with B.
Thanks everyone! B it is 🙂
Excellent resolution. B is the best option IMHO. I can’t wait to see the 2.0 blocks, the last Library had some of the most beautiful code and architecture I’ve had the privelege of working with.
Option B, _Especially_ if you can set up some post build events to trigger the tests. Then you can right-click and build the folder for each block and the unit tests will be run auto-magically. | https://blogs.msdn.microsoft.com/tomholl/2005/10/28/solution-folders-in-enterprise-library/ | CC-MAIN-2018-05 | en | refinedweb |
On Tue, Jan 09, 2007 at 09:49:35AM +0000, Christoph Hellwig wrote:> On Mon, Jan 08, 2007 at 06:25:16PM -0500, Josef Sipek wrote:> > > There's no such problem with bind mounts. It's surprising to see such a> > > restriction with union mounts.> > Bind mounts are a purely VFS level construct. Unionfs is, as the name> > implies, a filesystem. Last year at OLS, it seemed that a lot of people> > agreed that unioning is neither purely a fs construct, nor purely a vfs> > construct.> > > > I'm using Unionfs (and ecryptfs) as guinea pigs to make linux fs stacking> > friendly - a topic to be discussed at LSF in about a month.> > And unionfs is the wrong thing do use for this. Unioning is a complex> namespace operation and needs to be implemented in the VFS or at least> needs a lot of help from the VFS. Getting namespace cache coherency> and especially locking right is imposisble with out that.What I meant was that I use them as an example for a linear and fanoutstacking examples. While unioning itself is a complex operation, the generalidea of one set of vfs objects (dentry, inode, etc.) pointing to severallower ones is very generic and applies to all fan-out stackable fs.Josef "Jeff" Sipek.-- Linux, n.: Generous programmers from around the world all join forces to help you shoot yourself in the foot for free. -To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | https://lkml.org/lkml/2007/1/9/102 | CC-MAIN-2015-48 | en | refinedweb |
pyrtlsdr 0.2.0
A Python wrapper for librtlsdr (a driver for Realtek RTL2832U based SDR's)# Description. It wraps all the
functions in the [librtlsdr library]() (including asynchronous read support),
and also provides a more Pythonic API.
# Usage
pyrtlsdr can be installed by downloading the source files and running `python setup.py install`, or using [pip]() and
`pip install pyrtlsdr`.
All functions in librtlsdr are accessible via librtlsdr.py and a Pythonic interface is available in rtlsdr.py (recommended).
Some documentation can be found in docstrings in the latter file.
## Examples
Simple way to read and print some samples:
```python
from rtlsdr import RtlSdr
sdr = RtlSdr()
# configure device
sdr.sample_rate = 2.048e6 # Hz
sdr.center_freq = 70e6 # Hz
sdr.freq_correction = 60 # PPM
sdr.gain = 'auto'
print(sdr.read_samples(512))
```
Plotting the PSD with matplotlib:
```python
from pylab import *
from rtlsdr import *
sdr = RtlSdr()
# configure device
sdr.sample_rate = 2.4e6
sdr.center_freq = 95e6
sdr.gain = 4
samples = sdr.read_samples(256*1024)
# use matplotlib to estimate and plot the PSD
psd(samples, NFFT=1024, Fs=sdr.sample_rate/1e6, Fc=sdr.center_freq/1e6)
xlabel('Frequency (MHz)')
ylabel('Relative power (dB)')
show()
```
Resulting plot [here]()
See the files 'demo_waterfall.py' and 'test.py' for more examples.
# Dependencies
* Windows/Linux/OSX
* Python 2.7.x/3.3+
* librtlsdr (builds dated after 5/5/12)
* **Optional**: NumPy (wraps samples in a more convenient form)
matplotlib is also useful for plotting data. The librtlsdr binaries (rtlsdr.dll in Windows and librtlsdr.so in Linux)
should be in the pyrtlsdr directory, or a system path. Note that these binaries may have additional dependencies.
# Todo
There are a few remaining functions in librtlsdr that haven't been wrapped yet. It's a simple process if there's an additional
function you need to add support for, and please send a pull request if you'd like to share your changes.
# Troubleshooting
* Some operating systems (Linux, OS X) seem to result in libusb buffer issues when performing small reads. Try reading 1024
(or higher powers of two) samples at a time if you have problems.
* If you're having librtlsdr import errors in Windows, make sure all the DLL files are in your system path, or the same folder
as this README file. Also make sure you have all of *their* dependencies (e.g. the Visual Studio runtime files). If rtl_sdr.exe
works, then you should be okay.
* In Windows, you can't mix the 64 bit version of Python with 32 bit builds of librtlsdr.
# License
All of the code contained here is licensed by the GNU General Public License v3.
# Credit
Credit to dbasden for his earlier wrapper [python-librtlsdr]() and all the
contributers on GitHub.
- Downloads (All Versions):
- 25 downloads in the last day
- 145 downloads in the last week
- 642 downloads in the last month
- Author: roger
- Download URL:
- Keywords: radio librtlsdr rtlsdr sdr
- License: GPLv3
- Categories
- Package Index Owner: roger
- DOAP record: pyrtlsdr-0.2.0.xml | https://pypi.python.org/pypi/pyrtlsdr/0.2.0 | CC-MAIN-2015-48 | en | refinedweb |
Ext.ux.using
As a C# developer, I'm accustomed to the using directive that brings types from other namespaces to the current scope, so I came up with a handy function for my Ext projects to emulate "using" at some extent:
Code:
Ext.ux.using( Ext.data, Ext.form, Ext.grid,... });
The code behind the using function is very simple:
Code:
Ext.ux.using = function() { var ns = {}; var upper = arguments.length - 1; for (var i = 0; i < upper; i++) { var source = arguments[i]; Ext.apply(ns, source); } arguments[upper](ns); };
Last edited by rstuven; 30 Oct 2007 at 5:00 PM. Reason: Grammar
I'm having difficulties following your code.
So basically.. what you are doing is to apply procedures and properties from different namespaces to one namespace, thus merging them.
Why would that be any good? Can you give a practical example?
Yes, it's as simple as you put it: merging different namespaces to one.
The benefits are the same of the C# using directive (see link above) or import in Java and Python, or similar constructs in other languages: less typing in a structured way.
A pratical example? See how is used the xg variable in the official Grid3 example. That's the intent.
I'm sorry to say that namespaces do have their purpose, while you are taking away that purpose. Just my opinion - open to debate on a very practical and useful example.
xg in the Grid3 example is only a shortcut to Ext.grid, which thus has no connection to your idea.
@andrei.neculau
I cannot agree with you there. He's taking nothing away from the purpose of Namespaces AND what he is specifically doing is giving an easy way to create shortcuts to those namespaces.
I was open to debate on a practical example, but instead I will explain my view on what it has already been written.
Code:
Ext.ux.using( Ext.data, Ext.form, Ext.grid, Ext.ux.form,... });
Secondly, that code is only a rewrite of the following:
Code:
(function(){ // Some code... var g = new Ext.grid.GridPanel(); var r = new Ext.data.JsonReader(); var t = new Ext.form.TextField(); var f = new Ext.ux.grid.filter.GridFilter(); var a = new Ext.long.path.to.SomeVeryLongClassName(); // More code... })();
Code:
// code... var EG = Ext.grid; var ED = Ext.data; var EF = Ext.form; var uEF = Ext.ux.form; var uEG = Ext.ux.grid; var uAlias = Ext.long.path.to; // code... (function(){ // Some code... var g = new EG.GridPanel(); var r = new ED.JsonReader(); var t = new EF.TextField(); var f = new uEG.filter.GridFilter(); var a = new uAlias.SomeVeryLongClassName(); // More code... })();
andrei,
Even though you don't see it, his extention is doing the exact same thing as your shortcut example. | https://www.sencha.com/forum/showthread.php?16717-Ext.ux.using | CC-MAIN-2015-48 | en | refinedweb |
Guicy: a Groovy Guice?
I recently came across Bob Lee's brand new IoC/DI framework: Guice. I'm usually using Spring for that purpose, and also because it goes much farther than just IoC/DI, but I thought I'd give Guice a try, especially because I wanted to play with Groovy's support for annotations. So I downloaded Guice, and read the nice getting starteddocumentation I also took a snapshot of Groovy 1.1 that supports Java 5 annotations. With guice-1.0.jar and aopalliance.jar on my classpath, and with the latest Groovy snapshot distribution properly installed, I was ready to go!
So, how do we start? Well, first of all, you must have some service contract that you'd like to depend on and to inject in some client code. Nothing really fancy here, I just shamelessly took inspiration from the documentation:
Now, I need a concrete implementation of this service:Now, I need a concrete implementation of this service:
interface Service { void go() }
So far so good, now, we'll need some client code that needs a service to be injected. This is where you're going to see some specific juicy annotation coming into play.So far so good, now, we'll need some client code that needs a service to be injected. This is where you're going to see some specific juicy annotation coming into play.
class ServiceImpl implements Service { void go() { println "Okay, I'm going somewhere" } }
Here, I'm creating a client class where I'm using constructor injection. The sole thing we have to do here is just use theHere, I'm creating a client class where I'm using constructor injection. The sole thing we have to do here is just use the
class ClientWithCtor { private final Service service @Inject Client(Service service) { this.service = service } void go() { service.go() } }
@Injectannotation. But of course, so far, the wiring isn't specified anywhere, and we have to do it now. Guice has the concept of Modules which contain programmatic code to wire classes together.
We are binding theWe are binding the
class MyModule implements Module { void configure(Binder binder) { binder.bind(Service) .to(ServiceImpl) .in(Scopes.SINGLETON) } }
Serviceinterface to the
ServiceImplimplementation. And we also mention the scope of the injection: we want to have one single implementation of that service available. Instead of specifying the scope in the module, you could also use a
@Singletonannotation on the
ServiceImplclass.
Now that everything is in place, we can create a Guice's injector and retrieve and call a properly wired client with the following code:
def injector = Guice.createInjector(new MyModule()) def clientWithCtor = injector.getInstance(ClientWithCtor) clientWithCtor.go()
Instead of the constructor-based approach, I prefer using a setter-based approch. And since Groovy creates getters and setters automagically when you define a property, the code is a bit shorter:
And the code for injecting is still the same:And the code for injecting is still the same:
class ClientWithSetter { @Inject Service service void go() { service.go() } }
def clientWithSetter = injector.getInstance(ClientWithSetter) clientWithSetter.go()
I'm not sure I'd use Guice for a customer project anytime soon, but for small projects where I want a xml-free DI framework, that might do! However, I might be tempted to use Grails' Spring bean builderinstead, since it's a pretty cool way to avoid the usual XML-hell when working with Spring. Also, in conclusion, it seems that Groovy's new support for annotations work quite well, as demonstrated also by Romain while integrating Groovy and JPA. I'm sure this will propel Groovy as the de facto enterprise scripting solution leveraging the wealth of frameworks and libraries using annotations. | http://glaforge.appspot.com/article/guicy-a-groovy-guice | CC-MAIN-2015-48 | en | refinedweb |
[
]
Dyre Tjeldvoll commented on DERBY-6340:
---------------------------------------
Hi Rick, thank you for your comments.
I agree that the std does not mention comparison of distinct types to their STs, so that a
cast is required in this case. I have updated the fs accordingly. I have also added a description
of the the default identifiers for the casting functions.
When it comes to the namespace of distinct UDT, I wondered if you could clear up a few things:
My understanding of Part 2, section 11.51 (user-defined type definition), general rule 2.b,
is that the casting functions are created in the explicit or implied schema of the UDT. "CREATE
FUNCTION SN.FNUDT ( ... where: SN is the explicit or implicit <schema name> of UDTN".
Is DERBY-5901 the reason why the name must not conflict with a builtin function or aggregate?
A couple of observations:
* Without the ability to specify <cast to source> you can create at most one alias for
each builtin type in the same schema?
* Due to DERBY-5901, we must either always specify <cast to source> or let the default
identifier be non-standard as the default names mandated by the standard are already used
by existing builtin functions in Derby?
Do you agree?
>) | http://mail-archives.apache.org/mod_mbox/db-derby-dev/201310.mbox/%3CJIRA.12667301.1378468614537.24680.1381139502310@arcas%3E | CC-MAIN-2015-48 | en | refinedweb |
Related Titles
- Full Description
Pro Android 2 shows how to build real-world and fun mobile applications using Googless for
This book is for professional software engineers/programmers looking to move their ideas and applications into the mobile space with Android. It assumes that readers have a passable understanding of Java, including being able to write classes and handle basic inheritance structures. This book also targets hobbyists.
-
- Exploring Live Folders
- Home Screen Widgets
- Android Search
- Exploring Text to Speech and Translate APIs
- Touchscreens
- Titanium Mobile: A WebKit-Based Approach to Android Development
- Working with Android Market
- Outlook and 91:Last line of code:
> queryBuilder.appendWhere(Notes._ID + "=" + );
noteId was omitted.
On page 291/292:Listing 8-3:
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
are missing. | http://www.apress.com/mobile/android/9781430226598 | CC-MAIN-2015-48 | en | refinedweb |
pickle-warehouse 0.0.17
Easily dump python objects to files, and then load them back.
Pickle Warehouse makes it easy to save Python objects to files with meaningful identifiers.
How to use
Pickle Warehouse provides a dictionary-like object that is associated with a particular directory on your computer.
from pickle_warehouse import Warehouse warehouse = Warehouse('/tmp/a-directory')
The keys correspond to files, and the values get pickled to the files.
warehouse['filename'] = range(100) import pickle range(100) == pickle.load(open('/tmp/a-directory/filename', 'rb'))
You can also read and delete things.
# Read range(100) == warehouse['filename'] # Delete del(warehouse['filename'])
The coolest part is that the key gets interpreted in a fancy way. Aside from strings and string-like objects, you can use iterables of strings; all of these indices refer to the file /tmp/a-directory/foo/bar/baz:
warehouse[('foo','bar','baz')] warehouse[['foo','bar','baz']]
If you pass a relative path to a file, it will be broken up as you’d expect; that is, strings get split on slashes and backslashes.
warehouse['foo/bar/baz'] warehouse['foo\\bar\\baz']
Note well: Specifying an absolute path won’t save things outside the warehouse directory.
warehouse['/foo/bar/baz'] # -> foo, bar, baz warehouse['C:\\foo\\bar\\baz'] # -> c, foo, bar, baz # (lowercase "c")
If you pass a URL, it will also get broken up in a reasonable way.
# /tmp/a-directory/http/thomaslevine.com/!/?foo=bar#baz warehouse[''] # /tmp/a-directory/thomaslevine.com/!?foo=bar#baz warehouse['thomaslevine.com/!?foo=bar#baz']
Dates and datetimes get converted to
YYYY-MM-DD format.
import datetime # /tmp/a-directory/2014-02-26 warehouse[datetime.date(2014,2,26)] warehouse[datetime.datetime(2014,2,26,13,6,42)]
And you can mix these formats!
# /tmp/a-directory/http/thomaslevine.com/open-data/2014-02-26 warehouse[('', datetime.date(2014,2,26))]
It also has typical dictionary methods like
keys,
values,
items,
and
update.
When to use
pickle-warehouse is for when you want a persistant store of Python objects. If you want an in-memory pickle store, look at _pickleDB:.
Pickle Warehouse is strictly better than Mongo, so you should use it anywhere where you were previously using Mongo. Pickle Warehouse is designed for write-heavy workloads that need scalability (easy sharding), traditional database reliability (ACID), flexible schemas, and highly configurable indexing.
Pickle Warehouse is acidic
Here’s how it accomplishes that.
- Atomicity
- Writes are made to a temporary file that gets renamed.
- Consistency
- I don’t get this one, but I’m pretty sure I have it.
- Isolation
- Simultaneous writes are handled quite cleanly. If reads occur during writes, an error gets thrown, and you can try again.
- Durability
- All data are saved to disk right away.
- Downloads (All Versions):
- 19 downloads in the last day
- 341 downloads in the last week
- 1393 downloads in the last month
- Author: Thomas Levine
- License: AGPL
- Package Index Owner: tlevine
- DOAP record: pickle-warehouse-0.0.17.xml | https://pypi.python.org/pypi/pickle-warehouse/0.0.17 | CC-MAIN-2015-48 | en | refinedweb |
Mats Erik Andersson <address@hidden> writes: > diff --git a/am/readline.m4 b/am/readline.m4 > index b7ce9e4..354ab4d 100644 > --- a/am/readline.m4 > +++ b/am/readline.m4 > @@ -53,6 +53,21 @@ AC_DEFUN([gl_FUNC_READLINE], > ]) > > + dnl In case of failure, examine whether libedit can act > + dnl as replacement. Small NetBSD systems use editline > + dnl as wrapper for readline. > + if test "$gl_cv_lib_readline" = no; then > + + LIBREADLINE=-ledit > + LTLIBREADLINE=-ledit > + + AC_LINK_IFELSE([AC_LANG_PROGRAM([[#include <stdio.h> > +#include <readline/readline.h>]], > + [[readline((char*)0);]])], > + [ + fi > + > if test "$gl_cv_lib_readline" != no; then > AC_DEFINE([HAVE_READLINE], [1], [Define if you have the readline > library.]) > extra_lib=`echo "$gl_cv_lib_readline" | sed -n -e 's/yes, requires //p'` This looks fine. I'm not sure it makes sense for gnulib though, but we could keep this as a separate InetUtils file. > +dnl Where is tgetent(3) declared? > +AC_MSG_CHECKING(tgetent in -lcurses) > + + +AC_TRY_LINK([#include <curses.h> Maybe AC_LIB_HAVE_LINKFLAGS would have been simpler here? What about testing for ncurses? I understand from the readline.m4 comments that -lcurses is not working reliable on some systems, whereas -lncurses might. I'm not certain about this though, and I know next to nothing about curses vs ncurses vs termcap (and honestly, I don't want to know a lot more either :-)). > -#ifdef HAVE_READLINE > +#if defined HAVE_TGETENT_CURSES > # include <curses.h> > # include <term.h> > +#elif defined HAVE_TGETENT_TERMCAP > +# include <termcap.h> > #endif This seems great, the abuse of HAVE_READLINE for curses stuff has annoyed me. /Simon | http://lists.gnu.org/archive/html/bug-inetutils/2011-12/msg00014.html | CC-MAIN-2015-48 | en | refinedweb |
sighold, sigignore, sigpause, sigrelse, sigset - signal management
[OB XSI]
#include <signal.h>#include signal mask of the calling process before executing the signal handler; when the signal handler returns, the system shall restore the signal mask of the calling process to its state prior to the delivery of the signal. In addition, if sigset() is used, and disp is equal to SIG_HOLD, sig shall be added to the signal mask of the calling process and sig's disposition shall remain unchanged. If sigset() is used, and disp is not equal to SIG_HOLD, sig shall be removed from the signal mask of the calling process.
The sighold() function shall add sig to the signal mask of the calling process.
The sigrelse() function shall remove sig from the signal mask of the calling process.
The sigignore() function shall set the disposition of sig to SIG_IGN.
The sigpause() function shall remove sig from the signal mask of the calling process and suspend the calling process until a signal is received. The sigpause() function shall restore the signal mask of the process:
- the sigaction() function instead of the obsolescent sigset() function.
The sighold() function, in conjunction with sigrelse() or sigpause(), may be used to establish critical regions of code that require the delivery of a signal to be temporarily deferred. For broader portability, the pthread_sigmask() or sigprocmask() functions should be used instead of the obsolescent sighold() and sigrelse() functions.
For broader portability, the sigsuspend() function should be used instead of the obsolescent sigpause() function.
Each of these historic functions has a direct analog in the other functions which are required to be per-thread and thread-safe (aside from sigprocmask(), which is replaced by pthread_sigmask()). The sigset() function can be implemented as a simple wrapper for sigaction(). The sighold() function is equivalent to sigprocmask() or pthread_sigmask() with SIG_BLOCK set. The sigignore() function is equivalent to sigaction() with SIG_IGN set. The sigpause() function is equivalent to sigsuspend(). The sigrelse() function is equivalent to sigprocmask() or pthread_sigmask() with SIG_UNBLOCK set.
These functions may be removed in a future version. | http://pubs.opengroup.org/onlinepubs/9699919799/functions/sighold.html | CC-MAIN-2015-48 | en | refinedweb |
Thanks Mike! :} Mike McCune wrote: > Mike McCune wrote: >> Michael Stahnke wrote: >>> Now I apparently suck at git and email. Hopefully I suck less over >>> time. >>> >>> stahnma >>> >> >> Minor comment: >> >> public class NoteSerializer implements XmlRpcCustomSerializer { >> >> /** >> * {@inheritDoc} >> */ >> public Class getSupportedClass() { >> // TODO Auto-generated method stub >> return Note.class; >> } >> >> >> No need for the "// TODO Auto-generated method stub comment." >> >> I removed that from the patch, committed and pushed the changes into >> the repo. >> >> Thanks for the contribution! >> > > Forgot to mention I also cleaned up the extra code jsherril had > included in the patch. > | https://www.redhat.com/archives/spacewalk-devel/2008-June/000136.html | CC-MAIN-2015-48 | en | refinedweb |
Ah, you found the line. I've been poking at it for a couple of days, and just found that line too, but from a different direction... It *is* a code bug, not a compiler bug. It's tricky though: numset_find_empty_cell realloc's numbers taken... *which can cause it to move*. So the original assignment would then be writing into the old memory address, if it looks up ns->numbers_taken before the call and then makes the call and then does the assignment... what made me see this was printing out the numset and seeing it go from 0 1 2 3 4 5 6 7 8 9 to 0 1 2 3 4 5 6 7 8 9 65529, when the assignment clearly *should* have been happenning. So, this might be *masked* on some platforms by compiler differences (though I'd have to dig into the ANSI spec and reread the stuff on sequence points to convince me the compiler's allowed to do it both ways - I *suspect* that any compiler that does the lookup after the call (such that it doesn't show the problem) actually has a bug.) This also tells me that I should have *started* this effort by firing up Electric Fence - it would have caught this, and caused it to segfault at the point of the assignment to old memory. (But since ratpoison wasn't crashing, I didn't suspect memory issues.) So, does that convince you to commit the change in that form? ps. Here are some helper functions I found useful; everywhere that had a "ns=%p", ns became a "ns=%ps(%s)", ns, nsname(ns), and debug_numset got called in a bunch of places. Now that you've nailed the problem you probably don't need them, though... static char * nsname (struct numset *ns) { if (ns == rp_window_numset) return "rp_window_numset"; if (ns == rp_frame_numset) return "rp_frame_numset"; /* various: rp_screen.frames_numset */ return "???"; } void debug_numset (struct numset *ns) { #ifdef DEBUG int i; printf("DN: ns=%p(%s) taken=%d max=%d\n", ns, nsname(ns), ns->num_taken, ns->max_taken); for (i = 0; i < ns->max_taken; i++) { if (i < ns->num_taken) printf(" %d", ns->numbers_taken[i]); else printf("(%d)", ns->numbers_taken[i]); } printf("[nt=%p]\n", ns->numbers_taken); #endif } On 11/25/05, Joshua Neuheisel <address@hidden> wrote: > On 11/23/05, address@hidden <address@hidden> wrote: > > > > "ratpoison -c windows" shows that I have 12 windows, two of which are > > numbered "10". If I select window 9 and go "next", I get one of > > them; if I select 0 and go "prev", I get the other. One is an xterm, > > the other is a dclock; they were all started sequentially using > > xtoolwait (thus rapidly, but in a well defined order.) > > > > First noticed it with 1.3.0-7 under ubuntu; that doesn't mean it > > wasn't happening under debian, but I hadn't *noticed* it there. Built > > from CVS a few days ago, with the latest ChangeLog entry being > > 2005-11-05, and it still happens the same way. > > > > Alright, I think I have some new info here. I was able to reliably > reproduce the problem on MacOS X Tiger with gcc version as such: > powerpc-apple-darwin8-gcc-4.0.0 (GCC) 4.0.0 20041026 (Apple Computer, Inc. > build 4061) > > To fix it, I changed line 79 in src/number.c from > > ns->numbers_taken[numset_find_empty_cell(ns)] = n; > > to > > int ec; > ec = numset_find_empty_cell(ns); > ns->numbers_taken[ec] = n; > > and the problem went away. When I stepped through the code, I saw that the > return value from numset_find_empty_cell was being discarded, and the > assigment was being ignored. Obviously, this is a compiler error. I was > not able to reproduce it on my i686/linux machine running gcc version: > gcc (GCC) 3.4.5 20051026 (prerelease). > I was also not able to reproduce it on my MacOS X with the gcc-3.3 version: > gcc-3.3 (GCC) 3.3 20030304 (Apple Computer, Inc. build 1809). > > What gcc version did the original poster use? Or was it another compiler? > > Joshua > > -- _Mark_ <address@hidden> <address@hidden> | http://lists.gnu.org/archive/html/ratpoison-devel/2005-11/msg00006.html | CC-MAIN-2015-48 | en | refinedweb |
Redirect
From Uncyclopedia, the content-free encyclopedia
Redirect is a type of text, website, webpage or anything that contains symbols that does not contain any other symbols except for those sending a person who is reading them somewhere far away with an extremely small chance of return. Everyone who creates or uses a redirect is called a "redirector" or, simply, a "director".
edit Types of redirects
Redirects are widely used and can be divided into four main categories:
edit Web redirects
This image is an example of propaganda often used by the redirectors because it mistreats the aim of redirects which is always to gain attention and money.
Redirects on the World Wide Web are either websites with no particular content except for the different forms of redirects or different webpages with the same feature.
edit Wikipedia
The missing parts in the Wikipedia logo appeared after the first part of the UN plan against this site had been carried out.
Wikipedia is one of the most famous redirect sites (or even redirect pages) in the world. It is mostly famous because of the amount of the redirect sub-pages it currently has. On its main page all the users can see the message of such content: "Welcome to Wikipedia, the free encyclopedia that anyone can edit". [1] After clicking on the word "Wikipedia", the main page suddenly disappears and the link leads him to the article about Wikipedia where the first sentence alone contains at least ten links to other completely different articles.
The majority of readers of Wikipedia (those who spend their whole day following various redirects on this page) state that the links on this website lead you to the original article and so are a dead end. For example, the first link on the Wikipedia article about itself sends the user away to the page called "Wikipedia (disambiguation)". The first link there leads you back. This is the most simple example but it is true for all the links on this site (but sometimes it works like that only after a long series of clicking).
Wikipedia was almost deleted thanks to the initiative of the UN because it took too much place on the Internet and therefore wasted a lot of the world's energy. However, its anonymous creator stated that redirects are pages "that send a person who is reading them somewhere far away with an extremely small chance of return" (certainly quoting this article). Thus, as the majority of people do return (see the previous paragraph) to the pages they came from, Wikipedia has been falsely stopped being considered a redirect.
The situation is still unstable as many people care about the future of the planet and, knowing that they cannot erase the whole project, simply try to blank several sub-pages on Wikipedia. Those pages are always restored after and these people begin being referred to as vandals.
edit Speech redirects
Speech redirects (often known as oral redirects) are phrases that do not give any information and are used by people to make their interlocutor leave. These redirects are classified into two main categories:
- SSR (Simple Speech Redirects) or the USSR (how they are often being referred to);
- CSR (Complicated Speech Redirects), often confused with the abbreviation for the Czech Socialist Republic.
The SSR are simple and comprehensible sentences which just
inform the person, to whom they are addressed, to leave. For example the phrase "Go to hell" is one of the most commonly used SSRs where the word "hell" is actually the address of a poor place inhibited by different criminals. Others often contain obscenity.
The people who use them a lot are rappers, homeless men and tired interlocutors. On the contrary, the CSR or the ISR (Implicit Speech Redirects) are often used by the politicians, scientists and priests. These are often abstruse sentences containing imperative and to know that they are such, it is sometimes necessary to the listener to follow all the given orders and see if he has changed location or not (whether he has been redirected somewhere else).
One of the examples of the CSR is the famous quote of Winston Churchill: "We shall go on to the end. We shall fight in France, we shall fight on the seas and oceans, we shall fight with growing confidence and growing strength in the air, we shall defend our island, whatever the cost may be [...]". This is one of the most complicated speech redirects ever used. Those who discovered that it was such, were soldiers of the British army. After listening to Churchill's speech and being very bright from their birth, they understood that he actually asks everyone to fight for their homeland (the homeland of most of the British people is a small island between the Atlantic and the Arctic Oceans). They followed his order and suddenly found themselves in a completely different place from where they have been before.
Speech redirects are often used for the purposes of people who say them. Double, triple, quadruple and n-ple speech redirects (when the same redirects are repeated n amount times) are used for the benefit of their biological parents (not the adopters).
edit Paper redirects
Paper redirects are very often the printed-out versions of Web redirects. Scientific reviews that just repeat what different scientific investigations report and often (but not always) redirect the reader to all of them are another case. Finally, the address on the paper is also a kind of paper redirect as it does not contain anything except for the location of the place someone wants you to visit.
Other redirects, if been printed out but not on paper, are called real-life redirects. Books are very rarely considered redirects as they often contain some information. But a lot of complicated mystery novels redirect the reader to the previous pages where the reader may find the clue to the murder that he missed. [2]
edit Real-life redirects
A typical redirect used for a secret purpose.
The redirects in real life are maps, signposts and different clothes with the text printed on them (such as "Go Away", for example). They are often not very successful as it requires more than simple clicking to redirect their target audience to the place they want. Also, the principal differences between the real-life redirects and the virtual redirects (not the obvious one) is that a lot of people on the Internet, who get into the trap of the redirectors, are not willing to be relocated. Opposite to this, people who are redirected by, for example, signposts actually do want to move somewhere else and do not know where they currently are. [3] | http://uncyclopedia.wikia.com/wiki/Redirect | CC-MAIN-2015-48 | en | refinedweb |
The glassfish-ejb-jar for Arquillian / Glassfish Embedded does not apply.Charlee Chitsuk Apr 17, 2012 4:23 AM
Dear All,
I've tried to learn the Arquillian version 1.0.0.Final by using the Glassfish embedded version 3.1.2. I've create a simple hello world EJB application as the following: -
The Interface
@Remote @WebService public interface DummyServiceable { @WebMethod String greet(@WebParam(name = "name") final String name); }
The Service
@Stateless @WebService( endpointInterface = "com.poc.DummyServiceable", serviceName = "engine/DummyService", portName = "DummyPort" ) public class DummyService implements DummyServiceable { @Override public String greet(final String name) { String result = "Hello " + name; return result; }
The glassfish-ejb-jar.xml (Configured as CONFIDENTIAL)
<?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE glassfish-ejb-jar PUBLIC "-//GlassFish.org//DTD GlassFish Application Server 3.1 EJB 3.1//EN" ""> <glassfish-ejb-jar> <enterprise-beans> <ejb> <ejb-name>DummyService</ejb-name> <webservice-endpoint> <port-component-name>DummyService</port-component-name> <endpoint-address-uri>engine/DummyService/DummyService</endpoint-address-uri> <transport-guarantee>CONFIDENTIAL</transport-guarantee> </webservice-endpoint> </ejb> </enterprise-beans> </glassfish-ejb-jar>
The unit test
@RunWith(Arquillian.class)public class DummyServiceTester { @Deployment public static JavaArchive createDeployment() { File file = new File("src/main/resources/META-INF/glassfish-ejb-jar.xml"); Assert.assertNotNull("The glassfish specific file is not found.", file); return ShrinkWrap.create(JavaArchive.class, "myapp.jar") .addClass(DummyService.class) .addClass(DummyServiceable.class) .addAsManifestResource(EmptyAsset.INSTANCE, "beans.xml") .addAsManifestResource(file); } @EJB private DummyServiceable service; @Test public void whenGreet(){ Assert.assertNotNull("The service is null", this.service); String name = "Charlee"; String expected = "Hello " + name; String actual = this.service.greet(name); Assert.assertEquals("The result is unexpected.", expected, actual); } }
By overall, the Arquillian works very well together with Glassfish embedded 3.1.2. Anyhow, the glassfish-ejb-jar.xml does not apply as it is printed at the console as the following: -
INFO: WS00019: EJB Endpoint deployed
test listening at address at
I've tried to deploy the ear file to the remote Glassfish 3.1.2 with purpose to ensure that my glassfish-ejb-jar.xml is configured properly. The console shows me as the following: -
[#|2012-04-17T14:07:11.465+0700|INFO|glassfish3.1.2|javax.enterprise.webservices.org.glassfish.webservices|_ThreadID=18;_ThreadName=Thread-2;|
WS00019: EJB Endpoint deployed
arquillian-ear-0.0.1-SNAPSHOT listening at address at]
I'm not sure if I'm doing something wrong or not. Could you please help to advise further? Thank you very much for your help in advance. I'm looking forward to hearing from you soon.
Regards,
Charlee Ch.
1. Re: The glassfish-ejb-jar for Arquillian / Glassfish Embedded does not apply.Charlee Chitsuk May 23, 2012 1:47 AM (in response to Charlee Chitsuk)
Dear all,
I would like to update the testing result as I've changed the packaging from the JavaArchive to EnterpriseArchive, including with all required dependencies library. The webservice is listen to the https already. Anyhow, there are some missing as it is listened on the admin-listener port. I will try my best to fix it and update the result as soon as possible.
Regards,
Charlee Ch. | https://community.jboss.org/message/737190?tstart=0 | CC-MAIN-2015-48 | en | refinedweb |
The Gnome Applet widget module of Gtk-Perl (Gnome::Applet namespace).
WWW:
No installation instructions: this port has been deleted.
The package name of this deleted port was:
PKGNAME:
NOTE: FreshPorts displays only information on required and default dependencies. Optional dependencies are not covered.
No options to configure
Number of commits found: 6
As announced on May 6, remove the broken p5-GnomeApplet port
BROKEN: Does not compile
Clear moonlight beckons.
Requiem mors pacem pkg-comment,
And be calm ports tree.
E Nomini Patri, E Fili, E Spiritu Sancti.
Upgrade to 0.7008
Upgrade to 0.7006.
Add p5-GnomeApplet, it's perl binding for Gnome Applet.
Servers and bandwidth provided byNew York Internet, SuperNews, and RootBSD
27 vulnerabilities affecting 58 ports have been reported in the past 14 days
* - modified, not new
All vulnerabilities | http://www.freshports.org/x11-toolkits/p5-GnomeApplet/ | CC-MAIN-2015-48 | en | refinedweb |
Ok, i know that i am doing something wrong and if possible would like to be pointed in the right direction. Im just learning java, my background is in c/c++.
I need to be able to add custom levels to the logger and im defiantly doing it wrong. Couldnt find much on the internet about it.
Any help would be greatly appreciated. Here is my code so far.
import java.util.Scanner; import java.util.logging.FileHandler; import java.util.logging.Level; import java.util.logging.Logger; import java.io.*; public class LogRunner { public static void main(String[] args) { // This is what i could come up with so far but its wrong. // Create the new levels final Level LogRunner debug = new LogRunner(); final Level LogRunner error = new LogRunner(); try{ FileHandler hand = new FileHandler("application.log"); Logger log = Logger.getLogger("log_file"); log.addHandler(hand); log.debug("This is bad debug it! "); log.info("Here is the info "); log.warning("DANGER DANGER "); log.error("There seems to be an error "); System.out.println(log.getName()); } catch(IOException e){} } }
Thanks in advance for any help
seanman | http://www.javaprogrammingforums.com/java-se-apis/11123-custom-log-level-help.html | CC-MAIN-2015-48 | en | refinedweb |
The Label widget is a standard Tkinter widget used to display a text or image on the screen. The label can only display text in a single font, but the text may span more than one line. In addition, one of the characters can be underlined, for example to mark a keyboard shortcut.
When to use the Label Widget
Labels are used to display texts and images. The label widget uses double buffering, so you can update the contents at any time, without annoying flicker.
To display data that the user can manipulate in place, it’s probably easier to use the Canvas widget.
Patterns #
To use a label, you just have to specify what to display in it (this can be text, a bitmap, or an image):
from Tkinter import * master = Tk() w = Label(master, text="Hello, world!") w.pack() mainloop()
If you don’t specify a size, the label is made just large enough to hold its contents. You can also use the height and width options to explicitly set the size. If you display text in the label, these options define the size of the label in text units. If you display bitmaps or images instead, they define the size in pixels (or other screen units). See the Button description for an example how to specify the size in pixels also for text labels.
You can specify which color to use for the label with the foreground (or fg) and background (or bg) options. You can also choose which font to use in the label (the following example uses Tk 8.0 font descriptors). Use colors and fonts sparingly; unless you have a good reason to do otherwise, you should stick to the default values.
w = Label(master, text="Rouge", fg="red") w = Label(master, text="Helvetica", font=("Helvetica", 16))
Labels can display multiple lines of text. You can use newlines or use the wraplength option to make the label wrap text by itself. When wrapping text, you might wish to use the anchor and justify options to make things look exactly as you wish. An example:
w = Label(master, text=longtext, anchor=W, justify=LEFT)
You can associate a Tkinter variable with a label. When the contents of the variable changes, the label is automatically updated:
v = StringVar() Label(master, textvariable=v).pack() v.set("New Text!")
You can use the label to display PhotoImage and BitmapImage objects. When doing this, make sure you keep a reference to the image object, to prevent it from being garbage collected by Python’s memory allocator. You can use a global variable or an instance attribute, or easier, just add an attribute to the widget instance:
photo = PhotoImage(file="icon.gif") w = Label(parent, image=photo) w.photo = photo w.pack()
Reference #
-) | http://effbot.org/tkinterbook/label.htm | crawl-001 | en | refinedweb |
Slashdot Log In
MySQL and Perl for the Web:4, Informative)
( | Last Journal: Tuesday April 13 2004, @11:24AM)
??:4,:Pathologically Eclectic Rubbish Lister (Score:5, Funny)
( | Last Journal: Friday October 12, @01:42PM)
<1/2 g>
Three cheers for LAMP (Score:2, Insightful)
Re:Three cheers for LAMP (Score:5, Funny)
()
How the hell do you Godwin a thread about a Perl book?
Good! (Score:1, Troll)
( | Last Journal: Wednesday May 12 2004, @11:28AM)
Perl is very flexible (Score:1, Funny)
Perl / MySQL CMS solution. (Score:4, Informative)
(Last Journal: Wednesday September 22 2004, @11:13AM).
I used to enjoy coding in Perl (Score!
;)
Choice Quote (Score:1, Funny)
However if you are putting together a basket and require that this be done while not being mentally "all there" and submerged in seawater, "Underwater Basketweaving for the Mildly Retarded" will aid immensely in your project.
I mean c'mon, can't we infer the subject matter from the book title? I'll admit that there are some obscure ones out there that you can't tell but this one just seems to be a no-brainer..
Why actually choose MySQL? (Score:5, Interesting)
( | Last Journal: Wednesday January 05 2005, @03:37PM).
The one real gripe I have about Postgres, is god, these people are in love with Hash joins. Any really good database should avoid hash joins like the plague unless it can guarantee that all the data that could possibly be returned by a subquery will fit into RAM. Postgres often wildly mis-estimates the size of a sub query, decides to hash it, and then gets killed when the query returns 100,000.
Perl synonymous? (Score:3, Insightful) own modules (preloaded in your apache config, of course), and you'll have a high-speed, easily-maintained, dynamic site in no time.
I'm a little confused... (Score:3, Insightful)
( | Last Journal: Thursday January 05 2006, @05:36PM).
If you are interested in PHP and SQL... (Score:1).
Nice Book (Score:1)
You need to know perl basics to begin using this though. Just reading Learning Perl would do.
Great book for beginners (that's me). Covers a broad range of web apps.
another review (Score:1):Huh? (Score:2, Insightful)
(Last Journal: Sunday November 11, @09:31AM)
Blah blah blah buzzword buzzword buzzword spacefiller spacefiller blah blah this submission is now longer. at sharing code in a resuable way as Perl with its CPAN system. Borrowing PHP code usually means copy, paste, modify. This problem is exacerbated by the fact that PHP seems to make all functions global within a single namespace, so you have no way of knowing if someone else is stepping on your function name.
Re:USE ASP! (Score:3, Interesting)
(Last Journal: Friday January 30 2004, @03:41PM):Huh? (Score:2)
Translation: either ignorance or flamebait.
You don't use phrases like 'no argument' without expecting an argument.
Re:Put me down for... (Score:2) language in history.
Well, Perl may be used by more people, but I wouldn't be surprised if more people understand PHP.
Re:Perl or PHP faster? (Score:2)
( | Last Journal: Thursday January 05 2006, @05:36PM) great for dymanic web content, cgi's were never good for that.
PHP is actually an counter to something more like SSI's, or ASP
Re:Huh? (Score:1)
Translation please?
The book would be helpful in creating a website. | http://books.slashdot.org/books/04/04/26/1939200.shtml | crawl-001 | en | refinedweb |
ElementTree: Working with Qualified Names
Updated December 8, 2005 | July 27, 2002 | Fredrik Lundh
The elementtree module supports qualified names (QNames) for element tags and attribute names. A qualified name consists of a (uri, local name) pair.
Qualified names was introduced with the XML Namespace specification.
Storing Qualified Names in Element Trees
The element tree represents a qualified name pair as a string of the form “{uri}local“.
The following example creates an element where the tag is the qualified name pair (, egg).
elem = Element("{}egg"}
To check if a name is a qualified name, you can do:
if elem.tag[0] == "{": ...
(you can also use startswith, but the method call overhead makes that a lot slower in current Python versions.)
Storing Qualified Names in XML Files
In theory, we could store qualified names right away in XML files. For example, let’s use the {uri}local notation in the file itself:
<{}egg> some content </{}egg>
There are two problems with this approach. One is the according to the XML base specification, { and } cannot be used in element tags and attribute names. Another, more important problem is bloat; even with a short uri like the one used in the example above, we’ll end up adding nearly 50 bytes to each element. Put a couple of thousand elements in a file, and use longer URIs, and you’ll quickly end up with hundreds of kilobytes of extra data.
To get around this, the XML namespace authors came up with a simple encoding scheme. In an XML file, a qualified name is written as a namespace prefix and a local part, separated by a colon: e.g. “prefix:local“.
Special xmlns:prefix attributes are used to provide a mapping from prefixes to URIs. Our example now looks like this:
<spam:egg xmlns: some content </spam:egg>
For a single element, this doesn’t save us that much. But the trick is that once a prefix is defined, it can be used in hundreds of thousands of places. If you really want to minimize the overhead, you can pick one-character prefixes, and get away with four bytes extra per element.
However, it should be noted that xmlns attributes only affect the element they belong to, and any subelements to that element. But an element can define a prefix even if it doesn’t use it itself, so you can simply put all namespace attributes on the toplevel (document) element, and be done with it.
Qualified Attribute Values
XML-languages like WSDL and SOAP uses qualified names both as names and as attribute values.The standard parser does this for element tags and attribute names, but it cannot do this to attribute values; an attribute value with a colon in it may be a qualified name, or it may be some arbitrary string that just happens to have a colon in it.
And once the element tree has been created, it’s too late to map prefixes to namespace uris; we need to know the prefix mapping that applied to the element where the attribute appears.
To work around this, the recommended approach is to use the iterparse function, and do necessary conversions on the fly. In the following example, the namespaces variable will contain a list of (prefix, uri) pairs for all active namespaces.
events = ("end", "start-ns", "end-ns") namespaces = [] for event, elem in iterparse(source, events=events): if event == "start-ns": namespaces.append(elem) elif event == "end-ns": namespaces.pop() else: ...
Note that the most recent namespace declaration is added to the end of the list; to find the URI for a given prefix, you have to search backwards:
def geturi(prefix, namespaces): for p, uri in reversed(namespaces): if p == prefix: return uri return None # not found | http://effbot.org/zone/element-qnames.htm | crawl-001 | en | refinedweb |
exists.
Since the two APIs are nearly identical, you can get a good feel for the use of both interfaces by looking at either.
The
Destination,
Queue, and
Topic interfaces represent administered objects. Administered objects are created by an administrator and are registered with a directory. They represent globally accessible
resources. In this case, they are used to encapsulate the identity (or address) of a message destination such as a queue or
a topic. They are not themselves a destination. They provide a platform-independent way to encapsulate provider-specific addresses.
Destination objects support concurrent use.
The following code demonstrates how the
Queue interface and associated implementation class
Queue_Impl are implemented. The code for the
Topic interface is nearly identical.
First the
Queue interface:
public interface Queue extends Remote { public String getQueueName() throws RemoteException; }
Then the
Queue_Impl interface:
public class Queue_Impl extends UnicastRemoteObject implements Queue { private String _stringQueueName = null;
public Queue_Impl(String stringQueueName) throws RemoteException { _stringQueueName = stringQueueName; }
public String getQueueName() throws RemoteException { return _stringQueueName; }
public int hashCode() { return _stringQueueName.hashCode(); }
public boolean equals(Object object) { return object.equals(_stringQueueName); } }
The
ConnectionFactory,
QueueConnectionFactory, and
TopicConnectionFactory interfaces represent administered objects. They encapsulate a set of configuration parameters that have been defined by an administrator. A client uses a
ConnectionFactory to create a
Connection with a JMS provider. They simplify the administration of a message service in a large-scale enterprise setting.
ConnectionFactory objects support concurrent use.
Since our implementation has no interesting administrative infrastructure, the only method implemented is the method that returns a connection. The following interface and implementation classes illustrate how this looks within the queue domain. Once again, the topic source code is nearly identical.
First the
QueueConnectionFactory interface:
public interface QueueConnectionFactory extends Remote { public QueueConnection createQueueConnection() throws RemoteException; }
Then the
QueueConnectionFactory_Impl interface:
public class QueueConnectionFactory_Impl extends UnicastRemoteObject implements QueueConnectionFactory { public QueueConnectionFactory_Impl() throws RemoteException { }
public QueueConnection createQueueConnection() throws RemoteException { return new QueueConnection(); } }
The
Connection,
QueueConnection, and
TopicConnection interfaces represent an active connection to a JMS provider. They are the conduit through which communication flows. A client
uses them to create a
Session with the JMS provider. They provide a single point for all communication activities -- thus enabling resource (such as connection)
pooling as well as authentication and security.
Connection objects support concurrent use.
The following code illustrates the implementation within the queue domain.
public class QueueConnection implements Serializable { private Hashtable _hashtable = new Hashtable();
private HQueue lookup(Queue queue) throws MalformedURLException, NotBoundException, UnknownHostException, RemoteException, IOException { HQueue hqueue = null;
if ((hqueue = (HQueue)_hashtable.get(queue)) != null) { return hqueue; }
hqueue = (HQueue)Naming.lookup(queue.getQueueName());
_hashtable.put(queue, hqueue);
return hqueue; }
void send(Message message, Queue queue) throws NotBoundException, RemoteException, IOException { lookup(queue).send(message); }
Message receive(Queue queue) throws NotBoundException, RemoteException, IOException { return lookup(queue).receive(); }
public QueueSession createQueueSession() { return new QueueSession(this); } }
The
Session,
QueueSession, and
TopicSession interfaces represent a single threaded context for sending and receiving messages. A client uses them to create one or more
MessageProducers or
MessageConsumers. They also provide a factory for creating messages and define a serial order for the messages they consume or produce. Sessions
provide a natural way for clients to organize interactions with a provider.
The following code illustrates an implementation.
public class QueueSession implements Serializable { private QueueConnection _queueconnection = null;
QueueSession(QueueConnection queueconnection) { _queueconnection = queueconnection; }
void send(Message message, Queue queue) throws NotBoundException, RemoteException, IOException { _queueconnection.send(message, queue); }
Message receive(Queue queue) throws NotBoundException, RemoteException, IOException { return _queueconnection.receive(queue); }
public QueueSender createSender(Queue queue) { return new QueueSender(this, queue); }
public QueueReceiver createReceiver(Queue queue) { return new QueueReceiver(this, queue); } }
The
MessageProducer,
QueueSender, and
TopicPublisher interfaces represent objects that are used to send messages to a destination.
public class QueueSender implements Serializable { private QueueSession _queuesession = null; private Queue _queue = null;
QueueSender(QueueSession queuesession, Queue queue) { _queuesession = queuesession; _queue = queue; }
public void send(Message message) throws NotBoundException, RemoteException, IOException { _queuesession.send(message, _queue); } }
The
MessageConsumer,
QueueReceiver, and
TopicSubscriber interfaces represent objects that are used to receive messages sent to a destination.
public class QueueReceiver implements Serializable { private QueueSession _queuesession = null; private Queue _queue = null;
QueueReceiver(QueueSession queuesession, Queue queue) { _queuesession = queuesession; _queue = queue; }
public Message receive() throws NotBoundException, RemoteException, IOException { return _queuesession.receive(_queue); } }
I've provided the complete source code for those readers who can't sleep until they see it all. For instructions on downloading, extracting, setting up, and running the code, please read the accompanying sidebar.
With the close of this month's column, I begin a short vacation. After over two years of writing, it's time to step back and reorient myself. In the two years since I started writing this column, Java has grown from a fledgling to an adult. It now seems poised to bury itself within the heart of the enterprise. When I return, I plan to continue to show you how to use the Java platform to solve your problems. Until then.
Free Download - 5 Minute Product Review. When slow equals Off: Manage the complexity of Web applications - Symphoniq
Free Download - 5 Minute Product Review. Realize the benefits of real user monitoring in less than an hour. - Symphoniq | http://www.javaworld.com/jw-03-1999/jw-03-howto.html | crawl-001 | en | refinedweb |
Static interfaces in C++
Brian McNamara and Yannis Smaragdakis
Georgia Institute of Technology
lorgon@cc.gatech.edu, yannis@cc.gatech.edu
Abstract
We present an extensible framework for defining and using "static interfaces" in C++. Static interfaces are especially useful as constraints on template parameters. That is, in addition to the usualtemplate <class T>, template definitions can specify that T "isa" Foo, for some static interface named Foo. These "isa-constraints" can be based on either inheritance (named conformance: T publicly inherits Foo), members (structural conformance: T has these member functions with these signatures), or both. The constraint mecha! ! ! nism imposes no space or time overheads at runtime; virtual functions are conspicuously absent from our framework.
We demonstrate two key utilities of static interfaces. First, constraints enable better error messages with template code. By applying static interfaces as constraints, instantiating a template with the wrong type is an error that can be caught at the instantiation point, rather than later (typically in the bowels of the implementation). Authors of template classes and template functions can also dispatch "custom error messages" to report named constraint violations by clients, making debugging easier. We show examples of the improvement of error messages when constraints are applied to STL code.
Second, constraints enable automatic compile-time dispatch of different implementations of class or function templates based on the named conformance properties of the template types. For example,Set<T> can be written to automatically choose the most efficient implementation: use a hashtable implementation if "T isa Hashable", or else a binary search tree if "T isa LessThanComparable", or else a linked-list if merely "T isa EqualityComparable". This dispatch can be completely hidden from clients of Set<! ! ! FONT SIZE=1>, who just use Set<T> as usual.
1. Introducing static interfaces
We begin by demonstrating how static interfaces could be useful, and then show how to emulate them in C++.
1.1 Motivation
To introduce the idea of "static interfaces", we shall first consider an example using traditional "dynamic interfaces"--that is, abstract classes:
struct Printable {
virtual ostream& print_on( ostream& o ) const =0;
};
struct PFoo : public Printable {
ostream& print_on( ostream& o ) const {
o << "I am a PFoo" << endl;
return o;
}
};
Here "PFoo isa Printable" in the traditional sense of "isa". We have dynamic polymorphism; a Printable variable can be bound at runtime to any kind of (concrete) Printable object. Both named and structural conformance are enforced by the compiler: inheritance is an explicit mechanism for declaring the intent to be a subtype (named conformance), and the pure virtual function implies that concrete subclasses must define an operation with that signature (structural conformance). These conformance guarantees enable users to write functions like
ostream& operator<<( ostream& o, Printable& p ) {
p.print_on(o);
return o;
}
so that "std::cout << p" works for any Printable object p.
The problem with this mechanism for expressing interfaces is that it is sometimes overkill. Virtual functions are a good way to express interfaces when we want dynamic polymorphism. But sometimes we only need static polymorphism. In these cases, interfaces based on abstract classes introduce much needless inefficiency. First, they add a vtable to the overhead of each instance of concrete objects (a space penalty). Second, they introduce a point of indirection in calls to methods in the interface (a runtime penalty). Finally,virtual calls are unlikely to be inlined (an optimization penalty).
As a result, libraries that exclusively use templates to achieve polymorphism (static polymorphism) usually avoidvirtual altogether. The STL is one common example. However, there are no explicit language constructs to express "static interfaces". The only way to say, for example, that a type T "isa" Printable and that it supports the print_on() method with a particular function signature is to use abstract classes as described above. Thus, when templates rely on such interface constraints being met by the template type, the constraints are typically left implicit. At best, clients of template code may find constraints on template parameters in the documentation.
SGI's STL documentation [SGI] does an excellent job with template constraints; indeed, they go as far as dubbing such constraints "concepts", and their documentation describes "concept hierarchies". Each concept informally describes a particular interface requirement for a class. In order to use the STL, one must instantiate templates with types that are "models" of particular concepts likeEqualityComparable, LessThanComparable, ForwardIterator, RandomAccessIterator, CopyConstructable, etc. These concepts exist explicitly only in the documentation; in the STL code they lie implicit. (Actually, SGI's own STL implementation now includes a kind o! ! ! f concept checking that is similar to the method we shall eventually describe at the beginning of Section 3.1.)
Most users of the STL are aware of concepts--it is hard to use the STL effectively if one has no idea of what aForwardIterator is, for example. Concepts are important to understanding the whole framework of the STL, despite the fact that these concepts are not directly represented in C++. There are also meaningful relationships among concepts (e.g. a RandomAccessIterator "isa" ForwardIterator) which can only be expressed in documentation, as C++ has no explicit mechanisms to define concepts, much less describe the relationships among them.
The problems with leaving concepts implicit in the code are numerous. First, detection of improper template instantiations comes late (if at all), and the error messages are often cryptic and verbose. Second, there is no obvious way to determine what "concept constraints" there are for a template parameter without examining the implementation of the template extremely carefully. This makes using the template difficult and error-prone (unless there is copious documentation). Finally, there is no way to have the code "reason" about its own concept-constraints at compile-time (since the concepts are absent from the code).
In short, while C++ provides a nice mechanism for dynamic polymorphism, it provides no analogous mechanism for static polymorphism. Abstract classes allow users to specify constraints on parameters to functions, which may be bound to different objects at run-time. However there is no mechanism to specify constraints on parameters to templates, which may be bound to different types at compile-time. As a result, template code either must leave the constraints implicit, or make clever use abstract class hierarchies (which makes the performance suffer dramatically). However, as we shall demonstrate, static interfaces can give us the benefits of both approaches at once.
1.2 Imagining new language constructs
Let us imagine some new language constructs which let us explicitly express concepts. Consider a template function that sorts a vector. We can imagine writing:
template <class T> interface LessThanComparable {
bool operator<( const T& ) const;
};
struct Foo models LessThanComparable<Foo> {
bool operator<( const Foo& ) const { ... }
// other members
};
template <class T isa LessThanComparable<T> >
// sort the vector with "operator<" as the comparator
void Sort( vector<T>& v ) { ... }
Note that there are three new keywords.interface lets us declare a static interface, which has a name and a structure, and which imposes no virtual overhead on the types that model it. models lets us declare that a type models the interface; the compiler will enforce that the type has the right methods. isa lets us express constraints on types; here, Sort() can only be called for vector<T> types where T is a model of LessThanComparable<T>.
This cleanly expresses what we would like to do. It is similar to the idea of "constrained generics" found in some languages (like GJ [BOSW98]), only without the ability to do dynamic polymorphism (and therefore without the associated performance penalties). It turns out we can do something very close to this using standard C++.
1.3 Emulating Static Interfaces in C++
Here is how we express a static interface (a concept) in our framework:
template <class T> struct LessThanComparable {
MAKE_TRAITS; // a macro (explained later)
template<class Self> static void check_structural() {
bool (Self::*x)(const T&) const = &Self::operator<;
(void) x; // suppress "unused variable" warning
}
protected:
~LessThanComparable() {}
};
Note that we encode the structural conformance check as a template member function (which will be explicitly instantiated elsewhere, withSelf bound to the concrete type being considered) to ensure that types conform structurally. This function takes pointers to the desired members to ensure that they exist. The protected destructor prevents anyone from creating direct instances of this class (but allows subclasses to be instantiated, which will be important in a moment). The MAKE_TRAITS line is a short macro for defining an associated traits class; its importance and use will be described in Section 2.1.
We use public inheritance to specify that a type conforms to a particular static interface (that is, the type models a concept):
struct Foo : public LessThanComparable<Foo> {
bool operator<( const Foo& ) const { ... }
// whatever other stuff
};
Then we use theStaticIsA construct to determine if a type conforms to a static interface. (More details of the implementation of StaticIsA will be explained later.) The expression
StaticIsA< T, SI >::valid
is a boolean value computed at compile-time that tells us if a typeT conforms to a static interface SI. In other words, the value is true iff "T isa SI" (under the meaning of isa we described in Section 1.2). Thus, for example, in Sort(), we can use
StaticIsA< T, LessThanComparable<T> >::valid
to determine if a particular instantiation of the template function is ok. Since the value ofStaticIsA<T,SI>::valid is a compile-time boolean value, we can use template specialization to choose different alternatives for the template at compile-time, based on T's conformance.
2. Applications
In this section we show two useful applications ofStaticIsA: custom error messages and selective implementation dispatch.
2.1 Using StaticIsA to create understandable error messages
Now that we have seen the idea behindStaticIsA, let us put it to use. Often a violation of the "concept constraints" for template arguments causes the compiler to emit tons of seemingly useless error messages. For example, with g++2.95.2 (which comes with an old SGI library), trying to std::sort() a collection of NLCs (where NLC is a type that does not support operator<) causes the compiler to report fifty-seven huge lines of error messages for one innocent-looking line of code. To emphasize the point, realize that this tiny (complete) program
#include <algorithm>
struct NLC {}; // Not LessThanComparable
int main() {
NLC a[5];
std::sort( a, a+5 );
}
compiled with g++ (without any command-line options to turn on extra warnings) produces a stream of error messages beginning with
/include/stl_heap.h: In function `void __adjust_heap<NLC *, int, NLC>(NLC *, int, int, NLC)':
/include/stl_heap.h:214: instantiated from `__make_heap<NLC *,NLC,ptrdiff_t>(NLC *, NLC *, NLC *,
ptrdiff_t *)'
/include/stl_heap.h:225: instantiated from `make_heap<NLC *>(NLC *, NLC *)'
/include/stl_algo.h:1562: instantiated from `__partial_sort<NLC *, NLC>(NLC *, NLC *, NLC *,
NLC *)'
/include/stl_algo.h:1574: instantiated from `partial_sort<NLC *>(NLC *, NLC *, NLC *)'
/include/stl_algo.h:1279: instantiated from `__introsort_loop<NLC *, NLC, int>(NLC *, NLC *,
NLC *, int)'
and continuing through functions in the bowels of the STL implementation that we did not even know existed. On the other hand, sinceStaticIsA lets us detect template constraint conformance at the instantiation point, we can use template specialization to dispatch "custom error messages" which succinctly report the problem at a level of abstraction clients will be able to understand (as we shall illustrate presently).
Consider again theSort() function described above. We shall now use StaticIsA to generate a "custom error message" if the template type T does not conform to LessThanComparable<T>. The actual Sort() function does not do any work, it simply forwards the work to a helper class, which will be specialized on a boolean value:
template <class T> inline void Sort( vector<T>& v ) {
SortHelper< StaticIsA<T,LessThanComparable<T> >::valid >::doIt( v );
}
The helper class is declared as
template <bool b> struct SortHelper;
and it has two very different specializations. Thetrue version does the actual work as we would expect:
template <> struct SortHelper<true> {
template <class T> static inline void doIt( vector<T>& v ) {
// actually sort the vector, using operator< as the comparator
}
};
Thefalse version reports an error:
template <class T>
struct Error {};
template <> struct SortHelper<false> {
template <class T> static inline void doIt( vector<T>& ) {
Error<T>::Sort_only_works_on_LessThanComparables;
}
};
Now, clients can do
vector<Foo> v; // recall: Foo isa LessThanComparable<Foo>
Sort(v);
and the vector gets sorted as we expect. But if clients try to sort a vector whose elements cannot be compared...
vector<NLC> v;
Sort(v);
...then our compiler says:
x.cc: In function `static void SortHelper<false>::doIt<NLC>(vector<NLC,allocator<NLC> > &)':
x.cc:73: instantiated from `Sort<NLC>(vector<NLC,allocator<NLC> > &)'
x.cc:78: instantiated from here
x.cc:67: `Sort_only_works_on_LessThanComparables' is not a member of type `Error<NLC>'
and nothing else.
The technique of trying to access
Error<T>::some_text_you_want_to_appear_in_an_error_message
seems to work well on different compilers; for all practical purposes, it lets one create "custom error messages". Now the error is pinpointed, says what is wrong in no uncertain terms, and is not excessively verbose. This technique is mentioned in [CE00].
Custom error messages can be applied equally well to the algorithms in the STL by writing "wrappers" for STL functions. By supplying a wrapper for functions likestd::sort(), we can create a function that is as efficient as std::sort (it simply forwards the work with an inline function that an optimizing compiler will easily elide), but will report meaningful errors if the type of element being sorted is not a LessThanComparable. Rather than getting fifty-seven lines of gobbledegook from an improper instantiation of std::sort(), we can get four lines of meaningful messages from our own version of sort().
The astute reader may be wondering what will happen with code like this:
vector<int> v;
Sort(v);
Clearlyint does not (and cannot) inherit from LessThanComparable. However, we would like this to compile successfully. We use the usual "traits" trick to solve this problem; each static interface has an associated traits class, which can be specialized for types which conform to the interface "outside of the framework". Put another way, traits provide a way to declare conformance extrinsically.
We mentioned theMAKE_TRAITS macro before; here is its definition:
#define MAKE_TRAITS \
template <class Self> \
struct Traits { \
static const bool valid = false; \
};
For any static interfaceSI, SI::Traits<T>::valid says whether or not T "isa" SI "outside the framework" (that is, T exhibits named conformance to SI despite the lack of an inheritance relationship). To declare named conformance extrinsically, we just specialize the template. For example, to say that int isa LessThanComparable<int>, we say
template <>
struct LessThanComparable<int>::Traits<int> : public Valid {};
whereValid is defined as just
struct Valid {
static const bool valid = true;
};
More generally
template <>
struct SomeStaticInterface::Traits<SomeType> : public Valid {};
declares thatSomeType "isa" SomeStaticInterface.
At this point, we should take a moment to describe the behavior of
StaticIsA<T,SI>::valid
which comprises the "brains" of our static interface approach. The behavior is essentially the following:
if "T isa SI" according to the "traits" of SI, then
return true
else if T inherits SI (named conformance), then
apply the structural conformance check
return true
else
return false
Note that the structural conformance check will issue a compiler-error if structural conformance is not met. For example, if classFoo inherits LessThanComparable<Foo>, but does not define an operator<, then this structural conformance error will be diagnosed when
StaticIsA<Foo,LessThanComparable<Foo> >::valid
is first evaluated (and thus thecheck_structural() member of the static interface is instantiated). The error message will be generated by the compiler (not a "custom error message" as described above), as there is no general way to detect the non-existence of such members before the compiler does. This is unfortunate, as such errors tend to be verbose. Fortunately, these kinds of errors are errors by suppliers, not clients. As a result, they only need to be fixed once.
2.2 Using static interfaces for static dispatch
Just as we can use template specialization to create custom error messages, we can similarly use it to dispatch the appropriate implementation of a template function or class, based on the concepts modeled by the template argument. For example, one could implement aSet with a hashtable, a binary search tree, or a linked list, depending on if the elements were Hashables, LessThanComparables, or EqualityComparables respectively. The code below says exactly that: clients may use Set<T>, and Set will automatically choose the most effective implementation (or report a custom error message if no imp! ! ! lementation is appropriate):
enum { SET_HASH, SET_BST, SET_LL, SET_NONE };
template <class T> struct SetDispatch {
static const bool Hash = StaticIsA<T,Hashable>::valid;
static const bool LtC = StaticIsA<T,LessThanComparable<T> >::valid;
static const bool EqT = StaticIsA<T,EqualityComparable<T> >::valid;
static const int which = Hash?SET_HASH: LtC?SET_BST: EqT?SET_LL: SET_NONE;
};
template <class T, int which = SetDispatch<T>::which > struct Set;
template <class T> struct Set<T,SET_NONE> {
static const int x = Error<T>::
Set_only_works_on_Hashables_or_LessThanComparables_or_EqualityComparables;
};
template <class T> struct Set<T,SET_LL> {
Set() { cout << "Set list" << endl; }
};
template <class T> struct Set<T,SET_BST> {
Set() { cout << "Set bst" << endl; }
};
template <class T> struct Set<T,SET_HASH> {
Set() { cout << "Set hash" << endl; }
};
[Side note:SetDispatch might best be implemented with enums rather than static const variables, but we encountered bugs in our compiler when using enums as template parameters.] The code above is simple: SetDispatch<T>::which computes information about the conformance of T to various static interfaces; Set then uses this information to dispatch the appropriate implementation (or a custom error message).
3. The Design Space of Constraint Checking
In the previous sections, we have described the most "radical" features of our framework. We chose to present these features first, as they are the key features that set our framework apart from other concept-checking ideas that we are aware of. In fact, however, our framework gives the user a choice among concept-checking approaches which span the concept-checking design space. We describe the design space and the components of our system in this section.
3.1 Traditional concept-checking approaches
Stroustrup describes a simple constraints mechanism in [Str94]. An up-to-date generalization of that approach can be seen in this code:
template <class T>
struct TraditionalLessThanComparable {
T y;
template <class Self>
void constraints(Self x) {
bool b = x<y;
(void) b; // suppress spurious warning
}
};
template <class T, class Concept>
inline void Require() {
(void) ((void (Concept::*)(T)) &Concept::template constraints<T>);
}
// just inside any template that wants to ensure the requirement:
Require< Foo, TraditionalLessThanComparable<Foo> >();
A very similar approach is used in the latest versions of the SGI implementation of the STL [SGI]. We will call this approach traditional concepts and contrast it with static interfaces. Note that the benefit of traditional concepts is simply that the compiler will first issue its error messages at the template instantiation point, rather than deeper in the bowels of the implementation. The key differences between traditional concepts and static interfaces are:
1. Traditional concepts use no named conformance; they are entirely structural.
2. Traditional concepts call methods in the required interface (x<y) rather than taking pointers to them.
The consequence of the first difference is that client classes (e.g., models ofTraditionalLessThanComparable) do not need to explicitly state that they conform to the TraditionalLessThanComparable protocol. In practical terms, this is a good property for legacy code and third-party libraries (including the STL) but a dangerous property for new development, because of the possibilities of accidental conformance. Its philosophy is also contrary to the design model of the C++ language that uses named inheritance (a subtype explicitly specifies its supertype) instead of structural inheritance.
The consequence of the second difference (calling methods instead of taking pointers) is that traditional concepts are more "forgiving" than static interfaces. Indeed, static interfaces are strict in two ways. First, they require exact type conformance for the method signatures. For instance, if a method is expected to accept two integer arguments, static interfaces will reject a method accepting arguments of a type to which integers implicitly convert. Second, taking a pointer to a member of a typeT does not allow us to check for non-member functions that take a T as an argument (most commonly, overloaded operators), as the language does not allow us to unambiguously make a pointer to such non-members functions without knowing the namespace these functions are defined in. This is perhaps a defect in the language standard. Koenig loo! ! ! kup (section 3.4.2 of [ISO98]) allows us to call these same functions that we cannot take pointers to. Nevertheless, Koenig lookup does not apply in the context of overload resolution when taking pointers to non-member functions (that is, in 13.4 of [ISO98]). Note that the latter is a limitation of static interfaces (i.e., the scheme is being unnecessarily strict beyond our control).
Again, we see that the trade-off is quite similar to before. For legacy or third party code, one may want to be as "forgiving" as possible, and, thus, the "traditional" concept checking described above makes sense. For a single, controlled project, however, static interfaces are more appropriate, as they allow expressing strict requirements. Instead, traditional concepts make a "best effort" attempt to catch some common errors, but do not provide any real guarantees of doing so.
For an illustration consider the example ofTraditionalLessThanComparable, shown earlier. Writing out expressions (like "x<y") to check concepts is particularly error-prone, by virtue of the many implicit conversions available in C++. The problem is more evident in the return types of expressions. A class satisfying the LessThanComparable concept should implement a "<" operator that is applied on given types. Nevertheless, it is hard to ensure that the return type of this operator is exactly bool and not some type that is implicitly convertible to bool. The latter can cause problems. As an example! ! ! , consider this code:
template <class T>
void some_func() {
Require< T, TraditionalLessThanComparable<T> >();
T a, b, c, d;
if( a<b || c<d ) // Require() should ensure this is legal
std::cout << "whatever" << std::endl;
}
We would expect that any problems with the code will be diagnosed whenRequire() is instantiated. Nevertheless, it is possible to get past Require() without the compiler complaining, but then die in the body of the function. Here is one such scenario:
struct Useless {};
struct EvilProxy {
operator bool() const { return true; }
Useless operator||( const EvilProxy& ) { return Useless(); }
};
struct Bar {
EvilProxy operator<( const Bar& ) const { return EvilProxy(); }
};
some_func<Bar>();
Indeed, we have been foiled by implicit conversions.Bar's operator< does not, in fact, return a bool; it returns a proxy object (implicitly convertible to bool). This proxy is malicious (as suggested by its name) and conspires to make the expression
a<b || c<d
of typeUseless rather than type bool.
Hopefully, template authors do not often run into such contrivedly "evil" instantiations. Nevertheless, the point remains that implicit conversions can lead to surprises. The SGI STL implementation is susceptible to problems of this kind. There are several examples one can devise that evade the concept checking mechanism, although they are not legal instances of the documented STL concepts.
The case of static interfaces vs. traditional concepts is an instance of the classic design trade-off of detection mechanisms: one approach avoids most false-positives at the expense of producing many false-negatives, while the other does just the reverse. Ideally we would like the minimize both false-positives (constraints that admit code which may turn out to be illegal or not meaningful) and false-negatives (constraints that reject code which should be acceptable).
3.2 Enhancing Traditional Concepts
It is possible to strengthen traditional concepts in order to make them less susceptible to implicit conversion attacks. Recall our example of a traditional concept with aconstraints() function with
bool b = x<y;
as the check. This check is vulnerable because implicit conversion can (purposely or inadvertently) make this code legal, without satisfying the implicit requirements. The best solution that we have found involves our own kind of proxy object. This object is designed to protect us from the "evil proxies" in the world, and hence we call it "HeroicProxy". We show the code for it and how it is used here:
template <class T>
struct HeroicProxy {
HeroicProxy( const T& ) {}
};
template <class T>
struct MaybeOptimalLessThanComparable {
const T y;
template <class Self>
void constraints(const Self x) {
HeroicProxy<bool> b = x<y;
(void) b; // suppress spurious warning
}
};
HeroicProxytakes useful advantage of the rules for implicit conversion sequences for binding references ([ISO98] sections 13.3.3.1.4, 8.5.3 and others). We are assured that the result of x<y is not some user-defined proxy object; the result might not be a bool, but at worst it will be another "harmless" type like int. We have also const-qualified x and y, to ensure that operator< isn't trying to mutate its arguments. MaybeOptimalLessThanCompare and the HeroicProxy comprise a middle-of-the-road approach, which seeks to minimize both the false-positives and false-negatives of the structural constraint detection.
3.3 A Hybrid Approach
In the previous sections, we have shed light on the continuum of structural checking that "template constraints" may enforce. Although our primary focus and novelty of our work is on static interfaces, we also supply enhanced traditional concepts in our framework. Users of our framework are, of course, encouraged to utilize the methods that are most suitable to their particular domain.
We now explain all the components of our framework. Given a static interface, which takes the general form
struct AParticularStaticInterface {
MAKE_TRAITS;
template <class Self>
void check_structural( Self ) {
// checking code, using any of the strategies just described
}
protected:
~AParticularStaticInterface() {}
};
we define the following five "components" that users can use to specify template constraints.
(1) StaticIsA< T, AParticularStaticInterface >::valid
This checks both named and structural conformance using static interfaces, and returns the named conformance as a compile-time constant boolean.
(2) Named< T, AParticularStaticInterface >::valid
This checks only named conformance, resulting in a compile-time constant boolean.
(3) RequireStructural< T, AParticularStaticInterface >()
This applies the structural check without the named check by using traditional concepts.
(4) RequireNamed< T, AParticularStaticInterface >()
This enforces the named check; equivalent to generating a custom error iffNamed<...>::valid is false.
(5) RequireBoth< T, AParticularStaticInterface >()
This enforces theStaticIsA check; equivalent to generating a custom error iff StaticIsA<...>::valid is false.
4. Limitations and Extensions
Detecting structural conformance. The first limitation is one that we have already mentioned: we cannot detect structural conformance, rather we can only ensure it with a compile-time check. C++ apparently does not have a general way to detect the presence of members in a template type. Nevertheless, as mentioned above, this is only a tiny hindrance: if there is an error with a supplied class which purports to conform to a static interface, the compiler will emit its own error message (in the instantiation of thecheck_structural() method in a static interface), and the problem can be fixed once-and-for-all in that class (it is not a problem for clients).
Also, since our "custom error messages" and "static dispatch" (both described in Section 2) both require that we can detect conformance, this means they can only be used with named conformance techniques.
Other kinds of constraints. Finally, we have only demonstrated constraints based on normal member functions. One can imagine constraints based on other properties of a type, such as public member variables, member template functions, nested types, etc. There are various clever tricks one can implement incheck_structural() to ensure some of these types of members. To unobtrusively ensure the existence of a public member variable, simply try to take a pointer to it. To ensure that a nested type exists, try to create a typedef for it. To ensure that a method is a template method, declare an empty private struct called UniqueType as a member of the static interface class, and call the method with a Un! ! ! iqueType as a parameter.
5. Related work
In [Str97], exercise 13.16, Stroustrup suggests constraining a template parameter. This exercise motivated our work; our original (simpler) solutions were poor (they either had no structural conformance guarantees or introduced needless overhead withvirtual) and we sought to improve upon them.
The idea of constraining template parameters has been around for a while--at least since Eiffel [Mey97]. Some languages, like Theta [DGLM95], have mechanisms to explicitly check structural conformance at the instantiation point, but lack named conformance. This allows for accidental conformance. Recent languages like GJ [BOSW98] support both named and structural conformance checks--including F-bounded constraints, which are necessary to express the generalized form of many common concepts (likeLessThanComparable). Clearly our work is intended to provide the same kind of mechanism for C++, though our implementation goes beyond this, allowing code to use static interfaces for more than just constraints (e.g., we can do selective implementation dispatch based on membership in a static interface hierarchy).
Alexandrescu [Ale00a] devises a way to make "traits" apply to an entire inheritance hierarchy (rather than just a single type). This seems to enable the same functionality as our selective implementation dispatch, only without structural conformance checks.
Alexandrescu also presents an "inheritance detector" in [Ale00b]. Our framework employs this detector as a general purpose named-conformance detection-engine, which is at the heart of our system.
Baumgartner and Russo [BR95] implement a signature system for C++, which is supported as an option in the g++ compiler. Unlike our work, the emphasis of this system is on structural subtyping, so that classes can conform to a signature without explicitly declaring conformance. In practice, the system is only usable in the context of runtime polymorphism for retrofitting an abstract superclass on top of pre-compiled class hierarchies. Thus, the Baumgartner and Russo work is an alternative implementation of dynamic, and not static, interfaces.
6. Evaluation and Conclusions
Static interfaces allow us to express "concepts" in C++ and have them enforced by the compiler. OurStaticIsA mechanism is a reusable way to detect and ensure named and structural conformance. Users of our framework can define their own interfaces, as well as types that conform to those interfaces, and then use StaticIsA to ensure that templates are instantiated properly.
We have demonstrated idioms that rely onStaticIsA to create informative error messages which are a significant improvement over those error messages the compiler gives. This is a result of being able to directly express "concepts" inside C++. Our techniques also make code more self-documenting.
We have witnessed much "ad hoc trickery" to mimic template constraints in the past. We believe the techniques described here provide a more effective, general, and reusable strategy than previous attempts, and we hope our work will help evolve C++ to better meet the demands of future template programmers.
Our source code can be found at:
7. References
[Ale00a] Alexandrescu, A. "Traits on Steroids." C++ Report 12(6), June 2000.
[Ale00b] Alexandrescu, A. "Generic Programming: Mappings between Types and Values." C/C++ Users Journal, October 2000.
[BR95] Baumgartner, G. and Russo, V. "Signatures: A language extension for improving type abstraction and subtype polymorphism in C++." Software Practice & Experience, 25(8), pp. 863-889, Aug. 1995.
[BOSW98] Bracha, G., Odersky, M., Stoutamire, D. and Wadler, P. "Making the future safe for the past: Adding Genericity to the Java Programming Language." OOPSLA, 1998.
[CE00] Czarnecki, K. and Eisenecker, U. Generative Programming. Addison-Wesley, 2000.
[DGLM95] Day, M., Gruber, R., Liskov, B. and Myers, A. "Subtypes vs. Where Clauses: Constraining Parametric Polymorphism." OOPSLA, 1995.
[ISO98] ISO/IEC 14882: Programming Languages -- C++. ANSI, 1998.
[Mey97] Meyer, B. Object-Oriented Software Construction (Second Edition). Prentice Hall, 1997.
[SGI] Standard Template Library Programmer's Guide (SGI).
[Str94] Stroustrup, B. The Design and Evolution of C++. Addison-Wesley, 1994.
[Str97] Stroustrup, B. The C++ Programming Language (Third Edition). Addison-Wesley, 1997. | http://www.oonumerics.org/tmpw00/mcnamara.html | crawl-001 | en | refinedweb |
What is document management? What is an electronic document management system?
Document Management refers to any method that is used to organize and control files and documents. Electronic document management systems, including document management software, provide a means to employ document management on a computer. Traditional document management systems include utilizing file cabinets, drawers, or any other type of physical storage space that also allows the ability to organize its contents.
While these traditional document management systems may be comfortable to use and easily available, they have significant drawbacks that eventually cause inefficiencies, lapses in productivity, and possibly complete data and file loss. Suppose your current system is a basic file and folder system, using traditional filing cabinets as the primary storage means. Any time a file is required, either you personally get up, go to the filing cabinet, and retrieve the file, or you tell your assistant to do it, which may end up taking even more time. Furthermore, you have to be aware of where the document you are looking is filed, and if it is an obscure document type, it may be placed in one of many locations, further increasing document retrieval times. And you will only find this document if the person that used it before re-filed it correctly. If not, the document could now be in one of hundreds of places, if not lost all together. What if someone is using the document right now? You could end up on a wild goose-chase all over the office, just looking for one document.
Electronic document management systems allow you to find the files you need instantly. They drastically reduce document retrieval times, allowing you to get more work done in significantly less time, thereby cutting unnecessary overhead costs and increasing overall productivity. They also provide the tools for various industries to comply with legislation that specifies procedures for record keeping. Some examples include financial service companies and Sarbanes-Oxley; medical practices and HIPPA; and the legal industry with requirements regarding discovery. Specific departments within organizations, such as human resources and accounting, can greatly benefit from electronic document management. Even individuals, with items like tax returns, mortgages, wills, receipts, and a host of other paper files can benefit by providing safekeeping and security to their most important files.
With electronic document management software like Docsvault being so affordable and easy-to-use, these benefits are now becoming available to average individuals, professionals, and small-to-medium businesses. Read on to find out about the many advantages that comprehensive document management software, such as Docsvault, can provide to make your office, your business, and your everyday life incredibly more productive and efficient. With Docsvault, your document management system will be easy-to-use, affordable, and provide state-of-the-art security, making it even more worthwhile to implement.
Learn more about Docsvault…
What are the benefits of an electronic document management system?
There are five main benefits to electronic document management systems:
Cost savings are realized in various ways. Studies by Forbes ASAP have shown that the average professional wastes 150 hours a year solely to looking for documents. Now assume that this professional’s time is valued at $30 per hour. That means $4500 is wasted per year per individual just in searching for documents. These costs can be immediately eliminated with a good document management system. These systems can also eradicate costs that go to recreating lost or misfiled documents. Yet another way to save costs is by eliminating file cabinets that take up physical space. With real estate prices always increasing, an electronic document management system can free up valuable space for other purposes.
Implementing security for your files and documents is one of the most pressing issues in today’s workplace. Keeping files safe from the competition, thieves, and ill-natured employees is an ongoing process. Electronic document management systems allow you to put various roadblocks in place to prevent these occurrences. Various levels of password protection give access to files and folders only to those with authorization. Certain cabinets and folders can even be hidden so they seem to be non-existent. Encryption can provide impenetrable security, even if someone happens to get away with theft, and the Audit Trail feature track all activities and file access, providing a logbook of events.
As described earlier, providing easy access to documents can drastically reduce costs and increase efficiency. Electronic document management systems can also control multiple users trying to access the same document. Here, various users can view the original document, but only one person can make changes at a time. This is done by having users “check out” the document. Other users can wait in line to have access as soon as the original user is done. Version Control provides a means to know which version is the latest and what changes were made to each version. Users can even access older versions. In a multiple-user environment, an electronic document management system can considerably increase the efficiency of the workgroup.
Recovering from a disaster is also a very real concern. Electronic document management systems provide multiple ways to ensure your home or business does not get crippled after a disaster. One way is by providing data backup features for offsite storage. Another is a repository exporting feature to map your entire repository to another location.
The last main benefit that electronic document management systems provide ensures procedure consistency. With employees usually wanting to do things their own way, an electronic document management system can ensure that employees do things by protocol.
Apart from these five main benefits, each application within each industry will continuously find new advantages of implementing an electronic document management system. It all depends on to what extent each individual or organization utilizes the tools provided.
How do I convert to an electronic document management system?
In order to use an electronic document management system, you must own or use some type of computer. For individuals, a PC is most likely the hardware, whereas with small businesses, a server would be more appropriate. The other piece of hardware required is a scanner. Various types of scanners currently exist that can fit your budget and needs.
The core component of an electronic document management system is the document management software. It is important to software that provides extensive features, is easy-to-use and affordable, and easy to install and implement. One such software is Docsvault by Easy Data Access. Docsvault makes higher priced document management technology available to average users and organizations through a simple interface, richness in features, and low price, coming together to provide extraordinary value.
Next, we will take a look at what features are typically used to convert to an electronic file and document management system. All of the features that will be mentioned from here on are also included in Docsvault. For simplicity, we will assume that the computer used is a PC.
The first step in using document management is to setup a file repository. A file repository specifies where the files that you bring into the software go on your PC. Document management software will always ask you where you would like to send the files. Most often the software will create a special folder on your PC’s hard drive to save all of your files. The next step is to import, or bring in, all of your files into the software and the repository, which can be done in various ways.
Importing your paper files and other physical documents will require digitizing them with a scanner. Good document management software will have an intuitive document imaging or document scanning component included. Using this tool, simply scan in all your files. The advantage of managing your documents electronically is multiple-fold. First, when you scan in the document, good document management software will allow you to assign your own descriptive classes to the files you import, called Properties. Instead of having just a file name or date created, you can make your own fields that describe your files, like client name, due date, document type (letter, fax, etc), or any other field you want. What’s more, good document management software will allow you to group these Properties into Profiles. You then have the ability to assign a Profile to any document you scan or import. You can create multiple Profiles, allowing you to assign different Profiles to different file types. For example, you could have Picture Profile for your pictures that had Properties like Date Taken, Picture Of, and Where Taken, and then have a Document Profile for your various documents that had Properties like Document Type, Author, and Purpose.
You are not limited to just scanning in your files. You can import from your local hard drive, a USB drive, a FTP site, or any other place that your PC can access.
Next, you use your document management software to create a filing structure that can emulates your traditional system. The electronic version has many more options, including the ability to create folders inside of folders. What’s more, you can automatically assign certain Profiles to folders, so any time you put a new file in that folder, it gets the Profile you designated. You simply fill in the Property fields and are on your way. Good document management software will have an intuitive layout for accessing, creating, and navigating this folder structure. Some software, including Docsvault, will allow you to put password protection on individual folders, so you can keep things where they belong and not worry about having the wrong people accessing them.
What are the main features in an electronic document management system? How do I benefit from them?
In this section, we will go through some of the more popular features of an electronic document management system and how they drastically improve the productivity and efficiency of the workplace. Again, all of the features mentioned are included in Docsvault unless otherwise stated.
Document Scanning and Organizing
The first feature we will go over is the built-in scanning interface. It is referred to as ‘built-in’ because all of the parts necessary to scan in your paper documents is included in the document management software. All you need is a compliant scanner and the proper hardware to hook everything up. The software itself then can control and manage the scanner, allowing you to digitize documents from one place. This makes converting to a paperless office that much easier.
The next group of features and functions that will be discussed is the document organization tools. The first of these features is the Cabinet-Folder-File structure. This feature allows you to mimic traditional file cabinet systems. You begin by making a virtual cabinet. Within this cabinet, you can place folders. Now, within these folders you can place more folders or begin placing files and documents. The advantage to this virtual filing cabinet is that you can make as many cabinets as you need and that you can place folders inside of folders, something that is physically impossible with traditional physical filing systems. The possibilities of how you arrange cabinets, folders, and files are endless.
Also included in the document organizational tools are Properties and Profile assignment. Profiles and Properties allow you to add your own descriptive fields to any file you import. An easy example is as follows: suppose you are scanning in your photos. After a photo is scanned, the Properties feature allows you to make custom descriptive fields like Date Taken, Taken By, and Taken At. After you assign these properties, you can fill in these fields with the appropriate information, like May 14, 1983, Mom, and At Home. Instead of painstakingly assigning each of these Properties to every file you import, you can create a Profile that lumps these Properties together and automatically assigns them to files. You can even assign Profiles to a folder or an entire cabinet, giving each file inside the same Profile. Continuing the example, you could call the group of photo properties Photos. Now every time you scan in a photo, you assign it the Photos Profile and fill in the appropriate fields. This benefits you in two ways. First, it allows you to customize the way you organize all of your files and folders. You can make more obscure files easier to manage. This gives you much more flexibility in the way you organize your documents when compared to traditional physical filing systems. Second, the retrieval of these documents also becomes easier. With document management software like Docsvault, you can search for files based not only on file name, but also the properties and profiles that you have assigned, drastically reducing the amount of time you spend looking for files.
Another more subtle feature when you import your files into the software is the ability to add notes. This feature is self-explanatory—any time you bring a file into the software, you can assign notes to it. This will help you remember special circumstances or descriptions that may be associated with files that you bring into the software.
Document Managing
Microsoft® Office integration is very popular feature. With this, you can automatically organize documents from the Microsoft programs you are already comfortable with. For example, with Docsvault, you begin by going to the File menu and selecting Save to Docsvault. From here, you can assign where to save within the repository, assign Properties and Profiles, apply the various security settings (to be discussed), and add descriptive notes. You can also access Docsvault through Outlook, giving you a means to organize and archive all of your emails as well.
Some document management software, including Docsvault, also includes a built-in PDF creator. This feature allows you to create PDF files from any application with printing capabilities. PDF is an Adobe file format and is quickly becoming the most popular file format for sharing documents around the world. You can create PDFs by simply going to File > Print and select the software’s PDF writer under the Printer Name. With Docsvault, for example, you select Docsvault PDF as the printer name. The document then gets automatically saved in the software as a PDF wherever you specify.
Another feature is Version Control. With this feature, you have the ability to create new versions of documents every time you save them. This is done by first ‘checking out’ the document from the software repository. This allows the software to make the necessary moves to allow the feature to work. By ‘checking out’ the document, you are essentially opening the document. Once you make your changes to the documents, you save it as you normally would. You then proceed into the software and ‘check in’ the document. By following this procedure, the software will make new versions every time you save. You can also attach notes to each version, giving you the ability to track what changes were made with each subsequent version. Software like Docsvault even allows you to revert back to an older version. This is to ensure that you are not committed to any changes that you make to a document.
File Search and Retrieval
The search feature also has a lot of functionality associated with it. Apart from performing ordinary searches, good document management software will have advanced search characteristics. One such characteristic is full-text indexing. Anytime you bring a text document into the file repository (the database where the software keeps all the files), the software will automatically index the files and the text within the files. This allows you to search for files based on the text within the files, so if you only remember the first two words written within a document, you can still find it. But remember, files will only be indexed if they are text documents. Files like photos with text embedded within the photo will not be indexed. Another search function that is not often found in software is saved searches. With document management software like Docsvault, you can save frequently used search criteria, including profiles and properties, indexed text, notes, or any of the other search parameters. All of these search functions come together to virtually eliminate file retrieval times, allowing you to find the documents you need when you need them.
File Security
We will now shift towards security features. The first is encryption security. Docsvault is the only known document management software that provides 128-bit encryption security for your files and documents. When a file is encrypted, it is basically scrambled in a unique way. The only way to unscramble the file is to have key. Encrypting your files makes them impenetrable, even by the developers of the software. There is no better way to prevent access to your most valuable digital files than to encrypt them.
Controlling access to files and documents is another way electronic document management provides security. You can implement multiple layers of password protection that gives access only to those users that are authorized. Certain files and folders can even be hidden so that other users don’t even see that they exist.
Another security feature is Audit Trail. This feature provides a logbook of events and changes that have occurred to your documents and settings. It keeps track of changes to specific files, folders, and cabinets by filename, username, event time, event type, or file path. It even tracks changes made to settings and preferences. This feature is invaluable when multiple employees are continuously accessing and modifying files within the repository. Its value can even be seen with individuals with a home office. If children are consistently using the computer, their antics could unknowingly change settings and delete important files. Think about the advantage of Audit Trail as follows: without Audit Trail, there is no record of who has viewed and modified a file, making it virtually impossible to bring inefficiencies and mistakes in a business process to light. With Audit Trail, you know exactly what is happening.
Document Archiving, File Retention, Data Backup and Repository Exporting
Good document management software will also provide all the tools to provide data backup and repository exporting. These features ensure that your home or office is not crippled after a devastating disaster, like floods, hurricanes, and earthquakes. One feature that addresses this is CD/DVD data backup. The document management software will provide a means to create a CD or DVD, assuming your computer has the appropriate burner, which has your full document repository on it. You can then store this CD or DVD offsite, making sure your documents are in multiple locations. With Docsvault, this process is made even easier with one-click burning, providing an effortless means to produce data backups. With files often being stored in one central location, such as a laptop, any theft, damage, or loss of this location would be devastating.
This feature can also be used for document archiving and record retention. This allows various industries to comply with legislation calling for specific procedures for record keeping. It can also free up valuable office space by eliminating filing cabinets and instead having a few CDs or DVDs in its place.
Repository exporting is the last feature regarding data backup. This feature allows users to export the entire file repository to any location. With Docsvault, the advantage of this is that the entire cabinet-folder-file structure is retained in the export. Users can then access the files and folders, in their original filing structure, even if the Docsvault software is unavailable.
Electronic Document Management Conclusions
You should now be comfortable with the details regarding an electronic document management system. With so many drawbacks of traditional filing systems and the seemingly endless advantages of an electronic system, the reasons behind implementing an electronic document management system should now be obvious. The only item remaining on the agenda is choosing the right document management software.
Chances are that if you read this Beginner’s Guide to Document Management, you are not too comfortable with implementing new technology. That’s where Docsvault, the latest document management software by Easy Data Access, comes into the picture. The number one goal in designing Docsvault was to make every aspect incredibly simple and intuitive. Every single step of using Docsvault is easy. Installation takes only a few steps, and since we designed the interface with drag-and-drop functionality like Windows file explorer, learning Docsvault is almost second nature. What this means to you is that you can now implement high-end document management software without all the IT headaches and training. Best of all, Docsvault is incredibly affordable, making it an exceptional value.
Learn more about the solutions and benefits that Docsvault provides.
Learn more about the single-user Docsvault Professional Edition.
Learn more about the multiple-user Docsvault Small Business Edition.
Glossary
audit trail – an electronic logbook that tracks changes to settings, preferences, and major events within a program
check out – refers to the process of temporarily removing a file or document from the repository so that changes may be made
check in – refers to the process of replacing a file or document after it has been ‘checked out’ from the repository after changes have been made
component – a specific sub-unit of software that has its own functionality
data backup – refers to the process of transferring files and information to another media type, such as CD or DVD, for the purpose of offsite storage for disaster recovery
descriptive class – a classifying field associated with files that provides information about that file. Examples include: file name, date created, date modified, file type, etc.
document archiving – refers to the process of taking old files and documents and organizing them in a way that allows convenient access at later date
document imaging – refers to a component of software that allows you to convert paper files into electronic files
document management – refers to the methods of organizing, storing, and controlling documents and files, whether paper or electronic
document management software – type of software that allows users to electronically organize, store, and control files and documents
document management system – any system that employs the methods of document management
document scanning – refers to the conversion of paper files into electronic files via some sort of scanning hardware
electronic document management – a document management system that is completely electronic, meaning the organizing, storing, and controlling of files and documents is done on a computer
file encryption – to alter a file using a unique code so as to be unintelligible to unauthorized parties
export – refers to the process of sending data from one program to another program or location
file repository – refers to the location where data files and documents are kept
file retention – refers to the process of keeping documents and records for the purpose of compliance with legislation; generally speaking is the process of keeping documents and records for possible future use
full-text indexing – refers to the methods that software uses to catalog all words within files and documents to enhance search functions
import – to bring in files or data into a program or location
Microsoft® Office integration – refers to the process of coordinating certain functions of software with Microsoft® Office programs
PDF – acronym for portable document finder;
properties – in document management context, refers to custom descriptive classes that can be assigned to files
profiles – in document management context, refers to groupings of properties that can be assigned to files
scanning interface – the interaction method between a user, a computer, and a scanner.
version control – in document management context, refers to the ability to manage iterative changes made to files and documents | http://www.docsvault.com/beginners_guide.html | crawl-001 | en | refinedweb |
Title: Language detection using character trigrams
Submitter: Douglas Bagnall
(other recipes)
Last Updated: 2004/11/07
Version no: 1.1
Category:
Algorithms
2 vote(s)
Description:
The.
Source: Text Source
#!/usr/bin/python
import random
from urllib import urlopen
class Trigram:
"""From one or more text files,)
0.4
>>> unknown.similarity(reference_en)
0.95
would indicate the unknown text is almost cetrtainly English. As
syntax sugar, the minus sign is overloaded to return the difference
between texts, so the above objects would give you:
>>> unknown - reference_de
0.6
>>> reference_en - unknown # order doesn't matter.
Beware when using urls: HTML won't be parsed out.
Most methods chatter away to standard output, to let you know they're
still there.
"""
length = 0
def __init__(self, fn=None):
self.lut = {}
if fn is not None:
self.parseFile(fn)
def parseFile(self, fn):
pair = ' '
if '://' in fn:
print "trying to fetch url, may take time..."
f = urlopen(fn)
else:
f = open(fn)
for z, line in enumerate(f):
if not z % 1000:
print "line %s" % z
# \n's are spurious in a prose context
for letter in line.strip() + ' ':
d = self.lut.setdefault(pair, {})
d[letter] = d.get(letter, 0) + 1
pair = pair[1] + letter
f.close()
self.measure()
def measure(self):
"""calculates the scalar length of the trigram vector and
stores it in self.length."""
total = 0
for y in self.lut.values():
total += sum([ x * x for x in y.values() ])
self.length = total ** 0.5
def similarity(self, other):
"""returns a number between 0 and 1 indicating similarity.
1 means an identical ratio of trigrams;
0 means no trigrams in common.
"""
if not isinstance(other, Trigram):
raise TypeError("can't compare Trigram with non-Trigram")
lut1 = self.lut
lut2 = other.lut
total = 0
for k in lut1.keys():
if k in lut2:
a = lut1[k]
b = lut2[k]
for x in a:
if x in b:
total += a[x] * b[x]
return float(total) / (self.length * other.length)
def __sub__(self, other):
"""indicates difference between trigram sets; 1 is entirely
different, 0 is entirely the same."""
return 1 - self.similarity(other)
def makeWords(self, count):
"""returns a string of made-up words based on the known text."""
text = []
k = ' '
while count:
n = self.likely(k)
text.append(n)
k = k[1] + n
if n in ' \t':
count -= 1
return ''.join(text)
def likely(self, k):
"""Returns a character likely to follow the given string
two character string, or a space if nothing is found."""
if k not in self.lut:
return ' '
# if you were using this a lot, caching would a good idea.
letters = []
for k, v in self.lut[k].items():
letters.append(k * v)
letters = ''.join(letters)
return random.choice(letters)
def test():
en = Trigram('')
#NB fr and some others have English license text.
# no has english excerpts.
fr = Trigram('')
fi = Trigram('')
no = Trigram('')
se = Trigram('')
no2 = Trigram('')
en2 = Trigram('')
fr2 = Trigram('')
print "calculating difference:"
print "en - fr is %s" % (en - fr)
print "fr - en is %s" % (fr - en)
print "en - en2 is %s" % (en - en2)
print "en - fr2 is %s" % (en - fr2)
print "fr - en2 is %s" % (fr - en2)
print "fr - fr2 is %s" % (fr - fr2)
print "fr2 - en2 is %s" % (fr2 - en2)
print "fi - fr is %s" % (fi - fr)
print "fi - en is %s" % (fi - en)
print "fi - se is %s" % (fi - se)
print "no - se is %s" % (no - se)
print "en - no is %s" % (en - no)
print "no - no2 is %s" % (no - no2)
print "se - no2 is %s" % (se - no2)
print "en - no2 is %s" % (en - no2)
print "fr - no2 is %s" % (fr - no2)
print "\nmaking up English"
print en.makeWords(30)
print "\nmaking up French"
print fr.makeWords(30)
if __name__ == '__main__':
test(): | http://aspn.activestate.com/ASPN/Cookbook/Python/Recipe/326576 | crawl-001 | en | refinedweb |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.